
Tech Guides - Data


Quantum expert Robert Sutor explains the basics of Quantum Computing

Packt Editorial Staff
12 Dec 2019
9 min read
What if we could do chemistry inside a computer instead of in a test tube or beaker in the laboratory? What if running a new experiment was as simple as running an app and having it completed in a few seconds? For this to really work, we would want it to happen with complete fidelity. The atoms and molecules as modeled in the computer should behave exactly like they do in the test tube. The chemical reactions that happen in the physical world would have precise computational analogs. We would need a completely accurate simulation.

If we could do this at scale, we might be able to compute the molecules we want and need. These might be new materials for shampoos or even alloys for cars and airplanes. Perhaps we could more efficiently discover medicines that are customized to your exact physiology. Maybe we could get better insight into how proteins fold, thereby understanding their function, and possibly create custom enzymes to positively change our body chemistry.

Is this plausible? We have massive supercomputers that can run all kinds of simulations. Can we model molecules in the above ways today?

This article is an excerpt from the book Dancing with Qubits written by Robert Sutor. In this quantum computing textbook, Robert helps you understand how quantum computing works and delves into the math behind it.

Can supercomputers model chemical simulations?

Let's start with C8H10N4O2 – 1,3,7-trimethylxanthine. This is a very fancy name for a molecule that millions of people around the world enjoy every day: caffeine. An 8-ounce cup of coffee contains approximately 95 mg of caffeine, which translates to roughly 2.95 × 10^20 molecules. Written out, this is 295,000,000,000,000,000,000 molecules. A 12-ounce can of a popular cola drink has 32 mg of caffeine, the diet version has 42 mg, and energy drinks often have about 77 mg.

These numbers are large because we are counting physical objects in our universe, which we know is very big. Scientists estimate, for example, that there are between 10^49 and 10^50 atoms in our planet alone. To put these values in context, one thousand = 10^3, one million = 10^6, one billion = 10^9, and so on. A gigabyte of storage is one billion bytes, and a terabyte is 10^12 bytes.

Getting back to the question I posed at the beginning of this section, can we model caffeine exactly on a computer? We don't have to model the huge number of caffeine molecules in a cup of coffee, but can we fully represent a single molecule at a single instant? Caffeine is a small molecule and contains protons, neutrons, and electrons. If we just look at the energy configuration that determines the structure of the molecule and the bonds that hold it all together, the amount of information needed to describe this is staggering. The number of bits, the 0s and 1s, needed is approximately 10^48: 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. And this is just one molecule!

Yet somehow nature manages to deal quite effectively with all this information. It handles everything from the single caffeine molecule, to all those in your coffee, tea, or soft drink, to every other molecule that makes up you and the world around you. How does it do this? We don't know! Of course, there are theories, and these live at the intersection of physics and philosophy. However, we do not need to understand it fully to try to harness its capabilities. We have no hope of providing enough traditional storage to hold this much information.
Our dream of exact representation appears to be dashed. This is what Richard Feynman meant in his quote: "Nature isn't classical." However, 160 qubits (quantum bits) could hold 2^160 ≈ 1.46 × 10^48 bits while the qubits were involved in a computation. To be clear, I'm not saying how we would get all the data into those qubits, and I'm also not saying how many more we would need to do something interesting with the information. It does give us hope, however. In the classical case, we will never fully represent the caffeine molecule. In the future, with enough very high-quality qubits in a powerful quantum computing system, we may be able to perform chemistry on a computer.

How quantum computing is different from classical computing

I can write a little app on a classical computer that can simulate a coin flip. This might be for my phone or laptop. Instead of heads or tails, let's use 1 and 0. The routine, which I call R, starts with one of those values and randomly returns one or the other. That is, 50% of the time it returns 1 and 50% of the time it returns 0. We have no knowledge whatsoever of how R does what it does. When you see "R," think "random." This is called a "fair flip." It is not weighted to slightly prefer one result over the other. Whether we can produce a truly random result on a classical computer is another question. Let's assume our app is fair.

If I apply R to 1, half the time I expect 1 and the other half 0. The same is true if I apply R to 0. I'll call these applications R(1) and R(0), respectively. If I look at the result of R(1) or R(0), there is no way to tell if I started with 1 or 0. This is just like a secret coin flip, where I can't tell whether I began with heads or tails just by looking at how the coin has landed. By "secret coin flip," I mean that someone else has flipped it and I can see the result, but I have no knowledge of the mechanics of the flip itself or the starting state of the coin.

If R(1) and R(0) are randomly 1 and 0, what happens when I apply R twice? I write this as R(R(1)) and R(R(0)). It's the same answer: a random result with an equal split. The same thing happens no matter how many times we apply R. The result is random, and we can't reverse things to learn the initial value.

Now for the quantum version. Instead of R, I use H. It too returns 0 or 1 with equal chance, but it has two interesting properties. First, it is reversible: though it produces a random 1 or 0 starting from either of them, we can always go back and see the value with which we began. Second, it is its own reverse (or inverse) operation: applying it two times in a row is the same as having done nothing at all.

There is a catch, though. You are not allowed to look at the result of what H does if you want to reverse its effect. If you apply H to 0 or 1, peek at the result, and apply H again to that, it is the same as if you had used R. If you observe what is going on in the quantum case at the wrong time, you are right back at strictly classical behavior.

To summarize using the coin language: if you flip a quantum coin and then don't look at it, flipping it again will yield the heads or tails with which you started. If you do look, you get classical randomness.
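To see the difference between R and H concretely, here is a minimal sketch (mine, not from the book) that simulates a single qubit as a two-element vector of amplitudes with NumPy. H is the standard Hadamard matrix, and "peeking" is modeled as a measurement that collapses the state to a classical 0 or 1.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard operation
zero = np.array([1.0, 0.0])                    # the state 0

once = H @ zero      # one quantum "flip": equal amplitudes for 0 and 1
twice = H @ once     # flipping again without looking undoes the first flip

print(np.abs(once) ** 2)    # ~[0.5 0.5] -> 50/50 if measured now
print(np.abs(twice) ** 2)   # ~[1.  0.]  -> back to the 0 we started with

# Peeking is a measurement: the state collapses to a definite 0 or 1,
# and applying H after that behaves just like the classical routine R.
peeked = np.random.choice([0, 1], p=np.abs(once) ** 2)
```

Applying H twice without measuring recovers the starting value exactly, while measuring in between destroys that reversibility, which is the point of the coin analogy above.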
A second area where quantum is different is in how we can work with simultaneous values. Your phone or laptop uses bytes as individual units of memory or storage. That's where we get phrases like "megabyte," which means one million bytes of information. A byte is further broken down into eight bits, which we've seen before. Each bit can be a 0 or 1. Doing the math, each byte can represent 2^8 = 256 different numbers composed of eight 0s or 1s, but it can only hold one value at a time. Eight qubits can represent all 256 values at the same time. This is possible through superposition, but also through entanglement, the way we can tightly tie together the behavior of two or more qubits. This is what gives us the (literally) exponential growth in the amount of working memory.

How quantum computing can help artificial intelligence

Artificial intelligence and one of its subsets, machine learning, are extremely broad collections of data-driven techniques and models. They are used to help find patterns in information, learn from the information, and automatically perform more "intelligently." They also give humans help and insight that might have been difficult to get otherwise.

Here is a way to start thinking about how quantum computing might be applicable to large, complicated, computation-intensive systems of processes such as those found in AI and elsewhere. These three cases are in some sense the "small, medium, and large" ways quantum computing might complement classical techniques:

1. There is a single mathematical computation somewhere in the middle of a software component that might be sped up via a quantum algorithm.
2. There is a well-described component of a classical process that could be replaced with a quantum version.
3. There is a way to avoid the use of some classical components entirely in the traditional method because of quantum, or the entire classical algorithm can be replaced by a much faster or more effective quantum alternative.

As I write this, quantum computers are not "big data" machines. This means you cannot take millions of records of information and provide them as input to a quantum calculation. Instead, quantum may be able to help where the number of inputs is modest but the computations "blow up" as you start examining relationships or dependencies in the data. In the future, however, quantum computers may be able to input, output, and process much more data. Even if it is just theoretical now, it makes sense to ask if there are quantum algorithms that can be useful in AI someday.

To summarize, we explored how quantum computing works and ways it might be applied to artificial intelligence. Get the quantum computing book Dancing with Qubits by Robert Sutor today, in which he explores the inner workings of quantum computing. The book entails some sophisticated mathematical exposition and is therefore best suited for those with a healthy interest in mathematics, physics, engineering, and computer science.

Further reading:
Intel introduces cryogenic control chip, 'Horse Ridge' for commercially viable quantum computing
Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions
Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases


5 key reinforcement learning principles explained by AI expert, Hadelin de Ponteves

Packt Editorial Staff
10 Dec 2019
10 min read
When people refer to artificial intelligence, some think of it as machine learning, while others think of it as deep learning or reinforcement learning, and so on. Artificial intelligence is a broad term that includes machine learning, and reinforcement learning is a type of machine learning, thereby a branch of AI. In this article we will understand five key reinforcement learning principles with some simple examples.

Reinforcement learning allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize performance. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation.

This article is an excerpt from the book AI Crash Course written by Hadelin de Ponteves. In this book Hadelin helps you understand what you really need to build AI systems with reinforcement learning. The book involves descriptive and practical projects to put ideas into action and shows how to build intelligent software step by step.

While reinforcement learning is in some ways a form of AI, most machine learning does not include the process of taking action and interacting with an environment like we humans do. Indeed, as intelligent human beings, what we constantly keep doing is the following: we observe some input, whether it's what we see with our eyes, what we hear with our ears, or what we remember in our memory; these inputs are then processed in our brain; and eventually, we make decisions and take actions. This process of interacting with an environment is what we are trying to reproduce in terms of artificial intelligence. To that extent, the branch of AI that works on this is reinforcement learning. This is the closest match to the way we think; the most advanced form of artificial intelligence, if we see AI as the science that tries to mimic (or surpass) human intelligence.

Reinforcement learning also has some of the most impressive results in business applications of AI. For example, Alibaba leveraged reinforcement learning to increase its ROI in online advertising by 240% without increasing its advertising budget.

Five reinforcement learning principles

Let's begin building the first pillars of your intuition into how reinforcement learning works. These are the fundamental reinforcement learning principles, which will get you started with the right, solid basics in AI. Here are the five principles:

Principle #1: The input and output system
Principle #2: The reward
Principle #3: The AI environment
Principle #4: The Markov decision process
Principle #5: Training and inference

Principle #1 – The input and output system

The first step is to understand that today, all AI models are based on the common principle of input and output. Every single form of artificial intelligence, including machine learning models, chatbots, recommender systems, robots, and of course reinforcement learning models, will take something as input and will return something else as output.

Figure 1: The input and output system

In reinforcement learning, this input and output have specific names: the input is called the state, or input state; the output is the action performed by the AI. And in the middle, we have nothing other than a function that takes a state as input and returns an action as output. That function is called a policy. Remember the name "policy," because you will often see it in AI literature. As an example, consider a self-driving car.
Try to imagine what the input and output would be in that case. The input would be what the embedded computer vision system sees, and the output would be the next move of the car: accelerate, slow down, turn left, turn right, or brake. Note that the output at any time (t) could very well be several actions performed at the same time. For instance, the self-driving car can accelerate while at the same time turning left. In the same way, the input at each time (t) can be composed of several elements: mainly the image observed by the computer vision system, but also some parameters of the car such as the current speed, the amount of gas remaining in the tank, and so on.

That's the very first important principle in artificial intelligence: it is an intelligent system (a policy) that takes some elements as input, does its magic in the middle, and returns some actions to perform as output. Remember that the inputs are also called the states.

Principle #2 – The reward

Every AI has its performance measured by a reward system. There's nothing confusing about this; the reward is simply a metric that will tell the AI how well it does over time. The simplest example is a binary reward: 0 or 1. Imagine an AI that has to guess an outcome. If the guess is right, the reward will be 1, and if the guess is wrong, the reward will be 0. This could very well be the reward system defined for an AI; it really can be as simple as that!

A reward doesn't have to be binary, however. It can be continuous. Consider the famous game of Breakout.

Figure 2: The Breakout game

Imagine an AI playing this game. Try to work out what the reward would be in that case. It could simply be the score; more precisely, the score would be the accumulated reward over time in one game, and the rewards could be defined as the derivative of that score. This is one of the many ways we could define a reward system for that game. Different AIs will have different reward structures; we will build five reward systems for five different real-world applications in this book. With that in mind, remember this as well: the ultimate goal of the AI will always be to maximize the accumulated reward over time.

Those are the first two basic, but fundamental, principles of artificial intelligence as it exists today: the input and output system, and the reward.

Principle #3 – The AI environment

The third reinforcement learning principle involves an "AI environment." It is a very simple framework where you define three things at each time (t):

The input (the state)
The output (the action)
The reward (the performance metric)

For each and every single AI based on reinforcement learning that is built today, we always define an environment composed of the preceding elements. It is, however, important to understand that there are more than these three elements in a given AI environment. For example, if you are building an AI to beat a car racing game, the environment will also contain the map and the gameplay of that game. Or, in the example of a self-driving car, the environment will also contain all the roads along which the AI is driving and the objects that surround those roads. But what you will always find in common when building any AI are the three elements of state, action, and reward.
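As a concrete, purely illustrative sketch of those three elements, here is a toy environment in Python, written for this article rather than taken from the book: the state is a random digit the agent observes, the action is a guess at its parity, and the reward is the binary 0-or-1 metric described under Principle #2.

```python
import random

class ParityEnvironment:
    """A toy AI environment: it defines the state, the action, and the reward."""

    def reset(self):
        # The input (the state): a random digit the agent observes.
        self.state = random.randint(0, 9)
        return self.state

    def step(self, action):
        # The output (the action): the agent's guess, 0 for "even", 1 for "odd".
        # The reward (the performance metric): 1 if the guess is right, else 0.
        reward = 1 if action == self.state % 2 else 0
        return self.reset(), reward
```

A policy for this environment is simply a function from the observed state to a guess; the sketch after Principle #5 shows that policy being trained and then used in inference mode.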
Principle #4 – The Markov decision process

The Markov decision process, or MDP, is simply a process that models how the AI interacts with the environment over time. The process starts at t = 0, and then, at each next iteration, meaning at t = 1, t = 2, ... t = n units of time (where the unit can be anything, for example, 1 second), the AI follows the same format of transition:

The AI observes the current state, s_t
The AI performs the action, a_t
The AI receives the reward, r_t = R(s_t, a_t)
The AI enters the following state, s_{t+1}

The goal of the AI is always the same in reinforcement learning: it is to maximize the accumulated rewards over time, that is, the sum of all the r_t = R(s_t, a_t) received at each transition. The following graphic will help you visualize and remember an MDP better, the basis of reinforcement learning models:

Figure 3: The Markov decision process

Now four essential pillars are already shaping your intuition of AI. Adding a last important one completes the foundation of your understanding of AI. The last principle is training and inference; in training, the AI learns, and in inference, it predicts.

Principle #5 – Training and inference

The final principle you must understand is the difference between training and inference. When building an AI, there is a time for the training mode and a separate time for the inference mode. I'll explain what that means, starting with the training mode.

Training mode

Now you understand, from the first three principles, that the very first step of building an AI is to build an environment in which the input states, the output actions, and a system of rewards are clearly defined. From the fourth principle, you also understand that inside this environment an AI will be built that interacts with it, trying to maximize the total reward accumulated over time. To put it simply, there will be a preliminary (and long) period during which the AI will be trained to do that. That period is called the training; we can also say that the AI is in training mode. During that time, the AI tries to accomplish a certain goal repeatedly until it succeeds. After each attempt, the parameters of the AI model are modified in order to do better at the next attempt.

Inference mode

Inference mode simply comes after your AI is fully trained and ready to perform well. It consists of interacting with the environment by performing the actions to accomplish the goal the AI was trained to achieve in training mode. In inference mode, no parameters are modified at the end of each episode.

For example, imagine you have an AI company that builds customized AI solutions for businesses, and one of your clients asked you to build an AI to optimize the flows in a smart grid. First, you'd enter an R&D phase during which you would train your AI to optimize these flows (training mode), and as soon as you reached a good level of performance, you'd deliver your AI to your client and go into production. Your AI would regulate the flows in the smart grid only by observing the current states of the grid and performing the actions it has been trained to do. That's inference mode.

Sometimes, the environment is subject to change, in which case you must alternate quickly between training and inference modes so that your AI can adapt to the new changes in the environment. An even better solution is to train your AI model every day and go into inference mode with the most recently trained model. That was the last fundamental principle common to every AI.
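Tying the five principles together, here is a minimal, hypothetical sketch (not from the book) of the MDP loop running on the toy ParityEnvironment above. A simple tabular policy observes s_t, performs a_t, and receives r_t; in training mode its value estimates are updated after every transition, while in inference mode they are frozen and the policy just acts.

```python
import random

env = ParityEnvironment()               # the toy environment sketched earlier

# The policy's parameters: an estimated reward for each (state, action) pair.
values = {s: [0.0, 0.0] for s in range(10)}

def policy(state, explore=True):
    if explore and random.random() < 0.1:       # occasional random exploration
        return random.randint(0, 1)
    return max((0, 1), key=lambda a: values[state][a])

# Training mode: interact with the environment and update the parameters.
state = env.reset()
for t in range(5000):
    action = policy(state)                      # the AI performs a_t
    next_state, reward = env.step(action)       # receives r_t, enters s_{t+1}
    values[state][action] += 0.1 * (reward - values[state][action])
    state = next_state

# Inference mode: no parameters are modified; the trained policy just acts.
state, total = env.reset(), 0
for _ in range(100):
    state, reward = env.step(policy(state, explore=False))
    total += reward
print("accumulated reward over 100 inference steps:", total)
```

With enough training transitions, the accumulated reward in inference approaches the maximum of 100, which is exactly the "maximize the accumulated reward over time" goal stated above.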
To summarize, we explored the five key reinforcement learning principles: the input and output system, the reward system, the AI environment, the Markov decision process, and training and inference modes. Get the guide AI Crash Course by Hadelin de Ponteves today to learn about programming AI software in Python without any math or data science background. It will also help you master the key skills of deep learning, reinforcement learning, and deep reinforcement learning.

Further reading:
How artificial intelligence and machine learning can help us tackle the climate change emergency
DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
OpenAI's AI robot hand learns to solve a Rubik Cube using Reinforcement learning and Automatic Domain Randomization (ADR)
DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games


PostgreSQL committer Stephen Frost shares his vision for PostgreSQL version 12 and beyond

Sugandha Lahoti
04 Dec 2019
8 min read
PostgreSQL version 12 was released in October this year, and PostgreSQL has earned a strong reputation for being reliable, feature-robust, and performant. During the PostgreSQL Conference for the Asia Pacific, PostgreSQL major contributor and committer Stephen Frost talked about a number of new features available in PostgreSQL 12, including pluggable storage, partitioning and performance improvements, as well as SQL features. In this post we cover a short synopsis of his keynote. The full talk is available on YouTube.

Want to learn how to build your own PostgreSQL applications? PostgreSQL 12 has an array of interesting features such as advanced indexing, high availability, database configuration, and database monitoring to efficiently manage and maintain your database. If you are a database developer and want to leverage PostgreSQL 12, we recommend you go through our latest book Mastering PostgreSQL 12 - Third Edition written by Hans-Jürgen Schönig. This book examines in detail the newly released features in PostgreSQL 12 to help you build efficient and fault-tolerant PostgreSQL applications.

Stephen Frost is the CTO of Crunchy Data. As a PostgreSQL major contributor he has implemented 'roles support' in version 8.1 to replace the existing user/group system, SQL column-level privileges in version 8.4, and Row Level Security in PostgreSQL 9.5. He has also spoken at numerous conferences, including pgConf.EU, pgConf.US, PostgresOpen, SCALE and others.

Stephen Frost on PostgreSQL 12 features

Pluggable Storage

This release introduces the pluggable table storage interface, which allows developers to create their own methods for storing data. Postgres before version 12 had one storage engine, one primary heap. All indexes were secondary indexes, which means that they refer directly to pointers on disk. Also, this heap structure was row-based: every row has a big header associated with it, which may cause issues when storing very narrow tables (those with two or fewer columns).

PostgreSQL 12 now has the ability to have multiple different storage formats underneath: pluggable storage. This new feature is going to be the basis for columnar storage, coming up probably in v13 or v14. It is also going to be the basis for zheap, an alternative heap that allows in-place updates and uses an undo log instead of the redo log that PostgreSQL has. Version 12 is building the infrastructure for pluggable storage, and the team does not have anything user-facing yet. It's not going to be until v13 and later that there will actually be new storage mechanisms built on top of the pluggable storage feature.

Partitioning improvements

Postgres is adding a whole bunch of new features to make working with partitions easier. Postgres 12 has major improvements in declarative partitioning capability, which makes working with partitions more effective and easier to deal with. Partition selection has dramatically improved, especially when selecting from a few partitions out of a large set. Postgres 12 also has the ability to ATTACH/DETACH partitions concurrently, that is, to attach and detach partitions on the fly without having to take very heavy locks. You can add new partitions to your partitioning scheme without taking any outage or downtime, and without any impact on the ongoing operations of the system. This release also increases the number of partitions that can be handled efficiently.
The initial declarative partitioning patch made planning slow when you got over a few hundred partitions. This is now fixed with a much faster planning methodology for dealing with partitions. Postgres 12 also allows multi-insert during COPY statements into partitioned tables. COPY is a way in which you can bulk load data into Postgres, and this feature makes it much faster to COPY into partitioned tables. There is also a new function, pg_partition_tree, to display partition information.

Performance improvements and SQL features

Parallel Query with SERIALIZABLE

Parallel query has been in Postgres since version 9.6, but it did not work with the serializable isolation level. Serializable is the highest level of isolation that you can have inside of Postgres. With Postgres 12, you can run a parallel query under serializable and get that highest level of isolation. This increases the number of places where parallel query can be used. It also allows application authors to worry less about concurrency, because serializable in Postgres provides true serializability, which exists in very few databases.

Faster float handling

Postgres 12 has a new library for converting floating point numbers into text. This provides a significant speedup for many workloads where you're doing text-based transfer of data, although it may result in slightly different (possibly more correct) output.

Partial de-TOAST/decompress a value

Historically, to access any compressed TOAST value, you had to decompress the whole thing into memory. This was not ideal in situations where you only wanted access to the front of it. Partial de-TOAST allows decompressing a section of the TOAST value. This gives a great improvement in performance for cases like PostGIS geometry/geography, where data at the front can be used for filtering, or pulling just the start of a text string.

COPY FROM with WHERE

Postgres 12 now supports a WHERE clause in the COPY FROM statement. This allows you to filter data/records while importing. Earlier this was done using the file_fdw, but it was tedious as it required creating a foreign table.
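As a quick illustration, here is a minimal, hypothetical sketch (not from Stephen's talk) that exercises the new COPY ... WHERE clause from Python with psycopg2; the database name, table, and sample data are placeholders.

```python
import io
import psycopg2

conn = psycopg2.connect("dbname=demo")   # assumes a local PostgreSQL 12 database
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS readings (sensor text, value numeric)")

csv_data = io.StringIO("a,12.5\nb,-3.0\nc,7.1\n")

# New in PostgreSQL 12: filter rows while bulk loading, no foreign table needed.
cur.copy_expert(
    "COPY readings FROM STDIN WITH (FORMAT csv) WHERE value > 0",
    csv_data,
)
conn.commit()

cur.execute("SELECT count(*) FROM readings")
print(cur.fetchone()[0])   # only the rows with value > 0 were loaded
```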
Create or Replace Aggregate

This feature allows an aggregate either to be created if it does not exist, or replaced if it does. It makes extension upgrade scripts much simpler. This feature was requested specifically by the Postgres community.

Inline Common Table Expressions

Not having inline CTEs was seen as an optimization barrier. From version 12, Postgres by default inlines CTEs if it can. It also supports the old behavior, so in the event that you actually want a CTE to be an optimization barrier, you can still do that; you just have to specify WITH MATERIALIZED when you go to write your CTE.

SQL/JSON improvements

There is also progress made towards supporting the SQL/JSON standard. This release adds a number of jsonpath functions:

jsonb_path_exists
jsonb_path_match
jsonb_path_query
jsonb_path_query_array
jsonb_path_query_first

It also adds new operators for working with JSON:

jsonb @? jsonpath – a wrapper for jsonb_path_exists
jsonb @@ jsonpath – a wrapper for jsonb_path_match

Index support should also be added for these operators soon.

Recovery.conf moved into postgresql.conf

recovery.conf is no longer available in PostgreSQL 12, and all its options are moved to postgresql.conf. This allows changing recovery parameters via ALTER SYSTEM. It increases flexibility, meaning that it allows changing the primary via ALTER SYSTEM and a reload. However, this is a disruptive change: every high-availability environment will change, but it reduces the fragility of high-availability solutions moving forward. A new pg_promote function is added to allow promoting a replica from SQL.

Control SSL protocol

With Postgres 12, you can now control SSL protocols. Older SSL protocols are required to be disabled for security reasons. Previously they were enforced with FIPS mode; they are now addressed in CIS benchmark/STIG updates.

Covering GIST indexes

GIST indexes can now also use INCLUDE. These are useful for adding columns to allow index-only queries, and they allow including columns that are not part of the search key.

Add CSV output mode to psql

Previously you could get CSV, but you had to do that by wrapping your query inside a COPY statement. Now you can use the new pset format for CSV output from psql. It returns the data in each row in CSV format instead of tabular format.

Add option to sample queries

There is a new log_statement_sample_rate parameter which allows you to set log_min_duration_statement to be very low, or even zero. Logging all statements is very expensive, as it slows down the whole system and you end up with a backlog of processes trying to write into the logging system. The new log_statement_sample_rate parameter includes only a sample of those queries in the output rather than logging every query, while log_min_duration_statement excludes very fast queries. It helps with analysis in environments with lots of fast queries.

New HBA option called clientcert=verify-full

This new HBA option allows you to do two-factor authentication where one of the factors is a certificate and the other one might be a password or something else (PAM, LDAP, etc.). It gives you the ability to say that every user has to have a client-side certificate, that the client-side certificate must be validated by the server on connection, and that they also have to provide a password. It works with non-cert authentication methods and requires client-side certificates to be used.

In his talk, Stephen also answered commonly asked questions about Postgres; watch the full video to know more. You can read about other performance and maintenance enhancements in PostgreSQL 12 on the official blog. To learn advanced PostgreSQL 12 concepts with real-world examples and sample datasets, go through the book Mastering PostgreSQL 12 - Third Edition by Hans-Jürgen Schönig.

Further reading:
Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell
Percona announces Percona Distribution for PostgreSQL to support open source databases
Wasmer's first Postgres extension to run WebAssembly is here!


What does a data science team look like?

Fatema Patrawala
21 Nov 2019
11 min read
Until a couple of years ago, people barely knew the term 'data science', which has now evolved into an extremely popular career field. The Harvard Business Review dubbed the data scientist the sexiest job of the 21st century, and expert professionals jumped on the 'data is the new oil' bandwagon. As per the Figure Eight Report 2018, which takes the pulse of the data science community in the US, a lot has changed rapidly in the data science field over the years. For the 2018 report, they surveyed approximately 240 data scientists and found that machine learning projects have multiplied and that more and more data is required to power them. Data science and machine learning jobs are among LinkedIn's fastest-growing jobs, and the internet is creating 2.5 quintillion bytes of data to process and analyze each day. With all these changes, it is evident that data science teams have to evolve and change across organizations.

The data science team is responsible for delivering complex projects where system analysis, software engineering, data engineering, and data science are used to deliver the final solution. To achieve all of this, the team does not only have a data scientist or a data analyst but also includes other roles like business analyst, data engineer or architect, and chief data officer. In this post, we will differentiate and discuss various job roles within a data science team, the skill sets required, and the compensation for each one of them. For an in-depth understanding of data science teams, read the book Managing Data Science by Kirill Dubovikov, which has interesting case studies on building successful data science teams. He also explores how the team can efficiently manage data science projects through the use of DevOps and ModelOps.

Now let's get into understanding individual data science roles and functions, but before that we take a look at the structure of the team. There are three basic team structures to match different stages of AI/ML adoption:

IT centric team structure

At times, hiring a data science team is not an option for companies, and they have to leverage in-house talent. During such situations, they take advantage of the fully functional in-house IT department. The IT team manages functions like data preparation, training models, creating user interfaces, and model deployment within the corporate IT infrastructure. This approach is fairly limited, but it is made practical by MLaaS solutions. Environments like Microsoft Azure or Amazon Web Services (AWS) are equipped with approachable user interfaces to clean datasets, train models, evaluate them, and deploy them. Microsoft Azure, for instance, supports its users with detailed documentation for a low entry threshold. The documentation helps in fast training and early deployment of models even without expert data scientists on board.

Integrated team structure

Within the integrated structure, companies have a data science team which focuses on dataset preparation and model training, while IT specialists take charge of the interfaces and infrastructure for model deployment. Combining machine learning expertise with IT resources is the most viable option for constant and scalable machine learning operations. Unlike the IT centric approach, the integrated method requires having an experienced data scientist within the team. This approach ensures better operational flexibility in terms of available techniques.
Additionally, the team leverages a deeper understanding of machine learning tools and libraries, like TensorFlow or Theano, which are aimed at researchers and data science experts.

Specialized data science team

Companies can also have an independent data science department to build all-encompassing machine learning applications and frameworks. This approach entails the highest cost. All operations, from data cleaning and model training to building front-end interfaces, are handled by a dedicated data science team. It doesn't necessarily mean that all team members should have a data science background, but they should have a technology background with certain service management skills. A specialized structure model aids in addressing complex data science tasks that include research, the use of multiple ML models tailored to various aspects of decision-making, or multiple ML-backed services.

Today's most successful Silicon Valley tech companies operate with specialized data science teams. Additionally, these teams are custom-built and wired for specific tasks to achieve different business goals. For example, the team structure at Airbnb is one of the most interesting use cases. Martin Daniel, a data scientist at Airbnb, explains in this talk how the team emphasizes having an experimentation-centric culture and applies machine learning rigorously to address unique product challenges.

Job roles and responsibilities within a data science team

As discussed earlier, there are many roles within a data science team. As per Michael Hochster, Director of Data Science at Stitch Fix, there are two types of data scientists: Type A and Type B.

Type A stands for analysis. Individuals involved in Type A are statisticians that make sense of data without necessarily having strong programming knowledge. Type A data scientists perform data cleaning, forecasting, modeling, visualization, etc.

Type B stands for building. These individuals use data in production. They're good software engineers with strong programming knowledge and a statistics background. They build recommendation systems, personalization use cases, etc.

It is rare, though, that one expert will fit into a single category. But understanding these data science functions can help make sense of the roles described further.

Chief data officer/Chief analytics officer

The chief data officer (CDO) role has been taking organizations by storm. The NewVantage Partners Big Data Executive Survey 2018 found that 62.5% of Fortune 1000 business and technology decision-makers said their organization had appointed a chief data officer. The role of chief data officer involves overseeing a range of data-related functions that may include data management, ensuring data quality, and creating data strategy. He or she may also be responsible for data analytics and business intelligence, the process of drawing valuable insights from data. Even though chief data officer and chief analytics officer (CAO) are two distinct roles, they are often handled by the same person. Expert professionals and leaders in analytics also own the data strategy and how a company should treat its data. This makes sense, as analytics provides insights and value to the data. Hence, with a CDO+CAO combination, companies can take advantage of a good data strategy and proper data management without losing on quality. According to compensation analysis from PayScale, the median chief data officer salary is $177,405 per year, including bonuses and profit share, ranging from $118,427 to $313,791 annually.
Skill sets required: data science and analytics, programming skills, domain expertise, and leadership and visionary abilities.

Data analyst

The data analyst role implies proper data collection and interpretation activities. The person in this job role will ensure that collected data is relevant and exhaustive while also interpreting the results of the data analysis. Some companies also require data analysts to have visualization skills to convert alienating numbers into tangible insights through graphics. As per Indeed, the average salary for a data analyst is $68,195 per year in the United States.

Skill sets required: programming languages like R, Python, JavaScript, C/C++, and SQL. Along with these, critical thinking, data visualization, and presentation skills are good to have.

Data scientist

Data scientists are data experts who have the technical skills to solve complex problems and the curiosity to explore what problems need to be solved. A data scientist is an individual who develops machine learning models to make predictions and is well versed in algorithm development and computer science. This person will also know the complete lifecycle of model development. A data scientist requires large amounts of data to develop hypotheses, make inferences, and analyze customer and market trends. Basic responsibilities include gathering and analyzing data and using various types of analytics and reporting tools to detect patterns, trends, and relationships in data sets. According to Glassdoor, the current U.S. average salary for a data scientist is $118,709.

Skill sets required: a data scientist will require knowledge of big data platforms and tools like Seahorse powered by Apache Spark, JupyterLab, TensorFlow, and MapReduce; programming languages that include SQL, Python, Scala, and Perl; and statistical computing languages such as R. They should also have cloud computing capabilities and knowledge of various cloud platforms like AWS, Microsoft Azure, etc. You can also read this post on how to ace a data science interview to know more.

Machine learning engineer

At times a data scientist is confused with a machine learning engineer, but a machine learning engineer is a distinct role that involves different responsibilities. A machine learning engineer is someone who is responsible for combining software engineering and machine modeling skills. This person determines which model to use and what data should be used for each model. Probability and statistics are also their forte. Everything that goes into training, monitoring, and maintaining a model is the ML engineer's job. The average machine learning engineer's salary is $146,085 in the US, and the role is ranked No. 1 on Indeed's Best Jobs in 2019 list.

Skill sets required: machine learning engineers are required to have expertise in computer science and programming languages like R, Python, Scala, Java, etc. They are also required to know probability and statistics, data modeling, and evaluation techniques.

Data architects and data engineers

The data architects and data engineers work in tandem to conceptualize, visualize, and build an enterprise data management framework. The data architect visualizes the complete framework to create a blueprint, which the data engineer can use to build a digital framework. The data engineering role has recently evolved from the traditional software-engineering field.
Recent enterprise data management experiments indicate that data-focused software engineers are needed to work along with the data architects to build a strong data architecture. The average salary for a data architect in the US ranges from $122,000 to $129,000 annually, as per a recent LinkedIn survey.

Skill sets required: a data architect or engineer should have a keen interest in and experience with programming languages and frameworks like HTML5, RESTful services, Spark, Python, Hive, Kafka, and CSS. They should have the knowledge and experience to handle database technologies such as PostgreSQL, MapReduce, and MongoDB, and visualization platforms such as Tableau, Spotfire, etc.

Business analyst

A business analyst (BA) basically handles the chief analytics officer's role but at the operational level. This implies converting business expectations into data analysis. If your core data scientist lacks domain expertise, a business analyst can bridge the gap. They are responsible for using data analytics to assess processes, determine requirements, and deliver data-driven recommendations and reports to executives and stakeholders. BAs engage with business leaders and users to understand how data-driven changes will be implemented to processes, products, services, software, and hardware. They further articulate these ideas and balance them against what is technologically feasible and financially reasonable. The average salary for a business analyst is $75,078 per year in the United States, as per Indeed.

Skill sets required: excellent domain and industry expertise. Along with this, good communication and data visualization skills and knowledge of business intelligence tools are good to have.

Data visualization engineer

This specific role is not present in every data science team, as some of the responsibilities are covered by either a data analyst or a data architect. Hence, this role is only necessary for a specialized data science model. The role of a data visualization engineer involves having a solid understanding of UI development to create custom data visualization elements for your stakeholders. Regardless of the technology, successful data visualization engineers have to understand principles of design, both graphical and, more generally, user-centered design. As per PayScale, the average salary for a data visualization engineer is $98,264.

Skill sets required: a data visualization engineer needs to have rigorous knowledge of data visualization methods and be able to produce various charts and graphs to represent data. Additionally, they must understand the fundamentals of design principles and the visual display of information.

To sum it up, the data science team has evolved to create a number of job roles and opportunities, but companies still face challenges in building up the team from scratch and find it hard to figure out where to start. If you are facing a similar dilemma, check out the book Managing Data Science, written by Kirill Dubovikov. It covers concepts and methodologies to manage and deliver top-notch data science solutions, while also providing guidance on hiring, growing, and sustaining a successful data science team.

Further reading:
How to learn data science: from data mining to machine learning
How to ace a data science interview
Data science vs. machine learning: understanding the difference and what it means today
30 common data science terms explained
9 Data Science Myths Debunked


Why use JVM (Java Virtual Machine) for deep learning

Guest Contributor
10 Nov 2019
5 min read
Deep learning is one of the revolutionary breakthroughs of the decade for enterprise application development. Today, the majority of organizations and enterprises have to transform their applications to exploit the capabilities of deep learning. In this article, we will discuss how to leverage the capabilities of the JVM (Java Virtual Machine) to build deep learning applications.

Enterprises prefer the JVM

The major JVM languages used in enterprises are Java, Scala, Groovy, and Kotlin. Java is the most widely used programming language in the world, and nearly all major enterprises use Java in some way or the other. Enterprises use JVM-based languages such as Java to build complex applications because JVM features are optimal for production applications. JVM applications are also significantly faster and require far fewer resources to run compared to their counterparts such as Python. Java can perform more computational operations per second compared to Python; here is an interesting performance benchmark for the same.

JVM optimizes performance benchmarks

Production applications represent a business and are very sensitive to performance degradation, latency, and other disruptions. Application performance is estimated from latency/throughput measures. Memory overload and high resource usage can influence these measures. Applications that demand more resources or memory require good hardware and further optimization from the application itself. The JVM helps in optimizing performance benchmarks and tuning the application to the hardware's fullest capabilities. The JVM can also help in avoiding memory footprint issues in the application.

We have discussed JVM features so far, but there's important context on why there's a huge demand for JVM-based deep learning in production. Python is undoubtedly the leading programming language used in deep learning applications. For the same reason, the majority of enterprise developers, i.e. Java developers, are forced to switch to a technology stack that they're less familiar with. On top of that, they need to address compatibility issues and deployment in a production environment while integrating neural network models.

DeepLearning4J, a deep learning library for the JVM

Java developers working on enterprise applications want to exploit deployment tools like Maven or Gradle for hassle-free deployments. So, there's a demand for a JVM-based deep learning library to simplify the whole process. Although there are multiple deep learning libraries that serve the purpose, DL4J (Deeplearning4j) is one of the top choices. DL4J is a deep learning library for the JVM and is among the most popular repositories on GitHub. DL4J, developed by the Skymind team, is the first open-source deep learning library that is commercially supported. What makes it so special is that it is backed by ND4J (N-Dimensional Arrays for Java) and JavaCPP. ND4J is a scientific computing library developed by the Skymind team. It acts as the required backend dependency for all neural network computations in DL4J. ND4J is much faster in computations than NumPy. JavaCPP acts as a bridge between Java and native C++ libraries; ND4J internally depends on JavaCPP to run native C++ libraries. DL4J also has a dedicated ETL component called DataVec. DataVec helps to transform the data into a format that a neural network can understand, and data analysis can be done using DataVec much like with pandas, a popular Python data analysis library.
Also, DL4J uses the Arbiter component for hyperparameter optimization. Arbiter finds the best configuration to obtain good model scores by performing a random/grid search using the hyperparameter values defined in a search space.

Why choose DL4J for your deep learning applications?

DL4J is a good choice for developing distributed deep learning applications. It can leverage the capabilities of Apache Spark and Hadoop to develop high-performing distributed deep learning applications. Its performance is equivalent to Caffe when multi-GPU hardware is used. We can use DL4J to develop multi-layer perceptrons, convolutional neural networks, recurrent neural networks, and autoencoders. There are a number of hyperparameters that can be adjusted to further optimize neural network training. The Skymind team did a good job of explaining the important basics of DL4J on their website. On top of that, they also have a Gitter channel to discuss or report bugs straight to their developers. If you are keen on exploring reinforcement learning further, there's a dedicated library called RL4J (Reinforcement Learning for Java) developed by Skymind. It can already play the game Doom!

DL4J combines all the above-mentioned components (DataVec, ND4J, Arbiter, and RL4J) into a deep learning workflow, thus forming a powerful software suite. Most importantly, DL4J enables productionization of deep learning applications for the business. If you are interested in learning how to develop real-time applications on DL4J, check out my new book Java Deep Learning Cookbook. In this book, I show you how to install and configure Deeplearning4j to implement deep learning models. You can also explore recipes for training and fine-tuning your neural network models using Java. By the end of this book, you'll have a clear understanding of how you can use Deeplearning4j to build robust deep learning applications in Java.

Author Bio

Rahul Raj has more than 7 years of IT industry experience in software development, business analysis, client communication, and consulting for medium/large scale projects. He has extensive experience in development activities comprising requirement analysis, design, coding, implementation, code review, testing, user training, and enhancements. He has written a number of articles about neural networks in Java and is featured by DL4J and the official Java community channel. You can follow Rahul on Twitter, LinkedIn, and GitHub.

Further reading:
Top 6 Java Machine Learning/Deep Learning frameworks you can't miss
6 most commonly used Java Machine learning libraries
Deeplearning4J 1.0.0-beta4 released with full multi-datatype support, new attention layers, and more!


How hackers are using Deepfakes to trick people

Guest Contributor
02 Oct 2019
7 min read
Cybersecurity analysts have warned that spoofing using artificial intelligence is within the realm of possibility and that people should be aware of the possibility of getting fooled by such voice- or picture-based deepfakes.

What is Deepfake?

Deepfakes rely on a branch of AI called Generative Adversarial Networks (GANs). A GAN requires two machine learning networks that teach each other through an ongoing feedback loop. The first one takes real content and alters it. Then, the second machine learning network, known as the discriminator, tests the authenticity of the changes. As the machine learning networks keep passing the material back and forth and receiving feedback about it, they get smarter.

GANs are still in the early stages, but people expect numerous potential commercial applications. For example, some can convert a single image into different poses. Others can suggest outfits similar to what a celebrity wears in a photo or turn a low-quality picture into a high-resolution snapshot. But, outside of those helpful uses, deepfakes could have sinister purposes. Consider the blowback if a criminal creates a deepfake video of something that would hurt someone's reputation, for instance, a deepfake video of a politician "admitting" to illegal activities, like accepting a bribe. Other instances of this kind of AI that are already possible include cases of misleading spoken dialogue, where the lips of someone saying something offensive get placed onto someone else. In one of the best-known examples of deepfake manipulation, BuzzFeed published a clip now widely known as "ObamaPeele." It combined a video of President Obama with film director Jordan Peele's lips. The result made it seem as if Obama cursed and said things he never would in public.
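To make the generator/discriminator feedback loop concrete, here is a minimal, hypothetical sketch in PyTorch, written for illustration only and unrelated to any real deepfake tool. The "content" here is just 8-dimensional vectors: the generator learns to turn random noise into samples the discriminator accepts as real, while the discriminator learns to tell the two apart.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; real deepfake systems use image networks.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(64, 8) + 3.0   # stand-in for "real" content

for step in range(200):
    # Discriminator: learn to label real samples 1 and generated samples 0.
    fake = G(torch.randn(64, 16)).detach()
    d_loss = loss_fn(D(real_data), torch.ones(64, 1)) + \
             loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce samples the discriminator labels as real.
    fake = G(torch.randn(64, 16))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The alternating updates are the "ongoing feedback loop" described above: each network's progress becomes the other's training signal, which is what lets deepfake generators become convincing over time.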
Deepfakes are real enough to cause action

The advanced deepfake efforts that cybersecurity analysts warn about rely on AI to create something so real that it causes people to act. For example, in March of 2019, the CEO of a British energy firm received a call from what sounded like his boss. The message was urgent: the executive needed to transfer a large amount of funds to a Hungarian supplier within the hour. Only after the money was sent did it become clear the executive's boss was never on the line. Instead, cybercriminals had used AI to generate an audio clip that mimicked his boss's voice. The criminals called the British man and played the clip, convincing him to transfer the funds. The unnamed victim was scammed out of €220,000, an amount equal to $243,000. Reports indicate it's the first successful hack of its kind, although it's an unusual way for hackers to go about fooling victims. Some analysts point out that other hacks like this may have happened but gone unreported, or perhaps the people involved did not know hackers used this technology. According to Rüdiger Kirsch, a fraud expert at the insurance company that covered the full amount of the claim, this is the first time the insurer dealt with such an instance. The AI technology apparently used to mimic the voice was so authentic that it captured the parent company leader's German accent and the melody of his voice.

Deepfakes capitalize on urgency

One of the telltale signs of deepfakes and other kinds of spoofing, most of which currently happen online, is a false sense of urgency. For example, lottery scammers emphasize that their victims must send personal details immediately to avoid missing out on their prizes. The deepfake hackers used time constraints to fool this CEO as well. The AI technology on the other end of the phone told the CEO that he needed to send the money to a Hungarian supplier within the hour, and he complied. Even more frighteningly, the deceiving tech was so advanced that hackers used it for several phone calls to the victim.

One of the best ways to avoid scams is to get further verification from outside sources, rather than immediately responding to the person engaging with you. For example, if you're at work and get a call or email from someone in accounting who asks for your Social Security number or bank account details to update their records, the safest thing to do is to contact the accounting department yourself and verify the legitimacy. Many online spoofing attempts have spelling or grammatical errors, too. The challenging thing about voice trickery, though, is that those characteristics don't apply; you can only go by what your ears tell you. Since these kinds of attacks are not yet widespread, the safest way to avoid disastrous consequences is to ignore the urgency and take the time you need to verify the requests through other sources.

Hackers can target deepfake victims indefinitely

One of the most impressive things about this AI deepfake case is that it involved more than one phone conversation. The criminals called again after receiving the funds to say that the parent company had sent reimbursement funds to the United Kingdom firm. But they didn't stop there. The CEO received a third call that impersonated the parent company representative again and requested another payment. That time, though, the CEO became suspicious and didn't agree, as the promised reimbursement funds had not yet come through. Moreover, the latest call requesting funds originated from an Austrian phone number. Eventually, the CEO called his boss and discovered the fakery by handling calls from both the real person and the imposter simultaneously. Evidence suggests the hackers used commercially available voice generation software to pull off their attack. However, it is not clear if the hackers used bots to respond when the victim asked questions of the caller posing as the parent company representative.

Why do deepfakes work so well?

This deepfake is undoubtedly more involved than the emails hackers send out in bulk, hoping to fool some unsuspecting victims. Even those that use company logos, fonts, and familiar phrases are arguably not as realistic as something that mimics a person's voice so well that the victim can't distinguish the fake from the real thing. The novelty of these incidents also makes individuals less aware that they could happen. Although many people receive training that helps them spot some online scams, the curriculum does not yet extend to these advanced deepfake cases. Making the caller someone in a position of power increases the likelihood of compliance, too. Generally, if a person hears a voice on the other end of the phone that they recognize as their superior, they won't question it. Plus, they might worry that any delay in fulfilling the caller's request might get perceived as showing a lack of trust in their boss or an unwillingness to follow orders. You've probably heard people say, "I'll believe it when I see it." But thanks to this emerging deepfake technology, you can't necessarily confirm the authenticity of something by hearing or seeing it. That's an unfortunate development, and something that highlights how important it is to investigate further before acting.
That may mean checking facts or sources or getting in touch with superiors directly to verify what they want you to do. Indeed, those extra steps take more time. But, they could save you from getting fooled. Author Bio Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com. Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube Now there is a Deepfake that can animate your face with just your voice and a picture using Temporal GANs
What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help

Guest Contributor
24 Aug 2019
8 min read
Artificial intelligence is a hot topic for many industries. When it comes to sales, the situation gets complicated. According to the latest Salesforce State of Sales report, just 21% of organizations use AI in sales today, while its adoption in sales is expected to grow 155% by 2020. Let’s explore what keeps sales teams from implementing AI and how to overcome these challenges to unlock new opportunities. Why do so few teams adopt AI in Sales There are a few reasons behind such a low rate of AI application in sales. First, some teams don’t feel they are prepared to integrate AI into their existing strategies. Second, AI technologies are often applied in a hectic way: many businesses have high expectations of AI and concentrate mostly on its benefits rather than contemplating possible difficulties upfront. Such an approach rarely results in positive business transformation. Here are some common challenges that businesses need to overcome to turn their sales AI projects into success stories. Businesses don’t know how to apply AI in their workflow Problem: Different industries call for different uses of AI. Still, companies tend to buy AI platforms to use them for the same few popular tasks, like predictions based on historical data or automatic data logging. In reality, the business type and direction should dictate what AI solution will best fit the needs of an organization. For example, in e-commerce, AI can serve dynamic product recommendations on the basis of the customer’s previous purchases or views. Teams relying on email marketing can use AI to serve personalized email content as well as optimize send times. Solution: Let a sales team participate in AI onboarding. Prior to setup, gain insight into your sales reps’ daily routine, needs, and pains. Then, get their feedback continuously during the actual AI implementation. Such a strategy will ensure the sales team benefits from a tailored, rather than a generic, AI system. AI requires data businesses don’t have Problem: AI is most efficient when fed with huge amounts of data. It’s true, a company with a few hundred leads per week will train AI for better predictions than the company with the same amount of leads per month. Frequently, companies assume they don’t have so much data or they cannot present it in a suitable format to train an AI algorithm. Solution: In reality, AI can be trained with incomplete and imperfect data. Instead of trying to integrate the whole set of data prior to implementing AI, it’s possible to use it with data subsets, like historical purchase data or promotional campaign analytics. Plus, AI can improve the quality of data by predicting missing elements or identifying possible errors. Businesses lack skills to manage AI platforms Problem: AI is a sophisticated algorithm that requires special skills to implement and use it. Thus, sales teams need to be augmented with specialized knowledge in data management, software optimization, and integration. Otherwise, AI tools can be used incorrectly and thus provide little value. Solution: There are two ways of solving this problem. First, it’s possible to create a new team of big data, machine learning, and analytics experts to run AI implementation and coordinate it with the sales team. This option is rather time-consuming. Second, it’s possible to buy an AI-driven platform, like Salesforce, for example, that includes both out-of-the-box features as well as plenty of customization opportunities. 
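To make the earlier point about imperfect data concrete, here is a small, hypothetical sketch of training a lead-scoring model on an incomplete historical subset. The column names, data, and conversion rule are invented for illustration and have nothing to do with Salesforce Einstein's internals.

```python
# Hypothetical lead-scoring sketch: train on a small, imperfect historical
# subset (missing values included) rather than waiting for "complete" data.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 500
leads = pd.DataFrame({
    "emails_opened": rng.integers(0, 20, n).astype(float),
    "site_visits": rng.integers(0, 15, n).astype(float),
    "deal_size_k": rng.normal(50, 20, n),
})
leads.loc[rng.random(n) < 0.2, "deal_size_k"] = np.nan   # imperfect data
converted = (leads["emails_opened"] + leads["site_visits"] > 18).astype(int)

# The imputer fills in missing values so the model can still learn from the subset.
model = make_pipeline(SimpleImputer(strategy="median"),
                      LogisticRegression(max_iter=1000))
model.fit(leads, converted)

# Score a new lead: the predicted probability of conversion becomes the "lead score".
new_lead = pd.DataFrame([{"emails_opened": 12.0, "site_visits": 9.0,
                          "deal_size_k": np.nan}])
print(round(model.predict_proba(new_lead)[0, 1], 2))
```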
Instead of hiring new specialists to manage the platform, you can reach out to Salesforce consultants who will help you select the best-fit plan, configure, and implement it. If your requirements go beyond the features available by default, then it's possible to add custom functionality. How AI can change the sales of tomorrow When you have a clear vision of the AI implementation challenges and understand how to overcome them, it's time to make use of AI-provided benefits. A core benefit of any AI system is its ability to analyze large amounts of data across multiple platforms and then connect the dots, i.e. draw actionable conclusions. To illustrate these AI opportunities, let's take Salesforce, one of the most popular solutions in this domain today, and see how its AI technology, Einstein, can enhance a sales workflow. Time-saving and productivity boost Administrative work eats up sales reps' time that they can spend selling. That's why many administrative tasks should be automated. Salesforce Einstein can save time usually wasted on manual data entry by:
- Automating contact creation and update
- Activity logging
- Generating lead status reports
- Syncing emails and calendars
- Scheduling meetings
Efficient lead management When it comes to leads, sales reps tend to base their lead management strategies on gut feeling. In spite of its importance, intuition cannot be the only means of assessing leads. The approach should be more holistic. AI has unmatched abilities to analyze large amounts of information from different sources to help score and prioritize leads. In combination with sales reps' intuition, such data can bring lead management to a new level. For example, Einstein AI can help with:
- Scoring leads based on historical data and performance metrics of the best customers
- Classifying opportunities in terms of their readiness to convert
- Tracking reengaged opportunities and nurturing them
Predictive forecasting AI is well-known for its predictive capabilities that help sales teams make smarter decisions without running endless what-if scenarios. AI forecasting builds sales models using historical data. Such models anticipate possible outcomes of multiple scenarios common in sales reps' work. Salesforce Einstein, for example, can give the following predictions:
- Prospects most likely to convert
- Deals most likely to close
- Prospects or deals to target
- New leads
- Opportunities to upsell or cross-sell
The same algorithm can be used for forecasting sales team performance during a specified period of time and taking proactive steps based on those predictions. What's more, sales intelligence is shifting from predictive to prescriptive, where prescriptive AI does not recommend but prescribes exact actions to be taken by sales reps to achieve a particular outcome. Watching out for pitfalls of AI in sales While AI promises to fulfil sales reps' advanced requests, there are still some fears and doubts around it. First of all, as a rising technology, AI still carries ethical issues related to its safe and legitimate use in the workplace, such as those of the integrity of autonomous AI-driven decisions and legitimate origin of data fed to algorithms. While the full-fledged legal framework is yet to be worked out, governments have already stepped in. For example, the High-Level Expert Group on AI of the European Commission came up with the Ethics Guidelines for Trustworthy Artificial Intelligence covering every aspect from human oversight and technical robustness to data privacy and non-discrimination.
In particular, non-discrimination relates to potential bias, such as algorithmic bias that comes from human bias when sourcing data, and the bias that arises when correlation is mistaken for causation. Thus, AI-driven analysis should be incorporated into decision-making cautiously, as just one of many sources of insight. AI won't replace a human mind: the data still needs to be processed critically. When it comes to sales, another common concern is that AI will take sales reps' jobs. Yes, some tasks that are deemed monotonous and time-consuming are indeed taken over by AI automation. However, this is actually a blessing, as AI does not replace jobs but augments them. This way, sales reps have more time on their hands to complete more creative and critical tasks. It's true, however, that employers will need people who know how to work with AI technologies. That means either ongoing training or new hires, which can be rather costly. The stakes are high, though: to keep up with the fast-changing world, one has to bargain one's way to success, finding a way around current limitations and challenges. In a nutshell AI is key to boosting sales team performance. However, successful AI integration into sales and marketing strategies requires teams to overcome the challenges posed by sophisticated AI technologies. Popular AI-driven platforms like Salesforce help sales reps tap into AI's potential as well as enjoy vast opportunities for saving time and increasing productivity. Author Bio Kayla Matthews... Valerie Nechay is MarTech and CX Observer at Iflexion, a Denver-based custom software development provider. Using her writing powers, she's translating complex technologies into fascinating topics and sharing them with the world. Now her focus is on Salesforce implementation how-tos, challenges, insights, and shortcuts, as well as broader applications of enterprise tech for business development. IBM halt sales of Watson AI tool for drug discovery amid tepid growth: STAT report. Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library How to create sales analysis app in Qlik Sense using DAR method [Tutorial]
Hot Chips 31: IBM Power10, AMD’s AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more

Fatema Patrawala
23 Aug 2019
7 min read
Hot Chips 31, the premier event where the biggest semiconductor vendors highlight their latest architectural developments, is held in August every year. This year the event was held at the Memorial Auditorium on the Stanford University campus in California, from August 18-20, 2019. Since its inception, it has been co-sponsored by IEEE and ACM SIGARCH. Hot Chips is prized for the level of depth it provides on the latest technology and the upcoming releases in the IoT, firmware and hardware space. This year the list of presentations for Hot Chips was almost overwhelming, with a wide range of technical disclosures on the latest chip logic innovations. Almost all the major chip vendors and IP licensees involved in semiconductor logic design took part: Intel, AMD, NVIDIA, Arm, Xilinx, and IBM were on the list, and companies like Google, Microsoft, Facebook and Amazon also took part. There were notable absences, too, such as Apple, which, despite being on the committee, last presented at the conference in 1994. Day 1 kicked off with tutorials and sponsor demos. On the cloud side, Amazon AWS covered the evolution of hypervisors and the AWS infrastructure. Microsoft described its acceleration strategy with FPGAs and ASICs, with details on Project Brainwave and Project Zipline. Google covered the architecture of Google Cloud with the TPU v3 chip. A three-part RISC-V tutorial rounded off the afternoon, so the day was spent well, with insights into the latest cloud infrastructure and processor architectures. The detailed talks were presented on Day 2 and Day 3; below are some of the important highlights of the event. IBM's POWER10 Processor expected by 2021 IBM creates families of processors to address different segments, with different models for tasks like scale-up, scale-out, and now NVLink deployments. The company is adding new custom models that use new acceleration and memory devices, and that was the focus of this year's talk at Hot Chips. IBM also announced POWER10, which is expected to arrive with these new enhancements in 2021, along with details of POWER10's core counts and process technology. IBM also spoke about focusing on developing diverse memory and accelerator solutions to differentiate its product stack with heterogeneous systems. IBM aims to reduce the number of PHYs on its chips, so now it has PCIe Gen 4 PHYs while the rest of the SERDES run with the company's own interfaces. This creates a flexible interface that can support many types of accelerators and protocols, like GPUs, ASICs, CAPI, NVLink, and OpenCAPI. AMD wants to become a significant player in Artificial Intelligence AMD does not have an artificial intelligence-focused chip. However, AMD CEO Lisa Su, in a keynote address at Hot Chips 31, stated that the company is working toward becoming a more significant player in artificial intelligence. Su stated that the company had adopted a CPU/GPU/interconnect strategy to tap the artificial intelligence and HPC opportunity. She said that AMD would use all its technology in the Frontier supercomputer. The company plans to fully optimize its EPYC CPU and Radeon Instinct GPU for supercomputing. It would further enhance the system's performance with its Infinity Fabric and unlock performance with its ROCm (Radeon Open Compute) software tools. Unlike Intel and NVIDIA, AMD does not have a dedicated artificial intelligence chip or application-specific accelerators.
Despite this, Su noted, "We'll absolutely see AMD be a large player in AI." AMD is considering whether or not to build a dedicated AI chip; this decision will depend on how artificial intelligence evolves. Su explained that companies have been improving their CPU (central processing unit) performance by leveraging various elements: process technology, die size, TDP (thermal design power), power management, microarchitecture, and compilers. Process technology is the biggest contributor, as it boosts performance by 40%. Increasing die size also boosts performance in the double digits, but it is not cost-effective. AMD, meanwhile, used microarchitecture to boost EPYC Rome server CPU IPC (instructions per cycle) by 15% in single-threaded and 23% in multi-threaded workloads. This IPC improvement is above the industry average IPC improvement of around 5%-8%. Intel's Nervana NNP-T and Lakefield 3D Foveros hybrid processors Intel revealed fine-grained details about its much-anticipated Spring Crest Deep Learning Accelerators at Hot Chips 31. The Nervana Neural Network Processor for Training (NNP-T) comes with 24 processing cores and a new take on data movement that's powered by 32GB of HBM2 memory. Its 27 billion transistors are spread across a 688mm2 die. The NNP-T also incorporates leading-edge technology from Intel rival TSMC. Intel Lakefield 3D Foveros hybrid processors In another presentation, Intel talked about Lakefield 3D Foveros hybrid processors, the first to come to market with Intel's new 3D chip-stacking technology. The current design consists of two dies. The lower die houses all of the typical southbridge features, like I/O connections, and is fabbed on the 22FFL process. The upper die is a 10nm CPU that features one large compute core and four smaller Atom-based 'efficiency' cores, similar to an ARM big.LITTLE processor. Intel calls this a "hybrid x86 architecture," and it could denote a fundamental shift in the company's strategy. Finally, the company stacks DRAM atop the 3D processor in a PoP (package-on-package) implementation. Cerebras' largest chip ever with 1.2 trillion transistors California artificial intelligence startup Cerebras Systems introduced its Cerebras Wafer Scale Engine (WSE), the world's largest-ever chip built for neural network processing. Sean Lie, Co-Founder and Chief Hardware Architect at Cerebras, presented the gigantic chip at Hot Chips 31. The 16nm WSE is a 46,225 mm2 silicon chip, slightly larger than a 9.7-inch iPad. It features 1.2 trillion transistors, 400,000 AI-optimized cores, 18 gigabytes of on-chip memory, 9 petabyte/s memory bandwidth, and 100 petabyte/s fabric bandwidth. It is 56.7 times larger than the largest Nvidia graphics processing unit, which accommodates 21.1 billion transistors on an 815 mm2 silicon base. NVIDIA's multi-chip solution for a deep neural network accelerator NVIDIA, which announced a test multi-chip solution for DNN computations at a VLSI conference last year, explained the chip technology at Hot Chips 31 this year. It is currently a test chip for multi-chip deep learning inference. It is designed for CNNs and has a RISC-V chip controller. Each package has 36 small chips in a 6x6 arrangement, each chip has 12 PEs, and each PE has 8 vector MACs. A few other notable talks at Hot Chips 31 Microsoft unveiled its new HoloLens 2.0 silicon. It has a holographic processor and custom silicon.
The application processor runs the app, and the HPU modifies the rendered image and sends it to the display. Facebook presented details on Zion, its next-generation in-memory unified training platform. Zion, which is designed for Facebook's sparse workloads, has a unified BFLOAT16 format with CPU and accelerators. Huawei spoke about its Da Vinci architecture; a single Ascend 310 can deliver 16 TeraOPS of 8-bit integer performance, support real-time analytics across 16 channels of HD video, and consume less than 8W of power. Xilinx Versal AI engine Xilinx, the manufacturer of FPGAs, announced its new Versal AI engine last year as a way of moving FPGAs into the AI domain. This year at Hot Chips, the company expanded on the technology and more. Ayar Labs, an optical chip-making startup, showcased results of its work with DARPA (the U.S. Department of Defense's Defense Advanced Research Projects Agency) and Intel on an FPGA chiplet integration platform. The final talk on Day 3 was a presentation by Habana, which discussed an innovative approach to scaling AI training systems with its GAUDI AI processor. AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications
The most asked questions on Big Data, Privacy and Democracy in last month’s international hearing by Canada Standing Committee

Savia Lobo
16 Jun 2019
16 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing took place on May 28 and includes the following witnesses:
- Jim Balsillie, Chair, Centre for International Governance Innovation; Retired Chairman and co-CEO of BlackBerry
- Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe
- Shoshana Zuboff, Author of The Age of Surveillance Capitalism
- Maria Ressa, CEO and Executive Editor, Rappler
Witnesses were asked various questions on data privacy, data regulation, the future of digital tech under the current data privacy model, and much more. Why we need to enforce users' data privacy rights with independent regulators Damian Collins to McNamee: "In your book you said, as far as I can tell Zuck has always believed that users value privacy more than they should. On that basis, do you think we will have to establish in law the standards we want to see enforced in terms of users' rights and data privacy, with independent regulators to oversee them? Because the companies will never do that effectively themselves, because they just don't share the concerns we have about how the systems are being abused." Roger McNamee: "I believe that it's not only correct in terms of their philosophy, as Professor Zuboff points out, but it is also baked into their business model--this notion--that any data that exists in the world, claimed or otherwise, they will claim for their own economic use and framing. How you do that privacy, I think, is extremely difficult and in my opinion, would be best done by simply banning the behaviors that are used to gather the data." Zuckerberg is more afraid of privacy regulation Jo Stevens, Member of Parliament for Cardiff Central, asked McNamee, "What do you think Mark Zuckerberg is more frightened about, privacy regulation or antitrust action?" McNamee replied saying that Zuckerberg is more afraid of privacy regulation. He further adds, "To Lucas I would just say the hardest part of this is setting the standard of what the harm is; these guys have hidden behind the fact that it's very hard to quantify many of these things." In the future can our homes be without digital tech? Michel Picard, Member of the Canadian House of Commons, asked Zuboff, "Your question at the beginning is, can the digital future be our home? My reaction to that was, in fact, the question should be: can the future home be without digital?" Zuboff replied, "That's such an important distinction because I don't think there's a single one of us in this room that is against the digital per se. This is not about being anti-technology, it's about technology being hijacked by a rogue economic logic that has turned it to its own purposes. We talked about the idea that conflating the digital with surveillance capitalism is a dangerous category error.
What we need is to be able to free the potential of the digital to get back to those values of the democratization of knowledge and individual emancipation and empowerment that it was meant to serve and that it still can serve." Picard further asks, "Compared to the Industrial Revolution, where, although we were scared of the new technology, this technology was addressed to people for them to be beneficiaries of that progress, now we're not the beneficiaries at all. The second step of this revolution is a situation where people become a producer of the raw material, and as you write, 'Google's invention reveals new capabilities to infer and deduce the thoughts, feelings, intentions and interests of individuals and groups with an automated architecture that operates as a one-way mirror irrespective of a person's awareness.' So it's like people connected to the machine, the matrix." Zuboff replies, "From the very beginning, the data scientists at Google who were inventing surveillance capitalism celebrated, in their written patents and in their published research, the fact that they could hunt and capture behavioral surplus without users ever being aware of these backstage operations. Surveillance was baked into the DNA of this economic logic, essential to its strange form of value creation. So it's with that kind of sobriety and gravitas that it is called surveillance capitalism, because without the surveillance piece it cannot exist." Can big data simply be pulled out of jurisdictions in the absence of harmonized regulation across democracies? Peter Kent, Member of Parliament for Thornhill, asked Balsillie, "With regards to what we've seen that Google has said in response to the new federal elections legislation on advertising, that it will simply withdraw from accepting advertising: is it possible that big data could simply pull out of jurisdictions where regulations are present, in the absence of harmonized regulation across the democracies?" To this, Balsillie replies, "Well, that's the best news possible because, as everyone's attested here, the purpose of surveillance capitalism is to undermine personal autonomy, and yet elections and democracy are centered on the sovereign self exercising their sovereign will. Now, why in the world would you want to undermine the core bedrock of elections in a non-transparent fashion to the highest bidder at the very time your whole citizenry is on the line? And in fact, the revenue from it is immaterial to these companies. So one of my recommendations is just banning personalized online ads during elections. We have a lot of things you're not allowed to do for six or eight weeks; just put that into the package, it's simple and straightforward." McNamee further adds his point on the question by saying, "The point that I think is being overlooked here, which is really important, is that if these companies disappeared tomorrow, the services they offer would not disappear from the marketplace. In a matter of weeks, you could replicate Facebook, which would be the harder one. There are substitutes for everything that Google does that are done without surveillance capitalism. Do not in your mind allow any kind of connection between the services you like and the business model of surveillance capitalism.
There is no inherent link, none at all; this is something that has been created by these people because it's wildly more profitable." Committee lends a helping hand as an 'act of solidarity' to press freedom Charlie Angus, a member of the Canadian House of Commons, said, "Facebook and YouTube transformed the power of indigenous communities to speak to each other, to start to change the dynamic of how white society spoke about them. So I understand its incredible power for the good. I see more and more of this in my region, which has self-radicalized people like the flat earthers, anti-vaxxers and 9/11 truthers, and I've seen its effect in our elections through the manipulation of anti-immigrant, anti-Muslim materials. People are dying in Asia because of these platforms. I want to ask you: in an act of solidarity with our Parliament, with our legislators, are there statements that should be made public through our Parliament to give you support, so that we can maintain a link with you as an important ally on the front line?" Ressa replied, "Canada has been at the forefront of holding fast to the values of human rights and press freedom. I think the more we speak about this, the more the values are reiterated, especially since someone like President Trump truly likes President Duterte and vice versa; it's very personal. But sir, when you talked about where people are dying, you've seen this all over Asia: there's Myanmar, there is the drug war here in the Philippines, India and Pakistan, just instances when this tool for empowerment, just like in your district, is something that we do not want to go away, not shut down. And despite the great threats that we face, that I face and my company faces, Facebook and the social media platforms still give us the ability to organize, to create communities of action that had not been there before." Do fear, outrage, hate speech, conspiracy theories sell more than truths? Edwin Tong, a member of the Singapore parliament, asked McNamee about the point McNamee made during his presentation: "The business model of these platforms really is focussed on algorithms that drive content to people who think they want to see this content. And you also mentioned that fear, outrage, hate speech and conspiracy theories are what sell more, and I assume what you mean to say by that is it sells more than truths, would that be right?" McNamee replied, "So there was a study done at MIT in Cambridge, Massachusetts that suggested disinformation spreads 70% further and six times faster than fact, and there are actually good human explanations for why hate speech and conspiracy theories move so rapidly: it's about triggering the fight-or-flight reflex." Tong further highlighted what Ressa said about how this information is spread through the use of bots. "I think she said 26 fake accounts translating into 3 million different accounts which spread the information. I think we are facing a situation where disinformation, if not properly checked, gets exponentially viral. People get to see it all the time, and over time, unchecked, this leads to a serious erosion of trust, a serious undermining of institutions; we can't trust elections, and fundamentally democracy becomes marginalized and eventually demolished." To this, McNamee said, "I agree with that statement completely. To me, the challenge is in how you manage it. If you think about this, censorship and moderation were never designed to handle things at the scale that these Internet platforms operate at.
So in my view, the better strategy is to do the interdiction upstream: to ask the fundamental question of what is the role of platforms like this in society, and then secondly, what's the business model associated with them. So to me, what you really want to do is the following: my partner Renée DiResta, who's a researcher in this area, talks about the issue of freedom of speech versus freedom of reach, the latter being the amplification mechanism. What's really going on on these platforms is the fact that the algorithms find what people engage with and amplify that more, and sadly hate speech, disinformation and conspiracy theories are, as I said, the catnip; that's what really gets the algorithms humming and gets people to react. So in that context, eliminating that amplification is essential, and the question is how you're going to go about doing that and how you are going to essentially verify that it's been done. In my mind, the simplest way to do that is to prevent the data from getting in there in the first place." Tong further said, "I think you must go upstream to deal with it fundamentally in terms of infrastructure, and I think some witnesses also mentioned that we need to look at education, which I totally agree with. But when it does happen, when you have that proliferation of false information, there must be a downstream or end-result kind of remedy, and that's where I think your example of Sri Lanka is very pertinent, because it shows and demonstrates that leaving it unchecked, with the platforms doing nothing about the false information, is wrong. What we do need is to have regulators and governments be clothed with powers and levers to intervene, intervene swiftly, and to disrupt the viral spread of online falsehoods very quickly. Would you agree, as a generalization?" To this, McNamee said, "I would not normally be in favor of the level of government intervention I have recommended here; I simply don't see alternatives at the moment. In order to do what Shoshana's talked about, in order to do what Jim is talking about, you have to have some leverage, and the only leverage governments have today is their ability to shut these things down; nothing else works quickly enough." Sun Xueling, another member from the Parliament of Singapore, asked McNamee, "I'd like to make reference to the Christchurch shooting on the 15th of March 2019, after which the New York Times had published an article by Kevin Roose." She quoted what Roose mentioned in his article: "We do know that the design of Internet platforms can create and reinforce extremist beliefs.
Their recommendation algorithms often steer users towards edgier content, a loop that results in more time spent on the app, and more advertising revenue for the company." McNamee said, "Not only do I agree with that, I would like to make a really important point, which is that the design of the Internet itself is part of the problem. I'm of the generation, as Jim is as well, that was around when the internet was originally conceived and designed, and the notion in those days was that people could be trusted with anonymity. That was a mistake, because bad actors use anonymity to do bad things, and the Internet has essentially enabled disaffected people to find each other in a way they could never find each other before, and to organize in ways they could not in the real world. So when we're looking at Christchurch, we have to recognize that the first step, this was a symphonic work: this man went in and organized at least a thousand co-conspirators prior to the act, using the anonymous functions of the internet to gather them and prepare for this act. It was then and only then, after all that groundwork had been laid, that the amplification processes of the system went to work. But keep in mind those same people kept reposting the film; it is still up there today." How can one eliminate the tax deductibility of specific categories of online ads? Jens Zimmermann, from the Republic of Germany, asked Jim Balsillie to explain a bit more deeply "the question of taxation", which he mentioned in one of his six recommendations. To this Balsillie said, "I'm talking about those that are buying the ads. The core problem here is when you're ad-driven; you've heard extremely expert testimony that they'll do whatever it takes to get more eyeballs, and the subscription-based model is a much safer place to be because it's not attention driven. One of the purposes of tax is to manage externalities; if you don't like the externalities that we're grappling with, that are illuminated here, then disadvantage those, and many of these platforms are moving more towards subscription-based models anyway. So just use tax as a vehicle to do that, and the good benefit is it gives you revenue. The second thing it could do is also begin to shift towards more domestic services. I think tax has not been a lever that's been used, and it's right there for you." Thinking beyond behavioral manipulation, data surveillance-driven business models Keit Pentus, the representative from Estonia, asked McNamee, "If you were sitting in my chair today, what would be the three steps you would recommend or you would do, if we leave shutting down the platforms aside for a second?" McNamee said, "In the United States, or in North America, roughly 70% of all the artificial intelligence professionals are working at Google, Facebook, Microsoft, or Amazon, and to a first approximation they're all working on behavioral manipulation. There are at least a million great applications of artificial intelligence, and behavioral manipulation is not one of them. I would argue that it's like creating time-release anthrax or cloning human babies. It's just a completely inappropriate and morally repugnant idea, and yet that is what these people are doing.
I would simply observe that it is the threat of shutting them down, and the willingness to do it for brief periods of time, that creates the leverage to do what I really want to do, which is to eliminate the business model of behavioral manipulation and data surveillance." "I don't think this is about putting the toothpaste back into tubes; this is about formulating toothpaste that doesn't poison people. I believe this is directly analogous to what happened with the chemical industry in the 50s. The chemical industry used to pour its waste products, mercury, chromium, and things like that, directly into fresh water, and it left mine tailings on the sides of hills. Petrol stations would pour spent oil into sewers, and there were no consequences. So the chemical industry grew like crazy and had incredibly high margins. It was the internet platform industry of its era. And then one day society woke up and realized that those companies should be responsible for the externalities that they were creating. So, this is not about stopping progress. This is my world; this is what I do." "I just think we should stop hurting people. We should stop killing people in Myanmar, we should stop killing people in the Philippines, and we should stop destroying democracy everywhere else. We can do way better than that, and it's all about the business model. I don't want to pretend I have all the solutions; what I know is the people in this room are part of the solution, and our job is to help you get there. So don't view anything I say as a fixed point of view." "This is something that we're gonna work on together, and you know the three of us are happy to take bullets for all of you, okay, because we recognize it's not easy to be a public servant with these issues out there. But do not forget, you're not gonna be asking your constituents to give up the stuff they love. The stuff they love existed before this business model, and it'll exist again after this business model is gone." To know more and listen to other questions asked by some other representatives, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU. Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling? UK lawmakers to social media: "You're accessories to radicalization, accessories to crimes", hearing on spread of extremist content Key Takeaways from Sundar Pichai's Congress hearing over user data, political bias, and Project Dragonfly
Maria Ressa on Astroturfs that turns make-believe lies into facts

Savia Lobo
15 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, includes Maria Ressa, CEO and Executive Editor of Rappler, who talks about how information is powerful and how, if molded into make-believe lies, those lies can be turned into facts. In her previous presentation, Maria gave a glimpse of this presentation where she said, "Information is power and if you can make people believe lies, then you can control them. Information can be used for commercial benefits as well as a means to gain geopolitical power." She resumes by saying that the Philippines is a cautionary tale for you: an example of how quickly democracy crumbles and is eroded from within, and how these information operations can take over the entire ecosystem and transform lies into facts. If you can make people believe lies are facts, then you can control them. "Without facts, you don't have the truth; without truth you don't have trust", she says. Journalists have long been the gatekeepers for facts. When we come under attack, democracy is under attack, and when this situation happens, the voice with the loudest megaphone wins. She says that the Philippines is a petri dish for social media. She stated that, as of January 2019, HootSuite reported that Filipinos spent the most time online and the most time on social media, globally. Facebook is our internet, she says, and it's about introducing a virus into our information ecosystem. Over time that virus, lies masquerading as facts, takes over the body politic, and you need to develop a vaccine. That's what we're in search of, and she says she does see a solution. "If social networks are your family and friends in the physical world, social media is your family and friends on steroids; no boundaries of time and space." She showed that astroturfing is typically a three-pronged attack, and she demonstrated examples of how she herself was subject to an astroturf attack. In the long term, the answer is education, and you've heard from the other three witnesses before me some of the things that can be done in the medium term, i.e. media literacy. However, in the short term, it's only the social media platforms that can do something immediately, and we're on the front lines; we need immediate help and an immediate solution. She said her company, Rappler, is one of three fact-checking partners of Facebook in the Philippines, and they take that responsibility really seriously. She further says, "We don't look at the content alone. Once we check to make sure that it is a lie, we look at the network that spreads the lies". She says the first step is to stop new viruses from entering the ecosystem. It is whack-a-mole if one only looks at the content. But when you begin to look at the networks that spread it, then you have something that you can pull out. "It's very difficult to go through 90 hate messages per hour sustained over days and months," she said. That is what we're going through: the kind of astroturfing that turns lies into truth. For us, this is a matter of survival. To know more and listen to other questions asked by some other representatives, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.
'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed? Facebook bans six toxic extremist accounts and a conspiracy theory organization
GROVER: A GAN that fights neural fake news, as long as it creates said news

Vincy Davis
11 Jun 2019
7 min read
Last month, a team of researchers from the University of Washington and the Allen Institute for Artificial Intelligence published a paper titled 'Defending Against Neural Fake News'. The goal of this paper is to reliably detect "neural fake news" so that its harm can be minimized. To this end, the researchers have built a model named 'GROVER'. It works as a generator of fake news that can also spot its own generated fake news articles, as well as those generated by other AI models. GROVER (Generating aRticles by Only Viewing mEtadata Records) models can generate an efficient yet controllable news article, with not only the body but also the title, news source, publication date, and author list. The researchers affirm that the 'best models for generating neural disinformation are also the best models at detecting it'. The framework for GROVER represents fake news generation and detection as an adversarial game between two systems:
- Adversary: this system generates fake stories that match specified attributes: generally, being viral or persuasive. The stories must read as realistic to both human users and the verifier.
- Verifier: this system classifies news stories as real or fake. A verifier has access to unlimited real news stories but only a few fake news stories from a specific adversary.
The dual objective of these two systems suggests an escalating 'arms race' between attackers and defenders: it is expected that as the verification systems get better, the adversaries too will follow. Modeling Conditional Generation of Neural Fake News using GROVER GROVER adopts a language modeling framework which allows for flexible decomposition of an article in the order of p(domain, date, authors, headline, body). At inference time, a set of fields F is provided as context, with each field f delimited by field-specific start and end tokens. During training, inference is simulated by randomly partitioning an article's fields into two disjoint sets F1 and F2. The researchers also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%; this allows the model to learn how to perform unconditional generation. For language modeling, two evaluation modes are considered: unconditional, where no context is provided and the model must generate the article body; and conditional, in which the full metadata is provided as context. The researchers evaluate the quality of disinformation generated by their largest model, GROVER-Mega, using p=.96. The articles are classified into four classes: human-written articles from reputable news websites (Human News), GROVER-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and GROVER-written articles conditioned on the propaganda metadata (Machine Propaganda). Image Source: Defending Against Neural Fake News When rated by qualified workers on Amazon Mechanical Turk, it was found that though the quality of GROVER-written news is not as high as that of human-written news, GROVER is very skilled at rewriting propaganda: the overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by GROVER. Neural Fake News Detection using GROVER The role of the verifier is to mitigate the harm of neural fake news by classifying articles as human- or machine-written. Neural fake news detection is framed as a semi-supervised problem.
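To make the metadata-conditioned setup above more concrete, here is a small illustrative sketch of how an article can be flattened into field-delimited text and how fields might be randomly split into a context set F1 and a target set F2 during training. The delimiter token names and helper functions are invented for illustration and follow only the description above, not GROVER's actual code or vocabulary.

```python
import random

FIELDS = ["domain", "date", "authors", "headline", "body"]

def flatten(article):
    """Serialize the available fields as <start-field> ... <end-field> segments."""
    parts = []
    for f in FIELDS:
        if f in article:
            parts.append(f"<start-{f}> {article[f]} <end-{f}>")
    return " ".join(parts)

def training_partition(article, p_drop_field=0.10, p_body_only=0.35):
    """Randomly pick context fields (F1) and target fields (F2), per the description above."""
    fields = [f for f in FIELDS if f in article]
    if random.random() < p_body_only:
        # Drop all metadata: the model practices unconditional body generation.
        fields = [f for f in fields if f == "body"]
    else:
        # Drop individual metadata fields with 10% probability each.
        fields = [f for f in fields if f == "body" or random.random() >= p_drop_field]
    f1 = set(random.sample(fields, k=random.randint(0, len(fields) - 1)))
    f2 = [f for f in fields if f not in f1]
    return {f: article[f] for f in f1}, f2

article = {"domain": "example.com", "date": "May 29, 2019",
           "authors": "A. Writer", "headline": "An example headline",
           "body": "Body text of the article..."}
context, targets = training_partition(article)
print("context:", flatten(context))
print("generate:", targets)
```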
The neural verifier (or discriminator) has access to many human-written news articles from March 2019 and before, i.e., the entire RealNews training set. However, it has limited access to generations and to more recent news articles. For example, 10k news articles from April 2019 are used to generate article body text, and another 10k articles serve as a set of human-written news articles; the data is split in a balanced way, with 10k for training, 2k for validation, and 8k for testing. The verifier is evaluated in two modes:
- In the unpaired setting, the verifier is provided single news articles, which must be classified independently as Human or Machine.
- In the paired setting, the model is given two news articles with the same metadata, one real and one machine-generated. The verifier must assign the machine-written article a higher Machine probability than the human-written article.
Both modes are evaluated in terms of accuracy. Image Source: Defending Against Neural Fake News It was found that the paired setting is significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators: using GROVER to discriminate GROVER's generations results in roughly 90% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than GROVER overall, which suggests that effective discrimination requires having a similar inductive bias to the generator. Thus it has been found that GROVER can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy, and at the same time GROVER can also defend against such generated fake news. The researchers are of the opinion that an ensemble of deep generative models, such as GROVER, should be used to analyze the content of a text. Obviously the working of the GROVER model has caught many people's attention. https://twitter.com/str_t5/status/1137108356588605440 https://twitter.com/currencyat/status/1137420508092391424 While some find this to be an interesting mechanism to combat fake news, others point out that it doesn't matter if GROVER can identify its own texts if it can't identify the texts generated by other models. Releasing a model like GROVER can turn out to be extremely irresponsible rather than defensive. A user on Reddit says that "These techniques for detecting fake news are fundamentally misguided. You cannot just train a statistical model on a bunch of news messages and expect it to be useful in detecting fake news. The reason for this should be obvious: there is no real information about the label ('fake' vs 'real' news) encoded in the data. Whether or not a piece of news is fake or real depends on the state of the external world, which is simply not present in the data. The label is practically independent of the data." Another user on Hacker News comments that "Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it.
Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention. Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying." A few users feel that this "generate and detect your own fake news" kind of model is going to become unnecessary in the future: it's just a matter of time before text written by algorithms is indistinguishable from human-written text, and at that point there will be no way to tell such articles apart. A user suggests that "I think to combat fake news, especially algorithmic one, we'll need to innovate around authentication mechanism that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that." For more details about the GROVER model, head over to the research paper. Worried about Deepfakes? Check out the new algorithm that manipulate talking-head videos by altering the transcripts Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling? OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Savia Lobo
05 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, includes Shoshana Zuboff's take on how to tackle the complexities of surveillance capitalism, along with the 21st-century solutions she proposes for doing so. Shoshana Zuboff, Author of 'The Age of Surveillance Capitalism', talks about the economic imperatives within surveillance capitalism. Zuboff says that surveillance capitalism rests on the unilateral claiming of private human experience and its translation into behavioral data, which is then turned into predictions of behavior. These predictions are sold in a new kind of marketplace that trades exclusively in human futures. When we deconstruct the competitive dynamics of these markets, we get to understand what the new imperatives are: scale, as they need a lot of data in order to make good predictions (economies of scale), and scope, as they need a variety of data to make good predictions. She shared a brief quote from a data scientist, which says, "We can engineer the context around a particular behavior and force change. That way we are learning how to write the music, and then we let the music make them dance." This behavioral modification is systemically institutionalized on a global scale and mediated by a now ubiquitous digital infrastructure. She further explains that the kind of law and regulation needed today will be 21st-century solutions aimed at the unique 21st-century complexities of surveillance capitalism. She briefly mentioned three arenas in which legislative and regulatory strategies can effectively align with the structure and consequences of surveillance capitalism:
1. We need lawmakers to devise strategies that interrupt and in many cases outlaw surveillance capitalism's foundational mechanisms. This includes the unilateral taking of private human experience as a free source of raw material and its translation into data. It includes the extreme information asymmetries necessary for predicting human behavior. It includes the manufacture of computational prediction products based on the unilateral and secret capture of human experience. It includes the operation of prediction markets that trade in human futures.
2. From the point of view of supply and demand, surveillance capitalism can be understood as a market failure. Every piece of research over the last decades has shown that when users are informed of the backstage operations of surveillance capitalism, they want no part of it; they want protection, they reject it, they want alternatives. We need laws and regulatory frameworks designed to advantage companies that want to break with the surveillance capitalist paradigm. Forging an alternative trajectory to the digital future will require alliances of new competitors who can summon and institutionalize an alternative ecosystem. True competitors that align themselves with the actual needs of people and the norms of market democracy are likely to attract just about every person on earth as their customers.
3. Lawmakers will need to support new forms of citizen action, collective action, just as nearly a century ago workers won legal protection for their rights to organize, to bargain, and to strike.
New forms of citizen solidarity are already emerging: in municipalities that seek an alternative to the Google-owned Smart City future, in communities that want to resist the social cost of so-called disruption imposed for the sake of others' gain, and among workers who seek fair wages and reasonable security in the precarious conditions of the so-called gig economy. She says, "Citizens need your help, but you need citizens, because ultimately they will be the wind behind your wings; they will be the sea change in public opinion and public awareness that supports your political initiatives." "If together we aim to shift the trajectory of the digital future back toward its emancipatory promise, we resurrect the possibility that the future can be a place that all of us might call home," she concludes. To know more, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU. WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher

Vincy Davis
28 May 2019
8 min read
On the latest Recode Decode episode, Recode co-founder Kara Swisher interviewed DuckDuckGo CEO Gabriel Weinberg on data tracking and why it's time for Congress to act, as federal legislation is necessary in the current climate of constant surveillance. DuckDuckGo is an Internet search engine that emphasizes protecting searchers' privacy. Its market share in the U.S. is about 1%, compared to the more than 88% share owned by Google. Given below are some of the key highlights of the interview. On how DuckDuckGo is different from Google DuckDuckGo, an internet privacy company, helps users "escape the creepiness and tracking on the internet". DuckDuckGo has been an alternative to Google for 11 years. It handles about a billion searches a month and is the fourth-largest search engine in the U.S. Weinberg states that "Google and Facebook are the largest traders of trackers", and claims that his company blocks trackers from hundreds of companies. DuckDuckGo also enables more encryption by forcing users to the encrypted version of a website where one exists, which makes it harder for Internet Service Providers (ISPs) to track the user. When asked why he settled on the search business, Weinberg replied that, coming from a tech background (tech policy at MIT), he has always been interested in search. After developing this business, he got many privacy queries. It was then that he realized that, "One, searches are essentially the most private thing on the internet. You just type in all your deepest, darkest secrets and search, right? The second thing is, you don't need to actually track people to make money on search," so he realized that this would be a "better user experience" and just made the decision not to track people. The switch from contextual advertising to behavioral advertising From the time the internet started until the mid-2000s, the prevailing kind of advertising was contextual advertising. It had a very simple routine: "sites used to sell their own ads, they would put advertising based on the content of the article". Post mid-2000s, the model shifted to behavioral advertising. It includes the "creepy ads, the ones that kind of follow you around the internet." Weinberg added that website publishers in the Google Network of content sites used to sell their biggest inventory themselves, with banner advertising at the top of the page. To earn more money, the bottom of the page was sold to ad networks to target the site content and audience. These advertisements are administered, sorted, and maintained by Google under the name AdSense. This helped Google get all the behavioral data, so if a user searched for something, Google could follow them around with that search. As these advertisements became more lucrative, publishers ceded most of their page over to this behavioral advertising. There has been "no real regulation in tech" to prevent this. Through these trackers, companies like Google, Facebook and many others get user information, including purchase history, location history, browsing history, search history, and even user location.
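As a rough illustration of the two mechanisms just described, blocking requests to known tracking domains and upgrading pages to their encrypted versions, here is a minimal sketch. The blocklist entries and URLs are made up for the example; they are not DuckDuckGo's actual tracker data or logic.

```python
# Sketch: refuse requests to known tracking domains and rewrite http:// URLs
# to https:// before fetching. Purely illustrative blocklist and URLs.
from urllib.parse import urlparse, urlunparse

TRACKER_BLOCKLIST = {"tracker.example", "ads.example-network.com"}  # illustrative

def upgrade_to_https(url: str) -> str:
    parts = urlparse(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunparse(parts)

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_BLOCKLIST)

for url in ["http://example.com/page", "http://tracker.example/pixel.gif"]:
    if is_blocked(url):
        print("blocked:", url)
    else:
        print("fetch:", upgrade_to_https(url))
```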
Read More: Ireland's Data Protection Commission initiates an inquiry into Google's online Ad Exchange services
Read More: Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Weinberg notes, "when you go to, now, a website that has advertising from one of these networks, there's a real-time bidding against you, as a person. There's an auction to sell you an ad based on all this creepy information you didn't even realize people captured."

People do 'care about privacy'

Weinberg says that "before you knew about it, you were okay with it because you didn't realize it was so invasive, but after Cambridge Analytica and all the stories about the tracking, that number just keeps going up and up and up." He also explained the 'do not track' setting, which is available in the privacy settings of most browsers. He says, "People are like, 'No one ever goes into settings and looks at privacy.' That's not true. Literally, tens of millions of Americans have gone into their browser settings and checked this thing. So, people do care!" Weinberg believes 'do not track' is a better mechanism for privacy laws because, once the user makes the setting, sites can no longer track them and no further popups are needed. He also hopes that a 'do not track' mechanism is passed by Congress, as it would allow everyone in the country to opt out of being tracked. (A minimal sketch of how this signal travels with a web request appears after this section.)

On challenging Google

One main issue DuckDuckGo faces is that not many people are aware of it. Weinberg says, "There's 20 percent of people that we think would be interested in switching to DuckDuckGo, but it's hard to convey all these privacy concepts." He also claimed that companies like Google alter people's searches through a 'filter bubble'. As an example, he added, "when you search, you expect to get the results right? But we found that it varies a lot by location". Last year, DuckDuckGo accused Google of search personalization that contributes to "filter bubbles". In 2012, DuckDuckGo ran a study suggesting Google's filter bubble may have significantly influenced the 2012 U.S. Presidential election by inserting tens of millions more links for Obama than for Romney in the run-up to that election.

Read More: DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

How to prevent online tracking

Other than using DuckDuckGo and avoiding, say, Google's internet home devices, Swisher asked Weinberg what other ways there are to protect ourselves from being tracked online. Weinberg says there are plenty of options. He suggested, "For Google, there are actually alternatives in every category." For email, he suggested ProtonMail and FastMail as options. When asked about Facebook, he admitted that "there aren't great alternatives to it" and added cheekily, "Just leave it". He further added that there are a number of privacy settings available on the devices themselves, and mentioned DuckDuckGo's blog, spreadprivacy.com, which provides privacy advice and tips. Users can also take steps such as turning off ad tracking on the device or using end-to-end encryption.

On facial recognition systems

Weinberg says "Facial recognition is hard". A person can wear something minor to avoid being identified by a camera. He admits "you're going to need laws" to regulate its use and thinks San Francisco started a great trend in banning the technology.
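Returning to the 'do not track' setting Weinberg described above, here is a minimal sketch, not from the interview, of how the preference travels with an ordinary web request: a browser with the setting enabled attaches a DNT: 1 header, and it is then up to the receiving site to honor it. The requests library usage is standard; the helper name and the example.com URL are placeholders.

```python
import requests

# A browser with "do not track" enabled attaches this header to every request.
# Honoring it is voluntary today, which is why Weinberg wants it backed by law.
DNT_HEADERS = {"DNT": "1"}

def fetch_with_dnt(url: str) -> requests.Response:
    """Fetch a page while signalling the do-not-track preference."""
    return requests.get(url, headers=DNT_HEADERS, timeout=10)

if __name__ == "__main__":
    response = fetch_with_dnt("https://example.com")  # placeholder URL
    print(response.status_code)
```

The technical cost of sending the signal is trivial; the debate Weinberg raises is entirely about whether sites should be legally required to respect it.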
Many other points were also discussed by Swisher and Weinberg, including the Communications Decency Act's Section 230 and controlling sensitive data on the internet. Weinberg also asserted that the U.S. needs a national bill like GDPR. Questions were also raised about Amazon's growing advertising alongside Google and Facebook. Weinberg dismissed the probability of a DuckDuckGo for YouTube appearing anytime soon.

Many users agree with Gabriel Weinberg that data tracking should be opt-in and that it is time to make 'Do not track' the norm. A user on Hacker News commented, "Discounting Internet by axing privacy is a nasty idea. Privacy should be available by default without any added price tags." Another user added, "In addition to not stalking you across the web, DDG also does not store data on you even when using their products directly. For me that is still cause for my use of DDG."

However, as Weinberg mentioned, there are still people who do not mind being tracked online. It may be because they are not aware of the big trades that take place behind a user's single click. A user on Reddit gave an apt explanation for this: "Privacy matters to people at home, but not online, for some reason. I think because it hasn't been transparent, and isn't as obvious as a person looking in your windows. That slowly seems to be changing as more of these concerns are making the news, more breaches, more scandals. You can argue the internet is "wandering outside", which is true to some degree, but it doesn't feel that way. It feels private, just you and your computer/phone, but it's not. What we experience is not matching up with reality. That is what's dangerous/insidious about the whole thing. People should be able to choose when to make themselves "public", and you largely can't because it's complicated and obfuscated."

For more details about their conversation, check out the full interview.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
GDPR complaint in EU claim billions of personal data leaked via online advertising bids
article-image-deoldify-colorising-and-restoring-bw-images-and-videos-using-a-nogan-approach
Savia Lobo
17 May 2019
5 min read
Save for later

DeOldify: Colorising and restoring B&W images and videos using a NoGAN approach

Savia Lobo
17 May 2019
5 min read
Wouldn't it be magical if we could watch old black and white movie footage and images in color? Deep learning, and more precisely GANs, can help here. A recent project by software researcher Jason Antic, tagged 'DeOldify', is a deep learning based effort to colorize and restore old images and film footage.

https://twitter.com/johnbreslin/status/1127690102560448513
https://twitter.com/johnbreslin/status/1129360541955366913

In one of the sessions at the recent Facebook Developer Conference, held from April 30 to May 1, 2019, Antic, along with Jeremy Howard and Uri Manor, talked about how GANs can reconstruct images and videos, such as increasing their resolution or adding color to black and white film. However, they also pointed out that GANs can be slow, and difficult and expensive to train. They demonstrated how to colorize old black & white movies and drastically increase the resolution of microscopy images using new PyTorch-based tools from fast.ai, the Salk Institute, and DeOldify that can be trained in just a few hours on a single GPU.

https://twitter.com/citnaj/status/1123748626965114880

DeOldify makes use of NoGAN training, which keeps the benefits of GAN training (wonderful colorization) while eliminating the nasty side effects (like flickering objects in the video). NoGAN training is crucial for getting stable and colorful images and videos. An example of DeOldify working toward a stable video is shown on the project page (Source: GitHub). Antic said, "the video is rendered using isolated image generation without any sort of temporal modeling tacked on. The process performs 30-60 minutes of the GAN portion of "NoGAN" training, using 1% to 3% of Imagenet data once. Then, as with still image colorization, we "DeOldify" individual frames before rebuilding the video."

The three models in DeOldify

DeOldify includes three models: video, stable, and artistic. Each model has its own strengths, weaknesses, and use cases. The Video model is for video and the other two are for images.

Stable

https://twitter.com/johnbreslin/status/1126733668347564034

This model achieves the best results with landscapes and portraits and produces fewer zombies (where faces or limbs stay gray rather than being colored in properly). It generally has fewer unusual miscolorations than artistic, but it is also less colorful in general. The Stable model uses a resnet101 backbone on a UNet with an emphasis on the width of layers on the decoder side. It was trained with 3 critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 7% of Imagenet data trained once (3 hours of direct GAN training).

Artistic

https://twitter.com/johnbreslin/status/1129364635730272256

This model achieves the highest quality results in image coloration with respect to interesting details and vibrance. However, to achieve this, one has to adjust the rendering resolution, or render_factor. Additionally, the model does not do as well as 'stable' in a few key common scenarios: nature scenes and portraits. The Artistic model uses a resnet34 backbone on a UNet with an emphasis on the depth of layers on the decoder side. It was trained with 5 critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 32% of Imagenet data trained once (12.5 hours of direct GAN training).
Video

https://twitter.com/citnaj/status/1124719757997907968

The Video model is optimized for smooth, consistent, and flicker-free video. It is the least colorful of the three models, though close to 'stable'. In terms of architecture, this model is the same as 'stable' but differs in training: it is trained on a mere 2.2% of Imagenet data once at 192px, using only the initial generator/critic pretrain/GAN NoGAN training (1 hour of direct GAN training).

DeOldify combines several approaches:

Self-Attention Generative Adversarial Network: Antic has modified the generator, a pre-trained U-Net, to use spectral normalization and self-attention.

Two Time-Scale Update Rule: This is just one-to-one generator/critic iterations with a higher critic learning rate. It is modified to incorporate a "threshold" critic loss that makes sure the critic is "caught up" before moving on to generator training. This is particularly useful for the "NoGAN" method.

NoGAN: This does not have a separate research paper; it is a new type of GAN training developed to solve some key problems in the previous DeOldify model. NoGAN retains the benefits of GAN training while spending minimal time on direct GAN training.

Antic says, "I'm looking to make old photos and film look reeeeaaally good with GANs, and more importantly, make the project useful." "I'll be actively updating and improving the code over the foreseeable future. I'll try to make this as user-friendly as possible, but I'm sure there's going to be hiccups along the way", he further added. To learn more about the hardware components and other details, head over to Jason Antic's GitHub page.

Training Deep Convolutional GANs to generate Anime Characters [Tutorial]
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Using deep learning methods to detect malware in Android Applications
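To make the NoGAN schedule easier to picture, here is a rough, illustrative PyTorch sketch, not Antic's actual training code: the generator and the critic are each pretrained on their own conventional losses first, and only a very short adversarial phase follows at the end. The tiny stand-in networks, the L1 loss standing in for the real perceptual/feature loss, and the random tensors standing in for ImageNet data are all chosen only to keep the example self-contained and runnable.

```python
import torch
import torch.nn as nn

# Stand-in models: in DeOldify the generator is a pretrained U-Net and the
# critic a convolutional discriminator; tiny nets keep this sketch runnable.
generator = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1))       # grayscale -> color
critic = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                       nn.Flatten(), nn.Linear(8, 1))           # real/fake score

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)
pixel_loss = nn.L1Loss()            # stand-in for a perceptual/feature loss
adv_loss = nn.BCEWithLogitsLoss()

def batch():
    gray = torch.rand(4, 1, 32, 32)     # placeholder grayscale inputs
    color = torch.rand(4, 3, 32, 32)    # placeholder color targets
    return gray, color

# Phase 1: pretrain the generator on a non-adversarial loss (most of the time).
for _ in range(100):
    gray, color = batch()
    g_opt.zero_grad()
    pixel_loss(generator(gray), color).backward()
    g_opt.step()

# Phase 2: pretrain the critic to tell real color images from generated ones.
for _ in range(100):
    gray, color = batch()
    c_opt.zero_grad()
    fake = generator(gray).detach()
    loss = adv_loss(critic(color), torch.ones(4, 1)) + \
           adv_loss(critic(fake), torch.zeros(4, 1))
    loss.backward()
    c_opt.step()

# Phase 3: only a brief adversarial phase, which is what keeps NoGAN stable.
for _ in range(10):
    gray, color = batch()
    g_opt.zero_grad()
    fake = generator(gray)
    loss = pixel_loss(fake, color) + 0.1 * adv_loss(critic(fake), torch.ones(4, 1))
    loss.backward()
    g_opt.step()
```

The real pipeline repeats the critic-pretrain/GAN cycle a few times per model (3 for 'stable', 5 for 'artistic') and spends only a small fraction of the overall time in the adversarial phase, which, per the figures quoted above, is where the stability gains come from.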

article-image-what-can-artificial-intelligence-do-for-the-aviation-industry
Guest Contributor
14 May 2019
6 min read
Save for later

What can Artificial Intelligence do for the Aviation industry

Guest Contributor
14 May 2019
6 min read
The use of AI (Artificial Intelligence) technology in commercial aviation has brought significant changes to the way flights are operated today. The world's leading airline service providers now use AI tools and technologies to deliver a more personalized traveling experience to their customers. From building AI-powered airport kiosks to automating airline operations and security checks, AI will play an even more critical role in the aviation industry. Engineers have found that AI can help the aviation industry with machine vision, machine learning, robotics, and natural language processing. Various studies have shown how artificial intelligence can bring significant changes to aviation. A few airlines now use artificial intelligence for predictive analytics, pattern recognition, auto scheduling, targeted advertising, and customer feedback analysis, with promising results for a better flight experience. A recent report shows that aviation professionals are considering using artificial intelligence to monitor pilot voices for a hassle-free flying experience for passengers. This technology could bring huge changes to the world of aviation.

Identification of the Passengers

Modern inventions contribute to the betterment of mankind, and AI can help air transportation in numerous ways. Check-in before boarding is a vital task for an airline, and artificial intelligence can simplify it; the same technology can also be used to identify passengers. American airline company Delta Air Lines took the initiative in 2017: its online check-in via the Delta mobile app and ticketing kiosks has shown promising results, and many airlines are now taking similar features to a whole new level. The Transportation Security Administration of the United States has introduced new AI technology to identify potential threats at John F. Kennedy, Los Angeles International, and Phoenix airports. Likewise, Hartsfield-Jackson Airport is planning to launch America's first biometric terminal. Once installed, the AI technology will make the process of passenger identification fast and easy for officials. Security scanners, biometric identification, and machine learning are some of the AI technologies that will make a number of jobs easier for us. In this way, AI also helps predict disruption in airline services.

Baggage Screening

Baggage screening is another tedious but important task that needs to be done at the airport, and AI has simplified the process. American Airlines once conducted an app development competition on artificial intelligence, and Team Avatar won it for making an app that lets users determine the size of their baggage at the airport. Osaka Airport in Japan is planning to install the Syntech ONE 200, an AI technology developed to screen baggage across multiple passenger lanes. Such tools will not only automate the process of baggage screening but also help authorities detect illegal items effectively. The Syntech ONE 200 is compatible with the X-ray security system and increases the probability of identifying potential threats.

Assisting Customers

AI can be used to assist customers at the airport, and it can help a company reduce its operational and labor costs at the same time.
Airline companies now use AI technologies to help their customers resolve issues quickly by providing accurate information about upcoming flights and trips on their internet-enabled devices. More than 52% of airline companies across the world plan to install AI-based tools to improve their customer service functions in the next five years. Artificial intelligence can answer common customer questions and assist with check-in requests, flight status, and more. Artificial intelligence is also used in air cargo for purposes such as revenue management, safety, and maintenance, and it has shown impressive results to date.

Maintenance Prediction

Airline companies are planning to implement AI technology to predict potential maintenance failures on aircraft. Leading aircraft manufacturer Airbus is taking measures to improve the reliability of aircraft maintenance. It uses Skywise, a cloud-based data platform that helps fleets collect and record huge amounts of real-time data. The use of AI in predictive maintenance analytics will pave the way for a systematic approach to how and when aircraft maintenance should be done. Top-rated airlines already use artificial intelligence to simplify maintenance and improve the user experience at the same time.

Pitfalls of using AI in Aviation

Despite being considered the future of the aviation industry, AI has some pitfalls. For instance, it takes time to implement and is not an ideal tool for every customer service task. The recent Ethiopian Airlines Boeing 737 crash was an eye-opener, and it clearly represents a drawback of automated systems in the aviation sector. The Boeing 737 crashed a few minutes after taking off from the capital of Ethiopia; the failure of the MCAS system was a key reason behind the fatal accident. Also, AI is quite expensive; for example, if an airline company plans to deploy a chatbot, it will have to invest more than $15,000. This would be a hard investment for small companies and could create a barrier between small and big airlines in the future. As the market becomes highly competitive, big airlines may conquer the market, and small airlines might face an existential threat for this reason.

Conclusion

The use of artificial intelligence in aviation has made many tasks easier for airlines and airport authorities across the world, from identifying passengers to screening bags and providing fast, efficient customer care. Unlike the software industry, the risks of real-life harm are exponentially higher in the aviation industry. While other industries started using this technology long ago, the adoption of AI in aviation has been one of caution, and rightly so. As the aviation industry embraces the benefits of artificial intelligence and machine learning, it must also invest in checks and balances to identify, reduce, and eliminate harmful consequences of AI, whether intended or otherwise. As Silicon Valley reels from ethical dilemmas, the aviation industry will do well to learn from Silicon Valley while making the transition to a smart future. The aviation industry, known for its rigorous safety measures and processes, may in fact have a thing or two to teach Silicon Valley when it comes to designing, adopting, and deploying AI systems into live systems with high-risk profiles.
Author Bio

Maria Brown is a content writer and blogger who handles social media optimization for 21Twelve Interactive. She believes in sharing her solid knowledge base with a focus on entrepreneurship and business. You can find her on Twitter.