Tech News - Artificial Intelligence

61 Articles

Convolutional Neural Networks (CNNs) - A Breakthrough In Image Recognition 

Expert Network
15 Mar 2021
9 min read
A CNN is a combination of two components: a feature extractor module followed by a trainable classifier. The first component is a stack of convolution, activation, and pooling layers. A dense neural network (DNN) performs the classification; each neuron in a layer is connected to those in the next layer.

This article is an excerpt from the book Machine Learning Using TensorFlow Cookbook by Alexia Audevart, Konrad Banachewicz, and Luca Massaron, who are Kaggle Masters and Google Developer Experts.

Implementing a simple CNN

In this section, we will develop a CNN based on the LeNet-5 architecture, which was first introduced in 1998 by Yann LeCun et al. for handwritten and machine-printed character recognition.

Figure 1: LeNet-5 architecture – Original image published in [LeCun et al., 1998]

This architecture consists of two sets of convolution-ReLU-max pooling operations used for feature extraction, followed by a flattening layer and two fully connected layers to classify the images. Our goal will be to improve upon our accuracy in predicting MNIST digits.

Getting ready

To access the MNIST data, Keras provides a package (tf.keras.datasets) with excellent dataset-loading functionality. (Note that TensorFlow also provides its own collection of ready-to-use datasets with the TF Datasets API.) After loading the data, we will set up our model variables, create the model, train the model in batches, and then visualize loss, accuracy, and some sample digits.

How to do it...

Perform the following steps:

1. First, we'll load the necessary libraries and start a graph session:

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

2. Next, we will load the data and reshape the images into a four-dimensional matrix:

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Reshape
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)

# Padding the images by 2 pixels
x_train = np.pad(x_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
x_test = np.pad(x_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')

Note that the MNIST dataset downloaded here includes training and test datasets. These datasets are composed of grayscale images (integer arrays with shape (num_samples, 28, 28)) and labels (integers in the range 0-9). We pad the images by 2 pixels since the input images in the LeNet-5 paper were 32x32.

3. Now, we will set the model parameters. Remember that the depth of the image (number of channels) is 1 because these images are grayscale. We'll also set a seed to get reproducible results:

image_width = x_train[0].shape[0]
image_height = x_train[0].shape[1]
num_channels = 1  # grayscale = 1 channel
seed = 98
np.random.seed(seed)
tf.random.set_seed(seed)

4. We'll declare our training data variables and our test data variables. We will have different batch sizes for training and evaluation. You may change these depending on the physical memory available for training and evaluating:

batch_size = 100
evaluation_size = 500
epochs = 300
eval_every = 5

5. We'll normalize our images to change the values of all pixels to a common scale:

x_train = x_train / 255
x_test = x_test / 255
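As a quick sanity check at this point (our addition, not part of the original recipe, and assuming the arrays defined above), you can confirm that padding and normalization produced what we expect: 32x32 single-channel images with pixel values between 0 and 1:

# Optional sanity check (not in the original recipe)
print(x_train.shape)                 # (60000, 32, 32, 1)
print(x_test.shape)                  # (10000, 32, 32, 1)
print(x_train.min(), x_train.max())  # 0.0 1.0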
6. Now we'll declare our model. We will have the feature extractor module composed of two convolutional/ReLU/max pooling layers, followed by the classifier with fully connected layers. To get the classifier to work, we flatten the output of the feature extractor module so we can use it in the classifier. Note that we use a softmax activation function at the last layer of the classifier. Softmax turns numeric output (logits) into probabilities that sum to one.

input_data = tf.keras.Input(dtype=tf.float32, shape=(image_width, image_height, num_channels), name="INPUT")

# First Conv-ReLU-MaxPool Layer
conv1 = tf.keras.layers.Conv2D(filters=6,
                               kernel_size=5,
                               padding='VALID',
                               activation="relu",
                               name="C1")(input_data)
max_pool1 = tf.keras.layers.MaxPool2D(pool_size=2,
                                      strides=2,
                                      padding='SAME',
                                      name="S1")(conv1)

# Second Conv-ReLU-MaxPool Layer
conv2 = tf.keras.layers.Conv2D(filters=16,
                               kernel_size=5,
                               padding='VALID',
                               strides=1,
                               activation="relu",
                               name="C3")(max_pool1)
max_pool2 = tf.keras.layers.MaxPool2D(pool_size=2,
                                      strides=2,
                                      padding='SAME',
                                      name="S4")(conv2)

# Flatten Layer
flatten = tf.keras.layers.Flatten(name="FLATTEN")(max_pool2)

# First Fully Connected Layer
fully_connected1 = tf.keras.layers.Dense(units=120,
                                         activation="relu",
                                         name="F5")(flatten)

# Second Fully Connected Layer
fully_connected2 = tf.keras.layers.Dense(units=84,
                                         activation="relu",
                                         name="F6")(fully_connected1)

# Final Fully Connected Layer
final_model_output = tf.keras.layers.Dense(units=10,
                                           activation="softmax",
                                           name="OUTPUT")(fully_connected2)

model = tf.keras.Model(inputs=input_data, outputs=final_model_output)

7. Next, we will compile the model using the Adam (Adaptive Moment Estimation) optimizer. Adam uses adaptive learning rates and momentum, which allow us to get to local minima faster and so converge faster. As our targets are integers and not in a one-hot encoded format, we will use the sparse categorical cross-entropy loss function. We will also add an accuracy metric to determine how accurate the model is on each batch:

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

8. Next, we print a string summary of our network:

model.summary()

Figure 4: The LeNet-5 architecture

The LeNet-5 model has 7 layers and contains 61,706 trainable parameters. Now, let's train the model.
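Before training, it may help to see what the softmax activation in the OUTPUT layer actually computes. Here is a minimal sketch (our illustration, not from the book):

import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # example raw scores (logits)
probs = np.exp(logits) / np.sum(np.exp(logits))
print(np.round(probs, 3))  # [0.659 0.242 0.099] -- probabilities
print(probs.sum())         # 1.0 (up to floating-point rounding)

Taking np.argmax over these probabilities recovers the predicted class, which is exactly what the prediction step later in this recipe does.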
9. We can now start training our model. We loop through the data in randomly chosen batches. Every so often, we evaluate the model on train and test batches and record the accuracy and loss. We can see that, after 300 epochs, we quickly achieve 96%-97% accuracy on the test data:

train_loss = []
train_acc = []
test_acc = []
for i in range(epochs):
    rand_index = np.random.choice(len(x_train), size=batch_size)
    rand_x = x_train[rand_index]
    rand_y = y_train[rand_index]
    history_train = model.train_on_batch(rand_x, rand_y)
    if (i+1) % eval_every == 0:
        eval_index = np.random.choice(len(x_test), size=evaluation_size)
        eval_x = x_test[eval_index]
        eval_y = y_test[eval_index]
        history_eval = model.evaluate(eval_x, eval_y)
        # Record and print results
        train_loss.append(history_train[0])
        train_acc.append(history_train[1])
        test_acc.append(history_eval[1])
        acc_and_loss = [(i+1), history_train[0], history_train[1], history_eval[1]]
        acc_and_loss = [np.round(x, 2) for x in acc_and_loss]
        print('Epoch # {}. Train Loss: {:.2f}. Train Acc (Test Acc): {:.2f} ({:.2f})'.format(*acc_and_loss))

10. This results in the following output:

Epoch # 5. Train Loss: 2.19. Train Acc (Test Acc): 0.23 (0.34)
Epoch # 10. Train Loss: 2.01. Train Acc (Test Acc): 0.59 (0.58)
Epoch # 15. Train Loss: 1.71. Train Acc (Test Acc): 0.74 (0.73)
Epoch # 20. Train Loss: 1.32. Train Acc (Test Acc): 0.73 (0.77)
...
Epoch # 290. Train Loss: 0.18. Train Acc (Test Acc): 0.95 (0.94)
Epoch # 295. Train Loss: 0.13. Train Acc (Test Acc): 0.96 (0.96)
Epoch # 300. Train Loss: 0.12. Train Acc (Test Acc): 0.95 (0.97)

11. The following is the code to plot the loss and accuracy using Matplotlib:

# Matplotlib code to plot the loss and accuracy
eval_indices = range(0, epochs, eval_every)

# Plot loss over time
plt.plot(eval_indices, train_loss, 'k-')
plt.title('Loss per Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()

# Plot train and test accuracy
plt.plot(eval_indices, train_acc, 'k-', label='Train Set Accuracy')
plt.plot(eval_indices, test_acc, 'r--', label='Test Set Accuracy')
plt.title('Train and Test Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()

We then get the following plots:

Figure 5: The left plot shows the train and test set accuracy across our 300 training epochs. The right plot shows the softmax loss value over 300 epochs.

If we want to plot a sample of the latest batch results, here is the code to plot a sample consisting of six of the latest results:

# Plot some samples and their predictions
actuals = y_test[30:36]
preds = model.predict(x_test[30:36])
predictions = np.argmax(preds, axis=1)
images = np.squeeze(x_test[30:36])
Nrows = 2
Ncols = 3
for i in range(6):
    plt.subplot(Nrows, Ncols, i+1)
    plt.imshow(np.reshape(images[i], [32, 32]), cmap='Greys_r')
    plt.title('Actual: ' + str(actuals[i]) + ' Pred: ' + str(predictions[i]),
              fontsize=10)
    frame = plt.gca()
    frame.axes.get_xaxis().set_visible(False)
    frame.axes.get_yaxis().set_visible(False)
plt.show()

We get the following output for the code above:

Figure 6: A plot of six random images with the actual and predicted values in the title. The lower-left picture was predicted to be a 6, when in fact it is a 4.

Using a simple CNN, we achieved a good result in accuracy and loss for this dataset.
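The evaluations above use random 500-image batches, so the reported test accuracy fluctuates. As an optional extra (ours, not part of the original recipe, assuming the model and arrays defined above), you can score the complete 10,000-image test set once training is done for a more stable figure:

# Optional: evaluate on the complete test set
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
print('Full test set accuracy: {:.4f}'.format(test_accuracy))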
How it works...

We increased our performance on the MNIST dataset and built a model that quickly achieves about 97% accuracy while training from scratch. Our feature extractor module is a combination of convolutions, ReLU activations, and max pooling. Our classifier is a stack of fully connected layers. We trained in batches of size 100 and looked at the accuracy and loss across the epochs. Finally, we plotted six random digits and found that the model fails on one of them, predicting a 6 when in fact the digit is a 4.

CNNs do very well with image recognition. Part of the reason for this is that the convolutional layers create their own low-level features that are activated when they encounter a part of the image that is important. This type of model creates features on its own and uses them for prediction.

Summary

This article highlights how to create a simple CNN based on the LeNet-5 architecture. The recipes in the book Machine Learning Using TensorFlow Cookbook enable you to perform complex data computations and gain valuable insights into your data.

About the Authors

Alexia Audevart is a Google Developer Expert in machine learning and the founder of Datactik. She is a data scientist and helps her clients solve business problems by making their applications smarter.

Konrad Banachewicz holds a PhD in statistics from Vrije Universiteit Amsterdam. He is a lead data scientist at eBay and a Kaggle Grandmaster.

Luca Massaron is a Google Developer Expert in machine learning with more than a decade of experience in data science. He is also a Kaggle Master who reached number 7 for his performance in data science competitions.

Automobile Repair Self-Diagnosis and Traffic Light Management Enabled by AI from AI Trends

Matthew Emerick
15 Oct 2020
5 min read
By AI Trends Staff

Looking inside and outside the vehicle, AI is being applied to the self-diagnosis of automobiles and to the connection of vehicles to traffic infrastructure.

A data scientist at BMW Group in Munich, while working on his PhD, created a system for self-diagnosis called the Automated Damage Assessment Service, according to an account in Mirage. Milan Koch was completing his studies at the Leiden Institute of Advanced Computer Science in the Netherlands when he got the idea. "It should be a nice experience for customers," he stated.

The system gathers data over time from sensors in different parts of the car. "From scratch, we have developed a service idea that is about detecting damaged parts from low speed accidents," Koch stated. "The car itself is able to detect the parts that are broken and can estimate the costs and the time of the repair."

Milan Koch, data scientist, BMW Group, Munich

Koch developed and compared different multivariate time series methods, based on machine learning, deep learning, and state-of-the-art automated machine learning (AutoML) models. He tested different levels of complexity to find the best way to solve the time series problems. Two of the AutoML methods and his hand-crafted machine learning pipeline showed the best results.

The system may have application to other multivariate time series problems, where multiple time-dependent variables must be considered, outside the automotive field. Koch collaborated with researchers from the Leiden University Medical Center (LUMC) to use his hand-crafted pipeline to analyze electroencephalography (EEG) data.

Koch stated, "We predicted the cognition of patients based on EEG data, because an accurate assessment of cognitive function is required during the screening process for Deep Brain Stimulation (DBS) surgery. Patients with advanced cognitive deterioration are considered suboptimal candidates for DBS, as cognitive function may deteriorate after surgery. However, cognitive function is sometimes difficult to assess accurately, and analysis of EEG patterns may provide additional biomarkers. Our machine learning pipeline was well suited to apply to this problem."

He added, "We developed algorithms for the automotive domain, and initially we didn't have the intention to apply it to the medical domain, but it worked out really well." His models are now also applied to electromyography (EMG) data, to distinguish between people with a motor disease and healthy people.

Koch intends to continue his work at BMW Group, where he will focus on customer-oriented services, predictive maintenance applications, and optimization of vehicle diagnostics.

DOE Grant to Research Traffic Management Delays Aims to Reduce Emissions

Getting automobiles to talk to the traffic management infrastructure is the goal of research at the University of Tennessee at Chattanooga, which has been awarded $1.89 million from the US Department of Energy to create a new model for traffic intersections that would reduce energy consumption.

The UTC Center for Urban Informatics and Progress (CUIP) will leverage its existing "smart corridor" to accommodate the new research. The smart corridor is a 1.25-mile span on a main artery in downtown Chattanooga, used as a test bed for research into smart city development and connected vehicles in a real-world environment.

"This project is a huge opportunity for us," stated Dr. Mina Sartipi, CUIP Director and principal investigator, in a press release. "Collaborating on a project that is future-oriented, novel, and full of potential is exciting. This work will contribute to the existing body of literature and lead the way for future research."

UTC is collaborating with the University of Pittsburgh, the Georgia Institute of Technology, the Oak Ridge National Laboratory, and the City of Chattanooga on the project.

Dr. Mina Sartipi, Director, UTC Center for Urban Informatics and Progress

In the grant proposal for the DOE, the research team noted that the US transportation sector accounted for more than 69 percent of petroleum consumption and more than 37 percent of the country's CO2 emissions. An earlier National Traffic Signal Report Card found that inefficient traffic signals contribute to 295 million vehicle-hours of traffic delay, making up to 10 percent of all traffic-related delays.

The project intends to leverage the capabilities of connected vehicles and infrastructure to optimize and manage traffic flow. While adaptive traffic control systems (ATCS) have been in use for a half century to improve mobility and traffic efficiency, they were not designed to address fuel consumption and emissions. Inefficient traffic systems increase idling time and stop-and-go traffic. The National Transportation Operations Coalition has graded the state of the nation's traffic signals as D+.

"The next step in the evolution [of intelligent transportation systems] is the merging of these systems through AI," noted Aleksandar Stevanovic, associate professor of civil and environmental engineering at Pitt's Swanson School of Engineering and director of the Pittsburgh Intelligent Transportation Systems (PITTS) Lab. "Creation of such a system, especially for dense urban corridors and sprawling exurbs, can greatly improve energy and sustainability impacts. This is critical as our transportation portfolio will continue to have a heavy reliance on gasoline-powered vehicles for some time."

The goal of the three-year project is to develop a dynamic feedback Ecological Automotive Traffic Control System (Eco-ATCS), which reduces fuel consumption and greenhouse gases while maintaining a highly operable and safe transportation environment. The integration of AI will allow additional infrastructure enhancements, including emergency vehicle preemption, transit signal priority, and pedestrian safety. The ultimate goal is to reduce corridor-level fuel consumption by 20 percent.

Read the source articles and information in Mirage and in a press release from the UTC Center for Urban Informatics and Progress.

Data Governance in Operations Needed to Ensure Clean Data for AI Projects from AI Trends

Matthew Emerick
15 Oct 2020
5 min read
By AI Trends Staff

Data governance in data-driven organizations is a set of practices and guidelines that define where responsibility for data quality lives. The guidelines support the operation's business model, especially if AI and machine learning applications are at work.

Data governance is an operations issue, existing between strategy and the daily management of operations, suggests a recent account in the MIT Sloan Management Review. "Data governance should be a bridge that translates a strategic vision acknowledging the importance of data for the organization and codifying it into practices and guidelines that support operations, ensuring that products and services are delivered to customers," stated the article's author, Gregory Vial, an assistant professor of IT at HEC Montréal.

To prevent data governance from being limited to a plan that nobody reads, "governing" data needs to be a verb and not a noun phrase as in "data governance." Vial writes, "The difference is subtle but ties back to placing governance between strategy and operations — because these activities bridge and evolve in step with both."

Gregory Vial, assistant professor of IT at HEC Montréal

An overall framework for data governance was proposed by Vijay Khatri and Carol V. Brown in a piece in Communications of the ACM published in 2010. The two suggested a strategy based on five dimensions that represent a combination of structural, operational, and relational mechanisms. The five dimensions are:

Principles, at the foundation of the framework, that relate to the role of data as an asset for the organization;
Quality, to define the requirements for data to be usable and the mechanisms in place to assess that those requirements are met;
Metadata, to define the semantics crucial for interpreting and using data — for example, those found in a data catalog that data scientists use to work with large data sets hosted on a data lake;
Accessibility, to establish the requirements related to gaining access to data, including security requirements and risk mitigation procedures;
Life cycle, to support the production, retention, and disposal of data on the basis of organizational and/or legal requirements.

"Governing data is not easy, but it is well worth the effort," stated Vial. "Not only does it help an organization keep up with the changing legal and ethical landscape of data production and use; it also helps safeguard a precious strategic asset while supporting digital innovation."

Master Data Management Seen as a Path to Clean Data Governance

Once the organization commits to data quality, what's the best way to get there? Naturally, entrepreneurs are in a position to step forward with suggestions. Some of them are built around master data management (MDM), a discipline in which business and IT work together to ensure the accuracy and consistency of the enterprise's master data assets.

Organizations starting down the path with AI and machine learning may be tempted to clean only the data that feeds a specific application project, a costly approach in the long run, suggests one expert. "A better, more sustainable way is to continuously cure the data quality issues by using a capable data management technology. This will result in your training data sets becoming rationalized production data with the same master data foundation," suggests Bill O'Kane, author of a recent account from tdwi.org on master data management. Formerly an analyst with Gartner, O'Kane is now the VP and MDM strategist at Profisee, a firm offering an MDM solution.

If the data feeding into the AI system is not unique, accurate, consistent, and timely, the models will not produce reliable results and are likely to lead to unwanted business outcomes. These could include different decisions being made on two customer records thought to represent different people that in fact describe the same person, or recommending a product to a customer that was previously returned or generated a complaint.

Perceptilabs Tries to Get in the Head of the Machine Learning Scientist

Getting inside the head of a machine learning scientist might be helpful in understanding how a highly trained expert builds and trains complex mathematical models. "This is a complex, time-consuming process, involving thousands of lines of code," writes Martin Isaksson, co-founder and CEO of Perceptilabs, in a recent account in VentureBeat. Perceptilabs offers a product to help automate the building of machine learning models, what it calls a "GUI for TensorFlow."

Martin Isaksson, co-founder and CEO, Perceptilabs

"As AI and ML took hold and the experience levels of AI practitioners diversified, efforts to democratize ML materialized into a rich set of open source frameworks like TensorFlow and datasets. Advanced knowledge is still required for many of these offerings, and experts are still relied upon to code end-to-end ML solutions," Isaksson wrote.

AutoML tools have emerged to help adjust parameters and train machine learning models so that they are deployable. Perceptilabs is adding a visual modeler to the mix. The company designed its tool as a visual API on top of TensorFlow, which it acknowledges as the most popular ML framework. The approach gives developers access to the low-level TensorFlow API and the ability to pull in other Python modules. It also gives users transparency into how the model is architected and a view into how it performs.

Read the source articles in the MIT Sloan Management Review, Communications of the ACM, tdwi.org, and VentureBeat.

Startup Focus: Sea Machines Winning Contracts for Autonomous Marine Systems from AI Trends

Matthew Emerick
15 Oct 2020
8 min read
By AI Trends Staff

The ability to add automation to an existing marine vessel to make it autonomous is here today and is being proven by a Boston company. Sea Machines builds autonomous vessel software and systems for the marine industry. Founded in 2015, the company recently raised $15 million in a Series B round, bringing its total raised to $27.5 million since 2017.

Founder and CEO Michael G. Johnson, a licensed marine engineer, recently took the time to answer via email some questions AI Trends poses to selected startups.

Describe your team, the key people.

Sea Machines is led by a team of mariners, engineers, coders, and autonomy scientists. The company today has a crew of 30 people based in Boston; Hamburg, Germany; and Esbjerg, Denmark. Sea Machines is also hiring for a variety of positions, which can be viewed at sea-machines.com/careers.

Michael Johnson, Founder and CEO, Sea Machines

What business problem are you trying to solve?

The global maritime industry is responsible for billions in economic output and is a major driver of jobs and commerce. Despite the sector's success and endurance, it faces significant challenges that can negatively impact operator safety, performance, and profitability. Sea Machines is solving many of these challenges by developing technologies that are helping the marine industry transition into a new era of task-driven, computer-guided vessel operations.

How does your solution address the problem?

Autonomous systems solve these challenges in several ways:

Autonomous grid and waypoint following capabilities relieve mariners from manually executing planned routes. Today's autonomous systems uniquely execute with human-like behavior, intelligently factoring in environmental and sea conditions (including wave height, pitch, heave, and roll); change speeds between waypoints; and actively detect obstacles for collision avoidance purposes.

Autonomous marine systems also enable optionally manned or autonomous-assist (reduced crew) modes that can reduce mission delays and maximize effort. This is an important feature for anyone performing time-sensitive operations, such as on-water search-and-rescue or other urgent missions.

Autonomous marine systems offer obstacle detection and collision avoidance capabilities that keep people and assets safe and out of harm's way. These advanced technologies are much more reliable and accurate than the human eye, especially in times of low light or in poor sea conditions. Because today's systems enable remote-helm control and remote payload management, there is a reduced need for mariners (such as marine fire or spill response crews) to physically man a vessel in a dangerous environment. A remote-helm control beltpack also improves visibility by enabling mariners to step outside of the wheelhouse to whatever location provides the best vantage point when performing tight maneuvers, dockings, and other precision operations.

Autonomous marine systems enable situational awareness with multiple cameras and sensors streaming live over a 4G connection. This real-time data allows shoreside or at-sea operators a full view of an autonomous vessel's environment, threats, and opportunities.

Minimally manned vessels can autonomously collaborate to cover more ground with fewer resources, creating a force-multiplier effect. A single shoreside operator can command multiple autonomous boats with full situational awareness.

These areas of value overlap for all sectors, but for the government and military sector, new on-water capabilities and unmanned vessels are a leading driver. By contrast, the commercial sector is looking for increased productivity, efficiency, and predictable operations. Our systems meet all of these needs.

Our technology is designed to be installed on new vessels as well as existing vessels. Sea Machines' ability to upgrade existing fleets greatly reduces the time and cost to leverage the value of our autonomous systems.

How are you getting to the market? Is there competition?

Sea Machines has an established dealer program to support the company's global sales across key commercial marine markets. The program includes many strategic partners who are enabled to sell, install, and service the company's line of intelligent command and control systems for workboats. To date, Sea Machines dealers are located across the US and Canada, in Europe, in Singapore, and in the UAE.

We have competition for autonomous marine systems, but our products are the only ones that are retrofit-ready, not requiring new vessels to be built.

Do you have any users or customers?

Yes, we have achieved significant sales traction since launching our SM series of products in 2018. Just since the summer, Sea Machines has been awarded several significant contracts and partnerships:

The first allowed us to begin serving the survey vessel market with the first announced collaboration with DEEP BV in the Netherlands. DEEP's vessel outfitted with the SM300 entered survey service very recently.

Next, we partnered with Castine-based Maine Maritime Academy (MMA) and representatives of the U.S. Maritime Administration (MARAD)'s Maritime Environmental and Technical Assistance (META) Program to bring valuable, hands-on education about autonomous marine systems into the MMA curriculum.

Then we recently announced a partnership with shipbuilder Metal Shark Boats, of Jeanerette, Louisiana, to supply the U.S. Coast Guard (USCG)'s Research and Development Center (RDC) with a new Sharktech 29 Defiant vessel for the purposes of testing and evaluating the capabilities of available autonomous vessel technology. USCG demonstrations are happening now (through November 5) off the coast of Hawaii.

Finally, just this month, we announced that the U.S. Department of Defense (DOD)'s Defense Innovation Unit (DIU) awarded us a multi-year Other Transaction (OT) agreement. The primary purpose of the agreement is to initiate a prototype that will enable commercial ocean-service barges to act as autonomous Forward Arming and Refueling Point (FARP) units for an Amphibious Maritime Projection Platform (AMPP). Specifically, Sea Machines will engineer, build, and demonstrate ready-to-deploy system kits that enable autonomous, self-propelled operation of opportunistically available barges to land and replenish military aircraft.

In the second half of 2020, we are also commencing onboard collaborations with some crew-transfer vessel (CTV) operators serving the wind farm industry.

How is the company funded?

The company recently completed a successful Series B round, which provided $15M in funds, for a total amount raised of $27.5M since 2017. The most recent funds are going to significantly impact Sea Machines, and therefore the maritime and marine industries as a whole. The funds will be put to use to further strengthen our technical development team, build out our next level of systems manufacturing, and scale our operations group to support customer deployments. We will also be investing in some supporting technologies to speed our course to full dock-to-dock, over-the-horizon autonomy.

The purpose of our technology is to optimize vessel operations with increased performance, productivity, predictability, and ultimately safety. In closing, we'd like to add that the marine industries are a critically significant component of the global economy, and it's up to us to keep them strong and relevant. Along with people, processes, and capital, pressing the bounds of technology is a key driver. The world is being revolutionized by intelligent and autonomous self-piloting technology, and today we find ourselves just beyond the starting line of a busy road to broad adoption through all marine sectors. If Sea Machines continues to chart the course with forward-looking pertinence, then you will see us rise up to become one of the most significant companies and brands serving the industry in the 21st century.

Any anecdotes/stories?

This month we released software version 1.7 of our SM300. That's seven significant updates in just over 18 months, each one providing increased technical hardening and new features for specific workboat sectors.

Another interesting story is about our Series B funding, which, due to the pandemic, we raised virtually. Because of where we are as a company, we have been proving our ability to retool the marine industry with our technology, and therefore we are delivering confidence to investors. We were forced to conduct the entire process by video conference, which may have increased the overall efficiency of the raise, as these rounds traditionally require thousands if not tens of thousands of miles of travel for face-to-face meetings, diligence, and handshakes. Remote pitches also proved to be an advantage because they allowed us to showcase our technology in a more direct way. We did online demos where we had our team remotely connected to our vessels off Boston Harbor. We were able to get the investors into the captain's chair, as if they were remotely commanding a vessel in real-world operations.

Finally, in January, we announced the receipt of ABS and USCG approval for our SM200 wireless helm and control systems on a major class of U.S.-flag articulated tug-barges (ATBs). The first unit has been installed and is in operation, and we look forward to announcing details around it. We will be taking the SM200 forward into the type-approval process.

Learn more at Sea Machines.

Web Applications are Focus of Cybercrime Gangs in Data Breaches, Report Finds from AI Trends

Matthew Emerick
15 Oct 2020
7 min read
By John P. Desmond, AI Trends Editor

Web applications are the primary focus of many cybercrime gangs engaged in data breaches, a primary security concern to retailers, according to the 2020 Data Breach Investigations Report (DBIR) recently released by Verizon, the 13th edition of the report. Verizon analyzed a total of 157,525 incidents; 3,950 were confirmed data breaches.

"These data breaches are the most serious type of incident retailers face. Such breaches generally result in the loss of customer data, including, in the worst cases, payment data and log-in and password combinations," stated Ido Safruti, co-founder and chief technology officer of PerimeterX, a provider of security services for websites, in an account in Digital Commerce 360.

Among the report's highlights:

Misconfiguration errors, resulting from failure to implement all security controls, top the list of the fastest-growing risks to web applications. Across all industries, misconfiguration errors increased from below 20 percent in the 2017 survey to over 40 percent in the 2020 survey. "The reason for this is simple," Safruti stated. "Web applications are growing more and more complex. What were formerly websites are now full-blown applications made up of dozens of components and leveraging multiple external services."

Ido Safruti, co-founder and chief technology officer, PerimeterX

External code can typically comprise 70 percent or more of a web application, much of it JavaScript calls to external libraries and services. "A misconfigured service or setting for any piece of a web application offers a path to compromise the application and skim sensitive customer data," Safruti stated.

Cybercriminal gangs work to exploit rapid changes on web applications, as development teams build and ship new code faster and faster, often tapping third-party libraries and services. Weaknesses in version control, and in monitoring web applications for unauthorized introductions of code, are vulnerabilities. Magecart attacks, from a consortium of malicious hacker groups who target online shopping cart systems, especially on large ecommerce sites, insert rogue elements as components of web applications with the goal of stealing the credit card data of shoppers. "Retailers should consider advanced technology using automated and audited processes to manage configuration changes," Safruti advises.

Vulnerabilities are not patched quickly enough, leaving holes for attackers to exploit. Only half of vulnerabilities are patched within three months of discovery, the 2020 DBIR report found. These attacks offer hackers the potential to capture large amounts of valuable customer information with the least amount of effort.

Attacks against web application servers made up nearly 75% of breached assets in 2019, up from roughly 50% in 2017, the DBIR report found. Organized crime groups undertook roughly two-thirds of breaches, and 86% of breaches were financially motivated. The global average cost of a data breach is $3.92 million, with an average of over $8 million in the United States, according to a 2019 study from the Ponemon Institute, a research center focused on privacy, data protection, and information security.

Another analysis of the 2020 DBIR report found that hacking and social attacks have leapfrogged malware as the top attack tactic. "Sophisticated malware is no longer necessary to perform an attack," stated the analysis in SecurityBoulevard. Developers and QA engineers who develop and test web applications would benefit from the use of automated security testing tools and security processes that integrate with their workflow. "We believe developers and DevOps personnel are one of the weakest links in the chain and would benefit the most from remediation techniques," the authors stated.

Credential Stuffing Attacks Exploit Users with the Same Password Across Sites

Credential stuffing is a cyberattack in which lists of stolen usernames and/or email addresses are used to gain unauthorized access to user accounts through large-scale automated login requests directed against a web application. "Threat actors are always conducting credential stuffing attacks," found a "deep dive" analysis of the 2020 DBIR report from SpyCloud, a security firm focused on preventing online fraud.

The SpyCloud researchers advise users never to reuse passwords across online accounts. "Password reuse is a major factor in credential stuffing attacks," the authors state. They advise using a password manager and storing a unique, complex password for each account.

The 2020 DBIR report found this year's top malware variant to be password dumpers, malware that extracts passwords from infected systems. This malware is aimed at acquiring credentials stored on target computers, or involves keyloggers that acquire credentials as users enter them.

Some 22 percent of breaches found were the result of social attacks, which are cyberattacks that involve social engineering and phishing. Phishing – making fake websites, emails, text messages, and social media messages to impersonate trusted entities – is still a major way that sensitive authentication credentials are acquired illicitly, SpyCloud researchers found. Average consumers are each paying more than $290 in out-of-pocket costs and spending 16 hours to resolve the effects of this data loss and the resultant account takeover, SpyCloud found.

Businesses Increasing Investment in AI for Cybersecurity, Capgemini Finds

To defend against the new generation of cyberattacks, businesses are increasing their investment in AI systems to help. Two-thirds of organizations surveyed by Capgemini Research last year said they will not be able to respond to critical threats without AI. Capgemini surveyed 850 senior IT executives from IT information security, cybersecurity, and IT operations across 10 countries and seven business sectors.

Among the highlights was that AI-enabled cybersecurity is now an imperative: over half (56%) of executives say their cybersecurity analysts are overwhelmed by the vast array of data points they need to monitor to detect and prevent intrusion. In addition, the types of cyberattacks that require immediate intervention, or that cannot be remediated quickly enough by cyber analysts, have notably increased, including:

Cyberattacks affecting time-sensitive applications (42% saying they had gone up, by an average of 16%);
Automated, machine-speed attacks that mutate at a pace that cannot be neutralized through traditional response systems (43% reported an increase, by an average of 15%).

Executives interviewed cited benefits of using AI in cybersecurity: 64% said it lowers the cost of detecting breaches and responding to them, by an average of 12%; 74% said it enables a faster response time, reducing the time taken to detect threats, remedy breaches, and implement patches by 12%; 69% said AI improves the accuracy of detecting breaches; and 60% said it increases the efficiency of cybersecurity analysts, reducing the time they spend analyzing false positives and improving productivity.

Budgets for AI in cybersecurity are projected to rise: almost half (48%) of respondents said they are planning 29 percent increases in FY2020; some 73 percent were testing use cases for AI in cybersecurity; and only one in five organizations reported using AI in cybersecurity before 2019.

"AI offers huge opportunities for cybersecurity," stated Oliver Scherer, CISO of Europe's leading consumer electronics retailer, MediaMarktSaturn Retail Group, in the Capgemini report. "This is because you move from detection, manual reaction and remediation towards an automated remediation, which organizations would like to achieve in the next three or five years."

Geert van der Linden, Cybersecurity Business Lead, Capgemini Group

Barriers remain, including a lack of understanding of how to scale use cases from proof of concept to full-scale deployment. "Organizations are facing an unparalleled volume and complexity of cyber threats and have woken up to the importance of AI as the first line of defense," stated Geert van der Linden, Cybersecurity Business Lead at Capgemini Group. "As cybersecurity analysts are overwhelmed, close to a quarter of them declaring they are not able to successfully investigate all identified incidents, it is critical for organizations to increase investment and focus on the business benefits that AI can bring in terms of bolstering their cybersecurity."

Read the source articles in the 2020 Data Breach Investigations Report from Verizon, in Digital Commerce 360, in SecurityBoulevard, from SpyCloud, and from Capgemini Research.

AI Autonomous Cars Might Have Just A Four-Year Endurance Lifecycle from AI Trends

Matthew Emerick
15 Oct 2020
14 min read
By Lance Eliot, the AI Trends Insider

After AI autonomous self-driving cars have been abundantly fielded onto our roadways, one intriguing question that has so far gotten scant attention is how long those self-driving cars will last.

It is easy to simply assume that the endurance of a self-driving car is presumably going to be the same as today's conventional cars, especially since most self-driving cars currently make use of a conventional car rather than a special-purpose-built vehicle. But there is something to keep in mind about self-driving cars that perhaps does not immediately meet the eye, namely, they are likely to get a lot of miles in a short period. Given that the AI is doing the driving, there is no longer a damper on the number of miles that a car might be driven in any noted time period, which usually is based on the availability of a human driver. Instead, the AI is a 24 x 7 driver that can be used non-stop, turning the self-driving car into a continuously moving and available ride-sharing vehicle.

With all that mileage, the number of years of endurance is going to be lessened in comparison to a comparable conventional car that is driven only intermittently. You could say that the car is still the car, while the difference is that the car might get as many miles of use in a much shorter period of time and thus reach its end-of-life sooner (though nonetheless still racking up the same total number of miles).

Some automotive makers have speculated that self-driving cars might only last about four years. This comes as quite a shocking revelation that AI-based autonomous cars might merely be usable for a scant four years at a time and then presumably end up on the scrap heap. Let's unpack the matter and explore the ramifications of a presumed four-year life span for self-driving cars.

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

Life Span Of Cars

According to various stats about today's cars, the average age of a conventional car in the United States is estimated at 11.6 years. Some tend to use the 11.6 years, or a rounded 12 years, as a surrogate for how long a car lasts in the U.S., though this is somewhat problematic since the average age is not the endpoint of a car and encapsulates a range of ages, including a slew of cars that were retired at a much younger age and those that hang on to a much older age. Indeed, one of the fastest-growing segments of car ages is the group that is 16 years or older, amounting to an estimated 81 million such cars by the year 2021. Of those 81 million cars, around one-fourth are going to be more than 25 years old. In short, cars are being kept around longer and longer.

When you buy a new car, the rule of thumb often quoted by automakers is that the car should last about 8 years or 150,000 miles. This is obviously a low-ball kind of posturing, trying to set expectations so that car buyers will be pleased if their cars last longer. One supposes it also perhaps gets buyers into the mental mode of considering buying their next car in about eight years or so.

Continuing the effort to consider various stats about cars, Americans drive their cars for about 11,000 miles per year. If a new car is supposed to last for 150,000 miles, the math then suggests that at 11,000 miles per year you could drive the car for about 14 years (that's 150,000 miles divided by 11,000 miles per year). Of course, the average everyday driver is using their car for easy driving, such as commuting to work and driving to the grocery store. Generally, you wouldn't expect the average driver to be putting many miles onto a car.

What about those that are pushing their cars to the limit and driving them in a much harsher manner? Various published stats about ridesharing drivers such as Uber and Lyft suggest that they are amassing about 1,000 miles per week on their cars. If so, the number of miles per year would be approximately 50,000. At the pace of 50,000 miles per year, presumably, these on-the-go cars would only last about 3 years, based on the math of 150,000 miles divided by 50,000 miles per year. In theory, this implies that a ridesharing car being used today will perhaps last about 3 years.

For self-driving cars, most would agree that a driverless car is going to be used in a similar ridesharing manner and be on the road quite a lot. This seems sensible. To make as much money as possible with a driverless car, you would likely seek to maximize its use. Put it onto a ridesharing network and let it be used as much as people are willing to book it and pay to use it. Without the cost and hassle of having to find and use a human driver, the AI will presumably be willing to drive a car whenever and for however long is needed. As such, a true self-driving car is being touted as likely to be running 24x7.

In reality, you can't have a self-driving car that is always roaming around, since time needs to be set aside for ongoing maintenance of the car, along with repairs, and some amount of time for fueling or recharging. Overall, it would seem logical to postulate that a self-driving car will be used at least as much as today's human-driven ridesharing cars, plus a lot more, since the self-driving car is not limited by human driving constraints.

In short, if it is the case that today's ridesharing cars are hitting their boundaries at perhaps three to five years, you could reasonably extend that same thinking to driverless cars and assume that self-driving cars might only last about four years. The shock that a driverless car might only last four years is not quite as surprising when you consider that a true self-driving car is going to be pushed to its limits in terms of usage and be a ridesharing goldmine (presumably) that will undergo nearly continual driving time.
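The lifespan arithmetic in this section reduces to one division. Here is a small illustrative sketch (ours, using only the figures quoted above):

# Rule-of-thumb lifespan: 150,000 miles, divided by annual mileage
LIFETIME_MILES = 150_000

for profile, miles_per_year in [("average driver", 11_000),
                                ("ridesharing duty", 50_000)]:
    years = LIFETIME_MILES / miles_per_year
    print(f"{profile}: about {years:.1f} years")

# Prints roughly:
#   average driver: about 13.6 years
#   ridesharing duty: about 3.0 years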
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here's my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/

Factors Of Car Aging

Three key factors determine how long a car will last, namely:

How the car was built
How the car is used
How the car is maintained

Let's consider how those key factors apply to self-driving cars.

In the case of today's early versions of what are intended to be driverless cars, by and large most of the automakers are using a conventional car as the basis for their driverless car, rather than building an entirely new kind of car. We will eventually see entirely new kinds of cars made to fully leverage driverless capability, but for right now it is easier and more expedient to use a conventional car as the cornerstone for an autonomous car. Therefore, for the foreseeable future, we can assume that the manner in which a driverless car is built is in keeping with how a conventional car is built, implying that the car itself will last as long as a conventional car might last.

In terms of car usage, as already mentioned, a driverless car is going to get a lot more usage than the amount of driving by an average everyday driver, and be used at least as much as today's ridesharing efforts. The usage is bound to be much higher.

The ongoing maintenance of a self-driving car will become vital to its owner. I say this because any shortcomings in the maintenance would tend to mean that the driverless car will be in the shop and not be as available on the streets. The revenue stream from an always-on self-driving car will be a compelling reason for owners to make sure that their self-driving car is getting the proper amount of maintenance. In that sense, the odds would seem to be that a driverless car will likely be better maintained than either an average everyday car or even today's ridesharing cars.

One additional element to consider for driverless cars consists of the add-ons for the sensory capabilities and the computer processing aspects. Those sensory devices, such as cameras, radar, ultrasonic sensors, LIDAR, and so on, need to be factored into the longevity of the overall car, and the same applies to the on-board computer chips and memory.

Why Retire A Car

The decision to retire a car is based on a trade-off between trying to continue to pour money into a car that is breaking down and excessively costing money to keep afloat, versus ditching the car and opting to get a new or newer car instead. Thus, when you look at how long a car will last, you are also silently considering the cost of a new or newer car.

We don't yet know what the cost of a driverless car is going to be. If the cost to purchase a self-driving car is really high, you would presumably have a greater incentive to try to keep a used self-driving car in sufficient working order.

There is also a safety element that comes into play in deciding whether to retire a self-driving car. Suppose a driverless car that is being routinely maintained is as safe as a new self-driving car, but eventually the maintenance can only achieve so much in terms of ensuring that the driverless car remains as safe on the roadways as a new or newer self-driving car would be. The owner of the used self-driving car would need to ascertain whether the safety degradation means that the used driverless car needs to be retired.

Used Market For Self-Driving Cars

With conventional cars, an owner that first purchased a new car will likely sell the car after a while. We all realize that a conventional car might end up being passed from one buyer to another over its lifespan. Will there be an equivalent market for used self-driving cars?

You might be inclined to immediately suggest that once a self-driving car has reached some point of no longer being safe enough, it needs to be retired. We don't yet know, and no one has established, what that safety juncture or threshold might be. There could be a used self-driving car market that involves selling a used driverless car that is still within some bounds of being safe. Suppose a driverless car owner that had used their self-driving car extensively in a downtown city setting opted to sell the autonomous car to someone living in a suburban community. The logic might be that the self-driving car is no longer sufficient for use in a stop-and-go traffic environment but might be viable in a less stressful suburban locale.

Overall, no one is especially thinking about used self-driving cars, which is admittedly a concern far in the future and therefore not a topic looming over us today.

Retirement Of A Self-Driving Car

Other than becoming a used car, what else might happen to a self-driving car after it's been in use for a while? Some have wondered whether it might be feasible to convert a self-driving car into a human-driven car, doing so to place the car into the used market for human-driven cars. Well, it depends on how the self-driving car was originally made. If the self-driving car has all of the mechanical and electronic guts for human driving controls, you could presumably unplug the autonomy and revert the car to being human-driven. I would assert that this is very unlikely, and you won't see self-driving cars being transitioned into human-driven cars.

All told, it would seem that once a self-driving car has reached its end of life, the vehicle will be scrapped. If self-driving cars are being placed onto the junk heap every four years, this raises the specter that we are going to have a lot of car junk piling up. For environmentalists, this is certainly disconcerting.

Generally, today's cars are relatively highly recyclable and reusable. Estimates suggest that around 80% of a car can be recycled or reused. For driverless cars, assuming they are built like today's conventional cars, you would be able to attain a similar recycled-and-reused percentage. The add-on sensory devices and computer processors might be recyclable and reusable too, though this is not necessarily the case, depending upon how the components were made.

Conclusion

Some critics would be tempted to claim that the automakers would adore having self-driving cars that last only four years. Presumably, it would mean that the automakers will be churning out new cars hand over fist, doing so to try to keep up with the demand for an ongoing supply of new driverless cars. On the other hand, some pundits have predicted that we won't need as many cars as we have today, since a smaller number of ridesharing driverless cars will fulfill our driving needs, abating the need for everyone to have a car. No one knows.

Another facet to consider involves the pace at which high-tech might advance and thus cause a heightened turnover in self-driving cars. Suppose the sensors and computer processors put into a driverless car are eclipsed in just a few years by faster, cheaper, and better sensors and computer processors. If the sensors and processors of a self-driving car are built in, meaning that you can't readily swap them out, another driving force for the quicker life cycle of a driverless car might be the desire to make use of the latest in high-tech.

The idea of retiring a driverless car in four years doesn't seem quite as shocking after analyzing the basis for such a belief. Whether society is better off or not as a result of self-driving cars, and also the matter of those self-driving cars only lasting four years, is a complex question. We'll need to see how this all plays out.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
India Engages in a National Initiative to Support Its AI Industry, from AI Trends

Matthew Emerick
08 Oct 2020
5 min read
By AI Trends Staff

The government of India is engaged in an initiative on AI that aims to promote the industry, which a recent IDC report maintains is growing at over a 30% annual clip. India's artificial intelligence spending will grow from $300.7 million in 2019 to $880.5 million in 2023, a compound annual growth rate (CAGR) of 30.8%, states IDC's Worldwide Artificial Intelligence Spending Guide.

"COVID-19 is pushing the boundaries of organizations' AI lens. Businesses are considering investments in intelligent solutions to tackle issues associated with business continuity, labor shortages, and workspace monitoring. Organizations are now realizing that their business plans must be closely aligned with their AI strategies," stated Rishu Sharma, Principal Analyst, Cloud and AI at IDC in India, in an IDC press release.

Among the other report highlights: enterprises rely on AI to maintain business continuity, transform how businesses operate, and gain competitive advantage; almost 20% of enterprises are still devising AI strategies to explore new businesses and ventures; half of Indian enterprises plan to increase their AI spending in 2020; and data trustworthiness and difficulty in selecting the right algorithm are among the top challenges that hold organizations back from implementing AI technology.

"The variety of industry-specific tech solutions supported by emerging technologies like IoT and Robotics are getting powered by complex AI algorithms," stated Ashutosh Bisht, Senior Research Manager for IDC's Customer Insights and Analysis group. "With the fast adoption of cloud technologies in India, more than 60% of AI applications will be migrated to the cloud by 2024."

As per IDC's 2020 COVID-19 Impact Survey, half of Indian enterprises plan to increase their AI spending this year. However, data trustworthiness and difficulty in selecting the right algorithm are among the top challenges that hold organizations back from implementing AI technology, according to IDC.

Prime Minister Speaking at RAISE 2020 Global Summit

Indian Prime Minister Narendra Modi was to address a virtual summit on AI this week (October 5) in India. Called RAISE 2020, for Responsible AI for Social Empowerment, the summit is planned as a global meeting to exchange ideas and chart a course for using AI for social transformation, inclusion, and empowerment in areas like healthcare, agriculture, education, and smart mobility, according to an account from the South Asian news agency ANI.

Indian AI startups will be showcasing their offerings as part of the AI Solution Challenge, a government effort to support tech entrepreneurs and startups by providing exposure, recognition, and guidance. India's strengths that position it well to become an AI leader include its healthy startup ecosystem, its elite science and technology institutions, a robust digital infrastructure, and millions of STEM graduates each year, the release indicated. Prime Minister Modi was to articulate an "AI for All" strategy, intent on building a model for the world on how to responsibly direct AI for social empowerment, the release stated.
Government Has Launched AI Portal

The Indian government earlier this year launched the National AI Portal, a collaboration of the National Association of Software and Service Companies (Nasscom) and the National e-Governance Division of the Ministry of Electronics and Information Technology (MeitY). The portal's objective is to function as a platform for AI-related developments in India, sharing resources such as articles, investment funding news for AI startups, and AI education resources in India. The portal will also distribute documents, case studies, and research reports, and describe new job roles related to AI.

Named IndiaAI, the site has an education focus that aims to help professionals and students learn about and find work in the field of AI. Free and paid AI courses are available on subjects including machine learning, data visualization, and cybersecurity, provided by educational institutions including IIT Bombay, third-party content providers including SkillUp and edX, and private companies like IBM. The AI education program is open to students in classes 8-12 across thousands of schools in India.

Some Skeptical of India's Ability to Unlock AI's Potential

Skepticism about India's ability to capitalize on its opportunities in AI is being voiced in some quarters. "The country is still miles away from unlocking the true value of AI in both the government and the private sector," stated an account from CXOToday.com. India lags behind the top five geographies for private sector investment in AI, the account stated. The US is far ahead, with investments worth $18 billion, followed by Europe ($2.6 billion) and Israel ($1.8 billion).

Only a few large companies are investing in AI R&D, being "risk averse," and startups are having difficulty finding capital. Most vital is the need for the government and the private sector to work hand-in-hand, particularly on investment in AI R&D. Sanjay Gupta, Country Head & VP, Google India, has stated that close collaboration between the private and public sectors, and a focus of collective expertise and energies on the most pressing problems of today, will go a long way toward achieving the vision of a socially empowered, inclusive, and digitally transformed India, where AI has a big role to play.

Read the source articles in an IDC press release, from the South Asian news agency ANI, and at CXOToday.com.
Update: Pandemic Driving More AI Business; Researchers Fighting Fraud 'Cure' Posts, from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By AI Trends Staff

The impact of the coronavirus pandemic on AI has many shades, from driving higher rates of IT spending on AI, to spurring researchers to fight fraudulent "cure" claims on social media, to hackers seeking to tap the medical data stream.

IT leaders are planning to spend more on AI/ML, and the pandemic is increasing demand for people with related job skills, according to a survey of over 100 IT executives with AI initiatives underway at companies that were spending at least $1 million annually on AI/ML before the pandemic. The survey was conducted in August by Algorithmia, a provider of ML operations and management platforms. Some 50% of respondents reported they are planning to spend more on AI/ML in the coming year, according to an account based on the survey from TechRepublic. A lack of in-house staff with AI/ML skills was the primary challenge for IT leaders before the pandemic, according to 59% of respondents. The most important job skills coming out of the pandemic are going to be security (69%), data management (64%), and systems integration (62%).

"When we come through the pandemic, the companies that will emerge the strongest will be those that invested in tools, people, and processes that enable them to scale delivery of AI and ML-based applications to production," stated Diego Oppenheimer, CEO of Algorithmia, in a press release. "We believe investments in AI/ML operations now will pay off for companies sooner than later. Despite the fact that we're still dealing with the pandemic, CIOs should be encouraged by the results of our survey."

Researchers Tracking Increase in Fraudulent COVID-19 'Cure' Posts

Legitimate businesses are finding opportunities from COVID-19, and so are the scammers. Researchers at UC San Diego are studying the increase of fraudulent posts around COVID-19 "cures" being posted on social media. In a new study published in the Journal of Medical Internet Research Public Health and Surveillance on August 25, 2020, researchers at University of California San Diego School of Medicine found thousands of social media posts on two popular platforms, Twitter and Instagram, tied to financial scams and possible counterfeit goods specific to COVID-19 products and unapproved treatments, according to a release from UC San Diego via EurekAlert.

"We started this work with the opioid crisis and have been performing research like this for many years in order to detect illicit drug dealers," stated Timothy Mackey, PhD, associate adjunct professor at UC San Diego School of Medicine and lead author of the study. "We are now using some of those same techniques in this study to identify fake COVID-19 products for sale. From March to May 2020, we have identified nearly 2,000 fraudulent postings likely tied to fake COVID-19 health products, financial scams, and other consumer risk."

The first two waves of fraudulent posts focused on unproven marketing claims for prevention or cures and fake testing kits. The third wave, of fake pharmaceutical treatments, is now materializing. Prof. Mackey expects it to get worse when public health officials announce development of an effective vaccine or other therapeutic treatments. The research team identified suspect posts through a combination of Natural Language Processing and machine learning: topic model clusters were fed into a deep learning algorithm to detect fraudulent posts.
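To make that two-stage pipeline concrete, here is a minimal, hypothetical sketch; it is not the research team's actual code. The posts and labels are invented, and a logistic regression stands in for the team's deep learning model, but the flow of topic-model features into a downstream classifier mirrors the description above.

# A minimal sketch: derive topic-model features from post text, then train a
# classifier to flag suspect "cure" posts (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples; 1 = suspected fraudulent post, 0 = benign.
posts = [
    "miracle COVID cure ships today, limited stock, wire payment only",
    "new study tracks vaccine trial enrollment at university hospital",
    "FDA-approved at-home test kit, 90% off, DM to order now",
    "county health office posts updated mask guidance for schools",
]
labels = [1, 0, 1, 0]

# Step 1: bag-of-words counts feed an LDA topic model.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)  # per-post topic distributions

# Step 2: topic distributions become inputs to a downstream classifier,
# standing in for the deep learning algorithm mentioned in the article.
clf = LogisticRegression().fit(topic_features, labels)
new_post = vectorizer.transform(["guaranteed coronavirus cure, act fast"])
print(clf.predict(lda.transform(new_post)))  # 1 flags a suspect post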
The findings were tailored to a data dashboard in order to enable public health intelligence and provide reports to authorities, including the World Health Organization and the U.S. Food & Drug Administration (FDA). "Criminals seek to take advantage of those in need during times of a crisis," Mackey stated.

Sandia Labs, BioBright Working on a Better Way to Secure Critical Health Data

Alongside the scammers, hackers are also seeing opportunity in these pandemic times. Hackers that threaten medical data are of particular concern. One effort to address this is a partnership between Sandia National Laboratories and the Boston firm BioBright to improve the security of synthetic biology data, a new commercial field.

"In the past decade, genomics and synthetic biology have grown from principally academic pursuits to a major industry," stated computational biology manager Corey Hudson, senior member of the technical staff at Sandia Labs, in a press release. "This shift paves the way toward rapid production of small molecules on demand, precision healthcare, and advanced materials."

BioBright is a scientific lab data automation company, recently acquired by Dotmatics, a UK company working on the Lab of the Future. The two companies are working to develop a better security model, since large volumes of data about the health and pharmaceutical information of patients are currently being handled with security models developed two decades ago, Hudson suggested. The situation potentially leaves open the risk of data theft or a targeted attack by hackers to interrupt production of vaccines and therapeutics or the manufacture of controlled, pathogenic, or toxic materials, he suggested.

"Modern synthetic biology and pharmaceutical workflows rely on digital tools, instruments, and software that were designed before security was such an important consideration," stated Charles Fracchia, CEO of BioBright. The new effort seeks to better secure synthetic biology operations and genomic data across industry, government, and academia. The team is using Emulytics, a research initiative developed at Sandia for evaluating realistic threats against critical systems, to help develop countermeasures to the risks.

C3.ai Sponsors COVID-19 Grand Challenge Competition with $200,000 in Awards

If all else fails, participate in a programming challenge and try to win some money. Enterprise AI software provider C3.ai is inviting data scientists, developers, researchers, and creative thinkers to participate in the C3.ai COVID-19 Grand Challenge and win prizes totaling $200,000. The judging panel will prioritize data science projects that help to understand and mitigate the spread of the virus, improve the response capabilities of the medical community, minimize the impact of this disease on society, and help policymakers navigate responses to COVID-19. C3.ai will award one grand prize of $100,000, two second-place awards of $25,000 each, and four third-place awards of $12,500 each.

"The C3.ai COVID-19 Grand Challenge represents an opportunity to inform decision makers at the local, state, and federal levels and transform the way the world confronts this pandemic," stated Thomas M. Siebel, CEO of C3.ai, in a press release.
"As with the C3.ai COVID-19 Data Lake and the C3.ai Digital Transformation Institute, this initiative will tap our community's collective IQ to make important strides toward necessary, innovative solutions that will help solve a global crisis."

The competition is now open. Registration ends Oct. 25 and final submissions are due Nov. 18, 2020. By Dec. 9, C3.ai will announce seven competition winners and award $200,000 in cash prizes to honorees. Judges include Michael Callagy, County Manager, County of San Mateo; S. Shankar Sastry, Professor of Electrical Engineering & Computer Science, UC Berkeley; and Zico Kolter, Associate Professor of Computer Science, Carnegie Mellon University. Launched in April 2020, the C3.ai COVID-19 Data Lake now consists of 40 unique datasets, said to be among the largest unified, federated images of COVID-19 data in the world.

Read the source articles and information at TechRepublic, from UC San Diego via EurekAlert, in a press release from Sandia Labs, and in a press release from C3.ai about the COVID-19 Grand Challenge.
Breaking AI Workflow Into Stages Reveals Investment Opportunities, from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By John P. Desmond, AI Trends Editor

An infrastructure-first approach to AI investing has the potential to yield greater returns with a lower risk profile, suggests a recent account in Forbes. To identify the technologies supporting an AI system, deconstruct the workflow into two steps as a starting point: training and inference.

"Training is the process by which a framework for deep learning is applied to a dataset," states Basil Alomary, author of the Forbes account. An MBA candidate at Columbia Business School and MBA Associate at Primary Venture Partners, his background and experience are in early-stage SaaS ventures, as an operator and an investor. "That data needs to be relevant, large enough, and well-labeled to ensure that the system is being trained appropriately. Also, the machine learning models being created need to be validated, to avoid overfitting to the training data and to maintain a level of generalizability. The inference portion is the application of this model and the ongoing monitoring to identify its efficacy."

He identifies these stages in the AI/ML development lifecycle: data acquisition, data preparation, training, inference, and implementation. The stages of acquisition, preparation, and implementation have arguably attracted the least amount of attention from investors. (A minimal code sketch of the training and inference stages appears below, after the discussion of data acquisition.)

Where to get the data for training the models is a chief concern. If a company is old enough to have historical customer data, that can be helpful. This approach should be inexpensive, but the data needs to be clean and complete enough to help in whatever decisions it supports. Companies without the option of historical data can try publicly available datasets, or they can buy the data directly. A new class of suppliers is emerging that focuses primarily on selling clean, well-labeled datasets specifically for machine learning applications.

One such startup is Narrative, based in New York City. The company sells data tailored to the client's use case. The OpenML and Amazon Datasets have marketplace characteristics but are entirely open source, which is limiting for those who seek to monetize their own assets.

"Essentially, the idea was to take the best parts of the e-commerce and search models and apply that to a non-consumer offering to find, discover and ultimately buy data," stated Narrative founder and CEO Nick Jordan in an account in TechCrunch. "The premise is to make it as easy to buy data as it is to buy stuff online."

In a demonstration, Jordan showed how a marketer could browse and search for data using the Narrative tools. The marketer could select the mobile IDs of people who have the Uber Driver app installed on their phone, or the Zoom app, at a price that is often subscription-based. The data selection is added to the shopping cart and checked out, like any online transaction. Founded in 2016, Narrative collects data sellers into its market, vetting each one, working to understand how the data is collected, its quality, and whether it could be useful in a regulated environment. Narrative does not attempt to grade the quality of the data. "Data quality is in the eye of the beholder," Jordan stated. Buyers are able to conduct their own research into the data quality if so desired. Narrative is working on building a marketplace of third-party applications, which could include scoring of data sets.
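As a rough illustration of the training and inference stages Alomary describes, here is a minimal sketch, assuming scikit-learn and purely synthetic stand-in data; it is not drawn from the article, and a real pipeline would wrap the acquisition, preparation, and implementation stages around it.

# A minimal sketch of the two stages, with invented data standing in for a
# real, well-labeled dataset.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Training: fit a model on labeled data, holding out a validation set to
# check for overfitting and preserve generalizability, as the article notes.
X = np.random.rand(500, 4)                 # stand-in for acquired, prepared data
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in labels
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
joblib.dump(model, "model.joblib")         # hand off the trained artifact

# Inference: load the artifact elsewhere and apply it to fresh data; the
# ongoing monitoring of these predictions is the efficacy check Alomary mentions.
served = joblib.load("model.joblib")
print(served.predict(np.random.rand(3, 4)))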
Data preparation is critical to making the machine learning model effective. Raw data needs to be preprocessed so that machine learning algorithms can produce a model, a structural description of the data. In an image database, for example, the images may have to be labeled, which can be labor-intensive.

Automating Data Preparation is an Opportunity Area

Platforms are emerging to support the process of data preparation with a layer of automation that seeks to accelerate the process. Startup Labelbox recently raised a $25 million Series B financing round to help grow its data labeling platform for AI model training, according to a recent account in VentureBeat. Founded in 2018 in San Francisco, Labelbox aims to be the data platform that acts as a central hub for data science teams to coordinate with dispersed labeling teams. In April, the company won a contract with the Department of Defense for the US Air Force AFWERX program, which is building out technology partnerships.

A press release issued by Labelbox on the contract award contained some history of the company. "I grew up in a poor family, with limited opportunities and little infrastructure," stated Manu Sharma, CEO and one of Labelbox's co-founders, who was raised in a village in India near the Himalayas. He said that opportunities afforded by the U.S. have helped him achieve more success in ten years than multiple generations of his family back home. "We've made a principled decision to work with the government and support the American system," he stated.

The Labelbox platform supports supervised learning, a branch of AI that uses labeled data to train algorithms to recognize patterns in images, audio, video, or text. The platform enables collaboration among team members as well as functions including rework, quality assurance, model evaluation, audit trails, and model-assisted labeling. "Labelbox is an integrated solution for data science teams to not only create the training data but also to manage it in one place," stated Sharma. "It's the foundational infrastructure for customers to build their machine learning pipeline."

Deploying the AI model into the real world requires ongoing evaluation and a data pipeline that can handle continued training, scaling, and managing computing resources, suggests Alomary in Forbes. An example product supporting deployment is Amazon's SageMaker; Amazon offers a managed service that includes human interventions to monitor deployed models.

DataRobot, of Boston, in 2012 saw the opportunity to develop a platform for building, deploying, and managing machine learning models. The company raised a Series E round of $206 million in September and now has $431 million in venture-backed funding to date, according to Crunchbase. Unfortunately, DataRobot in March had to shrink its workforce by an undisclosed number of people, according to an account in BOSTINNO. The company employed 250 full-time employees as of October 2019. DataRobot announced recently that it was partnering with Amazon Web Services to provide its enterprise AI platform free of charge to anyone using it to help with the coronavirus response effort.

Read the source articles and releases in Forbes, TechCrunch, VentureBeat, and BOSTINNO.
AI Tools Assisting with Mental Health Issues Brought on by Pandemic, from AI Trends

Matthew Emerick
08 Oct 2020
5 min read
By Shannon Flynn, AI Trends Contributor

The pandemic is a perfect storm for mental health issues. Isolation from others, economic uncertainty, and fear of illness can all contribute to poor mental health, and right now, most people around the world face all three. New research suggests that the virus is tangibly affecting mental health. Rates of depression and anxiety symptoms are much higher than normal; in some population groups, like students and young people, these numbers are almost double what they've been in the past. Some researchers are even concerned that the prolonged, unavoidable stress of the virus may result in people developing long-term mental health conditions, including depression, anxiety disorders, and even PTSD, according to an account in Business Insider. Those on the front lines, like medical professionals, grocery store clerks, and sanitation workers, may be at an especially high risk.

Use of Digital Mental Health Tools with AI on the Rise

Automation is already widely used in health care, primarily in the form of technology like AI-based electronic health records and automated billing tools, according to a blog post from ZyDoc, a supplier of medical transcription applications. It's likely that COVID-19 will only increase the use of automation in the industry. Around the world, medical providers are adopting new tech, like self-piloting robots that act as hospital nurses, and UV light-based cleaners that sanitize entire rooms more quickly. Digital mental health tools are also on the rise, along with fully automated AI tools that help patients get the care they need.

The AI-powered behavioral health platform Quartet, for example, is one of several automated tools that aim to help diagnose patients, screening them for common conditions like depression, anxiety, and bipolar spectrum disorders, according to a recent account in AI Trends. Other software, like a new app developed by engineers at the University of New South Wales in Sydney, Australia, can screen patients for different mental health conditions, including dementia. With a diagnosis, patients are better equipped to find the care they need, such as from mental health professionals with in-depth knowledge of a particular condition.

Another tool, an AI-based chatbot called Woebot, developed by Woebot Labs, Inc., uses brief daily chats to help people maintain their mental health. The bot is designed to teach skills related to cognitive behavioral therapy (CBT), a form of talk therapy that assists patients with identifying and managing maladaptive thought patterns. In April, Woebot Labs updated the bot to provide specialized COVID-19-related support in the form of a new therapeutic modality, called Interpersonal Psychotherapy (IPT), which helps users "process loss and role transition," according to a press release from the company. Both Woebot and Quartet provide 24/7 access to mental health resources via the internet. This means that, so long as a person has an internet connection, they can't be deterred by an inaccessible building or a lengthy waitlist.

New AI Tools Supporting Clinicians

Some groups need more support than others. Clinicians working in hospitals are some of the most vulnerable to stress and anxiety. Right now, they're facing long hours, high workloads, and frequent potential exposure to COVID.
Developers and health care professionals are also working together to create new AI tools that will support clinicians as they tackle the challenges of providing care during the pandemic.

One new AI-powered mental health platform, developed by the mobile mental health startup Rose, will gather real-time data on how clinicians are feeling via "questionnaires and free-response journal entries, which can be completed in as few as 30 seconds," according to an account in Fierce Healthcare. The tool will scan through these responses, tracking the clinician's mental health and stress levels (a toy illustration of this kind of text scanning appears at the end of this article). Over time, it should be able to identify situations and events likely to trigger dips in mental health or increased anxiety, and tentatively diagnose conditions like depression, anxiety, and trauma.

Front-line health care workers are up against an unprecedented challenge, facing a wave of new patients and potential exposure to COVID, according to Kavi Misri, founder and CEO of Rose. As a result, many of these workers may be more vulnerable to stress, anxiety, and other mental health issues. "We simply can't ignore this emerging crisis that threatens the mental health and stability of our essential workers – they need support," stated Misri.

Rose is also providing clinicians access to more than 1,000 articles and videos on mental health topics. Each user's feed of content is curated based on the data gathered by the platform. Right now, Brigham and Women's Hospital, the second-largest teaching hospital at Harvard, is experimenting with the technology in a pilot program. If effective, the tech could soon be used around the country to support clinicians on the front lines of the crisis.

Mental health will likely stay a major challenge for as long as the pandemic persists. Fortunately, AI-powered experimental tools for mental health should help to manage the stress, depression, and trauma that have developed from dealing with COVID-19.

Read the source articles and information in Business Insider, a blog post from ZyDoc, in AI Trends, a press release from Woebot Labs, and in Fierce Healthcare.

Shannon Flynn is a managing editor at Rehack, a website featuring coverage of a range of technology niches.
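As promised above, here is a deliberately naive, hypothetical sketch of scanning free-response entries for stress signals; the lexicon, threshold, and example entries are all invented, and a platform like Rose would rely on trained models rather than keyword counting.

# A toy stress screen over journal entries (illustrative only).
STRESS_TERMS = {"exhausted", "overwhelmed", "anxious", "burnout", "hopeless"}

def stress_score(entry: str) -> float:
    """Return the fraction of words in an entry matching a stress lexicon."""
    words = entry.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in STRESS_TERMS)
    return hits / len(words)

entries = [
    "Long shift again, feeling exhausted and overwhelmed.",
    "Quiet day, caught up on charting and slept well.",
]
for e in entries:
    # A made-up cutoff; a real system would calibrate this per user over time.
    flag = "follow up" if stress_score(e) > 0.1 else "ok"
    print(f"{flag}: {e}")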
Gender Bias In the Driving Systems of AI Autonomous Cars, from AI Trends

Matthew Emerick
08 Oct 2020
17 min read
By Lance Eliot, the AI Trends Insider

Here's a topic that entails intense controversy, oftentimes sparking loud arguments and heated responses. Prepare yourself accordingly. Do you think that men are better drivers than women, or do you believe that women are better drivers than men? Seems like most of us have an opinion on the matter, one way or another.

Stereotypically, men are often characterized as fierce drivers with a take-no-prisoners attitude, while women supposedly are more forgiving and civil in their driving actions. Depending on how extreme you want to take these tropes, some would say that women shouldn't be allowed on our roadways due to their timidity, while the same could be said of men due to their crazed pedal-to-the-metal predilection.

What do the stats say? According to the latest U.S. Department of Transportation data, based on its FARS or Fatality Analysis Reporting System, the number of males killed annually in car crashes is nearly twice the number of females killed in car crashes. Ponder that statistic for a moment. Some would argue that it is definitely evidence that male drivers are worse drivers than female drivers, which seems logically sensible under the assumption that since more males are being killed in car crashes than women, men must be getting into a lot more car crashes, ergo they must be worse drivers. Presumably, it would seem that women are better able to avoid getting into death-producing car crashes; thus, they are more adept at driving and are altogether safer drivers.

Whoa, exclaim some that don't interpret the data in that way. Maybe women are somehow able to survive deadly car crashes better than men, and therefore it isn't fair to compare the counts of how many perished. Or, here's one to get your blood boiling: perhaps women trigger car crashes by disrupting traffic flow and not being agile enough at the driving controls, and somehow men pay a dear price by getting into deadly accidents while contending with that kind of driving obfuscation. There seems to be little evidentiary support for those contentions.

A more straightforward counterargument is that men tend to drive more miles than women. By the very fact that men are on the roadways more than women, they are obviously exposed to a heightened risk of getting into bad car crashes. In a sense, it's a situation of rolling the dice more times than women do. Insurance companies opt for that interpretation, noting too that the stats show men are more likely to drive while intoxicated, more likely to be speeding, and more likely to not use seatbelts.

There could be additional hidden factors involved in these outcomes. For example, some studies suggest that the gender differences begin to dissipate with aging, namely that at older ages, the chances of getting killed in a car crash become about equal for male and female drivers. Of course, even that measure has controversy; for some, it is a sign that men lose their driving edge and spirit as they get older, becoming more akin to the supposed skittishness of women. Yikes, it's all a can of worms and a topic that can readily lend itself to fisticuffs.

Suppose there were some means to do away with all human driving, and we had only AI-based driving that took place. One would assume that the AI would not fall into any gender-based camp.
In other words, since we all think of AI as a kind of machine, it wouldn't seem to make much sense to say that an AI system is male or that an AI system is female.

As an aside, concerns have been expressed that the AI-fostered Natural Language Processing (NLP) systems increasingly permeating our lives are perhaps falling into a gender trap, as it were. When you hear an Alexa or Siri voice that speaks to you, do you perceive the system differently if it has a male intonation than if it has a female intonation? Some believe that if every time you want to learn something new you invoke an NLP system that happens to have a female-sounding voice, it will tend to cause children especially to start to believe that women are the sole arbiters of the world's facts. This could also work in other ways: if the female-sounding NLP system was telling you to do your homework, would that cause kids to be leery of women as though they are always being bossy? The same can be said about using a male voice for today's NLP systems. If a male-sounding voice is always used, perhaps the context of what the NLP system is telling you might be twisted into being associated with males versus females.

As a result, some argue that NLP systems ought to have gender-neutral sounding voices. The aim is to get away from the potential for stereotyping human males and human females by stripping the gender element out of our verbally interactive AI systems.

There's another, perhaps equally compelling, reason for wanting to excise any male or female intonation from an NLP system, namely that we might tend to anthropomorphize the AI system, unduly so. Here's what that means. AI systems are not yet even close to being intelligent, and yet the more that AI systems have the appearance of human-like qualities, the more we are bound to assume that the AI is as intelligent as humans. Thus, when you interact with Alexa or Siri, and it uses either a male or female intonation, the argument is that the male or female verbalization acts as a subtle and misleading signal that the underlying system is human-like and ergo intelligent. You fall readily for the notion that Alexa or Siri must be smart, simply by extension of the aspect that it has a male- or female-sounding embodiment.

In short, there is ongoing controversy about whether the expanding use of NLP systems in our society ought to not "cheat" by using a male- or female-sounding basis, and instead should be completely neutralized in terms of the spoken word, not leaning toward either gender.

Getting back to the topic of AI driving systems, there's a chance that the advent of true self-driving cars might encompass gender traits, akin to the concern about Alexa and Siri doing so. Say what? You might naturally be puzzled as to why AI driving systems would include any kind of gender specificity.

Here's the question for today's analysis: Will AI-based true self-driving cars be male, female, gender fluid, or gender-neutral when it comes to the act of driving? Let's unpack the matter and see.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars. True self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable). For semi-autonomous cars, the public must be forewarned about a disturbing aspect that's been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here's my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/

Self-Driving Cars And Gender Biases

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.
At first glance, it seems that the AI is going to drive like a machine does, without any type of gender influence or bias. How could gender get shoehorned into the topic of AI driving systems? There are several ways that the nuances of gender could seep into the matter. We'll start with the acclaimed use of Machine Learning (ML) or Deep Learning (DL).

As you've likely heard or read, today's rapidly expanding use of AI is partly due to the advances made in ML/DL. You might have also heard or read that one of the key underpinnings of ML/DL is the need for data, lots and lots of data. In essence, ML/DL is a computational pattern-matching approach. You feed lots of data into the algorithms being used, and patterns are sought to be discovered. Based on those patterns, the ML/DL can then henceforth potentially detect those same patterns in new data and report that those patterns were found.

If I feed tons and tons of pictures that have a rabbit somewhere in each photo into an ML/DL system, the ML/DL can potentially statistically ascertain that a certain shape and color and size of a blob in those photos is a thing that we would refer to as a rabbit. Please note that the ML/DL is not likely to use any human-like common-sense reasoning, which is something not often pointed out about these AI-based systems. For example, the ML/DL won't "know" that a rabbit is a cute furry animal, that we like to play with them, and that around Easter they are especially revered. Instead, the ML/DL, based simply on mathematical computations, has calculated that a blob in a picture can be delineated, and possibly readily detected whenever you feed a new picture into the system, attempting to probabilistically state whether such a blob is present or not. There's no higher-level reasoning per se, and we are a long way away from the day when human-like reasoning of that nature is going to be embodied into AI systems (which, some argue, maybe we won't ever achieve, while others keep saying that the day of the grand singularity is nearly upon us).

In any case, suppose that we fed pictures of only white-furred rabbits into the ML/DL when we were training it to find the rabbit blobs in the images. One aspect that might arise would be that the ML/DL would associate the rabbit blob as always and only being white in color. When we later fed in new pictures, the ML/DL might fail to detect a rabbit if it was one that had black fur, because the lack of white fur diminished the calculated chances that the blob was a rabbit (as based on the training set that was used).

In a prior piece, I emphasized that one of the dangers of using ML/DL is the possibility of getting stuck on various biases, such as the aspect that true self-driving cars could end up with a form of racial bias, due to the data that the AI driving system was trained on. Lo and behold, it is also possible that an AI driving system could incur a gender-related bias. Here's how.

If you believe that men drive differently than women, and likewise that women drive differently than men, suppose that we collected a bunch of driving-related data based on human driving, and thus within the data there was a hidden element, specifically that some of the driving was done by men and some of the driving was done by women. Letting loose an ML/DL system on this dataset, the ML/DL is aiming to try and find driving tactics and strategies as embodied in the data.
Excuse me for a moment as I leverage the stereotypical gender differences to make my point. It could be that the ML/DL discovers "aggressive" driving tactics within the male-oriented driving data and incorporates such a driving approach into what the true self-driving car will do while on the roadways. This could mean that when the driverless car roams our streets, it is going to employ a male-focused driving style and presumably try to cut off other drivers in traffic, and otherwise be quite pushy. Or, it could be that the ML/DL discovers the "timid" driving tactics within the female-oriented driving data and incorporates a driving approach accordingly, such that when a self-driving car gets in traffic, the AI is going to act in a more docile manner.

I realize that the aforementioned seems objectionable due to the stereotypical characterizations, but the overall point is that if there is a difference between how males tend to drive and how females tend to drive, it could potentially be reflected in the data. And, if the data has such differences within it, there's a chance that the ML/DL might either explicitly or implicitly pick up on those differences. Imagine too that if we had a dataset that perchance was based only on male drivers, this landing on a male-oriented driving approach would seem even more heightened (similarly, if the dataset was based only on female drivers, a female-oriented bias would presumably be heightened).

Here's the rub. Since male drivers today have twice the number of deadly car crashes as women, if an AI true self-driving car was perchance trained to drive via predominantly male-oriented driving tactics, would the resulting driverless car be more prone to car accidents than otherwise? That's an intriguing point and worth pondering. Assuming that no other factors come into play in the nature of the AI driving system, we might reasonably assume that a driverless car so trained might indeed falter in a way similar to the underlying "learned" driving behaviors. Admittedly, there are a lot of other factors involved in the crafting of an AI driving system, and thus it is hard to say that training datasets alone could lead to such a consequence.

That being said, it is also instructive to realize that there are other ways that gender-based elements could get infused into the AI driving system. For example, suppose that rather than only using ML/DL, there was also programming or coding involved in the AI driving system, which indeed is most often the case. It could be that the AI developers themselves would allow their own biases to be encompassed in the coding, and since by and large the stats indicate that AI software developers tend to be male rather than female (though, thankfully, lots of STEM efforts are helping to change this dynamic), perhaps their male-oriented perspective would get included in the AI system coding.
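As a toy illustration of how a skewed training set can bake in such a bias, consider the following minimal sketch; every number here is invented, and real driving models are vastly more complex, but the mechanism, in which "normal" is defined by whatever driving style dominates the data, is the same.

# A toy sketch of dataset bias: a model of "normal" following distance fit
# only on one (hypothetical) aggressive driving style.
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: following gaps (in seconds) drawn only from a
# short-gap, aggressive style centered near 1.0 seconds.
train_gaps = rng.normal(loc=1.0, scale=0.2, size=1000)
mean, std = train_gaps.mean(), train_gaps.std()

def looks_abnormal(gap_seconds: float, z_cutoff: float = 3.0) -> bool:
    """Flag behavior far from what the training data called 'normal'."""
    return abs(gap_seconds - mean) / std > z_cutoff

# A perfectly reasonable 2.5-second cautious gap gets flagged as abnormal,
# because the model's notion of normal driving inherited the dataset's skew.
print(looks_abnormal(1.1))  # False: matches the training distribution
print(looks_abnormal(2.5))  # True: penalized merely for being unlike the data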
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here's my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/

In The Field Biases Too

Yet another example involves the AI dealing with other drivers on the roadways. For many years to come, we will have both self-driving cars and human-driven cars on our highways and byways. There won't be a magical overnight switch to suddenly having no human-driven cars and only AI driverless cars.

Presumably, self-driving cars are supposed to be crafted to learn from the driving experiences encountered while on the roadways. Generally, this involves the self-driving car collecting its sensory data during driving journeys and then uploading the data via OTA (Over-The-Air) electronic communications into the cloud of the automaker or self-driving tech firm. Then, the automaker or self-driving tech firm uses various tools to analyze the voluminous data, likely including ML/DL, and pushes out to the fleet of driverless cars some updates based on what was gleaned from the roadway data collected.

How does this pertain to gender? Assuming again that male drivers and female drivers do drive differently, the roadway experiences of the driverless cars will involve the driving behaviors of the human-driven cars around them. It is quite possible that the ML/DL doing analysis of the fleet-collected data would discover the male-oriented or female-oriented driving tactics, though it and the AI developers might not realize that the deeply buried patterns were somehow tied to gender. Indeed, one of the qualms about today's ML/DL is that it oftentimes is not amenable to explanation. The complexity of the underlying computations does not necessarily lend itself to being readily interpreted or explained in everyday ways (which is why the need for XAI, or Explainable AI, is becoming increasingly important).

Conclusion

Some people affectionately refer to their car as a "he" or a "she," as though the car itself was of a particular gender. When an AI system is at the wheel of a self-driving car, it could be that the "he" or "she" labeling might be applicable, at least in the aspect that the AI driving system could be gender-biased toward male-oriented driving or female-oriented driving (if you believe such a difference exists). Some believe that the AI driving system will be gender fluid, meaning that based on how the AI system "learns" to drive, it will blend together the driving tactics that might be ascribed as male-oriented and those that might be ascribed as female-oriented. If you don't buy into the notion that there are any male-versus-female driving differences, presumably the AI will be gender-neutral in its driving practices.

No matter what your gender driving beliefs might be, one thing is clear: the whole topic can drive one crazy.

Copyright 2020 Dr. Lance Eliot. This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website
Lyft releases an autonomous driving dataset “Level 5” and sponsors research competition

Amrata Joshi
25 Jul 2019
3 min read
This week, the team at Lyft released a subset of their autonomous driving data, the Level 5 Dataset, and will be sponsoring a research competition. The Level 5 Dataset includes over 55,000 human-labeled 3D annotated frames, a drivable surface map, and an HD spatial semantic map for contextualizing the data.

The team has been perfecting its hardware and autonomy stack for the last two years. The sensor hardware needs to be built and properly calibrated, a localization stack is required, and an HD semantic map must be created; only then is it possible to unlock higher-level functionality like 3D perception, prediction, and planning. The dataset allows a broad cross-section of researchers to contribute to downstream research in self-driving technology.

The team is iterating on the third generation of Lyft's self-driving car and has already patented a new sensor array and a proprietary ultra-high dynamic range (100+ dB) camera. Since HD mapping is crucial to autonomous vehicles, the teams in Munich and Palo Alto have been working on building high-quality lidar-based geometric maps and high-definition semantic maps that are used by the autonomy stack. The team is also working on building high-quality, cost-effective geometric maps that would use only a camera phone to capture the source data.

Lyft's autonomous platform team has been deploying partner vehicles on the Lyft network. Along with its partner Aptiv, the team has successfully provided over 50,000 self-driving rides to Lyft passengers in Las Vegas, making it the largest paid commercial self-driving service in operation. Waymo vehicles are also now available on the Lyft network in Arizona, which expands the opportunity for Lyft passengers to experience self-driving rides.

To advance self-driving vehicles, the team will also be launching a competition for individuals to train algorithms on the dataset. The dataset makes it possible for researchers to work on problems such as prediction of agents over time, scene depth estimation from cameras with lidar as ground truth, and many more; a brief sketch of the lidar-to-depth idea appears at the end of this piece. The blog post reads, "We have segmented this dataset into training, validation, and testing sets — we will release the validation and testing sets once the competition opens." It further reads, "There will be $25,000 in prizes, and we'll be flying the top researchers to the NeurIPS Conference in December, as well as allowing the winners to interview with our team. Stay tuned for specific details of the competition!"

To know more about this news, check out the Medium post.

Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Uber and Lyft drivers go on strike a day before Uber IPO roll-out

Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
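As promised above, here is a minimal, hypothetical sketch of the "lidar as ground truth" idea for camera depth estimation: project 3D lidar points through a pinhole camera model to obtain sparse per-pixel depth targets. The intrinsics and points below are invented; real work would use the dataset's calibrated sensors and poses.

# A toy lidar-to-sparse-depth projection (illustrative values only).
import numpy as np

K = np.array([[720.0,   0.0, 640.0],   # hypothetical pinhole camera intrinsics
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Made-up lidar points already in the camera frame (x right, y down, z forward).
points = np.array([[ 1.2, 0.1,  8.0],
                   [-0.5, 0.3, 15.0],
                   [ 2.0, 0.2,  4.0]])

depth_map = np.zeros((720, 1280))
for x, y, z in points:
    if z <= 0:                                 # behind the camera: not visible
        continue
    u, v, _ = (K @ np.array([x, y, z])) / z    # perspective projection
    u, v = int(round(u)), int(round(v))
    if 0 <= v < depth_map.shape[0] and 0 <= u < depth_map.shape[1]:
        depth_map[v, u] = z                    # sparse ground-truth depth (m)

# The non-zero pixels become supervision targets for a depth-estimation model.
print(np.argwhere(depth_map > 0), depth_map[depth_map > 0])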
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows

Packt Editorial Staff
09 May 2019
8 min read
A typical deep learning workflow starts with ideation and research around a problem statement, where the architectural design and model decisions come into play. Following this, the theoretical model is tested using prototypes. This includes trying out different models or techniques, such as skip connections, and making decisions on what not to try out. PyTorch was started as a research framework by a Facebook intern, and now it has grown to be used as a research and prototyping framework and to write efficient models with serving modules. The PyTorch deep learning workflow is fairly equivalent to the workflow implemented by almost everyone in the industry, even for highly sophisticated implementations, with slight variations.

In this article, we explain the core of ideation and planning, and the design and experimentation, of the PyTorch deep learning workflow. This article is an excerpt from the book PyTorch Deep Learning Hands-On by Sherin Thomas and Sudhanshu Passi. The book attempts to provide an entirely practical introduction to PyTorch, with numerous examples and dynamic AI applications demonstrating the simplicity and efficiency of the PyTorch approach to machine intelligence and deep learning.

Ideation and planning

Usually, in an organization, the product team comes up with a problem statement for the engineering team, to know whether they can solve it or not. This is the start of the ideation phase. However, in academia, this could be the decision phase where candidates have to find a problem for their thesis. In the ideation phase, engineers brainstorm and find the theoretical implementations that could potentially solve the problem. In addition to converting the problem statement to a theoretical solution, the ideation phase is where we decide what the data types are and what dataset we should use to build the proof of concept (POC) of the minimum viable product (MVP). Also, this is the stage where the team decides which framework to go with by analyzing the behavior of the problem statement, available implementations, available pretrained models, and so on. This stage is very common in the industry, and I have come across numerous examples where a well-planned ideation phase helped the team to roll out a reliable product on time, while an unplanned ideation phase destroyed the whole product effort.

Design and experimentation

The crucial part of design and experimentation lies in the dataset and the preprocessing of the dataset. For any data science project, the major timeshare is spent on data cleaning and preprocessing, and deep learning is no exception. Data preprocessing is one of the vital parts of building a deep learning pipeline. Usually, real-world datasets are not cleaned or formatted for a neural network to process. Conversion to floats or integers, normalization, and so on, is required before further processing. Building a data processing pipeline is also a non-trivial task, which consists of writing a lot of boilerplate code. To make it much easier, dataset builders and DataLoader pipeline packages are built into the core of PyTorch.

The dataset and DataLoader classes

Different types of deep learning problems require different types of datasets, and each of them might require different types of preprocessing depending on the neural network architecture we use. This is one of the core problems in deep learning pipeline building.
Although the community has made the datasets for different tasks available for free, writing a preprocessing script is almost always painful. PyTorch solves this problem by giving abstract classes to write custom datasets and data loaders. The example given here is a simple dataset class to load the fizzbuzz dataset, but extending this to handle any type of dataset is fairly straightforward. PyTorch's official documentation uses a similar approach to preprocess an image dataset before passing it to a complex convolutional neural network (CNN) architecture.

A dataset class in PyTorch is a high-level abstraction that handles almost everything required by the data loaders. The custom dataset class defined by the user needs to override the __len__ and __getitem__ functions of the parent class, where __len__ is used by the data loaders to determine the length of the dataset and __getitem__ is used by the data loaders to get the item. The __getitem__ function expects the user to pass the index as an argument and returns the item that resides at that index:

from dataclasses import dataclass
from torch.utils.data import Dataset, DataLoader

@dataclass(eq=False)
class FizBuzDataset(Dataset):
    input_size: int
    start: int = 0
    end: int = 1000

    def encoder(self, num):
        ret = [int(i) for i in '{0:b}'.format(num)]
        return [0] * (self.input_size - len(ret)) + ret

    def __getitem__(self, idx):
        x = self.encoder(idx)
        if idx % 15 == 0:
            y = [1, 0, 0, 0]
        elif idx % 5 == 0:
            y = [0, 1, 0, 0]
        elif idx % 3 == 0:
            y = [0, 0, 1, 0]
        else:
            y = [0, 0, 0, 1]
        return x, y

    def __len__(self):
        return self.end - self.start

The implementation of a custom dataset uses brand new dataclasses from Python 3.7. dataclasses help to eliminate boilerplate code for Python magic functions, such as __init__, using dynamic code generation. This needs the code to be type-hinted, and that's what the first three lines inside the class are for. You can read more about dataclasses in the official documentation of Python (https://docs.python.org/3/library/dataclasses.html).

The __len__ function returns the difference between the end and start values passed to the class. In the fizzbuzz dataset, the data is generated by the program. The implementation of data generation is inside the __getitem__ function, where the class instance generates the data based on the index passed by DataLoader. PyTorch made the class abstraction as generic as possible such that the user can define what the data loader should return for each id. In this particular case, the class instance returns the input and output for each index, where the input, x, is the binary-encoded version of the index itself and the output is the one-hot encoded output with four states. The four states represent whether the number is a multiple of three (fizz), or a multiple of five (buzz), or a multiple of both three and five (fizzbuzz), or not a multiple of either three or five.

Note: For Python newbies, the way the dataset works can be understood by looking first at the loop that iterates over the integers, starting from zero, up to the length of the dataset (the length is returned by the __len__ function when len(object) is called).
Note: For Python newbies, the way the dataset works can be understood by looking first at the loop that iterates over the integers from zero to the length of the dataset (the length is returned by the __len__ function when len(object) is called).

The following snippet shows the simple loop, along with a DataLoader wrapping the same dataset (note that input_size has no default value, so it must be supplied when instantiating the class):

dataset = FizBuzDataset(input_size=10)
for i in range(len(dataset)):
    x, y = dataset[i]

dataloader = DataLoader(dataset, batch_size=10, shuffle=True,
                        num_workers=4)
for batch in dataloader:
    print(batch)

The DataLoader class accepts a dataset class that is inherited from torch.utils.data.Dataset. DataLoader takes the dataset and performs non-trivial operations such as mini-batching, parallel data loading, shuffling, and so on, to fetch the data from the dataset. It accepts a dataset instance from the user and uses a sampler strategy to sample the data as mini-batches. The num_workers argument decides how many parallel worker processes should be fetching the data. This helps avoid a CPU bottleneck, so that the CPU can keep up with the GPU's parallel operations.

Data loaders also allow users to specify whether to use pinned CUDA memory, which copies the data tensors into pinned (page-locked) host memory before returning them to the user. Using pinned memory is the key to fast data transfers between devices, since the data is loaded into the pinned memory by the data loader itself, using multiple CPU cores anyway (a short sketch of this appears at the end of this article).

Most often, especially while prototyping, custom datasets might not be available to developers, and in such cases they have to rely on existing open datasets. The good thing about working with open datasets is that most of them are free from licensing burdens, and thousands of people have already tried preprocessing them, so the community will help out. PyTorch provides utility packages (torchvision, torchtext, and torchaudio) for the three major types of datasets, with pretrained models, preprocessed datasets, and utility functions to work with them.

This article is about how to build a basic pipeline for deep learning development. The system we defined here is a very common, general approach followed by different sorts of companies, with slight variations. The benefit of starting with a generic workflow like this is that you can build a really complex workflow on top of it as your team or project grows.
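To make the pinned-memory behaviour described above concrete, here is a minimal sketch of our own; the TensorDataset contents and shapes are illustrative assumptions, while pin_memory and non_blocking are standard PyTorch arguments:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in tensors playing the role of a real dataset (assumed shapes)
features = torch.randn(1000, 10)
labels = torch.randint(0, 4, (1000,))
data = TensorDataset(features, labels)

loader = DataLoader(data, batch_size=10, shuffle=True,
                    num_workers=4,    # worker processes that prefetch batches in parallel
                    pin_memory=True)  # copy each batch into page-locked host memory

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
for x, y in loader:
    # With a pinned source, non_blocking=True lets the host-to-device
    # copy overlap with GPU computation
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)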
Build deep learning workflows and take deep learning models from prototyping to production with PyTorch Deep Learning Hands-On, written by Sherin Thomas and Sudhanshu Passi.

F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Top 10 deep learning frameworks
European Union fined Google 1.49 billion euros for antitrust violations in online advertising

Amrata Joshi
22 Mar 2019
3 min read
On Wednesday, European authorities fined Google 1.49 billion euros for antitrust violations in online advertising, the third antitrust fine imposed on Google by the European Union since 2017. According to the regulators, Google imposed unfair terms on companies that used its search bar on their websites in Europe. Google has now been found to have abused its power in its Android mobile operating system, in shopping comparison services, and in search adverts. Last year, EU competition commissioner Margrethe Vestager fined Google €4.34 billion for using its Android mobile operating system to unfairly keep rivals out of the mobile phone market. Two years ago, Google was fined 2.4 billion euros for unfairly favoring its own shopping services over those of its rivals.

Newspaper websites and blog aggregators usually have a search function embedded in them. When a user searches on this function, the website returns search results along with search adverts that appear alongside them. Google's AdSense for Search provides these search adverts to the owners of publisher websites. Google thus acts as an advertising broker between advertisers and the website owners who provide the space, with AdSense working as an online search advertising intermediation platform.

Google has been the dominant player in online search advertising intermediation in the European Economic Area (EEA), with a market share of more than 70% from 2006 to 2016. Last year Google held nearly 75.8% of this market, and this year it is already at 77.8%; Google's search ad market share is growing constantly. Competitors such as Microsoft and Yahoo cannot sell advertising space in Google's own search engine results pages, so they need to work with third-party websites to grow their business and compete with Google.

In 2006, Google included exclusivity clauses in its contracts that prohibited publishers from placing any search adverts from competitors on their search results pages. In March 2009, Google started to replace the exclusivity clauses with "Premium Placement" clauses, under which publishers had to reserve the most profitable space on their search results pages for Google's adverts and request a minimum number of Google adverts. This affected Google's competitors, who were restricted from placing their search adverts in the most visible and most clicked parts of the websites' search results pages. Things got even harder for the competitors when Google added clauses requiring publishers to seek written approval from Google before making any changes to the way rival adverts were displayed, giving Google control over how attractive the competing search adverts could be. Google also imposed an exclusive supply obligation, which prevented competitors from placing any search adverts on the most significant websites. The company gave the most valuable positions to its own adverts and controlled the performance of its rivals' adverts.

The European Commission found that Google's conduct harmed competition and consumers, and stifled innovation. Google may now face civil actions before the courts of the Member States for damages suffered by any person or business because of its anti-competitive behaviour. To know more about this news, check out the official press release.
Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Google is planning to bring Node.js support to Fuchsia
Google Open-sources Sandboxed API, a tool that helps in automating the process of porting existing C and C++ code

OpenAI LP, a new “capped-profit” company to accelerate AGI research and attract top AI talent

Fatema Patrawala
12 Mar 2019
3 min read
In a move that has surprised many, OpenAI yesterday announced the creation of a new for-profit company to fund its huge expenditures on compute and AI talent. Sam Altman, the former president of Y Combinator who stepped down last week, has been named CEO of the new "capped-profit" company, OpenAI LP. But some worry that this move may make the innovative company no different from the other AI startups out there.

With OpenAI LP, the mission remains to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share its benefits with the world. OpenAI writes on its blog that "returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress." Any returns beyond the cap revert to OpenAI; for example, a first-round investor putting in $10 million could receive at most $1 billion, with anything beyond that flowing back to the nonprofit. OpenAI LP's primary obligation is to advance the aims of the OpenAI Charter, and all investors and employees sign agreements that this obligation always comes first, even at the expense of some or all of their financial stake.

The major reason behind the new for-profit subsidiary can be put simply: OpenAI needs more money. The company anticipates spending billions of dollars in the coming years on building large-scale cloud compute, attracting and retaining talented people, and developing AI supercomputers. The cash burn rate of a top AI research company is staggering. Consider OpenAI's recent OpenAI Five project, a set of coordinated AI bots trained to compete against human professionals in the video game Dota 2: OpenAI rented 128,000 CPU cores and 256 GPUs at approximately US$2,500 per hour for the time-consuming process of training and fine-tuning its OpenAI Five models. Then consider the skyrocketing cost of retaining top AI talent: a New York Times story revealed that OpenAI paid its Chief Scientist Ilya Sutskever more than US$1.9 million in 2016, and the company currently employs some 100 highly paid specialists to develop its AI capabilities, safety measures, and policies.

OpenAI LP will be governed by the original OpenAI Board. Only a few members of the Board of Directors are allowed to hold financial stakes, and only those who do not hold stakes can vote on decisions where financial interests might conflict with OpenAI's mission.

People have linked the new for-profit company with OpenAI's recent controversial decision to withhold the code and training dataset for its language model GPT-2, ostensibly due to concerns they might be used for malicious purposes such as generating fake news. A tweet from a software engineer suggested an ulterior motive: "I now see why you didn't release the fully trained model of #gpt2". OpenAI Chairman and CTO Greg Brockman shot back: "Nope. We aren't going to commercialize GPT-2."

OpenAI aims to forge a sustainable path toward long-term AI development while striking a balance between benefiting humanity and turning a profit. A big part of OpenAI's appeal to top AI talent has been its not-for-profit character; will OpenAI LP mar that? And can OpenAI really strike a balance between benefiting humanity and turning a profit? Whether the for-profit shift will accelerate OpenAI's mission or prove a detrimental detour remains to be seen, but the journey ahead is bound to be challenging.

OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words