Tech News

Breaking AI Workflow Into Stages Reveals Investment Opportunities, from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By John P. Desmond, AI Trends Editor

An infrastructure-first approach to AI investing has the potential to yield greater returns with a lower risk profile, suggests a recent account in Forbes. To identify the technologies supporting the AI system, deconstruct the workflow into two steps as a starting point: training and inference.

Basil Alomary, MBA candidate at Columbia Business School, MBA Associate at Primary Venture Partners

“Training is the process by which a framework for deep-learning is applied to a dataset,” states Basil Alomary, author of the Forbes account. An MBA candidate at Columbia Business School and an MBA Associate at Primary Venture Partners, he has a background in early-stage SaaS ventures, as both an operator and an investor. “That data needs to be relevant, large enough, and well-labeled to ensure that the system is being trained appropriately. Also, the machine learning models being created need to be validated, to avoid overfitting to the training data and to maintain a level of generalizability. The inference portion is the application of this model and the ongoing monitoring to identify its efficacy.”

He identifies these stages in the AI/ML development lifecycle: data acquisition, data preparation, training, inference, and implementation. The stages of acquisition, preparation, and implementation have arguably attracted the least attention from investors.

Where to get the data for training the models is a chief concern. If a company is old enough to have historical customer data, that can be helpful. This approach should be inexpensive, but the data needs to be clean and complete enough to support whatever decisions it informs. Companies without the option of historical data can try publicly available datasets, or they can buy the data directly. A new class of suppliers is emerging that focuses primarily on selling clean, well-labeled datasets specifically for machine learning applications.

One such startup is Narrative, based in New York City. The company sells data tailored to the client’s use case. OpenML and the Amazon Datasets have marketplace characteristics but are entirely open source, which is limiting for those who seek to monetize their own assets.

Nick Jordan, CEO and founder, Narrative

“Essentially, the idea was to take the best parts of the e-commerce and search models and apply that to a non-consumer offering to find, discover and ultimately buy data,” stated Narrative founder and CEO Nick Jordan in an account in TechCrunch. “The premise is to make it as easy to buy data as it is to buy stuff online.”

In a demonstration, Jordan showed how a marketer could browse and search for data using the Narrative tools. The marketer could select the mobile IDs of people who have the Uber Driver app, or the Zoom app, installed on their phone, at a price that is often subscription-based. The data selection is added to the shopping cart and checked out, like any online transaction.

Founded in 2016, Narrative collects data sellers into its market, vetting each one, working to understand how the data is collected, its quality, and whether it could be useful in a regulated environment. Narrative does not attempt to grade the quality of the data. “Data quality is in the eye of the beholder,” Jordan stated. Buyers are able to conduct their own research into the data quality if so desired. Narrative is working on building a marketplace of third-party applications, which could include scoring of data sets.
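To make the training-versus-inference distinction from the top of this piece concrete, here is a minimal sketch in Python using scikit-learn. The dataset is synthetic, standing in for the “relevant, large enough, and well-labeled” data Alomary describes, and the held-out split is the validation step he mentions for catching overfitting.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical labeled dataset standing in for real, well-labeled data.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

# Hold out data the model never sees, to check generalizability.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Training: fit a model to the labeled dataset.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Validation: a large gap between these scores suggests overfitting.
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Inference: applying the trained model to new, unseen records.
new_record = X_test[:1]
print("prediction:", model.predict(new_record))
```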
Data preparation is critical to making the machine learning model effective. Raw data needs to be preprocessed so that machine learning algorithms can produce a model, a structural description of the data. In an image database, for example, the images may have to be labeled, which can be labor-intensive.

Automating Data Preparation is an Opportunity Area

Platforms are emerging to support the process of data preparation with a layer of automation that seeks to accelerate the process. Startup Labelbox recently raised a $25 million Series B financing round to help grow its data labeling platform for AI model training, according to a recent account in VentureBeat. Founded in 2018 in San Francisco, Labelbox aims to be the data platform that acts as a central hub for data science teams to coordinate with dispersed labeling teams. In April, the company won a contract with the Department of Defense for the US Air Force AFWERX program, which is building out technology partnerships.

Manu Sharma, CEO and co-founder, Labelbox

A press release issued by Labelbox on the contract award contained some history of the company. “I grew up in a poor family, with limited opportunities and little infrastructure,” stated Manu Sharma, CEO and one of Labelbox’s co-founders, who was raised in a village in India near the Himalayas. He said that opportunities afforded by the U.S. have helped him achieve more success in ten years than multiple generations of his family back home. “We’ve made a principled decision to work with the government and support the American system,” he stated.

The Labelbox platform supports supervised learning, a branch of AI that uses labeled data to train algorithms to recognize patterns in images, audio, video, or text. The platform enables collaboration among team members as well as these functions: rework, quality assurance, model evaluation, audit trails, and model-assisted labeling.

“Labelbox is an integrated solution for data science teams to not only create the training data but also to manage it in one place,” stated Sharma. “It’s the foundational infrastructure for customers to build their machine learning pipeline.”

Deploying the AI model into the real world requires ongoing evaluation and a data pipeline that can handle continued training, scaling, and managing computing resources, suggests Alomary in Forbes. An example product is Amazon’s SageMaker, which supports deployment. Amazon offers a managed service that includes human intervention to monitor deployed models.

DataRobot of Boston saw the opportunity in 2012 to develop a platform for building, deploying, and managing machine learning models. The company raised a $206 million Series E round in September and now has $431 million in venture-backed funding to date, according to Crunchbase. Unfortunately, DataRobot in March had to shrink its workforce by an undisclosed number of people, according to an account in BOSTINNO. The company employed 250 full-time employees as of October 2019.

DataRobot announced recently that it was partnering with Amazon Web Services to provide its enterprise AI platform free of charge to anyone using it to help with the coronavirus response effort.

Read the source articles and releases in Forbes, TechCrunch, VentureBeat and BOSTINNO.
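Returning to the preparation stage described at the top of this piece, here is a small companion sketch (scikit-learn assumed; the values are made up) showing one common shape of preprocessing: imputing missing fields and normalizing ranges so a learning algorithm can consume the data.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical raw feature matrix with gaps, standing in for the messy
# historical data described above.
raw = np.array([[1.0, np.nan], [2.0, 10.0], [np.nan, 12.0], [4.0, 11.0]])

prep = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # fill gaps in incomplete records
    ("scale", StandardScaler()),                 # normalize ranges for the learner
])

clean = prep.fit_transform(raw)
print(clean)
```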

AI Tools Assisting with Mental Health Issues Brought on by Pandemic, from AI Trends

Matthew Emerick
08 Oct 2020
5 min read
By Shannon Flynn, AI Trends Contributor

The pandemic is a perfect storm for mental health issues. Isolation from others, economic uncertainty, and fear of illness can all contribute to poor mental health — and right now, most people around the world face all three.

New research suggests that the virus is tangibly affecting mental health. Rates of depression and anxiety symptoms are much higher than normal. In some population groups, like students and young people, these numbers are almost double what they’ve been in the past.

Some researchers are even concerned that the prolonged, unavoidable stress of the virus may result in people developing long-term mental health conditions — including depression, anxiety disorders, and even PTSD, according to an account in Business Insider. Those on the front lines, like medical professionals, grocery store clerks, and sanitation workers, may be at an especially high risk.

Use of Digital Mental Health Tools with AI on the Rise

Automation is already widely used in health care, primarily in the form of technology like AI-based electronic health records and automated billing tools, according to a blog post from ZyDoc, a supplier of medical transcription applications. It’s likely that COVID-19 will only increase the use of automation in the industry. Around the world, medical providers are adopting new tech, like self-piloting robots that act as hospital nurses. These providers are also using UV light-based cleaners to sanitize entire rooms more quickly.

Digital mental health tools are also on the rise, along with fully automated AI tools that help patients get the care they need.

The AI-powered behavioral health platform Quartet, for example, is one of several automated tools that aim to help diagnose patients, screening them for common conditions like depression, anxiety, and bipolar spectrum disorders, according to a recent account in AI Trends. Other software — like a new app developed by engineers at the University of New South Wales in Sydney, Australia — can screen patients for different mental health conditions, including dementia. With a diagnosis, patients are better equipped to find the care they need, such as from mental health professionals with in-depth knowledge of a particular condition.

Another tool, an AI-based chatbot called Woebot, developed by Woebot Labs, Inc., uses brief daily chats to help people maintain their mental health. The bot is designed to teach skills related to cognitive behavioral therapy (CBT), a form of talk therapy that assists patients with identifying and managing maladaptive thought patterns.

In April, Woebot Labs updated the bot to provide specialized COVID-19-related support in the form of a new therapeutic modality, called Interpersonal Psychotherapy (IPT), which helps users “process loss and role transition,” according to a press release from the company.

Both Woebot and Quartet provide 24/7 access to mental health resources via the internet. This means that — so long as a person has an internet connection — they can’t be deterred by an inaccessible building or lengthy waitlist.

New AI Tools Supporting Clinicians

Some groups need more support than others. Clinicians working in hospitals are some of the most vulnerable to stress and anxiety. Right now, they’re facing long hours, high workloads, and frequent potential exposure to COVID.
Developers and health care professionals are also working together to create new AI tools that will support clinicians as they tackle the challenges of providing care during the pandemic.

Kavi Misri, founder and CEO of Rose

One new AI-powered mental health platform, developed by the mobile mental health startup Rose, will gather real-time data on how clinicians are feeling via “questionnaires and free-response journal entries, which can be completed in as few as 30 seconds,” according to an account in Fierce Healthcare. The tool will scan through these responses, tracking the clinician’s mental health and stress levels. Over time, it should be able to identify situations and events likely to trigger dips in mental health or increased anxiety, and tentatively diagnose conditions like depression, anxiety, and trauma.

Front-line health care workers are up against an unprecedented challenge, facing a wave of new patients and potential exposure to COVID, according to Kavi Misri, founder and CEO of Rose. As a result, many of these workers may be more vulnerable to stress, anxiety, and other mental health issues.

“We simply can’t ignore this emerging crisis that threatens the mental health and stability of our essential workers – they need support,” stated Misri.

Rose is also providing clinicians access to more than 1,000 articles and videos on mental health topics. Each user’s feed of content is curated based on the data gathered by the platform.

Right now, Brigham and Women’s Hospital, the second-largest teaching hospital at Harvard, is experimenting with the technology in a pilot program. If effective, the tech could soon be used around the country to support clinicians on the front lines of the crisis.

Mental health will likely remain a major challenge for as long as the pandemic persists. Fortunately, experimental AI-powered tools for mental health should help to manage the stress, depression, and trauma that has developed from dealing with COVID-19.

Read the source articles and information in Business Insider, a blog post from ZyDoc, in AI Trends, a press release from Woebot Labs, and in Fierce Healthcare.

Shannon Flynn is a managing editor at Rehack, a website featuring coverage of a range of technology niches.

Gender Bias In the Driving Systems of AI Autonomous Cars, from AI Trends

Matthew Emerick
08 Oct 2020
17 min read
By Lance Eliot, the AI Trends Insider

Here’s a topic that entails intense controversy, oftentimes sparking loud arguments and heated responses. Prepare yourself accordingly.

Do you think that men are better drivers than women, or do you believe that women are better drivers than men? Seems like most of us have an opinion on the matter, one way or another.

Stereotypically, men are often characterized as fierce drivers that have a take-no-prisoners attitude, while women supposedly are more forgiving and civil in their driving actions. Depending on how extreme you want to take these tropes, some would say that women shouldn’t be allowed on our roadways due to their timidity, while the same could be said that men shouldn’t be at the wheel due to their crazed pedal-to-the-metal predilection.

What do the stats say? According to the latest U.S. Department of Transportation data, based on its FARS (Fatality Analysis Reporting System), the number of males annually killed in car crashes is nearly twice the number of females killed in car crashes.

Ponder that statistic for a moment. Some would argue that it definitely is evidence that male drivers are worse drivers than female drivers, which seems logically sensible under the assumption that since more males are being killed in car crashes than females, men must be getting into a lot more car crashes, ergo they must be worse drivers. Presumably, it would seem that women are better able to avoid getting into death-producing car crashes; thus they are more adept at driving and are altogether safer drivers.

Whoa, exclaim some that don’t interpret the data in that way. Maybe women are somehow able to survive deadly car crashes better than men, and therefore it isn’t fair to compare the count of how many perished. Or, here’s one to get your blood boiling: perhaps women trigger car crashes by disrupting traffic flow and not being agile enough at the driving controls, and somehow men pay a dear price by getting into deadly accidents while contending with that kind of driving obfuscation.

There seems to be little evidentiary support for those contentions. A more straightforward counterargument is that men tend to drive more miles than women. By the very fact that men are on the roadways more than women, they are obviously going to be vulnerable to a heightened risk of getting into bad car crashes. In a sense, it’s a situation of rolling the dice more times than women do.

Insurance companies opt for that interpretation, noting too that the stats show men are more likely to drive while intoxicated, more likely to be speeding, and more likely to not use seatbelts.

There could be additional hidden factors involved in these outcomes. For example, some studies suggest that the gender differences begin to dissipate with aging, namely that at older ages, the chances of getting killed in a car crash become about equal for both male and female drivers. Of course, even that measure has controversy, which for some is a sign that men lose their driving edge and spirit as they get older, becoming more akin to the skittishness of women.

Yikes, it’s all a can of worms and a topic that can readily lend itself to fisticuffs.

Suppose there were some means to do away with all human driving, and we had only AI-based driving taking place. One would assume that the AI would not fall into any gender-based camp.
In other words, since we all think of AI as a kind of machine, it wouldn’t seem to make much sense to say that an AI system is male or that an AI system is female.

As an aside, there have been numerous expressed concerns that the AI-fostered Natural Language Processing (NLP) systems that are increasingly permeating our lives are perhaps falling into a gender trap, as it were. When you hear an Alexa or Siri voice that speaks to you, if it has a male intonation, do you perceive the system differently than if it has a female intonation?

Some believe that if every time you want to learn something new you invoke an NLP system that happens to have a female-sounding voice, it will tend to cause children especially to start to believe that women are the sole arbiters of the world’s facts. This could also work in other ways: if the female-sounding NLP system was telling you to do your homework, would that cause kids to be leery of women, as though they are always being bossy?

The same can be said about using a male voice for today’s NLP systems. If a male-sounding voice is always used, perhaps the context of what the NLP system is telling you might be twisted into being associated with males versus females.

As a result, some argue that NLP systems ought to have gender-neutral sounding voices. The aim is to get away from the potential of having people stereotype human males and human females, by stripping out the gender element from our verbally interactive AI systems.

There’s another, perhaps equally compelling, reason for wanting to excise any male or female intonation from an NLP system, namely that we might tend to anthropomorphize the AI system, unduly so. Here’s what that means.

AI systems are not yet even close to being intelligent, and yet the more that AI systems have the appearance of human-like qualities, the more we are bound to assume that the AI is as intelligent as humans. Thus, when you interact with Alexa or Siri and it uses either a male or female intonation, the argument is that the verbalization acts as a subtle and misleading signal that the underlying system is human-like and ergo intelligent. You fall readily for the notion that Alexa or Siri must be smart, simply by extension of the aspect that it has a male- or female-sounding embodiment.

In short, there is ongoing controversy about whether the expanding use of NLP systems in our society ought not to “cheat” by using a male- or female-sounding basis, and instead should be completely neutralized in terms of the spoken word, leaning toward neither gender.

Getting back to the topic of AI driving systems, there’s a chance that the advent of true self-driving cars might encompass gender traits, akin to the concern about Alexa and Siri doing so. Say what? You might naturally be puzzled as to why AI driving systems would include any kind of gender specificity.

Here’s the question for today’s analysis: Will AI-based true self-driving cars be male, female, gender fluid, or gender-neutral when it comes to the act of driving? Let’s unpack the matter and see.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars. True self-driving cars are ones where the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public must be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/

Self-Driving Cars And Gender Biases

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.
At first glance, it seems on the surface that the AI is going to drive like a machine does, without any type of gender influence or bias. How could gender get somehow shoehorned into the topic of AI driving systems? There are several ways that the nuances of gender could seep into the matter.

We’ll start with the acclaimed use of Machine Learning (ML) or Deep Learning (DL). As you’ve likely heard or read, part of the basis for today’s rapidly expanding use of AI is due to the advances made in ML/DL. You might have also heard or read that one of the key underpinnings of ML/DL is the need for data, lots and lots of data.

In essence, ML/DL is a computational pattern-matching approach. You feed lots of data into the algorithms being used, and patterns are sought to be discovered. Based on those patterns, the ML/DL can then henceforth potentially detect those same patterns in new data and report that those patterns were found.

If I feed tons and tons of pictures that have a rabbit somewhere in each photo into an ML/DL system, the ML/DL can potentially statistically ascertain that a certain shape, color, and size of a blob in those photos is a thing that we would refer to as a rabbit.

Please note that the ML/DL is not likely to use any human-like common-sense reasoning, which is something not often pointed out about these AI-based systems. For example, the ML/DL won’t “know” that a rabbit is a cute furry animal, that we like to play with them, and that around Easter they are especially revered. Instead, the ML/DL, simply based on mathematical computations, has calculated that a blob in a picture can be delineated, and possibly readily detected whenever you feed a new picture into the system, attempting to probabilistically state whether such a blob is present or not.

There’s no higher-level reasoning per se, and we are a long way away from the day when human-like reasoning of that nature is going to be embodied into AI systems (which, some argue, maybe we won’t ever achieve, while others keep saying that the day of the grand singularity is nearly upon us).

In any case, suppose that we fed pictures of only white-furred rabbits into the ML/DL when we were training it to find the rabbit blobs in the images. One aspect that might arise would be that the ML/DL would associate the rabbit blob as always and only being white in color. When we later fed in new pictures, the ML/DL might fail to detect a rabbit that had black fur, because the lack of white fur diminished the calculated chances that the blob was a rabbit (as based on the training set that was used).

In a prior piece, I emphasized that one of the dangers of using ML/DL is the possibility of getting stuck on various biases, such as the aspect that true self-driving cars could end up with a form of racial bias, due to the data that the AI driving system was trained on. Lo and behold, it is also possible that an AI driving system could incur a gender-related bias. Here’s how.

If you believe that men drive differently than women, and likewise that women drive differently than men, suppose that we collected a bunch of driving-related data based on human driving, and thus within the data there was a hidden element: some of the driving was done by men and some of the driving was done by women.

Letting loose an ML/DL system on this dataset, the ML/DL aims to find driving tactics and strategies as embodied in the data.
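To see how a hidden grouping in training data can surface as skewed behavior, consider this toy Python sketch (entirely hypothetical features and groups, not any real driving dataset): a model fit only on one group’s distribution holds up on that group and degrades on the other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def driving_data(n, style_shift):
    # Hypothetical features (say, speed and following distance); "style_shift"
    # stands in for a systematic difference between two groups of drivers.
    X = rng.normal(loc=style_shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * style_shift).astype(int)  # "safe maneuver" label
    return X, y

# Training data drawn only from group A -- the hidden sampling bias.
X_a, y_a = driving_data(2_000, style_shift=0.0)
model = LogisticRegression().fit(X_a, y_a)

# The model looks fine on group A but falls apart on group B's distribution.
X_b, y_b = driving_data(2_000, style_shift=1.5)
print("group A accuracy:", accuracy_score(y_a, model.predict(X_a)))
print("group B accuracy:", accuracy_score(y_b, model.predict(X_b)))
```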
Excuse me for a moment as I leverage the stereotypical gender differences to make my point.

It could be that the ML/DL discovers “aggressive” driving tactics within the male-oriented driving data and incorporates such a driving approach into what the true self-driving car will do while on the roadways. This could mean that when the driverless car roams our streets, it is going to employ a male-focused driving style, presumably trying to cut off other drivers in traffic and otherwise being quite pushy.

Or, it could be that the ML/DL discovers the “timid” driving tactics within the female-oriented driving data and incorporates a driving approach accordingly, such that when a self-driving car gets in traffic, the AI is going to act in a more docile manner.

I realize that the aforementioned seems objectionable due to the stereotypical characterizations, but the overall point is that if there is a difference between how males tend to drive and how females tend to drive, it could potentially be reflected in the data. And, if the data has such differences within it, there’s a chance that the ML/DL might either explicitly or implicitly pick up on those differences.

Imagine too that if we had a dataset that perchance was based only on male drivers, this landing on a male-oriented driving approach would seem even more heightened (similarly, if the dataset was based only on female drivers, a female-oriented bias would presumably be heightened).

Here’s the rub. Since male drivers today have twice the number of deadly car crashes as women, if an AI true self-driving car was perchance trained to drive via predominantly male-oriented driving tactics, would the resulting driverless car be more prone to car accidents than otherwise? That’s an intriguing point and worth pondering. Assuming that no other factors come into play in the nature of the AI driving system, we might certainly reasonably assume that the driverless car so trained might indeed falter in a similar way to the underlying “learned” driving behaviors. Admittedly, there are a lot of other factors involved in the crafting of an AI driving system, and thus it is hard to say that training datasets themselves could lead to such a consequence.

That being said, it is also instructive to realize that there are other ways that gender-based elements could get infused into the AI driving system. For example, suppose that rather than only using ML/DL, there was also programming or coding involved in the AI driving system, which indeed is most often the case. It could be that the AI developers themselves would allow their own biases to be encompassed in the coding, and since by and large the stats indicate that AI software developers tend to be male rather than female (though, thankfully, lots of STEM efforts are helping to change this dynamic), perhaps their male-oriented perspective would get included in the AI system coding.
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/

In The Field Biases Too

Yet another example involves the AI dealing with other drivers on the roadways. For many years to come, we will have both self-driving cars and human-driven cars on our highways and byways, simultaneously. There won’t be a magical overnight switch to suddenly having no human-driven cars and only AI driverless cars.

Presumably, self-driving cars are supposed to be crafted to learn from the driving experiences encountered while on the roadways. Generally, this involves the self-driving car collecting its sensory data during driving journeys and then uploading the data via OTA (Over-The-Air) electronic communications into the cloud of the automaker or self-driving tech firm. Then, the automaker or self-driving tech firm uses various tools to analyze the voluminous data, likely including ML/DL, and pushes out to the fleet of driverless cars updates based on what was gleaned from the roadway data collected.

How does this pertain to gender? Assuming again that male drivers and female drivers do drive differently, the roadway experiences of the driverless cars will involve the driving aspects of the human-driven cars around them. It is quite possible that the ML/DL doing analysis of the fleet-collected data would discover the male-oriented or the female-oriented driving tactics, though it and the AI developers might not realize that the deeply buried patterns were somehow tied to gender.

Indeed, one of the qualms about today’s ML/DL is that it oftentimes is not amenable to explanation. The complexity of the underlying computations does not necessarily lend itself to being readily interpreted or explained in everyday ways (which is why the need for XAI, or Explainable AI, is becoming increasingly important).

Conclusion

Some people affectionately refer to their car as a “he” or a “she,” as though the car itself were of a particular gender. When an AI system is at the wheel of a self-driving car, it could be that the “he” or “she” labeling might be applicable, at least in the aspect that the AI driving system could be gender-biased toward male-oriented driving or female-oriented driving (if you believe such a difference exists).

Some believe that the AI driving system will be gender fluid, meaning that based on how the AI system “learns” to drive, it will blend together the driving tactics that might be ascribed as male-oriented and those that might be ascribed as female-oriented. If you don’t buy into the notion that there are any male-versus-female driving differences, presumably the AI will be gender-neutral in its driving practices.

No matter what your gender driving beliefs might be, one thing is clear: the whole topic can drive one crazy.

Copyright 2020 Dr. Lance Eliot. This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website

Amundsen: one year later (Lyft Engineering) from Linux.com

Matthew Emerick
08 Oct 2020
2 min read
On October 30, 2019, we officially open sourced Amundsen, our solution to metadata catalog and data discovery challenges. Ten months later, Amundsen joined the Linux Foundation AI (LF AI) as its incubation project.

In almost every modern data-driven company, each interaction with the platform is powered by data. As data resources constantly grow, it becomes increasingly difficult to understand what data resources exist, how to access them, and what information is available in those sources without tribal knowledge. Poor understanding of data leads to bad data quality, low productivity, duplication of work, and, most importantly, a lack of trust in the data. The complexity of managing a fragmented data landscape is not a problem unique to Lyft, but a common one that exists throughout the industry.

In a nutshell, Amundsen is a data discovery and metadata platform for improving the productivity of data analysts, data scientists, and engineers when interacting with data. By indexing the data resources (tables, dashboards, users, etc.) and powering a PageRank-style search based on usage patterns (e.g., highly queried tables show up earlier than less-queried tables), these users are able to address their data needs faster.

Read more at Lyft Engineering

The post Amundsen: one year later (Lyft Engineering) appeared first on Linux.com.
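The usage-ranked search idea is easy to picture with a toy sketch in Python; this illustrates the concept only and is not Amundsen’s actual implementation. Tables matching a query are ordered so that highly queried ones rank first.

```python
# Hypothetical metadata records with per-table usage counts.
tables = [
    {"name": "rides_daily", "description": "daily ride aggregates", "query_count": 9_400},
    {"name": "rides_raw", "description": "raw ride events", "query_count": 1_200},
    {"name": "driver_rides", "description": "rides joined to drivers", "query_count": 4_800},
]

def search(query: str):
    # Match on name or description, then rank by how often each table is queried.
    matches = [t for t in tables if query in t["name"] or query in t["description"]]
    return sorted(matches, key=lambda t: t["query_count"], reverse=True)

for table in search("rides"):
    print(table["name"], table["query_count"])
```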

Deadline extended for app updates using UIWebView from News - Apple Developer

Matthew Emerick
08 Oct 2020
1 min read
Apple designed WKWebView in 2014 to ensure that you can integrate web content into your app quickly, securely, and consistently across iOS and macOS. Since then, we’ve recommended that you adopt WKWebView instead of UIWebView and WebView — both of which were formally deprecated. New apps containing these frameworks are no longer accepted by the App Store. And last year, we announced that the App Store will no longer accept app updates containing UIWebView as of December 2020. However, to provide additional time for you to adopt WKWebView and to ensure that it supports the features most often requested by developers, this deadline for app updates has been extended beyond the end of 2020. We’ll let you know when a new deadline is confirmed.

Learn about the latest in WKWebView

7 New Ways Cloudera Is Investing in Our Culture from Cloudera Blog

Matthew Emerick
08 Oct 2020
5 min read
As Cloudera offices around the world continue to cope with the impact of COVID-19, we have worked hard to ease stress and adapt to remote working. People are the heart of our company, and we’re investing in creative, new ways to make every Clouderan feel valued and appreciated. Clouderans are superstars at work and at home, and burn-out is unhealthy for employees, their families, and the company. Our plan is to adapt the amazing workplace culture we have at Cloudera to our new remote workstyle.

Here are some of our recent initiatives geared toward supporting employees and reducing burn-out:

We’re Pledging to Be Good Colleagues

Toward the start of our work-from-home tenure, we developed a Cloudera WFH Code. It was designed to help us all rally around a common set of guidelines and rules that would help set the tone for WFH moving forward. We’re family first and people first.

We’re Unplugging

Starting at the beginning of July, we designated certain days as “Unplug days,” when employees are given the day (or in some cases multiple days) off to step away from work and do something to make their lives easier. That might mean pursuing a hobby, volunteering, spending time with family, or simply lying in bed and watching movies all day. With studies showing employees are working longer hours during this pandemic period, we need to make sure that Clouderans know that not only is it okay to unplug, we want them to. To date, Clouderans have taken 10 Unplug days between July and September, with 22 more scheduled between now and spring 2021.

We’re making the most of our time. One of the more inspiring stories I heard was from one of our Singapore employees. Her office is participating in Mercy Relief’s Ground Zero virtual run challenge to raise money for local communities affected by natural disasters. Over our last Unplug weekend, the team covered 55 km collectively and garnered $3,000 to donate to the cause.

We’re Taking Time off to Vote

Cloudera pledged to #MakeTimeToVote, as part of the Time to Vote initiative. We are actively encouraging all employees around the world to take the time off needed to become informed voters and participate in their community elections.

We’re Investing More in Diversity & Inclusion (D&I) Efforts

Given the world climate around racism and injustice, and the spotlight on inclusion shortcomings in tech, we’ve doubled down on our commitment to D&I initiatives. Our new Chief Diversity Officer, Sarah Shin, is making speedy progress implementing these initiatives and getting them out to Clouderans. (And if you’re interested in fostering D&I within the workplace, we happen to have six new and open roles on her team.)

One of Sarah’s first initiatives was implementing Bias Busters workshops for 379 managers. These sessions shared tools and best practices our managers can use to identify and interrupt unconscious biases. We also had the pleasure of meeting with Dr. Mary Frances Berry, acclaimed activist, writer, lawyer, and professor, to discuss diversity in tech and Cloudera’s role in leading the way toward a more inclusive workforce. Our Equality Committee had direct one-on-one time with Dr. Berry, while our CEO, Rob Bearden, had a compelling discussion with her in our recent Cloudera Now virtual event as well as at a company Town Hall.
We’re Reinvigorating Our Creativity

Whether it’s through the free virtual Medicine for the Soul Yoga memberships we’re providing, our new virtual cooking class program, or the meeting-free days we offer each week, we’re committed to helping Clouderans spark creativity. We each de-stress, find motivation, and thrive in different environments. The ability to choose is what’s most important. Plus, many of us will pick up a new hobby in the process.

We’re Communicating

This year, we launched a new weekly e-newsletter, Thriving Together, to keep our employees connected to the company and each other. Each issue features a Q&A with a Clouderan, highlights virtual events, links to employee (and world) news, shares work-from-home tips and articles, and offers some levity for the workweek. My favorite section? Cool Things Clouderans Are Up To. I love learning about the creative ways our employees are connecting with each other while apart.

We also launched a monthly manager newsletter. It keeps our leaders up-to-date on company initiatives and shares ways to support their team – and themselves – while we all learn how to navigate this 100% remote work world. Plus, we’re extraordinarily active on Slack. With channels for everything from dad jokes to pets to solo quarantining, we have something for everyone, and we’re seeing a high level of engagement across the board.

We’re Volunteering

While we traditionally have one Global Day of Service each year, when employees have the day to volunteer and give back to their communities, this year we had three. The 2020 theme was Embracing Different Perspectives, and we provided Clouderans with multiple online opportunities to learn, volunteer, and give back. We were also able to participate in important conversations with leading nonprofits tackling the thorniest local and global issues.

As we keep our fingers on the pulse of our company culture, we continue to roll out new initiatives to help meet our employees’ needs. We’ll continue to invest in emotional, mental, and physical well-being and keep our workplace culture at the forefront as we move forward, together. To learn more about our commitment to diversity and inclusion, take a look at our CEO Rob Bearden’s blog post on the topic and stay tuned to hear from our new CDO Sarah Shin later this month.

The post 7 New Ways Cloudera Is Investing in Our Culture appeared first on Cloudera Blog.

Apple Developer app updates for the United Kingdom and Ireland from News - Apple Developer

Matthew Emerick
08 Oct 2020
1 min read
Now it’s simpler than ever for developers based in the United Kingdom and Ireland to enroll in the Apple Developer Program. The Apple Developer app now supports enrollment in these regions, allowing developers to start and finish their membership purchase with local payment methods on iPhone or iPad. And since membership is provided as an auto-renewable subscription, keeping it active is easy.

View on the App Store

New subscription server notifications available to test from News - Apple Developer

Matthew Emerick
08 Oct 2020
1 min read
App Store server notifications provide real-time updates on a subscriber’s status, so you can create customized user experiences. The following new notifications are now available in the App Store sandbox environment, and you can use them in production later this year:

- DID_RENEW lets you know when a subscriber successfully auto-renews.
- PRICE_INCREASE_CONSENT lets you know when the App Store starts asking users to agree to your subscription’s new price, so you can remind them of your service’s value as encouragement to stay subscribed.

In addition, the following will be deprecated in the App Store sandbox environment in November 2020: RENEWAL notifications and these top-level objects: latest_receipt, latest_receipt_info, latest_expired_receipt, and latest_expired_receipt_info. Update your code to continue providing a seamless user experience.

Learn more about App Store server notifications
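Since these notifications arrive as JSON posted to a server URL you designate, a receiver can branch on the notification type. Below is a minimal, hypothetical Python sketch; the field name follows the version 1 payload’s notification_type, and a real deployment must also verify the shared secret and the payload’s authenticity before acting on it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the notification body posted by the App Store.
        length = int(self.headers.get("Content-Length") or 0)
        payload = json.loads(self.rfile.read(length) or "{}")

        kind = payload.get("notification_type")
        if kind == "DID_RENEW":
            pass  # extend the subscriber's entitlement in your database
        elif kind == "PRICE_INCREASE_CONSENT":
            pass  # e.g., queue a reminder of your service's value

        # Respond 200 so the App Store does not retry the notification.
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), NotificationHandler).serve_forever()
```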

Should I use WKWebView or SFSafariViewController for web views in my app? from News - Apple Developer

Matthew Emerick
08 Oct 2020
4 min read
Whether your app needs to provide a full web browsing experience, display richly styled content, or incorporate external websites without taking people out of your app, you can make the experience smooth and seamless by choosing the right API. You can display web content inside of your app with both the WKWebView and SFSafariViewController APIs. But which is the best for your app’s needs?

WKWebView is part of the WebKit framework: It allows you to embed web content into your app as a seamless part of your app’s UI. You can present a full or partial view of web content directly in your app by loading a view that leverages existing HTML, CSS, and JavaScript content, or create your own if your layout and styling requirements are better satisfied by using web technologies.

Note: If your app uses the deprecated UIWebView API to display web content, you should update your code for improved security, performance, and reliability. Learn more: Deadline extended for app updates using UIWebView

SFSafariViewController is part of the SafariServices framework, and lets your users browse a web page, or a website, right inside your app. With it, people can enjoy the same web browsing experience they get in Safari — including features like Password Autofill, Reader, and Secure Browsing — without ever having to leave your app.

These two APIs can provide a lot of the heavy lifting for web technologies in your app, though there are a few instances where we recommend alternative frameworks. For example, when presenting a web-based login screen for your app, use ASWebAuthenticationSession to provide people with the most secure experience.

When should I use WKWebView?

If you need to customize or control the display of web content — or interact with the content itself — WKWebView will be most flexible in helping you build the implementation that suits your needs. (If your app is designed to be used offline, make sure any WKWebView content has appropriate fallbacks and alerts.) Additionally, consider WKWebView if you need to display HTML or CSS content inline or as part of the rest of your app’s user interface. The Washington Post’s development team implemented WKWebView to display content from the Washington Post website within their app.

In short, WKWebView is an incredibly powerful technology that works in tandem with iOS and macOS frameworks. That said, WKWebView is not designed to outright replace system technologies and frameworks. For example, you should avoid using it in place of device-optimized UIKit classes like UITableView, UIImage, and UIButton, as you lose out on core system behaviors and provide a subpar experience for people who use your app.

When should I use SFSafariViewController?

When you want to display websites inside your app without sending people to Safari, the best tool is SFSafariViewController. By using this API, you can effectively embed the Safari interface — and many of its key features and privacy protections — into your app. The Apple Developer app displays web links through SFSafariViewController.

SFSafariViewController is best used when you need to display interactive web experiences on websites you don’t own, or showcase parts of your web content that are generally outside the scope of your app.

Resources

WKWebView
SFSafariViewController

WWDC17: What’s New in Safari View Controller
Safari View Controller brings Safari’s features into your app for browsing the web and logging in with 3rd party services. Learn how to use new APIs to customize Safari View Controller’s UI to fit your app’s style.

WWDC17: Customized Loading in WKWebView
WKWebView allows you to seamlessly integrate web content into your app. Learn how new features in WKWebView allow you to manage cookies, filter unwanted content, and give you more control over loading web content.

JDK 16: What’s coming in Java 16 from InfoWorld Java

Matthew Emerick
08 Oct 2020
1 min read
Although not due to arrive until March 2021, Java Development Kit (JDK) 16 has begun to take shape, with proposed features including concurrent thread-stack processing for garbage collection, support for C++14 language features, and an “elastic metaspace” capability to more quickly return unused class-metadata memory to the OS.

JDK 16 will be the reference implementation of the version of standard Java set to follow JDK 15, which arrived September 15. The six-month release cadence for standard Java has JDK 16 arriving next March. [Also on InfoWorld: JDK 15: The new features in Java 15]

As of October 8, eight proposals officially target JDK 16.

Kotlin queues up new compiler, WebAssembly back end from InfoWorld Java

Matthew Emerick
08 Oct 2020
1 min read
Kotlin, the JetBrains-developed, statically typed language for JVM, Android, and web development, is due for a compiler rewrite, multiplatform mobile improvements, and a Kotlin-to-WebAssembly compiler back end, according to a public roadmap for the platform.

Unveiled October 5, the roadmap covers priorities for the language, which received a strategic boost in 2017 when Google backed it for building Android mobile apps, alongside Java and C++.

New – Redis 6 Compatibility for Amazon ElastiCache from AWS News Blog

Matthew Emerick
07 Oct 2020
5 min read
Since the Redis 5.0 compatibility release for Amazon ElastiCache, there have been lots of improvements to Amazon ElastiCache for Redis, including support for upstream versions such as 5.0.6. Earlier this year, we announced Global Datastore for Redis, which lets you replicate a cluster in one region to clusters in up to two other regions. Recently we improved your ability to monitor your Redis fleet by enabling 18 additional engine- and node-level CloudWatch metrics. Also, we added support for resource-level permission policies, allowing you to assign AWS Identity and Access Management (IAM) principal permissions to specific ElastiCache resources.

Today, I am happy to announce Redis 6 compatibility for Amazon ElastiCache for Redis. This release brings several new and important features to Amazon ElastiCache for Redis:

Managed Role-Based Access Control – Amazon ElastiCache for Redis 6 now provides you with the ability to create and manage users and user groups that can be used to set up Role-Based Access Control (RBAC) for Redis commands. You can now simplify your architecture while maintaining security boundaries by having several applications use the same Redis cluster without being able to access each other’s data. You can also take advantage of granular access control and authorization to create administration and read-only user groups. Amazon ElastiCache enhances the new Access Control Lists (ACL) introduced in open source Redis 6 to provide a managed RBAC experience, making it easy to set up access control across several Amazon ElastiCache for Redis clusters.

Client-Side Caching – Amazon ElastiCache for Redis 6 comes with server-side enhancements to deliver efficient client-side caching to further improve your application performance. Redis clusters now support client-side caching by tracking client requests and sending invalidation messages for data stored on the client. In addition, you can also take advantage of a broadcast mode that allows clients to subscribe to a set of notifications from Redis clusters.

Significant Operational Improvements – This release also includes several enhancements that improve application availability and reliability. Specifically, Amazon ElastiCache has improved replication under low memory conditions, especially for workloads with medium/large sized keys, by reducing latency and the time it takes to perform snapshots. Open source Redis enhancements include improvements to the expiry algorithm for faster eviction of expired keys, and various bug fixes.

Note that open source Redis 6 also announced support for encryption-in-transit, a capability that is already available in Amazon ElastiCache for Redis 4.0.10 onwards. This release of Amazon ElastiCache for Redis 6 does not impact Amazon ElastiCache for Redis’ existing support for encryption-in-transit.

In order to apply RBAC to a new or existing Redis 6 cluster, you first need to have a user and user group created. We’ll review the process below.

Using Role-Based Access Control – How it works

As an alternative to authenticating users with the Redis AUTH command, Amazon ElastiCache for Redis 6 offers Role-Based Access Control (RBAC). With RBAC, you create users and assign them specific permissions via an access string. To create, modify, and delete users and user groups, use the User Management and User Group Management sections in the ElastiCache console.
ElastiCache will automatically configure a default user with user ID and user name “default”, and you can then add it or newly created users to new groups in User Group Management. If you want to change the default user to use your own password and access setting, you need to create a new user with the username set to “default” and then swap it with the original default user. We recommend using your own strong password for a default user.

The following example shows how to swap the original default user with another default user that has a modified access string, via the AWS CLI.

$ aws elasticache create-user --user-id "new-default-user" --user-name "default" --engine "REDIS" --passwords "a-str0ng-pa))word" --access-string "off +get ~keys*"

Create a user group and add the user you created previously.

$ aws elasticache create-user-group --user-group-id "new-default-group" --engine "REDIS" --user-ids "default"

Swap the new default user with the original default user.

$ aws elasticache modify-user-group --user-group-id "new-default-group" --user-ids-to-add "new-default-user" --user-ids-to-remove "default"

Also, you can modify a user’s password or change its access permissions using the modify-user command, or remove a specific user using the delete-user command; the user will be removed from any user groups to which it belongs. Similarly, you can modify a user group by adding new users and/or removing current users using the modify-user-group command, or delete a user group using the delete-user-group command. Note that the user group itself, not the users belonging to the group, will be deleted.

Once you have created a user group and added users, you can assign the user group to a replication group, or migrate between Redis AUTH and RBAC. For more information, see the documentation in detail.

Redis 6 cluster for ElastiCache – Getting Started

As usual, you can use the ElastiCache console, CLI, APIs, or a CloudFormation template to create a new Redis 6 cluster. I’ll use the console: choose Redis from the navigation pane and click Create with the following settings. Select the “Encryption in-transit” checkbox to ensure you can see the “Access Control” options. For Access Control you can select either a User Group Access Control List (the RBAC feature) or the Redis AUTH default user. If you select RBAC, you can choose one of the available user groups.

My cluster is up and running within minutes. You can also use the in-place upgrade feature on an existing cluster: select the cluster, click Action and Modify, and change the Engine Version from the 5.0.6-compatible engine to 6.x.

Now Available

Amazon ElastiCache for Redis 6 is now available in all AWS regions. For a list of ElastiCache for Redis supported versions, refer to the documentation. Please send us feedback either in the AWS forum for Amazon ElastiCache, through AWS support, or via your account team.

– Channy;
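For teams scripting this setup in Python, the same user and user-group flow can be expressed with boto3. This is a sketch mirroring the CLI commands above; the IDs and the password are placeholders, and it assumes AWS credentials are already configured.

```python
import boto3

elasticache = boto3.client("elasticache")

# Create a replacement "default" user with a restricted access string.
elasticache.create_user(
    UserId="new-default-user",
    UserName="default",
    Engine="REDIS",
    Passwords=["a-str0ng-pa))word"],
    AccessString="off +get ~keys*",
)

# Create a user group containing the original default user.
elasticache.create_user_group(
    UserGroupId="new-default-group",
    Engine="REDIS",
    UserIds=["default"],
)

# Swap the new default user in for the original one.
elasticache.modify_user_group(
    UserGroupId="new-default-group",
    UserIdsToAdd=["new-default-user"],
    UserIdsToRemove=["default"],
)
```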

article-image-xamarin-essentials-1-6-preview-macos-media-and-more-from-xamarin-blog
Matthew Emerick
07 Oct 2020
4 min read
Save for later

Xamarin.Essentials 1.6 preview: macOS, media, and more! from Xamarin Blog

Matthew Emerick
07 Oct 2020
4 min read
Xamarin.Essentials has been a staple for developers building iOS, Android, and Windows apps with Xamarin and .NET since it was first released last year. Now, we are introducing Xamarin.Essentials 1.6, which adds new APIs including MediaPicker, AppActions, Contacts, and more. Not to mention that this release also features official support for macOS! This means that Xamarin.Essentials now offers over 50 native integrations with support for 7 different operating systems, all from a single library that is optimized for performance, linker safe, and production ready. Here is a highlight reel of all the new features: https://devblogs.microsoft.com/xamarin/wp-content/uploads/sites/44/2020/09/Xamarin.Essentials-1.6.mp4

Welcome macOS

Since the first release of Xamarin.Essentials, the team and community have been continuously working to add more platforms to fit developers’ needs. After adding tvOS, watchOS, and Tizen support, the next natural step was first-class support for macOS to complement the UWP desktop support. I am pleased to announce that most APIs are now supported on macOS 10.12.6 (Sierra) and higher! Take a look at the updated platform support page to see all of the APIs that you can leverage in your macOS apps.

MediaPicker and FilePicker

The time has finally come for brand new media capabilities in Xamarin.Essentials. These new APIs enable you to easily access device features such as picking a file from the system, selecting photos or videos, or having your user take a photo or video with the camera.

async Task TakePhotoAsync()
{
    try
    {
        var photo = await MediaPicker.CapturePhotoAsync();
        await LoadPhotoAsync(photo);
        Console.WriteLine($"CapturePhotoAsync COMPLETED: {PhotoPath}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"CapturePhotoAsync THREW: {ex.Message}");
    }
}

App Actions

App actions, shortcuts, and jump lists have all been simplified across iOS, Android, and UWP with this new API. You can now manually create and react to actions when the user selects them from the app icon.

try
{
    await AppActions.SetAsync(
        new AppAction("app_info", "App Info", icon: "app_info_action_icon"),
        new AppAction("battery_info", "Battery Info"));
}
catch (FeatureNotSupportedException ex)
{
    Debug.WriteLine("App Actions not supported");
}

Contacts

Does your app need the ability to get contact information? The brand-new Contacts API has you covered with a single line of code to launch a contact picker and gather information:

try
{
    var contact = await Contacts.PickContactAsync();

    if (contact == null)
        return;

    var name = contact.Name;
    var contactType = contact.ContactType; // Unknown, Personal, Work
    var numbers = contact.Numbers;         // List of phone numbers
    var emails = contact.Emails;           // List of email addresses
}
catch (Exception ex)
{
    // Handle exception here.
}

So Much More

That is just the start of the brand-new features in Xamarin.Essentials 1.6. When you install the latest update, you will also find new APIs including Screenshot, Haptic Feedback, and an expanded Permissions API (a hedged Screenshot sketch appears at the end of this article). Additionally, there have been tweaks and optimizations to existing features, and of course some bug fixes.

Built with the Community

One of the most exciting parts of working on Xamarin.Essentials is seeing the amazing community contributions. The additions this month included exciting large new APIs, small tweaks, and plenty of bug fixes. Thank you to everyone who has filed an issue, filed a feature request, reviewed code, or sent in a full pull request:
sung-su.kim – Tizen FilePicker
Andrea Galvani – UWP Authenticator Fixes
Pedro Jesus – Contacts, Color.ToHsv/FromHsva
Dimov Dima – HapticFeedback API
Dogukan Demir – Android O Fixes in Permissions
Sreeraj P R – Audio fixes on Text-to-Speech
Martin Kuckert – iOS Web Authenticator Fixes
solomonfried – WebAuthenticator Email
vividos – FilePicker API
Janus Weil – Location class fixes, AltitudeReferenceSystem addition
Ed Snider – App Actions

Learn More

Be sure to read the full release notes and the updated documentation to learn more about each of the new features.

The post Xamarin.Essentials 1.6 preview: macOS, media, and more! appeared first on Xamarin Blog.
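As mentioned above, here is a minimal, hedged sketch of the new Screenshot API; the surrounding method and the way the stream is used are illustrative, not from the original post:

async Task<Stream> CaptureScreenAsync()
{
    // Capture the current state of the screen.
    var screenshot = await Screenshot.CaptureAsync();

    // Open the captured image as a stream, e.g. to display or upload it.
    return await screenshot.OpenReadAsync();
}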
article-image-amazon-sagemaker-continues-to-lead-the-way-in-machine-learning-and-announces-up-to-18-lower-prices-on-gpu-instances-from-aws-news-blog
Matthew Emerick
07 Oct 2020
11 min read
Save for later

Amazon SageMaker Continues to Lead the Way in Machine Learning and Announces up to 18% Lower Prices on GPU Instances from AWS News Blog

Matthew Emerick
07 Oct 2020
11 min read
Since 2006, Amazon Web Services (AWS) has been helping millions of customers build and manage their IT workloads. From startups to large enterprises to the public sector, organizations of all sizes use our cloud computing services to reach unprecedented levels of security, resiliency, and scalability. Every day, they’re able to experiment, innovate, and deploy to production in less time and at lower cost than ever before, so business opportunities can be explored, seized, and turned into industrial-grade products and services.

As Machine Learning (ML) became a growing priority for our customers, they asked us to build an ML service infused with the same agility and robustness. The result was Amazon SageMaker, a fully managed service launched at AWS re:Invent 2017 that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly.

Today, Amazon SageMaker is helping tens of thousands of customers in all industry segments build, train, and deploy high-quality models in production: financial services (Euler Hermes, Intuit, Slice Labs, Nerdwallet, Root Insurance, Coinbase, NuData Security, Siemens Financial Services), healthcare (GE Healthcare, Cerner, Roche, Celgene, Zocdoc), news and media (Dow Jones, Thomson Reuters, ProQuest, SmartNews, Frame.io, Sportograf), sports (Formula 1, Bundesliga, Olympique de Marseille, NFL, Guinness Six Nations Rugby), retail (Zalando, Zappos, Fabulyst), automotive (Atlas Van Lines, Edmunds, Regit), dating (Tinder), hospitality (Hotels.com, iFood), industry and manufacturing (Veolia, Formosa Plastics), gaming (Voodoo), customer relationship management (Zendesk, Freshworks), energy (Kinect Energy Group, Advanced Microgrid Systems), real estate (Realtor.com), satellite imagery (Digital Globe), human resources (ADP), and many more.

When we asked our customers why they decided to standardize their ML workloads on Amazon SageMaker, the most common answer was: “SageMaker removes the undifferentiated heavy lifting from each step of the ML process.” Zooming in, we identified five areas where SageMaker helps them most.

#1 – Build Secure and Reliable ML Models, Faster

As many ML models are used to serve real-time predictions to business applications and end users, making sure that they stay available and fast is of paramount importance. This is why Amazon SageMaker endpoints have built-in support for load balancing across multiple AWS Availability Zones, as well as built-in Auto Scaling to dynamically adjust the number of provisioned instances according to incoming traffic.

For even more robustness and scalability, Amazon SageMaker relies on production-grade open source model servers such as TensorFlow Serving, the Multi-Model Server, and TorchServe. A collaboration between AWS and Facebook, TorchServe is available as part of the PyTorch project, and makes it easy to deploy trained models at scale without having to write custom code.

In addition to resilient infrastructure and scalable model serving, you can also rely on Amazon SageMaker Model Monitor to catch prediction quality issues that could happen on your endpoints. By saving incoming requests as well as outgoing predictions, and by comparing them to a baseline built from a training set, you can quickly identify and fix problems like missing features or data drift.
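As a hedged sketch of what enabling this request-and-prediction capture might look like with the SageMaker Python SDK (v2), assuming an existing model object; the bucket name and instance settings below are placeholders, not from the original post:

from sagemaker.model_monitor import DataCaptureConfig

# Capture all requests and predictions so they can be compared to a baseline later.
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri="s3://my-bucket/endpoint-capture",  # hypothetical bucket
)

# 'model' is assumed to be an already-created sagemaker.model.Model.
predictor = model.deploy(
    initial_instance_count=2,           # instances are balanced across Availability Zones
    instance_type="ml.m5.large",
    data_capture_config=capture_config,
)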
Says Aude Giard, Chief Digital Officer at Veolia Water Technologies: “In 8 short weeks, we worked with AWS to develop a prototype that anticipates when to clean or change water filtering membranes in our desalination plants. Using Amazon SageMaker, we built an ML model that learns from previous patterns and predicts the future evolution of fouling indicators. By standardizing our ML workloads on AWS, we were able to reduce costs and prevent downtime while improving the quality of the water produced. These results couldn’t have been realized without the technical experience, trust, and dedication of both teams to achieve one goal: an uninterrupted clean and safe water supply.” You can learn more in this video.

#2 – Build ML Models Your Way

When it comes to building models, Amazon SageMaker gives you plenty of options. You can visit AWS Marketplace, pick an algorithm or a model shared by one of our partners, and deploy it on SageMaker in just a few clicks. Alternatively, you can train a model using one of the built-in algorithms, your own code written for a popular open source ML framework (TensorFlow, PyTorch, or Apache MXNet), or your own custom code packaged in a Docker container.

You can also rely on Amazon SageMaker Autopilot, a game-changing AutoML capability. Whether you have little or no ML experience, or you’re a seasoned practitioner who needs to explore hundreds of datasets, SageMaker Autopilot takes care of everything for you with a single API call. It automatically analyzes your dataset, figures out the type of problem you’re trying to solve, builds several data processing and training pipelines, trains them, and optimizes them for maximum accuracy. The data processing and training source code is available in auto-generated notebooks that you can review and run yourself for further experimentation. SageMaker Autopilot now also creates machine learning models up to 40% faster with up to 200% higher accuracy, even with small and imbalanced datasets.

Another popular feature is Automatic Model Tuning. No more manual exploration, no more costly grid search jobs that run for days: using ML optimization, SageMaker quickly converges to high-performance models, saving you time and money, and letting you deploy the best model to production sooner.

“NerdWallet relies on data science and ML to connect customers with personalized financial products,” says Ryan Kirkman, Senior Engineering Manager. “We chose to standardize our ML workloads on AWS because it allowed us to quickly modernize our data science engineering practices, removing roadblocks and speeding time-to-delivery. With Amazon SageMaker, our data scientists can spend more time on strategic pursuits and focus more energy where our competitive advantage is—our insights into the problems we’re solving for our users.” You can learn more in this case study.

Says Tejas Bhandarkar, Senior Director of Product, Freshworks Platform: “We chose to standardize our ML workloads on AWS because we could easily build, train, and deploy machine learning models optimized for our customers’ use cases. Thanks to Amazon SageMaker, we have built more than 30,000 models for 11,000 customers while reducing training time for these models from 24 hours to under 33 minutes. With SageMaker Model Monitor, we can keep track of data drifts and retrain models to ensure accuracy.
Powered by Amazon SageMaker, Freddy AI Skills is constantly evolving with smart actions, deep-data insights, and intent-driven conversations.”

#3 – Reduce Costs

Building and managing your own ML infrastructure can be costly, and Amazon SageMaker is a great alternative. In fact, we found that the total cost of ownership (TCO) of Amazon SageMaker over a 3-year horizon is over 54% lower compared to other options, and developers can be up to 10 times more productive. This comes from the fact that Amazon SageMaker manages all the training and prediction infrastructure that ML typically requires, allowing teams to focus exclusively on studying and solving the ML problem at hand.

Furthermore, Amazon SageMaker includes many features that help training jobs run as fast and as cost-effectively as possible: optimized versions of the most popular machine learning libraries, a wide range of CPU and GPU instances with up to 100 Gbps networking, and of course Managed Spot Training, which lets you save up to 90% on your training jobs. Last but not least, Amazon SageMaker Debugger automatically identifies complex issues developing in ML training jobs. Unproductive jobs are terminated early, and you can use model information captured during training to pinpoint the root cause.

Amazon SageMaker also helps you slash your prediction costs. Thanks to Multi-Model Endpoints, you can deploy several models on a single prediction endpoint, avoiding the extra work and cost associated with running many low-traffic endpoints. For models that require some hardware acceleration without the need for a full-fledged GPU, Amazon Elastic Inference lets you save up to 90% on your prediction costs. At the other end of the spectrum, large-scale prediction workloads can rely on AWS Inferentia, a custom chip designed by AWS, for up to 30% higher throughput and up to 45% lower cost per inference compared to GPU instances.

Lyft, one of the largest transportation networks in the United States and Canada, launched its Level 5 autonomous vehicle division in 2017 to develop a self-driving system to help millions of riders. Lyft Level 5 aggregates over 10 terabytes of data each day to train ML models for their fleet of autonomous vehicles. Managing ML workloads on their own was becoming time-consuming and expensive. Says Alex Bain, Lead for ML Systems at Lyft Level 5: “Using Amazon SageMaker distributed training, we reduced our model training time from days to a couple of hours. By running our ML workloads on AWS, we streamlined our development cycles and reduced costs, ultimately accelerating our mission to deliver self-driving capabilities to our customers.”

#4 – Build Secure and Compliant ML Systems

Security is always priority #1 at AWS. It’s particularly important to customers operating in regulated industries such as financial services or healthcare, as they must implement their solutions with the highest level of security and compliance. For this purpose, Amazon SageMaker implements many security features, making it compliant with the following global standards: SOC 1/2/3, PCI, ISO, FedRAMP, DoD CC SRG, IRAP, MTCS, C5, K-ISMS, ENS High, OSPAR, and HITRUST CSF. It’s also HIPAA BAA eligible. Says Ashok Srivastava, Chief Data Officer, Intuit: “With Amazon SageMaker, we can accelerate our Artificial Intelligence initiatives at scale by building and deploying our algorithms on the platform.
We will create novel large-scale machine learning and AI algorithms and deploy them on this platform to solve complex problems that can power prosperity for our customers.”

#5 – Annotate Data and Keep Humans in the Loop

As ML practitioners know, turning data into a dataset requires a lot of time and effort. To help you reduce both, Amazon SageMaker Ground Truth is a fully managed data labeling service that makes it easy to annotate and build highly accurate training datasets at any scale (text, image, video, and 3D point cloud datasets).

Says Magnus Soderberg, Director, Pathology Research, AstraZeneca: “AstraZeneca has been experimenting with machine learning across all stages of research and development, and most recently in pathology to speed up the review of tissue samples. The machine learning models first learn from a large, representative data set. Labeling the data is another time-consuming step, especially in this case, where it can take many thousands of tissue sample images to train an accurate model. AstraZeneca uses Amazon SageMaker Ground Truth, a machine learning-powered, human-in-the-loop data labeling and annotation service, to automate some of the most tedious portions of this work, resulting in a reduction of time spent cataloging samples by at least 50%.”

Amazon SageMaker is Evaluated

The hundreds of new features added to Amazon SageMaker since launch are testimony to our relentless innovation on behalf of customers. In fact, the service was highlighted in February 2020 as the overall leader in Gartner’s Cloud AI Developer Services Magic Quadrant. Gartner subscribers can click here to learn more about why we have an overall score of 84/100 in their “Solution Scorecard for Amazon SageMaker, July 2020”, the highest rating among our peer group. According to Gartner, we met 87% of required criteria, 73% of preferred, and 85% of optional.

Announcing a Price Reduction on GPU Instances

To thank our customers for their trust and to show our continued commitment to making Amazon SageMaker the best and most cost-effective ML service, I’m extremely happy to announce a significant price reduction on all ml.p2 and ml.p3 GPU instances. It will apply starting October 1st for all SageMaker components and across the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (London), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-Gov-West).

Instance Name       Price Reduction
ml.p2.xlarge        -11%
ml.p2.8xlarge       -14%
ml.p2.16xlarge      -18%
ml.p3.2xlarge       -11%
ml.p3.8xlarge       -14%
ml.p3.16xlarge      -18%
ml.p3dn.24xlarge    -18%

Getting Started with Amazon SageMaker

As you can see, there are a lot of exciting features in Amazon SageMaker, and I encourage you to try them out! Amazon SageMaker is available worldwide, so chances are you can easily get to work on your own datasets. The service is part of the AWS Free Tier, letting new users work with it for free for hundreds of hours during the first two months. If you’d like to kick the tires, this tutorial will get you started in minutes: you’ll learn how to use SageMaker Studio to build, train, and deploy a classification model based on the XGBoost algorithm (a hedged code sketch follows below).

Last but not least, I just published a book named “Learn Amazon SageMaker“, a 500-page detailed tour of all SageMaker features, illustrated by more than 60 original Jupyter notebooks. It should help you get up to speed in no time.
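For readers who prefer code to clicks, here is a hedged sketch of training the built-in XGBoost algorithm with the SageMaker Python SDK (v2); the IAM role ARN and S3 paths are hypothetical placeholders:

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # hypothetical role

# Resolve the built-in XGBoost container image for the current region.
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.0-1")

xgb = Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/xgb-output",    # hypothetical bucket
    sagemaker_session=session,
)
xgb.set_hyperparameters(objective="binary:logistic", num_round=100)
xgb.fit({"train": "s3://my-bucket/xgb-train"})  # hypothetical training channel

After training completes, calling xgb.deploy(initial_instance_count=1, instance_type="ml.m5.large") would stand up a real-time endpoint for predictions.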
As always, we’re looking forward to your feedback. Please share it with your usual AWS support contacts, or on the AWS Forum for SageMaker. - Julien

article-image-giving-material-angular-io-a-refresh-from-angular-blog-medium
Matthew Emerick
07 Oct 2020
3 min read
Save for later

Giving material.angular.io a refresh from Angular Blog - Medium

Matthew Emerick
07 Oct 2020
3 min read
Hi everyone, I’m Annie and I recently joined the Angular Components team after finishing up my rotations as an Engineering Resident here at Google. During the first rotation of my residency I worked on the Closure Compiler and implemented some new ES2020 features including nullish coalescing and optional chaining. After that, my second rotation project was with the Angular Components team, where I took on giving material.angular.io a long-awaited facelift.

If you have recently visited the Angular Material documentation site, you will have noticed some new visual updates. We’ve included new vibrant images on the components page, updates to the homepage, a guides page revamp, and so much more! Today I would like to highlight how we generated these fun, colorful images.

We were inspired by the illustrations on the Material Design components page, which had aesthetic abstract designs that represented each component. We wanted to adapt the idea for material.angular.io but had some constraints and requirements to consider. First of all, we didn’t have a dedicated illustrator or designer for the project because of the tight deadline of my residency. Second of all, we wanted the images to be compact but clearly showcase each component and its usage. Finally, we wanted to be able to update these images easily when a component’s appearance changed. For the team the choice became clear: we were going to need to build something ourselves to meet these requirements.

While weighing our design options, we decided that we preferred a more realistic view of the components instead of abstract representations. This is where we came up with the idea of creating “scenes” for each component and capturing them as they would appear in use. We needed a way to efficiently capture these components, so we turned to a technique called screenshot testing. Screenshot testing captures an image of the page at a provided URL and compares it to an expected image. Using this technique we were able to generate the scenes for all 35 components. Here’s how we did it (a sketch of the capture step appears at the end of this article):

1. Set up a route for each component that contains a “scene” using the actual Material component
2. Create an end-to-end testing environment and take screenshots of each route with Protractor
3. Save the screenshots instead of comparing them to an expected image
4. Load the screenshots from the site

One of the benefits of our approach is that whenever we update a component, we can just take new screenshots. This process saves incredible amounts of time and effort.

To create each of the scenes we held a mini hackathon to come up with fun ideas! For example, for the button component (top) we wanted to showcase all the different types and styles of buttons available (icon, FAB, raised, etc.). For the button toggle component (bottom) we wanted to show the toggle in both states in a realistic scenario where someone might use a button toggle.

Conclusion

It was really exciting to see the new site go live with all the changes we made, and we hope you enjoy them too! Be sure to check out the site and let us know what your favorite part is! Happy coding, friends!

Giving material.angular.io a refresh was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
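As referenced in the steps above, here is a hedged TypeScript sketch of the capture step; the route scheme, output paths, and helper name are hypothetical, while browser.get and browser.takeScreenshot are standard Protractor calls:

// Protractor e2e sketch: visit a component's scene route and save the screenshot.
import { browser } from 'protractor';
import * as fs from 'fs';

async function captureScene(route: string, fileName: string): Promise<void> {
  await browser.get(`/scenes/${route}`);       // load the scene for one component
  const png = await browser.takeScreenshot();  // base64-encoded PNG string
  fs.writeFileSync(`screenshots/${fileName}.png`, png, 'base64');
}

// e.g. await captureScene('button', 'button-scene');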