
How-To Tutorials - Data

1210 Articles

Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more

Bhagyashree R
24 Apr 2019
6 min read
This Monday, Tesla's "Autonomy Investor Day" kicked off at its headquarters in Palo Alto. At this invitation-only event, Elon Musk, the CEO of Tesla, along with his fellow executives, talked about the company's new microchip, robotaxis hitting the road by next year, and more. Here are some of the key takeaways from the event:

The Full Self-Driving (FSD) computer

Tesla shared details of its new custom chip, the Full Self-Driving (FSD) computer, previously known as Autopilot Hardware 3.0. Musk believes the FSD computer is "the best chip in the world…objectively." Tesla replaced Nvidia's Autopilot 2.5 computer with its own custom chip for Model S and Model X about a month ago; for Model 3 vehicles this change happened about 10 days ago. Musk said, "All cars being produced all have the hardware necessary — computer and otherwise — for full self-driving. All you need to do is improve the software."

FSD is a high-performance, special-purpose chip built by Samsung with a main focus on autonomy and safety. It delivers a 21-fold improvement in frames-per-second processing compared to the previous-generation Tesla Autopilot hardware, which was powered by Nvidia. The company further shared that retrofits will be offered in the next few months to current Tesla owners who bought the 'Full Self-Driving package'. Here's the new Tesla FSD computer:

Credits: Tesla

Musk shared that the company has already started working on a next-generation chip. The design of FSD was completed within two years, and Tesla is now about halfway through the design of its successor.

Musk's claim of having built the best chip can be taken with a pinch of salt, as it could well upset engineers at Nvidia, Mobileye, and other companies that have been in the chip-making market for a long time. Nvidia, in a blog post, along with applauding Tesla for its FSD computer, highlighted a "few inaccuracies" in the comparison Musk made during the event: "It's not useful to compare the performance of Tesla's two-chip Full Self Driving computer against NVIDIA's single-chip driver assistance system. Tesla's two-chip FSD computer at 144 TOPs would compare against the NVIDIA DRIVE AGX Pegasus computer which runs at 320 TOPS for AI perception, localization and path planning."

While pointing out the "inaccuracies", Nvidia did miss a key point here: power consumption. "Having a system that can do 160 TOPS means little if it uses 500 watts while tesla's 144 TOPS system uses 72 watts," a Redditor said.

Robotaxis will hit the roads in 2020

Musk shared that within the next year or so we will see Tesla's robotaxis entering the ride-hailing market, competing with Uber and Lyft. In a bold claim, Musk said that although the robotaxis will let users hail a Tesla for a ride just as with other ride-hailing services, they will not have drivers. He announced, "I feel very confident predicting that there will be autonomous robotaxis from Tesla next year — not in all jurisdictions because we won't have regulatory approval everywhere." He did not share many details on which regulations he was talking about.

The service will allow Tesla owners to add their properly equipped vehicles to Tesla's own ride-sharing app, following a business model similar to Uber or Airbnb. The company will provide a dedicated number of robotaxis in areas where there are not enough loanable cars. Musk predicted that the average robotaxi will be able to yield $30,000 in gross profit per car, annually. Of this profit, about 25% to 30% will go to Tesla, so an owner would be able to make about $21,000 a year.
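As a quick sanity check on the figures above (all of them claims from the event or the Reddit comment rather than verified specifications), here is the back-of-the-envelope arithmetic behind the performance-per-watt comparison and the robotaxi economics:

```python
# Back-of-the-envelope arithmetic for the figures quoted above. All numbers
# are claims from the event or the Reddit comment, not verified specs.

# Performance per watt: Tesla's FSD computer vs. the Redditor's example system.
tesla_tops, tesla_watts = 144, 72
rival_tops, rival_watts = 160, 500  # figures quoted in the Reddit comment

print(f"Tesla FSD: {tesla_tops / tesla_watts:.2f} TOPS/W")  # 2.00 TOPS/W
print(f"Rival:     {rival_tops / rival_watts:.2f} TOPS/W")  # 0.32 TOPS/W

# Robotaxi economics: $30,000 claimed annual gross profit per car,
# with a 25-30% cut going to Tesla.
gross_profit = 30_000
for tesla_cut in (0.25, 0.30):
    print(f"Tesla takes {tesla_cut:.0%} -> owner keeps ${gross_profit * (1 - tesla_cut):,.0f}/year")
# A 30% cut leaves the owner with the $21,000 a year that Musk quoted.
```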
Musk's plans for launching robotaxis next year look ambitious, and experts and the media are quite skeptical. The Partners for Automated Vehicle Education (PAVE) industry group tweeted:

https://twitter.com/PAVECampaign/status/1120436981220237312

https://twitter.com/RobMcCargow/status/1120961462678245376

Musk says "Anyone relying on lidar is doomed"

Musk has been pretty vocal about his dislike of LIDAR, calling the technology "a crutch for self-driving cars". When the topic came up at the event, he said: "Lidar is a fool's errand. Anyone relying on lidar is doomed. Doomed! [They are] expensive sensors that are unnecessary. It's like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it's ridiculous, you'll see."

LIDAR, which stands for Light Detection and Ranging, is used by Uber, Waymo, Cruise, and many other self-driving vehicle makers. LIDAR projects low-intensity, harmless, and invisible laser beams at a target, or, in the case of self-driving cars, all around. The reflected pulses are then measured for return time and wavelength to calculate the distance of an object from the sender. LIDAR can produce quite detailed visualizations of the environment around a self-driving car. However, Tesla believes the same functionality can be delivered by cameras: according to Musk, cameras provide much better resolution and, combined with a neural net, can predict depth very well.

Andrej Karpathy, Tesla's Senior Director of AI, took the stage to explain the limitations of LIDAR. He said, "In that sense, lidar is really a shortcut. It sidesteps the fundamental problems, the important problem of visual recognition, that is necessary for autonomy. It gives a false sense of progress and is ultimately a crutch. It does give, like, really fast demos!" Karpathy further added, "You were not shooting lasers out of your eyes to get here."

While true, many felt the reasoning was flawed. A Redditor in a discussion thread said, "Musk's argument that 'you drove here using your own two eyes with no lasers coming out of them' is reductive and flawed. It should be obvious to anyone that our eyes are more complex than simple stereo cameras. If the Tesla FSD system can reliably perceive depth at or above the level of the human eye in all conditions, then they have done something truly remarkable. Judging by how Andrej Karpathy deflected the question about how well the system works in snowy conditions, I would assume they have not reached that level."

Check out the live stream of the autonomy day on Tesla's official website.

Tesla v9 to incorporate neural networks for autopilot

Tesla is building its own AI hardware for self-driving cars

Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine

OpenAI Five bots destroyed human Dota 2 players this weekend

Richard Gall
23 Apr 2019
3 min read
Last week, the team at OpenAI made it possible for humans to play the OpenAI Five bot at Dota 2 online. The results were staggering: over a period of just a few days, from April 18 to April 21, OpenAI Five had a win rate of 99.4%, winning 7,215 games (a figure that includes humans giving up and abandoning their games 3,140 times) and losing only 42. But perhaps we shouldn't be that surprised. The artificial intelligence bot did, after all, defeat OG, one of the best e-sports teams on the planet, earlier this month.

https://twitter.com/OpenAI/status/1120421259274334209

What does OpenAI Five's Dota 2 dominance tell us about artificial intelligence?

The dominance of OpenAI Five over the weekend is important because it indicates that it is possible to build artificial intelligence that can handle complex strategic decision-making consistently. Indeed, that's what sets this experiment apart from other artificial intelligence gaming challenges: from the showdown with OG to DeepMind's AlphaZero defeating professional Go and chess players, bots typically play individuals or small teams of players. By taking on the world, OpenAI would appear to have developed an artificial intelligence system that a large group of intelligent humans with specific domain experience have found consistently difficult to out-think.

Learning how to win

The key issue when it comes to artificial intelligence and games, Dota 2 or otherwise, is the bot's ability to learn. One Dota 2 gamer, quoted on a Reddit thread, said "the bots are locked, they are not learning, but we humans are. We will win." This is true, up to a point. The reality is that they aren't locked: they are, in fact, continually learning, processing the consequences of every decision and feeding them back into the system. And although adaptability will remain an issue for any artificial intelligence system, the more games it plays and the more strategies it 'learns', the more adaptability it essentially builds into its system. This is something OpenAI CTO Greg Brockman noted when responding to suggestions that OpenAI Five's tiny proportion of defeats indicates a lack of adaptability: "When we lost at The International (100% vs pro teams), they said it was because Five can't do strategy. So we trained for longer. When we lose (0.7% vs the entire Internet), they say it's because Five can't adapt."

https://twitter.com/gdb/status/1119963994754670594

It's important to remember that this doesn't necessarily signal that much about the possibility of Artificial General Intelligence. OpenAI Five's decision-making power is centered on a very specific domain, even if it is a relatively complex one. However, it does highlight that the relationship between video games and artificial intelligence is particularly important. On the one hand, video games are a space that can help us develop AI further and explore the boundaries of what's possible. But equally, AI will likely evolve the way we think about gaming, and esports, too.

Read next: How Artificial Intelligence and Machine Learning can turbocharge a Game Developer's career
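The headline win rate follows directly from the game counts reported above; a quick check of the arithmetic:

```python
# Verify the reported 99.4% win rate from the game counts above.
wins, losses = 7_215, 42
print(f"Win rate: {wins / (wins + losses):.1%}")  # 99.4%
```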

AI Now Institute publishes a report on the diversity crisis in AI and offers 12 solutions to fix it

Bhagyashree R
22 Apr 2019
7 min read
Earlier this month, the AI Now Institute published a report, authored by Sarah Myers West, Meredith Whittaker, and Kate Crawford, highlighting the link between the diversity problem in the current AI industry and the discriminatory behavior of AI systems. The report also recommends solutions that companies and the researchers behind these systems should adopt to address these issues.

Sarah Myers West is a postdoc researcher at the AI Now Institute and an affiliate researcher at the Berkman-Klein Center for Internet and Society. Meredith Whittaker is the co-founder of the AI Now Institute and leads Google's Open Research Group and the Google Measurement Lab. Kate Crawford is a Principal Researcher at Microsoft Research and the co-founder and Director of Research at the AI Now Institute. Kate Crawford tweeted about this study.

https://twitter.com/katecrawford/status/1118509988392112128

The AI industry lacks diversity, gender neutrality, and bias-free systems

In recent years, we have come across several cases of "discriminating systems". Facial recognition systems miscategorize black people and sometimes fail to work for trans drivers. When trained on online discourse, chatbots easily learn racist and misogynistic language. This behavior by machines is actually a reflection of society: "In most cases, such bias mirrors and replicates existing structures of inequality in the society," says the report.

The study also sheds light on gender bias in the current workforce. According to the report, only 18% of authors at some of the biggest AI conferences are women; men account for the remaining 80-odd percent. At the tech giants Facebook and Google, women make up a meager 15% and 10% of AI research staff, respectively. The situation for black workers in the AI industry looks even worse: while black workers make up 4% of the current workforce at Facebook and Microsoft, Google stands at just 2.5%. Also, the vast majority of AI studies assume gender is binary and commonly assign people as 'male' or 'female' based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.

The report further reveals that, though there have been various "pipeline studies" tracking the flow of diverse job candidates, they have failed to show substantial progress in bringing diversity to the AI industry. "The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether," the report reads.

What steps can industries take to address bias and discrimination in AI systems?

The report lists 12 recommendations that AI researchers and companies should employ to improve workplace diversity and address bias and discrimination in AI systems:

1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
6. Increase the number of people of color, women, and other under-represented groups at senior leadership levels of AI companies across all departments.
7. Ensure executive incentive structures are tied to increases in hiring and retention of under-represented groups.
8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.
9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.
10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.
11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

Credits: AI Now Institute

Bringing diversity to the AI workforce

To address the diversity problem in the AI industry, companies need to change their current hiring practices. They should have a more equitable focus on under-represented groups: people of color, women, and other under-represented groups should get a fair chance to reach senior leadership levels of AI companies across all departments, and further opportunities should be created for contractors, temps, and vendors to become full-time employees. To bridge the pay gap in the AI industry, it is important that companies maintain transparency about compensation levels, including bonuses and equity, broken down by race and gender.

In the past few years, several cases of sexual misconduct involving some of the biggest companies, like Google and Microsoft, have come to light thanks to movements like #MeToo and the Google Walkout. These movements gave the victims and other supporting employees the courage to speak out against senior employees who took undue advantage of their power. There have been cases where sexual harassment complaints were not taken seriously by HR and victims were told to just "get over it". This is why companies should publish harassment and discrimination transparency reports that include the number and types of claims made and the actions the company took.

Academic workplaces should ensure diversity in all AI-related departments and conference committees. In the past, some of the biggest AI conferences, like the Neural Information Processing Systems conference, have failed to provide a welcoming and safe environment for women. In a survey conducted last year, many respondents shared that they had experienced sexual harassment, and women reported persistent advances from men at the conference. The organizers of such conferences should ensure an inclusive and welcoming environment for everyone.

Addressing bias and discrimination in AI systems

To address bias and discrimination in AI systems, the report recommends rigorous testing across the lifecycle of these systems: pre-release trials, independent auditing, and ongoing monitoring to check for bias, discrimination, and other harms. Looking at the social implications of AI systems, just addressing algorithmic bias is not enough. "The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise," says the report. While assessing an AI system, researchers and developers should also ask whether designing the system is warranted at all, considering the risks it poses.

The study calls for re-evaluating the AI systems currently used for classifying, detecting, and predicting race and gender. The idea of identifying race or gender just by appearance is flawed and can easily be abused, especially in systems that use physical appearance to infer interior states, for instance those that claim to detect sexuality from headshots. Such systems urgently need to be re-examined.

To know more in detail, read the full report: Discriminating Systems.

Microsoft's #MeToo reckoning: female employees speak out against workplace harassment and discrimination

Desmond U. Patton, Director of SAFElab shares why AI systems should be a product of interdisciplinary research and diverse teams

Google's Chief Diversity Officer, Danielle Brown resigns to join HR tech firm Gusto

The hands-on guide to Machine Learning with R by Brett Lantz

Packt Editorial Staff
22 Apr 2019
3 min read
If science fiction stories are to be believed, the invention of Artificial Intelligence inevitably leads to apocalyptic wars between machines and their makers. Thankfully, at the time of this writing, machines still require user input. Though your impressions of Machine Learning may be colored by these mass-media depictions, today's algorithms are too application-specific to pose any danger of becoming self-aware. The goal of today's Machine Learning is not to create an artificial brain, but rather to assist us in making sense of the world's massive data stores.

Conceptually, the learning process involves the abstraction of data into a structured representation, and the generalization of that structure into action that can be evaluated for utility. In practical terms, a machine learner uses data containing examples and features of the concept to be learned, then summarizes this data in the form of a model, which is used for predictive or descriptive purposes. The field of machine learning provides a set of algorithms that transform data into actionable knowledge. Among the many possible methods, machine learning algorithms are chosen on the basis of the input data and the learning task. This makes machine learning well suited to the present-day era of big data.

Machine Learning with R, Third Edition introduces you to the fundamental concepts that define and differentiate the most commonly used machine learning approaches, and shows how easy it is to use R to start applying machine learning to real-world problems. Many of the algorithms needed for machine learning are not included in the base R installation; instead, they are made available by a large community of experts who share their work freely. These powerful tools can be downloaded at no cost, but must be installed on top of base R manually. This book covers a selection of R's machine learning packages and will get you up to speed with the machine learning landscape in R.

Machine Learning with R, Third Edition updates the classic R data science book with newer and better libraries, advice on ethical and bias issues in machine learning, and an introduction to deep learning. Whether you are an experienced R user or new to the language, Brett Lantz teaches you everything you need to uncover key insights, make new predictions, and visualize your findings.

Introduction to Machine Learning with R

Machine Learning with R

How to make machine learning based recommendations using Julia [Tutorial]
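The book itself works in R, but the loop described above (examples and features in, a model out, predictions back) is language-agnostic. Purely as an illustrative sketch, here is that same shape of workflow in Python with scikit-learn, using the bundled iris dataset as a stand-in for real data:

```python
# A minimal sketch of the learn-then-predict loop described above,
# using scikit-learn's bundled iris dataset for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Data containing examples and features of the concept to be learned.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Summarize the data in the form of a model...
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)

# ...which is then used for predictive purposes.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```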

EU approves labour protection laws for ‘Whistleblowers’ and ‘Gig economy’ workers with implications for tech companies

Savia Lobo
17 Apr 2019
5 min read
The European Union approved two new labour protection laws recently, this time for two groups that rarely make headlines: whistleblowers, and those earning their income via the 'gig economy'. Under the new law, whistleblowers receive increased protection through landmark legislation aimed at encouraging reports of wrongdoing. For those working 'on-demand' jobs, in what is termed the gig economy, the law sets minimum rights and demands increased transparency. Let's have a brief look at each of the newly approved laws.

Whistleblowers' shield against retaliation

On Tuesday, the EU parliament approved a new law safeguarding whistleblowers from any retaliation within an organization. The law protects whistleblowers against dismissal, demotion, and other forms of punishment. "The law now needs to be approved by EU ministers. Member states will then have two years to comply with the rules," the EU proposal states. Transparency International calls this "pathbreaking legislation" that will also give employees "greater legal certainty around their rights and obligations".

The new law creates a safe channel that allows whistleblowers to report a breach of EU law both within an organization and to public authorities. "It is the first time whistleblowers have been given EU-wide protection. The law was approved by 591 votes, with 29 votes against and 33 abstentions," the BBC reports. In cases where no appropriate action is taken by the organization's authorities even after reporting, whistleblowers are allowed to disclose the wrongdoing publicly by going to the media.

European Commission Vice President Frans Timmermans says, "potential whistleblowers are often discouraged from reporting their concerns or suspicions for fear of retaliation. We should protect whistleblowers from being punished, sacked, demoted or sued in court for doing the right thing for society." He further added, "This will help tackle fraud, corruption, corporate tax avoidance and damage to people's health and the environment."

"The European Commission says just 10 members - France, Hungary, Ireland, Italy, Lithuania, Malta, the Netherlands, Slovakia, Sweden, and the UK - had a "comprehensive law" protecting whistleblowers," the BBC reports. "Attempts by some states to water down the reform earlier this year were blocked at an early stage of the talks with Luxembourg, Ireland, and Hungary seeking to have tax matters excluded. However, a coalition of EU states, including Germany, France, and Italy, eventually prevailed in keeping tax revelations within the proposal," Reuters reports. "If member states fail to properly implement the law, the European Commission can take formal disciplinary steps against the country and could ultimately refer the case to the European Court of Justice," the BBC reports.

To know more about this new law for whistleblowers, read the official proposal.

EU grants protection to workers in the gig economy (casual or short-term employment)

In a vote on Tuesday, the Members of the European Parliament (MEPs) approved minimum rights for workers with on-demand, voucher-based, or platform jobs, such as those at Uber or Deliveroo. Genuinely self-employed workers, however, would be excluded from the new rules. "The law states that every person who has an employment contract or employment relationship as defined by law, collective agreements or practice in force in each member state should be covered by these new rights," the BBC reports. "This would mean that workers in casual or short-term employment, on-demand workers, intermittent workers, voucher-based workers, platform workers, as well as paid trainees and apprentices, deserve a set of minimum rights, as long as they meet these criteria and pass the threshold of working 3 hours per week and 12 hours per 4 weeks on average," according to the EU's official website. All such workers need to be informed of their conditions from day one as a general principle, and no later than seven days where justified.

The specific set of rights covering new forms of employment includes:

- Workers with on-demand contracts or similar forms of employment should benefit from a minimum level of predictability, such as predetermined reference hours and reference days. They should also be able to refuse, without consequences, an assignment outside predetermined hours, or be compensated if the assignment was not cancelled in time.
- Member states shall adopt measures to prevent abusive practices, such as limits on the use and duration of the contract.
- The employer should not prohibit, penalize, or hinder workers from taking jobs with other companies if this falls outside the work schedule established with that employer.

Enrique Calvet Chambon, the MEP responsible for seeing the law through, said, "This directive is the first big step towards the implementation of the European Pillar of Social Rights, affecting all EU workers. All workers who have been in limbo will now be granted minimum rights thanks to this directive, and the European Court of Justice rulings; from now on no employer will be able to abuse the flexibility in the labour market."

To know more about this new law on the gig economy, visit the EU's official website.

19 nations including The UK and Germany give thumbs-up to EU's Copyright Directive

Facebook discussions with the EU resulted in changes of its terms and services for users

The EU commission introduces guidelines for achieving a 'Trustworthy AI'

Wikileaks founder, Julian Assange, arrested for “conspiracy to commit computer intrusion”

Savia Lobo
12 Apr 2019
6 min read
Julian Assange, the Wikileaks founder, was arrested yesterday in London in accordance with the U.S./UK Extradition Treaty. He was charged with assisting Chelsea Manning, a former intelligence analyst in the U.S. Army, in cracking a password on a classified U.S. government computer.

The indictment states that in March 2010, Assange assisted Manning by cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications. As an intelligence analyst, Manning had access to certain computers and used these to download classified records to transmit to WikiLeaks. "Cracking the password would have allowed Manning to log on to the computers under a username that did not belong to her. Such a deceptive measure would have made it more difficult for investigators to determine the source of the illegal disclosures," the indictment states.

"Manning confessed to leaking more than 725,000 classified documents to WikiLeaks following her deployment to Iraq in 2009—including battlefield reports and five Guantanamo Bay detainee profiles," Gizmodo reports. In 2013, Manning was convicted of leaking the classified U.S. government documents to WikiLeaks. She was jailed in early March this year as a recalcitrant witness after she refused to answer a grand jury's questions. According to court filings, after Manning's arrest she was held in solitary confinement in a Virginia jail for nearly a month.

Following Assange's arrest, a Swedish software developer and digital privacy activist, Ola Bini, who is allegedly close to Assange, was also detained. "The official said they are looking into whether he was part of a possible effort by Assange and Wikileaks to blackmail Ecuador's President, Lenin Moreno," the Washington Post reports. Bini was detained at Quito's airport as he was preparing to board a flight to Japan. Martin Fowler, a British software developer and renowned author and speaker, tweeted about Bini's arrest, saying that Bini is a strong advocate and developer supporting privacy, and that he has not been able to speak to any lawyers.

https://twitter.com/martinfowler/status/1116520916383621121

Following Assange's arrest, Hillary Clinton, the Democratic nominee in the 2016 presidential election, said, "The bottom line is that he has to answer for what he has done". "WikiLeaks' publication of Democratic emails stolen by Russian intelligence officers during the 2016 election season hurt Clinton's presidential campaign," the Washington Post reports.

Assange, who is an Australian citizen, was dragged out of Ecuador's embassy in London after his seven-year asylum was revoked. He had been granted asylum by former Ecuadorian President Rafael Correa in 2012 for publishing sensitive information about U.S. national security interests. Australian PM Scott Morrison told the Australian Broadcasting Corp. that the charge is a "matter for the United States" and has nothing to do with Australia. Assange was granted asylum just after "he was released on bail while facing extradition to Sweden on sexual assault allegations. The accusations have since been dropped but he was still wanted for jumping bail," the Washington Post states. A Swedish woman alleged that she was raped by Julian Assange during a visit to Stockholm in 2010.

After Assange's arrest on Thursday, Elisabeth Massi Fritz, the lawyer for the unnamed woman, said in a text message sent to The Associated Press that "we are going to do everything" to have the Swedish case reopened "so Assange can be extradited to Sweden and prosecuted for rape." She further added, "no rape victim should have to wait nine years to see justice be served." "In 2017, Sweden's top prosecutor dropped a long-running inquiry into a rape claim against Assange, saying there was no way to have Assange detained or charged within a foreseeable future because of his protected status inside the embassy," the Washington Post reports.

In a tweet, Wikileaks posted a photo of Assange with the words: "This man is a son, a father, a brother. He has won dozens of journalism awards. He's been nominated for the Nobel Peace Prize every year since 2010. Powerful actors, including CIA, are engaged in a sophisticated effort to dehumanize, delegitimize and imprison him. #ProtectJulian."

https://twitter.com/wikileaks/status/1116283186860953600

Duncan Ross, a data philanthropist, tweeted, "Random thoughts on Assange: 1) journalists don't have to be nice people but 2) being a journalist (if he is) doesn't put you above the law."

https://twitter.com/duncan3ross/status/1116610139023237121

Edward Snowden, the former security contractor who leaked classified information about U.S. surveillance programs, says the arrest of WikiLeaks founder Julian Assange is a blow to media freedom. "Assange's critics may cheer, but this is a dark moment for press freedom," he tweets.

According to the Washington Post, in an interview with The Associated Press, Rafael Correa, Ecuador's former president, was harshly critical of his successor's decision to expel the Wikileaks founder from Ecuador's embassy in London. He said that "although Julian Assange denounced war crimes, he's only the person supplying the information," and asked, "It's the New York Times, the Guardian and El Pais publishing it. Why aren't those journalists and media owners thrown in jail?"

Yanis Varoufakis, economics professor and former Greek finance minister, tweeted, "It was never about Sweden, Putin, Trump or Hillary. Assange was persecuted for exposing war crimes. Will those duped so far now stand with us in opposing his disappearance after a fake trial where his lawyers will not even know the charges?"

https://twitter.com/yanisvaroufakis/status/1116308671645061120

The Democracy in Europe Movement 2025 (@DiEM_25) tweeted that Assange's arrest is "a chilling demonstration of the current disregard for human rights and freedom of speech by establishment powers and the rising far-right." The movement has also put up a petition against Assange's extradition.

https://twitter.com/DiEM_25/status/1116379013461815296

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

A security researcher reveals his discovery on 800+ Million leaked Emails available online

Leaked memo reveals that Facebook has threatened to pull investment projects from Canada and Europe if their data demands are not met

Katie Bouman unveils the first ever black hole image with her brilliant algorithm

Amrata Joshi
11 Apr 2019
11 min read
Remember how we got to see the supermassive black hole in the movie Interstellar? Well, that wasn't for real. We know that black holes end up sucking in everything that gets too close, even light. A black hole's event horizon casts a shadow, and that shadow is enough to answer a lot of questions attached to black hole theory; scientists and researchers have been working for years to get that one image to ground their research. Now comes the big news: a team of astronomers, engineers, researchers, and scientists has managed to capture the first-ever image of a black hole, located in a distant galaxy. It measures 40 billion km across, three million times the size of the Earth. The team describes it as "a monster", and it was photographed by a network of eight telescopes across the world. In this article, we give you a glimpse of how the image of the black hole was captured.

Katie Bouman, a PhD student at MIT, gave a TED talk discussing the efforts of the team of researchers, engineers, astronomers, and scientists to capture the first-ever image of a black hole. Katie is part of an international team of astronomers who worked on creating the world's largest telescope, the Event Horizon Telescope, to take that first picture. She led the development of a computer program that made the impossible possible, and she started working on the algorithm three years ago while she was a graduate student.

https://twitter.com/jenzhuscott/status/1115987618464968705

Katie wrote in the caption to one of her Facebook posts, "Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed."

https://twitter.com/MIT_CSAIL/status/1116035007406116864

Further, she explains how the stars we see in the sky basically orbit an invisible object, and according to the astronomers, the only thing that can cause this motion of the stars is a supermassive black hole.

Zooming in at radio wavelengths to see a ring of light

"Well, it turns out that if we were to zoom in at radio wavelengths, we'd expect to see a ring of light caused by the gravitational lensing of hot plasma zipping around the black hole. Is it possible to see something that, by definition, is impossible to see?" -Katie Bouman

If we look closely, we can see that the black hole casts a shadow on the backdrop of bright material, carving out a sphere of darkness. The bright ring reveals the black hole's event horizon, where the gravitational pull becomes so powerful that even light can't escape. Einstein's equations predict the size and shape of this ring, and taking a picture of it would help verify that these equations hold in the extreme conditions around the black hole.

Capturing a black hole needs a telescope the size of the Earth

"So how big of a telescope do we need in order to see an orange on the surface of the moon and, by extension, our black hole? Well, it turns out that by crunching the numbers, you can easily calculate that we would need a telescope the size of the entire Earth." -Katie Bouman

Bouman further explains that the black hole is so far away from Earth that its ring appears incredibly small, as small as an orange on the surface of the moon, and this makes it very difficult to photograph. There are fundamental limits to the smallest objects we can see because of diffraction.
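Her "telescope the size of the entire Earth" line can be checked with the standard diffraction-limit formula, θ ≈ 1.22 λ/D. The numbers below (the EHT's roughly 1.3 mm observing wavelength and a target resolution of about 25 microarcseconds) are reference figures assumed for this sketch rather than quotes from the talk:

```python
import math

# Diffraction limit: theta ~ 1.22 * wavelength / aperture_diameter.
# Assumed reference figures, not quoted in the article: the EHT observes
# at ~1.3 mm, and the black hole's ring spans a few tens of microarcseconds.
wavelength = 1.3e-3                           # metres
theta = 25e-6 / 3600 * math.pi / 180          # 25 microarcseconds in radians

aperture_needed = 1.22 * wavelength / theta   # metres
earth_diameter = 1.2742e7                     # metres

print(f"Aperture needed:  {aperture_needed / 1e3:,.0f} km")  # ~13,000 km
print(f"Earth's diameter: {earth_diameter / 1e3:,.0f} km")   # ~12,742 km
```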
So the astronomers realized that they needed to make their telescope bigger and bigger; even the most powerful optical telescopes could not get close to the resolution necessary to image the surface of the moon. She showed the audience one of the highest-resolution images of the moon ever taken from Earth, which contained around 13,000 pixels, with each pixel covering over 1.5 million oranges.

Capturing the black hole turned into reality by connecting telescopes

"And so, my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements." -Katie Bouman

According to Bouman, we would require a telescope the size of the Earth to see an orange on the surface of the moon, so capturing a black hole seemed imaginary back then; it was nearly impossible to build a telescope that powerful. Bouman highlighted the famous words of Mick Jagger, "You can't always get what you want, but if you try sometimes, you just might find you get what you need." Capturing the black hole turned into a reality by connecting telescopes from around the world. The Event Horizon Telescope, an international collaboration, created a computational telescope the size of the Earth, capable of resolving structure on the scale of a black hole's event horizon. The setup was such that each telescope in the worldwide network worked together. The researcher teams at each of the sites collected thousands of terabytes of data, which was then processed in a lab in Massachusetts.

Let's understand this in depth by imagining that we could build an Earth-sized telescope, and that the Earth is a spinning disco ball in which each mirror of the ball collects light that can be combined to form a picture. If most of those mirrors are removed, a few will remain; it is still possible to combine the information, but now there will be a lot of holes. The remaining mirrors represent the locations where the telescopes were set up. This seems like a small number of measurements to make a picture from, but it is effective: light gets collected at only a few telescope locations, yet as the Earth rotates, new measurements become available. So, as the disco ball spins, the mirrors change locations and the astronomers get to observe different parts of the image. The imaging algorithms developed by the experts, scientists, and researchers fill in the missing gaps of the disco ball in order to reconstruct the underlying black hole image.

Katie Bouman said, "If we had telescopes located everywhere on the globe -- in other words, the entire disco ball -- this would be trivial. However, we only see a few samples, and for that reason, there are an infinite number of possible images that are perfectly consistent with our telescope measurements."

According to Bouman, not all images are created equal: some of those images look more like what the astronomers, scientists, and researchers think of as images than others. Bouman's role in helping to take the first image of the black hole was to design the algorithms that find the most reasonable image that fits the telescope measurements. The imaging algorithms she developed used the limited telescope data to guide the astronomers to a picture; with their help, it was possible to bring together the pieces of the picture from the sparse and noisy data.
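As a toy illustration of that idea (this is not the team's actual CHIRP algorithm; the grid size, sampling fraction, and priors below are all invented for the sketch), here is a minimal reconstruction of a ring-shaped image from a sparse subset of its Fourier-domain measurements, alternating between consistency with the measured samples and a simple image-domain prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "source": a bright ring on a 64x64 grid, standing in for the real sky.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
truth = ((xx**2 + yy**2 > 100) & (xx**2 + yy**2 < 220)).astype(float)

# A sparse interferometer samples only some spatial frequencies; model the
# "holes in the disco ball" with a random mask over 15% of Fourier space.
mask = rng.random((n, n)) < 0.15
measured = np.fft.fft2(truth) * mask  # the only data we "observe"

# Alternate between (a) consistency with the measured Fourier samples and
# (b) an image-domain prior (the image is real and nonnegative).
img = np.zeros((n, n))
for _ in range(200):
    spectrum = np.fft.fft2(img)
    spectrum[mask] = measured[mask]   # (a) keep the measured samples
    img = np.fft.ifft2(spectrum).real
    img = np.clip(img, 0, None)       # (b) flux cannot be negative

err = np.linalg.norm(img - truth) / np.linalg.norm(truth)
print(f"Relative reconstruction error: {err:.2f}")
```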
How the algorithm was used to create the black hole image

"I'd like to encourage all of you to go out and help push the boundaries of science, even if it may at first seem as mysterious to you as a black hole." -Katie Bouman

There is an infinite number of possible images that perfectly explain the telescope measurements, and the astronomers and researchers have to choose between them. They do this by ranking the images based on how likely they are to be the black hole image, and then selecting the most likely one. Bouman explained this with an example: "Let's say we were trying to make a model that told us how likely an image were to appear on Facebook. We'd probably want the model to say it's pretty unlikely that someone would post this noise image on the left, and pretty likely that someone would post a selfie like this one on the right. The image in the middle is blurry, so even though it's more likely we'd see it on Facebook compared to the noise image, it's probably less likely we'd see it compared to the selfie."

When it comes to images of a black hole, this gets confusing for the astronomers and researchers, as they have never seen one before. It is difficult to rely on any previous theory for these images, and even difficult to rely entirely on simulation images for comparison. She said, "What is a likely black hole image, and what should we assume about the structure of black holes? We could try to use images from simulations we've done, like the image of the black hole from 'Interstellar,' but if we did this, it could cause some serious problems. What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the center of our galaxy."

Different types of images have distinct features, so it is quite possible to tell the difference between black hole simulation images and images captured by the team. The researchers need to let the algorithms know what images look like without imposing one type of image's features too strongly. One way to do this is to impose the features of different kinds of images and then look at how the assumed image type affects the reconstruction of the final image. The researchers and astronomers become more confident in their image assumptions if all the image types produce a very similar-looking image. She said, "This is a little bit like giving the same description to three different sketch artists from all around the world. If they all produce a very similar-looking face, then we can start to become confident that they're not imposing their own cultural biases on the drawings."

It is possible to impose different image features by using pieces of existing images. The astronomers and researchers took a large collection of images and broke them down into little image patches, then treated each image patch like a piece of a puzzle, using commonly seen puzzle pieces to piece together an image that also fits their telescope measurements. She said, "Let's first start with black hole image simulation puzzle pieces. OK, this looks reasonable. This looks like what we expect a black hole to look like. But did we just get it because we just fed it little pieces of black hole simulation images?" If we instead take a set of puzzle pieces from everyday images, like the ones we take with our own personal cameras, and we get the same image from all the different sets of puzzle pieces, we can become more confident that our image assumptions aren't biasing the final image.

According to Bouman, another check is to take the same set of puzzle pieces, such as those derived from everyday images, and use them to reconstruct many different kinds of source images. "So in our simulations," she said, "we pretend a black hole looks like astronomical non-black hole objects, as well as everyday images like the elephant in the center of our galaxy." When the results of the algorithms look very similar to the simulated image, the researchers and astronomers become more confident in their algorithms. She emphasized that all of these pictures were created by piecing together little pieces of everyday photographs; an image of a black hole, which we have never seen before, can be created by piecing together pictures we see all the time, like images of people, buildings, trees, cats, and dogs.

She concluded by appreciating the efforts of her team: "But of course, getting imaging ideas like this working would never have been possible without the amazing team of researchers that I have the privilege to work with. It still amazes me, because I began this project with no background in astrophysics. Big projects like the Event Horizon Telescope are successful due to all the interdisciplinary expertise different people bring to the table." This project will surely encourage the many researchers, engineers, astronomers, and students who doubt themselves but have the potential to make the impossible possible.

https://twitter.com/fchollet/status/1116294486856851459

Is the YouTube algorithm's promoting of #AlternativeFacts like Flat Earth having a real-world impact?

YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers

Using Genetic Algorithms for optimizing your models [Tutorial]

Online Safety vs Free Speech: UK’s "Online Harms" white paper divides the internet and puts tech companies in government crosshairs

Fatema Patrawala
10 Apr 2019
10 min read
The internet is an integral part of everyday life for so many people, and it has added a new dimension to the spaces of imagination in which we all live. But it seems the problems of the offline world have moved there, too. As the internet continues to grow and transform our lives, often for the better, we should not ignore the very real harms people face online every day, and lawmakers around the world are taking decisive action to make people safer online.

On Monday, Europe drafted an EU regulation on preventing the dissemination of terrorist content online. Last week, the Australian parliament passed legislation to crack down on violent videos on social media. Recently, Sen. Elizabeth Warren, a US 2020 presidential hopeful, proposed strong anti-trust laws to break up big tech companies like Amazon, Google, Facebook, and Apple; on 3rd April she introduced the Corporate Executive Accountability Act, a new piece of legislation that would make it easier to criminally charge company executives when Americans' personal data is breached. Last year, the German parliament enacted the NetzDG law, requiring large social media sites to remove posts that violate certain provisions of the German code, including broad prohibitions on "defamation of religion," "hate speech," and "insult."

And here's yet another tech regulation announcement: on Monday, the UK government published a white paper on online harms. The Department for Digital, Culture, Media and Sport (DCMS) has proposed an independent watchdog that will write a "code of practice" for tech companies. According to Jeremy Wright, Secretary of State for Digital, Media & Sport, and Sajid Javid, Home Secretary, "nearly nine in ten UK adults and 99% of 12 to 15 year olds are online. Two thirds of adults in the UK are concerned about content online, and close to half say they have seen hateful content in the past year. The tragic recent events in New Zealand show just how quickly horrific terrorist and extremist content can spread online." They further emphasized not allowing such harmful behaviours and content to undermine the significant benefits that the digital revolution can offer.

The white paper therefore puts forward ambitious plans for a new system of accountability and oversight for tech companies, moving far beyond self-regulation. It includes a new regulatory framework for online safety which will clarify companies' responsibilities to keep UK users safer online, with the most robust action to counter illegal content and activity. The paper suggests three major steps for tech regulation:

- establishing an independent regulator that can write a "code of practice" for social networks and internet companies
- giving the regulator enforcement powers, including the ability to fine companies that break the rules
- considering additional enforcement powers, such as the ability to fine company executives and force internet service providers to block sites that break the rules

Outlining the proposals, Culture Secretary Jeremy Wright discussed the fine percentage with BBC UK: "If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of company's turnover... we think we should be looking at something comparable here."

What kinds of 'online harms' does the paper cite?

The paper covers a range of issues that are clearly defined in law, such as spreading terrorist content, child sex abuse, so-called revenge pornography, hate crimes, harassment, and the sale of illegal goods.
It also covers harmful behaviour with a less clear legal definition, such as cyber-bullying, trolling, and the spread of fake news and disinformation.

On CSEA (Child Sexual Exploitation and Abuse), the paper cites over 18.4 million referrals of child sexual abuse material made by US tech companies to the National Center for Missing and Exploited Children (NCMEC) in 2018. Of those, 113,948 were UK-related referrals, up from 82,109 in 2017. In the third quarter of 2018, Facebook reported removing 8.7 million pieces of content globally for breaching policies on child nudity and sexual exploitation.

Another type of online harm occurs when terrorists use online services to spread their vile propaganda and mobilise support. The paper emphasizes that terrorist content online threatens the UK's national security and the safety of the public: all five terrorist attacks in the UK during 2017 had an online element, and online terrorist content remains a feature of contemporary radicalisation. It is seen across terrorist investigations, including cases where suspects have become very quickly radicalised to the point of planning attacks, partly as a result of the continued availability and deliberately attractive format of the terrorist material they access online.

Further, the paper suggests that social networks must tackle material that advocates self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017. After she died, her family found distressing material about depression and suicide on her Instagram account, and Molly's father Ian Russell holds the social media giant partly responsible for her death. Home Secretary Sajid Javid said tech giants and social media companies had a moral duty "to protect the young people they profit from": "Despite our repeated calls to action, harmful and illegal content - including child abuse and terrorism - is still too readily available online."

What does the new proposal suggest to tackle online harm?

The paper calls for an independent regulator to hold internet companies to account, though it does not specify whether a new body will be established or an existing one will be handed new powers. The regulator will define a "code of best practice" that social networks and internet companies must adhere to. It applies to tech companies like Facebook, Twitter, and Google, and the rules would also apply to messaging services such as WhatsApp and Snapchat, and to cloud storage services. The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules. The paper suggests the government is also considering fines for individual company executives, making search engines remove links to offending websites, and consulting on blocking harmful websites.

Another area discussed in the paper is developing a culture of transparency, trust, and accountability as a critical element of the new regulatory framework. The regulator will have the power to require annual transparency reports from companies in scope, outlining the prevalence of harmful content on their platforms and the measures they are taking to address it. These reports will be published online by the regulator, so that users can make informed decisions about online use. Additionally, the paper suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources.
How it plans to deploy technology as part of the solution

The paper says companies should invest in the development of safety technologies to reduce the burden on users to stay safe online. In November 2018, the UK Home Secretary co-hosted a hackathon with five major technology companies to develop a new tool to identify online grooming; the paper proposes that this tool be licensed for free to other companies, with more such collaborative efforts planned. The government also plans to work with industry and civil society to develop a safety-by-design framework, linking up with existing legal obligations around data protection by design and secure-by-design principles. This will make it easier for startups and small businesses to embed safety when developing or updating products and services.

The government also plans to explore how AI can best be used to detect, measure, and counter online harms, while ensuring its deployment remains safe and ethical. A new project led by the Turing Institute sets out to address this issue: the 'Hate Speech: Measures and Counter-measures' project will use a mix of natural language processing techniques and qualitative analyses to create tools that identify and categorize different strengths and types of online hate speech. Other plans include launching online safety apps that combine state-of-the-art machine-learning technology to track children's activity on their smartphone with the ability for children to self-report their emotional state.
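The white paper gives no implementation details for the Turing project, but as a rough sketch of where such hate-speech classification work typically starts (a supervised text classifier trained on an annotated corpus; the tiny inline dataset below is invented for illustration):

```python
# A minimal sketch of a supervised hate-speech classifier. The handful of
# training examples here are invented placeholders; a real system would be
# trained on a large annotated corpus and would grade severity and type,
# not just a binary label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you people are vermin and should disappear",    # hateful
    "go back to where you came from",                # hateful
    "great match last night, well played everyone",  # benign
    "thanks for the helpful answer",                 # benign
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you lot are vermin"]))  # expected: [1] on this toy data
```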
The paper does not actually explain how it will protect free expression, and its proposals appear to contradict that aim.

https://twitter.com/jimkillock/status/1115253155007205377

Beyond this, there is a conceptual problem. Much of the harm done on and by social media does not come from deliberate criminality, but from ordinary people released from the constraints of civility. It is here that the white paper fails most seriously: it talks about material, such as "intimidation, disinformation, the advocacy of self-harm", that is harmful but not illegal, yet proposes to regulate it in the same way as material that is both. Even leaving aside politically motivated disinformation, this is an area where much deeper and clearer thought is needed.

https://twitter.com/guy_herbert/status/1115180765128667137

There is no doubt that some forms of disinformation do serious harm, both to individuals and to society as a whole. Regulating the internet is necessary, but it won't be easy or cheap, and too much of this white paper looks like an attempt to find cheap and easy solutions to really hard questions.

Tech companies in EU to face strict regulation on Terrorist content: One hour take down limit; Upload filters and private Terms of Service

Tech regulation to an extent of sentence jail: Australia's 'Sharing of Abhorrent Violent Material Bill' to Warren's 'Corporate Executive Accountability Act'

How social media enabled and amplified the Christchurch terrorist attack
2019 Stack Overflow survey: A quick overview

Sugandha Lahoti
10 Apr 2019
5 min read
The results of the 2019 Stack Overflow survey have just been published: 90,000 developers took the 20-minute survey this year. The survey sheds light on some very interesting insights, from developers' preferred programming languages, to the development platforms they dread the most, to the blockers to developer productivity. As the survey is quite detailed and comprehensive, here's a quick look at the most important takeaways.

Key highlights from the Stack Overflow Survey

Programming languages

Python again emerged as the fastest-growing major programming language, and among the most loved languages it was a close second behind Rust. Interestingly, Python and TypeScript received almost the same share of votes, with roughly 73% of respondents naming each their most loved language. Python was also the language developers most wanted to learn next, and JavaScript remains the most used programming language. The most dreaded languages were VBA and Objective-C.

Source: Stack Overflow

Frameworks and databases in the Stack Overflow survey

Developers preferred the React.js and Vue.js web frameworks, while they dreaded Drupal and jQuery. Redis was voted the most loved database and MongoDB the most wanted database. MongoDB's inclusion in the list is surprising considering its controversial Server Side Public License. Over the last few months, Red Hat dropped support for MongoDB over this license, as did GNU Health Federation. Both of these organizations chose PostgreSQL over MongoDB, which is probably one of the reasons PostgreSQL was the second most loved and wanted database of the Stack Overflow Survey 2019.

Source: Stack Overflow

It's interesting to see WebAssembly making its way into the popular technology segment as well as into the top-paying technologies. Respondents who use Clojure, F#, Elixir, and Rust earned the highest salaries.

Stack Overflow also added a new segment this year called "Blockchain in the real world", which gives insight into the adoption of blockchain. Most respondents (80%) said that their organizations are not using or implementing blockchain technology.

Source: Stack Overflow

Developer lifestyles and learning

About 80% of respondents say that they code as a hobby outside of work, and over half of respondents had written their first line of code by the time they were sixteen, although this experience varies by country and by gender. For instance, women wrote their first code later than men, and non-binary respondents wrote code earlier than men. About one-quarter of respondents are enrolled in a formal college or university program full-time or part-time. Of professional developers who studied at the university level, over 60% said they majored in computer science, computer engineering, or software engineering.

DevOps specialists and site reliability engineers are among the highest paid and most experienced developers, are the most satisfied with their jobs, and are looking for new jobs at the lowest rates. The survey also noted that developers who are system admins or DevOps specialists are 25-30 times more likely to be men than women.

Chinese developers are the most optimistic about the future, while developers in Western European countries like France and Germany are among the least optimistic.

Developers also overwhelmingly believe that Elon Musk will be the most influential person in tech in 2019. Of the more than 30,000 people who responded to a free-text question asking who they think will be the most influential person this year, an amazing 30% named Tesla CEO Musk.
For perspective, Jeff Bezos was in second place, named by 'only' 7.2% of respondents.

Although the proportion of women among US survey respondents went up from 9% to 11% this year, that is still slow growth, and it points to problems with inclusion in the tech industry in general and on Stack Overflow in particular.

When thinking about blockers to productivity, different kinds of developers report different challenges. Men are more likely to say that being tasked with non-development work is a problem for them, while gender-minority respondents are more likely to say that toxic work environments are a problem.

Stack Overflow survey demographics and diversity challenges

This report is based on a survey of 88,883 software developers from 179 countries around the world. It was conducted between January 23 and February 14, and the median time spent on the survey for qualified responses was 23.3 minutes. The majority of respondents this year were people who said they are professional developers, who code sometimes as part of their work, or who are students preparing for such a career. Most of them were from the US, India, China and Europe.

Stack Overflow acknowledged that its results do not represent racial disparities evenly and that people of color continue to be underrepresented among developers. This year nearly 71% of respondents were of white or European descent, a slight improvement from last year (74%). The survey notes that, "In the United States this year, 22% of respondents are people of color; last year 19% of United States respondents were people of color." This clearly signifies that a lot of work still needs to be done, particularly for people of color, women, and underrepresented groups. Last August, Stack Overflow revamped its Code of Conduct to include more virtues around kindness, collaboration, and mutual respect, and it also updated its developer salary calculator to include 8 new countries.

Go through the full report to learn more about developer salaries, job priorities, career values, the best music to listen to while coding, and more.

Developers believe Elon Musk will be the most influential person in tech in 2019, according to Stack Overflow survey results

Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy

Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman
The EU commission introduces guidelines for achieving a ‘Trustworthy AI’

Savia Lobo
09 Apr 2019
4 min read
On the third day of Digital Day 2019, held in Brussels, the European Commission introduced a set of essential guidelines for building trustworthy AI, which will guide companies and governments in building ethical AI applications. By introducing these new guidelines, the commission is working towards a three-step approach:

Setting out the key requirements for trustworthy AI

Launching a large-scale pilot phase for feedback from stakeholders

Working on international consensus-building for human-centric AI

The EU's high-level expert group on AI, which consists of 52 independent experts representing academia, industry, and civil society, came up with seven requirements that, in their view, future AI systems should meet.

Seven guidelines for achieving an ethical AI

Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life-cycle phases of AI systems.

Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

Transparency: The traceability of AI systems should be ensured.

Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

Societal and environmental well-being: AI systems should be used to enhance positive social change and to promote sustainability and ecological responsibility.

Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

According to the EU's official press release, "Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps."

The plans fall under the Commission's AI strategy of April 2018, which "aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust", the press release states.

Andrus Ansip, Vice-President for the Digital Single Market, said, "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."

Mariya Gabriel, Commissioner for Digital Economy and Society, said, "We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI."

Thomas Metzinger, a professor of theoretical philosophy at the University of Mainz and a member of the commission's expert group that worked on the guidelines, has put forward an article titled 'Ethics washing made in Europe'. Metzinger said he worked on the Ethics Guidelines for nine months: "The result is a compromise of which I am not proud, but which is nevertheless the best in the world on the subject.
The United States and China have nothing comparable. How does it fit together?", he writes.

Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, told The Verge, "We are skeptical of the approach being taken, the idea that by creating a golden standard for ethical AI it will confirm the EU's place in global AI development. To be a leader in ethical AI you first have to lead in AI itself."

To know more about this news in detail, read the EU press release.

Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?

IEEE Standards Association releases ethics guidelines for automation and intelligent systems

Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018
Tech regulation to an extent of sentence jail: Australia’s ‘Sharing of Abhorrent Violent Material Bill’ to Warren’s ‘Corporate Executive Accountability Act’

Fatema Patrawala
04 Apr 2019
6 min read
Businesses in powerful economies like the USA, the UK, and Australia are arguably as powerful as politics, or more so, especially now that we inhabit a global economy in which an intricate web of connections can reveal the appalling employment conditions of the Chinese workers who assemble the Apple smartphones we depend on. Amazon's revenue is bigger than Kenya's GDP; according to Business Insider, 25 major American corporations have revenues greater than the GDP of entire countries around the world. Because corporations create millions of jobs and control vast amounts of money and resources, their sheer economic power dwarfs governments' ability to regulate and oversee them.

With the recent global-scale scandals that the tech industry has found itself in, some resulting in the deaths of groups of people, governments are waking up to the urgent need to hold tech companies responsible. While some government laws are reactionary, others take a more cautious approach. One thing is for sure: 2019 will see a lot of tech regulation come into play. How effective it is, what intended and unintended consequences it bears, and how masterfully big tech wields its lobbying prowess, we'll have to wait and see.

Holding tech platforms that enable hate and violence accountable

Australian govt passes law that criminalizes companies and execs for hosting abhorrent violent content

Today, the Australian parliament passed legislation to crack down on violent videos on social media. The bill, described by the attorney general, Christian Porter, as "most likely a world first", was drafted in the wake of the Christchurch terrorist attack by a white supremacist Australian, when video of the perpetrator's violent attack spread on social media faster than it could be removed.

The Sharing of Abhorrent Violent Material bill creates new offences for content service providers and hosting services that fail to notify the Australian federal police about, or fail to expeditiously remove, videos depicting "abhorrent violent conduct". That conduct is defined as terrorist acts, murders, attempted murders, torture, rape or kidnap. The bill creates a regime for the eSafety Commissioner to notify social media companies that they are deemed to be aware they are hosting abhorrent violent material, triggering an obligation to take it down.

The Digital Industry Group, which represents Google, Facebook, Twitter, Amazon and Verizon Media in Australia, has warned that the bill was passed without meaningful consultation and threatens penalties against content created by users. Sunita Bose, the group's managing director, says, "with the vast volumes of content uploaded to the internet every second, this is a highly complex problem". She further argues that "this pass it now, change it later approach to legislation creates immediate uncertainty for Australia's tech industry".

Scott Farquhar, chief executive of Atlassian, said that the legislation fails to define how "expeditiously" violent material should be removed, and does not specify who in a social media company should be punished.

https://twitter.com/scottfarkas/status/1113391831784480768

The Law Council of Australia president, Arthur Moses, said criminalising social media companies and executives was a "serious step" and should not be legislated as a "knee-jerk reaction to a tragic event" because of the potential for unintended consequences.
In contrast to Australia's knee-jerk legislation, the US House Judiciary Committee has organized a hearing on white nationalism and hate speech and their spread online, and has invited social media platform execs and civil rights organizations to participate.

Holding companies accountable for reckless corporate behavior

Facebook has gone through scandal after scandal with impunity in recent years, given the lack of legislation in this space, from repeated data privacy breaches to disinformation campaigns and beyond. Adding to its ever-growing list of data scandals, yesterday CNN Business uncovered that hundreds of millions of Facebook records were stored on Amazon cloud servers in a way that allowed them to be downloaded by the public.

Earlier this month, on 8th March, Sen. Elizabeth Warren proposed building strong antitrust laws and breaking up big tech companies like Amazon, Google, Facebook and Apple. Yesterday, she introduced the Corporate Executive Accountability Act and also reintroduced the "too big to fail" bill, a new piece of legislation that would make it easier to criminally charge company executives when Americans' personal data is breached, among other negligent corporate behaviors.

"When a criminal on the street steals money from your wallet, they go to jail. When small-business owners cheat their customers, they go to jail," Warren wrote in a Washington Post op-ed published on Wednesday morning. "But when corporate executives at big companies oversee huge frauds that hurt tens of thousands of people, they often get to walk away with multimillion-dollar payouts."

https://twitter.com/SenWarren/status/1113448794912382977

https://twitter.com/SenWarren/status/1113448583771185153

According to Warren, just one banker went to jail after the 2008 financial crisis. The CEO of Wells Fargo and his successor walked away from the megabank with multimillion-dollar pay packages after it was discovered employees had created millions of fake accounts. The same goes for the Equifax CEO after its data breach.

The new legislation Warren introduced would make it easier to hold corporate executives accountable for their companies' wrongdoing. Typically, it has been hard to prove a case against individual executives for turning a blind eye toward risky or questionable activity, because prosecutors have to prove intent, basically, that they meant to do it. This legislation would change that, Heather Slavkin Corzo, a senior fellow at the progressive nonprofit Americans for Financial Reform, told Vox. "It's easier to show a lack of due care than it is to show the mental state of the individual at the time the action was committed," she said.

A summary of the legislation released by Warren's office explains that it would "expand criminal liability to negligent executives of corporations with over $1 billion annual revenue" who:

Are found guilty, plead guilty, or enter into a deferred or non-prosecution agreement for any crime.

Are found liable or enter a settlement with any state or federal regulator for the violation of any civil law if that violation affects the health, safety, finances, or personal data of 1% of the American population or 1% of the population of any state.

Are found liable or guilty of a second civil or criminal violation for a different activity while operating under a civil or criminal judgment of any court, a deferred prosecution or non-prosecution agreement, or a settlement with any state or federal agency.
Executives found guilty of these violations could get up to a year in jail, and a second violation could mean up to three years. The Corporate Executive Accountability Act is yet another push from Warren, who has focused much of her presidential campaign on holding corporations and their leaders responsible for both their market dominance and perceived corruption.

Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws

Zuckerberg wants to set the agenda for tech regulation in yet another "digital gangster" move

Facebook under criminal investigations for data sharing deals: NYT report
Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance

Natasha Mathur
04 Apr 2019
6 min read
Update, 12th April 2019: Amazon shareholders will now vote, at the 2019 Annual Meeting of Shareholders of Amazon, on whether the company board should prohibit sales of facial recognition tech to the government. The meeting will be held at 9:00 a.m. Pacific Time on Wednesday, May 22, 2019, at Fremont Studios, Seattle, Washington.

Over 30 researchers from top tech firms (Google, Microsoft, et al.), academic institutions and civil rights groups signed an open letter last week calling on Amazon to stop selling Amazon Rekognition to law enforcement. The letter, published on Medium, has been signed by the likes of this year's Turing Award winner, Yoshua Bengio, and Anima Anandkumar, a Caltech professor, director of machine learning research at NVIDIA, and former principal scientist at AWS, among others.

https://twitter.com/rajiinio/status/1113480353308651520

Amazon Rekognition is a deep-learning based service that is capable of storing and searching tens of millions of faces at a time. It allows detection of objects, scenes, activities and inappropriate content. However, Amazon Rekognition has long been a bone of contention between Amazon, the public, and rights groups, due to inaccuracies in its face recognition capability and concerns that selling Rekognition to law enforcement can hamper public privacy. For instance, an anonymous Amazon employee spoke out against Amazon selling its facial recognition technology to the police last year, calling it a "flawed technology". A group of seven House Democrats sent a letter to Amazon's CEO last November raising concerns and questions about Rekognition's accuracy and its possible effects. Moreover, a coalition of over 85 groups sent a letter to Amazon earlier this year, urging the company not to sell its facial surveillance technology to the government.

Researchers argue against unregulated Amazon Rekognition use

The researchers state in the letter that a study conducted by Inioluwa Deborah Raji and Joy Buolamwini shows that Rekognition has much higher error rates when classifying the gender of darker-skinned women than of lighter-skinned men. However, Dr. Matthew Wood, general manager of AI at AWS, and Michael Punke, vice president of global public policy at AWS, were dismissive of the research and disregarded it by labeling it "misleading". Dr. Wood also stated that "facial analysis and facial recognition are completely different in terms of the underlying technology and the data used to train them. Trying to use facial analysis to gauge the accuracy of facial recognition is ill-advised". The researchers have called out that statement, saying it is "problematic on multiple fronts".

The letter also sheds light on the real-world implications of the misuse of face recognition tools. It cites Clare Garvie, Alvaro Bedoya and Jonathan Frankle of the Center on Privacy & Technology at Georgetown Law, who study law enforcement's use of face recognition. According to them, using face recognition tech can put the wrong people on trial through cases of mistaken identity. It is also quite common that law enforcement operators neither know the parameters of these tools nor how to interpret some of their results; relying on decisions from automated tools can lead to "automation bias". Another argument Dr.
Wood makes to defend the technology is that "To date (over two years after releasing the service), we have had no reported law enforcement misuses of Amazon Rekognition." However, the letter states that this is unfair, as there are currently no laws in place to audit Rekognition's use, and Amazon has not disclosed any information about its customers or any details about Rekognition's error rates across different intersectional demographics. "How can we then ensure that this tool is not improperly being used as Dr. Wood states? What we can rely on are the audits by independent researchers, such as Raji and Buolamwini…that demonstrates the types of biases that exist in these products", reads the letter.

The researchers say they find Dr. Wood and Mr. Punke's response to the peer-reviewed research 'disappointing', and they hope Amazon will examine all of its products more deeply before deciding to make them available for use by the police.

More trouble for Amazon: SEC approves shareholders' proposal demanding more information on Rekognition

Just earlier this week, the U.S. Securities and Exchange Commission (SEC) ruled that Amazon shareholders' proposals demanding that Amazon provide more information about the company's use and sale of biometric facial recognition technology are appropriate. The shareholders said they are worried about the use of Rekognition and consider it a significant risk to human rights and shareholder value. The shareholders put forward two new proposals regarding Rekognition and requested their inclusion in the company's proxy materials:

The first proposal calls on the board of directors to prohibit selling Rekognition to the government unless it has been evaluated that the tech does not violate human and civil rights.

The second proposal urges the board to commission an independent study of Rekognition. This would help examine the risks Rekognition poses to immigrants, activists, people of color, and the general public of the United States. The study would also analyze how such tech is marketed and sold to foreign governments that may be "repressive", along with other financial risks associated with human rights issues.

Amazon objected to the proposals and claimed that both should be discarded under subsections of Rule 14a-8, as they relate to the company's "ordinary business and operations that are not economically significant". But the SEC's Division of Corporation Finance countered Amazon's arguments. It told Amazon that it is unable to conclude that the "proposals are not otherwise significantly related to the Company's business" and approved their inclusion in the company's proxy materials, reports Compliance Week. "The Board of Directors did not provide an opinion or evidence needed to support the claim that the issues raised by the Proposals are 'an insignificant public policy issue for the Company'", states the division. "The controversy surrounding the technology threatens the relationship of trust between the Company and its consumers, employees, and the public at large."

The SEC ruling, however, only expresses informal views; whether Amazon is obligated to accept the proposals can only be decided by a U.S. District Court, should the shareholders legally pursue these proposals further. For more information, check out the detailed coverage at Compliance Week.
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition

AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers

Amazon Rekognition can now 'recognize' faces in a crowd in real-time
UN Global Working Group on Big Data publishes a handbook on privacy-preserving computation techniques

Bhagyashree R
03 Apr 2019
4 min read
On Monday, the UN Global Working Group (GWG) on Big Data published the UN Handbook on Privacy-Preserving Computation Techniques. The handbook discusses emerging privacy-preserving computation techniques and also outlines the key challenges in making these techniques more mainstream.

https://twitter.com/UNBigData/status/1112739047066255360

Motivation behind writing this handbook

In recent years, we have come across several data breaches. Companies collect users' personal data without their consent to show them targeted content. Aggregated personal data can be misused to identify individuals and localize their whereabouts; individuals can be singled out with the help of just a small set of attributes. And these large collections of data are very often an easy target for cybercriminals.

Previously, when cyber threats were not that advanced, people focused mostly on protecting the privacy of data at rest, which led to the development of technologies like symmetric-key encryption. Later, when sharing data over unprotected networks became common, technologies like Transport Layer Security (TLS) came into the picture. Today, when attackers are capable of penetrating servers worldwide, it is important to be aware of technologies that help ensure data privacy during computation. This handbook focuses on technologies that protect the privacy of data during and after computation, which are called privacy-preserving computation techniques.

Privacy Enhancing Technologies (PET) for statistics

The handbook lists five privacy-enhancing technologies for statistics that help reduce the risk of data leakage. "Reduce", because there is in fact no known technique that gives a complete solution to the privacy question. Illustrative sketches of the first three techniques follow their descriptions below.

#1 Secure multi-party computation

Secure multi-party computation is also known as secure computation, multi-party computation (MPC), or privacy-preserving computation. A subfield of cryptography, this technology deals with scenarios where multiple parties jointly compute a function while preventing any participant from learning anything about the inputs provided by the other parties. MPC is based on secret sharing, in which data is divided into shares that are random in themselves but reproduce the original data when combined. Each data input is split into two or more shares and distributed among the parties involved; computing on the shares and then combining the results produces the correct output of the function.

#2 Homomorphic encryption

Homomorphic encryption is an encryption technique that lets you perform computations on encrypted data without needing a decryption key. The advantage of this encryption scheme is that it enables computation on encrypted data without revealing the input data or the result to the computing party. The result can only be decrypted by a specific party that has access to the secret key, typically the owner of the input data.

#3 Differential Privacy (DP)

DP is a statistical technique that makes it possible to collect and share aggregate information about users while ensuring that the privacy of individual users is maintained. The technique was designed to address the pitfalls of previous attempts to define privacy, especially in the context of multiple releases and adversaries with access to side knowledge.
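To make the secret-sharing idea behind MPC concrete, here is a minimal additive secret-sharing sketch in Python. It illustrates the principle the handbook describes, not a production MPC protocol; the modulus, party count, and function names are all illustrative choices.

```python
import secrets

P = 2**61 - 1  # a public prime modulus; all shares are values mod P

def share(value, n_parties=3):
    """Split `value` into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two private inputs, each split among three parties
a_shares, b_shares = share(25), share(17)

# Each party adds the shares it holds locally; no single party ever sees 25 or 17
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]

assert reconstruct(sum_shares) == 42  # the parties jointly computed 25 + 17
```

Because each share on its own is uniformly random, any strict subset of parties learns nothing about the inputs; only combining all shares of the result reveals the output.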
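The additive flavour of homomorphic encryption can be tried out with the open-source python-paillier library (`pip install phe`); this is one possible illustration rather than the handbook's own example, and the numbers are made up.

```python
from phe import paillier  # python-paillier: a Paillier (additively homomorphic) scheme

public_key, private_key = paillier.generate_paillier_keypair()

enc_a = public_key.encrypt(18)
enc_b = public_key.encrypt(24)

# The computing party works only on ciphertexts and never sees 18 or 24
enc_sum = enc_a + enc_b      # homomorphic addition of two ciphertexts
enc_scaled = enc_a * 3       # multiplication by a plaintext constant

# Only the holder of the secret key can decrypt the results
assert private_key.decrypt(enc_sum) == 42
assert private_key.decrypt(enc_scaled) == 54
```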
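And here is a sketch of the classic Laplace mechanism for differential privacy, hedged the same way: the dataset, sensitivity, and epsilon below are toy values chosen for illustration.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Add Laplace noise with scale sensitivity/epsilon to a query answer."""
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = np.array([34, 29, 41, 52, 23])  # toy private dataset

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)

# A smaller epsilon means more noise and stronger privacy for individuals.
print(f"true count: {len(ages)}, noisy count: {noisy_count:.2f}")
```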
#4 Zero-knowledge proofs

Zero-knowledge proofs involve two parties: a prover and a verifier. The prover has to prove statements to the verifier based on secret information known only to the prover. ZKP allows you to prove that you know a secret without actually revealing it. This is why the technology is called "zero knowledge": "zero" information about the secret is revealed, yet the verifier is convinced that the prover knows the secret in question. A minimal sketch follows below.
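As one concrete instance of the idea, here is a toy Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. The group parameters are deliberately tiny teaching values, not anything the handbook prescribes; real systems use groups of cryptographic size.

```python
import hashlib
import secrets

# Toy parameters: g = 2 has order q = 11 in the multiplicative group mod p = 23
p, q, g = 23, 11, 2

x = 7                # the prover's secret
y = pow(g, x, p)     # public value; the prover claims to know x with y = g^x mod p

# Prover: random commitment, Fiat-Shamir challenge, response
r = secrets.randbelow(q)
t = pow(g, r, p)
c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % q
s = (r + c * x) % q

# Verifier: sees only (t, c, s), never x, yet is convinced because
# g^s = g^(r + c*x) = t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```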
#5 Trusted Execution Environments (TEEs)

This last technique differs from the previous four in that it uses both hardware and software to protect data and code. It provides users secure computation capability by combining special-purpose hardware with software built to use that hardware's features. In this technique, a process runs on a processor without its memory or execution state being exposed to any other process on the processor.

This free 50-page handbook is targeted at statisticians and data scientists, data curators and architects, IT specialists, and security and information assurance specialists. So go ahead and have a read: UN Handbook on Privacy-Preserving Computation Techniques!

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?

Researchers successfully trick Tesla autopilot into driving into opposing traffic via "small stickers as interference patches on the ground"
Zuckerberg wants to set the agenda for tech regulation in yet another “digital gangster” move

Sugandha Lahoti
01 Apr 2019
7 min read
Facebook has probably made the biggest April Fools' joke of this year. Over the weekend, Mark Zuckerberg, CEO of Facebook, penned a post detailing the need for tech regulation in four major areas: "harmful content, election integrity, privacy, and data portability". However, privacy advocates and tech experts were frustrated rather than pleased with the announcement, arguing that, given Facebook's recent privacy scandals, its CEO shouldn't be the one making the rules.

The term 'digital gangster' was first coined by the Guardian when the Digital, Culture, Media and Sport Committee published its final report on Facebook's disinformation and 'fake news' practices. Per the publication, "Facebook behaves like a 'digital gangster' destroying democracy. It considers itself to be 'ahead of and beyond the law'. It 'misled' parliament. It gave statements that were 'not true'".

Last week, Facebook rolled out a new Ad Library to provide more stringent transparency for preventing interference in elections worldwide. It also rolled out a policy banning white nationalist content from its platforms.

Zuckerberg's four new regulation ideas

"I believe we need a more active role for governments and regulators. By updating the rules for the internet, we can preserve what's best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms," writes Zuckerberg.

Reducing harmful content

For harmful content, Zuckerberg proposes a common set of rules governing what types of content tech companies should consider harmful. According to him, governments should set "baselines" for online content that require filtering, and third-party organizations should set standards governing the distribution of harmful content and measure companies against those standards. "Internet companies should be accountable for enforcing standards on harmful content," he writes. "Regulation could set baselines for what's prohibited and require companies to build systems for keeping harmful content to a bare minimum."

Ironically, over the weekend Facebook was accused of enabling the spread of anti-Semitic propaganda after refusing to take down repeatedly flagged hate posts. Facebook stated that it would not remove the posts, as they do not breach its hate speech rules and are not against UK law.

Preserving election integrity

The second regulation idea revolves around election integrity. Facebook has taken steps in this direction by making significant changes to its advertising policies. Facebook's new Ad Library, released last week, provides advertising transparency on all active ads running on a Facebook page, including politics or issue ads. Ahead of the European Parliamentary election in May 2019, Facebook is also introducing ad transparency tools in the EU. Zuckerberg advises other tech companies to build searchable ad archives as well. "Deciding whether an ad is political isn't always straightforward. Our systems would be more effective if regulation created common standards for verifying political actors," he says. He also talks about updating online political advertising laws to cover political issues rather than focusing primarily on candidates and elections.
"I believe," he says, "legislation should be updated to reflect the reality of the threats and set standards for the whole industry."

What is surprising is that just 24 hours after Zuckerberg published his post committing to preserving election integrity, Facebook took down over 700 pages, groups, and accounts engaged in "coordinated inauthentic behavior" around Indian politics ahead of the country's national elections. According to DFRLab, who analyzed these pages, Facebook was in fact quite late in taking action against them. Per DFRLab, "Last year, AltNews, an open-source fact-checking outlet, reported that a related website called theindiaeye.com was hosted on Silver Touch servers. Silver Touch managers denied having anything to do with the website or the Facebook page, but Facebook's statement attributed the page to "individuals associated with" Silver Touch. The page was created in 2016. Even after several regional media outlets reported that the page was spreading false information related to Indian politics, the engagements on posts kept increasing, with a significant uptick from June 2018 onward."

Adhering to privacy and data portability

On privacy, Zuckerberg talks about the need to develop a "globally harmonized framework" along the lines of the European Union's GDPR rules for the US and other countries. "I believe a common global framework — rather than regulation that varies significantly by country and state — will ensure that the internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections," he writes. Which makes us wonder: what is stopping him from implementing EU-style GDPR on Facebook globally until a common framework is agreed upon by countries?

Lastly, he adds that "regulation should guarantee the principle of data portability", allowing people to freely port their data across different services. "True data portability should look more like the way people use our platform to sign into an app than the existing ways you can download an archive of your information. But this requires clear rules about who's responsible for protecting information when it moves between services." He also endorses the need for a standard data transfer format by supporting the open-source Data Transfer Project.

Why this call for regulation now?

Zuckerberg's post comes at a strategic point in time, when Facebook is battling a large number of investigations, the most recent of which is a housing discrimination charge by the U.S. Department of Housing and Urban Development (HUD), which alleges that Facebook is using its advertising tools to violate the Fair Housing Act. Also worth noting is that Zuckerberg's blog post comes weeks after Senator Elizabeth Warren stated that, if elected president in 2020, her administration would break up Facebook. Facebook was quick to remove, and then restore, several ads placed by Warren that called for the breakup of Facebook and other tech giants.

A possible explanation for Zuckerberg's post is that Facebook will now be able to say that it is actually pro-government regulation. This means it can lobby governments toward decisions that are most beneficial for the company. It may also set up its own approach to political advertising and content moderation as the standard for other industries. And by deferring decisions to third parties, it may reduce scrutiny from lawmakers.
According to a report by Business Insider, just as Zuckerberg published his post, a large number of his previous posts and announcements were deleted from the Facebook blog. Reached for comment, a Facebook spokesperson told Business Insider that the posts were "mistakenly deleted" due to "technical errors." Whether this was deliberate or unintentional, we don't know.

Zuckerberg's post sparked a huge discussion on Hacker News, with most people drawing negative conclusions from his write-up. Here are some of the views:

"I think Zuckerberg's intent is to dilute the real issue (privacy) with these other three points. FB has a bad record when it comes to privacy and they are actively taking measures against it. For example, they lobby against privacy laws. They create shadow profiles and they make it difficult or impossible to delete your account."

"harmful content, election integrity, privacy, data portability. Shut down Facebook as a company and three of those four problems are solved."

"By now it's pretty clear, to me at least, that Zuckerberg simply doesn't get it. He could have fixed the issues for over a decade. And even in 2019, after all the evidence of mismanagement and public distrust, he still refuses to relinquish any control of the company. This is a tone-deaf opinion piece."

Twitterati shared the same sentiment.

https://twitter.com/futureidentity/status/1112455687169327105

https://twitter.com/BrendanCarrFCC/status/1112150281066819584

https://twitter.com/davidcicilline/status/1112085338342727680

https://twitter.com/DamianCollins/status/1112082926232092672

https://twitter.com/MaggieL/status/1112152675699834880

Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads

Facebook will ban white nationalism and separatism content in addition to white supremacy content

Are the lawmakers and media being really critical towards Facebook?
Why did McDonalds acqui-hire $300 million machine learning startup, Dynamic Yield?

Fatema Patrawala
29 Mar 2019
7 min read
Mention McDonald's to someone today, and they're more likely to think of the Big Mac than of big data. But that could soon change, as the fast-food giant embraces machine learning with plans to become a tech innovator in a fittingly super-sized way.

McDonald's stunned a lot of people when it announced its biggest acquisition in 20 years, one that reportedly cost it over $300 million: Dynamic Yield, a New York based startup that provides retailers with algorithmically driven "decision logic" technology. When you add an item to an online shopping cart, "decision logic" is the tech that nudges you about what other customers bought as well. Dynamic Yield's client list includes blue-chip retail clients like Ikea, Sephora, and Urban Outfitters.

McDonald's vetted around 30 firms offering similar personalization-engine services and landed on Dynamic Yield, which has recently been valued in the hundreds of millions of dollars; people familiar with the details of the McDonald's offer put it at over $300 million. That makes it the company's largest purchase, per a tweet by McDonald's CEO Steve Easterbrook.

https://twitter.com/SteveEasterbrk/status/1110313531398860800

The burger giant can certainly afford it: in 2018 alone it tallied nearly $6 billion of net income, and it ended the year with free cash flow of $4.2 billion.

McDonald's, a food-tech innovator from the start

Over the last several years, McDonald's has invested heavily in technology, bringing stores up to date with self-serve kiosks. The company also launched an app and partnered with Uber Eats in that time, in addition to a number of infrastructure improvements. It even relocated its headquarters less than a year ago from the suburbs to Chicago's vibrant West Town neighborhood, in a bid to attract young talent.

Collectively, McDonald's serves around 68 million customers every single day, and the majority of those are drive-thru customers who never get out of their cars, instead placing and picking up their orders at the window. That is where McDonald's plans to deploy Dynamic Yield's tech first.

"What we hadn't done is begun to connect the technology together, and get the various pieces talking to each other," says Easterbrook. "How do you transition from mass marketing to mass personalization? To do that, you've really got to unlock the data within that ecosystem in a way that's useful to a customer."

Here's what that looks like in practice: when you drive up to place your order at a McDonald's today, a digital display greets you with a handful of banner items or promotions. As you inch up toward the ordering area, you eventually get to the full menu. Both of these, as currently implemented, are largely static, aside from obvious changes like rotating in new offers or switching over from breakfast to lunch. But in a pilot program at a McDonald's restaurant in Miami, powered by Dynamic Yield, those displays have taken on new dexterity. In the new machine-learning paradigm, the display shows customers what other items have been popular at that location and prompts them with potential upsells: thanks for your Happy Meal order; maybe you'd like a Sprite to go with it. A toy sketch of this kind of logic follows below.
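Dynamic Yield's actual decision logic is proprietary, but the upsell behaviour described above can be approximated with a simple co-occurrence count over past orders. Everything in this sketch, including the menu items, order history, and function names, is hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical order history for one location
orders = [
    {"Happy Meal", "Sprite"},
    {"Happy Meal", "Coffee"},
    {"Big Mac", "Fries", "Coke"},
    {"Happy Meal", "Sprite", "Apple Slices"},
    {"Big Mac", "Coke"},
]

# Count how often each pair of items appears in the same order
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def suggest_upsells(cart, top_n=3):
    """Rank items most frequently bought alongside what's already in the cart."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        for item, other in ((a, b), (b, a)):
            if item in cart and other not in cart:
                scores[other] += count
    return [item for item, _ in scores.most_common(top_n)]

print(suggest_upsells({"Happy Meal"}))  # e.g. ['Sprite', 'Coffee', 'Apple Slices']
```

A real system would layer on signals like time of day, weather, and current wait times, which is exactly the kind of location-level data McDonald's says it wants to exploit.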
"We've never had an issue in this business with a lack of data," says Easterbrook. "It's drawing the insight and the intelligence out of it."

Revenue gains likely from the acquisition

McDonald's hasn't shared any specific insights gleaned so far, or numbers around the personalization engine's effect on sales, but it's not hard to imagine some of the possible scenarios. If someone orders two Happy Meals at 5 o'clock, for instance, that's probably a parent ordering for their kids; highlight a coffee or snack, and they might decide to treat themselves to a pick-me-up. And as with any machine-learning system, the real benefits will likely come from the unexpected. While customer satisfaction may be the goal, the avenues McDonald's takes to get there will increase revenues along the way.

Customer personalization is another goal to achieve

As you may suspect, McDonald's didn't spend over $300 million on a machine-learning company only to juice up its drive-thru sales. An important part is figuring out how to leverage the "personalization" in a personalization engine. Fine-tuned insights at the store level are one thing, but Easterbrook envisions something even more granular. "If customers are willing to identify themselves - there's all sorts of ways you can do that - we can be even more useful to them, because now we call up their favorites," says Easterbrook, who stresses that privacy is paramount. As for what form that might ultimately take, Easterbrook raises a handful of possibilities. McDonald's already uses geofencing around its stores to know when a mobile-app customer is approaching and to prepare their order accordingly.

On the downside of this tech integration

When you know you have to change so much in your company, it's easy to forget some of the consequences. You race to implement all the new tech and don't adequately consider what your employees might think of it all. This seems to be happening to McDonald's. As the fast-food chain tries to catch up with food trends that have been established for some time, its employees seem unhappy about it. As Bloomberg reports, the more McDonald's introduces fresh beef, touchscreen ordering and delivery, the more its employees are thinking: "This is all too much work." One employee at a McDonald's franchisee revealed at the beginning of this year: "Employee turnover is at an all-time high for us," adding, "Our restaurants are way too stressful, and people do not want to work in them."

Workers are walking away rather than dealing with new technologies and menu options. The result: customers will wait longer. Already, drive-through times at McDonald's slowed to 239 seconds last year, more than 30 seconds slower than in 2016, according to QSR magazine. Turnover at U.S. fast-food restaurants jumped to 150%, meaning a store employing 20 workers would go through 30 in one year. Given that, it comes as no surprise that McDonald's announced to the National Restaurant Association on Tuesday that it will no longer participate in lobbying efforts against minimum-wage hikes at the federal, state or local level; that makes sense when low wages and an all-time-high attrition rate loom as the bigger problem.

Of course, technology is supposed to solve all the world's problems while simultaneously eliminating the need for many people. It looks like McDonald's has put all its eggs in the machine learning and automation basket.
Would it not be a rich irony if people saw the technology being introduced and walked out, deciding it was all too much trouble for just a burger?

25 Startups using machine learning differently in 2018: From farming to brewing beer to elder care

An AI startup now wants to monitor your kids' activities to help them grow 'securly'

Microsoft acquires AI startup Lobe, a no code visual interface tool to build deep learning models easily