Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Free Learning
Arrow right icon

How-To Tutorials - News

104 Articles
article-image-uk-online-harms-white-paper-divides-internet-puts-tech-companies-government-crosshairs
Fatema Patrawala
10 Apr 2019
10 min read
Save for later

Online Safety vs Free Speech: UK’s "Online Harms" white paper divides the internet and puts tech companies in government crosshairs

Fatema Patrawala
10 Apr 2019
10 min read
The internet is an integral part of everyday life for so many people. It has definitely added a new dimension to the spaces of imagination in which we all live. But it seems the problems of the offline world have moved there, too. As the internet continues to grow and transform our lives, often for the better, we should not ignore the very real harms which people face online every day. And the lawmakers around the world are taking decisive action to make people safer online. On Monday, Europe drafted EU Regulation on preventing the dissemination of terrorist content online. Last week, the Australian parliament passed legislation to crack down on violent videos on social media. Recently Sen. Elizabeth Warren, US 2020 presidential hopeful proposed to build strong anti-trust laws and break big tech companies like Amazon, Google, Facebook and Apple. On 3rd April, Elizabeth introduced Corporate Executive Accountability Act, a new piece of legislation that would make it easier to criminally charge company executives when Americans’ personal data is breached. Last year, the German parliament enacted the NetzDG law, requiring large social media sites to remove posts that violate certain provisions of the German code, including broad prohibitions on “defamation of religion,” “hate speech,” and “insult.” And here’s yet another tech regulation announcement on Monday, a white paper on online harms was announced by the UK government. The Department for Digital, Culture, Media and Sport (DCMS) has proposed an independent watchdog that will write a "code of practice" for tech companies. According to Jeremy Wright, Secretary of State for Digital, Media & Sport and Sajid Javid, Home Secretary, “nearly nine in ten UK adults and 99% of 12 to 15 year olds are online. Two thirds of adults in the UK are concerned about content online, and close to half say they have seen hateful content in the past year. 
The tragic recent events in New Zealand show just how quickly horrific terrorist and extremist content can spread online.” Further they emphasized on not allowing such harmful behaviours and content to undermine the significant benefits that the digital revolution can offer. The white paper therefore puts forward ambitious plans for a new system of accountability and oversight for tech companies, moving far beyond self-regulation. It includes a new regulatory framework for online safety which will clarify companies’ responsibilities to keep UK users safer online with the most robust action to counter illegal content and activity. The paper suggests 3 major steps for tech regulation: establishing an independent regulator that can write a "code of practice" for social networks and internet companies giving the regulator enforcement powers including the ability to fine companies that break the rules considering additional enforcement powers such as the ability to fine company executives and force internet service providers to block sites that break the rules Outlining the proposals, Culture Secretary Jeremy Wright discussed the fine percentage with BBC UK, "If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of company's turnover... we think we should be looking at something comparable here." What are the kind of 'online harms' cited in the paper? The paper cover a range of issues that are clearly defined in law such as spreading terrorist content, child sex abuse, so-called revenge pornography, hate crimes, harassment and the sale of illegal goods. It also covers harmful behaviour that has a less clear legal definition such as cyber-bullying, trolling and the spread of fake news and disinformation. 
The paper cites that in 2018 online CSEA (Child Sexual Exploitation and Abuse) reported over 18.4 million referrals of child sexual abuse material by US tech companies to the National Center for Missing and Exploited Children (NCMEC). Out of those, there were 113, 948 UK-related referrals in 2018, up from 82,109 in 2017. In the third quarter of 2018, Facebook reported removing 8.7 million pieces of content globally for breaching policies on child nudity and sexual exploitation. Another type of online harm occurs when terrorists use online services to spread their vile propaganda and mobilise support. Paper emphasizes that terrorist content online threatens the UK’s national security and the safety of the public. Giving an example of the five terrorist attacks in the UK during 2017, had an online element. And online terrorist content remains a feature of contemporary radicalisation. It is seen across terrorist investigations, including cases where suspects have become very quickly radicalised to the point of planning attacks. This is partly as a result of the continued availability and deliberately attractive format of the terrorist material they are accessing online. Further it suggests that social networks must tackle material that advocates self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017. After she died her family found distressing material about depression and suicide on her Instagram account. Molly's father Ian Russell holds the social media giant partly responsible for her death. Home Secretary Sajid Javid said tech giants and social media companies had a moral duty "to protect the young people they profit from". Despite our repeated calls to action, harmful and illegal content - including child abuse and terrorism - is still too readily available online.” What does the new proposal suggest to tackle online harm The paper calls for an independent regulator to hold internet companies to account. 
While it did not specify whether a new body will be established, or an existing one will be handed new powers. The regulator will define a "code of best practice" that social networks and internet companies must adhere to. It applies to tech companies like Facebook, Twitter and Google, and the rules would also apply to messaging services such as Whatsapp, Snapchat and cloud storage services. The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules. The paper suggests it is also considering fines for individual company executives and making search engines remove links to offending websites and also consulting over blocking harmful websites. Another area discussed in the paper is about developing a culture of transparency, trust and accountability as a critical element of the new regulatory framework. The regulator will have the power to require annual transparency reports from companies in scope, outlining the prevalence of harmful content on their platforms and what measures they are taking to address this. These reports will be published online by the regulator, so that users can make informed decisions about online use. Additionally it suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources. How it plans to deploy technology as a part of solution The paper mentions that companies should invest in the development of safety technologies to reduce the burden on users to stay safe online. As in November 2018, the Home Secretary of UK co-hosted a hackathon with five major technology companies to develop a new tool to identify online grooming. So they have proposed this tool to be licensed for free to other companies, and plan more such innovative and collaborative efforts with them. 
The government also plans to work with the industry and civil society to develop a safety by design framework, linking up with existing legal obligations around data protection by design and secure by design principles. This will make it easier for startups and small businesses to embed safety during the development or update of products and services. They also plan to understand how AI can be best used to detect, measure and counter online harms, while ensuring its deployment remains safe and ethical. A new project led by Turing is setting out to address this issue. The ‘Hate Speech: Measures and Counter-measures’ project will use a mix of natural language processing techniques and qualitative analyses to create tools which identify and categorize different strengths and types of online hate speech. Other plans include launching of online safety apps which will combine state-of-the-art machine-learning technology to track children’s activity on their smartphone with the ability for children to self-report their emotional state. Why is the white paper receiving critical comments Though the paper seems to be a welcome step towards a sane internet regulation and looks sensible at the first glance. In some cases it has been regarded as too ambitious and unrealistically feeble. It reflects the conflicting political pressures under which it has been generated. TechUK, an umbrella group representing the UK's technology industry, said the government must be "clear about how trade-offs are balanced between harm prevention and fundamental rights". Jim Killock, executive director of Open Rights Group, said the government's proposals would "create state regulation of the speech of millions of British citizens". Matthew Lesh, head of research at free market think tank the Adam Smith Institute, went further saying "The government should be ashamed of themselves for leading the western world in internet censorship. 
The proposals are a historic attack on freedom of speech and the free press. At a time when Britain is criticising violations of freedom of expression in states like Iran, China and Russia, we should not be undermining our freedom at home." No one doubts the harm done by child sexual abuse or terrorist propaganda online, but these things are already illegal. The difficulty is its enforcement, which the white paper does nothing to address. Effective enforcement would demand a great deal of money and human time. The present system relies on a mixture of human reporting and algorithms. The algorithms can be fooled without too much trouble: 300,000 of the 1.5m copies of the Christchurch terrorist videos that were uploaded to Facebook within 24 hours of the crime were undetected by automated systems. Apart from this there is a criticism about the vision of the white paper which says it wants "A free, open and secure internet with freedom of expression online" "where companies take effective steps to keep their users safe". But it is actually not explained how it is going to protect free expression and seems to be a contradiction to the regulation. https://twitter.com/jimkillock/status/1115253155007205377 Beyond this, there is a conceptual problem. Much of the harm done on and by social media does not come from deliberate criminality, but from ordinary people released from the constraints of civility. It is here that the white paper fails most seriously. It talks about material – such as “intimidation, disinformation, the advocacy of self-harm” – that is harmful but not illegal yet proposes to regulate it in the same way as material which is both. Even leaving aside politically motivated disinformation, this is an area where much deeper and clearer thought is needed. https://twitter.com/guy_herbert/status/1115180765128667137 There is no doubt that some forms of disinformation do serious harms both to individuals and to society as a whole. 
And regulating the internet is necessary, but it won’t be easy or cheap. Too much of this white paper looks like an attempt to find cheap and easy solutions to really hard questions. Tech companies in EU to face strict regulation on Terrorist content: One hour take down limit; Upload filters and private Terms of Service Tech regulation to an extent of sentence jail: Australia’s ‘Sharing of Abhorrent Violent Material Bill’ to Warren’s ‘Corporate Executive Accountability Act’ How social media enabled and amplified the Christchurch terrorist attack  
Read more
  • 0
  • 0
  • 2535

article-image-2019-stack-overflow-survey-quick-overview
Sugandha Lahoti
10 Apr 2019
5 min read
Save for later

2019 Stack Overflow survey: A quick overview

Sugandha Lahoti
10 Apr 2019
5 min read
The results of the 2019 Stack Overflow survey have just been published: 90,000 developers took the 20-minute survey this year. The survey shed light on some very interesting insights – from the developers’ preferred language for programming, to the development platform they hate the most, to the blockers to developer productivity. As the survey is quite detailed and comprehensive, here’s a quick look at the most important takeaways. Key highlights from the Stack Overflow Survey Programming languages Python again emerged as the fastest-growing programming language, a close second behind Rust. Interestingly, Python and Typescript achieved the same votes with almost 73% respondents saying it was their most loved language. Python was the most voted language developers wanted to learn next and JavaScript remains the most used programming language. The most dreaded languages were VBA and Objective C. Source: Stack Overflow Frameworks and databases in the Stack Overflow survey Developers preferred using React.js and Vue.js web frameworks while dreaded Drupal and jQuery. Redis was voted as the most loved database and MongoDB as the most wanted database. MongoDB’s inclusion in the list is surprising considering its controversial Server Side Public License. Over the last few months, Red Hat dropped support for MongoDB over this license, so did GNU Health Federation. Both of these organizations choose PostgreSQL over MongoDB, which is one of the reasons probably why PostgreSQL was the second most loved and wanted database of Stack Overflow Survey 2019. Source: Stack Overflow It’s interesting to see WebAssembly making its way in the popular technology segment as well as one of the top paying technologies. Respondents who use Clojure, F#, Elixir, and Rust earned the highest salaries Stackoverflow also did a new segment this year called "Blockchain in the real world" which gives insight into the adoption of Blockchain. 
Most respondents (80%) on the survey said that their organizations are not using or implementing blockchain technology. Source: Stack Overflow Developer lifestyles and learning About 80% of our respondents say that they code as a hobby outside of work and over half of respondents had written their first line of code by the time they were sixteen, although this experience varies by country and by gender. For instance, women wrote their first code later than men and non-binary respondents wrote code earlier than men. About one-quarter of respondents are enrolled in a formal college or university program full-time or part-time. Of professional developers who studied at the university level, over 60% said they majored in computer science, computer engineering, or software engineering. DevOps specialists and site reliability engineers are among the highest paid, most experienced developers most satisfied with their jobs, and are looking for new jobs at the lowest levels. The survey also noted that developers who are system admins or DevOps specialists are 25-30 times more likely to be men than women. Chinese developers are the most optimistic about the future while developers in Western European countries like France and Germany are among the least optimistic. Developers also overwhelmingly believe that Elon Musk will be the most influential person in tech in 2019. With more than 30,000 people responding to a free text question asking them who they think will be the most influential person this year, an amazing 30% named Tesla CEO Musk. For perspective, Jeff Bezos was in second place, being named by ‘only’ 7.2% of respondents. Although, this year the US survey respondents proportion of women, went up from 9% to 11%, it’s still a slow growth and points to problems with inclusion in the tech industry in general and on Stack Overflow in particular. When thinking about blockers to productivity, different kinds of developers report different challenges. 
Men are more likely to say that being tasked with non-development work is a problem for them, while gender minority respondents are more likely to say that toxic work environments are a problem. Stack Overflow survey demographics and diversity challenges This report is based on a survey of 88,883 software developers from 179 countries around the world. It was conducted between January 23 to February 14 and the median time spent on the survey for qualified responses was 23.3 minutes. The majority of survey respondents this year were people who said they are professional developers or who code sometimes as part of their work, or are students preparing for such a career. Majority of them were from the US, India, China and Europe. Stack Overflow acknowledged that their results did not represent racial disparities evenly and people of color continue to be underrepresented among developers. This year nearly 71% of respondents continued to be of White or European descent, a slight improvement from last year (74%). The survey notes that, “In the United States this year, 22% of respondents are people of color; last year 19% of United States respondents were people of color.” This clearly signifies that a lot of work is still needed to be done particularly for people of color, women, and underrepresented groups. Although, last year in August, Stack Overflow revamped its Code of Conduct to include more virtues around kindness, collaboration, and mutual respect. It also updated  its developers salary calculator to include 8 new countries. Go through the full report to learn more about developer salaries, job priorities, career values, the best music to listen to while coding, and more. 
Developers believe Elon Musk will be the most influential person in tech in 2019, according to Stack Overflow survey results Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman
Read more
  • 0
  • 0
  • 5730

article-image-the-eu-commission-introduces-guidelines-for-achieving-a-trustworthy-ai
Savia Lobo
09 Apr 2019
4 min read
Save for later

The EU commission introduces guidelines for achieving a ‘Trustworthy AI’

Savia Lobo
09 Apr 2019
4 min read
On the third day of the Digital Day 2019 held in Brussels, the European Commission introduced a set of essential guidelines for building a trustworthy AI, which will guide companies and government to build ethical AI applications. By introducing these new guidelines, the commission is working towards a three-step approach including, Setting out the key requirements for trustworthy AI Launching a large scale pilot phase for feedback from stakeholders Working on international consensus building for human-centric AI EU’s high-level expert group on AI, which consists of 52 independent experts representing academia, industry, and civil society, came up with seven requirements, which according to them, the future AI systems should meet. Seven guidelines for achieving an ethical AI Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy. Robustness and safety: A trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them. Transparency: The traceability of AI systems should be ensured. Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. 
According to EU’s official press release, “Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps.” The plans fall under the Commission’s AI strategy of April 2018, which “aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust ”, the press release states. Andrus Ansip, Vice-President for the Digital Single Market, said, “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.” Mariya Gabriel, Commissioner for Digital Economy and Society, said, “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI." Thomas Metzinger, a Professor of Theoretical Philosophy at the University of Mainz and who was also a member of the commission's expert group that has worked on the guidelines has put forward an article titled, ‘Ethics washing made in Europe’. Metzinger said he has worked on the Ethics Guidelines for nine months. “The result is a compromise of which I am not proud, but which is nevertheless the best in the world on the subject. The United States and China have nothing comparable. How does it fit together?”, he writes. 
Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, told The Verge, “We are skeptical of the approach being taken, the idea that by creating a golden standard for ethical AI it will confirm the EU’s place in global AI development. To be a leader in ethical AI you first have to lead in AI itself.” To know more about this news in detail, read the EU press release. Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council? IEEE Standards Association releases ethics guidelines for automation and intelligent systems Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018
Read more
  • 0
  • 0
  • 2991
Banner background image

article-image-tech-regulation-heats-up-australias-abhorrent-violent-material-bill-to-warrens-corporate-executive-accountability-act
Fatema Patrawala
04 Apr 2019
6 min read
Save for later

Tech regulation to an extent of sentence jail: Australia’s ‘Sharing of Abhorrent Violent Material Bill’ to Warren’s ‘Corporate Executive Accountability Act’

Fatema Patrawala
04 Apr 2019
6 min read
Businesses in powerful economies like USA, UK, Australia are as arguably powerful as politics or more than that. Especially now that we inhabit a global economy where an intricate web of connections can show the appalling employment conditions of Chinese workers who assemble the Apple smartphones we depend on. Amazon holds a revenue bigger than Kenya’s GDP. According to Business Insider, 25 major American corporations have revenues greater than the GDP of countries around the world. Because corporations create millions of jobs and control vast amounts of money and resources, their sheer economic power dwarfs government's ability to regulate and oversee them. With the recent global scale scandals that the tech industry has found itself in, with some resulting in deaths of groups of people, governments are waking up to the urgency for the need to hold tech companies responsible. While some government laws are reactionary, others are taking a more cautious approach. One thing is for sure, 2019 will see a lot of tech regulation come to play. How effective they are and what intended and unintended consequences they bear, how masterfully big tech wields its lobbying prowess, we’ll have to wait and see. Holding Tech platforms enabling hate and violence, accountable Australian govt passes law that criminalizes companies and execs for hosting abhorrent violent content Today, Australian parliament has passed legislation to crack down on violent videos on social media. The bill, described the attorney general, Christian Porter, as “most likely a world first”, was drafted in the wake of the Christchurch terrorist attack by a White supremacist Australian, when video of the perpetrator’s violent attack spread on social media faster than it could be removed. 
The Sharing of Abhorrent Violent Material bill creates new offences for content service providers and hosting services that fail to notify the Australian federal police about or fail to expeditiously remove videos depicting “abhorrent violent conduct”. That conduct is defined as videos depicting terrorist acts, murders, attempted murders, torture, rape or kidnap. The bill creates a regime for the eSafety Commissioner to notify social media companies that they are deemed to be aware they are hosting abhorrent violent material, triggering an obligation to take it down. While the Digital Industry Group which consists of Google, Facebook, Twitter, Amazon and Verizon Media in Australia has warned that the bill is passed without meaningful consultation and threatens penalties against content created by users. Sunita Bose, the group’s managing director says, “ with the vast volumes of content uploaded to the internet every second, this is a highly complex problem”. She further debates that “this pass it now, change it later approach to legislation creates immediate uncertainty to the Australia’s tech industry”. The Chief Executive of Atlassian Scott Farquhar said that the legislation fails to define how “expeditiously” violent material should be removed, and did not specify on who should be punished in the social media company. https://twitter.com/scottfarkas/status/1113391831784480768 The Law Council of Australia president, Arthur Moses, said criminalising social media companies and executives was a “serious step” and should not be legislated as a “knee-jerk reaction to a tragic event” because of the potential for unintended consequences. Contrasting Australia’s knee-jerk legislation, the US House Judiciary committee has organized a hearing on white nationalism and hate speech and their spread online. They have invited social media platform execs and civil rights organizations to participate. 
Holding companies accountable for reckless corporate behavior Facebook has undergone scandals after scandals with impunity in recent years given the lack of legislation in this space. Facebook has repeatedly come under the public scanner for data privacy breaches to disinformation campaigns and beyond. Adding to its ever-growing list of data scandals yesterday CNN Business uncovered  hundreds of millions of Facebook records were stored on Amazon cloud servers in a way that it allowed to be downloaded by the public. Earlier this month on 8th March, Sen. Warren has proposed to build strong anti-trust laws and break big tech companies like Amazon, Google, Facebook and Apple. Yesterday, she introduced Corporate Executive Accountability Act and also reintroduced the “too big to fail” bill a new piece of legislation that would make it easier to criminally charge company executives when Americans’ personal data is breached, among other corporate negligent behaviors. “When a criminal on the street steals money from your wallet, they go to jail. When small-business owners cheat their customers, they go to jail,” Warren wrote in a Washington Post op-ed published on Wednesday morning. “But when corporate executives at big companies oversee huge frauds that hurt tens of thousands of people, they often get to walk away with multimillion-dollar payouts.” https://twitter.com/SenWarren/status/1113448794912382977 https://twitter.com/SenWarren/status/1113448583771185153 According to Elizabeth, just one banker went to jail after the 2008 financial crisis. The CEO of Wells Fargo and his successor walked away from the megabank with multimillion-dollar pay packages after it was discovered employees had created millions of fake accounts. The same goes for the Equifax CEO after its data breach. The new legislation Warren introduced would make it easier to hold corporate executives accountable for their companies’ wrongdoing. 
Typically, it’s been hard to prove a case against individual executives for turning a blind eye toward risky or questionable activity, because prosecutors have to prove intent — basically, that they meant to do it. This legislation would change that, Heather Slavkin Corzo, a senior fellow at the progressive nonprofit Americans for Financial Reform, said to the Vox reporter. “It’s easier to show a lack of due care than it is to show the mental state of the individual at the time the action was committed,” she said. A summary of the legislation released by Warren’s office explains that it would “expand criminal liability to negligent executives of corporations with over $1 billion annual revenue” who: Are found guilty, plead guilty, or enter into a deferred or non-prosecution agreement for any crime. Are found liable or enter a settlement with any state or Federal regulator for the violation of any civil law if that violation affects the health, safety, finances, or personal data of 1% of the American population or 1% of the population of any state. Are found liable or guilty of a second civil or criminal violation for a different activity while operating under a civil or criminal judgment of any court, a deferred prosecution or non prosecution agreement, or settlement with any state or Federal agency. Executives found guilty of these violations could get up to a year in jail. And a second violation could mean up to three years. The Corporate Executive Accountability Act is yet another push from Warren who has focused much of her presidential campaign on holding corporations and their leaders responsible for both their market dominance and perceived corruption. Elizabeth Warren wants to break up tech giants like Amazon, Google Facebook, and Apple and build strong antitrust laws Zuckerberg wants to set the agenda for tech regulation in yet another “digital gangster” move Facebook under criminal investigations for data sharing deals: NYT report
Read more
  • 0
  • 0
  • 2128

article-image-over-30-ai-experts-join-shareholders-in-calling-on-amazon-to-stop-selling-rekognition-its-facial-recognition-tech-for-government-surveillance
Natasha Mathur
04 Apr 2019
6 min read
Save for later

Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance

Natasha Mathur
04 Apr 2019
6 min read
Update, 12th April 2018: Amazon shareholders will now be voting on at the 2019 Annual Meeting of Shareholders of Amazon, on whether the company board should prohibit sales of Facial recognition tech to the government. The meeting will be held at 9:00 a.m., Pacific Time, on Wednesday, May 22, 2019, at Fremont Studios, Seattle, Washington.  Over 30 researchers from top tech firms (Google, Microsoft, et al), academic institutions and civil rights groups signed an open letter, last week, calling on Amazon to stop selling Amazon Rekognition to law enforcement. The letter, published on Medium, has been signed by the likes of this year’s Turing award winner, Yoshua Bengio, and Anima Anandkumar, a Caltech professor, director of Machine Learning research at NVIDIA, and former principal scientist at AWS among others. https://twitter.com/rajiinio/status/1113480353308651520 Amazon Rekognition is a deep-learning based service that is capable of storing and searching tens of millions of faces at a time. It allows detection of objects, scenes, activities and inappropriate content. However, Amazon Rekognition has long been a bone of contention among public eye and rights groups. This is due to the inaccuracies in its face recognition capability and over the concerns that selling Rekognition to law enforcement can hamper public privacy. For instance, an anonymous Amazon employee spoke out against Amazon selling its facial recognition technology to the police, last year, calling it a “Flawed technology”. Also, a group of seven House Democrats sent a letter to Amazon CEO, last November, over Amazon Rekognition, raising concerns and questions about its accuracy and the possible effects. Moreover, a group of over 85 coalition groups sent a letter to Amazon, earlier this year, urging the company to not sell its facial surveillance technology to the government. 
Researchers argue against unregulated Amazon Rekognition use

Researchers state in the letter that a study conducted by Inioluwa Deborah Raji and Joy Buolamwini shows that Rekognition has much higher error rates when classifying the gender of darker-skinned women than of lighter-skinned men. However, Dr. Matthew Wood, general manager, AI, AWS, and Michael Punke, vice president of global public policy, AWS, dismissed the research, labeling it "misleading". Dr. Wood also stated that "facial analysis and facial recognition are completely different in terms of the underlying technology and the data used to train them. Trying to use facial analysis to gauge the accuracy of facial recognition is ill-advised". The researchers call out that statement in the letter, saying it is "problematic on multiple fronts".

The letter also sheds light on the real-world implications of misusing face recognition tools. It cites Clare Garvie, Alvaro Bedoya, and Jonathan Frankle of the Center on Privacy & Technology at Georgetown Law, who study law enforcement's use of face recognition. According to them, face recognition tech can put the wrong people on trial through cases of mistaken identity. It is also quite common that law enforcement operators are neither aware of the parameters of these tools nor know how to interpret their results; relying on decisions from automated tools can lead to "automation bias".

Another argument Dr. Wood makes in defense of the technology is that "To date (over two years after releasing the service), we have had no reported law enforcement misuses of Amazon Rekognition." However, the letter states that this claim is hollow, as there are currently no laws in place to audit Rekognition's use. Moreover, Amazon has not disclosed any information about its customers or any details about the error rates of Rekognition across different intersectional demographics.
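The intersectional audit the letter points to comes down to comparing error rates across demographic subgroups. A toy version of that computation (with entirely invented predictions and labels, not Raji and Buolamwini's data) looks like this:

```python
# Hypothetical audit: compare misclassification rates across
# intersectional subgroups. All data here is invented for illustration.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with ground-truth labels."""
    wrong = sum(p != l for p, l in zip(predictions, labels))
    return wrong / len(labels)

# Invented per-subgroup results: (predictions, labels) pairs.
audit = {
    "lighter-skinned men":  ([1, 1, 0, 1, 0, 1, 1, 0], [1, 1, 0, 1, 0, 1, 1, 0]),
    "darker-skinned women": ([1, 0, 0, 1, 1, 0, 1, 0], [1, 1, 0, 0, 1, 1, 1, 0]),
}

rates = {group: error_rate(p, l) for group, (p, l) in audit.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.3f}")
```

A real audit runs this kind of comparison at scale against a live system's outputs, which is exactly what independent researchers are asking to be allowed to do.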
“How can we then ensure that this tool is not improperly being used as Dr. Wood states? What we can rely on are the audits by independent researchers, such as Raji and Buolamwini, that demonstrate the types of biases that exist in these products”, reads the letter. The researchers say they find Dr. Wood and Mr. Punke's response to the peer-reviewed research 'disappointing' and hope Amazon will dive deeper into examining all of its products before deciding to make them available for use by the police.

More trouble for Amazon: SEC approves shareholders' proposals to release more information on Rekognition

Just earlier this week, the U.S. Securities and Exchange Commission (SEC) ruled that Amazon shareholders' proposals demanding that the company provide more information about its use and sale of biometric facial recognition technology are appropriate for a vote. The shareholders said they are worried about the use of Rekognition and consider it a significant risk to human rights and shareholder value. The shareholders put forward two new proposals regarding Rekognition and requested their inclusion in the company's proxy materials: The first proposal calls on the board of directors to prohibit selling Rekognition to the government unless an evaluation shows that the tech does not violate human and civil rights. The second proposal urges the board to commission an independent study of Rekognition, which would examine the risks Rekognition poses to immigrants, activists, people of color, and the general public of the United States. The study would also analyze how such tech is marketed and sold to foreign governments that may be "repressive", along with other financial risks associated with human rights issues.
Amazon contested the proposals and claimed that both should be discarded under subsections of Rule 14a-8, as relating to the company's "ordinary business and operations that are not economically significant". But the SEC's Division of Corporation Finance countered Amazon's arguments. It told Amazon that it is unable to conclude that the "proposals are not otherwise significantly related to the Company's business" and approved their inclusion in the company's proxy materials, reports Compliance Week. "The Board of Directors did not provide an opinion or evidence needed to support the claim that the issues raised by the Proposals are 'an insignificant public policy issue for the Company'", states the division. "The controversy surrounding the technology threatens the relationship of trust between the Company and its consumers, employees, and the public at large". The SEC ruling, however, only expresses informal views; whether Amazon is obligated to include the proposals can only be decided by a U.S. District Court, should the shareholders pursue them further in court.

For more information, check out the detailed coverage in the Compliance Week report.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition

AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers

Amazon Rekognition can now ‘recognize’ faces in a crowd at real-time


UN Global Working Group on Big Data publishes a handbook on privacy-preserving computation techniques

Bhagyashree R
03 Apr 2019
4 min read
On Monday, the UN Global Working Group (GWG) on Big Data published the UN Handbook on Privacy-Preserving Computation Techniques. The handbook covers emerging privacy-preserving computation techniques and outlines the key challenges in making these techniques mainstream.

https://twitter.com/UNBigData/status/1112739047066255360

Motivation behind writing this handbook

In recent years, we have seen several data breaches. Companies collect users' personal data without their consent to show them targeted content. Aggregated personal data can be misused to identify individuals and localize their whereabouts; individuals can be singled out with the help of just a small set of attributes. These large collections of data are very often an easy target for cybercriminals.

Previously, when cyber threats were less advanced, people focused mostly on protecting the privacy of data at rest, which led to the development of technologies like symmetric-key encryption. Later, when sharing data over unprotected networks became common, technologies like Transport Layer Security (TLS) came into the picture. Today, when attackers are capable of penetrating servers worldwide, it is important to be aware of technologies that help ensure data privacy during computation. This handbook focuses on technologies that protect the privacy of data during and after computation, called privacy-preserving computation techniques.

Privacy Enhancing Technologies (PET) for statistics

The handbook lists five Privacy Enhancing Technologies for statistics that help reduce the risk of data leakage. I say "reduce" because there is, in fact, no known technique that gives a complete solution to the privacy question.

#1 Secure multi-party computation

Secure multi-party computation is also known as secure computation, multi-party computation (MPC), or privacy-preserving computation.
A subfield of cryptography, this technology deals with scenarios where multiple parties jointly compute a function. It aims to prevent any participant from learning anything about the inputs provided by the other parties. MPC is based on secret sharing, in which data is divided into shares that are random in themselves but, when combined, yield the original data. Each data input is split into two or more shares and distributed among the parties involved; combined, these produce the correct output of the computed function.

#2 Homomorphic encryption

Homomorphic encryption is an encryption technique that lets you perform computations on encrypted data without needing a decryption key. The advantage of this encryption scheme is that it enables computation on encrypted data without revealing the input data or the result to the computing party. The result can only be decrypted by a specific party that has access to the secret key, typically the owner of the input data.

#3 Differential Privacy (DP)

DP is a statistical technique that makes it possible to collect and share aggregate information about users while ensuring that the privacy of individual users is maintained. The technique was designed to address the pitfalls of previous attempts to define privacy, especially in the context of multiple releases and adversaries with access to side knowledge.

#4 Zero-knowledge proofs

Zero-knowledge proofs involve two parties: a prover and a verifier. The prover has to prove statements to the verifier based on secret information known only to the prover. ZKP allows you to prove that you know a secret without actually revealing it. This is why the technology is called "zero knowledge": "zero" information about the secret is revealed, yet the verifier is convinced that the prover knows the secret in question.
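The commit/challenge/response shape of such a proof can be sketched with a toy Schnorr-style protocol. This is not from the handbook; the tiny parameters and the unreduced response make it insecure, and it exists purely to show the moving parts:

```python
import random

# Toy Schnorr-style proof of knowledge of a discrete log: the prover
# convinces the verifier it knows x with y = g^x mod p, without sending x.
# Tiny parameters, no hashing, no response reduction -- insecure, for
# illustrating the commit/challenge/response shape only.
p = 467                      # small prime modulus (illustration only)
g = 2                        # group generator
x = 57                       # prover's secret
y = pow(g, x, p)             # public statement: y = g^x mod p

def prove(secret, challenge_fn):
    r = random.randrange(1, 10**6)   # random commitment exponent
    t = pow(g, r, p)                 # commitment sent to the verifier
    c = challenge_fn(t)              # verifier picks a random challenge
    s = r + c * secret               # response; hides x behind random r
    return t, c, s

def verify(public_y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when the
    # response was built from the committed r and the real secret.
    return pow(g, s, p) == (t * pow(public_y, c, p)) % p

t, c, s = prove(x, lambda _: random.randrange(1, 1000))
print(verify(y, t, c, s))  # True
```

The identity g^(r + c·x) = g^r · (g^x)^c is what lets the verifier check the response against the commitment and public key without ever seeing x.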
#5 Trusted Execution Environments (TEEs)

The last technique on the list differs from the other four in that it uses both hardware and software to protect data and code. It provides secure computation capability by combining special-purpose hardware with software built to use that hardware's features. A process runs on the processor without its memory or execution state being exposed to any other process on the processor.

This free 50-page handbook is targeted at statisticians and data scientists, data curators and architects, IT specialists, and security and information assurance specialists. So go ahead and have a read: UN Handbook on Privacy-Preserving Computation Techniques!

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?

Researchers successfully trick Tesla autopilot into driving into opposing traffic via “small stickers as interference patches on the ground”

Zuckerberg wants to set the agenda for tech regulation in yet another “digital gangster” move

Sugandha Lahoti
01 Apr 2019
7 min read
Facebook has probably made the biggest April Fools' joke of the year. Over the weekend, Mark Zuckerberg, CEO of Facebook, penned a post detailing the need for tech regulation in four major areas: "harmful content, election integrity, privacy, and data portability". However, privacy advocates and tech experts were frustrated rather than pleased with the announcement, arguing that, given recent privacy scandals, the Facebook CEO shouldn't be the one making the rules.

The term 'digital gangster' was first coined by the Guardian when the Digital, Culture, Media and Sport Committee published its final report on Facebook's disinformation and 'fake news' practices. Per the publication, "Facebook behaves like a 'digital gangster' destroying democracy. It considers itself to be 'ahead of and beyond the law'. It 'misled' parliament. It gave statements that were 'not true'". Last week, Facebook rolled out a new Ad Library to provide more stringent transparency aimed at preventing interference in elections worldwide. It also rolled out a policy banning white nationalist content from its platforms.

Zuckerberg's four new regulation ideas

"I believe we need a more active role for governments and regulators. By updating the rules for the internet, we can preserve what’s best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms," writes Zuckerberg.

Reducing harmful content

For harmful content, Zuckerberg talks about having a set of rules that govern what types of content tech companies should consider harmful. According to him, governments should set "baselines" for online content that require filtering. He suggests that third-party organizations should also set standards governing the distribution of harmful content and measure companies against those standards. "Internet companies should be accountable for enforcing standards on harmful content," he writes.
"Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum." Ironically, over the weekend Facebook was accused of enabling the spread of anti-Semitic propaganda after its refusal to take down repeatedly flagged hate posts. Facebook stated that it will not remove the posts, as they neither breach its hate speech rules nor are against UK law.

Preserving election integrity

The second regulation idea revolves around election integrity. Facebook has been taking steps in this direction by making significant changes to its advertising policies. Facebook's new Ad Library, released last week, provides advertising transparency on all active ads running on a Facebook page, including politics or issue ads. Ahead of the European Parliamentary election in May 2019, Facebook is also introducing ads transparency tools in the EU. Zuckerberg advises other tech companies to build a searchable ad archive as well. "Deciding whether an ad is political isn’t always straightforward. Our systems would be more effective if regulation created common standards for verifying political actors," Zuckerberg says. He also talks about improving online political advertising laws for political issues, rather than focusing primarily on candidates and elections. "I believe," he says, "legislation should be updated to reflect the reality of the threats and set standards for the whole industry."

What is surprising is that just 24 hours after Zuckerberg published his post committing to preserve election integrity, Facebook took down over 700 pages, groups, and accounts that were engaged in "coordinated inauthentic behavior" around Indian politics ahead of the country's national elections. According to DFRLab, who analyzed these pages, Facebook was in fact quite late in taking action against them.
Per DFRLab, "Last year, AltNews, an open-source fact-checking outlet, reported that a related website called theindiaeye.com was hosted on Silver Touch servers. Silver Touch managers denied having anything to do with the website or the Facebook page, but Facebook’s statement attributed the page to “individuals associated with” Silver Touch. The page was created in 2016. Even after several regional media outlets reported that the page was spreading false information related to Indian politics, the engagements on posts kept increasing, with a significant uptick from June 2018 onward."

Adhering to privacy and data portability

For privacy, Zuckerberg talks about the need to develop a "globally harmonized framework" along the lines of the European Union's GDPR rules for the US and other countries. "I believe a common global framework — rather than regulation that varies significantly by country and state — will ensure that the internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections", he writes. Which makes us wonder: what is stopping him from implementing EU-style GDPR on Facebook globally until a common framework is agreed upon by countries?

Lastly, he adds that "regulation should guarantee the principle of data portability", allowing people to freely port their data across different services. "True data portability should look more like the way people use our platform to sign into an app than the existing ways you can download an archive of your information. But this requires clear rules about who’s responsible for protecting information when it moves between services." He also endorses the need for a standard data transfer format by supporting the open source Data Transfer Project.

Why this call for regulation now?

Zuckerberg's post comes at a strategic point in time, when Facebook is battling a large number of investigations. The most recent of these is the housing discrimination charge by the U.S.
Department of Housing and Urban Development (HUD), which alleges that Facebook is using its advertising tools to violate the Fair Housing Act. Also worth noting is that Zuckerberg's blog post comes weeks after Senator Elizabeth Warren stated that, if elected president in 2020, her administration would break up Facebook. Facebook was quick to remove, and then restore, several ads placed by Warren that called for the breakup of Facebook and other tech giants.

One possible explanation for Zuckerberg's post is that Facebook can now say it is actually pro government regulation. This means it can lobby governments toward decisions that would be most beneficial to the company. It may also set up its own work around political advertising and content moderation as the standard for other industries. By blaming decisions on third parties, it may also reduce scrutiny from lawmakers.

According to a report by Business Insider, just as Zuckerberg posted his news, a large number of Zuckerberg's previous posts and announcements were deleted from the Facebook blog. Reached for comment, a Facebook spokesperson told Business Insider that the posts were "mistakenly deleted" due to "technical errors." Whether this was a deliberate mistake or an unintentional one, we don't know.

Zuckerberg's post sparked a huge discussion on Hacker News, with most people drawing negative conclusions from Zuckerberg's write-up. Here are some of the views:

"I think Zuckerberg's intent is to dilute the real issue (privacy) with these other three points. FB has a bad record when it comes to privacy and they are actively taking measures against it. For example, they lobby against privacy laws.
They create shadow profiles and they make it difficult or impossible to delete your account."

"harmful content, election integrity, privacy, data portability. Shut down Facebook as a company and three of those four problems are solved."

"By now it's pretty clear, to me at least, that Zuckerberg simply doesn't get it. He could have fixed the issues for over a decade. And even in 2019, after all the evidence of mismanagement and public distrust, he still refuses to relinquish any control of the company. This is a tone-deaf opinion piece."

Twitterati shared the same sentiment.

https://twitter.com/futureidentity/status/1112455687169327105
https://twitter.com/BrendanCarrFCC/status/1112150281066819584
https://twitter.com/davidcicilline/status/1112085338342727680
https://twitter.com/DamianCollins/status/1112082926232092672
https://twitter.com/MaggieL/status/1112152675699834880

Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads

Facebook will ban white nationalism and separatism content in addition to white supremacy content

Are the lawmakers and media being really critical towards Facebook?


Why did McDonalds acqui-hire $300 million machine learning startup, Dynamic Yield?

Fatema Patrawala
29 Mar 2019
7 min read
Mention McDonald's to someone today, and they're more likely to think of the Big Mac than big data. But that could soon change, as the fast-food giant embraces machine learning with plans to become a tech innovator in a fittingly super-sized way. McDonald's stunned a lot of people when it announced its biggest acquisition in 20 years, one that reportedly cost it over $300 million: it plans to acquire Dynamic Yield, a New York-based startup that provides retailers with algorithmically driven "decision logic" technology. When you add an item to an online shopping cart, "decision logic" is the tech that nudges you about what other customers bought as well. Dynamic Yield's client list includes blue-chip retail clients like Ikea, Sephora, and Urban Outfitters. McDonald's vetted around 30 firms offering similar personalization-engine services and landed on Dynamic Yield, which has recently been valued in the hundreds of millions of dollars; people familiar with the details of the McDonald's offer put it at over $300 million. That makes this the company's largest purchase, per a tweet by McDonald's CEO Steve Easterbrook.

https://twitter.com/SteveEasterbrk/status/1110313531398860800

The burger giant can certainly afford it; in 2018 alone it tallied nearly $6 billion of net income and ended the year with free cash flow of $4.2 billion.

McDonald's, a food-tech innovator from the start

Over the last several years, McDonald's has invested heavily in technology, bringing stores up to date with self-serve kiosks. The company also launched an app and partnered with Uber Eats in that time, in addition to a number of infrastructure improvements. It even relocated its headquarters less than a year ago from the suburbs to Chicago's vibrant West Town neighborhood, in a bid to attract young talent. Collectively, McDonald's serves around 68 million customers every single day.
The majority of those customers are at the drive-thru window: they never get out of their cars, instead placing and picking up their orders from the window. And that's where McDonald's is planning to deploy Dynamic Yield's tech first. "What we hadn’t done is begun to connect the technology together, and get the various pieces talking to each other," says Easterbrook. "How do you transition from mass marketing to mass personalization? To do that, you’ve really got to unlock the data within that ecosystem in a way that’s useful to a customer."

Here's what that looks like in practice: when you drive up to place your order at a McDonald's today, a digital display greets you with a handful of banner items or promotions. As you inch up toward the ordering area, you eventually get to the full menu. Both of these, as currently implemented, are largely static, aside from obvious changes like rotating in new offers or switching over from breakfast to lunch. But in a pilot program at a McDonald's restaurant in Miami, powered by Dynamic Yield, those displays have taken on new dexterity. In the new machine-learning paradigm, the display shows customers what other items have been popular at that location and prompts them with potential upsells. Thanks for your Happy Meal order; maybe you'd like a Sprite to go with it. "We’ve never had an issue in this business with a lack of data," says Easterbrook. "It’s drawing the insight and the intelligence out of it."

Revenue likely to grow with the acquisition

McDonald's hasn't shared any specific insights gleaned so far, or numbers around the personalization engine's effect on sales. But it's not hard to imagine some possible scenarios. If someone orders two Happy Meals at 5 o'clock, for instance, that's probably a parent ordering for their kids; highlight a coffee or snack for them, and they might decide to treat themselves to a pick-me-up.
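The "what other customers bought" style of decision logic can be approximated with a simple co-occurrence count over past orders. The sketch below is purely illustrative: the order history and menu items are invented, and Dynamic Yield's actual models are certainly far more sophisticated:

```python
from collections import Counter

# Toy upsell logic: recommend the item most often bought alongside
# the items already in the order. The order history is invented.
history = [
    {"Happy Meal", "Sprite"},
    {"Happy Meal", "Coffee"},
    {"Happy Meal", "Sprite", "Fries"},
    {"Big Mac", "Fries", "Coke"},
    {"Big Mac", "Coke"},
]

def suggest_upsell(current_order, history):
    """Rank candidate items by how often they co-occur with the order."""
    co_counts = Counter()
    for past in history:
        if current_order & past:  # past order shares at least one item
            for item in past - current_order:
                co_counts[item] += 1
    return co_counts.most_common(1)[0][0] if co_counts else None

print(suggest_upsell({"Happy Meal"}, history))  # prints Sprite
```

A production engine would layer context such as time of day, location, and weather on top of this kind of co-purchase signal, which is exactly the "two Happy Meals at 5 o'clock" scenario above.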
And as with any machine-learning system, the real benefits will likely come from the unexpected. While customer satisfaction may be the goal, the avenues McDonald's takes to get there will increase revenues along the way.

Customer personalization is another goal

As you may suspect, McDonald's didn't spend over $300 million on a machine-learning company just to juice up its drive-thru sales. An important part is figuring out how to leverage the "personalization" half of a personalization engine. Fine-tuned insights at the store level are one thing, but Easterbrook envisions something even more granular. "If customers are willing to identify themselves—there’s all sorts of ways you can do that—we can be even more useful to them, because now we call up their favorites," says Easterbrook, who stresses that privacy is paramount. As for what form that might ultimately take, Easterbrook raises a handful of possibilities. McDonald's already uses geofencing around its stores to know when a mobile-app customer is approaching and to prepare their order accordingly.

On the downside of this tech integration

When you know you have to change so much in your company, it's easy to forget some of the consequences. You race to implement all the new tech and don't adequately think about what your employees might make of it. This seems to be happening to McDonald's. As the fast-food chain tries to catch up to food trends that have been established for some time, its employees appear unhappy about it. As Bloomberg reports, the more McDonald's introduces fresh beef, touchscreen ordering, and delivery, the more its employees are thinking: "This is all too much work." One employee at a McDonald's franchisee put it this way at the beginning of the year: "Employee turnover is at an all-time high for us. Our restaurants are way too stressful, and people do not want to work in them."
Workers are walking away rather than dealing with new technologies and menu options. The result: customers wait longer. Already, drive-through times at McDonald's slowed to 239 seconds last year, more than 30 seconds slower than in 2016, according to QSR magazine. Turnover at U.S. fast-food restaurants jumped to 150%, meaning a store employing 20 workers would go through 30 in one year. With all that said, it comes as no surprise that McDonald's announced to the National Restaurant Association on Tuesday that it will no longer participate in lobbying efforts against minimum-wage hikes at the federal, state, or local level; low wages and an all-time-high attrition rate loom as the bigger problems. Of course, technology is supposed to solve all the world's problems while simultaneously eliminating the need for many people. It looks like McDonald's has put all its eggs in the machine-learning and automation basket. Would it not be a rich irony if people saw the technology being introduced and walked out, deciding it was all too much trouble for just a burger?

25 Startups using machine learning differently in 2018: From farming to brewing beer to elder care

An AI startup now wants to monitor your kids’ activities to help them grow ‘securly’

Microsoft acquires AI startup Lobe, a no code visual interface tool to build deep learning models easily


Amazon joins NSF in funding research exploring fairness in AI amidst public outcry over big tech #ethicswashing

Sugandha Lahoti
27 Mar 2019
5 min read
On the heels of Stanford's HCAI Institute (which, mind you, received public backlash for its non-representative faculty makeup), Amazon is collaborating with the National Science Foundation (NSF) to fund research on fairness in AI, with the two organizations investing $10M each in artificial intelligence research grants over a three-year period. The official announcement was made by Prem Natarajan, VP of natural understanding in the Alexa AI group, who wrote in a blog post, "With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry. Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers’ trust."

Per the blog post, Amazon will collaborate with NSF to build trustworthy AI systems that address modern challenges, exploring topics of transparency, explainability, accountability, potential adverse biases and effects, mitigation strategies, validation of fairness, and considerations of inclusivity. Proposals will be accepted from March 26 until May 10 and are expected to result in new open source tools, publicly available data sets, and publications. The two organizations plan to continue the program with calls for additional proposals in 2020 and 2021. There will be 6 to 9 awards of type Standard Grant or Continuing Grant, with award sizes of $750,000 up to a maximum of $1,250,000 for periods of up to 3 years; the anticipated total funding amount is $7,600,000.

"We are excited to announce this new collaboration with Amazon to fund research focused on fairness in AI," said Jim Kurose, NSF's head for Computer and Information Science and Engineering.
“This program will support research related to the development and implementation of trustworthy AI systems that incorporate transparency, fairness, and accountability into the design from the beginning.”

The insidious nexus of private funding in public research: what does Amazon gain from collaborating with NSF?

Amazon's foray into fairness systems looks more like a publicity stunt than a genuine effort to eliminate AI bias. For starters, Amazon said it will not be making the award determinations for this project; NSF alone will make awards in accordance with its merit review process. However, Amazon said its researchers may be involved with the projects as advisors, but only at the request of an awardee, or of NSF with the awardee's consent. As advisors, Amazon may host student interns who wish to gain further industry experience, which seems a bit dicey. Amazon will also not participate in the review process or receive proposal information. NSF will only share with Amazon the summary-level information necessary to evaluate the program, specifically the number of proposal submissions, the number of submitting organizations, and the numbers rated across the various review categories.

There is also the question of who exactly is funding the research, since section VII.B of the solicitation states: "Individual awards selected for joint funding by NSF and Amazon will be funded through separate NSF and Amazon funding instruments."

https://twitter.com/nniiicc/status/1110335108634951680
https://twitter.com/nniiicc/status/1110335004989521920

Nic Weber, the author of the above tweets and Assistant Professor at the UW iSchool, raises another important question: "Why does Amazon get to put its logo on a national solicitation (for a paltry $7.6 million dollars in basic research) when it profits in the multi-billions off of AI that is demonstrably unfair and harmful." Twitter abounded with tweets from those working in tech questioning Amazon's collaboration.
https://twitter.com/mer__edith/status/1110560653872373760
https://twitter.com/patrickshafto/status/1110748217887649793
https://twitter.com/smunson/status/1110657292549029888
https://twitter.com/haldaume3/status/1110697325251448833

Amazon has already been under fire for its controversial decisions in the recent past. In June last year, when the US Immigration and Customs Enforcement agency (ICE) began separating migrant children from their parents, Amazon came under fire as one of the tech companies that aided ICE with the software required to do so. Amazon has also faced constant criticism since news broke that it had sold its facial recognition product, Rekognition, to a number of law enforcement agencies in the U.S. in the first half of 2018. A January study by the Massachusetts Institute of Technology found Rekognition incapable of reliably determining the sex of female and darker-skinned faces in certain scenarios. Amazon has yet to fix this bias, and yet it has now started a new collaboration with NSF that, ironically, focuses on building bias-free AI systems. Amazon's Ring (a smart-doorbell company) also came under public scrutiny in January, after it gave its employees access to live footage from customers' cameras.

In other news, Google yesterday formed an external AI advisory council to help advance the responsible development of AI. More details here.

Amazon won’t be opening its HQ2 in New York due to public protests

Amazon admits that facial recognition technology needs to be regulated

Amazon’s Ring gave access to its employees to watch live footage of the customers, The Intercept reports
Four versions of Wikipedia go offline in a protest against the EU Copyright Directive, which will affect free speech online

Savia Lobo
22 Mar 2019
5 min read
Yesterday, March 21, four versions of Wikipedia (German, Danish, Czech, and Slovak) went dark in a move to oppose the recent EU Copyright Directive, which will be up for a vote on Tuesday, March 26. These long-awaited updates to copyright law include “important wins for the open community in the current text”, the Wikimedia Foundation reports. However, “the inclusion of Articles 11 and 13 will harm the way people find and share information online”, Wikimedia further states. The major opposition is to the controversial Article 13.

Article 11 states that if a text contains more than a snippet from an article, it must be licensed and paid for by whoever quotes the text. “While each country can define "snippet" however it wants, the Directive does not stop countries from making laws that pass using as little as three words from a news story”, the Electronic Frontier Foundation mentions.

Article 13 is, however, the most controversial and is set to restructure how copyright works on the web. As of now, to take down content that infringes copyright, the rights holder just has to send a ‘takedown notice’. With Article 13 in place, however, there will be no such protection for online services; the article “relieves rights-holders of the need to check the Internet for infringement and send out notices. Instead, it says that online platforms have a duty to ensure that none of their users infringe copyright.”

According to The Next Web, “To make people understand how serious the effects of the Copyright Reform will be if it’s passed, Reddit and Wikipedia will hinder access to their sites in the EU to mimic the effects of the directive.”

Both Article 11 and Article 13 were reintroduced under the leadership of German Member of the European Parliament (MEP) Axel Voss, even though both had already been discarded as unworkable after expert advice.
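To see why critics expect the filtering implied by Articles 11 and 13 to be error-prone, consider a minimal sketch of the kind of naive snippet matching such a duty could encourage. The snippet list and function are hypothetical, not any platform's actual system:

```python
# Hedged sketch of a naive "upload filter": block any upload containing a
# snippet from a protected work. Exact matching flags legitimate quotation
# and criticism just as readily as wholesale copying.

PROTECTED_SNIPPETS = [
    # Pretend this opening line is a rights-holder-registered snippet.
    "it was the best of times, it was the worst of times",
]

def would_block(upload_text: str) -> bool:
    """Return True if the upload contains any protected snippet."""
    text = upload_text.lower()
    return any(snippet in text for snippet in PROTECTED_SNIPPETS)

piracy = "Full text dump: It was the best of times, it was the worst of times..."
review = ('My review: the opening line "It was the best of times, '
          'it was the worst of times" is iconic.')

print(would_block(piracy))  # True
print(would_block(review))  # True -- a false positive: a quote in a review is blocked too
```

Real filters are more sophisticated than substring matching, but the underlying problem is the same: a machine cannot reliably tell infringement from quotation, parody, or news reporting at upload time.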
“Voss's insistence that Articles 11 and 13 be included in the final Directive has been a flashpoint for public anger, drawing criticism from the world's top technical, copyright, journalistic, and human rights experts and organizations”, the Electronic Frontier Foundation reports. “Critics say the politicians behind the legislation do not understand the breadth of the laws they are proposing, and that the directive, if implemented, will harm free expression online”, The Verge reports.

Platforms such as Tumblr and YouTube, and many others that host user-generated content, will be in the crosshairs if Article 13 passes, becoming legally responsible if their users upload copyrighted content. According to The Verge, “The only way to stop these uploads, say critics, will be to scan content before its uploaded, leading to the creation of filters that will likely be error-prone and abused by copyright trolls.”

Many have protested against Article 13 in recent weeks. In Germany, about 3,500 people held a rally in Berlin against the new copyright plans, and a petition, ‘Save the Internet’, has already gathered more than five million signatures. Reddit has also taken action against the Copyright Directive by flashing a simulated error message when Reddit desktop users in EU countries attempt to make a top-level post. According to Reddit, “This experience, meant to mimic the automated filters that users would encounter should the Directive pass, will last through March 23rd, when IRL demonstrations are planned across Europe.”

Julia Reda, a member of the European Parliament from Germany, mentions in her blog post, “For two years we’ve debated different drafts and versions of the controversial Articles 11 and 13. Now, there is no more ambiguity: This law will fundamentally change the internet as we know it – if it is adopted in the upcoming final vote.
But we can still prevent that!”

United Nations’ free-speech rapporteur David Kaye said, “Europe has a responsibility to modernize its copyright law to address the challenges of the digital age. But this should not be done at the expense of the freedom of expression that Europeans enjoy today… Article 13 of the proposed Directive appears destined to drive internet platforms toward monitoring and restriction of user-generated content even at the point of upload. Such sweeping pressure for pre-publication filtering is neither a necessary nor proportionate response to copyright infringement online.”

A user on HackerNews writes, “I hope they win and that Article 11 and 13 will be removed. I think this is an important moment in the birth of EU democracy because it feels to me that one of the first times, there is a big public discussion about an issue and the people at the center aren't national politicians like Merkel or Macron but EU MEPs, namely Voss vs Reda. The EU has rightfully been criticized of not being democratic enough, and this discussion feels like it's very much democratic.”

https://twitter.com/Wikipedia/status/1108595296068501504

Five EU countries oppose the EU copyright directive

Reddit’s 2018 Transparency report includes copyright removals, restorations, and more!

Drafts of Article 13 and the EU Copyright Directive have been finalized
Google and Facebook work hard to clean up their image after the media backlash from the Christchurch terrorist attack

Fatema Patrawala
22 Mar 2019
8 min read
Last Friday’s uncontrolled spread of horrific videos of the Christchurch mosque attack, a propaganda coup for those espousing hateful ideologies, raised hard questions about social media. The tech companies scrambled to take action in time, given the speed and volume of content that was uploaded, reuploaded, and shared by users worldwide. In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media. The failure highlighted how social media companies struggle to police platforms that are massively lucrative and persistently vulnerable to outside manipulation, despite years of promises to do better.

After the white supremacist live-streamed the attack and the video was uploaded to Facebook, Twitter, YouTube, and other platforms across the internet, these tech companies faced a backlash from the media and internet users worldwide, to the extent that they were regarded as complicit in promoting white supremacism. In response, Google and Facebook have provided status reports on what they went through when the video was reported, the kinds of challenges they faced, and their next steps to combat such incidents in future.

Google’s report so far...

Google, in an email to Motherboard, says it employs 10,000 people to moderate the company’s platforms and products. It also described the process it follows when a user reports a piece of potentially violating content, such as the attack video:

The user-flagged report goes to a human moderator to assess. The moderator is instructed to flag all pieces of content related to the attack as “Terrorist Content,” including the full-length manifesto or sections of it. Because of the document’s length, the email tells moderators not to spend an extensive amount of time trying to confirm whether a piece of content does contain part of the manifesto.
Instead, if the moderator is unsure, they should err on the side of caution and still label the content as “Terrorist Content,” which will then be reviewed by a second moderator. The second moderator is told to take time to verify that it is a piece of the manifesto, and to mark the content as terrorism no matter how long or short the section may be. Moderators are told to mark the manifesto or video as terrorism content unless there is an Educational, Documentary, Scientific, or Artistic (EDSA) context to it. Google further adds that it wants to preserve journalistic or educational coverage of the event, but does not want to allow the video or manifesto itself to spread throughout the company’s services without additional context.

Google at some point took the unusual step of automatically rejecting any footage of violence from the attack video, cutting out the process of a human determining the context of the clip. If, say, a news organization was impacted by this change, the outlet could appeal the decision, Google commented. “We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review,” YouTube’s Chief Product Officer Neal Mohan told the Washington Post in an article published Monday.

Google also tweaked the search function to show results from authoritative news sources, and suspended the ability to search for clips by upload date, making it harder for people to find copies of the attack footage. "Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter," a YouTube spokesperson said. “Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, we know there is much more work to do,” the statement added.

Facebook’s update so far...

Facebook on Wednesday also shared an update on how it has been working with the New Zealand Police to support their investigation.
It provided additional information on how its products were used to circulate the videos and how it plans to improve them. So far Facebook has provided the following information:

The video was viewed fewer than 200 times during the live broadcast.
No users reported the video during the live broadcast.
Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook.
Before Facebook was alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site.
The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
In the first 24 hours, Facebook removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on its services. Approximately 300,000 additional copies were removed after they were posted.

Questions were also asked about why artificial intelligence (AI) didn’t detect the video automatically. Facebook says AI has made massive progress over the years in proactively detecting the vast majority of the content it removes, but it’s not perfect. “To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare,” says Guy Rosen, VP of Product Management at Facebook.

Guy further adds, “AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us.
That’s why last year Facebook more than doubled the number of people working on safety and security to over 30,000, including about 15,000 content reviewers.”

Facebook further plans to:

Improve its image and video matching technology so that it can stop the spread of viral videos of this nature, regardless of how they were originally produced.
React faster to this kind of content in a live-streamed video.
Continue to combat hate speech of all kinds on its platform.
Expand industry collaboration through the Global Internet Forum to Counter Terrorism (GIFCT).

Challenges Google and Facebook faced in moderating the video content

According to Motherboard, Google saw an unprecedented number of attempts to post footage from the attack, sometimes as fast as a piece of content per second. The challenge it faced was to block access to the killer’s so-called manifesto, a 74-page document that spouted racist views and explicit calls for violence. Google described the difficulties of moderating the manifesto, pointing to its length and the issue of users sharing snippets that Google’s content moderators may not immediately recognise. “The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing,” says Google.

A source with knowledge of Google’s strategy for moderating the New Zealand attack material said this can complicate moderation efforts because some outlets did use parts of the video and manifesto. UK newspaper The Daily Mail let readers download the terrorist’s manifesto directly from the paper’s own website, and Sky News Australia aired parts of the attack footage, BuzzFeed News reported.

Facebook, on the other hand, faces the challenge of automatically discerning such content from visually similar, innocuous content.
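Image and video matching of this kind is commonly built on perceptual hashing, which tolerates small changes but not larger edits. A minimal sketch, using invented 2x2 "images" rather than any platform's actual system, shows why re-encoded copies still match while re-cut variants escape:

```python
# Hedged sketch of perceptual ("average") hashing, one family of techniques
# behind image/video matching. Images here are tiny invented grayscale grids.

def average_hash(pixels):
    """Hash a 2D grid of 0-255 values: one bit per pixel, above/below the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits between two hashes; 0 means a confident match."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
noisy    = [[12, 198], [221, 29]]   # slight re-encoding noise
cropped  = [[200, 10], [30, 220]]   # re-cut / mirrored variant

h = average_hash(original)
print(hamming(h, average_hash(noisy)))    # 0 -> still matches the original
print(hamming(h, average_hash(cropped)))  # 4 -> variant escapes a naive threshold
```

This is why a coordinated community re-cutting and re-recording a video can produce hundreds of "visually-distinct variants" that each require fresh detection.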
For example, if thousands of videos from live-streamed video games were flagged by its systems, reviewers could miss the important real-world videos where they could alert first responders to get help on the ground. Another challenge for Facebook, similar to Google’s, is that the proliferation of many different variants of the video makes it difficult for its image and video matching technology to prevent it from spreading further. First, Facebook found that a core community of bad actors was working together to continually re-upload edited versions of the video in ways designed to defeat its detection. Second, a broader set of people distributed the video and unintentionally made it harder to match copies: websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats. In total, Facebook found and blocked over 800 visually-distinct variants of the video in circulation.

Both companies seem to be working hard to improve their products and win back users’ trust and confidence.

How social media enabled and amplified the Christchurch terrorist attack

Google to be the founding member of CDF (Continuous Delivery Foundation)

Google announces the stable release of Android Jetpack Navigation
Women win all open board director seats in Open Source Initiative 2019 board elections

Savia Lobo
19 Mar 2019
3 min read
The recently held Open Source Initiative 2019 Board elections elected six directors to its eleven-person Board: two from the affiliate membership and four from the individual membership. Not only did many women run for the seats; women won them all!

The winners include two returning directors, Carol Smith and Molly de Blanc, and three new directors, Pamela Chestek, Elana Hashman, and Hong Phuc Dang. Pamela Chestek (nominated by The Document Foundation) and Molly de Blanc (nominated by the Debian Project) captured the most votes from OSI Affiliate Members. The last seat is a tie between Christine Hall and Mariatta Wijaya, and hence a runoff election will be required to identify the final OSI Board Director. The runoff election started yesterday, March 18th (opening at 12:00 a.m. / 00:00) and will end on Monday, March 25th (closing at 12:00 a.m. / 00:00).

Mariatta Wijaya, a core Python developer and a platform engineer at Zapier, told Business Insider that she found not all open source projects were as welcoming, especially to women. That's one reason why she's running for the board of the Open Source Initiative, an influential organization that promotes and protects open source software communities. Wijaya also said, "I really want to see better diversity across the people who contribute to open source. Not just the users, the creators of open source. I would love to see that diversity improve. I would like to see a better representation. I did find it a barrier initially, not seeing more people who look like me in this space, and I felt like an outsider."

A post on Slashdot, a tech-focused social news website, discussed the six female candidates in misogynistic language and labeled each woman with how much of a "threat" she supposedly was. Slashdot took the post down, but shortly afterward the OSI started seeing inappropriate comments posted on its website.
https://twitter.com/alicegoldfuss/status/1102609189342371840

Molly de Blanc and Patrick Masson said this was the first time they had seen this type of harassment of female OSI board candidates. They also said that such harassment is not uncommon in open source.

Joshua R. Simmons, an open source advocate and web developer, tweeted about “women winning 100% of the open seats in an election that drew attention from a cadre of horrible misogynists”.

https://twitter.com/joshsimmons/status/1107303020293832704

OSI President Simon Phipps said that the OSI committee is “thrilled the electorate has picked an all-female cohort to the new Board”.

https://twitter.com/webmink/status/1107367907825274886

To know more about these elections in detail, head over to the OSI official blog post.

UPDATED: In the previous draft, Pamela Chestek, who was listed as a returning board member, is a new board member; and Carol Smith, who was listed as a new board member, is a returning member.

#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment

MongoDB withdraws controversial Server Side Public License from the Open Source Initiative’s approval process

Google’s pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis
How social media enabled and amplified the Christchurch terrorist attack

Fatema Patrawala
19 Mar 2019
11 min read
The recent horrifying terrorist attack in New Zealand has cast new blame on how technology platforms police content. There are now questions about whether global internet services are designed to work this way, and whether online viral hate is uncontainable.

Fifty-one people have so far been reported dead and 50 more injured after the terrorist attacks on two New Zealand mosques on Friday. The victims included children as young as 3 and 4 years old, and elderly men and women. The alleged shooter is a 28-year-old Australian man named Brenton Tarrant. Brenton announced the attack on the anonymous-troll message board 8chan, where he posted images of the weapons days before the attack and made an announcement an hour before the shooting. On 8chan, Facebook, and Twitter, he also posted links to a 74-page manifesto, titled “The Great Replacement,” blaming immigration for the displacement of whites in Oceania and elsewhere. The manifesto cites “white genocide” as a motive for the attack, and calls for “a future for white children” as its goal. He also live-streamed the attacks on Facebook and YouTube, and posted a link to the stream on 8chan.

It’s terrifying and disgusting, especially since 8chan is one of the sites where disaffected internet misfits create memes and other messages to provoke dismay and sow chaos. “8chan became the new digital home for some of the most offensive people on the internet, people who really believe in white supremacy and the inferiority of women,” Ethan Chiel wrote. “It’s time to stop shitposting,” the alleged shooter’s 8chan post reads, “and time to make a real-life effort post.” Many of the responses, anonymous by 8chan’s nature, celebrate the attack, with some posting congratulatory Nazi memes. A few seem to decry it, if only over logistical quibbles. And others lament that the whole affair might destroy the site, a concern that betrays its users’ priorities.
Social media encourages performance crime

The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”: the video streaming is a central component of the violence itself, not somehow incidental to the crime, nor a trophy for the perpetrator to re-watch later. In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, with social media in our hands, it is much easier for someone both to create the spectacle of horrific violence and to distribute it widely by themselves.

There is a tragic and recent history of performance-crime videos that use live streaming and social media video services as part of their tactics. In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was live streamed. In 2015, the murder of two journalists was simultaneously broadcast on-air and live streamed.

Tech companies on the radar

Social-media companies scrambled to take action as the news, and the video, of the attack spread. Facebook finally managed to pull down Tarrant’s profiles and the video, but only after New Zealand police brought the live-stream to the company’s attention. It has been working "around the clock" to remove videos of the incident shared on its platform. In a statement posted to Twitter on Sunday, the company said that within 24 hours of Friday’s shooting it had removed 1.5 million videos of the attack from its platform globally. YouTube said it had also removed an “unprecedented volume” of videos of the shooting. Twitter also suspended Tarrant’s account, where he had posted links to the manifesto on several file-sharing sites.
The chaotic aftermath mostly took place while many North Americans slept unaware, waking up to the news and its associated confusion. By morning on the East Coast, news outlets had already weighed in on whether technology companies might be partly to blame for catastrophes such as the New Zealand massacre because they have failed to catch offensive content before it spreads. One of the tweets says Google, Twitter and Facebook made a choice not to use tools available to them to stop white supremacist terrorism.

https://twitter.com/samswey/status/1107055372949286912

Countries like Germany and France already have laws in place that demand social media sites move quickly to remove hate speech, fake news and illegal material. Sites that do not remove "obviously illegal" posts could face fines of up to 50m euros (£44.3m). In the wake of the attack, a consortium of New Zealand’s major companies has pledged to pull their advertising from Facebook. In a joint statement, the Association of New Zealand Advertisers (ANZA) and the Commercial Communications Council asked domestic companies to think about where “their advertising dollars are spent, and carefully consider, with their agency partners, where their ads appear.” They added, “We challenge Facebook and other platform owners to immediately take steps to effectively moderate hate content before another tragedy can be streamed online.” Additionally, internet service providers in New Zealand, such as Vodafone, Spark and Vocus, are blocking access to websites that do not respond or refuse to comply with requests to remove reuploads of the shooter’s original live stream.

The free speech vs safety debate puts social media platforms in the crosshairs

Tech companies are facing new questions on content moderation following the New Zealand attack. The shooter posted a link to the live stream, and soon after he was apprehended, reuploads were found on other platforms like YouTube and Twitter.
“Tech companies basically don’t see this as a priority,” the counter-extremism policy adviser Lucinda Creighton commented. “They say this is terrible, but what they’re not doing is preventing this from reappearing.” Others affirmed the importance of quelling the spread of the manifesto, video, and related materials, for fear of producing copycats, or at least of furthering radicalization among those receptive to the message.

The circulation of ideas might have motivated the shooter as much as, or even more than, ethnic violence. As Charlie Warzel wrote at The New York Times, the New Zealand massacre seems to have been made to go viral. Tarrant teased his intentions and preparations on 8chan. When the time came to carry out the act, he provided a trove of resources for 8chan’s anonymous users, scattered to the winds of mirror sites and repositories. Once the live-stream started, one of the 8chan users posted “capped for posterity” on Tarrant’s thread, meaning that he had downloaded the stream’s video for archival and, presumably, future upload to other services, such as Reddit or 4chan, where other like-minded trolls or radicals would ensure the images spread even further. As Warzel put it, “Platforms like Facebook, Twitter, and YouTube … were no match for the speed of their users.” The internet is a Pandora’s box that never had a lid.

Camouflaging content is easy, but companies are trying hard to build AI to catch it

Last year, Mark Zuckerberg defended himself and Facebook before Congress against myriad failures, which included Russian operatives disrupting American elections and permitting illegal housing ads that discriminate by race. Zuckerberg repeatedly invoked artificial intelligence as a solution for the problems his and other global internet companies have created. There’s just too much content for human moderators to process, even when pressed hard to do so under poor working conditions.
The answer, Zuckerberg has argued, is to train AI to do the work for them. But that technique has proved insufficient, because detecting and scrubbing undesirable content automatically is extremely difficult. False positives enrage earnest users or foment conspiracy theories among paranoid ones, thanks to the black-box nature of computer systems. Worse, given a pool of billions of users, the clever ones will always find ways to trick any computer system, for example by slightly modifying images or videos so that they appear different to the computer but identical to human eyes. 8chan, as it happens, is largely populated by computer-savvy people who have self-organized to perpetrate exactly those kinds of tricks.

The primary sources of content are only part of the problem. Long after the deed, YouTube users have bolstered conspiracy theories about murders, successfully replacing truth with lies among broad populations of users who might not even know they are being deceived. Even stock-photo providers are licensing stills from the New Zealand shooter’s video; a Reuters image that shows the perpetrator wielding his rifle as he enters the mosque is simply credited, “Social media.”

Interpreting real motives is difficult on social media

The video is just the tip of the iceberg. Many smaller and less obviously inflamed messages have no hope of being found, isolated, and removed by technology services. The shooter praised Donald Trump as a “symbol of renewed white identity” and goaded the conservative commentator Candace Owens, who took the bait on Twitter in a post that got retweeted thousands of times by the morning after the attack. The shooter’s forum posts and video are littered with memes and inside references that bear special meaning within certain communities on 8chan, 4chan, Reddit, and other corners of the internet, offering tempting receptors for consumption and further spread.
Perhaps worst of all, the forum posts, the manifesto, and even the shooting itself might not have been carried out with the purpose that a literal reading of their contents suggests. At first glance, it seems impossible to deny that this terrorist act was motivated by white-extremist hatred, an animosity that authorities like the FBI and Facebook officials would want to snuff out before it spreads. But 8chan is notorious for users who adopt an ironic, crude persona under the cover of anonymity. They use humor, memes, and urban slang to promote chaos and divisive rhetoric. The internet separates images from context and action from intention, and then spreads those messages quickly among billions of people scattered around the globe. That structure makes it impossible to even know what individuals like Tarrant “really mean” by their words and actions. As it spreads, social-media content neuters earnest purpose entirely, putting it on the same level as anarchic randomness. What a message means collapses into how it gets used and interpreted. For 8chan trolls, any ideology might be as good as any other, so long as it produces chaos.

We all have a role to play

It’s easy to say that technology companies can do better. They can, and they should. But ultimately, content moderation is not the solution by itself; the problem is the media ecosystem these companies have created. The only surprise is that anyone would still be surprised that social media produce this tragic abyss, for this is what social media are supposed to do, what they were designed to do: spread the images and messages that accelerate interest and invoke raw emotions, without check, and absent concern for their consequences.

We hope that social media companies get better at filtering out violent content and explore alternative business models, and that governments think critically about cyber laws that protect both people and speech. But until they do, we should reflect on our own behavior too.
As news outlets, we shape the narrative through our informed perspectives, which makes it imperative to publish legitimate and authentic content. As users, too, let’s make deliberate choices about what we like and share on social platforms. Let’s consider how our activities could contribute to an overall spectacle society that might inspire future perpetrator-produced videos of such gruesome crimes, and act accordingly. In this era of social spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks and shares.

The Indian government proposes to censor social media content and monitor WhatsApp messages

Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin

Mastodon 2.7, a decentralized alternative to social media silos, is now out!
The U.S. DoD wants to dominate Russia and China in Artificial Intelligence. Last week gave us a glimpse into that vision.

Savia Lobo
18 Mar 2019
9 min read
In a hearing on March 12, the subcommittee on emerging threats and capabilities received testimony on Artificial Intelligence initiatives within the Department of Defense (DoD). The panel included Peter Highnam, Deputy Director of the Defense Advanced Research Projects Agency (DARPA); Michael Brown, Director of the DoD Defense Innovation Unit (DIU); and Lieutenant General John Shanahan, Director of the Joint Artificial Intelligence Center (JAIC). The panel broadly testified to senators that AI will significantly transform DoD’s capabilities and that it is critical the U.S. remain competitive with China and Russia in developing AI applications.

Dr. Peter T. Highnam on DARPA’s achievements and future goals

Dr. Peter T. Highnam, Deputy Director of the Defense Advanced Research Projects Agency, talked about DARPA’s significant role in the development of AI technologies that have produced game-changing capabilities for the Department of Defense and beyond. In his testimony, he mentions, “DARPA’s AI Next effort is simply a continuing part of its historic investment in the exploration and advancement of AI technologies.”

Dr. Highnam highlighted different waves of AI technologies. The first wave, nearly 70 years ago, emphasized handcrafted knowledge: computer scientists constructed so-called expert systems that captured rules the system could then apply to situations of interest. Handcrafting rules, however, was costly and time-consuming. The second wave brought in machine learning, which applies statistical and probabilistic methods to large data sets to create generalized representations that can be applied to future samples. This, however, requires training deep learning (artificial) neural networks on a variety of classification and prediction tasks, using adequate historical data. Therein lies the rub: collecting, labelling, and vetting the data on which to train is itself prohibitively costly and time-consuming.
He says, "DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools." Toward this end, DARPA is focusing its investments on a "third wave" of AI technologies that brings forth machines that can reason in context. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy.

DARPA's more than $2 billion "AI Next" campaign, announced in September 2018, includes providing robust foundations for second-wave technologies, aggressively applying second-wave AI technologies in appropriate systems, and exploring and creating third-wave AI science and technologies. DARPA's third-wave research efforts will forge new theories and methods that make it possible for machines to adapt contextually to changing situations, advancing computers from tools to true collaborative partners. Furthermore, the agency will be fearless about exploring these new technologies and their capabilities (DARPA's core function), pushing critical frontiers ahead of our nation's adversaries. To know more about this in detail, read Dr. Peter T. Highnam's complete statement.

Michael Brown on the Defense Innovation Unit's (DIU) efforts in Artificial Intelligence

Michael Brown, Director of the Defense Innovation Unit, opened by highlighting how heavily China and Russia are investing to become dominant in AI.
"By 2025, China will aim to achieve major breakthroughs in AI and increase its domestic market to reach $59.6 billion (RMB 400 billion). To achieve these targets, China's National Development and Reform Commission (China's industrial policy-making agency) funded the creation of a national AI laboratory, and Chinese local governments have pledged more than $7 billion in AI funding," Brown said in his statement. He said that Chinese firms are, in a way, leveraging U.S. talent by setting up research institutes in the United States, investing in U.S. AI-related startups and firms, recruiting U.S.-based talent, and forming commercial and academic partnerships.

Brown said that DIU will engage with DARPA and the JAIC (Joint Artificial Intelligence Center) and also make its commercial knowledge and relationships with potential vendors available to any of the Services and Service Labs. DIU also anticipates that, through its close partnership with the JAIC, it will be at the leading edge of the Department's National Mission Initiatives (NMIs), proving that commercial technology can be applied to critical national security challenges via accelerated prototypes that lay the groundwork for future scaling through the JAIC.

"DIU looks to bring in key elements of AI development pursued by the commercial sector, which relies heavily on continuous feedback loops, vigorous experimentation using data, and iterative development, all to achieve the measurable outcome, mission impact," Brown mentions. DIU's AI portfolio team combines commercial-sector depth in AI, machine learning, and data science with military operators. The team has prioritized projects that address three major impact areas, or use cases, employing AI technology:

Computer vision

DIU is prototyping computer vision algorithms in humanitarian assistance and disaster recovery scenarios.
"This use of AI holds the potential to automate post-disaster assessments and accelerate search and rescue efforts on a global scale," Brown said in his statement.

Large dataset analytics and predictions

DIU is prototyping predictive maintenance applications for Air Force and Army platforms. DIU plans to partner with the JAIC to scale this solution across multiple aircraft platforms, as well as ground vehicles, beginning with DIU's complementary predictive maintenance project focusing on the Army's Bradley Fighting Vehicle. Brown says this is one of DIU's highest-priority projects for FY19, given its enormous potential impact on readiness and cost reduction.

Strategic reasoning

DIU is prototyping an application from Project VOLTRON that leverages AI to reason about high-level strategic questions, map probabilistic chains of events, and develop alternative strategies. This will make DoD-owned systems more resilient to cyber attacks and inform program offices of configuration errors faster and with fewer errors than humans. To know more about what DIU plans in partnership with DARPA and the JAIC, read Michael Brown's complete testimony.

Lieutenant General Jack Shanahan on making the JAIC "AI-Ready"

Lieutenant General Jack Shanahan, Director, Joint Artificial Intelligence Center, touched upon how the JAIC is partnering with the Under Secretary of Defense (USD) for Research & Engineering (R&E), the role of the Military Services, the Department's initial focus areas for AI delivery, and how the JAIC is supporting whole-of-government efforts in AI.

"To derive maximum value from AI application throughout the Department, JAIC will operate across an end-to-end lifecycle of problem identification, prototyping, integration, scaling, transition, and sustainment.
Emphasizing commerciality to the maximum extent practicable, JAIC will partner with the Services and other components across the Joint Force to systematically identify, prioritize, and select new AI mission initiatives," Shanahan mentions in his testimony.

The AI capability delivery efforts that go through this lifecycle fall into two categories: National Mission Initiatives (NMIs) and Component Mission Initiatives (CMIs). An NMI is an operational or business-reform joint challenge, typically identified from the National Defense Strategy's key operational problems, requiring multi-service innovation, coordination, and the parallel introduction of new technology and new operating concepts. A CMI, on the other hand, is a component-level challenge that can be solved through AI. The JAIC will work closely with individual components on CMIs to help identify, shape, and accelerate their component-specific AI deployments through funding support; use of common foundational tools, libraries, and cloud infrastructure; application of best practices; partnerships with industry and academia; and so on. The component will be responsible for identifying and implementing the organizational structure required to accomplish its project in coordination and partnership with the JAIC.

Following are some of the JAIC's early NMIs, intended to deliver mission impact at speed, demonstrate proof of concept for the JAIC operational model, enable rapid learning and iterative process refinement, and build a library of reusable tools while validating the JAIC's enterprise cloud architecture.

Perception

Improve the speed, completeness, and accuracy of Intelligence, Surveillance, and Reconnaissance (ISR) Processing, Exploitation, and Dissemination (PED). Shanahan says Project Maven's efforts are included here.
Predictive Maintenance (PMx)

Provide computational tools to decision-makers to help them better forecast, diagnose, and manage maintenance issues, increasing availability, improving operational effectiveness, and ensuring safety at a reduced cost.

Humanitarian Assistance/Disaster Relief (HA/DR)

Reduce the time associated with search and discovery, resource allocation decisions, and executing rescue and relief operations to save lives and livelihoods during disaster operations. Here, the JAIC plans to apply lessons learned and reusable tools from Project Maven to field AI capabilities in support of federal responses to events such as wildfires and hurricanes, where DoD plays a supporting role.

Cyber Sensemaking

Detect and deter advanced adversarial cyber actors who infiltrate and operate within the DoD Information Network (DoDIN) to increase DoDIN security, safeguard sensitive information, and allow warfighters and engineers to focus on strategic analysis and response.

Shanahan states, "Under the DoD CIO's authorities and as delineated in the JAIC establishment memo, JAIC will coordinate all DoD AI-related projects above $15 million annually." "It does mean that we will start to ensure, for example, that they begin to leverage common tools and libraries, manage data using best practices, reflect a common governance framework, adhere to rigorous testing and evaluation methodologies, share lessons learned, and comply with architectural principles and standards that enable scale," he further added.

To know more about this in detail, read Lieutenant General Jack Shanahan's complete testimony, or watch the entire hearing on 'Artificial Intelligence Initiatives within the Department of Defense'.
Natasha Mathur
14 Mar 2019
6 min read

#GooglePayoutsForAll: A digital protest against Google’s $135 million execs payout for misconduct

The Google Walkout for Real Change group tweeted out their protest earlier this week against the news that Google confirmed paying $135 million in exit packages to two top execs accused of sexual misconduct. The group castigated the "multi-million dollar payouts" and asked people to use the hashtag #GooglePayoutsForAll to demonstrate different and better ways this obscenely large amount of "hush money" could have been used.

https://twitter.com/GoogleWalkout/status/1105556617662214145

The news of Google paying its senior execs, namely Amit Singhal (former Senior VP of Google Search) and Andy Rubin (creator of Android), high exit packages was first highlighted in a report by the New York Times last October. As per the report, Google paid $90 million to Rubin and $15 million to Singhal. A lawsuit filed on Monday this week by James Martin, an Alphabet shareholder, further confirmed this news. The lawsuit states that this decision by Alphabet's directors caused the company significant financial harm, apart from deteriorating its reputation, goodwill, and market capitalization.

Meredith Whittaker, one of the early organizers of the Google Walkout in November last year, tweeted, "$135 million could fix Flint's water crisis and still have $80 million left." Vicki Tardif, another Googler, summed up the sentiments in her tweet: "$135M is 1.35 times what Google.org gave out in grants in 2016." An ACLU researcher pointed out that, in addition to feeding the hungry, housing the homeless, and paying off some student loans, $135M could also support local journalism killed by online ads.

The public support for the call to protest using the hashtag #GooglePayoutsForAll has been awe-inspiring. Some shared their stories of injustice in cases of sexual assault, some condemned Google for its handling of sexual misconduct, while others put the amount of money Google wasted on these execs into a larger perspective.
Better ways Google could have used the $135 million it wasted on exec payouts, according to Twitter

Invest in people to reduce structural inequities in the company

$135M could have been paid to the actual victims who faced harassment and sexual assault.
https://twitter.com/xzzzxxzx/status/1105681517584572416

Google could have used the money to fix the wage and level gap for women of color within the company.
https://twitter.com/sparker2/status/1105511306465992705

$135 million could be used to close the 16% median pay gap of the 1,240 women working in Google's UK offices.
https://twitter.com/crschmidt/status/1105645484104998913

$135M could have been used by Google for TVC (temps, vendors, and contractors) benefits. It could also provide rigorous training to Google employees on the impact misinformation within the company can have on women and other marginalized groups.
https://twitter.com/EricaAmerica/status/1105546835526107136

For $135M, Google could have paid the 114 creators featured in its annual "YouTube Rewind", who are otherwise unpaid for their time and participation.
https://twitter.com/crschmidt/status/1105641872033230848

Improve communities by supporting social causes

Google could have paid $135M to RAINN, the largest American anti-sexual-assault nonprofit, covering its expenses for the next 18 years.
https://twitter.com/GoogleWalkout/status/1105450565193121792

Fund 1,800 school psychologists in public schools for one year.
https://twitter.com/markfickett/status/1105640930936324097

Build real, affordable housing solutions in collaboration with London Breed, SFGOV, and other Bay Area officials.
https://twitter.com/jillianpuente/status/1105922474930245636

$135M could provide insulin for nearly 10,000 people with Type 1 diabetes in the US.
https://twitter.com/GoogleWalkout/status/1105585078590210051

Pay the first year of care for 1,000 people with stage IV breast cancer.
https://twitter.com/GoogleWalkout/status/1105845951938347008

Be a responsible corporate citizen

Fund approximately 5,300 low-cost electric vehicles for Google staff, saving around 25,300 metric tons of carbon dioxide in vehicle emissions per year.
https://twitter.com/crschmidt/status/1105698893361233926

Provide free Google Fiber internet to 225,000 homes for a year.
https://twitter.com/markfickett/status/1105641215389773825

Give a $5/hr raise to 12,980 service workers at Silicon Valley tech campuses.
https://twitter.com/LAuerhahn/status/1105487572069801985

$135M could have funded the construction of affordable homes, protecting 1,100 low-income families in San Jose from the coming rent hikes around Google's planned mega-campus.
https://twitter.com/JRBinSV/status/1105478979543154688

#GooglePayoutsForAll: Another initiative to promote awareness of structural inequities in tech

The core idea behind launching #GooglePayoutsForAll on Twitter was to promote awareness of the real issues within the company. The campaign urged people to discuss how Google is failing to maintain the 'open culture' it promises to the outside world. It also highlights how mottos such as "Don't be Evil" and "Do the right thing" that Google stood by only make for pretty wall decor, and that there's still a long way to go to see those ideals in action.
The group gained its name when more than 20,000 Google employees, along with temps, vendors, and contractors, organized the Google "Walkout for Real Change" and walked out of their offices in November 2018. The walkout was a protest against the hushed and unfair handling of sexual misconduct within Google. Ever since, Googlers have been consistently taking initiatives to bring more transparency, accountability, and fairness to the company. For instance, the team launched an industry-wide awareness campaign against forced arbitration in January, sharing information about arbitration on their Twitter and Instagram accounts throughout the day. The campaign was a success: Google finally ended its forced arbitration policy, which goes into effect this month for all employees (including contractors, temps, and vendors) and for all kinds of discrimination. House and Senate members in the US also proposed a bipartisan bill last month to prohibit companies from using forced arbitration clauses.

Although many found the #GooglePayoutsForAll idea praiseworthy, some believe the initiative doesn't put any real pressure on Google to bring about real change within the company.

https://twitter.com/Jeffanie16/status/1105541489722081290
https://twitter.com/Jeffanie16/status/1105546783063752709
https://twitter.com/Jeffanie16/status/1105547341862457344

Now, we don't necessarily disagree with this opinion; however, the initiative can't be completely disregarded, as it managed to make people who would otherwise hesitate to open up talk extensively about the real issues within the company. As Liz Fong-Jones puts it, "Strikes and walkouts are more sustainable long-term than letting Google drive each organizer out one by one. But yes, people *are* taking action in addition to speaking up. And speaking up is a bold step in companies where workers haven't spoken up before."
The Google Walkout group has not yet announced what it intends to do next following this digital protest. However, the group has been organizing meetups, such as one on March 6th where it invited tech contract workers to discuss building solidarity to make work better for everyone. We are only seeing the beginning of a powerful worker movement taking shape in Silicon Valley.