
How-To Tutorials - News


Why are experts worried about Microsoft's billion dollar bet in OpenAI's AGI pipe dream?

Sugandha Lahoti
23 Jul 2019
6 min read
Microsoft has invested $1 billion in OpenAI with the goal of building next-generation supercomputers and a platform within Microsoft Azure that will scale to AGI (Artificial General Intelligence). This is a multiyear partnership in which Microsoft becomes OpenAI's preferred partner for commercializing new AI technologies. OpenAI will become a major Azure customer, porting its services to run on Microsoft Azure. The $1 billion is a cash investment into OpenAI LP, OpenAI's for-profit subsidiary. The investment follows a standard capital commitment structure, meaning OpenAI can call for it as needed, but the company plans to spend it in less than five years.

Per the official press release, "The companies will focus on building a computational platform in Azure for training and running advanced AI models, including hardware technologies that build on Microsoft's supercomputing technology. These will be implemented in a safe, secure and trustworthy way and is a critical reason the companies chose to partner together." They intend to license some of their pre-AGI technologies, with Microsoft becoming their preferred partner.

"My goal in running OpenAI is to successfully create broadly beneficial A.G.I.," Sam Altman, who co-founded OpenAI with Elon Musk, said in a recent interview. "And this partnership is the most important milestone so far on that path." Musk left the company in February 2018 to focus on Tesla and because he disagreed with some of what the OpenAI team wanted to do.

What does this partnership mean for Microsoft and OpenAI?

OpenAI may benefit from this deal by keeping its innovations private, which may help commercialization, raise more funds, and get it to AGI faster. For OpenAI this means the availability of resources for AGI, while potentially giving founders and other investors the opportunity to either double down on OpenAI or reallocate resources to other initiatives. However, it may also mean OpenAI discloses less progress, publishes fewer detailed papers, and open sources less code than in the past.

https://twitter.com/Pinboard/status/1153380118582054912

As for Microsoft, this deal is another attempt at quietly taking over open source: first with the acquisition of GitHub and the subsequent launch of GitHub Sponsors, and now by becoming OpenAI's 'preferred partner' for commercialization. Last year at an investor conference, Nadella said, "AI is going to be one of the trends that is going to be the next big shift in technology. It's going to be AI at the edge, AI in the cloud, AI as part of SaaS applications, AI as part of in fact even infrastructure. And to me, to be the leader in it, it's not enough just to sort of have AI capability that we can exercise—you also need the ability to democratize it so that every business can truly benefit from it. That to me is our identity around AI." The partnership with OpenAI seems to be part of this plan.

This deal may also help Azure catch up with Google and Amazon, both in hardware scalability and in artificial intelligence offerings. A Hacker News user comments, "OpenAI will adopt and make Azure their preferred platform. And Microsoft and Azure will jointly 'develop new Azure AI supercomputing technologies', which I assume is advancing their FPGA-based deep learning offering. Google has a lead with TensorFlow + TPUs and this is a move to 'buy their way in', which is a very Microsoft thing to do."

https://twitter.com/soumithchintala/status/1153308199610511360

It is also likely that Microsoft is investing money that will eventually be pumped back into its own company, as OpenAI buys computing power from the tech giant. Under the terms of the contract, Microsoft will eventually become the sole cloud computing provider for OpenAI, and most of that $1 billion will be spent on computing power, Altman says. OpenAI, previously focused on building ethical AI, will now pivot to building cutting-edge AI and moving toward AGI, sometimes even neglecting ethical ramifications in the rush to deploy technology early, which is what Microsoft would be interested in monetizing.

https://twitter.com/CadeMetz/status/1153291410994532352

"I see two primary motivations: For OpenAI—to secure funding and to gain some control over hardware which in turn helps differentiate software. For MSFT—to elevate Azure in the minds of developers for AI training." - James Wang, Analyst at ARK Invest

https://twitter.com/jwangARK/status/1153338174871154689

However, the news of this investment did not go down well with some experts in the field, who saw it as a purely commercial deal and questioned whether OpenAI's switch to for-profit research undermines its claims to be "democratizing" AI.

https://twitter.com/fchollet/status/1153489165595504640

"I can't really parse its conversion into an LP—and Microsoft's huge investment—as anything but a victory for capital" - Robin Sloan, Author

https://twitter.com/robinsloan/status/1153346647339876352

"What is OpenAI? I don't know anymore." - Stephen Merity, Deep learning researcher

https://twitter.com/Smerity/status/1153364705777311745
https://twitter.com/SamNazarius/status/1153290666413383682

People are also speculating whether creating AGI is even possible. In a recent survey, experts estimated there was a 50 percent chance of creating AGI by the year 2099. Per the New York Times, most experts believe AGI will not arrive for decades or even centuries; even Altman admits OpenAI may never get there. But the race is on nonetheless. Then why is Microsoft delivering the $1 billion over five years, considering that is neither enough money nor enough time to produce AGI?

Still, OpenAI has certainly impressed the tech community with its AI innovations. In April, OpenAI's new algorithm, trained to play the complex strategy game Dota 2, beat the world champion e-sports team OG at an event in San Francisco, winning the first two matches of the best-of-three series. The competition pitted a human team of five professional Dota 2 players against an AI team of five OpenAI bots. In February, OpenAI released a new AI model, GPT-2, capable of generating coherent paragraphs of text without needing any task-specific training. However, experts felt the move signalled a turn toward 'closed AI' and propagated the 'fear of AI', given the model's ability to write convincing fake news from just a few words.

GitHub Sponsors: Could corporate strategy eat FOSS culture for dinner?
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
OpenAI: Two new versions and the output dataset of GPT-2 out!

Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.

Vincy Davis
22 Jul 2019
4 min read
A group of researchers from Intel Labs and Facebook have published a paper titled "A Study of BFLOAT16 for Deep Learning Training". The paper presents a comprehensive study indicating the success of the Brain Floating Point (BFLOAT16) half-precision format in deep learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems.

BFLOAT16 keeps the same 8-bit exponent as FP32 but has only a 7-bit mantissa, so it covers the same dynamic range with less precision. BFLOAT16 was originally developed by Google and implemented in its third-generation Tensor Processing Unit (TPU).

https://twitter.com/JeffDean/status/1134524217762951168

Many state-of-the-art training platforms use IEEE-754 FP16 or automatic mixed precision as their preferred numeric format for deep learning training. However, these formats fall short in representing error gradients during back propagation and thus cannot deliver the required performance gains. BFLOAT16, by contrast, exhibits a dynamic range large enough to represent error gradients during back propagation, which enables easier migration of deep learning workloads to BFLOAT16 hardware.

In the paper's comparison table (image source: the BFLOAT16 paper), all values are represented as trimmed full-precision floating point values with 8 bits of mantissa, with a dynamic range comparable to FP32. By adopting the BFLOAT16 numeric format, core compute primitives such as Fused Multiply Add (FMA) can be built using 8-bit multipliers, leading to a significant reduction in area and power while preserving the full dynamic range of FP32.

How are deep neural networks (DNNs) trained with BFLOAT16?

The figure from the paper (image source: the BFLOAT16 paper) shows the mixed-precision data flow used to train deep neural networks with the BFLOAT16 numeric format. BFLOAT16 tensors are fed as inputs to the core compute kernels, represented as General Matrix Multiply (GEMM) operations, whose outputs are accumulated into FP32 tensors. The researchers developed a library called Quantlib, represented as Q in the figure, to implement the emulation in multiple deep learning frameworks. Quantlib modifies the elements of an input FP32 tensor to echo the behavior of BFLOAT16, and is also used to convert a copy of the FP32 weights to BFLOAT16 for the forward pass. The non-GEMM computations, including batch normalization and activation functions, as well as the bias tensors, are kept in FP32, and the weight update step uses an FP32 master copy of the weights to maintain model accuracy.

How does BFLOAT16 perform compared to FP32?

Convolutional neural networks (CNNs) are primarily used for computer vision applications such as image classification, object detection and semantic segmentation. AlexNet and ResNet-50 were used as the two representative models for the BFLOAT16 evaluation. AlexNet shows the BFLOAT16 emulation tracking the actual FP32 run very closely, achieving 57.2% top-1 and 80.1% top-5 accuracy, while in ResNet-50 the BFLOAT16 emulation follows the FP32 baseline almost exactly and achieves the same top-1 and top-5 accuracy. (Image source: the BFLOAT16 paper)

Similarly, the researchers were able to demonstrate that BFLOAT16 can represent tensor values across many application domains, including recurrent neural networks, Generative Adversarial Networks (GANs) and an industrial-scale recommendation system.
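To make the scheme concrete, here is a minimal NumPy sketch in the spirit of the Quantlib emulation and the mixed-precision flow described above. This is an illustration, not the researchers' code: BFLOAT16 is the upper 16 bits of an IEEE-754 float32, so zeroing the low bits preserves FP32's range while dropping precision, and the toy training step keeps an FP32 master copy of the weights while the GEMMs consume BF16 operands.

```python
import numpy as np

def fp32_to_bf16(x):
    """Emulate BFLOAT16 by zeroing the low 16 bits of each float32 value.

    BFLOAT16 is the top half of an IEEE-754 float32: 1 sign bit, the same
    8 exponent bits, and 7 of the 23 mantissa bits. Hardware typically
    rounds to nearest even; plain truncation is used here for clarity.
    """
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

def train_step(w_fp32, x, y_true, lr=0.01):
    """One toy linear-model step mirroring the mixed-precision flow above."""
    x_bf16 = fp32_to_bf16(x)             # inputs fed to the GEMM in BF16
    w_bf16 = fp32_to_bf16(w_fp32)        # BF16 copy of the weights for the forward pass
    y_pred = x_bf16 @ w_bf16             # GEMM: BF16 operands, FP32 accumulation
    err = fp32_to_bf16(y_pred - y_true)  # error gradient, representable in BF16's range
    grad = x_bf16.T @ err / len(x)       # backward GEMM, FP32 accumulation
    return w_fp32 - lr * grad            # update applied to the FP32 master weights

# Dynamic range survives truncation; only precision is lost:
print(fp32_to_bf16([3.14159265, 1e-30, 6.0e4]))  # e.g. pi becomes 3.140625
```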
The researchers thus established that BFLOAT16 covers the same dynamic range as FP32 and that conversion to and from FP32 is straightforward. Preserving FP32's range matters because it means no hyper-parameter tuning is required for convergence relative to an FP32 baseline. (A hyperparameter is a configuration value, such as the learning rate, chosen before training rather than learned from the data.) The researchers expect to see industry-wide adoption of BFLOAT16 across emerging domains.

Recent reports suggest that Intel is planning to graft Google's BFLOAT16 onto its processors, as well as onto its initial Nervana Neural Network Processor for training, the NNP-T 1000. Pradeep Dubey, who directs the Parallel Computing Lab at Intel and is one of the paper's researchers, believes that for deep learning the range of the processor is more important than its precision, which is the inverse of the rationale behind IEEE's floating point formats. Users are finding it interesting that a half-precision format like BFLOAT16 is suitable for deep learning applications.

https://twitter.com/kevlindev/status/1152984689268781056
https://twitter.com/IAmMattGreen/status/1152769690621448192

For more details, head over to the "A Study of BFLOAT16 for Deep Learning Training" paper.

Intel's new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster
Google plans to remove XSS Auditor used for detecting XSS vulnerabilities from its Chrome web browser
IntelliJ IDEA 2019.2 Beta 2 released with new Services tool window and profiling tools

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds

Sugandha Lahoti
19 Jul 2019
6 min read
The tech industry is being plagued by moral and ethical issues as top players become increasingly explicit about prioritizing profits over people or planet. Recent times are rife with cases of tech companies actively selling facial recognition technology to law enforcement agencies, helping ICE separate immigrant families, taking large contracts with the Department of Defense, accelerating the extraction of fossil fuels, and deploying surveillance technology. As the US becomes alarmingly dangerous for minority groups, asylum seekers and other vulnerable communities, the tech worker community has been spurred to organize and keep their employers in check. Workers have been grouping together to push back against ethically questionable decisions made by their employers under the hashtag #TechWontBuildIt since 2018. Most recently, several open source communities, activists and developers have strongly demonstrated against Palantir for its involvement with ICE.

Palantir, a data analytics company founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, has been called out for its association with Immigration and Customs Enforcement (ICE). According to emails obtained by WNYC, Palantir's mobile app FALCON is being used by ICE to carry out raids on immigrant communities as well as to enable workplace raids.

According to the emails, an ICE supervisor sent an email to his officers before a planned spate of raids in New York City in 2017, ordering them to use a Palantir program called FALCON mobile for the operation. The email was sent in preparation for a worksite enforcement briefing on January 8, 2018; two days later, ICE raided nearly a hundred 7-Elevens across the U.S. According to WNYC, ICE workplace raids led to 1,525 arrests over immigration status from October 2017 to October 2018. The email reads, "[REDACTION] we want all the team leaders to utilize the FALCON mobile app on your GOV iPhones, We will be using the FALCON mobile app to share info with the command center about the subjects encountered in the stores as well as team locations."

Other emails obtained by WNYC detail a Palantir staffer notifying an ICE agent to test their FALCON mobile application because of his or her "possible involvement in an upcoming operation." Another message, from April 2017, shows a Palantir support representative instructing an agent on how to classify a datapoint so that Palantir's Investigative Case Management (ICM) platform could properly ingest records of a cell phone seizure.

In December 2018, Palantir told the New York Times' Dealbook that its technology is not used by the division of ICE responsible for carrying out the deportation and detention of undocumented immigrants. Palantir declined WNYC's requests for comment. Citing law enforcement "sensitivities," ICE also declined to comment on how it uses Palantir during worksite enforcement operations.

In May this year, new documents released by Mijente, an advocacy organization, revealed that Palantir was responsible for a 2017 operation that targeted and arrested family members of children crossing the border alone. The documents stand in sharp contrast to what Palantir has said its software does. As part of the operation, ICE arrested 443 people solely for being undocumented. Mijente has since urged Palantir to drop its contract with ICE and stop providing software to agencies that aid in tracking, detaining, and deporting migrants, refugees, and asylum seekers.
Open source communities, activists and developers strongly oppose Palantir

Following the revelation of Palantir's involvement with ICE, several open-source developers are strongly opposing the company. The Entropic project, a JS package registry, is debating the idea of banning Palantir employees from participating in the project. Kat Marchán, Entropic maintainer, posted on the forum, "I find it unconscionable for tech folks to be building the technological foundations for this deeply unethical and immoral (and fascist) practice, and I would like it if we, in our limited power as a community to actually affect the situation, officially banned any Palantir employees from participating in or receiving any sort of direct support from the Entropic community." She has further proposed explicitly banning Palantir employees from the Discourse, the Discord and the GitHub communities, and any other forums Entropic may use for coordinating the project.

https://twitter.com/maybekatz/status/1151355320314187776

Amazon is also facing renewed calls from employees and external immigration advocates to stop working with Palantir. According to an internal email obtained by Forbes, Amazon employees are recirculating a June 2018 letter to executives calling for Palantir to be kicked off Amazon Web Services. More than 500 Amazon employees have signed the letter addressed to CEO Jeff Bezos and AWS head Andy Jassy. Not just that: pro-immigration organizations such as Mijente and Jews for Racial and Economic Justice interrupted the keynote speech at Amazon's annual AWS Summit last Thursday.

https://twitter.com/altochulo/status/1149326296092164097

More than a dozen groups of activists also protested against Palantir Technologies in Palo Alto on July 12 over the company's provision of software facilitating ICE raids, detentions, and deportations. City residents joined the protests, swelling the total to hundreds.

Back in August 2018, the Lerna team took a strong stand against ICE by modifying their MIT license to ban companies that have collaborated with ICE from using Lerna. The updated license barred known ICE collaborators such as Microsoft, Palantir, and Amazon, among others, from using the software.

To quote Meredith Whittaker, Google walkout organizer who recently left the company, from her farewell letter: "Tech workers have emerged as a force capable of making real change, pushing for public accountability, oversight, and meaningful equity. And this right when the world needs it most." She further adds, "The stakes are extremely high. The use of AI for social control and oppression is already emerging, even in the face of developers' best of intentions. We have a short window in which to act, to build in real guardrails for these systems before AI is built into our infrastructure and it's too late."

Extraordinary times call for extraordinary measures. As the tech industry grapples with the consequences of its hypergrowth, technosolutionist mindset, where do tech workers draw the line? Can tech workers afford to be apolitical, or to separate their values from the work they do? There are no simple answers, but one thing is for sure: the questions must be asked and faced. Open source, as part of the commons, has a key role to play, and how it evolves in the next couple of years is likely to define the direction the world takes.
Lerna relicenses to ban major tech giants like Amazon, Microsoft, Palantir from using its software as a protest against ICE
Palantir's software was used to separate families in a 2017 operation, reveals Mijente
ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking

How bad is the gender diversity crisis in AI research? Study analysing 1.5 million arXiv papers says it's "serious"

Fatema Patrawala
18 Jul 2019
9 min read
Yesterday the team at Nesta, a UK-based innovation foundation, published research on gender diversity in the AI research workforce. The authors of the research are Juan Mateos-Garcia, Director; Konstantinos Stathoulopoulos, Principal Researcher; and Hannah Owen, Programme Coordinator at Nesta.

https://twitter.com/JMateosGarcia/status/1151517641103872006

The analysis is based purely on 1.5 million arXiv papers; the team claims it is the first study of gender diversity in AI that relies on neither convenience sampling nor a proprietary database. The team posted on its official blog: "We conducted a large-scale analysis of gender diversity in AI research using publications from arXiv, a repository with more than 1.5 million preprints widely used by the AI community. We aim to expand the evidence base on gender diversity in AI research and create a baseline with which to interrogate the impact of current and future policies and interventions. To achieve this, we enriched the arXiv data with geographical, discipline and gender information in order to study the evolution of gender diversity in various disciplines, countries and institutions as well as examine the semantic differences between AI papers with and without female co-authors." With this research the team also aims to bring the prominent female figures it identified under the spotlight.

Key findings from the research

Serious gender diversity crisis in AI research: The team found a severe gender diversity gap in AI research, with only 13.83% of authors being women. Moreover, in relative terms, the proportion of AI papers co-authored by at least one woman has not improved since the 1990s. Juan Mateos-Garcia argues this crisis is a waste of talent and increases the risk of discriminatory AI systems.

https://twitter.com/JMateosGarcia/status/1151517642236276736

Location and research domain are significant drivers of gender diversity: Women in the Netherlands, Norway and Denmark are more likely to publish AI papers, while those in Japan and Singapore are less likely. In the UK, 26.62% of AI papers have at least one female co-author, placing the country at the 22nd spot worldwide. The US follows the UK at 25% of papers with at least one female co-author, though on the share of unique female authors the US ranks one position above the UK. (Source: Nesta research report)

Regarding research domains, women working in Physics and Education, Computer Ethics and other societal issues, and Biology are more likely to publish their work on AI than those working in Computer Science or Mathematics. (Source: Nesta research report)

Significant gender diversity gap in universities, big tech companies and other research institutions: Apart from the University of Washington, every other academic institution and organisation in the dataset has less than 25% female AI researchers. Among the big tech companies, only 11.3% of Google's employees who have published AI research on arXiv are women; the proportion is similar for Microsoft (11.95%) and slightly better for IBM (15.66%).
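For readers who want to probe such figures themselves, here is a minimal pandas sketch of how the headline metric (the share of papers with at least one female co-author) might be computed. The table, column names and gender labels are hypothetical; this illustrates the metric, not Nesta's actual pipeline.

```python
import pandas as pd

# Hypothetical input: one row per (paper, author), with a gender label
# already inferred upstream, e.g. from author first names.
rows = [
    ("p1", 2017, "f"), ("p1", 2017, "m"),
    ("p2", 2017, "m"), ("p2", 2017, "m"),
    ("p3", 2018, "f"),
]
df = pd.DataFrame(rows, columns=["paper_id", "year", "gender"])

# Collapse to one row per paper: does any co-author have label "f"?
by_paper = (
    df.groupby(["paper_id", "year"])["gender"]
      .apply(lambda g: (g == "f").any())
      .rename("has_female_coauthor")
      .reset_index()
)

# Fraction of papers with >= 1 female co-author, per year.
share_by_year = by_paper.groupby("year")["has_female_coauthor"].mean()
print(share_by_year)
```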
Important semantic differences between AI papers with and without a female co-author: Examining publications in Machine Learning and Societal topics in the UK in 2012 and 2015, papers involving at least one female co-author tend to be more semantically similar to each other than to those without any female authors. Moreover, papers with at least one female co-author tend to be more applied and socially aware, with terms such as fairness, human mobility, mental health, gender and personality among the most salient. Juan Mateos-Garcia noted that this is an area which deserves further research.

https://twitter.com/JMateosGarcia/status/1151517647361781760

The top 15 women with the most AI publications on arXiv identified:

- Aarti Singh, Associate Professor at the Machine Learning Department of Carnegie Mellon University
- Cordelia Schmid, part of the Google AI team, who holds a permanent research position at Inria Grenoble Rhone-Alpes
- Cynthia Rudin, Associate Professor of computer science, electrical and computer engineering, statistical science and mathematics at Duke University
- Devi Parikh, Assistant Professor in the School of Interactive Computing at Georgia Tech
- Karen Livescu, Associate Professor at the Toyota Technological Institute at Chicago
- Kate Saenko, Associate Professor at the Department of Computer Science at Boston University
- Kristina Lerman, Project Leader at the Information Sciences Institute at the University of Southern California
- Marilyn A. Walker, Professor at the Department of Computer Science at the University of California
- Mihaela van der Schaar, John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Turing Fellow at The Alan Turing Institute in London
- Petia Radeva, Professor at the Department of Mathematics and Computer Science, Faculty of Mathematics and Computer Science, at the Universitat de Barcelona
- Regina Barzilay, Professor at the Massachusetts Institute of Technology and a member of the MIT Computer Science and Artificial Intelligence Laboratory
- Svetha Venkatesh, ARC Australian Laureate Fellow, Alfred Deakin Professor and Director of the Centre for Pattern Recognition and Data Analytics (PRaDA) at Deakin University
- Xiaodan Liang, Associate Professor at the School of Intelligent Systems Engineering, Sun Yat-sen University
- Yonina C. Eldar, Professor of Electrical Engineering at the Weizmann Institute of Science, Israel
- Zeynep Akata, Assistant Professor at the University of Amsterdam in the Netherlands

Five other women researchers could not be identified in the study.

Interview bites from a few women contributors and institutions

The research team also interviewed a few of the researchers and institutions identified in their work, who believe a system-wide reform is needed. When the team discussed the findings with the most-cited female researcher, Mihaela van der Schaar, she said she felt her presence in the field has only started to be recognised, having begun her career in 2003: "I think that part of the reason for this is because I am a woman, and the experience of (the few) other women in AI in the same period has been similar." Professor van der Schaar also described herself and many of her female colleagues as "faceless", and suggested that celebrating leading women in the field could have a positive impact on the representation of women, as well as on the disparity in the recognition these women receive. This suggests that work is needed across the pipeline: not just early-stage intervention in education, but support for the women already in the field.
She also highlighted the importance of open discussion about the challenges women face in the AI sector, and argued that workplace changes such as flexible hours are needed to enable researchers to participate in a fast-paced sector without sacrificing their family life.

The team further discussed the findings with the University of Washington's Eve Riskin, Associate Dean of Diversity and Access in the College of Engineering. Riskin described how much of her female faculty experienced a "toxic environment" and pervasive imposter syndrome. She also emphasized that more research is needed on the career trajectories of male and female researchers, including recruitment and retention.

Some recent examples of exceptional women in AI research and their contributions

While these women speak to the diversity gaps in the field, we have recently seen work from female researchers like Katie Bouman gain significant attention. Katie is a post-doctoral fellow at MIT whose algorithm led to an image of a supermassive black hole. But all the attention became a catalyst for a sexist backlash on social media and YouTube. It set off "what can only be described as a sexist scavenger hunt," as The Verge described it, in which an apparently small group of vociferous men questioned Bouman's role in the project. "People began going over her work to see how much she'd really contributed to the project that skyrocketed her to unasked-for fame."

Another incredible example in the field of AI research and ethics is Meredith Whittaker, an ex-Googler, now a program manager, activist, and co-founder of the AI Now Institute at New York University. Meredith is committed to the AI Now Institute, to her AI ethics work, and to organizing an accountable tech industry. On Tuesday, Meredith left Google after facing retaliation from the company for organizing last year's Google Walkout for Real Change protest, which demanded structural changes to ensure a safe and conducive work environment for everyone.

Other observations from the research and next steps

The research also highlights that women are as capable as men of contributing to technical topics, while they tend to contribute more than men to publications with a societal or ethical focus. Some leading AI researchers shared their opinions on this. Petia Radeva, Professor at the Department of Mathematics and Computer Science at the University of Barcelona, was positive that the increasingly broad domains of application for AI, and the potential impact of the technology, will attract more women into the sector. Similarly, van der Schaar suggests that "publicising the interdisciplinary scope of possibilities and career paths that studying AI can lead to will help to inspire a more diverse group of people to pursue it. In parallel, the industry will benefit from a pipeline of people who are motivated by combining a variety of ideas and applying them across domains."

In future work, the research team will explore the temporal co-authorship network of AI papers to examine how the career trajectories of male and female researchers might differ. They will survey AI researchers on arXiv and investigate the drivers of the diversity gap in more detail through their innovation mapping methods. They also plan to extend the analysis to identify the representation of other underrepresented groups.
Meredith Whittaker, Google Walkout organizer, and AI ethics researcher is leaving the company, adding to its brain-drain woes over ethical concerns
"I'm concerned about Libra's model for decentralization", says co-founder of Chainspace, Facebook's blockchain acquisition
DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Elon Musk's Neuralink unveils a “sewing machine-like” robot to control computers via the brain

Sugandha Lahoti
17 Jul 2019
8 min read
After two years of being super-secretive about its work, Neuralink, Elon Musk's neurotechnology company, has finally presented its progress in brain-computer interface technology. The livestream, uploaded on YouTube, showcases a "sewing machine-like" robot that can implant ultrathin threads deep into the brain, giving people the ability to control computers and smartphones using their thoughts. For its brain-computer interface tech, the company has received $158 million in funding and has 90 employees. (Note: all images are taken from the Neuralink livestream video unless stated otherwise.)

Elon Musk opened the presentation by talking about the primary aim of Neuralink: to use brain-computer interface tech to understand and treat brain disorders, preserve and enhance the brain, and ultimately, and this may sound weird, "achieve a symbiosis with artificial intelligence". He added, "This is not a mandatory thing. It is a thing you can choose to have if you want. This is something that I think will be really important on a civilization-level scale."

Neuralink wants to build, record from and selectively stimulate as many neurons as possible across diverse brain areas. They have three goals:

- Increase by orders of magnitude the number of neurons you can read from and write to, in safe, long-lasting ways.
- At each stage, produce devices that serve critical unmet medical needs of patients.
- Make inserting a computer connection into your brain as safe and painless as LASIK eye surgery.

The robot they have built was designed to be completely wireless, with a practical bandwidth usable at home, and to last a long time. Their system has an N1 sensor, an 8mm wide, 4mm tall cylinder with 1024 electrodes. It consists of a thin film that carries the threads. The threads are placed into the brain with thin needles by a robotic system, in a manner akin to a sewing machine, avoiding blood vessels. The robot peels the threads one by one off the N1 sensor and places them in the brain: a needle grabs each thread by a small loop and inserts it, under the supervision of a human neurosurgeon who lays out where the threads go. The actual needle the robot uses is 24 microns wide. The procedure makes a 2mm incision near the ear, which is dilated to 8mm. (Images: the threads; a robot implants threads using a needle.)

For the first patients, the Neuralink team is looking at four sensors, connected via very small wires under the scalp to an inductive coil behind the ear. This is encased in a wearable device they call the "Link", which contains a Bluetooth radio and a battery. The implants will be controlled through an iPhone app. (Image source: NYT; Neuralink/MetaLab iPhone app)

The goal is to drill four 8mm holes into paralyzed patients' skulls and insert implants that will give them the ability to control computers and smartphones using their thoughts. For the first product, they are focusing on giving patients the ability to control their mobile device, and then to redirect the output from their phone to a keyboard or a mouse. The company will seek U.S. Food and Drug Administration approval and aspires to a first-in-human clinical study by 2020, for treating upper cervical spinal cord injury. Those patients are expected to get four 1024-channel sensors, one each in the primary motor cortex, supplementary motor area and premotor cortex, plus closed-loop feedback into the primary somatosensory cortex.
As reported by Bloomberg, which received a pre-launch media briefing, Neuralink said it has performed at least 19 surgeries on animals with its robots and successfully placed the wires, which it calls "threads", about 87% of the time. In one experiment the company implanted a USB-C port in a lab rat's head; a wire attached to the port transmitted its brain activity to a nearby computer, where software recorded and analyzed it, measuring the strength of brain spikes. The amount of data gathered from the lab rat was about 10 times greater than what today's most powerful sensors can collect.

The flexibility of the Neuralink threads would be an advance, Terry Sejnowski, the Francis Crick Professor at the Salk Institute for Biological Studies in La Jolla, California, told the New York Times. However, he noted that the Neuralink researchers still need to prove that the insulation of their threads can survive for long periods in the brain's environment, whose salt solution deteriorates many plastics.

Musk's bizarre attempts to revolutionize the world are far from reality

Elon Musk is known for his dramatic promises and showmanship as much as for his eccentric projects; how far they are grounded in reality is another matter. In May he successfully launched his mammoth space mission, Starlink, sending 60 communications satellites into orbit, eventually to form a single constellation providing high-speed internet to the globe. However, the launch went ahead only after two postponements to "update satellite software". Not just that: three of the 60 satellites have since lost contact with ground control teams, a SpaceX spokesperson said on June 28. Experts are already worried about how the Starlink constellation will contribute to the space debris problem. There are currently 2,000 operational satellites in orbit around Earth, according to the latest figures from the European Space Agency, and the completed Starlink constellation would drastically add to that number. Observers had also noticed that some Starlink satellites had not initiated orbit-raising after being released.

Musk's much-anticipated Hyperloop (first publicly mentioned in 2012) was supposed to shuttle passengers at near-supersonic speeds via pods traveling in a long underground tunnel; it was soon reduced to a car in a very small tunnel. When the underground tunnel was unveiled to the media in California last December, reporters climbed into electric cars made by Musk's Tesla and were treated to a 40 mph ride along a bumpy path. Here as well there have been public concerns about the impact on public infrastructure and the environment. The biggest questions surrounding Hyperloop's environmental impact concern its effect on carbon dioxide emissions, the effect of its infrastructure on ecosystems, and the environmental footprint of the materials used to build it. Other concerns include noise pollution and how to repurpose Hyperloop tubes and tunnels at the end of their lifespan.

Researchers from Tencent's Keen Security Lab have criticized Tesla's self-driving software, publishing a report detailing their successful attacks on Tesla firmware, including remote control over the steering and an adversarial-example attack on the Autopilot that confuses the car into driving into the oncoming traffic lane. Musk has also promised a fully self-driving Tesla by 2020, which caused a lot of activity in the stock markets, but most are skeptical about this claim as well.
Whether Elon Musk's visions of AI symbiosis will come into existence in the foreseeable future is questionable. Neuralink's long-term goals are characteristically unrealistic, considering how little is known about the human brain; cognitive functions and their representation as brain signals are still an area where much further research is required. While Musk's projects are known for their technical excellence, history shows a lack of thought about the broader consequences and costs of such innovations: the ethical concerns, and the environmental and societal impacts. Neuralink's implant is also prone to invading one's privacy, as it will store sensitive medical information about the patient. There is also the likelihood of it violating constitutional rights such as freedom of speech and expression, among others. What does it mean to live in a world where one's thoughts are constantly monitored and not truly one's own? And because this is an implant, what if the electrodes malfunction and send wrong signals to the brain? Who will be accountable in such scenarios? Although the FDA will probe such questions, these are questions any responsible company should ask of itself proactively while developing life-altering products or services, and they are equally worthy of stage time in a product launch. Regardless, Musk's bold claims and dramatic presentations are sure to hold the attention of investors and enthusiasts for now.

Elon Musk reveals big plans with Neuralink
SpaceX shares new information on Starlink after the successful launch of 60 satellites
What Elon Musk can teach us about Futurism & Technology Forecasting

Amazon’s partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash

Bhagyashree R
12 Jul 2019
6 min read
Virtual assistants like Alexa and smart speakers are increasingly used these days because of the convenience they come packaged with. It is good to have someone play a song or restock your groceries on just one command, or probably more than one command. You get the point! But how comfortable would you be if these assistants could give you medical advice?

Amazon has teamed up with the UK's National Health Service (NHS) to make Alexa your new medical consultant. The voice-enabled digital assistant will now answer your health-related queries by looking through the NHS website, which is vetted by professional doctors.

https://twitter.com/NHSX/status/1148890337504583680

The NHSX initiative to drive digital innovation in healthcare

Voice search arguably gives us the most "humanized" way of finding information on the web. One of the striking advantages of voice-enabled digital assistants is that the elderly, the blind and those unable to access the internet in other ways can also benefit from them. The UK's health secretary, Matt Hancock, believes that "embracing" such technologies will not only reduce the pressure General Practitioners (GPs) and pharmacists face but will also encourage people to take better control of their healthcare. He adds, "We want to empower every patient to take better control of their healthcare."

Partnering with Amazon is just one of many steps by the NHS to adopt technology for healthcare. The NHS launched a full-fledged unit named NHSX (where X stands for User Experience) last week, whose mission is to provide staff and citizens "the technology they need", with an annual investment of more than $1 billion a year. The Amazon partnership was announced last year, and the NHS plans to partner with other companies such as Microsoft in the future to achieve its goal of "modernizing health services."

Can we consider Alexa's advice safe?

Voice assistants are fun and convenient to use, but only when they actually work. Many a time the assistant fails to understand something and we have to yell the command again and again, which makes the experience outright frustrating. Furthermore, the track record of consulting the web to diagnose our symptoms has not been the most accurate one. Many Twitter users trolled the decision, saying that Alexa is not yet capable of simple tasks like playing a song accurately, and that the NHS budget could instead have been spent on additional NHS staff, lowering drug prices, and many other facilities. The public was also left sore because the government has given Amazon a new means to make a profit instead of forcing it to pay taxes. Others recalled the times Google (mis)diagnosed their symptoms.

https://twitter.com/NHSMillion/status/1148883285952610304
https://twitter.com/doctor_oxford/status/1148857265946079232
https://twitter.com/TechnicallyRon/status/1148862592254906370
https://twitter.com/withorpe/status/1148886063290540032

AI ethicists and experts raise data privacy issues

Amazon has been involved in several controversies around privacy concerns regarding Alexa. Earlier this month, it admitted that a few voice recordings made by Alexa are never deleted from the company's servers, even when the user manually deletes them. Other news in April this year revealed that when you speak to an Echo smart speaker, not only Alexa but potentially Amazon employees listen to your requests.
Last month, two lawsuits were filed in Seattle alleging that Amazon is recording voiceprints of children using its Alexa devices without their consent. Last year, an Amazon Echo user in Portland, Oregon was shocked to learn that her Echo device had recorded a conversation with her husband and sent the audio file to one of his employees in Seattle; Amazon confirmed this was an error caused by the device's microphone mishearing a series of words. Another creepy yet funny incident was when Alexa users started hearing an unprompted laugh from their smart speaker devices; Alexa laughed randomly even when the device was not being used.

https://twitter.com/CaptHandlebar/status/966838302224666624

Big tech companies including Amazon, Google, and Facebook constantly try to reassure their users that their data is safe and appropriate privacy measures are in place. But these promises are hard to believe when there is so much news of data breaches involving these companies. Last year, the German computer magazine c't reported that a user received 1,700 Alexa voice recordings from Amazon when he asked for copies of the personal data Amazon held about him.

Many experts also raised concerns about using Alexa to give medical advice. Berlin-based tech expert Manthana Stender calls the move a "corporate capture of public institutions".

https://twitter.com/StenderWorld/status/1148893625914404864

Dr. David Wrigley, a British medical doctor who works as a general practitioner, asked how the voice recordings of people asking for health advice will be handled.

https://twitter.com/DavidGWrigley/status/1148884541144219648

Director of Big Brother Watch, Silkie Carlo, told the BBC, "Any public money spent on this awful plan rather than frontline services would be a breathtaking waste. Healthcare is made inaccessible when trust and privacy is stripped away, and that's what this terrible plan would do. It's a data protection disaster waiting to happen."

Prof Helen Stokes-Lampard of the Royal College of GPs believes the move has "potential", especially for minor ailments, but added that it is important individuals do independent research to ensure the advice given is safe, or it could "prevent people from seeking proper medical help and create even more pressure". She further noted that not everyone is comfortable using such technology, or can afford it.

Amazon promises that the data will be kept confidential and will not be used to build profiles on customers. A spokesman told The Times, "All data was encrypted and kept confidential. Customers are in control of their voice history and can review or delete recordings."

Amazon is being sued for recording children's voices through Alexa without consent
Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector
Amazon is supporting research into conversational AI with Alexa fellowships

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach

Sugandha Lahoti
08 Jul 2019
6 min read
The UK's watchdog, the ICO, is set to fine British Airways more than £183m over a customer data breach. In September last year, British Airways notified the ICO of a data breach that compromised the personal identification information of over 500,000 customers and is believed to have begun in June 2018.

The ICO said in a statement, "Following an extensive investigation, the ICO has issued a notice of its intention to fine British Airways £183.39M for infringements of the General Data Protection Regulation (GDPR)."

Information Commissioner Elizabeth Denham said, "People's personal data is just that - personal. When an organisation fails to protect it from loss, damage or theft, it is more than an inconvenience. That's why the law is clear - when you are entrusted with personal data, you must look after it. Those that don't will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights."

How did the data breach occur?

According to details provided on the British Airways website, payments through its main website and mobile app were affected from 22:58 BST, August 21, 2018 until 21:45 BST, September 5, 2018. Per the ICO's investigation, user traffic from the British Airways site was directed to a fraudulent site, from which customer details were harvested by the attackers. Personal information compromised included login, payment card and travel booking details, as well as name and address information. The attack was what is known as a supply chain attack: e-commerce sites embed code from third-party suppliers to run payment authorisation, present ads or let users log into external services, and compromising one of those suppliers compromises every site that embeds its code.

According to cyber-security expert Prof Alan Woodward at the University of Surrey, the British Airways hack may possibly have been the work of a company insider who tampered with the website and app's code for malicious purposes. He also pointed out that live data was harvested on the site rather than stored data.

https://twitter.com/EerkeBoiten/status/1148130739642413056

RiskIQ, a cyber security company based in San Francisco, linked the British Airways attack to the modus operandi of the threat group Magecart. Magecart injects scripts designed to steal sensitive data that consumers enter into online payment forms on e-commerce websites, either directly or through compromised third-party suppliers. Per RiskIQ, Magecart set up custom, targeted infrastructure to blend in with the British Airways website specifically and to avoid detection for as long as possible.

What happens next for British Airways?

The ICO noted that British Airways cooperated with its investigation and has made security improvements since the breach was discovered. The airline now has 28 days to appeal. Responding to the news, British Airways' chairman and chief executive Alex Cruz said the company was "surprised and disappointed" by the ICO's decision, adding that it has found no evidence of fraudulent activity on accounts linked to the breach. He said, "British Airways responded quickly to a criminal act to steal customers' data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused."

The ICO was appointed as the lead supervisory authority on this case on behalf of the data protection authorities of other EU member states. Under the GDPR's "one stop shop" provisions, the data protection authorities in the EU countries whose residents have been affected will also have the chance to comment on the ICO's findings. The penalty is divided up between the other European data authorities, while the money that comes to the ICO goes directly to the Treasury. What is somewhat surprising is that the ICO disclosed the fine publicly even before the other supervisory authorities had commented on its findings and a final decision had been taken based on their feedback, as pointed out by Simon Hania.

https://twitter.com/simonhania/status/1148145570961399808

Record-breaking fine appreciated by experts

The penalty imposed on British Airways is the first to be made public since the GDPR's new data privacy rules were introduced. The GDPR makes it mandatory to report data security breaches to the information commissioner, and it raised the maximum penalty to 4% of the offending company's annual turnover. The fine would be the largest the ICO has ever issued; its previous record was the £500,000 fine levied on Facebook for the Cambridge Analytica scandal, the maximum possible under the 1998 Data Protection Act. The British Airways penalty amounts to 1.5% of its worldwide turnover in 2017, roughly 367 times Facebook's fine. In fact, it could have been even worse: the full 4% of turnover would have meant a fine approaching £500m. Such a massive fine would clearly send a shudder down the spine of any big corporation responsible for handling cybersecurity: compromise customers' data, and a severe punishment is in order.
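A quick back-of-envelope check of those percentages, using only figures quoted in this piece (the turnover is inferred from the ICO's 1.5% figure, not reported directly):

```python
# Figures quoted in this article.
ba_fine = 183.39e6               # GBP: proposed British Airways fine
fine_share_of_turnover = 0.015   # the fine is ~1.5% of 2017 worldwide turnover
gdpr_max_share = 0.04            # GDPR cap: 4% of turnover
facebook_fine = 500_000          # GBP: Facebook's fine, max under the 1998 Act

turnover = ba_fine / fine_share_of_turnover  # implied turnover, ~GBP 12.2bn
max_fine = turnover * gdpr_max_share         # ~GBP 489m, i.e. "approaching GBP 500m"
ratio = ba_fine / facebook_fine              # ~367x the Facebook fine

print(f"implied 2017 turnover: GBP {turnover / 1e9:.1f}bn")
print(f"theoretical GDPR maximum: GBP {max_fine / 1e6:.0f}m")
print(f"BA fine vs Facebook fine: {ratio:.0f}x")
```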
https://twitter.com/j_opdenakker/status/1148145361799798785

Carl Gottlieb, Privacy Lead & Data Protection Officer at Duolingo, summarized the key points of the case in a much-appreciated Twitter thread:

- GDPR fines are for inappropriate security, as opposed to getting breached. Breaches are a good pointer but are not themselves actionable. So organisations need to implement security that is appropriate for their size, means, risk and need.
- Security is an organisation's responsibility, whether you host IT yourself, outsource it or rely on someone else not getting hacked.
- The GDPR has teeth against anyone that messes up security, but clearly action will be greatest where the human impact is most significant.
- Threats of GDPR fines are what created change in privacy and security practices over the last two years (not orgs suddenly growing a conscience). And with very few fines so far, improvements have slowed; this will help.
- Monetary fines are a great example to change behaviour in others, but a terrible punishment to drive change in an affected organisation. Other enforcement measures, e.g. ceasing the processing of personal data (such as banning new signups), would be much more impactful.

https://twitter.com/CarlGottlieb/status/1148119665257963521

Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR

“I'm concerned about Libra's model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition

Fatema Patrawala
26 Jun 2019
7 min read
In February, Facebook made its debut into the blockchain space by acquiring Chainspace, a London-based, Gibraltar-registered blockchain venture. Chainspace was a small start-up founded by several academics from the University College London Information Security Research Group. Authors of the original Chainspace paper were Mustafa Al-Bassam, Alberto Sonnino, Shehar Bano, Dave Hrycyszyn and George Danezis, some of the UK’s leading privacy engineering researchers. Following the acquisition, last week Facebook announced the launch of its new cryptocurrency, Libra which is expected to go live by 2020. The Libra whitepaper involves a wide array of authors including the Chainspace co-founders namely Alberto Sonnino, Shehar Bano and George Danezis. According to Wired, David Marcus, a former Paypal president and a Coinbase board member, who resigned from the board last year, is appointed by Facebook to lead the project Libra. Libra isn’t like other cryptocurrencies such as Bitcoin or Ethereum. As per the Reuters report, the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run the computers. Mustafa Al-Bassam, one of the research co-founders of Chainspace who did not join Facebook posted a detailed Twitter thread yesterday. The thread included particularly his views on this new crypto-currency - Libra. https://twitter.com/musalbas/status/1143629828551270401 On Libra’s decentralized model being less censorship resistant Mustafa says, “I don't have any doubt that the Libra team is building Libra for the right reasons: to create an open, decentralized payment system, not to empower Facebook. However, the road to dystopia is paved with good intentions, and I'm concerned about Libra's model for decentralization.” He further pointed the discussion towards a user comment on GitHub which reads, “Replace "decentralized" with "distributed" in readme”. Mustafa explains that Libra’s 100 node closed set of validators is seen more as decentralized in comparison to Bitcoin. Whereas Bitcoin has 4 pools that control >51% of hashpower. According to the Block Genesis, decentralized networks are particularly prone to Sybil attacks due to their permissionless nature. Mustafa takes this into consideration and poses a question if Libra is Sybil resistant, he comments, “I'm aware that the word "decentralization" is overused. I'm looking at decentralization, and Sybil-resistance, as a means to achieve censorship-resistance. Specifically: what do you have to do to reverse or censor transaction, how much does it cost, and who has that power? My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks.” He further explains that, “In the banking system there is no majority of parties that can collude together to deny two banks the ability to maintain a relationship which each other - in the worst case scenario they can send physical cash to each other, which does not require a ledger. It's permissionless.” Mustafa adds to this point with a surreal imagination that if Libra was the only way to transfer currency and it is less censorship resistant than we’d be in worse situations, he says, “With cryptocurrency systems (even decentralized ones), there is always necessarily a majority of consensus nodes (e.g. 
Mustafa adds that if Libra ended up as the only way to transfer currency while being less censorship-resistant, we would be worse off than where we started. In his words, "With cryptocurrency systems (even decentralized ones), there is always necessarily a majority of consensus nodes (e.g. a 51% attack) that can collude together to censor or reverse transactions. So if you're going to create digital cash, this is extremely important to consider. With Libra, censorship-resistance is even more important, as Libra could very well end up being the world's "de facto" currency, and if the Libra network is the only way to transfer that currency, and it's less censorship-resistant, we're worse off than where we started."

On Libra's permissioned consensus node selection authority

Mustafa says that "Libra's current permissioned consensus node selection authority is derived directly from nation state-enforced (Switzerland's) organization laws, rather than independently from stakeholders holding sovereign cryptographic keys."

Source - Libra whitepaper

What this means is that the "root API" for Libra's node selection mechanism is the Libra Association, via the Swiss Federal Constitution and the Swiss courts, rather than public key cryptography. Mustafa also pointed out that the Association members are large $1B+ companies, mostly US-based.

Source - Libra whitepaper

One could argue that governments can regulate the people who hold those public keys, but a key difference is that such regulation cannot be directly enforced without access to the private keys. Mustafa illustrated this point with an example from last year, when Iran tested how resistant global payments are to US censorship: Iran requested a 300 million euro cash withdrawal via Germany's central bank, which the bank rejected under US pressure. Mustafa added, "US sanctions have been bad on ordinary people in Iran, but they can at least use cash to transact with other countries. If people wouldn't even be able to use cash in the future because Libra digital cash isn't censorship-resistant, that would be *brutal*."

On Libra's proof-of-stake-based permissionless mechanism

Mustafa argues that the Libra whitepaper confuses consensus with Sybil-resistance. In his view, Sybil-resistant node selection through permissionless mechanisms such as proof-of-stake, which selects a set of cryptographic keys that participate in consensus, is necessarily more censorship-resistant than the Association-based model. Proof-of-stake is a Sybil-resistance mechanism, not a consensus mechanism; the "longest chain rule", on the other hand, is a consensus mechanism.

He notes that Libra has outlined a proof-of-stake-based permissionless roadmap, with a transition planned within the next five years. Mustafa feels five years will be far too late, when the Group of Seven nations (G7) are already lining up a taskforce to control Libra. He also disputes the whitepaper's claim that Libra needs to start permissioned because permissionless, scalable, secure blockchains are supposedly an unsolved technical problem requiring the community's help to research.

Source - Libra whitepaper

He says, "It's as if they ignored the past decade of blockchain scalability research efforts. Secure layer-one scalability is a solved research problem. Ethereum 2.0, for example, is past the research stage and is now in the implementation stage, and will handle more than Libra's 1000tps." Mustafa also points out that Chainspace was in the middle of implementing a permissionless sharded blockchain with higher on-chain scalability than Libra's 1000tps; with Facebook's resources, this could easily have been accelerated and made a reality.
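To make the thresholds in this argument concrete: in a BFT-style permissioned system that tolerates f faulty validators out of n = 3f + 1, commits require a quorum of 2f + 1 votes, so any f + 1 validators (just over one-third) can withhold votes and censor transactions, whereas reliably censoring a proof-of-work chain takes a majority of hashpower. Below is a minimal back-of-the-envelope sketch of that arithmetic; it is our own illustration, not code from Libra or Bitcoin, using the roughly 100-validator figure discussed in the thread.

```python
# Illustrative only: compares how many colluding participants it takes to
# censor transactions under BFT-style permissioned consensus vs proof-of-work.

def bft_censorship_threshold(n_validators: int) -> int:
    """With n = 3f + 1 validators, commits need a 2f + 1 quorum, so
    f + 1 validators (just over one-third) can block any transaction."""
    f = (n_validators - 1) // 3
    return f + 1

def pow_censorship_share() -> float:
    """Under Nakamoto consensus, reliable censorship requires a majority
    of total hashpower."""
    return 0.5

if __name__ == "__main__":
    n = 100  # Libra's whitepaper describes a validator set of ~100 nodes
    print(f"BFT: {bft_censorship_threshold(n)} of {n} validators can censor")
    print(f"PoW: more than {pow_censorship_share():.0%} of hashpower needed")
```

Running this prints that 34 of 100 validators suffice to censor under the BFT model, which is the "1/3 of validators" figure raised later in the discussion.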
He says there are many research-led blockchain projects that have implemented, or are implementing, scalability strategies that achieve more than Libra's 1000tps without heavily trading off security, so the "community" research on this is plentiful; he suggests Facebook is simply being lazy. He concludes, "I find it a great shame that Facebook has decided to be anti-social and launch a permissioned system as they need the community's help as scalable blockchains are an unsolved problem, instead of using their resources to implement on a decade of research in this area."

People appreciated Mustafa's detailed review of Libra. One tweet reads, "This was a great thread, with several acute and correct observations."

https://twitter.com/ercwl/status/1143671361325490177

Another tweet reads, "Isn't a shard (let's say a blockchain sharded into 100 shards) by its nature trading off 99% of its consensus forming decentralization for 100x (minus overhead, so maybe 50x?) increased scalability?" Mustafa responded, "No because consensus participants are randomly sampled into shards from the overall consensus set, so shards should be roughly uniformly secure, and in the event that a shard misbehaves, fraud and data availability proofs kick in."

https://twitter.com/ercwl/status/1143673925643243522

One tweet also suggests that 1/3 of Libra's validators can enforce censorship even against the will of the 2/3 majority, whereas censoring Bitcoin requires a majority of miners; and unlike Libra, there is no entry barrier other than capital to becoming a Bitcoin miner.

https://twitter.com/TamasBlummer/status/1143766691089977346

Let us know your views on Libra and how you expect it to perform.

Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge
Facebook releases Pythia, a deep learning framework for vision and language multimodal research

A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want

Sugandha Lahoti
26 Jun 2019
6 min read
A new study by researchers from Princeton University and the University of Chicago suggests that shopping websites are rife with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns. Note: All images in the article are taken from the research paper.

What are dark patterns

Dark patterns are generally used by shopping websites as part of their user interface design choices. These patterns coerce, steer, or deceive users into making unintended and potentially harmful decisions that benefit an online service. Shopping websites trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. The patterns are not limited to shopping websites; they find common application on digital platforms including social media, mobile apps, and video games as well. At the extreme, dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children.

Researchers used a web crawler to identify text-based dark patterns

The paper uses an automated approach that enables the researchers to identify dark patterns at scale on the web. They crawled 11K shopping websites using a web crawler built on top of OpenWPM, a web privacy measurement platform. The crawler was used to simulate a user browsing experience and identify user interface elements. The researchers then used text clustering to extract recurring user interface designs from the resulting data, and inspected the resulting clusters for instances of dark patterns (a toy sketch of this idea appears below).

The researchers also developed a novel taxonomy of dark pattern characteristics to understand how dark patterns influence user decision-making. Based on the taxonomy, the dark patterns were classified according to whether they create an asymmetry of choice, are covert in their effect, are deceptive in nature, hide information from users, or restrict choice. The researchers also mapped the dark patterns in their data set to the cognitive biases they exploit; these biases collectively describe the consumer psychology underpinnings of the dark patterns identified. They also determined that many instances of dark patterns are enabled by third-party entities, which provide shopping websites with scripts and plugins to easily implement these patterns on their sites.
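The following toy sketch gives a flavor of what text-based detection involves. It is our own illustration, not the authors' OpenWPM pipeline, and the phrase patterns below are assumptions for demonstration rather than the paper's lexicon:

```python
# Toy sketch: flag common urgency/scarcity phrasings in product-page text.
# Illustrative patterns only; the actual study used crawling plus text
# clustering over ~53K product pages, not a fixed keyword list.

import re

SEGMENTS = [
    "Hurry! Only 2 left in stock",
    "17 people are viewing this right now",
    "Sale ends in 04:59",
    "Free shipping on orders over $50",
]

PATTERNS = {
    "Low-stock Message":     re.compile(r"only \d+ left", re.I),
    "Activity Notification": re.compile(r"\d+ people are (viewing|buying)", re.I),
    "Countdown Timer":       re.compile(r"ends in \d{1,2}:\d{2}", re.I),
}

for text in SEGMENTS:
    hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
    print(f"{text!r:45} -> {hits or 'no dark pattern flagged'}")
```

A keyword list like this would miss novel phrasings, which is why the researchers clustered recurring interface text across sites instead of matching a fixed vocabulary.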
Key stats from the research

- There are 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories.
- These 1,841 dark patterns were present on 1,267 of the 11K shopping websites (~11.2%) in the data set.
- Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns.
- 234 instances of deceptive dark patterns were uncovered across 183 websites.
- 22 third-party entities were identified that provide shopping websites with the ability to create dark patterns on their sites.

Dark pattern categories

Sneaking: Attempting to misrepresent user actions, or delaying information that users would most likely object to once made available.
- Sneak into Basket: Adds additional products to users' shopping carts without their consent.
- Hidden Subscription: Charges users a recurring fee under the pretense of a one-time fee or a free trial.
- Hidden Costs: Reveals new, additional, and often unusually high charges to users just before they are about to complete a purchase.

Urgency: Imposing a deadline on a sale or deal, thereby accelerating user decision-making and purchases.
- Countdown Timers: A dynamic indicator counting down until a deadline expires.
- Limited-time Messages: A static urgency message without an accompanying deadline.

Misdirection: Using visuals, language, or emotion to direct users toward or away from making a particular choice.
- Confirmshaming: Uses language and emotion to steer users away from making a certain choice.
- Trick Questions: Uses confusing language to steer users into making certain choices.
- Visual Interference: Uses style and visual presentation to steer users into making certain choices over others.
- Pressured Selling: Defaults or high-pressure tactics that steer users into purchasing a more expensive version of a product (upselling) or related products (cross-selling).

Social proof: Influencing users' behavior by describing the experiences and behavior of other users.
- Activity Notifications: Recurring attention-grabbing messages on product pages indicating the activity of other users.
- Testimonials of Uncertain Origin: Customer testimonials whose origin, or how they were sourced and created, is not clearly specified.

Scarcity: Signalling that a product is likely to become unavailable, thereby increasing its desirability to users.
- Low-stock Messages: Signal to users that only limited quantities of a product remain.
- High-demand Messages: Signal to users that a product is in high demand, implying that it is likely to sell out soon.

Obstruction: Making it easy for the user to get into one situation but hard to get out of it. The researchers observed one type of Obstruction dark pattern, "Hard to Cancel". Hard to Cancel is restrictive (it limits the choices users can exercise to cancel their services); where websites do not disclose their cancellation policies upfront, it also becomes information hiding (it fails to inform users that cancellation is harder than signing up).

Forced Action: Forcing the user to do something tangential in order to complete their task. The researchers observed one type of Forced Action dark pattern, "Forced Enrollment", on 6 websites.

Limitations of the research

The researchers acknowledge that their study has certain limitations:
- Only text-based dark patterns are taken into account; work remains on inherently visual patterns (e.g., a change of font size or color that emphasizes one part of the text over another in an otherwise seemingly harmless pattern).
- The web crawl led to a fraction of Selenium crashes, which prevented the researchers from retrieving product pages or completing data collection on certain websites.
- The crawler failed to completely simulate the product purchase flow on some websites.
- They only crawled product pages and checkout pages, missing dark patterns present in other common pages such as website homepages, product search pages, and account creation pages.

The list of dark patterns can be downloaded as a CSV file. For more details, we recommend you read the research paper.

U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick its users
How social media enabled and amplified the Christchurch terrorist attack
Can an Open Web Index break Google's stranglehold over the search engine market?

Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers

Sugandha Lahoti
20 Jun 2019
11 min read
Yesterday at Google's annual shareholder meeting, Alphabet, Google's parent company, faced 13 independent stockholder proposals ranging from sexual harassment and diversity to the company's policies regarding China and forced arbitration. There was also a proposal to limit Alphabet's power by breaking up the company. However, as expected, every stockholder proposal was voted down after a few minutes of ceremonial voting, despite protesting workers. The company's co-founders, Larry Page and Sergey Brin, were both no-shows, and Google CEO Sundar Pichai didn't answer any questions.

Google has seen a massive escalation in employee activism since 2018. The company faced backlash from its employees over a censored version of its search engine for China, forced arbitration policies, and its mishandling of sexual misconduct. Google Walkout for Real Change organizers Claire Stapleton and Meredith Whittaker were also retaliated against by their supervisors for speaking out. According to widespread media reports, the US Department of Justice is readying an investigation into Google, examining whether the tech giant broke antitrust law in the operation of its online and advertisement businesses.

Source: Google's annual meeting MOM

Equal Shareholder Voting

Shareholders requested that Alphabet's board initiate and adopt a recapitalization plan for all outstanding stock to have one vote per share. Currently, the company has a multi-class voting structure, where each share of Class A common stock has one vote and each share of Class B common stock has 10 votes. As a result, Page and Brin currently control over 51% of the company's total voting power while owning less than 13% of stock. This raises concerns that the interests of public shareholders may be subordinated to those of the co-founders. However, the board of directors rejected the proposal, stating that the existing capital and governance structure has provided significant stability to the company and is therefore in the best interests of stockholders.

Commit to not use Inequitable Employment Practices

This proposal urged Alphabet to commit not to use any "Inequitable Employment Practices", to encourage focus on human capital management and improve accountability. "Inequitable Employment Practices" are mandatory arbitration of employment-related claims, non-compete agreements with employees, agreements with other companies not to recruit one another's employees, and involuntary non-disclosure agreements ("NDAs") that employees are required to sign in connection with settlement of claims that any Alphabet employee engaged in unlawful discrimination or harassment. Again Google rejected the proposal, stating that its updated code of conduct already covers these requirements and that implementing the proposal would not provide any incremental value or benefit to its stockholders or employees.

Establishment of a Societal Risk Oversight Committee

Stockholders asked Alphabet to establish a Societal Risk Oversight Committee ("the Committee") of the Board of Directors, composed of independent directors with relevant experience. The Committee would provide an ongoing review of corporate policies and procedures, above and beyond legal and regulatory matters, to assess the potential societal consequences of the company's products and services, and would offer guidance on strategic decisions.
As with the other board committees, a formal charter for the Committee and a summary of its functions would be made publicly available. This proposal was also rejected. Alphabet said, "The current structure of our Board of Directors and its committees (Audit, Leadership Development and Compensation, and Nominating and Corporate Governance) already covers these issues."

Report on Sexual Harassment Risk Management

Shareholders requested management to review its policies related to sexual harassment, to assess whether the company needs to adopt and implement additional policies, and to report its findings (omitting proprietary information and prepared at a reasonable expense) by December 31, 2019. This too was rejected, on the grounds that Google has robust conduct policies already in place, there is ongoing improvement and reporting in this area, and Google has a concrete plan of action to do more.

Majority Vote for the Election of Directors

This proposal demands that director nominees be elected by the affirmative vote of the majority of votes cast at an annual meeting of shareholders, with a plurality vote standard retained for contested director elections, i.e., when the number of director nominees exceeds the number of board seats. It also demands that a director who receives less than such a majority vote be removed from the board immediately, or as soon as a replacement director can be qualified on an expedited basis. The proposal was rejected on the basis that "Our Board of Directors believes that current nominating and voting procedures for election to our Board of Directors, as opposed to a mandated majority voting standard, provide the board the flexibility to appropriately respond to stockholder interests without the risk of potential corporate governance complications arising from failed elections."

Report on Gender Pay

Stockholders demanded that Google report on the company's global median gender pay gap, including associated policy, reputational, competitive, and operational risks, and risks related to recruiting and retaining female talent. The report would be prepared at a reasonable cost, omitting proprietary information, litigation strategy, and legal compliance information. Google says it already releases an annual report on its pay equity analyses, "ensuring they pay fairly and equitably", and doesn't think an additional report as detailed in the proposal would enhance Alphabet's existing commitment to fostering a fair and inclusive culture.

Strategic Alternatives

The proposal outlines the need for an orderly process to retain advisors to study strategic alternatives. Its backers believe Alphabet may be too large and complex to be managed effectively, and would prefer a voluntary strategic reduction in the size of the company over asset sales compelled by regulators. They also want a committee of independent directors to evaluate those alternatives in the exercise of their fiduciary responsibilities to maximize shareholder value. This proposal was also rejected, the stated reason being: "Our Board of Directors and management do not favor a given size of the company or focus on any strategy based on ideological grounds.
Instead, we develop a strategy based on the company's customers, partners, users and the communities we serve, and focus on strategies that maximize long-term sustainable stockholder value."

Nomination of an Employee Representative Director

This proposal asked that an Employee Representative Director be nominated for election to the board by shareholders at Alphabet's 2020 annual meeting. Stockholders say that employee representation on Alphabet's board would add knowledge and insight on issues critical to the success of the company, beyond that currently present on the board, and might result in more informed decision-making. Alphabet quoted the "Consideration of Director Nominees" section of its proxy statement, stating that the Nominating and Corporate Governance Committee looks for several critical qualities in screening and evaluating potential director candidates, and only allows the best and most qualified candidates to be elected to the Board of Directors. Accordingly, this proposal was also rejected.

Simple Majority Vote

The shareholders called for supermajority voting requirements to be eliminated and replaced by a requirement for a simple majority of the votes cast for and against applicable proposals, in compliance with applicable laws. Currently, the voice of regular shareholders is diminished because certain insider shares carry 10 times as many votes per share as regular shares, and shareholders have no right to act by written consent. The rejection reason laid out by Google was that more stringent voting requirements in certain limited circumstances are appropriate and in the best interests of Google's stockholders and the company.

Integrating Sustainability Metrics into Performance Measures

Shareholders requested that the board's Compensation Committee prepare a report assessing the feasibility of integrating sustainability metrics into the performance measures that apply to senior executives under the company's compensation plans or arrangements. They state that Alphabet remains predominantly white, male, and occupationally segregated: among Alphabet's top 290 managers in 2017, just over one-quarter were women and only 17 managers were underrepresented people of color. Even so, the proposal was rejected; Alphabet said the company already supports corporate sustainability, including environmental, social and diversity considerations.

Google Search in China

Google employees, as well as human rights organizations, have called on Google to end work on Dragonfly. Google employees have quit to avoid working on products that enable censorship; 1,400 current employees signed a letter protesting Dragonfly. Employees said: "Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment." Some employees have threatened to strike. Dragonfly may also be inconsistent with Google's AI Principles. The proposal urged Google to publish a Human Rights Impact Assessment by no later than October 30, 2019, examining the actual and potential impacts of censored Google search in China. Google rejected this proposal, stating that before it launches any new search product it would conduct proper human rights due diligence and confer with Global Network Initiative (GNI) partners and other key stakeholders.
If Google ever considers re-engaging this work, it says, it will do so transparently, engaging and consulting widely.

Clawback Policy

Alphabet currently does not disclose an incentive compensation clawback policy in its proxy statement. The proposal urged Alphabet's Leadership Development and Compensation Committee to adopt a clawback policy under which the committee would review the incentive compensation paid, granted or awarded to a senior executive in case of misconduct resulting in a material violation of law or Alphabet's policy that causes significant financial or reputational harm to Alphabet. Alphabet rejected this proposal based on the following claims:

- The company has announced significant revisions to its workplace culture policies.
- Google has ended forced arbitration for employment claims.
- Any violation of its Code of Conduct or other policies may result in disciplinary action, including termination of employment and forfeiture of unvested equity awards.

Report on Content Governance

Stockholders are concerned that Alphabet's Google is failing to effectively address content governance concerns, posing risks to shareholder value. They requested that Alphabet issue a report to shareholders assessing the risks posed by content management controversies related to election interference, freedom of expression, and the spread of hate speech. Google said that it has already done significant work in the areas of combatting violent or extremist content, misinformation, and election interference, and has continued to keep the public informed about its efforts; it therefore does not believe that implementing this proposal would benefit its stockholders.

Read the full report here.

Protestors showed disappointment

As the annual meeting was conducted, thousands of activists protested across Google offices. A UK-based human rights organization called SumOfUs teamed up with Students for a Free Tibet to propose a breakup of Google. They organized protests outside 12 Google offices around the world to coincide with the shareholder meeting, including in San Francisco, Stockholm, and Mumbai.

https://twitter.com/SumOfUs/status/1141363780909174784

Alphabet board chairman John Hennessy opened the meeting by reflecting on Google's mission to provide information to the world. "Of course this comes with a deep and growing responsibility to ensure the technology we create benefits society as a whole," he said. "We are committed to supporting our users, our employees and our shareholders by always acting responsibly, inclusively and fairly."

In the Q&A session that followed, protestors demanded to know why Page wasn't in attendance. "It's a glaring omission," one of them said. "I think that's disgraceful." Hennessy responded, "Unfortunately, Larry wasn't able to be here," but noted Page has been at every board meeting. The stockholders pushed back, saying the annual meeting is the only opportunity for investors to address him. Toward the end of the meeting, protestors including Google employees, low-paid contract workers, and local community members were chanting outside: "We shouldn't have to be here protesting.
We should have been included." One sign from the protest, listing the first names of the members of Alphabet's board, read, "Sundar, Larry, Sergey, Ruth, Kent — You don't speak for us!"

https://twitter.com/GoogleWalkout/status/1141454091455008769
https://twitter.com/LauraEWeiss16/status/1141400193390141440

This meeting also marked the official departure of former Google CEO Eric Schmidt and former Google Cloud chief Diane Greene from Alphabet's board; earlier this year, they had said they wouldn't seek re-election.

Google Walkout organizer Claire Stapleton resigns after facing retaliation from management
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Google employees lay down actionable demands after staging a sit-in to protest retaliation

Austrian Supreme Court rejects Facebook’s bid to stop a GDPR-violation lawsuit against it by privacy activist, Max Schrems

Bhagyashree R
13 Jun 2019
5 min read
On Tuesday, the Austrian Supreme Court rejected Facebook's attempt to block a lawsuit against it for not conforming to Europe's General Data Protection Regulation (GDPR). The decision will also have an effect on other EU member states that give "special status to industry sectors."

https://twitter.com/maxschrems/status/1138703007594496000?s=19

The lawsuit was filed by Austrian lawyer and data privacy activist Max Schrems. In it, he accuses Facebook of using illegal privacy policies, as it forces users to consent to the processing of their data in return for using the service. GDPR does not allow forced consent as a valid legal basis for processing user data. Schrems said in a statement, "Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the 'agree' button–that's not a free choice; it more reminds of a North Korean election process. Many users do not know yet that this annoying way of pushing people to consent is actually forbidden under GDPR in most cases."

Facebook has been trying to block the lawsuit by questioning whether GDPR-based cases fall under the jurisdiction of the courts at all. According to Facebook's appeal, such lawsuits should be handled by data protection authorities, in this case the Irish Data Protection Commissioner (DPC). Dismissing Facebook's argument, the landmark decision holds that complaints made under Article 79 of GDPR can be reviewed both by judges and by data protection authorities.

The verdict comes as a relief for Schrems, who had to wait almost five years even to get the lawsuit to trial because of Facebook's continuous attempts to block it. "I am very pleased that we were able to clarify this fundamental issue. We are hoping for a speedy procedure now that the case has been pending for a good 5 years," Schrems said in a press release. He further added, "If we win even part of the case, Facebook would have to adapt its business model considerably. We are very confident that we will succeed on the substance too now. Of course, they wanted to prevent such a case by all means and blocked it for five years."

Previously, the Vienna Regional Court had ruled in Facebook's favor, declaring that it did not have jurisdiction and that Facebook could only be sued in Ireland, where its European headquarters are. Schrems believes that verdict reflected "a tendency that civil judges are not keen to have (complex) GDPR cases on their table." Now, both the Appellate Court and the Austrian Supreme Court have agreed that anyone can file a lawsuit for GDPR violations. Schrems' original idea was to mount a "class action"-style suit against Facebook by allowing any Facebook user to join the case; the court did not allow that, and Schrems was limited to bringing only a model case.

This is Schrems' second victory this year in his fight against Facebook. Last month, the Irish Supreme Court dismissed Facebook's attempt to stop the referral of a privacy case, concerning the transfer of EU citizens' data to the United States, to the European Court of Justice (ECJ); that hearing is now scheduled for July.

Schrems' eight-year-long battle against Facebook

Schrems' fight against Facebook started well before most of us realized the severity of tech companies harvesting our personal data. Back in 2011, Schrems' professor at Santa Clara University invited Facebook's privacy lawyer Ed Palmieri to speak to his class.
Schrems was surprised by the lawyer's lack of awareness of data protection laws in Europe, and decided to write his thesis on Facebook's misunderstanding of EU privacy law. As part of the research, he requested his personal data from Facebook and found it held his entire user history. He went on to file 22 complaints with the Irish Data Protection Commission, accusing Facebook of breaking European data protection laws. His efforts finally showed results when, in 2015, the European Court of Justice struck down the EU-US Safe Harbor Principles.

As part of his fight for global privacy rights, Schrems also co-founded the European non-profit noyb ("None of Your Business"), which aims to "make privacy real". The organization works to make privacy enforcement more effective, holds companies accountable when they fail to follow Europe's privacy laws, and runs media initiatives in support of GDPR.

It looks like things haven't been going well for Facebook. Along with losing these cases in the EU, a revelation yesterday by the WSJ surfaced several emails that indicate Mark Zuckerberg's knowledge of potentially problematic privacy practices at the company.

You can read the entire press release on noyb's official website.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Highlights from Mary Meeker’s 2019 Internet trends report

Sugandha Lahoti
12 Jun 2019
8 min read
At Recode by Vox's 2019 Code Conference on Tuesday, Bond partner Mary Meeker presented her annual report onstage, covering the internet's latest trends. Meeker first started presenting these reports in 1995, underlining the most important statistics and technology trends on the internet. Last September, Meeker quit Kleiner Perkins to start her own firm, Bond; she is popularly known as the Queen of the Internet.

Mary Meeker's 2019 Internet Trends report highlighted that the internet is continuing to grow, slowly, as more users come online, especially on mobile devices. She also covered increased internet ad spending, data growth, and the rise of freemium subscription business models, interactive gaming, the on-demand economy and more.

https://youtu.be/G_dwZB5h56E

The internet trends highlighted by Meeker include:

- Internet users
- E-commerce and advertising
- Internet usage
- Freemium business models
- Data growth
- Jobs and work
- Online education
- Immigration and healthcare

Internet Users

More than 50% of the world's population now has access to the internet. There are 3.8 billion internet users in the world, with Asia-Pacific leading in both users and potential. China is the largest market with 21% of total internet users; India is at 12%. However, growth is slowing: 6% in 2018 versus 7% in 2017, because so many people have already come online that new users are harder to come by. New smartphone unit shipments actually declined in 2018. Among the global internet market cap leaders, the U.S. is stable at 18 of the top 30 and China at 7 of the top 30; these are the two countries where internet innovation is at an especially high level. Revenue growth for the internet market cap leaders continues to slow: 11 percent year-on-year in Q1 versus 13 percent in Q4.

Internet usage

Internet usage saw solid growth, driven by investment in innovation. Digital media usage in the U.S. is accelerating, up 7% versus 5% growth in 2017. The average US adult spends 6.3 hours each day with digital media, over half of which is spent on mobile. Wearables had 52 million users, a figure that doubled in four years. Roughly 70 million people in the US listen to podcasts, a figure that has also doubled in about four years. Outside the US, there is especially high innovation in data-driven and direct fulfillment, which is growing very rapidly in China, and in financial services. Images are also becoming an increasingly relevant way to communicate: more than 50% of tweet impressions today involve images, video or other forms of media. Interactive gaming innovation is rising across platforms as games like Fortnite become the new social media for certain people; gaming reached 2.4 billion users, up 6 percent year-on-year in 2018.

On the flip side

Almost 26% of adults are constantly online, versus 21% three years ago; that number jumps to 39% for the 18-to-29-year-olds surveyed. However, digital media users are taking action to reduce their usage, and businesses are also taking action to help users monitor it. Social media usage has decelerated, up 1% in 2018 versus 6% in 2017. Privacy concerns are high but moderating, as regulators and businesses improve consumer privacy controls. Encrypted messaging and traffic are rising rapidly: in Q1, 87 percent of global web traffic was encrypted, up from 53 percent three years ago. Another usage concern is problematic content.
Problematic content on the internet can be less filtered and more amplified. Images and streaming can be more powerful than text. Algorithms can amplify usage patterns, and social media can amplify trending topics. Bad actors can amplify ideologies, unintended bad actors can amplify misinformation, and extreme views can amplify polarization. Internet platforms are, however, driving efforts to reduce problematic content, as are consumers and businesses. 88% of people in the U.S. believe the internet has been mostly good for them, and 70% believe it has been mostly good for society. Cyber attacks have continued to rise, including state-sponsored attacks, large-scale data provider attacks, and monetary extortion attacks.

E-commerce and online advertising

E-commerce is now 15 percent of retail sales. Its growth has slowed — up 12.4 percent in Q1 compared with a year earlier — but still towers over growth in regular retail, which was just 2 percent in Q1. In online advertising, comparing the amount of media time spent against the amount of advertising dollars spent, mobile hit equilibrium in 2018, while desktop hit that point in 2015. Internet ad spending accelerated a little in 2018, up 22 percent on an annual basis. Most of the spending still goes to Google and Facebook, but companies like Amazon and Twitter are taking a growing share. Some 62 percent of all digital display ad buying is for programmatic ads, which will continue to grow. Quarterly ad revenue growth at the leading internet companies has been decelerating, at 20 percent in Q1. Google and Facebook still account for the majority of online ad revenue, but the growth of US advertising platforms like Amazon, Twitter, Snapchat, and Pinterest is outstripping the big players: Google's ad revenue grew 1.4 times over the past nine quarters and Facebook's grew 1.9 times, while the combined group of new players grew 2.6 times. Customer acquisition costs — the marketing spending necessary to attract each new customer — are going up. That's unsustainable, because in some cases they surpass the long-term revenue those customers will bring. Meeker suggests cheaper ways to acquire customers, like free trials and unpaid tiers.

Freemium business models

Freemium business models are growing and scaling. A freemium business pairs a free user experience, which enables more usage, engagement, social sharing and network effects, with a premium user experience, which drives monetization and product innovation. The freemium model started in gaming and is now evolving and emerging in consumer and enterprise markets. One important factor in this growth is cloud deployment revenue, which grew about 58% year-over-year. Another enabler of freemium subscription business models is efficient digital payments, which account for more than 50% of day-to-day transactions around the world.

Data growth

Internet trends indicate that a number of "data plumbers" are helping companies collect data, manage connections, and optimize data. In a survey of retail customers, 91% preferred brands that provided personalized offers and recommendations; 83% were willing to passively share data in exchange for personalized services; and 74% were willing to actively share data in exchange for personalized experiences. Data volume and utilization are also evolving rapidly: enterprise data surpassed consumer data in 2018, and cloud is overtaking both.
More data is now stored in the cloud than on private enterprise servers or consumer devices.

Jobs and Work

Strong economic indicators and internet-enabled services are helping work. Looking at global GDP, China, the US and India are rising, but Europe is falling. Cross-border trade is at 29% of global GDP and has been growing for many years. Concerns about unemployment are very high outside the US and low within it; the consumer confidence index is high and rising. Unemployment is at a 19-year low, job openings are at an all-time high, and wages are rising. On-demand work is creating internet-enabled opportunities and efficiencies: there are 7 million on-demand workers, up 22 percent year-on-year. Remote work is doing the same: 5 percent of Americans now work remotely, up from 3 percent in 2000.

Online education

Education costs and student debt are rising in the US, whereas post-secondary enrollment is slowing. Online education enrollment is high across a diverse base of universities: public, private for-profit, and private not-for-profit. Top offline institutions are ramping up their online offerings at a rapid rate, most recently the University of Pennsylvania, University of London, University of Michigan and UC Boulder. Google's program of certificates for in-demand jobs, run in collaboration with Coursera, is growing rapidly.

Immigration and Healthcare

In the U.S., 60% of the most highly valued tech companies were founded by first- or second-generation Americans; they employed 1.9 million people last year. US entitlements account for 61% of government spending, versus 42% thirty years ago, and show no signs of slowing. Healthcare is steadily digitizing, driven by consumers, and the trends are very powerful: expect more telemedicine and on-demand consultations.

For details and infographics, we recommend you go through the slide deck of the Internet Trends report.

What Elon Musk and South African conservation can teach us about technology forecasting
Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them
Experts present the most pressing issues facing global lawmakers on citizens' privacy, democracy and the rights to freedom of speech

Salesforce is buying Tableau in a $15.7 billion all-stock deal

Richard Gall
10 Jun 2019
4 min read
Salesforce, one of the world's leading CRM platforms, is buying data visualization software company Tableau in an all-stock deal worth $15.7 billion. The news comes just days after it emerged that Google is buying one of Tableau's competitors in the data visualization market, Looker.

Taken together, the stories highlight the importance of analytics to some of the planet's biggest companies. They suggest that despite years of the big data revolution, it's only now that market-leading platforms are starting to realise that their customers want the level of capability offered by the best tools in the data visualization space.

Salesforce will pay for Tableau entirely in stock. As the press release published on the Salesforce site explains, "each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019." (A back-of-the-envelope sketch of this exchange-ratio arithmetic follows below.) The acquisition is expected to be completed by the end of October 2019.

https://twitter.com/tableau/status/1138040596604575750

Why is Salesforce buying Tableau?

The deal is an incredible result for Tableau shareholders: at the end of last week, its market cap was $10.7 billion. This has led to some scepticism about just how good a deal it is for Salesforce. One commenter on Hacker News said "this seems really high for a company without earnings and a weird growth curve. Their ticker is cool and maybe sales force [sic] wants to be DATA on nasdaq. Otherwise, it will be hard to justify this high markup for a tool company." With Salesforce shares dropping 4.5% as markets opened this week, it seems investors are inclined to agree: Salesforce is certainly paying a premium for Tableau.

However, whatever the long-term impact of the acquisition, the price paid underlines the fact that Salesforce views Tableau as exceptionally important to its long-term strategy. It opens up an opportunity for Salesforce to reposition and redefine itself as much more than just a CRM platform, letting it compete with the likes of Microsoft, which has a full suite of professional and business intelligence tools. Moreover, it provides the platform with another way of onboarding customers: given Tableau is well known as a powerful yet accessible data visualization tool, it creates an avenue through which new users can find their way to the Salesforce product.

Marc Benioff, Chair and co-CEO of Salesforce, said "we are bringing together the world's #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It's truly the best of both worlds for our customers--bringing together two critical platforms that every customer needs to understand their world."

Tableau has been a target for Salesforce for some time. Leaked documents from 2016 revealed that the data visualization company was one of 14 companies Salesforce had an interest in (another was LinkedIn, which was eventually purchased by Microsoft).

Read next: Alteryx vs. Tableau: Choosing the right data analytics tool for your business

What's in it for Tableau (aside from the money...)?

For Tableau, there are benefits to being purchased by Salesforce beyond the money. Primarily this is about expanding the platform's reach: Salesforce users are people who are interested in data, with a huge range of use cases.
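Returning to the exchange-ratio arithmetic quoted earlier, here is a back-of-the-envelope sketch of how the per-share and total values are derived. Only the 1.103 exchange ratio comes from the press release; the Salesforce VWAP and Tableau share count below are illustrative assumptions, not figures from the announcement:

```python
# Illustrative arithmetic only: EXCHANGE_RATIO comes from the press release;
# the VWAP and share count are hypothetical placeholders.

EXCHANGE_RATIO = 1.103        # Salesforce shares per Tableau share
CRM_3DAY_VWAP = 160.00        # assumed 3-day volume-weighted avg price (USD)
TABLEAU_SHARES_OUT = 86e6     # assumed Class A + Class B shares outstanding

per_share_value = EXCHANGE_RATIO * CRM_3DAY_VWAP
implied_equity_value = per_share_value * TABLEAU_SHARES_OUT

print(f"Implied value per Tableau share: ${per_share_value:,.2f}")
print(f"Implied equity value: ${implied_equity_value / 1e9:.1f}B")
```

With these assumed inputs the implied equity value lands in the same ballpark as the announced $15.7 billion enterprise value, which is how an all-stock price tag follows from a fixed exchange ratio rather than a cash figure.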
By joining up with Salesforce, Tableau will become their go-to data visualization tool. "As our two companies began joint discussions," Tableau CEO Adam Selipsky said, "the possibilities of what we might do together became more and more intriguing. They have leading capabilities across many CRM areas including sales, marketing, service, application integration, AI for analytics and more. They have a vast number of field personnel selling to and servicing customers. They have incredible reach into the fabric of so many customers, all of whom need rich analytics capabilities and visual interfaces... On behalf of our customers, we began to dream about what we might accomplish if we could combine our ability to help people see and understand data with their ability to help people engage and understand customers."

What will happen to Tableau?

Tableau won't be going anywhere. It will continue to exist under its own brand, with the current leadership, including Selipsky, all remaining.

What does this all mean for the technology market?

At the moment, it's too early to say, but the last year or so has seen some major high-profile acquisitions by tech companies. Perhaps we're seeing the emergence of a tooling arms race, as the biggest organizations attempt to arm themselves with ecosystems of established market-leading tools. Whether this is good or bad for users remains to be seen.

Did unfettered growth kill Maker Media? Financial crisis leads company to shut down Maker Faire and lay off all staff

Savia Lobo
10 Jun 2019
5 min read
Updated: On July 10, 2019, Dougherty announced the relaunch of Maker Faire and Maker Media under the new name "Make Community".

Maker Media Inc., the company behind Maker Faire, the popular event that hosts arts, science, and engineering DIY projects for children and their parents, has laid off all 22 of its employees and decided to shut down due to financial troubles.

In January 2005, the company started off with MAKE, an American bimonthly magazine focused on do-it-yourself (DIY) and do-it-with-others (DIWO) projects involving computers, electronics, robotics, metalworking, woodworking and more, for both adults and children. In 2006, the company held its first Maker Faire, an event that lets attendees wander amidst giant, inspiring art and engineering installations. Maker Faire now includes 200 owned and licensed events per year in over 40 countries.

The Maker movement gained momentum and popularity when MAKE magazine first started publishing 15 years ago. The movement became a source of livelihood as individuals found ways to build small businesses around their creative activity. In 2014, the White House blog posted an article stating, "Maker Faires and similar events can inspire more people to become entrepreneurs and to pursue careers in design, advanced manufacturing, and the related fields of science, technology, engineering and mathematics (STEM)." With funding from the Department of Labor, "the AFL-CIO and Carnegie Mellon University are partnering with TechShop Pittsburgh to create an apprenticeship program for 21st-century manufacturing and encourage startups to manufacture domestically." Recently, researchers from Baylor University and the University of North Carolina highlighted, in a research paper, opportunities for studying the conditions under which the Maker movement might foster entrepreneurship.

Dale Dougherty, Maker Media Inc.'s founder and CEO, told TechCrunch, "I started this 15 years ago and it's always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works…barely. Events are hard . . . there was a drop off in corporate sponsorship." Microsoft and Autodesk failed to sponsor this year's flagship Bay Area Maker Faire, TechCrunch reports.

Dougherty added that the company is trying to keep the servers running. "I hope to be able to get control of the assets of the company and restart it. We're not necessarily going to do everything we did in the past but I'm committed to keeping the print magazine going and the Maker Faire licensing program," he said.

In 2016, the company laid off 17 of its employees, followed by 8 more this March; those layoffs may have hinted at the financial crisis to come. The staff "have been paid their owed wages and PTO, but did not receive any severance or two-week notice", TechCrunch reports.

Maker Media Inc. had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. Dougherty says, "It started as a venture-backed company but we realized it wasn't a venture-backed opportunity. The company wasn't that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes, for instance, are in education."

The company retains a huge public following. Dougherty told TechCrunch that despite the rain, Maker Faire's big Bay Area event last week met its ticket sales target.
About 1.45 million people attended its events in 2016, MAKE: magazine had 125,000 paid subscribers, and the company had racked up over one million YouTube subscribers. "But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media", writes TechCrunch.

Dougherty told TechCrunch he has been overwhelmed by the support shown by the Maker community. As of now, licensed Maker Faire events around the world will proceed as planned. "Dougherty also says he's aware of Oculus co-founder Palmer Luckey's interest in funding the company, and a GoFundMe page started for it", TechCrunch reports.

Mike Senese, Executive Editor of MAKE magazine, tweeted, "Nothing but love and admiration for the team that I got to spend the last six years with, and the incredible community that made this amazing part of my life a reality."

https://twitter.com/donttrythis/status/1137374732733493248
https://twitter.com/xeni/status/1137395288262373376
https://twitter.com/chr1sa/status/1137518221232238592

Former MythBusters co-host Adam Savage, a regular presence at Maker Faire, told The Verge, "Maker Media has created so many important new connections between people across the world. It showed the power from the act of creation. We are the better for its existence and I am sad. I also believe that something new will grow from what they built. The ground they laid is too fertile to lie fallow for long."

On July 10, 2019, Dougherty announced he'll relaunch Maker Faire and Maker Media under the new name "Make Community", with the official launch expected next week. The company is also working on a new issue of Make Magazine, planned to be published quarterly, and the online archives of its do-it-yourself project guides will remain available. The relaunch comes, Dougherty told TechCrunch, "with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again. This is where the community support needs to come in because I can't fund it for very long."

GitHub introduces 'Template repository' for easy boilerplate code management and distribution
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Worried about deepfakes? Check out the new algorithm that manipulates talking-head videos by altering their transcripts

Vincy Davis
07 Jun 2019
6 min read
Last week, a team of researchers from Stanford University, the Max Planck Institute for Informatics, Princeton University and Adobe Research published a paper titled "Text-based Editing of Talking-head Video". The paper proposes a method to edit a talking-head video based on its transcript, producing a realistic output video in which the speaker's dialogue has been modified. The editor modifies the video through its text transcript: adding new words, deleting unwanted ones, or completely rearranging pieces by dragging and dropping. The result maintains a seamless audio-visual flow, without jump cuts, and looks almost flawless to the untrained eye.

The researchers intend this kind of text-based editing to lay the foundation for better editing tools in the post-production of movies and television. Actors often botch small bits of a performance or leave out a critical word; this algorithm can help video editors fix that, which until now has involved expensive reshoots. It can also ease the adaptation of audio-visual content to specific target audiences. The tool supports three types of edit operations: adding new words, rearranging existing words, and deleting existing words.

Ohad Fried, a researcher on the paper, says that "This technology is really about better storytelling. Instructional videos might be fine-tuned to different languages or cultural backgrounds, for instance, or children's stories could be adapted to different ages."

https://youtu.be/0ybLCfVeFL4

How does the application work?

The method takes as input a talking-head video and a transcript, and performs text-based editing in stages. The first step is to align phonemes to the input audio and track each input frame to construct a parametric head model; a 3D parametric face model is registered with each frame of the input video, which helps in selectively blending different aspects of the face. Next, a background sequence is selected and used for pose data and background pixels; this allows editors to handle challenging videos with hair movement and slight camera motion. Because facial expressions are an important parameter, the researchers preserve the retrieved expression parameters as much as possible, smoothing the transitions between them. This yields an edited parameter sequence describing the new desired facial motion, along with a correspondingly retimed background video clip. These are passed to a 'neural face rendering' stage, which changes the facial motion of the retimed background video to match the parameter sequence. The rendering procedure thus produces photo-realistic video frames of the subject appearing to speak the new phrase, with the localized edits blending seamlessly into the original video. Lastly, to add the audio, the resulting video is retimed to match the recording at the level of individual phones (speech sounds); the researchers used the performers' own voices in all their synthesis results.

Image Source: Text-based Editing of Talking-head Video

The researchers tested the system with a series of complex edits, including adding, removing and changing words, as well as translations to different languages. When the application was tried in a crowd-sourced study with 138 participants, the edits were rated as "real" almost 60% of the time.
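At a high level, the pipeline can be pictured as a sequence of stages. The sketch below is our own structural outline of the data flow described in the paper; every function is a hypothetical stub standing in for a substantial research component (phoneme alignment, 3D face fitting, neural rendering), not a working implementation:

```python
# Structural outline only: each stub stands in for a major component of the
# paper's pipeline; the function names and return values are hypothetical.

def align_phonemes(audio, transcript):
    """Step 1: align phonemes of the transcript to the input audio."""
    return [("HH", 0.00, 0.08), ("EH", 0.08, 0.21)]  # (phone, start, end)

def fit_face_model(frames):
    """Step 2: register a 3D parametric face model with each video frame."""
    return [{"pose": None, "expression": None} for _ in frames]

def edit_parameters(face_params, edit_ops):
    """Step 3: build the edited, smoothed expression/pose parameter sequence."""
    return face_params  # placeholder for blending and smoothing

def neural_face_render(background_frames, face_params):
    """Step 4: re-render the face region to match the edited parameters."""
    return background_frames  # placeholder for photo-realistic frames

def retime_to_audio(frames, phonemes):
    """Step 5: retime the output video to the new audio at the phone level."""
    return frames

if __name__ == "__main__":
    frames, audio, transcript = [0, 1, 2], b"", "hello there"
    phonemes = align_phonemes(audio, transcript)
    params = fit_face_model(frames)
    edited = edit_parameters(params, edit_ops=[("replace", "there", "world")])
    output = retime_to_audio(neural_face_render(frames, edited), phonemes)
    print(f"produced {len(output)} edited frames")
```

The point of the staged design is that only the edited region's facial motion is re-synthesized, while pose and background come from real footage, which is why the localized edits blend so convincingly.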
Fried said that “The visual quality is such that it is very close to the original, but there’s plenty of room for improvement.”

Ethical considerations: erosion of truth, confusion and defamation

Even though the application is quite useful for video editors and producers, it raises important and valid concerns about its potential for misuse. The researchers themselves acknowledge that such a technology might be used for illicit purposes: “We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse.”

They recommend precautions against deception and misuse, such as watermarking: “The fact that the video is synthesized may be obvious by context, directly stated in the video or signaled via watermarking. We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience.” They urge the community to continue developing forensics, fingerprinting and verification techniques to identify manipulated video, and they support the creation of appropriate regulations and laws that would balance the risks of misuse against the importance of creative, consensual use cases.

The public, however, remains dubious, pointing out valid reasons why the ‘ethical concerns’ discussed in the paper fall short. A user on Hacker News comments, “The "Ethical concerns" section in the article feels like a punt. The author quoting "this technology is really about better storytelling" is aspirational -- the technology's story will be written by those who use it, and you can bet people will use this maliciously.”

https://twitter.com/glenngabe/status/1136667296980701185

Another user feels that this kind of technology will only result in a “slow erosion of video evidence being trustworthy”. Others have pointed out that the kind of transformation described in the paper does not fit under the broad category of ‘video editing’: ‘We need more words to describe this new landscape.’

https://twitter.com/BrianRoemmele/status/1136710962348617728

Another common argument is that the algorithm could be used to generate terrifyingly real deepfake videos. A recent ‘shallow fake’ example was the altered video of Nancy Pelosi, slowed down to make it appear she was slurring her words; Facebook was criticized for not acting faster to slow the video’s spread. Beyond altering politicians’ speeches, such videos could, for instance, be used to create fake emergency alerts, or to disrupt elections by dropping a fake video of a candidate just before voting starts. There is also the risk of defaming individuals in a personal capacity.

Sam Gregory, Program Director at Witness, tweets that one of the main steps in ensuring effective use of such tools would be to “ensure that any commercialization of synthetic media tools has equal $ invested in detection/safeguards as in detection; and to have a grounded conversation on trade-offs in mitigation”. He has also listed more interesting recommendations.

https://twitter.com/SamGregory/status/1136964998864015361

For more details, we recommend you read the research paper.
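On the watermarking recommendation: the paper does not prescribe a specific mechanism, but a visible watermark is the simplest instance of what the researchers describe. The sketch below is one hedged possibility, assuming OpenCV is available and using placeholder file names; it burns a “SYNTHETIC” label into every frame of a clip.

```python
import cv2

def watermark_video(src_path: str, dst_path: str, label: str = "SYNTHETIC") -> None:
    """Burn a visible label into every frame of a video file."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps,
                          (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Draw the label near the bottom-left corner of each frame.
        cv2.putText(frame, label, (10, height - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        out.write(frame)

    cap.release()
    out.release()

# Placeholder file names -- substitute the actual edited clip.
watermark_video("edited_talking_head.mp4", "edited_talking_head_marked.mp4")
```

A visible label only addresses the honest-disclosure case; the robust invisible watermarking and forensic fingerprinting the researchers also call for are considerably harder problems and remain active research areas.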
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee

Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?