
How-To Tutorials - Data


Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean anyway? It means different things to different people, and the title is often handed out for positions that more nearly match an analyst-level profile! Common as the term is, it is worth defining what it really means, especially in the context of software development. This article is an excerpt from The Successful Software Manager, written by Herman Fung, an internationally experienced IT manager. The book is a comprehensive, practical guide to managing software developers and software customers, and to deciding what software needs to be built, not how to build it. In this article, we'll look at what you must be aware of before making the move to become a manager in the software industry.

A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis to make decisions, or, more accurately, is responsible and accountable for the decisions they make.

The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, they are subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development falls into one of three categories, which we will now discuss.

Team Leader/Manager

This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works.
Usually, the team manager is also asked to fulfill people-management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.

Development/Delivery Manager

This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will run workshops and huddles to facilitate better overall teamwork and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people-management duties.

Project Manager

This person is most probably a non-techie, but there are exceptions, and being a techie can be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available, while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization in order to navigate through them. The project manager ensures the project follows the methodology or process framework mandated by the Project Management Office (PMO). They will not have people-management responsibilities for project team members.

Agile practitioner

As with all roles in today's world of tech, these categories vary and overlap. They can even be held by the same person, which is becoming increasingly common. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position.
If you are a true Agile practitioner, you may take issue with these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are categories that transcend any methodology. Even if they're not referred to by these names, they are roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.

Scrum master

A scrum master is a role often compared, rightly or wrongly, with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and controlling. This difference is as much a mindset as it is a strict practice, and both are often described as attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together.

Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone.
After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether the move means managing people and writing less code. In this article, we looked into the different leadership roles available to developers as part of their career progression plan. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager by Herman Fung.

"Developers don't belong on a pedestal, they're doing a job like everyone else" – April Wensel on toxic tech culture and Compassionate Coding [Interview]
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
'I code in my dreams too', say developers in JetBrains State of Developer Ecosystem 2019 Survey


Experts discuss Dark Patterns and deceptive UI designs: What are they? What do they do? How do we stop them?

Sugandha Lahoti
29 Jun 2019
12 min read
Dark patterns are often used online to deceive users into taking actions they would not otherwise take under effective, informed consent. They are generally employed by shopping websites, social media platforms, mobile apps, and services as part of their user interface design choices. Dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children. Many uses of dark patterns are unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws), the European Union (under the Unfair Commercial Practices Directive and similar member state laws), and numerous other jurisdictions.

Earlier this week, at the Russell Senate Office Building, a panel of experts met to discuss the implications of dark patterns in the session "Deceptive Design and Dark Patterns: What are they? What do they do? How do we stop them?" The session included remarks from Senators Mark Warner and Deb Fischer, sponsors of the DETOUR Act. The full panel of experts included:

Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology)
Rana Foroohar (Global Business Columnist and Associate Editor, Financial Times)
Amina Fazlullah (Policy Counsel, Common Sense Media)
Paul Ohm (Professor of Law and Associate Dean, Georgetown Law School), also the moderator
Katie McInnis (Policy Counsel, Consumer Reports)
Marshall Erwin (Senior Director of Trust & Security, Mozilla)
Arunesh Mathur (Dept. of Computer Science, Princeton University)

Dark patterns are growing in social media platforms, video games, and shopping websites, and are increasingly used to target children

The expert session was opened by Arunesh Mathur (Dept. of Computer Science, Princeton University), who talked about his new study by researchers from Princeton University and the University of Chicago. The study suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns. One of these was Sneak into Basket, which adds additional products to users' shopping carts without their consent. For example, you might buy a bouquet on a website, and the website, without your consent, would add a greeting card in the hope that you will actually purchase it.

Katie McInnis agreed, and added that dark patterns not only undermine the choices available to users on social media and shopping platforms but can also cost users money. User interfaces are sometimes designed to push users away from protecting their privacy, making the platforms tough to evaluate.

Amina Fazlullah, Policy Counsel, Common Sense Media, said that dark patterns are also being used to target children. Manipulative apps use design techniques to shame or confuse children into in-app purchases, or to keep them on the app for longer. Children are mostly unable to discern these manipulative techniques. Sometimes the screen will have icons or buttons that appear to be part of gameplay, and children will click on them not realizing that they're being asked to make a purchase, being shown an ad, or being directed to another site. There are games that ask for payments or microtransactions to continue the game.
Mozilla uses product transparency to curb dark patterns

Marshall Erwin, Senior Director of Trust & Security at Mozilla, talked about the negative effects of dark patterns and how Mozilla makes its own products more transparent. Mozilla has a set of checks and principles in place to avoid dark patterns:

No surprises: If users were to figure out or start to understand exactly what is happening in the browser, it should be consistent with their expectations. If users are surprised, the browser needs to change, either by stopping the activity entirely or by creating additional transparency that helps people understand it.

Anti-tracking technology: Cross-site tracking, enabled by cookies, is one of the most pervasive and pernicious dark patterns across the web today. Browsers should take action to decrease the attack surface in the browser and actively protect people from those patterns online. Mozilla and Apple have introduced anti-tracking technology to actively intervene and protect people from parties that are probably not trustworthy.

DETOUR Act by Senators Warner and Fischer

In April, Warner and Fischer introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act, bipartisan legislation to prohibit large online platforms from using dark patterns to trick consumers into handing over their personal data. The act focuses on the activities of large online service providers (over a hundred million users visiting in a given month). Under this act, you cannot use practices that trick users into giving up information or consent. There are new controls on conducting 'psychological experiments on your users', and you can no longer target children under 13 with the goal of hooking them into your service. The act extends additional rulemaking and enforcement abilities to the Federal Trade Commission.
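Returning to Erwin's anti-tracking point: the cross-site-tracking protections he describes hinge on telling first-party requests from third-party ones. The sketch below is illustrative only, not Mozilla's actual implementation; real browsers consult the Public Suffix List, whereas this version uses a naive "last two labels" heuristic that would misclassify suffixes like .co.uk.

```python
from urllib.parse import urlsplit

def registrable_domain(host: str) -> str:
    # Naive eTLD+1 heuristic: keep the last two labels. Real browsers
    # consult the Public Suffix List instead (e.g. for .co.uk domains).
    return ".".join(host.split(".")[-2:])

def is_third_party(page_url: str, request_url: str) -> bool:
    # A request is "third-party" when its registrable domain differs
    # from that of the page the user is visiting.
    return (registrable_domain(urlsplit(page_url).hostname)
            != registrable_domain(urlsplit(request_url).hostname))
```

A tracking protection built on a check like this would block or strip cookies from requests where `is_third_party` returns True; the hostnames below are made up for illustration.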
"Protecting users' personal data and user autonomy online are truly bipartisan issues": Senator Mark Warner

In his presentation, Warner talked about how 2019 is the year when we need to recognize dark patterns and their ongoing manipulation of American consumers. While we've all celebrated the benefits that social media has brought to communities, there is also an enormous dark underbelly, he says. It is important that Congress steps up, and that senators play a role in ensuring that Americans and their private data are not misused or manipulated going forward. Protecting users' personal data and user autonomy online are truly bipartisan issues. This is not liberal versus conservative; it's much more future versus past, and about how we get this future right in a way that takes advantage of social media tools but also puts some appropriate constraints in place.

He says that the driving notion behind the DETOUR Act is that users should have choice and autonomy when it comes to their personal data. When a company like Facebook asks you to upload your phone contacts or some other highly valuable data to its platform, you ought to have a simple choice: yes or no. Companies that run experiments on you without your consent are coercive, and the DETOUR Act aims to put appropriate protections in place that defend users' ability to make informed choices. In addition to prohibiting large online platforms from using dark patterns to trick consumers into handing over their personal data, the bill would also require informed consent for behavioral experimentation. In the process, the bill sends a clear message to the platform companies and the FTC that they are now in the business of preserving users' autonomy when it comes to the use of their personal data. The goal, Warner says, is simple: to bring some transparency to what remains a very opaque market, and to give consumers the tools they need to make informed choices about how and when to share their personal information.
"Curbing the use of dark patterns will be foundational to increasing trust online": Senator Deb Fischer

Fischer argued that tech companies are tailoring users' online experiences in increasingly granular ways. On one hand, she says, you get a more personalized user experience and platforms are more responsive; however, it's this variability that allows companies to take that design just a step too far. Companies are constantly competing for users' attention, and this increases the motivation for a more intrusive and invasive user design. The ability of online platforms to shape the visual interfaces that billions of people view is an incredible influence. It forces us to assess the impact of design on user privacy and well-being.

Fundamentally, the DETOUR Act would prohibit large online platforms from purposely using deceptive user interfaces: dark patterns. It would provide a better accountability system for improved transparency and autonomy online. The legislation would take an important step toward restoring hidden options. It would give users a tool to get out of the maze that coaxes you to just click 'I agree'. A privacy framework that involves consent cannot function properly if it doesn't ensure that the user interface presents fair and transparent options. The DETOUR Act would enable the creation of a professional standards body which could register with the Federal Trade Commission. This would serve as a self-regulatory body to develop best practices for UI design, with the FTC as a backup.

She adds, "We need clarity for the enforcement of dark patterns that don't directly involve our wallets. We need policies that place value on user choice and personal data online. We need a stronger mechanism to protect the public interest when the goal for tech companies is to make people engage more and more. User consent remains weakened by the presence of dark patterns and unethical design.
Curbing the use of dark patterns will be foundational to increasing trust online. The DETOUR Act provides a key step in getting there."

"The DETOUR Act is calling attention to asymmetry and preventing deceptive asymmetry": Tristan Harris

Tristan says that companies are now competing not on manipulating your immediate behavior but on manipulating and predicting your future behavior. For example, Facebook has something called loyalty prediction, which allows it to sell an advertiser the ability to predict when you're going to become disloyal to a brand; it can sell that opportunity to another advertiser before, probably, you even know you're going to switch. The DETOUR Act is a huge step in the right direction because it's about calling attention to asymmetry and preventing deceptive asymmetry. We need a new relationship with this asymmetric power, grounded in a duty of care: asymmetrically powerful technologies should be in the service of the systems they are supposed to protect. He says we need to switch to a regenerative attention economy that treats attention as sacred and does not tie profit directly to user extraction.

Top questions raised by the panel and online viewers

Does A/B testing result in dark patterns?

Dark patterns are often a result of A/B testing, where a designer may try things that lead to better engagement or nudge users in a way that benefits the company. However, A/B testing isn't the problem; the problem is the intention behind how it is used. Companies and other organizations should have oversight of the experiments they are conducting, to see whether A/B testing is actually leading to some kind of concrete harm. The challenge in this space is drawing a line between A/B testing features, optimizing for engagement, and decreasing friction.

Are consumers smart enough to tackle dark patterns on their own, or do we need legislation?
It's well established that children, whose brains are still developing, are unable to discern these types of deceptive techniques, so for kids especially, these practices should be banned. For vulnerable families who are juggling all sorts of concerns around income, access to jobs, transportation, and health care, putting this on their plate as well is just unreasonable. Dark patterns are deployed for an array of opaque reasons the average user will never recognize. From a consumer perspective, identifying dark pattern techniques that platform companies have spent hundreds of thousands of dollars developing to be as opaque and as tricky as possible is an unrealistic expectation to put on consumers. This is why the DETOUR Act and this type of regulation are absolutely necessary and the only way forward.

What is it about the largest online providers that makes us want to focus on them first, or only on them? Is it their scale, or do they have more powerful dark patterns? Is it because they're harming more people, or is it politics?

Larger companies are sometimes wary of indulging in dark patterns because they run a greater risk of getting caught and facing a PR backlash. However, they do engage in manipulative practices, and that warrants a lot of attention. Moreover, targeting bigger companies is just one part of a more comprehensive privacy enforcement environment. Hitting companies that have a large number of users is also great for consumer engagement. Obviously there is a need to target more broadly, but this is a starting point.

If Facebook were to suddenly reclassify itself and its advertising business model, would you still trust it?

No. The leadership that's in charge of Facebook now cannot be trusted, especially given the organizational culture that has been building. There are change efforts going on inside Google and Facebook right now, but they are getting gridlocked.
Even if employees want to see policies changed, they still have bonus structures and employee culture to keep in mind.

We recommend going through the full hearing here. You can read more about the DETOUR Act here.

U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick its users
How social media enabled and amplified the Christchurch terrorist attack
A new study reveals how shopping websites use 'dark patterns' to deceive you into buying things you may not want


“I'm concerned about Libra's model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition

Fatema Patrawala
26 Jun 2019
7 min read
In February, Facebook made its debut in the blockchain space by acquiring Chainspace, a London-based, Gibraltar-registered blockchain venture. Chainspace was a small start-up founded by several academics from the University College London Information Security Research Group. The authors of the original Chainspace paper were Mustafa Al-Bassam, Alberto Sonnino, Shehar Bano, Dave Hrycyszyn, and George Danezis, some of the UK's leading privacy engineering researchers.

Following the acquisition, last week Facebook announced the launch of its new cryptocurrency, Libra, which is expected to go live by 2020. The Libra whitepaper has a wide array of authors, including the Chainspace co-founders Alberto Sonnino, Shehar Bano, and George Danezis. According to Wired, David Marcus, a former PayPal president and a Coinbase board member who resigned from that board last year, has been appointed by Facebook to lead the Libra project.

Libra isn't like other cryptocurrencies such as Bitcoin or Ethereum. As per the Reuters report, the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run the computers.

Mustafa Al-Bassam, one of the research co-founders of Chainspace who did not join Facebook, posted a detailed Twitter thread yesterday with his views on the new cryptocurrency. https://twitter.com/musalbas/status/1143629828551270401

On Libra's decentralized model being less censorship-resistant

Mustafa says, "I don't have any doubt that the Libra team is building Libra for the right reasons: to create an open, decentralized payment system, not to empower Facebook. However, the road to dystopia is paved with good intentions, and I'm concerned about Libra's model for decentralization." He further pointed the discussion to a user comment on GitHub which reads, "Replace 'decentralized' with 'distributed' in readme".
Mustafa explains that Libra's closed set of 100 validator nodes can be seen as more decentralized than Bitcoin, where four mining pools control more than 51% of the hashpower. According to the Block Genesis, decentralized networks are particularly prone to Sybil attacks due to their permissionless nature. Taking this into consideration, Mustafa asks whether Libra is Sybil-resistant: "I'm aware that the word 'decentralization' is overused. I'm looking at decentralization, and Sybil-resistance, as a means to achieve censorship-resistance. Specifically: what do you have to do to reverse or censor a transaction, how much does it cost, and who has that power? My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks."

He further explains: "In the banking system there is no majority of parties that can collude together to deny two banks the ability to maintain a relationship with each other - in the worst case scenario they can send physical cash to each other, which does not require a ledger. It's permissionless."

Mustafa adds a hypothetical: if Libra were the only way to transfer currency and it were less censorship-resistant, we would be worse off. He says, "With cryptocurrency systems (even decentralized ones), there is always necessarily a majority of consensus nodes (e.g. a 51% attack) that can collude together to censor or reverse transactions. So if you're going to create digital cash, this is extremely important to consider.
With Libra, censorship-resistance is even more important, as Libra could very well end up being the world's 'de facto' currency, and if the Libra network is the only way to transfer that currency, and it's less censorship-resistant, we're worse off than where we started."

On Libra's permissioned consensus node selection authority

Mustafa says that "Libra's current permissioned consensus node selection authority is derived directly from nation state-enforced (Switzerland's) organization laws, rather than independently from stakeholders holding sovereign cryptographic keys." (Source: Libra whitepaper)

What this means is that the "root API" for Libra's node selection mechanism is the Libra Association, via the Swiss Federal Constitution and the Swiss courts, rather than public-key cryptography. Mustafa also pointed out that the association members for Libra are large, $1b+, US-based companies. (Source: Libra whitepaper)

One could argue that governments can regulate the people who hold those public keys, but a key difference is that this can't be directly enforced without access to the private key. Mustafa illustrated this with an example from last year, when Iran tested how resistant global payments are to US censorship: Iran requested a 300 million Euro cash withdrawal via Germany's central bank, which was rejected under US pressure. Mustafa added, "US sanctions have been bad on ordinary people in Iran, but they can at least use cash to transact with other countries. If people wouldn't even be able to use cash in the future because Libra digital cash isn't censorship-resistant, that would be *brutal*."

On Libra's proof-of-stake based permissionless mechanism

Mustafa argues that the Libra whitepaper confuses consensus with Sybil-resistance.
In his view, Sybil-resistant node selection through permissionless mechanisms such as proof-of-stake, which selects the set of cryptographic keys that participate in consensus, is necessarily more censorship-resistant than the Association-based model. Proof-of-stake is a Sybil-resistance mechanism, not a consensus mechanism; the "longest chain rule", on the other hand, is a consensus mechanism.

He notes that Libra has outlined a proof-of-stake-based permissionless roadmap and plans to transition to it within the next five years. Mustafa feels five years will be far too late, when the Group of Seven nations (G7) are already lining up a taskforce to control Libra. Mustafa also questions the Libra whitepaper's claim that it needs to start permissioned for the next five years because permissionless, scalable, secure blockchains are an unsolved technical problem requiring the community's help to research. (Source: Libra whitepaper)

He says, "It's as if they ignored the past decade of blockchain scalability research efforts. Secure layer-one scalability is a solved research problem. Ethereum 2.0, for example, is past the research stage and is now in the implementation stage, and will handle more than Libra's 1000tps." Mustafa also points out that Chainspace was in the middle of implementing exactly this: a permissionless sharded blockchain with higher on-chain scalability than Libra's 1000tps. With Facebook's resources, this could easily have been accelerated and made a reality. There are many research-led blockchain projects that have implemented, or are implementing, scalability strategies that achieve more than Libra's 1000tps without heavily trading off security, so the "community" research on this is plentiful; it is just that Facebook is being lazy, he says.
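The censorship-resistance arithmetic behind the "51% attack" point above can be made concrete. The sketch below is illustrative only: it assumes a typical BFT quorum rule (n ≥ 3f + 1, commits need 2f + 1 votes), not Libra's actual consensus parameters; the 100-validator figure is the one discussed in the thread.

```python
def bft_censorship_coalition(n: int) -> int:
    """Smallest coalition that can censor transactions in a typical BFT
    protocol with n validators.

    Safety tolerates f faults where n >= 3f + 1, and commits require a
    quorum of 2f + 1 votes. If f + 1 validators withhold their votes, at
    most n - (f + 1) <= 2f validators remain, short of any quorum.
    """
    f = (n - 1) // 3
    return f + 1
```

With a closed set of 100 validators, `bft_censorship_coalition(100)` is 34: about a third of the validators can block transactions, whereas censoring Bitcoin requires a majority of hashpower.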
He concludes, "I find it a great shame that Facebook has decided to be anti-social and launch a permissioned system, claiming to need the community's help because scalable blockchains are an unsolved problem, instead of using their resources to build on a decade of research in this area."

People have appreciated Mustafa's detailed review of Libra. One of the tweets reads, "This was a great thread, with several acute and correct observations." https://twitter.com/ercwl/status/1143671361325490177

Another asks, "Isn't a shard (let's say a blockchain sharded into 100 shards) by its nature trading off 99% of its consensus-forming decentralization for 100x (minus overhead, so maybe 50x?) increased scalability?" Mustafa responded, "No, because consensus participants are randomly sampled into shards from the overall consensus set, so shards should be roughly uniformly secure, and in the event that a shard misbehaves, fraud and data availability proofs kick in." https://twitter.com/ercwl/status/1143673925643243522

One tweet also suggests that 1/3 of Libra validators can enforce censorship even against the will of the 2/3 majority, whereas it takes a majority of miners to censor Bitcoin; also, unlike Libra, there is no entry barrier other than capital to becoming a Bitcoin miner. https://twitter.com/TamasBlummer/status/1143766691089977346

Let us know your views on Libra and how you expect it to perform.

Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge
Facebook releases Pythia, a deep learning framework for vision and language multimodal research


A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want

Sugandha Lahoti
26 Jun 2019
6 min read
A new study by researchers from Princeton University and the University of Chicago suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns. Note: All images in the article are taken from the research paper.

What are dark patterns

Dark patterns are generally used by shopping websites as part of their user interface design choices. These dark patterns coerce, steer, or deceive users into making unintended and potentially harmful decisions that benefit an online service. Shopping websites trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. These patterns are not limited to shopping websites; they find common application on digital platforms including social media, mobile apps, and video games as well. At their most extreme, dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children.

Researchers used a web crawler to identify text-based dark patterns

The paper uses an automated approach that enables researchers to identify dark patterns at scale on the web. The researchers crawled 11K shopping websites using a web crawler built on top of OpenWPM, a web privacy measurement platform. The crawler was used to simulate a user browsing experience and identify user interface elements. The researchers used text clustering to extract recurring user interface designs from the resulting data, and then inspected the resulting clusters for instances of dark patterns.
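To give a flavor of the clustering step, here is a much-simplified sketch. It is not the paper's actual pipeline (which crawled with OpenWPM and applied proper text clustering); it merely groups crawled UI text segments by a normalized template, so that recurring designs, such as low-stock messages, fall into the same cluster for manual inspection. The sample strings are invented.

```python
import re
from collections import defaultdict

def normalize(segment: str) -> str:
    # Collapse digits and case so variants of one UI template match,
    # e.g. "Only 3 left in stock!" and "Only 12 left in stock!".
    return re.sub(r"\d+", "#", segment.lower()).strip()

def cluster_segments(segments):
    # Group text segments scraped from product pages by their template.
    clusters = defaultdict(list)
    for segment in segments:
        clusters[normalize(segment)].append(segment)
    return clusters

crawled = ["Only 3 left in stock!", "Only 12 left in stock!", "Free shipping"]
clusters = cluster_segments(crawled)
# Two clusters: both low-stock messages together, "Free shipping" alone.
```

A researcher would then eyeball each cluster and label it as a dark pattern (or not), which is essentially the manual inspection step the study describes.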
The researchers also developed a novel taxonomy of dark pattern characteristics to understand how dark patterns influence user decision-making. Based on this taxonomy, each dark pattern was classified according to whether it leads to an asymmetry of choice, is covert in its effect, is deceptive in nature, hides information from users, or restricts choice. The researchers also mapped the dark patterns in their data set to the cognitive biases they exploit; collectively, these biases describe the consumer psychology underpinnings of the identified dark patterns. They also determined that many instances of dark patterns are enabled by third-party entities, which provide shopping websites with scripts and plugins that make these patterns easy to implement.

Key stats from the research

- There are 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories.
- These 1,841 dark patterns were present on 1,267 of the 11K shopping websites (∼11.2%) in the data set.
- Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns.
- 234 instances of deceptive dark patterns were uncovered across 183 websites.
- 22 third-party entities were identified that provide shopping websites with the ability to create dark patterns on their sites.

Dark pattern categories

Sneaking

Attempting to misrepresent user actions, or delaying information that users would most likely object to once it is made available.

- Sneak into Basket: adds additional products to users' shopping carts without their consent.
- Hidden Subscription: charges users a recurring fee under the pretense of a one-time fee or a free trial.
- Hidden Costs: reveals new, additional, and often unusually high charges to users just before they are about to complete a purchase.

Urgency

Imposing a deadline on a sale or deal, thereby accelerating user decision-making and purchases.

- Countdown Timers: a dynamic indicator counting down until a deadline expires.
- Limited-time Messages: a static urgency message without an accompanying deadline.

Misdirection

Using visuals, language, or emotion to direct users toward or away from making a particular choice.

- Confirmshaming: uses language and emotion to steer users away from making a certain choice.
- Trick Questions: uses confusing language to steer users into making certain choices.
- Visual Interference: uses style and visual presentation to steer users into making certain choices over others.
- Pressured Selling: refers to defaults or high-pressure tactics that steer users into purchasing a more expensive version of a product (upselling) or related products (cross-selling).

Social proof

Influencing users' behavior by describing the experiences and behavior of other users.

- Activity Notifications: recurring attention-grabbing messages on product pages indicating the activity of other users.
- Testimonials of Uncertain Origin: customer testimonials whose origin, or how they were sourced and created, is not clearly specified.

Scarcity

Signalling that a product is likely to become unavailable, thereby increasing its desirability to users.

- Low-stock Messages: signal to users that a product is available only in limited quantities.
- High-demand Messages: signal to users that a product is in high demand, implying that it is likely to sell out soon.

Obstruction

Making it easy for the user to get into one situation but hard to get out of it. The researchers observed one type of Obstruction dark pattern: "Hard to Cancel". The Hard to Cancel dark pattern is restrictive (it limits the choices users can exercise to cancel their services).
In cases where websites do not disclose their cancellation policies upfront, Hard to Cancel also becomes information hiding (it fails to inform users that cancellation is harder than signing up).

Forced Action

Forcing the user to do something tangential in order to complete their task. The researchers observed one type of Forced Action dark pattern, "Forced Enrollment", on 6 websites.

Limitations of the research

The researchers acknowledge that their study has certain limitations:

- Only text-based dark patterns are taken into account. Work remains to be done on inherently visual patterns (e.g., a change of font size or color that emphasizes one part of the text more than another in an otherwise seemingly harmless pattern).
- The web crawl led to a fraction of Selenium crashes, which prevented the researchers from retrieving product pages or completing data collection on certain websites.
- The crawler failed to completely simulate the product purchase flow on some websites.
- The researchers only crawled product and checkout pages, missing dark patterns present in other common pages such as website homepages, product search pages, and account creation pages.

The list of dark patterns can be downloaded as a CSV file. For more details, we recommend reading the research paper.

U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick its users

How social media enabled and amplified the Christchurch terrorist attack

Can an Open Web Index break Google's stranglehold over the search engine market?

Deepfakes House Committee Hearing: Risks, Vulnerabilities and Recommendations

Vincy Davis
21 Jun 2019
16 min read
Last week, the House Intelligence Committee held a hearing to examine the public risks posed by “deepfake” videos. Deepfake refers to technology that alters audio or video, which is then passed off as true or original content. In the hearing, experts on AI and digital policy highlighted to the committee the risks deepfakes pose to national security, upcoming elections, public trust, and the mission of journalism. They also offered potential recommendations on what Congress could do to combat deepfakes and misinformation.

The chair of the committee, Adam B. Schiff, opened the hearing by stating that it is time to regulate deepfake technology, as it is enabling sinister forms of deception and disinformation by malicious actors. He added, “Advances in AI or machine learning have led to the emergence of advance digitally doctored type of media, the so-called deepfakes that enable malicious actors to foment chaos, division or crisis and have the capacity to disrupt entire campaigns including that for the Presidency.”

For a quick glance, here’s a TL;DR:

- Jack Clark believes that governments should be in the business of measuring and assessing deepfake threats by looking directly at the scientific literature and developing a base knowledge of it.
- David Doermann suggests that tools and processes which can identify fake content should be put in the hands of individuals, rather than relying completely on the government or on social media platforms to police content.
- Danielle Citron warns that the phenomenon of deepfakes will be increasingly felt by women, minorities, and people from marginalized communities.
- Clint Watts provides a list of recommendations that should be implemented to prohibit U.S. officials, elected representatives, and agencies from creating and distributing false and manipulated content.
- A unified standard should be followed by all social media platforms. They should also be pressured to impose a 10-15 second delay on all videos, so that they can decide whether or not to label a particular video.
- Regarding the 2020 Presidential election: state governments and social media companies should be ready with a response plan if a fake video surfaces to cause disruption.
- It was also recommended that the algorithms used to make deepfakes be open sourced.
- Laws should be altered, and strict penalties imposed, to discourage deepfake videos.

Being forewarned is forearmed in case of deepfake technology

Jack Clark, OpenAI Policy Director, highlighted in his testimony that he does not think A.I. is the cause of any disruption, but is actually an “accelerant to an issue which has been with us for some time.” He added that computer software aligned with A.I. technology has become significantly cheaper and more powerful due to its increased accessibility. This has led to its use in audio and video editing, which was previously very difficult. Similar technologies are being used in the production of synthetic media, and deepfakes are also being used in valuable scientific research. Clark suggests that interventions should be made to avoid misuse. He believes that “it may be possible for large-scale technology platforms to try and develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level. We can also increase funding.” He strongly believes that governments should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base knowledge. Clark concluded by saying that “being forewarned is forearmed here.”

Make deepfake detector tools readily available

David Doermann, the former Project Manager at the Defense Advanced Research Projects Agency, mentioned that the phrase “seeing is believing” is no longer true.
He stated that there is nothing fundamentally wrong or evil about the technology: like basic image and video desktop editors, deepfakes are only a tool, and there are many positive applications of generative networks just as there are negative ones. He added that, as of today, there are some solutions that can identify deepfakes reliably. However, Doermann fears that it is only a matter of time before current detection capabilities are rendered less effective, adding that “it's likely to get much worse before it gets much better.”

Doermann suggests that tools and processes which can identify such fake content should be put in the hands of individuals, rather than relying completely on the government or on social media platforms to police content. At the same time, there should also be ways to verify it, prove it, or easily report it. He also hopes that automated detection tools will be developed in the future to help with filtering and detection at the front end of the distribution pipeline. He added that “appropriate warning labels should be provided, which suggests that this is not real or not authentic, or not what it's purported to be. This would be independent of whether this is done and the decisions are made, by humans, machines or a combination.”

Groups most vulnerable to deepfake attacks

Women and minorities

Danielle Citron, a Law Professor at the University of Maryland, described deepfakes as “particularly troubling when they're provocative and destructive.” She added that, as humans, we tend to believe what our eyes and ears tell us, and to share information that confirms our biases. This is particularly true when that information is novel and negative: the more salacious it is, the more willing we are to pass it on. She also noted that deepfakes on social media networks are ad-driven. Put together, this means that the more provocative a deepfake is, the more virally it spreads. She also told the committee about an incident involving an investigative journalist in India who, after publishing a provocative article, had posters and deepfake sex videos with her face morphed into pornography circulated over the internet. Citron stated that “the economic and the social and psychological harm is profound.” Based on her work on cyber stalking, she believes that this phenomenon will be increasingly felt by women, minorities, and people from marginalized communities. She shared other examples explaining the effect of deepfakes on trades and businesses. Citron also highlighted that “We need a combination of law, markets and really societal resilience to get through this, but the law has a modest role to play.” She mentioned that although victims can sue for defamation, intentional infliction of emotional distress, and privacy torts, these procedures are quite expensive, and that criminal law offers the public very little opportunity to pursue such offenders.

National security

Clint Watts, a Senior Fellow at the Foreign Policy Research Institute, provided insight into how such technologies can affect national security. He said, “A.I. provides purveyors of disinformation to identify psychological vulnerabilities and to create modified content digital forgeries advancing false narratives against Americans and American interests.” Watts suspects that Russia, “being an enduring purveyor of disinformation is and will continue to pursue the acquisition of synthetic media capability, and employ the output against adversaries around the world.” He also added that China, being the U.S. rival, will join Russia “to get vast amounts of information stolen from the U.S. The country has already shown a propensity to employ synthetic media in broadcast journalism.
They'll likely use it as part of disinformation campaigns to discredit foreign detractors, incite fear inside western-style democracy and then, distort the reality of audiences and the audiences of America's allies.” He also mentioned that the proliferation of deepfakes can endanger the American constituency by demoralizing it. Watts suspects that U.S. diplomats and military personnel deployed overseas will be prime targets for deepfake-driven disinformation planted by adversaries.

Watts provided a list of recommendations which should be implemented to “prohibit U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content”:

- The U.S. government must be the sole purveyor of facts and truth to constituents, assuring the effective administration of democracy via productive policy debate from a shared basis of reality.
- Policy makers should work jointly with social media companies to develop standards for content and accountability.
- The U.S. government should partner with the private sector to implement digital verification designating the date, time, and physical origination of content.
- Social media companies should start labeling videos and carry those labels across all platforms, so that consumers can determine the source of the information and whether it is an authentic depiction of people and events.
- From a national security perspective, the U.S. government should maintain intelligence on adversaries' capabilities to conduct such information operations.
- The departments of defense and state should immediately develop response plans for deepfake smear campaigns and mobilizations overseas, in an attempt to mitigate harm.

Lastly, he added that public awareness of deepfakes and their signatures will help tamp down attempts to subvert U.S. democracy and incite violence.
When Schiff asked the witnesses whether it is “time to do away with the immunity that social media platforms enjoy”, Watts replied in the affirmative and listed suggestions in three particular areas:

- If social media platforms see something spiking in terms of virality, it should be put in a queue for human review, linked to fact checkers, then down-rated and kept out of news feeds, and the mainstream should be helped to understand what manipulated content is.
- Anything related to outbreaks of violence and public safety should be regulated immediately.
- Anything related to elected officials or public institutions should immediately be flagged, pulled down, and checked, and then given context.

Co-chair of the committee Devin Nunes asked Citron what kind of filters can be placed on these tech companies, as “it's not developed by partisan left wing like it is now, where most of the time, it's conservatives who get banned and not democrats”. Citron suggested that proactive filtering won't be possible, and that companies should instead react responsibly and in a bipartisan way. She added, “but rather, is this a misrepresentation in a defamatory way, right, that we would say it's a falsehood that is harmful to reputation, that's an impersonation, then we should take it down. This is the default I am imagining.”

How laws could be altered to discourage deepfake videos

Citron says that laws could be altered, as in the case of Section 230(c), which states that “No speaker or publisher -- or no online service shall be treated as a speaker or publisher of someone else's content.” This could be amended to read: “No online service that engages in reasonable content moderation practices shall be treated as a speaker or publisher of somebody else's content.” Citron believes that the absence of such a reasonableness standard encourages negligence. She added, “I've been advising Twitter and Facebook all of the time.
There is meaningful reasonable practices that are emerging and have emerged in the last ten years. We already have a guide, it's not as if this is a new issue in 2019. So we can come up with reasonable practices.”

Watts added that if an adversary from a country such as China, Iran, or Russia makes a deepfake video to attack the U.S., aggressive laws would let us trace it back and respond. Anything from “arrest and extradition, if the sanction permits” to an individual or cyber response could help discourage deepfakes.

How to slow down the spread of videos

One of the reasons these manipulated images gain traction is that their spread is almost instantaneous: they can be shared around the world, across platforms, in a few seconds. Doermann says that social media platforms must be pressured to impose a 10-15 second delay, so that it can be decided whether or not to label a particular video. He adds, “We've done it for child pornography, we've done it for human trafficking, they're serious about those things. This is another area that's a little bit more in the middle, but I think they can take the same effort in these areas to do that type of triage.” This delay would allow third parties or fact checkers to decide on the authenticity of videos and label them. Citron adds that this is where labelling a particular video can help: “I think it is incredibly important and there are times in which, that's the perfect rather than second best, and we should err on the side of inclusion and label it as synthetic.” The representative of Ohio, Brad Wenstrup, added that we could have international extradition laws that punish somebody when “something comes from some other country, maybe even a friendly country, that defames and hurts someone here”.
There should be an agreement among nations that “we'll extradite those people and they can be punished in your country for what they did to one of your citizens.” Terri Sewell, the Representative of Alabama, further probed the current state of fake video detection, to which Doermann replied that there are currently enough solutions to detect a fake video, though with a constant delay of 15-20 minutes.

Deepfakes and the 2020 Presidential elections

Watts says that he is concerned about deepfakes acting on the eve of election day 2020. Foreign adversaries may use a standard disinformation approach, “using an organic content that suits their narrative and inject it back.” This can escalate as more people create deepfakes each year. He added, “Right now I would be very worried about someone making a fake video about electoral systems being out or broken down on election day 2020.” State governments and social media companies should therefore be ready with a response plan in the wake of such an event.

Sewell then asked the witnesses for suggestions on how political parties and candidates could prepare for the possibility of deepfake content. Watts replied that the most important thing in countering fake content would be a unified standard followed by all social media companies. He added, “if you're a manipulator, domestic or international, and you're making deep fakes, you're going to go to whatever platform allows you to post anything from inauthentic accounts. they go to wherever the weak point is and it spreads throughout the system.” He believes such a system would help counter extremism, disinformation, and political smear campaigns.
Watts added that any lag in responding to such videos should be avoided, as “any sort of lag in terms of response allows that conspiracy to grow.” Citron also pointed out that all candidates should have a clear policy about deepfakes and should commit that they won't use or spread them.

Should the algorithms to make deepfakes be open sourced?

Doermann answered that the algorithms behind deepfakes absolutely have to be open sourced. Though this might help adversaries, he says, they are going to learn about the technology anyway. He believes openness is significant: “We need to get this type of stuff out there. We need to get it into the hands of users. There are companies out there that are starting to make these types of things.” He also states that people should be able to use this technology: the more we educate them and the more tools they learn, the more correct choices people can make.

On Mark Zuckerberg's deepfake video

On being asked to comment on Mark Zuckerberg's decision not to take down his deepfake video from his own platform, Facebook, Citron replied that Mark gave a perfect example of “satire and parody” by not taking the video down. She added that private companies can make these kinds of choices, as they have an incredible amount of power without any liability; “it seemed to be a conversation about the choices they make and what does that mean for society. So it was incredibly productive, I think.” Watts also said that he likes Facebook for its consistency in enforcement, and that it is always trying to learn and implement better practices. He added that he really likes that Facebook is always ready to hear “from legislatures about what falls inside those parameters.
The one thing that I really like is that they're doing is identifying inauthentic account creation and inauthentic content generation, they are enforcing it, they have increased the scale, and it is very very good in terms of how they have scaled it up, it's not perfect, but it is better.”

Read More: Zuckberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?

On the Nancy Pelosi doctored video

Schiff asked the witnesses if there is any account of how many millions of people watched the doctored video of Nancy Pelosi, and how many of them ultimately learned that it was not a real video. He asked this because, according to psychologists, people rarely let go of a negative impression once it has formed. Clark replied that “Fact checks and clarifications tend not to travel nearly as far as the initial news.” He added that this is a very general phenomenon: “If you care, you care about clarifications and fact checks. but if you're just enjoying media, you're enjoying media. You enjoy the experience of the media and the absolute minority doesn't care whether it's true.”

Schiff also recalled how in 2016 “some foreign actors, particularly Russia had mimicked black lives matter to push out content to racially divide people.” Such videos gave the impression of police violence against people of colour, and adversaries “certainly push out videos that are enormously jarring and disruptive.”

All the information revealed in the hearing was described as “scary and worrying” by one of the representatives. The hearing was ended by Schiff, the chair of the committee, after he thanked all the witnesses for their testimonies and recommendations. For more details, head over to the full hearing on deepfake videos by the House Intelligence Committee.

Worried about deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts

Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes

Machine generated videos like Deepfakes – Trick or Treat?

Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers

Sugandha Lahoti
20 Jun 2019
11 min read
Yesterday at Google's annual shareholder meeting, Alphabet, Google's parent company, faced 13 independent stockholder proposals, ranging from sexual harassment and diversity to the company's policies regarding China and forced arbitration. There was also a proposal to limit Alphabet's power by breaking up the company. However, as expected, every stockholder proposal was voted down after a few minutes of ceremonial voting, despite protesting workers. The company's co-founders, Larry Page and Sergey Brin, were both no-shows, and Google CEO Sundar Pichai didn't answer any questions.

Google has seen a massive escalation in employee activism since 2018. The company faced backlash from its employees over a censored version of its search engine for China, forced arbitration policies, and its mishandling of sexual misconduct. Google Walkout for Real Change organizers Claire Stapleton and Meredith Whittaker were also retaliated against by their supervisors for speaking out. According to widespread media reports, the US Department of Justice is readying an investigation into Google. It has been reported that the probe would examine whether the tech giant broke antitrust law in the operation of its online and advertisement businesses.

Source: Google's annual meeting MOM

Equal Shareholder Voting

Shareholders requested that Alphabet's Board initiate and adopt a recapitalization plan for all outstanding stock to have one vote per share. Currently, the company has a multi-class voting structure, where each share of Class A common stock has one vote and each share of Class B common stock has 10 votes. As a result, Page and Brin currently control over 51% of the company's total voting power while owning less than 13% of stock. This raises concerns that the interests of public shareholders may be subordinated to those of the co-founders.
However, the board of directors rejected the proposal, stating that the company's capital and governance structure has provided it significant stability and is therefore in the best interests of the stockholders.

Commit to not use Inequitable Employment Practices

This proposal urged Alphabet to commit to not using any of the “Inequitable Employment Practices”, to encourage focus on human capital management and improve accountability. Inequitable Employment Practices are mandatory arbitration of employment-related claims, non-compete agreements with employees, agreements with other companies not to recruit one another's employees, and involuntary non-disclosure agreements (“NDAs”) that employees are required to sign in connection with the settlement of claims that any Alphabet employee engaged in unlawful discrimination or harassment. Again, Google rejected this proposal, stating that its updated code of conduct already covers these requirements and that it does not believe implementing this proposal would provide any incremental value or benefit to its stockholders or employees.

Establishment of a Societal Risk Oversight Committee

Stockholders asked Alphabet to establish a Societal Risk Oversight Committee (“the Committee”) of the Board of Directors, composed of independent directors with relevant experience. The Committee should provide an ongoing review of corporate policies and procedures, above and beyond legal and regulatory matters, to assess the potential societal consequences of the Company's products and services, and should offer guidance on strategic decisions. As with the other Board committees, a formal charter for the Committee and a summary of its functions should be made publicly available. This proposal was also rejected.
Alphabet said, “The current structure of our Board of Directors and its committees (Audit, Leadership Development and Compensation, and Nominating and Corporate Governance) already covers these issues.”

Report on Sexual Harassment Risk Management

Shareholders requested that management review its policies related to sexual harassment, assess whether the company needs to adopt and implement additional policies, and report its findings, omitting proprietary information and prepared at a reasonable expense, by December 31, 2019. However, this too was rejected, on the grounds that Google already has robust conduct policies in place, that there is ongoing improvement and reporting in this area, and that Google has a concrete plan of action to do more.

Majority Vote for the Election of Directors

This proposal demands that director nominees be elected by the affirmative vote of the majority of votes cast at an annual meeting of shareholders, with a plurality vote standard retained for contested director elections, i.e., when the number of director nominees exceeds the number of board seats. It also demands that a director who receives less than such a majority vote be removed from the board immediately, or as soon as a replacement director can be qualified on an expedited basis. This proposal was rejected on the grounds that “Our Board of Directors believes that current nominating and voting procedures for election to our Board of Directors, as opposed to a mandated majority voting standard, provide the board the flexibility to appropriately respond to stockholder interests without the risk of potential corporate governance complications arising from failed elections.”

Report on Gender Pay

Stockholders demanded that Google report on the company's global median gender pay gap, including associated policy, reputational, competitive, and operational risks, and risks related to recruiting and retaining female talent.
The report should be prepared at a reasonable cost, omitting proprietary information, litigation strategy, and legal compliance information. Google says it already releases an annual report on its pay equity analyses, “ensuring they pay fairly and equitably.” It does not think an additional report, as detailed in the proposal above, would enhance Alphabet's existing commitment to fostering a fair and inclusive culture.

Strategic Alternatives

The proposal outlines the need for an orderly process to retain advisors to study strategic alternatives. The proposers believe Alphabet may be too large and complex to be managed effectively, and would prefer a voluntary strategic reduction in the size of the company over asset sales compelled by regulators. They also want a committee of independent directors to evaluate those alternatives in the exercise of their fiduciary responsibilities to maximize shareholder value. This proposal was also rejected, on the grounds that “Our Board of Directors and management do not favor a given size of the company or focus on any strategy based on ideological grounds. Instead, we develop a strategy based on the company's customers, partners, users and the communities we serve, and focus on strategies that maximize long-term sustainable stockholder value.”

Nomination of an Employee Representative Director

This proposal asked that an Employee Representative Director be nominated for election to the Board by shareholders at Alphabet's 2020 annual meeting of shareholders. Stockholders say that employee representation on Alphabet's Board would add knowledge and insight on issues critical to the success of the Company beyond that currently present on the Board, and may result in more informed decision making.
Alphabet quoted the “Consideration of Director Nominees” section of its proxy statement, which states that the Nominating and Corporate Governance Committee of Alphabet’s Board of Directors looks for several critical qualities in screening and evaluating potential director candidates, and that only the best and most qualified candidates are elected to the Board of Directors. Accordingly, this proposal was also rejected.

Simple Majority Vote

The shareholders call for each voting requirement greater than a simple majority to be eliminated and replaced by a requirement for a majority of the votes cast for and against applicable proposals, or a simple majority in compliance with applicable laws. Currently, the voice of regular shareholders is diminished because certain insider shares carry ten times as many votes per share as regular shares, and shareholders have no right to act by written consent. Google’s stated reason for rejection was that more stringent voting requirements, in certain limited circumstances, are appropriate and in the best interests of Google’s stockholders and the company.

Integrating Sustainability Metrics into Performance Measures

Shareholders requested the Board Compensation Committee to prepare a report assessing the feasibility of integrating sustainability metrics into the performance measures that apply to senior executives under the company’s compensation plans or arrangements. They state that Alphabet remains predominantly white, male, and occupationally segregated: among Alphabet’s top 290 managers in 2017, just over one-quarter were women and only 17 were underrepresented people of color. Despite this evidence that Alphabet is not inclusive, the proposal was rejected; Alphabet said that the company already supports corporate sustainability, including environmental, social, and diversity considerations.
Google Search in China

Google employees, as well as human rights organizations, have called on Google to end work on Dragonfly. Employees have quit to avoid working on products that enable censorship; 1,400 current employees have signed a letter protesting Dragonfly, saying: “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.” Some employees have threatened to strike, and Dragonfly may also be inconsistent with Google’s AI Principles. The proposal urged Google to publish a Human Rights Impact Assessment by no later than October 30, 2019, examining the actual and potential impacts of a censored Google search in China. Google rejected this proposal, stating that before launching any new search product it would conduct proper human rights due diligence and confer with Global Network Initiative (GNI) partners and other key stakeholders, and that if it ever considers re-engaging this work, it will do so transparently, engaging and consulting widely.

Clawback Policy

Alphabet currently does not disclose an incentive compensation clawback policy in its proxy statement. The proposal urges Alphabet’s Leadership Development and Compensation Committee to adopt a clawback policy under which the committee would review the incentive compensation paid, granted, or awarded to a senior executive in cases of misconduct resulting in a material violation of law or of Alphabet’s policy that causes significant financial or reputational harm to Alphabet. Alphabet rejected this proposal based on the following claims: the company has announced significant revisions to its workplace culture policies; Google has ended forced arbitration for employment claims; and any violation of its Code of Conduct or other policies may result in disciplinary action, including termination of employment and forfeiture of unvested equity awards.
Report on Content Governance

Stockholders are concerned that Alphabet’s Google is failing to effectively address content governance concerns, posing risks to shareholder value. They request Alphabet to issue a report to shareholders assessing the risks posed by content management controversies related to election interference, freedom of expression, and the spread of hate speech. Google said that it has already done significant work in the areas of combatting violent or extremist content, misinformation, and election interference, has continued to keep the public informed about its efforts, and thus does not believe that implementing this proposal would benefit its stockholders.

Read the full report here.

Protestors showed disappointment

As the annual meeting was conducted, thousands of activists protested across Google offices. A UK-based human rights organization called SumOfUs teamed up with Students for a Free Tibet to propose a breakup of Google. They organized protests outside 12 Google offices around the world to coincide with the shareholder meeting, including in San Francisco, Stockholm, and Mumbai.

https://twitter.com/SumOfUs/status/1141363780909174784

Alphabet Board Chairman John Hennessy opened the meeting by reflecting on Google's mission to provide information to the world. "Of course this comes with a deep and growing responsibility to ensure the technology we create benefits society as a whole," he said. "We are committed to supporting our users, our employees and our shareholders by always acting responsibly, inclusively and fairly." In the Q&A session that followed the meeting, protestors demanded to know why Page wasn't in attendance. "It's a glaring omission," one of the protestors said. "I think that's disgraceful." Hennessy responded, "Unfortunately, Larry wasn't able to be here," but noted that Page has been at every board meeting. The stockholders pushed back, saying the annual meeting is the only opportunity for investors to address him.
Toward the end of the meeting, protestors including Google employees, low-paid contract workers, and local community members were shouting outside: “We shouldn’t have to be here protesting. We should have been included.” One sign from the protest, listing the first names of the members of Alphabet’s board, read, “Sundar, Larry, Sergey, Ruth, Kent — You don’t speak for us!”

https://twitter.com/GoogleWalkout/status/1141454091455008769

https://twitter.com/LauraEWeiss16/status/1141400193390141440

This meeting also marked the official departure of former Google CEO Eric Schmidt and former Google Cloud chief Diane Greene from Alphabet's board; earlier this year, they had said they wouldn't be seeking re-election.

Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Google employees lay down actionable demands after staging a sit-in to protest retaliation
Guest Contributor
19 Jun 2019
6 min read

Facebook fails to block ECJ data security case from proceeding

This July, the European Court of Justice (ECJ) in Luxembourg will hear a case to answer whether the mechanisms used for EU-US data transfers, Privacy Shield and Standard Contractual Clauses, provide adequate protection of EU citizens' personal information in the face of American government surveillance. The ECJ set the hearing after the Supreme Court of Ireland, the country where Facebook's international headquarters is located, decided on Friday, May 31, 2019 to dismiss an appeal by Facebook to block the data security case from progressing to the ECJ. The Austrian Supreme Court has also recently rejected Facebook's bid to stop a similar case. If Europe's Court of Justice rules against the current legal arrangements, it would majorly impact thousands of companies that make millions of data transfers every day. Potentially affected operations include human resources databases, storage of internet browsing histories, and credit card processing.

Background on this case

The case started with the Austrian privacy lawyer and campaigner Max Schrems. In 2013, Schrems made a complaint over concerns that US surveillance programs like the PRISM system were accessing the data of European Facebook users, as whistleblower Edward Snowden had described. His concerns also covered Facebook's use of a separate data transfer mechanism, Standard Contractual Clauses (SCCs). Around the time Snowden disclosed the US government's mass surveillance programs, Schrems also challenged the legality of the prior EU-US data transfer arrangement, Safe Harbor, eventually bringing it down. After Schrems argued that the transfer of his data by Facebook to the US infringed his rights as an EU citizen, Ireland's High Court ruled, in 2017, that the US government partook in "mass indiscriminate processing of data" and deferred the concerns to the European Court of Justice.
Then, in October of last year, the High Court referred this case to the ECJ based on the Data Protection Commissioner's "well-founded" concerns about whether US law provides adequate protection for EU citizens' data privacy rights. Underlying all of this is the question of compatibility between US law, which prioritizes national security, and EU law, which aims for personal privacy. Whistleblowers like Edward Snowden played a role in the lead-up to this case, and whistleblower attorneys and paraprofessionals continue working to expose fraud against the government through the False Claims Act (FCA).

Why Facebook appealed the case

Although Irish law doesn't provide for an appeal against CJEU referrals, Facebook sought a stay and appealed the decision anyway, aiming to keep the case from progressing to the ECJ. The court denied the stay but granted leave to appeal last year. Keep in mind that Facebook was already under heavy scrutiny after its part in the Cambridge Analytica data scandal, in which up to 87 million users faced having their data compromised. One reason Facebook gave for wanting to block the case was that the High Court had failed to take account of the 'Privacy Shield' decision, under which the European Commission had approved the use of certain EU-US data transfer channels. Another main issue was whether Facebook actually had the legal right to appeal a referral to the ECJ. Privacy Shield is also being challenged by French digital rights groups, who claim it undermines fundamental EU rights; that case will be heard by the General Court of the EU in July.

Why the appeal was dismissed

The five-judge Supreme Court, headed by Chief Justice Frank Clarke, decided it cannot entertain an appeal over the referral decision itself.
In addition, he said Facebook’s criticisms related to the “proper characterization” of underlying facts rather than the facts themselves. If there had been any finding of fact not sustainable on the evidence before the High Court per Irish procedural law, he would have overturned it, but no such matter had been established on this appeal, he ruled.

"Joint control" and its possible impact on the case

In June 2018, after a Facebook fan page was found to have allowed visitor data to be collected by Facebook via a cookie on the fan page, without informing visitors, the Federal Administrative Court of Germany referred the case to the ECJ. This resulted in the ECJ deciding to deem responsibility for the processing of visitor data to be shared jointly between social media networks and page administrators. The ECJ's ruling in this case has consequences not only for Facebook pages but for other situations where more than one company or administrator plays an active role in data processing. The concept of “joint control” is now on the table, and further decisions by authorities and courts in this area are likely.

What's next for data security

Currently, Facebook also faces questioning by Ireland's Data Protection Commission over numerous potential infringements of the strict European privacy rules that the new General Data Protection Regulation (GDPR) outlines. Facebook, however, has already stated it will take the necessary steps to ensure site operators can comply with the GDPR. There have even been pleas for global data laws. A common misconception is that only big organizations, governments, and businesses are at risk of data security breaches, but this is simply not true. Data security is important for everyone, now more than ever.
Your computer, tablet, and mobile devices could be targeted by attackers for sensitive information such as credit card details, banking details, and passwords, by way of phishing attacks, malware attacks, ransomware attacks, man-in-the-middle attacks, and more. Bringing continual awareness to these US and global data security issues will therefore help stricter laws get put in place.

Kayla Matthews writes about big data, cybersecurity and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed?
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Vincy Davis
14 Jun 2019
10 min read

Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan to the UK Parliamentary Committee

Last week, the Digital, Culture, Media and Sport Sub-Committee held its hearing on disinformation with Aaron Greenspan as the witness. Aaron is the founder, president, and CEO of Think Computer Corporation, an IT consulting service, and the author of the book ‘Authoritas: One Student's Harvard Admissions and the Founding of the Facebook Era’. Aaron, who claims to have had the original idea for Facebook, has been a long-standing critic of the social network. In January this year, he published a 75-page report in which he states that fake accounts made up more than half of Facebook's 2.2 billion users. He testified to the same in front of the UK Parliamentary Committee, mincing no words when he said Mark Zuckerberg is a liar, a fraudster, and unfit to be the CEO of Facebook.

Fake accounts

Facebook differentiates between duplicate accounts and fake accounts, but Aaron considers that “any account which is not in your name and you have a second account for whatever purpose, that account is ultimately a fake account”. He noticed the issue of fake accounts on Facebook last year, after which he did his own enquiry and found some “alarming” numbers. At the end of 2017, Facebook claimed that “only 1% of their accounts are fake”. Two weeks ago, by their own definition, they said that the number of “fake accounts has increased to 5%.” This means that in a span of two years, their estimate has increased fivefold. The second issue Aaron highlights is that it is “extremely unclear” how they arrived at that estimate. Two years ago, under public pressure, Facebook launched a ‘Transparency portal’.
The transparency portal aims at publishing “regular reports to give our community visibility into how we enforce policies, respond to data requests and protect intellectual property, while monitoring dynamics that limit access to Facebook products.” Aaron states that he found it “very difficult to reconcile the SEC filings of Facebook with the numbers from the transparency portal”: the number of fake accounts published on the transparency portal does not match the SEC filings. This is why he decided to write his own report and find out whether the numbers from the “two sources of Facebook aligned”, concluding that they “don’t”. Instead, Aaron has arrived at a conditional conclusion that “around 30% of accounts on Facebook are fake”. He claims that Facebook always minimizes its numbers to make the problem look smaller than it is, adding, “Based on my total use of the platform, the historical trend starting in 2006, when it was made public, up until the present, it seemed like it was safe that 50% are fake and I think this could actually be higher.” Some weeks ago, Facebook “finally updated their transparency portal and announced that in the fourth quarter of 2018 they disabled 1.2 billion fake accounts. And in the first quarter of 2019, the numbers counted to 2.2 billion fake accounts, which is an exponential growth curve in fake accounts, according to their own numbers, which has not been audited by any respective body”. If all the numbers are added, “by a conservative guess, it can be said that there are 10 billion fake accounts, for a platform which has 2.2 billion active users”. While Facebook “says that fake accounts don’t matter very much and within their undisclosed methodology, they are doing a good job”, Aaron believes that transparency around fake accounts is the number one problem for Facebook to address.
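Greenspan's arithmetic can be sanity-checked directly from the figures quoted above. The sketch below uses only numbers cited in the testimony (nothing is measured independently), and shows why he argues the disabled-account figures dwarf the active user base:

```python
# Back-of-the-envelope check of Greenspan's fake-account arithmetic.
# All figures are the ones quoted in the testimony, in billions of accounts.
disabled_q4_2018 = 1.2   # fake accounts Facebook reported disabling in Q4 2018
disabled_q1_2019 = 2.2   # fake accounts reported disabled in Q1 2019
active_users = 2.2       # Facebook's reported active user base

# In just two quarters, disabled fake accounts already exceed the active user base.
disabled_recent = disabled_q4_2018 + disabled_q1_2019
assert disabled_recent > active_users

# Facebook's own fake-account prevalence estimate went from 1% to 5% in two years,
# i.e. the estimate itself grew fivefold.
assert round(0.05 / 0.01) == 5
```

Note that the 10-billion cumulative figure is Greenspan's own extrapolation over Facebook's full history since 2006; it cannot be reproduced from these two quarters alone.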
FB is ‘unwilling’ rather than ‘unable’ to tackle fake accounts

When asked whether he thinks Facebook wants to tackle the problem of fake accounts and other issues, Aaron answered “No”. He believes very strongly that Mark has no clear intention of complying with the law, as he will not appear in front of parliament in the UK, in Canada, or anywhere serious questions will be aimed at him, because “he has no genuine answers in many cases. So I have no faith in Facebook, I don't think it can be trusted and don't think it should be trusted and I would suggest independent analysis.”

The illusion of behavioural advertising effectiveness

Aaron claims that “behavioural advertising produces 4% benefit of revenue to advertisers”. So while ad exchanges like Google or Facebook “might charge 59% more on average for targeted ads, or up to 4,900% more” in some cases, advertisers keep buying because they believe they benefit. He also described Facebook as a "black box", claiming advertisers are "in the dark" about how effective their campaigns actually are on Facebook and whether they actually reach real users. Aaron adds, “I have come to the conclusion that Facebook is not in anybody’s control. The company has lost its capability to control its own platform. And I don’t think they can truly regain that ability.” He thus likened the social network to Chernobyl, the catastrophic 1986 nuclear disaster. In that situation, there was a technology which was hyped and “expected to be transformative and make some other problems go away”. Aaron says that in 2004, Mark had described the Facebook system as “something that would involve the problem of reaching critical mass, a nuclear power reference”.
Aaron believes that by designing Facebook the way it is now, “Mark has effectively removed the control from the reactor core”, and the result is an “enormous uninhabitable zone in the internet, which is polluted with disinformation and falsity, much like radiation”, that is nearly impossible to reverse.

History of Facebook

Aaron, who has always claimed that he is the original creator of Facebook, says that “My need to create ‘Universal Facebook’ was based on Harvard’s structure as an organization, while Mark wanted to build something cool.” He also states that “certainly neither of us had thought of this to be a global encompassing system”. Aaron says that if he had known Mark was planning “something like a huge global system”, he would have made it clear to Mark that “this could end up being a privacy nightmare”. He adds that “In the early days, Facebook lost control of the platform and it will never get it back”, and “unfortunately, the Media has been a significant member in propelling Facebook since the last 10-12 years.”

Facebook is not growing

Aaron alleges that Mark has been lying to investors and the global community about Facebook’s growth. He says that every indicator available to an outside observer suggests that usage of Facebook is falling drastically as users grow concerned about their privacy, and yet Facebook publicizes growth. In reality, Facebook is growing in countries like India, the Philippines, Vietnam, and Indonesia, which are the same countries about which Facebook’s own disclaimer says “we have more fake accounts coming from these countries than anyone else.” In other words, Facebook is growing precisely in countries with a known fake-account problem, which is more problematic than in the rest of the world. Aaron alleges that this misrepresentation of facts to shareholders amounts to “fraud”.
On respecting privacy and personal data

Aaron states that Facebook's old lie was that “openness is a universal good”, while its new lie is that “encryption is the same as privacy”, which is not true, “and that privacy is the universal good.” So Aaron believes that Mark is now contradicting his earlier beliefs: the “previous model was not working for him anymore so he made a new model”, and this will continue with every next phase of Facebook. Aaron also adds that “encryption comes with a lot of pitfalls.” On the question of whether Zuckerberg respects personal data, Aaron claimed that Mark does not believe in the concept of personal data and has committed securities fraud on a number of occasions, in an incredibly blatant manner, stating that “the SEC has done nothing about it because they are afraid of targeting a billionaire”. He also pointed out that Mark is not the only executive who lies to stockholders, claiming that other tech leaders get away with it too: for example, “Elon Musk does it.” When asked whether there were any warning signs of the Cambridge Analytica scandal prior to 2016, Aaron said, “This wasn’t so much a breach as it was a designed behaviour, and that design was made so on Mark’s orders”. Aaron also recalled that in 2007, when he was working with Ed Baker, now Mark Zuckerberg’s colleague, Baker was planning to break the law with a technique allegedly designed to steal customer data: “When I worked with Ed, he suggested for the software that we were building that we should ask users for access to their address book. And regardless of whether the answer was yes or no, we should take that data anyway, and use that to send emails to other potential users. At that point, I quit.” Eventually, Ed joined Facebook and is now part of its growth team.
Antitrust law debate

If antitrust action were taken against Facebook, it would result in WhatsApp and Instagram being separated from the mother platform. However, Mark would still be responsible for the data of the roughly 12 billion accounts (active plus fake, by Aaron's estimate) on Facebook itself. According to Aaron, this is a huge problem, and it is why he believes that antitrust action against Facebook “is not going to be effective in the long run”; other major measures need to be undertaken to make Facebook behave responsibly.

How do we solve a problem like Facebook? Aaron has some ideas

Aaron believes that regulating an entity as large and complex as Facebook requires technical knowledge, and that “by and large US regulators lack the technical knowledge to effectively enforce the laws that are already on the book.” Aaron proposes a number of ways to regulate Facebook, the most important of which, he believes, is “to remove Mark from the C.E.O. position.” He considers Mark incapable of being a responsible CEO of Facebook. Aaron's recommendation comes in the same week that around 68% of independent investors wanted the company to have an independent chairman. Despite the revolt, the proposal was not passed, as Mark owns up to 75% of Class B stock, giving him almost 60% of the voting power at Facebook; he and his colleagues voted down the independent chairman proposal easily. Next, Aaron suggested regulating Facebook along the lines of a government-regulated bank, proposing KYC requirements for all social media at this point. He adds, “I think anonymous speech should not be banned, as it plays an important role, but if there’s a problem in any case, then the anonymous person should be held to account.” Since “transparency around fake accounts is the number one problem faced by Facebook”, he proposes that a ‘social media tax’ be levied on all its users.
He believes that such a tax would provide revenue to fund investigative journalism and, because payment requires some kind of authentication, could play a major role in identifying fake accounts, making “the entire process manageable for governments.” Check out the full hearing on the Parliament TV website.

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Facebook argues it didn’t violate users’ privacy rights and thinks there’s no expectation of privacy because there is no privacy on social media
Experts present most pressing issues facing global lawmakers on citizens’ privacy, democracy and rights to freedom of speech
Bhagyashree R
13 Jun 2019
5 min read

Austrian Supreme Court rejects Facebook’s bid to stop a GDPR-violation lawsuit against it by privacy activist, Max Schrems

On Tuesday, the Austrian Supreme Court rejected Facebook's appeal to block a lawsuit against it for not conforming to Europe's General Data Protection Regulation (GDPR). The decision will also have an effect in other EU member states that give “special status to industry sectors.”

https://twitter.com/maxschrems/status/1138703007594496000?s=19

The lawsuit was filed by the Austrian lawyer and data privacy activist Max Schrems. In the lawsuit, he accuses Facebook of using illegal privacy policies, as it forces users to consent to the processing of their data in return for using the service; the GDPR does not allow forced consent as a valid legal basis for processing user data. Schrems said in a statement, “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the ‘agree’ button–that’s not a free choice; it more reminds of a North Korean election process. Many users do not know yet that this annoying way of pushing people to consent is actually forbidden under GDPR in most cases.” Facebook has been trying to block this lawsuit by questioning whether GDPR-based cases fall under the jurisdiction of the courts at all. According to Facebook's appeal, such lawsuits should be handled by data protection authorities, in this case the Irish Data Protection Commissioner (DPC). Dismissing Facebook's argument, this landmark decision holds that complaints made under Article 79 of the GDPR can be reviewed both by judges and by data protection authorities. The verdict comes as a relief for Schrems, who had to wait almost five years even to get this lawsuit to trial because of Facebook's continuous attempts to block it. “I am very pleased that we were able to clarify this fundamental issue. We are hoping for a speedy procedure now that the case has been pending for a good 5 years," Schrems said in a press release.
He further added, “If we win even part of the case, Facebook would have to adapt its business model considerably. We are very confident that we will succeed on the substance too now. Of course, they wanted to prevent such a case by all means and blocked it for five years.“ Previously, the Vienna Regional Court had ruled in Facebook's favor, declaring that it did not have jurisdiction and that Facebook could only be sued in Ireland, where its European headquarters are. Schrems believes that verdict reflected “a tendency that civil judges are not keen to have (complex) GDPR cases on their table.” Now, both the Appellate Court and the Austrian Supreme Court have agreed that anyone can file a lawsuit for GDPR violations. Schrems' original idea was to mount a “class action”-style suit against Facebook by allowing any Facebook user to join the case, but the court did not allow that, and Schrems was limited to bringing only a model case. This is Schrems' second victory this year in his fight against Facebook: last month, the Irish Supreme Court dismissed Facebook's attempt to stop the referral of a privacy case regarding the transfer of EU citizens' data to the United States. That case is now scheduled to be heard at the European Court of Justice (ECJ) in July.

Schrems' eight-year-long battle against Facebook

Schrems' fight against Facebook started well before most of us realized the severity of tech companies harvesting our personal data. Back in 2011, Schrems' professor at Santa Clara University invited Facebook's privacy lawyer Ed Palmieri to speak to his class. Schrems was surprised by the lawyer's lack of awareness of data protection laws in Europe, and decided to write his thesis about Facebook's misunderstanding of EU privacy laws. As part of the research, he requested his personal data from Facebook and found it had his entire user history.
He went on to make 22 complaints to the Irish Data Protection Commission, accusing Facebook of breaking European data protection laws. His efforts finally showed results when, in 2015, the European Court of Justice struck down the EU-US Safe Harbor Principles. As part of his fight for global privacy rights, Schrems also co-founded the European non-profit noyb (None of Your Business), which aims to "make privacy real”. The organization works to make privacy enforcement more effective, holds companies accountable when they fail to follow Europe's privacy laws, and runs media initiatives to support the GDPR. Things haven't been going well for Facebook: along with losing these cases in the EU, a revelation yesterday by the WSJ surfaced several emails that indicate Mark Zuckerberg’s knowledge of potentially problematic privacy practices at the company. You can read the entire press release on noyb's official website.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed?
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Sugandha Lahoti
12 Jun 2019
8 min read

Highlights from Mary Meeker’s 2019 Internet trends report

At Recode by Vox’s 2019 Code Conference on Tuesday, Bond partner Mary Meeker gave her presentation onstage, covering the internet's latest trends. Meeker first started presenting these reports in 1995, underlining the most important statistics and technology trends on the internet. Last September, Meeker quit Kleiner Perkins to start her own firm, Bond; she is popularly known as the Queen of the Internet. Mary Meeker’s 2019 Internet Trends report highlights that the internet is continuing to grow, slowly, as more users come online, especially on mobile devices. She also covers increased internet ad spending, data growth, the rise of freemium subscription business models, interactive gaming, the on-demand economy, and more.

https://youtu.be/G_dwZB5h56E

The internet trends highlighted by Meeker include:

Internet Users
E-commerce and advertising
Internet Usage
Freemium business models
Data growth
Jobs and Work
Online Education
Immigration and Healthcare

Internet Users

More than 50% of the world’s population now has access to the internet. There are 3.8 billion internet users in the world, with Asia-Pacific leading in both users and potential. China is the largest market, with 21% of total internet users, and India is at 12%. However, growth slowed to 6% in 2018 versus 7% in 2017, because so many people have come online that new users are harder to come by. New smartphone unit shipments actually declined in 2018. Among the global internet market cap leaders, the U.S. is stable at 18 of the top 30 and China is stable at 7 of the top 30; these are the two countries where internet innovation is at an especially high level. Revenue growth for the internet market cap leaders continues to slow: 11 percent year-on-year in Q1 versus 13 percent in Q4.

Internet usage

Internet usage saw solid growth, driven by investment in innovation. Digital media usage in the U.S.
is accelerating up 7% versus 5% growth in 2017. The average US adult spends 6.3 hours each day with digital media, over half of which is spent on their mobiles. Wearables had 52 million users which doubled in four years. Roughly 70 million people globally listen to podcasts in the US, a figure that’s doubled in about four years. Outside the US, there's especially high innovation in data-driven and direct fulfillment that's growing very rapidly in China. Innovation outside the US is also especially strong in financial services. Images are also becoming an increasingly relevant way to communicate. More than 50% of the tweets of impressions today are images, video or other forms of media. Interactive gaming innovation is rising across platforms as interactive games like Fortnite become the new social media for certain people. It is accelerating with 2.4 billion users up, 6 percent year-on-year in 2018. On the flip side Almost 26% of adults are constantly online versus 21% three years ago. That number jumped to 39% for 18 to 29 year-olds surveyed. However, digital media users are taking action to reduce their usage and businesses are also taking actions to help users monitor their usage. Social media usage has decelerated up 1% in 2018 versus 6% in 2017. Privacy concerns are high but they're moderating. Regulators and businesses are improving consumer privacy control. In digital media encrypted messaging and traffic are rising rapidly. In Q1, 87 percent of global web traffic was encrypted, up from 53 percent three years ago. Another usage concern is problematic content. Problematic content on the Internet can be less filtered and more amplified. Images and streaming can be more powerful than text. Algorithms can amplify users on patterns  and social media can amplify trending topics. Bad actors can amplify ideologies, unintended bad actors can amplify misinformation and extreme views can amplify polarization. 
However, internet platforms are driving efforts to reduce problematic content, as are consumers and businesses. 88% of people in the U.S. believe the internet has been mostly good for them, and 70% believe it has been mostly good for society. Cyber attacks have continued to rise; these include state-sponsored attacks, large-scale data provider attacks, and monetary extortion attacks.

E-commerce and online advertising

E-commerce is now 15 percent of retail sales. Its growth has slowed (up 12.4 percent in Q1 compared with a year earlier) but still towers over growth in regular retail, which was just 2 percent in Q1. In online advertising, comparing the amount of media time spent against the amount of advertising dollars spent, mobile hit equilibrium in 2018, while desktop hit that equilibrium point in 2015. Internet ad spending accelerated a little in 2018, up 22 percent on an annual basis. Most of the spending is still on Google and Facebook, but companies like Amazon and Twitter are getting a growing share. Some 62 percent of all digital display ad buying is for programmatic ads, a share that will continue to grow. Average ad revenue growth for the leading internet companies has been decelerating, at 20 percent in Q1. Google and Facebook still account for the majority of online ad revenue, but the growth of US advertising platforms like Amazon, Twitter, Snapchat, and Pinterest is outstripping the big players: Google's ad revenue grew 1.4 times over the past nine quarters and Facebook's grew 1.9 times, while the combined group of new players grew 2.6 times. Customer acquisition costs (the marketing spending necessary to attract each new customer) are going up. That is unsustainable, because in some cases they surpass the long-term revenue those customers will bring. Meeker suggests cheaper ways to acquire customers, like free trials and unpaid tiers.
Freemium business models

Freemium business models are growing and scaling. A freemium business pairs a free user experience, which enables more usage, engagement, social sharing, and network effects, with a premium user experience, which drives monetization and product innovation. The freemium business model started in gaming and is now evolving and emerging in consumer and enterprise markets. One important factor in this growth is cloud deployment revenue, which grew about 58% year-over-year. Another enabler of freemium subscription business models is efficient digital payments, which account for more than 50% of day-to-day transactions around the world.

Data growth

Internet trends indicate that a number of "data plumbers" are helping companies collect data, manage connections, and optimize data. In a survey of retail customers, 91% preferred brands that provided personalized offers and recommendations, 83% were willing to passively share data in exchange for personalized services, and 74% were willing to actively share data in exchange for personalized experiences. Data volume and utilization are also evolving rapidly: enterprise data surpassed consumer data in 2018, and cloud is overtaking both. More data is now stored in the cloud than on private enterprise servers or consumer devices.

Jobs and Work

Strong economic indicators and internet-enabled services are supporting jobs and work. Looking at global GDP, China, the US, and India are rising, but Europe is falling. Cross-border trade is at 29% of global GDP and has been growing for many years. Relative unemployment concerns are very high outside the US and low within it. The consumer confidence index is high and rising. Unemployment is at a 19-year low, but job openings are at an all-time high and wages are rising. On-demand work is creating internet-enabled opportunities and efficiencies; there are 7 million on-demand workers, up 22 percent year-on-year.
Remote work is also creating internet-enabled work opportunities and efficiencies: 5 percent of Americans now work remotely, versus 3 percent in 2000.

Online education

Education costs and student debt are rising in the US, whereas post-secondary education enrollment is slowing. Online education enrollment is high across a diverse base of universities: public, private for-profit, and private not-for-profit. Top offline institutions are ramping up their online offerings at a very rapid rate, most recently the University of Pennsylvania, University of London, University of Michigan, and CU Boulder. Google's creation of certificates for in-demand jobs, done in collaboration with Coursera, is growing rapidly.

Immigration and Healthcare

In the U.S., 60% of the most highly valued tech companies were founded by first- or second-generation Americans; they employed 1.9 million people last year. US entitlements account for 61% of government spending, versus 42% 30 years ago, and show no signs of stopping. Healthcare is steadily digitizing, driven by consumers, and the trends are very powerful: you can expect more telemedicine and on-demand consultations.

For details and infographics, we recommend you go through the slide deck of the Internet Trends report.

What Elon Musk and South African conservation can teach us about technology forecasting. Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them. Experts present the most pressing issues facing global lawmakers on citizens' privacy, democracy and the rights to freedom of speech.
Sugandha Lahoti
11 Jun 2019
10 min read

Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?

Most of the recent breakthroughs in Artificial Intelligence are driven by data and computation. What is essentially missing from the conversation is the energy cost. Most large AI networks require huge amounts of training data to ensure accuracy, and these accuracy improvements depend on the availability of exceptionally large computational resources. The larger the computational resource, the more energy it consumes. This is not only costly financially (due to the cost of hardware, cloud compute, and electricity) but also strains the environment, due to the carbon footprint required to fuel modern tensor processing hardware. Considering the climate change repercussions we are facing on a daily basis, consensus is building that AI research ethics should include a focus on minimizing and offsetting the carbon footprint of research, and that researchers should report the energy cost of their work alongside time, accuracy, and other metrics.

The outsized environmental impact of deep learning was further highlighted in a recent research paper from the University of Massachusetts Amherst. In the paper, titled "Energy and Policy Considerations for Deep Learning in NLP", the researchers performed a life cycle assessment for training several common large AI models. They quantified the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP, and provided recommendations to reduce costs and improve equity in NLP research and practice. Per the paper, training an AI model can emit more than 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of the average American car (including the manufacture of the car itself). It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster.
Source

This speaks volumes about the carbon cost, and brings the conversation to the returns on deep learning's heavy (carbon) investment: is it really worth the marginal improvement in predictive accuracy over cheaper, alternative methods? The news alarmed people tremendously.

https://twitter.com/sakthigeek/status/1137555650718908416
https://twitter.com/vinodkpg/status/1129605865760149504
https://twitter.com/Kobotic/status/1137681505541484545

Even if some of this energy may come from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern, because in many locations the available energy is not derived from carbon-neutral sources, and even where renewable energy is available, it is limited by the equipment produced to store it.

The carbon footprint of NLP models

The researchers in this paper looked specifically at NLP models. They examined four models, the Transformer, ELMo, BERT, and GPT-2, and trained each on a single GPU for up to a day to measure its power draw. Next, they used the number of training hours listed in each model's original paper to calculate the total energy consumed over the complete training process. This number was then converted into pounds of carbon dioxide equivalent based on the average energy mix in the US, which closely matches the energy mix used by Amazon's AWS, the largest cloud services provider.

Source

The researchers found that the environmental costs of training grew proportionally to model size, and increased dramatically when additional tuning steps were used to increase the model's final accuracy. In particular, neural architecture search, a tuning process which tries to optimize a model by incrementally tweaking a neural network's design through exhaustive trial and error, had high associated costs for little performance benefit. The researchers also noted that these figures should only be considered as baselines.
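The conversion described above, from measured power draw and training hours to pounds of CO2, can be reproduced with simple arithmetic. The sketch below is an illustrative back-of-the-envelope version: the emission factor (0.954 lbs CO2/kWh) and PUE multiplier (1.58) are commonly cited US averages, and the example workload at the bottom is hypothetical, not a figure from the paper.

```python
# Back-of-the-envelope version of the conversion: average power draw x
# training hours -> kWh -> pounds of CO2 via the US average grid mix.
# The constants are commonly cited US averages, not figures measured here.

US_GRID_LBS_CO2_PER_KWH = 0.954

def training_co2_lbs(avg_power_watts: float, training_hours: float,
                     pue: float = 1.58) -> float:
    """Estimate pounds of CO2 emitted by a training run.

    pue is the datacenter Power Usage Effectiveness multiplier, accounting
    for cooling and other overhead on top of the hardware's own draw.
    """
    kwh = avg_power_watts * training_hours * pue / 1000.0
    return kwh * US_GRID_LBS_CO2_PER_KWH

# Hypothetical example: eight GPUs at 250 W each, training for 72 hours.
print(round(training_co2_lbs(8 * 250, 72), 1))  # → 217.1
```

Scaling the hypothetical 72-hour run up to the multi-week, multi-GPU schedules of large NLP models makes it easy to see how totals reach the hundreds of thousands of pounds reported in the paper.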
In practice, AI researchers mostly develop new models from scratch or adapt existing models to new datasets; both require many more rounds of training and tuning. Based on their findings, the authors recommend certain proposals to heighten awareness of this issue in the NLP community and promote mindful practice and policy:

Researchers should report training time and sensitivity to hyperparameters. There should be a standard, hardware-independent measurement of training time, such as gigaflops required to convergence, and a standard measurement of model sensitivity to data and hyperparameters, such as variance with respect to the hyperparameters searched.

Academic researchers should get equitable access to computation resources. The trend toward training huge models on tons of data is not feasible for academics, because they don't have the computational resources. It would be more cost-effective for academic researchers to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Science Foundation.

Researchers should prioritize computationally efficient hardware and algorithms. For instance, developers could reduce the energy associated with model tuning by providing easy-to-use APIs implementing more efficient alternatives to brute-force search.

The next step is to introduce energy cost as a standard metric that researchers are expected to report alongside their findings. Researchers should also try to minimize their carbon footprint by developing compute-efficient training methods, such as new ML algorithms or new engineering tools that make existing ones more compute-efficient. Above all, we need to formulate public policies that steer digital technologies toward speeding a clean energy transition while mitigating the risks. The electronic hardware on which most deep learning tasks run is another factor contributing to high energy consumption.
To tackle that issue, researchers and major tech companies, including Google, IBM, and Tesla, have developed "AI accelerators", specialized chips that improve the speed and efficiency of training and testing neural networks. However, these AI accelerators use electricity and have a theoretical minimum limit for energy consumption. Also, most present-day ASICs are based on CMOS technology and suffer from the interconnect problem: even in highly optimized architectures where data are stored in register files close to the logic units, a majority of the energy consumption comes from data movement, not logic. Analog crossbar arrays based on CMOS gates or memristors promise better performance, but as analog electronic devices they suffer from calibration issues and limited accuracy.

Implementing chips that use light instead of electricity

A group of MIT researchers has developed a "photonic" chip that uses light instead of electricity and consumes relatively little power in the process. The photonic accelerator uses more compact optical components and optical signal-processing techniques to drastically reduce both power consumption and chip area. Practical applications for such chips include reducing energy consumption in data centers: "In response to vast increases in data storage and computational capacity in the last decade, the amount of energy used by data centers has doubled every four years, and is expected to triple in the next 10 years."

https://twitter.com/profwernimont/status/1137402420823306240

The chip could be used to process massive neural networks millions of times more efficiently than today's classical computers.

How does the photonic chip work?

The researchers give a detailed explanation of the chip's working in their research paper, "Large-Scale Optical Neural Networks Based on Photoelectric Multiplication".
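The photoelectric multiplication named in the paper's title can be illustrated with a toy numerical sketch: each detector's output is proportional to the product of two encoded amplitudes, so a grid of detectors computes a noisy matrix-vector product, and the noise shrinks as more photons are used. This is a simplified illustration under stated assumptions, not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def photonic_matvec(W, x, photons_per_signal=1e4):
    """Toy model of homodyne-detection matrix multiplication.

    Each detector's photocurrent is modeled as the product of a weight and
    an input amplitude plus Gaussian noise whose scale shrinks as
    1/sqrt(photon count) -- a crude stand-in for optical shot noise.
    """
    noise_scale = 1.0 / np.sqrt(photons_per_signal)
    products = W * x[np.newaxis, :] + rng.normal(0.0, noise_scale, size=W.shape)
    return products.sum(axis=1)  # each output neuron sums its detectors

W = rng.normal(size=(4, 8))   # weights for 4 output neurons, 8 inputs
x = rng.normal(size=8)        # input activations encoded as amplitudes

exact = W @ x
approx = photonic_matvec(W, x)
print(np.round(np.max(np.abs(exact - approx)), 3))  # small, photon-limited error
```

Raising `photons_per_signal` drives the error toward zero, which mirrors the accuracy-versus-efficiency tradeoff of real photonic accelerators: more light means less noise but more energy per operation.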
The chip relies on a compact, energy-efficient "optoelectronic" scheme that encodes data with optical signals but uses "balanced homodyne detection" for matrix multiplication, a technique that produces a measurable electrical signal proportional to the product of the amplitudes (wave heights) of two optical signals. Pulses of light encoded with information about the input and output neurons for each neural network layer, which are needed to train the network, flow through a single channel. Optical signals carrying the neuron and weight data fan out to a grid of homodyne photodetectors. The photodetectors use the amplitudes of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse. That optical signal becomes the input for the next layer, and so on.

Limitations of photonic accelerators

Photonic accelerators generally have unavoidable noise in the signal. The more light that is fed into the chip, the less noise and the greater the accuracy; less input light increases efficiency but negatively impacts the neural network's performance. The efficiency of an AI accelerator is measured by how many joules it takes to perform a single operation of multiplying two numbers. Traditional accelerators are measured in picojoules, or one-trillionth of a joule. Photonic accelerators measure in attojoules, which is a million times more efficient. In their simulations, the researchers found their photonic accelerator could operate with sub-attojoule efficiency.

Tech companies are the largest contributors of carbon footprint

The realization that training an AI model can produce emissions equivalent to those of five cars should make the carbon footprint of artificial intelligence an important consideration for researchers and companies going forward.
UMass Amherst's Emma Strubell, a member of the research team and co-author of the paper, said, "I'm not against energy use in the name of advancing science, obviously, but I think we could do better in terms of considering the trade off between required energy and resulting model improvement." "I think large tech companies that use AI throughout their products are likely the largest contributors to this type of energy use," Strubell said. "I do think that they are increasingly aware of these issues, and there are also financial incentives for them to curb energy use."

In 2016, Google's DeepMind was able to reduce the energy required to cool Google data centers by 30%; this full-fledged AI system includes continuous monitoring and human override. Recently, Microsoft doubled its internal carbon fee to $15 per metric ton on all carbon emissions. The funds from this higher fee will maintain Microsoft's carbon neutrality and help meet its sustainability goals. On the other hand, Microsoft is also two years into a seven-year deal, rumored to be worth over a billion dollars, to help Chevron, one of the world's largest oil companies, better extract and distribute oil.

https://twitter.com/AkwyZ/status/1137020554567987200

Amazon has announced that it will power its data centers with 100 percent renewable energy, but without a dedicated timeline. Since 2018, Amazon has reportedly slowed down its renewable energy efforts, using only 50 percent. According to a report by Greenpeace, it has also not announced any new deals to supply clean energy to its data centers since 2016, and it quietly abandoned plans for one of its last scheduled wind farms last year. In April, over 4,520 Amazon employees organized against Amazon's continued profiting from climate devastation. However, Amazon rejected all 11 shareholder proposals, including the employee-led climate resolution, at its annual shareholder meeting.
Both studies illustrate the dire need to change our outlook toward building Artificial Intelligence models and chips, given their impact on the carbon footprint. This does not mean halting AI research altogether. Instead, there should be an awareness of the environmental impact that training AI models can have, which in turn can inspire researchers to develop more efficient hardware and algorithms for the future.

Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change. Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models. Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
Richard Gall
10 Jun 2019
4 min read

Salesforce is buying Tableau in a $15.7 billion all-stock deal

Salesforce, one of the world's leading CRM platforms, is buying data visualization software Tableau in an all-stock deal worth $15.7 billion. The news comes just days after it emerged that Google is buying one of Tableau's competitors in the data visualization market, Looker. Taken together, the stories highlight the importance of analytics to some of the planet's biggest companies. They suggest that despite years of the big data revolution, it's only now that market-leading platforms are starting to realise that their customers want the level of capabilities offered by the best in the data visualization space.

Salesforce shareholders will use their stock to purchase Tableau. As the press release published on the Salesforce site explains, "each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019." The acquisition is expected to be completed by the end of October 2019.

https://twitter.com/tableau/status/1138040596604575750

Why is Salesforce buying Tableau?

The deal is an incredible result for Tableau shareholders. At the end of last week, its market cap was $10.7 billion. This has led to some scepticism about just how good a deal this is for Salesforce. One commenter on Hacker News said: "this seems really high for a company without earnings and a weird growth curve. Their ticker is cool and maybe sales force [sic] wants to be DATA on nasdaq. Otherwise, it will be hard to justify this high markup for a tool company." With Salesforce shares dropping 4.5% as markets opened this week, it seems investors are inclined to agree: Salesforce is certainly paying a premium for Tableau.
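The exchange-ratio arithmetic in the press release is easy to sanity-check. In the sketch below, only the 1.103 ratio comes from the release; the Salesforce share price and Tableau share count are hypothetical placeholders chosen to show how a headline figure in this ballpark falls out.

```python
# Sketch of the all-stock deal arithmetic. Only EXCHANGE_RATIO is from the
# press release; the price and share count below are hypothetical inputs.

EXCHANGE_RATIO = 1.103  # Salesforce shares per Tableau share (per the release)

def implied_tableau_price(salesforce_vwap: float) -> float:
    """Value received per Tableau share, given Salesforce's 3-day VWAP."""
    return EXCHANGE_RATIO * salesforce_vwap

def implied_deal_value_billions(salesforce_vwap: float,
                                tableau_shares: float) -> float:
    """Total implied equity value of the deal, in billions of dollars."""
    return implied_tableau_price(salesforce_vwap) * tableau_shares / 1e9

# Hypothetical inputs: a $155 VWAP and ~92 million diluted Tableau shares.
print(round(implied_deal_value_billions(155.0, 92e6), 2))  # → 15.73
```

With these placeholder inputs the implied value lands near the $15.7 billion headline, which is the point of an all-stock deal: the final value floats with Salesforce's share price rather than being fixed in cash.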
However, whatever the long term impact of the acquisition, the price paid underlines the fact that Salesforce views Tableau as exceptionally important to its long term strategy. It opens up an opportunity for Salesforce to reposition and redefine itself as much more than just a CRM platform. It means it can start to compete with the likes of Microsoft, which has a full suite of professional and business intelligence tools. Moreover, it also provides the platform with another way of potentially onboarding customers: given that Tableau is well known as a powerful yet accessible data visualization tool, it creates an avenue through which new users can find their way to the Salesforce product.

Marc Benioff, Chair and co-CEO of Salesforce, said: "we are bringing together the world's #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It's truly the best of both worlds for our customers--bringing together two critical platforms that every customer needs to understand their world."

Tableau has been a target for Salesforce for some time. Leaked documents from 2016 revealed that the data visualization company was one of 14 companies that Salesforce had an interest in (another was LinkedIn, which would eventually be purchased by Microsoft).

Read next: Alteryx vs. Tableau: Choosing the right data analytics tool for your business

What's in it for Tableau (aside from the money...)?

For Tableau, there are many benefits of being purchased by Salesforce besides the money. Primarily, this is about expanding the platform's reach: Salesforce users are people interested in data, with a huge range of use cases. By joining up with Salesforce, Tableau will become their go-to data visualization tool. "As our two companies began joint discussions," Tableau CEO Adam Selipsky said, "the possibilities of what we might do together became more and more intriguing.
They have leading capabilities across many CRM areas including sales, marketing, service, application integration, AI for analytics and more. They have a vast number of field personnel selling to and servicing customers. They have incredible reach into the fabric of so many customers, all of whom need rich analytics capabilities and visual interfaces... On behalf of our customers, we began to dream about what we might accomplish if we could combine our ability to help people see and understand data with their ability to help people engage and understand customers."

What will happen to Tableau?

Tableau won't be going anywhere. It will continue to exist under its own brand, with the current leadership, including Selipsky, all remaining.

What does this all mean for the technology market?

At the moment, it's too early to say, but the last year or so has seen some major high-profile acquisitions by tech companies. Perhaps we're seeing the emergence of a tooling arms race, as the biggest organizations attempt to arm themselves with ecosystems of established market-leading tools. Whether this is good or bad for users remains to be seen.
Savia Lobo
10 Jun 2019
5 min read

Did unfettered growth kill Maker Media? Financial crisis leads company to shutdown Maker Faire and lay off all staff

Updated: On July 10, 2019, Dougherty announced the relaunch of Maker Faire and Maker Media under the new name "Make Community".

Maker Media Inc., the company behind Maker Faire, the popular event that hosts arts, science, and engineering DIY projects for children and their parents, has laid off all 22 of its employees and has decided to shut down due to financial troubles.

The company started in January 2005 with MAKE, an American bimonthly magazine focused on do-it-yourself (DIY) and do-it-with-others (DIWO) projects involving computers, electronics, robotics, metalworking, woodworking, and more, for both adults and children. In 2006, the company held its first Maker Faire event, which lets attendees wander amidst giant, inspiring art and engineering installations. Maker Faire now includes 200 owned and licensed events per year in over 40 countries.

The Maker movement gained momentum and popularity when MAKE magazine first started publishing 15 years ago. The movement emerged as a source of livelihood as individuals found ways to build small businesses around their creative activity. In 2014, the White House blog posted an article stating, "Maker Faires and similar events can inspire more people to become entrepreneurs and to pursue careers in design, advanced manufacturing, and the related fields of science, technology, engineering and mathematics (STEM)." With funding from the Department of Labor, "the AFL-CIO and Carnegie Mellon University are partnering with TechShop Pittsburgh to create an apprenticeship program for 21st-century manufacturing and encourage startups to manufacture domestically." Recently, researchers from Baylor University and the University of North Carolina have, in a research paper, highlighted opportunities for studying the conditions under which the Maker movement might foster entrepreneurship outcomes.
Dale Dougherty, Maker Media Inc.'s founder and CEO, told TechCrunch, "I started this 15 years ago and it's always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works... barely. Events are hard... there was a drop off in corporate sponsorship." Microsoft and Autodesk failed to sponsor this year's flagship Bay Area Maker Faire, TechCrunch reports.

Dougherty also said that the company is trying to keep the servers running. "I hope to be able to get control of the assets of the company and restart it. We're not necessarily going to do everything we did in the past but I'm committed to keeping the print magazine going and the Maker Faire licensing program," he added.

In 2016, the company laid off 17 of its employees, followed by 8 more this past March. "They've been paid their owed wages and PTO, but did not receive any severance or two-week notice," TechCrunch reports. These layoffs may have hinted to the staff at the financial crisis affecting the company.

Maker Media Inc. had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. Dougherty says, "It started as a venture-backed company but we realized it wasn't a venture-backed opportunity. The company wasn't that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes, for instance, are in education."

The company has a huge public following for its products. Dougherty told TechCrunch that despite the rain, Maker Faire's big Bay Area event last week met its ticket sales target, and about 1.45 million people attended its events in 2016. "MAKE: magazine had 125,000 paid subscribers and the company had racked up over one million YouTube subscribers. But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media," writes TechCrunch.
Dougherty told TechCrunch he has been overwhelmed by the support shown by the Maker community. As of now, licensed Maker Faire events around the world will proceed as planned. “Dougherty also says he’s aware of Oculus co-founder Palmer Luckey’s interest in funding the company, and a GoFundMe page started for it”, TechCrunch reports. Mike Senese, Executive Editor, MAKE magazine, tweeted, “Nothing but love and admiration for the team that I got to spend the last six years with, and the incredible community that made this amazing part of my life a reality.” https://twitter.com/donttrythis/status/1137374732733493248 https://twitter.com/xeni/status/1137395288262373376 https://twitter.com/chr1sa/status/1137518221232238592 Former Mythbusters co-host Adam Savage, who was a regular presence at the Maker Faire, told The Verge, “Make Media has created so many important new connections between people across the world. It showed the power from the act of creation. We are the better for its existence and I am sad. I also believe that something new will grow from what they built. The ground they laid is too fertile to lie fallow for long.” On July 10, 2019, Dougherty announced he’ll relaunch Maker Faire and Maker Media with the new name “Make Community“. The official launch of Make Community will supposedly be next week. The company is also working on a new issue of Make Magazine that is planned to be published quarterly and the online archives of its do-it-yourself project guides will remain available. Dougherty told TechCrunch “with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again. 
This is where the community support needs to come in because I can’t fund it for very long.” GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft] Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism
Vincy Davis
07 Jun 2019
6 min read

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts

Last week, a team of researchers from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research published a paper titled "Text-based Editing of Talking-head Video". The paper proposes a method to edit a talking-head video based on its transcript, producing a realistic output video in which the dialogue of the speaker has been modified. Basically, the editor modifies a video using a text transcript, adding new words, deleting unwanted ones, or completely rearranging the pieces by dragging and dropping. The resulting video maintains a seamless audio-visual flow, without any jump cuts, and will look almost flawless to the untrained eye.

The researchers want this kind of text-based editing approach to lay the foundation for better editing tools in the post-production of movies and television. Actors often botch small bits of a performance or leave out a critical word; this algorithm can help video editors fix that, which until now has involved expensive reshoots. It can also help in easily adapting audio-visual content to specific target audiences. The tool supports three types of edit operations: adding new words, rearranging existing words, and deleting existing words. Ohad Fried, a researcher on the paper, says that "This technology is really about better storytelling. Instructional videos might be fine-tuned to different languages or cultural backgrounds, for instance, or children's stories could be adapted to different ages."

https://youtu.be/0ybLCfVeFL4

How does the application work?

The method takes an input talking-head video and a transcript and performs text-based editing. The first step is to align phonemes to the input audio and track each input frame to construct a parametric head model. Next, a 3D parametric face model is registered with each frame of the input talking-head video; this helps in selectively blending different aspects of the face. Then, a background sequence is selected and is used for pose data and background pixels.
The background sequence allows editors to edit challenging videos with hair movement and slight camera motion. As facial expressions are an important parameter, the researchers have tried to preserve the retrieved expression parameters as much as possible, smoothing out the transitions between them. This yields an edited parameter sequence that describes the new desired facial motion, along with a correspondingly retimed background video clip. These are forwarded to a ‘neural face rendering’ approach, which changes the facial motion of the retimed background video to match the parameter sequence. The rendering procedure thus produces photo-realistic video frames of the subject appearing to speak the new phrase. These localized edits blend seamlessly into the original video, producing an edited result. Lastly, to add the audio, the resulting video is retimed to match the recording at the level of phones. The researchers used the performers’ own voices in all of their synthesis results.

Image Source: Text-based Editing of Talking-head Video

The researchers have tested the system with a series of complex edits, including adding, removing and changing words, as well as translations to different languages. When the application was tried in a crowd-sourced study with 138 participants, the edits were rated as “real” almost 60% of the time. Fried said that “The visual quality is such that it is very close to the original, but there’s plenty of room for improvement.”

Ethical considerations: Erosion of truth, confusion and defamation

Even though the application is quite useful for video editors and producers, it raises important and valid concerns about its potential for misuse. The researchers have acknowledged that such a technology might be used for illicit purposes: “We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals.
We are concerned about such deception and misuse.” They have recommended certain precautions to avoid deception and misuse, such as watermarking: “The fact that the video is synthesized may be obvious by context, directly stated in the video or signaled via watermarking. We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience.” They urge the community to continue to develop forensics, fingerprinting and verification techniques to identify manipulated video. They also support the creation of appropriate regulations and laws that would balance the risks of misuse of these tools against the importance of creative, consensual use cases.

The public, however, remains dubious, pointing out valid arguments on why the ‘Ethical Concerns’ discussed in the paper fall short. A user on Hacker News comments, “The "Ethical concerns" section in the article feels like a punt. The author quoting "this technology is really about better storytelling" is aspirational -- the technology's story will be written by those who use it, and you can bet people will use this maliciously.”

https://twitter.com/glenngabe/status/1136667296980701185

Another user feels that this kind of technology will only result in a “slow erosion of video evidence being trustworthy”. Others have pointed out that the kind of transformation described in the paper does not fall under the broad category of ‘video editing’: ‘We need more words to describe this new landscape’

https://twitter.com/BrianRoemmele/status/1136710962348617728

Another common argument is that the algorithm can be used to generate terrifyingly real deepfake videos. A recent example of a shallow fake was the altered video of Nancy Pelosi that circulated widely: it was slowed down to make it appear she was slurring her words. Facebook was criticized for not acting faster to slow the video’s spread.
Beyond altering the speeches of politicians, videos manipulated like this can also, for instance, be used to create fake emergency alerts, or to disrupt elections by dropping a fake video of one of the candidates before voting starts. There is also the issue of defaming someone in a personal capacity. Sam Gregory, Program Director at Witness, tweets that one of the main steps in ensuring effective use of such tools would be to “ensure that any commercialization of synthetic media tools has equal $ invested in detection/safeguards as in detection.; and to have a grounded conversation on trade-offs in mitigation”. He has also listed more interesting recommendations.

https://twitter.com/SamGregory/status/1136964998864015361

For more details, we recommend you read the research paper.

OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee

Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This 4-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in Machine learning, Automation, Robotics, and Space to share new ideas across these rapidly advancing domains. re:MARS featured a lot of announcements revealing a range of robots, each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, autonomous human-like acrobats by Walt Disney Imagineering, and much more. Amazon also revealed Alexa’s new dialog modeling for natural, cross-skill conversations. Let us have a brief look at each of the announcements.

Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using robotics

Popularly known as “Iron Man”, Robert Downey Jr. made one of the most exciting appearances at re:MARS, announcing a new project called The Footprint Coalition to clean up the planet using advanced technologies. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said. According to Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports. A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement comes at a time when the “company itself is under fire for its policies around the environment and climate change”.
Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work on creating autonomous acrobats.

https://twitter.com/jillianiles/status/1136082571081555968
https://twitter.com/thesullivan/status/1136080570549563393

Amazon will soon deliver orders using drones

On Wednesday, Amazon unveiled a revolutionary new drone that will test-deliver toothpaste and other household goods starting within months. This drone is “part helicopter and part science-fiction aircraft”, with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he refused to provide details on where the delivery tests will be conducted. The drones have received a year’s approval from the FAA to test the devices in limited ways that still won’t allow deliveries. According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”. Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which in April became the first drone operator to win FAA approval to operate as a small airline. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March. Amazon’s drone is about six feet across, with six propellers that lift it vertically off the ground.
It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, letting it fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades becoming more like airplane propellers. Kimchi said Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) of a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds, and more than 80% of packages sold by the retail behemoth are within that weight limit. According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines, which have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon’s official blog post.

Boston Dynamics’ first commercial robot, Spot

Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics’ CEO Marc Raibert told The Verge, “Spot is currently being tested in a number of “proof-of-concept” environments, including package delivery and surveying work.” He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly”, Raibert said. These Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. The robots are simple to control: using a D-pad, users can steer a robot just like an RC car or mechanical toy.
A quick tap on the video feed streamed live from the robot’s front-facing camera lets the user select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted atop, a Spot robot can map environments like construction sites, identifying hazards and tracking work progress. Its robot arm gives it greater flexibility, helping it open doors and manipulate objects.

https://twitter.com/jjvincent/status/1136096290016595968

The commercial version will be “much less expensive than prototypes [and] we think they’ll be less expensive than other peoples’ quadrupeds”, Raibert said. Here’s a demo video of the Spot robot at the re:MARS event.

https://youtu.be/xy_XrAxS3ro

Alexa gets new dialog modeling for improved natural, cross-skill conversations

Amazon unveiled new features in Alexa that would help the conversational agent answer more complex questions and carry out more complex tasks. Rohit Prasad, Alexa vice president and head scientist, said, “We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.” This new update to Alexa is a set of AI modules that work together to generate responses to customers’ questions and requests. With every round of dialog, the system produces a vector — a fixed-length string of numbers — that represents the context and the semantic content of the conversation. “With this new approach, Alexa will predict a customer’s latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills,” Prasad says.
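To make the fixed-length-context idea concrete, here is a toy sketch: the real system uses learned neural encoders, whereas this stand-in simply hashes tokens into a fixed number of buckets and folds each turn into a running vector. Every name here is hypothetical.

```python
import math

def turn_vector(utterance, dim=16):
    """Hash each token into a fixed number of buckets: a toy stand-in
    for the learned encoder that maps one dialog turn to a fixed-length
    vector of numbers."""
    v = [0.0] * dim
    for tok in utterance.lower().split():
        v[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def update_context(context, utterance, decay=0.7):
    """Fold a new turn into the running context vector: older turns
    fade, but the vector stays the same fixed length regardless of
    how long the conversation gets."""
    turn = turn_vector(utterance, dim=len(context))
    return [decay * c + (1 - decay) * t for c, t in zip(context, turn)]
```

The point of the fixed length is that downstream modules (goal prediction, skill routing) always receive the same-shaped input, no matter how many rounds of dialog preceded it.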
“This is a big leap for conversational AI.” At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep-learning-based approach for skill developers to create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide:

(1) application programming interfaces, or APIs, that provide access to their skills’ functionality;
(2) a list of entities that the APIs can take as inputs, such as restaurant names or movie times;
(3) a handful of sample dialogs annotated to identify entities and actions and mapped to API calls.

Alexa Conversations’ AI technology handles the rest. “It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” Prasad said. To know more about this announcement in detail, head over to Alexa’s official blog post.

Amazon Robotics unveiled two new robots at its fulfillment centers

Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus and the other Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. “We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can’t make that customer promise anymore”, Porter said.
Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent. Xanthus, he said, represents the latest incarnation of Amazon’s drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled the Xanthus Sort Bot and Xanthus Tote Mover. “The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience,” Porter says. To know more about these new robots, watch the video below:

https://youtu.be/4MH7LSLK8Dk

StyleSnap: AI-powered shopping

Amazon announced StyleSnap, a recent move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories. All they need to do is upload a photo or screenshot of what they are looking for when they are unable to describe it.

https://twitter.com/amazonnews/status/1136340356964999168

Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after." To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. StyleSnap then recommends similar outfits on Amazon to purchase, with users able to filter across brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can accurately pick a matching style.
To know more about StyleSnap in detail, head over to Amazon’s official blog post.

Amazon Go trains cashierless store algorithms using synthetic data

At re:MARS, Amazon shared more details about Amazon Go, the company’s brand of cashierless stores, explaining that Amazon Go uses synthetic data to intentionally introduce errors to its computer vision system. Challenges that had to be addressed before opening the stores included building vision systems that account for sunlight streaming into a store, keeping latency low, and coping with small amounts of data for certain tasks. Synthetic data is being used in a number of ways elsewhere: to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III. Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.” To know more about this news in detail, check out this video:

https://youtu.be/jthXoS51hHA

The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas, visit Amazon’s blog.

World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase
Amazon introduces S3 batch operations to process millions of S3 objects
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
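The data-balancing problem Kumar describes for Amazon Go can be illustrated in miniature: oversample the scarce error class with perturbed copies of real examples. This is not Amazon's pipeline; `perturb` merely stands in for whatever renderer or noise model produces a plausible synthetic variant, and all names are illustrative.

```python
import random

def boost_with_synthetic(examples, labels, rare_label, target_count, perturb, seed=0):
    """Oversample a rare class by appending perturbed copies of its examples.

    Mirrors the idea in the article: when real negative examples are
    scarce, generate synthetic ones to boost diversity, while being
    careful that the perturbation does not add artifacts unique to the
    synthetic set (that check is outside this sketch).
    """
    rng = random.Random(seed)
    rare = [x for x, y in zip(examples, labels) if y == rare_label]
    if not rare:
        raise ValueError("need at least one real example of the rare class")
    out_x, out_y = list(examples), list(labels)
    while out_y.count(rare_label) < target_count:
        out_x.append(perturb(rng.choice(rare), rng))
        out_y.append(rare_label)
    return out_x, out_y
```

Starting from three "ok" examples and one "err" example, asking for three "err" examples appends two perturbed copies, leaving the two classes balanced for training.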