Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?

Fatema Patrawala
02 Jul 2019
4 min read
The emergence of worker solidarity and organization throughout the tech industry has been one of the few upsides to a difficult 18 months. And although it might be tempting to see this wave as somehow separate from the technical side of building software, the reality is that worker power - and, indeed, worker safety and respect - are crucial to ensuring safe and high quality software. Last week's npm bug, reported by users on Friday, is a good case in point. It comes just a matter of months after news in April of surprise layoffs and accusations of punitive anti-union actions. It perhaps confirms what one former npm employee told The Register last month: "I think it's time to break the in-case-of-emergency glass to assess how to keep JavaScript safe… Soon there won't be any knowledgeable engineers left."

What was the npm 6.9.1 bug?

The npm 6.9.1 bug is complex. There are a number of layers to the issue, some of which relate to earlier iterations of the package manager. For those interested, Rebecca Turner, a former core contributor who resigned her position at npm in March in response to the layoffs, explains in detail how the bug came about:

"...npm publish ignores .git folders by default but forces all files named readme to be included… And that forced include overrides the exclude. And then there was once a remote branch named readme… and that goes in the .git folder, gets included in the publish, which then permanently borks your npm install, because of EISGIT, which in turn is a restriction that's afaik entirely vestigial, copied forward from earlier versions of npm without clear insight into why you'd want that restriction in the first place."

Turner says she suspects the bug was "introduced with tar rewrite." Whoever published it, she goes on to say, must have had a repository with a remote reference and failed to follow the setup guide, "which recommends using a separate copy of the repo for publication."

Kat Marchán, CLI and Community Architect at npm, later confirmed that the team had published npm 6.9.2 to fix the issue, but said that users would have to uninstall the broken version manually before upgrading. "We are discussing whether to unpublish 6.9.1 as well, but this should stop any further accidents," Marchán said.

The impact of npm's internal issues

The important subplot to all of this is that npm 6.9.1 appears to have been delayed because of npm's internal issues. A post on GitHub by Audrey Eschright, one of the employees currently filing a case against npm with the National Labor Relations Board, explained that work on the open source project had been interrupted because npm's management had decided to remove "core employee contributors to the npm cli."

The implication, then, is that management's attitude has had a negative impact on npm 6.9.1. If the allegations of 'union busting' are true, it would seem that preventing workers from organizing to protect one another was more important than building robust and secure software. At a more basic level, whatever the reality of the situation, npm's management appears unable to cultivate an environment that allows employees to do what they do best.

Why is this significant?

This is ultimately just a story about a bug - not all that remarkable on its own. But given the context, it's significant because it highlights that tech worker organization, and how management responds to it, has a direct link to the quality and reliability of the software we use. If friction persists between the commercial leaders within a company and engineers, software is the thing that's going to suffer.

Read Next

Surprise NPM layoffs raise questions about the company culture
Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks
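To make the failure mode Turner describes more concrete, here is a minimal, hypothetical TypeScript model of the packing rule she outlines: .git contents are excluded, files named "readme" are force-included, and the force-include wins. This is an illustrative sketch only, not npm's actual implementation; all names and rules below are made up for the example.

```typescript
// Simplified, hypothetical model of the packing rule Rebecca Turner describes.
// This is NOT npm's real code - just an illustration of how a force-include
// for "readme" files can override the .git exclusion.

function isExcluded(path: string): boolean {
  // .git folders are excluded from the published package by default
  return path.split("/").includes(".git");
}

function isForceIncluded(path: string): boolean {
  // any file literally named "readme" (case-insensitive) is always included
  const name = path.split("/").pop() ?? "";
  return name.toLowerCase() === "readme";
}

function shouldPack(path: string): boolean {
  // the force-include overrides the exclude - the problematic ordering
  return isForceIncluded(path) || !isExcluded(path);
}

const files = [
  "package.json",
  "lib/index.js",
  ".git/config",
  ".git/refs/remotes/origin/readme", // ref created by a remote branch named "readme"
];

for (const f of files) {
  console.log(`${f} -> ${shouldPack(f) ? "packed" : "skipped"}`);
}
// ".git/refs/remotes/origin/readme" slips into the tarball, so installing the
// package recreates a .git entry and later npm operations fail with EISGIT.
```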
Experts discuss Dark Patterns and deceptive UI designs: What are they? What do they do? How do we stop them?

Sugandha Lahoti
29 Jun 2019
12 min read
Dark patterns are often used online to deceive users into taking actions they would otherwise not take under effective, informed consent. They are generally used by shopping websites, social media platforms, mobile apps, and services as part of their user interface design choices. Dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children. Using dark patterns is unambiguously unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws), the European Union (under the Unfair Commercial Practices Directive and similar member state laws), and numerous other jurisdictions.

Earlier this week, at the Russell Senate Office Building, a panel of experts met to discuss the implications of dark patterns in the session Deceptive Design and Dark Patterns: What are they? What do they do? How do we stop them? The session included remarks from Senators Mark Warner and Deb Fischer, sponsors of the DETOUR Act, and a panel of experts including Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology).

The full panel of experts included:

Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology)
Rana Foroohar (Global Business Columnist and Associate Editor, Financial Times)
Amina Fazlullah (Policy Counsel, Common Sense Media)
Paul Ohm (Professor of Law and Associate Dean, Georgetown Law School), also the moderator
Katie McInnis (Policy Counsel, Consumer Reports)
Marshall Erwin (Senior Director of Trust & Security, Mozilla)
Arunesh Mathur (Dept. of Computer Science, Princeton University)

Dark patterns are growing on social media platforms, video games, and shopping websites, and are increasingly used to target children

The expert session was opened by Arunesh Mathur (Dept. of Computer Science, Princeton University), who talked about a new study by researchers from Princeton University and the University of Chicago. The study suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns.

One of these dark patterns was Sneak into Basket, which adds additional products to users' shopping carts without their consent. For example, you might buy a bouquet on a website, and the website, without your consent, adds a greeting card in the hope that you will actually purchase it.

Katie McInnis agreed and added that dark patterns not only undermine the choices available to users on social media and shopping platforms, they can also cost users money. User interfaces are sometimes designed to push users away from protecting their privacy, making them tough to evaluate.

Amina Fazlullah, Policy Counsel at Common Sense Media, said that dark patterns are also being used to target children. Manipulative apps use design techniques to shame or confuse children into in-app purchases, or to try to keep them on the app for longer. Children are mostly unable to discern these manipulative techniques.
Sometimes the screen will have icons or buttons that appear to be part of gameplay, and children will click on them not realizing that they're being asked to make a purchase, being shown an ad, or being directed to another site. There are also games that ask for payments or microtransactions to continue the game.

Mozilla uses product transparency to curb dark patterns

Marshall Erwin, Senior Director of Trust & Security at Mozilla, talked about the negative effects of dark patterns and how Mozilla makes its own products more transparent. The company has a set of checks and principles in place to avoid dark patterns:

No surprises: If users were to figure out or start to understand exactly what is happening in the browser, it should be consistent with their expectations. If users are surprised, the browser needs to make a change, either by stopping the activity entirely or by creating additional transparency that helps people understand it.

Anti-tracking technology: Cross-site tracking is one of the most pervasive and pernicious dark patterns across the web today, enabled by cookies. Browsers should take action to decrease the attack surface in the browser and actively protect people from those patterns online. Mozilla and Apple have introduced anti-tracking technology to actively intervene and protect people from third parties that are probably not trustworthy.

The DETOUR Act by Senators Warner and Fischer

In April, Warner and Fischer introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act, a bipartisan bill to prohibit large online platforms from using dark patterns to trick consumers into handing over their personal data. The act focuses on the activities of large online service providers (over a hundred million users visiting in a given month). Under this act, platforms cannot use practices that trick users into providing information or consenting; they face new controls on conducting 'psychological experiments on your users' and will no longer be able to target children under 13 with the goal of hooking them into a service. The act also extends additional rulemaking and enforcement abilities to the Federal Trade Commission.

"Protecting users' personal data and user autonomy online are truly bipartisan issues": Senator Mark Warner

In his presentation, Warner talked about how 2019 is the year we need to recognize dark patterns and their ongoing manipulation of American consumers. While we've all celebrated the benefits that communities have gained from social media, there is also an enormous dark underbelly, he says. It is important that Congress steps up so that Americans and their private data are not misused or manipulated going forward. Protecting users' personal data and user autonomy online are truly bipartisan issues. This is not liberal versus conservative; it's much more future versus past, and about how we get this future right in a way that takes advantage of social media tools but also puts some appropriate constraints in place.

He says that the driving notion behind the DETOUR Act is that users should have choice and autonomy when it comes to their personal data. When a company like Facebook asks you to upload your phone contacts or some other highly valuable data to its platform, you ought to have a simple choice: yes or no.
Companies that run experiments on you without your consent are coercive, and the DETOUR Act aims to put appropriate protections in place that defend users' ability to make informed choices. In addition to prohibiting large online platforms from using dark patterns to trick consumers into handing over their personal data, the bill would also require informed consent for behavioral experimentation. In the process, the bill sends a clear message to the platform companies and the FTC that they are now in the business of preserving users' autonomy when it comes to the use of their personal data. The goal, Warner says, is simple: to bring some transparency to what remains a very opaque market, and to give consumers the tools they need to make informed choices about how and when to share their personal information.

"Curbing the use of dark patterns will be foundational to increasing trust online": Senator Deb Fischer

Fischer argued that tech companies are increasingly tailoring users' online experiences in ways that are more granular. On one hand, she says, you get a more personalized user experience and platforms are more responsive; however, it's this variability that allows companies to take that design just a step too far. Companies are constantly competing for users' attention, and this increases the motivation for more intrusive and invasive user design. The ability of online platforms to shape the visual interfaces that billions of people view is an incredible influence, and it forces us to assess the impact of design on user privacy and well-being.

Fundamentally, the DETOUR Act would prohibit large online platforms from purposely using deceptive user interfaces - dark patterns. It would provide a better accountability system for improved transparency and autonomy online. The legislation would take an important step toward restoring hidden options and would give users a tool to get out of the maze that coaxes you to just click 'I agree'. A privacy framework that involves consent cannot function properly if it doesn't ensure the user interface presents fair and transparent options. The DETOUR Act would also enable the creation of a professional standards body, which can register with the Federal Trade Commission. This would serve as a self-regulatory body to develop best practices for UI design, with the FTC as a backup.

She adds, "We need clarity for the enforcement of dark patterns that don't directly involve our wallets. We need policies that place value on user choice and personal data online. We need a stronger mechanism to protect the public interest when the goal for tech companies is to make people engage more and more. User consent remains weakened by the presence of dark patterns and unethical design. Curbing the use of dark patterns will be foundational to increasing trust online. The detour act does provide a key step in getting there."

"The DETOUR act is calling attention to asymmetry and preventing deceptive asymmetry": Tristan Harris

Tristan says that companies are now competing not on manipulating your immediate behavior but on manipulating and predicting the future. For example, Facebook has something called loyalty prediction, which allows it to sell to an advertiser the ability to predict when you're going to become disloyal to a brand - and to sell that opportunity to another advertiser before you probably even know you're going to switch. The DETOUR Act is a huge step in the right direction because it's about calling attention to asymmetry and preventing deceptive asymmetry.
We need a new relationship with this asymmetric power, he argues, in the form of a duty of care: treating asymmetrically powerful technologies as being in the service of the systems they are supposed to protect. He says we need to switch to a regenerative attention economy that actually treats attention as sacred and does not directly tie profit to user extraction.

Top questions raised by the panel and online viewers

Does A/B testing result in dark patterns?

Dark patterns are often a result of A/B testing, where a designer may try things that lead to better engagement or nudge users in a way that benefits the company. However, A/B testing isn't the problem; the intention behind how A/B testing is used is. Companies and other organizations should have oversight of the experiments they are conducting to see whether A/B testing is actually leading to some kind of concrete harm. The challenge in this space is drawing a line between A/B testing features, optimizing for engagement, and decreasing friction.

Are consumers smart enough to tackle dark patterns on their own, or do we need legislation?

It's well established that children, whose brains are still developing, are unable to discern these types of deceptive techniques, so especially for kids these practices should be banned. For vulnerable families who are juggling all sorts of concerns around income, access to jobs, transportation, and health care, putting this on their plate as well is just unreasonable. Dark patterns are deployed for an array of opaque reasons the average user will never recognize. From a consumer perspective, going through and identifying dark pattern techniques - which these platform companies have spent hundreds of thousands of dollars developing to be as opaque and as tricky as possible - is an unrealistic expectation to put on consumers. This is why the DETOUR Act and this type of regulation are absolutely necessary and the only way forward.

What is it about the largest online providers that makes us want to focus on them first, or only? Is it their scale, or do they have more powerful dark patterns? Is it because they're harming more people, or is it politics?

Sometimes larger companies stay wary of indulging in dark patterns because they have a greater risk of getting caught and facing PR backlash. However, they do engage in manipulative practices, and that warrants a lot of attention. Moreover, targeting bigger companies is just one part of a more comprehensive privacy enforcement environment. Hitting companies that have a large number of users is also effective for consumer engagement. Obviously there is a need to target more broadly, but this is a starting point.

If Facebook were to suddenly reclass itself and its advertising business model, would you still trust it?

No, the leadership that's in charge of Facebook now cannot be trusted, especially given the organizational culture that has been building. There are change efforts going on inside Google and Facebook right now, but they are getting gridlocked. Even if employees want to see policies change, they still have bonus structures and employee culture to keep in mind.

We recommend you go through the full hearing here. You can read more about the DETOUR Act here.

U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick its users
How social media enabled and amplified the Christchurch terrorist attack
A new study reveals how shopping websites use 'dark patterns' to deceive you into buying things you may not want
Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

Bhagyashree R
28 Jun 2019
6 min read
The internet was ablaze when Evan You, creator of Vue, published an RFC earlier this month to introduce a function-based component API. It was followed by a huge discussion in the Vue community on whether such an API is really needed.

https://twitter.com/youyuxi/status/1137567675356291072

This proposal came after Evan You previewed an experimental Hooks API back in November at Vue Conf Toronto 2018.

Why Vue needs a function-based component API

Components help you abstract your code into smaller pieces. This gives your web applications a better structure, makes the code more readable and understandable, and, most importantly, enables you to reuse logic across multiple components. According to the RFC, the component API in Vue 2.x has some drawbacks in terms of reusability. The three common patterns generally used to achieve reusability in Vue are mixins, higher-order components (HOCs), and renderless components, and each comes with its share of drawbacks:

Mixins bring implicit dependencies into code, cause name clashes, and make your code harder to understand.
HOCs can often be verbose, involve lots of passing props and hoisting statics, and can cause name conflicts.
Renderless components require extra stateful component instances that come at the cost of performance.

The function-based component API aims to address all these drawbacks. Inspired by React Hooks, its objective is to provide developers with a "clean and flexible way" to compose logic and share it between components. The team plans to achieve this by moving the logic code into a "composition function" and returning reactive state. Another motivation behind the proposed change is to provide better built-in TypeScript type inference support, as function-based APIs are naturally type-friendly. Also, code written with function-based APIs compresses better than object- or class-based code.

What do Vue developers think about this RFC?

The Vue community was a little taken aback by the proposal, which would essentially change the way they write Vue. They were concerned that it would take away Vue's most desirable property: its simplicity. Vue's class-based API made it easy to understand and get started with, whereas bringing a function-based API to Vue would complicate things in exchange for very few advantages. Some argued that this change would make it just another React.

"Like a lot of others here, I chose Vue vs React for the simplicity and readability of code. The class-based API was easy to understand and pick up. If I wanted React, I would have just chosen React from the beginning. I get that there are some technical advantages to doing this, but Vue 3 is starting to really turn me off of staying with Vue going forward," a developer shared on a Reddit thread.

Developers were also concerned that the time they had invested in learning Vue would go to waste as everything was about to change. A Vue developer commented on Reddit, "You learn to do something one way and then they change it up on you. Might as well just switch to react at this point." Many compared this scenario to Angular 1->2 or Python 2->3 and suggested switching to Svelte to avoid the mess.

Some, however, liked the syntax and are looking forward to playing around with the API. A developer shared, "But I read through, checked out the new (simpler) example, read Evan's arguments about logical task grouping, and on a second read with a more open mind, I actually kind of like the new syntax and am now looking forward to trying it out. I'm glad they agreed to keep the object syntax around though."

How the Vue team responded

When the RFC was first published, it implied that the current API would be deprecated in a future major release, and there was a lot of confusion around the "compatibility" and "stable" builds. Many developers felt that the RFC was already "set in stone" from the way it was communicated; they felt that the core team had already decided to bring this API to Vue without community consultation. So one of the reasons behind the confusion was how the change was communicated. The team acknowledged this and asked the community for suggestions on improving their communication.

https://twitter.com/N_Tepluhina/status/1142715703558103040

The core team clarified that the update will be additive and that there are no plans to remove the Object API in a future major release. Evan You, the creator of Vue, said in a thread, "feel free to stay with the current API for as long as you wish. As long as the community feels there's a need for the old API to stay, it will stay. The only one that can make the decision to switch to the new API is yourself."

He also addressed the concerns on a Hacker News thread:

There is a lot of FUD in this thread so we need to clarify a bit:
- This API is purely additive to 2.x and doesn't break anything.
- 3.0 will have a standard build which adds this API on top of 2.x API, and an opt-in "lean build" which drops a number of 2.x APIs for a smaller and faster runtime.
- This is an open RFC, which means it's not set in stone. The whole point of having an RFC is so that users can voice their opinions. It's not like we are shipping this tomorrow.

After listening to the various perspectives shared by developers, the core team revised the RFC accordingly, finally putting everybody at ease. Guillaume Chau, a member of the Vue core team, put out a clear and concise plan of action on Twitter, to which people are responding positively. The plan reassured developers that the Object API will not be deprecated until the community stops using it, and that the proposed API will first be offered as a standalone plugin for Vue 2.x.

https://twitter.com/Akryum/status/1143114880960126976

Some developers have also started to try out the new API:

https://twitter.com/igor_randj/status/1143302939496370177
https://twitter.com/cmsalvado/status/1143230023089786880

Closing thoughts

Open source programmers put their time and effort into building software that helps the community, and an RFC (request for comments) is a way for the community to get involved in building high-quality software at scale. Through an RFC you can share constructive feedback on why a change is or is not necessary - and all of this can be done in a respectful way. This was a very good example of how an RFC should really work: publish an RFC, discuss it with the community, listen to the community, and decide collectively what to do next. Despite some hiccups in communication, the Vue core team did a good job of engaging with the community to develop the roadmap for the function-based component API in Vue.

Read the RFC for the function-based component API for more details.

Vue 2.6 is now out with a new unified syntax for slots, and more
Learning Vue in 2019 with Anthony Gore's developer knowledge map
Evan You shares Vue 3.0 updates at VueConf Toronto 2018
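To give a rough idea of the composition style the RFC describes, here is a minimal TypeScript sketch. It uses the function names that eventually shipped in Vue 3 (ref, computed, onMounted); the June 2019 RFC draft used slightly different names (for example, value() instead of ref()), so treat this as an illustration of the idea rather than the exact API under discussion at the time.

```typescript
// A minimal sketch of a "composition function": reusable logic lives in a plain
// function instead of a mixin or HOC, and a component opts in via setup().
import { ref, computed, onMounted, defineComponent } from "vue";

function useCounter(initial: number) {
  const count = ref(initial);
  const double = computed(() => count.value * 2);
  const increment = () => { count.value++; };
  return { count, double, increment };
}

export default defineComponent({
  setup() {
    const { count, double, increment } = useCounter(0);

    onMounted(() => {
      console.log("mounted with count =", count.value);
    });

    // Everything returned from setup() is exposed to the template.
    return { count, double, increment };
  },
});
```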
“I'm concerned about Libra's model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition

Fatema Patrawala
26 Jun 2019
7 min read
In February, Facebook made its debut in the blockchain space by acquiring Chainspace, a London-based, Gibraltar-registered blockchain venture. Chainspace was a small start-up founded by several academics from the University College London Information Security Research Group. The authors of the original Chainspace paper were Mustafa Al-Bassam, Alberto Sonnino, Shehar Bano, Dave Hrycyszyn and George Danezis, some of the UK's leading privacy engineering researchers.

Following the acquisition, last week Facebook announced the launch of its new cryptocurrency, Libra, which is expected to go live by 2020. The Libra whitepaper lists a wide array of authors, including the Chainspace co-founders Alberto Sonnino, Shehar Bano and George Danezis. According to Wired, David Marcus, a former PayPal president and a Coinbase board member who resigned from that board last year, has been appointed by Facebook to lead the Libra project.

Libra isn't like other cryptocurrencies such as Bitcoin or Ethereum. As per the Reuters report, the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run the computers. Mustafa Al-Bassam, one of the research co-founders of Chainspace who did not join Facebook, posted a detailed Twitter thread yesterday with his views on the new cryptocurrency.

https://twitter.com/musalbas/status/1143629828551270401

On Libra's decentralized model being less censorship resistant

Mustafa says, "I don't have any doubt that the Libra team is building Libra for the right reasons: to create an open, decentralized payment system, not to empower Facebook. However, the road to dystopia is paved with good intentions, and I'm concerned about Libra's model for decentralization."

He pointed to a user comment on GitHub which reads, "Replace "decentralized" with "distributed" in readme". Mustafa explains that Libra's closed set of 100 validator nodes is seen as more decentralized than Bitcoin, where 4 pools control >51% of hashpower. According to the Block Genesis, decentralized networks are particularly prone to Sybil attacks due to their permissionless nature. Mustafa takes this into consideration and asks whether Libra is Sybil resistant. He comments, "I'm aware that the word "decentralization" is overused. I'm looking at decentralization, and Sybil-resistance, as a means to achieve censorship-resistance. Specifically: what do you have to do to reverse or censor transactions, how much does it cost, and who has that power? My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks."

He further explains, "In the banking system there is no majority of parties that can collude together to deny two banks the ability to maintain a relationship with each other - in the worst case scenario they can send physical cash to each other, which does not require a ledger. It's permissionless."

Mustafa adds, with a surreal thought experiment, that if Libra were the only way to transfer currency and it were less censorship resistant, we would be worse off: "With cryptocurrency systems (even decentralized ones), there is always necessarily a majority of consensus nodes (e.g. a 51% attack) that can collude together to censor or reverse transactions. So if you're going to create digital cash, this is extremely important to consider. With Libra, censorship-resistance is even more important, as Libra could very well end up being the world's "de facto" currency, and if the Libra network is the only way to transfer that currency, and it's less censorship-resistant, we're worse off than where we started."

On Libra's permissioned consensus node selection authority

Mustafa says, "Libra's current permissioned consensus node selection authority is derived directly from nation state-enforced (Switzerland's) organization laws, rather than independently from stakeholders holding sovereign cryptographic keys."

(Source: Libra whitepaper)

What this means is that the "root API" for Libra's node selection mechanism is the Libra Association, via the Swiss Federal Constitution and the Swiss courts, rather than public key cryptography. Mustafa also pointed out that the association members for Libra are large $1b+ companies, and US-based.

(Source: Libra whitepaper)

One could argue that governments can also regulate the people who hold public keys, but a key difference is that such regulation can't be directly enforced without access to the private key. Mustafa explained this point with an example from last year, when Iran tested how resistant global payments are to US censorship. Iran requested a 300 million Euro cash withdrawal via Germany's central bank, which was rejected under US pressure. Mustafa added, "US sanctions have been bad on ordinary people in Iran, but they can at least use cash to transact with other countries. If people wouldn't even be able to use cash in the future because Libra digital cash isn't censorship-resistant, that would be *brutal*."

On Libra's proof-of-stake based permissionless mechanism

Mustafa argues that the Libra whitepaper confuses consensus with Sybil-resistance. In his view, Sybil-resistant node selection through permissionless mechanisms such as proof-of-stake, which selects the set of cryptographic keys that participate in consensus, is necessarily more censorship-resistant than the Association-based model. Proof-of-stake is a Sybil-resistance mechanism, not a consensus mechanism; the "longest chain rule", on the other hand, is a consensus mechanism.

He notes that Libra has outlined a proof-of-stake-based permissionless roadmap and plans to transition to it within the next 5 years. Mustafa feels 5 years will be far too late, given that the Group of Seven nations (G7) are already lining up a taskforce to control Libra. Mustafa also disputes the Libra whitepaper's claim that it needs to start permissioned for the next five years because permissionless, scalable, secure blockchains are an unsolved technical problem that requires the community's help to research.

(Source: Libra whitepaper)

He says, "It's as if they ignored the past decade of blockchain scalability research efforts. Secure layer-one scalability is a solved research problem. Ethereum 2.0, for example, is past the research stage and is now in the implementation stage, and will handle more than Libra's 1000tps."

Mustafa also points out that Chainspace was in the middle of implementing a permissionless sharded blockchain with higher on-chain scalability than Libra's 1000tps. With Facebook's resources, this could easily have been accelerated and made a reality.
He says there are many research-led blockchain projects that have implemented, or are implementing, scalability strategies that achieve higher than Libra's 1000tps without heavily trading off security, so the "community" research on this is plentiful; it is just that Facebook is being lazy. He concludes, "I find it a great shame that Facebook has decided to be anti-social and launch a permissioned system as they need the community's help as scalable blockchains are an unsolved problem, instead of using their resources to implement on a decade of research in this area."

People have appreciated Mustafa for giving a detailed review of Libra. One of the tweets reads, "This was a great thread, with several acute and correct observations."

https://twitter.com/ercwl/status/1143671361325490177

Another tweet reads, "Isn't a shard (let's say a blockchain sharded into 100 shards) by its nature trading off 99% of its consensus forming decentralization for 100x (minus overhead, so maybe 50x?) increased scalability?"

Mustafa responded, "No because consensus participants are randomly sampled into shards from the overall consensus set, so shards should be roughly uniformly secure, and in the event that a shard misbehaves, fraud and data availability proofs kick in."

https://twitter.com/ercwl/status/1143673925643243522

In one of the tweets it is also suggested that 1/3 of Libra validators can enforce censorship even against the will of the 2/3 majority. In contrast, it requires a majority of miners to censor Bitcoin; also, unlike Libra, there is no entry barrier other than capital to becoming a Bitcoin miner.

https://twitter.com/TamasBlummer/status/1143766691089977346

Let us know your views on Libra and how you expect it to perform.

Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge
Facebook releases Pythia, a deep learning framework for vision and language multimodal research
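The censorship-threshold point raised in that last tweet can be made concrete with some simple arithmetic. The sketch below is illustrative only: it assumes the common BFT rule that a system of n validators tolerates f faults with n >= 3f + 1, and uses Libra's announced figure of roughly 100 founding validator nodes; none of this code comes from Libra itself.

```typescript
// Illustrative arithmetic: how many colluding validators can block (censor)
// transactions in a typical BFT-style system, versus proof-of-work mining.

function bftCensorshipThreshold(totalValidators: number): number {
  // With n >= 3f + 1, commits need a quorum of more than 2/3 of validators,
  // so a coalition of just over 1/3 can withhold votes and stall or censor.
  return Math.floor(totalValidators / 3) + 1;
}

const libraValidators = 100; // approximate size of the founding association
console.log(
  `~${bftCensorshipThreshold(libraValidators)} of ${libraValidators} validators ` +
  `could refuse to confirm a transaction`
); // -> ~34 of 100 validators

// By contrast, censoring transactions on a proof-of-work chain like Bitcoin
// requires a majority of hashpower (>50%), and there is no fixed, permissioned
// membership list deciding who may participate.
```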
A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want

Sugandha Lahoti
26 Jun 2019
6 min read
A new study by researchers from Princeton University and the University of Chicago suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns.

Note: All images in the article are taken from the research paper.

What are dark patterns

Dark patterns are generally used by shopping websites as part of their user interface design choices. These dark patterns coerce, steer, or deceive users into making unintended and potentially harmful decisions that benefit an online service. Shopping websites trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. The patterns are not limited to shopping websites; they find common application on digital platforms including social media, mobile apps, and video games as well. At their most extreme, dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children.

Researchers used a web crawler to identify text-based dark patterns

The paper uses an automated approach that enables researchers to identify dark patterns at scale on the web. The researchers crawled 11K shopping websites using a web crawler built on top of OpenWPM, a web privacy measurement platform. The crawler was used to simulate a user browsing experience and identify user interface elements. The researchers used text clustering to extract recurring user interface designs from the resulting data and then inspected the resulting clusters for instances of dark patterns.

The researchers also developed a novel taxonomy of dark pattern characteristics to understand how dark patterns influence user decision-making. Based on the taxonomy, the dark patterns were classified according to whether they lead to an asymmetry of choice, are covert in their effect, are deceptive in nature, hide information from users, and restrict choice. The researchers also mapped the dark patterns in their data set to the cognitive biases they exploit; these biases collectively describe the consumer psychology underpinnings of the dark patterns identified. They also determined that many instances of dark patterns are enabled by third-party entities, which provide shopping websites with scripts and plugins to easily implement these patterns on their sites.

Key stats from the research

There are 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories.
These 1,841 dark patterns were present on 1,267 of the 11K shopping websites (∼11.2%) in the data set.
Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns.
234 instances of deceptive dark patterns were uncovered across 183 websites.
22 third-party entities were identified that provide shopping websites with the ability to create dark patterns on their sites.

Dark pattern categories

Sneaking: Attempting to misrepresent user actions, or delaying information that users would most likely object to once made available.
Sneak into Basket: Adds additional products to users' shopping carts without their consent.
Hidden Subscription: Charges users a recurring fee under the pretense of a one-time fee or a free trial.
Hidden Costs: Reveals new, additional, and often unusually high charges to users just before they are about to complete a purchase.

Urgency: Imposing a deadline on a sale or deal, thereby accelerating user decision-making and purchases.

Countdown Timers: A dynamic indicator of a deadline, counting down until the deadline expires.
Limited-time Messages: A static urgency message without an accompanying deadline.

Misdirection: Using visuals, language, or emotion to direct users toward or away from making a particular choice.

Confirmshaming: Uses language and emotion to steer users away from making a certain choice.
Trick Questions: Uses confusing language to steer users into making certain choices.
Visual Interference: Uses style and visual presentation to steer users into making certain choices over others.
Pressured Selling: Refers to defaults or often high-pressure tactics that steer users into purchasing a more expensive version of a product (upselling) or into purchasing related products (cross-selling).

Social proof: Influencing users' behavior by describing the experiences and behavior of other users.

Activity Notifications: Recurring attention-grabbing messages that appear on product pages, indicating the activity of other users.
Testimonials of Uncertain Origin: The use of customer testimonials whose origin, or how they were sourced and created, is not clearly specified.

Scarcity: Signalling that a product is likely to become unavailable, thereby increasing its desirability to users. Low-stock Messages and High-demand Messages fall under this category.

Low-stock Messages: Signal to users that only limited quantities of a product are available.
High-demand Messages: Signal to users that a product is in high demand, implying that it is likely to sell out soon.

Obstruction: Making it easy for the user to get into one situation but hard to get out of it. The researchers observed one type of the Obstruction dark pattern: "Hard to Cancel". The Hard to Cancel dark pattern is restrictive (it limits the choices users can exercise to cancel their services). In cases where websites do not disclose their cancellation policies upfront, Hard to Cancel also becomes information hiding (it fails to inform users that cancellation is harder than signing up).

Forced Action: Forcing the user to do something tangential in order to complete their task. The researchers observed one type of the Forced Action dark pattern, "Forced Enrollment", on 6 websites.

Limitations of the research

The researchers have acknowledged that their study has certain limitations:

Only text-based dark patterns are taken into account; work is still needed on inherently visual patterns (e.g., a change of font size or color to emphasize one part of the text more than another in an otherwise seemingly harmless pattern).
The web crawling led to a fraction of Selenium crashes, which did not allow the researchers to retrieve product pages or complete data collection on certain websites.
The crawler failed to completely simulate the product purchase flow on some websites.
They only crawled product pages and checkout pages, missing dark patterns present in other common pages such as website homepages, product search pages, and account creation pages.

The list of dark patterns can be downloaded as a CSV file. For more details, we recommend you read the research paper.

U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick its users
How social media enabled and amplified the Christchurch terrorist attack
Can an Open Web Index break Google's stranglehold over the search engine market?
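As a rough illustration of what text-based detection of such patterns can look like, here is a toy TypeScript sketch that flags a couple of the pattern types described above (Urgency and Scarcity messages) using simple regular expressions. This is not the authors' pipeline - the actual study used OpenWPM-based crawling and text clustering - just a minimal example of matching recurring dark-pattern copy.

```typescript
// Toy example: flag text snippets that resemble "Urgency" or "Scarcity"
// dark-pattern copy. The rules and sample strings below are made up.

type PatternType = "Countdown/Urgency" | "Low-stock/Scarcity" | "High-demand";

const rules: Array<{ type: PatternType; regex: RegExp }> = [
  { type: "Countdown/Urgency",  regex: /\b(hurry|offer ends|only \d+ (hours?|minutes?) left)\b/i },
  { type: "Low-stock/Scarcity", regex: /\bonly \d+ left in stock\b/i },
  { type: "High-demand",        regex: /\b\d+ (people|others) (are viewing|bought) this\b/i },
];

function detect(snippet: string): PatternType[] {
  return rules.filter(r => r.regex.test(snippet)).map(r => r.type);
}

const samples = [
  "Hurry! Offer ends in 2 hours.",
  "Only 3 left in stock - order soon.",
  "12 people are viewing this item right now.",
  "Free shipping on orders over $50.",
];

for (const s of samples) {
  console.log(`${s} -> ${detect(s).join(", ") || "no pattern matched"}`);
}
```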
The V programming language is now open source - is it too good to be true?

Bhagyashree R
24 Jun 2019
5 min read
Yesterday, a new statically-typed programming language named V was open sourced. It is described as a simple, fast, compiled language for creating maintainable software. Its creator, Alex Medvednikov, says that it is very similar to Go and is inspired by Oberon, Rust, and Swift.

What to expect from the V programming language

Fast compilation

V can compile up to 1.2 million lines of code per second per CPU. It achieves this through direct machine code generation and strong modularity. If V is asked to emit C code instead, the compilation speed drops to approximately 100k lines of code per second per CPU. Medvednikov mentions that direct machine code generation is still in its very early stages and right now only supports x64/Mach-O. He plans to make this feature stable by the end of this year.

Safety

It aims to be an ideal language: it has no null, no global variables, no undefined values, no undefined behavior, no variable shadowing, and it does bounds checking. It supports immutable variables, pure functions, and immutable structs by default. Generics are currently a work in progress and are planned for next month.

Performance

According to the website, V is as fast as C, requires a minimal amount of allocations, and supports built-in serialization without runtime reflection. It compiles to native binaries without any dependencies.

Just a 0.4 MB compiler

Compared to Go, Rust, GCC, and Clang, the space required and the build time of V are tiny. The entire language and standard library is just 400 KB, and you can build it in 0.4s. By the end of this year, the author aims to bring the build time down to 0.15s.

C/C++ translation

V allows you to translate your V code to C or C++. However, this feature is at a very early stage, given that C and C++ are very complex languages. The creator aims to make it stable by the end of this year.

What do developers think about this language?

As much as developers would like to have a great language for building applications, many felt that V is too good to be true. Looking at the claims made on the site, some developers thought that the creator is either not being truthful about the capabilities of V or is scamming people.

https://twitter.com/warnvod/status/1112571835558825986

A language that has the simplicity of Go and the memory management model of Rust is what everyone desires. However, the main reason people are skeptical about V is that there is not much proof behind the hard claims it makes. A user on Hacker News commented, "...V's author makes promises and claims which are then retracted, falsified, or untestable. Most notably, the source for V's toolchain has been teased repeatedly as coming soon but has never been released. Without an open toolchain, none of the claims made on V's front page [2] can be verified."

Another thing that makes this case concerning is that the V programming language is currently in alpha stage and is incomplete. Despite that, the creator is making $827 per month from his Patreon account. "However, advertising a product can do something and then releasing it stating it cannot do it yet, is one thing, but accepting money for a product that does not what is advertised, is a fraud," a user commented.

Some developers also speculate that the creator may simply be embarrassed to open source his code because of bad coding pattern choices. A user speculates, "V is not Free Software, which is disappointing but not atypical; however, V is not even open source, which precludes a healthy community. Additionally, closed languages tend to have bad patterns like code dumps over the wall, poor community communication, untrustworthy binary behaviors, and delayed product/feature releases. Yes, it's certainly embarrassing to have years of history on display for everybody to see, but we all apparently have gotten over it. What's hiding in V's codebase? We don't know. As a best guess, I think that the author may be ashamed of the particular nature of their bootstrap."

The features listed on the official website are incredible. The concern was that the creator had not been transparent about how he plans to achieve them, and because the project was closed source until now, there was no way for others to verify the performance guarantees it promises - which is why so much confusion arose.

Alex Medvednikov on why you can trust the V programming language

On an issue reported on GitHub, the creator commented, "So you either believe me or you don't, we'll see who is right in June. But please don't call me a liar, scammer and spread misinformation." Medvednikov was perhaps overwhelmed by the responses and speculation he was seeing on different discussion forums. Developing a whole new language requires a lot of work, and perhaps his deadlines are ambitious.

Going by the release announcement Medvednikov made yesterday, he is aware that the language design process hasn't been the most elegant version of his vision. He wrote, "There are lots of hacks I'm really embarrassed about, like using os.system() instead of native API calls, especially on Windows. There's a lot of ugly C code with #, which I regret adding at all."

Here's great advice shared by a developer on V's GitHub repository: Take your time, good software takes time. It's easy to get overwhelmed building Free software: sometimes it's better to say "no" or "not for now" in order to build great things in the long run :)

Visit the official website of the V programming language for more details.

Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near
Pull Panda is now a part of GitHub; code review workflows now get better!
Scala 2.13 is here with overhauled collections, improved compiler performance, and more!
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event

Bhagyashree R
24 Jun 2019
9 min read
Last month, This Dot Labs, a framework-agnostic JavaScript consultancy, conducted its biannual online live-streaming event, This.JavaScript - State of Browsers. In this live stream, representatives of popular browsers talk about the features users can look forward to, upcoming releases, and much more. This time Firefox was missing. In attendance, however, were:

Stephanie Drescher, Program Manager, Microsoft Edge
Brian Kardell, Developer Advocate, Igalia, an active contributor to WebKit
Rijubrata Bhaumik, Software Engineer, Intel, who talked about Intel's contributions to the web
Jonathan Sampson, Developer Relations, Brave
Paul Kinlan, Sr. Developer Advocate, Google
Diego Gonzalez, Product Manager, Samsung Internet

The event was moderated by Tracy Lee, the founder of This Dot Labs. Following are some of the updates shared by the browser representatives.

What's new with Edge

In December last year, Microsoft announced that it would adopt Chromium for the development of Microsoft Edge for desktop, and beginning this year we saw that decision come to fruition. The tech giant made the first preview builds of the Chromium-based Edge available to both macOS and Windows 10 users; these preview builds are available for testing from the Microsoft Edge Insider site. The Chromium-powered Edge is available for iOS and Android users too.

Stephanie Drescher shared what has changed for the Edge team after switching to Chromium. The switch is enabling them to deliver and update the Edge browser across all supported versions of Windows, and to update the browser more frequently, since they are no longer tied to the operating system. The Edge team is not just using Chromium but also contributing all web platform enhancements back to Chromium by default; the team has already made 400+ commits to the Chromium project. Edge comes with support for cross-platform, installable progressive web apps directly from the browser. The team's next focus area is improving the Windows experience in terms of accessibility, localization, scrolling, and touch.

At Build 2019, Microsoft also announced its new WebView that will be available for Win32 and UWP apps. Drescher said this "will give you the option of an evergreen Chromium platform via edge or the option to bring your own version for AppCompat via a model that's similar to Electron."

Moving on to dev tools, the browser has several new dev tools that are visually aligned with VS Code. The updates include dark mode on by default and control inputs, and the team is exploring "more ways to align the experience between your browser dev tools and VS Code." The browser's built-in tools can now inspect and debug any Microsoft Edge-powered web content, including PWAs and WebViews. No doubt these are some amazing features to be excited about.

Edge has come to iOS and macOS; however, the question of whether it will support Linux in the future remains unanswered. Drescher said that the team has no plans right now to support Linux, but looking at the number of user requests for Linux support, they are starting to think about it.

What's new with Chrome

At I/O 2019, Google shared its vision for Chrome: making it "instant, powerful, and safe" to help improve the overall browsing experience. To make Chrome faster and lighter, a number of improvements have been made to V8, Chrome's JavaScript engine. JavaScript memory usage is now down by 20% for real-world apps.
After addressing startup bottlenecks, Chrome's loading speed is now 50% better on low-end devices and 10% better across devices. Scrolling performance has also improved by 18%. Along with these speed gains, the team has introduced a few features in the web platform that aim to take some of the burden away from developers:

The lazy loading mechanism reduces the initial payload to improve load time. You just need to add loading="lazy" to the image or iframe element. The idea is simple: the web browser will not download an image or iframe that has the loading attribute until the user scrolls near it.

The Portals API, first showcased at I/O this year, aims to make navigation between sites and web pages smoother. Portals is very similar to an iframe in that it allows web developers to embed remote content in their pages. The difference is that with Portals you are able to navigate inside the content you are embedding.

As part of making Chrome more powerful, Google is actively working on bridging the capabilities gap between native and web under Project Fugu. It has already introduced two APIs, Web Share and Web Share Target, and plans to bring more capabilities like a writable files API, event alarms, user idle detection, and more. As the name suggests, the Web Share API allows websites to invoke the native sharing capabilities of the host platform, so users can easily share a URL or text on pretty much any platform they want to. Until now, we were restricted to sharing content with native apps that have registered as a share target; with the Web Share Target API, installed web apps can also register with the underlying OS as a target to receive shared content.

Talking about the safety aspect, Chrome now comes with support for WebAuthn, a new authentication standard by the W3C, starting from version 67. This API allows servers to integrate strong authenticators that are built into devices, for instance Windows Hello or Apple's Touch ID.

What's new with Brave

Edge, Chrome, and Brave share one thing in common: they are all Chromium-based. What sets Brave apart is the Basic Attention Token (BAT). Jonathan Sampson, who was representing Brave, said that we have seen a "Cambrian explosion" of cryptocurrencies, utility tokens, and blockchain assets like Bitcoin, Litecoin, and Ethereum.

Partnership with Coinbase

Previously, if you wanted to acquire these assets there was only one way to do it, "mining", which meant a huge investment in expensive GPUs and power bills. Brave believes that the next step is to earn these assets primarily with your "attention", and its goal is to take users from mining to earning blockchain assets. As part of this goal, it has partnered with Coinbase, one of the prominent companies in the blockchain space. Users will get 10 dollars in the form of BAT just for learning about the state of digital advertising and what Brave and attention tokens are doing in that space.

Through BAT, Brave is providing its consumers with a direct way to support their content creators. Content creators can customize and personalize this entire experience by signing up on Brave's creators page.

Implementation changes in how BAT is sent to creators

The Brave team has also made some implementation changes in terms of how this whole thing works. Previously, consumers could send these tokens to anyone.
The tokens then went into an omnibus settlement wallet and stayed there until the creator verified with the program and demonstrated ownership of their web property; only then did the creator get access to the tokens. Unfortunately, this could mean that some tokens had to "sit in a state of limbo" for an indefinite amount of time. Now, the team has re-engineered the process to hold the tokens inside your wallet for up to 90 days. If and when the property is verified, the tokens are transmitted out; if the property is never verified, the tokens are released back into your wallet, and you can send them to another creator instead of letting them sit in that omnibus settlement wallet. Sampson further added, "of course the entire process goes through the anonymize protocol so that brave nor anybody else has any idea which websites you're visiting or to whom you are contributing support."

Inner workings of Brave ads

To improve ad recommendations, Brave ships with an integrated machine learning model. This feature is opt-in, so the user decides when and how many ads they want to see in order to earn BAT from their attention. The ML model can study the user and learn about them each day. Every day a catalog is downloaded to each user's device, and the individual machines churn away on that catalog to figure out which ads are relevant to that individual. Once relevant ads are found, users see a small operating system notification. Brave sends 70% of the revenue made from users' attention back to the user in the form of BAT.

Brave Sync (beta)

The beta version of Brave Sync is available across platforms, from Windows, macOS, and Linux to Android and iOS. Like Brave Ads, this is an opt-in feature that allows you to automatically sync browsing data across devices. Right now it is in beta and supports syncing only bookmarks; in future releases we can expect support for tabs, history, passwords, autofill, as well as Brave Rewards. Once you enable it on one device, you just need to scan a QR code or enter a secret phrase to register another device for syncing.

Canary builds available

Like all the other browsers, Brave has also started to share its nightly and dev builds to give developers an "earlier insight" into the work being done. You can access them through the download page.

These were some of the major updates discussed in the live stream. Intel and Samsung also talked about their contributions to the web, and Igalia's developer Brian Kardell talked about dark mode, pointer events, and more in WebKit. Watch the full event on YouTube for more details.

https://www.youtube.com/watch?v=olSQai4EUD8

Elvis Pranskevichus on limitations in SQL and how EdgeQL can help
Microsoft makes the first preview builds of Chromium-based Edge available for testing
Brave introduces Brave Ads that share 70% revenue with users for viewing ads
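To make the lazy loading and Web Share features mentioned above more concrete, here is a small, illustrative TypeScript snippet. The element id and image URL are placeholders invented for the example; navigator.share is only available in browsers that implement the Web Share API, so the code feature-detects it first.

```typescript
// Illustrative only: native lazy loading plus the Web Share API.
// The ids and URLs below are placeholders, not from the talk.

// 1. Lazy loading: the browser defers fetching the image until the user
//    scrolls near it.
const img = document.createElement("img");
img.src = "/images/hero.jpg";   // placeholder URL
img.loading = "lazy";           // equivalent to <img loading="lazy">
document.body.appendChild(img);

// 2. Web Share: hand a URL/text off to the host platform's native share sheet.
const shareButton = document.querySelector<HTMLButtonElement>("#share"); // placeholder id
shareButton?.addEventListener("click", async () => {
  if (navigator.share) {
    // Only available on platforms that implement the Web Share API.
    await navigator.share({
      title: "State of Browsers",
      text: "Updates from Edge, Chrome, and Brave",
      url: location.href,
    });
  } else {
    console.log("Web Share API not supported; fall back to a copy-link UI.");
  }
});
```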
Raspberry Pi 4 is up for sale at $35, with 64-bit ARM core, up to 4GB memory, full-throughput gigabit Ethernet and more!

Vincy Davis
24 Jun 2019
5 min read
Today, the Raspberry Pi 4 model is up for sale, starting at $35. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, three memory options of up to 4GB, full-throughput gigabit Ethernet, dual-band 802.11ac wireless networking, two USB 3.0 and two USB 2.0 ports, complete compatibility with earlier Raspberry Pi products, and more. Eben Upton, Chief Executive at Raspberry Pi Trading, said that "This is a comprehensive upgrade, touching almost every element of the platform." This is also the first Raspberry Pi product available offline since the opening of the Raspberry Pi store in Cambridge, UK.

https://youtu.be/sajBySPeYH0

What's new in Raspberry Pi 4?

New Raspberry Pi silicon

Previous Raspberry Pi models are based on 40nm silicon, whereas the new Raspberry Pi 4 is a complete re-implementation of BCM283X on 28nm. The power savings delivered by the smaller process geometry have made it possible to use the Cortex-A72, a 1.5GHz quad-core 64-bit ARM CPU. The Cortex-A72 can execute more instructions per clock, yielding up to four times the performance of the Raspberry Pi 3B+, depending on the benchmark.

New Raspbian software

The new Raspbian software provides numerous technical improvements, along with an extensively modernized user interface and updated applications, including the Chromium 74 web browser. For Raspberry Pi 4, the Raspberry Pi team has retired the legacy graphics driver stack used on previous models in favor of the Mesa "V3D" driver. It offers benefits like OpenGL-accelerated web browsing and desktop composition, and also eliminates roughly half of the lines of closed-source code in the platform.

Raspberry Pi 4 memory options

For the first time, Raspberry Pi 4 is offered in a choice of memory capacities (the original announcement includes a pricing table for the 1GB, 2GB, and 4GB variants). All three variants of the new model have been launched, and the entry-level Raspberry Pi 4 Model B is priced at $35, excluding sales tax, import duty, and shipping.

Additional improvements in Raspberry Pi 4

Power: Raspberry Pi 4 uses USB-C as its power connector, which supports an extra 500mA of current, ensuring 1.2A for downstream USB devices even under heavy CPU load.

Video: The previous type-A HDMI connector has been replaced with a pair of type-D HDMI connectors to accommodate dual display output within the existing board footprint.

Ethernet and USB: The Gigabit Ethernet magjack has moved to the top right of the board, simplifying the PCB routing. The 4-pin Power-over-Ethernet (PoE) connector is in the same location, so Raspberry Pi 4 remains compatible with the PoE HAT. The Ethernet controller on the main SoC is connected to an external Broadcom PHY, providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane and providing a total of 4Gbps of bandwidth shared between the four ports.

The Raspberry Pi 4 also moves to LPDDR4 memory technology with triple the bandwidth, and upgrades the video decode, 3D graphics, and display output to support 4Kp60 throughput. Onboard Gigabit Ethernet and PCI Express controllers have been added to address the non-multimedia I/O limitations of previous devices. (Image source: Raspberry Pi blog.)

New Raspberry Pi 4 accessories

Because of the connector and form-factor changes, Raspberry Pi 4 requires new accessories. It has its own case, priced at $5, and the team has developed a suitable 5V/3A power supply, priced at $8 and available in UK, European, North American, and Australian plug formats.
The Raspberry Pi 4 Desktop Kit is also available and priced at $120. While the earlier Raspberry Pi models will be available in the market, Upton has mentioned that Raspberry Pi will continue to build these models as long as there's a demand for them. Users are quite ecstatic with the availability of Raspberry Pi 4 and many have already placed orders for it. https://twitter.com/Morphy99/status/1143103131821252609 https://twitter.com/M0VGA/status/1143064771446677509 A user on Reddit comments, “Very nice. Gigabit LAN and 4GB memory is opening it up to a hell of a lot more use cases. I've been tempted by some of the Pi's higher-specced competitors like the Pine64, but didn't want to lose out on the huge community behind the Pi. This seems like the best of both worlds to me.” A user on Hacker News says that “Oh my! This is such a crazy upgrade. I've been using the RPI2 as my HTPC/NAS at my folks, and I'm so happy with it. I was itching to get the last one for myself. USB 3.0! Gigabit Ethernet! WiFi 802.11ac, BT 5.0, 4GB RAM! 4K! $55 at most?! What the!? How the??! I know I'm not maintaining decorum at Hacker News, but I am SO mighty, MIGHTY excited! I'm setting up a VPN to hook this (when I get it) to my VPS and then do a LOT of fun stuff back and forth, remotely, and with the other RPI at my folks.” Another comment reads “This is absolutely great. The RPi was already exceptional for its price point, and this version seems to address the few problems it had (lack of Gigabit, USB speed and RAM capacity) and add onto it even more features. It almost seems too good to be true. Can't wait!” Another user says that “I'm most excited about the modern A72 cores, upgraded hardware decode, and up to 4 GB RAM. They really listened and delivered what most people wanted in a next gen RPi.” For more details, head over to the Raspberry Pi official blog. You can now install Windows 10 on a Raspberry Pi 3 Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial] Introducing Strato Pi: An industrial Raspberry Pi
Deepfakes House Committee Hearing: Risks, Vulnerabilities and Recommendations

Vincy Davis
21 Jun 2019
16 min read
Last week, the House Intelligence Committee held a hearing to examine the public risks posed by “deepfake” videos. Deepfake is identified as a technology that alters audio or video and then is passed off as true or original content. In this hearing, experts on AI and digital policy highlighted to the committee, deepfakes risk to national security, upcoming elections, public trust and the mission of journalism. They also offered potential recommendations on what Congress could do to combat deepfakes and misinformation. The chair of the committee Adam B. Schiff, initiated the hearing by stating that it is time to regulate the technology of deepfake videos as it is enabling sinister forms of deception and disinformation by malicious actors. He adds that “Advances in AI or machine learning have led to the emergence of advance digitally doctored type of media, the so-called deepfakes that enable malicious actors to foment chaos, division or crisis and have the capacity to disrupt entire campaigns including that for the Presidency.” For a quick glance, here’s a TL;DR: Jack Clerk believes that governments should be in the business of measuring and assessing deepfake threats by looking directly at the scientific literature and developing a base knowledge of it. David Doermann suggests that tools and processes which can identify fake content should be made available in the hands of individuals, rather than relying completely on the government or on social media platforms to police content. Danielle Citron warns that the phenomenon of deepfake is going to be increasingly felt by women and minorities and for people from marginalized communities. Clint Watts provides a list of recommendations which should be implemented to prohibit U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content. A unified standard should be followed by all social media platforms. Also they should be pressurized to have a 10-15 seconds delay in all videos, so that they can decide, to label a particular video or not. Regarding 2020 Presidential election: State governments and social media companies should be ready with a response plan, if a fake video surfaces to cause disrupt. It was also recommended that the algorithms to make deepfakes should be open sourced. Laws should be altered, and strict actions should be awarded, to discourage deepfake videos. Being forewarned is forearmed in case of deepfake technology Jack Clerk, OpenAI Policy Director, highlighted in his testimony that he does not think A.I. is the cause of any disruption, but actually is an “accelerant to an issue which has been with us for some time.'' He adds that computer software aligned with A.I. technology has become significantly cheaper and more powerful, due to its increased accessibility. This has led to its usage in audio or video editing, which was previously very difficult. Similar technologies  are being used for production of synthetic media. Also deepfakes are being used in valuable scientific research. Clerk suggests that interventions should be made to avoid its misuse. He believes that “it may be possible for large-scale technology platforms to try and develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level. 
We can also increase funding.” He strongly believes that governments should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base knowledge. Clerk concludes saying that “being forewarned is forearmed here.” Make Deepfake detector tools readily availaible David Doermann, the former Project Manager at the Defense Advanced Research Projects Agency mentions that the phrase ‘seeing is believing’ is no longer true. He states that there is nothing fundamentally wrong or evil about the technology, like basic image and video desktop editors, deepfakes is only a tool. There are a lot of positive applications of generative networks just as there are negative ones. He adds that, as of today, there are some solutions that can identify deepfakes reliably. However, Doermann fears that it’s only a matter of time before the current detection capabilities will be rendered less effective. He adds that “it's likely to get much worse before it gets much better.” Doermann suggests that tools and processes which can identify such fake content should be made available in the hands of individuals, rather than relying completely on the government or on social media platforms to police content. At the same time, there should also be ways to verify it or prove it or easily report it. He also hopes that automated detection tools will be developed, in the future, which will help in filtering and detection at the front end of the distribution pipeline. He also adds that “appropriate warning labels should be provided, which suggests that this is not real or not authentic, or not what it's purported to be. This would be independent of whether this is done and the decisions are made, by humans, machines or a combination.” Groups most vulnerable to Deepfake attacks Women and minorities Danielle Citron, a Law Professor at the University of Maryland, describes Deepfake as “particularly troubling when they're provocative and destructive.” She adds that, we as humans, tend to believe what our eyes and ears are telling us and also tend to share information that confirms our biases. It’s particularly true when that information is novel and negative, so the more salacious, we're more willing to pass it on. She also specifies that the deepfakes on social media networks are ad-driven. When all of this is put together, it turns out that the more provocative the deepfake is, the salacious will be the spread virally.  She also informed the panel committee about an incident, involving an investigative journalist in India, who had her posters circulated over the internet and deepfake sex videos, with her face morphed into pornography, over a provocative article. Citron thus states that “the economic and the social and psychological harm is profound”. Also based on her work in cyber stalking, she believes that this phenomenon is going to be increasingly felt by women and minorities and for people from marginalized communities. She also shared other examples explaining the effect of deepfake on trades and businesses. Citron also highlighted that “We need a combination of law, markets and really societal resilience to get through this, but the law has a modest role to play.” She also mentioned that though there are laws to sue for defamation, intentional infliction of emotional distress, privacy torture, these procedures are quite expensive. She adds that criminal law offers very less opportunity for the public to push criminals to the next level. 
National security Clint Watts, a Senior Fellow at the Foreign Policy Research Institute provided insight into how such technologies can affect national security. He says that “A.I. provides purveyors of disinformation to identify psychological vulnerabilities and to create modified content digital forgeries advancing false narratives against Americans and American interests.” Watts suspects that Russia, “being an enduring purveyor of disinformation is and will continue to pursue the acquisition of synthetic media capability, and employ the output against adversaries around the world.” He also adds that China, being the U.S. rival, will join Russia “to get vast amounts of information stolen from the U.S. The country has already shown a propensity to employ synthetic media in broadcast journalism. They'll likely use it as part of disinformation campaigns to discredit foreign detractors, incite fear inside western-style democracy and then, distort the reality of audiences and the audiences of America's allies.” He also mentions that deepfake proliferation can present a danger to American constituency by demoralizing it. Watts suspects that the U.S. diplomats and military personnel deployed overseas, will be prime target for deepfake driven disinformation planted by adversaries. Watts provided a list of recommendations which should be implemented to “prohibit U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content.” The U.S. government must be the sole purveyor of facts and truth to constituents, assuring the effective administration of democracy via productive policy debate from a shared basis of reality. Policy makers should work jointly with social media companies to develop standards for content and accountability. The U.S. government should partner with private sectors to implement digital verification designating a date, time and physical origination of the content. Social media companies should start labeling videos, and forward the same across all platforms. Consumers should be able to determine the source of the information and whether it's the authentic depiction of people and events. The U.S. government from a national security perspective, should maintain intelligence on capabilities of adversaries to conduct such information. The departments of defense and state should immediately develop response plans, for deepfake smear campaigns and mobilizations overseas, in an attempt to mitigate harm. Lastly he also added that public awareness of deepfakes and signatures, will assist in tamping down attempts to subvert the  U.S. democracy and incite violence. Schiff asked the witnesses, if it's “time to do away with the immunity that social media platforms enjoy”, Watts replied in the affirmative and listed suggestions in three particular areas. If social media platforms see something spiking in terms of virality, it should be put in a queue for human review, linked to fact checkers, then down rate it and don't let it into news feeds. Also make the mainstream understand what is manipulated content. Anything related to outbreaks of violence and public safety should be regulated immediately. Anything related to elected officials or public institutions, should immediately be flagged and pulled down and checked and then a context should be given to it. 
Co-chair of the committee, Devin Nunes asked Citron what kind of filters can be placed on these tech companies, as “it's not developed by partisan left wing like it is now, where most of the time, it's conservatives who get banned and not democrats”. Citron suggested that proactive filtering won’t be possible and hence companies should react responsibly and should be bipartisan. She added that “but rather, is this a misrepresentation in a defamatory way, right, that we would say it's a falsehood that is harmful to reputation. that's an impersonation, then we should take it down. This is the default I am imagining.” How laws could be altered according to the changing times, to discourage deepfake videos Citron says that laws could be altered, like in the case of Section 230 C. It states that “No speaker or publisher -- or no online service shall be treated as a speaker or publisher of someone else's content.” This law can be altered to “No online service that engages in reasonable content moderation practices shall be treated as a speaker or publisher of somebody else's content.” Citron believes that avoiding reasonability could lead to negligence of law. She also adds that “I've been advising Twitter and Facebook all of the time. There is meaningful reasonable practices that are emerging and have emerged in the last ten years. We already have a guide, it's not as if this is a new issue in 2019. So we can come up with reasonable practices.” Also Watts added that if any adversary from big countries like China, Iran, Russia makes a deepfake video to push the US downwards, we can trace them back if we have aggressive laws at our hand. He says it could be anything from an “arrest and extradition, if the sanction permits, response should be individually, or in terms of cyber response”, could help us to discourage deepfake. How to slow down the spread of videos One of the reasons that these types of manipulated images gain traction is because it's almost instantaneous - they can be shared around the world, shared across platforms in a few seconds. Doermann says that these social media platforms must be pressurized to have a 10-15 seconds delay, so that it can be decided whether to label a particular video or not. He adds that “We've done it for child pornography, we've done it for human trafficking, they're serious about those things. This is another area that's a little bit more in the middle, but I think they can take the same effort in these areas to do that type of triage.” This delay will allow third parties or fact checkers to decide on the authenticity of videos and label them. Citron adds that this is where labelling a particular video can help, “I think it is incredibly important and there are times in which, that's the perfect rather than second best, and we should err on the side of inclusion and label it as synthetic.” The representative of Ohio, Brad Wenstrup added that we can have internal extradition laws, which can punish somebody when “something comes from some other country, maybe even a friendly country, that defames and hurts someone here”. There should be an agreement among nations that “we'll extradite those people and they can be punished in your country for what they did to one of your citizens.” Terri Sewell, the Representative of Alabama further probed about the current scenario of detecting fake videos, to which Doermann replied that currently we have enough solutions to detect a fake video, however with a constant delay of 15-20 minutes. 
Deepfakes and 2020 Presidential elections Watts says that he’s concerned about deepfakes acting on the eve of election day 2020. Foreign adversaries may use a standard disinformation approach by “using an organic content that suits their narrative and inject it back.” This can escalate as more people are making deepfakes each year. He also added that “Right now I would be very worried about someone making a fake video about electoral systems being out or broken down on election day 2020.” So state governments and social media companies should be ready with a response plan in the wake of such an event. Sewell then asked the witnesses for suggestions on campaigns to political parties/candidates so that they are prepared for the possibility of deepfake content. Watts replied that the most important thing to counter fake content would be a unified standard, that all the social media industries should follow. He added that “if you're a manipulator, domestic or international, and you're making deep fakes, you're going to go to whatever platform allows you to post anything from inauthentic accounts. they go to wherever the weak point is and it spreads throughout the system.” He believes that this system would help counter extremism, disinformation and political smear campaigns. Watts added any sort of lag in responding to such videos should be avoided as “any sort of lag in terms of response allows that conspiracy to grow.” Citron also pointed out that firstly all candidates should have a clear policy about deep fakes and should commit that they won’t use them or spread them. Should the algorithms to make deepfakes be open sourced? Doermann answered that the algorithms of deepfakes have to be absolutely open sourced. He says that though this might help adversaries, but they are anyway going to learn about it. He believes this is significant as, “We need to get this type of stuff out there. We need to get it into the hands of users. There are companies out there that are starting to make these types of things.” He also states that people should be able to use this technology. The more we educate them, more the tools they learn, more the correct choices people can make. On Mark Zuckerberg’s deepfake video On being asked to comment on the decision of Mark Zuckerberg to not take down his deepfake video from his own platform, Facebook, Citron replied that Mark gave a perfect example of “satire and parody”, by not taking down the video. She added that private companies can make these kinds of choices, as they have an incredible amount of power, without any liability, “it seemed to be a conversation about the choices they make and what does that mean for society. So it was incredibly productive, I think.” Watts also opined that he likes Facebook for its consistency in terms of enforcement and that they are always trying to learn better things and implement it. He adds that he really like Facebook as its always ready to hear “from legislatures about what falls inside those parameters. The one thing that I really like is that they're doing is identifying inauthentic account creation and inauthentic content generation, they are enforcing it, they have increased the scale,and it is very very good in terms of how they have scaled it up, it’s not perfect, but it is better.”   Read More: Zuckberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed? 
On the Nancy Pelosi doctored video Schiff asked the witnesses if there is any account on the number of millions of people who have watched the doctored video of Nancy Pelosi, and an account of how many of them ultimately got to know that it was not a real video. He said he’s asking this as according to psychologists, people never really forget their once constructed negative impression. Clarke replied that “Fact checks and clarifications tend not to travel nearly as far as the initial news.” He added that its becomes a very general thing as “If you care, you care about clarifications and fact checks. but if you're just enjoying media, you're enjoying media. You enjoy the experience of the media and the absolute minority doesn’t care whether it's true.” Schiff also recalled how in 2016, “some foreign actresses, particularly Russia had mimicked black lives matter to push out continent to racially divide people.” Such videos gave the impression of police violence, on people of colour. They “certainly push out videos that are enormously jarring and disruptive.” All the information revealed in the hearing was described as “scary and worrying”, by one of the representatives. The hearing was ended by Schiff, the chair of the committee, after thanking all the witnesses for their testimonies and recommendations. For more details, head over to the full Hearing on deepfake videos by the House Intelligence Committee. Worried about Deepfakes? Check out the new algorithm that manipulate talking-head videos by altering the transcripts Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes Machine generated videos like Deepfakes – Trick or Treat?
Shopify announces Fulfillment network, video and 3D model assets, custom storefront tools and more!

Vincy Davis
20 Jun 2019
6 min read
At the ongoing Shopify Unite 2019 conference, Shopify announced a number of new products, including the Shopify Fulfillment Network, video and 3D model assets, custom storefront tools, a new online store design experience, and more. Most of these products will launch later this year. Here are the major highlights:

Shopify Fulfillment Network

The Shopify Fulfillment Network will be a dispersed network of fulfillment centers that uses machine learning to automatically select the optimal inventory quantities per location, as well as the closest fulfillment option for each customer's shipment. Once available, it will be possible to simply install the app, select the products, get a quote, and begin selling. Entrepreneurs will be able to provide fast, low-cost delivery to their customers while maintaining ownership of customer data and a branded shipping experience.

A single back office: A merchant's order, inventory, and customer data will stay synced and up-to-date across all warehouse locations and channels.

Recommended warehouse locations: To save on shipping costs, it will be possible to find the best locations based on where the sales are coming from.

Low stock alerts: When inventory runs low, merchants will know when to replenish to continue meeting demand.

99.5% order accuracy: The correct package will be chosen and out the door on time.

Hands-on warehouse help: A dedicated account manager will help the merchant find the best path to reach their customers so costs can be kept low.

Shopify Fulfillment Network is currently available in the United States, and interested merchants can apply for early access.

https://twitter.com/treklightgear/status/1141406217815724034

Native support for video and 3D model assets

Shopify products will natively support video and 3D model assets, adding a new dimension to products and providing a richer purchase experience for customers. This feature is expected to be released later this year.

Manage media through a single location: It will be possible to upload, access, and store video and 3D models from the same place where images are managed today.

Deploy through the new Shopify video player: Users can use one of the 10 starter themes to easily display video and 3D models using the new Shopify player for video or the viewer for Shopify AR.

New editor apps: Shopify is inviting partners and developers to create additional apps and custom integrations that open up new ways to create and modify images, videos, and AR experiences.

https://twitter.com/Scobleizer/status/1141456478496161797
https://twitter.com/thewakdesigns/status/1141607172670926848

New online store design experience

The new online store design will give entrepreneurs more options for customization and more control over the layout and aesthetic of their store. This feature is expected to be released later this year.

Easier customization at the page and store level: Any page can now be customized using sections, just like the homepage. Users can also save time by setting content on multiple pages using master pages.

Portable content that moves with you: Users will no longer have to duplicate their theme or move content over manually. The shop's content follows the owner, so they can make changes such as downloading a new version of a theme or trying out a new one.

A new workspace to update your store: It will now be possible to edit and preview updates before publishing.
Any minor tweak or major change can be drafted in this new space before going live.

Custom storefront tools

Shopify's Storefront API allows customers to use the custom storefront tools to free their storefront from certain back-end dependencies, giving them the flexibility to sell anywhere and in any way they want. This is especially exciting for complex or niche businesses that use the web, mobile, gaming, and other interactive mediums as storefronts to reach their customers. Shopify merchants have already started to use the Storefront API; for example, NTWRK hosted a live-stream shopping show using it. A hedged example of querying the Storefront API follows at the end of this article.

Connect microservices to create personalized experiences: Third-party shipping services can be used for blogs, storefronts, and product pages, or for accurate shipment alerts.

Turn the world into your storefront: Entrepreneurs can engage with their customers through vending machines, live streams, smart mirrors, voice shopping, and more.

Speedy and scalable, so development teams can work in parallel: The flexible architecture will enable development teams to create the experiences and storefronts they envision.

Interested merchants can create custom experiences on the custom storefront tools website.

https://twitter.com/CarloTeran/status/1141475420904157185

Customer loyalty with retail shoppers

With the new Shopify Point of Sale cart app extensions, users can apply and edit loyalty and promotional details directly from the customer cart.

Apply discounts lightning-quick: The number of clicks needed to apply a discount has been reduced to one, saving users 10 seconds per sale on average.

Important information at your fingertips: Key customer details, from birthdays to reward milestones, will surface automatically and in context, so merchants and their staff won't have to navigate to separate apps to get alerts.

More flexibility: Shopify's app partners will give merchants the flexibility to pick a program that works best for their customer experience, like rewarding a long-time customer, online or in-store.

Interested merchants can learn more about the apps on the POS Loyalty and Promotion Apps website.

https://twitter.com/ShawnBouchard/status/1141483355252199425

Shopify Payments in multiple currencies and languages

From this year onwards, merchants can run their business in their preferred language. Shopify is already available in French, German, Japanese, Italian, Brazilian Portuguese, and Spanish, and will be available in 11 additional languages, including Dutch and Simplified Chinese. Shopify Payments will also enable selling in multiple currencies and will be globally available to all Shopify merchants later this year. Displayed prices will use simple rounding rules and automatically adjust based on current foreign-exchange rates. Soon shoppers will be able to convert between nine major currencies (GBP, AUD, CAD, EUR, HKD, JPY, NZD, SGD, and USD) and pay the way they prefer.

https://twitter.com/anthonycook/status/1141392464818900996

Users of Shopify are obviously quite elated with all the announcements. People also touted this as Shopify's way to combat Amazon, its main competitor in the market.

https://twitter.com/tomfgoodwin/status/1141434508257964032

Visit the Shopify blog for more details.
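As an illustration of the Storefront API mentioned above, here is a minimal TypeScript sketch of fetching a few products over its GraphQL endpoint. The shop domain and access token are placeholders, and the versioned endpoint path and header name are stated as assumptions based on Shopify's public Storefront API documentation at the time, not guarantees; check the current docs before relying on them.

```typescript
// Minimal sketch: query the first few products from a shop via the
// Storefront API's GraphQL endpoint. Domain and token are placeholders;
// the endpoint path and header name are assumptions to verify against
// Shopify's current documentation.
const SHOP_DOMAIN = "your-shop.myshopify.com";            // placeholder
const STOREFRONT_TOKEN = "your-storefront-access-token";  // placeholder

const query = `
  {
    products(first: 5) {
      edges {
        node {
          title
          handle
        }
      }
    }
  }
`;

async function fetchProducts(): Promise<void> {
  const response = await fetch(`https://${SHOP_DOMAIN}/api/2019-07/graphql.json`, {
    method: "POST",
    headers: {
      "Content-Type": "application/graphql",
      "X-Shopify-Storefront-Access-Token": STOREFRONT_TOKEN,
    },
    body: query,
  });
  const result = await response.json();
  // Each edge wraps one product node returned by the GraphQL query.
  for (const edge of result.data.products.edges) {
    console.log(edge.node.title, edge.node.handle);
  }
}

fetchProducts().catch((err) => console.error("Storefront API request failed", err));
```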
Read more:
Why Retailers need to prioritize eCommerce Automation in 2019
5 things to consider when developing an eCommerce website
Through the customer's eyes: 4 ways Artificial Intelligence is transforming ecommerce
Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers

Sugandha Lahoti
20 Jun 2019
11 min read
Yesterday at Google’s annual shareholder meeting, Alphabet, Google’s parent company, faced 13 independent stockholder proposals ranging from sexual harassment and diversity, to the company's policies regarding China and forced arbitration. There was also a proposal to limit Alphabet's power by breaking up the company. However, as expected, every stockholder proposal was voted down after a few minutes of ceremonial voting, despite protesting workers. The company’s co-founders, Larry Page, and Sergey Brin were both no-shows and Google CEO Sundar Pichai didn’t answer any questions. Google has seen a massive escalation in employee activism since 2018. The company faced backlash from its employees over a censored version of its search engine for China, forced arbitration policies, and its mishandling of sexual misconduct. Google Walkout for Real Change organizers Claire Stapleton, and Meredith Whittaker were also retaliated against by their supervisors for speaking out. According to widespread media reports the US Department of Justice is readying to investigate into Google. It has been reported that the probe would examine whether the tech giant broke antitrust law in the operation of its online and advertisement businesses. Source: Google’s annual meeting MOM Equal Shareholder Voting Shareholders requested that Alphabet’s Board initiate and adopt a recapitalization plan for all outstanding stock to have one vote per share. Currently, the company has a multi-class voting structure, where each share of Class A common stock has one vote and each share of Class B common stock has 10 votes. As a result, Page and Brin currently control over 51% of the company’s total voting power, while owning less than 13% of stock. This raises concerns that the interests of public shareholders may be subordinated to those of our co-founders. However, board of directors rejected the proposal stating that their capital and governance structure has provided significant stability to the company, and is therefore in the best interests of the stockholders. Commit to not use Inequitable Employment Practices This proposal urged Alphabet to commit not to use any of the Inequitable Employment Practices, to encourage focus on human capital management and improve accountability.  “Inequitable Employment Practices” are mandatory arbitration of employment-related claims, non-compete agreements with employees, agreements with other companies not to recruit one another’s employees, and involuntary non-disclosure agreements (“NDAs”) that employees are required to sign in connection with settlement of claims that any Alphabet employee engaged in unlawful discrimination or harassment. Again Google rejected this proposal stating that the updated code of conduct already covers these requirements and do not believe that implementing this proposal would provide any incremental value or benefit to its stockholders or employees. Establishment of a Societal Risk Oversight Committee Stockholders asked Alphabet to establish a Societal Risk Oversight Committee (“the Committee”) of the Board of Directors, composed of independent directors with relevant experience. The Committee should provide an ongoing review of corporate policies and procedures, above and beyond legal and regulatory matters, to assess the potential societal consequences of the Company’s products and services, and should offer guidance on strategic decisions. 
As with the other Board committees, a formal charter for the Committee and a summary of its functions should be made publicly available. This proposal was also rejected. Alphabet said, “The current structure of our Board of Directors and its committees (Audit, Leadership Development and Compensation, and Nominating and Corporate Governance) already covers these issues.” Report on Sexual Harassment Risk Management Shareholders requested management to review its policies related to sexual harassment to assess whether the company needs to adopt and implement additional policies and to report its findings, omitting proprietary information and prepared at a reasonable expense by December 31, 2019. However, this was rejected as well based on the conclusion that Google has robust Conduct Policies already in place. There is also ongoing improvement and reporting in this area, and Google has a concrete plan of action to do more. Majority Vote for the Election of Directors This proposal demands that director nominees should be elected by the affirmative vote of the majority of votes cast at an annual meeting of shareholders, with a plurality vote standard retained for contested director elections, i.e., when the number of director nominees exceeds the number of board seats. This proposal also demands that a director who receives less than such a majority vote should be removed from the board immediately or as soon as a replacement director can be qualified on an expedited basis. This proposal was rejected basis, “Our Board of Directors believes that current nominating and voting procedures for election to our Board of Directors, as opposed to a mandated majority voting standard, provide the board the flexibility to appropriately respond to stockholder interests without the risk of potential corporate governance complications arising from failed elections.” Report on Gender Pay Stockholders demand that Google should report on the company’s global median gender pay gap, including associated policy, reputational, competitive, and operational risks, and risks related to recruiting and retaining female talent. The report should be prepared at a reasonable cost, omitting proprietary information, litigation strategy, and legal compliance information. Google says it already releases an annual report on its pay equity analyses, “ensuring they pay fairly and equitably.” They don’t think an additional report as detailed in the proposal above would enhance Alphabet’s existing commitment to fostering a fair and inclusive culture. Strategic Alternatives The proposal outlines the need for an orderly process to retain advisors to study strategic alternatives. They believe Alphabet may be too large and complex to be managed effectively and want a voluntary strategic reduction in the size of the company than from asset sales compelled by regulators. They also want a committee of independent directors to evaluate those alternatives in the exercise of their fiduciary responsibilities to maximize shareholder value. This proposal was also rejected basis, “Our Board of Directors and management do not favor a given size of the company or focus on any strategy based on ideological grounds. 
Instead, we develop a strategy based on the company’s customers, partners, users and the communities we serve, and focus on strategies that maximize long-term sustainable stockholder value.” Nomination of an Employee Representative Director An Employee Representative Director should be nominated for election to the Board by shareholders at Alphabet’s 2020 annual meeting of shareholders. Stockholders say that employee representation on Alphabet’s Board would add knowledge and insight on issues critical to the success of the Company, beyond that currently present on the Board, and may result in more informed decision making. Alphabet quoted their “Consideration of Director Nominees” section of the proxy statement, stating that the Nominating and Corporate Governance Committee of Alphabet’s Board of Directors look for several critical qualities in screening and evaluating potential director candidates to serve our stockholders. They only allow the best and most qualified candidates to be elected to the Board of Directors. Accordingly, this proposal was also rejected. Simple Majority Vote The shareholders call for the simple majority vote to be eliminated and replaced by a requirement for a majority of the votes cast for and against applicable proposals, or a simple majority in compliance with applicable laws. Currently, the voice of regular shareholders is diminished because certain insider shares have 10-times as many votes per share as regular shares. Plus shareholders have no right to act by written consent. The rejection reason laid out by Google was that more stringent voting requirements in certain limited circumstances are appropriate and in the best interests of Google’s stockholders and the company. Integrating Sustainability Metrics Report into Performance measures Shareholders requested the Board Compensation Committee to prepare a report assessing the feasibility of integrating sustainability metrics into performance measures. These should apply to senior executives under the Company’s compensation plans or arrangements. They state that Alphabet remains predominantly white, male, and occupationally segregated. Among Alphabet’s top 290 managers in 2017, just over one-quarter were women and only 17 managers were underrepresented people of color. Even after proof of Alphabet being not inclusive, the proposal was rejected. Alphabet said that the company already supports corporate sustainability, including environmental, social and diversity considerations. Google Search in China Google employees, as well as Human rights organizations, have called on Google to end work on Dragonfly. Google employees have quit to avoid working on products that enable censorship; 1,400 current employees have signed a letter protesting Dragonfly. Employees said: “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.” Some employees have threatened to strike. Dragonfly may also be inconsistent with Google’s AI Principles. The proposal urged Google to publish a Human Rights Impact Assessment by no later than October 30, 2019, examining the actual and potential impacts of censored Google search in China. Google rejected this proposal stating that before Google launches any new search product, it would conduct proper human rights due diligence, confer with Global Network Initiative (GNI) partners and other key stakeholders. 
If Google ever considers re-engaging this work, it will do so transparently, engaging and consulting widely. Clawback Policy Alphabet currently does not disclose an incentive compensation clawback policy in its proxy statement. The proposal urges Alphabet’s Leadership Development and Compensation Committee to adopt a clawback policy. This committee will review the incentive compensation paid, granted or awarded to a senior executive in case there is misconduct resulting in a material violation of law or Alphabet’s policy that causes significant financial or reputational harm to Alphabet. Alphabet rejected this proposal based on the following claims: The company has announced significant revisions to its workplace culture policies. Google has ended forced arbitration for employment claims. Any violation of our Code of Conduct or other policies may result in disciplinary action, including termination of employment and forfeiture of unvested equity awards. Report on Content Governance Stockholders are concerned that Alphabet’s Google is failing to effectively address content governance concerns, posing risks to shareholder value. They request Alphabet to issue a report to shareholders assessing the risks posed by content management controversies related to election interference, freedom of expression, and the spread of hate speech. Google said that they have already done significant work in the areas of combatting violent or extremist content, misinformation, and election interference and have continued to keep the public informed about our efforts. Thus they do not believe that implementing this proposal would benefit our stockholders. Read the full report here. Protestors showed disappointment As the annual meeting was conducted, thousands of activists protested across Google offices. A UK-based human rights organization called SumOfUs teamed up with Students for a Free Tibet to propose a breakup of Google. They organized a protest outside of 12 Google offices around the world to coincide with the shareholder meeting, including in San Francisco, Stockholm, and Mumbai. https://twitter.com/SumOfUs/status/1141363780909174784 Alphabet Board Chairman, John Hennessy opened the meeting by reflecting on Google's mission to provide information to the world. "Of course this comes with a deep and growing responsibility to ensure the technology we create benefits society as a whole," he said. "We are committed to supporting our users, our employees and our shareholders by always acting responsibly, inclusively and fairly." In the Q&A session which followed the meeting, protestors demanded why Page wasn't in attendance. "It's a glaring omission," one of the protestors said. "I think that's disgraceful." Hennessy responded, "Unfortunately, Larry wasn't able to be here" but noted Page has been at every board meeting. The stockholders pushed back, saying the annual meeting is the only opportunity for investors to address him. Toward the end of the meeting, protestors including Google employees, low-paid contract workers, and local community members were shouting outside. “We shouldn’t have to be here protesting. 
We should have been included.” One sign from the protest, listing the first names of the members of Alphabet’s board, read, “Sundar, Larry, Sergey, Ruth, Kent — You don’t speak for us!” https://twitter.com/GoogleWalkout/status/1141454091455008769 https://twitter.com/LauraEWeiss16/status/1141400193390141440   This meeting also marked the official departure of former Google CEO Eric Schmidt and former Google Cloud chief Diane Greene from Alphabet's board. Earlier this year, they said they wouldn’t be seeking re-election. Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny. Google employees lay down actionable demands after staging a sit-in to protest retaliation
Facebook fails to block ECJ data security case from proceeding

Guest Contributor
19 Jun 2019
6 min read
This July, the European Court of Justice (ECJ) in Luxembourg will now hear a case to answer questions on whether the American government's surveillance, Privacy Shield and Standard Contract Clauses, during EU-US data transfers, provides adequate protection of EU citizen's personal information. The ECJ set the case hearing after the supreme court of Ireland — where Facebook's international headquarters is located — decided, on Friday, May 31st, 2019, to dismiss an appeal by Facebook to block the data security case from progressing to the ECJ. The Austrian Supreme Court has also recently rejected Facebook’s bid to stop a similar case. If Europe's Court of Justice makes a ruling against the current legal arrangements, this would majorly impact thousands of companies, which make millions of data transfers every day. Companies potentially affected, include human resources databases, storage of internet browsing histories and credit card companies. Background on this case The case started with the Austrian privacy lawyer and campaigner, Max Schrems. In 2013, Schrems made a complaint regarding concerns that US surveillance programs like the PRISM system were accessing the data of European Facebook users, as whistleblower Edward Snowden described. His concerns also dealt with Facebook’s use of a separate data transfer mechanism — Standard Contractual Clauses (SCCs). Around the time Snowden disclosed about the US government's mass surveillance programs, Schrems also challenged the legality of the prior EU-US data transfer arrangement, Safe Harbor, eventually bringing it down. After Schrems stated that the transfer of his data by Facebook to the US infringed upon his rights as an EU citizen, Ireland's High Court ruled, in 2017, that the US government partook in "mass indiscriminate processing of data" and deferred concerns to the European Court of Justice. Then, in October of last year, the High Court referred this case to the ECJ based on the Data Protection Commissioner's "well-founded" concerns about whether or not US law provided adequate protection for EU citizens' data privacy rights. Within all of this, there also exist people questioning the compatibility between US law which focuses on national security and EU law which aims for personal privacy. Whistleblowers like Edward Snowden played a role in what has lead up to this case, and whistleblower attorneys and paraprofessionals continue working to expose fraud against the government through means of the False Claims Acts (FCA). Why Facebook appealed the case Although Irish law doesn't require an appeal against CJEU referrals, Facebook chose to stay and appeal the decision anyway, aiming to keep it from progressing to court. The court denied them the stay but granted them leave to appeal last year. Keep in mind that Facebook was already under a lot of scrutiny after playing a part in the Cambridge Analytica data scandal, which showed that up to 87 million users faced having their data compromised by Cambridge Analytica. One of the reasons Facebook said it wanted to block this case from progressing was that the High Court failed to regard the 'Privacy Shield' decision. Under the Privacy Shield decision, the European Commission had approved the use of certain EU-US data transfer channels. Another main issue here was whether Facebook actually had the legal rights to appeal a referral to the ECJ. 
Privacy Shield is also in question by French digital rights groups who claim it disrupts fundamental EU rights and will be heard by the General Court of the EU in July. Why the appeal was dismissed The five-judge High Court, headed by the Chief Justice Frank Clarke, decided they cannot entertain an appeal over the referral decision itself. In addition, he said Facebook’s criticisms related to the “proper characterization” of underlying facts rather than the facts themselves. If there had been any actual finding of facts not sustainable on the evidence before the High Court per Irish procedural law, he would have overturned it, but no such matter had been established on this appeal, he ruled. "Joint Control" and its possible impact on the case In June 2018, after a Facebook fan page was found to have been allowing visitor data to be collected by Facebook via a cookie on the fan page, without informing visitors, The Federal Administrative Court of Germany referred the case to ECJ. This resulted in the ECJ deciding to deem joint responsibility between social media networks and administrators in the processing of visitor data. The ECJ´s ruling, in this case, has consequences not only for Facebook sites but for other situations where more than one company or administrator plays an active role in the data processing. The concept of “joint control” is now on the table, and further decisions of authorities and courts in this area are likely. What's next for data security Currently, Facebook also faces questioning by Ireland's Data Protection Commission over numerous potential infringements of strict European privacy laws that the new General Data Protection Regulation (GDPR) outlines. Facebook, however, already stated it will take the necessary steps to ensure the site operators can comply with the GDPR. There have even been pleas for Global Data Laws. A common misconception exists that only big organizations, governments and businesses are at risk for data security breaches, but this is simply not true. Data security is important for everyone — now more than ever. Your computer, tablet and mobile devices could be affected by attackers for their sensitive information, such as credit card details, banking details and passwords, by way of phishing attacks, malware attacks, ransomware attacks, man-in-the-middle attacks and more. Therefore, bringing continual awareness to these US and global data security issues will enable stricter laws to be put in place. Kayla Matthews writes about big data, cybersecurity and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com. Facebook releases Pythia, a deep learning framework for vision and language multimodal research Zuckberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed? US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector

Richard Gall
18 Jun 2019
6 min read
Facebook today announced that it's going to launch its own cryptocurrency, Libra, in what looks like an unprecedented move into the financial sector. At first glance Libra, due to go live in 2020, might seem like a chance for Facebook to just test the waters in a cryptocurrency market that's yet to see widespread adoption, but with the parallel launch of Calibra, a payment platform that sits on top of the cryptocurrency, it appears that Facebook plans to develop an entirely new ecosystem for digital transactions that is far bigger than anything that has come before. While media attention right now is focused on Libra, it's important to understand that what we're seeing from Facebook is an attempt to tackle a number of huge challenges. On the one hand, both Libra and Calibra could help to unify a fairly fragmented digital payment market, but it also opens up a whole new market of people who, lacking a bank account, might not even currently be part of the digital economy. The Libra Association as Facebook's foundation for the future of the digital economy To get to this point Facebook has had to collaborate with some of the biggest organizations in the world to construct a foundational infrastructure for its plans. The Libra Association is a nonprofit organization is made up of companies including PayPal, loan platform Kiva, Uber, and Lyft as well as many others - all of which have essentially brought into Facebook's vision. By grouping together, the Libra Association is perhaps the most serious attempt by a set of established companies and non-profits to make a cryptocurrency a success. The fact that Facebook has taken steps to form an institution around Libra underlines how important the project is to the company. Its also a tacit admission from Facebook that to build a truly successful ecosystem it must work with other organizations. It can't simply go it alone. Is Libra just like any other cryptocurrency? Libra isn't like other cryptocurrencies such as Bitcoin or Ethereum. It should, in theory at least, avoid the notorious volatility of cryptocurrencies by being "tied to a mix of global assets", as the Guardian explains. Libra isn't, then, strictly a decentralized currency. As the Reuters report highlights, "the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run the computers." It's easy to gloss over this (Reuters has), but as the Financial Times' Alphaville publication points out, it's actually open to question to what extent Libra is actually built on Blockchain. Writing on Alphaville, journalist Jemima Kelly identifies sections of the Libra white paperthat are confusing and contradictory:  "...Unlike previous blockchains, which view the blockchain as a collection of blocks of transactions, the Libra Blockchain is a single data structure that records the history of transactions and states over time." In essence then, maybe not quite a Blockchain... There's probably some degree of smoke and mirrors at play. But given the challenging recent history of the cryptocurrency market, arguably this is the right move by Facebook and the Libra Association to keep some distance between Libra and 'standard' cryptocurrencies, while flirting with the hype that surrounds blockchain. Read next: Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey What is Calibra? Calibra is Facebook's payment platform that sits on top of Libra. 
Read next: Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey

What is Calibra?

Calibra is Facebook's payment platform that sits on top of Libra. The plan is for Calibra to become a cryptocurrency wallet that integrates with Facebook products. Initially it will be accessible as a button in Messenger and WhatsApp, but it could eventually help power transactions on Instagram, something that has long held promise but has never been implemented in a way amenable to users.

Calibra, then, will likely become the primary way that users experience Libra, especially on services like WhatsApp and Messenger. As Kevin Weil, Calibra's VP of Product, said in an interview with The Verge, "there's a lot of overlap between the things you want out of a wallet for currency and the things you want out of a messaging app." As well as being accessible through Facebook products, Calibra will also be available as a standalone app that users can download.

Can Calibra make digital transactions accessible to those without bank accounts?

One of the potential benefits of Calibra is that it can be used by anyone with a smartphone. Most payment services, like PayPal, require a bank account; Calibra will make Libra accessible to millions of people who typically only have access to cash (according to The Verge's report, almost 1.7 billion people don't have a bank account). This brings a great swathe of the global population into the digital economy. From Facebook's perspective, enabling that becomes very valuable strategically. Indeed, it could eventually elevate the organization to Amazon or Google levels of economic power. That's a long way off, but Calibra at least provides the company with a foundation.

Are Libra and Calibra really such a good idea for Facebook?

Facebook has faced significant scrutiny over the last few years, which means Libra and Calibra will both be entering the world at a tumultuous time for the company. Perhaps that's not so surprising: given all the challenges Facebook faces, this is a huge strategic initiative that could help it reposition itself as a core part of the digital economy's infrastructure, rather than simply a tool for misinformation and hate speech. Think of it this way: although both Google and Amazon are facing a number of issues ranging from privacy to working conditions, both companies largely appear to be withstanding difficult times.

Whether Libra and Calibra actually work out for Facebook, and whether users actually use them, is another matter completely. There is undoubtedly demand for a more seamless payment service that goes one step further than the likes of PayPal while guaranteeing both privacy and minimal friction. But for Facebook to see itself as the company to solve that enormous challenge takes considerable confidence and a big dash of hubris.

Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests

Savia Lobo
14 Jun 2019
4 min read
Telegram's founder Pavel Durov has shared his suspicion that the recent massive DDoS attack on his messaging service was carried out by the Chinese government. He also noted that the attack coincided with the ongoing Hong Kong protests, where protesters have been using Telegram to coordinate and avoid detection.

https://twitter.com/durov/status/1138942773430804480

On June 12, a tweet from Telegram Messenger informed users that the messaging service was "experiencing a powerful DDoS attack". It added that the attack was flooding its servers with "garbage requests", disrupting legitimate communications.

Telegram allows people to send encrypted messages, documents, videos and pictures free of charge. Users can create groups of up to 200,000 people or channels for broadcasting to unlimited audiences. Its growing popularity is largely down to its emphasis on encryption, which prevents many widely used methods of reading confidential communications.

Hong Kong protests: a movement opposing the 'extradition law'

On Sunday, around 1 million people demonstrated in the semi-autonomous Chinese city-state against amendments to an extradition law that would allow a person arrested in Hong Kong to face trial elsewhere, including in mainland China. "Critics fear the law could be used to cement Beijing's authority over the semi-autonomous city-state, where citizens tend to have a higher level of civil liberties than in mainland China", The Verge reports.

According to The New York Times, "Hong Kong, a semi-autonomous Chinese territory, enjoys greater freedoms than mainland China under a 'one country, two systems' framework put in place when the former British colony was returned to China in 1997. Hong Kong residents can freely surf the Internet and participate in public protests, unlike in the mainland."

To avoid surveillance and potential future prosecutions, protesters disabled location tracking on their phones, bought train tickets with cash and refrained from discussing plans on social media. Many masked their faces to avoid facial recognition, and avoided using public transit cards that could be linked to their identities, opting instead for paper tickets.

According to France24, "Many of those on the streets are predominantly young and have grown up in a digital world, but they are all too aware of the dangers of surveillance and leaving online footprints." Ben, a masked office worker at the protests, said he feared the extradition law would have a devastating impact on freedoms. "Even if we're not doing anything drastic -- as simple as saying something online about China -- because of such surveillance they might catch us," the 25-year-old told France24.

The South China Morning Post first reported on the role the messaging app played in the protests when a Telegram group administrator was arrested for conspiracy to commit public nuisance. The allegation against the administrator, who managed a conversation involving 30,000 members, is that he plotted with others to charge the Legislative Council Complex and block neighbouring roads, SCMP reports.

Bloomberg reported that protesters "relied on encrypted services to avoid detection. Telegram and Firechat -- a peer-to-peer messaging service that works with or without internet access -- are among the top trending apps in Hong Kong's Apple store".
"Hong Kong's Legislative Council suspended a review of the bill for a second day on Thursday amid the continued threat of protests. The city's leader, Chief Executive Carrie Lam, is seeking to pass the legislation by the end of the current legislative session in July", Bloomberg reports.

Telegram has since noted that the DDoS attack appears to have stabilized, and assured users that their data is safe.

https://twitter.com/telegram/status/1138781915560009735

https://twitter.com/telegram/status/1138777137102675969

Telegram explained the DDoS attack in an interesting way (a minimal code sketch of the same idea follows the article links below):

A DDoS is a "Distributed Denial of Service attack": your servers get GADZILLIONS of garbage requests which stop them from processing legitimate requests. Imagine that an army of lemmings just jumped the queue at McDonald's in front of you -- and each is ordering a whopper. The server is busy telling the whopper lemmings they came to the wrong place -- but there are so many of them that the server can't even see you to try and take your order.

NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems

Over 19 years of ANU (Australian National University) students' and staff data breached

All Docker versions are now vulnerable to a symlink race attack
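Returning to Telegram's lemmings analogy: below is a rough, purely illustrative Python sketch (not a model of Telegram's actual infrastructure) of why a flood of garbage requests starves legitimate ones. All numbers are invented; the point is the ratio.

```python
# Illustrative only: a toy queue model of the DDoS effect Telegram describes.
# A server with a fixed processing budget per tick is flooded with garbage
# requests, so the few legitimate requests rarely reach the front of the queue.

import random
from collections import deque

CAPACITY_PER_TICK = 100            # requests the server can process per tick
QUEUE_LIMIT = 1_000                # pending requests the server will hold
ATTACK_REQUESTS_PER_TICK = 50_000  # "whopper lemmings"
LEGIT_REQUESTS_PER_TICK = 200      # real users

queue = deque()
served_legit = 0

for tick in range(10):
    # Garbage and legitimate requests arrive interleaved; whatever doesn't fit
    # into the bounded queue is simply dropped.
    arrivals = ["garbage"] * ATTACK_REQUESTS_PER_TICK + ["legit"] * LEGIT_REQUESTS_PER_TICK
    random.shuffle(arrivals)
    for request in arrivals:
        if len(queue) < QUEUE_LIMIT:
            queue.append(request)
    # The server spends its whole budget working through mostly-garbage requests.
    for _ in range(min(CAPACITY_PER_TICK, len(queue))):
        if queue.popleft() == "legit":
            served_legit += 1

print(f"Legitimate requests served over 10 ticks: {served_legit}")
# With only ~0.4% of arrivals being legitimate, almost none get served:
# the garbage requests crowd out real users, which is the "denial of service".
```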

Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan to the UK Parliamentary Committee

Vincy Davis
14 Jun 2019
10 min read
Last week, the Digital, Culture, Media and Sport Sub-Committee held its hearing on disinformation with Aaron Greenspan as the witness. Aaron is the founder, president, and CEO of Think Computer Corporation, an IT consulting service, and the author of the book 'Authoritas: One Student's Harvard Admissions and the Founding of the Facebook Era'. Aaron, who claims to have had the original idea for Facebook, has been a long-standing critic of the social network.

In January this year, Aaron published a 75-page report in which he stated that fake accounts made up more than half of Facebook's 2.2 billion users. He testified to the same effect in front of the UK Parliamentary Committee, and minced no words when he said Mark Zuckerberg is a liar, a fraudster and unfit to be the CEO of Facebook.

Fake accounts

Facebook differentiates between duplicate accounts and fake accounts. But Aaron considers "any account which is not in your name and you have a second account for whatever purpose, that account is ultimately a fake account". He noticed the issue of fake accounts on Facebook last year, after which he did his own enquiry and found some "alarming" numbers. At the end of 2017, Facebook claimed that "only 1% of their accounts are fake". By their own definition, two weeks ago they said that the number of "fake accounts have increased to 5%." This means that in the span of two years, their estimate has increased fivefold.

The second issue Aaron highlights is that it is "extremely unclear" how they arrived at that estimate. Two years ago, under public pressure, Facebook launched a 'Transparency portal', which aims to publish "regular reports to give our community visibility into how we enforce policies, respond to data requests and protect intellectual property, while monitoring dynamics that limit access to Facebook products."

Aaron states that he found it "very difficult to reconcile the SEC filings of Facebook with the numbers from the transparency portal": the number of fake accounts published on the transparency portal does not match the SEC filings. This is why he decided to write his own report and find out whether the numbers from the "two sources of Facebook aligned", and he has come to the conclusion that they "don't". Instead, Aaron has arrived at a conditional conclusion that "around 30% of accounts on Facebook are fake". He claims that Facebook always minimizes its numbers to make the problem look smaller than it is, adding: "Based on my total use of the platform, the historical trend starting in 2006, when it was made public, up until the present, it seemed like it was safe that 50% are fake and I think this could actually be higher."

Some weeks ago, Facebook "finally updated their transparency portal and announced that in the fourth quarter of 2018, they had disabled 1.2 billion fake accounts. And in the first quarter of 2019, the number came to 2.2 billion fake accounts, which is an exponential growth curve in fake accounts, according to their own numbers, which have not been audited by any respective body". If all the numbers are added up, "by a conservative guess, it can be said that there are 10 billion fake accounts, for a platform which has 2.2 billion active users".
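To give a sense of what these competing percentages imply in absolute terms, here is a back-of-envelope calculation using only the figures quoted above (roughly 2.2 billion active accounts, and fake-account estimates of 1%, 5%, 30% and 50%). It is an illustration of scale, not an independent estimate.

```python
# Back-of-envelope arithmetic using the figures quoted in the article only.

ACTIVE_ACCOUNTS = 2_200_000_000  # approximate active accounts cited in the piece

estimates = {
    "Facebook, end of 2017": 0.01,
    "Facebook, 2019": 0.05,
    "Greenspan, conditional estimate": 0.30,
    "Greenspan, upper estimate": 0.50,
}

for source, share in estimates.items():
    fake = int(ACTIVE_ACCOUNTS * share)
    print(f"{source:32s} -> ~{fake:,} fake accounts")

# The gap between the estimates is the whole dispute: 5% of 2.2 billion is
# about 110 million accounts, while 50% would be about 1.1 billion.
```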
While "Facebook says that fake accounts don't matter very much and within their undisclosed methodology, they are doing a good job", Aaron believes that transparency around fake accounts is the number one problem for Facebook to address.

FB is more 'unwilling' than 'unable' to tackle fake accounts

When asked whether he thinks that "Facebook wants to tackle these problems of fake accounts and others", Aaron answered "No". He believes very strongly that Mark has no clear intention of complying with the law, as he is not appearing in front of parliament in the UK, in Canada or anywhere serious questions will be aimed at him. This is because "he has no genuine answers in many cases. So I have no faith in Facebook, I don't think it can be trusted and don't think it should be trusted and I would suggest independent analysis."

The illusion of behavioural advertising effectiveness

Aaron claims that "behavioural advertising produces 4% benefit of revenue to advertisers". So while ad exchanges like "Google or Facebook might charge 59% more on average for targeted ads, or up to 4,900% more" in some cases, they keep winning customers because those customers believe they benefit. He also described Facebook as a "black box", and claimed advertisers were "in the dark" about how effective their campaigns actually are on Facebook and whether they are actually reaching real users.

Aaron adds: "I have come to the conclusion that Facebook is not in anybody's control. The company has lost its capability to control its own platform. And I don't think they can truly regain that ability." He likened the social network to the Chernobyl disaster, the catastrophic nuclear accident of 1986. In that case, too, there was a technology that was hyped and "expected to be transformative and make some other problems go away". Aaron says that in 2004, Mark had described the Facebook system as "something that would involve the problem of reaching critical mass, a nuclear power reference". Aaron believes that by designing Facebook the way it is now, "Mark has effectively removed the control from the reactor core", and the result is an "enormous uninhabitable zone on the internet, which is polluted with disinformation and falsity, much like radiation", and nearly impossible to reverse.

History of Facebook

Aaron, who has always claimed that he is the original creator of Facebook, says that "my need to create 'Universal Facebook' was based on Harvard's structure as an organization, while Mark wanted to build something cool." He also states that "certainly neither of us had thought of this to be a global encompassing system". Aaron says that if he had known that Mark was planning "something like a huge global system", he would have made it clear to Mark that "this could end up being a privacy nightmare". He adds that "in the early days, Facebook lost control of the platform and it will never get it back", and that "unfortunately, the media has been a significant member in propelling Facebook since the last 10-12 years."

Facebook is not growing

Aaron believes that Mark has been lying to investors and the global community about Facebook's growth. He says that, as an outside observer, every indicator suggests the usage of Facebook is falling drastically as users worry about their privacy, and yet the company continues to publicise that it is growing.
In reality, Facebook is growing in countries like India, the Philippines, Vietnam, and Indonesia, the same countries about which Facebook claims in its own disclosures that "we have more fake accounts coming from these countries than anyone else." In other words, Facebook is growing precisely where it has a known fake-account problem, which makes the issue worse than in the rest of the world. Aaron alleges that this misrepresentation of facts to shareholders amounts to "fraud".

On respecting privacy and personal data

Aaron states that Facebook used to lie that "openness is a universal good", while its new lie is that "encryption is the same as privacy", which is not true, "and that privacy is the universal good." Aaron believes Mark is now arguing the complete opposite of his earlier beliefs: the previous model stopped working for him, so he made a new one, and this will keep happening with every next phase of Facebook. Aaron also adds that "encryption comes with a lot of pitfalls."

On the question of whether Zuckerberg respects personal data, Aaron claimed that Mark does not believe in the concept of personal data and has committed securities fraud on a number of occasions, in an incredibly blatant manner. He states that "the SEC has done nothing about it because they are afraid of targeting a billionaire". He also pointed out that Mark is not the only executive who lies to stockholders, claiming that other tech leaders get away with it too: "Elon Musk does it."

When asked whether there were any warning signs of the Cambridge Analytica scandal prior to 2016, Aaron says that "this wasn't so much a breach as it was a designed behaviour, and that design was made so on Mark's orders". Aaron also recalled that in 2007, when he was working with Ed Baker, now a colleague of Mark Zuckerberg, Baker planned to break the law with a technique allegedly designed to steal customer data. He added: "When I worked with Ed, he suggested for the software that we were building that we should ask users for access to their address book. And regardless of whether the answer was yes or no, we should take that data anyway, and use that to send emails to other potential users". Aaron claims that "at that point, I quit". Ed eventually joined Facebook and is now part of its growth team.

Antitrust law debate

If antitrust action were taken against Facebook, it would result in WhatsApp and Instagram being separated from the mother platform. However, Mark would still be responsible for the data of billions of Facebook users, which Aaron sees as a huge problem. This is why he believes that "antitrust action against Facebook is not going to be effective in the long run", and that other, more substantial measures need to be taken to make Facebook behave responsibly.

How do we solve a problem like Facebook? Aaron has some ideas

Aaron believes that regulating an entity as large and complex as Facebook requires technical knowledge, and that "by and large, US regulators lack the technical knowledge to effectively enforce the laws that are already on the books." Aaron proposes a number of ways to regulate Facebook, of which the most important step, he believes, is "to remove Mark from the C.E.O. position." He considers Mark incapable of being a responsible CEO for Facebook.
Aaron's recommendation comes in the same week that around 68% of Facebook's independent investors backed a proposal for the company to have an independent chairman. Despite the revolt, the proposal was not passed: Mark owns up to 75% of the Class B stock, giving him almost 60% of the voting power at Facebook, so he and his colleagues voted the independent chairman proposal down with ease.

Next, Aaron suggested regulating Facebook along the lines of a government-regulated bank, and proposes KYC (know your customer) requirements for all social media. He adds: "I think anonymous speech should not be banned, as it plays an important role, but if there's a problem in any case, then the anonymous person should be held to account."

Since "transparency around fake accounts is the number one problem faced by Facebook", he argues that a 'Social Media Tax' should be levied on all its users. He believes it would provide some revenue to fund investigative journalism and, because payment is involved, would require some kind of authentication, which could play a major role in identifying fake accounts. He says this would make "the entire process manageable for governments."

Check out the full hearing on the Parliament TV website.

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Facebook argues it didn't violate users' privacy rights and thinks there's no expectation of privacy because there is no privacy on social media

Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech