
How-To Tutorials

7019 Articles

12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]

Richard Gall
05 Jun 2019
7 min read
Visual Studio Code might have appeared as a bit of a surprise when it was first launched by Microsoft - why reach out to JavaScript developers? When did Node.js developers become so irresistible? However, once you take a look inside you can begin to see why Visual Studio Code represents such an enticing proposition for Node.js and other JavaScript developers. Put simply, the range of extensions available is unmatched by any other text editor.

Extensions are almost like apps when you're using Visual Studio Code. In fact, there's pretty much an app store where you can find extensions for a huge range of tasks. These extensions are designed with productivity in mind, but they're not just time-saving tools. Arguably, Visual Studio Code extensions are what allow Visual Studio Code to walk the line between text editor and IDE. They give you more functionality than you might get from an ordinary text editor, while the editor remains lightweight enough not to carry the baggage an IDE likely will.

With a growing community around Visual Studio Code, the number of extensions is only going to grow. And if there's something missing, you can always develop one yourself. But before we get ahead of ourselves, let's take a brief look at some of the best Visual Studio Code extensions for Node.js developers - as well as a few others…

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

The best Node.js Visual Studio Code extensions

Node.js Modules Intellisense
If you're a Node.js developer, the Node.js Modules Intellisense extension is vital. Basically, it autocompletes JavaScript (or TypeScript) import statements.

npm Intellisense
npm is such a great part of working with Node.js. It's such a simple thing, but it has brought a big shift in the way we approach application development, giving you immediate access to the modules you need to run your application. With Visual Studio Code, it's even easier. In fact, it's pretty obvious what npm Intellisense does - it autocompletes npm modules into your code when you try to import them.

Search Node_Modules
Sometimes, you might want to edit a file within the node_modules folder. To do this, you'll probably have to do some manual work to find the one you want. Fortunately, with the Search Node_Modules extension, you can quickly navigate the files inside your node_modules folder.

Node Exec
Node Exec is another simple but very neat extension. It lets you quickly execute code or your current file using Node. Once you've installed it, all you need to do is hit F8 (or run the Execute node.js command).

Node Readme
Documentation is essential. If 2018 has taught us anything, it's the value of transparency, so we could all do with an easier way to ensure that documentation is built into our workflows and boosts rather than drains our productivity. This is why Node Readme is such a nice extension - it simply allows you to quickly open the documentation for a particular package.

View Node Package
Like Node Readme, the View Node Package extension helps you quickly get a better understanding of a particular package while remaining inside VS Code. You can take a look inside the project's repository without having to leave Visual Studio Code.

Read next: 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

Other useful Visual Studio Code extensions
While the extensions above are uniquely useful if you're working with Node, there are other extensions that could prove invaluable.
From cleaner code and debugging to simple deployment, as a Microsoft rival once almost said, there's an extension for that…

ESLint
ESLint is perhaps one of the most popular extensions in the Visual Studio Code marketplace. It helps developers fix troublesome or inconsistent code; by bringing ESLint into your development workflow you can immediately tackle one of the trickier aspects of JavaScript - how easily errors creep in. ESLint will typically lint an individual file as you type. However, it's possible to run it across an entire workspace. All you need to do is set eslint.provideLintTask to true.

JavaScript ES6 Code Snippets
This is a must-have extension for any JavaScript developer working in Visual Studio Code. While VS Code does have built-in snippets, this extension adds ES6 snippets to make you that little bit more productive. The extension has import and export snippets, class helpers, and methods.

Debugger for Chrome
Debugging code in the browser is the method of choice, particularly if you're working on the front end. But that means you'll have to leave your code editor - fine, but necessary, right? You won't need to do that anymore thanks to the Debugger for Chrome extension. It does exactly what it says on the proverbial tin: you can debug in Chrome without leaving VS Code.

Live Server
Live Server is a really nice extension that has seen a huge amount of uptake from the community. At the time of writing, it has received a remarkable 2.2 million downloads. The idea is simple: you can launch a local development server that responds in real time to the changes you make in your editor. What makes this particularly interesting, not least for Node developers, is that it works for server-side code as well as static files.

Settings Sync
Settings Sync is a nice extension for developers who find themselves working on different machines. Basically, it allows you to run the same configuration of Visual Studio Code across different instances. Of course, this is also helpful if you've just got your hands on a new laptop and are dreading setting everything up all over again…

Live Share
Want to partner with a colleague on a project or work together to solve a problem? Ordinarily, you might have had to share screens or work together via a shared repository, but thanks to Live Share, you can simply load up someone else's project in your editor.

Azure Functions
Most of the extensions we've seen will largely help you write better code and become a more productive developer. But the Azure Functions extension is another step up - it lets you build, deploy, and debug a serverless app inside Visual Studio Code. It's currently only in preview, but if you're new to serverless, it does offer a nice way of seeing how it's done in practice!

Read next: 5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]

Start exploring Visual Studio Code
This list is far from exhaustive. The number of extensions available in the Visual Studio Code marketplace is astonishing - you're guaranteed to find something useful. The best way to get started is simply to download Visual Studio Code and try it out for yourself. Let us know how it compares to other text editors - what do you like? And what would you change? You can download Visual Studio Code here. Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.
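To make a couple of the extensions above a little more concrete, here is a small, hedged sketch. The file name, class, and workflow are purely illustrative (not taken from the article), and running a TypeScript file with Node Exec assumes something like a ts-node setup; the import line is the kind of statement npm Intellisense and Node.js Modules Intellisense autocomplete, and the class/export boilerplate mirrors what the ES6 snippets extension scaffolds.

```typescript
// greeter.ts - illustrative file name, not from the article
import { hostname } from 'os'; // the kind of import npm/Node.js Modules Intellisense complete as you type

// Roughly the import/export/class boilerplate the ES6 snippets extension scaffolds
export default class Greeter {
  constructor(private readonly name: string) {}

  greet(): string {
    return `Hello ${this.name}, running on ${hostname()}`;
  }
}

// With Node Exec installed, hitting F8 runs the current file
// (running a .ts file this way assumes a ts-node style setup).
console.log(new Greeter('Node developer').greet());
```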


Roger McNamee on Silicon Valley’s obsession for building “data voodoo dolls”

Savia Lobo
05 Jun 2019
5 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data.

This section of the hearing, which took place on May 28, includes Roger McNamee's take on why Silicon Valley wants to build data voodoo dolls for users. Roger McNamee is the author of Zucked: Waking up to the Facebook Catastrophe. His remarks in this section build on previous hearing presentations by Professor Zuboff, Professor Park Ben Scott, and the earlier talk by Jim Balsillie.

He started off by saying, "Beginning in 2004, I noticed a transformation in the culture of Silicon Valley and over the course of a decade customer focused models were replaced by the relentless pursuit of global scale, monopoly, and massive wealth."

McNamee says that Google wants to make the world more efficient; it wants to eliminate the user stress that results from too many choices. Google knew that society would not permit a business model based on denying consumer choice and free will, so it covered its tracks. Beginning around 2012, Facebook adopted a similar strategy, later followed by Amazon, Microsoft, and others.

For Google and Facebook, the business is behavioral prediction, which they use to build a high-resolution data avatar of every consumer - a voodoo doll, if you will. They gather a tiny amount of data from user posts and queries, but the vast majority of their data comes from surveillance: web tracking, scanning emails and documents, data from apps and third parties, and ambient surveillance from products like Alexa, Google Assistant, Sidewalk Labs, and Pokemon Go.

Google and Facebook use these data voodoo dolls to provide their customers, who are marketers, with perfect information about every consumer. They use the same data to manipulate consumer choices, just as in China, where behavioral manipulation is the goal. The algorithms of Google and Facebook are tuned to keep users on site and active, preferably by pressing emotional buttons that reveal each user's true self. For most users, this means content that provokes fear or outrage. Hate speech, disinformation, and conspiracy theories are catnip for these algorithms.

The design of these platforms treats all content precisely the same, whether it be hard news from a reliable site, a warning about an emergency, or a conspiracy theory. The platforms make no judgments; users choose, aided by algorithms that reinforce past behavior. The result is 2.5 billion Truman Shows on Facebook, each a unique world with its own facts. In the U.S., nearly 40% of the population identifies with at least one thing that is demonstrably false; this undermines democracy.

"The people at Google and Facebook are not evil; they are the products of an American business culture with few rules, where misbehavior seldom results in punishment," he says. Unlike industrial businesses, internet platforms are highly adaptable, and this is the challenge. If you take away one opportunity, they will move on to the next one, and they are moving upmarket, getting rid of the middlemen.
Today, they apply behavioral prediction to advertising, but they have already set their sights on transportation and financial services. This is not an argument against undermining their advertising business, but rather a warning that it may be a Pyrrhic victory.

If their goals are to protect democracy and personal liberty, McNamee tells the lawmakers, they have to be bold. They have to force a radical transformation of the business model of internet platforms. That would mean, at a minimum, banning web tracking, scanning of email and documents, third-party commerce and data, and ambient surveillance. A second option would be to tax micro-targeted advertising to make it economically unattractive. But you also need to create space for alternative business models built on lasting trust. Startups can happen anywhere; they can come from each of your countries.

At the end of the day, though, the most effective path to reform would be to shut down the platforms, at least temporarily, as Sri Lanka did. Any country can go first. The platforms have left you no choice; the time has come to call their bluff. Companies with responsible business models will emerge overnight to fill the void.

McNamee explains, "when they (organizations) gather all of this data the purpose of it is to create a high resolution avatar of each and every human being. It doesn't matter whether they use their systems or not, they collect it on absolutely everybody. In the Caribbean, voodoo was essentially this notion that you create a doll, an avatar, such that you can poke it with a pin and the person would experience that pain, right, and so it becomes literally a representation of the human being."

To know more, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
Over 19 years of ANU (Australian National University) students' and staff data breached


Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them

Savia Lobo
05 Jun 2019
5 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, includes Jim Balsillie's take on data governance.

Jim Balsillie, Chair of the Centre for International Governance Innovation and retired Chairman and co-CEO of BlackBerry, starts off by saying that data governance is the most important public policy issue of our time. It is cross-cutting, with economic, social, and security dimensions, and it requires both national policy frameworks and international coordination. He applauded the seriousness and integrity of Messrs. Zimmer, Angus, and Erskine-Smith, who have spearheaded a Canadian bipartisan effort to deal with data governance over the past three years.

"My perspective is that of a capitalist and global tech entrepreneur for 30 years and counting. I'm the retired Chairman and co-CEO of Research in Motion, a Canadian technology company [that] we scaled from an idea to 20 billion in sales. While most are familiar with the iconic BlackBerry smartphones, ours was actually a platform business that connected tens of millions of users to thousands of consumer and enterprise applications via some 600 cellular carriers in over 150 countries. We understood how to leverage Metcalfe's law of network effects to create a category-defining company, so I'm deeply familiar with multi-sided platform business model strategies as well as navigating the interface between business and public policy," he adds.

He further shared several observations about the nature, scale, and breadth of the collective challenges before the committee.

Disinformation and fake news are just two of the negative outcomes of unregulated, attention-based business models. They cannot be addressed in isolation; they have to be tackled horizontally as part of an integrated whole. To agonize over social media's role in the proliferation of online hate, conspiracy theories, politically motivated misinformation, and harassment is to miss the root and scale of the problem.

Social media's toxicity is not a bug, it's a feature. Technology works exactly as designed. Technology products, services, and networks are not built in a vacuum. Usage patterns drive product development decisions. Behavioral scientists involved with today's platforms helped design user experiences that capitalize on negative reactions because they produce far more engagement than positive reactions.

Among the many valuable insights provided by whistleblowers inside the tech industry is this quote: "the dynamics of the attention economy are structurally set up to undermine the human will." Democracy and markets work when people can make choices aligned with their interests. The online advertisement-driven business model subverts choice and represents a fundamental threat to markets, election integrity, and democracy itself.

Technology gets its power through the control of data. Data at the micro-personal level gives technology unprecedented power to influence.
"Data is not the new oil, it's the new plutonium: amazingly powerful, dangerous when it spreads, difficult to clean up, and with serious consequences when improperly used." Data deployed through next-generation 5G networks is transforming passive infrastructure into veritable digital nervous systems.

Our current domestic and global institutions, rules, and regulatory frameworks are not designed to deal with any of these emerging challenges. Because cyberspace knows no natural borders, the effects of digital transformation cannot be hermetically sealed within national boundaries; international coordination is critical.

With these observations, Balsillie further provided six recommendations:
1. Eliminate tax deductibility of specific categories of online ads.
2. Ban personalized online advertising for elections.
3. Implement strict data governance regulations for political parties.
4. Provide effective whistleblower protections.
5. Add explicit personal liability alongside corporate responsibility to affect the decision-making of CEOs and boards of directors.
6. Create a new institution for like-minded nations to address digital cooperation and stability.

Technology is becoming the new Fourth Estate
Technology is disrupting governance and, if left unchecked, could render liberal democracy obsolete. By displacing the print and broadcast media and influencing public opinion, technology is becoming the new Fourth Estate. In our system of checks and balances, this makes technology co-equal with the executive, the legislative, and the judiciary. When this new Fourth Estate declines to appear before this committee, as Silicon Valley executives are currently doing, it is symbolically asserting this aspirational co-equal status. But it is asserting that status and claiming its privileges without the traditions, disciplines, legitimacy, or transparency that checked the power of the traditional Fourth Estate.

The work of this international grand committee is a vital first step towards redress of this untenable situation. Referring to what Professor Zuboff said the night before, he added that Canadians are currently in a historic battle for the future of their democracy with a charade called Sidewalk Toronto. He concludes by saying, "I'm here to tell you that we will win that battle."

To know more, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy, and Ethics" on ParlVU.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
UK lawmakers to social media: "You're accessories to radicalization, accessories to crimes", hearing on spread of extremist content
Key Takeaways from Sundar Pichai's Congress hearing over user data, political bias, and Project Dragonfly


Experts present most pressing issues facing global lawmakers on citizens’ privacy, democracy and rights to freedom of speech

Sugandha Lahoti
31 May 2019
17 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy, and Ethics hosted a hearing on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29 as a series of discussions with experts and tech execs over the three days. The committee invited expert witnesses to testify before representatives from 12 countries (Canada, United Kingdom, Singapore, Ireland, Germany, Chile, Estonia, Mexico, Morocco, Ecuador, St. Lucia, and Costa Rica) on how governments can protect democracy and citizen rights in the age of big data.

The committee opened with a round table discussion where expert witnesses spoke about what they believe to be the most pressing issues facing lawmakers when it comes to protecting the rights of citizens in the digital age. The expert witnesses that took part were:

Professor Heidi Tworek, University of British Columbia
Jason Kint, CEO of Digital Content Next
Taylor Owen, McGill University
Ben Scott, The Center for Internet and Society, Stanford Law School
Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe
Shoshana Zuboff, Author of The Age of Surveillance Capitalism
Maria Ressa, Chief Executive Officer and Executive Editor of Rappler Inc.
Jim Balsillie, Chair, Centre for International Governance Innovation

The session was led by Bob Zimmer, M.P. and Chair of the Standing Committee on Access to Information, Privacy and Ethics. Other members included Nathaniel Erskine-Smith and Charlie Angus, M.P. and Vice-Chair of the Standing Committee on Access to Information, Privacy and Ethics. Also present was Damian Collins, M.P. and Chair of the UK Digital, Culture, Media and Sport Committee.

Testimonies from the witnesses

"Personal data matters more than context" - Jason Kint, CEO of Digital Content Next
The presentation started with Mr. Jason Kint, CEO of Digital Content Next (DCN), a US-based trade association, who thanked the committee and appreciated the opportunity to speak on behalf of 80 high-quality digital publishers globally. He began by describing how DCN has prioritized shining a light on issues that erode trust in the digital marketplace, including a troubling data ecosystem that has developed with very few legitimate constraints on the collection and use of data about consumers. As a result, personal data is now valued more highly than context, consumer expectations, copyright, and even facts themselves.

He believes it is vital that policymakers begin to connect the dots between the three topics of the committee's inquiry: data privacy, platform dominance, and societal impact. He says that today personal data is frequently collected by unknown third parties without consumer knowledge or control. This data is then used to target consumers across the web as cheaply as possible. This dynamic creates incentives for bad actors, particularly on unmanaged platforms like social media, which rely on user-generated content, mostly with no liability. Here the site owners are paid on the click, whether it comes from an actual person or a bot, and whether it lands on trusted information or on disinformation.

He says that he is optimistic about regulations like the GDPR in the EU, which contain narrow purpose limitations to ensure companies do not use data for secondary uses. He recommends exploring whether large tech platforms that are able to collect data across millions of devices, websites, and apps should even be allowed to use this data for secondary purposes.
He also applauds the decision of the German cartel office to limit Facebook's ability to collect and use data across its apps and the web. He further says that issues such as bot fraud, malware, ad blockers, clickbait, privacy violations, and now disinformation are just symptoms. The root cause is unbridled data collection at the most personal level.

Four years ago, DCN did the original financial analysis labeling Google and Facebook the duopoly of digital advertising. In a 150+ billion dollar digital ad market across North America and the EU, 85 to 90 percent of the incremental growth is going to just these two companies. DCN dug deeper and connected the revenue concentration to the ability of these two companies to collect data in a way that no one else can. This means both companies know much of your browsing history and your location history. The emergence of this duopoly has created a misalignment between those who create the content and those who profit from it.

The scandal involving Facebook and Cambridge Analytica underscores the current dysfunctional dynamic. With the power Facebook has over our information ecosystem, our lives, and our democratic systems, it is vital to know whether we can trust the company. He also points out that although there's been a well-documented and exhausting trail of apologies, there's been little or no change in the leadership or governance of Facebook. In fact, the company has repeatedly refused to have its CEO offer evidence to international governments pressing for it. He believes there should be a deeper probe, as there's still much to learn about what happened and how much Facebook knew about the Cambridge Analytica scandal before it became public. Facebook should be required to have an independent audit of its user account practices and its decisions to preserve or purge real and fake accounts over the past decade.

He ends his testimony saying that it is critical to shed light on these issues to understand what steps must be taken to improve data protection. This includes providing consumers with greater transparency and choice over their personal data when companies use practices that go outside the normal expectations of consumers. Policymakers globally must hold digital platforms accountable for helping to build a healthy marketplace, for restoring consumer trust, and for restoring competition.

"We need a World Trade Organization 2.0" - Jim Balsillie, Chair, Centre for International Governance Innovation; retired Chairman and co-CEO of BlackBerry
Jim begins by saying that data governance is the most important public policy issue of our time. It is cross-cutting, with economic, social, and security dimensions, and it requires both national policy frameworks and international coordination. A specific recommendation he brought forward in this hearing was to create a new institution for like-minded nations to address digital cooperation and stability.

"The data-driven economy's effects cannot be contained within national borders," he said. "We need new or reformed rules of the road for digitally mediated global commerce, a World Trade Organization 2.0." He gives the example of the Financial Stability Board, which was created in the aftermath of the 2008 financial crisis to foster global financial cooperation and stability. He recommends forming a similar global institution, for example a digital stability board, to deal with the challenges posed by digital transformation.
The nine countries on this committee plus the five other countries attending, totaling 14, could constitute the founding members of this board, which would undoubtedly grow over time.

"Check business models of Silicon Valley giants" - Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe
Roger begins by saying that it is imperative that this committee, and nations around the world, engage in a new thought process about ways of controlling companies in Silicon Valley, especially by looking at their business models. By nature, these companies invade privacy and undermine democracy. He assures that there is no way to stop that without ending the business practices as they exist. He then commends Sri Lanka, which chose to shut down the platforms in response to a terrorist act. He believes that is the only way governments are going to gain enough leverage to have reasonable conversations. He explains more on this in his formal presentation, which took place the following day.

"Stop outsourcing policies to the private sector" - Taylor Owen, McGill University
He begins by making five observations about the policy space we are in right now.

First, self-regulation, and even many of the forms of co-regulation being discussed, have proven and will continue to prove insufficient for this problem. The financial incentives are simply too powerfully aligned against meaningful reform. These are publicly traded, largely unregulated companies whose shareholders and directors expect growth by maximizing a revenue model that is itself part of the problem. This growth may or may not be aligned with the public interest.

Second, disinformation, hate speech, election interference, privacy breaches, mental health issues, and anti-competitive behavior must be treated as symptoms of the problem, not its cause. Public policy should therefore focus on the design of the platforms themselves and the incentives embedded in that design. If democratic governments determine that this structure and design are leading to negative social and economic outcomes, then it is their responsibility to govern.

Third, governments that are taking this problem seriously are converging on a markedly similar platform governance agenda. This agenda recognizes that there are no silver bullets for this broad set of problems and that instead, policies must be domestically implemented and internationally coordinated across three categories:
Content policies, which seek to address a wide range of both supply and demand issues about the nature, amplification, and legality of content in our digital public sphere.
Data policies, which ensure that public data is used for the public good and that citizens have far greater rights over the use, mobility, and monetization of their data.
Competition policies, which promote free and competitive markets in the digital economy.

Fourth, the propensity when discussing this agenda to overcomplicate solutions serves the interests of the status quo. He then recommends sensible policies that could and should be implemented immediately:
The online ad micro-targeting market could be made radically more transparent and in many cases suspended entirely.
Data privacy regimes could be updated to provide far greater rights to individuals and greater oversight and regulatory power to punish abuses.
Tax policy could be modernized to better reflect the consumption of digital goods and to crack down on tax base erosion and profit shifting.
Modernized competition policy could be used to restrict and roll back acquisitions and to separate platform ownership from application and product development.
Civic media could be supported as a public good.
Large-scale and long-term civic literacy and critical thinking efforts could be funded at scale by national governments, not by private organizations.

He then raises difficult policy questions for which there are neither easy solutions, meaningful consensus, nor appropriate existing international institutions.

How do we regulate harmful speech in the digital public sphere? He says that at the moment we have largely outsourced the application of national laws, as well as the interpretation of difficult trade-offs between free speech and personal and public harms, to the platforms themselves - companies that, rightly from their perspective, seek solutions that can be implemented at scale globally. In this case, he argues, what is possible technically and financially for the companies might be insufficient for public policy goals and the public good.

Who is liable for content online? He says that we have clearly moved beyond the notion of platform neutrality and absolute safe harbor, but what legal mechanisms are best suited to holding platforms, their design, and those that run them accountable? Also, how are we going to bring opaque artificial intelligence systems into our laws, norms, and regulations?

He concludes by saying that these difficult conversations should not be outsourced to the private sector. They need to be led by democratically accountable governments and their citizens.

"Make commitments to public service journalism" - Ben Scott, The Center for Internet and Society, Stanford Law School
Ben states that technology doesn't cause the problems of data misuse and misinformation; it accelerates them. This calls for policies that limit the exploitation of these technology tools by malignant actors and by companies that place profits over the public interest. He says, "we have to view our technology problem through the lens of the social problems that we're experiencing." This is why the problem of political fragmentation, hate speech, tribalism, and digital media looks different in each country. It looks different because it feeds on the social unrest, the cultural conflict, and the illiberalism that is native to each society.

He says we need to look at problems holistically and understand that social media companies are part of a system; they don't stand alone as the supervillains. The entire media market has bent itself to the performance metrics of Google and Facebook. Television, radio, and print have tortured their content production and distribution strategies to get likes and shares and to appear higher in the Google News search results. And so, he says, we need a comprehensive public policy agenda and must put red lines around illegal content. To limit data collection and exploitation, we need to modernize competition policy to reduce the power of monopolies. He also says that we need to publicly educate people on how to help themselves and how to stop being exploited. We need to make commitments to public service journalism to provide alternatives for people - alternatives to the mindless stream of clickbait to which we have become accustomed.
"Pay attention to the physical infrastructure" - Professor Heidi Tworek, University of British Columbia
Taking inspiration from Germany's vibrant interwar media democracy as it descended into an authoritarian Nazi regime, Heidi lists five brief lessons that she thinks can guide policy discussions in the future and enable governments to build robust solutions that make democracies stronger.

Disinformation is also an international relations problem. Information warfare has been a feature, not a bug, of the international system for at least a century. So the question is not if information warfare exists, but why and when states engage in it. This often happens when a state feels encircled, weak, or aspires to become a greater power than it already is. If many of the causes of disinformation are geopolitical, we need to remember that many of the solutions will be geopolitical and diplomatic as well, she adds.

Pay attention to the physical infrastructure. Information warfare and disinformation are also enabled by physical infrastructure, whether the submarine cables of a century ago or the fiber-optic cables of today. 95 to 99 percent of international data flows through undersea fiber-optic cables, and Google partly owns 8.5 percent of those cables, so content providers also own physical infrastructure. Russia and China, she says, are surveying European and North American cables, and China is investing in 5G while combining that with investments in international news networks.

Business models matter more than individual pieces of content. Individual harmful content pieces go viral because of the few companies that control the bottleneck of information. Only 29% of Americans or Brits understand that their Facebook newsfeed is algorithmically organized. The most aware are the Finns, and even there only 39% understand that. That invisibility gives social media platforms an enormous amount of power that is not neutral. At a very minimum, she says, we need far more transparency about how algorithms work and whether they are discriminatory.

Carefully design robust regulatory institutions. She urges governments and the committee to democracy-proof whatever solutions they come up with, and to make sure civil society is embedded in whatever institutions are created. She suggests the idea of forming social media councils that could meet regularly to deal with many such problems. The exact format and geographical scope are still up for debate, but it's an idea supported by many, including the UN Special Rapporteur on freedom of expression and opinion, she adds.

Address the societal divisions exploited by social media. Heidi says that the seeds of authoritarianism need fertile soil to grow, and if we do not attend to the underlying economic and social discontents, better communications cannot obscure those problems forever.

"Misinformation is the effect of one shared cause: surveillance capitalism" - Shoshana Zuboff, Author of The Age of Surveillance Capitalism
Shoshana agrees with the committee that the themes of platform accountability, data security and privacy, fake news, and misinformation are all effects of one shared cause. She identifies this underlying cause as surveillance capitalism and defines surveillance capitalism as a comprehensive, systematic economic logic that is unprecedented. She clarifies that surveillance capitalism is not technology. It is also not a corporation or a group of corporations.
It is, in fact, a virus that has infected every economic sector, from insurance, retail, publishing, and finance all the way through to product and service manufacturing and administration. According to her, surveillance capitalism also cannot be reduced to a person or a group of persons. In fact, surveillance capitalism follows the history of market capitalism in the following way: it takes something that exists outside the marketplace and brings it into the market dynamic for production and sale. It claims private human experience for the market dynamic. Private human experience is repurposed as free raw material, which is rendered as behavioral data. Some of these behavioral data are certainly fed back into product and service improvement, but the rest are declared behavioral surplus, identified for their rich predictive value. These behavioral surplus flows are then channeled into the new means of production, what we call machine intelligence or artificial intelligence. From these come prediction products.

Surveillance capitalists own and control not one text but two. The first is the public-facing text, which is derived from the data we have provided to these entities. What comes out of it - the prediction products - is the proprietary text, a shadow text from which these companies have amassed high market capitalization and revenue in a very short period of time. These prediction products are then sold into a new kind of marketplace that trades exclusively in human futures. The first name for this marketplace was online targeted advertising, and the human predictions sold in those markets were called click-through rates. By now, these markets are no longer confined to that kind of marketplace; this new logic of surveillance capitalism is being applied to anything and everything. She promises to discuss more of this in further sessions.

"If you have no facts then you have no truth. If you have no truth you have no trust" - Maria Ressa, Chief Executive Officer and Executive Editor of Rappler Inc.
Maria believes that in the end it comes down to the battle for truth, and journalists are on the front line of this along with activists. Information is power, and if you can make people believe lies, then you can control them. Information can be used for commercial benefit as well as a means to gain geopolitical power. She says, if you have no facts then you have no truth; if you have no truth you have no trust. She then previews her formal presentation the next day, saying that she will show exactly how quickly a nation, a democracy, can crumble because of information operations. She says she will provide data showing that it is systematic and that it is an erosion of truth and trust. She thanks the committee, noting that what is so interesting about these types of discussions is that the countries most affected are the democracies that are most vulnerable.

Bob Zimmer concluded the meeting saying that the agenda today was to get the conversation going, and that more on how to make our data world a better place will be continued in further sessions. He said, "as we prepare for the next two days of testimony, it was important for us to have this discussion with those who have been studying these issues for years and have seen firsthand the effect digital platforms can have on our everyday lives.
The knowledge we have gained tonight will no doubt help guide our committee as we seek solutions and answers to the questions we have on behalf of those we represent. My biggest concerns are for our citizens' privacy, our democracy, and that our rights to freedom of speech are maintained according to our Constitution."

Although we have covered most of the important conversations, you can watch the full hearing here.

Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
A brief list of draft bills in US legislation for protecting consumer data privacy


TypeScript 3.5 releases with ‘omit’ helper, improved speed, excess property checks and more

Vincy Davis
30 May 2019
5 min read
Yesterday, Daniel Rosenwasser, Program Manager at TypeScript, announced the release of TypeScript 3.5. This release has great new additions in the compiler and language, editor tooling, and some breaking changes as well. Key features include speed improvements, the 'Omit' helper type, improved excess property checks, and more. The previous version, TypeScript 3.4, was released two months ago.

Compiler and Language

Speed improvements
The TypeScript team has been focusing heavily on optimizing certain code paths and stripping down certain functionality since the last release. This has resulted in TypeScript 3.5 being faster than TypeScript 3.3 for many incremental checks. Compile times have also fallen compared to 3.4, and users can expect code completion and other editor operations to feel much 'snappier'. This release also includes several optimizations around how the compiler looks up files and caches where they were found. In TypeScript 3.5, the amount of time spent rebuilding can be reduced by as much as 68% compared to TypeScript 3.4.

The 'Omit' helper type
Users often create object types that omit certain properties. In TypeScript 3.5, a new 'Omit' helper type has been added to the default library (lib.d.ts) so it can be used everywhere. The compiler itself uses this 'Omit' type to express types created through object rest and destructuring declarations on generics.

Improved excess property checks in union types
TypeScript performs excess property checking on object literals. In earlier versions, certain excess properties were allowed in an object literal even if they didn't match any member of the target union. In this new version, the type-checker verifies that all the provided properties belong to some union member and have the appropriate type.

The --allowUmdGlobalAccess flag
In TypeScript 3.5, you can now reference UMD global declarations like export as namespace foo from anywhere, even from modules, by using the new --allowUmdGlobalAccess flag.

Smarter union type checking
When checking against union types, TypeScript usually compares each constituent type in isolation; assigning a source to a target typically involves checking whether the type of the source is assignable to the target. In TypeScript 3.5, when assigning to a union whose members have discriminant properties, the language goes further and decomposes the source type into a union of every possible inhabitant type. This was not possible in previous versions.

Higher order type inference from generic constructors
TypeScript 3.4's inference allowed a returned function (newFn in the announcement's example) to be generic. In TypeScript 3.5, this behavior is generalized to work on constructor functions as well. This means that functions that operate on class components in certain UI libraries like React can operate more correctly on generic class components.

New Editing Tools

Smart Select
This provides an API for editors to expand text selections farther outward in a syntactical manner. The feature is cross-platform and available to any editor that can appropriately query TypeScript's language server.

Extract to type alias
TypeScript 3.5 now supports a useful new refactoring to extract types to local type aliases. However, for users who prefer interfaces over type aliases, an issue still exists for extracting object types to interfaces as well.
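Before moving on to the breaking changes, here is a quick sketch of the new Omit helper and the stricter excess property checks described above; the type names are illustrative rather than taken from the announcement.

```typescript
interface Person {
  name: string;
  age: number;
  location: string;
}

// Omit now ships with the standard library: drop the listed keys from a type.
type BasicPerson = Omit<Person, "location">; // { name: string; age: number }

const p: BasicPerson = { name: "Ada", age: 36 };

// Stricter excess property checks on unions: a literal must match some member.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

// const bad: Shape = { kind: "circle", radius: 1, side: 2 }; // error in 3.5: 'side' is excess
const ok: Shape = { kind: "circle", radius: 1 };              // fine

console.log(p, ok);
```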
Breaking changes

Generic type parameters are implicitly constrained to unknown
In TypeScript 3.5, generic type parameters without an explicit constraint are now implicitly constrained to unknown, whereas previously the implicit constraint of type parameters was the empty object type {}.

{ [k: string]: unknown } is no longer a wildcard assignment target
TypeScript 3.5 has removed the specialized assignability rule that permitted assignment to { [k: string]: unknown }. This change was made because of the change from {} to unknown when generic inference has no candidates. Depending on the intended behavior of { [s: string]: unknown }, several alternatives are available: { [s: string]: any }, { [s: string]: {} }, object, unknown, or any.

Improved excess property checks in union types
To work with the stricter checks, you can add a type assertion onto the object (e.g. { myProp: SomeType } as ExpectedType), or add an index signature to the expected type to signal that unspecified properties are expected (e.g. interface ExpectedType { myProp: SomeType; [prop: string]: unknown }).

Fixes to unsound writes to indexed access types
TypeScript allows you to represent the operation of accessing a property of an object via the name of that property. In TypeScript 3.5, previously unsound writes of this kind will correctly issue an error. Most instances of this error represent potential bugs in the relevant code.

Object.keys rejects primitives in ES5
In ECMAScript 5 environments, Object.keys throws an exception if passed any non-object argument. In TypeScript 3.5, if target (or equivalently lib) is ES5, calls to Object.keys must pass a valid object. This change interacts with the change in generic inference from {} to unknown.

The aim of this version of TypeScript is to make the coding experience faster and happier. In the announcement, Daniel also shared the 3.6 iteration plan document and the feature roadmap page to give users an idea of what's coming in the next version of TypeScript. Users are quite content with the new additions and breaking changes in TypeScript 3.5.
https://twitter.com/DavidPapp/status/1130939572563697665
https://twitter.com/sebastienlorber/status/1133639683332804608
A user on Reddit comments, "Those are some seriously impressive improvements. I know it's minor, but having Omit built in is just awesome. I'm tired of defining it myself in every project." To read more details of TypeScript 3.5, head over to the official announcement.

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed
All Docker versions are now vulnerable to a symlink race attack
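A minimal sketch of the first breaking change above (the function name is illustrative): an unconstrained type parameter is now treated as unknown, so values must be narrowed before their members are used.

```typescript
function describe<T>(value: T): string {
  // Before 3.5, T was implicitly constrained to {}; now it is unknown,
  // so calling methods without narrowing is an error:
  // return value.toString(); // error: Property 'toString' does not exist on type 'T'
  if (typeof value === "string") {
    return value.toUpperCase(); // fine after narrowing
  }
  return String(value);
}

console.log(describe("typescript 3.5")); // TYPESCRIPT 3.5
console.log(describe(42));               // 42
```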


Angular 8.0 releases with major updates to framework, Angular Material, and the CLI

Sugandha Lahoti
29 May 2019
4 min read
Angular 8.0 was released yesterday as a major version of the popular framework for building web, mobile, and desktop applications. The release spans the framework, Angular Material, and the CLI. Angular 8.0 improves application startup time on modern browsers, provides new APIs for tapping into the CLI, and aligns Angular with the ecosystem and more web standards.

The team behind Angular has released a new Deprecation Guide. Public APIs will now support features for N+2 releases. This means that a feature deprecated in 8.1 will keep working in the following two major releases (9 and 10). The team will continue to maintain semantic versioning and a high degree of stability even across major versions.

Angular 8.0 comes with Differential Loading by Default
Differential loading is a process by which the browser chooses between modern or legacy JavaScript based on its own capabilities. The CLI looks at the target JS level in a user's tsconfig.json to determine whether or not to take advantage of differential loading. When target is set to es2015, the CLI generates and labels two bundles. At runtime, the browser uses attributes on the script tag to load the right bundle:
<script type="module" src="…"> for modern JS
<script nomodule src="…"> for legacy JS

Angular's Route Configurations now use Dynamic Imports
Previously, lazily loading parts of an application using the router was accomplished with the loadChildren key in the route configuration. That syntax was custom to Angular and built into its toolchain. With version 8, it has migrated to industry-standard dynamic imports:
{path: `/admin`, loadChildren: () => import(`./admin/admin.module`).then(m => m.AdminModule)}
This will improve support from editors like VS Code and WebStorm, which will now be able to understand and validate these imports.

Angular 8.0 CLI updates

Workspace APIs in the CLI
Previously, developers using Schematics had to manually open and modify their angular.json to make changes to the workspace configuration. Angular 8.0 has a new Workspace API to make it easier to read and modify this file. The workspaces API provides an abstraction of the underlying storage format of the workspace and supports both reading and writing. Currently, the only supported format is the JSON-based format used by the Angular CLI.

New Builder APIs to run build and deployment processes
Angular 8.0 has new Builder APIs in the CLI that allow developers to tap into ng build, ng test, and ng run to perform processes like build and deployment. There is also an update to AngularFire, which adds a deploy command, making build and deployment to Firebase easier than ever:
ng add @angular/fire
ng run my-app:deploy
Once installed, this deployment command will both build and deploy an application in the way recommended by AngularFire.

Support for Web Worker
Web workers speed up an application for CPU-intensive processing by allowing developers to offload work to a background thread, such as image or video manipulation. With Angular 8.0, developers can now generate new web workers from the CLI. To add a worker to a project, run:
ng generate webWorker my-worker
Once added, the web worker can be used normally in an application, and the CLI will be able to bundle and code-split it correctly:
const worker = new Worker(`./my-worker.worker`, { type: `module` });

AngularJS Improvements

Unified Angular location service
In AngularJS, the $location service handles all routing configuration and navigation, encoding and decoding of URLs, redirects, and interactions with browser APIs. Angular uses its own underlying Location service for all of these tasks. Angular 8.0 now provides a LocationUpgradeModule that enables a unified location service, shifting responsibilities from the AngularJS $location service to the Angular Location service. This should improve life for applications using ngUpgrade that need routing in both the AngularJS and Angular parts of the application.

Improvements to lazy loading AngularJS
As of Angular version 8, lazy loading code can be accomplished simply by using the dynamic import syntax import('...'). The team behind Angular has documented best practices around lazy loading parts of your AngularJS application from Angular, making it easier to migrate the most commonly used features first and to only load AngularJS for a subset of your application.

These are a select few of the updates; more information is available on the Angular blog.

5 useful Visual Studio Code extensions for Angular developers
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability
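To round out the web worker example above, here is a minimal sketch of the message round-trip between the generated worker and the page. It assumes the my-worker name from the CLI command shown earlier and follows the general pattern the CLI scaffolds, rather than reproducing the exact generated file.

```typescript
// my-worker.worker.ts - roughly what `ng generate webWorker my-worker` scaffolds
addEventListener('message', ({ data }) => {
  // Do the CPU-intensive work off the main thread, then reply.
  postMessage(`worker processed: ${data}`);
});
```

```typescript
// Somewhere in a component or service (main thread)
if (typeof Worker !== 'undefined') {
  const worker = new Worker('./my-worker.worker', { type: 'module' });
  worker.onmessage = ({ data }) => console.log(data); // "worker processed: hello"
  worker.postMessage('hello');
} else {
  // Web workers are not supported in this environment; fall back to the main thread.
}
```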

Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed

Savia Lobo
28 May 2019
4 min read
On May 17, a team of WebAssembly enthusiasts introduced inNative, an AOT (Ahead-Of-Time) compiler for WebAssembly using LLVM, with a customizable level of sandboxing for Windows and Linux. It helps run WebAssembly outside the sandbox at 95% native speed. The team also announced an initial release of the inNative Runtime v0.1.0 for Windows and Linux today.
https://twitter.com/inNative_sdk/status/1133098611514830850

Users can grab a precompiled SDK from GitHub or build inNative from source. If users turn off all the isolation, the LLVM optimizer can almost reach native speeds and nearly recreate the same optimized assembly that a fully optimized C++ compiler would give, while leveraging all the features of the host CPU.

The announcement includes benchmarks adapted from a set of C++ benchmarks (source: inNative); the averages are given in microseconds, with the C++ versions compiled using GCC -O3 -march=native on WSL. "We usually see 75% native speed with sandboxing and 95% without. The C++ benchmark is actually run twice - we use the second run, after the cache has had time to warm up. Turning on fastmath for both inNative and GCC makes both go faster, but the relative speed stays the same," the official website reads. "The only reason we haven't already gotten to 99% native speed is because WebAssembly's 32-bit integer indexes break LLVM's vectorization due to pointer aliasing," the WebAssembly researcher mentions. Once fixed-width SIMD instructions are added, native WebAssembly will close the gap entirely, as this vectorization analysis will have happened before the WebAssembly compilation step.

Some features of inNative

inNative has the same advantage that JIT compilers have: it can always take full advantage of the native processor architecture. It can also perform expensive brute-force optimizations like a traditional AOT compiler by caching its compilation result. By compiling on the target machine once, you get the best of both Just-In-Time and Ahead-Of-Time.

It also allows WebAssembly modules to interface directly with the operating system. inNative uses its own unofficial extension to allow it to pass WebAssembly pointers into C functions, as this kind of C interop is definitely not supported by the standard yet; however, there is a proposal for it.

inNative also lets users write C libraries that expose themselves as WebAssembly modules, which would make it possible to build an interop library in C++. Once WebIDL bindings are standardized, it will be a lot easier to compile WebAssembly that binds to C APIs. This opens up a world of tightly integrated WebAssembly plugins for any language that supports calling standard C interfaces, integrated directly into the program.

inNative lays the groundwork needed for this, and it doesn't need to be platform-independent, only architecture-independent. "We could break the stranglehold of i386 on the software industry and free developers to experiment with novel CPU architectures without having to worry about whether our favorite language compiles to it. A WebAssembly application built against POSIX could run on any CPU architecture that implements a POSIX compatible kernel!", the official blog announced.

A user on Hacker News commented, "The differentiator for InNative seems to be the ability to bypass the sandbox altogether as well as additional native interop with the OS. Looks promising!" Another user on Reddit said, "This is really exciting! I've been wondering why we ship x86 and ARM assembly for years now, when we could more efficiently ship an LLVM-esque assembly that compiles on first run for the native arch. This could be the solution!"

To know more about inNative in detail, head over to its official blog post.

React Native VS Xamarin: Which is the better cross-platform mobile development framework?
Tor Browser 8.5, the first stable version for Android, is now available on Google Play Store!
Introducing SwiftWasm, a tool for compiling Swift to WebAssembly

Privacy Experts discuss GDPR, its impact, and its future on Beth Kindig’s Tech Lightning Rounds Podcast

Savia Lobo
28 May 2019
9 min read
User’s data was being compromised even before the huge Cambridge Analytica scandal was brought to light. On May 25th, 2018, when the GDPR first came into existence in the European Union for data protection and privacy, it brought in much power to individuals over their personal data and to simplify the regulatory environment for international businesses. GDPR recently completed one year and since its inception, these have highly helped in better data privacy regulation. These privacy regulations divided companies into data processors and data controllers. Any company who has customers in the EU must comply regardless of where the company is located. In episode 6 of Tech Lightning Rounds, Beth Kindig of Intertrust speaks to experts from three companies who have implemented GDPR. Robin Andruss, the Director of Privacy at Twilio, a leader in global communications that is uniquely positioned to handle data from text messaging sent inside its applications. Tomas Sander of Intertrust, the company that invented digital rights management and has been advocating for privacy for nearly 30 years. Katryna Dow, CEO of Meeco, a startup that introduces the concept of data control for digital life. Robin Andruss’ on Twilio’s stance on privacy Twilio provides messaging, voice, and video inside mobile and web applications for nearly 40,000 companies including Uber, Lyft, Yelp, Airbnb, Salesforce and many more. “Twilio is one of the leaders in the communications platform as a service space, where we power APIs to help telecommunication services like SMS and texting, for example. A good example is when you order a Lyft or an Uber and you’ll text with a Uber driver and you’ll notice that’s not really their phone number. So that’s an example of one of our services”, Andruss explains. Twilio includes “binding corporate rules”, the global framework around privacy. He says, for anyone who’s been in the privacy space for a long time, they know that it’s actually very challenging to reach this standard. Organizations need to work with a law firm or consultancy to make sure they are meeting a bar of privacy and actually have their privacy regulations and obligations agreed to and approved by their lead DPA, Data Protection Authority in the EU, which in Twilio’s case is the Irish DPC. “We treat everyone who uses Twilio services across the board the same, our corporate rules. One rule, we don’t have a different one for the US or the EU. So I’d say that they are getting GDPR level of privacy standards when you use Twilio”, Andruss said. Talking about the California Consumer Privacy Act (CCPA), Andruss said that it’s mostly more or less targeted towards advertising companies and companies that might sell data about individuals and make money off of it, like Intelius or Spokeo or those sort of services. Beth asked Andruss on “how concerned the rest of us should be about data and what companies can do internally to improve privacy measures” to which he said, “just think about, really, what you’re putting out there, and why, and this third party you’re giving your information to when you are giving it away”. Twilio’s “no-shenanigans” and “Wear your customers’ shoes” approach to privacy Twilio’s “No-shenigans” approach to privacy encourages employees to do the right thing for their end-users and customers. Andruss explained this with an example, “You might be in a meeting, and you can say, “Is that the right thing? Do we really wanna do that? 
Is that the right thing to do for our customers or is that shenanigany does it not feel right?”. The “Wear your customers’ shoes.” approach is, when Twilio builds a product or thinks about something, they think about how to do the right thing for their customers. This builds trust within the customers that the organization really cares about privacy and wants to do the right thing while customers use Twilio’s tools and services. Tomas Sander on privacy pre-GDPR and post-GDPR Tomas Sander started off by explaining the basics of GDPR, what it does, and how it can help users, and so on. He also cleared a common doubt that most people have about the reach of EU’s GDPR. He said, “One of the main things that the GDPR has done is that it has an extraterritorial reach. So GDPR not only applies to European companies, but to companies worldwide if they provide goods and services to European citizens”. GDPR has “made privacy a much more important issue for many organizations” due to which GDPR has huge fines for non-compliance and that has contributed for it to be taken seriously by companies globally. Because of data breaches, “security has become a boardroom issue for many companies. Now, privacy has also become a boardroom issue”, Sander adds. He said that GDPR has been extremely effective in setting the privacy debate worldwide. Although it’s a regulation in Europe, it’s been extremely effective through its global impact on organizations and on thinking of policymakers, what they wanna do about privacy in their countries. However, talking about positive impact, Sander said that data behemoths such as Google and Facebook are still collecting data from many, many different sources, aggregating it about users, and creating detailed profiles for the purpose of selling advertising, usually, so for profit. This is why the jury is still out! “And this practice of taking all this different data, from location data to smart home data, to their social media data and so on and using them for sophisticated user profiling, that practice hasn’t recognizably changed yet”, he added. Sander said he “recently heard data protection commissioners speak at a privacy conference in Washington, and they believe that we’re going to see some of these investigations conclude this summer. And hopefully then there’ll be some enforcement, and some of the commissioners certainly believe that there will be fines”. Sander’s suggestion for users who are not much into tech is,  “I think people should be deeply concerned about privacy.” He said they can access your web browsing activities, your searches, location data, the data shared on social media, facial recognition from images, and also these days IoT and smart home data that give people intimate insights into what’s happening in your home. With this data, the company can keep a tab on what you do and perhaps create a user profile. “A next step they could take is that they don’t only observe what you do and predict what the next step is you’re going to do, but they may also try to manipulate and influence what you do. And they would usually do that for profit motives, and that is certainly a major concern. So people may not even know, may not even realize, that they’re being influenced”. This is a major concern because it really questions “our individual freedom about… It really becomes about democracy”. 
Sander also talked about an incident that took place in Germany where its far-right party, “Alternative For Germany”, “Alternative für Deutschland” were able to use a Facebook feature that has been created for advertisers to help it achieve the best result in the federal election for any far right-wing party in Germany after World War 2. The feature that was being used here was a feature of “look-alike” audiences. Facebook helped this party to analyze the characteristics of the 300,000 users who had liked the “Alternative For Germany”, who had liked this party. Further, from these users, it created a “look-alike” audience of another 300,000 users that were similar in characteristics to those who had already liked this party, and then they were specifically targeting ads to this group. Katrina Dow on getting people digitally aware Dow thinks, “the biggest challenge right now is that people just don’t understand what goes on under the surface”. She explains how by a simple picture sharing of a child playing in a park can impact the child’s credit rating in the future.  She says, “People don’t understand the consequences of something that I do right now, that’s digital, and what it might impact some time in the future”. She also goes on explaining how to help people make a more informed choice around the services they wanna use or argue for better rights in terms of those services, so those consequences don’t happen. Dow also discusses one of the principles of the GDPR, which is designing privacy into the applications or websites as the foundation of the design, rather than adding privacy as an afterthought. Beth asked if GDPR, which introduces some level of control, is effective. To which Dow replied, “It’s early days. It’s not working as intended right now.” Dow further explained, “the biggest problem right now is the UX level is just not working. And organizations that have been smart in terms of creating enormous amounts of friction are using that to their advantage.” “They’re legally compliant, but they have created that compliance burden to be so overwhelming, that I agree or just anything to get this screen out of the way is driving the behavior”, Dow added. She says that a part of GDPR is privacy by design, but what we haven’t seen the surface to the UX level. “And I think right now, it’s just so overwhelming for people to even work out, “What’s the choice?” What are they saying yes to? What are they saying no to? So I think, the underlying components are there and from a legal framework. Now, how do we move that to what we know is the everyday use case, which is how you interact with those frameworks”, Dow further added. To listen to this podcast and know more about this in detail, visit Beth Kindig’s official website. Github Sponsors: Could corporate strategy eat FOSS culture for dinner? Mozilla and Google Chrome refuse to support Gab’s Dissenter extension for violating acceptable use policy SnapLion: An internal tool Snapchat employees abused to spy on user data

Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?

Savia Lobo
28 May 2019
8 min read
Last week, a few researchers from the MIT CSAIL and Google AI published their research study of reconstructing a facial image of a person from a short audio recording of that person speaking, in their paper titled, “Speech2Face: Learning the Face Behind a Voice”. The researchers designed and trained a neural network which uses millions of natural Internet/YouTube videos of people speaking. During training, they demonstrated that the model learns voice-face correlations that allows it to produce images that capture various physical attributes of the speakers such as age, gender, and ethnicity. The entire training was done in a self-supervised manner, by utilizing the natural co-occurrence of faces and speech in Internet videos, without the need to model attributes explicitly. They said they further evaluated and numerically quantified how their Speech2Face reconstructs, obtains results directly from audio, and how it resembles the true face images of the speakers. For this, they tested their model both qualitatively and quantitatively on the AVSpeech dataset and the VoxCeleb dataset. The Speech2Face model The researchers utilized the VGG-Face model, a face recognition model pre-trained on a large-scale face dataset called DeepFace and extracted a 4096-D face feature from the penultimate layer (fc7) of the network. These face features were shown to contain enough information to reconstruct the corresponding face images while being robust to many of the aforementioned variations. The Speech2Face pipeline consists of two main components: 1) a voice encoder, which takes a complex spectrogram of speech as input, and predicts a low-dimensional face feature that would correspond to the associated face; and 2) a face decoder, which takes as input the face feature and produces an image of the face in a canonical form (frontal-facing and with neutral expression). During training, the face decoder is fixed, and only the voice encoder is trained which further predicts the face feature. How were the facial features evaluated? To quantify how well different facial attributes are being captured in Speech2Face reconstructions, the researchers tested different aspects of the model. Demographic attributes Researchers used Face++, a leading commercial service for computing facial attributes. They evaluated and compared age, gender, and ethnicity, by running the Face++ classifiers on the original images and our Speech2Face reconstructions. The Face++ classifiers return either “male” or “female” for gender, a continuous number for age, and one of the four values, “Asian”, “black”, “India”, or “white”, for ethnicity. Source: Arxiv.org Craniofacial attributes Source: Arxiv.org The researchers evaluated craniofacial measurements commonly used in the literature, for capturing ratios and distances in the face. They computed the correlation between F2F and the corresponding S2F reconstructions. Face landmarks were computed using the DEST library. As can be seen, there is statistically significant (i.e., p < 0.001) positive correlation for several measurements. In particular, the highest correlation is measured for the nasal index (0.38) and nose width (0.35), the features indicative of nose structures that may affect a speaker’s voice. Feature similarity The researchers further test how well a person can be recognized from on the face features predicted from speech. They, first directly measured the cosine distance between the predicted features and the true ones obtained from the original face image of the speaker. 
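As a rough, illustrative sketch of the metric (not the authors' code, which operates on 4096-D VGG-Face features), the cosine distance between a predicted feature vector and the true one can be computed like this in Go:

package main

import (
    "fmt"
    "math"
)

// cosineDistance returns 1 minus the cosine similarity of two equal-length vectors.
func cosineDistance(a, b []float64) float64 {
    var dot, normA, normB float64
    for i := range a {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return 1 - dot/(math.Sqrt(normA)*math.Sqrt(normB))
}

func main() {
    // Toy 4-D vectors standing in for the 4096-D face features.
    predicted := []float64{0.2, 0.1, 0.7, 0.4}
    original := []float64{0.25, 0.05, 0.65, 0.5}
    fmt.Printf("cosine distance: %.4f\n", cosineDistance(predicted, original))
}

A smaller distance means the face feature predicted from the voice is closer to the feature extracted from the speaker's actual photo.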
The table above shows the average error over 5,000 test images, for the predictions using 3s and 6s audio segments. The use of longer audio clips exhibits consistent improvement in all error metrics; this further evidences the qualitative improvement observed in the image below. They further evaluated how accurately they could retrieve the true speaker from a database of face images. To do so, they took the speech of a person to predict the feature using the Speech2Face model and query it by computing its distances to the face features of all face images in the database. Ethical considerations with Speech2Face model Researchers said that the training data used is a collection of educational videos from YouTube and that it does not represent equally the entire world population. Hence, the model may be affected by the uneven distribution of data. They have also highlighted that “ if a certain language does not appear in the training data, our reconstructions will not capture well the facial attributes that may be correlated with that language”. “In our experimental section, we mention inferred demographic categories such as “White” and “Asian”. These are categories defined and used by a commercial face attribute classifier and were only used for evaluation in this paper. Our model is not supplied with and does not make use of this information at any stage”, the paper mentions. They also warn that any further investigation or practical use of this technology would be carefully tested to ensure that the training data is representative of the intended user population. “If that is not the case, more representative data should be broadly collected”, the researchers state. Limitations of the Speech2Face model In order to test the stability of the Speech2Face reconstruction, the researchers used faces from different speech segments of the same person, taken from different parts within the same video, and from a different video. The reconstructed face images were consistent within and between the videos. They further probed the model with an Asian male example speaking the same sentence in English and Chinese to qualitatively test the effect of language and accent. While having the same reconstructed face in both cases would be ideal, the model inferred different faces based on the spoken language. In other examples, the model was able to successfully factor out the language, reconstructing a face with Asian features even though the girl was speaking in English with no apparent accent. “In general, we observed mixed behaviors and a more thorough examination is needed to determine to which extent the model relies on language. More generally, the ability to capture the latent attributes from speech, such as age, gender, and ethnicity, depends on several factors such as accent, spoken language, or voice pitch. Clearly, in some cases, these vocal attributes would not match the person’s appearance”, the researchers state in the paper. Speech2Cartoon: Converting generated image into cartoon faces The face images reconstructed from speech may also be used for generating personalized cartoons of speakers from their voices. The researchers have used Gboard, the keyboard app available on Android phones, which is also capable of analyzing a selfie image to produce a cartoon-like version of the face. 
Such cartoon re-rendering of the face may be useful as a visual representation of a person during a phone or a video conferencing call when the person’s identity is unknown or the person prefers not to share his/her picture. The reconstructed faces may also be used directly, to assign faces to machine-generated voices used in home devices and virtual assistants. https://twitter.com/NirantK/status/1132880233017761792 A user on HackerNews commented, “This paper is a neat idea, and the results are interesting, but not in the way I'd expected. I had hoped it would the domain of how much person-specific information this can deduce from a voice, e.g. lip aperture, overbite, size of the vocal tract, openness of the nares. This is interesting from a speech perception standpoint. Instead, it's interesting more in the domain of how much social information it can deduce from a voice. This appears to be a relatively efficient classifier for gender, race, and age, taking voice as input.” “I'm sure this isn't the first time it's been done, but it's pretty neat to see it in action, and it's a worthwhile reminder: If a neural net is this good at inferring social, racial, and gender information from audio, humans are even better. And the idea of speech as a social construct becomes even more relevant”, he further added. This recent study is interesting considering the fact that it is taking AI to another level wherein we are able to predict the face just by using audio recordings and even without the need for a DNA. However, there can be certain repercussions, especially when it comes to security. One can easily misuse such technology by impersonating someone else and can cause trouble. It would be interesting to see how this study turns out to be in the near future. To more about the Speech2Face model in detail, head over to the research paper. OpenAI introduces MuseNet: A deep neural network for generating musical compositions An unsupervised deep neural network cracks 250 million protein sequences to reveal biological structures and functions OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

Implementing Garbage collection algorithms in Golang [Tutorial]

Sugandha Lahoti
28 May 2019
11 min read
Memory management is a way to control and organize memory.  The basic goal of memory management algorithms is to dynamically designate segments of memory to programs on demand. The algorithms free up memory for reuse when the objects in the memory are never required again. Garbage collection, cache management, and space allocation algorithms are good examples of memory management techniques. In this article, we will cover the garbage collection algorithm in Golang. We'll look at garbage collection first, then look at the different algorithms related to garbage collection. This article is taken from the book Learn Data Structures and Algorithms with Golang by Bhagvan Kommadi. In this book, you will explore Golang's data structures and algorithms to design, implement, and analyze code in the professional setting. Technical requirements Install Go Version 1.10 from Golang, choosing the right version for your OS. The GitHub repository for the code in this article can be found here. Garbage collection Garbage collection is a type of programmed memory management in which memory, currently occupied by objects that will never be used again, is gathered. John McCarthy was the first person to come up with garbage collection for managing Lisp memory management. This technique specifies which objects need to be de-allocated, and then discharges the memory. The strategies that are utilized for garbage collection are stack allocation and region interference. Sockets, relational database handles, user window objects, and file resources are not overseen by garbage collectors. Garbage collection algorithms help reduce dangling pointer defects, double-free defects, and memory leaks. These algorithms are computing-intensive and cause decreased or uneven performance. According to Apple, one of the reasons for iOS not having garbage collection is that garbage collection needs five times the memory to match explicit memory management. In high-transactional systems, concurrent, incremental, and real-time garbage collectors help manage memory collection and release. Garbage collection algorithms depend on various factors: GC throughput Heap overhead Pause times Pause frequency Pause distribution Allocation performance Compaction Concurrency Scaling Tuning Warm-up time Page release Portability Compatibility That's simple, deferred, one-bit, weighted reference counting, mark-and-sweep, and generational collection algorithms discussed in the following sections. The ReferenceCounter class The following code snippet shows how references to created objects are maintained in the stack. The ReferenceCounter class has the number of references, including the pool of references and removed references, as properties: //main package has examples shown // in Hands-On Data Structures and algorithms with Go book package main // importing fmt package import ( "fmt" "sync" ) //Reference Counter type ReferenceCounter struct { num *uint32 pool *sync.Pool removed *uint32 } Let's take a look at the method of the ReferenceCounter class. The newReferenceCounter method The newReferenceCounter method initializes a ReferenceCounter instance and returns a pointer to ReferenceCounter. This is shown in the following code: //new Reference Counter method func newReferenceCounter() *ReferenceCounter { return &ReferenceCounter{ num: new(uint32), pool: &sync.Pool{}, removed: new(uint32), } } The Stack class is described in the next section. The Stack class The Stack class consists of a references array and Count as properties. 
This is shown in the following code: // Stack class type Stack struct { references []*ReferenceCounter Count int } Let's take a look at the methods of the Stack class. The Stack class – a new method Now, let's look at the heap interface methods that are implemented by the Stack class. The new method initializes the references array, and the Push and Pop heap interface methods take the reference parameter to push and pop reference out of the stack. This is shown in the following code: // New method of Stack Class func (stack *Stack) New() { stack.references = make([]*ReferenceCounter,0) } // Push method func (stack *Stack) Push(reference *ReferenceCounter) { stack.references = append(stack.references[:stack.Count], reference) stack.Count = stack.Count + 1 } // Pop method func (stack *Stack) Pop() *ReferenceCounter { if stack.Count == 0 { return nil } var length int = len(stack.references) var reference *ReferenceCounter = stack.references[length -1] if length > 1 { stack.references = stack.references[:length-1] } else { stack.references = stack.references[0:] } stack.Count = len(stack.references) return reference } The main method In the following code snippet, let's see how Stack is used. A Stack instance is initialized, and references are added to the stack by invoking the Push method. The Pop method is invoked and the output is printed: // main method func main() { var stack *Stack = &Stack{} stack.New() var reference1 *ReferenceCounter = newReferenceCounter() var reference2 *ReferenceCounter = newReferenceCounter() var reference3 *ReferenceCounter = newReferenceCounter() var reference4 *ReferenceCounter = newReferenceCounter() stack.Push(reference1) stack.Push(reference2) stack.Push(reference3) stack.Push(reference4) fmt.Println(stack.Pop(), stack.Pop(), stack.Pop(), stack.Pop()) } Run the following commands to execute the stack_garbage_collection.go file: go run stack_garbage_collection.go The output is as follows: The reference counting, mark-and-sweep, and generational collection algorithms will be discussed in the following sections. Reference counting Reference counting is a technique that's used for keeping the count of references, pointers, and handles to resources. Memory blocks, disk space, and objects are good examples of resources. This technique tracks each object as a resource. The metrics that are tracked are the number of references held by different objects. The objects are recovered when they can never be referenced again. The number of references is used for runtime optimizations. Deutsch-Bobrow came up with the strategy of reference counting. This strategy was related to the number of updated references that were produced by references that were put in local variables. Henry Baker came up with a method that includes references in local variables that are deferred until needed. In the following subsections, the simple, deferred, one-bit, and weighted techniques of reference counting will be discussed. Simple reference counting Reference counting is related to keeping the number of references, pointers, and handles to a resource such as an object, block of memory, or disk space. This technique is related to the number of references to de-allocated objects that are never referenced again. The collection technique tracks, for each object, a tally of the number of references to the object. The references are held by other objects. The object gets removed when the number of references to the object is zero. The removed object becomes inaccessible. 
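To make this idea concrete before looking at the book's implementation, here is a small, self-contained sketch (not from the book) in which an object is reported as reclaimable once its reference count drops to zero:

package main

import "fmt"

// Object carries a simple reference count.
type Object struct {
    name string
    refs int
}

// Retain records a new reference to the object.
func (o *Object) Retain() {
    o.refs++
}

// Release drops a reference and reports when the object becomes unreachable.
func (o *Object) Release() {
    o.refs--
    if o.refs == 0 {
        fmt.Println(o.name, "has no references left and can be reclaimed")
    }
}

func main() {
    obj := &Object{name: "buffer"}
    obj.Retain() // first owner
    obj.Retain() // second owner
    obj.Release()
    obj.Release() // count reaches zero here, so the object can be reclaimed
}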
The removal of a reference can prompt countless connected references to be purged. The algorithm is time-consuming because of the size of the object graph and slow access speed. In the following code snippets, we can see a simple reference-counting algorithm being implemented. The ReferenceCounter class has number (num), pool, and removed references as properties:

//main package has examples shown
// in Go Data Structures and algorithms book
package main

// importing sync, atomic and fmt packages
import (
    "sync/atomic"
    "sync"
    "fmt"
)

//Reference Counter
type ReferenceCounter struct {
    num *uint32
    pool *sync.Pool
    removed *uint32
}

The newReferenceCounter, Add, and Subtract methods of the ReferenceCounter class are shown in the following snippet:

//new Reference Counter method
func newReferenceCounter() ReferenceCounter {
    return ReferenceCounter{
        num: new(uint32),
        pool: &sync.Pool{},
        removed: new(uint32),
    }
}

// Add method
func (referenceCounter ReferenceCounter) Add() {
    atomic.AddUint32(referenceCounter.num, 1)
}

// Subtract method
func (referenceCounter ReferenceCounter) Subtract() {
    if atomic.AddUint32(referenceCounter.num, ^uint32(0)) == 0 {
        atomic.AddUint32(referenceCounter.removed, 1)
    }
}

Let's look at the main method and see an example of simple reference counting. The newReferenceCounter method is invoked, and a reference is added by invoking the Add method. The reference count is printed at the end. This is shown in the following code snippet:

// main method
func main() {
    var referenceCounter ReferenceCounter
    referenceCounter = newReferenceCounter()
    referenceCounter.Add()
    fmt.Println(*referenceCounter.num)
}

Run the following command to execute the reference_counting.go file:

go run reference_counting.go

The output is as follows:

The different types of reference counting techniques are described in the following sections.

Deferred reference counting

Deferred reference counting is a procedure in which references from different objects to a given object are checked and program-variable references are overlooked. If the tally of the references is zero, that object will not be considered. This algorithm helps reduce the overhead of keeping counts up to date. Deferred reference counting is supported by many compilers.

One-bit reference counting

The one-bit reference counting technique utilizes a solitary bit flag to show whether an object has one or more references. The flag is stored as part of the object pointer. There is no requirement to spare any object for extra space in this technique. This technique is viable since the majority of objects have a reference count of 1.

Weighted reference counting

The weighted reference counting technique tallies the number of references to an object, and each reference is delegated a weight. This technique tracks the total weight of the references to an object. Weighted reference counting was invented by Bevan, Watson, and Watson in 1987. The following code snippet shows an implementation of the weighted reference counting technique:

//Reference Counter
type ReferenceCounter struct {
    num *uint32
    pool *sync.Pool
    removed *uint32
    weight int
}

//WeightedReference method
func WeightedReference() int {
    var references []ReferenceCounter
    references = GetReferences(root)
    var reference ReferenceCounter
    var sum int
    for _, reference = range references {
        sum = sum + reference.weight
    }
    return sum
}

The mark-and-sweep algorithm

The mark-and-sweep algorithm is based on an idea that was proposed by Dijkstra in 1978.
In the garbage collection style, the heap consists of a graph of connected objects, which are white. This technique visits the objects and checks whether they are specifically available by the application. Globals and objects on the stack are shaded gray in this technique. Every gray object is darkened to black and filtered for pointers to other objects. Any white object found in the output is turned gray. This calculation is rehashed until there are no gray objects. White objects that are left out are inaccessible. A mutator in this algorithm handles concurrency by changing the pointers while the collector is running. It also takes care of the condition so that no black object points to a white object. The mark algorithm has the following steps: Mark the root object Mark the root bit as true if the value of the bit is false For every reference of root, mark the reference, as in the first step The following code snippet shows the marking algorithm. Let's look at the implementation of the Mark method: func Mark( root *object){ var markedAlready bool markedAlready = IfMarked(root) if !markedAlready { map[root] = true } var references *object[] references = GetReferences(root) var reference *object for _, reference = range references { Mark(reference) } } The sweep algorithm's pseudocode is presented here: For each object in the heap, mark the bit as false if the value of the bit is true If the value of the bit is true, release the object from the heap The sweep algorithm releases the objects that are marked for garbage collection. Now, let's look at the implementation of the sweep algorithm: func Sweep(){ var objects *[]object objects = GetObjects() var object *object for _, object = range objects { var markedAlready bool markedAlready = IfMarked(object) if markedAlready { map[object] = true } Release(object) } } The generational collection algorithm The generational collection algorithm divides the heap of objects into generations. A generation of objects will be expired and collected by the algorithm based on their age. The algorithm promotes objects to older generations based on the age of the object in the garbage collection cycle. The entire heap needs to be scavenged, even if a generation is collected. Let's say generation 3 is collected; in this case, generations 0-2 are also scavenged. The generational collection algorithm is presented in the following code snippet: func GenerationCollect(){ var currentGeneration int currentGeneration = 3 var objects *[]object objects = GetObjectsFromOldGeneration(3) var object *object for _, object = range objects { var markedAlready bool markedAlready = IfMarked(object) if markedAlready { map[object] = true } } } This article covered the garbage collection algorithms in Golang. We looked at reference counting algorithms, including simple, deferred, one-bit, and weighted. The mark-and-sweep and generational collection algorithms were also presented with code examples. To learn other memory management techniques in Golang like cache management, and memory space allocation, read our book  Learn Data Structures and Algorithms with Golang. Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work. State of Go February 2019 – Golang developments report for this month released The Golang team has started working on Go 2 proposals

8 Stunning Motion graphics trends for 2019

Guest Contributor
28 May 2019
7 min read
Motion graphics, animation, animated content, moving illustrations and the list of various terminologies goes on. All these terminologies are used for the most innovative and modern way of conveying information to the targeted audience and potential prospects. In modern-day’s content-saturated world, where written content is losing its appeal, integrating graphic content is one of the latest and probably the most innovative approaches for making online content better and more attention-capturing. The term motion graphics was coined almost 20 years ago when the world was first introduced to moving and flashy graphics. However, the reality of today sees it in a completely different perspective. Motion graphics is no longer restricted to the creation of animation; rather in this modernized world motion graphics is a worldwide phenomenon and content technique used in all forms and categories of online content. If implemented appropriately, motion graphics can augment the ease of communication of the message and content. In this internet-driven world, where ever you go you are bound to encounter some motion graphics content everywhere. Be it an animated explainer video on a marketing blog or a viral cat video on social media platforms. From hand-drawn flipbooks to celluloid animation, motion graphics have gone through loads of changes and some evolutionary shifts. Whether it’s about inducing a touch of reality to the dull economics stats or showcasing creativity through online logo maker tools to make own logos for free, motion graphics is one of the widely-practiced technique for making high-quality and eye-catchy content. Last year was all about discoveries and fresh starts whereas 2019 is all about innovations. Most of the trends from last year became mainstream, so this year we are assembling a great collection of some of the biggest and the born-to-rule trends of the motion graphics industry. Despite all the changes and shifts in the design industry, certain techniques and trends never fail to lose their appeal and significance in motion design. From the everlasting oldies to some latest discoveries, here is a list of all the trends that will be ruling the design industry this year: Kinetic typography From television advertisements to website content, movable and animated type of content is a great visual tool. One obvious reason behind the increased popularity of movable content type is its attention-grabbing characteristic. Kinetic typography uses simple animation to create words that move and shift across the screen. By leveraging the aspects of this typographic technique, the animator can manipulate letters in several ways. From letter expansion and shrinking to wriggling and taking off, kinetic typography pull off all types of letter and design manipulation. BASKETBALL FOREVER - REBRAND from Not Real on Vimeo. Broken text Another great addition in the typographic theme is breaking down of text. The concept might not be a new one, but the way it is implemented on typographic elements takes the whole content to a different level. It all about playing around with the words—they can either be deconstructed to spread across the whole screen, or they can appear one by one positioned at alternative levels. The broken text adds poetic touch and value to your content while making it easy to understand and visually appealing. Francis Mallmann on Growing Up from Daniel Luna on Vimeo. 
Seamless transitions One of the oldest techniques in the book, and this trend never actually left the design industry. Where modern approach has influenced every walk of life, the modern-day graphics designers are still using seamless transitions with a sleek addition of contemporary touch as this practice will never get old. A seamless transition is all about integrating fluidity in the content. The lack of cuts between scenes and  the smooth morphing of one scene into another makes the seamless transition an everlasting trend of the industry Tamara Qaddoumi - Flowers Will Rot (Official Music Video) from Pablo Lozano on Vimeo. Thin lines Lines are one of the most underrated design elements. Regardless of their simplicity, lines can be used for a wide array of purposes. From pointing out directions to defining the outlines of shapes and creating segregation between different elements, the potentials of a humble are still unexplored and untapped. However, with a change in time, the usage of thin line sin motion graphics is gradually becoming a common practice. Whether you want to give a vector touch or introduce a freestyle feel to the content, a simple line is enough to give a playful yet interesting look to your graphic content. TIFF: The Canadian Experiment from Polyester Studio on Vimeo. Grain Clean, crisp and concise—these three C’s are the fundamentals of the graphics design industry. However, adding an element of aesthetic to liven up your content is what breathes life to your dull content. That’s when grain comes into the play-- a motion graphics technique that not only adds visual appeal to your content but also makes your subtle content powerful. Whether it’s about transforming 2D figures into textured ones with a slight sense of depth or you want to visualize noise-- the proper use of grain can do just the job. Mumblephone - Special K from Allen Laseter on Vimeo. Liquid motion Just as grain adds a little texture to your content, liquid motion adds an organic feel to your content. From splashes of vibrant colors to the transformation of one shape into visual relishes, liquid motion makes your content flow across the screen in a seamless manner. Liquid motion is all about introducing a sense of movement while enhancing your content with a slightly  dramatic touch. Whether you want theatrical morphing one shape into another or you want to induce a celebratory mood to your content, liquid motion is all about making the content more appealing and keeping the views engaged. Creativity Top 5 Intro Video (Stop motion animation) from Kelly Warner on Vimeo. Amalgamation of 2D and 3D Now that technology has advanced to a greater extent, integrating 2D style with contemporary 3D techniques is one of the innovative ways of using motion graphics in your content. A slight touch of nostalgia coupled with depth and volume is what makes this technique a trend of 2019. Whether you want to introduce an element of surprise or you want to play around with the camera angles and movements, this combo of 2D with 3D is the ideal option for giving a nostalgic yet contemporary touch to your content. Mini - Rocketman Concept from PostPanic on Vimeo. Digital surrealism Surrealism is all about how a designer integrates the touch of reality into something as unreal as animation. Probably one of the most modern approaches to designing, this style illustrates the relation of virtual world element and crisp visuals. 
Surrealism is all about defying the reality while stretching the boundaries of materials and creating eye-catchy imagery and effects. Window Worlds - E4 Ident from Moth on Vimeo. Motion graphics trends will continue to evolve, the key to mastering the motion graphics techniques is to stay updated on the advanced tools and applications. Fill your animated visual content with all the relevant and important information, leverage your creativity and breathe life to your visual ideation with motion graphics. Author Bio Jessica Ervin is a professional UI UX designer & passionate tech blogger with enthusiastic writing skills. Jessica is a brand researcher as well, She is currently working with Design Iconic by which you can easily make your own logos & download it, having a good reader Jessica is contributing to the Technology, Artificial Intelligence, Augmented Reality, VR, Gadgets, Tech Trends and much more. Jessica’s experience has given her an insight of UI UX designing & writing skills and became a conventional contributor. You can follow her on twitter @jessikaervin The seven deadly sins of web design 7 Web design trends and predictions for 2019 Tips and tricks to optimize your responsive web design

‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee

Vincy Davis
27 May 2019
6 min read
Last week, the US House Oversight and Reform Committee held its first hearing on examining the use of ‘Facial Recognition Technology’. The hearing was an incisive discussion on the use of facial recognition by government and commercial entities, flaws in the technology, lack of regulation and its impact on citizen’s civil rights and liberties. The chairman of the committee, U.S. Representative of Maryland, Elijah Cummings said that one of the goals of this hearing about Facial Recognition Technology is to “protect against its abuse”. At the hearing, Joy Buolamwini, founder of Algorithmic Justice League highlighted one of the major pressing points for the failure of this technology as ‘misidentification’, that can lead to false arrests and accusations, a risk especially for marginalized communities. On one of her studies at MIT, on facial recognition systems, it was found that for the task of guessing a gender of a face, IBM, Microsoft and Amazon had error rates which rose to over 30% for darker skin and women. On evaluating benchmark datasets from organizations like NIST (National Institute for Standards and Technology), a striking imbalance was found. The dataset contained 75 percent male and 80 percent lighter skin data, which she addressed as “pale male datasets”. She added that our faces may well be the final frontier of privacy and Congress must act now to uphold American freedom and rights at minimum. Professor of Law at the University of the District of Columbia, Andrew G. Ferguson agreed with Buolamwini stating, “Congress must act now”, to prohibit facial recognition until Congress establishes clear rules. “The fourth amendment won’t save us. The Supreme Court is trying to make amendments but it’s not fast enough. Only legislation can react in real time to real time threats.” Another strong concern raised at the hearing was the use of facial recognition technology in law enforcement. Neema Singh Guliani, Senior Legislative Counsel of American Civil Liberties Union, said law enforcement across the country, including the FBI, is using face recognition in an unrestrained manner. This growing use of facial recognition is being done “without legislative approval, absent safeguards, and in most cases, in secret.” She also added that the U.S. reportedly has over 50 million surveillance cameras, this combined with face recognition threatens to create a near constant surveillance state. An important addition to regulating facial recognition technology was to include all kinds of biometric surveillance under the ambit of surveillance technology. This includes voice recognition and gait recognition, which is also being used actively by private companies like Tesla. This surveillance should not only include legislation, but also real enforcement so “when your data is misused you have actually an opportunity, to go to court and get some accountability”, Guliani added. She also urged the committee to investigate how FBI and other federal agencies are using this technology, whose accuracy has not been tested and how the agency is complying with the Constitution by “reportedly piloting Amazon's face recognition product”. Like FBI and other government agencies, even companies like Amazon and Facebook were heavily criticized by members of the committee for misusing the technology. It was notified that these companies look for ways to develop this technology and market facial recognition. 
On the same day of this hearing, came the news that Amazon shareholders rejected the proposal on ban of selling its facial recognition tech to governments. This year in January, activist shareholders had proposed a resolution to limit the sale of Amazon’s facial recognition tech called Rekognition to law enforcement and government agencies. This technology is regarded as an enabler of racial discrimination of minorities as it was found to be biased and inaccurate. Jim Jordan, U.S. Representative of Ohio, raised a concern as to how, “Some unelected person at the FBI talks to some unelected person at the state level, and they say go ahead,” without giving “any notification to individuals or elected representatives that their images will be used by the FBI.” Using face recognition in such casual manner poses “a unique threat to our civil rights and liberties”, noted Clare Garvie, Senior Associate of Georgetown University Law Center and Center on Privacy & Technology. Studies continue to show that the accuracy of face recognition varies on the race of the person being searched. This technology “makes mistakes and risks making more mistakes and more misidentifications of African Americans”. She asserted that “face recognition is too powerful, too pervasive, too susceptible to abuse, to continue being unchecked.” A general agreement by all the members was that a federal legislation is necessary, in order to prevent a confusing and potentially contradictory patchwork, of regulation of government use of facial recognition technology. Another point of discussion was how great facial recognition could work, if implemented in a ‘real’ world. It can help the surveillance and healthcare sector in a huge way, if its challenges are addressed correctly. Dr. Cedric Alexander, former President of National Organization of Black Law Enforcement Executives, was more cautious of banning the technology. He was of the opinion that this technology can be used by police in an effective way, if trained properly. Last week, San Francisco became the first U.S. city to pass an ordinance barring police and other government agencies from using facial recognition technology. This decision has attracted attention across the country and could be followed by other local governments. A council member Gomez made the committee’s stand clear that they, “are not anti-technology or anti-innovation, but we have to be very aware that we're not stumbling into the future blind.” Cummings concluded the hearing by thanking the witnesses stating, “I've been here for now 23 years, it's one of the best hearings I've seen really. You all were very thorough and very very detailed, without objection.” The second hearing is scheduled on June 4th, and will have law enforcement witnesses. For more details, head over to the full Hearing on Facial Recognition Technology by the House Oversight and Reform Committee. Read More Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance Oakland Privacy Advisory Commission lay out privacy principles for Oaklanders and propose ban on facial recognition Is China’s facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?

Implementing hashing algorithms in Golang [Tutorial]

Sugandha Lahoti
27 May 2019
4 min read
A hashing algorithm is a cryptographic hash technique: a mathematical function that maps data of arbitrary size to a hash of fixed size. It is intended to be a one-way function, one that cannot practically be reversed. This article covers hash functions and the implementation of a hashing algorithm in Golang. This article is taken from the book Learn Data Structures and Algorithms with Golang by Bhagvan Kommadi. Complete with hands-on tutorials, this book will guide you in using the best data structures and algorithms for problem-solving in Golang. Install Go version 1.10 for your OS. The GitHub URL for the code in this article is available here.

The hash functions

Hash functions are used in cryptography and other areas. They are presented here with code examples related to cryptography. There are two ways to implement a hash function in Go: with crc32 or sha256. Marshaling (encoding a value into a transferable form) saves the internal state of a hash, which can be used for other purposes later. A BinaryMarshaler example is explained in this section:

//main package has examples shown
// in Hands-On Data Structures and algorithms with Go book
package main

// importing bytes, crypto/sha256, encoding, fmt, log and hash packages
import (
    "bytes"
    "crypto/sha256"
    "encoding"
    "fmt"
    "log"
    "hash"
)

The main method creates a binary marshaled hash of two example strings. The hashes of the two strings are printed. The sum of the first hash is compared with the second hash using bytes.Equal. This is shown in the following code:

//main method
func main() {
    const (
        example1 = "this is a example "
        example2 = "second example"
    )

    var firstHash hash.Hash
    firstHash = sha256.New()
    firstHash.Write([]byte(example1))

    var marshaler encoding.BinaryMarshaler
    var ok bool
    marshaler, ok = firstHash.(encoding.BinaryMarshaler)
    if !ok {
        log.Fatal("first Hash is not generated by encoding.BinaryMarshaler")
    }

    var data []byte
    var err error
    data, err = marshaler.MarshalBinary()
    if err != nil {
        log.Fatal("failure to create first Hash:", err)
    }

    var secondHash hash.Hash
    secondHash = sha256.New()

    var unmarshaler encoding.BinaryUnmarshaler
    unmarshaler, ok = secondHash.(encoding.BinaryUnmarshaler)
    if !ok {
        log.Fatal("second Hash is not generated by encoding.BinaryUnmarshaler")
    }
    if err := unmarshaler.UnmarshalBinary(data); err != nil {
        log.Fatal("failure to create hash:", err)
    }

    firstHash.Write([]byte(example2))
    secondHash.Write([]byte(example2))

    fmt.Printf("%x\n", firstHash.Sum(nil))
    fmt.Println(bytes.Equal(firstHash.Sum(nil), secondHash.Sum(nil)))
}

Run the following command to execute the hash.go file:

go run hash.go

The output is as follows:

Hash implementation in Go

Hash implementation in Go has crc32 and sha256 implementations. An implementation of a hashing algorithm with multiple values using an XOR transformation is shown in the following code snippet. The CreateHash function takes a byte array, byteStr, as a parameter and returns the SHA-1 checksum of the byte array (note that this snippet uses crypto/sha1 rather than sha256):

//main package has examples shown
// in Go Data Structures and algorithms book
package main

// importing fmt, crypto/sha1 and hash packages
import (
    "fmt"
    "crypto/sha1"
    "hash"
)

//CreateHash method
func CreateHash(byteStr []byte) []byte {
    var hashVal hash.Hash
    hashVal = sha1.New()
    hashVal.Write(byteStr)

    var bytes []byte
    bytes = hashVal.Sum(nil)
    return bytes
}

In the following sections, we will discuss the different methods of hash algorithms.
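Before moving on, a quick aside: the section above mentions crc32 as the other standard option but only shows SHA-based code. As a small illustrative sketch (not from the book), a CRC-32 checksum can be computed with Go's standard library like this:

package main

import (
    "fmt"
    "hash/crc32"
)

func main() {
    var data []byte = []byte("this is a example ")
    // ChecksumIEEE computes the CRC-32 checksum of data using the IEEE polynomial.
    var checksum uint32 = crc32.ChecksumIEEE(data)
    fmt.Printf("%08x\n", checksum)
}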
The CreateHashMultiple method

The CreateHashMultiple method takes the byteStr1 and byteStr2 byte arrays as parameters and returns the XOR-transformed bytes value, as follows:

// Create hash for Multiple Values method
func CreateHashMultiple(byteStr1 []byte, byteStr2 []byte) []byte {
    return xor(CreateHash(byteStr1), CreateHash(byteStr2))
}

The XOR method

The xor method takes the byteStr1 and byteStr2 byte arrays as parameters and returns the result of a byte-wise XOR of the two; a sketch of such a helper is shown at the end of this article.

The main method

The main method invokes the CreateHashMultiple method, passing Check and Hash as string parameters, and prints the hash value of the strings, as follows:

// main method
func main() {
    var bytes []byte
    bytes = CreateHashMultiple([]byte("Check"), []byte("Hash"))
    fmt.Printf("%x\n", bytes)
}

Run the following command to execute the hash.go file:

go run hash.go

The output is as follows:

In this article, we discussed hashing algorithms in Golang alongside code examples. To know more about network representation using graphs and sparse matrix representation using a list of lists in Go, read our book Learn Data Structures and Algorithms with Golang.

Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work.
State of Go February 2019 - Golang developments report for this month released
The Golang team has started working on Go 2 proposals
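Finally, the xor helper used by CreateHashMultiple above is not shown in this excerpt. Assuming it performs a byte-wise XOR of two equal-length digests (an assumption, not the book's code), a minimal sketch that slots into the same file could look like this:

// xor combines two equal-length byte slices with a byte-wise XOR.
// It assumes both inputs have the same length, which holds for two SHA-1 digests.
func xor(byteStr1 []byte, byteStr2 []byte) []byte {
    var result []byte = make([]byte, len(byteStr1))
    for i := range byteStr1 {
        result[i] = byteStr1[i] ^ byteStr2[i]
    }
    return result
}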

React Native VS Xamarin: Which is the better cross-platform mobile development framework?

Guest Contributor
25 May 2019
10 min read
One of the most debated topics of the current mobile industry is the battle of the two giant app development platforms, Xamarin and React Native. Central to the buzzing hype of this battle and the increasing popularity of these two platforms are the communities of app developers built around them. Both of these open-source app development platforms are preferred by the app development community to create highly efficient applications while saving time and efforts of the app developers. Both React and Xamarin are supported by some merits and demerits, which makes selecting the best between the two a bit difficult. When it comes to selecting the appropriate mobile application platform, it boils down to the nature, needs and overall objectives of the business and company.  It also comes down to the features and characteristics of that technology, which either make it the best fit for one project or the worst approach for another. With that being said, let’s start with our comparing the two to find out the major differences, explore the key considerations and determine the winner of this unending platform battle. An overview of Xamarin An open-source cross-platform used for mobile application development, Xamarin can be used to build applications for Android, iOS and wearable devices. Offered as a high-tech enterprise app development tool within Microsoft Visual Studio IDE, Xamarin has now become one of the top mobile app development platforms used by various businesses and enterprises. Apart from being a free app development platform, it facilitates the development of mobile applications while using a single programming language, namely C#, for both the Android and iOS versions. Key features Since the day of its introduction, Xamarin has been using C#. C# is a popular programming language in the Microsoft community, and with great features like metaprogramming, functional programming and portability, C# is widely-preferred by many web developers. Xamarin makes it easy for C# developers to shift from web development platform to cross mobile app development platform. Features like portable class libraries, code sharing features, testing clouds and insights, and compatibility with Mac IDE and Visual Studio IDE makes Xamarin a great development tool with no additional costs. Development environment Xamarin provides app developers with a comprehensive app development toolkit and software package. The package includes highly compatible IDEs (for both Mac and VS), distribution and analytics tools such as Hockeyapp and testing tools such as Xamarin Test Cloud. With Xamarin, developers no longer have to invest their time and money in incorporating third-party tools. It uses Mono execution environment for both the platforms, i.e. Android and iOS. Framework C# has matured from its infancy, and the Xamarin framework now provides strong-safety typing which ensures prevention of unexpected code behavior. Since C# supports .NET framework, the language can be used with numerous .NET features like ASynC, LINQ, and Lambdas. Compilation Xamarin.iOS and Xamarin.Android are the two major products offered by this platform. In case of iOS code compilation, the platform follows Ahead-of-Time compilation whereas in Android Just-in-Time compilation approach is followed. However, the compilation process is fully automated and is equipped with features to tackle and resolve issues like memory allocation and garbage collection. 
App working principles
Xamarin follows an MVVM architecture coupled with two-way data binding, which supports collaborative work across different teams. If your project doesn't have strict performance requirements, Xamarin is a good choice because it offers a lot of process flexibility.

How exactly does it work?
C# forms the basis of the platform and also gives developers access to the native platform APIs. This lets Xamarin share a common backend codebase that can be paired with UIs built against the native SDKs.

An overview of React Native
Created by Facebook, React Native is one of the most widely used cross-platform frameworks. By letting mobile developers build efficient apps with good quality and maintainability, demand for React Native apps is likely to keep growing.

Key features
React Native apps are written in JavaScript on both platforms; under the hood, native modules are written in Java or Kotlin for Android and Objective-C or Swift for iOS. The platform ships with numerous built-in tools, libraries and components. Its standout feature, hot reloading, lets developers apply code changes without waiting for a full recompilation.

Development environment
Building a UI with React Native involves a wider set of steps and processes. The platform supports fast iteration and can even swap in different code while the application is running. A frequently cited limitation is its incomplete 64-bit support, which can affect run-time speed on iOS.

Architecture
React Native supports a modular architecture: code can be organized into functional, independent blocks. This brings process flexibility and makes upgrades and application updates easier.

Compilation
React Native uses just-in-time (JIT) compilation for Android applications. On iOS, JIT compilation is not available because Apple's platform restrictions prevent apps from generating executable code at run time, so the JavaScript is interpreted instead.

App working principles
React Native follows a one-way data binding approach, which helps boost overall application performance. Two-way data binding can still be implemented manually, which is useful for keeping code coherent and reducing complex errors.

How does it actually work?
React Native lets developers build applications using React and JavaScript. A running React Native app is best described as thread-based interaction: one thread handles the UI and user gestures, while another, React Native-specific thread runs the application's business logic and determines the structure and behavior of the user interface. Communication between the threads is asynchronous, batched and serialized.
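To make that working model a little more concrete, here is a minimal, illustrative sketch of a React Native screen written in plain JavaScript. The component name, labels and styles are invented for the example and it assumes nothing beyond a standard React Native project; it is not taken from either framework's documentation. It shows the declarative, component-based UI, the one-way flow of data from state to the rendered views, and how "two-way" binding is wired up manually with a controlled TextInput:

```javascript
// Minimal React Native sketch (illustrative names; assumes a standard
// React Native project, e.g. one generated with `npx react-native init`).
import React, { useState } from 'react';
import { SafeAreaView, Text, TextInput, Button, StyleSheet } from 'react-native';

export default function GreetingScreen() {
  // State lives in JavaScript; data flows one way, from state to the UI.
  const [name, setName] = useState('');

  return (
    <SafeAreaView style={styles.container}>
      {/* The UI is declared with React components that map to native views. */}
      <Text style={styles.label}>Who are we greeting?</Text>

      {/* "Two-way" binding is wired manually: the value comes from state,
          and onChangeText pushes user input back into state. */}
      <TextInput
        style={styles.input}
        value={name}
        onChangeText={setName}
        placeholder="Type a name"
      />

      <Text style={styles.label}>Hello, {name || 'stranger'}!</Text>

      {/* Business logic runs on the JavaScript thread; gestures and
          rendering are handled natively. */}
      <Button title="Reset" onPress={() => setName('')} />
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, padding: 16, justifyContent: 'center' },
  label: { fontSize: 18, marginVertical: 8 },
  input: { borderWidth: 1, borderColor: '#999', borderRadius: 4, padding: 8 },
});
```

Because the whole screen is ordinary JavaScript, hot reloading can pick up an edit to a file like this and refresh the running app without rebuilding the native Android or iOS projects.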
Learning curves of Xamarin and React Native
To master Xamarin you need to be skilled in .NET. The platform gives you easy and complete access to the platform SDK capabilities through the Xamarin.iOS and Xamarin.Android libraries, and because it ships as a complete package it reduces the need to integrate third-party tools and libraries. To become a professional Xamarin developer, all you really need is expertise in C# and .NET plus some basic working knowledge of the underlying native APIs.

Mastering React Native, on the other hand, requires thorough knowledge of JavaScript. Since the platform doesn't ship well-integrated libraries and tools, familiarity with third-party libraries and tooling is essential.

Key differences between Xamarin and React Native
Trello, Slack and GitHub use Xamarin, while companies such as Facebook, Walmart and Instagram run React Native-based mobile applications. Although a fully native application offers better performance, not every company can afford to develop a separate app for each platform; cross-platform frameworks such as Xamarin are the main alternative, offering higher development flexibility. Where Xamarin offers multi-platform support, cost-effectiveness and time savings, React Native allows faster development and increased efficiency. Since Xamarin provides complete hardware support, hardware compatibility issues are reduced; React Native, on the other hand, provides ready-made components that reduce the need to write everything from scratch. With some integration work and investment in third-party libraries and plugins, a React Native app can eliminate the need for WebView functions, which in turn reduces its memory requirements. Xamarin instead provides a comprehensive toolkit with zero spend on additional plugins and third-party sources, though it offers more restricted access to open-source technologies. A good-quality React Native application takes more than a few weeks to develop, which increases not only development time but also app complexity. If time consumption is one drawback of React Native, then the additional optimization needed to support larger applications counts as a limitation of Xamarin. And while frequent updates tend to shrink the customer base of React Native apps, stability complaints and app crashes are common issues with Xamarin applications.

When to go for Xamarin?
Case #1: The foremost advantage of Xamarin is that all you need is a solid command of C# and .NET.
Case #2: One of the most exciting trends in mobile development right now is the Internet of Things. Considering the rapid increase in demand for IoT, if you are developing a product that involves multiple hardware capabilities and user devices, make Xamarin your first choice: it is compatible with numerous IoT devices, which eliminates the need for third-party tools to implement that functionality.
Case #3: If you are constrained by budget and time, Xamarin is the answer to most of your app development worries. Since the backend code for Android and iOS is shared, it reduces development time and effort and is budget friendly.
Case #4: The integrated Test Cloud is probably the best part of Xamarin. Even though Test Cloud takes up a fraction of your budget, it is an expense worth making: it recreates the activity of real users and ensures that your application works well on a wide range of devices and reaches as many users as possible.

When to go for React Native?
Case #1: When it comes to game development, Xamarin is not a wise choice. With its C# framework and AOT compilation, fast rendering is hard to achieve. A gaming application is updated dynamically, is highly interactive and depends on high-performance graphics, and Xamarin's poor fit for heavy graphics makes it a weak option for game development. For these reasons, many developers choose React Native for high-performing gaming applications.
Case #2: Application size is an indirect indicator of an app's success with its target users. Many smartphone users have their phone's memory stuffed with their own photos and videos, leaving barely any storage for another application, and Xamarin-based apps are relatively heavy and occupy more space than their React Native counterparts.

Wondering which framework to choose?
Xamarin and React Native are the two major players in the mobile app development industry, so it is entirely up to you whether to proceed with React Native or Xamarin. Your decision, however, should be based on the type of application, its requirements and the development cost. If you want a faster development process, go for Xamarin; if you are developing a game, an e-commerce site or a social app, go for React Native.

Author Bio
Khalid Durrani is an inbound marketing expert and a content strategist. He likes to cover topics related to design, the latest tech, startups, IoT, artificial intelligence, big data, AR/VR, UI/UX and much more. Currently, he is the global marketing manager of LogoVerge, an AI-based design agency.

The Ionic team announces the release of Ionic React Beta
React Native 0.59 RC0 is now out with React Hooks, and more
Changes made to React Native Community's GitHub organization in 2018 for driving better collaboration

Is Golang truly community driven and does it really matter?

Sugandha Lahoti
24 May 2019
6 min read
Golang, also called Go, is a statically typed, compiled programming language designed by Google. Go is going from strength to strength, with more engineers than ever using it at work, according to the Go User Survey 2019. That popularity led the Hacker News community into a heated debate last week over the claim that "Go is Google's language, not the community's".

The thread was started by Chris Siebenmann, who works at the Department of Computer Science, University of Toronto. His blog post reads, "Go has community contributions but it is not a community project. It is Google's project." Chris explicitly states that the community's voice doesn't matter very much for Go's development, and that we have to live with that. He argues that Google is the gatekeeper for community contributions: it alone decides what is and isn't accepted into Go. If a developer wants a significant feature accepted into Golang, building consensus in the community matters far less than persuading the Go core team. He cites the example of how one member of Google's Go core team set aside the dependency management tooling the Go community had been building and brought in a relatively radical new model, Go modules. Chris believes the Go team cares about the community and wants it to be involved, but only up to a certain point, and he wants the core team to be bluntly honest about that rather than pretend and implicitly lead people on. He further adds, "Only if Go core team members start leaving Google and try to remain active in determining Go's direction, can we [be] certain Golang is a community-driven language." He then compares Go with C++, calling the latter a genuinely community-driven language: there are several major C++ implementations that are genuine community projects, and the direction of C++ is set by an open standards committee with a relatively distributed membership.
https://twitter.com/thatcks/status/1131319904039309312

What is better - community-driven or corporate ownership?
There has long been an opinion among developers that some open source programming projects are really commercial projects driven mainly by a single company. Looking at the top open source projects, most have some kind of corporate backing: Apple's Swift, Oracle's Java and MySQL, Microsoft's TypeScript, Google's Kotlin, Golang and Android, MongoDB and Elasticsearch, to name a few. Which brings us to the question: what does corporate ownership of an open source project really mean? A benevolent dictatorship can cut both ways. If the community suggests a change that turns out to be a bad idea, the corporate team can intervene and stop it; on the other hand, it can also block good ideas from the community, even when only a handful of core team members disagree.

Chris's post received a lot of attention from developers on Hacker News, who both sided with and disagreed with his opinion. One comment reads, "It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way." Another reads, "Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. Many claims to represent the community, but not the community that doesn't share their opinion. Without clear leaders, I fear technical direction and taste will be about politics which seems more uncertain/risky. I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down."

Rather than framing the issue as community versus corporate, a more accurate lens is how much market value depends on a given project. If a project is thriving, the enterprise behind it will usually make sensible decisions about it. That raises another, entirely valid and important question: should open source projects be driven by their market value at all? A further common argument is that the core team's full-time job is to take care of the language rather than make errant decisions in response to community backlash. Google (or Microsoft, or Apple, or Facebook for that matter) will not make or block a change in a way that kills an entire project. But that does not mean they should sit idle and ignore the community's response. Ideally, the more a project genuinely belongs to its community, the more it will reflect what the community wants and needs.

Google also has a propensity to kill its own products. What happens when Google is no longer as interested in Golang? The company could suddenly leave the community to figure out a governance model by pulling the original authors onto some other exciting new project, or it could let the authors work on Golang only in their spare time at home or at weekends. While Google's history shows that many of its dead products were actually stepping stones toward something better and more successful, how much of that logic carries over to an open source project is worth thinking about.

As one Hacker News user wrote, "Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features simply because the community wants them." Another says, "The way how Golang team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback and clear explanation of how the process works." Rest assured, if any major change is made to Go, even one as drastic as killing it, it will not be done without consulting the community and taking its feedback.

Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work.
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
State of Go February 2019 – Golang developments report for this month released