
Tech Guides

852 Articles

Tim Berners-Lee’s Solid - Trick or Treat?

Natasha Mathur
31 Oct 2018
2 min read
Solid is a set of conventions and tools developed by Tim Berners-Lee for building decentralized social applications based on Linked Data principles. It is modular and extensible, and it relies as much as possible on existing W3C standards and protocols. The open-source project was launched earlier this month for “personal empowerment through data”.

Why are people excited about Solid?

Solid aims to radically transform the way web applications work today, resulting in true data ownership as well as improved privacy. It hopes to empower individuals, developers, and businesses across the globe with completely new ways to build innovative and trusted applications, giving users the freedom to choose where their data resides and who is allowed to access it. Solid collects all the data you might want to share with advertisers or apps into a “Solid POD,” a personal online data repository. You decide which app gets your data and which does not. Best of all, you don’t need to enter any data into apps that support Solid: you simply allow or disallow access to the Solid POD, and the app takes care of the rest on its own. Solid also offers every user a choice of where their data gets stored and which specific people or groups can access selected elements of that data. Additionally, you can link to and share the data with anyone, be it your family, friends, or colleagues.

Is Solid a trick or a treat?

That being said, a majority of companies on the web are extremely sensitive when it comes to their data and might not be interested in losing control over it, so wide adoption seems to be a hurdle as of now. Also, since Solid only launched this month, there isn’t much community support around it yet. However, Solid is surely taking us a step towards a freer and more open Internet, and seems to be a solid TREAT (pun intended) for all of us. For more information on Solid, check out the official Inrupt blog.
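The allow/disallow model described above can be sketched as a toy access-control object. Everything here (the class name, the keys, the app names) is hypothetical and only illustrates the idea of owner-granted access to a personal data store; real Solid uses Linked Data and W3C access-control standards, not this API.

```python
# Toy sketch of the Solid POD idea: data lives in the user's own store,
# and apps only see what the owner explicitly grants.
class SolidPod:
    def __init__(self):
        self._data = {}
        self._grants = {}          # app name -> set of allowed keys

    def put(self, key, value):
        self._data[key] = value

    def grant(self, app, key):
        self._grants.setdefault(app, set()).add(key)

    def revoke(self, app, key):
        self._grants.get(app, set()).discard(key)

    def read(self, app, key):
        # An app without a grant gets nothing, not even a copy of the data.
        if key not in self._grants.get(app, set()):
            raise PermissionError(f"{app} may not read {key}")
        return self._data[key]

pod = SolidPod()
pod.put("email", "user@example.org")
pod.grant("calendar-app", "email")
print(pod.read("calendar-app", "email"))   # allowed while the grant stands
pod.revoke("calendar-app", "email")        # the owner can pull access any time
```

The key property is that access is a revocable grant held by the owner, not a copy handed to the app.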

Deep reinforcement learning - trick or treat?

Bhagyashree R
31 Oct 2018
2 min read
Deep Reinforcement Learning (Deep RL) is the new buzzword in the machine learning world. Deep RL combines reinforcement learning and deep learning to achieve human-level performance: it brings together reinforcement learning’s self-taught search for strategies that lead to the greatest long-term rewards and deep learning’s ability to let agents construct and learn their own knowledge directly from raw inputs. The fusion of these two approaches produced a string of algorithms, starting with DeepMind’s Deep Q Network (DQN), a deep variant of the Q-learning algorithm that reached human-level performance in playing Atari games. Combining Q-learning with reasonably sized neural networks and some optimization tricks, you can achieve human or superhuman performance in several Atari games.

Deep RL was also behind one of the field’s most notable advancements, AlphaGo. The AI agent by DeepMind was able to beat the human world champions Lee Sedol (4-1) and Fan Hui (5-0). DeepMind then released advanced versions of the agent, AlphaGo Zero and AlphaZero. Many recent works from researchers at UC Berkeley have shown how both reinforcement learning and deep reinforcement learning have enabled the control of complex robots, both for locomotion and navigation.

Despite these successes, it is quite difficult to find cases where deep RL has added practical real-world value; its current status is that of a research topic. One of its limitations is that it assumes the existence of a reward function, which is either given or hand-tuned offline. To get the desired results, your reward function must capture exactly what you want: RL has an annoying tendency to overfit to your reward, resulting in things you haven’t expected. This is why Atari is a benchmark — it is not only easy to get a lot of samples, but the goal is fairly straightforward, i.e., to maximize the score.
With so many researchers working towards improved Deep RL algorithms, it surely is a treat.

AlphaZero: The genesis of machine intuition
DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Understanding Deep Reinforcement Learning by understanding the Markov Decision Process [Tutorial]
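The tabular Q-learning at the heart of DQN fits in a few lines; DQN’s contribution is replacing the Q table below with a neural network trained on raw pixels. The five-state corridor environment here is invented purely for illustration.

```python
import random

# Tabular Q-learning on a toy five-state corridor (hypothetical
# environment; reward only for reaching the rightmost state).
N_STATES = 5          # states 0..4
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(300):                      # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best next value
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy should move right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Note how the reward function fully defines what “success” means here — exactly the sensitivity the article describes: change the reward and the learned policy changes with it.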

AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is Artificial Intelligence for IT operations; the term was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies. Read also: What is AIOps and why is it going to be important? In its report, Gartner estimated that by 2020, approximately 50% of enterprises will be actively using AIOps platforms to provide insight into both business execution and IT operations.

AIOps has grown fairly fast since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with planning for and resolving IT issues, helps you gain insight to improve your operational efficiency. Companies are adopting AIOps because it eliminates tedious routine tasks, minimizes costly downtime, and helps you gain insights from data that’s trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can also go wrong. When an event is unusual, the system is less likely to predict it, and events that have never occurred before are entirely beyond machine learning’s ability to predict or analyze. Additionally, it can sometimes give false negatives and false positives: false negatives can happen when the tests are not sensitive enough to detect possible issues, while false positives can result from incorrect configuration. This essentially means there will always be a need for human operators to review these alerts and warnings.

Is AIOps a trick or treat?

AIOps is bringing new opportunities for the IT workforce, such as the AIOps Data Scientist, who will focus on solutions that correlate, consolidate, alert, analyze, and provide awareness of events.
Dell defines its Data Scientist role as someone who will “contribute to delivering transformative AIOps solutions on their SaaS platform”. With AIOps, the IT workforce won’t just disappear; it will evolve. AIOps is definitely a treat because it reduces manual work and provides an intuitive way of responding to incidents.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Tech hype cycles: do they deserve your attention?
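A minimal sketch of the kind of baseline-deviation check an AIOps platform automates at scale. The latency samples and the 3-sigma threshold below are invented for illustration; real platforms learn far richer baselines across many correlated metrics.

```python
import statistics

# Learn a simple baseline from historical latency readings (ms).
baseline = [200, 210, 195, 205, 190, 202, 208, 199, 203, 196]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomaly(sample_ms, threshold=3.0):
    """Alert when a reading sits more than `threshold` deviations from baseline."""
    return abs(sample_ms - mean) / stdev > threshold

print(is_anomaly(204))   # a normal reading: no alert
print(is_anomaly(450))   # a latency spike: alert
```

The article’s false positive/negative trade-off lives in `threshold`: set it too high and real incidents slip through (false negatives); set it too low and normal jitter pages the on-call engineer (false positives) — which is why a human still reviews the alerts.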

The decentralized web - Trick or Treat?

Bhagyashree R
31 Oct 2018
3 min read
The decentralized web refers to a web that is not dominated by powerful monopolies. It’s actually a lot like the web we have now, but with one key difference: its underlying architecture is decentralized, so it becomes much more difficult for any one entity to take down a single web page, website, or service. It takes control away from powerful tech monopolies.

Why are people excited about the decentralized web?

In effect, the decentralized web is a lot like the earliest version of the web. It aims to roll back the changes that came with Web 2.0, when we began to communicate with each other and share information through centralized services provided by big companies such as Google, Facebook, Microsoft, and Amazon. The decentralized web aims to make us less dependent on these tech giants. Instead, users will have control over their data, enabling them to directly interact and exchange messages with others in their network.

Blockchain offers one route to a decentralized web. By creating a decentralized public digital ledger of transactions, you can take power away from established monopolies and hand it back to those who are simply part of the decentralized network. We saw some movement in this direction with the launch of Tim Berners-Lee’s startup, Inrupt, whose goal is to break the tech giants’ monopolies on user data. Tim Berners-Lee hopes to achieve this with the help of his open source project, Solid, which gives every user a choice of where their data is stored, which specific people and groups can access selected elements of that data, and which apps they use. Further examples are Cloudflare’s IPFS Gateway, which allows you to easily access content from the InterPlanetary File System (IPFS), and, more recently, the Origin DApp, a true peer-to-peer marketplace on the Ethereum blockchain built with origin-js.
A note of caution

Despite these advances, the decentralized web is still in its infancy. There are still no “killer apps” that promise the same level of features we are used to now, and many of the apps that do exist are clunky and difficult to use. One of the promises the decentralized web makes is being faster, but there is a long way to go on that front. There are much bigger issues around governance, too: how will the decentralized web come together when no one is in charge, and what guarantees that it will not become centralized again?

Is the decentralized web a treat… or a trick?

Going by its current status, the decentralized web seems to be a trick. No one likes change, and it takes a long time to get used to it; the decentralized web has to offer much more before it can replace the functionality we currently enjoy.

Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”
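The tamper-evidence property of the public ledger mentioned above can be sketched with a toy hash chain: each block commits to the previous block’s hash, so rewriting any entry invalidates everything after it. This is an illustration of the data structure only — no consensus, networking, or mining, which are what a real blockchain adds.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block whose hash covers both its data and its predecessor."""
    payload = {"data": data, "prev": prev_hash}
    block = dict(payload)
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        # Every block's stored hash must match a recomputation...
        if block["hash"] != make_block(block["data"], block["prev"])["hash"]:
            return False
        # ...and must link to the hash of the block before it.
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("alice pays bob", genesis["hash"])]
print(chain_is_valid(chain))        # the untouched chain verifies
chain[0]["data"] = "tampered"       # rewrite history...
print(chain_is_valid(chain))        # ...and verification fails
```

In a decentralized network, every participant can run this check independently, which is what removes the need for a trusted central gatekeeper.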

Teaching AI ethics - Trick or Treat?

Natasha Mathur
31 Oct 2018
5 min read
The Public Voice Coalition announced Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018, last week. “The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI”, reads EPIC’s guidelines page.

Artificial intelligence ethics aims to improve the design and use of AI, minimize the risks to society, and ensure the protection of human rights. AI ethics focuses on values such as transparency, fairness, reliability, validity, accountability, accuracy, and public safety.

Why teach AI ethics?

Without AI ethics, the wonders of AI can turn into the dangers of AI, posing serious threats to society and even to human lives. One example came earlier this year, when an autonomous Uber car, a 2017 Volvo SUV traveling at roughly 40 miles an hour, killed a woman in the street in Arizona. The incident brings out the challenges and nuances of building an AI system with the right set of values embedded in it. As different factors are weighed for an algorithm to reach the required set of outcomes, it is more than possible that these criteria are not always shared transparently with users and authorities. Other non-life-threatening but still troubling examples include the time Google Allo responded with a turban emoji when asked to suggest three emoji responses to a gun emoji, and Microsoft’s Twitter bot Tay, which tweeted racist and sexist comments.
AI scientists should be taught early on that these values are meant to be at the forefront when deciding on factors such as the design, logic, techniques, and outcome of an AI project.

Universities and organizations promoting learning about AI ethics

What’s encouraging is that organizations and universities are taking steps (slowly but surely) to promote the importance of teaching ethics to students and employees working with AI or machine learning systems. For instance, the World Economic Forum Global Future Councils on Artificial Intelligence and Robotics has launched a “Teaching AI ethics” project that includes creating a repository of actionable and useful materials for faculties wishing to add social inquiry and discourse to their AI coursework. This is a great opportunity, as the project connects professors from around the world and offers them a platform to share, learn, and customize their curricula to include a focus on AI ethics. Cornell, Harvard, MIT, Stanford, and the University of Texas are some of the universities that recently introduced courses on ethics in designing autonomous and intelligent systems. These courses emphasize AI’s ethical, legal, and policy implications, along with teaching students to deal with challenges such as biased data sets. Mozilla has taken the initiative to make people more aware of the social implications of AI in our society through its Creative Media Awards. “We’re seeking projects that explore artificial intelligence and machine learning. In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it’s more important than ever to support artwork and advocacy work that educates and engages internet users”, reads the Mozilla awards page.
Moreover, Mozilla also announced a $3.5 million award for a ‘Responsible Computer Science Challenge’ to encourage teaching ethical coding to CS graduates. Other examples include Google’s AI ethics principles, announced back in June, to abide by when developing AI projects, and SAP’s AI ethics guidelines and advisory panel, created last month. SAP says it designed these guidelines because it “considers the ethical use of data a core value. We want to create software that enables intelligent enterprise and actually improves people’s lives. Such principles will serve as the basis to make AI a technology that augments human talent”. Other organizations, like DrivenData, have released tools such as Deon, a handy tool that helps data scientists add an ethics checklist to their data science projects, making sure that all projects are designed with ethics at the center. Some, however, feel that having to explain how an AI system reached a particular outcome (in the name of transparency) can put a damper on its capabilities. According to David Weinberger, a senior researcher at the Harvard Berkman Klein Center for Internet & Society, “demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid”.

Teaching AI ethics - trick or treat?

AI has transformed the world as we know it. It has taken over different spheres of our lives and made things much simpler for us. However, to make sure that AI continues to deliver its transformative and evolutionary benefits effectively, we need ethics. From governments to tech organizations to young data scientists, everyone must use this technology responsibly. Having AI ethics in place is an integral part of the AI development process and will shape a healthy future for robotics and artificial intelligence. That is why teaching AI ethics is a sure-shot treat: a TREAT that will boost the productivity of humans in AI and help build a better tomorrow.

Service mesh - Trick or Treat?

Melisha Dsouza
31 Oct 2018
2 min read
‘Service mesh’ is a relatively new term that has gained visibility in the past year. A service mesh is a configurable infrastructure layer for a microservices application that makes communication between service instances flexible, reliable, and fast.

Why are people talking about ‘service meshes’?

Modern applications contain a range of (micro)services that allow them to run effectively. Load balancing, traffic management, routing, security, user authentication — all of these things need to work together properly if the application is going to function as intended. Managing these various services across a whole deployment of containers poses a challenge for those responsible for updating and maintaining them.

How does a service mesh work?

Enter the service mesh. It works by delivering these services from within the compute cluster through a set of APIs. These APIs, when brought together, form the ‘mesh’. This makes it much easier to manage software infrastructures of particular complexity, which is why organizations like Netflix and Lyft have used them.

Trick or treat?

With service meshes addressing some of the key challenges of microservices, this is definitely a treat for 2018 and beyond.

NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0
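One concrete job a mesh takes out of application code is retrying a failed call against another healthy instance of a service. The sketch below models that sidecar-style retry policy in plain Python; the instance addresses and the failure model are entirely hypothetical, and a real mesh (e.g. an Envoy-based one) would do this transparently at the network layer rather than in your code.

```python
# Hypothetical pool of instances behind one logical service.
INSTANCES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def flaky_call(instance):
    """Stand-in for a network request; one instance is 'down'."""
    if instance == "10.0.0.2:8080":
        raise ConnectionError(instance)
    return f"ok from {instance}"

def call_with_retries(request_fn, instances, retries=3):
    """The retry-across-instances policy a sidecar proxy would apply."""
    last_error = None
    for attempt in range(retries):
        instance = instances[attempt % len(instances)]
        try:
            return request_fn(instance)
        except ConnectionError as err:
            last_error = err        # this instance failed; try the next one
    raise last_error

print(call_with_retries(flaky_call, INSTANCES))
```

The point of the mesh is that policies like this (plus load balancing, mTLS, routing) are configured once for the whole cluster instead of being reimplemented inside every service.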

5G - Trick or Treat?

Melisha Dsouza
31 Oct 2018
3 min read
5G - or "fifth generation" - mobile internet is coming very soon, possibly early next year. It promises much faster data download speeds: 10 to 20 times faster than we have now. With improved upload speeds, wider coverage, and more stable connections, 5G is something to watch out for.

Why are people excited about 5G?

Mobile is today the main way people use the internet, and that change has come at an amazing pace. With this increase in mobile users, demand for services like music and video streaming has skyrocketed. This can cause particular problems when lots of people in the same area access online mobile services at the same time, congesting existing spectrum bands and resulting in service breakdowns. 5G will use the radio spectrum much more efficiently, enabling more devices to access mobile internet services at the same time.

But it’s not just about mobile users; it’s also about the internet of things and smart cities. For example, as cities look to become better connected, with everything from streetlights to video cameras in some way connected to the internet, this network will support that infrastructure in a way that would previously have been impossible. From swarms of drones carrying out search and rescue missions, to fire assessments and traffic monitoring, 5G really could transform the way we understand and interact with our environment. It’s not just about movies downloading faster; it’s also about autonomous vehicles communicating with each other seamlessly and reading live map and traffic data to take you to your destination in a more efficient and environmentally friendly way. 5G will also go hand in hand with AI, propelling its progress!

5G: trick or treat?

All this being said, there will be an increased cost to employ skilled professionals to manage 5G networks. Users will also need to buy new smartphones that support the network — even some of the most up-to-date phones will need to be replaced.
When 4G was introduced in 2009/10, compatible smartphones came onto the market before the infrastructure had been fully rolled out. That’s a possibility with 5G, but it does look like it might take a little more time. The technology is still under development and will take some time to be fully operational without any issues. We will leave it up to you to decide whether the technology is a trick or a treat!

How 5G Mobile Data will propel Artificial Intelligence (AI) progress
VIAVI releases Observer 17.5, a network performance management and diagnostics tool

Digital wellbeing - Trick or Treat?

Sugandha Lahoti
31 Oct 2018
2 min read
Digital wellbeing is coming into full view as Facebook, Instagram, Google’s Android, and Apple’s iOS 12 all introduce digital wellbeing dashboards and features to their platforms. Essentially, digital wellbeing features enable users to understand their digital habits, control the demands technology places on their attention, and focus on what actually matters.

Google introduced a set of features named ‘Digital Wellbeing’ with its Android 9 Pie OS. The new features include a Dashboard, to monitor how long you’ve been using your phone and specific apps; an App timer, to help users see which apps they use and set a daily time limit on each; Do Not Disturb, to suppress notifications from texts or emails; and Wind Down, which turns your screen to grayscale, making apps less tempting as your bedtime approaches. Apple went a step further than Google when it comes to parental controls: while Google’s usage dashboard and limits seem primarily designed for users to limit their own behavior, Apple’s will let parents remotely manage their kids’ usage from their own devices. Facebook is also not far behind, with a new tool dubbed “Your Time on Facebook” that helps users see their time spent in the Facebook app on each of the last seven days, as well as their average time spent per day.

However, there is no proven research behind these features; much of what we know is based not on peer-reviewed research but on anecdotal data. Sometimes educational apps and videos meant for young children also contain ads on topics irrelevant to the learning objective, and these ads may negatively influence young minds. There is growing pressure from public interest groups for the FTC and other government bodies to launch an investigation into these apps and hold developers accountable for their practices.
Overall, digital wellbeing features sound like a real step forward by these tech giants in making phones less addictive. If done right, this would help users focus on what actually matters and may well prove to be a TREAT. But for now, we are reserving our judgement.

Tech Titans, Acquisitions and Regulation - Trick or Treat?
Edge computing - Trick or Treat?
WebAssembly - Trick or Treat?

Edge computing - Trick or Treat?

Melisha Dsouza
31 Oct 2018
4 min read
According to IDC’s Digital Universe update, the number of connected devices is projected to expand to 30 billion by 2020 and 80 billion by 2025. IDC also estimates that the amount of data created and copied annually will reach 180 zettabytes (180 trillion gigabytes) in 2025, up from less than 10 zettabytes in 2015. Thomas Bittman, vice president and distinguished analyst at Gartner Research, predicted in a session on edge computing at the recent Gartner IT Infrastructure, Operations Management and Data Center Conference: “In the next few years, you will have edge strategies — you’ll have to.” The prediction was consistent with a real-time poll conducted at the conference, in which 25% of the audience said they use edge computing technology and more than 50% plan to implement it within two years.

How does edge computing work?

2018 marked the era of edge computing, with the increase in the number of smart devices and the massive amounts of data they generate. Edge computing allows data produced by internet of things (IoT) devices to be processed near the edge of a user’s network. Instead of relying on the shared resources of large data centers in a cloud-based environment, edge computing places more demands on endpoint devices and on intermediary devices like gateways, edge servers, and other new computing elements to build out a complete edge computing environment.

Some use cases of edge computing

The complex architecture of today’s devices demands a more comprehensive computing model to support its infrastructure. Edge computing caters to this need and reduces the latency, overhead, and cost issues associated with centralized computing options like the cloud. A good example of this is the launch of the world’s first digital drilling vessel, the Noble Globetrotter I, by London-based offshore drilling company Noble Drilling. The vessel uses data to create virtual versions of some of the key equipment on board.
If the drawworks on this digitized rig begins to fail prematurely, information based on a “digital twin” of that asset notifies a team of experts onshore. The digital twin is a virtual model of the device that lives inside the edge processor and can point out tiny performance discrepancies human operators may easily miss. Keeping watch over all pertinent data on a dashboard, the onshore team can collaborate with the rig’s crew to plan repairs before a failure. Noble believes this move towards edge computing will lead to more efficient, cost-effective offshore drilling: by predicting potential failures in advance, Noble can avert breakdowns and spare the expense of replacing or repairing equipment.

Another item that caught our attention was Microsoft’s $5 billion investment in IoT to empower the intelligent cloud and the intelligent edge. Azure Sphere is one of Microsoft’s intelligent edge solutions to power and protect connected microcontroller unit (MCU)-powered devices. MCU-powered devices power everything from household stoves and refrigerators to industrial equipment, and considering that 9 billion MCU-powered devices ship every year, we need all the help we can get on the security front! That’s the intelligent edge on the consumer end of the application spectrum.

2018 also saw progress in the development of edge computing tools and solutions across the spectrum, from hardware to software. Take for instance OpenStack Rocky, one of the most widely deployed open source cloud infrastructure platforms. It is designed to accommodate edge computing requirements by deploying containers directly on bare metal. OpenStack Ironic brings improved management and automation capabilities to bare metal infrastructure, letting users manage physical infrastructure just like they manage VMs, especially with the new Ironic features introduced in Rocky.
Intel’s OpenVINO computer vision toolkit is yet another example of edge computing helping developers streamline their deep learning inference and deploy high-performance computer vision solutions across a wide range of use cases. Baidu, Inc. released the Kunlun AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers.

Edge computing - trick or treat?

However, edge computing does come with disadvantages, like the steep cost of deploying and managing an edge network, security concerns, and the burden of performing numerous operations. The final verdict: edge computing is definitely a treat when complemented by embedded AI, enhancing networks to promote efficient analysis and improve security for business systems.

Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
Ubuntu 18.10 ‘Cosmic Cuttlefish’ releases with a focus on AI development, multi-cloud and edge deployments, and much more!
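The core trade-off behind every use case above — process raw readings on the device, ship only compact summaries and alerts upstream — can be sketched in a few lines. The sensor values and the alert threshold here are invented for illustration.

```python
# Edge-side aggregation: reduce a raw sensor stream to a summary + alerts,
# so only a fraction of the data ever crosses the network to the cloud.
def summarize_on_edge(readings, alert_threshold):
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    # Only out-of-range readings are worth sending upstream immediately.
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts      # this, not the raw stream, goes over the network

raw = [21.0, 21.4, 20.9, 21.2, 35.7, 21.1]   # one abnormal temperature reading
summary, alerts = summarize_on_edge(raw, alert_threshold=30.0)
print(summary["count"], len(alerts))
```

This is the same pattern as the rig’s digital twin on a smaller scale: the edge processor watches every reading, and the onshore (or cloud) side only sees summaries and the discrepancies that matter.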

WebAssembly - Trick or Treat?

Prasad Ramesh
31 Oct 2018
1 min read
WebAssembly is a low-level language that works in a binary format close to machine code. It defines an AST encoded in a compact binary format, with an equivalent plain-text format in which you can write and debug code. It made a popular appearance in many browsers last year and is catching on due to its ability to run heavier apps at speed in a browser window, and tools and languages are being built for it.

Why are developers excited about WebAssembly?

Developers are excited because WebAssembly can potentially run heavy desktop games and applications right inside your browser window. As Mozilla shares plans to bring more functionality to WebAssembly, modern web browsing will become more robust. However, the binary format, Wasm, poses some security concerns: Wasm binary applications cannot easily be checked for tampering, and some features are even being held back from WebAssembly until it is more secure against attacks like Spectre and Meltdown.

Machine generated videos like Deepfakes - Trick or Treat?

Natasha Mathur
30 Oct 2018
3 min read
A Reddit user named “DeepFakes” posted realistic-looking explicit videos of celebrities last year, using deep learning techniques to insert celebrities’ faces into adult movies. Since then the term “deepfakes” has been used to describe deep learning techniques that help create realistic-looking fake videos or images. The video tampering is usually done using generative adversarial networks.

Why is everyone afraid of deepfakes?

Deepfakes are problematic because they make it very hard to differentiate between fake and real videos or images. This gives people the liberty to use deepfakes to promote harassment and illegal activities. The most common uses of deepfakes are revenge porn, fake celebrity videos, and political abuse. For instance, people create face-swapped porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. This not only counts as cyberbullying but poses a major threat overall, as one could create a fake video showing world leaders declaring war on a country. Moreover, given that deepfakes seem so real, their victims often suffer feelings of embarrassment and shame, and deepfakes cause major reputational harm. One such example is that of 24-year-old Noelle Martin, whose battle with deepfake pornography started six years ago, when anonymous predators stole her non-sexual images online and doctored them into pornographic videos. Martin says she faces harassment from people to this day. Other victims of deepfake pornography include celebrities such as Michelle Obama, Emma Watson, Natalie Portman, Ivanka Trump, and Kate Middleton. But deepfakes aren’t limited to pornography; they have made their way into many other spheres. Deepfakes can also be used as a weapon of misinformation, since they can maliciously hoax governments and populations and cause internal conflict.
From destroying careers with fabricated evidence of inappropriate behavior to showing soldiers killing innocent civilians, deepfakes have been wreaking havoc.

In defense of deepfakes

Just as any tool can be used for good or bad, deepfakes are at heart an effective machine learning technique for creating realistic videos. Even though deepfakes are mostly used for inappropriate activities, some have put the technology to good use. For instance, GANs, the generative adversarial networks behind deepfakes, can create realistic images of skin lesions and examples of liver lesions, which is valuable in medical research. Other examples include filmmakers using deepfakes to swap in new backgrounds, Snapchat's face-swap photo filters, and face-swap e-cards (e.g., the JibJab app).

Are deepfakes trick or treat?

If we draw up a pros and cons list for deepfakes, the cons outweigh the pros as of today. Although the technology has potentially good applications, it is mostly used to harass and misinform people. Deepfakes have a long way to go before earning a good reputation; right now they mean fake videos, fake images, false danger warnings, and revenge porn. Trick or treat? I spy a total TRICK!


Tech Titans, Acquisitions and Regulation - Trick or Treat?

Sugandha Lahoti
29 Oct 2018
5 min read
In probably the biggest open source acquisition ever, IBM announced on Sunday that it has acquired Red Hat for $34 billion. This is consistent with Silicon Valley giants' increasing appetite for growth. The past few months also saw the emergence of the trillion-dollar tech titans, a milestone that has mesmerised even Wall Street. Apple and Amazon rose high in their stocks in the race to a $1 trillion market cap, with Google and Microsoft relentlessly chasing the same goal. Even though Facebook and Twitter stocks took heavy blows thanks to the controversies surrounding their platforms, they continue to be valued far higher than solid stocks in other industries.

Silicon Valley giants also acquired new companies and startups with the aim of capturing markets and coveted users. Microsoft acquired GitHub and the AI startup Lobe; Alphabet, Google's parent company, helped GitLab raise $100 million in funding; Apple bought Shazam for an estimated $400 million; and Cloudera and Hortonworks merged to advance hybrid cloud development, edge computing, and artificial intelligence. These investments and acquisitions are a clear indication that companies are collaborating to further technical advancement.

Microsoft's acquisition is also a signal that the attitude of mature Silicon Valley giants towards open source has changed significantly in recent years. However, people fear that this embrace of open source is more about business than about values. Billion-dollar acquisitions don't exactly scream 'free and open software'. Some also say that such acquisitions give access to the acquired company's user base, which is what big companies are most interested in. This issue was raised again when EU regulators started an investigation over concerns that Apple's acquisition of Shazam would give Apple an unfair advantage over rivals such as Spotify.
This year has also been a year of questionable data harvesting practices and frequent, massive data breaches, each affecting millions of users, even as tech titans raced to the $1 trillion club. 2018 opened with Facebook's Cambridge Analytica scandal, in which Facebook user data was used to influence votes in the UK and US. Moreover, 50 million Facebook user accounts were compromised, a multimillion-dollar ad fraud scheme secretly tracked Android phones, and 500,000 Google+ accounts were compromised by an undisclosed bug. In July, Timehop, a social media application, suffered a data breach compromising 21 million users' data. Just a few days ago, Cathay Pacific, a major Hong Kong-based airline, suffered a data breach affecting 9.4 million passengers. In September, Uber paid $148 million over a data breach cover-up. Two weeks back, the Pentagon revealed a cybersecurity breach in which hackers stole personal data of tens of thousands of military and civilian US Defense Department personnel.

All of these events have left many users and even developers jaded. This has fed a growing 'techlash' that is throwing its weight behind the need for tech regulation. Tech regulation, in its simplest sense, means the tech industry cannot be trusted to regulate itself and there must be an independent entity that oversees how tech companies behave. This regulatory body would have the power to formulate and implement policies and penalize those that don't comply. Supporters of tech regulation argue that regulation can restore accountability and rebuild trust in tech. It would also make the conversation around the uses and abuses of technology more public while protecting citizens and software engineers. Supporters also believe that regulation can bridge the gap between entrepreneurs, engineers, and lawmakers.

Read more: 5 reasons government should regulate technology

However, tech regulation is not without pitfalls.
Tech regulation may come at the cost of tech innovation. For example, user privacy and tech innovation are interlinked: machine learning systems need more data to get better at their jobs. If more users choose not to share their data, the recommendations they get are likely to be generic at best, or even irrelevant. Advertising revenue for tech companies might also be hit by the limited opportunities to profile users. This could hurt companies' ability to keep innovating and providing free products for their users. There is a need to strike a delicate balance to make privacy work practically. This is the conclusion the US Senate has come to as it continues to meet with industry leaders and privacy experts to understand how to protect consumer data privacy without crippling tech innovation.

Moreover, companies may game tech regulation policies by giving users little real choice. For example, they could simply deny users their services should they choose not to share their data with the company. This should also be kept in mind while formulating both tech regulatory bodies and policy frameworks.

Although data and security breaches are nasty tricks, they have been instrumental in opening the conversation around tech regulation and privacy policies, which, if done right, may eventually make it a TREAT for users. As for tech acquisitions, they are never what they seem to be. Not only do they vary from company to company, but they also have complex factors at play: people, culture, market, and timing, among others. It would be unfair or naive to label tech acquisitions purely tricks or treats. The truth lies somewhere in shades of gray. One thing is clear though: funding does make the world go round!
Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018
Gartner lists ‘Digital Ethics and Privacy’ as one of the top 10 strategic technology trends for 2019
Is Mozilla the most progressive tech organization on the planet right now?


5 ways to reduce App deployment time

Guest Contributor
27 Oct 2018
6 min read
Over 6,000 mobile apps are released on the Google Play Store every day. This breeds fierce competition among apps constantly trying to reach more consumers. Spoilt for choice, the average app user is no longer willing to put up with lags, errors, and other things that might go wrong with their app experience. Because consumers have such high expectations, developers need to find a way to release new updates, or deployments, faster. This means app developers need to keep deployment time low without compromising quality.

The world of app development is always evolving, and every new deployment comes with risk. You need the right strategy to keep things from going wrong at every stage of the deployment process. Luckily, it's not as complicated as you might think to create a workflow that won't breed errors. Here are some tips to get you started.

1. Logging to catch issues before they happen

An application log is a file that keeps track of events logged by a piece of software, including vital information such as errors and warnings. Logging helps catch potential problems before they happen, and even if a problem does arise, you'll have a log showing why it occurred. Logging also provides a history of earlier version updates which you can restore from. You have two options for application logging: creating your own framework or using one that's already available. While it's entirely possible to build your own, based on what's important for your application, there are already effective tools you can adopt for your project. You can learn more about creating a system for finding problems before they happen here: Python Logging Basics - The Ultimate Guide to Logging.

2. Batching to identify errors and breakdowns quickly

Deploying in batches gives developers much more control than releasing all major updates at once.
When you reduce the amount of change in every update, it's easier to identify errors and breakdowns. If you update your app with large overhauls, you'll spend countless hours hunting for where something went wrong. Even if your team already ships small batch updates, you can make the process easier through automation, using tools like Compuware, HelpSystems, or Microsystems' automation batch tools. Writing fresh code every time you need to make a change takes time and effort. On an agile schedule, you need to optimize your workflow so time isn't spent on repetitive tasks. Automated batching will help your team work faster and with fewer errors.

3. Key Performance Indicators to benchmark success

Key Performance Indicators, or KPIs, anticipate the success of your app. Identify them early so you can recognize not only your app's successes but also the areas that need improvement. The KPIs you choose depend on the type of app. In the app world, some of the most common KPIs are:

Number of downloads
App open rate
New users
Retention rate
Session length
Conversion rate from users to customers

Knowing your KPIs will help you anticipate user trends. If you notice session length going down, for example, that's a good sign it's time for an update. On the other hand, an increase in downloads is a good indicator that you're doing something right.

4. Testing

Finally, you'll want to set up a system for testing your app deployments effectively. Testing makes sure everything is working properly so you can launch your newest deployment without worrying about things going wrong. You can create sample tests for every aspect of the user experience, such as logins, key pages, and APIs. However, you'll need to choose a method (or several) of testing that makes sense for your deployment size.
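As a sketch of what such a sample test might look like, here is a pytest-style check of a toy login validator. The function and its rules are illustrative stand-ins for a real authentication call, not any particular framework's API:

```python
def validate_login(username: str, password: str) -> bool:
    """Toy stand-in for a real login check: non-empty user, 8+ char password."""
    return bool(username) and len(password) >= 8

def test_accepts_valid_credentials():
    assert validate_login("ada", "correct-horse-battery")

def test_rejects_short_password():
    assert not validate_login("ada", "123")

def test_rejects_missing_username():
    assert not validate_login("", "correct-horse-battery")
```

Run under a test runner like pytest, each failing assertion pinpoints which part of the login flow a new deployment broke.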
Common application testing types:

Functionality testing: ensures the app works on all devices.
Performance testing: introduces mobile challenges, such as poor network coverage and low memory, that stress the application's server.
Memory leakage testing: checks for optimized memory handling.
Security testing: as security becomes a greater concern for users, apps need to be tested to ensure data is protected.

The good news is that much of this testing can be done through automated tools. With just a few clicks, you can test for all of the above. Common automated testing tools include Selenium, TestingWhiz, and Test IO.

5. Deployment tracking software

When you're continuously deploying new updates for your app, you need a way to track the changes in real time. This helps your team see when deployments happened, how they relate to prior deployments, and how they've affected your predetermined KPIs. Even with systems for testing, automating code, and tracking errors, some errors will still happen; there is no way to prevent problems 100% of the time. Using deployment tracking software such as Loggly (full disclosure, I work at Loggly), Raygun, or Airbrake will cut down on time spent searching for an error. Because these tools immediately identify whether an error is related to newly released code, you can spend less time locating a problem and more time solving it.

When it comes to your app's success, you need to make your deployments as pain-free as possible. You don't have time to waste, since competition is fierce today, but that is no excuse to compromise on quality. The above tips will streamline your deployment process so you can focus on building something your users love.

About the Author

Ashley is an award-winning writer who discovered her passion in providing creative solutions for building brands online.
Since her first high school award in Creative Writing, she continues to deliver awesome content through various niches.

Mastodon 2.5 released with UI, administration, and deployment changes
Google App Engine standard environment (beta) now includes PHP 7.2
Multi-Factor Authentication System – Is it a Good Idea for an App?

Top five questions to ask when evaluating a Data Monitoring solution

Guest Contributor
27 Oct 2018
6 min read
Massive changes are happening in the way IT services are consumed and delivered. Cloud-based infrastructure is being tied together and instrumented by DevOps processes, while microservices-driven apps are replacing monolithic architectures. This evolution is driving the need for more monitoring and better data analysis than we have ever seen before. The need is compounded by the fact that an application today may be instrumented with sensors and devices providing users with critical input for making decisions.

Why is there a need for monitoring and analysis?

The placement of sensors on practically every available surface in the material world, from machines to humans, is a reality today. Almost anything capable of giving off a measurable metric or a recorded event can be instrumented, in the virtual world as well as the physical world, and has a need for monitoring. Metrics involve the consistent measurement of characteristics, such as CPU usage, while events are triggered occurrences, such as a temperature rising above a threshold. The right instrumentation, observation, and analytics are required to create business insight from the myriad data points coming from these instruments.

In the virtual world, monitoring and controlling the software components that drive business processes is critical. Data monitoring in software is an important aspect of visualizing what systems are doing (what activities are happening, and precisely when) and how well applications and services are performing.

There is, of course, a business justification for all this monitoring of constant streams of metrics and event data. Companies want to become more data-driven; they want to apply data insights to be better situationally aware of business opportunities and threats. A data-driven organization can predict outcomes more effectively than one relying on historical information or gut instinct.
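The metric/event distinction above can be made concrete with a small sketch. The names, the 90% threshold, and the CPU example are illustrative, not any particular monitoring product's API:

```python
import time
from dataclasses import dataclass

@dataclass
class Metric:
    """A consistently sampled measurement, e.g. CPU usage."""
    name: str
    value: float
    timestamp: float

@dataclass
class Event:
    """A triggered occurrence, e.g. usage crossing a threshold."""
    name: str
    detail: str
    timestamp: float

CPU_ALERT_THRESHOLD = 90.0

def record_cpu_sample(usage_percent: float, events: list) -> Metric:
    """Every sample yields a metric; only threshold crossings yield an event."""
    now = time.time()
    if usage_percent > CPU_ALERT_THRESHOLD:
        events.append(Event("cpu.high", f"CPU at {usage_percent}%", now))
    return Metric("cpu.usage", usage_percent, now)

events = []
metrics = [record_cpu_sample(u, events) for u in (42.0, 88.5, 95.2)]
# Three metrics are recorded, but only the 95.2% sample triggers an event.
```

Metrics arrive on every sample regardless of value; events fire only when something noteworthy happens, which is why monitoring platforms treat the two streams differently.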
When vast numbers of data points are monitored and analyzed, the organization can find interesting "business moments" in the data. These insights help identify emerging opportunities and competitive advantages.

How to develop a data monitoring strategy

Establishing an overall IT monitoring strategy that works for everyone across the board is nearly impossible. But it is possible to develop a monitoring strategy uniquely tailored to specific IT and business needs. At a high level, organizations can start developing their data monitoring strategy by asking these five fundamental questions:

#1 Have we considered all stakeholder needs?

One of the more common mistakes DevOps teams make is focusing monitoring strategies on the needs of just a few stakeholders, without addressing the requirements of stakeholders outside of IT operations, such as line-of-business (LOB) owners, application developers and owners, and other subgroups within operations, such as network operations (NOC) or communications teams. For example, an app developer may need usage statistics around application performance, while the network operator might be interested in network bandwidth usage by that app's users.

#2 Will the data capture strategy meet future needs?

Organizations must, of course, focus on today's data capture needs at the enterprise level, but at the same time they must consider the future. Developing a long-term plan helps future-proof the overall strategy, since data formats and data exchange protocols always evolve. The strategy should also consider future needs around ingestion and query volumes. Planning for how much data will be generated, stored, and archived will help establish a better long-term plan.

#3 Will the data analytics satisfy my organization's evolving needs?

Data analysis needs always change over time.
Stakeholders will ask for different types of analysis, and planning ahead for those needs by opting for a flexible data analysis strategy will help ensure that the solution can support future requirements.

#4 Is the presentation layer modular and embeddable?

A flexible user interface that addresses the needs of all stakeholders is important for meeting the organization's overarching goals. Solutions that deliver configurable dashboards, letting users specify queries for custom dashboards, meet this need for flexibility. Organizations should consider a plug-and-play model which allows users to choose different presentation layers as needed.

#5 Does the architecture enable smart actions?

The ability to detect anomalies and trigger specific actions is a critical part of a monitoring strategy. A flexible and extensible model should be used to meet the notification preferences of diverse user groups. Organizations should consider self-learning models which can be trained to detect undefined anomalies in the collected data. Monitoring solutions which address the broader monitoring needs of the entire enterprise are preferred.

What are purpose-built monitoring platforms?

Devising an overall IT monitoring strategy that meets these needs and fundamental technology requirements is a tall order. But new purpose-built monitoring platforms have been created to deal with today's requirements for monitoring and analyzing these specific metrics and events workloads, often called time-series data, and for providing situational awareness to the business. These platforms support ingesting millions of data points per second, can scale both horizontally and vertically, are designed from the ground up to support real-time monitoring and decision making, and have strong machine learning and anomaly detection functions to aid in discovering interesting business moments.
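At its simplest, the anomaly detection such platforms perform can be sketched as flagging points that sit far from the mean of a series. The 2.5-sigma cutoff and sample readings below are illustrative; production platforms use far more sophisticated, self-learning models:

```python
from statistics import mean, stdev

def find_anomalies(series, z_cutoff=2.5):
    """Return points more than z_cutoff standard deviations from the mean."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [x for x in series if abs(x - mu) / sigma > z_cutoff]

# A steady signal with one spike: only the spike is flagged.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 55.0, 10.1, 9.7, 10.0]
anomalies = find_anomalies(readings)  # [55.0]
```

Each flagged point is a candidate "business moment" worth alerting on; real platforms replace the static cutoff with models trained on the stream itself.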
In addition, they are resource-aware, applying compression and down-sampling functions to aid optimal resource utilization, and they are built to support faster time to market with minimal dependencies. With the right strategy in mind, and the right tools in place, organizations can address the evolving monitoring needs of the entire organization.

About the Author

Mark Herring is the CMO of InfluxData. He is a passionate marketeer with a proven track record of generating leads, building pipeline, and building vibrant developer and open source communities. He is a data-driven marketeer with a proven ability to see the forest for the trees, improve performance, and deliver on strategic imperatives. Prior to InfluxData, Herring was vice president of corporate marketing and developer marketing at Hortonworks, where he grew the developer community by over 40x. Herring brings over 20 years of relevant marketing experience from his roles at Software AG, Sun, Oracle, and Forte Software.

TensorFlow announces TensorFlow Data Validation (TFDV) to automate and scale data analysis, validation, and monitoring.
How AI is going to transform the Data Center.
Introducing TimescaleDB 1.0, the first OS time-series database with full SQL support.


Is Initiative Q a pyramid scheme or just a really bad idea?

Richard Gall
25 Oct 2018
5 min read
If things seem too good to be true, they probably are. That's a pretty good motto to live by, and one that's particularly pertinent in the days of fake news and crypto-bubbles. It seems like advice many people haven't heeded with Initiative Q, however, a new 'payment system' developed by the brains behind PayPal technology. That's not to say that Initiative Q is certainly too good to be true. But when an organisation appears to be offering hundreds of thousands of dollars to users who simply submit an email address and then encourage others to do the same, caution is essential. If it looks like a pyramid scheme, do you really want to risk the chance that it might just be a pyramid scheme?

What is Initiative Q?

Initiative Q is, according to its founders, "tomorrow's payment network." On its website it says that current methods of payment, such as credit cards, are outdated: they open up the potential for fraud and other bad business practices, and they aren't particularly efficient. Initiative Q claims it is going to develop an alternative to these systems "which aggregate the best ideas, innovations, and technologies developed in recent years." It isn't specific about which ideas and technological innovations it's referring to, but if you read through the payment model it wants to develop, there are elements that sound a lot like blockchain. For example, it talks about using more accurate methods of authentication to minimize fraud, and improving customer protection by "creating a network where buyers don't need to constantly worry about whether they are being scammed" (the extent to which this turns out to be deliciously ironic remains to be seen). To put it simply, it's a proposed new payment system that borrows lots of good ideas that still haven't been shaped into a coherent whole. Compelling, yes, but alarm bells are probably sounding.

Who's behind Initiative Q?

There are very few details on who is actually involved in Initiative Q.
The only names attached to the project are Saar Wilf, an entrepreneur who founded Fraud Sciences, a payment technology company bought by PayPal in 2008, and Lawrence White, Professor of Monetary Theory and Policy at George Mason University. The team should grow, however. Once the number of members reaches a significant level, the Initiative Q team say "we will continue recruiting the world's top professionals in payment systems, macroeconomics, and Internet technologies."

How is Initiative Q supposed to work?

Initiative Q explains that getting the world to adopt a new payment network is a huge challenge: a fair comment, because for it to work at all, you need actors within that network who believe in it and trust it. This is why the initial model, which looks and feels a lot like a pyramid or Ponzi scheme, is, according to Initiative Q, so important. To make this work, you need a critical mass of users. Initiative Q actually defends itself from accusations that it is a pyramid scheme by pointing out that there's no money involved at this stage. All that happens is that when you sign up you receive a specific number of 'Qs' (the currency Initiative Q is proposing). These Qs aren't worth anything at the moment; the idea is that when the project reaches critical mass, they will take on actual value.

Isn't Initiative Q just another cryptocurrency?

Initiative Q is keen to stress that it isn't a cryptocurrency. That said, on its website the project urges you to "think of it as getting free bitcoin seven years ago." The website does go into a little more detail elsewhere, explaining that "cryptocurrencies have failed as currencies" because they "focus on ensuring scarcity" while neglecting to consider how people might actually use them in the real world. The implication, then, is that Initiative Q is putting adoption first.
Presumably, that's one of the reasons it has decided on such an odd acquisition strategy. Ultimately, though, it's too early to say whether Initiative Q is or isn't a cryptocurrency in the strictest (i.e., fully decentralized) sense; there simply isn't enough detail about how it will work. Of course, there are reasons why Initiative Q doesn't want to be seen as a cryptocurrency. From a marketing perspective, it needs to look distinctly different from the crypto-pretenders of the last decade.

Initiative Q: pyramid scheme or harmless vaporware?

Because no money is exchanged at any point, it's difficult to call Initiative Q a Ponzi or pyramid scheme. In fact, it's quite hard to know what to call it at all. As David Gerard wrote in a widely shared post from June, published when Initiative Q had its first viral wave, "the Initiative Q payment network concept is hard to critique — because not only does it not exist, they don't have anything as yet, except the notion of 'build a payment network and it'll be awesome.'" But while it's hard to critique, it's also pretty hard to say it's actually fraudulent. In truth, at the moment it's relatively harmless. However, as Gerard points out in the same post, if the data of those who signed up is hacked, or even sold (although the organization says it won't do that), that's a pretty neat database of people who'll offer their details up in return for some empty promises of future riches.