
Tech Guides


Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley

Natasha Mathur
03 Dec 2018
9 min read
In the latest episode of the podcast Delete Your Account, Roqayah Chamseddine and Kumars Salehi talked to Ares and Kristen, volunteers with the Tech Workers Coalition (TWC), about how the organization functions and organizes to bring social justice and solidarity to the tech industry.

What is the Tech Workers Coalition?

The Tech Workers Coalition is a democratically structured, all-volunteer, worker-led organization of tech and tech-adjacent workers across the US who organize and offer support for activist, civic engagement, and education projects. They primarily work in the Bay Area and Seattle, but they also support initiatives across the United States. While they work largely to defend the rights of tech workers, the organization argues for wider solidarity with existing social and economic justice movements.

Key Takeaways

The podcast discusses the evolution of TWC (from facilitating Google employees in their protest against Google's Pentagon contract to helping Google employees in the "walkout for real change"), the pushback it has received, TWC's unionizing goal, and its journey going forward.

A brief history of the Tech Workers Coalition

The Tech Workers Coalition started with a friendship between Rachel Melendes, a former cafeteria worker, and Matt Schaefer, an engineer. The first meetings, in 2014 and 2015, comprised a few full-time employees at tech companies. These meetings were occasions for discussing and sharing experiences of working in the tech industry in Silicon Valley. It's worth noting that those involved didn't just include engineers - subcontracted workers, cafeteria workers, security guards, and janitors were all involved too.

So, TWC began life as a forum for discussing workplace issues, such as pay disparity, harassment, and discrimination. However, this forum evolved, with those attending becoming more and more aware that formal worker organization could be a way of achieving a more tangible defense of worker rights in the tech industry.

Kristen points out in the podcast how the 2016 presidential election in the US was "mobilizing" and laid a foundation for TWC in terms of determining where its interests lay. She also described how the ideological optimism of Silicon Valley companies - evidenced in brand values like "connecting people" and "don't be evil" - encourages many people to join the tech industry for "naive but well-intentioned reasons."

One example presented by Kristen is the Trump Tower meeting of 14 December 2016, where Donald Trump invited top tech leaders, including Tim Cook (CEO, Apple), Jeff Bezos (CEO, Amazon), Larry Page (CEO, Alphabet), and Sheryl Sandberg (COO, Facebook), for a "technology roundup". Kristen highlights that the meeting, seen by some as an opportunity to put forward the Silicon Valley ethos of openness and freedom, didn't live up to that billing. The acquiescence of these tech leaders to a President widely viewed negatively by many tech workers forced employees to look critically at their treatment in the workplace. For many workers, it was the moment they realized that those at the top of the tech industry weren't on their side.

From this point, the TWC has gone from strength to strength. There are now more than 500 people in the Tech Workers Coalition group on Slack who discuss and organize activities to bring more solidarity to the tech industry.

Ideological splits within the tech left

Ares also talks about ideological splits within the community of left-wing activists in the tech industry.
For example, when Kristen joined TWC in 2016, many of the conversations focused on questions like "are tech workers actually workers?" and "aren't they at fault for gentrification?" The fact that the debate has largely moved on from these issues says much about how thinking has changed in activist communities. While in the past activists may have taken a fairly self-flagellating view of, say, gentrification - a view that is arguably unproductive and offers little opportunity for practical action - today, activists focus on what tech workers have in common with those doing traditional working-class jobs. Kristen explains: "tech workers aren't the ones benefiting from spending 3 grand a month on a 1 bedroom apartment, even if that's possible for them in a way that is not for many other working people. You can really easily see the people that are really profiting from that are landlords and real estate developers". As Salehi also points out in the episode, solidarity should ultimately move beyond distinctions and qualifiers like income.

TWC's recent efforts in unionizing tech

Google's Walkout for Real Change

A recent example of TWC's efforts to encourage solidarity across the tech industry is its support of Google's Walkout for Real Change. Earlier this month, 20,000 Google employees, along with vendors and contractors, walked out of their respective Google offices to protest discrimination and sexual harassment in the workplace. As part of the walkout, Google employees laid out five demands urging Google to bring about structural changes within the workplace.

To facilitate the walkout, TWC organized a retaliation hotline that allowed employees to call in if they faced any retribution for participating in the walkout. If an employee contacted the hotline, TWC would then support them in taking their complaints to the labor bureau. TWC also provided resources based on its existing networks and contacts with the National Labor Relations Board (NLRB).

Read Also: Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company

Ares called the walkout "an escalation in tactic" that would force tech execs to concede to employee demands. He also described how the walkout caused a "ripple effect": after Google ended its forced arbitration policy, Facebook soon followed.

Protest against AI drones

Back in October, Google announced that it would not be competing for the Pentagon's $10 billion cloud-computing contract, saying the project may conflict with its principles for the ethical use of AI. Earlier this year, Google employees had learned about Google's decision to provide and develop artificial intelligence for a controversial military pilot program known as Project Maven. Project Maven aimed to speed up analysis of drone footage by automatically labeling images of objects and people. Many employees protested against this move by resigning from the company. TWC supported Google employees by launching a petition in April, in addition to the one that was already in circulation, demanding that Google abandon its work on Maven. The petition also demanded that other major tech companies, such as IBM and Amazon, refuse to work with the U.S. Defense Department.

TWC's unionizing goal and major obstacles in the tech industry

On the podcast, Kristen highlights that union density across the tech industry is quite low.
While unionization across the industry is one of TWC's goals, it's not the immediate one. "It depends on the workplace, and what the workers there want to do. We're starting at a place that is comparable to a lot of industries in the 19th century in terms of what shape it could take, it's very nascent. It will take a lot of experimentation", she says.

The larger goal of TWC is to challenge established tech power structures and practices in order to better serve the communities that have been negatively impacted by them. "We are stronger when we act together, and there's more power when we come together," says Kristen. "We're the people who keep the system going. Without us, companies won't be able to function". TWC encourages people to think about their role within a workplace, and how they can develop themselves as leaders within it. She adds that unionizing is about working together to change things within the workplace, and if it's done on a large enough scale, "we can see some amount of change".

Issues within the tech industry

Kristen also discusses how issues such as meritocracy, racism, and sexism are still major obstacles for the tech industry. Meritocracy is particularly damaging as it prevents change - while in principle it might make sense, it has become an insidious way of maintaining exclusivity for those with access and experience. Kristen argues that people have been told all their lives that if you try hard you'll succeed, and that if you don't, it's because you didn't try hard enough. "People are taught to be okay with their alienation in society," she says.

If meritocracy is the system through which exclusivity is maintained, sexism, sexual harassment, misogyny, and racism are all symptoms of an industry that, for all its optimism and language of change, is actually deeply conservative. Depressingly, there are too many examples to list in full, but one particularly shocking report by The New York Times highlighted sexual misconduct perpetrated by those in senior management. While racism may, at the moment, be slightly less visible in the tech industry - not least because of an astonishing lack of diversity - an internal memo by Mark Luckie, formerly of Facebook, highlighted the ways in which Facebook was "failing its black employees and its black users".

What's important from a TWC perspective is that none of these issues can be treated in isolation or as individual problems. By organizing workers and providing people with a space in which to share their experiences, the organization can encourage forms of solidarity that break down the barriers that exist across the industry.

What's next for TWC?

Kristen mentions that the future for TWC depends on what happens next, as there are lots of things that could change rather quickly; within the immediate scope of its future work, there are several projects the coalition is already working on. Ares mentions that he is blown away by how things have panned out in the past couple of years and is optimistic about pushing the tendency of rebellion within the tech industry with TWC. "I've been very positively surprised with how things are going but it hasn't been without lots of hard work with lots of folks within the coalition and beyond. In that sense it is rewarding, to see the coalition grow where it is now", says Kristen.

Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?


Open Source Software: Are maintainers the only ones responsible for software sustainability?

Savia Lobo
01 Dec 2018
6 min read
Last week, a Californian computer scientist disclosed a malicious package, 'flatmap-stream', hidden inside the popular npm package 'event-stream'. The breach happened because ownership of the event-stream package had been transferred by Dominic Tarr (the original author) to a malicious user, right9ctrl. Following this, many Twitter and GitHub users supported Tarr, while others think he should have been more careful when transferring package ownership. Andre Staltz, an open source hacker, writes in support of Dominic: "The fact that he gave ownership meant that he *cared* at least to do a tiny action that seemed ok. Not caring would be doing absolutely nothing at all, and that's the case quite often, and OSS maintainers get criticized also for *that*"

Who's responsible for maintaining open source software?

At the NDC Sydney 2018 conference held in September, two open source maintainers, Nick Randolph, Technical Lead at Built To Roam, and Geoffrey Huntley, an open source software engineer, talked about why companies and people should contribute back to open source and how they can do it. However, if something goes wrong with a project, who is responsible for it? Most users blame the maintainers of the project, but the license does not say so. In fact, users, contributors, and maintainers together are equally responsible.

Open source is a fantastic avenue for personal development, as it does not require the supply, material, planning, and approval processes that other software does. Some reasons to contribute to open source software:

- Other people will help you for free
- You will save a lot on training and documentation
- You will not be criticized by open source advocates
- The ability to hire the best engineers
- You will be able to influence the direction of the projects to which you contribute

Companies have embraced open source software as it allows them to get solutions to the market faster for their customers. It has allowed companies to focus on delivering business value instead of low-level technical tasks.

The problem with open source

The majority of open-source software that the world depends on is built by volunteers. When a business chooses to use open-source software, this volunteer labor is essentially an unpaid vendor with no contractual obligations. However, the speakers say: "Historically, we have defined open-source software in terms of freedom for the consumer. In the future, now that open-source has 'won', this dialogue needs to change. Did we get it right? Did we ever stop to think about how software is maintained, the rights of maintainers and the cost of maintenance?"

The maintainers said that, as per the open source software license, once the software is released to the world their responsibility ends. They need not respond to GitHub issues, create documentation, answer questions on Stack Overflow, and so on. A popular example of the damage this can cause is the Heartbleed bug, a security flaw in the OpenSSL cryptographic software library that led to a huge loss of revenue. Yet when open source software breaks or users need new features, they log an issue on GitHub and then sit back awaiting a response. If the comments are not addressed by the maintainer, users start complaining about how badly the project is run. The thing about OSS that's too often forgotten: it's AS-IS, no exceptions.

How should businesses secure their supply chain?
Different projects may operate differently, with more or fewer people, with work prioritized differently, and on differing release schedules, but in all cases the software is delivered as-is, meaning there is absolutely no SLA. The speakers say that businesses should analyze the level of contribution they need to make towards the open source community. They highlight that, in order to secure their supply chain, users should contribute either money or time. The truth is that free software is not really free: how much is it going to cost in man hours?

If not with money, they can contribute with time. For instance, there is an initiative called Open Source Friday (opensourcefriday.com), through which you or your engineers can raise pull requests and learn how the open source you depend upon works. This means you are having a positive influence in the community and also contributing back to open source. And if your company faces a critical issue, the maintainer is more likely to help you, as you have actively contributed to the community. (Source: YouTube)

How do you know how much to contribute?

In order to shift the goals of a piece of software, you have to be the maintainer or a core contributor to influence its direction. If you just want to protect your supply chain, you can simply fix what's broken. If you wish to contribute at a consistent velocity, contribute at a rate that you can maintain for as long as you want. (Source: YouTube)

According to Nick and Geoffrey, what users and businesses should do is:

- Protect their software supply chain: look, from a business perspective, at which components you are making use of, and make sure those components are going to exist going forward (a minimal audit sketch follows this list).
- Think about the sustainability of the project and don't let it wither away. If the project is good for the community, make it sustainable by getting more and more people to join it.
- Keep track of what the company is contributing back to these projects.
- Share experiences and best practices. This contribution will help analyze the risk factors. Share so that the industry matures beyond simple security concerns.
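As a rough illustration of that first point - knowing exactly which components you depend on - here is a minimal Python sketch (Python is our choice for illustration, not the speakers') that lists every package pinned in an npm project's package-lock.json (lockfile v2/v3) and flags entries without an integrity hash for manual review. The filename and the flagging heuristic are assumptions for the sake of the example.

```python
import json
from pathlib import Path

def list_locked_dependencies(lockfile="package-lock.json"):
    """Print every pinned package in an npm v2/v3 lockfile so you can
    review exactly which components your project depends on."""
    data = json.loads(Path(lockfile).read_text())
    # npm lockfile v2+ keeps a flat "packages" map; the "" key is the root project.
    packages = data.get("packages", {})
    for path, meta in sorted(packages.items()):
        if not path:
            continue  # skip the root entry
        name = path.split("node_modules/")[-1]
        version = meta.get("version", "?")
        integrity = meta.get("integrity")
        flag = "" if integrity else "  <-- no integrity hash, review manually"
        print(f"{name}@{version}{flag}")

if __name__ == "__main__":
    list_locked_dependencies()
```

Run against a real project, this gives you the raw inventory the speakers ask for; from there you can check each component's maintenance health before betting your supply chain on it.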
Watch the complete talk by Nick and Geoffrey on YouTube: https://www.youtube.com/watch?v=Mm_RuObpeGo&app=desktop

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)

OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'

The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project


5 lessons public wi-fi can teach us about cybersecurity

Guest Contributor
30 Nov 2018
7 min read
Free, public wi-fi is now crucial in ensuring people stay connected where a secure network is absent or mobile data is unavailable. While the advantages of flexible internet access are obvious, the dangers are often less clear. By now, most of us are aware that these networks can pose a risk, but few can articulate exactly what those risks are and how we can protect ourselves. Read on to find out exactly what dangers lurk within.

The perils of public wi-fi

When you join a public hotspot without protection and begin to access the internet, the packets of data that travel from your device to the router are public and open for anyone to intercept. While that sounds scary, technology like SSL/TLS has ensured the danger here isn't as bad as it was a few years ago. That being said, all a cybercriminal needs to snoop on your connection is some relatively simple Linux software that's accessible online. This leaves you vulnerable to a variety of attacks. Let's take a look at some of them now.

Data monitoring

Typically, a wi-fi adapter will be set to "managed" mode. This means it acts as a standalone client connecting to a single router for access to the internet. The interface will ignore all data packets except those that are explicitly addressed to it. However, some adapters can be configured into other modes. In "monitor" mode, an adapter will capture all the wireless traffic in a certain channel, regardless of the source or intended recipient. In this mode, the adapter can even capture data packets without being connected to a router - meaning it can sniff and snoop on all the data it gets its hands on. Not all commercial wi-fi adapters are capable of this, as it's cheaper for manufacturers to make those that only handle "managed" mode. Still, if someone gets their hands on one and pairs it with some simple Linux software, they can see which URLs you are loading and all of the data you're entering on any website not using HTTPS - including names, addresses, and financial accounts.

Fake hotspots

Catching unencrypted data packets out of the air isn't the only risk of public wi-fi. When you connect to an unprotected router, you are implicitly trusting the supplier of that connection. Usually this trust is well-founded - it's unlikely your local café is interested in your private data. However, the carelessness with which we now connect to public routers means that cybercriminals can easily set up a fake network to bait you in. Once an illegitimate hotspot has been created, all of the data flowing through it can be captured, analysed, and manipulated. One of the most common forms of manipulation is simply redirecting your traffic to an imitation of a popular website. The sole purpose of this clone site is to capture your personal information and card details - the same strategy used in phishing scams.

ARP spoofing

Unfortunately, cybercriminals don't even need a fake hotspot to interfere with your traffic. Every wi-fi and Ethernet adapter has a unique MAC address - an identifying code used to ensure data packets travel to the correct destination. The way that routers - and all other devices - discover this information is via ARP (the Address Resolution Protocol). For example, your smartphone might send out a request asking which device on the network is associated with a certain IP address. The requested device responds with its MAC address, ensuring the data packets are physically directed to the correct location. The issue with ARP is that it can be faked. Your smartphone might send a request for the address of the public wi-fi router, and a different device will answer with a false address. Provided the signal of the false device is stronger than the legitimate one, your smartphone will be fooled. Again, this can be done with simple Linux software. Once the spoofing has taken place, all of your data will be sent to the false router, which can subsequently manipulate the traffic however it likes.
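To make the ARP discussion concrete, here is a minimal Python sketch of one common detection heuristic: parse the local ARP table and flag any MAC address that answers for more than one IP. This can indicate spoofing, though it is not conclusive - some routers legitimately answer for several addresses - and the `arp -a` output format varies by operating system, so treat this as illustrative only.

```python
import re
import subprocess
from collections import defaultdict

def check_arp_table():
    """Flag MAC addresses that answer for more than one IP -- a common
    (though not conclusive) sign of ARP spoofing on the local network."""
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    mac_to_ips = defaultdict(set)
    for line in output.splitlines():
        ip = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", line)
        mac = re.search(r"([0-9a-fA-F]{1,2}(?:[:-][0-9a-fA-F]{1,2}){5})", line)
        if ip and mac:
            mac_to_ips[mac.group(1).lower()].add(ip.group(1))
    for mac, ips in mac_to_ips.items():
        if len(ips) > 1:
            print(f"Suspicious: {mac} claims multiple IPs: {sorted(ips)}")

if __name__ == "__main__":
    check_arp_table()
```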
Man-in-the-Middle (MitM) attacks

A man-in-the-middle (MitM) attack refers to any malicious action in which the attacker secretly relays or alters the communication between two parties. On an unprotected connection, a cybercriminal can modify key parts of the network traffic, redirect this traffic elsewhere, or inject content into an existing packet. This could mean displaying a fake login form or website, changing links, text, pictures, and more. This is relatively straightforward to execute; an attacker within reception range of an unencrypted wi-fi point could insert themselves easily.

How to secure your connection

The prevalence and simplicity of these attacks only serves to highlight the importance of basic cybersecurity best practices. Following these foundational rules of cybersecurity should counteract the vast majority of public wi-fi threats.

Firewalls

An effective firewall will monitor and block any suspicious traffic flowing to and from your device. It's a given that you should always have a firewall in place and your virus definitions updated to protect your device from upcoming threats. Though properly configured firewalls can effectively block some attacks, they're not infallible, and do not exempt you from danger. They primarily help protect against malicious traffic, not malicious programs, and may not protect you if you inadvertently run malware. Firewalls should always be used in conjunction with other protective measures such as antivirus software.

Software updates

Not to be underestimated, software and system updates are imperative and should be installed as soon as they're offered. Staying up to date with the latest security patches is the simplest step in protecting yourself against existing and easily exploited system vulnerabilities.

Use a VPN

Whether you're a regular user of public wi-fi or not, a VPN is an essential security tool worth having. This software works by generating an encrypted tunnel that all of your traffic travels through, ensuring your data is secure regardless of the safety of the network you're on. This is paramount for anyone concerned about their security online, and is arguably the best safeguard against the risks of open networks. That being said, there are dozens of available VPN services, many of which are unreliable or even dangerous. Free VPN providers have been known to monitor and sell users' data to third parties. It's important you choose a service provider with a strong reputation and a strict no-logging policy. It's a crowded market, but most review websites recommend ExpressVPN and NordVPN as reliable options.

Use common sense

If you find yourself with no option but to use public wi-fi without a VPN, the majority of attacks can be avoided with old-school safe computing practices. Avoid making purchases or visiting sensitive websites like online banking. It's best to stay away from any website that doesn't use HTTPS. Luckily, popular browser extensions like HTTPS Everywhere can help, by forcing an HTTPS connection wherever one is available.
The majority of modern browsers have built-in security features that can identify threats and notify you if they encounter a malicious website. While it's sensible to heed these warnings, these browsers are not failsafe and are much less likely to spot local interference by an unknown third party.

Simple solutions are often the strongest in cybersecurity

With the rising use of HTTPS and TLS, it's become much harder for data to be intercepted and exploited. That being said, with a laptop, free Linux software, and a cheap wi-fi adapter, you'd be surprised how much damage can be done. Public wi-fi is now a staple of modern life. Despite its ubiquity, it's still exploited with relative ease, and many are oblivious to exactly what the risks entail. Clearly, cybersecurity still has a long way to go at the consumer level; for now, old lessons still ring true - the simplest solutions are often the strongest.

William Chalk is a writer and researcher at Top10VPN, a cybersecurity research group and the world's largest VPN (Virtual Private Network) review site. As well as recommending the best VPN services, they publish independent research to help raise awareness of digital privacy and security risks.


5 types of deep transfer learning

Bhagyashree R
25 Nov 2018
5 min read
Transfer learning is a method of reusing a model or knowledge for another related task. It is sometimes also considered an extension of existing ML algorithms. Extensive research and work is being done in the context of transfer learning and on understanding how knowledge can be transferred among tasks. The Neural Information Processing Systems (NIPS) 1995 workshop Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems is believed to have provided the initial motivation for research in this field. The literature on transfer learning has gone through a lot of iterations, and the terms associated with it have been used loosely and often interchangeably. Hence, it is sometimes confusing to differentiate between transfer learning, domain adaptation, and multitask learning. Rest assured, these are all related and try to solve similar problems. In this article, we will look into the five types of deep transfer learning to get more clarity on how they differ from each other.

This article is an excerpt from a book written by Dipanjan Sarkar, Raghav Bali, and Tamoghna Ghosh titled Hands-On Transfer Learning with Python. The book covers deep learning and transfer learning in detail, and focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem with hands-on examples.

#1 Domain adaptation

Domain adaptation is usually referred to in scenarios where the marginal probabilities of the source and target domains differ, that is, P(Xs) ≠ P(Xt). There is an inherent shift or drift in the data distribution of the source and target domains that requires tweaks to transfer the learning. For instance, a corpus of movie reviews labeled as positive or negative would be different from a corpus of product-review sentiments. A classifier trained on movie-review sentiment would see a different distribution if utilized to classify product reviews. Thus, domain adaptation techniques are utilized in transfer learning in these scenarios.

#2 Domain confusion

Different layers in a deep learning network capture different sets of features. We can utilize this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of both domains to be as similar as possible. This can be achieved by applying certain preprocessing steps directly to the representations themselves. Some of these have been discussed by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper Return of Frustratingly Easy Domain Adaptation. This nudge toward the similarity of representation has also been presented by Ganin et al. in their paper Domain-Adversarial Training of Neural Networks. The basic idea behind this technique is to add another objective to the source model to encourage similarity by confusing the domain itself, hence domain confusion.

#3 Multitask learning

Multitask learning is a slightly different flavor of the transfer learning world. In multitask learning, several tasks are learned simultaneously, without distinction between sources and targets. The learner receives information about multiple tasks at once, whereas in transfer learning the learner initially knows nothing about the target task.
This is depicted in the following diagram. [Figure: Multitask learning - the learner receives information from all tasks simultaneously]

#4 One-shot learning

Deep learning systems are data hungry by nature: they need many training examples to learn the weights. This is one of the limiting aspects of deep neural networks, though such is not the case with human learning. For instance, once a child is shown what an apple looks like, they can easily identify a different variety of apple (with one or a few training examples); this is not the case with ML and deep learning algorithms. One-shot learning is a variant of transfer learning where we try to infer the required output based on just one or a few training examples. This is essentially helpful in real-world scenarios where it is not possible to have labeled data for every possible class (if it is a classification task), and in scenarios where new classes can be added often. The landmark paper by Fei-Fei and their co-authors, One Shot Learning of Object Categories, is supposedly what coined the term one-shot learning and sparked research in this subfield. The paper presented a variation on a Bayesian framework for representation learning for object categorization. This approach has since been improved upon and applied using deep learning systems.

#5 Zero-shot learning

Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. This might sound unbelievable, especially when learning from examples is what most supervised learning algorithms are about. Zero-data learning, or zero-shot learning, methods make clever adjustments during the training stage itself to exploit additional information to understand unseen data. In their book Deep Learning, Goodfellow and their co-authors present zero-shot learning as a scenario where three variables are learned: the traditional input variable x, the traditional output variable y, and an additional random variable that describes the task, T. The model is thus trained to learn the conditional probability distribution P(y | x, T). Zero-shot learning comes in handy in scenarios such as machine translation, where we may not even have labels in the target language.
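To make these categories concrete, here is a minimal sketch of the most common transfer-learning recipe in practice - reusing a pretrained network's representations for a new target task - using the Keras API the book builds on. The class count, dataset, and training call are placeholder assumptions; this is a sketch of the pattern, not code from the book.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Source task: ImageNet classification. Target task: a hypothetical
# 5-class problem with far less labeled data.
NUM_CLASSES = 5  # assumption for illustration

# Reuse the convolutional base trained on the source domain...
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze: we transfer the learned representations

# ...and learn only a small task-specific head on the target data.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(target_images, target_labels, epochs=5)  # your target-task data
```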
In this article we learned about the five types of deep transfer learning: domain adaptation, domain confusion, multitask learning, one-shot learning, and zero-shot learning. If you found this post useful, do check out the book Hands-On Transfer Learning with Python, which covers deep learning and transfer learning in detail, focusing on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem.

CMU students propose a competitive reinforcement learning approach based on A3C using visual transfer between Atari games

What is Meta Learning? Is the machine learning process similar to how humans learn?


Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

Natasha Mathur
24 Nov 2018
8 min read
When people talk about tech giants such as Amazon, Facebook, and Google, they usually talk about the great and powerful innovations these companies have brought to the table, innovations that have transformed the contemporary world. Of late, though, criticism has been gaining traction that these same tech titans hold back innovation from smaller companies, having become what you might call tech monopolies. In an episode of the Innovation For All podcast, Sheana Ahlqvist talked to Sally Hubbard, an antitrust expert and investigative journalist at The Capitol Forum, about tech giants building monopolies. Here are some key highlights from the podcast.

Let's recall the definition of a monopoly: "A market structure characterized by a single seller, selling a unique product in the market. In a monopoly market, the seller faces no competition, as he is the sole seller of goods with no close substitute. Monopoly market makes the single seller the market controller as well as the price maker. He enjoys the power of setting the price for his goods". In a nutshell: decrease the prices of your service and drive everyone else out of business. A popular example is John D. Rockefeller, Standard Oil's chief executive, who ruined competitors by cutting oil prices until they went bankrupt, immediately after which the higher prices returned. Although there is no price-fixing in the case of Google or Facebook, since they offer completely free services, they're still monopolies. Let's have a look.

How are Amazon, Google, and Facebook tech monopolies?

Amazon, Facebook, and Google have each carved out their own markets, with gargantuan and durable market power vested in the hands of each one of them. According to the US Department of Justice, a market share of greater than 50% has been necessary for courts to find the existence of monopoly power, and a dominant market share is a useful starting point in determining it. Going by this rule, Google dominates the search engine market, maintaining an 86.02% market share as of July 2018, as per Statista - way over 50%, making Google a monopoly. The majority of Google's revenues are generated through advertising. Similarly, Facebook dominates the social media market with a worldwide market share of 66.67%, making it a monopoly too. Amazon, on the other hand, has a 41% market share in the e-commerce retail market, which is expected to increase significantly to 50% of the entire market's GMV by 2021. This brings it pretty close to being a monopoly in e-commerce soon.

Another factor considered under the Sherman Act, a part of antitrust law, when identifying a firm that possesses monopoly power, is the existence of anti-competitive effects, i.e., companies trying to maintain or acquire a dominant position by excluding competitors or preventing new entry. One recent example that comes to mind is Google being fined $5 billion in July this year for breaching the EU's antitrust laws. Google was fined for three types of illegal restrictions on the use of Android that cemented the dominance of its search engine. As per the EU, Google denied its rivals a chance to innovate and compete on merits, which is illegal under the EU's antitrust laws.
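Antitrust analysts often summarize market concentration with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares; values above 2,500 are generally treated as highly concentrated. Here is a small illustrative calculation using the Google search share quoted above - note that the split of the remaining share across rivals is an assumption for illustration, not a figure from the podcast.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares (in %).
    Above 2,500 is generally treated as a highly concentrated market."""
    return sum(s ** 2 for s in shares_percent)

# Search engine shares: Google's 86% is from the article; the split of the
# remaining 14% across rivals is an assumption for illustration.
search_market = [86.0, 7.0, 4.0, 3.0]
print(f"Search HHI: {hhi(search_market):,.0f}")  # ~7,470 -> highly concentrated
```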
Also Read: A quick look at E.U.'s antitrust case against Google's Android

Monopolies and social injustice

Hubbard points out how these tech titans have no major competitors or substitutes, and that even if you don't pay most of them with money, you pay them with your data. This is more than enough for world domination, which is always an underlying aspiration for tech companies as they strive to be "the one" in the eyes of their customers by carefully leveraging their data. This data also puts these companies at an unfair advantage over smaller and newer businesses. As Clive Humby, a British mathematician, rightly said, "data is the new oil" in the digital economy.

Hubbard explains how the downsides of this monopoly might not be visible to the consumer but affect entrepreneurs and small businesses, who are greatly harmed by the practices of these companies. Taking Amazon, for instance: no one wishes to be dependent on their competitor, but since Amazon operates the platform on which products are sold, everyone is not only competing against Amazon but also dependent on it, as it is Amazon who decides the rules for sellers. Add to this the fact that Amazon holds a ginormous amount of consumer data, putting it at an unfair advantage, as it can promote its own products over others'. There is currently an ongoing EU investigation into Amazon's use of consumer and seller data collected on its platform to better its own products sold on that platform.

Similarly, Google's monopoly is evident in the fact that it gets to decide the losers and winners of the internet in Google Search, prioritizing its products over others'. An example of this is Google being fined 2.7 billion dollars by the EU last year, after it ruled the company had abused its power by promoting its own shopping comparison service at the top of search results.

Facebook, on the other hand, has no direct competition, leaving users with less choice in terms of social network sites, making it a monopoly. Add to that the fact that other major social media platforms, such as Instagram and WhatsApp, are also owned by Facebook. Hubbard explains that because Facebook doesn't have competition, it can prioritize its profits over other factors, such as user data, as it isn't really concerned about user loss. This is evident in the number of scandals Facebook has gotten itself into regarding user data.

Facebook is facing a whole lot of data and privacy-related controversies, the Cambridge Analytica scandal being the most popular one. Facebook suffered the largest security breach in its history last month, leaving 50M user accounts compromised. The Department of Housing and Urban Development (HUD) filed a complaint against Facebook in August, alleging the platform sells ads that discriminate against users based on race, religion, and sexuality. The ACLU also sued Facebook in September for enabling sex and age discrimination through targeted ads. Last week, the New York Times published a bombshell report on how Facebook has been following a strategy of 'delaying, denying and deflecting' blame, under the leadership of Sheryl Sandberg, for all the controversies surrounding it. Scandals aside, even if users find the content hosted by Facebook displeasing, they don't really have a choice to "stop using Facebook", as their friends and family continue to use the platform to stay in touch.
Also, Facebook charges advertisers depending on how many people see a message rather than on ad clicks. This is why Facebook's algorithm is programmed to prioritize more engaging branded content and ads over everything else.

Monopoly and gender inequality

As the market power of these tech giants increases, so does their wealth. Hubbard points out that wealth gets transferred from the many among the working and middle classes to the few belonging to the 1% and 0.1% at the top of the income and wealth distribution. The concentration of market power hurts workers and results in depressed wages, affecting women and other minority workers the most. "When general wages go down or stagnate, female workers are even worse off. Women make 78 cents to a man's dollar, with black women making 64 cents and Latina women making 54 cents for every dollar a white man makes. As wages by the bottom 99% of earners continue to shrink, women get paid a mere percentage of fewer dollars. And the top 1% of earners are predominantly men", mentions Sally Hubbard.

There have also been declines in employee mobility, as giant firms acquiring smaller firms leaves fewer firms competing for workers. This leads to reduced bargaining power in the hands of employees. Moreover, these firms also impose non-compete clauses and no-poach agreements, putting a damper on workers' ability to switch jobs. As eloquently put by Hubbard, "these tech platforms are the ones controlling the rules of the arena in which the game is played and are also the ones playing the game". Taking this analogy into consideration, it's anyone's guess who'll win the game.

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?

How far will Facebook go to fix what it broke: Democracy, Trust, Reality

Amazon splits HQ2 between New York and Washington, D.C. after making 200+ states compete over a year; public sentiments largely negative


How to build a location-based augmented reality app

Guest Contributor
22 Nov 2018
7 min read
The augmented reality market is developing rapidly. Today, it has a total market value of almost $15 billion according to Statista, and this figure could rise to $210 billion by 2022. Augmented reality is having a huge impact on the games industry, but it's also being used by organizations in fields as diverse as publishing and retail. For example, Layar is an app that turns static objects into live objects, while IKEA's Catalog app lets you imagine how different types of furniture might fit into your room. But it's not just about commerce: some apps have a distinctly educational bent, like Field Trip, which uses augmented reality to help users learn about the history that immediately surrounds them.

The best augmented reality apps are always deceptively simple. But to build a really effective augmented reality application you need a diverse range of skills that span both software and real-world physics. Let's take a closer look at location-based augmented reality apps, including what they're used for and how you can begin building them.

How does a location-based AR app work?

Location-based augmented reality apps are sometimes called geo-based AR apps. Whatever you call them, one thing is important: they collate GPS mobile data and digital compass readings to detect the location and position of the device. The application works like this: the AR app dispatches queries to the device's sensors, and once the data has been acquired, the app can determine where virtual information (such as images) should be added to the real world. Location-based augmented reality apps can be used both indoors and outdoors. When indoors, where it isn't possible to connect to GPS, the application will use beacons for location data.

The best examples of existing location-based augmented reality apps

While reading about location-based augmented reality apps can give you a good idea of how they work, to be really inspired you need to try some out for yourself. Here's a list of some of the best location-based augmented reality apps out there.

Yelp Monocle

Yelp Monocle helps you navigate an unknown city. Using GPS, it provides exactly the sort of information you'd expect from Yelp, but in a format that's fully integrated with your surroundings. So, you can see restaurant reviews and shop opening hours as you move around your environment.

Ingress

Ingress is an augmented reality gaming app that immerses you in a (semi) virtual world. Your main mission is to find portals that the game 'creates' in your immediate environment and open them. Essentially, the game is a great way to explore the world around you, placing a new augmented layer on a place that might otherwise be familiar.

Vortex Planetarium

Vortex Planetarium is an app for aspiring astronomers or anybody else with a passing interest in astronomy. The app detects the user's location and then provides them with celestial data to better understand the night sky.

Steps to create a location-based AR app

If you like the idea of a location-based augmented reality app, you'll probably want to get started. As we've seen, these apps can be incredibly complex, but if you break the development process down, it becomes much easier.

1. Determine what resources you need

Depending on the complexity of your app, you need to determine what resources are needed - that could be anything from data to frameworks and other services.
For example, if you plan to create a game with 3D objects, you'll need to use Unity to build in that level of functionality and realism.

2. Choose the right augmented reality tool

There are a huge number of augmented reality software development kits out there. Rather than wade through every single one, here are the most popular, offering the widest range of features.

ARKit by Apple

ARKit from Apple features just about everything you'd need to develop an augmented reality application. For example, it combines computer vision and camera data to track the user's environment. ARKit is also able to adjust the light level in the virtual model to respond to the level of light in the real world. ARKit 2 recently brought users a number of cool new features. For example, it allows you to build interactivity into your application, and also allows you to build 'memory' into your app so it can 'remember' the location of augmented reality objects.

ARCore by Google

In Google's ARCore you'll find a mapping tool, which is particularly useful for developing location-based AR apps. ARCore can also track motion and detect vertical and horizontal surfaces. In the latest version of ARCore, users can take two devices and work with one AR object from different viewing angles.

3. Add geolocation data

Not all SDKs provide a mapping feature. If yours doesn't, it's essential to add geolocation data yourself - without it, the app won't work. As we've already seen, GPS technology is typically used: it's convenient and can detect a user's location anywhere. It can, however, consume a lot of energy. Location services on iOS and Android will help to activate geolocation on the device. (A small worked example of the distance math involved follows below.)
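To make step 3 concrete, here is a minimal sketch of the haversine great-circle formula a location-based AR layer might use to decide whether a point of interest is close enough to render. The coordinates and the 1 km threshold are hypothetical values for illustration.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Hypothetical fixes: the user's device and a point of interest.
user = (40.7580, -73.9855)
poi = (40.7614, -73.9776)
distance = haversine_m(*user, *poi)
if distance < 1_000:  # render POIs within 1 km of the user
    print(f"POI in range: {distance:.0f} m away, draw AR marker")
```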
3 augmented reality pitfalls to avoid

Developing something as complex as a location-based augmented reality app is bound to lead to some challenges, so be prepared to watch for these pitfalls.

Functionality. When users move with their camera and look for AR objects, those objects should remain static, regardless of the user's movements. To do this, use SLAM (Simultaneous Localization and Mapping), a technique that allows software systems - like robots - to 'understand' where they are situated relative to their surroundings.

Accuracy. A crucial factor for any AR app is accuracy. When developing your app, it's essential to consider the user's position to ensure that the app sends queries to sensors correctly. If it doesn't, the whole experience can seem plain weird for the user.

Distance. Similarly, the distance between the device and the real world must be calculated correctly - again, if it isn't, your application simply will not work.

Get started - build an awesome augmented reality app!

Clearly, building a location-based augmented reality app isn't easy. It requires skill and a commitment to keep going in the face of challenges. You certainly need a team of great developers around you if you're going to deliver something that makes an impact. But, really, that's what makes software development exciting, right?

Author Bio

Vitaly Kuprenko is a technical writer at Cleveroad, a web and mobile app development company in Ukraine. He enjoys writing about tech innovations and digital ways to boost businesses.

Magic Leap unveils Mica, a human-like AI in augmented reality

Magic Leap teams with Andy Serkis' Imaginarium Studios to enhance Augmented Reality

"As Artists we should be constantly evolving our technical skills and thought processes to push the boundaries on what's achievable," Marco Matic Ryan, Augmented Reality Artist

What is security chaos engineering and why is it important?

Amrata Joshi
21 Nov 2018
6 min read
Chaos engineering is, at its root, all about stress testing software systems in order to minimize downtime and maximize resiliency. Security chaos engineering takes these principles forward into the domain of security.

The central argument of security chaos engineering is that current security practices aren't fit for purpose. "Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries," write Aaron Rinehart and Charles Nwatu in a post published on opensource.com in January 2018. "We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather simple things rooted in human error and system glitches." The rhetorical question they're asking is clear: should we wait for an incident to happen in order to work on it, or should we be looking at ways to prevent incidents from happening at all?

Why do we need security chaos engineering today?

There are two problems that make security chaos engineering so important today. One is the way in which security breaches and failures are understood culturally across the industry. Security breaches tend to be seen as either isolated attacks or 'holes' within software - anomalies that should have been thought of but weren't. In turn, this leads to a spiral of failures. Rather than thinking about cybersecurity in a holistic and systematic manner, the focus is all too often on simply identifying weaknesses when they happen and putting changes in place to stop them from happening again. You can see this approach even in the way organizations communicate after high-profile attacks have taken place - 'we're taking steps to ensure nothing like this ever happens again.' While that sentiment is important for both customers and shareholders to hear, it also betrays exactly the problems Rinehart and Nwatu are talking about.

The second problem is more about the nature of software today. As the world moves to distributed systems, built on a range of services and with an extensive set of software dependencies, vulnerabilities naturally begin to increase too. "Where systems are becoming more and more distributed, ephemeral, and immutable in how they operate... it is becoming difficult to comprehend the operational state and health of our systems' security," Rinehart and Nwatu explain.

When you take the cultural issues and the evolution of software together, it becomes clear that the only way cybersecurity is going to properly tackle today's challenges is with an extensive rethink of how and why things happen.

What security chaos engineering looks like in practice

The transition to security chaos engineering is best thought of as a shift in mindset - one that doesn't focus on isolated issues but instead on the overall health of the system. Essentially, you start with a different question: don't ask 'where are the potential vulnerabilities in our software?', ask 'where are the potential points of failure in the system?' Rinehart and Nwatu explain: "Failures can consist not only of IT, business, and general human factors but also the way we design, build, implement, configure, operate, observe, and manage security controls.
People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers."

By focusing on questions of system design and decision making, you can begin to capture security threats that you might otherwise miss. While malicious attacks might account for 47% of all security breaches, human error and system glitches combined account for 53%. This means that while we're all worrying about the hooded hacker that dominates stock imagery, someone made a simple mistake that just about any software-savvy criminal could take advantage of.

How is security chaos engineering different from penetration testing?

Security chaos engineering looks a lot like penetration testing, right? After all, the whole point of pentesting is, like chaos engineering, determining weaknesses before they can have an impact. But there are some important differences that shouldn't be ignored. Again, the key difference is the mindset behind the two. Penetration testing is, for the most part, an event - something you do when you've updated or changed something significant - and it has a very specific purpose. That's not a bad thing, but with such a well-defined testing context you might miss security issues that you hadn't even considered. And if you consider the complexity of a given software system, in which its state changes according to the services and requests it is handling, it's incredibly difficult - not to mention expensive - to pentest an application in every single possible state. Security chaos engineering tackles this by actively experimenting on the software system to better understand it. The context in which it takes place is wide-reaching and ongoing, not isolated and particular.

ChaoSlingr, the security chaos engineering tool

ChaoSlingr is perhaps the most prominent tool out there to help you actually do security chaos engineering. Built for AWS, it allows you to perform a number of different 'security chaos experiments' in the cloud. Essentially, ChaoSlingr pushes failures into the system in a way that allows you to not only identify security issues but also better understand your infrastructure. This SlideShare deck, put together by Aaron Rinehart himself, is a good introduction to how it works in a little more detail. Security teams have typically focused on preventive security measures. ChaoSlingr empowers teams to dig deeper into their systems and improve them in ways that mitigate security risks. It allows you to be proactive rather than reactive.
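To ground the idea, here is a rough boto3 sketch of the style of experiment ChaoSlingr popularized: briefly injecting a misconfiguration (an unexpected open port on a test security group), then rolling it back, so you can verify that your detection and alerting actually fire. This is not ChaoSlingr's own code; the group ID and port are placeholders, and anything like this should only be run against infrastructure you own and have approval to test.

```python
import time
import boto3

ec2 = boto3.client("ec2")
TEST_SG = "sg-0123456789abcdef0"  # placeholder: a non-production security group
PROBE_PORT = 2222                  # placeholder: a port nothing should expose

rule = {
    "IpProtocol": "tcp",
    "FromPort": PROBE_PORT,
    "ToPort": PROBE_PORT,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "chaos probe"}],
}

# Inject the misconfiguration...
ec2.authorize_security_group_ingress(GroupId=TEST_SG, IpPermissions=[rule])
try:
    # ...then wait and check (out of band) whether alerting/detection fired.
    time.sleep(300)
finally:
    # Always roll the change back, even if the check above blows up.
    ec2.revoke_security_group_ingress(GroupId=TEST_SG, IpPermissions=[rule])
```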
The future is security chaos engineering

Chaos engineering has not quite taken off - yet. But it's clear that the principles behind it are having an impact across software engineering. In particular, at a time when ever-evolving software feels so vulnerable - fragile, even - applying it to cybersecurity feels incredibly pertinent and important. It's true that the shift in mindset is going to be tough. But if we can begin to distrust our assumptions, experiment on our systems, and try to better understand how and why they work the way they do, we are certainly moving towards a healthier and more secure software world.

Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Gremlin makes chaos engineering with Docker easier with new container discovery feature


Is middleware dead? Cloud is the prime suspect!

Prasad Ramesh
17 Nov 2018
4 min read
The cloud is now ubiquitous, used for everything from storing photos to remotely running machines for complex AI tasks. But has it killed on-premises middleware setups and changed the way businesses manage the services their employees use? Is middleware dead?

Middleware is the bridge that connects an operating system to the different applications in a distributed system. Essentially, it is a transition layer of software that enables communication between the OS and applications, acting as a pipe for data to flow from one application to another. If this software takes care of the communication between applications in a network, developers can focus on the applications themselves - hence middleware came into the picture. Middleware is used in enterprise networks.

Is middleware still relevant?

Middleware was a necessity for an IT business before cloud was a thing. But as cloud adoption has become mainstream, offering scalability and elasticity, middleware has become less important in modern software infrastructures. Middleware in on-premises setups was used for things such as remote calls, communication with other devices in the network, transaction management, and database interactions. All of this is taken care of behind the scenes by the cloud service provider. Middleware is largely in decline, with cloud being a key reason. Specifically, some of the reasons middleware has lost favor include:

- Middleware maintenance can be expensive and can quickly deplete resources, especially if you're using middleware on a large scale.
- Middleware can't scale as fast as cloud. If you need to scale, you need new hardware - this makes elasticity difficult, with sunk costs in your hardware resources.
- Sustaining large applications on middleware can become challenging over time.

How cloud solves middleware challenges

The reason cloud is killing off middleware is that it can simply do things better than traditional middleware. In just about every regard, from availability to flexibility to monitoring, using a cloud service makes life much easier. It makes life easier for developers and engineers, while potentially saving organizations time in resource management. If you're making decisions about software infrastructure, it probably doesn't feel like a tough decision. Even institutions like banks, which have traditionally resisted software innovation, are embracing cloud: more than 80% of the world's largest banks and more than 85% of global banks are opting for the cloud, according to an Information Age article.

When is middleware the right option?

There might still be some life left in middleware yet. For smaller organizations, where an on-premises server setup will be used for a significant period of time - with cloud merely a possibility on the horizon - middleware still makes sense. Of course, no organization wants to think of itself as 'small' - even if you're just starting out, you probably have plans to scale. In this case, cloud will give you the flexibility that middleware inhibits. While you shouldn't invest in cloud solutions if you don't need them, it's hard to think of a scenario where cloud wouldn't provide an advantage over middleware. From tiny startups that need accessible and efficient hosting services, to huge organizations where scale is simply too big to handle alone, cloud is the best option in a massive range of use cases.

Is middleware really dead?

So yes, middleware is dead for most practical use case scenarios.
Is middleware still relevant?

Middleware was a necessity for an IT business before the cloud was a thing. But as cloud adoption has become mainstream, offering scalability and elasticity, middleware has become less important in modern software infrastructures. In on-premises setups, middleware handled tasks such as remote calls, communication with other devices on the network, transaction management and database interactions. All of this is now taken care of by the cloud service provider behind the scenes.

Middleware is largely in decline, with cloud being a key reason. Specifically, some of the reasons middleware has lost favor include:

- Middleware maintenance can be expensive and can quickly deplete resources, especially if you're using middleware on a large scale.
- Middleware can't scale as fast as the cloud. If you need to scale, you'll need new hardware - this makes elasticity difficult and sinks costs into hardware resources.
- Sustaining large applications on middleware can become challenging over time.

How cloud solves middleware challenges

The reason the cloud is killing off middleware is that it simply does things better than traditional middleware. In just about every regard, from availability to flexibility to monitoring, using a cloud service makes life much easier for developers and engineers, while potentially saving organizations time on resource management. If you're making decisions about software infrastructure, it probably doesn't feel like a tough decision. Even institutions like banks, which have traditionally resisted software innovation, are embracing cloud: more than 80% of the world's largest banks and more than 85% of global banks are opting for the cloud, according to this Information Age article.

When is middleware the right option?

There might still be some life left in middleware yet. For smaller organizations, where an on-premises server setup will be used for a significant period of time - with cloud merely a possibility on the horizon - middleware still makes sense. Of course, no organization wants to think of itself as 'small'; even if you're just starting out, you probably have plans to scale, and in that case cloud will give you the flexibility that middleware inhibits. While you shouldn't invest in cloud solutions you don't need, it's hard to think of a scenario where cloud wouldn't provide an advantage over middleware. From tiny startups that need accessible and efficient hosting services to huge organizations whose scale is simply too big to handle alone, cloud is the best option in a massive range of use cases.

Is middleware really dead?

So yes, middleware is dead for most practical use cases. Most companies go with the cloud, given its advantages and flexibility. With options like multi-cloud, which lets you use different cloud services for different areas, the cloud offers even more flexibility.

Think Silicon open sources GLOVE: An OpenGL ES over Vulkan middleware
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code

What is distributed computing and what's driving its adoption?

Melisha Dsouza
07 Nov 2018
8 min read
Distributed computing is having a real impact on the way companies look at the cloud. The "Most Promising Jobs 2018" report published by LinkedIn pointed out that distributed and cloud computing rank among the top 10 most in-demand skills.

What are the problems with centralized computing systems?

Distributed computing solves many of the challenges that centralized computing systems pose today. These centralized systems - like IBM mainframes - have been around for decades, but they're beginning to lose favor. This is because centralized computing is ineffective and expensive in the context of increasing data and workloads. When you have a single central computer controlling a massive amount of computations at the same time, it puts a huge strain on the system - even one that's particularly powerful. Centralized systems simply aren't capable of processing huge volumes of transactional data and supporting large numbers of concurrent online users. There's also a big issue with reliability: if your centralized server fails, all data could be permanently lost if you have no disaster recovery strategy. Fortunately, distributed computing offers solutions to many of these issues.

How does distributed computing work?

Distributed computing comprises a group of systems located at different places, all connected over a network and working on a single problem or a common goal. Each of these systems is autonomous, programmable, asynchronous and failure-prone. These systems provide a better price/performance ratio than a centralized system, because it's more economical to add microprocessors than mainframes to your network, and together they offer more computational power than a centralized (mainframe) computing system.

Distributed computing and agility

Another major plus point of distributed computing systems is that they provide much greater agility than centralized computing systems. Without centralization, organizations can add and change software and computational power according to the demands and needs of the business. With the reduction in the price of computing power and storage, thanks to the rise of public cloud services like AWS, organizations all over the world have begun using distributed systems and service-oriented architectures, like microservices.

Distributed computing in action: Google search

A perfect example of distributed computing in action is Google search. When a user submits a query, Google uses data from a number of different servers to deliver results, based on things like location, past searches, semantic keywords - and much, much more. These servers are located all around the world and are able to provide search results in seconds, or at times milliseconds.
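To illustrate the idea - and this is a toy sketch, not Google's actual architecture; the server names and latencies are invented - here is how a query might be fanned out to several servers concurrently, gathering results as they arrive:

```python
# Toy fan-out/fan-in: query several "servers" concurrently and
# merge whichever results come back. Names and latencies are invented.
import time
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

SERVERS = ["us-east", "eu-west", "ap-south"]

def query(server, term):
    time.sleep(random.uniform(0.05, 0.2))   # simulated network latency
    return f"{server}: results for {term!r}"

with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
    futures = [pool.submit(query, s, "distributed computing") for s in SERVERS]
    for future in as_completed(futures):
        print(future.result())
```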
How cloud is driving the adoption of distributed computing

Central to this adoption is the cloud. Today, cloud is mainstream and opens up the possibility of distributed systems to organizations in a number of different ways. Arguably, you're not really seeing the full potential of cloud until you've moved to a distributed system. Let's take a look at the different ways cloud services are helping companies feel confident enough to successfully leverage distributed computing.

Infrastructure as a Service (IaaS)

IaaS makes distributed systems accessible for many organizations by allowing them to host their infrastructure either internally on a private cloud or on a public cloud. Essentially, it gives an organization control over the operating system and platform that form the foundation of its software infrastructure, while giving an external cloud provider control over the servers and virtualization technologies that make it possible to deploy that infrastructure. In the context of a distributed system, this means organizations have less to worry about. As you can imagine, without IaaS, the process of developing and deploying a distributed system becomes much more complex and costly.

Platform as a Service (PaaS): custom software on another platform

If IaaS effectively splits responsibilities between the organization and the cloud provider (the 'service'), Platform as a Service (PaaS) 'outsources' even more to the cloud provider. Essentially, an organization simply has to handle the applications and data, leaving every other aspect of its infrastructure to the platform. This brings many benefits and, in theory, should allow even relatively small engineering teams to take advantage of a distributed system. The underlying complexity and heavy lifting that a distributed system brings rest with the cloud provider, allowing an organization's engineers to focus on what matters most - shipping code. If you're thinking about speed and innovation, a PaaS opens that right up, provided you're happy to let your cloud provider manage the bulk of your infrastructure.

Software as a Service (SaaS)

SaaS solutions are perhaps the clearest example of a distributed system - although, given the way we use SaaS today, it's easy to forget that it can be part of one. The concept is simple: a complete software solution delivered to the end user. If you're trying to accomplish something particularly complex, something you simply do not have the resources to do yourself, a SaaS solution can be effective. Users don't need to worry about installing and maintaining software; they can simply access it via the internet.

The biggest advantages of adopting a distributed computing system

#1 Complete control over the system architecture

Distributed computing opens up your options when it comes to system architecture. Although you might rely on an external cloud service for some resources (like compute or storage), the architectural decisions are ultimately yours. This means you can make decisions based on exactly what your organization needs and how it works. In a sense, this is why distributed computing can bring you agility - not just in the strict sense, but in a broader version of the word: it allows you to prioritize according to your own needs and demands.

#2 Improved "absolute performance" of the computing system

Tasks can be partitioned into sub-computations that run concurrently, which speeds up overall task completion. What's more, if a particular site is overloaded with jobs, some of them can be moved to lightly loaded sites. This technique of 'load sharing' can boost the performance of your system. Essentially, distributed systems minimize latency and response time while increasing throughput.
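As a rough illustration of that partitioning idea, here is a minimal Python sketch that splits one job - summing squares - into chunks that run concurrently across worker processes:

```python
# Partitioning one job into sub-computations that run concurrently:
# summing the squares of 10 million numbers, split across workers.
from multiprocessing import Pool

def sum_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(sum_squares, chunks))
    print(total)
```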
#3 The price-to-performance ratio of the system

Distributed networks offer a better price/performance ratio than centralized mainframe computers. This is because decentralized and modular applications can share expensive peripherals, such as high-capacity file servers and high-resolution printers. Similarly, multiple components can be run on nodes with specialized processing, which further reduces the cost of multiple specialized processing systems.

#4 Disaster recovery

Distributed systems involve services communicating across different machines, which is where message integrity, confidentiality and authentication come into play. Distributed computing gives organizations the flexibility to deploy a four-way mechanism to keep operations secure:

- Encryption
- Authentication
- Authorization
- Auditing

Another aspect of disaster recovery is reliability. If computation and the associated data are effectively built into a single machine, and that machine goes down, the entire service goes with it. With a distributed system, specific services might go down, but the whole thing should, in theory at least, stay standing.

#5 Resilience through replication

So, if specific services can go down within a distributed system, you still need to do something to increase resilience. You do this by replicating services across multiple nodes, minimizing potential points of failure. This is what's known as fault tolerance - it improves system reliability without affecting the system as a whole. It's also worth pointing out that the hardware on which a distributed system is built is replaceable - better than depending on centralized hardware which, if it fails, will take everything with it.

Another distributed computing example: SETI

A good example of a distributed system is SETI. SETI collects massive amounts of data from observatories around the world on activity in the sky, in a bid to identify possible signs of extraterrestrial life. This information is then sliced into smaller pieces of data for easy analysis by distributed computing applications running as a screensaver on individual user PCs all around the world. While a PC is unused, the screensaver downloads a data slice from SETI, runs the analytics application while the PC is idle, and uploads the analyzed data slice back to SETI when the analysis is complete. This massive data analysis is possible only because of distributed computing.

So, although distributed computing has become a bit of a buzzword, the technology is gaining traction in the minds of customers and service providers. Beyond the hype and debate, these services will ultimately help companies to be more responsive to market conditions while restraining IT costs.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Oath's distributed network telemetry collector - 'Panoptes' - is now open source!
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018

Python Data Visualization myths you should know about

Savia Lobo
02 Nov 2018
4 min read
In recent years, we have experienced exponential growth in data. As the amount of data grows, the need for developers with knowledge of data analytics and, especially, data visualization spikes. Data visualizations help in getting a clear and concise view of the data, making it more tangible for (non-technical) audiences. MATLAB and R are the two languages that have traditionally been used for data science and data visualization. However, Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice. So data visualization seems easy, doesn't it? However, there are a lot of myths surrounding it. Let us have a look at some of them.

Myth 1: Data visualizations are just for data scientists

Today's data visualization libraries are very convenient, so any person can create meaningful visualizations in just a few minutes.

Myth 2: Data visualization technologies are difficult to learn

Of course, building and designing sophisticated data visualizations will take some work and learning, but with very little knowledge of the libraries and what they are capable of, you can create simple visualizations that will help you get valuable insights into your data. Python is a comparatively easy language. The "pythonic" approach is also used when building visualization libraries for Python, which makes them easy to understand and use.

Myth 3: Data visualization isn't needed for data insights

Imagine having a table of data with 20 columns and several thousand rows. What do you think will give you better insights into this data? Just looking at the table and trying to make sense of all the columns and values in it, or creating some simple plots that visualize its content? Of course, you could force yourself to get insights without visualizations, but the key is to work smarter, not harder.

Myth 4: Data visualization takes a lot of time

If you have a basic understanding of your data, you can create some basic visualizations in no time. There are a lot of libraries, which will be covered in this course, that allow you to simply import some data and build visualizations in a few lines of code. The more difficult part is creating visualizations that are descriptive and display the concepts you want to show - but don't worry, this will be discussed in the course in detail as well.
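As a quick illustration of Myths 3 and 4 together, here is a complete visualization in a handful of lines - a histogram that reveals the shape of a "column" of data far faster than scanning raw numbers would (the data here is randomly generated as a stand-in):

```python
# Seeing the shape of one "column" of data at a glance.
import random
import matplotlib.pyplot as plt

values = [random.gauss(50, 10) for _ in range(1000)]  # stand-in column

plt.hist(values, bins=30)
plt.title("Distribution of column values")
plt.show()
```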
Amidst all the myths, data visualization in combination with Python is an essential skill when working with data. Properly utilized, it is a powerful combination that not only enables you to get better insights into your data but also gives you the tools to communicate results better. Head over to our course 'Data Visualization with Python' to use Python with NumPy, Pandas, Matplotlib, and Seaborn to create impactful data visualizations with real-world, public data.

About Tim and Mario

Tim Großmann is a CS student with an interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning. Currently, he dedicates himself to applying deep learning to medical data to make health care accessible to everyone.

4 tips for learning Data Visualization with Python
Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
8 ways to improve your data visualizations

4 tips for learning Data Visualization with Python

Sugandha Lahoti
01 Nov 2018
4 min read
Data today is the world's most important resource. However, without properly visualizing your data to discover meaningful insights, it's useless. Creating visualizations helps in getting a clearer, more concise view of the data, making it more tangible for (non-technical) audiences. Python is the programming language of choice for developers these days. However, developers sometimes face issues performing data visualization with Python. In this post, Tim Großmann and Mario Döbler, the authors of the Data Visualization with Python course, discuss some of the best practices you should keep in mind while visualizing data with Python.

#1 Start looking at and experimenting with examples

One of the most important ways to deeply understand and learn to use Python for data visualization is to download example projects and play around with them. You should read their documentation and comments, change values, and observe what influence they have. In many cases, example projects can even serve as a starting point into which you insert your own data. Think about how you could modify the given examples to visualize your own data.

#2 Start from scratch and build on it

Sometimes starting with an empty canvas is the best approach. Start with only the necessary components, like your data and the import of your library of choice (see the sketch below). This builds a nice flow and process that will enable you to debug problems with precision. Once you have gone through the whole process of building a simple visualization, you will have a good understanding of where an error might occur and how to fix it. Starting from scratch sometimes shows you that a simpler solution will save you a lot of time while still communicating the essence of your idea.
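A from-scratch starting point might look like the following minimal sketch (the data is made up for illustration) - get the simplest plot working first, then build on it one element at a time:

```python
# Starting from scratch: just the data and one import.
import matplotlib.pyplot as plt

temperatures = [12, 14, 13, 17, 20, 22, 21]   # made-up sample data

# Get the simplest possible plot working first...
plt.plot(temperatures)
plt.show()

# ...then build on it one element at a time (title, labels, styling).
```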
#3 Make full use of documentation

These libraries come with plenty of documentation to answer every question you have. Make sure to make the best use of it: research the API, look at the given examples, and search for open issues on the GitHub pages when you encounter a problem. The libraries covered in the course "Data Visualization with Python" not only have extensive documentation but also an active community that is constantly creating new questions on Stack Overflow, which will help you find solutions to your problems in no time.

#4 Use every opportunity you have with data to visualize it

Every time you encounter new data, take a few minutes and think about what information might be interesting, then visualize it. Think back to the last time you had to give a presentation about your findings and all you had was a table of numerical values. For you it was understandable, but your colleagues sat there and scratched their heads. Try to create some simple visualizations that would have impressed the entire team with your results. Only practice makes perfect.

We hope these tips will not only enable you to get better insights into your data but also give you the tools to communicate results better. Don't forget to check out our course Data Visualization with Python to understand, explore, and effectively present data using the powerful data visualization techniques of Python.

About the authors

Tim Großmann is a CS student with an interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning. Currently, he dedicates himself to applying deep learning to medical data to make health care accessible to everyone.

8 ways to improve your data visualizations
Seaborn v0.9.0 brings better data visualization with new relational plots, theme updates, and more
Getting started with Data Visualization in Tableau

Quantum computing - Trick or treat?

Prasad Ramesh
01 Nov 2018
1 min read
Quantum computing uses quantum mechanics in quantum computers to solve a diverse set of complex problems. It stores information in qubits, which can exist in a superposition of states, letting a quantum computer work through a solution involving large parameters with far fewer operations than a standard computer.

What is so special about quantum computing?

Because they have the potential to work through and solve the complex problems of tomorrow, research and work in this area is attracting funding from everywhere. But these computers need a lot of physical space right now, rather like the very first computers of the twentieth century. Quantum computers also pose a security threat, since they are good at exactly the kind of large-number calculations that modern encryption relies on. Quantum encryption, anyone? Quantum computing is even available on the cloud from different companies, and there is even a dedicated language, Q#, by Microsoft. Using concepts like entanglement to speed up computation, quantum computing can solve complex problems - it's a tricky one, but I call it a treat. What about the security threat? Well, Alan Turing built a better machine to decrypt messages from another machine; we'll let you think about that one.
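For the curious, the "information in superposition" idea can be sketched in a few lines of ordinary Python - a back-of-the-envelope illustration, not a quantum SDK or a real simulator:

```python
# A single qubit as two complex amplitudes; measurement
# probabilities are the squared magnitudes of the amplitudes.
import math

h = 1 / math.sqrt(2)
alpha, beta = h, h   # state after a Hadamard gate on |0>: equal superposition

p0 = abs(alpha) ** 2   # probability of measuring 0
p1 = abs(beta) ** 2    # probability of measuring 1
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")   # 0.50, 0.50
```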

An ethical mobile operating system, /e/ - Trick or Treat?

Prasad Ramesh
01 Nov 2018
2 min read
Previously known as eelo, /e/ is an 'ethical' operating system for mobile phones. Leading the project is Gaël Duval, who is also the creator of Mandrake Linux.

Is it a new OS?

Not exactly: it is a fork of LineageOS stripped of Google apps, with a focus on privacy, positioned as an ethical OS.

What's so good about /e/?

The good thing here is that this is a unique effort at an ethical OS - something different from the data collection of Android or the expensive devices of Apple. With a functional ROM that includes all the essential functionality, Duval seems to be pretty serious about it. An OS that respects user privacy does sound like a very nice thing. However, as people on Reddit have pointed out, this is what Cyanogen was in the beginning. The ethical OS /e/ is not actually a new OS built from scratch - who has the time or funding for that today? You get /e/ services instead of Google services, but, ummm, can you trust them?

Is /e/ a trick... or a treat?

We have mixed feelings about this one. It is a commendable effort and the idea is right, but with the recent privacy debates everywhere, trusting a new OS is tricky. We'll reserve judgement till it is out of beta and has a name that you can Google search for.

Top 7 tools for virtual reality game developers

Natasha Mathur
31 Oct 2018
12 min read
According to Statista, the virtual reality software market is booming. It is projected to reach a value of around 24.5 billion U.S. dollars by 2020. The estimated revenue of the virtual reality market in the year 2021 is 3.56 billion U.S. dollars - a huge increase from a very respectable 3.06 billion U.S. dollars back in 2016.

This makes virtual reality a potentially lucrative opportunity if you're a game developer. But it's also one that's a lot of fun, with plenty of creative opportunities, and one that doesn't require a load of money up front. Thanks to technological advancements in the VR space, it's now easier than ever to build a VR game from scratch. But with so many virtual reality tools out there, it can be hard to know where to start - you're left with plenty of options but no sense of direction. To help you out, we've consolidated a list of what we think are the top 7 tools to help you get started.

1. Unity 3D: the leading game engine at the cutting edge of the industry

Developer: Unity Technologies
Release date: 2005

Why choose Unity for virtual reality game development?

In a nutshell: it is the easiest way to get started with virtual reality development and doesn't compromise on the quality of the developed game. Unity offers a huge 3D asset store, an online marketplace by Unity where you can easily find 2D and 3D models, SDKs, templates, and different virtual reality tools that you can download and import directly into your game. One of the most popular tools in the Unity asset store is the VR toolkit. So, for times when you don't want to spend time building a character model from scratch, you can simply pick one from the asset store, which helps jump-start the game development process. Some of these assets are free, and for some you pay a one-time fee.

Moreover, the documentation in Unity consists of vivid examples (e.g. an introduction to VR best practices), video tutorials, and live training sessions (e.g. a VR essentials pack demo). This is great news not only for experienced game developers but for newbies too, as Unity makes it easy to quickly learn to build games, including AAA-quality virtual reality games. It also has an ever-growing community, so when you get stuck somewhere during the game development process, a solid community will be there to offer advice on resolving a wide range of issues.

Languages supported: Unity supports three development languages, namely C#, Boo, and UnityScript.

Platforms supported: Unity supports all the major platforms: mobile, PC, web and console. The free version supports Mac OS X, Android, iOS, and Windows, among other mobile platforms; the paid version further supports Nintendo Wii, Xbox 360 and PlayStation. Unity also supports all the major HMDs, such as Oculus Rift, SteamVR/Vive, PlayStation VR, Gear VR, Microsoft HoloLens, and Google's Daydream View.

Price: Unity has three versions: Personal, Plus and Pro. The Personal version is completely free, Unity 3D Plus is $35 per seat per month, and Pro is $125 per seat per month. The Personal version, however, is more than enough to dive right into the development process.

Learning curve: Unity 3D has a gentle learning curve. It can be used with ease by beginners and professionals alike.
Learning resources:
- Unity Virtual Reality Projects - Second Edition
- Unity Virtual Reality - Volume 1 [Video]
- Unity Virtual Reality - Volume 2 [Video]

2. Unreal Engine 4: a free game engine with exceptional graphics and capabilities for virtual reality

Developer: Epic Games
Release date: 1998

Why choose Unreal Engine for virtual reality gaming?

Unreal Engine has powered games with some of the most exceptional graphics and features in the industry, so it naturally comes with features catering to advanced game development. For virtual reality, Unreal Engine offers an advanced cinematics system, advanced lighting capabilities, a rendering pipeline offering 90 Hz stereo framerate or faster at high resolutions, and tools scaling from simple to detailed scenes, environments and characters. Similar to Unity, Unreal Engine 4 also comes with an asset store - an online marketplace by Unreal offering animations, blueprints, code plugins, props, environments, and architectural visualizations. Again, just like Unity's asset store, some of the assets are paid and some are free. The documentation provided by Unreal Engine is not as rich as Unity's and consists of basic guides and live training streams on virtual reality development. Unreal Engine 4 also has a strong community to guide you through your game development journey.

Languages supported: Unreal Engine 4 offers only C++ as its development language.

Platforms supported: UE4 supports all the latest HMDs, such as Oculus Rift, HTC Vive, Samsung Gear VR, Google VR, and Leap Motion, among others. Unreal Engine 4 lets you deploy your VR game projects to Windows PC, PlayStation 4, Xbox One, Mac OS X, iOS, Android, AR, VR, Linux, SteamOS, and HTML5. You can run the Unreal Editor on Windows, Mac OS X, and Linux. Moreover, Xbox One, PlayStation 4 and Nintendo Switch console tools and code are also available at no additional cost to developers registered for the respective platform(s).

Price: The great thing about UE4 is that it is very cost-effective for all the game nerds out there: it's free to use, with a 5% royalty on gross product revenue after the first $3,000 per game per calendar quarter from commercial products.

Learning curve: Unreal Engine 4 has a steep learning curve and is suited mostly to professionals.

Learning resources:
- Exploring Unreal Engine 4 VR Editor and Essentials of VR [Video]
- Unreal Engine 4: The Complete Beginner's Course [Video]

3. CryEngine: a game engine with a powerful range of assets for virtual reality games

Developer: Crytek
Release date: 2002

Why choose CryEngine for virtual reality game development?

Similar to Unity and Unreal Engine, CryEngine also offers an asset store, with tools and assets across different domains such as 3D modeling, scripts, sounds, and animations. The documentation offered by CryEngine is not as rich as Unity's, which makes it harder for beginners to approach; however, it does have an online forum that can guide experienced developers during their virtual reality game development journey. CryEngine also includes the CE# Framework, a new Sandbox Editor, improved profiling, a reworked low-overhead renderer, DirectX 12 support, an advanced volumetric cloud system, a new particle system, FMOD Studio support, and Visual Studio 2015 support, all of which can collectively amp up the virtual reality game development process.
Languages supported: CryEngine supports C++, Flash, ActionScript, and Lua.

Platforms supported: CryEngine supports Windows, Linux, PlayStation 4, Xbox One, Oculus Rift, OSVR, PSVR, and HTC Vive. Mobile support is currently under development.

Price: CryEngine is free, but takes five percent of the revenues generated by each game built with it after those revenues have passed $5,000.

Learning curve: CryEngine has a steep learning curve: for anything other than basic games, you need a strong command of languages such as C++, Flash, ActionScript, and Lua.

Learning resources:
- CryENGINE Game Programming with C++, C#, and Lua
- CryENGINE SDK Game Programming Essentials [Video]

4. Blender: an accessible tool for building exceptional graphics and animations

Developer: Blender Foundation
Release date: 1998

Why choose Blender for virtual reality?

Blender, a modern 3D graphics package, is not only great for 3D modeling but supports the entirety of the 3D pipeline: rigging, animation, simulation, rendering, motion tracking, video editing, and game creation. It comes with a built-in, powerful path-tracing engine called Cycles that offers stunning, ultra-realistic rendering, real-time viewport preview, PBR shaders and HDR lighting support, as well as VR rendering support. It also has a solid community of developers and offers tutorials, workshops, and courses on character modeling, character animation, and Blender fundamentals. Blender comes with VR add-ons such as BlenderVR, which supports CAVE/VideoWall, head-mounted displays (HMDs) and external rendering modality engines. It helps with cross-platform development of virtual reality applications and with porting scenes from one VR platform configuration to another without any need to edit the actual scene.

Platforms supported: Blender supports Windows, Mac OS, and Linux.

Price: Blender is free to use.

Learning curve: Blender has a gentle learning curve and can be used with ease by beginners and professionals alike.

Learning resources:
- Building a Character using Blender 3D [Video]
- Blender 3D Basics

5. Amazon Lumberyard: an accessible and fast tool for building virtual reality games

Developer: Amazon
Release date: 2015

Why choose Amazon Lumberyard for virtual reality game development?

Based on CryEngine's architecture, Amazon Lumberyard is a powerful cross-platform game engine comprising tools that help you create the highest-quality games, connect your games to the vast storage of the AWS Cloud, and engage fans on Twitch. Lumberyard's professional tools, such as its virtual reality system, use Lumberyard's Gems: self-contained packages of assets and features that can be added to your game. These Gems act as templates for building your own, and they support all the VR devices without requiring any engine code editing. Lumberyard is also integrated with Amazon GameLift, an AWS service for deploying, operating, and scaling dedicated game servers for session-based multiplayer games. Lumberyard also speeds up virtual reality development with the new VR Preview function, built into the editor, which you can click to see your scene in VR right away. This lets game developers make VR-specific adjustments and level designs right in the editor, which is quite convenient and saves a lot of time.
Platforms supported: Lumberyard supports HMDs such as Oculus Rift, HTC Vive and Open Source Virtual Reality (OSVR). It offers support for PC, Xbox One, PlayStation 4, iOS (iPhone 5S+ and iOS 7.0+), and Android (Nexus 5 and equivalents, with support for OpenGL 3.0+). Lumberyard also offers support for dedicated servers on Windows and Linux.

Price: Amazon Lumberyard is free, with no seat licenses, royalties, or subscriptions required. You only pay the standard AWS fees for the AWS services you choose to use.

Learning curve: Lumberyard has a gentle learning curve and is easy to use for novices as well as professionals.

Learning resources:
- Learning AWS Lumberyard Game Development

6. AppGameKit-VR (AGK): an easy way to build games for beginners

Developer: The Game Creators
Release date: 2017

Why choose AppGameKit-VR for virtual reality game development?

AppGameKit-VR lets anyone quickly code and build apps for multiple platforms with the help of AGK's BASIC scripting system. It adds easy-to-use VR commands to the core AppGameKit script language, which delivers immersive VR experiences. It also allows full development control for SteamVR-supported head-mounted displays, touch devices, and Leap Motion hand tracking. AGK does the majority of the work for you, making it super easy to code, compile and export apps to each platform, so you can focus mainly on your game/app idea. AGK-VR offers 60 VR commands, ranging from diagnostic checks on the hardware and SteamVR, to initialising the HMD, creating standing or seated VR experiences, and rendering a 3D scene to the HMD. AGK also offers demos on how to get started with using these commands in your games. It has an online forum where you can ask questions, learn and interact with other users, and the AGK script is fully documented.

Platforms supported: AGK-VR offers support for Windows, Mac, Linux, iOS, Android (including Google, Amazon and Ouya), HTML5, and Raspberry Pi (free from the TGC website).

Price: AGK is available for $29.99.

Learning curve: AppGameKit-VR has a gentle learning curve, which is ideal for beginners and makes VR game development quick for the experienced.

7. Oculus Medium 2.0: software designed with virtual reality in mind

Developer: Oculus VR
Release date: 2016

Why choose Oculus Medium for building virtual reality games?

Oculus Medium is a great tool that brings sculpting, modeling, painting and creating objects for the virtual reality world together in a single package, and it's very handy to have during the character design process. It lets you sculpt and create a variety of 3D objects to include in your VR game with the help of the Oculus Touch controllers alongside the Oculus Rift. It comes with features such as grid snapping, an increased layer limit, multiple lights, and 300 prefabricated stamps. It is quite simple to use; anyone, be it a newbie or an experienced game developer, can pick it up. The rendering engine in Oculus Medium uses Vulkan, which results in smoother frame rates and better memory management when building higher-resolution sculpts. Other than that, Oculus Medium offers tutorials to quickly get the hang of its different features, and an online forum where VR artisans and developers share tips, information, and videos.

Price: Oculus Medium 2.0 is available for $30, which is quite affordable for novices and professionals alike.
Learning curve: Oculus Medium has a gentle learning curve, as it's pretty approachable for novices as well as professionals.

Each of the tools mentioned above brings something unique in terms of abilities and features. However, keep in mind that selecting a tool solely based on its technical features is not the best idea; rather, figure out what works best for you, depending on your experience and requirements. So which tools are you planning to use for VR game development? Is there any tool we missed? Let us know!

Game developers say Virtual Reality is here to stay
What's new in VR Haptics?
Top 7 modern Virtual Reality hardware systems

Automation and Robots - Trick or Treat?

Savia Lobo
31 Oct 2018
3 min read
Advancements in AI are on a path to reinventing the way organizations work. Last year, we wrote about RPA, which made manual front-end jobs redundant. This year, we have actual robots in the field. Last month, iRobot, the intelligent-robot maker, revealed its latest robot, the Roomba i7+, which maps and remembers the layout of your house and also empties its own bin automatically. Last week, Google announced its plans to launch a 'Cloud Robotics platform' for developers in 2019, which will encourage efficient robotic automation in highly dynamic environments. Earlier this month, Amazon announced that it is opening a chain of 3,000 cashier-less stores across the US by 2021. And most recently, Walmart announced that it is going to launch a cashierless store next year.

The terms 'automation' and 'robotics' sometimes overlap: robots can be used to automate physical tasks, while many types of automation have nothing to do with physical robots. The emergence of AI robots will reduce the need for a huge human workforce, boost the productivity of organizations and reduce their time to market. For example, customer service and other front-end jobs can run 24x7x365 without interruption. Within industrial automation, robots can automate time-consuming physical processes, and collaborative robots will carry out a task the same way a human would, albeit more efficiently!

The positives aside, there is a danger of AI getting out of control, as machines can go rogue without humans in the loop. That is why Members of the European Parliament (MEPs) recently passed a resolution on banning autonomous weapon systems, emphasizing that weapons like these, without proper human control over selecting and attacking targets, are a disaster waiting to happen.

At the more mundane end of the social spectrum, the dangers of automation are still very real, as robots are expected to replace a significant amount of human labor. As per a World Economic Forum survey, within 5 years machines will do half of today's job tasks, and 1 in 2 employees will need reskilling or upskilling. Andy Haldane, the Bank of England's chief economist, says 15 million jobs in Britain are at stake, with artificial intelligence robots set to replace humans in the workforce.

As of now, AI is a treat for organizations, given the advantages it provides over humans. Although it will replace jobs, people can upskill to continue thriving in the automation-augmented future.

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
How Rolls Royce is applying AI and robotics for smart engine maintenance
Home Assistant: an open source Python home automation hub to rule all things smart