
Tech News - Data Analysis

4 Articles

AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo
12 Mar 2019
4 min read
Amazon Web Services has announced a new open source distribution of Elasticsearch, named Open Distro for Elasticsearch, in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch is focused on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source. It gives developers the freedom to contribute open source, value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project.

The need for Open Distro for Elasticsearch

Elasticsearch's Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018 the community has witnessed significant intermixing of proprietary code into the code base. While an Apache 2.0-licensed download is still available, there is little clarity about what customers who care about open source are getting and what they can depend on. As the AWS blog puts it: "Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid)."

Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to work only on open source to contribute and participate. The innovation focus has also shifted from furthering the open source distribution to making the proprietary distribution popular, which means that the majority of new Elasticsearch users are now, in fact, running proprietary software.

"We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path," AWS states in its blog.

These changes have created uncertainty about the longevity of the open source project as it becomes less innovation focused. Customers also want the freedom to run the software anywhere and to self-support at any point in time if they need to. All of this led to the creation of Open Distro for Elasticsearch.

Features of Open Distro for Elasticsearch

Keeps data security in check: Open Distro for Elasticsearch protects users' clusters with advanced security features, including a number of authentication options (such as Active Directory and OpenID), encryption in flight, fine-grained access control, detailed audit logging, advanced compliance features, and more.

Automatic notifications: Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system, enabling users to monitor data and send notifications to their stakeholders automatically. An intuitive Kibana interface and a powerful API further ease setting up and managing alerts.

Increased SQL query interactions: It also allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. The SQL support offers more than 40 functions, data types, and commands, including join support and direct export to CSV.

Deep diagnostic insights with Performance Analyzer: Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. It runs independently, without any performance impact, even when Elasticsearch is under stress.

According to the AWS Open Source Blog, "With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support."

Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, "We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology."

Christian Kaiser, VP Platform Engineering at Netflix, said, "Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution that we can be confident will remain open source and community-driven."

To know more about Open Distro for Elasticsearch in detail, visit the official AWS blog post.

Read next:

GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
How does Elasticsearch work? [Tutorial]
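The SQL plugin is exposed through the cluster's REST API at the _opendistro/_sql endpoint. Below is a minimal sketch in Python of what a query might look like; the index name "logs", its fields, and the demo admin credentials are illustrative assumptions, not part of the announcement.

```python
# A minimal sketch of querying Open Distro for Elasticsearch through its
# SQL plugin. The "logs" index, its "status" field, and the admin/admin
# credentials are hypothetical; change them for a real cluster.
import requests

OPENSEARCH_URL = "https://localhost:9200"  # assumed local demo cluster

query = {"query": "SELECT status, COUNT(*) AS hits FROM logs GROUP BY status"}

response = requests.post(
    f"{OPENSEARCH_URL}/_opendistro/_sql",
    json=query,
    auth=("admin", "admin"),  # demo security-plugin credentials
    verify=False,             # demo TLS certificates are self-signed
)
response.raise_for_status()
print(response.json())
```

Appending ?format=csv to the same endpoint should return the results as CSV, which corresponds to the direct CSV export feature mentioned above.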


Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets

Sugandha Lahoti
07 Feb 2019
2 min read
Google’s BigQuery Public Datasets program has added six new cryptocurrencies to expand its blockchain search tools. Counting Bitcoin and Ethereum, which were added last year, the total is now eight. The six new cryptocurrency blockchain datasets are Bitcoin Cash, Dash, Dogecoin, Ethereum Classic, Litecoin, and Zcash.

Each public dataset is stored in BigQuery and made available to the general public through the Google Cloud Public Dataset Program. The blockchain-related datasets contain each blockchain's transaction history to help developers better understand cryptocurrency.

Apart from adding new datasets, Google has released a set of queries and views that map all blockchain datasets to a double-entry book data structure, enabling multi-chain meta-analyses as well as integration with conventional financial record processing systems. A Blockchain ETL ingestion framework updates all datasets every 24 hours via a common codebase. Because updates run as a daily batch, loading Bitcoin blocks into BigQuery incurs higher latency; however, the shared codebase means additional BigQuery datasets can be ingested with less effort, and a low-latency loading solution could be implemented once and reused to enable real-time streaming transactions for all blockchains.

With this release, the blockchain datasets have been standardized into a "unified schema," meaning the data is structured in a uniform, easy-to-access way. Google has also included more data, such as script opcodes; having these scripts available for Bitcoin-like datasets enables more advanced analyses. It has also created views that abstract the blockchain ledger into a double-entry accounting ledger, which helps it interoperate with Ethereum and ERC-20 token transactions.

Allen Day, Cloud Developer Advocate, Google Cloud Health AI, writes in a blog post, "We hope these new public datasets encourage you to try out BigQuery and BigQuery ML for yourself. Or, if you run your own enterprise-focused blockchain, these datasets and sample queries can guide you as you form your own blockchain analytics."

Read next:

Blockchain governance and uses beyond finance – Carnegie Mellon University podcast
Stable version of OpenZeppelin 2.0, a framework for smart blockchain contracts, released!
Is Blockchain a failing trend or can it build a better world? Harish Garg provides his insight.
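To give a flavor of what querying these datasets looks like, here is a minimal sketch in Python using the google-cloud-bigquery client. The dataset and table names follow the bigquery-public-data project's crypto_* naming convention; the exact schema may differ, so treat the column names as assumptions.

```python
# A minimal sketch: count daily Litecoin transactions from the public
# crypto_litecoin dataset. Assumes application-default credentials are
# configured; column names (block_timestamp) are assumptions about the
# dataset schema.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT
      DATE(block_timestamp) AS day,
      COUNT(*) AS tx_count
    FROM `bigquery-public-data.crypto_litecoin.transactions`
    WHERE block_timestamp >= '2019-01-01'
    GROUP BY day
    ORDER BY day
    LIMIT 30
"""

for row in client.query(sql).result():
    print(row.day, row.tx_count)
```

Because every chain is mapped to the same unified schema, a query like this could in principle be pointed at any of the eight crypto_* datasets just by changing the table name.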


The tech monopoly and income inequality: Why innovation isn't for everyone

Amarabha Banerjee
30 Oct 2018
5 min read
“Capital is dead labor, which, vampire-like, lives only by sucking living labor, and lives the more, the more labor it sucks.” – Karl Marx

An explanation is due after that introductory statement. I am not going to whine about how capitalism is a necessary evil and socialism or similar ‘isms’ promise a better future. Let’s agree on one basic fact: we are living in one of the most peaceful and technologically advanced eras of human civilization on earth (that we know of yet). No major worldwide war has taken place for around 75 years. We have developed tech miracles, with smartphones and social media becoming the new-age essential commodities.

It would be naive on my part to blame the burgeoning social media business and the companies involved for the decreasing happiness index and the gaping hole between the incomes of the rich and the poor. But facts don’t lie. A recent study conducted at UC Berkeley (Chris Benner, Gabriela Giusta, Louise Auerhahn, Bob Brownstein, Jeffrey Buchanan) highlights the growing income inequality in Silicon Valley.

[Chart: change in income by earnings percentile. Source: UC Berkeley]

The chart shows the dip in income for employees in different income brackets, with the X-axis showing the percentile of total income earners. The biggest dip has happened in the middle-income section (14.2%); the top tier has seen a rise of 1%, and the bottom end is pretty much stagnant, with a dip of 1%. The biggest impact has been on those slap bang in the middle of the average income bracket. This is particularly alarming because from 1997 to 2017 the owners of some of the planet’s biggest tech companies saw a massive jump in their earnings. Companies like Amazon, Facebook, and Google have amassed enormous wealth and control over the tech landscape; Amazon’s Jeff Bezos is presently sitting on top of a $150 billion fortune.

The majority of the population anticipated that tech would improve the average quality of life, raise the minimum wage for low-rung workers, and improve the economic status of the middle class. The existing situation points in exactly the opposite direction. The reasons can be summed up as below.

Competition among tech companies to survive is immense. Hence profits are largely invested in R&D and in developing ‘better’ futuristic solutions, or new novel products for consumers and users. Advertisement and promotional campaigns have also become significant factors in survival strategies. That’s why we don’t see companies building affordable housing for their workforce anymore, and bonuses have become rare events. The money comes in and goes back into the wheel, and the employees are told that to survive, the company will have to innovate.

The survival rate of start-ups in the tech domain is very low. Hence, more and more startups are shrinking their budgets and trying to reach profitability early. To a certain extent this makes sense, but it is damaging for the people who join to explore new domains in their career. If the startup fails, they have to start afresh; if it is even moderately successful, they may find they are working astonishingly long hours for very little reward.

The modern-day tech workforce is not organized in any manner. The top tech companies discourage their employees from creating any form of labor union. While activism at work has not always been a good influence, its complete absence has often proved to be a disadvantage for the workforce, particularly given the tumultuous conditions of working in the tech field.

The race to reinvent the wheel, even when the system is running at a decent, progressive pace, is what has brought human civilization to its present state. Global wealth distribution is skewed in favor of the rich few as badly as it possibly could be, and monopoly in the tech market is not helping the cause. The internet is slowly becoming a playground for rich kids who own everything, right from the ground itself to the tools to keep it in shape; the rules are determined by them. The frenzy over yearly new tech releases is so huge and so marketable that people have stopped asking the fundamental question of it all: what’s actually new?

This is what capitalism stood for during most of the 20th century, even if it wasn’t as immediately clear then as it is now. It made people believe that consumerism could make their daily lives better, that it’s perfectly OK to let go of a few basic humanitarian values in the pursuit of wealth, and, most importantly, that everyone can achieve it if they work hard and put their heart and soul into it. Today, technology continues to show us such dreams: artificial intelligence and machine learning can make our lives better, and self-driving cars should ease traffic issues and halt the march of global warming. But just because we see these dreams doesn’t mean they’re coming true. The many people at the heart of this innovation should feel the positive impact of these changes in their own lives, not longer working hours and precarious employment. The constant urge to win the race shouldn’t make the rich richer and the poor poorer. If that pattern emerges and is upheld over the next 4-5 years, then we can surely conclude that Capitalism 2.0 - the type of capitalism that benefits from vulnerability and not from the power and creativity we share - has finally taken its full form. And we might have only ourselves to blame.

Read next:

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Facebook finds ‘no evidence that hackers accessed third party Apps via user logins’
Is YouTube’s AI Algorithm evil?


Nvidia and AI researchers create AI agent Noise2Noise that can denoise images

Richard Gall
10 Jul 2018
2 min read
Nvidia has created an AI agent that can clean 'noisy' images without ever having seen a 'clean' one. Working alongside AI researchers from MIT and Aalto University, it has created something called 'Noise2Noise'. The team's findings could, they claim, "lead to new capabilities in learned signal recovery using deep neural networks." This could have a big impact on a number of areas, including healthcare.

How researchers trained the Noise2Noise AI agent

The team took 50,000 images from the ImageNet database and manipulated them to look 'noisy'. Noise2Noise then ran on these images and was able to 'denoise' them, without knowing what a clean image looked like. This is the most significant part of the research: the AI agent wasn't learning from clean data, but was instead simply learning the denoising process. This is an emerging and exciting area in data analysis and machine learning.

In the introduction to their recently published journal article, which coincides with a presentation at the International Conference on Machine Learning in Stockholm this week, the research team explain: "Signal reconstruction from corrupted or incomplete measurements is an important subfield of statistical data analysis. Recent advances in deep neural networks have sparked significant interest in avoiding the traditional, explicit a priori statistical modeling of signal corruptions, and instead learning to map corrupted observations to the unobserved clean versions."

The impact and potential applications of Noise2Noise

Because the Noise2Noise AI agent doesn't require clean data, or the 'a priori statistical modeling of signal corruptions', it could be applied in a number of very exciting ways. It "points the way to significant benefits in many applications by removing the need for potentially strenuous collection of clean data," the team argue. One of the most interesting potential applications of the research is in the field of MRI scans. An agent like Noise2Noise could produce a much more accurate MRI scan than traditional reconstruction methods, which use the Fast Fourier Transform. This could subsequently lead to a greater level of detail in MRI scans, helping medical professionals make quicker diagnoses.

Read next:

Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Nvidia’s Volta Tensor Core GPU hits performance milestones. But is it the best?
How to Denoise Images with Neural Networks
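The core trick is easy to see in code: the network is trained to map one noisy copy of an image to another, independently noisy copy of the same image, and with zero-mean noise and an L2 loss it converges toward the clean signal. Below is a toy sketch of that training loop; the tiny convolutional model, noise level, and random placeholder batch are illustrative assumptions, not the paper's actual architecture or data.

```python
# A toy sketch of the Noise2Noise training idea: the network never sees a
# clean image during training; both its input and its regression target
# are independently corrupted copies of the same underlying signal.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in for the paper's much deeper network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch; the actual work used 50,000 ImageNet images.
clean = torch.rand(16, 1, 64, 64)

for step in range(100):
    # Two independent Gaussian corruptions of the same underlying batch.
    noisy_input = clean + 0.1 * torch.randn_like(clean)
    noisy_target = clean + 0.1 * torch.randn_like(clean)

    optimizer.zero_grad()
    # Noisy-to-noisy regression: no clean image enters the loss.
    loss = loss_fn(model(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
```

Because the noise is zero-mean, the expected value of the noisy target is the clean image, which is why minimizing the L2 loss against noisy targets still teaches the network to denoise.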