
Tech News

3711 Articles

389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings

Bhagyashree R
29 Aug 2018
2 min read
Red Hat and SUSE have withdrawn their support for OpenLDAP in their Enterprise Linux offerings, where it will be replaced by Red Hat's own 389 Directory Server. The openldap-server packages were deprecated starting with Red Hat Enterprise Linux (RHEL) 7.4 and will not be included in any future major release of RHEL. SUSE, in its release notes, has mentioned that the OpenLDAP server is still available in the Legacy Module for migration purposes, but it will not be maintained for the entire SUSE Linux Enterprise Server (SLE) 15 lifecycle.

What is OpenLDAP?

OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol (LDAP) developed by the OpenLDAP Project. It is a collective effort to develop a robust, commercial-grade, open source LDAP suite of applications and development tools.

What is 389 Directory Server?

The 389 Directory Server is an LDAP server developed by Red Hat as part of Red Hat's community-supported Fedora Project. The name "389" comes from the port number used by LDAP. It supports many operating systems, including Fedora, Red Hat Enterprise Linux 3 and above, Debian, and Solaris 8 and above. The 389 Directory Server packages provide the core directory services components for Identity Management (IdM) in Red Hat Enterprise Linux and the Red Hat Directory Server (RHDS). The package is not supported as a stand-alone solution for providing LDAP services.

Why did Red Hat and SUSE withdraw their support?

According to Red Hat, customers prefer the Identity Management (IdM) solution in Red Hat Enterprise Linux over the OpenLDAP server for enterprise use cases. This is why they decided to focus on the technologies in which Red Hat historically has deep understanding and expertise, and in which it has been investing for more than a decade. By focusing on the Red Hat Directory Server and IdM offerings, Red Hat will be able to better serve the customers of those solutions and increase the value of its subscription.

To know more about Red Hat and SUSE withdrawing their support for OpenLDAP, check out Red Hat's announcement and SUSE's release notes.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
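Because 389 Directory Server and OpenLDAP both implement the same LDAP protocol (the name "389" refers to LDAP's default port), directory clients generally work unchanged against either server. As a minimal, hedged sketch using the third-party Python ldap3 library, with a placeholder host, bind DN, password, and search base:

```python
# Minimal sketch: querying a directory server over LDAP with the ldap3 library.
# The query is the same whether the backend is OpenLDAP or 389 Directory Server,
# since both speak LDAP (port 389 by default). Host, credentials, and the search
# base below are placeholder assumptions, not values from the article.
from ldap3 import Server, Connection, ALL

server = Server("ldap://ldap.example.com:389", get_info=ALL)
conn = Connection(server, user="cn=Directory Manager", password="secret", auto_bind=True)

# Search for person entries under an example base DN and print a few attributes.
conn.search("dc=example,dc=com", "(objectClass=person)", attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)

conn.unbind()
```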

Google open sources DeepLab-v3+: A model for Semantic Image Segmentation using TensorFlow

Savia Lobo
13 Mar 2018
2 min read
DeepLab-v3+, Google's latest and best performing semantic image segmentation model, is now open sourced! DeepLab is a state-of-the-art deep learning model for semantic image segmentation, whose goal is to assign semantic labels (e.g., person, dog, cat, and so on) to every pixel in the input image. Assigning these semantic labels sets a much stricter localization accuracy requirement than other visual entity recognition tasks such as image-level classification or bounding-box-level detection. Examples of semantic image segmentation tasks include the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones, and mobile real-time video segmentation.

DeepLab-v3+ is implemented in TensorFlow, with its models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment. (Source: Google Research blog)

Some of the highlights of DeepLab-v3+:

Google has extended DeepLab-v3 to include a simple yet effective decoder module that refines the segmentation results, especially along object boundaries. In this encoder-decoder structure, one can arbitrarily control the resolution of the extracted encoder features by atrous convolution to trade off precision and runtime.

They have also shared their TensorFlow model training and evaluation code, along with models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks.

This version also adopts two network backbones, MobileNetv2 and Xception. MobileNetv2 is a fast network structure designed for mobile devices, while Xception is a powerful network structure intended for server-side deployment.

You can read more about this announcement on the Google Research blog.
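For readers who want a feel for how a released segmentation model like this is typically consumed, here is a minimal sketch of single-image inference against a frozen TensorFlow 1.x graph. The graph filename is a placeholder, and the input/output tensor names are assumptions based on the convention used in DeepLab's demo notebook; verify them against the released checkpoints before relying on them.

```python
# Minimal sketch: running semantic segmentation inference with a frozen
# TensorFlow 1.x graph. Filename and tensor names are illustrative assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

FROZEN_GRAPH = "deeplabv3_pascal_frozen_inference_graph.pb"  # placeholder path
INPUT_TENSOR = "ImageTensor:0"           # assumed input tensor name
OUTPUT_TENSOR = "SemanticPredictions:0"  # assumed output tensor name

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

image = np.asarray(Image.open("example.jpg").convert("RGB"))
with tf.Session(graph=graph) as sess:
    # The model returns a per-pixel class label map (person, dog, cat, ...).
    label_map = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: [image]})

print(label_map.shape)  # one class id per pixel of the (resized) input
```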

Cloudflare finally launches Warp and Warp Plus after a delay of more than five months

Vincy Davis
27 Sep 2019
5 min read
More than five months after announcing Warp, Cloudflare finally made it available to the general public yesterday. With two million people on the waitlist to try Warp, the Cloudflare team says it was harder than they thought to build a next-generation service that secures consumer mobile connections without compromising on speed and power usage. Along with Warp, Cloudflare is also launching Warp Plus.

Warp is a free VPN addition to the 1.1.1.1 DNS resolver app which speeds up mobile data by using the Cloudflare network to resolve DNS queries at a faster pace. It also comes with end-to-end encryption and does not require users to install a root certificate to observe encrypted internet traffic. It is built around a UDP-based protocol that is optimized for the mobile internet and offers excellent performance and reliability.

Why did Cloudflare delay the Warp release?

A few days before Cloudflare announced Warp on April 1st, Apple released iOS 12.2 with significant changes in its underlying network stack implementation. This made the Warp network unstable, forcing the Cloudflare team to arrange workarounds in their networking code, which took more time. Cloudflare adds, "We had a version of the WARP app that (kind of) worked on April 1. But, when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we'd anticipated."

As the internet is made up of diverse network components, the Cloudflare team found it difficult to cover all the diversity of mobile carriers, mobile operating systems, and mobile device models in their network. The team also found it challenging to handle users' diverse network settings. Warp uses a technology called Anycast to route user traffic to the Cloudflare network; however, Anycast moves users' data between entire data centers, which made Warp's functioning complex.

To overcome these barriers, the Cloudflare team changed its approach by focusing more on iOS. The team also solidified the shared underpinnings of the app to ensure that it would work even with future network stack upgrades, and tested Warp with network-based users to discover as many corner cases as possible. In the process, the Cloudflare team invented new technologies to keep the session state stable even across multiple mobile networks.

Cloudflare introduces Warp Plus - an unlimited version of Warp

Along with Warp, the Cloudflare team has also launched Warp Plus, an unlimited version of Warp available for a monthly subscription fee. Warp Plus is faster than Warp and uses Cloudflare's Argo Smart Routing to achieve a higher speed than Warp. The official blog post states, "Routing your traffic over our network often costs us more than if we release it directly to the internet." To cover these costs, Warp Plus will charge a monthly fee of $4.99/month or less, depending on the user's location. The Cloudflare team also added that they will be launching a test tool within the 1.1.1.1 app in a few weeks to let users "see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus."

Read Also: Cloudflare plans to go public; files S-1 with the SEC

To know more details about Warp Plus, read the technical post by the Cloudflare team.

Privacy features offered by Warp and Warp Plus

The 1.1.1.1 DNS resolver app provides strong privacy protections: debug logs are kept only long enough to ensure the security of the service, and Cloudflare retains only limited transaction data for legitimate operational and research purposes. Warp will not only maintain the 1.1.1.1 DNS protection layers but will also ensure that:

User-identifiable log data will not be written to disk
Users' browsing data will not be sold for advertising purposes
Warp will not demand any personal information (name, phone number, or email address) to use Warp or Warp Plus
Outside auditors will regularly verify Warp's functioning

The Cloudflare team has also notified users that the newly available Warp will still have bugs. The blog post specifies that the most common bug currently in Warp is traffic misrouting, which makes Warp slower than non-Warp mobile internet. (Image Source: Cloudflare blog)

The team has made it easy for users to report bugs: they just have to click the little bug icon near the top of the screen in the 1.1.1.1 app, or shake their phone with the app open, and send a bug report to Cloudflare.

Visit the Cloudflare blog for more information on Warp and Warp Plus.

Facebook will no longer involve third-party fact-checkers to review the political content on their platform
GNOME Foundation's Shotwell photo manager faces a patent infringement lawsuit from Rothschild Patent Imaging
A zero-day pre-auth vulnerability is currently being exploited in vBulletin, reports an anonymous researcher

Scrivito launches serverless JavaScript CMS

Kunal Chaudhari
17 Apr 2018
2 min read
Scrivito, a SaaS-based content management service, launched a new breed of cloud-based, serverless JavaScript CMS specifically targeted at medium to large sized businesses. While the world is shifting to cutting-edge cloud technology, web CMS platforms are still stuck in the past. Thomas Witt, Co-Founder and CTO of Scrivito, said, "We're at a tipping point. Agencies and dev teams that stick with Wordpress and the like are doomed to be overtaken by the inevitable shift to serverless computing and JavaScript development."

Scrivito checks the boxes for key trending tech innovations in the web development space. Serverless? Yes. Cloud native? Yes. So what's unique about this cutting-edge content management interface, and how exactly does it differentiate itself from other traditional CMSs?

Scrivito requires zero maintenance thanks to the cloud

This is the most unique feature of Scrivito. Since it is a cloud-based service, it allows developers to spin up a CMS instance without having to re-install anything or reconfigure databases, search engine indexing, backups, or metadata. This leads to no downtime, no software patches, and minimal maintenance effort.

Component reusability powered by ReactJS

Scrivito is powered by Facebook's popular frontend framework, React. Thanks to its reusable UI components and its flexibility, developers can create complex and interactive functionalities such as configurators or multi-page forms with ease. Not only is it built for developers, it also makes it easier for agencies and marketing teams to build, edit, and manage secure, reliable, and cost-effective sites, microsites, and landing pages.

Scrivito is extendable

Scrivito is easily extendable because it doesn't require any infrastructure. Developers and editors can create their own widgets and data structures on the fly. Thanks to its unique working-copies technology, it brings version control concepts from software development to the CMS world, eliminating the need for a staging server and allowing parallel editing of content across teams. Plus, its API-driven approach provides the benefits of a serverless and a headless CMS together with WYSIWYG editing in a single solution.

Scrivito has certainly ignited a revolution in the web development space by introducing serverless technologies to CMS applications. It is available at different price points for personal and enterprise users. To know more about other features and pricing options, check out the project's official webpage.

Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet

Sunith Shetty
19 Oct 2018
3 min read
Graph Nets is DeepMind's new library for building graph networks in TensorFlow and Sonnet. Last week, a paper, Relational inductive biases, deep learning, and graph networks, was published on arXiv by researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh. The paper introduces a new machine learning framework called graph networks, which is expected to bring new innovations to the artificial general intelligence realm.

What are graph networks?

Graph networks generalize and extend various types of neural networks to perform calculations on graphs. They can implement relational inductive bias, a technique used for reasoning about inter-object relations. The graph networks framework is based on graph-to-graph modules. Each graph's features are represented by three characteristics:

Nodes
Edges: relations between the nodes
Global attributes: system-level properties

A graph network takes a graph as input, performs the required operations and calculations on the edges, the nodes, and the global attributes, and then returns a new graph as output. The research paper argues that graph networks can support two critical human-like capabilities:

Relational reasoning: drawing logical conclusions about how different objects and things relate to one another
Combinatorial generalization: constructing new inferences, behaviors, and predictions from known building blocks

To understand and learn more about graph networks, you can refer to the official research paper.

Graph Nets

The Graph Nets library can be installed from pip. To install the library, run the following command:

$ pip install graph_nets

The installation is compatible with Linux/Mac OS X, and Python versions 2.7 and 3.4+.

The library includes Jupyter notebook demos which allow you to create, manipulate, and train graph networks to perform operations such as a shortest-path-finding task, a sorting task, and a prediction task. Each demo uses the same graph network architecture, which shows the flexibility of the approach. You can try out the demos in your browser using Colaboratory; in other words, you don't need to install anything locally when running the demos in the browser (or on your phone) via the cloud Colaboratory backend. You can also run the demos on your local machine by installing the necessary dependencies.

What's ahead?

The concept was released with ideas based not only on artificial intelligence research but also on the computer and cognitive sciences. Graph networks are still an early-stage research theory which does not yet offer any convincing experimental results. But it will be very interesting to see how well graph networks live up to the hype as they mature.

To try out the open source library, you can visit the official GitHub page. In order to provide any comments or suggestions, you can contact graph-nets@google.com.

Read more

2018 is the year of graph databases. Here's why.
Why Neo4j is the most popular graph database
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
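As a quick illustration of the library's core abstraction, the sketch below (adapted loosely from the project's README-style usage; the toy graph and MLP sizes are illustrative assumptions) applies a GraphNetwork module to a small input graph and gets a new graph back:

```python
# Minimal sketch: applying a GraphNetwork module to a toy input graph.
# Assumes the graph_nets and dm-sonnet packages are installed (TF 1.x era APIs);
# layer sizes and feature values are illustrative only.
import sonnet as snt
from graph_nets import modules, utils_tf

# A single toy graph: 3 nodes, 2 directed edges, and one global feature.
input_graphs = utils_tf.data_dicts_to_graphs_tuple([{
    "nodes": [[0.1], [0.2], [0.3]],
    "edges": [[1.0], [2.0]],
    "senders": [0, 1],
    "receivers": [1, 2],
    "globals": [0.0],
}])

# Edge, node, and global update functions are ordinary neural networks.
graph_net = modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([16, 16]),
    node_model_fn=lambda: snt.nets.MLP([16, 16]),
    global_model_fn=lambda: snt.nets.MLP([16, 16]))

# The module maps a graph to a new graph with updated edge, node, and global features.
output_graphs = graph_net(input_graphs)
# In TF 1.x the fields of output_graphs are tensors; evaluate them in a tf.Session.
```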

workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain. They will be able to deploy their Cloudflare Workers to a subdomain of their choice, with the extension .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare servers, not in a user's browser, meaning that a user's code runs in a trusted environment where it cannot be bypassed by malicious clients.

workers.dev was obtained through Google's TLD launch program. Customers can head over to workers.dev, where they will be able to claim a subdomain (one per user). workers.dev is itself fully served using Cloudflare Workers.

Zack Bloom, the Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps. Without cold starts, users get instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an amazing response from users all over the internet (Source: Hacker News), and this news has also been received with much enthusiasm:
https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers

Deno, an attempt to fix Node.js flaws, is rewritten in Rust

Prasad Ramesh
27 Aug 2018
2 min read
Deno is a runtime by the creator of Node, Ryan Dahl, that aims to fix some of the problems in Node. Originally written in Go, Deno has now been rewritten in Rust and is at version 0.1.

Node.js was developed nearly a decade ago. It was designed in 2009 to run server-side JavaScript. The implementation solved the problems of 2009, for which Dahl has no regrets. But lately he did have regrets, elaborated in a talk on 10 things he regrets about Node at JSConf 2018. Some of the regrets included packages, security issues, and the entire build system, among others.

Deno is a secure TypeScript runtime on Chrome V8. It was originally written in Go and has now been rewritten in Rust to avoid potential garbage collector issues. Deno is similar to Node.js but is focused on security. Deno takes full advantage of JavaScript being a secure sandbox, so, unlike Node.js, Deno is sandboxed: scripts run without any write access by default, and using untrusted utilities like linters will be optional. There is no package.json in Deno, no npm, and it is not explicitly compatible with Node.

An important thing to note is that the build requirement is Python 2, not Python 3, because Chrome V8's build scripts still use Python 2.

There were plans to rewrite Deno in Rust when it was originally released in June this year. Dahl mentioned in a GitHub comment: "The reason for not using Go is that it has a rather complex runtime - including a GC. Although I haven't experienced any problems with that yet, it's not hard to imagine that down the road that might clash badly with V8's very complex runtime."

You can get the binaries here to get started, and check out the GitHub repo.

Deploying Node.js apps on Google App Engine is now easy
Creating Macros in Rust [Tutorial]
Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more

LLVM 8.0.0 releases!

Natasha Mathur
22 Mar 2019
3 min read
The LLVM team released LLVM 8.0 earlier this week. LLVM is a collection of tools that help develop compiler front ends and back ends. It is written in C++ and has been designed for compile-time, link-time, run-time, and "idle-time" optimization of programs written in arbitrary programming languages. The LLVM 8.0 release notes cover known issues, major improvements, and other changes in the subprojects of LLVM.

There are certain issues in LLVM 8.0.0 that could not be fixed before this release. For instance, clang is getting miscompiled by trunk GCC, and "asan-dynamic" does not work on FreeBSD. Beyond the known issues, there is a long list of changes that have been made in LLVM 8.0.0.

Non-comprehensive changes in LLVM 8.0.0

The llvm-cov tool can export lcov trace files with the help of the -format=lcov option of the export command.
The add_llvm_loadable_module CMake macro has been deprecated. The add_llvm_library macro with the MODULE argument now provides the same functionality.
For MinGW, references to data variables that are to be imported from a DLL can now be accessed via a stub, which allows the linker to convert them to a dllimport if needed.
Support has been added for labels as offsets in the .reloc directive.
Windows support for libFuzzer (x86_64) has also been added.

Other changes

LLVM IR: A function attribute named speculative_load_hardening has been introduced. It indicates that Speculative Load Hardening should be enabled for the function body.

JIT APIs: The ORC (On Request Compilation) JIT APIs now support concurrent compilation. The existing (non-concurrent) ORC layer classes, as well as the related APIs, have been deprecated and renamed with a "Legacy" prefix (e.g. LegacyIRCompileLayer). All the deprecated classes will be removed in LLVM 9.

AArch64 Target: Support has been added for Speculative Load Hardening. Initial support has also been added for the Tiny code model, where code and statically defined symbols should remain within 1MB.

MIPS Target: Support for the GlobalISel instruction selection framework has been improved. The ORC JIT now offers support for the MIPS and MIPS64 architectures. There is also newly added support for the MIPS N32 ABI.

PowerPC Target: This target has switched to a non-PIC default in LLVM 8.0.0, and Darwin support has been deprecated. Out-of-order scheduling has been enabled for P9.

SystemZ Target: Changes include various code-gen improvements related to improved auto-vectorization, inlining, and instruction scheduling.

Other than these, changes have also been made to the X86 target, WebAssembly target, Nios2 target, and LLDB. For a complete list of changes, check out the official LLVM 8.0.0 release notes.

LLVM 7.0.0 released with improved optimization and new tools for monitoring
LLVM will be relicensing under Apache 2.0 start of next year
LLVM officially migrating to GitHub from Apache SVN

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, it was a prerequisite for a customer to pass the Foundational or Associate level exam before appearing for the Professional or Specialty certification. AWS has now eliminated this prerequisite, taking into account customer requests for flexibility. Customers are no longer required to have an Associate certification before pursuing a Professional certification, nor do they need to hold a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are quite tough to pass. Unless a customer has deep knowledge of the AWS platform, passing a Professional exam is difficult. If customers skip the Foundational or Associate level exams and directly appear for the Professional level exams, they will not have the practice and knowledge necessary to fare well in them; and if they fail, backing up to the Associate level can be demotivating.

The AWS Certification helps individuals demonstrate expertise in designing, deploying, and operating highly available, cost-effective, and secure applications on AWS. They gain a proficiency with AWS that will help them earn tangible benefits. The exams also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives, and reduce the risks and costs of implementing their workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure their career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams. To know more about this announcement, head over to their official blog.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]

PostgreSQL wins ‘DBMS of the year’ 2018 beating MongoDB and Redis in DB-Engines Ranking

Amrata Joshi
09 Jan 2019
4 min read
Last week, DB-Engines announced PostgreSQL as the Database Management System (DBMS) of the year 2018, as it gained more popularity in the DB-Engines Ranking last year than any of the other 343 monitored systems.

Jonathan S. Katz, PostgreSQL contributor, said, "The PostgreSQL community cannot succeed without the support of our users and our contributors who work tirelessly to build a better database system. We're thrilled by the recognition and will continue to build a database that is both a pleasure to work with and remains free and open source."

PostgreSQL, which will turn 30 this year, has won the DBMS title for the second time in a row. It has established itself as the preferred data store amongst developers and has been appreciated for its stability and feature set. In the DBMS market, various systems use PostgreSQL as their base technology, which itself shows how well-established PostgreSQL is.

Simon Riggs, major PostgreSQL contributor, said, "For the second year in a row, the PostgreSQL team thanks our users for making PostgreSQL the DBMS of the Year, as identified by DB-Engines. PostgreSQL's advanced features cater to a broad range of use cases all within the same DBMS. Rather than going for edge case solutions, developers are increasingly realizing the true potential of PostgreSQL and are relying on the absolute reliability of our hyperconverged database to simplify their production deployments."

How the DB-Engines Ranking scores are calculated

For determining the DBMS of the year, the team at DB-Engines subtracted the popularity scores of January 2018 from the latest scores of January 2019. The team used the difference of these numbers instead of a percentage, because a percentage would favor systems with tiny popularity at the beginning of the year. The popularity of a system is calculated from parameters such as the number of mentions of the system on websites and the number of mentions in the results of search engine queries; the team at DB-Engines uses Google, Bing, and Yandex for this measurement. In order to count only relevant results, the team searches for <system name> together with the term database, e.g. "Oracle" and "database". The next measure is general interest in the system, for which the team uses the frequency of searches in Google Trends. The number of related questions and the number of interested users on well-known IT-related Q&A sites such as Stack Overflow and DBA Stack Exchange are also checked. For calculating the ranking, the team also uses the number of offers on the leading job search engines Indeed and Simply Hired, the number of profiles in professional networks such as LinkedIn and Upwork in which the system is mentioned, and the number of tweets in which the system is mentioned. The calculated result is a list of DBMSs sorted by how much they managed to increase their popularity in 2018.

1st runner-up: MongoDB

For 2018, MongoDB is the first runner-up, having previously won DBMS of the year in 2013 and 2014. Its growth in popularity has accelerated ever since, and it is the most popular NoSQL system. MongoDB keeps adding functionality that was previously outside the NoSQL scope. Last year, MongoDB also added ACID support, which convinced a lot of developers to rely on it with critical data. With the improved support for analytics workloads, MongoDB is a great choice for a larger range of applications.

2nd runner-up: Redis

Redis, the most popular key-value store, took third place for DBMS of the year 2018. It was also among the top three DBMSs of the year for 2014. It is best known as a high-performance and feature-rich key-value store. Redis provides a loadable modules system, which means third parties can extend its functionality. These modules offer a graph database, full-text search, time-series features, JSON data type support, and much more.

PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!
Devart releases standard edition of dbForge Studio for PostgreSQL
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
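As a toy illustration of the scoring methodology described above (the numbers below are hypothetical placeholders, not actual DB-Engines scores), the "DBMS of the year" is simply the system with the largest absolute score gain between the two January snapshots:

```python
# Toy illustration of the DB-Engines "DBMS of the year" methodology:
# subtract each system's January 2018 popularity score from its January 2019
# score and rank by absolute gain (not percentage gain).
# All numbers are made-up placeholders for illustration only.
scores_jan_2018 = {"PostgreSQL": 380.0, "MongoDB": 340.0, "Redis": 130.0}
scores_jan_2019 = {"PostgreSQL": 465.0, "MongoDB": 385.0, "Redis": 150.0}

gains = {name: scores_jan_2019[name] - scores_jan_2018[name]
         for name in scores_jan_2018}

for name, gain in sorted(gains.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: +{gain:.1f}")
```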

Introducing Microsoft’s AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine

Bhagyashree R
19 Sep 2019
4 min read
Back in 2017, the Microsoft Research team developed and open-sourced Aerial Informatics and Robotics Simulation (AirSim). On Monday, the team shared how AirSim can be used to solve the current challenges in the development of autonomous systems.

Microsoft AirSim and its features

Microsoft AirSim is an open-source, cross-platform simulation platform for autonomous systems including autonomous cars, wheeled robotics, aerial drones, and even static IoT devices. It works as a plugin for Epic Games' Unreal Engine, and there is also an experimental release for the Unity game engine. Here is an example of drone simulation in AirSim:
https://www.youtube.com/watch?v=-WfTr1-OBGQ&feature=youtu.be

AirSim was built to address two main problems developers face during the development of autonomous systems: first, the requirement of large datasets for training and testing the systems, and second, the ability to debug in a simulator. With AirSim, the team aims to equip developers with a platform that offers varied training experiences so that autonomous systems can be exposed to different scenarios before they are deployed in the real world. "Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way," the team writes.

AirSim provides physically and visually realistic simulations by supporting hardware-in-the-loop simulation with popular flight controllers such as PX4, an open-source autopilot system. It can be easily extended to accommodate new types of autonomous vehicles, hardware platforms, and software protocols. Its extensible architecture also allows developers to quickly add custom autonomous system models and new sensors to the simulator.

AirSim for tackling the common challenges in autonomous systems' development

In April, the Microsoft Research team collaborated with Carnegie Mellon University and Oregon State University, collectively called Team Explorer, to take on the DARPA Subterranean (SubT) Challenge. The challenge was to build robots that can autonomously map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios. On Monday, Microsoft's Senior Research Manager, Ashish Kapoor, shared how they used AirSim to approach this challenge.

Team Explorer and Microsoft used AirSim to create an "intricate maze" of man-made tunnels in a virtual world. To create this maze, the team used reference material from real-world mines to modularly generate a network of interconnected tunnels. This was a high-definition simulation of man-made tunnels that also included robotic vehicles and a suite of sensors. AirSim provided a rich platform that Team Explorer could use to test their methods while generating training experiences for creating various decision-making components for autonomous agents. Microsoft believes that AirSim can also help accelerate the creation of real datasets for underground environments. "Microsoft's ability to create near-realistic autonomy pipelines in AirSim means that we can rapidly generate labeled training data for a subterranean environment," Kapoor wrote.

Kapoor also talked about another collaboration, with Air Shepherd and USC, to help counter wildlife poaching using AirSim. In this collaboration, they developed unmanned aerial vehicles (UAVs) equipped with thermal infrared cameras that can fly through national parks to search for poachers and animals. AirSim was used to create a simulation of this use case, in which virtual UAVs flew over virtual environments at an altitude of 200 to 400 feet above ground level. "The simulation took on the difficult task of detecting poachers and wildlife, both during the day and at night, and ultimately ended up increasing the precision in detection through imaging by 35.2%," the post reads.

These were some of the recent use cases where AirSim was used. To explore more and to contribute, you can check out its GitHub repository.

Other news in Data

4 important business intelligence considerations for the rest of 2019
How artificial intelligence and machine learning can help us tackle the climate change emergency
France and Germany reaffirm blocking Facebook's Libra cryptocurrency
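To give a concrete sense of the platform-independent APIs mentioned in the article, here is a minimal, hedged sketch using AirSim's Python client. It assumes the airsim pip package is installed and a simulation environment is already running; the exact call names should be verified against the AirSim documentation.

```python
# Minimal sketch: controlling a simulated drone through AirSim's Python API.
# Assumes the `airsim` package is installed and an AirSim simulation
# (e.g. an Unreal Engine environment) is already running locally.
import airsim

client = airsim.MultirotorClient()   # connect to the simulator on localhost
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, fly to a position (NED coordinates, metres), then land.
client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, velocity=3).join()
client.landAsync().join()

# Retrieve data from the simulation, e.g. the current vehicle state.
state = client.getMultirotorState()
print(state.kinematics_estimated.position)

client.armDisarm(False)
client.enableApiControl(False)
```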

Google’s kaniko - An open-source build tool for Docker Images in Kubernetes, without root access

Savia Lobo
27 Apr 2018
2 min read
Google recently introduced kaniko, an open-source tool for building container images from a Dockerfile, even without privileged root access.

Prior to kaniko, building images from a standard Dockerfile typically depended on interactive access to a Docker daemon, which requires root access on the machine to run. This makes it difficult to build container images in environments that can't easily or securely expose their Docker daemons, such as Kubernetes clusters. kaniko was created to combat these challenges. With kaniko, one can build an image from a Dockerfile and push it to a registry. Since it doesn't require any special privileges or permissions, kaniko can even run in a standard Kubernetes cluster, Google Kubernetes Engine, or any environment that can't grant access to privileges or a Docker daemon.

How does the kaniko build tool work?

kaniko runs as a container image that takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image. The image is built from scratch and contains only a static Go binary plus the configuration files needed for pushing and pulling images.

The kaniko executor extracts the base image file system into the root. It executes each command in order and takes a snapshot of the file system after each command. The snapshot is created in user space, where the file system is running, and compared to the previous state held in memory. All changes in the file system are appended to the base image, with the relevant changes made to the image metadata. After successful execution of each command in the Dockerfile, the executor pushes the newly built image to the desired registry.

In short, kaniko unpacks the filesystem, executes commands, and takes snapshots of the filesystem entirely in user space within the executor image. This is how it avoids requiring privileged access on your machine; the Docker daemon or CLI is not involved.

To know more about how to run kaniko in a Kubernetes cluster and in the Google Cloud Container Builder, read the documentation in the GitHub repo.

The key differences between Kubernetes and Docker Swarm
Building Docker images using Dockerfiles
What's new in Docker Enterprise Edition 2.0?

Daily Coping 31 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag.

Today's tip is to plan some new acts of kindness to do in 2021.

As I get older, I do try to spend more time volunteering and helping others more than myself. I've had success, my children are adults, and I find less "wants" for myself than I feel the impetus to help others more. I also hope more people feel this, perhaps at a younger age than I am. In any case, I have a couple things for 2021 that I'd like to do:

Random acts – I saw this in a movie or show recently, but someone was buying a coffee or something small for a stranger once a week. I need to do that, especially if I get the chance to go out again.
DataSaturdays – The demise of PASS means more support for people that might want to run an event, so I need to be prepared to help others again.
Coaching – I have been coaching kids, but they've been privileged kids. I'd like to switch to kids that lack some of the support and privileges of the kids I usually deal with. I'm hoping things get moving with sports again and I get the chance to talk to the local Starlings program.

The post Daily Coping 31 Dec 2020 appeared first on SQLServerCentral.

Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+, available now at $25

Prasad Ramesh
16 Nov 2018
2 min read
Yesterday, Raspberry Pi launched the Raspberry Pi 3 Model A+ board, a smaller and cheaper version of the Raspberry Pi 3B+. In 2014, the first-gen Raspberry Pi 1 Model B+ was followed by a lighter Model A+ with half the RAM and fewer ports, which was able to fit the Hardware Attached on Top (HAT) form factor. Until now, there were no such small form factor boards for the Raspberry Pi 2 and 3.

Size is cut down, but not (most of) the features

The Raspberry Pi 3 Model A+ retains most of the features and enhancements of the bigger board in this series. This includes a 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU, 512MB LPDDR2 SDRAM, and dual-band 802.11ac wireless LAN and Bluetooth 4.2/BLE. The retained enhancements include improved USB mass-storage booting and improved thermal management. The entire Raspberry Pi 3 Model A+ board is an FCC-certified radio module, which will significantly reduce the cost of conformance testing for Raspberry Pi–based products. What has shrunk is the price, now down to $25, and the board size of 65x56mm, the size of a HAT. (Source: Raspberry Pi website)

Raspberry Pi 3 Model A+ will likely be the last product for now

In March this year, Raspberry Pi said that the 3+ platform is the final iteration of the "classic" Raspberry Pi boards. The next steps or released products will be out of necessity and not an evolution, because for an evolution to happen Raspberry Pi will need new core silicon, on a new process node, with new memory technology. So this new board, the 3A+, is about closing things out, meaning we won't see any more products in this line in the foreseeable future. It does answer one of their most frequent customer requests for "missing products", and it clears their pipeline to focus on building the next generation of Raspberry Pi boards.

For more details, visit the Raspberry Pi website.

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release, which includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, gRPC support and more, and will improve seamless integration into modern architectures. In conjunction with this release, the HAProxy team has also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that these developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-Native Threading and Logging

HAProxy can now scale to accommodate any environment with less manual configuration, enabling the number of worker threads to match the machine's number of available CPU cores. The process setting is no longer required, simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller provides a high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to:

Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
Secure communication with built-in SSL termination
Apply rate limits for clients while optionally whitelisting IP addresses
Select from among any of HAProxy's load-balancing algorithms
Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
Set maximum connection limits to backend servers to prevent overloading services

Layer 7 Retries

With HAProxy 2.0, it is possible to retry failed HTTP requests from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section, and the number of attempts can be specified using the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry, which allows the user to disable any attempt to retry a request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren't retried.

Polyglot Extensibility

The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7, aiming to create the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available for the following languages and platforms:

C
.NET Core
Golang
Lua
Python

gRPC

HAProxy 2.0 delivers full support for the open-source RPC framework gRPC. This allows bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract the raw Protocol Buffer messages. Using Protocol Buffers, gRPC serializes messages into a binary format that's compact and potentially more efficient than JSON. To start using gRPC in HAProxy, users need to set up a standard end-to-end HTTP/2 configuration.

HTTP Representation (HTX)

The Native HTTP Representation (HTX) was introduced with HAProxy 1.9. Starting from 2.0, it is enabled by default. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS Support for 1.9 Features

HAProxy 2.0 brings LTS support for many features that were introduced or improved upon during the 1.9 release. Some of them are listed below:

Small Object Cache with an increased caching size up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
New fetches like date_us, cpu_calls and more, which report either an internal state or information from layers 4, 5, 6, and 7.
New converters like strcmp, concat and more that allow transforming data within HAProxy.
Server Queue Priority Control, which lets users prioritize some queued connections over others. This is helpful to deliver JavaScript or CSS files before images.
The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. The inaugural HAProxy community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, "HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual."

Others are comparing HAProxy with the nginx web server. One user says, "In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer." Another user states that "Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now." Others suggest that HAProxy is trying hard to stay equipped with the latest features in this release:
https://twitter.com/garthk/status/1140366975819849728

A user on Hacker News agrees, saying that "These days I think HAProxy and nginx have grown a lot closer together on capabilities."

Visit the HAProxy blog for more details about HAProxy 2.0.

HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
MariaDB announces the release of MariaDB Enterprise Server 10.4
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service