
Tech News - Cloud Computing

175 Articles

G Suite administrators’ passwords were stored unhashed for 14 years, notifies Google

Vincy Davis
22 May 2019
3 min read
Today, Google notified its G Suite administrators that some of their passwords had been stored in an encrypted internal system unhashed, i.e., in plaintext, since 2005. Google also states that the error has been fixed and that the issue had no effect on free consumer Google accounts.

In 2005, Google provided G Suite domain administrators with tools to set and recover passwords. This tooling enabled administrators to upload or manually set user passwords for their company’s users, which was intended to help onboard new users with their account information on their first day of work and to support account recovery. However, it led to the admin console storing a copy of the unhashed password. Google has made it clear that these unhashed passwords were stored in a secure, encrypted infrastructure.

Google is now working with enterprise administrators to ensure that users reset their passwords. It is also conducting a thorough investigation and has assured users that no evidence of improper access or misuse of the affected passwords has been identified so far. Google has around 5 million customers using G Suite. Out of an abundance of caution, the Google team will also reset the accounts of those who have not done so themselves.

Additionally, Google has admitted to another mishap. In January 2019, while troubleshooting new G Suite customer sign-up flows, it discovered an accidentally stored subset of unhashed passwords. Google claims these unhashed passwords were stored for only 14 days and in a secure, encrypted infrastructure. This issue has also been fixed, and no evidence of improper access or misuse of the affected passwords has been found. In the blog post, Suzanne Frey, VP of Engineering and Cloud Trust, gives a detailed account of how Google stores passwords for consumers and G Suite enterprise customers.

Google is the latest company to have admitted storing sensitive data in plaintext. Two months ago, Facebook admitted to having stored the passwords of hundreds of millions of its users in plain text, including the passwords of Facebook Lite, Facebook, and Instagram users.

Read More: Facebook accepts exposing millions of user passwords in a plain text to its employees after security researcher publishes findings

Last year, Twitter and GitHub also admitted to similar security lapses.

https://twitter.com/TwitterSupport/status/992132808192634881
https://twitter.com/BleepinComputer/status/991443066992103426

Users are shocked that it took Google 14 long years to identify this error. Others are concerned that if even a giant company like Google cannot secure its passwords in 2019, little can be expected from other companies.

https://twitter.com/HackingDave/status/1131067167728984064

A user on Hacker News comments, “Google operates what is considered, by an overwhelming majority of expert opinion, one of the 3 best security teams in the industry, likely exceeding in so many ways the elite of some major world governments. And they can't reliably promise, at least not in 2019, never to accidentally durably log passwords. If they can't, who else can? What are we to do with this new data point? The issue here is meaningful, and it's useful to have a reminder that accidentally retaining plaintext passwords is a hazard of building customer identity features. But I think it's at least equally useful to get the level set on what engineering at scale can reasonably promise today.”

To know more about this news in detail, head over to Google’s official blog.
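For context on what “unhashed” means here: a password store normally keeps only a salted, one-way hash, so the original string cannot be recovered even by the operator. The sketch below is a minimal illustration of that idea using only Python’s standard library; it is not a description of Google’s actual system, which the article does not detail.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); the plaintext cannot be recovered from the digest."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```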
Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing
Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model


Introducing DataStax Constellation: A cloud platform for rapid development of Apache Cassandra-based apps

Bhagyashree R
21 May 2019
3 min read
On the first day of Accelerate 2019, DataStax unveiled DataStax Constellation, a modern cloud platform specifically designed for Apache Cassandra. DataStax is a leading provider of the always-on, active-everywhere distributed hybrid cloud database built on Apache Cassandra.

https://twitter.com/DataStax/status/1130803273647230976

DataStax Accelerate 2019 is a three-day event (21-23 May) taking place in Maryland, US. The agenda includes 70+ technical sessions, networking with experts from leading companies like IBM, Walgreens, and T-Mobile, and new product announcements.

Sharing the vision behind DataStax Constellation, Billy Bosworth, CEO of DataStax, said, “With Constellation, we are making a major commitment to being the leading cloud database company and putting cloud development at the top of our priority list. From edge to hybrid to multi-cloud, we are providing developers with a cloud platform that includes the complete set of tools they need to build game-changing applications that spark transformational business change and let them do what they do best.”

What is DataStax Constellation?

DataStax Constellation is a modern cloud platform that provides smart services for easy and rapid development and deployment of Cassandra-based applications. It comes with an integrated web console that simplifies the use and management of Cassandra. Constellation also provides an interactive developer tool for CQL (Cassandra Query Language) named DataStax Studio, which makes it easy for developers to collaborate by keeping track of code, query results, and visualizations in self-documenting notebooks.

The Constellation platform initially launches with two cloud services, DataStax Apache Cassandra as a Service and DataStax Insights.

DataStax Apache Cassandra as a Service enables you to easily develop and deploy Apache Cassandra applications in the cloud. Here are some of the advantages and features it comes with:

Ensures high availability of applications: It assures uptime and integrity with multiple data replicas. Users are only charged when the database is in use, which significantly reduces operational overhead.
Reduces administrative overhead: It makes your applications capable of self-healing with its advanced optimization and remediation mechanisms.
Better performance than open source Cassandra: It provides up to three times better performance than open source Apache Cassandra at any scale.

DataStax Insights is a performance management and monitoring tool for DataStax Constellation and DataStax Enterprise. Here are some of the features it comes with:

Centralized and scalable monitoring: It provides centralized and scalable monitoring across all cloud and on-premise deployments.
Simplified administration: It provides an at-a-glance health index that simplifies administration via a single view of all clusters.
Automated performance tuning: Its AI-powered analysis and recommendations enable automated performance tuning.

Sharing his future plans regarding Constellation, Bosworth said, “Constellation is for all developers seeking easy and obvious application deployment in any cloud. And the great thing is that we are planning for it to be available on all three of the major cloud providers: AWS, Google, and Microsoft.”

DataStax plans to make Constellation, Insights, and Cassandra as a Service available on all three cloud providers in Q4 of 2019.
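Constellation itself was only just announced and is not something readers can install yet, but the service is built around standard CQL, so the flavor of working with it can be sketched against any Cassandra cluster using DataStax’s open source Python driver. Everything below (contact point, keyspace, table) is a placeholder for illustration.

```python
# pip install cassandra-driver
from uuid import uuid4

from cassandra.cluster import Cluster

# Placeholder contact point; a hosted service would supply its own connection details.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")
session.execute("CREATE TABLE IF NOT EXISTS users (id uuid PRIMARY KEY, name text)")

# CQL looks and feels like SQL; %s placeholders are bound by the driver.
session.execute("INSERT INTO users (id, name) VALUES (%s, %s)", (uuid4(), "Ada"))
for row in session.execute("SELECT id, name FROM users"):
    print(row.id, row.name)

cluster.shutdown()
```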
To know more about DataStax Constellation, visit its official website.

Instaclustr releases three open source projects for Apache Cassandra database users
ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features
cstar: Spotify’s Cassandra orchestration tool is now open source!


GKE Sandbox: A gVisor-based feature to increase security and isolation in containers

Vincy Davis
17 May 2019
4 min read
During Google Cloud Next ‘19, Google Cloud announced the beta version of GKE Sandbox, a new feature in Google Kubernetes Engine (GKE). Yesterday, Yoshi Tamura (Product Manager of Google Kubernetes Engine and gVisor) and Adin Scannell (Senior Staff Software Engineer of gVisor) explained GKE Sandbox in brief on Google Cloud’s official blog.

GKE Sandbox increases the security and isolation of containers by adding an extra layer between the containers and the host OS. At general availability, GKE Sandbox will be available in the upcoming GKE Advanced. This feature will help in building demanding production applications on top of the managed Kubernetes service.

GKE Sandbox uses gVisor to abstract away the internals, making it an easy-to-use service. While creating a pod, the user can simply choose GKE Sandbox and continue to interact with containers as usual, with no new controls or mental model to learn (a sketch of what this looks like in practice follows the list of suitable applications below). By limiting potential attacks, GKE Sandbox helps teams running multi-tenant clusters, such as SaaS providers, which often execute unknown or untrusted code. This helps in providing more secure multi-tenancy in GKE.

gVisor is an open source container sandbox runtime that was released last year. It was created to defend against a host compromise when running arbitrary, untrusted code, while still integrating with container-based infrastructure. gVisor is used in many Google Cloud Platform (GCP) services like the App Engine standard environment, Cloud Functions, Cloud ML Engine, and most recently Cloud Run. Some features of gVisor include:

Provides an independent operating system kernel to each container.
Applications interact with the virtualized environment provided by gVisor’s kernel rather than the host kernel.
Manages and places restrictions on file and network operations.
Ensures there are two isolation layers between the containerized application and the host OS.
Gives attackers a smaller attack surface, due to the reduced and restricted interaction of an application with the host kernel.

An experience shared on the official Google blog post mentions how data refinery creator Descartes Labs has applied machine intelligence to massive data sets. Tim Kelton, Co-Founder and Head of SRE, Security, and Cloud Operations at Descartes Labs, said, “As a multi-tenant SaaS provider, we still wanted to leverage Kubernetes scheduling to achieve cost optimizations, but build additional security layers on top of users’ individual workloads. GKE Sandbox provides an additional layer of isolation that is quick to deploy, scales, and performs well on the ML workloads we execute for our users."

Applications suitable for GKE Sandbox

GKE Sandbox is well suited to running compute- and memory-bound applications, and so works with a wide variety of workloads such as:

Microservices and functions: GKE Sandbox enables additional defense in depth while preserving low spin-up times and high service density.
Data processing: GKE Sandbox adds an overhead of less than 5 percent for streaming disk I/O and compute-bound applications like FFmpeg.
CPU-based machine learning: Training and executing machine learning models frequently involves large quantities of data and complex workflows, which often belong to a third party. The CPU overhead of sandboxing compute-bound machine learning tasks is less than 10 percent.
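The article doesn’t show the mechanics of “choosing GKE Sandbox”, but on GKE this amounts to referencing the gVisor RuntimeClass in the pod spec. Below is a minimal sketch using the official Kubernetes Python client; it assumes a cluster that already has sandbox-enabled node pools and a RuntimeClass named gvisor, and the pod and image names are placeholders.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # uses the current kubectl context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-nginx"),
    spec=client.V1PodSpec(
        # Assumption: the gVisor RuntimeClass exposed by GKE Sandbox is named "gvisor".
        runtime_class_name="gvisor",
        containers=[client.V1Container(name="nginx", image="nginx:1.15")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Created pod 'sandboxed-nginx' with runtimeClassName=gvisor")
```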
A user on Reddit commented, “This is a really interesting add-on to GKE and I'm glad to see vendors starting to offer a variety of container runtimes on their platforms.” The GKE Sandbox feature has received rave reviews on Twitter too.

https://twitter.com/ahmetb/status/1128709028203220992
https://twitter.com/sarki247/status/1128931366803001345

If you want to try GKE Sandbox and learn more details, head over to Google’s official feature page.

Google open-sources Sandboxed API, a tool that helps in automating the process of porting existing C and C++ code
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh
Google Cloud Console Incident Resolved!


Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Fatema Patrawala
06 May 2019
5 min read
Last week, on Tuesday, Amazon announced that Amazon S3 will no longer support path-style API requests. Currently Amazon S3 supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key).

The Amazon team mentions in the announcement that, “In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format.” They have also asked customers to update their applications to use the virtual-hosted style request format when making S3 API requests, and to do so before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

They further mention that, “Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail.”

Users on Hacker News see this as a poor development by Amazon and have noted one of its implications: collateral freedom techniques using Amazon S3 will no longer work. One of them commented strongly on this, “One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work. To put it simply, right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https:// s3 .amazonaws.com/mywebsite/index.html. Because it's https — there is no way man in the middle knows what people read on s3.amazonaws.com. With this change — dictators see my domain name and block requests to it right away. I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development. This censorship circumvention technique is actively used in the wild and loosing Amazon is no good.”

The Amazon team suggests that if your application is not able to utilize the virtual-hosted style request format, or if you have any questions or concerns, you may reach out to AWS Support. To know more about this news, check out the official announcement page from Amazon.

Update from the Amazon team on 8th May

Amazon’s Chief Evangelist for AWS, Jeff Barr, sat with the S3 team to understand this change in detail. After getting a better understanding, he posted an update on why the team plans to deprecate the path-based model. Here is his comparison of the old versus the new. S3 currently supports two different addressing models: path-style and virtual-hosted style. Take a quick look at each one.
The path-style model looks either like this (the global S3 endpoint):

https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Or this (one of the regional S3 endpoints):

https://s3-us-east-2.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3-us-east-2.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

In these examples, jbarr-public and jeffbarr-public are bucket names; /images/ritchie_and_thompson_pdp11.jpeg and /classic_amazon_door_desk.png are object keys. Even though the objects are owned by distinct AWS accounts and are in different S3 buckets and possibly in distinct AWS regions, both of them are in the DNS subdomain s3.amazonaws.com. Hold that thought while we look at the equivalent virtual-hosted style references:

https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
https://jeffbarr-public.s3.amazonaws.com/classic_amazon_door_desk.png

These URLs reference the same objects, but the objects are now in distinct DNS subdomains (jbarr-public.s3.amazonaws.com and jeffbarr-public.s3.amazonaws.com, respectively). The difference is subtle, but very important. When you use a URL to reference an object, DNS resolution is used to map the subdomain name to an IP address. With the path-style model, the subdomain is always s3.amazonaws.com or one of the regional endpoints; with the virtual-hosted style, the subdomain is specific to the bucket. This additional degree of endpoint specificity is the key that opens the door to many important improvements to S3.

A few in the community are in favor of this, as per one user comment on Hacker News: “Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here https://twitter.com/dvassallo/status/1125549694778691584 thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!”

But for others, the Amazon team has failed to address the domain censorship issue, as another user says: “Still doesn't help with domain censorship. This was discussed in-depth in the other thread from yesterday, but TLDR, it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com because DNS lookups are made before HTTPS kicks in.”

Read about this update in detail here.

Amazon S3 Security access and policies
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon introduces S3 batch operations to process millions of S3 objects
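For applications that use the AWS SDK for Python, the addressing style is a client configuration option rather than something you build into URLs by hand. Below is a minimal sketch forcing the virtual-hosted style the article says S3 will require; the bucket and key names are placeholders.

```python
# pip install boto3
import boto3
from botocore.config import Config

# Force virtual-hosted style addressing (bucket name in the hostname)
# instead of the path style being retired; bucket and key are placeholders.
s3 = boto3.client("s3", config=Config(s3={"addressing_style": "virtual"}))

s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello")
url = s3.generate_presigned_url(
    "get_object", Params={"Bucket": "example-bucket", "Key": "hello.txt"}
)
print(url)  # host looks like example-bucket.s3.<region>.amazonaws.com
```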


Fastly, edge cloud platform, files for IPO

Bhagyashree R
22 Apr 2019
3 min read
Last week, Fastly Inc., a provider of an edge cloud platform, announced that it has filed its proposed initial public offering (IPO) with the US Securities and Exchange Commission. Last July, in its last round of financing before a public offering, the company raised a $40 million investment.

The book-running managers for the proposed offering are BofA Merrill Lynch, Citigroup, and Credit Suisse. William Blair, Raymond James, Baird, Oppenheimer & Co., Stifel, Craig-Hallum Capital Group and D.A. Davidson & Co. are co-managers for the proposed offering.

Founded by Artur Bergman in 2011, Fastly is an American cloud computing services provider. Its edge cloud platform provides a content delivery network, Internet security services, load balancing, and video & streaming services. The edge cloud platform is designed from the ground up to be programmable and to support agile software development. It gives developers real-time visibility and control through streaming log data, so developers are able to instantly see the impact of new code in production, troubleshoot issues as they occur, and rapidly identify suspicious traffic. Fastly boasts of catering to customers like The New York Times, Reddit, GitHub, Stripe, Ticketmaster and Pinterest.

In its preliminary prospectus, the company shared how it has grown over the years, the risks of investing in the company, its plans for the future, and more. The company shows steady growth in its revenue: it was $104.9 million in December 2017 and increased to $144.6 million by the end of 2018. Its loss has also shown some decline, from $32.5 million in December 2017 to $30.9 million in December 2018. Predicting its future market value, the prospectus says, “When incorporating these additional offerings, we estimate a total market opportunity of approximately $18.0 billion in 2019, based on expected growth from 2017, to $35.8 billion in 2022, growing with an expected CAGR of 25.6%.“

Fastly has not yet determined the number of shares to be offered or the price range for the proposed offering. Currently, the company’s public filing has a placeholder amount of $100 million. However, looking at the amount of funding the company has received, TechCrunch predicts that it is more likely to get closer to $1 billion when it finally prices its shares.

Fastly has two classes of authorized common stock: Class A and Class B. The rights of both classes of common stockholders are identical, except with respect to voting and conversion. Each Class A share is entitled to one vote per share and each Class B share is entitled to 10 votes per share. Each Class B share is convertible into one share of Class A common stock. The Class A common stock will be listed on The New York Stock Exchange under the symbol “FSLY.”

To read more in detail, check out the IPO filing by Fastly.

Fastly open sources Lucet, a native WebAssembly compiler and runtime
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you


Platform9 open sources Klusterkit to simplify the deployment and operations of Kubernetes clusters

Bhagyashree R
16 Apr 2019
3 min read
Today, Platform9 open sourced Klusterkit under the Apache 2.0 license. It is a set of three open source tools that can be used separately or in tandem to simplify the creation and management of highly-available, multi-master, production-grade Kubernetes clusters in on-premises, air-gapped environments.

Tools included in Klusterkit

etcdadm: Inspired by the ‘kubeadm’ command, ‘etcdadm’ is a command-line interface (CLI) for operating an etcd cluster. It makes the creation of a new cluster, the addition of a new member, or the removal of a member from an existing cluster easier. It has been adopted by the Kubernetes Cluster Lifecycle SIG, a group that focuses on the deployment and upgrades of clusters.

nodeadm: This is a CLI node administration tool that complements kubeadm by deploying all the dependencies required by kubeadm. You can easily deploy a Kubernetes control plane or nodes on any machine running Linux with the help of this tool.

cctl: This is a cluster lifecycle management tool based on the Kubernetes community's Cluster API spec. It uses the other two tools in Klusterkit to easily deploy and maintain highly-available Kubernetes clusters in on-premises, even air-gapped, environments.

Features of Klusterkit

It comes with multi-master (K8s HA) support.
Users can deploy and manage secure etcd clusters.
It provides rolling upgrade and rollback capability.
It works in air-gapped environments.
Users can back up and recover etcd clusters from quorum loss.
It provides control plane protection from low-memory/low-CPU conditions.

Klusterkit solution architecture (Source: Platform9)

Klusterkit stores the metadata of the Kubernetes cluster you build in a single file named ‘cctl-state.yaml’. You can invoke the cctl CLI to orchestrate the lifecycle of a Kubernetes cluster from any machine which contains this state file. For performing CRUD operations on clusters, cctl implements and calls into the cluster-api interface as a library. It uses ssh-provider, the machine controller for the cluster-api reference implementation. The ssh-provider then, in turn, calls etcdadm and nodeadm to perform cluster operations.

In an email sent to us, Arun Sriraman, Kubernetes Technical Lead Manager at Platform9, explaining the importance of Klusterkit, said, “Klusterkit presents a powerful, yet easy-to-use Kubernetes toolset that complements community efforts like Cluster API and kubeadm to allow enterprises a path to modernize applications to use Kubernetes, and run them anywhere -- even in on-premise, air-gapped environments.”

To know more in detail, check out the documentation on GitHub.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more
Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot

Google’s Cloud Healthcare API is now available in beta

Amrata Joshi
09 Apr 2019
3 min read
Last week, Google announced that its Cloud Healthcare API is now available in beta. The API acts as a bridge between on-site healthcare systems and applications that are hosted on Google Cloud. It is HIPAA compliant, ecosystem-ready and developer-friendly. The aim of the team at Google is to give hospitals and other healthcare facilities more analytical power with the help of the Cloud Healthcare API.

The official post reads, "From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data and better understand that data through the application of analytics and machine learning in real time, at scale."

This API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP). With its help, users can now explore new capabilities for data analysis, machine learning, and application development for healthcare solutions. The Cloud Healthcare API also simplifies app development and device integration to speed up the process. It also supports the standards-based data formats and protocols of existing healthcare tech. For instance, it will allow healthcare organizations to stream data processing with Cloud Dataflow, analyze data at scale with BigQuery, and tap into machine learning with the Cloud Machine Learning Engine.

Features of the Cloud Healthcare API

Compliant and certified: The API is HIPAA compliant and HITRUST CSF certified. Google is also planning ISO 27001, ISO 27017, and ISO 27018 certifications for the Cloud Healthcare API.
Explore your data: The API allows users to explore their healthcare data by incorporating advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.
Managed scalability: The Cloud Healthcare API provides web-native, serverless scaling optimized by Google’s infrastructure. Users can simply activate the API and send requests, as no initial capacity configuration is required.
Apigee integration: The API integrates with Apigee, which is recognized by Gartner as a leader in full lifecycle API management, for delivering app and service ecosystems around user data.
Developer-friendly: The API organizes users’ healthcare information into datasets with one or more modality-specific stores per set, where each store exposes both a REST and an RPC interface.
Enhanced data liquidity: The API also supports bulk import and export of FHIR data and DICOM data, which accelerates delivery for applications with dependencies on existing datasets. It further provides a convenient API for moving data between projects.

The official post reads, “While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers.” Google will highlight what its partners, including the American Cancer Society, CareCloud, Kaiser Permanente, and iDigital, are doing with the API at the ongoing Google Cloud Next.

To know more about this news, check out Google’s official announcement.
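The REST side of the API is reachable through Google’s standard discovery-based Python client. The sketch below simply lists the datasets in a project; the project ID and location are placeholders, the caller needs Healthcare API permissions, and the API version available during the beta may be "v1beta1" rather than "v1".

```python
# pip install google-api-python-client google-auth
from googleapiclient.discovery import build

PROJECT_ID = "my-gcp-project"   # placeholder
LOCATION = "us-central1"        # placeholder

# Application Default Credentials are picked up automatically.
service = build("healthcare", "v1")  # may need "v1beta1" depending on rollout

parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"
response = service.projects().locations().datasets().list(parent=parent).execute()

for dataset in response.get("datasets", []):
    print(dataset["name"])
```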
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members
Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council


Zabbix 4.2 released, packed with modern monitoring features for data collection, processing and visualization

Fatema Patrawala
03 Apr 2019
7 min read
The Zabbix team announced the release of Zabbix 4.2. The latest release of Zabbix is packed with modern monitoring capabilities: data collection and processing, distributed monitoring, real-time problem and anomaly detection, alerting and escalations, visualization and more. Let us check out what Zabbix 4.2 has actually brought to the table. Here is a list of the most important functionality included in the new release.

Official support of new platforms

In addition to existing official packages and appliances, Zabbix 4.2 will now cater to the following platforms:

Zabbix package for Raspberry Pi
Zabbix package for SUSE Linux Enterprise Server
Zabbix agent for Mac OS X
Zabbix agent MSI for Windows
Zabbix Docker images

Built-in support of Prometheus data collection

Zabbix is able to collect data in many different ways (push/pull) from various data sources including JMX, SNMP, WMI, HTTP/HTTPS, REST API, XML SOAP, SSH, Telnet, agents, scripts and other data sources, with Prometheus being the latest addition to the bunch. The 4.2 release offers an integration with Prometheus exporters using native support of the PromQL language. Moreover, the use of dependent metrics gives Zabbix the ability to collect massive amounts of Prometheus metrics in a highly efficient way: it gets all the data using a single HTTP call and then just reuses it for the corresponding dependent metrics. Zabbix can also transform Prometheus data into JSON format, which can be used directly for low-level discovery (a toy exporter sketch appears at the end of this article).

Efficient high-frequency monitoring

We all want to discover problems as fast as possible. With 4.2 we can collect data with high frequency and instantly discover problems without keeping an excessive amount of history data in the Zabbix database.

Validation of collected data and error handling

No one wants to collect incorrect data. Zabbix 4.2 addresses that via built-in preprocessing rules that validate data by matching or not matching a regular expression, using JSONPath or XMLPath. It is now also possible to extract error messages from collected data. This can be especially handy if we get an error from external APIs.

Preprocessing data with JavaScript

In Zabbix 4.2 you can fully harness the power of user-defined scripts written in JavaScript. Support of JavaScript gives absolute freedom of data preprocessing! In fact, you can now replace all external scripts with JavaScript. This enables all sorts of data transformation, aggregation, filtering, arithmetical and logical operations and much more.

Test preprocessing rules from the UI

As preprocessing becomes much more powerful, it is important to have a tool to verify complex scenarios. Zabbix 4.2 allows you to test preprocessing rules straight from the web UI.

Processing millions of metrics per second

Prior to 4.2, all preprocessing was handled solely by the Zabbix server. A combination of proxy-based preprocessing with throttling gives the ability to perform high-frequency monitoring, collecting millions of values per second without overloading the Zabbix server. Proxies perform massive preprocessing of collected data while the server only receives a small fraction of it.

Easy low-level discovery

Low-level discovery (LLD) is a very effective tool for automatic discovery of all sorts of resources (filesystems, processes, applications, services, etc.) and automatic creation of metrics, triggers and graphs related to them.
It tremendously helps to save time and effort by allowing a single template to be used for monitoring devices with different resources. Zabbix 4.2 supports processing based on arbitrary JSON input, which in turn allows it to communicate directly with external APIs and use the received data for automatic creation of hosts, metrics and triggers. Combined with JavaScript preprocessing, this opens up fantastic opportunities for templates that may work with various external data sources such as cloud APIs, application APIs, and data in XML, JSON or any other format.

Support of TimescaleDB

TimescaleDB promises better performance due to more efficient algorithms and performance-oriented data structures. Another significant advantage of TimescaleDB is automatic table partitioning, which improves performance and (combined with Zabbix) delivers fully automatic management of historical data. However, the Zabbix team hasn’t performed any serious benchmarking yet, so it is hard to comment on the real-life experience of running TimescaleDB in production. At this moment TimescaleDB is an actively developed and rather young project.

Simplified tag management

Prior to Zabbix 4.2 we could only set tags for individual triggers. Now tag management is much more efficient thanks to template and host tag support. All detected problems get tag information not only from the trigger, but also from the host and corresponding templates.

More flexible auto-registration

Zabbix 4.2 auto-registration options give the ability to filter host names based on a regular expression. This is really useful if we want to create different auto-registration scenarios for various sets of hosts. Matching by regular expression is especially beneficial in case we have complex naming conventions for our devices.

Control host names for auto-discovery

Another improvement is related to naming hosts during auto-discovery. Zabbix 4.2 allows you to assign received metric data to a host name and visible name. It is an extremely useful feature that enables a great level of automation for network discovery, especially if we use Zabbix or SNMP agents.

Test media type from the web UI

Zabbix 4.2 allows us to send a test message or check that our chosen alerting method works as expected straight from the Zabbix frontend. This is quite useful for checking the scripts we are using for integration with external alerting and helpdesk systems.

Remote monitoring of Zabbix components

Zabbix 4.2 introduces remote monitoring of internal performance and availability metrics of the Zabbix server and proxy. Not only that, it also allows you to discover Zabbix-related issues and be alerted even if the components are overloaded or, for example, have a large amount of data stored in the local buffer (in the case of proxies).

Nicely formatted email messages

Zabbix 4.2 comes with support of HTML format in email messages. It means that we are not limited to plain text anymore; messages can use the full power of HTML and CSS for much nicer and easier-to-read alerts.

Accessing remote services from network maps

A new set of macros is now supported in network maps for creation of user-defined URLs pointing to external systems. This allows you to open external tickets in helpdesk or configuration management systems, or perform any other actions, using just one or two mouse clicks.

LLD rule as a dependent metric

This functionality allows received values of a master metric to be used for data collection and LLD rules simultaneously.
In the case of data collection from Prometheus exporters, Zabbix will execute the HTTP query only once and the result of the query will be used immediately for all dependent metrics (LLD rules and metric values).

Animations for maps

Zabbix 4.2 comes with support of animated GIFs, making problems on maps more noticeable.

Extracting data from HTTP headers

Web monitoring brings the ability to extract data from HTTP headers. With this we can now create multi-step scenarios for web monitoring and for external APIs using the authentication token received in one of the steps.

Zabbix sender pushes data to all IP addresses

Zabbix sender will now send metric data to all IP addresses defined in the “ServerActive” parameter of the Zabbix agent configuration file.

Filter for configuration of triggers

The trigger configuration page gets a nicely extended filter for quick and easy selection of triggers by specified criteria.

Showing exact time in graph tooltips

A minor yet very useful improvement: Zabbix will now show the timestamp in the graph tooltip.

Other improvements

Non-destructive resizing and reordering of dashboard widgets
Mass-update for item prototypes
Support of IPv6 for DNS-related checks (“net.dns” and “net.dns.record”)
“skip” parameter for the VMware event log check “vmware.eventlog”
Extended preprocessing error messages to include intermediate step results

Expanded information and the complete list of Zabbix 4.2 developments, improvements and new functionality is available in the Zabbix Manual.

Encrypting Zabbix Traffic
Deploying a Zabbix proxy
Zabbix and I – Almost Heroes
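Returning to the Prometheus integration described above: Zabbix’s Prometheus check scrapes the standard text exposition format that any exporter serves, so a quick way to experiment is to point it at a toy exporter. The sketch below uses the prometheus_client library (not part of Zabbix); the port and metric name are illustrative.

```python
# pip install prometheus_client
# A toy exporter whose http://localhost:9100/metrics output a Prometheus-aware
# collector such as Zabbix 4.2 can scrape. Port and metric name are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("app_queue_depth", "Number of jobs waiting in the queue")

if __name__ == "__main__":
    start_http_server(9100)  # serves the text exposition format at /metrics
    while True:
        queue_depth.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(5)
```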


You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Richard Gall
02 Apr 2019
3 min read
Chaos engineering is a trend that has been evolving quickly over the last 12 months. While for much of the last decade it has largely been the preserve of Silicon Valley's biggest companies, that has been changing thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today, however, marks a particularly important step in chaos engineering, as Gremlin has partnered with the Netflix-built continuous delivery platform Spinnaker to allow engineering teams to automate chaos engineering 'experiments' throughout their CI and CD pipelines.

Ultimately it means DevOps teams can think differently about chaos engineering. Gradually, this could help shift the way we think about the practice, as it moves from localized experiments that require an in-depth understanding of one's infrastructure to something that is built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software.

At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future.

Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It's a platform that has been specifically developed for highly distributed and hybrid systems. This makes it a great fit for Gremlin, and also highlights that the growth of chaos engineering is being driven by the move to cloud.

Adam Jordens, a Core Contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it’s more important than ever to understand how your cloud infrastructure behaves under stress.” Jordens continued: "by integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems.”

Kolton Andrus, Gremlin CEO and Co-Founder, explained the importance of Spinnaker in relation to chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software."

In recent months Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.


Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more

Amrata Joshi
26 Mar 2019
2 min read
Yesterday, the team at Kubernetes released Kubernetes 1.14, a new update to the popular open-source container orchestration system. Kubernetes 1.14 comes with support for Windows nodes, a kubectl plugin mechanism, Kustomize integration, and much more.

https://twitter.com/spiffxp/status/1110319044249309184

What’s new in Kubernetes 1.14?

Support for Windows nodes: This release comes with added support for Windows nodes as worker nodes. Kubernetes now schedules Windows containers and enables a vast ecosystem of Windows applications. With this release, enterprises with existing investments can easily manage their workloads and operational efficiencies across their deployments, regardless of the operating system.

Kustomize integration: With this release, the declarative resource config authoring capabilities of kustomize are now available in kubectl through the -k flag. Kustomize helps users author and reuse resource config using Kubernetes-native concepts.

kubectl plugin mechanism: This release comes with a kubectl plugin mechanism that allows developers to publish their own custom kubectl subcommands in the form of standalone binaries (a minimal sketch of a plugin appears at the end of this piece).

PID limits: Administrators can now provide pod-to-pod PID (process ID) isolation by setting a default number of PIDs per pod.

Pod priority and preemption: This release enables the Kubernetes scheduler to schedule important pods first and remove less important pods to create room for more important ones.

Users are generally happy and excited about this release.

https://twitter.com/fabriziopandini/status/1110284805411872768

A user commented on Hacker News, “The inclusion of Kustomize[1] into kubectl is a big step forward for the K8s ecosystem as it provides a native solution for application configuration. Once you really grok the pattern of using overlays and patches, it starts to feel like a pattern that you'll want to use everywhere”

To know more about this release in detail, check out Kubernetes’ official announcement.

RedHat’s OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested ‘Operators’ for applications
Microsoft open sources ‘Accessibility Insights for Web’, a chrome extension to help web developers fix their accessibility issues
Microsoft open sources the Windows Calculator code on GitHub
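On the plugin mechanism mentioned above: kubectl discovers any executable named kubectl-<name> on the PATH and exposes it as "kubectl <name>". A minimal sketch of such a plugin written in Python follows; the plugin name and what it prints are made up for illustration.

```python
#!/usr/bin/env python3
# Save as "kubectl-whoami", run "chmod +x kubectl-whoami", and put it on PATH;
# "kubectl whoami" then invokes this script. Name and behavior are illustrative.
import subprocess
import sys


def main() -> int:
    # Delegate to kubectl itself to read the current context from kubeconfig.
    result = subprocess.run(
        ["kubectl", "config", "current-context"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        sys.stderr.write(result.stderr)
        return result.returncode
    print(f"Current kubectl context: {result.stdout.strip()}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```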

Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud

Natasha Mathur
15 Mar 2019
3 min read
Yesterday, Microsoft announced that it is open sourcing its new cutting-edge compression technology, called Project Zipline. As part of this open source release, the Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL) have been made available.

Alongside the announcement of Project Zipline, the Open Compute Project (OCP) Global Summit 2019 also started yesterday in San Jose. At the summit, the latest innovations that can make hardware more efficient, flexible, and scalable are shared. Microsoft states that its journey with OCP began in 2014, when it joined the foundation and contributed the server and data center designs that power its global Azure cloud. Microsoft contributes innovations to OCP every year at the summit, and this year it has decided to contribute Project Zipline. “This contribution will provide collateral for integration into a variety of silicon components across the industry for this new high-performance compression standard. Contributing RTL at this level of detail as open source to OCP is industry leading”, states the Microsoft team.

Project Zipline aims to optimize the hardware implementation for the different types of data found in cloud storage workloads. Microsoft has been able to achieve higher compression ratios, higher throughput, and lower latency than the other algorithms currently available. This allows for compression without compromise as well as data processing for different industry usage models (from cloud to edge).

Microsoft’s Project Zipline compression algorithm produces great results, with up to 2X higher compression ratios compared to the commonly used Zlib-L4 64KB model. These enhancements, in turn, produce direct customer benefits in cost savings and allow access to petabytes or exabytes of capacity in a cost-effective way. Project Zipline has also been optimized for a large variety of datasets, and Microsoft’s release of RTL allows hardware vendors to use the reference design that offers the highest compression, lowest cost, and lowest power in an algorithm.

Project Zipline is available to the OCP ecosystem, so vendors can contribute further to benefit Azure and other customers. The Microsoft team states that this contribution toward open source will set a “new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level”. In the future, Microsoft expects Project Zipline compression technology to enter different market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessors, IoT, and edge devices.

For more information, check out the official Microsoft announcement.

Microsoft open sources the Windows Calculator code on GitHub
Microsoft open sources ‘Accessibility Insights for Web’, a chrome extension to help web developers fix their accessibility issue
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models
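Project Zipline itself ships as algorithm specifications and RTL rather than a software library, so it cannot be demonstrated directly here. For context on the baseline the article cites, the sketch below shows how a zlib level-4, 64 KB-block compression ratio can be measured with Python’s standard library on sample data of your choosing.

```python
import zlib


def zlib_l4_ratio(data: bytes, block_size: int = 64 * 1024) -> float:
    """Compress data in 64 KB blocks at zlib level 4; return original/compressed size."""
    compressed = sum(
        len(zlib.compress(data[i:i + block_size], 4))
        for i in range(0, len(data), block_size)
    )
    return len(data) / compressed


# Repetitive, log-like sample data; real cloud workloads will compress differently.
sample = b"timestamp=2019-03-15 level=INFO msg='cache hit' latency_ms=12\n" * 10_000
print(f"zlib-L4 / 64KB blocks compression ratio: {zlib_l4_ratio(sample):.2f}x")
```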


Debian project leader elections go without nominations. What now?

Fatema Patrawala
13 Mar 2019
5 min read
The Debian Project is an association of individuals who have made common cause to create a free operating system. One of the traditional rites of the northern hemisphere spring is the election of the Debian project leader. Over a six-week period starting in March, interested candidates put their names forward, describe their vision for the project as a whole, answer questions from Debian developers, then wait and watch while the votes come in. But what would happen if Debian were to hold an election and no candidates stepped forward? The Debian project has just found itself in that situation this year and is trying to figure out what will happen next.

The Debian project scatters various types of authority widely among its members, leaving relatively little for the project leader. As long as they stay within the bounds of Debian policy, individual developers have nearly absolute control over the packages they maintain. For example:

Difficult technical disagreements between developers are handled by the project's technical committee.
The release managers and FTP masters make the final decisions on what the project will actually ship (and when).
The project secretary ensures that the necessary procedures are followed.
The policy team handles much of the overall design for the distribution.

So, in a sense, there is relatively little leading left for the leader to do. The roles that do fall to the leader fit into a couple of broad areas. The first of those is representing the project to the rest of the world: the leader gives talks at conferences and manages the project's relationships with other groups and companies. The second role is, to a great extent, administrative: the leader manages the project's money, appoints developers to other roles within the project, and takes care of details that nobody else in the project is responsible for.

Leaders are elected to a one-year term; for the last two years, this position has been filled by Chris Lamb. The February "Bits from the DPL" by Chris gives a good overview of what sorts of tasks the leader is expected to carry out.

The Debian constitution describes the process for electing the leader. Six weeks prior to the end of the current leader's term, a call for candidates goes out. Only those recognized as Debian developers are eligible to run; they get one week to declare their intentions. There follows a three-week campaigning period, then two weeks for developers to cast their votes. This being Debian, there is always a "none of the above" option on the ballot; should this option win, the whole process restarts from the beginning.

This year, the call for nominations was duly sent out by project secretary Kurt Roeckx on March 3. But, as of March 10, no eligible candidates had put their names forward. Lamb has been conspicuous in his absence from the discussion, with the obvious implication that he does not wish to run for a third term. So, it would seem, the nomination period has come to a close and the campaigning period has begun, but there is nobody there to do any campaigning.

This being Debian, the constitution naturally describes what is to happen in this situation: the nomination period is extended for another week. Any Debian developers who procrastinated past the deadline now have another seven days in which to get their nominations in; the new deadline is March 17. Should this deadline also pass without candidates, it will be extended for another week; this loop will repeat indefinitely until somebody gives in and submits their name.

Meanwhile, though, there is another interesting outcome from this lack of candidacy: the election of a new leader, whenever it actually happens, will come after the end of Lamb's term. There is no provision for locking the current leader in the office and requiring them to continue carrying out its duties; when the term is done, it's done. So the project is now certain to have a period of time where it has no leader at all. Some developers seem to relish this possibility; one even suggested that a machine-learning system could be placed into that role instead. But, as Joerg Jaspert pointed out: "There is a whole bunch of things going via the leader that is either hard to delegate or impossible to do so". Given enough time without a leader, various aspects of the project's operation could eventually grind to a halt.

The good news is that this possibility, too, has been foreseen in the constitution. In the absence of a project leader, the chair of the technical committee and the project secretary are empowered to make decisions, as long as they are able to agree on what those decisions should be. Since Debian developers are famously an agreeable and non-argumentative bunch, there should be no problem with that aspect of things. In other words, the project will manage to muddle along for a while without a leader, though various processes could slow down and become more awkward if the current candidate drought persists.

One might well wonder, though, why there seems to be nobody who wants to take the helm of this project for a year. Could the fact that it is an unpaid position requiring a lot of time and travel have something to do with it? If that were indeed to prove to be part of the problem, Debian might eventually have to consider doing what a number of similar organizations have done and create a paid position to do this work. Such a change would not be easy to make, but if the project finds itself struggling to find a leader every year, it's a discussion that may need to happen.

Are Debian and Docker slowly losing popularity?
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!
Debian 9.7 released with fix for RCE flaw


Google Cloud Console Incident Resolved!

Melisha Dsouza
12 Mar 2019
2 min read
On 11th March, the Google Cloud team received a report of an issue with Google Cloud Console and Google Cloud Dataflow. Mitigation work to fix the issue was started the same day, as per Google Cloud’s official page.

According to the Google post, “Affected users may receive a "failed to load" error message when attempting to list resources like Compute Engine instances, billing accounts, GKE clusters, and Google Cloud Functions quotas.” As a workaround, the team suggested the use of the gcloud SDK instead of the Cloud Console. No workaround was suggested for Google Cloud Dataflow.

While the mitigation was underway, another update was posted by the team: “The issue is partially resolved for a majority of users. Some users would still face trouble listing project permissions from the Google Cloud Console.” The issue, which began around 09:58 Pacific Time, was finally resolved around 16:30 Pacific Time on the same day.

The team said that they will conduct an internal investigation of this issue and “make appropriate improvements to their systems to help prevent or minimize future recurrence. They will also provide a more detailed analysis of this incident once they have completed their internal investigation.” No other information has been revealed as of today. This downtime affected a majority of Google Cloud users.

https://twitter.com/lukwam/status/1105174746520526848
https://twitter.com/jbkavungal/status/1105184750560571393
https://twitter.com/bpmtri/status/1105264883837239297

Head over to Google Cloud’s official page for more insights on this news.

Monday’s Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias
Elizabeth Warren wants to break up tech giants like Amazon, Google Facebook, and Apple and build strong antitrust laws

AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo
12 Mar 2019
4 min read
Amazon Web Services announced a new open source distribution of Elasticsearch named Open Distro for Elasticsearch in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch will be focused on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source. It provides developers with the freedom to contribute to open source value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project. The need for Open Distro for Elasticsearch Elasticsearch’s Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018, the community witnessed significant intermix of proprietary code into the code base. While an Apache 2.0 licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. “Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid).” Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to only work on open source to contribute and participate. Also, the innovation focus has shifted from furthering the open source distribution to making the proprietary distribution popular. This means that the majority of new Elasticsearch users are now, in fact, running proprietary software. “We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path”, the AWS community states in their blog. These changes have also created uncertainty about the longevity of the open source project as it is getting less innovation focused. Customers also want the freedom to run the software anywhere and self-support at any point in time if they need to. Thus, this has led to the creation of Open Distro for Elasticsearch. Features of Open Distro for Elasticsearch Keeps data security in check Open Distro for Elasticsearch protects users’ cluster by providing advanced security features, including a number of authentication options such as Active Directory and OpenID, encryption in-flight, fine-grained access control, detailed audit logging, advanced compliance features, and more. Automatic notifications Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system. This enables a user to monitor data and send notifications automatically to their stakeholders. It also includes an intuitive Kibana interface and powerful API, which further eases setting up and managing alerts. Increased SQL query interactions It also allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. SQL offers more than 40 functions, data types, and commands including join support and direct export to CSV. Deep Diagnostic insights with Performance Analyzer Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. 
Deep diagnostic insights with Performance Analyzer

Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. Performance Analyzer runs independently, without any performance impact, even when Elasticsearch is under stress.

According to the AWS Open Source Blog, "With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support."

Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, "We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology."

Christian Kaiser, VP Platform Engineering at Netflix, said, "Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution, that we can be confident will remain open source and community-driven."

To know more about Open Distro for Elasticsearch in detail, visit the official AWS blog post.

GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch

Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

How does Elasticsearch work? [Tutorial]

Are Debian and Docker slowly losing popularity?

Savia Lobo
12 Mar 2019
5 min read
Michael Stapelberg, in his blog, explained why he plans to reduce his involvement in the Debian software distribution. Stapelberg wrote the Linux tiling window manager i3, the code search engine Debian Code Search, and the netsplit-free IRC network RobustIRC. He said he will reduce his involvement in Debian by:

- transitioning packages to be team-maintained
- removing the Uploaders field on packages with other maintainers
- orphaning packages where he is the sole maintainer

Stapelberg also describes the pain points in Debian that led him to step away.

Change process in Debian

Debian follows a change process in which packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian. Stapelberg argues that this process does not scale well: "currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages", Stapelberg writes. "Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder."

Fragmented workflow and infrastructure

Debian generally prefers decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to one repository), each repository can use any SCM (git and svn are common) or no SCM at all, and each repository can be hosted on a different site. In practice, non-standard hosting options are used rarely enough not to justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Stapelberg says that after noticing the workflow fragmentation in the Go packaging team, he tried to fix it with a workflow changes proposal, but did not succeed in implementing it.

Debian is hard to machine-read

"While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome." For example, debiman needs help from piuparts to analyze the alternatives mechanism of each package in order to display the manpages of e.g. psql(1), because maintainer scripts modify the alternatives database by calling shell scripts; without actually installing a package, you cannot know which changes it makes to the alternatives database. There also used to be a fedmsg instance for Debian, but it no longer seems to exist. "It is unclear where to get notifications from for new packages, and where best to fetch those packages", Stapelberg says.

A user on Hacker News said, "I've been willing to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on AUR (for ArchLinux), and it's been as easy as it could have been."

Check out Stapelberg's entire blog post for more details.

Maish Saidel-Keesing believes Docker will die soon

Maish Saidel-Keesing, a Cloud & AWS Solutions Architect at CyberArk, Israel, writes in his blog post that "the days for Docker as a company are numbered and maybe also a technology as well".

https://twitter.com/maishsk/status/1019115484673970176

Docker undoubtedly popularized containerization technology.
However, Saidel-Keesing says, "Over the past 12-24 months, people are coming to the realization that docker has run its course and as a technology is not going to be able to provide additional value to what they have today - and have decided to start to look elsewhere for that extra edge."

He also points out that the Open Container Initiative brought with it the Runtime Spec, which opened the door to using something other than Docker as the runtime; Docker is no longer the only runtime in use. "Kelsey Hightower - has updated his Kubernetes the hard way over the years from CRI-O to containerd to gvisor. All the cool kids on the block are no longer using docker as the underlying runtime. There are many other options out there today clearcontainers, katacontainers and the list is continuously growing", Saidel-Keesing says.

"What triggered me was a post from Scott McCarty - about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools"

https://twitter.com/maishsk/status/1098295411117309952

Saidel-Keesing writes, "Lo and behold - no more docker package available in RHEL 8". He adds, "If you're a container veteran, you may have developed a habit of tailoring your systems by installing the 'docker' package. On your brand new RHEL 8 Beta system, the first thing you'll likely do is go to your old friend yum. You'll try to install the docker package, but to no avail. If you are crafty, next, you'll search and find this package: podman-docker.noarch: 'package to Emulate Docker CLI using podman.'"

To know more about this news, head over to Maish Saidel-Keesing's blog post.

Docker Store and Docker Cloud are now part of Docker Hub

Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format

It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!