
Tech News - Cloud & Networking

376 Articles

US airlines grounded after suspected software failure

Richard Gall
01 Apr 2019
2 min read
A number of US airlines were grounded for around 40 minutes by a software failure, causing significant delays for passengers travelling across the country. The issue is believed to have been caused by a problem with AeroData, a program that helps manage things like weight and balance - flight controllers need full visibility into this data for planes to be allowed to fly.

A tweet by the Federal Aviation Administration this morning confirmed the problem, citing "computer issues" as the reason for delays across the U.S.

https://twitter.com/FAANews/status/1112681600788119553

Within an hour, however, the FAA provided a further update, saying that "the issue has now been resolved." Given that even a short delay has a knock-on effect on flights throughout the day, it advised passengers to get in direct contact with the airlines they are flying with. A spokeswoman for Delta is quoted on USA Today saying that "a brief third-party technology issue that prevented some Delta Connection flights from being dispatched on time this morning has been resolved."

Flight delays are making the case for software resiliency

The details of the failure have not yet been revealed, but the air industry nevertheless looks to be making a particularly strong case for investing in software resiliency. Kolton Andrus, CEO and co-founder of chaos engineering platform Gremlin, said: "These airline outages will keep occurring unless something changes. Systems are becoming more complex, and more than ever they rely on software that breaks. While we should continue to celebrate airlines that respond quickly, resolve issues, and maintain a good customer service -- we should celebrate even more the engineering teams at airlines who are catching problems before they cause outages in the first place."

Read next: Chaos Engineering: managing complexity by breaking things

Sailfish OS 3.0.2, named Oulanka, now comes with improved power management and more features

Bhagyashree R
28 Mar 2019
2 min read
Last week, Jolla announced the release of Sailfish OS 3.0.2. This release goes by the name Oulanka, after a national park in the Lapland and Northern Ostrobothnia regions of Finland. Along with 44 fixed issues, this release brings a battery saving mode, better connectivity, new device management APIs, and more.

Improved power management

Sailfish OS Oulanka comes with a battery saving mode, which is enabled by default when the battery drops below 20%. Users can also set the battery saving threshold themselves in the "Battery" section of the settings menu.

Better connectivity

Improvements have been made so that Sailfish OS better handles scenarios where a large number of Bluetooth and WLAN devices are connected to the network: Bluetooth and WLAN network scans will no longer slow down your device. Many updates have also been made to the firewall introduced in the previous release, Sipoonkorpi, for better robustness.

Updates in the Corporate API

This release comes with several improvements to the Corporate API. New device management APIs have been added, including data counters, call statistics, location data sources, proxy settings, app auto start, roaming status, and cellular settings.

Sailfish X Beta for Xperia XA2

Sailfish X, the downloadable version of Sailfish OS for select devices, continues to be in beta for the XA2 with the Oulanka update. With this release, the team has improved several aspects of the Android 8.1 Support Beta for XA2 devices; Android apps can now connect to the internet more reliably over mobile data.

To know more about Sailfish OS Oulanka, check out the official announcement.

An early access to Sailfish 3 is here!
Linux 5.1 will come with Intel graphics, virtual memory support, and more
The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration

Microsoft, Adobe, and SAP share new details about the Open Data Initiative

Natasha Mathur
28 Mar 2019
3 min read
Earlier this week at the Adobe Summit, the world's largest conference focused on Customer Experience Management, Microsoft, Adobe, and SAP announced that they are expanding their Open Data Initiative. The CEOs of Microsoft, Adobe, and SAP launched the Open Data Initiative at the Microsoft Ignite conference in 2018. The core idea behind the Open Data Initiative is to make it easier for customers to move data between each others' services.

Now, the three partners are looking to transform customer experiences with the help of real-time insights delivered via the cloud. They have also come out with a common approach and a set of resources to help customers create new connections across previously siloed data.

Read Also: Women win all open board director seats in Open Source Initiative 2019 board elections

"From the beginning, the ODI has been focused on enhancing interoperability between the applications and platforms of the three partners through a common data model with data stored in a customer-chosen data lake", reads the Microsoft announcement. This unified data lake offers customers their choice of development tools and applications to build and deploy services.

The companies have also come out with a new approach for publishing, enriching, and ingesting initial data feeds from Adobe Experience Platform into a customer's data lake. The approach will be activated via Adobe Experience Cloud, Microsoft Dynamics 365, Office 365, and SAP C/4HANA, providing a new level of AI enrichment that helps firms serve their customers better.

Moreover, to further advance the development of the initiative, Adobe, Microsoft, and SAP shared details of their plans to convene a Partner Advisory Council comprising over a dozen firms, including Accenture, Amadeus, Capgemini, Change Healthcare, Cognizant, and others. Microsoft states that these organizations believe there is a significant opportunity in the ODI to help them offer altogether new value to their customers.

"We're excited about the initiative Adobe, Microsoft and SAP have taken in this area, and we see a lot of opportunity to contribute to the development of ODI", states Stephan Pretorius, CTO, WPP.

Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
Microsoft announces: Microsoft Defender ATP for Mac, a fully automated DNA data storage, and revived office assistant Clippy
Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio

Shodan Monitor, a new website that monitors the network and tracks what is connected to the internet

Amrata Joshi
28 Mar 2019
2 min read
Just two days ago, the team at Shodan introduced Shodan Monitor, a new website that helps users set up network alerts and keep track of what's connected to the internet.

Features of Shodan Monitor

Networking gets easy with Shodan Monitor
Users can explore what they have connected to the internet within their network range, and set up real-time notifications in case something unexpected shows up.

Scaling
The Shodan platform can handle networks of all sizes. If an ISP needs to cover millions of customers, Shodan can be relied on in that scenario.

Security
Shodan Monitor helps users monitor their known networks and devices across the internet. It helps in detecting leaks to the cloud, identifying phishing websites, and finding compromised databases.

Shodan navigates users to important information
Shodan Monitor keeps dashboards precise and relevant by providing the most relevant information gathered by its web crawlers. The information shown on users' dashboards is filtered before being displayed.

Component details

API
Shodan Monitor provides users with a developer-friendly API and command-line interface, which expose all the features of the Shodan Monitor website.

Scanning
Shodan's global infrastructure helps users scan their networks in order to confirm that an issue has been fixed.

Batteries
Shodan's API plan subscription gives users access to Shodan Monitor, the search engine, the API, and a wide range of websites.

A few users are happy about this news and excited to use it.
https://twitter.com/jcsecprof/status/1110866625253855235
According to a few others, the website still needs some work, as they are facing errors while using it.
https://twitter.com/MarcelBilal/status/1110796413607313408

To know more about this news, check out Shodan Monitor.

Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
Grunt makes it easy to test and optimize your website. Here's how. [Tutorial]
FBI takes down some 'DDoS for hire' websites just before Christmas
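For readers curious about the API side mentioned under "Component details", here is a minimal sketch of setting up a network alert with the official shodan Python library. The create_alert and alerts calls reflect that library's alert API as best understood here, and the API key and the 198.51.100.0/24 network range are placeholders rather than anything taken from the announcement.

```python
# A minimal sketch, assuming the official `shodan` Python library (pip install shodan)
# and its alert API; the method names and network range below are illustrative.
import shodan

API_KEY = "YOUR_API_KEY"          # placeholder: your Shodan API key
NETWORK = "198.51.100.0/24"       # placeholder: the network range you want to monitor

api = shodan.Shodan(API_KEY)

# Create an alert for the network range; Shodan then notifies you
# when something unexpected shows up on these IPs.
alert = api.create_alert("office-network", NETWORK)
print("Created alert:", alert["id"])

# List the alerts configured for this account.
for a in api.alerts():
    print(a["name"], a["filters"].get("ip"))
```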

Elastic Stack 6.7 releases with Elastic Maps, Elastic Uptime and much more!

Amrata Joshi
27 Mar 2019
3 min read
Yesterday, the team at Elastic released Elastic Stack 6.7, a group of open source products from Elastic designed to help users take data from any type of source and visualize it in real time.

What's new in Elastic Stack 6.7?

Elastic Maps
Elastic Maps is a new dedicated solution for mapping, querying, and visualizing geospatial data in Kibana. It expands on the existing geospatial visualization options in Kibana with features such as visualization of multiple layers and data sources in the same map, dynamic data-driven styling on vector layers, mapping of both aggregate and document-level data, and much more. Elastic Maps also embeds the query bar with autocomplete for real-time ad hoc search.

Elastic Uptime
This release comes with Elastic Uptime, which makes it easy to detect when application services are down or responding slowly. It notifies users about problems before those services are even called by the application.

Cross Cluster Replication (CCR)
Cross Cluster Replication (CCR), which covers a variety of use cases including cross-datacenter and cross-region replication, is now generally available.

Index Lifecycle Management (ILM)
Index Lifecycle Management (ILM) is now generally available and ready for production use. ILM helps Elasticsearch admins define and automate lifecycle management policies, such as how data is managed and moved between the hot, warm, cold, and deletion phases as it ages.

Elasticsearch SQL
Elasticsearch SQL lets users interact with and query their Elasticsearch data using SQL. The Elasticsearch SQL functionality includes the JDBC and ODBC clients, which allow third-party tools to connect to Elasticsearch as a backend datastore. With this release, Elasticsearch SQL becomes generally available.

Canvas
Canvas, which helps users showcase and present live data from Elasticsearch with pixel-perfect precision, also becomes generally available with this release.

Kibana localization
This release ships Kibana's first localization, which is now available in simplified Chinese. Kibana also introduces a new localization framework that provides support for additional languages.

Functionbeat
Functionbeat is a Beat that deploys as a function in serverless computing frameworks and streams cloud infrastructure logs and metrics into Elasticsearch. Functionbeat is now generally available; it supports the AWS Lambda framework and can stream data from CloudWatch Logs, SQS, and Kinesis.

Upgrade Assistant
The Upgrade Assistant in this release helps users prepare their existing Elastic Stack environment for the upgrade to 7.0. The Upgrade Assistant includes both APIs and UIs and works as an important cluster checkup tool to help plan the upgrade. It also helps identify things like deprecation warnings to enable a smoother upgrade experience.

To know more about this release, check out Elastic's blog post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Core CPython developer unveils a new project that can analyze his phone's 'silent connections'
How to handle backup and recovery with PostgreSQL 11 [Tutorial]
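As a small illustration of the now generally available Elasticsearch SQL feature, the sketch below sends a SQL query to a cluster over the REST API using Python. The /_xpack/sql endpoint path (used by the 6.x line), the localhost address, and the flights index with its columns are assumptions for illustration, not details from the release notes.

```python
# A minimal sketch of querying Elasticsearch with SQL over the REST API.
# The endpoint path, cluster address, index name, and columns are assumptions.
import requests

ES_URL = "http://localhost:9200/_xpack/sql?format=json"

query = {
    # Plain SQL against an index, as enabled by the Elasticsearch SQL feature.
    "query": "SELECT origin, AVG(delay_minutes) AS avg_delay "
             "FROM flights GROUP BY origin ORDER BY avg_delay DESC LIMIT 5"
}

resp = requests.post(ES_URL, json=query, timeout=10)
resp.raise_for_status()
result = resp.json()

# With format=json the response carries column metadata and row values.
print([col["name"] for col in result["columns"]])
for row in result["rows"]:
    print(row)
```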

Uber open-sources Peloton, a unified Resource Scheduler

Natasha Mathur
27 Mar 2019
2 min read
Earlier this month, Uber open-sourced Peloton, a unified resource scheduler that manages resources across distinct workloads. Peloton, first introduced in November last year, is built on top of Mesos. "By allowing others in the cluster management community to leverage unified schedulers and workload co-location, Peloton will open the door for more efficient resource utilization and management across the community", states the Uber team.

Peloton is designed for web-scale companies such as Uber, with clusters consisting of millions of containers and tens of thousands of nodes. It comes with advanced resource management capabilities such as elastic resource sharing, hierarchical max-min fairness, resource overcommits, and workload preemption. Peloton uses Mesos to aggregate resources from different hosts and then launches tasks as Docker containers. It also makes use of hierarchical resource pools to manage elastic and cluster-wide resources more efficiently.

Before Peloton was released, each workload at Uber ran on its own cluster, which resulted in various inefficiencies. With Peloton, mixed workloads can be colocated in shared clusters for better resource utilization.

Peloton feature highlights

- Elastic resource sharing: Peloton supports hierarchical resource pools that help elastically share resources among different teams.
- Resource overcommit and task preemption: Peloton helps improve cluster utilization by scheduling workloads that use slack resources.
- Optimized for big data workloads: Support is provided for advanced Apache Spark features such as dynamic resource allocation.
- Optimized for machine learning: Support is provided for GPU and gang scheduling for TensorFlow and Horovod.
- High scalability: Users can scale to millions of containers and tens of thousands of nodes.

"Open sourcing Peloton will enable greater industry collaboration and open up the software to feedback and contributions from industry engineers, independent developers, and academics across the world", states the Uber team.

Uber and Lyft drivers strike in Los Angeles
Uber and GM Cruise are open sourcing their Automation Visualization Systems
Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts
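The hierarchical max-min fairness that Peloton advertises builds on the classic max-min fair allocation policy. The following Python sketch shows the flat, textbook version of that policy under simple assumptions; it is an illustration of the concept, not Peloton's actual scheduler code.

```python
# A small, self-contained sketch of (flat) max-min fair allocation, the policy
# that Peloton's hierarchical resource pools generalize. Illustrative only.
def max_min_fair(capacity, demands):
    """Allocate `capacity` across `demands` so that no user can gain
    without reducing the share of a user with an equal or smaller share."""
    allocation = {}
    remaining = capacity
    # Serve the smallest demands first; each round shares what is left equally.
    for i, (user, demand) in enumerate(sorted(demands.items(), key=lambda kv: kv[1])):
        fair_share = remaining / (len(demands) - i)
        allocation[user] = min(demand, fair_share)
        remaining -= allocation[user]
    return allocation

# Example: 10 CPUs shared by three hypothetical teams with uneven demands.
print(max_min_fair(10, {"batch": 8, "stateless": 3, "ml": 6}))
# -> {'stateless': 3, 'ml': 3.5, 'batch': 3.5}
```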

Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more

Amrata Joshi
26 Mar 2019
2 min read
Yesterday, the team at Kubernetes released Kubernetes 1.14, a new update to the popular open-source container orchestration system. Kubernetes 1.14 comes with support for Windows nodes, a kubectl plugin mechanism, Kustomize integration, and much more.

https://twitter.com/spiffxp/status/1110319044249309184

What's new in Kubernetes 1.14?

Support for Windows nodes
This release adds support for Windows nodes as worker nodes. Kubernetes can now schedule Windows containers, enabling a vast ecosystem of Windows applications. Enterprises with existing investments can now more easily manage their workloads and gain operational efficiencies across their deployments, regardless of the operating system.

Kustomize integration
With this release, the declarative resource config authoring capabilities of Kustomize are available in kubectl through the -k flag. Kustomize helps users author and reuse resource config using Kubernetes-native concepts.

kubectl plugin mechanism
This release also ships the kubectl plugin mechanism, which allows developers to publish their own custom kubectl subcommands in the form of standalone binaries.

PID limits and pod priority
Administrators can now provide pod-to-pod PID (process ID) isolation by defaulting the number of PIDs per pod. Pod priority and preemption in this release enable the Kubernetes scheduler to schedule important pods first and evict less important pods to make room for them.

Users are generally happy and excited about this release.

https://twitter.com/fabriziopandini/status/1110284805411872768

A user commented on Hacker News, "The inclusion of Kustomize[1] into kubectl is a big step forward for the K8s ecosystem as it provides a native solution for application configuration. Once you really grok the pattern of using overlays and patches, it starts to feel like a pattern that you'll want to use everywhere"

To know more about this release in detail, check out Kubernetes' official announcement.

Red Hat's OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested 'Operators' for applications
Microsoft open sources 'Accessibility Insights for Web', a chrome extension to help web developers fix their accessibility issues
Microsoft open sources the Windows Calculator code on GitHub
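To make the plugin mechanism concrete, here is a minimal sketch of what a kubectl plugin can look like: kubectl discovers any executable on the PATH whose filename starts with kubectl-, so a small script saved as kubectl-hello (a hypothetical name) becomes callable as kubectl hello. The behavior below is purely illustrative.

```python
#!/usr/bin/env python3
# A minimal sketch of a kubectl plugin: save this file as `kubectl-hello`,
# mark it executable, and put it on your PATH; kubectl then exposes it
# as `kubectl hello`. The plugin name and behavior here are illustrative.
import subprocess
import sys

def main():
    # Plugins receive everything after the subcommand name as arguments.
    namespace = sys.argv[1] if len(sys.argv) > 1 else "default"
    print(f"Listing pods in namespace '{namespace}' via the real kubectl:")
    # A plugin is just a process; it can shell out to kubectl itself.
    subprocess.run(["kubectl", "get", "pods", "-n", namespace], check=False)

if __name__ == "__main__":
    main()
```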

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking

Bhagyashree R
18 Mar 2019
3 min read
For the past few months, Facebook and Microsoft have been working together on a new architecture based on the Open Rack standards. Last week, Facebook announced a new initiative that aims to build uniformity around the Rack & Power design. The Rack & Power Project Group is responsible for setting the rack standards designed for data centers and integrating the rack into the data center infrastructure. This project comes under a larger initiative started by Facebook called the Open Compute Project.

Why is a new version of Open Rack needed?

Today, the industry is turning to AI and ML systems to solve several difficult problems. While these systems are helpful, they also require increased power density at both the component level and the system level. The ever-increasing bandwidth demands on networking systems have led to similar problems. So, in order to improve overall system performance, it is important to bring memory, processors, and system fabrics as close together as possible. The new Open Rack architecture will bring greater benefits compared to the current version, Open Rack V2.

"For this next version, we are collaborating to create flexible, interoperable, and scalable solutions for the community through a common OCP architecture. Accomplishing this goal will enable wider adoption of OCP technologies across multiple industries, which will benefit operators, solution providers, original design manufacturers, and configuration managers," shared Facebook in the blog post.

What are the goals of this initiative?

This new initiative aims to achieve the following goals:

- A common OCP rack architecture to enable greater sharing between Microsoft and Facebook.
- A flexible frame and power infrastructure that will support a wide range of solutions across the OCP community.
- Beyond the features needed by Facebook, additional features for the larger community, including physical security for solutions deployed in co-location facilities.
- New thermal solutions such as liquid cooling manifolds, door-based heat exchangers, and defined physical and thermal interfaces. These solutions are currently under development by the Advanced Cooling Solutions sub-project.
- New power and battery backup solutions that scale across different rack power levels and accommodate different power input types.

To know more in detail, check out the official announcement on Facebook.

Two top executives leave Facebook soon after the pivot to privacy announcement
Facebook tweet explains 'server config change' for 14-hour outage on all its platforms
Facebook under criminal investigations for data sharing deals: NYT report

Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud

Natasha Mathur
15 Mar 2019
3 min read
Microsoft announced yesterday that it is open-sourcing its new cutting-edge compression technology, called Project Zipline. As part of this open-source release, the Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL) have been made available.

Alongside the announcement of Project Zipline, the Open Compute Project (OCP) Global Summit 2019 also started yesterday in San Jose. At the summit, the latest innovations that can make hardware more efficient, flexible, and scalable are shared. Microsoft states that its journey with OCP began in 2014, when it joined the foundation and contributed the server and data center designs that power its global Azure cloud. Microsoft contributes innovations to OCP every year at the summit, and this year it has decided to contribute Project Zipline. "This contribution will provide collateral for integration into a variety of silicon components across the industry for this new high-performance compression standard. Contributing RTL at this level of detail as open source to OCP is industry leading", states the Microsoft team.

Project Zipline aims to optimize the hardware implementation for the types of data found in cloud storage workloads. Microsoft says it has been able to achieve higher compression ratios, higher throughput, and lower latency than the other algorithms currently available. This allows for compression without compromise, as well as data processing for different industry usage models (from cloud to edge).

Microsoft's Project Zipline compression algorithm produces strong results, with up to 2X higher compression ratios compared to the commonly used Zlib-L4 64KB model. These enhancements, in turn, produce direct customer benefits in cost savings and give customers access to petabytes or exabytes of capacity in a cost-effective way.

Project Zipline has also been optimized for a large variety of datasets, and Microsoft's release of the RTL allows hardware vendors to use a reference design that offers the highest compression, lowest cost, and lowest power. Project Zipline is available to the OCP ecosystem, so vendors can contribute further to benefit Azure and other customers.

The Microsoft team states that this open source contribution will set a "new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level". In the future, Microsoft expects Project Zipline compression technology to enter different market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessors, IoT, and edge devices.

For more information, check out the official Microsoft announcement.

Microsoft open sources the Windows Calculator code on GitHub
Microsoft open sources 'Accessibility Insights for Web', a chrome extension to help web developers fix their accessibility issue
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models
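Since the compression numbers above are quoted against a "Zlib-L4 64KB" baseline, the short Python sketch below shows how that kind of baseline ratio can be measured with the standard zlib module, compressing a file in independent 64 KB blocks at level 4. The sample.dat path is a placeholder, and this reproduces only the reference point, not Project Zipline itself.

```python
# A minimal sketch of measuring a zlib level-4, 64 KB-block compression ratio,
# i.e. the baseline the article compares Project Zipline against.
# The input path is a placeholder; this does not implement Zipline itself.
import zlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks, as in the Zlib-L4 64KB model
LEVEL = 4               # zlib compression level 4

def zlib_l4_ratio(path):
    raw_bytes = 0
    compressed_bytes = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            raw_bytes += len(block)
            # Each block is compressed independently, like a block-based storage engine.
            compressed_bytes += len(zlib.compress(block, LEVEL))
    return raw_bytes / compressed_bytes if compressed_bytes else float("nan")

print("zlib-L4/64KB compression ratio:", round(zlib_l4_ratio("sample.dat"), 2))
```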

Google to be the founding member of CDF (Continuous Delivery Foundation)

Bhagyashree R
15 Mar 2019
3 min read
On Tuesday, Google announced that it is one of the founding members of the newly formed Continuous Delivery Foundation (CDF). As part of its membership, Google will be contributing to two projects, namely Spinnaker and Tekton.

About the Continuous Delivery Foundation

The formation of the CDF was announced at the Linux Foundation Open Source Leadership Summit on Tuesday. The CDF will act as a "vendor-neutral home" for some of the most important open source projects for continuous delivery and for specifications to speed up the release pipeline process.

https://twitter.com/linuxfoundation/status/1105515314899492864

The existing CI/CD ecosystem is heavily fragmented, which makes it difficult for developers and companies to decide on particular tooling for their projects. DevOps practitioners also often find it challenging to gather guidance on software delivery best practices. The CDF was formed to make CI/CD tooling easier and to define the best practices and guidelines that will enable application developers to deliver better and more secure software at speed.

The CDF currently hosts some of the most widely used CI/CD tools, including Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is backed by 20+ founding members, which include Alauda, Alibaba, Anchore, Armory.io, Atos, Autodesk, Capital One, CircleCI, CloudBees, DeployHub, GitLab, Google, HSBC, Huawei, IBM, JFrog, Netflix, Puppet, Rancher, Red Hat, SAP, Snyk, and SumoLogic.

Why did Google join the CDF?

As a part of this foundation, Google will be working on Spinnaker and Tekton. Originally created by Netflix and jointly led by Netflix and Google, Spinnaker is an open source, multi-cloud delivery platform. It comes with various features for making continuous delivery reliable, including support for advanced deployment strategies, an open source canary analysis service named Kayenta, and more. The Spinnaker user community has deep experience in the continuous delivery domain, and by joining the CDF Google aims to share that expertise with the broader community.

Tekton is a set of shared, open source components for building CI/CD systems. It allows you to build, test, and deploy applications across multiple environments such as virtual machines, serverless platforms, Kubernetes, or Firebase. In the next few months, we can expect to see support for results and event triggering in Tekton. Google is also planning to work with CI/CD vendors to build an ecosystem of components that will allow users to use Tekton with existing tools like Jenkins X, Kubernetes-native tooling, and others.

Dan Lorenc, Staff Software Engineer at Google Cloud, sharing Google's motivation behind joining the CDF, said, "Continuous Delivery is a critical part of modern software development, but today space is heavily fragmented. The Tekton project addresses this problem by working with the open source community and other leading vendors to collaborate on the modernization of CI/CD infrastructure."

Kim Lewandowski, Product Manager at Google Cloud, said, "The ability to deploy code securely and as fast as possible is top of mind for developers across the industry. Only through best practices and industry-led specifications will developers realize a reliable and portable way to take advantage of continuous delivery solutions. Google is excited to be a founding member of the CDF and to work with the community to foster innovation for deploying software anywhere."

To know more, check out the official announcement on the Google Open Source blog.

Google Cloud Console Incident Resolved!
Cloudflare takes a step towards transparency by expanding its government warrant canaries
Google to acquire cloud data migration start-up 'Alooma'

Debian project leader election goes without nominations. What now?

Fatema Patrawala
13 Mar 2019
5 min read
The Debian Project is an association of individuals who have made common cause to create a free operating system. One of the traditional rites of the northern hemisphere spring is the election of the Debian project leader. Over a six-week period starting in March, interested candidates put their names forward, describe their vision for the project as a whole, answer questions from Debian developers, then wait and watch while the votes come in. But what happens if Debian holds an election and no candidates step forward? The Debian project has just found itself in that situation this year and is trying to figure out what will happen next.

The Debian project scatters various types of authority widely among its members, leaving relatively little for the project leader. As long as they stay within the bounds of Debian policy, individual developers have nearly absolute control over the packages they maintain. For example:

- Difficult technical disagreements between developers are handled by the project's technical committee.
- The release managers and FTP masters make the final decisions on what the project will actually ship (and when).
- The project secretary ensures that the necessary procedures are followed.
- The policy team handles much of the overall design for the distribution.

So, in a sense, there is relatively little leading left for the leader to do. The roles that do fall to the leader fit into a couple of broad areas. The first is representing the project to the rest of the world: the leader gives talks at conferences and manages the project's relationships with other groups and companies. The second role is, to a great extent, administrative: the leader

- manages the project's money,
- appoints developers to other roles within the project, and
- takes care of details that nobody else in the project is responsible for.

Leaders are elected to a one-year term; for the last two years, this position has been filled by Chris Lamb. The February "Bits from the DPL" by Chris gives a good overview of what sorts of tasks the leader is expected to carry out.

The Debian constitution describes the process for electing the leader. Six weeks prior to the end of the current leader's term, a call for candidates goes out. Only those recognized as Debian developers are eligible to run; they get one week to declare their intentions. There follows a three-week campaigning period, then two weeks for developers to cast their votes. This being Debian, there is always a "none of the above" option on the ballot; should this option win, the whole process restarts from the beginning.

This year, the call for nominations was duly sent out by project secretary Kurt Roeckx on March 3. But, as of March 10, no eligible candidates had put their names forward. Lamb has been conspicuous in his absence from the discussion, with the obvious implication that he does not wish to run for a third term. So, it would seem, the nomination period has come to a close and the campaigning period has begun, but there is nobody there to do any campaigning.

This being Debian, the constitution naturally describes what is to happen in this situation: the nomination period is extended for another week. Any Debian developers who procrastinated past the deadline now have another seven days in which to get their nominations in; the new deadline is March 17. Should this deadline also pass without candidates, it will be extended for another week; this loop will repeat indefinitely until somebody gives in and submits their name.

Meanwhile, though, there is another interesting outcome of this lack of candidacy: the election of a new leader, whenever it actually happens, will come after the end of Lamb's term. There is no provision for locking the current leader in the office and requiring them to continue carrying out its duties; when the term is done, it's done. So the project is now certain to have a period of time in which it has no leader at all. Some developers seem to relish this possibility; one even suggested that a machine-learning system could be placed into that role instead. But, as Joerg Jaspert pointed out: "There is a whole bunch of things going via the leader that is either hard to delegate or impossible to do so". Given enough time without a leader, various aspects of the project's operation could eventually grind to a halt.

The good news is that this possibility, too, has been foreseen in the constitution. In the absence of a project leader, the chair of the technical committee and the project secretary are empowered to make decisions, as long as they are able to agree on what those decisions should be. Since Debian developers are famously an agreeable and non-argumentative bunch, there should be no problem with that aspect of things. In other words, the project will manage to muddle along for a while without a leader, though various processes could slow down and become more awkward if the current candidate drought persists.

One might well wonder, though, why there seems to be nobody who wants to take the helm of this project for a year. Could the fact that it is an unpaid position requiring a lot of time and travel have something to do with it? If that were indeed part of the problem, Debian might eventually have to consider doing what a number of similar organizations have done and create a paid position to do this work. Such a change would not be easy to make, but if the project finds itself struggling to find a leader every year, it's a discussion that may need to happen.

Are Debian and Docker slowly losing popularity?
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!
Debian 9.7 released with fix for RCE flaw

Google Cloud Console Incident Resolved!

Melisha Dsouza
12 Mar 2019
2 min read
On 11th March, the Google Cloud team received a report of an issue with the Google Cloud Console and Google Cloud Dataflow. According to Google Cloud's official status page, mitigation work to fix the issue was started the same day. The post states, "Affected users may receive a 'failed to load' error message when attempting to list resources like Compute Engine instances, billing accounts, GKE clusters, and Google Cloud Functions quotas."

As a workaround, the team suggested using the gcloud SDK instead of the Cloud Console. No workaround was suggested for Google Cloud Dataflow.

While the mitigation was underway, another update was posted by the team: "The issue is partially resolved for a majority of users. Some users would still face trouble listing project permissions from the Google Cloud Console."

The issue, which began around 09:58 Pacific Time, was finally resolved around 16:30 Pacific Time on the same day. The team said it will conduct an internal investigation of the issue and "make appropriate improvements to their systems to help prevent or minimize future recurrence", and that it will provide a more detailed analysis of the incident once the internal investigation is complete. No other information has been revealed as of today. The downtime affected a majority of Google Cloud users.

https://twitter.com/lukwam/status/1105174746520526848
https://twitter.com/jbkavungal/status/1105184750560571393
https://twitter.com/bpmtri/status/1105264883837239297

Head over to Google Cloud's official page for more insights on this news.

Monday's Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias
Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws

Sway 1.0 released with swaynag, improved performance, major bug fixes and more!

Amrata Joshi
12 Mar 2019
3 min read
Yesterday, the team behind Sway, the i3-compatible Wayland compositor, released Sway 1.0, the first stable release of Sway, a consistent, flexible, and powerful desktop environment for Linux and FreeBSD. Sway 1.0 comes with a variety of features that improve performance and offer a better implementation of Wayland. This release is 100% compatible with i3, i3 IPC, i3-gaps, and i3bar.

What's new in Sway 1.0?

- swayidle, a daemon for managing DPMS and idle activity, has been added.
- This release comes with swaynag, an i3-nagbar replacement.
- bindsym --locked now adds keybindings that work while the screen is locked.
- Command blocks are now generic and work with any command.
- Window opacity can now be adjusted with the opacity command.
- The border csd command enables client-side decorations.
- Sway 1.0 comes with atomic layout updates, which help when resizing windows and adjusting the layout.
- Urgency hints from Xwayland are now supported.
- Output damage tracking improves CPU performance and power usage.
- Hardware cursors further improve performance.
- The Wayland, X11, and headless backends are now supported for end users.

Major changes

- This release now depends on wlroots 0.5.
- The dependency on asciidoc has been dropped.
- The experimental Nvidia support has been removed.
- swaylock is now distributed separately.

Major bug fixes

- Issues related to xdg-shell have been fixed.
- Issues related to Xwayland have been fixed.
- Reloading the config no longer causes crashes.

A few users are excited about this news. One user commented on Hacker News, "Sway is absolutely incredible, it puts macOS, built by Apple's army of engineers and dump trucks of money to shame in its simplicity, stability, and efficiency." A few others are unhappy with the tiling window manager approach. Another user commented, "I really don't get the benefit of a tiling window manager. I tried one and instantly felt boxed in. There's not enough room on the screen for everything I need to have opened and flip between, which is why I use an overlapping window manager in the first place."

To know more about this news, check out the official announcement.

Sway 1.0 beta.1 released with the addition of third-party panels, auto-locking, and more
Alphabet's Chronicle launches 'Backstory' for business network security management
'2019 Upskilling: Enterprise DevOps Skills' report gives an insight into the DevOps skill set required for enterprise growth

Are Debian and Docker slowly losing popularity?

Savia Lobo
12 Mar 2019
5 min read
Michael Stapelberg, in his blog, explained why he plans to reduce his involvement in the Debian software distribution. Stapelberg wrote the Linux tiling window manager i3, the code search engine Debian Code Search, and the netsplit-free IRC network RobustIRC. He said he will reduce his involvement in Debian by:

- transitioning packages to be team-maintained
- removing the Uploaders field on packages with other maintainers
- orphaning packages where he is the sole maintainer

Stapelberg describes the pain points in Debian and why he decided to move away from it.

Change process in Debian

Debian follows a change process where packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian. This tool is not necessarily the problem in itself: "currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages", Stapelberg writes. "Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder."

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. In practice, non-standard hosting options are used rarely enough not to justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Stapelberg said that after he noticed the workflow fragmentation in the Go packaging team, he tried to fix it with a workflow changes proposal, but did not succeed in implementing it.

Debian is hard to machine-read

"While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome." debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of, e.g., psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts; without actually installing a package, you cannot know which changes it makes to the alternatives database. There used to be a fedmsg instance for Debian, but it no longer seems to exist. "It is unclear where to get notifications from for new packages, and where best to fetch those packages", Stapelberg says.

A user on Hacker News said, "I've been willing to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on AUR (for ArchLinux), and it's been as easy as it could have been."

Check out the entire blog post by Stapelberg for more details.

Maish Saidel-Keesing believes Docker will die soon

Maish Saidel-Keesing, a Cloud & AWS Solutions Architect at CyberArk, Israel, writes in his blog post that "the days for Docker as a company are numbered and maybe also a technology as well".

https://twitter.com/maishsk/status/1019115484673970176

Docker undoubtedly popularized containerization technology. However, Saidel-Keesing says, "Over the past 12-24 months, people are coming to the realization that docker has run its course and as a technology is not going to be able to provide additional value to what they have today - and have decided to start to look elsewhere for that extra edge." He also points out that the Open Container Initiative brought with it the Runtime Spec, which opened the door to using something other than Docker as the runtime; Docker is no longer the only runtime in use. "Kelsey Hightower has updated his Kubernetes the Hard Way over the years from CRI-O to containerd to gvisor. All the cool kids on the block are no longer using docker as the underlying runtime. There are many other options out there today clearcontainers, katacontainers and the list is continuously growing", Saidel-Keesing says.

"What triggered me was a post from Scott Mccarty - about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools"

https://twitter.com/maishsk/status/1098295411117309952

Saidel-Keesing writes, "Lo and behold - no more docker package available in RHEL 8". He further added, "If you're a container veteran, you may have developed a habit of tailoring your systems by installing the 'docker' package. On your brand new RHEL 8 Beta system, the first thing you'll likely do is go to your old friend yum. You'll try to install the docker package, but to no avail. If you are crafty, next, you'll search and find this package: podman-docker.noarch: 'package to Emulate Docker CLI using podman.'"

To know more on this news, head over to Maish Saidel-Keesing's blog post.

Docker Store and Docker Cloud are now part of Docker Hub
Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot

Melisha Dsouza
08 Mar 2019
2 min read
Yesterday, Red Hat announced the launch of Quarkus, a Kubernetes-native Java framework that offers developers "a unified reactive and imperative programming model" in order to address a wider range of distributed application architectures. The framework uses Java libraries and standards and is tailored for GraalVM and HotSpot. Quarkus has been designed with serverless, microservices, containers, Kubernetes, FaaS, and the cloud in mind, and it provides an effective solution for running Java in these new deployment environments.

Features of Quarkus

- Fast startup, enabling automatic scaling up and down of microservices on containers and Kubernetes, as well as FaaS on-the-spot execution.
- Low memory utilization, to help optimize container density in microservices architecture deployments that require multiple containers.
- A unified imperative and reactive programming model for microservices development.
- A full-stack framework built by leveraging libraries like Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.
- An extension framework that third-party framework authors can leverage and extend.

Twitter was abuzz with Kubernetes users expressing their excitement about this news, describing Quarkus as a "game changer" in the world of microservices:

https://twitter.com/systemcraftsman/status/1103759828118368258
https://twitter.com/MarcusBiel/status/1103647704494804992
https://twitter.com/lazarotti/status/1103633019183738880

This open source framework is available under the Apache Software License 2.0 or a compatible license. You can head over to the Quarkus website for more information on this news.

Using lambda expressions in Java 11 [Tutorial]
Bootstrap 5 to replace jQuery with vanilla JavaScript
Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?