
Tech News - Cloud & Networking

376 Articles

Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Vincy Davis
03 Jul 2019
3 min read
Yesterday, Youhana Naseim, Group Engineering Manager at Azure Pipelines, published a post-mortem of the bug that caused the sqlite3 module to go missing from the Python versions in the Ubuntu 16.04 image from May 14th. The Azure DevOps team identified the bug on May 31st and fixed it on June 26th. Naseim apologized to all the affected customers for the delay in detecting and fixing the issue.

https://twitter.com/hawl01475954/status/1134053763608530945
https://twitter.com/ProCode1/status/1134325517891411968

How the Azure DevOps team detected and fixed the issue

The Azure DevOps team upgraded the versions of Python included in the Ubuntu 16.04 image with the M151 payload. These versions of Python's build scripts treat sqlite3 as an optional module, so the builds completed successfully despite the missing sqlite3 module. Naseim says, "While we have test coverage to check for the inclusion of several modules, we did not have coverage for sqlite3 which was the only missing module."

The issue was first reported via the Azure Developer Community on May 20th by a user who had received the M151 deployment containing the bug. However, the Azure support team escalated it only after receiving more reports during the M152 deployment on May 31st. After posting a workaround for the issue, the team decided to ship the fix with the M153 deployment, as the M152 deployment would take at least 10 days. Further, due to an internal miscommunication, the team did not start the M153 deployment to Ring 0 until June 13th. (To safeguard the production environment, Azure DevOps rolls out changes in a progressive and controlled manner via the ring model of deployments.) The team then resumed deployment to Ring 1 on June 17th and reached Ring 2 by June 20th. Finally, after a few failures, the team fully rolled out the M153 deployment by June 26th.

Azure's future workarounds to deliver timely fixes

The Azure team has set out plans to improve its deployment and hotfix processes with the aim of delivering timely fixes. The long-term plan is to give customers the ability to revert to the previous image as a quick workaround for issues introduced in new images. The detailed medium- and short-term plans are given below.

Medium-term plans:
- Add the ability to better compare what changed on the images, to catch any unexpected discrepancies that the test suite might miss.
- Increase the speed and reliability of the deployment process.

Short-term plans:
- Build a full CI pipeline for image generation to verify images daily.
- Add test coverage for all modules in the Python standard library, including sqlite3.
- Improve communication with the support team so that issues are escalated more quickly.
- Add telemetry so that issues can be detected and diagnosed more quickly.
- Implement measures that enable reverting to prior image versions quickly and mitigating issues faster.

Visit the Azure DevOps status site for more details.

Read More:
- Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
- Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
- Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
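The missed regression came down to nothing asserting the presence of sqlite3 in the built interpreter, since the build scripts treat it as optional. A minimal sketch of the kind of coverage Naseim describes, written as a pytest check (module list chosen for illustration, not taken from the Azure pipeline):

```python
import importlib

import pytest

# Standard-library modules whose presence the image should guarantee.
# sqlite3 is the module that silently went missing; the rest are examples.
REQUIRED_STDLIB_MODULES = ["sqlite3", "ssl", "zlib", "bz2", "lzma", "ctypes"]


@pytest.mark.parametrize("module_name", REQUIRED_STDLIB_MODULES)
def test_stdlib_module_is_importable(module_name):
    """Fail the image build if an expected standard-library module is absent."""
    importlib.import_module(module_name)
```

Because optional modules are skipped silently at build time, an explicit import test like this turns a missing native dependency into a hard build failure instead of a surprise for users of the image.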


Why did Slack suffer an outage on Friday?

Fatema Patrawala
01 Jul 2019
4 min read
On Friday, Slack, an instant messaging platform for workspaces, confirmed a global outage. Millions of users reported disruption in services due to the outage, which occurred early Friday afternoon. Slack experienced a performance degradation issue impacting users all over the world, with multiple services being down. Yesterday the Slack team posted a detailed incident summary report of the service restoration.

The Slack status page read: "On June 28, 2019 at 4:30 a.m. PDT some of our servers became unavailable, causing degraded performance in our job processing system. This resulted in delays or errors with features such as notifications, unfurls, and message posting. At 1:05 p.m. PDT, a separate issue increased server load and dropped a large number of user connections. Reconnection attempts further increased the server load, slowing down customer reconnection. Server capacity was freed up eventually, enabling all customers to reconnect by 1:36 p.m. PDT. Full service restoration was completed by 7:20 p.m. PDT. During this period, customers faced delays or failure with a number of features including file uploads, notifications, search indexing, link unfurls, and reminders. Now that service has been restored, the response team is continuing their investigation and working to calculate service interruption time as soon as possible. We're also working on preventive measures to ensure that this doesn't happen again in the future. If you're still running into any issues, please reach out to us at feedback@slack.com."

https://twitter.com/SlackStatus/status/1145541218044121089

The services affected by the outage were:
- Notifications
- Calls
- Connections
- Search
- Messaging
- Apps/Integrations/APIs
- Link Previews
- Workspace/Org Administration
- Posts/Files

Timeline of Friday's Slack outage

According to user reports, some Slack messages were not delivered, with users receiving an error message. On Friday, at 2:54 PM GMT+3, the Slack status page gave the first sign of the issue: "Some people may be having an issue with Slack. We're currently investigating and will have more information shortly. Thank you for your patience."

https://twitter.com/SlackStatus/status/1144577107759996928

According to Down Detector, Slack users noted that message editing also appeared to be impacted by the latest bug. Comments indicated it was down around the world, including in Sweden, Russia, Argentina, Italy, the Czech Republic, Ukraine and Croatia. The Slack team continued to give updates on the issue, and on Friday evening they reported that services were getting back to normal.

https://twitter.com/SlackStatus/status/1144806594435117056

The news gained a lot of attention on Twitter, with many commenting that Slack was already prepping for the weekend.

https://twitter.com/RobertCastley/status/1144575285980999682
https://twitter.com/Octane/status/1144575950815932422
https://twitter.com/woutlaban/status/1144577117788790785

Users on Hacker News compared Slack with other messaging platforms like Mattermost, Zulip, and Rocket.Chat. One user's comment read, "Just yesterday I was musing that if I were King of the (World|Company) I'd want an open-source Slack-alike that I could just drop into the Cloud of my choice and operate entirely within my private network, subject to my own access control just like other internal services, and with full access to all message histories in whatever database-like thing it uses in its Cloud. Sure, I'd still have a SPOF but it's game over anyway if my Cloud goes dark. Is there such a project, and if so does it have any traction in the real world?" To this another user responded, "We use this at my company - perfectly reasonable UI, don't know about the APIs/integrations, which I assume are way behind Slack…" Another user also responded, "Zulip, Rocket.Chat, and Mattermost are probably the best options."

Slack stock surges 49% on the first trading day on the NYSE after direct public offering
Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration
Slack launches Enterprise Key Management (EKM) to provide complete control over encryption keys


Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and then quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can be run both in the cloud and on-premises. They support machine learning frameworks such as PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks, whereas Google's ML containers do not support Apache MXNet but come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools used for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text. Google Kubernetes Engine clusters are also among the tools, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images now work on cloud and on-premises

The Docker images work on cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
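The point of a preconfigured image is that the framework stack is already consistent, so a sanity check from inside the container (for example, in one of the bundled Jupyter notebooks) reduces to importing the libraries and confirming GPU visibility. A small illustrative sketch, assuming a TensorFlow-flavoured image; on a PyTorch image the imports would differ:

```python
# Quick environment check inside a deep learning container image.
# Assumes a TensorFlow-based image; swap the import for torch on a PyTorch image.
import sys

import sklearn
import tensorflow as tf

print("Python:", sys.version.split()[0])
print("TensorFlow:", tf.__version__)
print("scikit-learn:", sklearn.__version__)

# On a GPU-enabled image with CUDA/cuDNN preinstalled, this should report True.
print("GPU available:", tf.test.is_gpu_available())
```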


Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly interested in researching how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, machine learning can be used to regulate cloud data centres, which manage an important asset, data, and typically comprise tens to thousands of interconnected servers consuming a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015 estimating that by 2030 data centres will use anywhere between 3% and 13% of global electricity.

At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. The "Low Carbon Kubernetes Scheduler" can provide demand-side management (DSM) by migrating consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity.

In their paper the researchers highlight, "All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy". Since the end of 2017, Google Cloud Platform has run its data centres entirely on renewable energy, and Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment. For example, Oracle Cloud is currently 100% carbon neutral in Europe, but not in other regions.

The scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity are available for an increasing number of regions, but not exhaustively around the planet. To demonstrate the scheduler's ability to perform global load balancing, the researchers therefore also evaluated it using the metric of solar irradiance.

"While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres", the paper mentions. Demand-side management (DSM) refers to any initiative that affects how and when electricity is required by consumers.

Source: CEUR-WS.org

Existing schedulers consider singular data centres rather than taking a more global view. The Low Carbon Scheduler, on the other hand, considers carbon intensity across regions, since scaling a large number of containers up or down can be done in a matter of seconds. Each national electric grid contains electricity generated from a variable mix of alternative sources. The carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity input to the grid at regular intervals to the organizations operating the grid (for example the National Grid in the UK), in real time via APIs. These APIs typically allow retrieval of the production volumes and thus make it possible to calculate the carbon intensity in real time. The Low Carbon Scheduler collects the carbon intensity from the available APIs and ranks the regions to identify the one with the lowest carbon intensity. (For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity, www.entsoe.eu, and for the UK this is the Balancing Mechanism Reporting Service, www.elexon.co.uk.)

Why Kubernetes for building a low carbon scheduler

Kubernetes can make use of GPUs and has also been ported to run on the ARM architecture. The researchers also note that Kubernetes has to a large extent won the container orchestration war, and that its support for extensibility and plugins makes it the "most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction". Kubernetes allows schedulers to run in parallel, which means the new scheduler does not need to re-implement the pre-existing, and sophisticated, bin-packing strategies present in Kubernetes. It need only apply a scheduling layer to complement the existing capabilities proffered by Kubernetes. According to the researchers, "Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres". The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler): adding the new rules to the scheduler source code and recompiling, implementing one's own scheduler process that runs instead of, or alongside, kube-scheduler, or implementing a scheduler extender.

Evaluating the performance of the Low Carbon Kubernetes Scheduler

The researchers recorded the carbon intensities for the countries in which the major cloud providers operate data centers, between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC. The paper includes a table of the countries where the largest public cloud providers operated data centers as of April 2019 (Source: CEUR-WS.org). They then ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the intervals, Norway in 0.31%, France in 0.11% and Sweden in 0.01%. However, the list of the least carbon-intense countries only contains locations in central Europe.

To demonstrate the scheduler's suitability for globally distributed deployments, the researchers chose to optimize placement to regions with the greatest degree of solar irradiance, in what they term a Heliotropic Scheduler. The scheduler is called 'heliotropic' to differentiate it from a 'follow-the-sun' application management policy, which relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day). A 'heliotropic' policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant.

They further evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research; Einstein@Home, SETI@home and IBM World Community Grid are some of the most widely supported projects.

The researchers say: "Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity". While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres' infrastructure, the Low Carbon Scheduler provides a proof of concept demonstrating how to reduce carbon intensity in cloud computing.

To know more about this implementation for lowering carbon emissions, read the research paper.

Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
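A toy sketch of the ranking step described above, with hypothetical per-region fetch functions standing in for the real grid APIs (ENTSO-E, Elexon BMRS, and so on), whose exact request formats are not shown here:

```python
"""Toy sketch of the region-ranking step of a low-carbon scheduler.

The fetcher callables below are placeholders; a real implementation would
call grid APIs (e.g. ENTSO-E for the EU, Elexon BMRS for the UK) and convert
the reported generation mix into gCO2/kWh.
"""
from typing import Callable, Dict, List, Tuple

# region name -> callable returning current carbon intensity in gCO2/kWh
CarbonFetchers = Dict[str, Callable[[], float]]


def rank_regions_by_carbon_intensity(fetchers: CarbonFetchers) -> List[Tuple[str, float]]:
    """Poll every region and return (region, intensity) pairs, lowest first."""
    readings = []
    for region, fetch in fetchers.items():
        try:
            readings.append((region, fetch()))
        except Exception:
            # Skip regions whose API is unavailable rather than failing the cycle.
            continue
    return sorted(readings, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Static numbers purely for illustration.
    demo = {
        "eu-west (FR)": lambda: 52.0,
        "us-east (VA)": lambda: 390.0,
        "eu-north (SE)": lambda: 35.0,
    }
    for region, intensity in rank_regions_by_carbon_intensity(demo):
        print(f"{region}: {intensity} gCO2/kWh")
```

The scheduler would then steer workloads to the top-ranked region and leave node-level bin-packing to the default kube-scheduler, as the paper describes.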


Amazon adds UDP load balancing support for Network Load Balancer

Vincy Davis
25 Jun 2019
3 min read
Yesterday, Amazon announced support for load balancing UDP traffic on Network Load Balancers, which will enable customers to deploy connectionless services for online gaming, IoT, streaming, media transfer, and native UDP applications. This has been a long-requested feature among Amazon customers.

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on the user's part. UDP load balancing frees users from maintaining a fleet of proxy servers to ingest UDP traffic; they can instead use the same load balancer for both TCP and UDP traffic, simplifying the network architecture, reducing cost and improving scalability.

Supported Targets

UDP on Network Load Balancers is supported for Instance target types only. It does not support IP target types or PrivateLink.

Health Checks

Health checks must be done using TCP, HTTP, or HTTPS. Users can check on the health of a service by clicking override and specifying a health check on the selected port. Users can then run a custom implementation of Syslog that stores the log messages centrally and in a highly durable form.

Multiple Protocols

A single Network Load Balancer can handle both TCP and UDP traffic. In situations like DNS, where support for both TCP and UDP is needed on the same port, users can set up a multi-protocol target group and a multi-protocol listener.

New CloudWatch Metrics

The existing CloudWatch metrics (ProcessedBytes, ActiveFlowCount, and NewFlowCount) now represent the aggregate traffic processed by the TCP, UDP, and TLS listeners on the given Network Load Balancer.

Users who host DNS, SIP, SNMP, Syslog, RADIUS and other UDP services in their own data centers can now move those services to AWS. It is also possible to deploy services to handle Authentication, Authorization, and Accounting, often known as AAA. Earlier this year, Amazon launched TLS Termination support for Network Load Balancer, which simplifies the process of building secure web applications by allowing users to make use of TLS connections that terminate at a Network Load Balancer.

Users are delighted with Amazon's support for load balancing UDP traffic.

https://twitter.com/cgswong/status/1143312489360183296

A user on Hacker News comments, "This is a Big Deal because it enables support for QUIC, which is now being standardized as HTTP/3. To work around the TCP head of line blocking problem (among others) QUIC uses UDP. QUIC does some incredible patching over legacy decisions in the TCP and IP stack to make things faster, more reliable, especially on mobile networks, and more secure." Another comment reads, "This is great news, and something I've been requesting for years. I manage an IoT backend based on CoAP, which is typically UDP-based. I've looked at Nginx support for UDP, but a managed load balancer is much more appealing."

Some users see this as Amazon's way of preparing 'http3 support' for the future.

https://twitter.com/atechiethought/status/1143240391870832640

Another user on Hacker News wrote, "Nice! I wonder if this is a preparatory step for future quick/http3 support?"

For details on how to create a UDP Network Load Balancer, head over to Amazon's official blog.

Amazon patents AI-powered drones to provide 'surveillance as a service'
Amazon is being sued for recording children's voices through Alexa without consent
Amazon announces general availability of Amazon Personalize, an AI-based recommendation service
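For readers who script their AWS setup, the new protocol option surfaces in the Elastic Load Balancing v2 API as Protocol='UDP' on target groups and listeners. A minimal boto3 sketch, assuming an existing Network Load Balancer and VPC (the ARN and IDs below are placeholders, and the DNS-on-port-53 scenario mirrors the multi-protocol example in the announcement):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder identifiers; substitute your own NLB ARN and VPC ID.
NLB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123"
VPC_ID = "vpc-0123456789abcdef0"

# UDP target group for a DNS-style service. Note the TCP health check,
# since UDP targets must be health-checked over TCP, HTTP, or HTTPS,
# and the instance target type, as IP targets are not supported for UDP.
target_group = elbv2.create_target_group(
    Name="udp-dns-targets",
    Protocol="UDP",
    Port=53,
    VpcId=VPC_ID,
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="53",
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# UDP listener on the Network Load Balancer forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn=NLB_ARN,
    Protocol="UDP",
    Port=53,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```

For the DNS case where TCP and UDP share a port, the same calls apply with Protocol='TCP_UDP' on both the target group and the listener.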


Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, the Enhancements Lead for Kubernetes 1.15 at VMware, Kenny Coleman, published a "What's New in Kubernetes 1.15" video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: dynamic HA clusters with kubeadm, volume cloning and CustomResourceDefinitions (CRDs). Coleman highlights each feature and explains its importance to users. Watch the video below for Kenny Coleman's talk about Kubernetes 1.15.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements, including 2 moving to stable, 13 in beta, and 10 in alpha. The key features of this release include extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements, the most stable features revealed in a single release. In an interview with The New Stack, Claire Laurence, the release team lead, said that in this release, "We've had a fair amount of features progress to beta. I think what we've been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable."

Let's have a brief look at all the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: the Kubernetes team wants users not to notice whether they are interacting with a CustomResource or with a Golang-native resource. Hence, from v1.15 onwards, Kubernetes will check each schema against a restriction called "structural schema", which enforces non-polymorphic and complete typing of each field in a CustomResource.

Of the five enhancements in this area, 'CustomResourceDefinition Defaulting' is an alpha release. Defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting will be available as alpha in Kubernetes 1.15 for structural schemas. The other four enhancements are in beta:

- CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to with native resources.
- CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.
- CustomResourceDefinitions Pruning: Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This is the behaviour of native resources and will be available for CRDs as well, starting as beta in Kubernetes 1.15.
- Admission Webhook Reinvocation & Improvements: In earlier versions, mutating webhooks were only called once, in alphabetical order, so a webhook that ran earlier could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.

#2 Cluster Lifecycle Stability and Usability Improvements

The cluster lifecycle building block, kubeadm, continues to receive features and stability work, which is needed for bootstrapping production clusters efficiently. kubeadm has promoted high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. With kubeadm, certificate management has become more robust in 1.15, as it seamlessly rotates all certificates before expiry. The kubeadm configuration file API moves from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

Continued Improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including functionality like resizing and inline volumes. SIG Storage also introduces new alpha functionality in CSI that doesn't exist in the Kubernetes storage subsystem yet, such as volume cloning. Volume cloning enables users to specify another PVC as a "DataSource" when provisioning a new volume. If the underlying storage system supports this functionality and implements the "CLONE_VOLUME" capability in its CSI driver, the new volume becomes a clone of the source volume.

Additional feature updates

- Support for Go modules in Kubernetes core.
- Continued preparation on cloud provider extraction and code organization; the cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and external consumption.
- kubectl get and describe now work with extensions.
- Nodes now support third-party monitoring plugins.
- A new scheduling framework for scheduler plugins is now alpha.
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha.
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will eventually be retired in the next version, 1.16.

To know about the additional features in detail, check out the release notes.

https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

MongoDB announces new cloud features, beta version of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search and more!

Amrata Joshi
19 Jun 2019
3 min read
Yesterday, the team at MongoDB announced new cloud services and features that offer a better way to work with data. The beta versions of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search will help users access new features in a fully managed MongoDB environment.

MongoDB Charts includes embedded charts in web applications

The general availability of MongoDB Charts helps customers create charts and graphs, build and share dashboards, and embed those charts, graphs and dashboards directly into web apps for better user experiences. MongoDB Charts is generally available to Atlas as well as on-premises customers and helps create real-time visualizations of MongoDB data. It includes new features such as embedded charts in external web applications, geospatial data visualization with new map charts, and built-in workload isolation to eliminate the impact of analytics queries on an operational application.

Dev Ittycheria, CEO and President, MongoDB, said, "Our new offerings radically expand the ways developers can use MongoDB to better work with data." He further added, "We strive to help developers be more productive and remove infrastructure headaches --- with additional features along with adjunct capabilities like full-text search and data lake. IDC predicts that by 2025 global data will reach 175 Zettabytes and 49% of it will reside in the public cloud. It's our mission to give developers better ways to work with data wherever it resides, including in public and private clouds."

MongoDB Query Language added to MongoDB Atlas Data Lake

MongoDB Atlas Data Lake lets customers quickly query data on S3 in any format, such as BSON, CSV, JSON, TSV, Parquet and Avro, with the help of the MongoDB Query Language (MQL). One of the major plus points of the MongoDB Query Language is that it is expressive and allows developers to query the data naturally. Developers can now use the same query language across data on S3, making querying massive data sets easy and cost-effective. With MQL added to MongoDB Atlas Data Lake, users can run queries and explore their data by giving access to existing S3 storage buckets with a few clicks from the MongoDB Atlas console. Since Atlas Data Lake is completely serverless, there is no infrastructure to set up or manage, and customers pay only for the queries they run when they are actively working with the data. The team plans to make MongoDB Atlas Data Lake available on Google Cloud Storage and Azure Storage in the future.

Atlas Full-Text Search offers rich text search capabilities

Atlas Full-Text Search offers rich text search capabilities based on Apache Lucene 8 against fully managed MongoDB databases, with no additional infrastructure or systems to manage. Full-Text Search helps end users filter, rank, and sort their data to bring out the most relevant results, so users are not required to pair their database with an external search engine.

To know more about this news, check out the official press release.

12,000+ unsecured MongoDB databases deleted by Unistellar attackers
MongoDB is going to acquire Realm, the mobile database management system, for $39 million
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process
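Because Atlas Data Lake exposes MQL over the standard wire protocol, querying S3-backed data looks much like querying any MongoDB deployment from a driver. A small illustrative sketch with PyMongo; the connection string, database and collection names are placeholders standing in for whatever your Data Lake storage configuration maps onto S3 buckets:

```python
from pymongo import MongoClient

# Placeholder connection string, as provided by the Atlas Data Lake "Connect" dialog.
client = MongoClient(
    "mongodb://dl-user:secret@datalake0-example.a.query.mongodb.net/"
    "?ssl=true&authSource=admin"
)

# Hypothetical virtual database/collection names defined by the Data Lake
# storage config, each mapping onto files (JSON, CSV, Parquet, ...) in S3.
orders = client["sales_lake"]["orders"]

# Ordinary MQL: filters and aggregations run directly against the S3-backed data.
revenue_by_region = orders.aggregate([
    {"$match": {"status": "complete"}},
    {"$group": {"_id": "$region", "revenue": {"$sum": "$amount"}}},
    {"$sort": {"revenue": -1}},
])

for row in revenue_by_region:
    print(row["_id"], row["revenue"])
```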


Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Sugandha Lahoti
19 Jun 2019
3 min read
Yesterday, Lyft released the initial OSS preview of Envoy Mobile, an iOS and Android client network library that brings Lyft's Envoy Proxy to mobile platforms.

https://twitter.com/mattklein123/status/1140998175722835974

Envoy Proxy was initially built at Lyft to solve the networking and observability issues inherent in large polyglot server-side microservice architectures. It soon gained large-scale public appreciation and came to be used by major public cloud providers, end-user companies, and infrastructure startups. Now, Envoy Proxy is being brought to the iOS and Android platforms, providing an API and abstraction for mobile application networking.

Envoy Mobile is currently in a very early stage of development. The initial release brings the following features:

- Ability to compile Envoy on both Android and iOS: Envoy Mobile uses intelligent protobuf code generation and an abstract transport to help both iOS and Android provide similar interfaces and ergonomics for consuming APIs.
- Ability to run Envoy on a thread within an application, utilizing it effectively as an in-process proxy server.
- Swift/Obj-C/Kotlin demo applications that utilize the exposed Swift/Obj-C/Kotlin "raw" APIs to interact with Envoy and make network calls.

Long-term goals

Envoy Mobile initially provides Swift APIs for iOS and Kotlin APIs for Android, but depending on community interest the team will consider adding support for additional languages in the future. In the long term, they also plan to include the gRPC Server Reflection Protocol in a streaming reflection service API. This API will allow both Envoy and Envoy Mobile to fetch generic protobuf definitions from a central IDL service, which can then be used to implement annotation-driven networking via reflection. They also plan to bring xDS configuration to mobile clients, in the form of routing, authentication, failover, load balancing, and other policies driven by a global load balancing system. Envoy Mobile can also add cross-platform functionality when using strongly typed IDL APIs; some of the annotations planned on the roadmap are caching, priority, streaming, marking an API as offline/deferred capable, and more.

Envoy Mobile is getting a lot of appreciation from developers, with many happy that its development has been open sourced. A comment on Hacker News reads, "I really like how they're releasing this as effectively the first working proof of concept and committing to developing the rest entirely in the open - it's a great opportunity to see how a project of this scale plays out in real-time on GitHub."

https://twitter.com/omerlh/status/1141225499139682305
https://twitter.com/dinodaizovi/status/1141157828247347200

Currently the project is in a pre-release stage; not all features are implemented, and it is not ready for production use. However, you can get started here. Also see the demo release and the roadmap, where the team plans to develop Envoy Mobile entirely in the open.

Related News

Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race


Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months

Vincy Davis
18 Jun 2019
3 min read
Last week, Microsoft announced that Hyper-V Server, one of the variants in the Windows 10 October 2018/1809 release, is finally available on the Microsoft Evaluation Center. This comes after a delay of more than six months since the re-release of Windows Server 1809/Server 2019 in early November. It has also been announced that Hyper-V Server 2019 will be available to Visual Studio Subscription customers by 19th June 2019.

Microsoft Hyper-V Server is a free product that includes all the Hyper-V virtualization features found in the Datacenter Edition. It is ideal when running Linux virtual machines or VDI VMs.

Microsoft originally released Windows Server 2019 in October 2018, but had to pull both the client and server versions of 1809 to investigate reports of users missing files after updating to the latest Windows 10 feature update. Microsoft then re-released Windows Server 1809/Server 2019 in early November 2018, but without Hyper-V Server 2019.

Read More: Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Early this year, Microsoft made the Windows Server 2019 evaluation media available on the Evaluation Center, but Hyper-V Server 2019 was still missing. Though Microsoft provided no official statement, the omission is suspected to have been due to errors with Remote Desktop Services (RDS). Later, in April, Microsoft officials stated that they had found some issues with the media and would release an update soon.

Now that Hyper-V Server 2019 is finally becoming available, users of Windows Server 2019 can be at ease. Users who managed to download the original release of Hyper-V Server 2019 while it was available are advised to delete it and install the new version when it is made available on 19th June 2019.

Users are happy with this news, but are still wondering what took Microsoft so long to deliver Hyper-V Server 2019.

https://twitter.com/ProvoSteven/status/1139926333839028224

People are also skeptical about the product quality. A user on Reddit states, "I'm shocked, shocked I tell you! Honestly, after nearly 9 months of MS being unable to release this, and two months after they said the only thing holding it back were "problems with the media", I'm not sure I would trust this edition. They have yet to fully explain what it is that held it back all these months after every other Server 2019 edition was in production."

Microsoft's Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]


VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision

Vincy Davis
13 Jun 2019
3 min read
Yesterday, VMware announced that it has achieved its goal of using 100% renewable energy in its operations, a year ahead of its 2020 vision. VMware has always been optimistic about the power of technology to help solve societal problems, and one of its key focus areas has been changing its relationship with energy.

https://twitter.com/PGelsinger/status/1138868618257719297

In 2016, VMware announced its goal to achieve carbon-neutral emissions and advance its commitment to use 100 percent renewable energy by 2020. It has reached both of these goals well before the scheduled time. In November 2018, VMware achieved carbon neutrality across all its business operations; now it has also powered 100 percent of its operations with renewable energy and joined RE100, a year early. RE100 is a global corporate leadership initiative to commit influential businesses to 100% renewable electricity and accelerate change towards zero carbon energy. RE100 is led by The Climate Group in partnership with CDP and works to increase corporate demand for, and delivery of, renewable energy.

As data centers are responsible for two percent of the world's greenhouse gas emissions, VMware's technologies have helped IT infrastructure become more efficient by fundamentally changing how its customers use power. VMware has helped its customers avoid putting 540 million metric tons of carbon dioxide into the atmosphere, which is equivalent to powering the population of Spain, Germany and Switzerland for one year.

In a blog post, the Vice President of Sustainability at VMware, Nicola Acutt, mentioned that the company achieved RE100 through a combination of strategies:
- Opting into clean power through local utilities
- Locating assets in areas with renewable energy
- For areas where this is not feasible, purchasing renewable energy credits (RECs), which signals demand to the global market for renewable energy and enables the development of its infrastructure

According to the U.N.'s report, around 70-85 percent of electricity will have to be shifted to renewable energy sources by 2050 to avoid the worst impacts of climate change. Acutt states that to achieve this goal, all establishments will have to adopt a systems approach to become more efficient, which will help drive the transition to a sustainable economy globally.

The response to this news has been great, with people praising VMware for reaching RE100 ahead of schedule.

https://twitter.com/T180985/status/1139059931695345665
https://twitter.com/songsteven2/status/1138908028714065923
https://twitter.com/RSadorus/status/1138985222815404032

Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change

Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration

Sugandha Lahoti
13 Jun 2019
3 min read
Dropbox has revamped its traditional cloud storage service and announced a new unified version of its desktop app, which the company is calling "the new Dropbox." This new version is meant to be a single workplace solution that helps you organize content, connect tools, and bring work groups together, unifying productivity tools such as Google Docs, Microsoft Office, Slack, Salesforce, Trello, and Zoom.

Dropbox is becoming a task-management app

The new version of the popular file sharing service wants to be your file tree, your finder and your desktop for the cloud. Users can create and store shortcuts to any online project management and productivity tools alongside their content. It has a unified search bar that lets you search across your computer's file system and all your cloud storage in other productivity apps. Users can add descriptions to folders to help the team understand more about the work they're doing. Key content can be highlighted by pinning it to the top of a workspace, and users can @mention people and assign to-dos. Users can see file activity and keep tabs with a new team activity feed. There's also a "Send feedback" button on the lower-right side of the page to talk about how the update is working (or not working) for you in practice.

Search bar

New third-party integrations: Slack and Zoom

Dropbox now integrates with Slack for seamless collaboration between content and communication. Users can start Slack conversations and share content to Slack channels directly from Dropbox.

Slack integration with Dropbox

Users can also video conference with Zoom by connecting Zoom and their calendar to Dropbox. From Dropbox, they can add and join Zoom Meetings, where they can share files from their Dropbox.

The new Dropbox has got users quite excited.

https://twitter.com/jsnell/status/1138847481238712320
https://twitter.com/sdw/status/1138518725571665920

Some others have commented that the new Dropbox app is massive in size.

https://twitter.com/puls/status/1138561011684859905
https://twitter.com/sandofsky/status/1138686582859239425

However, some pointed out that the new file sharing service lacks privacy protections; if it integrates with other productivity tools, there should be a mechanism to keep user data private.

https://twitter.com/TarikTech/status/1139068388964261888

The new file sharing service was launched on Tuesday for all of Dropbox's 13 million business users across 400,000 teams, plus its consumer tiers. Users can opt in for early access, and businesses can turn on early access in their admin panel.

Dropbox purchases workflow and eSignature startup 'HelloSign' for $250M
How Dropbox uses automated data center operations to reduce server outage and downtime
Zoom, the video conferencing company files to go public, possibly a profitable IPO


MariaDB announces the release of MariaDB Enterprise Server 10.4

Amrata Joshi
12 Jun 2019
4 min read
Yesterday, the team at MariaDB announced the release of MariaDB Enterprise Server 10.4, codenamed "restful nights". It is a hardened and secured server, distinct from MariaDB's Community Server. This release is focused on solving enterprise customer needs, offering greater reliability, stability and long-term support in production environments. MariaDB Enterprise Server 10.4 and its backported versions will be available to customers by the end of the month as part of the MariaDB Platform subscription.

https://twitter.com/mariadb/status/1138737719553798144

The official blog post reads, "For the past couple of years, we have been collaborating very closely with some of our large enterprise customers. From that collaboration, it has become clear that their needs differ vastly from that of the average community user. Not only do they have different requirements on quality and robustness, they also have different requirements for features to support production environments. That's why we decided to invest heavily into creating a MariaDB Enterprise Server, to address the needs of our customers with mission critical production workloads."

MariaDB Enterprise Server 10.4 comes with added functionality for enterprises running MariaDB at scale in production environments. It also undergoes new levels of testing and ships in a secure-by-default configuration. It includes the same features as MariaDB Server 10.4, including bitemporal tables, an expanded set of instant schema changes and a number of improvements to authentication and authorization (e.g., password expiration and automatic/manual account locking).

Max Mether, VP of Server Product Management, MariaDB Corporation, wrote in an email to us, "The new version of MariaDB Server is a hardened database that transforms open source into enterprise open source." He further added, "We worked closely with our customers to add the features and quality they need to run in the most demanding production environments out-of-the-box. With MariaDB Enterprise Server, we're focused on top-notch quality, comprehensive security, fast bug fixes and features that let our customers run at internet-scale performance without downtime."

James Curtis, Senior Analyst, Data Platforms and Analytics, 451 Research, said, "MariaDB has maintained a solid place in the database landscape during the past few years." He added, "The company is taking steps to build on this foundation and expand its market presence with the introduction of MariaDB Enterprise Server, an open source, enterprise-grade offering targeted at enterprise clients anxious to stand up production-grade MariaDB environments."

Reliability and stability

MariaDB Enterprise Server 10.4 offers the reliability and stability required for production environments. Bugs are fixed to help maintain that reliability, key enterprise features are backported for those running earlier versions of MariaDB Server, and long-term support is provided.

Security

Unsecured databases are often the cause of data breaches, so MariaDB Enterprise Server 10.4 is configured with security settings that support enterprise applications. All non-GA plugins are disabled by default in order to reduce the risks incurred when using unsupported features. Further, the default configuration is changed to enforce strong security, durability and consistency.

Enterprise backup

MariaDB Enterprise Server 10.4 offers enterprise backup that brings operational efficiency to customers with large databases by breaking backups into non-blocking stages. This way, writes and schema changes can occur during a backup rather than waiting for the backup to complete.

Auditing capabilities

The server adds stronger and easier auditing capabilities by logging all changes to the audit configuration. It also logs detailed connection information, giving customers a comprehensive view of changes made to the database.

End-to-end encryption

It also offers end-to-end encryption for multi-master clusters, where the transaction buffers are encrypted to ensure that the data is secure.

https://twitter.com/holgermu/status/1138511727610478594

Learn more about this news on the official web page.

MariaDB CEO says big proprietary cloud vendors "strip-mining open-source technologies and companies"
MariaDB announces MariaDB Enterprise Server and welcomes Amazon's Mark Porter as an advisor to the board of directors
TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool


Joyent Public Cloud to reach End-of-Life in November

Amrata Joshi
07 Jun 2019
4 min read
Yesterday Joyent announced its departure from the public cloud space. The Joyent Public Cloud, including Triton Compute and Triton Object Storage (Manta), stopped accepting new customers as of June 6, 2019, and will discontinue serving existing customers upon end-of-life (EOL) on November 9, 2019.

In 2016, Joyent was acquired by Samsung, which had evaluated Manta, Joyent's object storage system, liked the product, and bought the company. In 2014, Joyent was even praised by Gartner in its IaaS Magic Quadrant for having a "unique vision." The company also developed a single-tenant cloud offering for cloud-mature, hyperscale users such as Samsung, who also demand vastly improved cloud costs. Since more resources are required to expand that single-tenant cloud business, the company had to make this call. The team will continue to build functionality for their open source Triton offering, complemented by commercial support options for running Triton-equivalent private clouds in a single-tenant model. The official blog post reads, "As that single-tenant cloud business has expanded, the resources required to support it have grown as well, which has led us to a difficult decision."

Current customers now have five months to switch and find a new home. They need to migrate, back up, or retrieve data running or stored in the Joyent Cloud before November 9th. The company will be removing compute and data from the current public cloud after November 9th and will not be capturing backups of any customer data.

Joyent is working to assist its customers through the transition with the help of its partners. Some of the primary partners involved include OVH, Microsoft Azure, and Redapt Attunix, while additional partners are being finalized. Users may have to deploy the same open source software that powers the Joyent Public Cloud in their own datacenter, or on a BMaaS provider like SoftLayer, with the company's ongoing support. For those who don't have the scale for their own datacenter or for running BMaaS, Joyent is evaluating different options to support this transition and make it as smooth as possible.

Steve Tuck, Joyent president and chief operating officer (COO), wrote in the blog post, "To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home." He further added, "We are truly grateful for your business and the commitment that you have shown us over the years; thank you."

All publicly available data centers, including US-West, US-Southwest, US-East 1/2/3/3b, EU-West, and Manta, will be impacted by the EOL. However, the company said there will be no impact on its Node.js Enterprise Support offering, and it will invest heavily in the software support business for both Triton and Node.js. A new Node.js Support portal for customers will also be released shortly.

Some think that Joyent's value proposition was hurt by the experience of using its public interface. A user commented on Hacker News, "Joyent's value proposition was killed (for the most part) by the experience of using their public interface. It would've taken a great deal of bravery to try that and decide a local install would be better. The node thing also did a lot of damage - Joyent wrote a lot of the SmartOS/Triton command line tools in node so they were slow as hell. Triton itself is a very non-trivial install although quite probably less so than a complete k8s rig."

Others have expressed remorse over the Joyent Public Cloud EOL.

https://twitter.com/mcavage/status/1136657172836708352
https://twitter.com/jamesaduncan/status/1136656364057612288
https://twitter.com/pborenstein/status/1136661813070827520

To know more about this news, check out EOL of Joyent Public Cloud.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Bryan Cantrill on the changing ethical dilemmas in Software Engineering
Yuri Shkuro on Observability challenges in microservices and cloud-native applications
article-image-google-is-looking-to-acquire-looker-a-data-analytics-startup-for-2-6-billion-even-as-antitrust-concerns-arise-in-washington
Sugandha Lahoti
07 Jun 2019
5 min read
Save for later

Google is looking to acquire Looker, a data analytics startup for $2.6 billion even as antitrust concerns arise in Washington

Google has entered into an agreement to acquire data analytics startup Looker and plans to add it to its Google Cloud division. The acquisition will cost Google $2.6 billion in an all-cash transaction. After the acquisition, the Looker organization will report to Frank Bien, who will in turn report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform combines business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company has raised more than $280 million in funding, according to Crunchbase.

Looker bridges the gap between two areas: data warehousing and business intelligence. Its platform includes a modeling layer where users codify their view of the data in a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool that provides the self-service analytics portion.
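To make the modeling idea concrete, the rough Python analogue below shows what codifying a view of the data looks like: dimensions and measures are declared once, and the SQL behind each query is generated from that declaration. This is an illustrative sketch rather than actual LookML syntax, and the table and column names are invented.

```python
from dataclasses import dataclass, field


# A toy stand-in for a LookML-style "view": dimensions and measures are declared
# once and SQL is generated from the declaration. Table/column names are made up.
@dataclass
class View:
    name: str
    sql_table: str
    dimensions: dict[str, str] = field(default_factory=dict)  # alias -> column expression
    measures: dict[str, str] = field(default_factory=dict)    # alias -> aggregate expression

    def query(self, dims: list[str], meas: list[str]) -> str:
        """Generate the SQL for a query over the requested dimensions and measures."""
        select = [f"{self.dimensions[d]} AS {d}" for d in dims]
        select += [f"{self.measures[m]} AS {m}" for m in meas]
        group_by = ", ".join(str(i + 1) for i in range(len(dims)))
        sql = f"SELECT {', '.join(select)}\nFROM {self.sql_table}"
        return sql + (f"\nGROUP BY {group_by}" if dims else "")


orders = View(
    name="orders",
    sql_table="analytics.orders",                   # hypothetical warehouse table
    dimensions={"order_date": "DATE(created_at)"},
    measures={"total_revenue": "SUM(amount)"},
)

print(orders.query(dims=["order_date"], meas=["total_revenue"]))
# SELECT DATE(created_at) AS order_date, SUM(amount) AS total_revenue
# FROM analytics.orders
# GROUP BY 1
```

In Looker itself this kind of declaration lives in LookML files, and the generated SQL runs directly against the customer's connected database.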
Primarily, Looker will help Google Cloud become a complete analytics solution, helping customers go from ingesting data to visualizing results and integrating data and insights into their daily workflows. Looker + Google Cloud will be used for:

- Connecting, analyzing and visualizing data across Google Cloud, Azure, AWS, on-premise databases or ISV SaaS applications
- Operationalizing BI for everyone with powerful data modeling
- Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
- Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker

Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we’ve worked with. One of the great things about this acquisition is that the two companies have known each other for a long time, we share very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and Looker's acquisition will hopefully make its service more attractive to corporations. Looker's CEO Frank Bien described the partnership as a chance to gain the scale of the Google Cloud platform. "What we’re really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is Google's timing and the all-cash payment for this buyout. The FCC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an investigation into Google; the probe would reportedly examine whether the tech giant broke antitrust law in the operation of its online and advertising businesses. According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We’re in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the acquisition has been mixed. While some are happy:

https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241

Others remain dubious: "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?'," said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or whether BigQuery will at least get the newest features first.

https://twitter.com/DanVesset/status/1136672725060243457

Then there is the issue of whether Google will limit which clouds Looker can run on, although the company said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to give customers choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said it would be integrated with Google Assistant, and the decision was reversed only after a massive public backlash. Looker could turn out to be one such acquisition, eventually merging with Google Analytics, Google's proprietary web analytics service.

The deal is expected to close later this year, subject to regulatory approval.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google and Binomial come together to open-source Basis Universal Texture Format
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language

article-image-opensuse-may-go-independent-from-suse-reports-lwn-net
Vincy Davis
03 Jun 2019
3 min read
Save for later

OpenSUSE may go independent from SUSE, reports LWN.net

Lately, the relationship between SUSE and the openSUSE community has been under discussion. Different options are being considered, among which the possibility of turning openSUSE into an entirely independent foundation is gaining momentum. This would give openSUSE greater autonomy and control over its own future and operations. Though openSUSE board chair Richard Brown and SUSE leadership have publicly reiterated that SUSE remains committed to openSUSE, there has been a lot of concern over openSUSE's ability to operate in a sustainable way without being entirely beholden to SUSE.

The idea of an independent openSUSE foundation has popped up many times in the past. Former openSUSE board member Peter Linnell says, "Every time, SUSE has changed ownership, this kind of discussion pops up with some mild paranoia IMO, about SUSE dropping or weakening support for openSUSE". He also adds, "Moreover, I know SUSE's leadership cares a lot about having a healthy independent openSUSE community. They see it as important strategically and the benefits go both ways."

On the contrary, openSUSE board member Simon Lees says, "it is almost certain that at some point in the future SUSE will be sold again or publicly listed, and given the current good working relationship between SUSE and openSUSE it is likely easier to have such discussions now vs in the future should someone buy SUSE and install new management that doesn't value openSUSE in the same way the current management does."

In an interview with LWN, Brown described the conversation between SUSE and the broader community about the possibility of an independent foundation as frank, ongoing, and healthy. He also mentioned that everything from a fully independent openSUSE foundation to a tweaking of the current relationship that provides more legal autonomy for openSUSE is being considered. There is also the possibility of some form of organization run under the auspices of the Linux Foundation.

Issues faced by openSUSE

Brown has said, "openSUSE has multiple stakeholders, but it currently doesn't have a separate legal entity of its own, which makes some of the practicalities of having multiple sponsors rather complicated". Under the current arrangement, it is difficult for openSUSE to directly handle financial contributions, and sponsorship and the ability to raise funding have become a prerequisite for openSUSE's survival. Brown comments, "openSUSE is in continual need of investment in terms of both hardware and manpower to 'keep the lights on' with its current infrastructure".

Another concern has been the tricky collaboration between the community and the company across all SUSE products. In particular, Brown has pointed to issues with openSUSE Kubic and the SUSE Container-as-a-Service Platform. With a more distinctly separate openSUSE, the implication and the hope, according to LWN, is that the project would have increased autonomy over its governance and its interaction with the wider community.

Though different models for openSUSE's governance are under consideration, Brown has said, "The current relationship between SUSE and openSUSE is unique and special, and I see these discussions as enhancing that, and not necessarily following anyone else's direction". No hard deadline has been declared.

For more details, head over to the LWN article.
SUSE is now an independent company after being acquired by EQT for $2.5 billion
389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings
Salesforce open sources ‘Lightning Web Components framework’