
Tech News - Cloud & Networking

376 Articles

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Sugandha Lahoti
03 Jun 2019
4 min read
Update: This article has been updated to include Google's response to Sunday's service disruption.

Over the weekend, Google Cloud suffered a major outage, taking down a number of Google services including YouTube, G Suite, and Gmail. It also affected services dependent on Google, such as Snapchat, Nest, Discord, Shopify, and more. The problem was first reported by East Coast users in the U.S. around 3 PM ET / 12 PM PT, and the company resolved it after more than four hours. According to Downdetector, users in the UK, France, Austria, Spain, and Brazil also reported suffering from the outage.

https://twitter.com/DrKMhana/status/1135291239388143617

In a statement posted to its Google Cloud Platform status page, the company said it was experiencing a multi-region issue with Google Compute Engine. "We are experiencing high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, GSuite, and YouTube. Users may see a slow performance or intermittent errors. We believe we have identified the root cause of the congestion and expect to return to normal service shortly," the company said in a statement.

The issue was resolved four hours after Google acknowledged the downtime. "The network congestion issue in the eastern USA, affecting Google Cloud, G Suite, and YouTube has been resolved for all affected users as of 4:00 pm US/Pacific," the company said in a statement. "We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a detailed report of this incident once we have completed our internal investigation. This detailed report will contain information regarding SLA credits."

This outage resulted in some major suffering. Not only did it impact some of the most used apps on the internet (YouTube and Snapchat), people also reported being unable to use their Nest-controlled devices, for example to turn on their AC or open their "smart" locks to let people into the house.

https://twitter.com/davidiach/status/1135302533151436800

Even Shopify experienced problems because of the Google outage, which prevented some stores (both brick-and-mortar and online) from processing credit card payments for hours.

https://twitter.com/LarryWeru/status/1135322080512270337

The dependency of the world's most popular applications on a single backend in the hands of one company seems a bit startling, as does the number of companies relying on a single hosting service. At the very least, companies should think of setting up a contingency plan in case the services go down again.

https://twitter.com/zeynep/status/1135308911643451392
https://twitter.com/SeverinAlexB/status/1135286351962812416

Another discussion that popped up was whether Google Cloud randomly going down is proof that cloud-based gaming isn't ready for mass audiences yet. At this year's Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games. It will be launching later this year in select regions including the U.S., Canada, the U.K., and Europe.

https://twitter.com/BrokenGamezHDR/status/1135318797068488712
https://twitter.com/soul_societyy/status/1135294007515500549

On Monday, Google released an apologetic update on the outage, outlining the incident, its detection, and the company's response. In essence, the root cause of Sunday's disruption was a configuration change that was intended for a small number of servers in a single region. The configuration was incorrectly applied to a larger number of servers across several neighboring regions, and it caused those regions to stop using more than half of their available network capacity. The network traffic to/from those regions then tried to fit into the remaining network capacity, but it did not. The network became congested, and Google's networking systems correctly triaged the traffic overload and dropped larger, less latency-sensitive traffic in order to preserve smaller latency-sensitive traffic flows, much as urgent packages may be couriered by bicycle through even the worst traffic jam. Google's engineering teams are now conducting a thorough post-mortem to understand all the contributing factors behind both the network capacity loss and the slow restoration.

Facebook family of apps hits 14 hours outage, longest in its history
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos.


Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which enables customers to easily migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers can use Apache Kafka for capturing and analyzing real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams. Many customers choose to self-manage their Apache Kafka clusters and end up spending considerable time and money securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK combines the attributes of Apache Kafka with the availability, security, and scalability of AWS.

Customers can now create Apache Kafka clusters designed for high availability that span multiple Availability Zones (AZs) with a few clicks in the AWS Management Console, or programmatically, as sketched below. Amazon MSK also monitors server health and automatically replaces servers when they fail, and customers can easily scale out cluster storage to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation, AWS Identity and Access Management (IAM), and more. It allows customers to continue running applications built on Apache Kafka and to use Apache Kafka-compatible tools and frameworks.

Rajesh Sheth, General Manager of Amazon MSK at AWS, wrote to us in an email, "Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data." He further added, "Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses."

Amazon MSK is currently available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions, and will expand to additional AWS Regions in the next year.
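For a sense of what cluster creation looks like outside the console, here is a minimal sketch using boto3, the AWS SDK for Python. The cluster name, Kafka version, subnet IDs, and security group below are hypothetical placeholders, not values from the announcement:

    import boto3

    kafka = boto3.client("kafka", region_name="us-east-1")

    response = kafka.create_cluster(
        ClusterName="demo-msk-cluster",   # hypothetical name
        KafkaVersion="2.1.0",             # one of the versions MSK supported at GA
        NumberOfBrokerNodes=3,            # one broker per Availability Zone
        BrokerNodeGroupInfo={
            "InstanceType": "kafka.m5.large",
            "ClientSubnets": [            # placeholder subnet IDs, one per AZ
                "subnet-0123456789abcdef0",
                "subnet-0123456789abcdef1",
                "subnet-0123456789abcdef2",
            ],
            "SecurityGroups": ["sg-0123456789abcdef0"],  # placeholder
        },
    )
    # The returned ARN is used later to describe, update, or delete the cluster.
    print(response["ClusterArn"])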
Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at annual shareholder meeting
Amazon to roll out automated machines for boxing up orders: thousands of workers' jobs at stake
Amazon resists public pressure to re-assess its facial recognition business; "failed to act responsibly", says ACLU


Unity Editor will now officially support Linux

Vincy Davis
31 May 2019
2 min read
Yesterday, Martin Best, Senior Technical Product Manager at Unity, briefly announced that the Unity Editor will now officially support Linux. Currently, the Editor is available only in 'preview' for Ubuntu and CentOS, but Best has stated that it will be fully supported by Unity 2019.3. He also notes that before opening projects via the Linux Editor, users should make sure their third-party tools support it as well.

Unity has been offering an unofficial, experimental Unity Editor for Linux since 2015. Unity released the 2019.1 version in April this year, in which it was mentioned that the Unity Editor for Linux had moved from experimental status into preview mode. Now the support has been made official. Best mentions in the blog post that the "growing number of developers using the experimental version, combined with the increasing demand of Unity users in the Film and Automotive, Transportation, and Manufacturing (ATM) industries means that we now plan to officially support the Unity Editor for Linux."

The Unity Editor for Linux will be accessible to all Personal (free), Plus, and Pro license users, starting with Unity 2019.1. It will be officially supported on the following configurations:
- Ubuntu 16.04, 18.04
- CentOS 7
- x86-64 architecture
- Gnome desktop environment running on top of the X11 windowing system
- Nvidia official proprietary graphics driver and AMD Mesa graphics driver
- Desktop form factors, running on device/hardware without emulation or compatibility layer

Users are quite happy that the Unity Editor will now officially support Linux. A user on Reddit comments, "Better late than never." Another user added, "Great news! I just used the editor recently. The older versions were quite buggy but the latest release feels totally on par with Windows. Excellent work Unity Linux team!"

https://twitter.com/FourthWoods/status/1134196011235237888
https://twitter.com/limatangoalpha/status/1134159970973470720

For the latest builds, check out the Unity Hub. For giving feedback on the Unity Editor for Linux, head over to the Unity Forum page.

Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players
Unity has launched the 'Obstacle Tower Challenge' to test AI game players
Unity updates its TOS, developers can now use any third-party service that integrates into Unity


SpaceX shares new information on Starlink after the successful launch of 60 satellites

Sugandha Lahoti
27 May 2019
3 min read
After the successful launch of Elon Musk's mammoth space mission Starlink last week, the company has unveiled a brand new website with more details on the Starlink commercial satellite internet service.

Starlink sent 60 communications satellites to orbit, which will eventually be part of a single constellation providing high-speed internet to the globe. SpaceX plans to deploy nearly 12,000 satellites in three orbital shells by the mid-2020s, initially placing approximately 1,600 in a 550-kilometer (340 mi) altitude area. The new website gives a few glimpses of what Starlink's plan looks like, including a CG representation of how the satellites will work. The satellites will move along their orbits simultaneously, providing internet in a given area. The company has also revealed more intricacies about the satellites.

Flat panel antennas
In each satellite, the signal is transmitted and received by four high-throughput phased array radio antennas. These antennas have a flat panel design and can transmit in multiple directions and frequencies.

Ion propulsion system and solar array
Each satellite carries a krypton ion propulsion system. These systems enable satellites to raise their orbit, maneuver in space, and deorbit. There is also a single solar array, which simplifies the system. Ion thrusters provide a more fuel-efficient form of propulsion than conventional liquid propellants. Krypton is less expensive than xenon but offers lower thrust efficiency.

Star Tracker and autonomous collision avoidance system
Star Tracker is SpaceX's built-in sensor system that tells each satellite its orientation for precise broadband throughput placement and tracking. The collision avoidance system uses inputs from the U.S. Department of Defense debris tracking system, reducing human error with a more reliable approach. Through this data it can perform maneuvers to avoid collision with space debris and other spacecraft. Per TechCrunch, which interviewed a SpaceX representative, "the debris tracker hooks into the Air Force's Combined Space Operations Center, where trajectories of all known space debris are tracked. These trajectories are checked against those of the satellites, and if a possible collision is detected the course changes are made, well ahead of time." Source: TechCrunch

More information on Starlink (such as the cost of the project, what ground stations look like, etc.) is still unknown. Until then, keep an eye on Starlink's website and this space for new updates.

SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to "update satellite software"
Jeff Bezos unveils space mission: Blue Origin's Lunar lander to colonize the moon
Elon Musk reveals big plans with Neuralink


G Suite administrators' passwords were unhashed for 14 years, notifies Google

Vincy Davis
22 May 2019
3 min read
Today, Google notified its G Suite administrators that some of their passwords had been stored in an encrypted internal system unhashed, i.e., in plaintext, since 2005. Google states that the error has been fixed and that the issue had no effect on free consumer Google accounts.

In 2005, Google provided G Suite domain administrators with tools to set and recover passwords. The tool enabled administrators to upload or manually set user passwords for their company's users, intended to help onboard new users with their account information on their first day of work and to aid account recovery. However, it led to the admin console storing a copy of the unhashed password. Google has made it clear that these unhashed passwords were stored in a secure, encrypted infrastructure.

Google is now working with enterprise administrators to ensure that users reset their passwords. It is also conducting a thorough investigation and has assured users that no evidence of improper access or misuse of the affected passwords has been identified so far. Google has around 5 million users on G Suite. Out of an abundance of caution, the Google team will also reset the accounts of those who have not done it themselves.

Additionally, Google has admitted to another mishap. In January 2019, while troubleshooting new G Suite customer sign-up flows, it discovered an accidentally stored subset of unhashed passwords. Google claims these unhashed passwords were stored for only 14 days, also in a secure, encrypted infrastructure. This issue has been fixed as well, and no evidence of improper access or misuse of the affected passwords has been found. In the blogpost, Suzanne Frey, VP of Engineering and Cloud Trust, gives a detailed account of how Google stores passwords for consumers and G Suite enterprise customers.

Google is the latest company to have admitted storing sensitive data in plaintext. Two months ago, Facebook admitted to having stored the passwords of hundreds of millions of its users in plain text, including the passwords of Facebook Lite, Facebook, and Instagram users.

Read More: Facebook accepts exposing millions of user passwords in a plain text to its employees after security researcher publishes findings

Last year, Twitter and GitHub also admitted to similar security lapses.

https://twitter.com/TwitterSupport/status/992132808192634881
https://twitter.com/BleepinComputer/status/991443066992103426

Users are shocked that it took Google 14 long years to identify this error. Others wonder, if even a giant company like Google cannot secure its passwords in 2019, what can be expected from other companies?

https://twitter.com/HackingDave/status/1131067167728984064

A user on Hacker News comments, "Google operates what is considered, by an overwhelming majority of expert opinion, one of the 3 best security teams in the industry, likely exceeding in so many ways the elite of some major world governments. And they can't reliably promise, at least not in 2019, never to accidentally durably log passwords. If they can't, who else can? What are we to do with this new data point? The issue here is meaningful, and it's useful to have a reminder that accidentally retaining plaintext passwords is a hazard of building customer identity features. But I think it's at least equally useful to get the level set on what engineering at scale can reasonably promise today."

To know more about this news in detail, head over to Google's official blog.
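For context on what "hashed" means here: a service should store only a salted, one-way derivation of each password, never the password itself. Below is a minimal sketch using Python's standard library; the cost parameters are illustrative, not a production recommendation.

    import hashlib
    import hmac
    import os

    def hash_password(password):
        # Derive a salted, one-way hash; store (salt, digest), never the plaintext.
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password, salt, digest):
        # Recompute the derivation and compare in constant time.
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("wrong guess", salt, digest)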
Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
As US-China tech cold war escalates, Google revokes Huawei's Android support, allows only those covered under open source licensing
Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model


Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Amrata Joshi
22 May 2019
2 min read
Yesterday, the team at Microsoft launched the Service Mesh Interface (SMI), which defines a set of common and portable APIs. It is an open project started in partnership with Microsoft, HashiCorp, Linkerd, Solo.io, Kinvolk, and Weaveworks, with support from Aspen Mesh, Docker, Canonical, Pivotal, Rancher, Red Hat, and VMware. SMI provides developers with interoperability across different service mesh technologies including Linkerd, Istio, and Consul Connect.

The need for service mesh technology
Previously, not much attention was given to network architecture; organizations believed in making applications smarter instead. But now, while dealing with microservices, containers, and orchestration systems like Kubernetes, engineering teams face issues with securing, managing, and monitoring a large number of network endpoints. Service mesh technology offers a solution to this problem by making the network smarter. It pushes this logic into the network, controlled by a separate set of management APIs, and frees engineers from having to teach every service to encrypt sessions, authorize clients, and emit reasonable telemetry.

Key features of the Service Mesh Interface (SMI)
- It provides a standard interface for meshes on Kubernetes.
- It comes with a basic feature set for common mesh use cases.
- It provides flexibility to support new mesh capabilities.
- It applies policies like identity and transport encryption across services.
- It captures key metrics like error rate and latency between services.
- It shifts and weights traffic between different services (see the sketch below).

William Morgan, Linkerd maintainer, said, "SMI is a big step forward for Linkerd's goal of democratizing the service mesh, and we're excited to make Linkerd's simplicity and performance available to even more Kubernetes users."

Idit Levine, Founder and CEO of Solo.io, said, "The standardization of interfaces are crucial to ensuring a great end user experience across technologies and for ecosystem collaboration. With that spirit, we are excited to work with Microsoft and others on the SMI specification and have already delivered the first reference implementations with the Service Mesh Hub and SuperGloo project."

To know more about this news, check out Microsoft's blog post.
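As a concrete illustration of the traffic-shifting API, here is a hedged sketch of an SMI TrafficSplit resource created through the official Kubernetes Python client. The service names, namespace, and weights are hypothetical, and the group/version assume the v1alpha1 spec as initially published:

    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig for cluster access

    # A TrafficSplit that sends 90% of traffic to v1 and 10% to a v2 canary.
    traffic_split = {
        "apiVersion": "split.smi-spec.io/v1alpha1",  # per the initial SMI spec
        "kind": "TrafficSplit",
        "metadata": {"name": "checkout-rollout", "namespace": "default"},
        "spec": {
            "service": "checkout",  # root service that clients address
            "backends": [
                {"service": "checkout-v1", "weight": "900m"},
                {"service": "checkout-v2", "weight": "100m"},
            ],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="split.smi-spec.io",
        version="v1alpha1",
        namespace="default",
        plural="trafficsplits",
        body=traffic_split,
    )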
Microsoft officially releases Microsoft Edge canary builds for macOS users
Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership
Microsoft releases security updates: a "wormable" threat similar to WannaCry ransomware discovered

Introducing DataStax Constellation: A cloud platform for rapid development of Apache Cassandra-based apps

Bhagyashree R
21 May 2019
3 min read
On the first day of Accelerate 2019, DataStax unveiled DataStax Constellation, a modern cloud platform specifically designed for Apache Cassandra. DataStax is a leading provider of the always-on, active-everywhere distributed hybrid cloud database built on Apache Cassandra.

https://twitter.com/DataStax/status/1130803273647230976

DataStax Accelerate 2019 is a three-day event (21-23 May) happening in Maryland, US. The agenda includes 70+ technical sessions, networking with experts and people from leading companies like IBM, Walgreens, and T-Mobile, and new product announcements.

Sharing the vision behind DataStax Constellation, Billy Bosworth, CEO of DataStax, said, "With Constellation, we are making a major commitment to being the leading cloud database company and putting cloud development at the top of our priority list. From edge to hybrid to multi-cloud, we are providing developers with a cloud platform that includes the complete set of tools they need to build game-changing applications that spark transformational business change and let them do what they do best."

What is DataStax Constellation?
DataStax Constellation is a modern cloud platform that provides smart services for easy and rapid development and deployment of Cassandra-based applications. It comes with an integrated web console that simplifies the use and management of Cassandra. Constellation also provides an interactive developer tool for CQL (Cassandra Query Language) named DataStax Studio, which makes it easy for developers to collaborate by keeping track of code, query results, and visualizations in self-documenting notebooks.

The Constellation platform initially launches with two cloud services, DataStax Apache Cassandra as a Service and DataStax Insights.

DataStax Apache Cassandra as a Service
DataStax Apache Cassandra as a Service enables you to easily develop and deploy Apache Cassandra applications in the cloud. Here are some of the advantages and features it comes with:
- Ensures high availability of applications: It assures uptime and integrity with multiple data replicas. Users are only charged when the database is in use, which significantly reduces operational overhead.
- Reduces administrative overhead: It makes applications capable of self-healing with its advanced optimization and remediation mechanisms.
- Better performance than open-source Cassandra: It provides up to three times better performance than open source Apache Cassandra at any scale.

DataStax Insights
DataStax Insights is a performance management and monitoring tool for DataStax Constellation and DataStax Enterprise. Here are some of the features it comes with:
- Centralized and scalable monitoring: It provides centralized and scalable monitoring across all cloud and on-premise deployments.
- Simplified administration: It provides an at-a-glance health index that simplifies administration via a single view of all clusters.
- Automated performance tuning: Its AI-powered analysis and recommendations enable automated performance tuning.

Sharing his future plans regarding Constellation, Bosworth said, "Constellation is for all developers seeking easy and obvious application deployment in any cloud. And the great thing is that we are planning for it to be available on all three of the major cloud providers: AWS, Google, and Microsoft." DataStax plans to make Constellation, Insights, and Cassandra as a Service available on all three cloud providers in Q4 of 2019.
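Since Constellation's hosted clusters still speak standard CQL, application code written against the existing DataStax driver should carry over. A minimal sketch with the DataStax Python driver (cassandra-driver), using a hypothetical contact point, keyspace, and table:

    from uuid import uuid4

    from cassandra.cluster import Cluster

    # Connect to the cluster; replace the contact point with your own nodes.
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)
    session.set_keyspace("demo")
    session.execute("CREATE TABLE IF NOT EXISTS users (id uuid PRIMARY KEY, name text)")

    # The driver uses %s placeholders for bound parameters.
    session.execute("INSERT INTO users (id, name) VALUES (%s, %s)", (uuid4(), "Ada"))
    for row in session.execute("SELECT id, name FROM users"):
        print(row.id, row.name)

    cluster.shutdown()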
To know more about DataStax Constellation, visit its official website.

Instaclustr releases three open source projects for Apache Cassandra database users
ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features
cstar: Spotify's Cassandra orchestration tool is now open source!


Core security features of Elastic Stack are now free!

Amrata Joshi
21 May 2019
3 min read
Today, the team at Elastic announced that the core security features of the Elastic Stack are now free. They also announced the release of Elastic Stack versions 6.8.0 and 7.1.0, and the alpha release of Elastic Cloud on Kubernetes.

With the free core security features, users can now define roles that protect index- and cluster-level access, encrypt network traffic, create and manage users, and fully secure Kibana with Spaces. The team opened the code for these features last year and has now made them free, which means users can run a fully secured cluster at no cost.

https://twitter.com/heipei/status/1130573619896225792

Release of Elastic Stack versions 6.8.0 and 7.1.0
Versions 6.8.0 and 7.1.0 of the Elastic Stack contain no new features; they simply make the core security features free in the default distribution of the Elastic Stack. The core security features include TLS for encrypted communications, the file and native realms for creating and managing users, and role-based access control for controlling user access to cluster APIs and indexes. The features also allow multi-tenancy for Kibana with security for Kibana Spaces. Previously, these core security features required a paid Gold subscription; now they are free as part of the Basic tier.

Alpha release of Elastic Cloud on Kubernetes
The team also announced the alpha release of Elastic Cloud on Kubernetes (ECK), the official Kubernetes Operator for Elasticsearch and Kibana. It is a new product based on the Kubernetes Operator pattern that lets users provision, manage, and operate Elasticsearch clusters on Kubernetes. It is designed to automate and simplify how Elasticsearch is deployed and operated in Kubernetes, provides an official way of orchestrating Elasticsearch on Kubernetes, and offers a SaaS-like experience for Elastic products and solutions on Kubernetes.

The team has moved the core security features into the default distribution of the Elastic Stack to ensure that all clusters launched and managed by ECK are secured by default at creation time. Clusters deployed via ECK include free features and tier capabilities such as Kibana Spaces, frozen indices for dense storage, Canvas, Elastic Maps, and more. Users can also monitor Kubernetes logs and infrastructure with the help of the Elastic Logs and Elastic Infrastructure apps.

Some users think that security shouldn't be an added feature but built in. A user commented on Hacker News, "Security shouldn't be treated as a bonus feature." Another user commented, "Security should almost always be a baseline requirement before something goes up for public sale." Others are happy about the news. A user commented, "I know it's hard to make a buck with an open source business model but deciding to charge more for security-related features is always so frustrating to me. It leads to a culture of insecure deployments in environments when the business is trying to save money. Differentiate on storage or number of cores or something, anything but auth/security. I'm glad they've finally reversed this."

To know more about this news, check out the blog post by Elastic.
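With security in the Basic tier, a typical client connection authenticates as a native-realm user over TLS. A minimal sketch with the elasticsearch-py client for the 6.x/7.x series; the host, credentials, and CA path are hypothetical placeholders:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(
        ["https://localhost:9200"],
        http_auth=("elastic", "changeme"),  # a user from the native realm
        use_ssl=True,
        verify_certs=True,
        ca_certs="/path/to/ca.crt",  # CA that signed the cluster's certificates
    )

    # Requires a role granting cluster monitoring privileges.
    print(es.cluster.health())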
Elasticsearch 7.0 rc1 releases with new allocation and security features
Elastic Stack 6.7 releases with Elastic Maps, Elastic Update and much more!
AWS announces Open Distro for Elasticsearch licensed under Apache 2.0


SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to “update satellite software”

Bhagyashree R
17 May 2019
4 min read
Update: On 23rd May, SpaceX successfully launched 60 satellites of the company's Starlink constellation to orbit from Cape Canaveral. "This is one of the hardest engineering projects I've ever seen done, and it's been executed really well," said Elon Musk, SpaceX's founder and CEO, during a press briefing last week. "There is a lot of new technology here, and it's possible that some of these satellites may not work, and in fact a small possibility that all the satellites will not work. We don't want to count anything until it's hatched, but these are, I think, a great design and we've done everything we can to maximize the probability of success," he said.

On Wednesday night, SpaceX was all set to send a Falcon 9 rocket into space carrying the very first 60 satellites for its new Starlink commercial satellite internet service. And while everyone was eagerly waiting for the launch webcast, heavy winds ruined the show. SpaceX rescheduled the launch for 10:30 pm EDT from Florida's Cape Canaveral Air Force Station, but then canceled the launch yet again, this time citing software issues. The launch is now delayed by about a week.

https://twitter.com/SpaceX/status/1129181397262843906

Elon Musk's plans for SpaceX
This launch of 60 satellites, weighing 227 kg each, is the first step in Elon Musk's plan for a huge Starlink constellation. He eventually aims to build up a mega-constellation of 12,000 satellites. If everything goes well, Falcon 9 will make a landing on the "Of Course I Still Love You" drone-ship in the Atlantic Ocean. One hour and 20 minutes after the launch, the second stage will begin, when the Starlink satellites start self-deploying.

On Wednesday, Musk, on a teleconference with reporters, revealed a bunch of details about his satellite internet service. Explaining the release mechanism behind the satellites, he said that the satellites do not have their own release mechanisms. Instead, the Falcon rocket's upper stage will begin a very slow rotation, and each one will be released in turn with a different amount of rotational inertia. "It will almost seem like spreading a deck of cards on a table," he added. Once the deployment happens, the satellites will power up their ion drives and open their solar panels. They will then move to an altitude of 550 km under their own power.

This is a new approach to delivering commercial satellite internet. Other satellite internet services such as Viasat depend on a few big satellites in geostationary orbit, over 22,000 miles (35,000 kilometers) above Earth, as opposed to 550 km in this case. Conventional satellite internet services can suffer from high levels of latency because the signals have to travel a huge distance between the satellite and Earth. Starlink aims to bring the satellites into lower orbit to minimize that distance, resulting in less lag time. The catch is that because these satellites are closer to Earth, they cannot cover as large a surface area, and hence a much greater number of them is required to cover the whole planet.

Though his plans look promising, Musk does not claim that everything will go well. He said in the teleconference, "This is very hard. There is a lot of new technology, so it's possible that some of these satellites may not work. There's a small possibility that all of these satellites will not work." He further added that six more launches of a similar payload will be required before the service even begins to offer minor coverage.

You will be able to see the launch webcast here on the official website: https://www.youtube.com/watch?time_continue=8&v=rT366GiQkP0

Jeff Bezos unveils space mission: Blue Origin's Lunar lander to colonize the moon
Elon Musk reveals big plans with Neuralink
DeepMind, Elon Musk, and others pledge not to build lethal AI


GKE Sandbox: A gVisor-based feature to increase security and isolation in containers

Vincy Davis
17 May 2019
4 min read
During Google Cloud Next '19, Google Cloud announced the beta version of GKE Sandbox, a new feature in Google Kubernetes Engine (GKE). Yesterday, Yoshi Tamura (Product Manager of Google Kubernetes Engine and gVisor) and Adin Scannell (Senior Staff Software Engineer of gVisor) explained GKE Sandbox in brief on Google Cloud's official blog.

GKE Sandbox increases the security and isolation of containers by adding an extra layer between the containers and the host OS. At general availability, GKE Sandbox will be available in the upcoming GKE Advanced. The feature will help in building demanding production applications on top of the managed Kubernetes service.

GKE Sandbox uses gVisor to abstract away the internals, making sandboxing an easy-to-use service. While creating a pod, the user can simply choose GKE Sandbox and continue to interact with containers as usual, with no new controls or mental model to learn (see the sketch below). By limiting potential attacks, GKE Sandbox helps teams running multi-tenant clusters, such as SaaS providers, which often execute unknown or untrusted code. This provides more secure multi-tenancy in GKE.

gVisor is an open-source container sandbox runtime that was released last year. It was created to defend against a host compromise when running arbitrary, untrusted code, while still integrating with container-based infrastructure. gVisor is used in many Google Cloud Platform (GCP) services, like the App Engine standard environment, Cloud Functions, Cloud ML Engine, and most recently Cloud Run. Some features of gVisor include:
- Provides an independent operating system kernel to each container. Applications interact with the virtualized environment provided by gVisor's kernel rather than the host kernel.
- Manages and places restrictions on file and network operations.
- Ensures there are two isolation layers between the containerized application and the host OS. Due to the reduced and restricted interaction of an application with the host kernel, attackers have a smaller attack surface.

An experience shared on the official Google blog post mentions how data refinery creator Descartes Labs has applied machine intelligence to massive data sets. Tim Kelton, Co-Founder and Head of SRE, Security, and Cloud Operations at Descartes Labs, said, "As a multi-tenant SaaS provider, we still wanted to leverage Kubernetes scheduling to achieve cost optimizations, but build additional security layers on top of users' individual workloads. GKE Sandbox provides an additional layer of isolation that is quick to deploy, scales, and performs well on the ML workloads we execute for our users."

Applications suitable for GKE Sandbox
GKE Sandbox is well-suited to running compute- and memory-bound applications, and so works with a wide variety of workloads such as:
- Microservices and functions: GKE Sandbox enables additional defense in depth while preserving low spin-up times and high service density.
- Data processing: GKE Sandbox adds an overhead of less than 5 percent for streaming disk I/O and compute-bound applications like FFmpeg.
- CPU-based machine learning: Training and executing machine learning models frequently involves large quantities of data and complex workflows that mostly belong to a third party. The CPU overhead of sandboxing compute-bound machine learning tasks is less than 10 percent.
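To make "simply choose GKE Sandbox" concrete, here is a hedged sketch using the official Kubernetes Python client to run a pod on a sandboxed node pool. It assumes the cluster already has a gVisor-enabled node pool and that pods opt in through a RuntimeClass named "gvisor", per the beta documentation; the pod name and image are placeholders.

    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig for cluster access

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="sandboxed-demo"),
        spec=client.V1PodSpec(
            runtime_class_name="gvisor",  # schedules the pod onto sandboxed nodes
            containers=[client.V1Container(name="app", image="nginx:1.16")],
        ),
    )

    # The pod's containers now run against gVisor's kernel, not the host kernel.
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)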
A user on Reddit commented, "This is a really interesting add-on to GKE and I'm glad to see vendors starting to offer a variety of container runtimes on their platforms." The GKE Sandbox feature has received rave reviews on Twitter too.

https://twitter.com/ahmetb/status/1128709028203220992
https://twitter.com/sarki247/status/1128931366803001345

If you want to try GKE Sandbox and learn more details, head over to Google's official feature page.

Google open-sources Sandboxed API, a tool that helps in automating the process of porting existing C and C++ code
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh
Google Cloud Console Incident Resolved!

Adobe warns users of “infringement claims” if they continue using older versions of its Creative Cloud products

Bhagyashree R
15 May 2019
3 min read
On Monday, Adobe told some of its users who subscribe to its Creative Cloud applications that they cannot continue using older versions and may face "infringement claims" from third-party companies if they fail to upgrade to newer versions. The email sent to the users did not mention any reason why they should discontinue using the older versions. However, Adobe did share with AppleInsider that this sudden announcement is because of "ongoing litigation." AppleInsider speculates that the company is referring to the recent lawsuit filed by Dolby Labs against Adobe for not complying with its audit obligations.

In a statement to AppleInsider, Adobe wrote: "Adobe recently discontinued certain older versions of Creative Cloud applications. Customers using those versions have been notified that they are no longer licensed to use them and were provided guidance on how to upgrade to the latest authorized versions. Unfortunately, customers who continue to use or deploy older, unauthorized versions of Creative Cloud may face potential claims of infringement by third parties. We cannot comment on claims of third-party infringement, as it concerns ongoing litigation."

Adobe licenses certain audio processing technologies from Dolby Labs. According to the license, whenever Adobe sells a product that has been developed with Dolby Labs' technology, it is obligated to report the sale to Dolby and pay a royalty. The license also grants Dolby Labs the right to audit Adobe's books and sales of the products containing its licensed technology. When Dolby asked Adobe to share this information, Adobe refused. "Under all of Adobe's license agreements with Dolby, Dolby had broad rights to inspect Adobe's books and records through a third-party audit, in order to verify the accuracy of Adobe's reporting of sales and payment of royalties. When Dolby sought to exercise its right to audit Adobe's books and records to ensure proper reporting and payment, Adobe refused to engage in even basic auditing and information sharing practices; practices that Adobe itself had demanded of its own licensees," the lawsuit reads.

This abrupt announcement has infuriated many users who do not want to update to the latest versions, for valid reasons. Many users wait to upgrade until reported bugs are fixed. Users might also avoid the latest versions because of a few missing features that they need for their projects.

Matt Roszak was the first to report this on Twitter, and after that many others reported the same issue.

https://twitter.com/KupoGames/status/1126905276693667841
https://twitter.com/InspiringWhyNot/status/1128099070424289280

This also triggered a discussion on Hacker News, where a user commented: "I was stunned by this revelation, but then I think back to all the other times Adobe has exhibited similar behavior and it seems like they won't change. It's not like the CC suite is cheap either. To compare, the Office 365 subscription provides so much more value for a better price. For a very high-class replacement of Adobe products, I would recommend Affinity suite of products - they are a buy once, use forever kind. And Affinity Designer (replacement for Illustrator) is incredibly good - even better than Illustrator in a lot of areas. And the price of Designer is less than 2 months of Illustrator fees. Beat that!"

Microsoft, Adobe, and SAP share new details about the Open Data Initiative
Adobe Acquires Allegorithmic, a popular 3D editing and authoring company
Mozilla disables the by default Adobe Flash plugin support in Firefox Nightly 69


After RHEL 8 release, users awaiting the release of CentOS 8

Vincy Davis
10 May 2019
2 min read
The release of Red Hat Enterprise Linux 8 (RHEL 8) this week has everyone waiting for the CentOS 8 rebuild to follow. The release of CentOS 8 will require a major overhaul of the installer, packages, packaging, and build systems so that they can work with the newer OS. CentOS 7 was released in 2014, about a month after RHEL 7.

So far, the CentOS team has its new build system set up and is currently working on the artwork. They still need to work through multiple series of build loops to get all of the CentOS 8.0 packages built in a compatible fashion. There will then be an installer update, followed by one or more release candidates. Only after all of these steps will CentOS 8 finally be available to its users.

The RHEL 8 release has made many users excited for the CentOS 8 build. A user on Reddit commented, "Thank you to the project maintainers; while RedHat does release the source code anyone who's actually compiled from source knows that it's never push-button easy."

Another user added, "Thank you Red Hat! You guys are amazing. The entire world has benefited from your work. I've been a happy Fedora user for many years, and I deeply appreciate how you've made my life better. Thank you for building an amazing set of distros, and thank you for pushing forward many of the huge projects that improve our lives such as Gnome and many more. Thank you for your commitment to open source and for living your values. You are heroes to me."

So far, a release date has not been declared for CentOS 8, but a rough timeline has been shared. To read about the steps needed to make a CentOS rebuild, head over to the CentOS wiki page.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat released RHEL 7.6


NeuVector announces new container risk reports for vulnerability exploits, external attacks, and more based on Red Hat OpenShift integration

Savia Lobo
09 May 2019
3 min read
NeuVector, a container network security firm, yesterday announced new capabilities to help container security teams better assess the security posture of their deployed services in production. NeuVector now delivers an intelligent assessment of the risk of east-west attacks, ingress and egress connections, and damaging vulnerability exploits.

An overall risk score summarizes all available risk factors and provides advice on how to lower the threat of attack, thus improving the score. The service connection risk score shows how likely it is for attackers to move laterally (east-west) to probe containers that are not segmented by the NeuVector firewall rules. The ingress/egress risk score shows the risk of external attacks or outbound connections commonly used for data stealing or connecting to C&C (command and control) servers.

In an email written to us, Gary Duan, CTO of NeuVector, said, "The NeuVector container security solution spans the entire pipeline – from build to ship to run. Because of this, we are able to present an overall analysis of the risk of attack for containers during run-time. But not only can we help assess and reduce risk, we can actually take automated actions such as blocking network attacks, quarantining suspicious containers, and capturing container and network forensics."

With the Red Hat OpenShift integration, individual users can review the risk scores and security posture for the containers within their assigned projects. They are able to see the impact of their improvements to security configurations and protections as they lower risk scores and remove potential vulnerabilities.

Read Also: Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts $10 trillion global revenue by end of 2019, and more!

The one-click RBAC integration requires no additional coding, scripting, or configuration, and adds to other OpenShift integration points for admission control, image streams, OVS networking, and service deployments.

Fei Huang, CEO of NeuVector, said, "We are seeing many business-critical container deployments using Red Hat OpenShift. These customers turn to NeuVector to provide complete run-time protection for in-depth defense – with the combination of container process and file system monitoring, as well as the industry's only true layer-7 container firewall."

To know more about this announcement in detail, visit the official website.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Vincy Davis
09 May 2019
3 min read
The three-day Red Hat Summit 2019 has kicked off at the Boston Convention and Exhibition Center, United States. Yesterday, Red Hat announced a major release, Red Hat OpenShift 4, the next generation of its trusted enterprise Kubernetes platform, re-engineered to address the complex realities of container orchestration in production systems. Red Hat OpenShift 4 simplifies hybrid and multicloud deployments to help IT organizations utilize new applications and help businesses thrive and gain momentum in an ever-increasing set of competitive markets.

Features of Red Hat OpenShift 4

Simplify and automate the cloud, everywhere
Red Hat OpenShift 4 will help automate and operationalize best practices for modern application platforms. It will operate as a unified cloud experience for the hybrid world and enable an automation-first approach, including:
- Self-managing platform for hybrid cloud: This provides a cloud-like experience via automatic software updates and lifecycle management across the hybrid cloud, enabling greater security, auditability, repeatability, ease of management, and user experience.
- Adaptability and heterogeneous support: It will be available in the coming months across major public cloud vendors including Alibaba, Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure, private cloud technologies like OpenStack, virtualization platforms, and bare-metal servers.
- Streamlined full-stack installation: A streamlined full-stack installation, combined with an automated process, makes it easier to get started with enterprise Kubernetes.
- Simplified application deployments and lifecycle management: Red Hat has helped bring stateful and complex applications to Kubernetes with Operators, which enable self-operating application maintenance, scaling, and failover.

Trusted enterprise Kubernetes
The Cloud Native Computing Foundation (CNCF) has certified the Red Hat OpenShift Container Platform in accordance with Kubernetes. It is built on the backbone of the world's leading enterprise Linux platform, backed by the open source expertise, compatible ecosystem, and leadership of Red Hat. It also provides a codebase that helps secure key innovations from upstream communities.

Empowering developers to innovate
OpenShift 4 supports the evolving needs of application development as a consistent platform that optimizes developer productivity with:
- Self-service, automation, and application services that help developers extend their applications through on-demand provisioning of application services.
- Red Hat CodeReady Workspaces, which enables developers to harness the power of containers and Kubernetes while working with the familiar Integrated Development Environment (IDE) tools they use day-to-day.
- OpenShift Service Mesh, which combines the Istio, Jaeger, and Kiali projects as a single capability that encodes communication logic for microservices-based application architectures.
- Knative for building serverless applications, in Developer Preview, which makes Kubernetes an ideal platform for building, deploying, and managing serverless or function-as-a-service (FaaS) workloads.
- KEDA (Kubernetes-based Event-Driven Autoscaling), a collaboration between Microsoft and Red Hat that supports deployment of serverless event-driven containers on Kubernetes, enabling Azure Functions in OpenShift, in Developer Preview. This allows accelerated development of event-driven, serverless functions across hybrid cloud and on-premises with Red Hat OpenShift.
Red Hat mentioned that OpenShift 4 will be available in the coming months. To read more about OpenShift 4, head over to the official press release on Red Hat. To learn about the other major announcements at Red Hat Summit 2019, like the Microsoft collaboration and Red Hat Enterprise Linux 8 (RHEL 8), visit our coverage of the Red Hat Summit 2019 highlights.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend


Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts support for software worth $10 trillion

Savia Lobo
09 May 2019
11 min read
The three-day Red Hat Summit 2019 kicked off yesterday at the Boston Convention and Exhibition Center, United States. Since yesterday, there have been a lot of exciting announcements, including Red Hat's collaboration with Microsoft, which Satya Nadella (Microsoft's CEO) came over to announce in person. Red Hat also announced the release of Red Hat Enterprise Linux 8, an IDC study predicting that software and applications running on Red Hat Enterprise Linux will touch $10 trillion in global business revenues in 2019, and much more. Let us have a look at each of these announcements in brief.

Azure Red Hat OpenShift: A Red Hat and Microsoft collaboration
The latest Red Hat and Microsoft collaboration, Azure Red Hat OpenShift, must be important: Microsoft's CEO himself came across from Seattle to present it. Red Hat had already brought OpenShift to Azure last year; this is the next step in its partnership with Microsoft. The new Azure Red Hat OpenShift combines Red Hat's enterprise Kubernetes platform OpenShift (running on Red Hat Enterprise Linux (RHEL)) with Microsoft's Azure cloud. With Azure Red Hat OpenShift, customers can combine Kubernetes-managed, containerized applications into Azure workflows. In particular, the two companies see this pairing as a road forward for hybrid-cloud computing.

Paul Cormier, President of Products and Technologies at Red Hat, said, "Azure Red Hat OpenShift provides a consistent Kubernetes foundation for enterprises to realize the benefits of this hybrid cloud model. This enables IT leaders to innovate with a platform that offers a common fabric for both app developers and operations."

Some features of Azure Red Hat OpenShift include:
- Fully managed clusters with master, infrastructure, and application nodes managed by Microsoft and Red Hat; plus, no VMs to operate and no patching required.
- Regulatory compliance provided through compliance certifications similar to other Azure services.
- Enhanced flexibility to more freely move applications from on-premise environments to the Azure public cloud via the consistent foundation of OpenShift.
- Greater speed to connect to Azure services from on-premises OpenShift deployments.
- Extended productivity with easier access to Azure public cloud services such as Azure Cosmos DB, Azure Machine Learning, and Azure SQL DB for building the next generation of cloud-native enterprise applications.

According to the official press release, "Microsoft and Red Hat are also collaborating to bring customers containerized solutions with Red Hat Enterprise Linux 8 on Azure, Red Hat Ansible Engine 2.8 and Ansible Certified modules. In addition, the two companies are working to deliver SQL Server 2019 with Red Hat Enterprise Linux 8 support and performance enhancements."

Red Hat Enterprise Linux 8 (RHEL 8) is now generally available
Red Hat Enterprise Linux 8 (RHEL 8) provides a consistent OS across public, private, and hybrid cloud environments. It also provides users with version choice, long life-cycle commitments, a robust ecosystem of certified hardware, software, and cloud partners, and now comes with built-in management and predictive analytics.

Features of RHEL 8

Support for the latest and emerging technologies
RHEL 8 is supported across different architectures and environments so that users have a consistent and stable OS experience. This helps them adapt to emerging tech trends such as machine learning, predictive analytics, Internet of Things (IoT), edge computing, and big data workloads.
This is mainly due to hardware innovations like GPUs, which can assist machine learning workloads. RHEL 8 supports deploying and managing GPU-accelerated NGC containers on Red Hat OpenShift. These AI containers deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet, and others. Also, NVIDIA's DGX-1 and DGX-2 servers are RHEL-certified and are designed to deliver powerful solutions for complex AI challenges.

Introduction of Application Streams
RHEL 8 introduces Application Streams, where fast-moving languages, frameworks, and developer tools are updated frequently without impacting the core resources. This melds faster developer innovation with production stability in a single, enterprise-class operating system.

Abstracts complexities in granular sysadmin tasks with the RHEL web console
RHEL 8 abstracts away many of the deep complexities of granular sysadmin tasks behind the Red Hat Enterprise Linux web console. The console provides an intuitive, consistent graphical interface for managing and monitoring the Red Hat Enterprise Linux system, from the health of virtual machines to overall system performance. To further improve ease of use, RHEL supports in-place upgrades, providing a more streamlined, efficient, and timely path for users to convert Red Hat Enterprise Linux 7 instances to Red Hat Enterprise Linux 8 systems.

Red Hat Enterprise Linux System Roles for managing and configuring Linux in production
The Red Hat Enterprise Linux System Roles automate many of the more complex tasks around managing and configuring Linux in production. Powered by Red Hat Ansible Automation, System Roles are pre-configured Ansible modules that enable ready-made automated workflows for handling common, complex sysadmin tasks. This automation makes it easier for new systems administrators to adopt Linux protocols and helps to eliminate human error as the cause of common configuration issues.

Supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards
To enhance security, RHEL 8 supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards. This provides access to the strongest, latest standards in cryptographic protection, which can be implemented system-wide via a single command, limiting the need for application-specific policies and tuning.

Support for the Red Hat container toolkit
With cloud-native applications and services frequently driving digital transformation, RHEL 8 delivers full support for the Red Hat container toolkit. Based on open standards, the toolkit provides technologies for creating, running, and sharing containerized applications. It helps to streamline container development and eliminates the need for bulky, less secure container daemons.

Other additions to RHEL 8 include:
- It drives added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
- It forms the foundation for Red Hat's entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15.
- Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, is built on RHEL 8 and will be released soon.
- Red Hat Enterprise Linux 8 is also broadly supported as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15, and Red Hat Virtualization 4.3.
Other additions in RHEL 8 include:

- It drives added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
- It forms the foundation for Red Hat's entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15. Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, is built on RHEL 8 and will be released soon.
- Red Hat Enterprise Linux 8 is also broadly supported as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15 and Red Hat Virtualization 4.3.

Red Hat Universal Base Image becomes generally available

Red Hat Universal Base Image, a userspace image derived from Red Hat Enterprise Linux for building Red Hat certified Linux containers, is now generally available. The Red Hat Universal Base Image is available to all developers, with or without a Red Hat Enterprise Linux subscription, providing a more secure and reliable foundation for building enterprise-ready containerized applications. Applications built with the Universal Base Image can run anywhere and gain the benefits of the Red Hat Enterprise Linux life cycle and support from Red Hat when run on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.
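As a minimal sketch of what building on the Universal Base Image looks like (the installed package and image tag are illustrative choices, not details from the announcement):

```bash
# A minimal image built on the freely redistributable UBI 8 base
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi
RUN yum install -y python3 && yum clean all
CMD ["python3", "--version"]
EOF

podman build -t my-ubi-app .
podman run --rm my-ubi-app
```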
Red Hat reveals results of a commissioned IDC study

Yesterday, at its summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: "According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation."

According to IDC's research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:

- Reducing the annual cost of software by 52%
- Reducing the amount of time IT staff spend doing standard IT tasks by 25%
- Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat infrastructure migration solution helped CorpFlex reduce IT infrastructure complexity and costs by 87%

Using Red Hat's infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line.

Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future. Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more.

Diogo Santos, CTO of CorpFlex, said, "With Red Hat Virtualization, we've not only seen cost-saving in terms of licensing per virtual machine but we've also been able to enhance our own team's performance through Red Hat's extensive expertise and training."

To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat's open hybrid cloud technologies to power its 'Fabric' application platform

Fabric is a key component of Deutsche Bank's digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently.

Red Hat Enterprise Linux has served as a core operating platform for the bank for a number of years, supporting a common foundation for workloads both on-premises and in the bank's public cloud environment. For Fabric, the bank continues using Red Hat's cloud-native stack, built on the backbone of the world's leading enterprise Linux platform, with Red Hat OpenShift Container Platform. The bank deployed Red Hat OpenShift Container Platform on Microsoft Azure as part of a security-focused hybrid cloud platform for building, hosting and managing banking applications. By deploying Red Hat OpenShift Container Platform, the industry's most comprehensive enterprise Kubernetes platform, on Microsoft Azure, IT teams can take advantage of massive-scale cloud resources with a standard and consistent PaaS-first model.

According to the press release, "The bank has achieved its objective of increasing its operational efficiency, now running over 40% of its workloads on 5% of its total infrastructure, and has reduced the time it takes to move ideas from proof-of-concept to production from months to weeks."

To know more about this news in detail, head over to the official press release.

Lockheed Martin and Red Hat team up to accelerate upgrades to the U.S. Air Force's F-22 Raptor fighter jets

Red Hat announced that Lockheed Martin is working with it to modernize the application development process used to bring new capabilities to the U.S. Air Force's fleet of F-22 Raptor fighter jets. Lockheed Martin Aeronautics replaced the waterfall development process previously used for the F-22 Raptor with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force. Together, Lockheed Martin and Red Hat created an open architecture based on Red Hat OpenShift Container Platform that has enabled the F-22 team to accelerate application development and delivery.

"The Lockheed Martin F-22 Raptor is one of the world's premier fighter jets, with a combination of stealth, speed, agility, and situational awareness. Lockheed Martin is working with the U.S. Air Force on innovative, agile new ways to deliver the Raptor's critical capabilities to warfighters faster and more affordably," the press release mentions.

Lockheed Martin chose Red Hat Open Innovation Labs to lead it through the agile transformation process, helping it implement an open source architecture onboard the F-22 while disentangling the jet's web of embedded systems to create something more agile and adaptive to the needs of the U.S. Air Force. Red Hat Open Innovation Labs' dual-track approach to digital transformation combined enterprise IT infrastructure modernization with hands-on instruction that helped Lockheed's team adopt agile development methodologies and DevSecOps practices.

During the Open Innovation Labs engagement, a cross-functional team of five developers, two operators, and a product owner worked together to develop a new application for the F-22 on OpenShift. After seeing an early impact with the initial project team, within six months Lockheed Martin had scaled its OpenShift deployment and its use of agile methodologies and DevSecOps practices to a 100-person F-22 development team. During a recent enablement session, the F-22 Raptor scrum team improved its ability to forecast future sprints by 40%, the press release states.

To know more about this news in detail, head over to the official press release on Red Hat.
This story will be updated as the Summit progresses; we will update the post as soon as further announcements are made. Visit the official Red Hat Summit website to know more about the sessions conducted, the list of keynote speakers, and more.

Read next:

- Red Hat rebrands logo after 20 years; drops Shadowman
- Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
- Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend