Tech News - Virtualization

10 Articles

Google introduces E2, flexible, performance-driven and cost-effective VMs for Google Compute Engine

Vincy Davis
12 Dec 2019
3 min read
Yesterday, June Yang, the director of product management at Google, announced a new beta version of the E2 VMs for Google Compute Engine. E2 features dynamic resource management that delivers reliable performance with flexible configurations and a lower total cost of ownership (TCO) than any other VMs in Google Cloud. According to Yang, “E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases, and development environments.” He further adds, “For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost.”

What are the key features offered by E2 VMs?

E2 VMs are built to offer 31% savings compared to N1, which gives them the lowest total cost of ownership of any VM in Google Cloud. The VMs thus deliver sustained performance at a consistently low price point. Unlike comparable options from other cloud providers, E2 VMs can support a high CPU load without complex pricing. E2 VMs can be tailored with up to 16 vCPUs and 128 GB of memory, allocating only the resources the user needs, with the ability to use custom machine types. Custom machine types are ideal for workloads that require more processing power or more memory but don't need all of the upgrades provided by the next machine-type level.

How E2 VMs achieve optimal efficiency

Large, efficient physical servers

E2 VMs automatically take advantage of continual improvements in machines by flexibly scheduling across the zone’s available CPU platforms. With new hardware upgrades, E2 VMs are live migrated to newer and faster hardware, allowing them to automatically take advantage of these new resources.

Intelligent VM placement

For E2 VMs, Borg, Google’s cluster management system, predicts how a newly added VM will perform on a physical server by observing the CPU, RAM, memory bandwidth, and other resource demands of the VMs already running there. Borg then searches across thousands of servers to find the best location to add the VM. These observations ensure that a newly placed VM will be compatible with its neighbors and will not experience interference from them.

Performance-aware live migration

After VMs are placed on a host, their performance is continuously monitored, so that if demand for VMs increases, live migration can be used to transparently shift E2 load to other hosts in the data center.

A new hypervisor CPU scheduler

To meet E2 VMs' performance goals, Google has built a custom CPU scheduler with better latency and co-scheduling behavior than Linux’s default scheduler. The new scheduler yields sub-microsecond average wake-up latencies with fast context switching, which helps keep the overhead of dynamic resource management negligible for nearly all workloads.

https://twitter.com/uhoelzle/status/1204972503921131521

Read the official announcement to learn about the custom VM shapes and predefined configurations offered by E2 VMs. You can also read part 2 of the announcement to learn more about dynamic resource management in E2 VMs.

Why use JVM (Java Virtual Machine) for deep learning
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
EU antitrust regulators are investigating Google’s data collection practices, reports Reuters
Google will not support Cloud Print, its cloud-based printing solution starting 2021
Google Chrome ‘secret’ experiment crashes browsers of thousands of IT admins worldwide
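The custom machine types described above can be requested directly from the gcloud CLI. The sketch below is illustrative rather than official guidance: the instance names and zone are invented, and the `e2-custom-<vCPUs>-<memoryMB>` machine-type string is assumed from Compute Engine's custom machine-type naming convention.

```shell
# Predefined E2 shape (4 vCPUs, 16 GB):
gcloud compute instances create demo-e2-vm \
    --zone=us-central1-a \
    --machine-type=e2-standard-4

# Custom E2 shape (4 vCPUs, 8 GB of memory) -- the
# e2-custom-<vCPUs>-<memoryMB> format is an assumption based on
# Compute Engine's custom machine-type convention:
gcloud compute instances create demo-e2-custom-vm \
    --zone=us-central1-a \
    --machine-type=e2-custom-4-8192
```

Both invocations require an authenticated gcloud session and an active project; consult the announcement for the exact shapes available in beta.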


VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi-cloud platforms and more!

Fatema Patrawala
30 Aug 2019
7 min read
VMware kicked off VMworld 2019 US in San Francisco last week on 25th August and wrapped up yesterday with a series of updates spanning Kubernetes, Azure, security and more. This year’s event theme, “Make Your Mark,” aimed at empowering VMworld 2019 attendees to learn, connect and innovate in the world of IT and business. 20,000 attendees from more than 100 countries descended on San Francisco for VMworld 2019.

VMware CEO Pat Gelsinger took the stage and articulated VMware’s commitment and support for TechSoup, a one-stop IT shop for global nonprofits. Gelsinger also put emphasis on the company's 'any cloud, any application, any device, with intrinsic security' strategy. “VMware is committed to providing software solutions to enable customers to build, run, manage, connect and protect any app, on any cloud and any device,” said Pat Gelsinger, chief executive officer, VMware. “We are passionate about our ability to drive positive global impact across our people, products and the planet.” Let us take a look at the key highlights of the show:

VMworld 2019: CEO's take on shaping tech as a force for good

The opening keynote from Pat Gelsinger had everything one would expect: customer success stories, product announcements and the need for an ethical fix in tech. "As technologists, we can't afford to think of technology as someone else's problem," Gelsinger told attendees, adding, “VMware puts tremendous energy into shaping tech as a force for good.” Gelsinger cited three benefits of technology that ended up opening a Pandora's box. Free apps and services led to severely altered privacy expectations; ubiquitous online communities led to a crisis in misinformation; and the promise of blockchain has led to illicit uses of cryptocurrencies. "Bitcoin today is not okay, but the underlying technology is extremely powerful," said Gelsinger, who has previously gone on record regarding the detrimental environmental impact of crypto.

This prism of engineering for good, alongside good engineering, can be seen in how emerging technologies are being utilised. With edge, AI and 5G, and cloud as the "foundation... we're about to redefine the application experience," as the VMware CEO put it.

Read also: VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision

Gelsinger’s 2018 keynote was built around the theme of tech 'superpowers': cloud, mobile, AI, and edge. This time, more focus was given to how the edge is developing. Whether it is a thin edge, containing a few devices and an SD-WAN connection, a thick edge of a remote data centre with NFV, or something in between, VMware aims to have it all covered. "Telcos will play a bigger role in the cloud universe than ever before," said Gelsinger, referring to the rise of 5G. "The shift from hardware to software [in telco] is a great opportunity for US industry to step in and play a great role in the development of 5G."

VMworld 2019 introduces Tanzu to build, run and manage software on Kubernetes

VMware is moving from virtual machines to containerized applications. On the product side, VMware Tanzu was introduced, a new product portfolio that aims to enable enterprise-class building, running, and management of software on Kubernetes. In Swahili, ’tanzu’ means the growing branch of a tree, and in Japanese, ’tansu’ refers to a modular form of cabinetry. For VMware, Tanzu is their growing portfolio of solutions that help build, run and manage modern apps. Included in this is Project Pacific, a tech preview focused on transforming VMware vSphere into a Kubernetes-native platform. "With Project Pacific, we're bringing the largest infrastructure community, the largest set of operators, the largest set of customers directly to Kubernetes. We will be the leading enabler of Kubernetes," Gelsinger said.

Read also: VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Other product launches included an update to the collaboration program Workspace ONE, including an AI-powered virtual assistant, as well as the launch of CloudHealth Hybrid by VMware. The latter, built on cloud cost management tool CloudHealth, aims to help organisations save costs across an entire multi-cloud landscape and will be available by the end of Q3.

Collaborate, not compete, with the major cloud providers - Google Cloud, AWS & Microsoft Azure

VMware's extended partnership with Google Cloud, announced earlier this month, led the industry to consider the company's positioning amid the hyperscalers. VMware Cloud on AWS continues to gain traction - Gelsinger said Outposts, the hybrid tool announced at re:Invent last year, is being delivered upon - and the company also has partnerships in place with IBM and Alibaba Cloud. Further, VMware on Microsoft Azure is now generally available, with the facility to gradually switch across Azure data centres. By the first quarter of 2020, the plan is to make it available across nine global regions.

Read also: Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

The company's decision not to compete, but to collaborate, with the biggest public clouds has paid off. Gelsinger also admitted that the company may have contributed to some confusion over what hybrid cloud and multi-cloud truly mean, but his explanation was pretty interesting. With organisations increasingly opting for different clouds for different workloads, and changing environments, Gelsinger described a frequent customer pain point for those nearer the start of their journeys: do they migrate their applications or do they modernise? Increasingly, customers want both - the hybrid option. "We believe we have a unique opportunity for both of these," he said. "Moving to the hybrid cloud enables live migration, no downtime, no refactoring... this is the path to deliver cloud migration and cloud modernisation." As far as multi-cloud was concerned, Gelsinger argued: "We believe technologists who master the multi-cloud generation will own it for the next decade."

Collaboration with NVIDIA to accelerate GPU services on AWS

NVIDIA and VMware announced their intent to deliver accelerated GPU services for VMware Cloud on AWS to power modern enterprise applications, including AI, machine learning and data analytics workflows. These services will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications. Through this partnership, VMware Cloud on AWS customers will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances accelerated by NVIDIA T4 GPUs and the new NVIDIA Virtual Compute Server (vComputeServer) software. “From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Jensen Huang, founder and CEO, NVIDIA. “Together with VMware, we’re designing the most advanced GPU infrastructure to foster innovation across the enterprise, from virtualization, to hybrid cloud, to VMware's new Bitfusion data center disaggregation.”

Read also: NVIDIA’s latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Apart from this, Gelsinger made special note of VMware's most recent acquisitions, Pivotal and Carbon Black, and discussed where they fit in the VMware stack.

VMware’s hybrid cloud platform for next-gen hybrid IT

VMware introduced new and expanded cloud offerings to help customers meet the unique needs of traditional and modern applications. VMware empowers IT operators, developers, desktop administrators, and security professionals with the company’s hybrid cloud platform to build, run, and manage workloads on a consistent infrastructure across their data center, public cloud, or edge infrastructure of choice. VMware uniquely enables a consistent hybrid cloud platform spanning all major public clouds - AWS, Azure, Google Cloud, IBM Cloud - and more than 60 VMware Cloud Verified partners worldwide. More than 70 million workloads run on VMware. Of these, 10 million are in the cloud, running in more than 10,000 data centers run by VMware Cloud providers.

Take a look at the full list of VMworld 2019 announcements here.

What’s new in cloud and virtualization this week?

VMware signs definitive agreement to acquire Pivotal Software and Carbon Black
Pivotal open sources kpack, a Kubernetes-native image build service
Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal


Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS's Deep Learning Containers support the TensorFlow and Apache MXNet frameworks, whereas Google’s ML containers don’t support Apache MXNet but come with pre-installed PyTorch, TensorFlow, scikit-learn, and R.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools used for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text. Google Kubernetes Engine clusters are also among the tools, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia’s CUDA, cuDNN, and NCCL.

Docker images work on cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm.

Mike Cheng, software engineer at Google Cloud, said in a blog post, “If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime.” He further added, “Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE).” For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
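To make the local-prototyping workflow above concrete, one of the published images can be pulled and run with plain Docker. The sketch below is an assumption-laden illustration: the image tag (`tf-cpu.1-13`) follows the `gcr.io/deeplearning-platform-release` repository naming convention, the local directory path is invented, and the JupyterLab port (8080) should be verified against the official documentation.

```shell
# Pull and run a TensorFlow 1.13 CPU container locally; the -v mount
# (path is illustrative) persists notebooks outside the container.
docker run -d -p 8080:8080 \
    -v /path/to/local/notebooks:/home \
    gcr.io/deeplearning-platform-release/tf-cpu.1-13

# JupyterLab should then be reachable in a browser:
#   http://localhost:8080
```

The same image can later be deployed unchanged to GKE or AI Platform, which is the consistency benefit Cheng describes.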

Elastic Stack 6.7 releases with Elastic Maps, Elastic Uptime and much more!

Amrata Joshi
27 Mar 2019
3 min read
Yesterday, the team at Elastic released Elastic Stack 6.7, a group of open source products from Elastic designed to help users take data from any type of source and visualize it in real time.

What’s new in Elastic Stack 6.7?

Elastic Maps

Elastic Maps is a new dedicated solution for mapping, querying, and visualizing geospatial data in Kibana. It expands on the existing geospatial visualization options in Kibana with features such as visualization of multiple layers and data sources in the same map. It also includes features like dynamic, data-driven styling on vector layers, mapping of both aggregate and document-level data, and much more. Elastic Maps also embeds the query bar with autocomplete for real-time ad-hoc search.

Elastic Uptime

This release comes with Elastic Uptime, which makes it easy to detect when application services are down or responding slowly. It notifies users about problems well before those services are called by the application.

Cross Cluster Replication (CCR)

Cross Cluster Replication (CCR) is now generally available and covers a variety of use cases, including cross-datacenter and cross-region replication.

Index Lifecycle Management (ILM)

With this release, Index Lifecycle Management (ILM) is now generally available and ready for production use. ILM helps Elasticsearch admins define and automate lifecycle management policies, such as how data is to be managed and moved between hot, warm, cold, and deletion phases as it ages.

Elasticsearch SQL

Elasticsearch SQL lets users interact with and query their Elasticsearch data using SQL. Its functionality includes the JDBC and ODBC clients, which allow third-party tools to connect to Elasticsearch as a backend datastore. With this release, Elasticsearch SQL becomes generally available.

Canvas

Canvas, which helps users showcase and present live data from Elasticsearch with pixel-perfect precision, becomes generally available with this release.

Kibana localization

This release includes Kibana’s first localization, now available in Simplified Chinese. Kibana also introduces a new localization framework that provides support for additional languages.

Functionbeat

Functionbeat is a Beat that deploys as a function in serverless computing frameworks and streams cloud infrastructure logs and metrics into Elasticsearch. Functionbeat is now generally available; it supports the AWS Lambda framework and can stream data from CloudWatch Logs, SQS, and Kinesis.

Upgrade Assistant

The Upgrade Assistant in this release helps users prepare their existing Elastic Stack environment for the upgrade to 7.0. It includes both APIs and UIs and works as an important cluster checkup tool to help plan the upgrade, identifying things like deprecation warnings to enable a smoother upgrade experience.

To know more about this release, check out Elastic’s blog post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Core CPython developer unveils a new project that can analyze his phone’s ‘silent connections’
How to handle backup and recovery with PostgreSQL 11 [Tutorial]
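As a minimal sketch of the ILM feature described above, a lifecycle policy can be registered over the REST API. The policy name, rollover size, and retention age below are invented for illustration, and the `_ilm/policy` endpoint is assumed from the 6.7-era API; check the Elastic documentation for your exact version.

```shell
# Register a hypothetical ILM policy: roll the hot index over at
# 50 GB, then delete indices 30 days after rollover.
curl -X PUT "localhost:9200/_ilm/policy/demo_logs_policy" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": { "rollover": { "max_size": "50gb" } }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```

An index template would then reference `demo_logs_policy` so that newly created indices pick up the lifecycle automatically.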


Windows Sandbox, an environment to safely test EXE files, is coming to Windows 10 next year

Prasad Ramesh
20 Dec 2018
2 min read
Microsoft will offer a new tool called Windows Sandbox next year with a Windows 10 update. Revealed this Tuesday, it provides an environment to safely test EXE applications before running them on your computer.

Windows Sandbox features

Windows Sandbox is an isolated desktop environment where users can run untrusted software without any risk of it affecting their computer. Any application you install in Windows Sandbox is contained in the sandbox and cannot affect your computer. All software, with its files and state, is permanently deleted when Windows Sandbox is closed. You need Windows 10 Pro or Windows 10 Enterprise to use it, and it will ship with an update - no separate download needed. Every run of Windows Sandbox is new and runs like a fresh installation of Windows; everything is deleted when you close it. It uses hardware-based virtualization for kernel isolation based on Microsoft’s hypervisor, so a separate kernel isolates it from the host machine. It has an integrated kernel scheduler and virtual GPU.

Source: Microsoft website

Requirements

In order to use this new feature based on Hyper-V, you’ll need AMD64 architecture, virtualization capabilities enabled in BIOS, a minimum of 4 GB RAM (8 GB recommended), 1 GB of free disk space (SSD recommended), and a dual-core CPU (4 cores with hyperthreading recommended).

What people are saying

The general sentiment towards this release is positive.

https://twitter.com/AnonTechOps/status/1075509695778041857

However, a comment on Hacker News suggests that this might not be that useful for its intended purpose: “Ironically, even though the recommended use for this in the opening paragraph is to combat malware, I think that will be the one thing this feature is no good at. Doesn’t even moderately sophisticated malware these days try to detect if it’s in a sandbox environment? A fresh-out-of-the-box Windows install must be a giant red flag for that.”

Meanwhile, if you’re on Windows 7 or Windows 8, you can try Sandboxie. For more technical details under the hood of Sandbox, visit the Microsoft website.

Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for “full-stack resiliency”
Are containers the end of virtual machines?
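Since Windows Sandbox is delivered as an optional Windows feature built on the hypervisor, it is expected to be switched on like other optional features. The sketch below assumes the commonly reported feature ID `Containers-DisposableClientVM`; treat both the feature name and availability as assumptions to verify against Microsoft's documentation once the update ships.

```shell
REM Run from an elevated command prompt on Windows 10 Pro/Enterprise.
REM Enables the Windows Sandbox optional feature (assumed feature ID),
REM then reboot for the change to take effect.
dism /online /Enable-Feature /FeatureName:"Containers-DisposableClientVM" -All
```

Virtualization support must also be enabled in BIOS, per the requirements listed above, or the feature will refuse to start.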


Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more

Amrata Joshi
19 Dec 2018
2 min read
Yesterday, the team at Oracle released VirtualBox 6.0.0, a free and open-source hosted hypervisor for x86 computers. VirtualBox was initially developed by Innotek GmbH, which was acquired by Sun Microsystems in 2008 and then by Oracle in 2010. VirtualBox is a virtualization product for enterprise as well as home use, and an extremely feature-rich, high-performance product for enterprise customers.

Features of VirtualBox 6.0.0

User interface

VirtualBox 6.0.0 comes with greatly improved HiDPI and scaling support, including better detection and per-machine configuration. The user interface is simpler and more powerful. It also comes with a new file manager that enables users to control the guest file system and copy files between host and guest.

Graphics

VirtualBox 6.0.0 features 3D graphics support for Windows guests, and VMSVGA 3D graphics device emulation on Linux and Solaris guests. It comes with added support for surround speaker setups, and a new vboximg-mount utility on Apple hosts for accessing the content of guest disks on the host. VirtualBox 6.0.0 adds support for using Hyper-V as a fallback execution core on Windows hosts, avoiding the inability to run VMs at the cost of reduced performance. The release also supports exporting a virtual machine to Oracle Cloud Infrastructure and brings better application and virtual machine set-up.

Linux guests

This release now supports Linux 4.20 and VMSVGA. The process of building vboxvideo on the EL 7.6 standard kernel has been improved with this release.

Other features

Support for DHCP options.
Initial support for macOS guests.
It is now possible to configure up to four custom ACPI tables for a VM.
Video and audio recordings can now be enabled separately.
Better support for attaching and detaching remote desktop connections.

Major bug fixes

The previous release executed the wrong instruction after a single-step exception with rdtsc; this has been resolved. This release comes with improved audio/video recording. Issues with serial port emulation have been fixed. The resizing issue with disk images has been resolved. Shared folder auto-mounting has been improved. Issues with the BIOS have been fixed.

Read more about this news on VirtualBox’s changelog.

Installation of Oracle VM VirtualBox on Linux
Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS
How to Install VirtualBox Guest Additions
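The set-up improvements above sit on top of VirtualBox's long-standing `VBoxManage` command-line interface, which works the same way in 6.0.0. A minimal sketch of creating and starting a VM headlessly; the VM name, OS type, and sizes are illustrative rather than recommendations:

```shell
# Create and register a new 64-bit Ubuntu VM (name is illustrative).
VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register

# Give it 2 GB of RAM and 2 virtual CPUs.
VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2

# Boot it without a GUI window (e.g. for remote-desktop access).
VBoxManage startvm "demo-vm" --type headless
```

Storage and network devices would normally be attached with `VBoxManage storagectl`/`storageattach` before the first boot; they are omitted here for brevity.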

Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference - Chaos Conf. Not only has the company raised $18 million in its series B funding round, it has also launched a brand new feature. Application Level Fault Injection - ALFI - brings a whole new dimension to the Gremlin platform, as it allows engineering teams to run resiliency tests - or 'chaos experiments' - at the application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (although containers are only a recent addition).

Bringing chaos engineering to serverless applications

One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means that Gremlin will now be able to expand its use cases and continue its broader mission of helping engineering teams improve the resiliency of their software in a manageable and accessible way. Matt Fornaciari, Gremlin CTO and co-founder, said: “With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It’s a tough problem to solve because the host is abstracted and it’s a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn’t possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services.”

One of the great benefits of ALFI is that it should help engineers tackle threats that might be missed by focusing on infrastructure alone. Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering

It would seem that Gremlin is about to embark on a new chapter. But what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend still in an emergent phase. If Gremlin can develop a product that makes chaos engineering not only relatively accessible but also palatable for those making technical decisions, we might start to see things change. It's clear that Redpoint Ventures, the VC firm leading Gremlin's Series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tuguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We’re thrilled to join them on this journey."


OpenStack Rocky released to meet AI, machine learning, NFV and edge computing demands for infrastructure

Savia Lobo
31 Aug 2018
4 min read
Yesterday, OpenStack announced its 18th release, Rocky. This release aims at addressing new demands for infrastructure driven by AI, machine learning, NFV and edge computing, by starting with a bare metal foundation and enabling containers, VMs and GPUs. The Rocky update is the second OpenStack update for 2018 and follows the Queens milestone that became available on Feb, 28. [box type="info" align="" class="" width=""]Rocky is named after the mountains stretching across British Columbia, the location of the previous OpenStack Summit.[/box] Highlights and Improvements in OpenStack Rocky Rocky includes several other enhancements with two key highlights such as, Improvements to the ‘Ironic’ project's bare metal provisioning service Fast Forward Upgrades Improvements to the ‘Ironic’ project's bare metal provisioning service OpenStack Ironic brings increased sophisticated management and automation capabilities to bare metal infrastructure. It is also a driver for Nova, allowing multi-tenancy. This means users can manage physical infrastructure in the same way they are used to managing VMs, especially with new Ironic features landed in Rocky: User-managed BIOS settings: BIOS (basic input output system) performs hardware initialization and has many configuration options supporting a variety of use cases when customized. The different BIOS options can aid users in gaining performance, configuring power management options, or enabling technologies such as SR-IOV or DPDK. Ironic now lets users manage BIOS settings, supporting use cases like NFV and giving users more flexibility. Conductor groups: In Ironic, the “conductor” uses drivers to execute operations on the hardware. Ironic has introduced the “conductor_group” property, which can be used to restrict what nodes a particular conductor (or conductors) have control over. This allows users to isolate nodes based on physical location, reducing network hops for increased security and performance. 
RAM Disk deployment interface: This is a new interface in Ironic for diskless deployments. This interface is seen in large-scale and high-performance computing (HPC) use cases when operators desire fully ephemeral instances for rapidly standing up a large-scale environment. Fast Forward Upgrades (FFU) The Fast Forward Upgrade (FFU) feature from the TripleO project helps users to overcome upgrade hurdles and get on newer releases of OpenStack faster. FFU lets a TripleO user on Release “N” quickly speed through intermediary releases to get on Release “N+3” (the current iteration of FFU being the Newton release to Queens). This helps users in gaining access to the ease-of-operations enhancements and novel developments like vGPU support present in Queens. Additional Highlights in Rocky Cyborg In Rocky, Cyborg introduces a new REST API for FPGAs, an accelerator seen in machine learning, image recognition, and other HPC use cases. This allows users to dynamically change the functions loaded on an FPGA device. Qinling Qinling is introduced in Rocky. Qinling (“CHEEN - LEENG”) is a function-as-a-service (FaaS) project that delivers serverless capabilities on top of OpenStack clouds. This allows users to run functions on OpenStack clouds without managing servers, VMs or containers, while still connecting to other OpenStack services like Keystone. Masakari This supports high availability by providing automatic recovery from failures. It also expands its monitoring capabilities to include internal failures in any instance, such as a hung OS, data corruption or a scheduling failure. Octavia This is the load balancing project that adds support for UDP (user datagram protocol), bringing load balancing to edge and IoT use cases. UDP is the transport layer frequently seen in voice, video and other real-time applications. Magnum This project makes container orchestration engines and their resources first-class resources in OpenStack. 
Magnum became a Certified Kubernetes installer in the Rocky cycle. Passing these conformance tests gives users confidence that Magnum interoperates correctly with Kubernetes.

To learn more about the other highlights in detail, visit Rocky’s release notes.

Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Deploying OpenStack – the DevOps Way
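Returning to Fast Forward Upgrades: the flow described earlier, stepping quickly through intermediary releases and only fully converging on the target, can be sketched as follows. This is a conceptual sketch, not TripleO's actual mechanism; the release list and converge flag are illustrative:

```python
# Illustrative sketch (not TripleO's implementation): FFU passes through
# intermediary releases quickly and performs a full converge only on the
# target release.

RELEASES = ["newton", "ocata", "pike", "queens"]

def fast_forward(current, target):
    """Yield (release, full_converge) steps from current to target."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    for release in RELEASES[i + 1 : j + 1]:
        yield release, release == target

plan = list(fast_forward("newton", "queens"))
print(plan)  # [('ocata', False), ('pike', False), ('queens', True)]
```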
Richard Gall
28 Aug 2018
3 min read

Gremlin makes chaos engineering with Docker easier with new container discovery feature

Gremlin, the product that's bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker.

Chaos engineering and containers have always been closely related: arguably, the loosely coupled architectural style of modern software driven by containers has, in turn, led to an increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today’s updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery does two things: it makes it easier for engineers to identify specific Docker containers, and, more importantly, it allows them to simulate attacks or errors within those containerized environments. The real benefit is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a 'chaos test' on can ordinarily be very challenging and time consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." The new feature could save engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?
As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way in containerization, and its market share growing healthily, making it easier to perform resiliency tests on containers is very important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy: this year's Skill Up report found that it remains on the periphery of many developers' awareness. However, that could quickly change, and it appears that Gremlin is working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.
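The discovery problem the press release describes, ephemeral containers that are hard to pinpoint, comes down to targeting a stable attribute such as a service label rather than a short-lived container ID. Here is a minimal sketch of that idea in plain Python (this is not Gremlin's API; all names are hypothetical):

```python
# Illustrative sketch (not Gremlin's API): discover chaos-test targets by
# a stable service label instead of an ephemeral container ID.

def discover(containers, label, value):
    """Return running containers whose label matches the requested value."""
    return [
        c for c in containers
        if c["state"] == "running" and c["labels"].get(label) == value
    ]

containers = [
    {"id": "a1f3", "state": "running", "labels": {"service": "checkout"}},
    {"id": "b7c9", "state": "exited",  "labels": {"service": "checkout"}},
    {"id": "d2e8", "state": "running", "labels": {"service": "search"}},
]

targets = discover(containers, "service", "checkout")
print([c["id"] for c in targets])  # ['a1f3']
```

Selecting by label means the same chaos test keeps working as containers are rescheduled and replaced.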
Vijin Boricha
11 May 2018
3 min read

What to expect from vSphere 6.7

VMware has announced the latest release of its industry-leading virtualization platform, vSphere 6.7. With vSphere 6.7, IT organizations can address key infrastructure demands such as:

Extensive growth in the quantity and diversity of applications delivered
Increased adoption of hybrid cloud environments
Global expansion of data centers
Robust infrastructure and application security

Let’s take a look at some of the key capabilities of vSphere 6.7:

Effortless and efficient management: vSphere 6.7 builds on the innovations delivered by vSphere 6.5 and takes the customer experience to another level. With vSphere 6.7 you can leverage management simplicity, operational efficiency, and faster time to market, all at scale. It comes with an enhanced vCenter Server Appliance (vCSA) and new APIs that improve deployments with multiple vCenters, which results in easier management of the vCenter Server Appliance, as well as backup and restore. Customers can now link multiple vCenters and have seamless visibility across their environment without depending on external platform services or load balancers.

Extensive security capabilities: vSphere 6.7 enhances the security capabilities of vSphere 6.5. It adds support for Trusted Platform Module (TPM) 2.0 hardware devices and also introduces Virtual TPM 2.0, bringing significant enhancements to both hypervisor and guest operating system security. This capability prevents VMs and hosts from being tampered with, blocks the loading of unauthorized components, and enables the desired guest operating system security features. In vSphere 6.7, VM Encryption is further enhanced and operationally simpler to manage, enabling encrypted vMotion across different vCenter instances. vSphere 6.7 also extends its security features through the collaboration between VMware and Microsoft, ensuring secure Windows VMs on vSphere.
Universal application platform: vSphere is now a universal application platform that supports existing mission-critical applications along with new workloads such as 3D graphics, big data, machine learning, cloud-native and more. It also extends support to some of the latest hardware innovations in the industry, delivering exceptional performance for a variety of workloads. Through the collaboration between VMware and Nvidia, vSphere 6.7 further extends its support for GPUs by virtualizing Nvidia GPUs for non-VDI and non-general-purpose-computing use cases such as artificial intelligence, machine learning, big data and more. With these enhancements, customers can better manage the lifecycle of hosts, reducing disruption for end users. VMware plans to invest more in this area in order to bring full vSphere support to GPUs in future releases.

A seamless hybrid cloud experience: As customers look for hybrid cloud options, vSphere 6.7 introduces vCenter Server Hybrid Linked Mode. It gives customers unified manageability and visibility across an on-premises vSphere environment and a VMware Cloud on AWS environment, even when the two run different versions of vSphere. To ensure a seamless hybrid cloud experience, vSphere 6.7 also delivers a new capability called Per-VM EVC (Enhanced vMotion Compatibility), which allows seamless migration across different CPUs.

This is only an overview of the key capabilities of vSphere 6.7. You can learn more about this release from the VMware vSphere Blog and the VMware release announcement.

Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
VMware vSphere storage, datastores, snapshots
The key differences between Kubernetes and Docker Swarm
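The Per-VM EVC idea above can be pictured as a set-containment check: the VM carries a fixed CPU feature baseline, and any host that exposes at least that baseline can accept the migration. This is a conceptual sketch only, not VMware's API; the feature names are examples:

```python
# Conceptual sketch (not VMware's API): Per-VM EVC pins a CPU feature
# baseline on the VM itself, so it can migrate to any host whose CPU
# exposes at least that baseline, even across clusters or clouds.

def can_migrate(vm_baseline, host_features):
    """A migration is possible when the host covers the VM's baseline."""
    return vm_baseline <= host_features  # set containment

vm_baseline = {"sse4_2", "aes", "avx"}           # baseline fixed per VM
newer_host = {"sse4_2", "aes", "avx", "avx2"}    # superset of baseline
older_host = {"sse4_2", "aes"}                   # missing 'avx'

print(can_migrate(vm_baseline, newer_host))  # True
print(can_migrate(vm_baseline, older_host))  # False
```

Attaching the baseline to the VM rather than the cluster is what makes migration work across vCenters running different vSphere versions.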