Tech News - Cloud Computing

175 Articles
Zone Redundancy for Azure Cache for Redis now in preview from Microsoft Azure Blog > Announcements

Matthew Emerick
14 Oct 2020
3 min read
Between waves of pandemics, hurricanes, and wildfires, you don’t need cloud infrastructure adding to your list of worries this year. Fortunately, there has never been a better time to ensure your Azure deployments stay resilient. Availability zones are one of the best ways to mitigate risks from outages and disasters. With that in mind, we are announcing the preview for zone redundancy in Azure Cache for Redis.

Availability Zones on Azure

Azure Availability Zones are geographically isolated datacenter locations within an Azure region, providing redundant power, cooling, and networking. By maintaining a physically separate set of resources with the low latency from remaining in the same region, Azure Availability Zones provide a high availability solution that is crucial for businesses requiring resiliency and business continuity.

Redundancy options in Azure Cache for Redis

Azure Cache for Redis is increasingly becoming critical to our customers’ data infrastructure. As a fully managed service, Azure Cache for Redis provides various high availability options. By default, caches in the standard or premium tier have built-in replication with a two-node configuration—a primary and a replica hosting two identical copies of your data. New in preview, Azure Cache for Redis can now support up to four nodes in a cache distributed across multiple availability zones. This update can significantly enhance the availability of your Azure Cache for Redis instance, giving you greater peace of mind and hardening your data architecture against unexpected disruption.

High availability for Azure Cache for Redis

The new redundancy features deliver better reliability and resiliency. First, this update expands the total number of replicas you can create. You can now implement up to three replica nodes in addition to the primary node. Having more replicas generally improves resiliency (even if they are in the same availability zone) because of the additional nodes backing up the primary.
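The topology described above (one primary plus up to three replicas, spread across zones) can be sketched as a toy failover model. This is a minimal illustration only; the class and zone labels are invented, and real failover is handled automatically by the Azure service:

```python
# Toy model of a cache with one primary and up to three replicas.
# Illustrative only; not part of any Azure SDK.

class CacheCluster:
    MAX_NODES = 4  # one primary + up to three replicas (preview limit)

    def __init__(self, primary_zone, replica_zones):
        if 1 + len(replica_zones) > self.MAX_NODES:
            raise ValueError("at most three replicas are supported")
        self.primary_zone = primary_zone
        self.replica_zones = list(replica_zones)

    def zone_outage(self, zone):
        """Simulate losing a zone: drop its replicas and, if the
        primary lived there, promote a surviving replica."""
        self.replica_zones = [z for z in self.replica_zones if z != zone]
        if self.primary_zone == zone:
            if not self.replica_zones:
                raise RuntimeError("no replica survived the outage")
            self.primary_zone = self.replica_zones.pop(0)

cluster = CacheCluster(primary_zone="1", replica_zones=["2", "3"])
cluster.zone_outage("1")     # the primary's zone goes down
print(cluster.primary_zone)  # a replica in zone "2" was promoted
```

The point of the sketch: with all nodes in one zone, a single zone outage is fatal; with replicas in other zones, the cache survives and a replica takes over.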
Even with more replicas, a datacenter-wide outage can still disrupt your application. That’s why we’re also enabling zone redundancy, allowing replicas to be located in different availability zones. Replica nodes can be placed in one or multiple availability zones, with failover automatically occurring if needed across availability zones. With zone redundancy, your cache can handle situations where the primary zone is knocked offline due to issues like floods, power outages, or even natural disasters. This increases availability while maintaining the low latency required from a cache. Zone redundancy is currently only available on the premium tier of Azure Cache for Redis, but it will also be available on the enterprise and enterprise flash tiers when the preview is released.

Industry-leading service level agreement

Azure Cache for Redis already offers an industry-standard 99.9 percent service level agreement (SLA). With the addition of zone redundancy, the availability increases to a 99.95 percent level, allowing you to meet your availability needs while keeping your application nimble and scalable. Adding zone redundancy to Azure Cache for Redis is a great way to promote availability and peace of mind during turbulent situations. Learn more in our documentation and give it a try today. If you have any questions or feedback, please contact us at AzureCache@microsoft.com.
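The jump from 99.9 to 99.95 percent is easier to appreciate as allowed downtime. A quick back-of-the-envelope calculation, assuming a 30-day month:

```python
# Allowed monthly downtime under each SLA, assuming a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(sla_percent):
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes/month
print(round(allowed_downtime_minutes(99.95), 1))  # 21.6 minutes/month
```

In other words, the zone-redundant SLA halves the downtime budget, from roughly 43 minutes a month to under 22.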

Three ways serverless APIs can accelerate enterprise innovation from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
5 min read
With the wrong architecture, APIs can be a bottleneck not only for your applications but for your entire business. Bottlenecks such as downtime, low performance, or high application complexity can result in exaggerated infrastructure and organizational costs and lost revenue. Serverless APIs mitigate these bottlenecks with autoscaling capabilities and consumption-based pricing models.

Once you start thinking of serverless as not only a remover-of-bottlenecks but also as an enabler-of-business, layers of your application infrastructure become a source of new opportunities. This is especially true of the API layer: APIs can be productized to scale your business, attract new customers, or offer new services to existing customers, in addition to their traditional role as the communicator between software services.

Given the increasing dominance of APIs and API-first architectures, companies and developers are gravitating towards serverless platforms to host APIs and API-first applications to realize these benefits. One serverless compute option to host APIs is Azure Functions: event-triggered code that can scale on demand, where you only pay for what you use. Gartner predicts that 50 percent of global enterprises will have deployed a serverless functions platform by 2025, up from only 20 percent today. You can publish Azure Functions through API Management to secure, transform, maintain, and monitor your serverless APIs.

Faster time to market

Modernizing your application stack to run microservices on a serverless platform decreases internal complexity and reduces the time it takes to develop new features or products. Each serverless function implements a microservice. By adding many functions to a single API Management product, you can build those microservices into an integrated distributed application. Once the application is built, you can use API Management policies to implement caching or enforce security requirements.
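The gateway-level caching mentioned above can be sketched as a small TTL cache wrapped around a backend call. The function names here are hypothetical stand-ins; in practice API Management caching is configured declaratively with policies, not application code:

```python
import time

# Minimal sketch of gateway-style response caching.
# Hypothetical names; not the API Management policy syntax.

_cache = {}

def cached_call(key, backend, ttl_seconds=60):
    """Return a cached response if it is still fresh,
    otherwise call the backend and store the result."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < ttl_seconds:
        return hit[1]
    value = backend()
    _cache[key] = (now, value)
    return value

calls = []
def backend():
    calls.append(1)  # count how often the backend is actually hit
    return {"menu": ["burrito", "bowl"]}

cached_call("menu", backend)
cached_call("menu", backend)
print(len(calls))  # 1: the second request was served from the cache
```

The design point is the same one the article makes: the gateway absorbs repeated reads, so the serverless backend only runs (and only bills) when fresh data is actually needed.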
Quest Software uses Azure App Service to host microservices in Azure Functions. These support user capabilities such as registering new tenants and application functionality like communicating with other microservices or other Azure platform resources, such as the Azure Cosmos DB managed NoSQL database service.

“We’re taking advantage of technology built by Microsoft and released within Azure in order to go to market faster than we could on our own. On average, over the last three years of consuming Azure services, we’ve been able to get new capabilities to market 66 percent faster than we could in the past.” - Michael Tweddle, President and General Manager of Platform Management, Quest

Quest also uses Azure API Management as a serverless API gateway for the Quest On Demand microservices that implement business logic with Azure Functions, and to apply policies that control access, traffic, and security across microservices.

Modernize your infrastructure

Developers should be focusing on developing applications, not provisioning and managing infrastructure. API Management provides a serverless API gateway that delivers a centralized, fully managed entry point for serverless backend services. It enables developers to publish, manage, secure, and analyze APIs at global scale. Using serverless functions and API gateways together allows organizations to better optimize resources and stay focused on innovation. For example, a serverless function provides an API through which restaurants can adjust their local menus if they run out of an item.

Chipotle turned to Azure to create a unified web experience from scratch, leveraging both Azure API Management and Azure Functions for critical parts of their infrastructure. Calls to back-end services (such as ordering, delivery, and account management and preferences) hit Azure API Management, which gives Chipotle a single, easily managed endpoint and API gateway into its various back-end services and systems.
With such functionality, other development teams at Chipotle are able to work on modernizing the back-end services behind the gateway in a way that remains transparent to Smith’s front-end app.

“API Management is great for ensuring consistency with our API interactions, enabling us to always know what exists where, behind a single URL,” says Smith. “There are lots of changes going on behind the API gateway, but we don’t need to worry about them.” - Mike Smith, Lead Software Developer, Chipotle

Innovate with APIs

Serverless APIs can be used to increase revenue, decrease cost, or improve business agility. As a result, technology becomes a key driver of business growth. Businesses can leverage artificial intelligence to analyze API calls to recognize patterns and predict future purchase behavior, thus optimizing the entire sales cycle.

PwC AI turned to Azure Functions to create a scalable API for its regulatory obligation knowledge mining solution. It also uses Azure Cognitive Search to quickly surface predictions found by the solution, embedding years of experience into an AI model that easily identifies regulatory obligations within the text.

“As we’re about to launch our ROI POC, I can see that Azure Functions is a value-add that saves us two to four weeks of work. It takes care of handling prediction requests for me. I also use it to extend the model to other PwC teams and clients. That’s how we can productionize our work with relative ease.” - Todd Morrill, PwC Machine Learning Scientist-Manager, PwC

Quest Software, Chipotle, and PwC are just a few Microsoft Azure customers who are leveraging tools such as Azure Functions and Azure API Management to create an API architecture that ensures their APIs are monitored, managed, and secure. Rethinking your API approach to use serverless technologies will unlock new capabilities within your organization that are not limited by scale, cost, or operational resources.
Get started immediately

Learn about common serverless API architecture patterns at the Azure Architecture Center, where we provide high-level overviews and reference architectures for common patterns that leverage Azure Functions and Azure API Management, in addition to other Azure services.

[Figure: Reference architecture for a web application with a serverless API.]

Optimize your Azure workloads with Azure Advisor Score from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
3 min read
Modern engineering practices, like Agile and DevOps, are redirecting the ownership of security, operations, and cost management from centralized teams to workload owners—catalyzing innovations at a higher velocity than in traditional data centers. In this new world, workload owners are expected to build, deploy, and manage cloud workloads that are secure, reliable, performant, and cost-effective.

If you’re a workload owner, you want well-architected deployments, so you might be wondering: how well are you doing today? Of all the actions you can take, which ones will make the biggest difference for your Azure workloads? And how will you know if you’re making progress? That’s why we created Azure Advisor Score—to help you understand how well your Azure workloads are following best practices, assess how much you stand to gain by remediating issues, and prioritize the most impactful recommendations you can take to optimize your deployments.

Introducing Advisor Score

Advisor Score enables you to get the most out of your Azure investment using a centralized dashboard to monitor and work towards optimizing the cost, security, reliability, operational excellence, and performance of your Azure resources. Advisor Score will help you:

  • Assess how well you’re following the best practices defined by Azure Advisor and the Microsoft Azure Well-Architected Framework.
  • Optimize your deployments by taking the most impactful actions first.
  • Report on your well-architected progress over time.

Baselining is one great use case we’ve already seen with customers. You can use Advisor Score to baseline yourself and track your progress over time toward your goals by reviewing your score’s daily, weekly, or monthly trends. Then, to reach your goals, you can take action first on the individual recommendations and resources with the most impact.
How Advisor Score works

Advisor Score measures how well you’re adopting Azure best practices, comparing and quantifying the impact of the Advisor recommendations you’re already following, and the ones you haven’t implemented yet. Think of it as a gap analysis for your deployed Azure workloads.

The overall score is calculated on a scale from 0 percent to 100 percent, both in aggregate and separately for cost, security (coming soon), reliability, operational excellence, and performance. A score of 100 percent means all your resources follow all the best practices recommended in Advisor. On the other end of the spectrum, a score of zero percent means that none of your resources follow the recommended best practices.

Advisor Score weighs all resources, both those with and without active recommendations, by their individual cost relative to your total spend. This builds on the assumption that the resources which consume a greater share of your total investment in Azure are more critical to your workloads. Advisor Score also adds weight to resources with longstanding recommendations. The idea is that the accumulated impact of these recommendations grows the longer they go unaddressed.

Review your Advisor Score today

Check your Advisor Score today by visiting Azure Advisor in the Azure portal. To learn more about the model behind Advisor Score and see examples of how the score is calculated, review our Advisor Score documentation, and this behind-the-scenes blog from our data science team about the development of Advisor Score.
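The cost-weighting idea described above can be illustrated with a toy calculation. The numbers are invented and the real Advisor Score model is more involved (it also weights longstanding recommendations), but the core intuition is a cost-weighted share of healthy resources:

```python
# Toy sketch of cost-weighted scoring (hypothetical numbers;
# the real Advisor Score model is more involved).

def cost_weighted_score(resources):
    """Each resource is (monthly_cost, follows_best_practices).
    Returns the cost-weighted share of healthy resources, 0-100."""
    total_cost = sum(cost for cost, _ in resources)
    healthy_cost = sum(cost for cost, ok in resources if ok)
    return 100 * healthy_cost / total_cost

resources = [
    (800, True),   # expensive resource, no open recommendations
    (150, False),  # has unaddressed recommendations
    (50, True),
]
print(round(cost_weighted_score(resources), 1))  # 85.0
```

Note how the one unhealthy resource costs only 15 percent of total spend, so the score lands at 85 rather than the unweighted two-thirds: expensive resources move the score more, matching the assumption stated in the article.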

Lower prices and more flexible purchase options for Azure Red Hat OpenShift from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
4 min read
For the past several years, Microsoft and Red Hat have worked together to co-develop hybrid cloud solutions intended to enable greater customer innovation. In 2019, we launched Azure Red Hat OpenShift as a fully managed, jointly engineered implementation of Red Hat OpenShift running on Red Hat OpenShift 3.11, deeply integrated into the Azure control plane. With the release of Red Hat OpenShift 4, we announced the general availability of Azure Red Hat OpenShift on OpenShift 4 in April 2020.

Today we’re sharing that, in collaboration with Red Hat, we are dropping the price of Red Hat OpenShift licenses on Azure Red Hat OpenShift worker nodes by up to 77 percent. We’re also adding the choice of a three-year term for Reserved Instances (RIs) on top of the existing one-year RI and pay-as-you-go options, with a reduction in the minimum number of virtual machines required. The new pricing is effective immediately. Finally, as part of the ongoing improvements, we are increasing the Service Level Agreement (SLA) to 99.95 percent.

With these new price reductions, Azure Red Hat OpenShift provides even more value with a fully managed, highly available enterprise Kubernetes offering that manages the upgrades, patches, and integration for the components that are required to make a platform. This allows your teams to focus on building business value, not operating technology platforms.

How can Red Hat OpenShift help you?

As a developer

Kubernetes was built for the needs of IT operations, not developers. Red Hat OpenShift is designed so developers can deploy apps on Kubernetes without needing to learn Kubernetes. With built-in Continuous Integration (CI) and Continuous Delivery (CD) pipelines, you can code and push to a repository and have your application up and running in minutes.
Azure Red Hat OpenShift includes everything you need to manage your development lifecycle: standardized workflows, support for multiple environments, continuous integration, release management, and more. Also included is the ability to provision self-service, on-demand application stacks and deploy solutions from the Developer Catalog, such as OpenShift Service Mesh, OpenShift Serverless, Knative, and more.

Red Hat OpenShift provides commercial support for the languages, databases, and tooling you already use, while providing easy access to Azure services such as Azure Database for PostgreSQL and Azure Cosmos DB, to enable you to create resilient and scalable cloud-native applications.

As an IT operator

Adopting a container platform lets you keep up with application scale and complexity requirements. Azure Red Hat OpenShift is designed to make deploying and managing the container platform easier, with automated maintenance operations and upgrades built right in, integrated platform monitoring (including Azure Monitor for containers), and a support experience directly from the Azure support portal.

With Azure Red Hat OpenShift, your developers can be up and running in minutes. You can scale on your terms, from ten containers to thousands, and only pay for what you need. With one-click updates for the platform, services, and applications, Azure Red Hat OpenShift monitors security throughout the software supply chain to make applications more stable without reducing developer productivity. You can also leverage built-in vulnerability assessment and management tools in Azure Security Center to scan images that are pushed to, imported, or pulled from an Azure Container Registry. Discover Operators from the Kubernetes community and Red Hat partners, curated by Red Hat.
You can install Operators on your clusters to provide optional add-ons and shared services to your developers, such as AI and machine learning, application runtimes, data, document stores, monitoring, logging and insights, security, and messaging services.

Regional availability

Azure Red Hat OpenShift is available in 27 regions worldwide, and we’re continuing to expand that list. Over the past few months, we have added support for Azure Red Hat OpenShift in a number of regions, including West US, Central US, North Central US, Canada Central, Canada East, Brazil South, UK West, Norway East, France Central, Germany West Central, Central India, UAE North, Korea Central, East Asia, and Japan East.

Industry compliance certifications

To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is PCI DSS, FedRAMP High, SOC 1/2/3, ISO 27001, and HITRUST certified. Azure maintains the largest compliance portfolio in the industry, both in terms of the total number of offerings and the number of customer-facing services in assessment scope. For more details, check the Microsoft Azure Compliance Offerings.

Next steps

Try Azure Red Hat OpenShift now. We are excited about these new lower prices and how they help our customers build their business on a platform that enables IT operations and developers to collaborate effectively, develop, and deploy containerized applications rapidly with strong security capabilities.

Microsoft partners expand the range of mission-critical applications you can run on Azure from Microsoft Azure Blog > Announcements

Matthew Emerick
06 Oct 2020
14 min read
How the depth and breadth of the Microsoft Azure partner ecosystem enables thousands of organizations to bring their mission-critical applications to Azure.

In the past few years, IT organizations have been realizing compelling benefits when they transitioned their business-critical applications to the cloud, enabling them to address the top challenges they face with running the same applications on-premises. As even more companies embark on their digital transformation journey, the range of mission- and business-critical applications has continued to expand, even more so because technology drives innovation and growth. This has further accelerated in the past months, spurred in part by our rapidly changing global economy.

As a result, the definition of mission-critical applications is evolving and goes well beyond systems of record for many businesses. It’s part of why we never stopped investing across the platform to enable you to increase the availability, security, scalability, and performance of your core applications running on Azure. The expansion of mission-critical apps will only accelerate as AI, IoT, analytics, and new capabilities become more pervasive.

We’re seeing the broadening scope of mission-critical scenarios both within Microsoft and in many of our customers’ industry sectors. For example, Eric Boyd, in his blog, outlined how companies in healthcare, insurance, sustainable farming, and other fields have chosen Microsoft Azure AI to transform their businesses. Applications like Microsoft Teams have now become mission-critical, especially this year, as many organizations had to enable remote workforces. This is also reflected by the sheer number of meetings happening in Teams.
Going beyond Azure services and capabilities

Many organizations we work with are eager to realize myriad benefits for their own business-critical applications, but first need to address questions around their cloud journey, such as:

  • Are the core applications I use on-premises certified and supported on Azure?
  • As I move to Azure, can I retain the same level of application customization that I have built over the years on-premises?
  • Will my users experience any impact in the performance of my applications?

In essence, they want to make sure that they can continue to capitalize on the strategic collaboration they’ve forged with their partners and ISVs as they transition their core business processes to the cloud. They want to continue to use the very same applications that they spent years customizing and optimizing on-premises. Microsoft understands that running your business on Azure goes beyond the services and capabilities that any platform can provide. You need a comprehensive ecosystem. Azure has always been partner-oriented, and we continue to strengthen our collaboration with a large number of ISVs and technology partners, so you can run the applications that are critical to the success of your business operations on Azure.

A deeper look at the growing spectrum of mission-critical applications

Today, you can run thousands of third-party ISV applications on Azure. Many of these ISVs in turn depend on Azure to deliver their software solutions and services. Azure has become a mission-critical platform for our partner community as well as our customers. When most people think of mission-critical applications, enterprise resource planning systems (ERP), supply chain management (SCM), product lifecycle management (PLM), and customer relationship management (CRM) applications are often the first examples that come to mind.
However, to illustrate the depth and breadth of our mission-critical ecosystem, consider these distinct and very different categories of applications that are critical for thousands of businesses around the world:

  • Enterprise resource planning (ERP) systems.
  • Data management and analytics applications.
  • Backup and business continuity solutions.
  • High-performance computing (HPC) scenarios that exemplify the broadening of business-critical applications that rely on public cloud infrastructure.

Azure’s deep ecosystem addresses the needs of customers in all of these categories and more.

ERP systems

When most people think of mission-critical applications, ERP, SCM, PLM, and CRM applications are often the first examples that come to mind. Some examples on Azure include:

  • SAP—We have been empowering our enterprise customers to run their most mission-critical SAP workloads on Azure, bringing the intelligence, security, and reliability of Azure to their SAP applications and data.
  • Viewpoint, a Trimble company—Viewpoint has been helping the construction industry transform through integrated construction management software and solutions for more than 40 years. To meet the scalability and flexibility needs of both Viewpoint and their customers, a significant portion of their clients are now running their software suite on Azure and experiencing tangible benefits.

Data management and analytics

Data is the lifeblood of the enterprise. Our customers are experiencing an explosion of mission-critical data sources, from the cloud to the edge, and analytics are key to unlocking the value of data in the cloud. AI is a key ingredient, and yet another compelling reason to modernize your core apps on Azure.

  • DataStax—DataStax Enterprise, a scale-out, hybrid, cloud-native NoSQL database built on Apache Cassandra™, in conjunction with Azure, can provide a foundation for personalized, real-time scalable applications.
Learn how this combination can enable enterprises to run mission-critical workloads to increase business agility, without compromising compliance and data governance.

  • Informatica—Informatica has been working with Microsoft to help businesses ensure that the data driving your customer and business decisions is trusted, authenticated, and secure. Specifically, Informatica is focused on the quality of the data that is powering your mission-critical applications and can help you derive the maximum value from your existing investments.
  • SAS®—Microsoft and SAS are enabling customers to easily run their SAS workloads in the cloud, helping them unlock critical value from their digital transformation initiatives. As part of our collaboration, SAS is migrating its analytical products and industry solutions onto Azure as the preferred cloud provider for the SAS Cloud. Discover how mission-critical analytics is finding a home in the cloud.

Backup and disaster recovery solutions

Uptime and disaster recovery plans that minimize recovery time objective (RTO) and recovery point objective (RPO) are the top metrics senior IT decision-makers pay close attention to when it comes to mission-critical environments. Backing up critical data is a key element of putting in place robust business continuity plans. Azure provides built-in backup and disaster recovery features, and we also partner with industry leaders like Commvault, Rubrik, Veeam, Veritas, Zerto, and others so you can keep using your existing applications no matter where your data resides.

  • Commvault—We continue to work with Commvault to deliver data management solutions that enable higher resiliency, visibility, and agility for business-critical workloads and data in our customers’ hybrid environments. Learn about Commvault’s latest offerings, including support for Azure VMware Solution and why their Metallic SaaS suite relies exclusively on Azure.
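The RTO and RPO metrics mentioned above can be made concrete with a small calculation; the timestamps below are invented purely for illustration:

```python
from datetime import datetime

# Toy RPO/RTO calculation for the metrics defined in the text.
# Illustrative timestamps; real objectives come from your
# business continuity plan.

def achieved_rpo(last_backup, failure_time):
    """Data-loss window: time between the last good backup and the failure."""
    return failure_time - last_backup

def achieved_rto(failure_time, service_restored):
    """Downtime window: time between the failure and service restoration."""
    return service_restored - failure_time

failure = datetime(2020, 10, 6, 12, 0)
rpo = achieved_rpo(datetime(2020, 10, 6, 11, 45), failure)
rto = achieved_rto(failure, datetime(2020, 10, 6, 12, 30))
print(rpo)  # 0:15:00 (up to 15 minutes of data could be lost)
print(rto)  # 0:30:00 (service was down for 30 minutes)
```

A disaster recovery plan sets targets for both windows; backup frequency bounds the achievable RPO, while failover automation bounds the achievable RTO.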
  • Rubrik—Learn how Rubrik helps enterprises achieve low RTOs, self-service automation at scale, and accelerated cloud adoption.
  • Veeam—Read how you can use Veeam’s solution portfolio to back up, recover, and migrate mission-critical workloads to Azure.
  • Veritas—Find out how Veritas InfoScale has advanced integration with Azure that simplifies the deployment and management of your mission-critical applications in the cloud.
  • Zerto—Discover how the extensive capabilities of Zerto’s platform help you protect mission-critical applications on Azure.
  • Teradici—Finally, Teradici underscores how the lines between mission-critical and business-critical are blurring. Read how business continuity plans are being adjusted to include longer-term scenarios.

HPC scenarios

HPC applications are often the most intensive and highest-value workloads in a company, and are business-critical in many industries, including financial services, life sciences, energy, manufacturing, and more. The biggest and most audacious innovations, from supporting the fight against COVID-19 to 5G semiconductor design, from aerospace engineering design processes to the development of autonomous vehicles, and so much more, are being driven by HPC.

  • Ansys—Explore how Ansys Cloud on Azure has proven to be vital for business continuity during unprecedented times.
  • Rescale—Read how Rescale can provide a turnkey platform for engineers and researchers to quickly access Azure HPC resources, easing the transition of business-critical applications to the cloud.

You can rely on the expertise of our partner community

Many organizations continue to accelerate the migration of their core applications to the cloud, realizing tangible and measurable value in collaboration with our broad partner community, which includes global system integrators like Accenture, Avanade, Capgemini, Wipro, and many others.
For example, UnifyCloud recently helped a large organization in the financial sector modernize their data estate on Azure while achieving a 69 percent reduction in IT costs. We are excited about the opportunities ahead of us, fueled by the power of our collective imagination. Learn more about how you can run business-critical applications on Azure and increase business resiliency. Watch our Microsoft Ignite session for a deeper dive and demo.

“The construction industry relies on Viewpoint to build and host the mission-critical technology used to run their businesses, so we have the highest possible standards when it comes to the solutions we provide. Working with Microsoft has allowed us to meet those standards in the Azure cloud by increasing scalability, flexibility and reliability – all of which enable our customers to accelerate their own digital transformations and run their businesses with greater confidence.” —Dan Farner, Senior Vice President of Product Development, Viewpoint (a Trimble Company)

Read the Gaining Reliability, Scalability, and Customer Satisfaction with Viewpoint on Microsoft Azure blog.

“Business critical applications require a transformational data architecture built on scale-out data and microservices to enable dramatically improved operations, developer productivity, and time-to-market. With Azure and DataStax, enterprises can now run mission critical workloads with zero downtime at global scale to achieve business agility, compliance, data sovereignty, and data governance.” —Ed Anuff, Chief Product Officer, DataStax

Read the Application Modernization for Data-Driven Transformation with DataStax Enterprise on Microsoft Azure blog.

“As Microsoft’s 2020 Data Analytics Partner of the Year, Informatica works hand-in-hand with Azure to solve mission critical challenges for our joint customers around the world and across every sector.
The combination of Azure’s scale, resilience and flexibility, along with Informatica’s industry-leading Cloud-Native Data Management platform on Azure, provides customers with a platform they can trust with their most complex, sensitive and valuable business critical workloads.” —Rik Tamm-Daniels, Vice President of Strategic Ecosystems and Technology, Informatica

Read the Ensuring Business-Critical Data Is Trusted, Available, and Secure with Informatica on Microsoft Azure blog.

“SAS and Microsoft share a vision of helping organizations make better decisions as they strive to serve customers, manage risks and improve operations. Organizations are moving to the cloud at an accelerated pace. Digital transformation projects that were scheduled for the future now have a 2020 delivery date. Customers realize analytics and cloud are critical to drive their digital growth strategies. This partnership helps them quickly move to Microsoft Azure, so they can build, deploy, and manage analytic workloads in a reliable, high-performant and cost-effective manner.” —Oliver Schabenberger, Executive Vice President, Chief Operating Officer and Chief Technology Officer, SAS

Read the Mission-critical analytics finds a home in the cloud blog.

“Microsoft is our Foundation partner and selecting Microsoft Azure as our platform to host and deliver Metallic was an easy decision. This decision sparks customer confidence due to Azure’s performance, scale, reliability, security and offers unique Best Practice guidance for customers and partners. Our customers rely on Microsoft and Azure-centric Commvault solutions every day to manage, migrate and protect critical applications and the data required to support their digital transformation strategies.” —Randy De Meno, Vice President/Chief Technology Officer, Microsoft Practice & Solutions

Read the Commvault extends collaboration with Microsoft to enhance support for mission-critical workloads blog.
“Enterprises depend on Rubrik and Azure to protect mission-critical applications in SAP, Oracle, SQL and VMware environments. Rubrik helps enterprises move to Azure securely, faster, and with a low TCO using Rubrik’s automated tiering to Azure Archive Storage. Security minded customers appreciate that with Rubrik and Microsoft, business critical data is immutable, preventing ransomware threats from accessing backups, so businesses can quickly search and restore their information on-premises and in Azure.” —Arvind Nithrakashyap, Chief Technology Officer and Co-Founder, Rubrik

Learn how enterprises use Rubrik on Azure.

“Veeam continues to see increased adoption of Microsoft Azure for business-critical applications and data across our 375,000 plus global customers. While migration of applications and data remains the primary barrier to the public cloud, we are committed to helping eliminate these challenges through a unified Cloud Data Management platform that delivers simplicity, flexibility and reliability at its core, while providing unrivaled data portability for greater cost controls and savings. Backed by the unique Veeam Universal License – a portable license that moves with workloads to ensure they're always protected – our customers are able to take control of their data by easily migrating workloads to Azure, and then continue protecting and managing them in the cloud.” —Danny Allan, Chief Technology Officer and Senior Vice President for Product Strategy, Veeam

Read the Backup, recovery, and migration of mission-critical workloads on Azure blog.

“Thousands of customers rely on Veritas to protect their data both on-premises and in Azure.
Our partnership with Microsoft helps us drive the data protection solutions that our enterprise customers rely on to keep their business-critical applications optimized and immediately available.” —Phil Brace, Chief Revenue Officer, Veritas

Read the Migrate and optimize your mission-critical applications in Microsoft Azure with Veritas InfoScale blog.

“Microsoft has always leveraged the expertise of its partners to deliver the most innovative technology to customers. Because of Zerto’s long-standing collaboration with Microsoft, Zerto’s IT Resilience platform is fully integrated with Azure and provides a robust, fully orchestrated solution that reduces data loss to seconds and downtime to minutes. Utilizing Zerto’s end-to-end, converged backup, DR, and cloud mobility platform, customers have proven time and time again they can protect mission-critical applications during planned or unplanned disruptions that include ransomware, hardware failure, and numerous other scenarios using the Azure cloud – the best cloud platform for IT resilience in the hybrid cloud environment.” —Gil Levonai, CMO and SVP of Product, Zerto

Read the Protecting Critical Applications in the Cloud with the Zerto Platform blog.

“The longer business continues to be disrupted, the more the lines blur and business critical functions begin to shift to mission critical, making virtual desktops and workstations on Microsoft Azure an attractive option for IT managers supporting remote workforces in any function or industry. Teradici Cloud Access Software offers a flexible and secure solution that supports demanding business critical and mission critical workloads on Microsoft Azure and Azure Stack with exceptional performance and fidelity, helping businesses gain efficiency and resilience within their business continuity strategy.” —John McVay, Director of Strategic Alliances, Teradici

Read the Longer IT timelines shift business critical priorities to mission critical blog.
"It is imperative for Ansys to support our customers' accelerating needs for on-demand high performance computing to drive their increasingly complex engineering requirements. Microsoft Azure, with its purpose-built HPC and robust go-to market capabilities, was a natural choice for us, and together we are enabling our joint customers to keep designing innovative products even as they work from home.” —Navin Budhiraja, Vice President and General Manager, Cloud and Platform, Ansys

Read the Ansys Cloud on Microsoft Azure: A vital resource for business continuity during the pandemic blog.

“Robust and stable business critical systems are paramount for success. Rescale customers leveraging Azure HPC resources are taking advantage of the scalability, flexibility and intelligence to improve R&D, accelerate development and reduce costs not possible with a fixed infrastructure.” —Edward Hsu, Vice President of Product, Rescale

Read the Business Critical Systems that Drive Innovation blog.

“Customers are transitioning business-critical workloads to Azure and realizing significant cost benefits while modernizing their applications. Our solutions help customers develop cloud strategy, modernize quickly, and optimize cloud environments while minimizing risk and downtime.” —Vivek Bhatnagar, Co-Founder and Chief Technology Officer, UnifyCloud

Read the Moving mission-critical applications to the cloud: More important than ever blog.

Azure Functions 3.0 released with support for .NET Core 3.1!

Savia Lobo
12 Dec 2019
2 min read
On 9th December, Microsoft announced that the go-live release of Azure Functions 3.0 is now available. Among the many capabilities added in this release, a notable addition is support for the newly released .NET Core 3.1 -- an LTS (long-term support) release -- and Node 12. Users can now build and deploy 3.0 functions in production; Azure Functions 3.0 brings the ability to target .NET Core 3.1 and Node 12, along with a high degree of backward compatibility for existing apps running on older language versions, without any code changes.

“While the runtime is now ready for production, and most of the tooling and performance optimizations are rolling out soon, there are still some tooling improvements to come before we announce Functions 3.0 as the default for new apps. We plan to announce Functions 3.0 as the default version for new apps in January 2020,” the official announcement mentions.

While users running on earlier versions of Azure Functions will continue to be supported, the company does not plan to deprecate 1.0 or 2.0 at present. “Customers running Azure Functions targeting 1.0 or 2.0 will also continue to receive security updates and patches moving forward—to both the Azure Functions runtime and the underlying .NET runtime—for apps running in Azure. Whenever there’s a major version deprecation, we plan to provide notice at least a year in advance for users to migrate their apps to a newer version,” Microsoft mentions.

https://twitter.com/rickvdbosch/status/1204115191367114752
https://twitter.com/AzureTrenches/status/1204298388403044353

To know more about this in detail, read Azure Functions’ official documentation.

Creating triggers in Azure Functions [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
Serverless computing wars: AWS Lambdas vs Azure Functions

Grafana Labs announces general availability of Loki 1.0, a multi-tenant log aggregation system

Savia Lobo
20 Nov 2019
3 min read
Today, at the ongoing KubeCon 2019, Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient, and cost-effective approach to log aggregation.

The Loki project was first introduced at KubeCon Seattle in 2018. Before the official launch, the project was started inside of Grafana Labs and was used internally to monitor all of Grafana Labs’ infrastructure, helping ingest around 1.5 TB/10 billion log lines a day. Released under the Apache 2.0 license, Loki is optimized for Grafana, Kubernetes, and Prometheus. Within just a year, the project has received more than 1,000 contributions from 137 contributors and has nearly 8,000 stars on GitHub.

With Loki 1.0, users can instantly switch between metrics and logs, preserving context and reducing MTTR (mean time to recovery). By storing compressed, unstructured logs and only indexing metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing.

Loki’s design is inspired by Prometheus, the open source monitoring solution for the cloud-native ecosystem: it offers a Prometheus-like query language called LogQL to further integrate with the cloud-native ecosystem.
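As a concrete illustration of how LogQL queries reach Loki, the sketch below builds a URL for Loki's HTTP range-query endpoint. The path and parameter names (`/loki/api/v1/query_range` with `query`, `start`, `end`, `limit`) follow Loki's documented HTTP API, but treat the exact details as assumptions to verify against the Loki version you run:

```python
from urllib.parse import urlencode

def build_loki_query_url(base_url, logql, start_ns, end_ns, limit=100):
    """Build a URL for Loki's range-query endpoint.

    Assumes Loki's documented HTTP API shape; verify the path and
    parameters against your Loki version before relying on them.
    """
    params = urlencode({
        "query": logql,     # a LogQL expression: label selector plus optional filters
        "start": start_ns,  # range start, Unix time in nanoseconds
        "end": end_ns,      # range end, Unix time in nanoseconds
        "limit": limit,     # maximum number of log lines to return
    })
    return f"{base_url}/loki/api/v1/query_range?{params}"

# Select the app="grafana" stream and keep only lines containing "error"
url = build_loki_query_url(
    "http://localhost:3100",
    '{app="grafana"} |= "error"',
    start_ns=1574208000000000000,
    end_ns=1574211600000000000,
)
print(url)
```

The `{app="grafana"} |= "error"` expression shows the Prometheus-style selector syntax the article describes: curly-brace label matchers pick a log stream, and the `|=` operator filters lines by content.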
Tom Wilkie, VP of Product at Grafana Labs, said, “Grafana Labs is proud to have created Loki and fostered the development of the project, building first-class support for Loki into Grafana and ensuring customers receive the support and features they need.” He further added, “We are committed to delivering an open and composable observability platform, of which Loki is a key component, and continue to rely on the power of open source and our community to enhance observability into application and infrastructure.”

Grafana Labs also offers enterprise services and support for Loki, which include:

- Support and training from Loki maintainers and experts
- 24 x 7 x 365 coverage from the geographically distributed Grafana team
- Per-node pricing that scales with deployment

Read more about Grafana Loki in detail on GitHub.

“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!
Grafana 6.2 released with improved security, enhanced provisioning, Bar Gauge panel, lazy loading and more


CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries

Fatema Patrawala
14 Nov 2019
3 min read
The Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, yesterday announced the stable release of Helm 3. Helm is a package manager for Kubernetes and a tool for managing charts of pre-configured Kubernetes resources.

“Helm is one of our fastest-growing projects in contributors and users contributing back to the project,” said Chris Aniszczyk, CTO, CNCF. “Helm is a powerful tool for all Kubernetes users to streamline deployments, and we’re impressed by the progress the community has made with this release in growing their community.”

As per the team, the internal implementation of Helm 3 has changed considerably from Helm 2. The most important change is the removal of Tiller, a service that communicates with the Kubernetes API to manage Helm packages. There are also improvements to chart repositories, release management, security, and library charts.

Helm uses a packaging format called charts, which are collections of files describing a related set of Kubernetes resources. These charts can then be packaged into versioned archives to be deployed. Helm 2 defined a workflow for creating, installing, and managing these charts. Helm 3 builds upon that workflow, changing the underlying infrastructure to reflect the needs of the community as they change and evolve. In this release, the Helm maintainers incorporated feedback and requests from the community to better address the needs of Kubernetes users and the broad cloud native ecosystem.

Helm 3 is ready for public deployment

Last week, third-party security firm Cure53 completed its open source security audit of Helm 3, noting Helm’s mature focus on security, and concluded that Helm 3 is “recommended for public deployment.” According to the report, “in light of the findings stemming from this CNCF-funded project, Cure53 can only state that the Helm project projects the impression of being highly mature.
This verdict is driven by a number of different factors… and essentially means that Helm can be recommended for public deployment, particularly when properly configured and secured in accordance to recommendations specified by the development team.”

“When we built Helm, we set out to create a tool to serve as an ‘on-ramp’ to Kubernetes. With Helm 3, we have really accomplished that,” said Matt Fisher, the Helm 3 release manager. “Our goal has always been to make it easier for the Kubernetes user to create, share, and run production-grade workloads. The core maintainers are really excited to hit this major milestone, and we look forward to hearing how the community is using Helm 3.”

Helm 3 is a joint community effort, with core maintainers from organizations including Microsoft, Samsung SDS, IBM, and Blood Orange. As per the team, the next phase of Helm’s development will see new features targeted toward stability and enhancements to existing features. Features on the roadmap include enhanced functionality for helm test, improvements to Helm’s OCI integration, and enhanced functionality for the Go client libraries.

To know more about this news, read the official announcement from the Cloud Native Computing Foundation.

StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
An unpatched security issue in the Kubernetes API is vulnerable to a “billion laughs” attack
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
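A chart's configurability comes from Helm layering user-supplied values over the chart's defaults before rendering templates. The sketch below mimics that deep-merge behavior in Python purely for illustration; Helm itself is written in Go, and this is not its actual merge code:

```python
def merge_values(defaults, overrides):
    """Recursively merge user-supplied chart values over the defaults,
    mimicking how Helm combines a chart's values.yaml with overrides
    passed via --set or -f. Illustrative sketch only, not Helm's code."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)  # recurse into nested maps
        else:
            merged[key] = value  # scalars and lists are replaced outright
    return merged

defaults = {"image": {"repository": "nginx", "tag": "1.16"}, "replicas": 1}
overrides = {"image": {"tag": "1.17"}, "replicas": 3}
print(merge_values(defaults, overrides))
# -> {'image': {'repository': 'nginx', 'tag': '1.17'}, 'replicas': 3}
```

Note how only `image.tag` is overridden while `image.repository` survives from the defaults; that nested-map behavior is what lets a chart expose many tunables while users override just a few.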


StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities

Bhagyashree R
13 Nov 2019
3 min read
Today, StackRox, a Kubernetes-native container security platform provider, announced StackRox Kubernetes Security Platform 3.0. This release includes industry-first features for configuration and vulnerability management that enable businesses to achieve stronger protection of cloud-native, containerized applications.

In a press release, Wei Lien Dang, StackRox’s vice president of product and co-founder, said, “When it comes to Kubernetes security, new challenges related to vulnerabilities and misconfigurations continue to emerge.” “DevOps and Security teams need solutions that quickly and easily solve these issues. StackRox 3.0 is the first container security platform with the capabilities orgs need to effectively deal with Kubernetes configurations and vulnerabilities, so they can reduce risk to what matters most – their applications and their customer’s data,” he added.

What’s new in StackRox Kubernetes Security Platform 3.0

Features for configuration management

- Interactive dashboards: This will enable users to view risk-prioritized misconfigurations, easily drill down to critical information about the misconfiguration, and determine the relevant context required for effective remediation.
- Kubernetes role-based access control (RBAC) assessment: StackRox will continuously monitor permissions for users and service accounts to help mitigate against excessive privileges being granted.
- Kubernetes secrets access monitoring: The platform will discover secrets in Kubernetes and monitor which deployments can use them to limit unnecessary access.
- Kubernetes-specific policy enforcement: StackRox will identify configurations in Kubernetes related to network exposures, privileged containers, root processes, and other factors to determine policy violations.
Advanced vulnerability management capabilities

- Interactive dashboards: StackRox Kubernetes Security Platform 3.0 has interactive views that provide risk-prioritized snapshots across your environment, highlighting vulnerabilities in both images and Kubernetes.
- Discovery of Kubernetes vulnerabilities: The platform gives you visibility into critical vulnerabilities that exist in the Kubernetes platform, including the ones related to the Kubernetes API server disclosed by the Kubernetes product security team.
- Language-specific vulnerabilities: StackRox scans container images for additional vulnerabilities that are language-dependent, providing greater coverage across containerized applications.

Along with the aforementioned features, StackRox Kubernetes Security Platform 3.0 adds support for various ecosystem platforms. These include CRI-O, the Open Container Initiative (OCI)-compliant implementation of the Kubernetes Container Runtime Interface (CRI), Google Anthos, Microsoft Teams integration, and more.

These were a few of the latest capabilities shipped in StackRox Kubernetes Security Platform 3.0. To know more, you can check out live demos and Q&A by the StackRox team at KubeCon 2019, happening November 18-21 in San Diego, California, which brings together adopters and technologists from leading open source and cloud-native communities.

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
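To make the RBAC-assessment idea concrete, this kind of check boils down to scanning role definitions for over-broad grants. The sketch below is purely illustrative (the data shapes are hypothetical and this is not StackRox's implementation); it flags roles whose rules use wildcard verbs or resources:

```python
def find_excessive_grants(roles):
    """Flag roles that grant wildcard verbs or resources.

    Toy illustration of an RBAC over-privilege check; the role/rule
    dict shapes here are hypothetical, loosely modeled on Kubernetes
    Role rules, and this is not StackRox's actual code."""
    findings = []
    for role in roles:
        for rule in role["rules"]:
            if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
                findings.append(role["name"])
                break  # one wildcard rule is enough to flag the role
    return findings

roles = [
    {"name": "cluster-admin-ish", "rules": [{"verbs": ["*"], "resources": ["*"]}]},
    {"name": "pod-reader", "rules": [{"verbs": ["get", "list"], "resources": ["pods"]}]},
]
print(find_excessive_grants(roles))  # -> ['cluster-admin-ish']
```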


Red Hat’s Quarkus announces plans for Quarkus 1.0, releases its rc1 

Vincy Davis
11 Nov 2019
3 min read
Update: On 25th November, the Quarkus team announced the release of the Quarkus 1.0.0.Final bits. Head over to the Quarkus blog for more details on the official announcement.

Last week, Red Hat’s Quarkus, the Kubernetes-native Java framework for GraalVM and OpenJDK HotSpot, announced the availability of its first release candidate. It also notified users that its first stable version will be released by the end of this month.

Launched in March this year, the Quarkus framework uses Java libraries and standards to provide an effective solution for running Java in new deployment environments like serverless, microservices, containers, Kubernetes, and more. Java developers can employ this framework to build apps with faster startup times and less memory than traditional Java-based microservices frameworks. It also provides flexible and easy-to-use APIs that help developers build cloud-native apps with best-of-breed frameworks.

“The community has worked really hard to up the quality of Quarkus in the last few weeks: bug fixes, documentation improvements, new extensions and above all upping the standards for developer experience,” states the Quarkus team.

Latest updates added in Quarkus 1.0

- A new reactive core based on Vert.x with support for reactive and imperative programming models. This feature aims to make reactive programming a first-class feature of Quarkus.
- A new non-blocking security layer that allows reactive authentication and authorization, and enables reactive security operations to integrate with Vert.x.
- Improved Spring API compatibility, including Spring Web and Spring Data JPA, as well as Spring DI.
- The Quarkus ecosystem, also called the “universe”: a set of extensions that fully support native compilation via GraalVM native image.
- Support for Java 8, 11, and 13 when using Quarkus on the JVM, with Java 11 native compilation to follow in the near future.
Red Hat says, “Looking ahead, the community is focused on adding additional extensions like enhanced Spring API compatibility, improved observability, and support for long-running transactions.”

Many users are excited about Quarkus and are looking forward to trying the stable version.

https://twitter.com/zemiak/status/1192125163472637952
https://twitter.com/loicrouchon/status/1192206531045085186
https://twitter.com/lasombra_br/status/1192114234349563905

How Quarkus brings Java into the modern world of enterprise tech
Apple shares tentative goals for WebKit 2020
Apple introduces Swift Numerics to support numerical computing in Swift
Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more
Fastly announces the next-gen edge computing services available in private beta

Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based startup providing an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful language-agnostic compute environment. This major milestone marks an evolution of Fastly’s edge computing capabilities and the company’s innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly’s Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. Developers can also create new and improved digital experiences with their own technology choices around the cloud platforms, services, and programming languages needed. Rather than have customers spend time on operational overhead, the company’s goal is to continue reinventing the way end users live, work, and play on the web. Fastly's Compute@Edge gives developers the freedom to push complex logic closer to end users.

“When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point,” explained Tyler McMullen, CTO of Fastly. “With this launch, we’re excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale.”

We had the opportunity to interview Fastly’s CTO Tyler McMullen a few months back, discussing Fastly’s Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly claims the Compute@Edge environment offers a startup time of 35.4 microseconds, 100x faster than any other solution on the market.
Additionally, Compute@Edge is powered by Lucet, Fastly's open-source WebAssembly compiler and runtime, and supports Rust as a second language in addition to Varnish Configuration Language (VCL).

Other benefits of Compute@Edge include:

- Code can be computed around the world instead of in a single region. This allows developers to reduce code execution latency and further optimize the performance of their code, without worrying about managing the underlying infrastructure.
- The unmatched speed at which the environment operates, combined with Fastly’s isolated sandboxing technology, reduces the risk of accidental data leakage. With a “burn-after-reading” approach to request memory, entire classes of vulnerabilities are eliminated.
- With Compute@Edge, developers can serve GraphQL from the network edge and deliver more personalized experiences.
- Developers can build their own customized API protection logic.
- With manifest manipulation, developers can deliver content with a “best-performance-wins” approach, like multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS, and Web Application Firewall (WAF). To date, Fastly’s serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities.

To learn more about Fastly’s edge computing and cloud services, you can visit its official blog. Developers who are interested in being a part of the private beta can sign up on this page.
Fastly SVP, Adam Denenberg on Fastly’s new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions

Savia Lobo
05 Nov 2019
3 min read
Yesterday, at Microsoft Ignite 2019 in Orlando, the company released the preview of its first full-stack, scalable, general open cloud ecosystem, ‘Azure Quantum’. For developers, Microsoft has specifically created the open-source Quantum Development Kit, which includes all of the tools and resources needed to start learning and building quantum solutions.

Azure Quantum is a set of quantum services, including pre-built solutions, software, and quantum hardware, providing developers and customers access to some of the most competitive quantum offerings in the market. For this offering, Microsoft has partnered with 1QBit, Honeywell, IonQ, and QCI.

With the Azure Quantum service, anyone can gain deeper insights about quantum computing through a series of tools and learning tutorials, such as the quantum katas. It also allows developers to write programs with Q# and the QDK and experiment with running the code against simulators and a variety of quantum hardware. Customers can also solve complex business challenges with pre-built solutions and algorithms running in Azure.

According to Wired, “Azure Quantum has similarities to a service from IBM, which has offered free and paid access to prototype quantum computers since 2016. Google, which said last week that one of its quantum processors had achieved a milestone known as “quantum supremacy” by outperforming a top supercomputer, has said it will soon offer remote access to quantum hardware to select companies.”

Microsoft’s Azure Quantum model is more like the existing computing industry, where cloud providers allow customers to choose processors from companies such as Intel and AMD, says William Hurley, CEO of startup Strangeworks. The startup offers services for programmers to build and collaborate with quantum computing tools from IBM, Google, and others.
With just a single program, users will be able to target a variety of hardware through Azure Quantum – Azure classical computing, quantum simulators and resource estimators, and quantum hardware from Microsoft's partners, as well as its future quantum system being built on a revolutionary topological qubit.

Microsoft announced on its official website that Azure Quantum will launch in private preview in the coming months. Many users are excited to try the quantum service by Azure.

https://twitter.com/Daniel_Rubino/status/1191364279339036673

To know more about Azure Quantum in detail, visit Microsoft’s official page.

Are we entering the quantum computing era? Google’s Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim
Using Qiskit with IBM QX to generate quantum circuits [Tutorial]
How to translate OpenQASM programs in IBM QX into quantum scores [Tutorial]


Developers ask for an option to disable Docker Compose from automatically reading the .env file

Bhagyashree R
18 Oct 2019
3 min read
In June this year, Jonathan Chan, a software developer, reported that Docker Compose automatically reads from .env. Since other systems also access the same file for parsing and processing variables, this was creating some confusion, resulting in broken compatibility with other .env utilities.

Docker Compose has a "docker-compose.yml" config file used for deploying, combining, and configuring multiple multi-container Docker applications. The .env file is used for putting values in the "docker-compose.yml" file. In the .env file, the default environment variables are specified in the form of key-value pairs.

“With the release of 1.24.0, the feature where Compose will no longer accept whitespace in variable names sourced from environment files (this matches the Docker CLI behavior) breaks compatibility with other .env utilities. Although my setup does not use the variables in .env for docker-compose, docker-compose now fails because the .env file does not meet docker-compose's format,” Chan explains.

This is not the first time that this issue has been reported. Earlier this year, a user opened an issue on the GitHub repo describing that after upgrading Compose to 1.24.0-rc1, its automatic parsing of the .env file was failing. “I keep export statements in my .env file so I can easily source it in addition to using it as a standard .env. In previous versions of Compose, this worked fine and didn't give me any issues, however with this new update I instead get an error about spaces inside a value,” he explained in his report.

As a solution, Chan has proposed, “I propose that you can specify an option to ignore the .env file or specify a different .env file (such as .docker.env) in the docker-compose.yml file so that we can work around projects that are already using the .env file for something else.”

This sparked a discussion on Hacker News where users also suggested a few solutions. “This is the exact class of problem that docker itself attempts to avoid.
This is why I run docker-compose inside a docker container, so I can control exactly what it has access to and isolate it. There's a guide to do so here. It has the added benefit of not making users install docker-compose itself - the only project requirement remains docker,” a user commented.

Another user recommended, “You can run docker-compose.yml in any folder in the tree but it only reads the .env from cwd. Just CD into someplace and run docker-compose.”

Some users also pointed out the lack of an authentication mechanism in Docker Hub. “Docker Hub still does not have any form of 2FA. Even SMS 2FA would be something great at this point. As an attacker, I would put a great deal of focus on attacking a company’s registries on Docker Hub. They can’t have 2FA, so the work/reward ratio is quite high,” a user commented. Others recommended setting up a time-based one-time password (TOTP) instead.

Check out the reported issue on the GitHub repository.

Amazon EKS Windows Container Support is now generally available
GKE Sandbox: A gVisor based feature to increase security and isolation in containers
6 signs you need containers
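The compatibility break the users describe is easy to reproduce with a minimal parser sketch: in strict mode (mirroring the behavior attributed to Compose 1.24.0) any whitespace in a variable name, including the name produced by an `export KEY=...` line, is rejected, while a lenient mode tolerates the `export` prefix the way many other .env utilities do. This is an illustrative sketch, not Compose's actual parser:

```python
import re

def parse_env(text, strict=True):
    """Parse KEY=VALUE lines from .env-style text.

    In strict mode, whitespace in a variable name (including the name
    produced by an 'export KEY=...' line) raises an error, mirroring
    the Compose 1.24.0 behavior described above; lenient mode strips
    the 'export ' prefix first. Illustrative only, not Compose's code.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, value = line.partition("=")
        if not strict:
            name = re.sub(r"^export\s+", "", name)  # tolerate shell-style exports
        if re.search(r"\s", name):
            raise ValueError(f"whitespace in variable name: {name!r}")
        env[name] = value
    return env

sample = "export DB_HOST=localhost\nDB_PORT=5432"
print(parse_env(sample, strict=False))  # -> {'DB_HOST': 'localhost', 'DB_PORT': '5432'}
```

With `strict=True` (the default), the same `sample` raises a `ValueError` on the `export DB_HOST` name, which is exactly the failure mode the bug reports describe.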
Vincy Davis
19 Sep 2019
4 min read

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

Yesterday, the Kubernetes team announced the availability of Kubernetes 1.16, which consists of 31 enhancements: 8 moving to stable, 8 in beta, and 15 in alpha. This release introduces a new alpha feature called Endpoint Slices, intended as a scalable alternative to Endpoints resources. Kubernetes 1.16 also contains major enhancements such as the general availability of custom resources, overhauled metrics, and volume extensions. The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs are deprecated in this version.

This is Kubernetes' third release this year. The previous version, Kubernetes 1.15, was released three months ago. It delivered features such as extensibility around core Kubernetes APIs as well as cluster lifecycle stability and usability improvements.

Introducing Endpoint Slices in Kubernetes 1.16

The main goal of Endpoint Slices is to increase the scalability of Kubernetes Services. With the existing Endpoints API, a single resource had to include all the network endpoints of a Service, making the corresponding Endpoints resources large and costly. Moreover, whenever an Endpoints resource was updated, every piece of code watching it required a full copy of the resource, which becomes a tedious process in a big cluster. With Endpoint Slices, the network endpoints for a Service are split into multiple resources, decreasing the amount of data required for updates. By default, each Endpoint Slice is restricted to 100 endpoints.

The other goal of Endpoint Slices is to provide extensible and useful resources for a variety of implementations. Endpoint Slices will also provide flexibility for address types. The blog post states, "An initial use case for multiple addresses would be to support dual stack endpoints with both IPv4 and IPv6 addresses." As the feature is available in alpha only, it is not enabled by default in Kubernetes 1.16.
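As an illustration, an Endpoint Slice manifest might look like the sketch below. This is a hypothetical example (names and addresses are made up), and the field layout follows the shape the discovery.k8s.io API settled on in later beta versions; the alpha details may differ slightly.

```yaml
apiVersion: discovery.k8s.io/v1alpha1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # ties this slice to its owning Service
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    port: 80
    protocol: TCP
endpoints:
  - addresses: ["10.1.2.3"]
    conditions:
      ready: true
```

A Service with thousands of endpoints would be represented by many such slices (up to 100 endpoints each by default), so an update touches only one small resource instead of one giant Endpoints object.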
Major enhancements in Kubernetes 1.16

General availability of Custom Resources

With Kubernetes 1.16, CustomResourceDefinitions (CRDs) are generally available under apiextensions.k8s.io/v1, as the API now incorporates the lessons of API evolution in Kubernetes. CRDs were previously available in beta and are widely used as a Kubernetes extensibility mechanism. The CRD v1 API supports 'defaulting' by default; when defaulting is combined with the CRD conversion mechanism, it becomes possible to build stable APIs over time. The blog post adds, "Updates to the CRD API won't end here. We have ideas for features like arbitrary subresources, API group migration, and maybe a more efficient serialization protocol, but the changes from here are expected to be optional and complementary in nature to what's already here in the GA API."

Overhauled metrics

In earlier versions, Kubernetes extensively used a global metrics registry to register exposed metrics. In this latest version, the metrics registry has been reimplemented, making Kubernetes metrics more stable and transparent.

Volume extension

This release contains many enhancements to volumes and volume modifications. Volume resizing support in the Container Storage Interface (CSI) specs has moved to beta, allowing any CSI spec volume plugin to be resizable.

Additional Windows enhancements in Kubernetes 1.16

The workload identity option for Windows containers has moved to beta, so Windows workloads can now gain exclusive access to external resources. New alpha support is added to kubeadm, which can be used to prepare and add a Windows node to a cluster. New CSI plugin support is introduced in alpha.

Interested users can download Kubernetes 1.16 from GitHub. Check out the Kubernetes blog page for more information.
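A minimal sketch of a CRD using the now-GA apiextensions.k8s.io/v1 API shows the defaulting support mentioned above. The group, kind, and field names here are illustrative, not from any real project:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  default: 1   # defaulting, supported in the v1 API
```

A Widget created without spec.replicas would be persisted with replicas: 1, which is what makes it safe to evolve the API later without breaking existing clients.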
Other interesting news in Kubernetes

• The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
• Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
• CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed

Fatema Patrawala
02 Sep 2019
5 min read

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Last Friday, a team at Kubernetes announced the release of etcd v3.4. etcd 3.4 focuses on stability, performance, and ease of operation. It includes features like pre-vote and non-voting members, along with improvements to the storage backend and client balancer.

Key features and improvements in etcd v3.4

Better backend storage

etcd v3.4 includes a number of performance improvements for large-scale Kubernetes workloads. In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there was no write (e.g. "read-only range request ... took too long to execute"). Previously, the storage backend commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit does not block reads, which improves long-running read transaction performance.

The team has further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. They also ran the Kubernetes 5000-node scalability test on GCE with this change and observed similar improvements.

Improved Raft voting process

The etcd server implements the Raft consensus algorithm for data replication. Raft is a leader-based protocol: data is replicated from leader to followers; a follower forwards proposals to the leader, and the leader decides what to commit. The leader persists and replicates an entry once it has been agreed on by a quorum of the cluster. The cluster members elect a single leader, and all other members become followers. The elected leader periodically sends heartbeats to its followers to maintain its leadership, and expects responses from each follower to keep track of its progress.
In its simplest form, a Raft leader steps down to a follower when it receives a message with a higher term, without any further cluster-wide health checks. This behavior can affect overall cluster availability. For instance, a flaky (or rejoining) member drops in and out and starts a campaign. This member ends up with higher terms, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives such a message with a higher term, it reverts to follower. This becomes more disruptive when there is a network partition: whenever the partitioned node regains its connectivity, it can possibly trigger a leader re-election.

To address this issue, etcd's Raft implementation introduces a new node state, pre-candidate, with the pre-vote feature. The pre-candidate first asks other servers whether it is up-to-date enough to get votes. Only if it can get votes from the majority does it increment its term and start an election. This extra phase improves the robustness of leader election in general, and helps the leader remain stable as long as it maintains connectivity with a quorum of its peers.

Introducing a new Raft non-voting member, "Learner"

The challenge with membership reconfiguration is that it often leads to quorum size changes, which are prone to cluster unavailability. Even if it does not alter the quorum, clusters with membership changes are more likely to experience other underlying problems. To address these failure modes, etcd introduced a new node state, "Learner", which joins the cluster as a non-voting member until it catches up with the leader's logs. This means the learner still receives all updates from the leader, while it does not count towards the quorum, which the leader uses to evaluate peer activeness. The learner serves only as a standby node until promoted. This relaxed requirement on quorum provides better availability during membership reconfiguration, as well as operational safety.
Improvements to client balancer failover logic

etcd is designed to tolerate various system and network faults. By design, even if one node goes down, the cluster "appears" to be working normally, by providing one logical cluster view of multiple servers. But this does not guarantee the liveness of the client. Thus, the etcd client has implemented a different set of intricate protocols to guarantee its correctness and high availability under faulty conditions.

Historically, the etcd client balancer relied heavily on the old gRPC interface: every gRPC dependency upgrade broke client behavior. A majority of development and debugging efforts were devoted to fixing those client behavior changes. As a result, its implementation had become overly complicated, with bad assumptions about server connectivity. The primary goal in this release was to simplify the balancer failover logic in the etcd v3.4 client: instead of maintaining a list of unhealthy endpoints, the client simply moves on to the next endpoint whenever it gets disconnected from the current one.

To know more about this release, check out the Changelog page on GitHub.

What's new in cloud and networking this week?

• VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
• The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
• Pivotal open sources kpack, a Kubernetes-native image build service