
Tech News - Cloud & Networking

376 Articles

Top life hacks for prepping for your IT certification exam

Ronnie Wong
14 Oct 2021
5 min read
I remember deciding to pursue my first IT certification, the CompTIA A+. I had signed up for a class that lasted one week per exam, meaning two weeks in total. We reviewed so much material during that time that the task of preparing for the certification seemed overwhelming. Even with an instructor, the scope of the material was a challenge.

Mixed messages

Some days I would hear from others how difficult the exam was; on other days, I would hear how easy it was. I would also hear advice about topics I should study more, and even about topics I hadn't thought to study. These conflicting comments only increased my anxiety as my exam date drew closer. No matter what I read, studied, or heard from people about the exam, I felt like I was not prepared to pass it. Overwhelmed by the sheer volume of material, anxious from the comments of others, and feeling like I hadn't done enough preparation, when I finally passed the exam it didn't bring me joy so much as relief that I had survived it.

Then it was time to prepare for the second exam, and those same feelings came back, but this time with a little more confidence that I could pass. Since that first A+ exam, I have not only passed more exams, I have also helped others prepare successfully for many certification exams.

Exam hacks

Below is a list of hacks that have helped not only me but also others to prepare successfully for exams.

Start with the exam objectives and keep a copy of them close by for reference during your whole preparation time. If you haven't downloaded them (most are on the exam vendor's site), do it now. This is your verified guide to the topics that will appear on the exam, and it will give you the confidence to ignore others when they tell you what to study. If it's not in the exam objectives, it is more than likely not on the exam. There is never a 100% guarantee, but whatever they ask will at least be related to the topics found in the objectives, not in addition to them.

To sharpen the focus of your preparation, refer to the exam objectives again. You may see them as just a list, but they are much more: the exam objectives set the scope of what to study. How? Pay attention to the verbs used in the objectives. The objectives never give you a topic without a verb that tells you how deeply you should study it, e.g., "configure and verify HSRP." You are not only learning what HSRP is; you should also know where and how to configure it and verify that it is working successfully. If an objective reads "describe the hacking process," you know the topic is more conceptual. A conceptual topic with that verb requires you to define it and put it in context.

The exam objectives also show the weighting of topics on the exam. Vendors break the objective domain down into percentages; for example, you may find that one topic accounts for 40% of the exam. This helps you predict which topics you will see more questions about, so you know which areas you're more likely to encounter than others. You may also discover that you already know a good percentage of the exam. That's a confidence booster, and that mindset is key in your preparation.

A good study session begins and ends with a win. You can easily sabotage your study by picking a topic that is too difficult to get through in a single session. In the same way, ending a study session feeling like you didn't learn anything is disheartening and demotivating. How do we make sure a study session begins and ends with a win? Create a study session with three topics. Begin with an easier topic to review or learn, then choose a more challenging topic, and end your session with another easier topic. Following this model, do a minimum of one and a maximum of two sessions a day.

Put your phone away. Set your email, notifications, instant messaging, and social media to do not disturb during your study session. Good study time is uninterrupted, except on your very specific and short breaks. It's amazing how much more you can accomplish when your dedicated study time is free of beeps, rings, and notifications.

Prep is king

Preparing for a certification exam is hard enough given the quantity of material and the added stress of sitting for an exam and passing it. You can make your preparation more effective by using the objectives to guide you, putting a motivating session plan in place, and reducing distractions during your dedicated study time. These are commonly overlooked preparation hacks that will benefit you in your next certification exam.

These are just some handy hints for passing IT certification exams. What tips would you give? Have you recently completed a certification, or are you planning on taking one soon? Packt would love to hear your thoughts, so why not take the following survey? The first 200 respondents will get a free ebook of their choice from the Packt catalogue.*

*To receive the ebook, you must supply an email address. The free ebook requires a no-charge account creation with Packt.


New – Amazon RDS on Graviton2 Processors from AWS News Blog

Matthew Emerick
15 Oct 2020
3 min read
I recently wrote a post to announce the availability of the M6g, R6g, and C6g families of instances on Amazon Elastic Compute Cloud (EC2). These instances offer a better cost-performance ratio than their x86 counterparts. They are based on AWS-designed AWS Graviton2 processors, utilizing 64-bit Arm Neoverse N1 cores.

Starting today, you can also benefit from better cost-performance for your Amazon Relational Database Service (RDS) databases, compared to the previous M5 and R5 generation of database instance types, with the availability of AWS Graviton2 processors for RDS. You can choose between the M6g and R6g instance families and three database engines (MySQL 8.0.17 and higher, MariaDB 10.4.13 and higher, and PostgreSQL 12.3 and higher). M6g instances are ideal for general-purpose workloads. R6g instances offer 50% more memory than their M6g counterparts and are ideal for memory-intensive workloads, such as big data analytics.

Graviton2 instances provide up to 35% performance improvement and up to 52% price-performance improvement for RDS open source databases, based on internal testing of workloads with varying compute and memory requirements. The Graviton2 instance family includes several new performance optimizations, such as larger L1 and L2 caches per core, higher Amazon Elastic Block Store (EBS) throughput than comparable x86 instances, fully encrypted RAM, and many others as detailed on this page. You can benefit from these optimizations with minimal effort, by provisioning or migrating your RDS instances today.

RDS instances are available in multiple configurations, starting with 2 vCPUs, with 8 GiB of memory for M6g and 16 GiB for R6g, and up to 10 Gbps of network bandwidth, giving you new entry-level general purpose and memory optimized instances. The table below shows the list of instance sizes available to you:

Instance Size | vCPU | Memory (GiB) M6g / R6g | Dedicated EBS Bandwidth (Mbps) | Network Bandwidth (Gbps)
large | 2 | 8 / 16 | Up to 4750 | Up to 10
xlarge | 4 | 16 / 32 | Up to 4750 | Up to 10
2xlarge | 8 | 32 / 64 | Up to 4750 | Up to 10
4xlarge | 16 | 64 / 128 | 4750 | Up to 10
8xlarge | 32 | 128 / 256 | 9000 | 12
12xlarge | 48 | 192 / 384 | 13500 | 20
16xlarge | 64 | 256 / 512 | 19000 | 25

Let's Start Your First Graviton2 Based Instance

To start a new RDS instance, I use the AWS Management Console or the AWS Command Line Interface (CLI), just like usual, and select one of the db.m6g or db.r6g instance types (this page in the documentation has all the details). Using the CLI, it would be:

aws rds create-db-instance --region us-west-2 --db-instance-identifier $DB_INSTANCE_NAME --db-instance-class db.m6g.large --engine postgres --engine-version 12.3 --allocated-storage 20 --master-username $MASTER_USER --master-user-password $MASTER_PASSWORD

The CLI confirms with:

{
  "DBInstance": {
    "DBInstanceIdentifier": "newsblog",
    "DBInstanceClass": "db.m6g.large",
    "Engine": "postgres",
    "DBInstanceStatus": "creating",
    ...
  }
}

Migrating to Graviton2 instances is easy: in the AWS Management Console, I select my database and click Modify, then I select the new DB instance class. Or, using the CLI, I can use the modify-db-instance API call. There is a short service interruption when you switch instance types. By default, the modification will happen during your next maintenance window, unless you enable the ApplyImmediately option.

You can provision new or migrate existing Amazon Relational Database Service (RDS) instances to Graviton2 in all regions where EC2 M6g and R6g are available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt) AWS Regions.

As usual, let us know your feedback on the AWS Forum or through your usual AWS contact.

-- seb
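To make the CLI migration path above concrete, here is a minimal sketch; the instance identifier and target class are illustrative, and --apply-immediately maps to the ApplyImmediately option, triggering the change outside the maintenance window:

# Move an existing RDS instance to a Graviton2 (db.m6g) instance class.
# Expect a short service interruption when the instance type switches.
aws rds modify-db-instance \
  --region us-west-2 \
  --db-instance-identifier newsblog \
  --db-instance-class db.m6g.large \
  --apply-immediately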


Making your new normal safer with reCAPTCHA Enterprise from Cloud Blog

Matthew Emerick
15 Oct 2020
4 min read
Traffic from both humans and bots is at record highs. Since March 2020, reCAPTCHA has seen a 40% increase in usage as businesses and services that previously saw most of their users in person have shifted to online-first or online-only. This increased demand for online services and transactions can expose businesses to various forms of online fraud and abuse, and without dedicated teams familiar with these attacks and how to stop them, we've seen hundreds of thousands of new websites come to reCAPTCHA for visibility and protection.

During COVID-19, reCAPTCHA is playing a critical role helping global public sector agencies distribute masks and other supplies, provide up-to-date information to constituents, and secure user accounts from distributed attacks. The majority of these agencies are using the score-based detection that comes with reCAPTCHA v3 or reCAPTCHA Enterprise instead of showing the visual or audio challenges found in reCAPTCHA v2. This reduces friction for users and also gives teams flexibility in how to take action on bot requests and fraudulent activity.

reCAPTCHA Enterprise can also help protect your business. Whether you're moving operations online for the first time or have your own team of security engineers, reCAPTCHA can help you detect new web attacks, understand the threats, and take action to keep your users safe. Many enterprises lack visibility into parts of their site, and adding reCAPTCHA helps expose costly attacks before they happen. The console shows the risk associated with each action to help your business stay ahead.

Unlike many other abuse- and fraud-fighting platforms, reCAPTCHA doesn't rely on invasive fingerprinting. Those techniques can often penalize privacy-conscious users who try to keep themselves safe with tools such as private networks, and they conflict with browsers' push for privacy by default. Instead, we've shifted our focus to in-session behavioral risk analysis, detecting fraudulent behavior rather than caring about who or what is behind the network connection. We've found this to be extremely effective in detecting attacks in a world where adversaries control millions of IP addresses and compromised devices, and regularly pay real humans to manually bypass detections.

Since we released reCAPTCHA Enterprise last year, we've been able to work more closely with existing and new customers, collaborating on abuse problems and determining best practices for specific use cases, such as account takeovers, carding, and scraping. The more granular score distribution that comes with reCAPTCHA Enterprise gives customers more fine-tuned control over when and how to take action. reCAPTCHA Enterprise learns how to score requests specific to the use case, but the score is also best used in a context-specific way. Our most successful customers use features that delay feedback to adversaries, such as limiting the capabilities of suspicious accounts, requiring additional verification for sensitive purchases, and manually moderating content likely generated by a bot.

We also recently released a report by ESG that evaluated the effectiveness of reCAPTCHA Enterprise as deployed on a real-world hyperscale website to protect against automated credential stuffing and account takeover attacks. ESG noted: "Approximately two months after reCAPTCHA Enterprise deployment, login attempts dropped by approximately 90% while the registered user base grew organically."

We're continually developing new types of signals to detect abuse at scale. Across the four million sites with reCAPTCHA protections enabled, we defend everything from accounts, to e-commerce transactions, to food distribution after disasters, to voting for your favorite celebrity. Now more than ever, we're proud to be protecting our customers and their users.

To see reCAPTCHA Enterprise in action, check out our latest video. To get started with reCAPTCHA Enterprise, contact our sales team.
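For teams wiring the score-based detection into a backend, a rough sketch of fetching a score with the reCAPTCHA Enterprise REST API might look like the following; the endpoint and field names are written from memory and should be checked against the current documentation, and PROJECT_ID, API_KEY, the site key, and the token are placeholders:

# Create an assessment for a token returned by the client-side reCAPTCHA integration.
# The response is expected to include riskAnalysis.score, roughly 0.0 (likely abusive) to 1.0 (likely legitimate).
curl -s -X POST \
  "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments?key=API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "event": {
          "token": "TOKEN_FROM_CLIENT",
          "siteKey": "RECAPTCHA_SITE_KEY",
          "expectedAction": "login"
        }
      }'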


New Dataproc optional components support Apache Flink and Docker from Cloud Blog

Matthew Emerick
15 Oct 2020
5 min read
Google Cloud's Dataproc lets you run native Apache Spark and Hadoop clusters on Google Cloud in a simpler, more cost-effective way. In this blog, we will talk about our newest optional components available in Dataproc's Component Exchange: Docker and Apache Flink.

Docker container on Dataproc

Docker is a widely used container technology. Since it's now a Dataproc optional component, Docker daemons can be installed on every node of the Dataproc cluster. This gives you the ability to install containerized applications and interact with Hadoop clusters easily on the cluster.

In addition, Docker is also critical to supporting these features:

Running containers with YARN
Portable Apache Beam jobs

Running containers on YARN allows you to manage the dependencies of your YARN application separately, and also allows you to create containerized services on YARN. Get more details here. Portable Apache Beam packages jobs into Docker containers and submits them to the Flink cluster. Find more detail about Beam portability.

The Docker optional component is also configured to use Google Container Registry, in addition to the default Docker registry. This lets you use container images managed by your organization. Here is how to create a Dataproc cluster with the Docker optional component:

gcloud beta dataproc clusters create <cluster-name> --optional-components=DOCKER --image-version=1.5

When you run a Docker application, the log is streamed to Cloud Logging using the gcplogs driver. If your application does not depend on any Hadoop services, check out Kubernetes and Google Kubernetes Engine to run containers natively. For more on using Dataproc, check out our documentation.

Apache Flink on Dataproc

Among streaming analytics technologies, Apache Beam and Apache Flink stand out. Apache Flink is a distributed processing engine using stateful computation. Apache Beam is a unified model for defining batch and streaming processing pipelines. Using Apache Flink as an execution engine, you can also run Apache Beam jobs on Dataproc, in addition to Google's Cloud Dataflow service. Flink, and running Beam on Flink, are suitable for large-scale, continuous jobs, and provide:

A streaming-first runtime that supports both batch processing and data streaming programs
A runtime that supports very high throughput and low event latency at the same time
Fault tolerance with exactly-once processing guarantees
Natural back-pressure in streaming programs
Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
Integration with YARN and other components of the Apache Hadoop ecosystem

Our Dataproc team here at Google Cloud recently announced that the Flink Operator on Kubernetes is now available. It allows you to run Apache Flink jobs in Kubernetes, bringing the benefits of reduced platform dependency and better hardware efficiency.

Basic Flink Concepts

A Flink cluster consists of a Flink JobManager and a set of Flink TaskManagers. Like similar roles in other distributed systems such as YARN, the JobManager has responsibilities such as accepting jobs, managing resources, and supervising jobs. TaskManagers are responsible for running the actual tasks.

When running Flink on Dataproc, we use YARN as the resource manager for Flink. You can run Flink jobs in two ways: as a job cluster or a session cluster. For a job cluster, YARN creates a JobManager and TaskManagers for the job and destroys the cluster once the job is finished. For a session cluster, YARN creates a JobManager and a few TaskManagers; the cluster can serve multiple jobs until it is shut down by the user.

How to create a cluster with Flink

Use this command to get started:

gcloud beta dataproc clusters create <cluster-name> --optional-components=FLINK --image-version=1.5

How to run a Flink job

After a Dataproc cluster with Flink starts, you can submit your Flink jobs to YARN directly using a Flink job cluster. After accepting the job, Flink will start a JobManager and slots for the job in YARN. The Flink job will run in the YARN cluster until finished, and the JobManager will then be shut down. Job logs will be available in the regular YARN logs. Try this command to run a word-counting example:

The Dataproc cluster will not start a Flink session cluster by default. Instead, Dataproc creates the script "/usr/bin/flink-yarn-daemon," which will start a Flink session.

If you want to start a Flink session when Dataproc is created, use the metadata key to allow it:

If you want to start the Flink session after Dataproc is created, you can run the following command on the master node:

Submit jobs to that session cluster. You'll need to get the Flink JobManager URL:

How to run a Java Beam job

It is very easy to run an Apache Beam job written in Java. There is no extra configuration needed. As long as you package your Beam job into a JAR file, you do not need to configure anything to run Beam on Flink. This is the command you can use:

How to run a Python Beam job

Beam jobs written in Python use a different execution model. To run them in Flink on Dataproc, you will also need to enable the Docker optional component. Here's how to create a cluster:

You will also need to install the Python libraries needed by Beam, such as apache_beam and apache_beam[gcp]. You can pass in a Flink master URL to let the job run in a session cluster. If you leave the URL out, you need to use the job cluster mode to run the job. After you've written your Python job, simply run it to submit:

Learn more about Dataproc.
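As a rough, unofficial sketch of what these steps can look like in practice, the commands below show cluster creation, the word-count example, and starting a session cluster; the example JAR path, parallelism flag, and the metadata key are assumptions based on typical Dataproc 1.5 images and may differ on your cluster:

# Create a cluster with the Flink optional component and ask it to start a YARN session at boot
# (the metadata key is an assumption; verify it in the Dataproc Flink documentation).
gcloud beta dataproc clusters create my-flink-cluster \
  --optional-components=FLINK \
  --image-version=1.5 \
  --metadata flink-start-yarn-session=true

# On the master node: run the bundled word-count example as a per-job Flink cluster on YARN.
flink run -m yarn-cluster -p 4 /usr/lib/flink/examples/batch/WordCount.jar

# Or start a session cluster later using the script mentioned in the post, then submit jobs to it.
. /usr/bin/flink-yarn-daemon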


Prevent planned downtime during the holiday shopping season with Cloud SQL from Cloud Blog

Matthew Emerick
15 Oct 2020
3 min read
Routine database maintenance is a way of life. Updates keep your business running smoothly and securely. And with a managed service like Cloud SQL, your databases automatically receive the latest patches and updates, with significantly less downtime. But we get it: nobody likes downtime, no matter how brief.

That's why we're pleased to announce that Cloud SQL, our fully managed database service for MySQL, PostgreSQL, and SQL Server, now gives you more control over when your instances undergo routine maintenance. Cloud SQL is introducing maintenance deny period controls. With maintenance deny periods, you can prevent automatic maintenance from occurring during a period of up to 90 days.

This can be especially useful for Cloud SQL retail customers about to kick off their busiest time of year, with Black Friday and Cyber Monday just around the corner. The holiday shopping season is a time of peak load that requires heightened focus on infrastructure stability, and any upgrades can put that at risk. By setting a maintenance deny period from mid-October to mid-January, these businesses can prevent planned upgrades from Cloud SQL during this critical time.

Understanding Cloud SQL maintenance

Before describing these new controls, let's answer a few questions we often hear about the automatic maintenance that Cloud SQL performs.

What is automatic maintenance?

To keep your databases stable and secure, Cloud SQL automatically patches and updates your database instance (MySQL, Postgres, and SQL Server), including the underlying operating system. To perform maintenance, Cloud SQL must temporarily take your instances offline.

What is a maintenance window?

Maintenance windows allow you to control when maintenance occurs. Cloud SQL offers maintenance windows to minimize the impact of planned maintenance downtime on your applications and your business. Defining the maintenance window lets you set the hour and day when an update occurs, such as only when database activity is low (for example, on Saturday at midnight). Additionally, you can control the order of updates for your instance relative to other instances in the same project ("Earlier" or "Later"). Earlier timing is useful for test instances, allowing you to see the effects of an update before it reaches your production instances.

What are the new maintenance deny period controls?

You can now set a single deny period, configurable from 1 to 90 days, each year. During the deny period, Cloud SQL will not perform maintenance that causes downtime on your database instance. Deny periods can be set to reduce the likelihood of downtime during the busy holiday season, your next product launch, end-of-quarter financial reporting, or any other important time for your business. Paired with Cloud SQL's existing maintenance notification and rescheduling functionality, deny periods give you even more flexibility and control. After receiving a notification of upcoming maintenance, you can reschedule ad hoc, or, if you want to prevent maintenance for longer, set a deny period.

Getting started with Cloud SQL's new maintenance controls

Review our documentation to learn more about maintenance deny periods and, when you're ready, start configuring them for your database instances.

What's next for Cloud SQL

Support for additional maintenance controls continues to be a top request from users. These new deny periods are an addition to the list of existing maintenance controls for Cloud SQL. Have more ideas? Let us know what other features and capabilities you need with our Issue Tracker and by joining the Cloud SQL discussion group. We're glad you're along for the ride, and we look forward to your feedback!
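A minimal sketch of configuring a deny period from the command line, assuming the gcloud deny-maintenance-period flags behave as described; the dates and instance name are illustrative:

# Block planned maintenance from mid-October to mid-January for the instance "my-instance".
gcloud sql instances patch my-instance \
  --deny-maintenance-period-start-date=2020-10-15 \
  --deny-maintenance-period-end-date=2021-01-15 \
  --deny-maintenance-period-time=00:00:00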


Cache is king: Announcing lower pricing for Cloud CDN from Cloud Blog

Matthew Emerick
14 Oct 2020
2 min read
Organizations all over the world rely on Cloud CDN for fast, reliable web and video content delivery. Now, we're making it even easier for you to take advantage of our global network and cache infrastructure by reducing the cost of Cloud CDN for your content delivery going forward.

First, we're reducing the price of cache fill (content fetched from your origin) charges across the board, by up to 80%. You still get the benefit of our global private backbone for cache fill, ensuring continued high performance at a reduced cost. We've also removed cache-to-cache fill charges and cache invalidation charges for all customers going forward.

This price reduction, along with our recent introduction of a new set of flexible caching capabilities, makes it even easier to use Cloud CDN to optimize the performance of your applications. Cloud CDN can now automatically cache web assets, video content, or software downloads; you can control exactly how they should be cached and directly set response headers to help meet web security best practices.

You can review our updated pricing in our public documentation, and customers egressing over 1 PB per month should reach out to our sales team to discuss commitment-based discounts as part of your migration to Google Cloud. To read more about Cloud CDN, or to begin using it, start here.
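As a hedged sketch of the flexible caching controls mentioned above, enabling CDN and automatic caching of static content on a backend service could look roughly like this; the flag names are believed current for gcloud backend services but should be verified, and the service name and header are placeholders:

# Enable Cloud CDN on an existing global backend service, cache static responses automatically,
# and attach a security-related custom response header.
gcloud compute backend-services update my-backend-service \
  --global \
  --enable-cdn \
  --cache-mode=CACHE_ALL_STATIC \
  --default-ttl=3600 \
  --custom-response-header='X-Content-Type-Options: nosniff'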

Zone Redundancy for Azure Cache for Redis now in preview from Microsoft Azure Blog > Announcements

Matthew Emerick
14 Oct 2020
3 min read
Between waves of pandemics, hurricanes, and wildfires, you don't need cloud infrastructure adding to your list of worries this year. Fortunately, there has never been a better time to ensure your Azure deployments stay resilient. Availability zones are one of the best ways to mitigate risks from outages and disasters. With that in mind, we are announcing the preview for zone redundancy in Azure Cache for Redis.

Availability Zones on Azure

Azure Availability Zones are geographically isolated datacenter locations within an Azure region, providing redundant power, cooling, and networking. By maintaining a physically separate set of resources with the low latency from remaining in the same region, Azure Availability Zones provide a high availability solution that is crucial for businesses requiring resiliency and business continuity.

Redundancy options in Azure Cache for Redis

Azure Cache for Redis is increasingly becoming critical to our customers' data infrastructure. As a fully managed service, Azure Cache for Redis provides various high availability options. By default, caches in the standard or premium tier have built-in replication with a two-node configuration: a primary and a replica hosting two identical copies of your data. New in preview, Azure Cache for Redis can now support up to four nodes in a cache distributed across multiple availability zones. This update can significantly enhance the availability of your Azure Cache for Redis instance, giving you greater peace of mind and hardening your data architecture against unexpected disruption.

High Availability for Azure Cache for Redis

The new redundancy features deliver better reliability and resiliency. First, this update expands the total number of replicas you can create. You can now implement up to three replica nodes in addition to the primary node. Having more replicas generally improves resiliency (even if they are in the same availability zone) because of the additional nodes backing up the primary. Even with more replicas, a datacenter-wide outage can still disrupt your application. That's why we're also enabling zone redundancy, allowing replicas to be located in different availability zones. Replica nodes can be placed in one or multiple availability zones, with failover automatically occurring if needed across availability zones. With zone redundancy, your cache can handle situations where the primary zone is knocked offline due to issues like floods, power outages, or even natural disasters. This increases availability while maintaining the low latency required from a cache. Zone redundancy is currently only available on the premium tier of Azure Cache for Redis, but it will also be available on the enterprise and enterprise flash tiers when the preview is released.

Industry-leading service level agreement

Azure Cache for Redis already offers an industry-standard 99.9 percent service level agreement (SLA). With the addition of zone redundancy, the availability increases to a 99.95 percent level, allowing you to meet your availability needs while keeping your application nimble and scalable.

Adding zone redundancy to Azure Cache for Redis is a great way to promote availability and peace of mind during turbulent situations. Learn more in our documentation and give it a try today. If you have any questions or feedback, please contact us at AzureCache@microsoft.com.
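For a sense of what provisioning a zone-redundant premium cache might look like with the Azure CLI, here is a minimal sketch; the --zones and --replicas-per-master parameters are assumptions based on the preview described above, and the names and region are placeholders:

# Premium-tier cache with the primary and replicas spread across availability zones 1 and 2,
# using two replicas in addition to the primary node.
az redis create \
  --name my-zr-cache \
  --resource-group my-rg \
  --location eastus2 \
  --sku Premium \
  --vm-size P1 \
  --zones 1 2 \
  --replicas-per-master 2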


New – Redis 6 Compatibility for Amazon ElastiCache from AWS News Blog

Matthew Emerick
07 Oct 2020
5 min read
Since the launch of Redis 5.0 compatibility for Amazon ElastiCache, there have been lots of improvements to Amazon ElastiCache for Redis, including support for upstream versions such as 5.0.6. Earlier this year, we announced Global Datastore for Redis, which lets you replicate a cluster in one region to clusters in up to two other regions. Recently we improved your ability to monitor your Redis fleet by enabling 18 additional engine and node-level CloudWatch metrics. We also added support for resource-level permission policies, allowing you to assign AWS Identity and Access Management (IAM) principal permissions to specific ElastiCache resources.

Today, I am happy to announce Redis 6 compatibility for Amazon ElastiCache for Redis. This release brings several new and important features to Amazon ElastiCache for Redis:

Managed Role-Based Access Control – Amazon ElastiCache for Redis 6 now provides you with the ability to create and manage users and user groups that can be used to set up Role-Based Access Control (RBAC) for Redis commands. You can now simplify your architecture while maintaining security boundaries by having several applications use the same Redis cluster without being able to access each other's data. You can also take advantage of granular access control and authorization to create administration and read-only user groups. Amazon ElastiCache enhances the new Access Control Lists (ACL) introduced in open source Redis 6 to provide a managed RBAC experience, making it easy to set up access control across several Amazon ElastiCache for Redis clusters.

Client-Side Caching – Amazon ElastiCache for Redis 6 comes with server-side enhancements to deliver efficient client-side caching to further improve your application performance. Redis clusters now support client-side caching by tracking client requests and sending invalidation messages for data stored on the client. In addition, you can also take advantage of a broadcast mode that allows clients to subscribe to a set of notifications from Redis clusters.

Significant Operational Improvements – This release also includes several enhancements that improve application availability and reliability. Specifically, Amazon ElastiCache has improved replication under low memory conditions, especially for workloads with medium or large keys, by reducing latency and the time it takes to perform snapshots. Open source Redis enhancements include improvements to the expiry algorithm for faster eviction of expired keys and various bug fixes.

Note that open source Redis 6 also announced support for encryption in transit, a capability that is already available in Amazon ElastiCache for Redis 4.0.10 onwards. This release of Amazon ElastiCache for Redis 6 does not impact Amazon ElastiCache for Redis' existing support for encryption in transit.

In order to apply RBAC to a new or existing Redis 6 cluster, we first need to ensure you have a user and user group created. We'll review the process to do this below.

Using Role-Based Access Control – How it works

As an alternative to authenticating users with the Redis AUTH command, Amazon ElastiCache for Redis 6 offers Role-Based Access Control (RBAC). With RBAC, you create users and assign them specific permissions via an access string. To create, modify, and delete users and user groups, you use the User Management and User Group Management sections in the ElastiCache console. ElastiCache will automatically configure a default user with user ID and user name "default", and you can then add it or newly created users to new groups in User Group Management.

If you want to replace the default user with your own password and access settings, you need to create a new user with the username set to "default" and then swap it with the original default user. We recommend using your own strong password for a default user. The following example shows how to swap the original default user with another default user that has a modified access string, via the AWS CLI.

Create a new user named "default" with a custom password and access string:

$ aws elasticache create-user --user-id "new-default-user" --user-name "default" --engine "REDIS" --passwords "a-str0ng-pa))word" --access-string "off +get ~keys*"

Create a user group and add the user you created previously:

$ aws elasticache create-user-group --user-group-id "new-default-group" --engine "REDIS" --user-ids "default"

Swap the new default user with the original default user:

$ aws elasticache modify-user-group --user-group-id "new-default-group" --user-ids-to-add "new-default-user" --user-ids-to-remove "default"

You can also modify a user's password or change its access permissions using the modify-user command, or remove a specific user using the delete-user command; a deleted user is removed from any user groups to which it belongs. Similarly, you can modify a user group by adding new users and/or removing current users using the modify-user-group command, or delete a user group using the delete-user-group command. Note that the user group itself, not the users belonging to the group, will be deleted.

Once you have created a user group and added users, you can assign the user group to a replication group, or migrate between Redis AUTH and RBAC. For more information, see the documentation.

Redis 6 cluster for ElastiCache – Getting Started

As usual, you can use the ElastiCache Console, CLI, APIs, or a CloudFormation template to create a new Redis 6 cluster. I'll use the Console, choose Redis from the navigation pane, and click Create with the following settings: select the "Encryption in-transit" checkbox to ensure you can see the "Access Control" options. For Access Control, you can select either a User Group Access Control List (the RBAC feature) or the Redis AUTH default user. If you select RBAC, you can choose one of the available user groups.

My cluster is up and running within minutes. You can also use the in-place upgrade feature on an existing cluster: select the cluster, click Actions and Modify, and change the Engine Version from the 5.0.6-compatible engine to 6.x.

Now Available

Amazon ElastiCache for Redis 6 is now available in all AWS regions. For a list of ElastiCache for Redis supported versions, refer to the documentation. Please send us feedback either in the AWS forum for Amazon ElastiCache or through AWS support, or your account team.

– Channy;
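Once the user group exists, a hedged sketch of attaching it to a new Redis 6 replication group from the CLI could look like the following; the values are illustrative, RBAC requires encryption in transit, and --user-group-ids is the parameter that wires in RBAC:

$ aws elasticache create-replication-group \
    --replication-group-id my-redis6-rbac \
    --replication-group-description "Redis 6 cluster with RBAC" \
    --engine redis \
    --engine-version 6.x \
    --cache-node-type cache.r6g.large \
    --num-cache-clusters 2 \
    --transit-encryption-enabled \
    --user-group-ids "new-default-group"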


Amazon SageMaker Continues to Lead the Way in Machine Learning and Announces up to 18% Lower Prices on GPU Instances from AWS News Blog

Matthew Emerick
07 Oct 2020
11 min read
Since 2006, Amazon Web Services (AWS) has been helping millions of customers build and manage their IT workloads. From startups to large enterprises to public sector, organizations of all sizes use our cloud computing services to reach unprecedented levels of security, resiliency, and scalability. Every day, they're able to experiment, innovate, and deploy to production in less time and at lower cost than ever before. Thus, business opportunities can be explored, seized, and turned into industrial-grade products and services.

As Machine Learning (ML) became a growing priority for our customers, they asked us to build an ML service infused with the same agility and robustness. The result was Amazon SageMaker, a fully managed service launched at AWS re:Invent 2017 that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly.

Today, Amazon SageMaker is helping tens of thousands of customers in all industry segments build, train and deploy high quality models in production: financial services (Euler Hermes, Intuit, Slice Labs, Nerdwallet, Root Insurance, Coinbase, NuData Security, Siemens Financial Services), healthcare (GE Healthcare, Cerner, Roche, Celgene, Zocdoc), news and media (Dow Jones, Thomson Reuters, ProQuest, SmartNews, Frame.io, Sportograf), sports (Formula 1, Bundesliga, Olympique de Marseille, NFL, Guiness Six Nations Rugby), retail (Zalando, Zappos, Fabulyst), automotive (Atlas Van Lines, Edmunds, Regit), dating (Tinder), hospitality (Hotels.com, iFood), industry and manufacturing (Veolia, Formosa Plastics), gaming (Voodoo), customer relationship management (Zendesk, Freshworks), energy (Kinect Energy Group, Advanced Microgrid Systems), real estate (Realtor.com), satellite imagery (Digital Globe), human resources (ADP), and many more.

When we asked our customers why they decided to standardize their ML workloads on Amazon SageMaker, the most common answer was: "SageMaker removes the undifferentiated heavy lifting from each step of the ML process." Zooming in, we identified five areas where SageMaker helps them most.

#1 – Build Secure and Reliable ML Models, Faster

As many ML models are used to serve real-time predictions to business applications and end users, making sure that they stay available and fast is of paramount importance. This is why Amazon SageMaker endpoints have built-in support for load balancing across multiple AWS Availability Zones, as well as built-in Auto Scaling to dynamically adjust the number of provisioned instances according to incoming traffic.

For even more robustness and scalability, Amazon SageMaker relies on production-grade open source model servers such as TensorFlow Serving, the Multi-Model Server, and TorchServe. A collaboration between AWS and Facebook, TorchServe is available as part of the PyTorch project, and makes it easy to deploy trained models at scale without having to write custom code.

In addition to resilient infrastructure and scalable model serving, you can also rely on Amazon SageMaker Model Monitor to catch prediction quality issues that could happen on your endpoints. By saving incoming requests as well as outgoing predictions, and by comparing them to a baseline built from a training set, you can quickly identify and fix problems like missing features or data drift.

Says Aude Giard, Chief Digital Officer at Veolia Water Technologies: "In 8 short weeks, we worked with AWS to develop a prototype that anticipates when to clean or change water filtering membranes in our desalination plants. Using Amazon SageMaker, we built a ML model that learns from previous patterns and predicts the future evolution of fouling indicators. By standardizing our ML workloads on AWS, we were able to reduce costs and prevent downtime while improving the quality of the water produced. These results couldn't have been realized without the technical experience, trust, and dedication of both teams to achieve one goal: an uninterrupted clean and safe water supply." You can learn more in this video.

#2 – Build ML Models Your Way

When it comes to building models, Amazon SageMaker gives you plenty of options. You can visit AWS Marketplace, pick an algorithm or a model shared by one of our partners, and deploy it on SageMaker in just a few clicks. Alternatively, you can train a model using one of the built-in algorithms, or your own code written for a popular open source ML framework (TensorFlow, PyTorch, and Apache MXNet), or your own custom code packaged in a Docker container.

You could also rely on Amazon SageMaker AutoPilot, a game-changing AutoML capability. Whether you have little or no ML experience, or you're a seasoned practitioner who needs to explore hundreds of datasets, SageMaker AutoPilot takes care of everything for you with a single API call. It automatically analyzes your dataset, figures out the type of problem you're trying to solve, builds several data processing and training pipelines, trains them, and optimizes them for maximum accuracy. In addition, the data processing and training source code is available in auto-generated notebooks that you can review, and run yourself for further experimentation. SageMaker Autopilot also now creates machine learning models up to 40% faster with up to 200% higher accuracy, even with small and imbalanced datasets.

Another popular feature is Automatic Model Tuning. No more manual exploration, no more costly grid search jobs that run for days: using ML optimization, SageMaker quickly converges to high-performance models, saving you time and money, and letting you deploy the best model to production quicker.

"NerdWallet relies on data science and ML to connect customers with personalized financial products," says Ryan Kirkman, Senior Engineering Manager. "We chose to standardize our ML workloads on AWS because it allowed us to quickly modernize our data science engineering practices, removing roadblocks and speeding time-to-delivery. With Amazon SageMaker, our data scientists can spend more time on strategic pursuits and focus more energy where our competitive advantage is: our insights into the problems we're solving for our users." You can learn more in this case study.

Says Tejas Bhandarkar, Senior Director of Product, Freshworks Platform: "We chose to standardize our ML workloads on AWS because we could easily build, train, and deploy machine learning models optimized for our customers' use cases. Thanks to Amazon SageMaker, we have built more than 30,000 models for 11,000 customers while reducing training time for these models from 24 hours to under 33 minutes. With SageMaker Model Monitor, we can keep track of data drifts and retrain models to ensure accuracy. Powered by Amazon SageMaker, Freddy AI Skills is constantly-evolving with smart actions, deep-data insights, and intent-driven conversations."

#3 – Reduce Costs

Building and managing your own ML infrastructure can be costly, and Amazon SageMaker is a great alternative. In fact, we found out that the total cost of ownership (TCO) of Amazon SageMaker over a 3-year horizon is over 54% lower compared to other options, and developers can be up to 10 times more productive. This comes from the fact that Amazon SageMaker manages all the training and prediction infrastructure that ML typically requires, allowing teams to focus exclusively on studying and solving the ML problem at hand.

Furthermore, Amazon SageMaker includes many features that help training jobs run as fast and as cost-effectively as possible: optimized versions of the most popular machine learning libraries, a wide range of CPU and GPU instances with up to 100GB networking, and of course Managed Spot Training which lets you save up to 90% on your training jobs. Last but not least, Amazon SageMaker Debugger automatically identifies complex issues developing in ML training jobs. Unproductive jobs are terminated early, and you can use model information captured during training to pinpoint the root cause.

Amazon SageMaker also helps you slash your prediction costs. Thanks to Multi-Model Endpoints, you can deploy several models on a single prediction endpoint, avoiding the extra work and cost associated with running many low-traffic endpoints. For models that require some hardware acceleration without the need for a full-fledged GPU, Amazon Elastic Inference lets you save up to 90% on your prediction costs. At the other end of the spectrum, large-scale prediction workloads can rely on AWS Inferentia, a custom chip designed by AWS, for up to 30% higher throughput and up to 45% lower cost per inference compared to GPU instances.

Lyft, one of the largest transportation networks in the United States and Canada, launched its Level 5 autonomous vehicle division in 2017 to develop a self-driving system to help millions of riders. Lyft Level 5 aggregates over 10 terabytes of data each day to train ML models for their fleet of autonomous vehicles. Managing ML workloads on their own was becoming time-consuming and expensive. Says Alex Bain, Lead for ML Systems at Lyft Level 5: "Using Amazon SageMaker distributed training, we reduced our model training time from days to couple of hours. By running our ML workloads on AWS, we streamlined our development cycles and reduced costs, ultimately accelerating our mission to deliver self-driving capabilities to our customers."

#4 – Build Secure and Compliant ML Systems

Security is always priority #1 at AWS. It's particularly important to customers operating in regulated industries such as financial services or healthcare, as they must implement their solutions with the highest level of security and compliance. For this purpose, Amazon SageMaker implements many security features, making it compliant with the following global standards: SOC 1/2/3, PCI, ISO, FedRAMP, DoD CC SRG, IRAP, MTCS, C5, K-ISMS, ENS High, OSPAR, and HITRUST CSF. It's also HIPAA BAA eligible.

Says Ashok Srivastava, Chief Data Officer, Intuit: "With Amazon SageMaker, we can accelerate our Artificial Intelligence initiatives at scale by building and deploying our algorithms on the platform. We will create novel large-scale machine learning and AI algorithms and deploy them on this platform to solve complex problems that can power prosperity for our customers."

#5 – Annotate Data and Keep Humans in the Loop

As ML practitioners know, turning data into a dataset requires a lot of time and effort. To help you reduce both, Amazon SageMaker Ground Truth is a fully managed data labeling service that makes it easy to annotate and build highly accurate training datasets at any scale (text, image, video, and 3D point cloud datasets).

Says Magnus Soderberg, Director, Pathology Research, AstraZeneca: "AstraZeneca has been experimenting with machine learning across all stages of research and development, and most recently in pathology to speed up the review of tissue samples. The machine learning models first learn from a large, representative data set. Labeling the data is another time-consuming step, especially in this case, where it can take many thousands of tissue sample images to train an accurate model. AstraZeneca uses Amazon SageMaker Ground Truth, a machine learning-powered, human-in-the-loop data labeling and annotation service to automate some of the most tedious portions of this work, resulting in reduction of time spent cataloging samples by at least 50%."

Amazon SageMaker is Evaluated

The hundreds of new features added to Amazon SageMaker since launch are testimony to our relentless innovation on behalf of customers. In fact, the service was highlighted in February 2020 as the overall leader in Gartner's Cloud AI Developer Services Magic Quadrant. Gartner subscribers can click here to learn more about why we have an overall score of 84/100 in their "Solution Scorecard for Amazon SageMaker, July 2020", the highest rating among our peer group. According to Gartner, we met 87% of required criteria, 73% of preferred, and 85% of optional.

Announcing a Price Reduction on GPU Instances

To thank our customers for their trust and to show our continued commitment to make Amazon SageMaker the best and most cost-effective ML service, I'm extremely happy to announce a significant price reduction on all ml.p2 and ml.p3 GPU instances. It will apply starting October 1st for all SageMaker components and across the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (London), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-Gov-West).

Instance Name | Price Reduction
ml.p2.xlarge | -11%
ml.p2.8xlarge | -14%
ml.p2.16xlarge | -18%
ml.p3.2xlarge | -11%
ml.p3.8xlarge | -14%
ml.p3.16xlarge | -18%
ml.p3dn.24xlarge | -18%

Getting Started with Amazon SageMaker

As you can see, there are a lot of exciting features in Amazon SageMaker, and I encourage you to try them out! Amazon SageMaker is available worldwide, so chances are you can easily get to work on your own datasets. The service is part of the AWS Free Tier, letting new users work with it for free for hundreds of hours during the first two months. If you'd like to kick the tires, this tutorial will get you started in minutes. You'll learn how to use SageMaker Studio to build, train, and deploy a classification model based on the XGBoost algorithm.

Last but not least, I just published a book named "Learn Amazon SageMaker", a 500-page detailed tour of all SageMaker features, illustrated by more than 60 original Jupyter notebooks. It should help you get up to speed in no time.
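To make the Managed Spot Training savings mentioned under #3 a little more concrete, here is a minimal, unofficial AWS CLI sketch; the container image URI, IAM role, bucket paths, and time limits are placeholders, and the Spot-specific pieces are --enable-managed-spot-training, a checkpoint location, and a MaxWaitTimeInSeconds that caps how long the job may wait for Spot capacity:

# Launch a training job on a GPU instance using Spot capacity instead of on-demand.
aws sagemaker create-training-job \
  --training-job-name xgboost-spot-demo \
  --role-arn arn:aws:iam::123456789012:role/SageMakerExecutionRole \
  --algorithm-specification TrainingImage=<xgboost-container-image-uri>,TrainingInputMode=File \
  --input-data-config '[{"ChannelName":"train","DataSource":{"S3DataSource":{"S3DataType":"S3Prefix","S3Uri":"s3://my-bucket/train/","S3DataDistributionType":"FullyReplicated"}}}]' \
  --output-data-config S3OutputPath=s3://my-bucket/output/ \
  --resource-config InstanceType=ml.p3.2xlarge,InstanceCount=1,VolumeSizeInGB=50 \
  --enable-managed-spot-training \
  --checkpoint-config S3Uri=s3://my-bucket/checkpoints/ \
  --stopping-condition MaxRuntimeInSeconds=3600,MaxWaitTimeInSeconds=7200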
As always, we’re looking forward to your feedback. Please share it with your usual AWS support contacts, or on the AWS Forum for SageMaker. - Julien


Three ways serverless APIs can accelerate enterprise innovation from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
5 min read
With the wrong architecture, APIs can be a bottleneck not only to your applications but to your entire business. Bottlenecks such as downtime, low performance, or high application complexity can result in exaggerated infrastructure and organizational costs and lost revenue. Serverless APIs mitigate these bottlenecks with autoscaling capabilities and consumption-based pricing models. Once you start thinking of serverless not only as a remover of bottlenecks but also as an enabler of business, layers of your application infrastructure become a source of new opportunities. This is especially true of the API layer, as APIs can be productized to scale your business, attract new customers, or offer new services to existing customers, in addition to their traditional role as the communicator between software services.

Given the increasing dominance of APIs and API-first architectures, companies and developers are gravitating towards serverless platforms to host APIs and API-first applications to realize these benefits. One serverless compute option to host APIs is Azure Functions: event-triggered code that can scale on demand, where you only pay for what you use. Gartner predicts that 50 percent of global enterprises will have deployed a serverless functions platform by 2025, up from only 20 percent today. You can publish Azure Functions through API Management to secure, transform, maintain, and monitor your serverless APIs.

Faster time to market

Modernizing your application stack to run microservices on a serverless platform decreases internal complexity and reduces the time it takes to develop new features or products. Each serverless function implements a microservice. By adding many functions to a single API Management product, you can build those microservices into an integrated distributed application. Once the application is built, you can use API Management policies to implement caching or ensure security requirements.

Quest Software uses Azure App Service to host microservices in Azure Functions. These support user capabilities such as registering new tenants and application functionality like communicating with other microservices or other Azure platform resources such as the Azure Cosmos DB managed NoSQL database service.

"We're taking advantage of technology built by Microsoft and released within Azure in order to go to market faster than we could on our own. On average, over the last three years of consuming Azure services, we've been able to get new capabilities to market 66 percent faster than we could in the past." - Michael Tweddle, President and General Manager of Platform Management, Quest

Quest also uses Azure API Management as a serverless API gateway for the Quest On Demand microservices that implement business logic with Azure Functions, and to apply policies that control access, traffic, and security across microservices.

Modernize your infrastructure

Developers should be focusing on developing applications, not provisioning and managing infrastructure. API Management provides a serverless API gateway that delivers a centralized, fully managed entry point for serverless backend services. It enables developers to publish, manage, secure, and analyze APIs at global scale. Using serverless functions and API gateways together allows organizations to better optimize resources and stay focused on innovation. For example, a serverless function can provide an API through which restaurants adjust their local menus if they run out of an item.

Chipotle turned to Azure to create a unified web experience from scratch, leveraging both Azure API Management and Azure Functions for critical parts of their infrastructure. Calls to back-end services (such as ordering, delivery, and account management and preferences) hit Azure API Management, which gives Chipotle a single, easily managed endpoint and API gateway into its various back-end services and systems. With such functionality, other development teams at Chipotle are able to work on modernizing the back-end services behind the gateway in a way that remains transparent to Smith's front-end app.

"API Management is great for ensuring consistency with our API interactions, enabling us to always know what exists where, behind a single URL," says Smith. "There are lots of changes going on behind the API gateway, but we don't need to worry about them." - Mike Smith, Lead Software Developer, Chipotle

Innovate with APIs

Serverless APIs are used to increase revenue, decrease cost, or improve business agility. As a result, technology becomes a key driver of business growth. Businesses can leverage artificial intelligence to analyze API calls to recognize patterns and predict future purchase behavior, thus optimizing the entire sales cycle.

PwC AI turned to Azure Functions to create a scalable API for its regulatory obligation knowledge mining solution. It also uses Azure Cognitive Search to quickly surface predictions found by the solution, embedding years of experience into an AI model that easily identifies regulatory obligations within the text.

"As we're about to launch our ROI POC, I can see that Azure Functions is a value-add that saves us two to four weeks of work. It takes care of handling prediction requests for me. I also use it to extend the model to other PwC teams and clients. That's how we can productionize our work with relative ease." - Todd Morrill, PwC Machine Learning Scientist-Manager, PwC

Quest Software, Chipotle, and PwC are just a few Microsoft Azure customers who are leveraging tools such as Azure Functions and Azure API Management to create an API architecture that ensures your APIs are monitored, managed, and secure. Rethinking your API approach to use serverless technologies will unlock new capabilities within your organization that are not limited by scale, cost, or operational resources.

Get started immediately

Learn about common serverless API architecture patterns at the Azure Architecture Center, where we provide high-level overviews and reference architectures for common patterns that leverage Azure Functions and Azure API Management, in addition to other Azure services.

Reference architecture for a web application with a serverless API.
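As a rough Azure CLI sketch of this serverless-API pattern (a function app fronted by API Management), the following could serve as a starting point; the resource names, region, and URL are placeholders, and the az apim api import parameters in particular are assumptions to verify against the current CLI documentation:

# Serverless, consumption-plan function app to host the API backend.
az functionapp create \
  --resource-group my-rg \
  --name my-serverless-api \
  --storage-account mystorageacct \
  --consumption-plan-location westus2 \
  --runtime node \
  --functions-version 3

# Publish the backend's OpenAPI description through API Management as the managed front door.
az apim api import \
  --resource-group my-rg \
  --service-name my-apim \
  --api-id orders \
  --path orders \
  --specification-format OpenApi \
  --specification-url https://example.com/orders-openapi.json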

Optimize your Azure workloads with Azure Advisor Score from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
3 min read
Modern engineering practices, like Agile and DevOps, are redirecting the ownership of security, operations, and cost management from centralized teams to workload owners—catalyzing innovations at a higher velocity than in traditional data centers. In this new world, workload owners are expected to build, deploy, and manage cloud workloads that are secure, reliable, performant, and cost-effective.

If you’re a workload owner, you want well-architected deployments, so you might be wondering: how well are you doing today? Of all the actions you can take, which ones will make the biggest difference for your Azure workloads? And how will you know if you’re making progress? That’s why we created Azure Advisor Score—to help you understand how well your Azure workloads are following best practices, assess how much you stand to gain by remediating issues, and prioritize the most impactful recommendations you can take to optimize your deployments.

Introducing Advisor Score

Advisor Score enables you to get the most out of your Azure investment using a centralized dashboard to monitor and work towards optimizing the cost, security, reliability, operational excellence, and performance of your Azure resources. Advisor Score will help you:

Assess how well you’re following the best practices defined by Azure Advisor and the Microsoft Azure Well-Architected Framework.
Optimize your deployments by taking the most impactful actions first.
Report on your well-architected progress over time.

Baselining is one great use case we’ve already seen with customers. You can use Advisor Score to baseline yourself and track your progress over time toward your goals by reviewing your score’s daily, weekly, or monthly trends. Then, to reach your goals, you can take action first on the individual recommendations and resources with the most impact.

How Advisor Score works

Advisor Score measures how well you’re adopting Azure best practices, comparing and quantifying the impact of the Advisor recommendations you’re already following and the ones you haven’t implemented yet. Think of it as a gap analysis for your deployed Azure workloads.

The overall score is calculated on a scale from 0 percent to 100 percent, both in aggregate and separately for cost, security (coming soon), reliability, operational excellence, and performance. A score of 100 percent means all your resources follow all the best practices recommended in Advisor. On the other end of the spectrum, a score of zero percent means that none of your resources follow the recommended best practices.

Advisor Score weighs all resources, both those with and without active recommendations, by their individual cost relative to your total spend. This builds on the assumption that the resources which consume a greater share of your total investment in Azure are more critical to your workloads. Advisor Score also adds weight to resources with longstanding recommendations. The idea is that the accumulated impact of these recommendations grows the longer they go unaddressed.

Review your Advisor Score today

Check your Advisor Score today by visiting Azure Advisor in the Azure portal. To learn more about the model behind Advisor Score and see examples of how the score is calculated, review our Advisor Score documentation and this behind-the-scenes blog from our data science team about the development of Advisor Score.
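To make the cost-weighting idea above concrete, here is a toy calculation in Python. It illustrates only the weighting principle (resources that represent more of your spend move the score more) and is not Azure Advisor's actual scoring model; the resource names and costs are made up.

```python
def cost_weighted_score(resources):
    """Toy cost-weighted score: 100 means every resource follows best practices.

    Each resource has a monthly cost and a `healthy` flag (True when it has no
    outstanding recommendations). This only illustrates cost weighting; it is
    not Azure Advisor's real scoring model.
    """
    total_cost = sum(r["cost"] for r in resources)
    if total_cost == 0:
        return 100.0
    healthy_cost = sum(r["cost"] for r in resources if r["healthy"])
    return 100.0 * healthy_cost / total_cost


resources = [
    {"name": "vm-frontend", "cost": 400.0, "healthy": True},
    {"name": "vm-batch", "cost": 250.0, "healthy": False},  # has open recommendations
    {"name": "sql-db", "cost": 350.0, "healthy": True},
]

print(f"Score: {cost_weighted_score(resources):.0f}%")  # prints "Score: 75%"
```

In this toy example the unhealthy resource accounts for a quarter of total spend, so the score lands at 75 percent rather than two thirds, which is the effect cost weighting is meant to capture.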

Lower prices and more flexible purchase options for Azure Red Hat OpenShift from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
4 min read
For the past several years, Microsoft and Red Hat have worked together to co-develop hybrid cloud solutions intended to enable greater customer innovation. In 2019, we launched Azure Red Hat OpenShift as a fully managed, jointly engineered implementation of Red Hat OpenShift, initially based on OpenShift 3.11, that is deeply integrated into the Azure control plane. With the release of Red Hat OpenShift 4, we announced the general availability of Azure Red Hat OpenShift on OpenShift 4 in April 2020.

Today we’re sharing that, in collaboration with Red Hat, we are dropping the price of Red Hat OpenShift licenses on Azure Red Hat OpenShift worker nodes by up to 77 percent. We’re also adding the choice of a three-year term for Reserved Instances (RIs) on top of the existing one-year RI and pay-as-you-go options, with a reduction in the minimum number of virtual machines required. The new pricing is effective immediately. Finally, as part of the ongoing improvements, we are increasing the Service Level Agreement (SLA) to 99.95 percent.

With these new price reductions, Azure Red Hat OpenShift provides even more value with a fully managed, highly available enterprise Kubernetes offering that handles the upgrades, patches, and integration for the components required to make a platform. This allows your teams to focus on building business value, not operating technology platforms.

How can Red Hat OpenShift help you?

As a developer

Kubernetes was built for the needs of IT operations, not developers. Red Hat OpenShift is designed so developers can deploy apps on Kubernetes without needing to learn Kubernetes. With built-in Continuous Integration (CI) and Continuous Delivery (CD) pipelines, you can code and push to a repository and have your application up and running in minutes.

Azure Red Hat OpenShift includes everything you need to manage your development lifecycle: standardized workflows, support for multiple environments, continuous integration, release management, and more. You can also provision self-service, on-demand application stacks and deploy solutions from the Developer Catalog, such as OpenShift Service Mesh, OpenShift Serverless, Knative, and more. Red Hat OpenShift provides commercial support for the languages, databases, and tooling you already use, while providing easy access to Azure services such as Azure Database for PostgreSQL and Azure Cosmos DB, enabling you to create resilient and scalable cloud-native applications.

As an IT operator

Adopting a container platform lets you keep up with application scale and complexity requirements. Azure Red Hat OpenShift is designed to make deploying and managing the container platform easier, with automated maintenance operations and upgrades built right in, integrated platform monitoring (including Azure Monitor for containers), and a support experience delivered directly from the Azure support portal.

With Azure Red Hat OpenShift, your developers can be up and running in minutes. You can scale on your terms, from ten containers to thousands, and only pay for what you need. With one-click updates for the platform, services, and applications, Azure Red Hat OpenShift monitors security throughout the software supply chain to make applications more stable without reducing developer productivity. You can also leverage the built-in vulnerability assessment and management tools in Azure Security Center to scan images that are pushed to, imported into, or pulled from an Azure Container Registry.
Discover Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. You can install Operators on your clusters to provide optional add-ons and shared services to your developers, such as AI and machine learning, application runtimes, data, document stores, monitoring, logging and insights, security, and messaging services.

Regional availability

Azure Red Hat OpenShift is available in 27 regions worldwide, and we’re continuing to expand that list. Over the past few months, we have added support for Azure Red Hat OpenShift in a number of regions, including West US, Central US, North Central US, Canada Central, Canada East, Brazil South, UK West, Norway East, France Central, Germany West Central, Central India, UAE North, Korea Central, East Asia, and Japan East.

Industry compliance certifications

To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is PCI DSS, FedRAMP High, SOC 1/2/3, ISO 27001, and HITRUST certified. Azure maintains the largest compliance portfolio in the industry, both in terms of the total number of offerings and the number of customer-facing services in assessment scope. For more details, check the Microsoft Azure Compliance Offerings.

Next steps

Try Azure Red Hat OpenShift now. We are excited about these new lower prices and how they help our customers build their business on a platform that enables IT operations and developers to collaborate effectively and to develop and deploy containerized applications rapidly with strong security capabilities.

Microsoft partners expand the range of mission-critical applications you can run on Azure from Microsoft Azure Blog > Announcements

Matthew Emerick
06 Oct 2020
14 min read
How the depth and breadth of the Microsoft Azure partner ecosystem enables thousands of organizations to bring their mission-critical applications to Azure.

In the past few years, IT organizations have realized compelling benefits as they transitioned their business-critical applications to the cloud, enabling them to address the top challenges they face running the same applications on-premises. As even more companies embark on their digital transformation journey, the range of mission- and business-critical applications has continued to expand, even more so because technology drives innovation and growth. This has further accelerated in the past months, spurred in part by our rapidly changing global economy.

As a result, the definition of mission-critical applications is evolving and goes well beyond systems of record for many businesses. It’s part of why we never stopped investing across the platform to enable you to increase the availability, security, scalability, and performance of your core applications running on Azure. The expansion of mission-critical apps will only accelerate as AI, IoT, analytics, and new capabilities become more pervasive.

We’re seeing the broadening scope of mission-critical scenarios both within Microsoft and in many of our customers’ industry sectors. For example, Eric Boyd, in his blog, outlined how companies in healthcare, insurance, sustainable farming, and other fields have chosen Microsoft Azure AI to transform their businesses. Applications like Microsoft Teams have now become mission-critical, especially this year, as many organizations had to enable remote workforces. This is also reflected by the sheer number of meetings happening in Teams.

Going beyond Azure services and capabilities

Many organizations we work with are eager to realize these myriad benefits for their own business-critical applications, but first need to address questions around their cloud journey, such as:

Are the core applications I use on-premises certified and supported on Azure?
As I move to Azure, can I retain the same level of application customization that I have built over the years on-premises?
Will my users experience any impact in the performance of my applications?

In essence, they want to make sure that they can continue to capitalize on the strategic collaboration they’ve forged with their partners and ISVs as they transition their core business processes to the cloud. They want to continue to use the very same applications that they spent years customizing and optimizing on-premises.

Microsoft understands that running your business on Azure goes beyond the services and capabilities that any platform can provide. You need a comprehensive ecosystem. Azure has always been partner-oriented, and we continue to strengthen our collaboration with a large number of ISVs and technology partners, so you can run the applications that are critical to the success of your business operations on Azure.

A deeper look at the growing spectrum of mission-critical applications

Today, you can run thousands of third-party ISV applications on Azure. Many of these ISVs in turn depend on Azure to deliver their software solutions and services. Azure has become a mission-critical platform for our partner community as well as our customers.
When most people think of mission-critical applications, enterprise resource planning (ERP), supply chain management (SCM), product lifecycle management (PLM), and customer relationship management (CRM) applications are often the first examples that come to mind. However, to illustrate the depth and breadth of our mission-critical ecosystem, consider these distinct and very different categories of applications that are critical for thousands of businesses around the world:

Enterprise resource planning (ERP) systems.
Data management and analytics applications.
Backup and business continuity solutions.
High-performance computing (HPC) scenarios that exemplify the broadening of business-critical applications that rely on public cloud infrastructure.

Azure’s deep ecosystem addresses the needs of customers in all of these categories and more.

ERP systems

As noted above, ERP, SCM, PLM, and CRM applications are often the first examples that come to mind. Some examples on Azure include:

SAP—We have been empowering our enterprise customers to run their most mission-critical SAP workloads on Azure, bringing the intelligence, security, and reliability of Azure to their SAP applications and data.

Viewpoint, a Trimble company—Viewpoint has been helping the construction industry transform through integrated construction management software and solutions for more than 40 years. To meet the scalability and flexibility needs of both Viewpoint and their customers, a significant portion of their clients are now running their software suite on Azure and experiencing tangible benefits.

Data management and analytics

Data is the lifeblood of the enterprise. Our customers are experiencing an explosion of mission-critical data sources, from the cloud to the edge, and analytics are key to unlocking the value of data in the cloud. AI is a key ingredient, and yet another compelling reason to modernize your core apps on Azure.

DataStax—DataStax Enterprise, a scale-out, hybrid, cloud-native NoSQL database built on Apache Cassandra™, in conjunction with Azure, can provide a foundation for personalized, real-time scalable applications. Learn how this combination can enable enterprises to run mission-critical workloads to increase business agility, without compromising compliance and data governance.

Informatica—Informatica has been working with Microsoft to help businesses ensure that the data driving your customer and business decisions is trusted, authenticated, and secure. Specifically, Informatica is focused on the quality of the data that is powering your mission-critical applications and can help you derive the maximum value from your existing investments.

SAS®—Microsoft and SAS are enabling customers to easily run their SAS workloads in the cloud, helping them unlock critical value from their digital transformation initiatives. As part of our collaboration, SAS is migrating its analytical products and industry solutions onto Azure as the preferred cloud provider for the SAS Cloud. Discover how mission-critical analytics is finding a home in the cloud.

Backup and disaster recovery solutions

Uptime and disaster recovery plans that minimize recovery time objective (RTO) and recovery point objective (RPO) are the top metrics senior IT decision-makers pay close attention to when it comes to mission-critical environments. Backing up critical data is a key element of putting in place robust business continuity plans.
Azure provides built-in backup and disaster recovery features, and we also partner with industry leaders like Commvault, Rubrik, Veeam, Veritas, Zerto, and others so you can keep using your existing applications no matter where your data resides.

Commvault—We continue to work with Commvault to deliver data management solutions that enable higher resiliency, visibility, and agility for business-critical workloads and data in our customers’ hybrid environments. Learn about Commvault’s latest offerings—including support for Azure VMware Solution and why their Metallic SaaS suite relies exclusively on Azure.

Rubrik—Learn how Rubrik helps enterprises achieve low RTOs, self-service automation at scale, and accelerated cloud adoption.

Veeam—Read how you can use Veeam’s solution portfolio to back up, recover, and migrate mission-critical workloads to Azure.

Veritas—Find out how Veritas InfoScale has advanced integration with Azure that simplifies the deployment and management of your mission-critical applications in the cloud.

Zerto—Discover how the extensive capabilities of Zerto’s platform help you protect mission-critical applications on Azure.

Teradici—Finally, Teradici underscores how the lines between mission-critical and business-critical are blurring. Read how business continuity plans are being adjusted to include longer-term scenarios.

HPC scenarios

HPC applications are often the most intensive and highest-value workloads in a company, and are business-critical in many industries, including financial services, life sciences, energy, manufacturing, and more. The biggest and most audacious innovations, from supporting the fight against COVID-19 and 5G semiconductor design to aerospace engineering design processes and the development of autonomous vehicles, are being driven by HPC.

Ansys—Explore how Ansys Cloud on Azure has proven to be vital for business continuity during unprecedented times.

Rescale—Read how Rescale can provide a turnkey platform for engineers and researchers to quickly access Azure HPC resources, easing the transition of business-critical applications to the cloud.

You can rely on the expertise of our partner community

Many organizations continue to accelerate the migration of their core applications to the cloud, realizing tangible and measurable value in collaboration with our broad partner community, which includes global system integrators like Accenture, Avanade, Capgemini, Wipro, and many others. For example, UnifyCloud recently helped a large organization in the financial sector modernize their data estate on Azure while achieving a 69 percent reduction in IT costs.

We are excited about the opportunities ahead of us, fueled by the power of our collective imagination. Learn more about how you can run business-critical applications on Azure and increase business resiliency. Watch our Microsoft Ignite session for a deeper dive and demo.

“The construction industry relies on Viewpoint to build and host the mission-critical technology used to run their businesses, so we have the highest possible standards when it comes to the solutions we provide.
Working with Microsoft has allowed us to meet those standards in the Azure cloud by increasing scalability, flexibility and reliability – all of which enable our customers to accelerate their own digital transformations and run their businesses with greater confidence.” —Dan Farner, Senior Vice President of Product Development, Viewpoint (a Trimble Company)

Read the Gaining Reliability, Scalability, and Customer Satisfaction with Viewpoint on Microsoft Azure blog.

“Business critical applications require a transformational data architecture built on scale-out data and microservices to enable dramatically improved operations, developer productivity, and time-to-market. With Azure and DataStax, enterprises can now run mission critical workloads with zero downtime at global scale to achieve business agility, compliance, data sovereignty, and data governance.” —Ed Anuff, Chief Product Officer, DataStax

Read the Application Modernization for Data-Driven Transformation with DataStax Enterprise on Microsoft Azure blog.

“As Microsoft’s 2020 Data Analytics Partner of the Year, Informatica works hand-in-hand with Azure to solve mission critical challenges for our joint customers around the world and across every sector. The combination of Azure’s scale, resilience and flexibility, along with Informatica’s industry-leading Cloud-Native Data Management platform on Azure, provides customers with a platform they can trust with their most complex, sensitive and valuable business critical workloads.” —Rik Tamm-Daniels, Vice President of Strategic Ecosystems and Technology, Informatica

Read the Ensuring Business-Critical Data Is Trusted, Available, and Secure with Informatica on Microsoft Azure blog.

“SAS and Microsoft share a vision of helping organizations make better decisions as they strive to serve customers, manage risks and improve operations. Organizations are moving to the cloud at an accelerated pace. Digital transformation projects that were scheduled for the future now have a 2020 delivery date. Customers realize analytics and cloud are critical to drive their digital growth strategies. This partnership helps them quickly move to Microsoft Azure, so they can build, deploy, and manage analytic workloads in a reliable, high-performant and cost-effective manner.” —Oliver Schabenberger, Executive Vice President, Chief Operating Officer and Chief Technology Officer, SAS

Read the Mission-critical analytics finds a home in the cloud blog.

“Microsoft is our Foundation partner and selecting Microsoft Azure as our platform to host and deliver Metallic was an easy decision. This decision sparks customer confidence due to Azure’s performance, scale, reliability, security and offers unique Best Practice guidance for customers and partners. Our customers rely on Microsoft and Azure-centric Commvault solutions every day to manage, migrate and protect critical applications and the data required to support their digital transformation strategies.” —Randy De Meno, Vice President/Chief Technology Officer, Microsoft Practice & Solutions

Read the Commvault extends collaboration with Microsoft to enhance support for mission-critical workloads blog.

“Enterprises depend on Rubrik and Azure to protect mission-critical applications in SAP, Oracle, SQL and VMware environments. Rubrik helps enterprises move to Azure securely, faster, and with a low TCO using Rubrik’s automated tiering to Azure Archive Storage.
Security-minded customers appreciate that with Rubrik and Microsoft, business critical data is immutable, preventing ransomware threats from accessing backups, so businesses can quickly search and restore their information on-premises and in Azure.” —Arvind Nithrakashyap, Chief Technology Officer and Co-Founder, Rubrik

Learn how enterprises use Rubrik on Azure.

“Veeam continues to see increased adoption of Microsoft Azure for business-critical applications and data across our 375,000 plus global customers. While migration of applications and data remains the primary barrier to the public cloud, we are committed to helping eliminate these challenges through a unified Cloud Data Management platform that delivers simplicity, flexibility and reliability at its core, while providing unrivaled data portability for greater cost controls and savings. Backed by the unique Veeam Universal License – a portable license that moves with workloads to ensure they're always protected – our customers are able to take control of their data by easily migrating workloads to Azure, and then continue protecting and managing them in the cloud.” —Danny Allan, Chief Technology Officer and Senior Vice President for Product Strategy, Veeam

Read the Backup, recovery, and migration of mission-critical workloads on Azure blog.

“Thousands of customers rely on Veritas to protect their data both on-premises and in Azure. Our partnership with Microsoft helps us drive the data protection solutions that our enterprise customers rely on to keep their business-critical applications optimized and immediately available.” —Phil Brace, Chief Revenue Officer, Veritas

Read the Migrate and optimize your mission-critical applications in Microsoft Azure with Veritas InfoScale blog.

“Microsoft has always leveraged the expertise of its partners to deliver the most innovative technology to customers. Because of Zerto’s long-standing collaboration with Microsoft, Zerto’s IT Resilience platform is fully integrated with Azure and provides a robust, fully orchestrated solution that reduces data loss to seconds and downtime to minutes. Utilizing Zerto’s end-to-end, converged backup, DR, and cloud mobility platform, customers have proven time and time again they can protect mission-critical applications during planned or unplanned disruptions that include ransomware, hardware failure, and numerous other scenarios using the Azure cloud – the best cloud platform for IT resilience in the hybrid cloud environment.” —Gil Levonai, CMO and SVP of Product, Zerto

Read the Protecting Critical Applications in the Cloud with the Zerto Platform blog.

“The longer business continues to be disrupted, the more the lines blur and business critical functions begin to shift to mission critical, making virtual desktops and workstations on Microsoft Azure an attractive option for IT managers supporting remote workforces in any function or industry. Teradici Cloud Access Software offers a flexible and secure solution that supports demanding business critical and mission critical workloads on Microsoft Azure and Azure Stack with exceptional performance and fidelity, helping businesses gain efficiency and resilience within their business continuity strategy.” —John McVay, Director of Strategic Alliances, Teradici

Read the Longer IT timelines shift business critical priorities to mission critical blog.
"It is imperative for Ansys to support our customers' accelerating needs for on-demand high performance computing to drive their increasingly complex engineering requirements. Microsoft Azure, with its purpose-built HPC and robust go-to market capabilities, was a natural choice for us, and together we are enabling our joint customers to keep designing innovative products even as they work from home.”—Navin Budhiraja, Vice President and General Manager, Cloud and Platform, Ansys Read the Ansys Cloud on Microsoft Azure: A vital resource for business continuity during the pandemic blog.     “Robust and stable business critical systems are paramount for success. Rescale customers leveraging Azure HPC resources are taking advantage of the scalability, flexibility and intelligence to improve R&D, accelerate development and reduce costs not possible with a fixed infrastructure.”—Edward Hsu, Vice President of Product, Rescale Read the Business Critical Systems that Drive Innovation blog.     “Customers are transitioning business-critical workloads to Azure and realizing significant cost benefits while modernizing their applications. Our solutions help customers develop cloud strategy, modernize quickly, and optimize cloud environments while minimizing risk and downtime.”—Vivek Bhatnagar, Co-Founder and Chief Technology Officer, UnifyCloud Read the Moving mission-critical applications to the cloud: More important than ever blog.

Amazon S3 Update – Three New Security & Access Control Features from AWS News Blog

Matthew Emerick
02 Oct 2020
5 min read
A year or so after we launched Amazon S3, I was in an elevator at a tech conference and heard a couple of developers use “just throw it into S3” as the answer to their data storage challenge. I remember that moment well because the comment was made so casually, and it was one of the first times that I fully grasped just how quickly S3 had caught on.

Since that launch, we have added hundreds of features and multiple storage classes to S3, while also reducing the cost to store a gigabyte of data for a month by almost 85% (from $0.15 to $0.023 for S3 Standard, and as low as $0.00099 for S3 Glacier Deep Archive). Today, our customers use S3 to support many different use cases including data lakes, backup and restore, disaster recovery, archiving, and cloud-native applications.

Security & Access Control

As the set of use cases for S3 has expanded, our customers have asked us for new ways to regulate access to their mission-critical buckets and objects. We added IAM policies many years ago, and Block Public Access in 2018. Last year we added S3 Access Points (Easily Manage Shared Data Sets with Amazon S3 Access Points) to help you manage access in large-scale environments that might encompass hundreds of applications and petabytes of storage.

Today we are launching S3 Object Ownership as a follow-on to two other S3 security and access control features that we launched earlier this month. All three features are designed to give you even more control and flexibility:

Object Ownership – You can now ensure that newly created objects within a bucket have the same owner as the bucket.
Bucket Owner Condition – You can now confirm the ownership of a bucket when you create a new object or perform other S3 operations.
Copy API via Access Points – You can now access S3’s Copy API through an Access Point.

You can use all of these new features in all AWS regions at no additional charge. Let’s take a look at each one!

Object Ownership

With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository. Internal teams or external partners can all contribute to the creation of large-scale centralized resources. With this model, however, the bucket owner does not have full control over the objects in the bucket and cannot use bucket policies to share objects, which can lead to confusion.

You can now use a new per-bucket setting to enforce uniform object ownership within a bucket. This will simplify many applications, and will obviate the need for the Lambda-powered self-COPY that has become a popular way to do this up until now. Because this setting changes the behavior seen by the account that is uploading, the PUT request must include the bucket-owner-full-control ACL. You can also choose to use a bucket policy that requires the inclusion of this ACL.

To get started, open the S3 Console, locate the bucket, view its Permissions, click Object Ownership, and then Edit. Select Bucket owner preferred and click Save. As I mentioned earlier, you can use a bucket policy to enforce object ownership (read About Object Ownership and this Knowledge Center Article to learn more).
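The same setting can be applied programmatically. Here is a hedged boto3 sketch, assuming a recent boto3 release that includes the then-new PutBucketOwnershipControls API; the bucket name, object key, and data are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-central-data-lake"  # placeholder bucket name

# Prefer bucket-owner ownership for objects written to the bucket from now on.
s3.put_bucket_ownership_controls(
    Bucket=bucket,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
)

# Cross-account uploads must grant the bucket owner full control for the
# new setting to take effect on their objects.
s3.put_object(
    Bucket=bucket,
    Key="partner-data/report.csv",
    Body=b"col1,col2\n1,2\n",
    ACL="bucket-owner-full-control",
)
```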
Many AWS services deliver data to the bucket of your choice, and are now equipped to take advantage of this feature. S3 Server Access Logging, S3 Inventory, S3 Storage Class Analysis, AWS CloudTrail, and AWS Config now deliver data that you own. You can also configure Amazon EMR to use this feature by setting fs.s3.canned.acl to BucketOwnerFullControl in the cluster configuration (learn more).

Keep in mind that this feature does not change the ownership of existing objects. Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics. AWS CloudFormation support for Object Ownership is under development and is expected to be ready before AWS re:Invent.

Bucket Owner Condition

This feature lets you confirm that you are writing to a bucket that you own. You simply pass a numeric AWS Account ID to any of the S3 Bucket or Object APIs using the expectedBucketOwner parameter or the x-amz-expected-bucket-owner HTTP header. The ID indicates the AWS Account that you believe owns the subject bucket. If there’s a match, then the request will proceed as normal. If not, it will fail with a 403 status code. To learn more, read Bucket Owner Condition.

Copy API via Access Points

S3 Access Points give you fine-grained control over access to your shared data sets. Instead of managing a single and possibly complex policy on a bucket, you can create an access point for each application, and then use an IAM policy to regulate the S3 operations that are made via the access point (read Easily Manage Shared Data Sets with Amazon S3 Access Points to see how they work). You can now use S3 Access Points in conjunction with the S3 CopyObject API by using the ARN of the access point instead of the bucket name (read Using Access Points to learn more).

Use Them Today

As I mentioned earlier, you can use all of these new features in all AWS regions at no additional charge.

— Jeff;
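As a postscript to the last two features described above, here is a hedged boto3 sketch of the Bucket Owner Condition and of calling the Copy API through an access point ARN. The account ID, bucket name, keys, and access point ARNs are all placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Bucket Owner Condition: the request fails with a 403 if the bucket is not
# owned by the expected account (the account ID below is a placeholder).
s3.put_object(
    Bucket="example-reports-bucket",
    Key="daily/2020-10-02.json",
    Body=b"{}",
    ExpectedBucketOwner="111122223333",
)

# Copy API via Access Points: an access point ARN can stand in for the bucket
# name when copying objects (both ARNs below are placeholders).
s3.copy_object(
    Bucket="arn:aws:s3:us-east-1:111122223333:accesspoint/reports-ap",
    Key="archive/2020-10-02.json",
    CopySource={
        "Bucket": "arn:aws:s3:us-east-1:111122223333:accesspoint/ingest-ap",
        "Key": "daily/2020-10-02.json",
    },
)
```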

Google introduces E2, a flexible, performance-driven and cost-effective VMs for Google Compute Engine

Vincy Davis
12 Dec 2019
3 min read
Yesterday, June Yang, director of product management at Google, announced a beta of the new E2 VMs for Google Compute Engine. The family features dynamic resource management that delivers reliable performance with flexible configurations and a better total cost of ownership (TCO) than any other VM in Google Cloud.

According to Yang, “E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases, and development environments.” Yang further adds, “For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost.”

What are the key features offered by E2 VMs

E2 VMs are built to offer 31% savings compared to N1, giving them the lowest total cost of ownership of any VM in Google Cloud and sustained performance at a consistently low price point. Unlike comparable options from other cloud providers, E2 VMs can support a high CPU load without complex pricing.

E2 VMs can be tailored with up to 16 vCPUs and 128 GB of memory, and custom machine types let you provision only the resources you need (a sketch of requesting a custom E2 shape appears at the end of this article). Custom machine types are ideal for workloads that require more processing power or more memory than one predefined machine type offers but don't need everything provided by the next machine type level.

How E2 VMs achieve optimal efficiency

Large, efficient physical servers

E2 VMs automatically take advantage of continual improvements in machines by flexibly scheduling across the zone’s available CPU platforms. With new hardware upgrades, E2 VMs are live migrated to newer and faster hardware, which allows them to take advantage of these new resources automatically.

Intelligent VM placement

Borg, Google’s cluster management system, predicts how a newly added VM will perform on a physical server by observing the CPU, RAM, memory bandwidth, and other resource demands of the VMs already running there. It then searches across thousands of servers to find the best location to add the VM. These observations ensure that a newly placed VM is compatible with its neighbors and will not experience interference from them.

Performance-aware live migration

After VMs are placed on a host, their performance is continuously monitored so that, if demand increases, live migration can be used to transparently shift E2 load to other hosts in the data center.

A new hypervisor CPU scheduler

In order to meet E2 VMs' performance goals, Google has built a custom CPU scheduler with better latency and co-scheduling behavior than Linux’s default scheduler. The new scheduler yields sub-microsecond average wake-up latencies with fast context switching, which keeps the overhead of dynamic resource management negligible for nearly all workloads.

https://twitter.com/uhoelzle/status/1204972503921131521

Read the official announcement to learn about the custom VM shapes and predefined configurations offered by E2 VMs. You can also read part 2 of the announcement to learn more about dynamic resource management in E2 VMs.

Why use JVM (Java Virtual Machine) for deep learning
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
EU antitrust regulators are investigating Google’s data collection practices, reports Reuters
Google will not support Cloud Print, its cloud-based printing solution starting 2021
Google Chrome ‘secret’ experiment crashes browsers of thousands of IT admins worldwide
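As promised above, here is a hedged sketch of how you might request a custom E2 shape (4 vCPUs and 8 GB of memory in this example) with the google-cloud-compute Python client. The project, zone, instance name, and boot image are placeholders, and depending on the client library version you may need to poll the returned operation differently.

```python
from google.cloud import compute_v1


def create_e2_custom_instance(project: str, zone: str, name: str) -> None:
    """Create a VM with an E2 custom machine type (4 vCPUs, 8 GB of memory)."""
    instance = compute_v1.Instance()
    instance.name = name
    # E2 custom machine types are named e2-custom-<vCPUs>-<memory in MB>.
    instance.machine_type = f"zones/{zone}/machineTypes/e2-custom-4-8192"

    boot_disk = compute_v1.AttachedDisk()
    boot_disk.boot = True
    boot_disk.auto_delete = True
    boot_disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12"  # placeholder image
    )
    instance.disks = [boot_disk]

    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"
    instance.network_interfaces = [nic]

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the create operation completes


# Example call with placeholder values:
# create_e2_custom_instance("my-project", "us-central1-a", "e2-custom-demo")
```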