Designing networks and subnetworks

A network in GCP is called a VPC, and it differs from the way other cloud platforms define virtual networks in that it is a more purely logical construct, with no IP address range defined in it. It is also global by default, spanning all available GCP regions, and it is segmented into subnetworks (the equivalent of what other cloud platforms call subnets), each of which has an IP address range and a set region. A GCP project can have up to five VPC networks (although this quota can be increased upon request), and networks can be shared across projects and also peered with each other.

There are three network types (or "modes") in GCP, as follows:

  • Default: Provided by default to every new project. It contains one subnetwork per region and includes some default firewall rules. These default rules allow all traffic within the network, as well as inbound RDP, SSH, and ICMP traffic from any source.
  • Auto: Auto mode creates one subnetwork per region automatically, with an expandable address range. The default network is, in fact, an auto mode network. When creating an auto mode network, the default firewall rules (which would be deployed automatically in the default network) are suggested in the creation form, but you can select which specific ones to deploy, or select none at all, in which case no firewall rules are deployed on creation and all network traffic is blocked until rules are created later.
  • Custom: In this mode, no default subnetworks are created, so you specify the regions in which you want subnetworks. You also have full control over their IP ranges, as long as they're within the private RFC 1918 address space. You can convert an existing auto mode network into a custom mode network to tailor it to your needs; however, you cannot convert a custom mode network back into auto mode. A command-line sketch of creating a custom mode network follows this list.
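
If you prefer the command line to the console, a custom mode network can be created with the gcloud CLI. The following is a minimal sketch; the network, region, and range values are placeholders you would replace with your own:

    # Create a custom mode VPC network (no automatic subnetworks)
    gcloud compute networks create my-custom-vpc --subnet-mode=custom

    # Add a subnetwork in a region of your choice, with an RFC 1918 range
    gcloud compute networks subnets create my-subnet \
        --network=my-custom-vpc \
        --region=us-east1 \
        --range=10.10.0.0/20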

Because subnetworks are regional in scope, they cross all the different zones within each region. This means that two VMs can be in the same subnetwork but deployed to different zones (which often means different data centers).

All networks have managed routing capability, which means you don't need to set up routes yourself for network communication across different subnetworks and regions of a VPC network. In addition, with the default and auto mode networks, firewall rules are also automatically set up (optionally, in the case of auto mode networks) to allow any communication that may occur within the network, even if it's going across different subnetworks and regions. With custom networks, these firewall rules won't be created automatically, but you can create them yourself and customize them to the security level you want.
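
For example, on a custom network you could recreate a tighter version of the allow-internal rule yourself. A sketch using the gcloud CLI, with the network name and source ranges as placeholders:

    # Allow internal traffic only between this network's own subnetwork ranges
    gcloud compute firewall-rules create allow-internal \
        --network=my-custom-vpc \
        --allow=tcp,udp,icmp \
        --source-ranges=10.10.0.0/20,10.20.0.0/20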

Important Note

Although the default and auto networks have been designed so that you can get started easily, you shouldn't use these options in an enterprise-grade solution. The firewall rules that allow all communication within the network, and inbound management protocols (such as RDP, SSH, and ICMP) from the outside world, though convenient, are too permissive and go against network security best practices and the zero trust principle, which we will discuss later in this chapter. An auto mode network could be used for truly global solutions, but generally, the suggested firewall rules should not be selected when creating the network.

Because Google's VPC networks are global, you can connect an on-premises network to all your GCP subnetworks (in different regions) with a single VPN gateway service. That way, it becomes significantly easier to extend your on-premises network to global reach with GCP.

Multi-project networking

When working with multiple projects and networks under the same organization in Google Cloud, there are two essential features to consider: shared VPC and VPC peering.

Let's look at what they are.

Shared VPC

The idea behind a shared VPC is to host a single network within a designated GCP project (referred to as the host project) and share it with one or more other projects (referred to as service projects) to centralize and simplify network management and security controls. In this model, the resources in the service projects can be deployed to one of the subnetworks of the shared VPC (belonging to the host project). Network and security policies can be applied to this single, centralized network.

Security best practices and the least privilege principle are facilitated with the shared VPC model, since network administration tasks can be delegated to and centralized by network and security admins in the shared VPC network. Administrators in the service projects cannot make any changes that impact the network in any way. This also helps keep access control policies consistent across different projects in the organization.

When a project is set as a host project, all of its existing and new VPC networks automatically become shared VPC networks. This is therefore a project-level setting; you can't designate specific networks within a host project to become shared VPCs.

When it comes to subnetworks, you can define which subnetworks can be accessed by a specific service project within the host project. This allows you to assign different subnetworks to different projects and keep them segregated (via firewall rules) if needed.
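
For reference, a shared VPC is enabled at the project level with the gcloud CLI (or the console). A minimal sketch, assuming you hold the Shared VPC Admin role in the organization and that the project IDs are placeholders:

    # Designate the host project (all of its VPC networks become shared)
    gcloud compute shared-vpc enable my-host-project

    # Attach a service project to the host project
    gcloud compute shared-vpc associated-projects add my-service-project \
        --host-project=my-host-project

Subnet-level access is then granted by giving the service project's users the Compute Network User role (roles/compute.networkUser) on the specific subnetworks they should be able to use.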

VPC peering

In GCP, two networks can be peered as long as they don't have overlapping IP address ranges. Peering means that their route tables are shared, and the underlying platform (Google's software-defined network) is set up so that the two networks can reach each other. This effectively means you will have all your subnetworks (layer 2 network domains) across both VPC networks connected via a managed layer 3 router (which you do not need to concern yourself with). One way to think about this is that you're merely extending one of the networks with the other, as if to obtain a single VPC network with a larger address space. We are, of course, talking about reachability at the network layer – traffic still needs to be allowed by the firewall for communication to happen, and any IAM permissions granted at the VPC network level won't carry over to peered VPC networks.
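
A peering must be created on both sides before it becomes active. A minimal gcloud sketch, with the network and project names as placeholders:

    # In the project that owns vpc-a
    gcloud compute networks peerings create a-to-b \
        --network=vpc-a \
        --peer-network=vpc-b \
        --peer-project=project-b

    # In the project that owns vpc-b (the reverse direction)
    gcloud compute networks peerings create b-to-a \
        --network=vpc-b \
        --peer-network=vpc-a \
        --peer-project=project-a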

VPC peering is non-transitive, which means that if VPC A is peered with VPC B, and VPC C is peered with VPC B, the resources in VPC C won't be able to reach the resources in VPC A (and vice versa) through the peerings. This is because route exchange only propagates routes to an immediate peer, not to peers of peers. This has implications for the scalability of, for example, a hub-and-spoke network model. Popularized by Microsoft as an ideal topology for sharing managed Active Directory (AD) installations and other shared services, the hub-and-spoke model involves one centralized hub network and several "spoke" networks that are peered with the hub (and can therefore reach its shared services, such as AD domain controllers). The hub is also where a VPN gateway is deployed for connectivity to on-premises networks. The general idea is that resources in any of the "spoke" networks can reach on-premises resources (and vice versa) through the hub. However, due to GCP's route exchange limitation, this design won't work as intended unless you're deploying a smaller version of the hub-and-spoke model and have no requirement for reachability between your spokes and on-premises. You can work around this limitation by setting up VPNs between VPCs instead of peerings, though this is not ideal, given the extra cost of VPN tunnels and the extra management overhead on the network. For that reason, if you want to achieve something similar to a hub-and-spoke model, you should use a shared VPC.

IP addresses

In GCP, VMs can have two types of IP addresses:

  • Internal IP: This is an IP in the private space, allocated from the subnetwork range by DHCP. An internal (network-scoped) DNS name is associated with this IP, based on the VM's hostname and following this structure:

    [hostname].c.[project-id].internal

    Internal IPs can be static (DHCP-reserved) or ephemeral, in which case the address might change when, for example, the instance is restarted.

  • External IP: This is a public IP that is dynamically assigned by GCP from an address pool when configured as ephemeral, or with a reserved address when configured as static. An external IP is never directly attached to a VM (and therefore not visible from within the VM's operating system). Rather, it's an IP address that is mapped to the VM's internal IP so that, effectively, the external IP "belongs" to the VM. DNS records can be published using existing DNS servers (outside of GCP, or within GCP when hosting the DNS zone for the domain using the Cloud DNS service – more on that later).

You can also assign a range of IP addresses as aliases to a VM instance's primary IP address. This feature allows you to assign separate IPs to separate services running on the same VM instance, making it particularly useful for use cases involving containerization. You can also set up a secondary CIDR range in your subnetwork that can be explicitly used to assign alias IPs for that subnet.
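
As an illustration, a named secondary range can be attached when the subnetwork is created, and alias IPs for instance NICs can then be drawn from it. A sketch with placeholder names and ranges:

    # Create a subnetwork with a primary range and a named secondary range
    gcloud compute networks subnets create subnet-with-aliases \
        --network=my-custom-vpc \
        --region=us-east1 \
        --range=10.10.0.0/20 \
        --secondary-range=containers=10.11.0.0/20

    # Alias IPs for a VM's NIC can then be allocated from the "containers" range
    # (for example, via the console or the instance's network interface settings).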

In Google Cloud, a VM can have multiple network interface cards (NICs) belonging to different VPC networks. This means that a VM can belong to more than one network, as long as those networks' IP ranges don't overlap. The internal DNS name is only associated with the first interface (nic0), and NICs can only be configured during the creation of the instance, not afterward. One limitation of this is that you can't delete an interface without deleting the VM itself. In other words, you should think of a NIC not as a separate resource from the VM but as part of it. This approach to network interfaces is slightly different from what you might have seen with other cloud providers, and it should guide your design decisions, since you don't have the flexibility to add or remove network interfaces on an existing VM instance. You should also keep in mind that instances support one NIC per vCPU, up to a maximum of eight NICs.

One typical use case for a VM with multiple NICs is when the application requires a separation between management access and data access through different networks. Multiple NICs are also used for VMs performing network functions such as load balancing, or when you want a VM to privately access two different networks that can't be peered for security reasons (for example, a DMZ and an internal network).
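
A multi-NIC instance is defined at creation time by repeating the --network-interface flag, one entry per network. A hedged sketch, assuming both subnetworks exist in the instance's region and that all names are placeholders:

    # Create a VM with one NIC in a data network and one in a management network
    gcloud compute instances create multi-nic-vm \
        --zone=us-east1-b \
        --network-interface=network=vpc-data,subnet=data-subnet \
        --network-interface=network=vpc-mgmt,subnet=mgmt-subnet,no-address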

NAT

In Google Cloud, a VM instance without an external IP address cannot communicate with public networks (such as the internet) by default. Cloud NAT, Google Cloud's Network Address Translation (NAT) service, allows such instances to send outbound packets to the internet (and receive corresponding inbound responses on the same established connection). Cloud NAT is a fully managed service and is a distributed, software-defined solution. Therefore, it's not based on virtual appliances or proxy VMs, and all address translation and packet handling functions are carried out in Google Cloud's underlying network infrastructure.

Using a service such as Cloud NAT has security benefits compared to having several external IP addresses assigned to VM instances. You can also manually assign specific NAT IP addresses to be used by the NAT virtual gateway, which allows you to confidently share that set of IP addresses with third parties that require IP whitelisting to provide access. As a fully managed and software-based service, Cloud NAT is highly available and scalable by default, with virtually no impact on the bandwidth and network performance of outgoing connections.
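
Cloud NAT is configured on a Cloud Router in a given region. A minimal sketch with placeholder names, letting GCP auto-allocate the external NAT addresses (to whitelist specific addresses with third parties, you would instead reserve static external IPs and assign them to the NAT gateway):

    # A Cloud Router is required to host the NAT configuration
    gcloud compute routers create nat-router \
        --network=my-custom-vpc \
        --region=us-east1

    # NAT all subnetwork ranges in that region, with auto-allocated external IPs
    gcloud compute routers nats create nat-config \
        --router=nat-router \
        --region=us-east1 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges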

For VM instances that only access Google APIs and services, an alternative is Private Google Access. This service allows VMs with only internal IP addresses to access Google's Cloud and Developer APIs and most Google Cloud services through Google's network.
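
Private Google Access is a per-subnetwork setting that can be toggled at any time; a one-line sketch with placeholder names:

    # Enable Private Google Access on an existing subnetwork
    gcloud compute networks subnets update subnet1-eastus \
        --region=us-east1 \
        --enable-private-ip-google-access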

DNS

Cloud DNS is a highly reliable, low-latency DNS service powered by Google's global network. It's the only service in Google Cloud offered with a 100% availability SLA. That level of availability is typically so difficult and costly to achieve that services are almost never provided with it. However, DNS is such an essential service – virtually every other service depends on it – that if it goes down, the damage would far exceed the cost of keeping it up at all times (without name resolution, no system anywhere would be reachable, which has the same kind of impact as all your services being down, at least from the perspective of end users). Therefore, Google delivers DNS with a 100% availability SLA, and you can leverage that level of availability for your organization's DNS needs on GCP with Cloud DNS. You also get autoscaling and low latency, with fast anycast name servers distributed globally.

Cloud DNS offers both public zones, for names visible to the internet and that can be resolved by anyone; and private managed DNS zones, for names visible only internally. The service also includes managed DNS security extensions (DNSSEC), which add security capabilities that protect your domains from DNS spoofing and cache poisoning attacks.

To be able to manage and automate DNS records for systems on GCP, it's a good idea to host the relevant DNS zones in Cloud DNS by making it the authoritative DNS server for those zones (in case you currently have your DNS services elsewhere).
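
As an illustration, a private managed zone attached to a VPC network (with a sample A record) could be created as follows; the zone, domain, and record values are placeholders:

    # Private zone visible only from my-custom-vpc
    gcloud dns managed-zones create corp-internal \
        --dns-name=corp.example.com. \
        --description="Private zone for internal names" \
        --visibility=private \
        --networks=my-custom-vpc

    # Example A record pointing at an internal address
    gcloud dns record-sets create app.corp.example.com. \
        --zone=corp-internal \
        --type=A \
        --ttl=300 \
        --rrdatas=10.10.0.15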

Cloud CDN

Cloud CDN, Google Cloud's Content Delivery Network (CDN) service, allows you to use Google's global edge network to serve content closer to users. You can reduce latency and speed up access to website content (especially "static" content such as images and JavaScript files) for your users. In addition, by having commonly accessed content distributed globally through Google's edge network, you also reduce the load on your web servers.

This service works with an external load balancer (which we will explore later in this chapter) to deliver content. The load balancing service provides the frontend IP addresses and ports that receive requests, as well as the backends (referred to as origin servers) that provide the content in response to those requests. To understand how Cloud CDN works, particularly with regard to how it handles caching and cache misses, please refer to https://cloud.google.com/cdn/docs/overview#how-cdn-works.
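
Cloud CDN is enabled as a property of the load balancer's backend service. A minimal sketch, assuming a global external HTTP(S) load balancer with a backend service named web-backend already exists:

    # Turn on Cloud CDN caching for an existing global backend service
    gcloud compute backend-services update web-backend \
        --global \
        --enable-cdn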

Network pricing and service tiers

Pricing is always an essential factor in any design, so in this section, we'll highlight the basic pricing considerations for networking services in Google Cloud.

General network pricing

Although these prices can change, the following table provides some general pricing information for Google Cloud networking services, at the time of writing, to give you a good idea of what is charged and what the cost structure looks like:

Further and more up-to-date information about pricing can be found at https://cloud.google.com/network-tiers/pricing.

The following diagram provides a visual illustration of what types of network traffic are charged, as well as what types are not:

Figure 3.1 – Types of network traffic in GCP – free (solid lines) and charged (dashed lines)

In the preceding diagram, the arrows indicate the direction of the traffic. A solid line shows traffic that is free of charge, while a dashed line shows network traffic that incurs a cost.

As a cloud architect, you may want to explore ways to reduce unnecessary network costs, such as "chatty" applications communicating across different regions when they could have been deployed in the same region. Such design decisions are obviously multi-dimensional and involve trade-offs between cost, performance (bandwidth, latency), availability, and more. Very often, however, the cost aspect is left out of these discussions.

Service tiers

Google Cloud is the first major public cloud provider to offer a tiered cloud network. There are two network service tiers you can choose from: Premium Tier and Standard Tier.

The main difference is that with the Premium Tier, most or all network traffic is routed through Google's "premium" network backbone. In contrast, the Standard Tier relies on regular internet service provider (ISP) networks. This really only affects the inter-regional part of the network, not so much the internal networking and the communication between cloud services within the same region, which have similar characteristics in both tiers.

The Premium Tier is therefore tailored to online services that rely on high performance (with higher throughput and lower latency communications) at a global scale. It is well-suited for large enterprises and businesses for which downtime means impactful loss of revenue. The Standard Tier, on the other hand, optimizes for cost by relying on regular ISP networks for internet traffic. It is well-suited for smaller enterprises and businesses that operate only within a single cloud region. The network performance in the Standard Tier is similar to that of other cloud providers, according to Google. Therefore, it is not necessarily to be seen as a "cheap and dirty" networking solution for non-serious businesses, but a solution that is, well, standard. The Premium Tier is then the next level for those seeking the best Google can offer in terms of networking, which is the largest cloud provider network globally (in terms of the number of points of presence) and arguably the best performing one.

Due to the global nature of the premium service tier, the network services and resources that are global in nature, such as global HTTP(S) Load Balancing and Cloud NAT Gateways, are thus always Premium Tier themselves (although they'll likely also have a Standard Tier version that operates only within a single region). In a way, Google is simply packaging its global network services into a Premium Tier offering by keeping everything inside its own high-performing network. It is doing so to draw a line between the performance and pricing characteristics of global and regional networking products, in addition to giving users the option to derive cost savings at the expense of network performance. Keep in mind that the Premium Tier is the default tier when deploying and using GCP networks. In contrast, the Standard Tier is something you have to opt in for when you want to tell Google to use ISP networks (instead of their own) for traffic over the internet.
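
The tier can be set as a project-wide default or per resource. A sketch, assuming you explicitly want the Standard Tier; the address name and region are placeholders:

    # Make Standard Tier the default for new resources in this project
    gcloud compute project-info update --default-network-tier=STANDARD

    # Or choose the tier per resource, for example when reserving an external IP
    gcloud compute addresses create standard-tier-ip \
        --region=us-east1 \
        --network-tier=STANDARD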

Getting hands-on – deploying a custom VPC network

If you don't already have a Google Cloud account and project set up, please refer to Chapter 1, An Introduction to Google Cloud for Architects, for instructions on how to do so.

In the GCP console, make sure you have the Compute Engine API (required for deploying networks) enabled on your project by going to the following URL:

https://console.cloud.google.com/marketplace/product/google/compute.googleapis.com.

Then, click on Enable. If you don't have a project selected (via the top bar of the console), you'll be asked to choose the project where you wish to enable the API. If you have a project selected and you don't see this button, this means the API is already enabled on your project.

In the GCP console, expand the navigation menu on the left by clicking on the menu icon (the icon with three stacked horizontal lines) in the top-left corner. Then, search for and click on VPC network. Your project likely has a default VPC network, unless you deleted it after creating the project. Explore the default network, observing the subnetworks that were created (one on each of the available regions), and also explore the Firewall section to see the default firewall rules. Finally, check the Routes section to view the local default routes for each subnetwork, plus a default route to the internet. We'll dive deeper into firewalls and routes in the next section.
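
The same exploration can be done from Cloud Shell or a terminal with the gcloud CLI, for example:

    # List VPC networks, subnetworks, firewall rules, and routes in the project
    gcloud compute networks list
    gcloud compute networks subnets list
    gcloud compute firewall-rules list
    gcloud compute routes list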

Click on VPC networks and then on Create VPC Network, as shown in the following screenshot:

Figure 3.2 – Creating a VPC network

Give it a name (for example, my-custom-vpc), a description if you'd like, and then, under the Subnets section, select Custom under Subnet creation mode. Note that you won't have any subnetworks listed as you would if you'd selected automatic mode because, now, how your subnetworks are defined is up to you.

Click on Add Subnet. Name it subnet1-eastus, and then for Region, select us-east1, and for IP address range, type in 10.10.0.0/20. In the Private Google Access field, select On. Under Flow logs, select On, and then click on Configure Logs to expand the flow logs configuration form. After that, select 30 SEC for Aggregation Interval. Leave the remaining options with their default values. Click on Done.

Click on Add Subnet to add a new subnet. Name it subnet2-westeu. For region, choose europe-west1. In the IP address range field, type in 10.20.0.0/20. This time, select Off for both the Private Google access and Flow logs fields. Click on Done.

Under Dynamic routing mode, select Global. Click on Create. On the VPC networks page, click on your newly created VPC (my-custom-vpc). Explore the configuration of your new VPC, which should match the one shown in the following screenshot. Navigate to the Firewall rules tab, as well as the Routes tab, and see how those lists are empty (except for the default routes to the internet and to your subnetworks, which were created for you). With custom VPC networks, you don't get certain things automatically set up for you, but you do have full control over the configuration. When designing networks for enterprise organizations, this is the way you should approach it:

Figure 3.3 – A custom VPC network
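
For reference, the same deployment can be reproduced from the command line. A sketch using the gcloud CLI, mirroring the values entered in the console:

    # Custom mode VPC with global dynamic routing
    gcloud compute networks create my-custom-vpc \
        --subnet-mode=custom \
        --bgp-routing-mode=global

    # Subnet 1: Private Google Access and flow logs (30-second aggregation) enabled
    gcloud compute networks subnets create subnet1-eastus \
        --network=my-custom-vpc \
        --region=us-east1 \
        --range=10.10.0.0/20 \
        --enable-private-ip-google-access \
        --enable-flow-logs \
        --logging-aggregation-interval=interval-30-sec

    # Subnet 2: no Private Google Access, no flow logs
    gcloud compute networks subnets create subnet2-westeu \
        --network=my-custom-vpc \
        --region=europe-west1 \
        --range=10.20.0.0/20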

Now, what did you just create? This is a VPC network with two subnetworks, one located in the East US region and another in the West Europe region. It's a custom VPC network with no default firewall rules. Therefore, no communication is currently allowed in this network. You're starting with tight security. You also have global dynamic routing mode enabled, which means that GCP's Cloud Router has visibility to the resources in all regions and will be able to dynamically route traffic to VMs across different regions. For example, if there's a VPN tunnel set up in one of the regions to an on-premises network, VMs in other regions can only reach the tunnel (and the on-premises network) if global dynamic routing is enabled. In addition, both subnetworks here (and any additional subnetworks that are created) will be advertised by Cloud Router, and all the VMs in all regions will dynamically learn about on-premises hosts, thanks to dynamic routing. To better understand this, take a look at the following diagram, which depicts a VPC network connected to an on-premises network without dynamic global routing:

Figure 3.4 – VPC without global dynamic routing

With this regional routing set up, VMs in other regions (different from the one where the VPN tunnel endpoint is) can't be reached by on-premises VMs and aren't able to reach on-premises VMs through the tunnel. They're also not able to learn routes dynamically from the connected on-premises network(s). In other words, without global dynamic routing, your network only "exists" in a single region from the perspective of external networks.

Finally, you enabled Private Google Access and Flow Logs in the East US subnetwork. This means that in this subnetwork, VMs will be able to access Google services and APIs privately (that is, with no need for a public IP address or external access), and a sample of the network flows sent from and received by the VMs in this subnetwork (Flow Logs) will be recorded (every 30 seconds, in our configuration) and be available for use in network monitoring or real-time security analysis. We will explore Flow Logs further in this chapter, but for now, know that this is a crucial network monitoring feature that you can enable at the subnetwork level.

With the basics of networking on GCP out of the way, let's dig into two fundamental elements of any network: routes and firewalls.
