Architecting Google Cloud Solutions: Learn to design robust and future-proof solutions with Google Cloud technologies
By Victor Dantas. Published by Packt, Apr 2021. 1st Edition, 472 pages. ISBN-13: 9781800563308.
Mastering common network designs

In this section, we're going to cover some design considerations and best practices, followed by common designs, for VPC deployments on GCP.

Design considerations and best practices

The network is one of the most fundamental components of an IT infrastructure. For that reason, the design of the VPC network should be given sufficient time and effort in the early stages of the overall solution design. Design decisions at this level can't be easily reversed later, so make sure you take all relevant input into consideration.

In this final section of the chapter, you're going to learn about common design patterns you can use as a basis for your own design. But before that, we will highlight a few best practices to keep in mind to guide your decisions.

Use a naming convention

This goes for all your resources, not only the network-related ones. But if you're starting your design with the network (a natural starting point), that is the time to settle on a naming convention. This involves defining how resource names are constructed and setting the abbreviations, acronyms, and relevant labels that help users identify a resource's purpose, its associated business unit, and its location. Some examples of labels you may need to define are as follows:

  • Company short name: acm (ACME)
  • Department or Business Unit: it, hr, and so on
  • Application code: crm (Customer Relationship Management application), pay (Payroll application)
  • Region abbreviation: eu-we1 (europe-west1), us-ea1 (us-east1)
  • Environment: dev, test, prod, and so on

Once you have defined some labels and possible values based on your IT environment, you can start defining naming structures for various GCP resources; for example:

  • Department-specific and global network resources: {company name}-{dept-label}-{environment-label}-{resource type}-{seq#}. For example, if you have one VPC per company department, a department's VPC could be named acm-it-test-vpc-1.
  • Department- or application-specific regional/zonal resources: {company name}-{[APP or DEPT] label}-{region/zone label}-{environment-label}-{resource type}-{seq#}; for example, applied to subnetworks: acm-hr-eu-we1-prod-subnet-01. A naming convention will help ensure consistency in how resources are named and will facilitate several aspects of infrastructure management.
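
As a quick illustration, a convention like the one above can be captured in a small helper so that names are always assembled the same way. This is a minimal sketch; the helper and the label values (acm, it, eu-we1, and so on) are just the examples from this section, not fixed requirements:

```shell
# Hypothetical helper: joins its arguments with hyphens so every
# resource name follows the same {label}-{label}-...-{seq#} pattern.
make_resource_name() {
  local IFS='-'
  echo "$*"
}

# Department-scoped global resource (a VPC):
make_resource_name acm it test vpc 1             # acm-it-test-vpc-1

# Application-scoped regional resource (a subnetwork):
make_resource_name acm hr eu-we1 prod subnet 01  # acm-hr-eu-we1-prod-subnet-01
```

Generating names from one place, rather than typing them by hand, is a simple way to keep the convention consistent across teams and scripts.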

Subnetwork design

Firstly, as we mentioned previously, avoid using default or auto mode networks and opt for custom mode networks instead so that you have full control over the subnetwork design and firewall rules. Then, you can deploy subnetworks in the regions that your business operates in and adopt an IP address scheme so that there are no overlaps with any other network (such as on-premises networks) that you intend to peer or integrate with your VPC network.

Also, aim to group your applications into a few large subnetworks. Traditional enterprise networks separate applications into many small address ranges (using, for example, VLANs). However, in modern cloud networks, fewer subnets with large address spaces are recommended, as this facilitates management and reduces complexity at no cost to security. Firewall rules, service accounts, and network tags are all features that can be used to segment traffic and isolate network communications as needed.
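
Assuming the gcloud CLI, a custom mode network with a couple of non-overlapping regional subnets could be sketched as follows (all names and CIDR ranges are illustrative):

```shell
# Create a custom mode VPC (no auto-created subnets).
gcloud compute networks create acm-it-prod-vpc-1 \
    --subnet-mode=custom

# Add one large subnet per region of operation, with ranges chosen
# not to overlap each other or any on-premises network you may peer with.
gcloud compute networks subnets create acm-it-eu-we1-prod-subnet-01 \
    --network=acm-it-prod-vpc-1 \
    --region=europe-west1 \
    --range=10.10.0.0/20

gcloud compute networks subnets create acm-it-us-ea1-prod-subnet-01 \
    --network=acm-it-prod-vpc-1 \
    --region=us-east1 \
    --range=10.20.0.0/20
```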

Shared VPC for multiple projects

If you're designing a multi-project solution in GCP, you may want to consider deploying a shared VPC. As we discussed previously, a shared VPC offers an effective way to simplify management and centralize security and network policies in a single host project, while service projects (which may represent, for example, different company departments or applications) simply deploy their resources into the shared network. This avoids having multiple VPC networks to manage, which increases the risk of inconsistent configurations and policies, excessive use of network administration roles, and disruptive changes to the network design.

For the service projects, grant the network user role at the subnetwork level so that each project can only use its assigned subnetwork(s), which reinforces the principle of least privilege.
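
With the gcloud CLI, that setup could be sketched roughly as follows (the project IDs, subnet name, and group address are all illustrative):

```shell
# Designate the host project and attach a service project to it.
gcloud compute shared-vpc enable acm-host-project

gcloud compute shared-vpc associated-projects add acm-hr-project \
    --host-project=acm-host-project

# Grant the network user role on a single subnetwork only, so the
# service project's deployers can use just their assigned subnet.
gcloud compute networks subnets add-iam-policy-binding acm-hr-eu-we1-prod-subnet-01 \
    --project=acm-host-project \
    --region=europe-west1 \
    --member="group:hr-deployers@acme.com" \
    --role="roles/compute.networkUser"
```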

Isolate workloads

For isolation between project-specific workloads and for independent IAM controls, you can create VPC networks in different projects. Network-related IAM roles assigned at the project level will apply to all VPC networks within the project, so if you require independent IAM policies per VPC network, create different projects to host those networks. This setup works as an alternative to or in conjunction with a shared VPC model.

If you're working with an organization that deals with compliance regulations (such as HIPAA or PCI-DSS) and sensitive data that needs to be secured appropriately, then isolate these types of data into dedicated VPC networks. Two different VPC networks in GCP will never be able to reach each other (from a routing perspective) unless they're peered (or are integrated by other means, such as with a VPN gateway or a network appliance). This significantly reduces the risk of unauthorized access to the data or breach of compliance.
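
If two such isolated networks do later need to communicate, connectivity has to be established explicitly. For example, VPC Network Peering only becomes active once it has been created from both sides (project and network names below are illustrative):

```shell
# Peering direction 1: from the isolated compliance VPC.
gcloud compute networks peerings create pci-to-corp \
    --project=acm-pci-project \
    --network=acm-pci-prod-vpc-1 \
    --peer-project=acm-corp-project \
    --peer-network=acm-it-prod-vpc-1

# Peering direction 2: from the corporate VPC. Routes are only
# exchanged once both directions exist.
gcloud compute networks peerings create corp-to-pci \
    --project=acm-corp-project \
    --network=acm-it-prod-vpc-1 \
    --peer-project=acm-pci-project \
    --peer-network=acm-pci-prod-vpc-1
```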

Limit external access

Limit the use of external IP addresses and access to public networks as much as possible. Resources with only an internal IP address can still access many Google services and APIs through Private Google Access, and you can use Cloud NAT to provide VMs with outbound internet access. By limiting unnecessary external access, you reduce your environment's attack surface, eliminating the possibility of VMs being reached from external sources (especially important when management protocols such as SSH and RDP are not restricted at the firewall level).
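
As a sketch with the gcloud CLI: Private Google Access is enabled per subnet, and Cloud NAT is attached to a Cloud Router in the same region (all names here are illustrative):

```shell
# Let internal-only VMs in this subnet reach Google APIs and services.
gcloud compute networks subnets update acm-it-eu-we1-prod-subnet-01 \
    --region=europe-west1 \
    --enable-private-ip-google-access

# Cloud NAT requires a Cloud Router in the region.
gcloud compute routers create acm-it-eu-we1-prod-router-1 \
    --network=acm-it-prod-vpc-1 \
    --region=europe-west1

# Provide outbound-only internet access for internal VMs.
gcloud compute routers nats create acm-it-eu-we1-prod-nat-1 \
    --router=acm-it-eu-we1-prod-router-1 \
    --region=europe-west1 \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
```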

Common network designs

In this section, we will look at some of the common network designs that are adopted by enterprises using Google Cloud.

Single VPC network, high availability

A single VPC network, which can be of global scale in Google Cloud, can, in many cases, suffice if you wish to build a robust network design that is easy to manage and maintain. There are two ways of obtaining high availability with a single VPC network:

  • Leveraging different zones within a subnetwork: By deploying instances to different zones within a subnetwork (and its associated region), you spread your application across different infrastructure failure domains, therefore obtaining improved availability and resiliency against hardware failures (and, in some cases, even the failure of an entire data center).
  • Leveraging different regions (with different subnetworks): Deploying instances to different regions allows you to obtain a higher degree of failure independence, which even protects you against regional failures and natural disasters. It's the best design choice for realizing robust global systems. With a global HTTP(S) Load Balancer, you can deliver lower latency for end users with intelligent global routing, as you learned previously.

Whether you opt for multi-zonal or multi-regional deployments on the network, you can obtain high availability without additional security complexity (it's still one single GCP firewall for the network). The following diagram illustrates this design:

Figure 3.9 – Single VPC network with zonal and regional deployments


In the preceding diagram, VM instance Instance1B is a failover instance for Instance1A, located in a different zone, which can serve traffic if Instance1A fails. Instance1C is a failover instance located in a different region.
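
A deployment along the lines of Figure 3.9 could be sketched with the gcloud CLI as follows (zones, subnet names, and instance names are illustrative, and in practice you would typically use managed instance groups behind a load balancer rather than individual VMs):

```shell
# Same region, different zones: protects against zonal failures.
gcloud compute instances create instance-1a \
    --zone=europe-west1-b \
    --subnet=acm-it-eu-we1-prod-subnet-01

gcloud compute instances create instance-1b \
    --zone=europe-west1-c \
    --subnet=acm-it-eu-we1-prod-subnet-01

# Different region, hence a different subnet of the same VPC:
# protects against regional failures and natural disasters.
gcloud compute instances create instance-1c \
    --zone=us-east1-b \
    --subnet=acm-it-us-ea1-prod-subnet-01
```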

Shared VPC and multiple service projects

For a more complex and scalable infrastructure, you can opt for having a shared VPC where network controls can be centralized (that is, the configuration of things such as subnetworks, routes, and firewall rules), with service projects able to share the same network infrastructure. The users in these service projects still have the autonomy to deploy and manage instances and applications, without the risk of impacting the network configuration. This is a great way to prevent breaking changes and inconsistencies in the network configurations.

This design is exemplified in the following diagram:

Figure 3.10 – Shared VPC and multiple service projects


Only one region is shown in the preceding diagram, but the design works just as well with multiple regions. Subnetworks 1 and 2 are shared with the DEV service project (as you learned previously, you can define which specific subnetworks are shared with which specific service projects). Subnetworks 3 and 4 are used by the TEST project, while 5 and 6 are used by the PROD project. Network policies are centralized in the host project. Workloads and VM instances are managed within each of the service projects individually, and they can be deployed to the subnetworks that are created in the host project.

Multiple VPC networks bridged by a next-generation firewall (NGFW) appliance

Sometimes, security requirements will dictate that "untrusted" network environments (the portion of the network that's exposed to the internet or outside networks) be more strictly isolated from "trusted" networks (such as the internal networks hosting applications) via a next-generation firewall (NGFW). Google Cloud's native firewall is not an NGFW, that is, a firewall with additional network filtering functions such as deep packet inspection. An NGFW provides deeper insight into the packets traversing the network, allowing you to detect and prevent network attacks. While GCP's built-in firewall service lacks such capabilities, nothing prevents you from deploying a VM appliance running a software-based NGFW (several vendors make VM images available for consumption in the cloud).

In this design, an untrusted network (DMZ) is introduced to terminate outside connections (such as hybrid interconnects and connections originating from the internet). Traffic is then filtered by the NGFW, which is deployed to a multi-NIC VM, before reaching the trusted networks. The NGFW VM has a NIC in each of the VPC networks, which, in this design, must all reside within the same GCP project. Therefore, you should observe the limits on the number of VPC networks and, most importantly, on the number of NICs supported on a single VM (the former can be extended on demand, but the latter has a hard limit of eight at the time of writing).
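
Such an appliance could be deployed as a multi-NIC VM roughly as follows, assuming a vendor-provided NGFW image (the image reference and all network and subnet names are placeholders):

```shell
# One NIC per VPC; IP forwarding must be enabled so the appliance
# can route traffic between the networks it bridges.
gcloud compute instances create ngfw-appliance-1 \
    --zone=europe-west1-b \
    --can-ip-forward \
    --image=VENDOR_NGFW_IMAGE \
    --image-project=VENDOR_PROJECT \
    --network-interface=network=dmz-vpc,subnet=dmz-subnet-1 \
    --network-interface=network=prod-vpc,subnet=prod-subnet-1,no-address \
    --network-interface=network=staging-vpc,subnet=staging-subnet-1,no-address
```

Only the DMZ-facing NIC is given an external address here; the trusted-side NICs are internal-only, consistent with the earlier advice on limiting external access.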

There are many variations of this design, but the following diagram shows an example of such a topology:

Figure 3.11 – Multiple VPC networks bridged by an NGFW appliance


In the preceding diagram, the DMZ VPC is where external traffic is terminated. In this example, this is traffic from an on-premises location, from another public cloud network, and from the internet. This is the "untrusted" network. The two other, trusted networks, Prod and Staging, represent a production VPC network and a staging VPC network where application instances are deployed, respectively. Traffic to and from the untrusted zone is filtered through the NGFW.

You could combine this design with that of a shared VPC and service projects so that, if you have multiple projects, you won't need to replicate this design across them (which would require numerous NGFW appliances and licenses). For example, the project shown in Figure 3.11 would become a host project, with the two trusted VPCs being shared with other service projects (used, for example, by different development teams).
