Amazon VPCs are great for housing applications within your AWS account, but it is rare that you will need only a single VPC. Many AWS customers use multiple VPCs, ranging from five VPCs in a single account to hundreds of VPCs spread across several AWS accounts. This is due to a number of factors: different teams may own different AWS accounts and VPCs, or there may be security concerns that require decentralizing workloads. The next sections will focus on building connectivity between your Amazon VPCs.
VPC Peering
One of the simplest ways to build connectivity between VPCs is a VPC peering connection, a direct connection built between two VPCs. VPC peering does not use a specific type of gateway or external connection, so it imposes no bandwidth bottleneck or throughput limit on traffic crossing the peering.
Refer to Figure 1.15 for a visual of VPC peering:
Figure 1.15: VPC peering connection
In the preceding figure, you will see that there is a VPC peering connection built between VPC-A and VPC-B. Every VPC peering connection has a VPC peering connection ID associated with it. That peering connection ID can then be used within the route tables of the VPC as a target for any route entries that are to use the VPC peering. In this example, VPC-A has an IPv4 CIDR of 10.1.1.0/24 and VPC-B has a CIDR of 10.2.2.0/24. Each VPC has local route tables configured with routes to the peered VPC's CIDR. The target of these routes is the VPC peering connection ID, which is formatted as pcx-xxxxxxxxxx.
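As a hedged sketch, such routes could be added from the AWS CLI; the route table IDs and peering connection ID below are hypothetical placeholders:

```shell
# Add a route in VPC-A's route table sending VPC-B's CIDR (10.2.2.0/24)
# across the peering connection. All IDs here are placeholders.
aws ec2 create-route \
    --route-table-id rtb-0a1b2c3d4e5f67890 \
    --destination-cidr-block 10.2.2.0/24 \
    --vpc-peering-connection-id pcx-0123456789abcdef0

# Add the mirror route in VPC-B's route table back to VPC-A (10.1.1.0/24),
# ensuring bidirectional communication:
aws ec2 create-route \
    --route-table-id rtb-0f9e8d7c6b5a43210 \
    --destination-cidr-block 10.1.1.0/24 \
    --vpc-peering-connection-id pcx-0123456789abcdef0
```

Both route tables reference the same peering connection ID as the target; only the destination CIDR differs on each side.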
There are a few limitations and constraints you need to consider about VPC peering. These may influence your decision of whether to implement VPC peering for connectivity or choose something such as AWS TGW, which will be discussed in Chapter 5, Hybrid Networking with AWS Transit Gateway.
You may have occasions where VPCs have multiple VPC peering connections to other VPCs. As the number of VPCs within your environment grows, your number of VPC peering connections may also grow. It may be easy to assume that you could just “daisy-chain” VPCs together and essentially route through a central VPC to get to others. This is commonly referred to as a hub-and-spoke network.
However, you need to be aware that VPC peering connections are non-transitive. This means that traffic cannot transit through a VPC and across another VPC peering connection. Refer to Figure 1.16 as an example:
Figure 1.16: Non-transitive VPC peering
There are three VPCs in this example, with VPC-B peered to both VPC-A and VPC-C. Any traffic from VPC-A or VPC-C destined for VPC-B will be permitted, and vice versa. However, any traffic from VPC-A destined for VPC-C will be dropped due to the non-transitive nature of VPC peering. For a transitive solution, it is often recommended to use a service such as AWS TGW. Alternatively, you could opt for a full-mesh VPC peering solution, in which every VPC is peered to every other VPC. As you may imagine, once the number of VPCs reaches a certain scale, managing this number of VPC peering connections becomes a difficult task.
Note
This non-transitive nature also applies to services such as internet gateways, NAT gateways, Direct Connect, and gateway endpoints. In other words, VPC peering cannot be used for resources in a VPC to access these services within another VPC.
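As a quick illustration of why full-mesh peering becomes hard to manage, a mesh of n VPCs requires n(n-1)/2 peering connections, which grows quadratically:

```shell
# Peering connections required for a full mesh of n VPCs: n*(n-1)/2
for n in 5 10 25 50; do
    echo "$n VPCs -> $(( n * (n - 1) / 2 )) peering connections"
done
# 50 VPCs would already require 1,225 peering connections
```

Fifty VPCs would need 1,225 connections, which also explains why the per-VPC peering quota (discussed shortly) caps out well below that.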
Overlapping CIDR Blocks
When creating a VPC peering connection, the two VPCs cannot have overlapping IP CIDR blocks. This applies to both IPv4 and IPv6, with both primary and secondary IP blocks considered. If you attempt to create a VPC peering between VPCs with overlapping CIDRs, the peering connection will fail.
Inter-Region Maximum Transmission Unit (MTU)
Creating VPC peering connections between VPCs that reside in different AWS regions is a supported configuration. However, it is important to consider that the MTU across inter-region peering connections is 1,500 bytes. This differs from intra-region peering connections, which support a jumbo MTU of 9,001 bytes. This can be an important consideration for traffic that may have issues with the fragmentation of payloads, that is, a packet larger than 1,500 bytes needing to be fragmented into multiple packets before being sent across the VPC peering.
Refer to Figure 1.17 for an example:
Figure 1.17: Inter-region VPC peering
Within a region, the MTU is much higher, and applications can use jumbo frames. However, traffic between AWS regions is restricted to a standard 1,500 bytes, and applications must be able to detect this and adjust the packet payload size accordingly.
The VPC peering connection between VPC-A and VPC-B in region us-east-1 has an MTU of 9,001 bytes. The inter-region peering between VPC-A in us-east-1 and VPC-C in ap-southeast-2 has an MTU of 1,500 bytes.
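One way to verify the usable MTU from a Linux instance is to send non-fragmentable pings of a known size; the destination IP below is a placeholder for an instance in the peered VPC:

```shell
# -M do sets the "don't fragment" flag; -s sets the ICMP payload size.
# 1472 bytes of payload + 28 bytes of IP/ICMP headers = 1500 bytes,
# which fits the inter-region peering MTU:
ping -M do -s 1472 -c 3 10.2.2.10

# 8973 + 28 = 9001 bytes; this should succeed across an intra-region
# peering but fail across an inter-region peering connection:
ping -M do -s 8973 -c 3 10.2.2.10

# tracepath can also discover the path MTU directly:
tracepath 10.2.2.10
```

If the jumbo-sized ping fails with a fragmentation-needed error while the 1,472-byte ping succeeds, the path is limited to the standard 1,500-byte MTU.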
Other Considerations for VPC Peering Connections
There are several other characteristics of VPC peering connections that need to be considered when using this solution for providing inter-VPC connectivity. A few of them are listed here:
- Routing across VPC peering connections depends entirely on static routes. This means route tables must be updated within both VPCs to ensure bidirectional communication.
- The number of VPC peering connections per VPC is limited by a service quota. The default quota is 50 peering connections, but this is adjustable up to a maximum of 125.
- You cannot create multiple VPC peering connections between the same two VPCs.
- Unicast reverse path forwarding is not supported for VPC peering. Therefore, if multiple VPCs with the same CIDR block are peered to the same VPC, you will need to implement longest prefix match route table entries to achieve symmetric traffic.
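The peering quota mentioned above can be inspected and raised via the Service Quotas API. A sketch, assuming L-7E9ECCDB is still the quota code for active VPC peering connections per VPC (verify with list-service-quotas if it has changed):

```shell
# Check the current "Active VPC peering connections per VPC" quota:
aws service-quotas get-service-quota \
    --service-code vpc \
    --quota-code L-7E9ECCDB

# Request an increase (up to the 125 maximum):
aws service-quotas request-service-quota-increase \
    --service-code vpc \
    --quota-code L-7E9ECCDB \
    --desired-value 100
```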
Note
For a full list of VPC peering limitations, refer to the AWS documentation page here: https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-basics.html#vpc-peering-limitations
Provisioning Process for VPC Peering Connections
Provisioning a VPC peering connection is a multi-step process. The process includes a VPC requesting to peer with another VPC, then that request being approved before the VPC peering connection is established. The VPC initiating the peering connection request is known as the requester VPC and the receiving VPC is known as the accepter VPC.
This process is quite simple when peering VPCs within the same AWS account, as the request and the approval can all be done within the account. However, this process also works across AWS accounts, so the requester VPC could be in one AWS account while the accepter VPC is in another. The accepter VPC could be within an account under the same AWS Organization or a separate one.
A VPC peering connection will go through a series of stages within both the provisioning and deprovisioning processes. These stages represent the current state of the peering connection and whether it is ready for use. The stages are outlined here:
- Initiating request: The request to form a VPC peering connection has been made and is in the initiation state. From here, the process will move to pending acceptance, unless there is a failure.
- Failed: The VPC peering connection has failed after initiation. The peering connection cannot be recovered from this state – it must be re-initiated.
- Pending acceptance: The VPC peering connection has been initiated and is awaiting approval from the accepter VPC.
- Expired: The peering request was not accepted within the allotted time (one week) and has expired.
- Rejected: The accepter VPC has rejected the request for a VPC peering connection to be created.
- Provisioning: The accepter VPC has accepted the peering request and it is in the process of being provisioned.
- Active: The VPC peering connection is active and ready for use. VPC route tables can be updated with route entries to use the VPC peering connection ID as the target.
- Deleting: The current VPC peering connection has been requested for deletion and is in the process of being removed.
- Deleted: The VPC peering connection has been removed and is no longer available for use.
Understanding the VPC peering connection status will give insight into whether the connection is active and usable or whether there is a problem.
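The current stage can also be checked from the AWS CLI; the pcx- ID below is a placeholder:

```shell
# Retrieve just the status code of a peering connection, e.g.
# "pending-acceptance", "active", or "failed":
aws ec2 describe-vpc-peering-connections \
    --vpc-peering-connection-ids pcx-0123456789abcdef0 \
    --query 'VpcPeeringConnections[0].Status.Code' \
    --output text
```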
Configuring VPC Peering Connections
This section details the creation of a VPC peering connection both from the AWS console and using the AWS CLI.
To create a VPC peering connection from the console, navigate to the VPC dashboard, select Peering connections, and choose the Create peering connection option. The process and required VPC peering details are shown in Figure 1.18.
Figure 1.18: Create VPC peering details
VPC peering can be initiated within the same account or between AWS accounts; in either case, the VPCs available for peering will show up in the drop-down menu.
Once the VPC peering connection has been created, it will need to be accepted. This can take place in the same AWS account or a separate one. Navigate again to the Peering connections section within the account of the accepter VPC and approve the pending request, as shown in Figure 1.19.
Figure 1.19: Accept VPC peering
Before the VPC peer is active, the request must be accepted, either by the same account or the other account to which the request was made.
A VPC peering connection can be created using the AWS CLI command:
aws ec2 create-vpc-peering-connection
For example, to create a VPC peering request using the AWS CLI, use the following command:
aws ec2 create-vpc-peering-connection --vpc-id vpc-12345678 --peer-vpc-id vpc-87654321 --peer-region us-west-2
A VPC peering connection can be accepted using the following AWS CLI command:
aws ec2 accept-vpc-peering-connection
As an example, to accept a VPC peer request, use the following AWS CLI command:
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-12345678
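Beyond creation and acceptance, a peering connection's optional settings can also be managed from the CLI. For example, a sketch of enabling private DNS resolution across the peering (the pcx- ID is a placeholder, and in a cross-account peering each account can only modify its own side's options):

```shell
# Allow instances on each side of the peering to resolve the other
# VPC's private DNS hostnames to private IP addresses:
aws ec2 modify-vpc-peering-connection-options \
    --vpc-peering-connection-id pcx-12345678 \
    --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
    --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
```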
Hub-and-Spoke VPC Architectures
As mentioned before, one common network topology that is used for connecting multiple IP networks together is hub-and-spoke. This topology is composed of a central “hub” network that then has connections with separate spokes. Typically, all traffic is backhauled through these hubs when attempting to talk to another network outside of their own. Given that VPC peering connections are non-transitive, it is not possible to build this topology within AWS using VPC peering connections alone. You have the option to use a concept called a transit VPC to achieve this topology. In addition, you need to evaluate when to use a transit VPC versus a service such as AWS TGW. This section will cover both topics.
Since hub-and-spoke networks cannot be directly achieved with VPC peering connections, an alternative solution is the transit VPC. The concept of a transit VPC is based on the use of IPsec VPN connectivity over the top of the standard VPC connectivity. This can be achieved by deploying network virtual appliances (NVAs) into a central VPC and then configuring a series of IPsec VPN tunnels from these appliances to other VPCs. Additionally, these appliances could build IPsec connectivity to on-premises or other third-party networks. For building connectivity to other Amazon VPCs, you have a couple of options when using a transit VPC:
- Option 1: Build IPsec connectivity from the NVAs in the transit VPC to virtual private gateways (VGWs) residing in the spoke VPCs
- Option 2: Build IPsec connectivity from the NVAs in the transit VPC to additional NVAs within the spoke VPCs
Both options are outlined in Figure 1.20:
Figure 1.20: Transit VPC options
In Figure 1.20, both options can utilize either static or dynamic routing for overlay connectivity via the IPsec tunnels. The dynamic routing option is fully dependent on the capabilities of the vendor used for the NVA; the VGW will only support Border Gateway Protocol (BGP) as a routing protocol.
Consider a scenario where Trailcats had a need for expanded connectivity due to some quota limitation, such as a maximum number of routes in the VPC routing table, or if connecting to a third-party vendor that required a different connectivity solution than that offered by native cloud networking. This is common when working with vendors and a major driver of enterprises adopting NVA-based solutions.
The strength of Option 1 in this situation is that a minimal deployment of NVAs might meet the needs of vendor connectivity while allowing workload VPCs to be connected to the vendor and each other. This option works as long as there is no need to propagate large numbers of routes to the workload VPCs, as VPC route tables have a low maximum number of routes.
The weakness of this option is that a large portion of connectivity becomes manual, namely the VPC route tables and the VPN connections to the VPC-attached VGWs.
Option 2 requires more up-front work to deploy NVAs to workload VPCs, but often these solutions are software-defined, which facilitates route exchange and connectivity between the NVAs.
The weakness of this option is that it requires far more life cycle management of NVA instances and exposes you to potential software problems if the NVA vendor releases software with bugs.
Note
More details on AWS transit VPCs can be found at the following URL: https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/transit-vpc-option.html