Part 1 of 2: Classless Inter-Domain Routing (CIDR) Block & Subnetting
A VPC, in AWS parlance, is a Virtual Private Cloud. Effectively, it is simply a network (or set of networks) that you can set up within your AWS account to better control connectivity between your resources and services. Many AWS services utilize VPCs to some degree, though the networking is sometimes abstracted away from you. It's not necessarily important to know all of the ins and outs of the VPC product to develop your application, but it can be useful to have a reference or basic understanding of the networking concepts involved.
In an AWS VPC, those basic concepts include:
- Classless Inter-Domain Routing (CIDR) Block
- Subnets
- Route Tables
- Internet & NAT Gateways
In this article, we will talk about those first two bullet points. In Article 2, we’ll dig into the remaining two bullet points.
Let’s Sip Some CIDR
As we said in the intro, VPCs allow you to create multiple different networks within your AWS account and place resources inside of those networks. When you create one of those networks, you get to assign an address range to it. That top-level address range is called a CIDR block. It’s beyond the scope of this article to get into the weeds about how that works, but just know it defines the set of IP addresses that you’ll be able to use with your resources. Usually, these ranges are defined with reserved addresses that are specifically designated for private networks.
These ranges are:
- 10.0.0.0/8 (10.0.0.0 – 10.255.255.255)
- 172.16.0.0/12 (172.16.0.0 – 172.31.255.255)
- 192.168.0.0/16 (192.168.0.0 – 192.168.255.255)
You’ve probably seen or worked with one or more of these ranges if you’ve ever set up a wireless router at home or in the office. Let’s take a quick look at the two parts that make up that range.
Home on the Range
Let’s use 192.168.0.0/16 as our example. We can break that down much like we can break down the discography of Guns N’ Roses: before and after the slash.
The number before the slash is what the root of the range will be. In this case, all of our addresses in this network will start with 192.168.x.x.
The number after the slash is a little more complicated, but it basically lets you subdivide that root into smaller networks (we'll get into this more in the subnets section below). It's called a subnet mask, and the number determines which octets (or parts of an octet) are set in stone and which ones are assignable.
Again, this can be a bit of a rabbit hole, but there are some important things to know about them and how they work in a VPC:
- An IP address range (CIDR block) is divided into four 8-bit groups of numbers called octets.
- As the subnet mask number increases, the number of assignable addresses for that block decreases.
- In AWS, the subnet mask for the VPC CIDR must be between /16 and /28.
For example, let’s take the 192.168.0.0/16 range again. For this example, we want to mask the first two octets, which is where we derive that /16 number. So, if we wanted to calculate that slash-number, it looks like this:
192 . 168 . 0 . 0
 ^      ^
 8  +   8         = 192.168.0.0/16
This means that we are left with the final two octets available for use within our new VPC network. Because they’re 8-bit, that gives us 256^2 - or 65,536 - addresses to work with for this individual VPC. This is our most basic building block of the VPC. In this case, all of our addresses will begin with 192.168.x.x.
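If you want to sanity-check that math yourself, Python's standard-library `ipaddress` module makes it easy (no AWS required). This is just a quick illustration of the octet arithmetic above:

```python
import ipaddress

# The /16 mask fixes the first two octets, leaving 2^16 addresses free.
vpc = ipaddress.ip_network("192.168.0.0/16")
print(vpc.num_addresses)  # 65536

# As the subnet mask number increases, the assignable space shrinks.
for prefix in (16, 20, 24, 28):
    net = ipaddress.ip_network(f"192.168.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} addresses")
```

Running this shows the /16 giving 65,536 addresses and a /28 (the smallest AWS allows) giving just 16.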
Note: Unless you really need to, it’s usually just easier to stick with masks divisible by 8 - so, in the case of our VPCs, that would be just /16 and /24 (because AWS restricts our min to 16 and max to 28). You can certainly get fancier here and divide them up however you wish within those ranges, but for the purposes of our applications, there usually isn’t much need to do so. But, here is the wiki article on it in case you are interested.
The Multi-VPCs of Madness
There’s an important aspect of VPC creation that should be considered when planning your network(s):
VPCs cannot talk to each other by default.
I'm stating the obvious here - they're called Virtual Private Clouds, after all - but understanding why they can't communicate with each other, why you might want them to, and what mechanisms exist to enable that possibility is something that your future self will high-five you for, should you ever need it.
- Why can’t they talk?
It all comes down to routing. We have to tell every network how traffic should flow in and out of it, and the way we do that is by defining a routing table (we'll talk more about that in the routing section in Part 2). These routing tables, by default, only route traffic within the defined VPC network and to the internet. Without us telling them, they don't know how to talk to each other.
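To make that concrete, here's a rough mental model of a routing table in plain Python, using the standard `ipaddress` module. The targets ("local", "internet-gateway") are just illustrative labels, not real AWS identifiers:

```python
import ipaddress

# A default routing table: traffic inside the VPC stays "local";
# nothing here points at any other VPC.
routes = {
    ipaddress.ip_network("10.0.0.0/16"): "local",
    ipaddress.ip_network("0.0.0.0/0"): "internet-gateway",  # illustrative label
}

def route_for(ip: str) -> str:
    """Pick the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(route_for("10.0.4.20"))  # "local" -- inside our VPC
print(route_for("10.1.0.5"))   # "internet-gateway" -- NOT the other VPC!
```

Notice the second lookup: an address in a neighboring 10.1.0.0/16 VPC falls through to the catch-all route, because this table has no entry for that network. That's the gap peering fills in.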
- Why might we want them to talk?
For a lot of applications, especially at first, you may not need them to. But, let's say, as time progresses, you want to revamp the cloud architecture of your application without disturbing your existing version. You've set up your new environment and tested it, and you're ready to migrate the data from the old platform. The easiest, fastest way to do that may be to connect the old prod to the new prod and start syncing data.
- What mechanisms exist to do that?
There are a few, but one of the easiest is called VPC Peering. With peering, you just tell the routing tables of each VPC how to get to the other network. These peering connections can even be made with VPCs in separate accounts! The caveat? The CIDR blocks of the VPCs being peered cannot overlap. Meaning that if you create two VPCs, both with a CIDR of 192.168.0.0/16, they will not be able to talk to each other. There are some workarounds, but it's much easier to just get into the habit of creating VPCs with unique CIDRs. Obviously, in the case of cross-account peering connections, you can't always know or create unique CIDRs, but it's good practice to do so when you can.
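You can check two candidate CIDR blocks for overlap before ever creating the VPCs; the standard `ipaddress` module has an `overlaps()` helper for exactly this:

```python
import ipaddress

vpc_a = ipaddress.ip_network("192.168.0.0/16")
vpc_b = ipaddress.ip_network("192.168.0.0/16")
vpc_c = ipaddress.ip_network("10.1.0.0/16")

# Identical blocks overlap completely -- peering A and B would fail.
print(vpc_a.overlaps(vpc_b))  # True
# Distinct blocks don't overlap -- A and C are safe to peer.
print(vpc_a.overlaps(vpc_c))  # False
```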
Ok, great! Now we know we should try to create VPCs with unique CIDRs, so what’s the easiest way to do that?
Remember the 10.0.0.0/8 reserved range? Well, that little dude is going to help us out a ton here. Unless you’re building complex applications with equally or more complex networking requirements, we can keep it simple and just use the features of that range with the constraints of our VPC.
The Swiss Army CIDR
Since the 10.0.0.0 reserved range has a mask of /8 (making it a Class A range), and AWS VPCs require a minimum mask of /16, we're now left with that second octet, all alone in the cold. Like this:
10 . 0 . 0 . 0
     ^            = 10.0.0.0/16
(min /16 as required by AWS)
Turns out, however, we can still use that second octet in our VPCs. So, if we re-imagine our little formula from above, we can adjust it like so if we want to create multiple VPCs:
     ^
10 . 0 . 0 . 0    = 10.0.0.0/16
10 . 1 . 0 . 0    = 10.1.0.0/16
10 . 2 . 0 . 0    = 10.2.0.0/16
Basically, we can get 256 unique base VPC Class B CIDR ranges out of that one reserved range. Now, because they are unique, we can peer them down the road (if needed), without worrying about overlap. That will likely be plenty for the majority of use cases.
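That carve-up is easy to verify with the standard `ipaddress` module:

```python
import ipaddress

# Carve the 10.0.0.0/8 reserved range into /16-sized VPC blocks.
vpc_blocks = list(ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=16))

print(len(vpc_blocks))   # 256
print(vpc_blocks[0])     # 10.0.0.0/16
print(vpc_blocks[1])     # 10.1.0.0/16
print(vpc_blocks[255])   # 10.255.0.0/16
```

One reserved range, 256 non-overlapping VPC-sized blocks, zero chance of a future peering headache.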
The Net 2: Subnets
A lot of what we discussed regarding CIDRs can also be applied to subnets. In fact, a subnet is literally a sub-network: a further division of the network into smaller, segmented networks to better contain and control traffic.
For example, if we use our 10.0.0.0/16 block from above, we can now create whatever class C subnets we want on that range. To keep it simple, we can add a /24 mask to each subnet we’re creating, and now we have 256 addresses (not all usable, but that’s not overly important right now) available for each of them.
So, our network and subnets could look like this:
CIDR Block: 10.0.0.0/16
  Subnet 1: 10.0.0.0/24
  Subnet 2: 10.0.1.0/24
  Subnet 3: 10.0.2.0/24
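Slicing a /16 VPC block into /24 subnets looks like this with the standard `ipaddress` module:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve /24 subnets (256 addresses each) out of the VPC block.
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))  # 256 possible /24 subnets in one /16
for s in subnets[:3]:
    print(s)         # 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24
```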
Why would we want to create multiple subnets like this? In the AWS world, there are multiple reasons:
- Subnets in a VPC can be Public, Private, or Isolated. We want to be intentional about how our resources should access, or be accessed by, the internet. These designations help us to keep traffic isolated where it needs to be and to stay in line with the AWS Well-Architected Framework. It breaks down like this:
- Public: These subnets are intended for internet-facing resources. They have routing tables associated with an internet gateway, which just tells the instance how to route traffic to the internet, and handles NAT for instances with associated public IP addresses. There’s a subnet-level setting to auto-assign public IPs, or you can choose to assign one at instance/resource creation. This subnet is great for things like Load Balancers, NAT Gateways (we’ll talk more about those in a second), and public-facing instances.
Note: For now, just remember that for a resource in a public subnet to access the internet, it needs to have a public IP address associated. You can think of this as almost like a DMZ.
- Private: These subnets are intended for resources that should not be directly accessible from the internet. They don't have an internet gateway associated and should not get public IP addresses. But! They can still potentially access the internet. You can create a NAT gateway, associate it with the routing table for your private subnets, and the resources within them can then reach the internet. You don't need to assign a public IP to your resources or anything; the NAT gateway takes care of the address translation. This subnet is suitable for web servers behind load balancers and other resources that need to make outbound connections or serve the traffic being passed from another internet-facing resource.
- Isolated: These subnets are intended for resources that shouldn't be accessible from the internet, nor able to access it. Essentially, local-only traffic. This works well for RDS databases and other resources that only need to handle internal traffic.
Note: By default, all subnets can talk to each other, regardless of their Public, Private, or Isolated designation.
- Subnets are associated directly with AZs (Availability Zones). When you create a subnet, you assign it to an AZ. So, when you assign a resource to a subnet, you’re putting it directly in an AZ. When creating highly available solutions, ideally, you’ll distribute each layer of your application architecture across multiple AZs. For example, for your web-server tier, you can place instances of your web-server into 2 different AZs. That way, if one AZ has issues, the instances in the other AZs will still be functioning. Theoretically, your application will continue running, assuming every layer is also deployed across multiple AZs.
- Some service configurations require you to have multiple subnets spread across more than one AZ. Multi-AZ RDS instances are an example. Again, this is to help facilitate redundant production workloads. Some services don't use VPCs (or do so behind the scenes), so you may not run across this as much if your application leans more on managed services than on instances. Either way, it's a good idea to have an understanding of what's going on in the background.
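To tie those last two points together, here's a small, hypothetical sketch of distributing /24 subnets round-robin across AZs (the AZ names are just examples; the real list depends on your region and account):

```python
import ipaddress
from itertools import cycle

# Hypothetical AZ names; the actual AZs depend on your region/account.
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:6]

# Round-robin each /24 across AZs so every tier spans multiple zones.
placement = {str(subnet): az for subnet, az in zip(subnets, cycle(azs))}

for subnet, az in placement.items():
    print(f"{subnet} -> {az}")
```

With six subnets over three AZs, you end up with two subnets per zone, which is a common pattern for pairing a public and a private subnet in each AZ.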
In this article, we looked at what AWS VPCs are and how the underlying networks are defined logically. Even though more and more AWS services are abstracting the networking side of things away, it’s a good idea to have a basic understanding of what’s happening here. You’ll inevitably run into services or configurations that are going to require at least some knowledge of how traffic is flowing in and out, and the more you know about how that works, the deeper you’ll be able to dive into those services. Knowing how the networks are broken up is a good starting point for that understanding. In Part 2, we’ll look at how traffic moves, or is routed, internally and externally between those subnets, networks, and the internet.