
Making Sense of Serverless blog series

Post #1: What is “Serverless,” Really?

Day 1: Traditional Servers

In the beginning, there was the server. The fundamental unit of compute and storage, servers were the atomic building blocks of applications. The infrastructure strongly shaped how we built applications: applications were monoliths, and the scaling strategy was mostly vertical. If you were running out of capacity, you built a bigger server. Capacity planning was based on peak load, which meant running peak capacity at all times and leaving a lot of resources sitting idle and unused.

Management was very manual. We had to provision resources by ordering servers, racking and stacking them, connecting them to networks, and configuring them. Operating systems and packages were also managed by hand, and there was basically zero cost granularity: you thought about cost in terms of the infrastructure itself rather than how much a specific application or feature cost.

On the Second Day was Virtualization…

Virtualization allowed traditional servers and their associated storage to be partitioned. That brought a lot of value, particularly for capacity planning and utilization: you could pack resources and make use of spare capacity, which you couldn’t really do before. However, it didn’t help much with management. People, processes, and tools had to do a lot more work managing that resource packing, moving VMs between hypervisors, and maintaining operating systems, packages, and resources.

The big benefit from an application perspective was that applications could start thinking in terms of horizontal scaling a little more, because you could modularize your applications to run in smaller VMs that could then be migrated around. Additionally, you could scale up and down by adding VMs behind load balancers. This brought a slight uptick in cost granularity as well, since apps were becoming more modular.

On the Third Day, AWS Said, “Let There Be Cloud!”

The introduction of Amazon S3 and Amazon EC2 ushered in a new way of thinking about architecture and scaling, with fundamental units of compute and storage available via APIs as a service. As a result, applications became much more modular and functioned more like interconnected services. The scaling strategy became significantly more horizontal, and as features like auto-scaling were introduced, you could smooth out scaling and capacity planning and start to map usage to demand. But because the scaling unit was still the server, the mapping was fairly lumpy. And while scaling up and down with servers was progress, it was still slow: you had to wait for servers to boot, and if you didn’t want to wait, you had to tune your auto-scaling triggers to stay ahead of demand.

However, the management burden was greatly reduced: you got rid of your data center, although you still had to manage the OS, packages, resources, and so on, because you still had a server to manage. But APIs meant that we got tools, like Infrastructure as Code tools and configuration management platforms, so we could start treating servers as cattle, not pets.
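To make “compute available via APIs” concrete, here is a minimal sketch using the boto3 SDK for Python. The AMI ID, instance type, and tag values are illustrative placeholders rather than details from this post:

    # Minimal sketch: provisioning compute with an API call instead of a
    # purchase order. Assumes AWS credentials are already configured; the
    # AMI ID and tag values are illustrative placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "application", "Value": "billing-service"},
                {"Key": "environment", "Value": "prod"},
            ],
        }],
    )

    # The instance exists seconds later -- disposable cattle, not a pet.
    print(response["Instances"][0]["InstanceId"])

Note how the tags attach cost and ownership metadata at creation time, which is exactly what enables the resource-level cost reporting described next.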

This was also a significant improvement in cost granularity: we now had resource tagging in the cloud, and apps became services that were more granular and split up, so you had more insight and detail into your environment.

On the Fourth Day Came Containers…

Docker was introduced in 2013 and had a meteoric rise, resonating especially with application developers. Then came container orchestration platforms like Kubernetes in 2015. Containers pushed us to think more and more in terms of microservices: you could break applications up into smaller units. Scaling became highly automated, though still relatively slow. Capacity planning could map usage to demand more closely, and the standard unit shifted from servers to pods.

However, what many people don’t realize is that container management and orchestration platforms are extremely complex. Additionally, you are still managing a lot of the execution environment. Containers aren’t just your code; they also include quite a few dependencies, so you have to do things like rebuild container images to apply security fixes. It’s better than managing a server in many ways, but there’s still a management tax. With containers, cost granularity is slightly better than in a properly architected EC2 environment, but not massively so.

On the Fifth Day was Lambda…

AWS Lambda is a service that allows developers to run very small units of code natively in the cloud. There are no servers, containers, orchestration platforms, or substrate to manage. Just code.

Clearly, the execution environment that AWS runs for us does have servers and containers, but the key thing is that it’s offered as a service. Lambda functions can be written in many popular languages, and they can be triggered manually or automatically based on events, enabling event-driven architectures. Functions can run massively in parallel with no consideration for resource scaling, and Lambda uses a pay-per-execution model: you pay only for the duration of each execution.
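As a concrete illustration, here is a minimal sketch of a Python Lambda handler for an S3 “object created” event; the bucket and the surrounding trigger configuration are assumed for illustration:

    # Minimal sketch of a Python Lambda handler for an S3 object-created
    # notification. The event shape follows the standard S3 notification
    # format; there is no server, OS, or container image to manage here.
    import json

    def lambda_handler(event, context):
        # Each record describes one newly created S3 object.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"New object: s3://{bucket}/{key}")

        return {"statusCode": 200, "body": json.dumps("processed")}

Whether one object lands in the bucket or a million land at once, the same handler runs; the parallelism is Lambda’s problem, not yours.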

AWS Lambda was released in 2014, and with it the term “serverless” was born. Application and infrastructure began to converge, and applications were broken into the smallest units of code. Scaling became mostly automatic and transparent. The emergence of serverless has been the biggest drop in management tax so far in our leaps forward with cloud compute: there is no OS, no servers, no containers, no orchestration engine, and basically zero administration. Cost granularity has subsequently become extremely high; with serverless, you can see specifically which components of your system and application are costing you money.
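To see how fine-grained pay-per-execution billing gets, here is a back-of-the-envelope sketch. The billing model itself (a per-request charge plus GB-seconds of duration) is how Lambda prices compute, but the rates below are illustrative assumptions, so check current AWS pricing for real numbers:

    # Back-of-the-envelope Lambda cost model: a per-request charge plus
    # a charge per GB-second of duration. Rates are illustrative
    # assumptions, not current AWS pricing; the free tier is ignored.
    PRICE_PER_MILLION_REQUESTS = 0.20    # USD, illustrative
    PRICE_PER_GB_SECOND = 0.0000166667   # USD, illustrative

    def monthly_cost(invocations, avg_duration_ms, memory_mb):
        gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
        request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
        duration_cost = gb_seconds * PRICE_PER_GB_SECOND
        return request_cost + duration_cost

    # One function handling 5M requests a month at 120 ms and 256 MB:
    print(f"${monthly_cost(5_000_000, 120, 256):.2f}")  # roughly $3.50

Because every function is metered this way, the bill itself tells you what each component of your application costs.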

Tune in for the second installment of this series next week, where we go into more detail about the benefits and challenges of serverless.

Author Spotlight:

Jonathan LaCour
