
Go Faster with Microservices: From Monolithic to Cloud-Native

Defining Microservices

In order to define microservices, let’s look at the word itself: “Micro” + “Services”. “Micro” means small, and not just in the size of the code itself, but also in the scope of the service and the size of the team working on that service. The second half is “Services”. A “service” refers to an API that provides a defined protocol, defined contracts, and defined Service Level Agreements (SLAs). As a consumer of that service, you can rely on those protocols because you have a contract with the service.

So “microservices” are single-purpose, self-contained services with well-defined protocols and published SLAs, and they're designed to be built and operated by small, autonomous teams.
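
To make that concrete, here is a minimal sketch of what a single-purpose service with a well-defined contract could look like. FastAPI is just one illustrative choice, and the “orders” service, route, and fields are hypothetical rather than taken from any particular system.

```python
# Minimal sketch of a single-purpose "orders" microservice with a
# well-defined contract. FastAPI and the endpoint shown are illustrative
# choices, not prescribed by the article.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service", version="1.0.0")

class Order(BaseModel):
    # The response shape consumers can rely on (part of the contract).
    order_id: str
    status: str
    total_cents: int

# Hypothetical in-memory store standing in for the service's own database.
_ORDERS = {"o-123": Order(order_id="o-123", status="shipped", total_cents=4999)}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    """Return a single order; a 404 for unknown IDs is part of the contract."""
    order = _ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```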

In the first installment of this two-part series, we will cover the benefits and limitations of microservices in today’s cloud computing landscape.

Why switch to microservices?

Microservices aren’t just a trendy way to architect applications. They offer significant benefits:

#1 Microservices accelerate development

In traditional monolithic applications, especially at larger companies, you have huge teams of technologists all working on a massive application together, and that creates a lot of friction and overhead. Conversely, when you're working with microservices-based architectures, you can split your team into smaller autonomous teams. Those teams can work in parallel and accelerate the overall velocity of your organization.

#2 Microservices improve operations

The second major driver is that microservices can improve your operations — by isolating issues down to individual microservices, you can often reduce the time it takes to repair any particular workload.

#3 Microservices enable massive scaling

Microservice architecture also enables massive scaling, and it does that by allowing each service to be independently scaled to meet the demand for the application feature it supports.
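
As a rough sketch of what “independently scaled” can look like on AWS, the snippet below uses boto3’s Application Auto Scaling client to give one hypothetical ECS service (“checkout-service”) its own target-tracking scaling policy. The cluster and service names, capacities, and thresholds are placeholders for illustration.

```python
# Rough sketch: give a single microservice its own auto scaling policy,
# independent of every other service. Names and numbers are placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register just this one ECS service as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/checkout-service",  # hypothetical cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Scale on this service's own CPU utilization, not the whole application's.
autoscaling.put_scaling_policy(
    PolicyName="checkout-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/checkout-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```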

#4 Microservices let you choose the best technology 

Nimble teams leveraging microservices aren’t restricted to a “one size fits all” approach; rather, they have the freedom to choose the best tool for every job.

#5 Improving resilience and security

In a monolithic architecture, if a single component fails, it can cause the entire application to fail. With microservices, an application can handle the failure of an individual service by degrading functionality rather than crashing outright. So by switching to a microservices architecture, you increase your application’s resilience to failure.
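
One common way to degrade functionality instead of crashing is to wrap calls to non-critical services in a fallback. The sketch below is a generic illustration; the recommendations service, URL, and timeout are hypothetical.

```python
# Sketch of graceful degradation: if a non-critical downstream service fails,
# fall back to a default instead of failing the whole request.
# The recommendations service and URL are hypothetical.
import requests

def get_recommendations(user_id: str) -> list[str]:
    try:
        resp = requests.get(
            f"https://recommendations.internal/users/{user_id}",  # placeholder URL
            timeout=0.5,  # fail fast so the page is not held up
        )
        resp.raise_for_status()
        return resp.json()["items"]
    except requests.RequestException:
        # Degraded mode: the page still renders, just without
        # personalized recommendations.
        return []
```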

Why does AWS use microservices?

AWS is one of the best examples of a company that practices what it preaches: it uses microservices internally and has shared eight key drivers for its own internal adoption of microservices:

  1. Pick the right tool for the job
  2. Improve resiliency and security
  3. Lower costs with granular scaling
  4. Optimize team productivity
  5. Create new compositions easily
  6. Experiment and fail safely
  7. Adopt technology faster
  8. Deploy features safely and quickly

You can read more about AWS’ perspectives on the benefits and drivers for using microservices here: https://aws.amazon.com/microservices.

Trade-offs to consider

Adopting the microservices architecture does create some trade-offs that you need to understand and consider. Let’s explore three here.

#1 Higher cognitive load

Monolithic applications pack a lot of complexity into a single system. While they are hard to scale, they are in many ways easy to understand, because you do not have to worry about an application being composed of many different systems.

Conversely, with a microservices architecture, if you really want to understand the overall system, you have to start thinking about the different services and how they relate to one another. It’s a complex ecosystem with many moving parts and a lot of factors to consider, hence the higher cognitive load.

#2 Complexity for debugging

If you have a monolithic application, you don't have to leave the borders of your application to do the debugging. But with a microservices architecture, you now have a distributed system. Distributed systems require very different methods for instrumenting, tracing, and debugging, which means you have to adopt modern tools and instrumentation such as distributed tracing and observability platforms.
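
For example, with a tracing toolkit such as OpenTelemetry (one option among several), each service can annotate its share of a request with spans so the full path can be reconstructed later. The service and span names below are invented, and the console exporter is used purely for illustration.

```python
# Minimal OpenTelemetry sketch: emit spans so a request can be traced
# across service boundaries. Backend setup is reduced to a console
# exporter; span names are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_checkout(cart_id: str) -> None:
    # The outer span represents this service's share of the request.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("cart.id", cart_id)
        # Nested span for a downstream call; in a real system the trace
        # context would be propagated to that service over the network.
        with tracer.start_as_current_span("call_payment_service"):
            pass  # placeholder for the actual remote call

handle_checkout("cart-42")
```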

However, this “trade-off” is, in our opinion, actually a benefit, because to take full advantage of the public cloud, you really need to evolve your tooling and monitoring anyway. While it may be less convenient in the short term, in the longer term it will help make your teams more effective and efficient.

#3 Inter-service Network Latency

This last trade-off is due to a limitation of physics: the speed of light. We can’t overcome the speed of light! In a monolithic application, all your requests are handled and processed within a single system, and often they don't even have to leave the borders of that application.

Conversely, with architectures composed of microservices, a single request coming from a customer may touch dozens of different interconnected systems behind the scenes. When you decompose your monolithic application, you have to understand that you cannot treat remote services like local services. You cannot overcome the speed of light, and you need to treat the network as something that will fail and will underperform at times. This means limiting network-bound calls via batching of requests, caching, and so on, as well as gracefully handling delays via throttling, circuit breakers, buffering, and queues.
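
As a generic illustration of one of those safeguards, the sketch below shows a very basic circuit breaker that stops calling a struggling dependency for a cooldown period rather than piling on more requests. The thresholds and timings are arbitrary.

```python
# Sketch of a very basic circuit breaker: after too many consecutive
# failures, stop calling the remote service for a cooldown period
# instead of adding load to a struggling dependency.
# Thresholds and timings here are arbitrary illustrations.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 5, reset_after_s: float = 30.0):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                # Circuit is open: fail fast instead of waiting on a call
                # we expect to time out or error.
                raise RuntimeError("circuit open: remote service unavailable")
            # Cooldown elapsed ("half-open"): allow one trial call; a single
            # failure re-opens the circuit immediately.
            self.opened_at = None
            self.failures = self.max_failures - 1
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```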

Now that you’ve learned more about what microservices are and their key benefits and trade-offs, be sure to tune in next week for the second installment, where we will cover how to build microservices on AWS.

Author Spotlight:

Jonathan LaCour
