AWS Serverless Architecture - Why does it matter?
- What is serverless?
- Key Tenets of a serverless application model
- Why Does Serverless Matter?
- Benefits of serverless architecture
- Drawbacks of a serverless architecture
- Traditional vs. Serverless Architecture
What is serverless?
A serverless cloud computing execution model is one where the cloud provider dynamically manages the provisioning and allocation of servers. When you build an app, development breaks down into two major parts. The first part covers the general work of keeping the app running. This is what AWS calls the “undifferentiated heavy lifting”: tasks found in nearly every app and largely the same from one to the next, such as setting up and running the servers where you deploy the app or running your CD tools.
While important, these things do not give you an advantage over any other app. What sets your app apart from your competitors is the second part of development, what AWS calls “the secret sauce”. To run a company effectively and focus on your core business activities, you should offload as much of the first part as possible to a cloud provider.
This is where serverless architecture comes in. Your code runs in stateless compute containers that are event-triggered, ephemeral, and completely managed by the cloud provider. You are charged based on the number of executions made rather than on pre-purchased compute capacity. Your developers therefore spend less time on server management and more time delivering value to customers.
Key Tenets of a serverless application model
Server management: Developers do not need to install the runtime or patch the servers; the servers are fully managed by the cloud provider.
Scalability: The microservices in your app scale continually with load, scaling up as they get busier and back down as they get quieter.
Cost optimization: You pay only for what you use, such as memory consumption, CPU usage, or network output, rather than for server units.
Built-in availability: High availability should be a capability you use, rather than a capability you build yourself.
For instance, with AWS Fargate, you deploy a container instead of managing a server. You pay only for the time your container runs, based on CPU and memory consumption, and your apps scale with the resources you use at a particular time. If the machine running your Fargate task goes down, the task is automatically started on a new one.
A serverless application consists of the following:
AWS Lambda functions (FaaS) — the key enablers of a serverless architecture. They read from and write to your database and return JSON responses.
Security Token Service (STS) — generates temporary AWS credentials that users of the client application use to invoke the AWS API (and thus invoke Lambda).
User Authentication — AWS Lambda integrates with Amazon Cognito, an identity service that makes it easy to add sign-up and sign-in to your web or mobile apps. Cognito can authenticate users through social identity providers or your own in-house identity system.
Database — Amazon DynamoDB, while not essential for a serverless application, provides a fully managed NoSQL database.
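To make the Lambda piece concrete, here is a minimal sketch of a handler that returns a JSON response, in the shape API Gateway's proxy integration expects. The `name` query parameter and greeting are hypothetical; a real function would typically also talk to DynamoDB or another backend.

```python
import json

def lambda_handler(event, context):
    # With API Gateway proxy integration, query parameters arrive in the
    # event dict (or None if the request had none).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The cloud provider invokes this function per request; there is no server process for you to start, patch, or scale.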
Why Does Serverless Matter?
Regardless of what programming language you use, if your users are not satisfied with your application, its purpose is defeated. Users care about things like functionality, navigation, and aesthetics, and going serverless lets your team concentrate on exactly those things.
Serverless platforms take away the heavy lifting and allow your developers to concentrate on the user interface and experience while your business focuses on providing excellent service to your users. Business goals can be targeted more easily, and energy can be directed at what matters most to your users.
Benefits of serverless architecture
There are a number of benefits to going Serverless, which include:
- Deploying smaller units results in faster delivery of features to market, increasing your ability to adapt to change.
- Reduction in the cost of hiring backend infrastructure engineers.
- Reduction in operational costs.
- Setup is faster and smoother.
- Operational management becomes much easier.
- Fosters the adoption of nanoservices, microservices, and SOA principles.
- Builds and encourages innovation.
- Cost is determined by the number of function executions, measured in milliseconds instead of hours.
- There is no need for system administration.
- The service scales according to usage.
- Monitoring out of the box.
- You are no longer responsible for the backend infrastructure.
- Gives businesses the opportunity to release new features faster than before.
- Users can more easily provide their own storage backend (e.g. Dropbox, Google Drive).
Drawbacks of a serverless architecture
On the other hand, there are some points that might detract from a serverless solution, namely:
- Increased risk exposure, meaning more trust must be placed in a third-party provider.
- Security risk becomes a major consideration.
- Disaster recovery risk increases.
- Reduced overall control of the server.
- Vendor lock-in, which again requires more trust in a third-party provider.
- Increased architectural complexity.
- Local testing becomes quite tricky.
- Cost is unpredictable, and so can be excessive, because the number of executions is not predefined.
- Immature technology results in component fragmentation and unclear best practices.
- Significant restrictions on local state.
- Duration of execution is capped.
- Discipline is required to avoid function sprawl.
- Multi-tenancy increases the chances of neighbouring functions hogging system resources behind the scenes.
- If not correctly structured, increased request latency can lead to a poor user experience.
Traditional vs. Serverless Architecture
In the past, and even now, applications have run on traditional servers. This meant someone had to monitor the servers continuously in order to fix errors as they arose, as well as patch and update them. That responsibility can be exhausting and can affect not just productivity but also the user experience of the app. With serverless, however, you no longer have to worry about downtime, monitoring, patching, or updating.
This responsibility now falls solely on your cloud provider, and you can focus on your core business activities and second-tier app concerns like user experience, user interface, and support.
While there are still some cases where traditional servers outshine serverless, deciding on which model to use depends on the function and application needs.
Let’s look at each aspect and see which comes out on top.
Pricing
When it comes to pricing, serverless costs less because clients pay only for what they consume. Traditional servers are charged at a unit cost, which can make a huge dent in your pocket. Serverless charges per execution: you pay for the number of executions, and as executions increase so does your cost, and vice versa. This is a pay-as-you-use model, so if executions fall, so does your bill. The price per millisecond also varies with the amount of memory you allocate. Shorter-running functions fit this model best, with a peak execution time of 300 seconds for most cloud vendors.
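The pay-per-execution model above can be sketched as a simple cost function. The rate constants below are illustrative placeholders, not current AWS prices; check your provider's pricing page for real figures.

```python
# Illustrative rates -- placeholders, not actual AWS Lambda prices.
PRICE_PER_REQUEST = 0.0000002      # flat charge per invocation
PRICE_PER_GB_SECOND = 0.0000166667 # charge per GB of memory per second

def monthly_lambda_cost(requests, avg_duration_ms, memory_mb):
    """Estimate serverless cost: pay per invocation plus compute time,
    where compute is billed as memory (GB) x duration (seconds)."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND
```

The key property, in contrast to a fixed server bill, is that zero executions cost zero, and cost grows with both execution count and allocated memory.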
Networking
Serverless functions are accessible only as private APIs, which requires you to set up an API Gateway in order to reach them. While this does not affect your pricing or process, it does mean you cannot access them directly through the usual IP address, unlike traditional servers. In this respect, traditional servers have the edge over serverless.
3rd Party Dependencies
Most projects rely on libraries that are not built into the language or framework you use. Developers often pull in libraries for functionality such as cryptography or image processing, and these external dependencies can be quite heavy. Without system-level access, they have to be packaged into the application itself. For simple applications without many dependencies, serverless is still the way to go; for applications with many heavy dependencies, traditional servers may be the better fit.
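Packaging dependencies into the application typically means bundling the handler and its vendored libraries into one deployment archive. A minimal sketch, assuming the common layout where the handler module and dependency directories sit together at the root of one source folder:

```python
import os
import zipfile

def build_deployment_package(source_dir, out_zip):
    """Zip a handler plus its vendored dependencies for upload.

    Archive paths are made relative to source_dir, so the handler
    module ends up at the archive root, which is the layout Lambda
    expects for zip-based deployments.
    """
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, source_dir))
```

The heavier the vendored dependencies, the larger this archive grows, which is exactly the trade-off the paragraph above describes.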
Environments
With serverless, setting up diverse, multiple environments is just as easy as setting up a single one. You no longer have to provision everything individually and deal with the complexities involved. And since you are charged per execution, extra environments cost almost nothing, a large improvement over traditional servers.
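One common way to keep environments cheap and uniform is to deploy the same function everywhere and vary only its environment variables. The variable names below (`STAGE`, `TABLE_NAME`) and the `orders-` naming scheme are illustrative assumptions, not a fixed AWS convention:

```python
import os

def load_config(env=os.environ):
    """Resolve stage-specific settings from environment variables.

    Each deployed copy of the function (dev, staging, prod, ...) gets
    its own values; the code itself is identical across environments.
    """
    stage = env.get("STAGE", "dev")
    return {
        "stage": stage,
        "table_name": env.get("TABLE_NAME", f"orders-{stage}"),
    }
```

Standing up another environment then amounts to deploying the same artifact with a different `STAGE` value, rather than provisioning new servers.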
Timeout
Serverless computing typically has a 300-second timeout limit, so complex, long-running functions are not a good fit. Serverless is therefore not the best choice for applications with variable execution times or those that must wait on an external source before responding. This is where the traditional server clearly comes out on top.
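AWS Lambda's context object exposes `get_remaining_time_in_millis()`, which a function can use to work within the cap instead of being killed mid-task. A sketch of that pattern; `process_items`, the doubling "work" step, and the safety margin are hypothetical stand-ins:

```python
def process_items(items, context, safety_margin_ms=1000):
    """Process as many items as the time budget allows.

    Returns (processed_results, leftover_items); the leftovers can be
    re-queued for a follow-up invocation before the timeout hits.
    """
    done = []
    for i, item in enumerate(items):
        if context.get_remaining_time_in_millis() < safety_margin_ms:
            return done, items[i:]  # stop early, hand back the rest
        done.append(item * 2)       # stand-in for real per-item work
    return done, []
```

For genuinely long or variable workloads, though, re-queuing only goes so far, which is why the timeout remains a real argument for traditional servers.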
Scaling
While the scaling process for serverless is automated, simple, and all-around convenient, it gives the developer little or no control, which makes preventing and dealing with errors or issues as they occur difficult. Traditional servers allow for this because monitoring, patching, and updating are done manually.
Available Options for AWS Serverless
- AWS Lambda
- Amazon Cognito
- Amazon Kinesis
- Amazon S3
- Amazon DynamoDB
- Amazon SQS
- Amazon API Gateway
- AWS Step Functions
Serverless architecture is an exciting and promising alternative to traditional servers, but given its limitations it is difficult to say whether it is an improvement or just a fancier alternative. Depending on business requirements and the nature of the application, serverless may or may not be the better choice. It is best to take a step back and critically review your solution to see whether it can benefit from going serverless.