Friday, 1st of June 2018
What is Serverless?
Serverless isn't going away: more companies are starting to use it, and more providers are growing their serverless offerings. So what exactly is it?
According to Wikipedia: "Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity."
Let's break that down and run through a basic scenario of a traditional architecture compared to a Serverless architecture.
Consider a simple LAMP (Linux, Apache, MySQL & PHP) stack, which should be familiar to most readers.
Above are two scalable EC2 web-app instances, running in separate availability zones, with the DB master in one zone and a standby waiting in another. In front of these sits an elastic load balancer, which routes traffic to the web-app instances. All EC2 instances would be set up identically, with an Apache web server and a PHP engine to execute scripts. When traffic or load on the servers increases, more instances spin up in the relevant availability zones.
The administrator is responsible for the configuration, maintenance and up-keep of the infrastructure.
This includes, but isn't limited to, the initial setup of the Linux server, the hardening of the server, on-going maintenance of software updates and security patches, and the installation and configuration of Apache and PHP plus the relevant extensions.
Correctly configuring Linux and these packages can be strenuous, and is a specialisation in itself. Developers are often tasked with these jobs, not only during the initial setup phase, but also the on-going maintenance.
A developer's core focus should be on writing and shipping code, but often they can't start that task until the team has all the servers humming. Only then can they start deploying code for the world to consume.
With serverless, deploying code becomes the core focus of the developers, and the need to configure, manage, patch and secure servers disappears. The cloud provider takes care of these steps and gives developers a ready-to-use production environment without any pre-configuration.
A serverless application would most likely use a microservice architecture instead of a traditional monolithic design.
An AWS Serverless application would start with a front-end website. A modern single-page application built with a framework like React or Angular could be hosted in an S3 bucket. As the front-end site sits in an S3 bucket, there is no need to worry about scaling, maintaining Apache servers or anything like that; AWS provides it as a fully managed service.
The API Gateway handles REST requests and routes the traffic to Lambda, AWS's function-as-a-service offering. A Lambda function can be written in a number of languages, including Node.js, Python or Java. The Lambda holds a snippet of code (a function) which is spun up when invoked, then shut down upon completion. The function has a small footprint, making the boot time incredibly fast: on average a fraction of a second.
Instead of having a PHP server which runs 24/7 and executes commands as they come in, the Lambda function sits in a powered-off state, consuming no resources, waiting for a request to come in before it springs into action. AWS takes care of all the scaling: if one user requests the function, one instance boots up; if a thousand users request the function, hundreds or thousands of instances boot up, then shut down once they are no longer needed. Aside from scaling, maintenance is also taken care of. AWS manages all security, package upgrades and configuration of the environment; the account owner just needs to deploy code.
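To make the model concrete, here is a minimal sketch of such a function in Python. It assumes the standard API Gateway proxy integration event shape; the greeting logic is purely illustrative.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: invoked on demand, returns a response,
    then the execution environment is frozen or discarded."""
    # API Gateway's proxy integration delivers the HTTP body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "there")

    # The returned dict becomes the HTTP response via API Gateway.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no server to start or stop here: AWS calls `handler` per request and bills only for the time it runs.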
In an end-to-end example, a user comes to a website (served from S3), visits the Contact Us page, fills in a customer enquiry form and hits Submit. The website makes a POST request to a Lambda function. The function boots, writes the request to the database and sends the enquiry via email to the account owner, then shuts down. The front-end website waits for the Lambda response and displays a Thank You message to the user upon completion.
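The contact-form function just described might look roughly like the following Python sketch. The table schema, email plumbing and extra handler parameters are assumptions for illustration; the database and mailer are injectable so the sketch can be exercised without an AWS account (in a real deployment they would be boto3 DynamoDB and SES clients).

```python
import json
import uuid

def handler(event, context, table=None, send_email=None):
    """Sketch of the contact-form Lambda: persist the enquiry, email the
    account owner, return a response. `table` and `send_email` are
    injectable for local testing; in AWS they would be built with boto3,
    e.g. boto3.resource("dynamodb").Table("enquiries") — the table name
    and schema here are illustrative."""
    if table is None or send_email is None:
        raise RuntimeError("no AWS clients configured for this sketch")

    enquiry = json.loads(event.get("body") or "{}")
    item = {
        "id": str(uuid.uuid4()),               # partition key (illustrative)
        "name": enquiry.get("name", ""),
        "email": enquiry.get("email", ""),
        "message": enquiry.get("message", ""),
    }
    table.put_item(Item=item)                  # write the enquiry to DynamoDB
    send_email(                                # notify the account owner
        subject=f"New enquiry from {item['name']}",
        body=item["message"],
    )
    return {"statusCode": 200, "body": json.dumps({"message": "Thank you!"})}
```

The front end simply awaits the 200 response before showing its Thank You message; everything between Submit and that response exists only for the duration of the invocation.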
Circling back to the Wikipedia definition and pulling apart the statement:
Cloud-computing execution model: the Lambdas execute the business logic and the S3 buckets serve the front-end website.
Cloud provider runs the server: Lambda, S3 and DynamoDB are all managed by AWS. The administrator doesn't need to worry about uptime, maintenance, patches or any of the runtime configuration.
Dynamically manages the allocation of machine resources: AWS will scale the S3 bucket to handle a virtually unlimited number of requests (this can be better handled with a CDN, but we'll leave that out of this example), Lambda will continue to scale with usage, and DynamoDB will accept whatever is thrown at it.
Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity: this article hasn't focused too much on pricing. S3 is charged on data transfer in and out, plus storage. Lambda is priced on invocations and execution duration. DynamoDB is similar to S3, charged on read, write and storage usage.
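A back-of-the-envelope calculation shows the shape of per-use Lambda pricing: a per-invocation charge plus a compute charge in GB-seconds. The rates below are illustrative placeholders, not current AWS prices.

```python
# Illustrative rates only — check the provider's pricing page for real numbers.
PRICE_PER_MILLION_INVOCATIONS = 0.20  # USD, illustrative
PRICE_PER_GB_SECOND = 0.0000166667    # USD, illustrative

def lambda_monthly_cost(invocations, avg_duration_s, memory_gb):
    """Cost = invocation charge + compute charge (GB-seconds consumed)."""
    invocation_cost = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return invocation_cost + compute_cost

# e.g. one million requests a month, 200 ms each, 128 MB of memory:
cost = lambda_monthly_cost(1_000_000, 0.2, 0.128)  # roughly $0.63 at these rates
```

The key point is that a function which is never invoked costs nothing in compute, unlike an always-on server.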
Traditional servers are the sole responsibility of the administrator. The cloud provider will provision the machines; after that it's up to the admin to ensure they are kept running. Payment for these servers is relatively flat: users pay for whatever servers are allocated, whether they're busy or idle.
Serverless is fully managed by the cloud provider and the account owner is charged on a per-use basis.
Want to learn more about serverless, and the benefits and cost savings it can bring to your business?
Get in contact today for your free consultation email@example.com