Once upon a time, containers came on the scene and completely changed the compute game. Fast forward to 2018, and serverless computing is the new kid on the block.

Haven’t been brought up to speed yet on the new hotness that is containers and serverless computing? No worries. We’re here to lay it all out for you.

The Dwindling Operating System

In medieval times, people believed that the entire universe revolved around the Earth. Similarly, up until recently, it was believed that the entire universe of enterprise IT hardware and software revolved around the operating system.

Both concepts have now been proven to be false. In astronomy circles, it was Copernicus who came along to correct the narrative. And in terms of the IT world, containers are the new Copernicus.

This story begins with hardware virtualization and the packaging of guest operating systems as part of virtual machines, all deployed together under the control of a hypervisor. One problem with this system, though, is that full VMs are resource-hungry, gobbling up RAM and CPU cycles and limiting the number of apps that can be run on a single server.

It was inevitable that people would start asking why operating systems couldn’t also be virtualized, using only the parts necessary for any particular application or microservice to run. The answer to this query? Containerization, my friends.

Containers Are All the Rage with CIOs

Containers, epitomized by Docker, have provided a pretty neat solution to the VM issues that IT leaders face. Rather than an application depending on a full host operating system for its runtime environment, libraries, dependencies, configuration files and binaries, containers allow these components to be packaged up as an image along with the application itself.

In simplest terms, a container is a runtime instance of an image. And since containers share the kernel of the host operating system rather than each bundling a guest OS, more applications can be run on each server.

Now, instead of the numerous gigabytes taken up by a VM with its guest OS, a container measures only a few dozen megabytes. Containers are quickly provisioned and can be destroyed once they are no longer needed. They can be easily ported to different environments, and updating is quicker and easier, too: the configuration file is simply amended, a new container created and the old one destroyed. Too easy!
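To make the image-versus-container distinction concrete, here’s a minimal sketch using the Docker SDK for Python (the `docker` package; the `./myapp` build context and image tag are hypothetical, not from this article) that builds an image, runs it as a container and tears it down:

```python
# A minimal sketch using the Docker SDK for Python ("docker" package).
# The ./myapp build context and the "myapp:1.0" tag are hypothetical.
import docker

client = docker.from_env()

# Build an image that bundles the app with its libraries, dependencies,
# configuration files and binaries.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# A container is a runtime instance of that image -- provisioned in seconds.
container = client.containers.run("myapp:1.0", detach=True)

# To update, build a new image, start a fresh container and destroy the old one.
container.stop()
container.remove()
```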

For developers with the skills needed to deploy, manage and monitor containers, the lifecycle of an application or microservice can be greatly shortened. Apps can be deployed much more quickly, then replicated, moved and backed up. Testing is easier as well, because a containerized app behaves the same whether it runs locally, on a test server or in production.

Unsurprisingly, containers have been a big hit with CIOs. In fact, according to Enterprise Technology Research (ETR), containers have prompted the strongest buying intent of any technology.

Containers: Not All Roses and Butterflies

So far so good on the concept of containers, right? Not so fast, kids. Containers come with their own set of disadvantages stemming from the way they are designed. For example, since containers share the host kernel and typically run with root access, they are more vulnerable to any security event that affects the kernel. In contrast, VMs are isolated from one another and each has its own kernel, so it’s highly unlikely that a security issue affecting one application will impact other applications.

Another issue with containers is that they don’t bring their own operating systems, so they are restricted to the native OS of the host. If an application requires a different OS, it will have to be deployed on a separate server running that particular OS. This is more of a problem for businesses running their own servers, as it may necessitate more hardware to deploy applications and microservices on different operating systems.

The two issues above aren’t going to be deal breakers for most IT decision makers, but there is also a third issue that can be easily overlooked by an overenthusiastic CEO or even CIO: one that can heavily impact the bottom line. Containers are pretty complex to package up securely, and the average DevOps team is going to struggle to manage them. That means you have to either recruit more talented programmers or invest in extra training for your current ones.

There seems little point in saving money on VM instances if all of that potential profit is absorbed by extra wage and training costs. This has forced businesses to take a good, hard look at the next level of abstraction: serverless technology.

The Serverless Revolution

Event-driven applications and serverless computing effectively sidestep the need for that specialized container expertise, and they offer additional time- and cost-saving benefits to businesses as well.

The best-known example of a serverless computing service is AWS Lambda, although Microsoft (Azure Functions) and Google (Cloud Functions) have also joined the party. Anybody who has used these services will know that there is no server infrastructure at all to set up, maintain or monitor. This allows developers to focus their attention on the application logic, defining events and desired outcomes while letting Amazon, Microsoft or Google handle the server side of things.
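As a rough illustration (a hypothetical function, not any particular production workload), a Lambda handler in Python is nothing but application logic; AWS invokes it in response to an event, and no server code appears anywhere:

```python
# A minimal, hypothetical AWS Lambda handler in Python (illustrative only).
import json

def lambda_handler(event, context):
    # "event" carries the trigger's payload (an API Gateway request, an S3
    # notification, a queue message, etc.). The platform invokes this function
    # on demand, so there is no server to set up, maintain or monitor.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```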

Another huge factor that will surely play a part in serverless adoption is the draining effect of VM sprawl – a much bigger issue than many CEOs and CFOs realize. With businesses estimated to be using an average of just 20% of the resources they’ve allocated, there is considerable wastage. This can be tightened up significantly by deploying event-driven apps using serverless technology.

Instead of VMs lying around waiting for applications to use them, serverless resources are automatically scaled up and down in response to end user activity. This is a much more efficient way of utilizing cloud resources and is particularly suitable for applications with unpredictable and fluctuating usage.

Serverless computing is already having a huge impact on productivity at some companies, enabling them to benefit from free-tier public cloud allowances even when deploying full-scale public apps. At AWS re:Invent 2017, Expedia’s CEO Mark Okerstrom discussed how Expedia was leveraging over 300 Lambda serverless functions to process over 40 million invocations per day. Pretty mind-blowing, to say the least!

Even Serverless Computing Has (a Couple of) Downsides

In theory, serverless computing is a no-brainer for developers looking to spend their time developing apps rather than setting up container environments, turning containers into services, creating swarms and performing all the other time-consuming tasks needed to deploy containerized apps.

However, serverless computing is not yet a perfect solution. For starters, it involves handing over control to (and placing trust in) the public cloud provider you are using, as well as customizing your applications to that provider’s particular cloud functions implementation. This doesn’t sit well with some business owners who prefer more transparency and control over how their apps are deployed and who are wary of vendor lock-in.

In addition, serverless computing services aren’t suitable for all types of applications due to restrictions in the environment provided. For example, AWS Lambda will provide you with 128 to 1536 MB of RAM (and a proportional amount of CPU); 0.1 to 300 seconds of execution time; 512 MB of writable disk space; and 250 MB of deployed code with dependencies (50 MB if zipped). There is also no root access, which can disqualify some applications from the service or force developers into extensive workarounds.
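To give a feel for working inside those limits, here’s a hedged sketch (the workload is hypothetical) that confines scratch files to the writable /tmp area and checks the remaining execution time so the function can stop gracefully before hitting the timeout:

```python
# A hypothetical sketch of coding within Lambda's limits: /tmp is the only
# writable path (512 MB), and long-running work should watch the clock.
import os

def lambda_handler(event, context):
    scratch = "/tmp/workfile"  # the only writable location in the sandbox
    with open(scratch, "w") as f:
        f.write("intermediate data")

    processed = 0
    for item in event.get("items", []):
        # get_remaining_time_in_millis() is provided on the Lambda context
        # object; bail out before the execution-time cap kills the invocation.
        if context.get_remaining_time_in_millis() < 5000:
            break
        processed += 1  # placeholder for real per-item work

    os.remove(scratch)
    return {"processed": processed}
```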

How Shamrock Can Help You Containerize, Go Serverless and Save Some Bucks

Cloud and virtualization technology is constantly evolving, and it can be difficult to determine which route to follow when every vendor you meet claims that their product is the most cost-effective (read: only) solution. False!

The experts at Shamrock Consulting Group are battle tested across hundreds of cloud projects, so there’s nothing we haven’t seen or found a solution for.

We’re also a vendor-neutral consultancy, meaning our fiduciary duty is to you, not to the cloud providers. When you engage us to help you figure out whether the container or serverless computing route is best for your business, you can be 100% certain that our analysis and feedback will be based on our extensive industry experience and objective knowledge of the products available. Let Shamrock do it for you.

Ben Ferguson

Ben Ferguson is the Senior Network Architect and Vice President of Shamrock Consulting Group, the leader in technical procurement for telecommunications, data communications, data center and cloud services. Since his departure from biochemical research in 2004, he has built core competencies around enterprise-wide area network architecture, high-density data center deployments, public and private cloud deployments, and Voice over IP telephony. Ben has designed hundreds of wide area networks for some of the largest companies in the world. When he takes the occasional break from designing networks, he enjoys surfing, golf, working out, trying new restaurants and spending time with his wife Linsey and his dog, Hamilton.