Cloud computing’s pace of innovation seems relentless. Each year at re:Invent, AWS proudly touts how many new services and service features it has released over the past twelve months; each year the number inexorably rises. At the 2015 conference, Amazon announced that it had offered over 700 improvements or new services during the previous year. For my recent analysis of the impact AWS is having on the industry, see What the AWS Numbers Really Mean.
When, at the 2014 conference, Amazon announced its new Lambda service, I immediately recognized it for what it represents: a further reduction in application time-to-market, and a shift upward in responsibility for application operations. Lambda obviates the need for users to create and manage application execution environments. Instead of creating a virtual machine or a container and then loading the application code into it, users hand that code over to AWS, and it takes care of making sure it executes.
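To make that concrete, here is a minimal sketch of what a Lambda-style function looks like in Python. The event fields are hypothetical, but the shape is the point: a plain function handed an event, with no server or container for the author to manage.

```python
# A minimal AWS Lambda-style handler: no server or container to manage,
# just a function that receives an event and returns a result.
def handler(event, context):
    # 'event' carries the trigger payload (e.g., an API request body);
    # the field names below are illustrative, not a fixed schema.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The platform, not the user, decides where and when this function runs; the user's only deployment artifact is the code itself.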
Lambda has seen significant uptake in the market since its official release. Its offering has been matched by all the major cloud providers: Azure Functions, Google Cloud Functions, and IBM’s OpenWhisk (a bit of an odd name, this, but at least IBM broke with the “functions” pattern).
To recognize the progress, and to offer an event at which the “code execution” community could gather and compare notes, A Cloud Guru hosted the first Serverlessconf, a conference devoted to building applications without any need to configure or manage execution environments. It was an amazing conference, one with a palpable “present at the dawn” feeling to it.
To the 250 attendees, one thing was clear: serverless applications are being embraced quickly by users large and small. Because the conference had such a pioneering spirit about it, there was little vendor jousting and plenty of excitement as speaker after speaker shared their experiences working in this new way.
AWS Lambda General Manager Tim Wagner kicked off the conference keynote in a dramatic fashion by swinging a baseball bat to destroy a stack of servers, as you can see from this gif:
Serverless, indeed! During his presentation he acknowledged the widely held suspicion that Lambda operates by dynamically loading containers and running the code inside of them.
To me, one of the most interesting aspects of the serverless approach is the fact that users pay only for execution time — the period during which the application code is actually running. As Wagner put it, serverless means only paying for resources while they’re delivering value, instead of paying for them whether they’re executing critical work or just sitting idle. He put it more vividly by extending the familiar pets vs. cattle trope to saying that serverless allows users to cut to the chase — instead of taking responsibility for a cow, they can just order a hamburger from a fast food chain and pick it up at the drive-thru window!
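The economics Wagner described can be seen in a back-of-envelope comparison. The prices below are illustrative approximations of per-request and per-GB-second billing, not current AWS list prices, and the workload numbers are hypothetical:

```python
# Back-of-envelope comparison: pay-per-execution vs. an always-on instance.
# Rates are illustrative approximations, not current AWS list prices.
def lambda_monthly_cost(invocations, avg_ms, mem_gb,
                        per_request=0.0000002, per_gb_s=0.00001667):
    # Billing has two parts: a tiny charge per invocation, plus a charge
    # for memory-seconds actually consumed while the code runs.
    gb_seconds = invocations * (avg_ms / 1000.0) * mem_gb
    return invocations * per_request + gb_seconds * per_gb_s

def instance_monthly_cost(hourly_rate=0.013, hours=730):
    # A small VM bills for every hour it is up, busy or idle.
    return hourly_rate * hours

# One million 200 ms invocations at 512 MB in a month:
print(round(lambda_monthly_cost(1_000_000, 200, 0.5), 2))  # → 1.87
print(round(instance_monthly_cost(), 2))                   # → 9.49
```

Even with a modest instance rate, the always-on machine costs more than the pay-per-execution model for a sporadic workload, because it bills for idle time.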
This approach can reduce user charges significantly (more on that shortly), which raises a question: how can Amazon make money on a service that is much less costly than an instance- or container-based application? Wagner said that it came down to “bin packing” — AWS cramming as many Lambda functions onto a single server as it can, and deftly installing and removing container instances to ensure they’re only on the machine as long as they are needed to run application code. To accomplish this, Lambda obviously must be a highly dynamic service, with smart tooling that constantly shuttles code packages on and off servers.
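The "bin packing" Wagner mentioned is a classic allocation problem. Here is a toy first-fit packer, a sketch of the kind of placement decision a service like Lambda must make at vastly larger scale and with far more dimensions (CPU, network, lifetime) than memory alone:

```python
# Toy first-fit bin packing: squeeze many small function sandboxes onto
# as few servers as possible. A stand-in for Lambda's real scheduler,
# which must also juggle CPU, churn, and isolation.
def first_fit(function_mem_mb, server_capacity_mb=8192):
    servers = []  # each server tracked by its remaining free memory
    for need in function_mem_mb:
        for i, free in enumerate(servers):
            if free >= need:
                servers[i] = free - need  # place on the first server that fits
                break
        else:
            servers.append(server_capacity_mb - need)  # "spin up" a new server
    return len(servers)

# Eight 3 GB sandboxes packed onto 8 GB servers, two per machine:
print(first_fit([3072] * 8))  # → 4
```

The denser the packing, and the faster sandboxes are installed and evicted, the more revenue AWS can extract from each physical machine.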
Other cloud providers got their chance to crow as well. IBM discussed the tooling it’s built around OpenWhisk, with a GUI design and operation environment to simplify application management (I will discuss this topic in more detail in my next blog). Google’s presentation focused on its Firebase database, which is a really interesting technology. It supports push notifications to client devices, which means client-side applications don’t need to poll for changes. Instead, the client can register to be notified when data operations on a specific Firebase resource occur; this makes it possible for a client to react in real-time when the underlying database changes due to data insertion or modification.
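The push model Firebase uses can be illustrated with a simple observer pattern. This is a toy in-process stand-in for what Firebase does over the network, not its actual API: clients register a callback once and are notified on every write, instead of repeatedly polling for changes.

```python
# Toy sketch of push-style change notification (not the Firebase API):
# a client registers a callback and is invoked on every write.
class PushStore:
    def __init__(self):
        self._data = {}
        self._listeners = []

    def on_change(self, callback):
        # Client registers interest once; no polling loop needed.
        self._listeners.append(callback)

    def set(self, key, value):
        self._data[key] = value
        for cb in self._listeners:  # push the change out immediately
            cb(key, value)

store = PushStore()
events = []
store.on_change(lambda k, v: events.append((k, v)))
store.set("price", 42)
print(events)  # → [('price', 42)]
```

The client-side win is latency and simplicity: the reaction happens at write time, not on the next poll interval.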
I want to focus on a couple of end user presentations, because, to my mind, they demonstrated why serverless applications hold so much potential.
A Cloud Guru gave a presentation on its application and architecture. ACG is a year-old company that provides online cloud training (thus the name). While quite small (just a handful of employees), it has delivered training to tens of thousands of customers on its learning platform. That platform is built entirely on a serverless architecture, wiring together a rich client, external services (e.g., Auth0 for identity management), and various AWS services, all connected with Lambda functions.
Another user, this one at the other end of the age and size spectrum, also presented on its experience with serverless application design. Nordstrom is aggressively moving to public cloud computing, and one group decided to explore how it could leverage Lambda. Like ACG, it used a similar mix of services to build its application; in Nordstrom’s case it created a system to capture tag data, a constant flow of information packets that need to be processed in real-time. Overall, Nordstrom is positive about the value it receives from a serverless architecture, although the two presenters from Nordstrom did note some shortcomings with the tooling available for Lambda.
The most striking aspect of both presentations was the cost of running the application. Both sets of speakers noted that they had yet to break out of the free tier of Lambda usage (users get a monthly allotment of Lambda invocations and compute time at no charge). In other words, they were able to run real-world, revenue-generating applications with no cost for compute, instead of paying for persistent virtual machines or containers, which must run constantly even if processing is only performed sporadically.
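How a real application stays inside the free tier comes down to simple arithmetic. The allowances below (one million requests and 400,000 GB-seconds per month) reflect Lambda's published free tier at the time; the workload numbers are hypothetical:

```python
# Does a workload fit inside Lambda's monthly free tier?
# Allowances: 1M requests and 400,000 GB-seconds per month.
def fits_free_tier(invocations, avg_ms, mem_gb,
                   free_requests=1_000_000, free_gb_s=400_000):
    # Compute-time usage is memory size times actual execution seconds.
    gb_seconds = invocations * (avg_ms / 1000.0) * mem_gb
    return invocations <= free_requests and gb_seconds <= free_gb_s

# Half a million 300 ms invocations a month at 512 MB: well under the cap.
print(fits_free_tier(500_000, 300, 0.5))  # → True
```

A sporadically invoked application can serve substantial traffic and still land under both caps, which is exactly the situation the two presenters described.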
Of course, not everyone is totally sold on the serverless concept. A number of tweets in the conference Twitter stream expressed skepticism:
I am continually amazed at the number of people in the tech industry who, despite its staggering rate of innovation, snidely dismiss anything new as, at best, no significant improvement, and, at worst, just some long-established tech functionality gussied up and proclaimed as brand-new. I still hear people discuss cloud computing as “just like timesharing.” Even if the concepts are similar, this attitude overlooks the fact that these new technologies operate at vastly different scales and cost structures, effectively making them quite different from anything that came before.
More thoughtful, and certainly more temperately stated, was Aneel Lakhani’s year-old (but retweeted during the conference itself) discussion of how Lambda (and, it must be said, all the rest of the serverless function-executing cloud services) lock users into the service. Once your application is designed to a particular cloud provider’s serverless functions, a rewrite of the application is necessary to migrate it somewhere else. This is different from using a common execution format like Docker, which putatively gives you application portability.
I have heard the lock-in issue raised constantly throughout the history of cloud computing, and the serverless lock-in argument is one more example of that. I agree that embracing many of the services of any given cloud provider dramatically increases your commitment to it, but I disagree with the estimate of how problematic that commitment is. In my experience, users embrace commitment/lock-in when it offers significantly better price-performance than any alternative.
Of course, it’s possible that a given vendor may exploit that commitment downstream; however, I haven’t really seen that with AWS or the other large public providers. One could argue that serverless functionality itself is a counterexample to the fear of lock-in. After all, if AWS were really interested in exploiting customer commitment, it never would have released Lambda; instead, it would have pushed its virtualization-based EC2 service, which, from the examples offered by ACG and Nordstrom, is at least 900% more costly than the Lambda-based alternative.
In any case, what’s the alternative? It’s not as though staying on-premises magically unshackles users from vendor lock-in. Without pointing fingers at any given vendor, one could argue that lock-in is far worse in these legacy environments. And don’t believe that using containers somehow makes lock-in impossible. Migrating any complex application is difficult, and the code execution shift may actually be the easiest part of the job.
That’s not to say that serverless application architectures are a panacea. Any new design pattern brings its own issues, typically different than the previous platform. Nordstrom was quite explicit in identifying areas it found shortcomings in Lambda.
However, I believe serverless application design will be a powerful trend in the future, and will grow from its tiny presence today to a significant portion of the total industry application fleet. Any new platform that provides the kind of dramatic economic advantage that Lambda and its serverless brethren do is bound to generate user adoption. I would go so far as to say that, ultimately, serverless environments may be the way that microservices are implemented by most organizations. Freeing themselves from needing to manage execution environments means they can focus on application functionality, where all the end user value lies — everything else is just low-value plumbing.
The response to the conference was enthusiastic. So enthusiastic, in fact, that ACG announced at the event that there would be another one in London later in the year. It will be interesting to see what transpires between now and then.
In my next post, I’ll discuss the future of serverless computing and some of the challenges it will need to address to fulfill its destiny.