
The business case for serverless [TechCrunch]


While serverless is typically championed as a way to reduce costs and scale massively on demand, there is one extraordinarily compelling reason above all others to adopt a serverless-first approach: it is the best way to achieve maximum development velocity over time. It is not easy to implement correctly and is certainly not a cure-all, but, done right, it paves an extraordinary path to maximizing development velocity, and it is because of this that serverless is the most under-hyped, under-discussed tech movement amongst founders and investors today.

The case for serverless starts with a simple premise: if the fastest startup in a given market is going to win, then the most important thing is to maintain or increase development velocity over time. This may sound obvious, but very, very few startups state maintaining or increasing development velocity as an explicit goal.

“Development velocity,” to be specific, means the speed at which you can deliver an additional unit of value to a customer. Of course, an additional unit of customer value can be delivered either by shipping more value to existing customers, or by shipping existing value—that is, existing features—to new customers.

For many tech startups, particularly in the B2B space, both of these are gated by development throughput (the former for obvious reasons, and the latter because new customer onboarding is often limited by onboarding automation that must be built by engineers).

What does serverless mean, exactly? It’s a bit of a misnomer. Just as cloud computing didn’t mean that data centers disappeared into the ether — it meant that those data centers were being run by someone else, and servers could be provisioned on-demand and paid for by the hour — serverless doesn’t mean that there aren’t any servers.

There always have to be servers somewhere. Broadly, serverless means that you aren’t responsible for all of the configuration and management of those servers. A good definition of serverless is pay-per-use computing where uptime is out of the developer’s control. With zero usage, there is zero cost. And if the service goes down, you are not responsible for getting it back up. AWS started the serverless movement in 2014 with a “serverless compute” platform called AWS Lambda.
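To make that concrete, below is a minimal sketch (in Python) of what code written for a serverless compute platform like Lambda can look like. The handler name and event fields are illustrative rather than taken from any real application; the point is that the developer supplies only a function, while provisioning, scaling, and uptime are the provider’s concern.

import json

def handler(event, context):
    # Lambda invokes a configured handler with an event payload and a
    # context object; there is no server setup or process management here.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }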

Whereas a ‘normal’ cloud server like AWS’s EC2 offering had to be provisioned in advance and was billed by the hour regardless of whether or not it was used, AWS Lambda was provisioned instantly, on demand, and was billed only per request. Lambda is astonishingly cheap: $0.0000002 per request plus $0.00001667 per gigabyte-second of compute. And while users have to increase their server size if they hit a capacity constraint on EC2, Lambda will scale more or less infinitely to accommodate load — without any manual intervention. And, if an EC2 instance goes down, the developer is responsible for diagnosing the problem and getting it back online, whereas if a Lambda dies another Lambda can just take its place.
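Those prices make a back-of-the-envelope estimate easy. The snippet below plugs an assumed workload (the request volume, memory size, and duration are illustrative assumptions, not figures from the article) into the per-request and per-gigabyte-second rates quoted above.

PRICE_PER_REQUEST = 0.0000002      # dollars per invocation
PRICE_PER_GB_SECOND = 0.00001667   # dollars per GB-second of compute

requests_per_month = 10_000_000    # assumed traffic
memory_gb = 0.128                  # assumed 128 MB function
avg_duration_s = 0.2               # assumed 200 ms per invocation

request_cost = requests_per_month * PRICE_PER_REQUEST
compute_cost = requests_per_month * memory_gb * avg_duration_s * PRICE_PER_GB_SECOND

print(f"Requests: ${request_cost:.2f}/month")                  # $2.00
print(f"Compute:  ${compute_cost:.2f}/month")                  # about $4.27
print(f"Total:    ${request_cost + compute_cost:.2f}/month")   # under $7

On these assumptions, ten million invocations a month come to under seven dollars.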

Although Lambda—and equivalent services like Azure Functions or Google Cloud Functions—is incredibly attractive from a cost and capacity standpoint, the truth is that saving money and preparing for scale are very poor reasons for a startup to adopt a given technology. Few startups fail as a result of spending too much money on servers or from failing to scale to meet customer demand — in fact, optimizing for either of these things is a form of premature scaling, and premature scaling on one or many dimensions (hiring, marketing, sales, product features, and even hierarchy/titles) is the primary cause of death for the vast majority of startups. In other words, prematurely optimizing for cost, scale, or uptime is an anti-pattern.

When people talk about a serverless approach, they don’t just mean taking the code that runs on servers and chopping it up into Lambda functions in order to achieve lower costs and easier scaling. A proper serverless architecture is a radically different way to build a modern software application — a method that has been termed a serverless, service-full approach.

It starts with the aggressive adoption of off-the-shelf platforms—that is, managed services—such as AWS Cognito or Auth0 (user authentication—sign up and sign in—as-a-service), AWS Step Functions or Azure Logic Apps (workflow-orchestration-as-a-service), AWS AppSync (GraphQL backend-as-a-service), or even more familiar services like Stripe.

Whereas Lambda-like offerings provide functions as a service, managed services provide functionality as a service. The distinction, in other words, is that you write and maintain the code (e.g., the functions) for serverless compute, whereas the provider writes and maintains the code for managed services. With managed services, the platform provides the functionality and manages the operational complexity behind it.

By adopting managed services, the vast majority of an application’s “commodity” functionality—authentication, file storage, API gateway, and more—is handled by the cloud provider’s various off-the-shelf platforms, which are stitched together with a thin layer of your own ‘glue’ code. The glue code — along with the remaining business logic that makes your application unique — runs on ultra-cheap, infinitely-scalable Lambda (or equivalent) infrastructure, thereby eliminating the need to provision or manage servers altogether. Small engineering teams like ours are using it to build incredibly powerful, easily-maintainable applications in an architecture that yields an unprecedented, sustainable development velocity as the application gets more complex.
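As a hedged illustration of what that thin layer of glue code can look like, the sketch below is a Lambda function wired up as an Amazon Cognito post-confirmation trigger that records each newly confirmed user in a DynamoDB table. The table name and stored attributes are assumptions made for the example, not part of any real system.

import boto3

dynamodb = boto3.resource("dynamodb")
users_table = dynamodb.Table("users")  # hypothetical table name

def handler(event, context):
    # Cognito passes the confirmed user's attributes in the trigger event.
    attributes = event["request"]["userAttributes"]
    users_table.put_item(
        Item={
            "user_id": attributes["sub"],
            "email": attributes.get("email", ""),
            "username": event["userName"],
        }
    )
    # Cognito triggers expect the event to be returned to continue the flow.
    return event

Everything else in that flow, including password storage, token issuance, and the user directory itself, stays on the provider’s side.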

There is a trade-off to adopting the serverless, service-full philosophy. Building a radically serverless application requires taking an enormous hit to short-term development velocity, since it is often much, much quicker to build a “service” than it is to use one of AWS’s off-the-shelf services. When developers are considering a service like Stripe, “build vs buy” isn’t even a question—it is unequivocally faster to use Stripe’s payment service than it is to build a payment service yourself. More accurately, it is faster to understand Stripe’s model for payments than it is to understand and build a proprietary model for payments—a testament both to the complexity of the payment space and to the intuitive service that Stripe has developed.
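The “buy” side of that comparison is striking precisely because it is so small. The sketch below charges a customer through Stripe’s Python library; the API key and amount are placeholders, and error handling is omitted for brevity.

import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

# One call creates a $20.00 PaymentIntent; card networks, retries, and
# compliance are Stripe's problem, not the startup's.
intent = stripe.PaymentIntent.create(
    amount=2000,          # amount in cents
    currency="usd",
    payment_method_types=["card"],
)
print(intent.id, intent.status)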

But for developers dealing with something like authentication (Cognito or Auth0) or workflow orchestration (AWS Step Functions or Azure Logic Apps), it is generally slower to understand and implement the provider’s model for a service than it is to implement the functionality within the application’s codebase (either by writing it from scratch or by using an open source library). By choosing to use a managed service, developers are deliberately choosing to go slower in the short term—a tough pill for a startup to swallow. Many, understandably, choose to go fast now and roll their own.
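For concreteness, here is a hedged sketch of the slower path for authentication: signing a user up against an Amazon Cognito user pool through boto3, rather than hashing and storing passwords in the application’s own codebase. The client ID and credentials are placeholders, and most of the short-term cost lies in learning the surrounding model of user pools, app clients, and attribute schemas rather than in these few lines.

import boto3

cognito = boto3.client("cognito-idp")

response = cognito.sign_up(
    ClientId="YOUR_USER_POOL_CLIENT_ID",  # placeholder app client ID
    Username="jane@example.com",
    Password="CorrectHorseBatteryStaple1!",
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)
print(response["UserSub"])  # Cognito's identifier for the new user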

The problem with rolling your own comes back to an old axiom in software development: “code isn’t an asset—code is debt.” Code requires an entry on both sides of the accounting equation. It is an asset that enables companies to deliver value to the customer, but it also requires maintenance that has to be accounted for and distributed over time. All things equal, startups want the smallest codebase possible (provided, of course, that developers aren’t taking this too far and writing clever but unreadable code). Less code means less surface area to maintain, and also means less surface area for new engineers to grasp during ramp-up.

Herein lies the magic of using managed services. Startups get the beneficial use of the provider’s code as an asset without holding that code debt on their “technical balance sheet.” Instead, the code sits on the provider’s balance sheet, and the provider’s engineers are tasked with maintaining, improving, and documenting that code. In other words, startups get code that is self-maintaining, self-improving, and self-documenting—the equivalent of hiring a first-rate engineering team dedicated to a non-core part of the codebase—for free. Or, more accurately, at a predictable per-use cost.

Contrast a home-grown, non-core service, which only improves when a startup’s own engineers can spare the time to improve it, with a managed service like Cognito or Auth0. On day one, perhaps the managed service doesn’t have all of the features on a startup’s wish list. The difference is that the provider has a team of engineers and product managers whose sole task is to ship improvements to this service day in and day out. Their exciting core product is another company’s would-be redheaded stepchild.

If there is a single unifying principle amongst a startup’s engineering team, it should be to write as little code—and be responsible for as few non-core services—as humanly possible. By adopting this philosophy, a startup can build a platform that can process billions of transactions at an extremely predictable, purely-variable cost with nearly zero devops oversight.

Being this lazy takes a surprising amount of discipline. Getting good at managing a serverless codebase and serverless infrastructure is nontrivial. It means building extensive practices around testing and automation, which means an even larger upfront time investment. Integrating with a managed service can be unbelievably painful, with days spent trying to understand all of the gaps, gotchas, and edge cases. The temptation to implement a proprietary solution can be incredible, especially when it means a story can be done in a matter of minutes or hours instead of days or longer.
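As one small example of that testing discipline, serverless business logic is just functions, so it can be exercised locally with stubbed events before it ever touches cloud infrastructure. The handler and event below are illustrative.

import json

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}"})}

def test_handler_greets_named_user():
    # Invoke the handler directly with a stubbed event and no context.
    result = handler({"name": "Ada"}, context=None)
    assert result["statusCode"] == 200
    assert json.loads(result["body"]) == {"message": "Hello, Ada"}

if __name__ == "__main__":
    test_handler_greets_named_user()
    print("ok")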

The discipline also means writing wonky workarounds when a service only accommodates 80% of a developer’s needs. And as the missing 20% of functionality is released, it means refactoring code to remove the workaround, even when it is working just fine and there is no near-term benefit to changing it.

The substantial early time investment means that a serverless/managed-service-first approach is not right for every startup. The most important question to ask is, over what time scale do we need to be fast? If the answer is days or weeks, as is the case for many very early-stage startups, it is probably not the right approach.

But if the timescale for velocity optimization has shifted from days or weeks to months or years, it is worth taking a close look at going serverless.

Recruiting great engineers is extraordinarily hard—and only getting harder. It is a tremendous competitive advantage to task those engineers with building differentiated business functionality while your competitors build services that do commoditized, undifferentiated heavy lifting, and then remain stuck with the maintenance of those services for years to come. Of course, there are certain cases where serverless just doesn’t make sense, but those are disappearing at a rapid rate (for example, Lambda’s 5-minute timeout was recently tripled to 15 minutes)—and reasons such as lock-in or latency are generally nonsense or a thing of the past.

Ultimately, the job of a software startup—and therefore the job of the founder—is to deliver customer value above and beyond the capability of the competition. That job comes down to maximizing development velocity, which, in turn, comes down to mitigating complexity wherever possible. It may be that every codebase, and therefore every startup, is destined to become “a big ball of mud”—the term coined in a 1997 paper to describe the “haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle” that every software project seems eventually destined to become.

One day, complexity will grow past a breaking point and development velocity will begin to decline irreversibly, and so the ultimate job of the founder is to push that day off as long as humanly possible. The best way to do that is to keep your ball of mud to the minimum possible size — serverless is the most powerful tool ever developed to do exactly that.


Source: TechCrunch https://techcrunch.com/2018/12/15/the-business-case-for-serverless/