On Demand, but When?
- 5/4/2019 · #development
The long, steady march towards managed infrastructure is fantastic. Less time configuring, securing, and scaling hardware means more time for software developers to add real value. It's even better when services are available on demand: we pay only for what we use, up to whatever practically-infinite limit the provider is willing to offer.
But here’s a little puzzle.
Suppose you're running an urban farm and you want to sell twelve eggs each day. You know your chickens are good for no more than one each, meaning you'll need at least twelve happy hens; call it fourteen or fifteen, just to be safe. Given a choice between underprovisioned (and overwhelmed) chickens and paying for a bit more feed than you might need, you'll probably choose the latter.
That's the capacity planning we all know and love: push-based production, the bane of all things lean, demand-driven, and agile.
Now consider the on-demand ("chickenless") alternative. Instead of keeping chickens on standby, you'll wait for orders to come in. When they do, you spin up a chicken and head out to the coop to collect… which is as operationally elegant as it is practically ridiculous. We get the eggs, sure, but when? At the soonest, it's brunch tomorrow. Completely unacceptable, if we wanted an omelet today.
Every system, whether an egg farm or a cloud service, has some associated rise time. This is a measure of the system's responsiveness: the time needed to scale to meet new demand. It might be very low, but it will never, ever be zero. Even though VMs, containers, function-as-a-service platforms, and myriad other on-demand services are fast and getting faster, anyone who's waited on a cold Lambda can confirm that "on-demand" and "immediate" are still rather different things.
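For intuition, here's a toy simulation of what a non-zero rise time does to request latency. Every constant in it is a made-up assumption for illustration, not a measurement of any real platform: if an instance is already warm, a request is fast; if not, it pays the spin-up cost first.

```python
import random
import statistics

# Toy model of an on-demand service with a non-zero "rise time".
# All numbers below are illustrative assumptions, not measurements.
COLD_START_S = 1.5      # time to spin up a new instance (the rise time)
WARM_LATENCY_S = 0.05   # request latency once an instance is warm
IDLE_TIMEOUT_S = 300    # how long a warm instance sticks around unused

def simulate(request_times):
    """Return per-request latencies given request arrival times (seconds)."""
    warm_until = -1.0   # nothing is warm before the first request arrives
    latencies = []
    for t in request_times:
        if t <= warm_until:
            latency = WARM_LATENCY_S                 # warm hit
        else:
            latency = COLD_START_S + WARM_LATENCY_S  # pay the rise time
        latencies.append(latency)
        warm_until = t + latency + IDLE_TIMEOUT_S
    return latencies

# Sparse traffic (one request every ~10 minutes) hits mostly cold starts;
# steady traffic (one every ~10 seconds) almost never does.
for label, mean_gap_s in [("sparse", 600), ("steady", 10)]:
    times, t = [], 0.0
    for _ in range(1000):
        t += random.expovariate(1.0 / mean_gap_s)
        times.append(t)
    lat = simulate(times)
    print(f"{label:>6}: median={statistics.median(lat):.3f}s "
          f"p99={sorted(lat)[int(0.99 * len(lat))]:.3f}s")
```

Under these assumptions, sparse traffic pays the cold-start penalty on nearly every request, while steady traffic keeps an instance warm and the rise time all but disappears. That gap is exactly the difference between "on-demand" and "immediate."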
Should you use on-demand services?
On-demand models have much to recommend them. They can yield big operational savings over traditional, always-on systems while (somewhat paradoxically) increasing availability. Plus, less time wrestling with infrastructure will likely improve developer productivity and happiness.
In the present environment, rise times may not even matter. Service providers in competitive markets have strong incentives to keep getting faster, and the operational benefits of an on-demand service will often outweigh any incremental startup performance hit. But there are applications where even modest overhead is felt, as well as cases (think extending an edge network) where the time to scale an "on-demand" service may be measured in the minutes and hours needed to physically add new devices.
The bottom line: treat "on-demand" with adequate skepticism, then take full advantage. But if you can forecast demand, and if responsiveness really matters, well, maybe there's still a place for some good ol' fashioned planning ahead.