As with every experiment a developer makes, deployment, even if it is just for testing, is hard. The disciplined flow of taking a single feature, building it to an MUP, and deploying it to figure things out is a tall ask. I would not say this is because most development cycles are haphazard; as I have observed, a good chunk of the difficulty comes from trying to get the best possible version out.
Be that as it may, I got myself to deploy a small executable version of one of our old pipelines today. I do have gripes with how it currently is (regex is painful). But that aside, the other point of concern, once things were running, was improving on what's already on the plate.
While at it, I revisited some fiddling I had done a while back with alternatives to the DigitalOcean droplet. One of the interesting pointers that often gets lost at the lower, cost-focused end of the spectrum is value. Agreed, there is always a you-get-what-you-pay-for standard, but some offerings don't quite add up.
Without naming names, here's what I could see from a couple of choices I evaluated. The three cloud instances had very similar specs: 1 vCPU, 1GB RAM, and 20/25GB of SSD. On the OS end, DigitalOcean had a recently upgraded Ubuntu 20.04, while the others had Debian 9 / Ubuntu 21.04. The new instances were clean in terms of what was running, but the DO instance had a fair bit going on. Putting down concrete numbers, the two new ones were running at 25% and 9% of memory. The DO instance, in contrast, started at a steady 50% usage, thanks to containerd.
While CPU peaks at about 30% during a run, the runtime itself differs vastly between instances. The DO instance, surprising as it is, completes at a steady 250ms (worst case of 550ms over runs). The other two take a steady second to complete. This particular part of the pipeline has seen its share of shaving (hoping it isn't a yak), and has, I'd presume, hit its peak. But, that said, the seeming similarity of specs not reflecting in the outputs is sure a takeaway.
One of the first reasons why crisp (when it still was Tenreads) switched to DigitalOcean was cost. And one of the artificial constraints I had put in place for devs was to use the erstwhile 512MB systems. The primal reasoning was to ensure we understand our constraints and the systems. Not so surprising has my own run over the last year been, where spawning an 8GB box for staging tests isn't unusual. A second, personal takeaway here, I guess.
For context -
* g++-generated binary with **O3** flag and thread support.
* libcurl and pugixml as external dependencies.
* The `<regex>` addition takes up 65% of the binary size (232K).
* Prior experiments with CPython 3.9 and Go 1.15 had runtimes of about 5s and 3s respectively.