In the good old days (well, perhaps they weren't always "good"), you had to build most things yourself, rather than buy them off the shelf. You had to make your own burgers. OK, I made up that specific case…
However, when it came to technology stacks, there was less available than there is today, and you had to build a lot more. Of course, I'm not saying we had to build operating systems and databases from scratch! However, there were fewer tools available further up the tech stack. I remember interning at Lehman while at uni, and there was a team dedicated to developing and supporting various very cool web applets, all built internally, for displaying charts of market data. Today, there are loads of open source tools doing just that, such as Plotly and ggplot, as well as many commercial charting tools. In all likelihood, you would now use an off-the-shelf charting tool, rather than building it all yourself.
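To give a flavour of just how little code an off-the-shelf library needs, here's a minimal sketch using Plotly; the price series is randomly simulated purely for illustration, not real market data:

```python
import numpy as np
import pandas as pd
import plotly.express as px

# Simulate a daily price series (purely illustrative, not real market data)
dates = pd.bdate_range("2023-01-02", periods=250)
prices = 100 * np.exp(np.cumsum(np.random.normal(0, 0.01, len(dates))))

df = pd.DataFrame({"date": dates, "price": prices})

# One line gets us an interactive web chart, versus building an applet from scratch
fig = px.line(df, x="date", y="price", title="Simulated price series")
fig.show()
```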
When it came to deploying apps, whether web based applications or risk calculations, Lehman had lots of Linux boxes. Of course, these machines had to be properly maintained. The difficulty with having your own local resources is scaling the computational power. If you have too few machines, jobs stack up waiting for compute. If you have too many, you pay for lots of hardware which isn't number crunching a lot of the time.
Today, with the cloud, we can just spin up Linux boxes when we need them for computation, and switch them off when not in use. With AWS, we have EC2; with Google Cloud, there's Compute Engine. The question with the cloud is not simply whether to use the lower level services like EC2 to replicate local Linux server boxes, but also whether to use higher level services. Higher level services include managed databases, which scale as we use more data and compute. Serverless computing, using the likes of AWS Lambda or Google Cloud Functions, allows us to run computations at scale when needed, in a more convenient way. We don't need to worry about spinning EC2 instances up and down; with Lambda, that's done in the background for us.
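To make this concrete, a Python Lambda handler can be as simple as the sketch below. The event fields and the calculation are hypothetical, but the `lambda_handler(event, context)` signature is what AWS expects; everything about provisioning the compute happens behind the scenes:

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda calls for each invocation.

    AWS provisions the compute behind the scenes; we never spin up
    or tear down any EC2 instances ourselves.
    """
    # Hypothetical payload: a list of trade notionals to aggregate
    notionals = event.get("notionals", [])

    total = sum(notionals)

    return {
        "statusCode": 200,
        "body": json.dumps({"total_notional": total}),
    }
```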
Whilst it might be easier, the more we use higher level cloud services, the more we'll get locked into a specific way of doing things, and the more vendor lock-in we'll have. If we just relied on EC2 and installed and managed applications ourselves, it would be easier to move between different cloud providers, or indeed to local resources. But if a particular service saves us a lot of time, it's worth paying for, and we can accept an element of vendor lock-in. We can also use pricing calculators to estimate the cost of using particular services, which can help our decision making.
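Even a back-of-envelope estimate can be revealing. The sketch below compares a hypothetical bursty workload on an always-on EC2 instance versus Lambda; the rates are illustrative placeholders, so do check the providers' pricing calculators for real figures:

```python
# Illustrative placeholder rates -- check AWS's pricing calculator for real figures
EC2_HOURLY_USD = 0.04             # hypothetical on-demand rate for a small instance
LAMBDA_GB_SECOND_USD = 0.0000167  # hypothetical per GB-second rate

HOURS_PER_MONTH = 730

# Hypothetical bursty workload: 100,000 invocations a month, 2s each at 1 GB memory
invocations = 100_000
seconds_per_call = 2
memory_gb = 1

ec2_monthly = EC2_HOURLY_USD * HOURS_PER_MONTH  # instance left running all month
lambda_monthly = LAMBDA_GB_SECOND_USD * invocations * seconds_per_call * memory_gb

print(f"Always-on EC2: ${ec2_monthly:.2f}/month")
print(f"Lambda:        ${lambda_monthly:.2f}/month")
```

With these made-up numbers, the bursty workload comes out far cheaper on Lambda; a workload running flat out all month would tip the other way.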
One thing to bear in mind is that, whilst we don't need to physically maintain machines locally if we use the cloud, we still need to manage the services we use in the cloud.
Another important point with the cloud is that we don't necessarily have to choose between cloud and local for everything. We can take a hybrid approach, using the cloud for some workloads and local resources for others, whichever is more cost effective. If a certain workload involves a lot of continual computation, we might find it better to spend money on buying local hardware. For other processes, where we keep scaling up and down, the cloud is probably better, as the rough break-even sketch below suggests.
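All the figures below are made-up assumptions, not quotes, but they show why utilisation drives the cloud-versus-local decision:

```python
# All figures are made-up assumptions for illustration
LOCAL_SERVER_COST_USD = 8_000           # upfront hardware, amortised over 3 years
LOCAL_RUNNING_COST_USD_PER_HOUR = 0.10  # power, cooling, maintenance
CLOUD_COST_USD_PER_HOUR = 1.50          # hypothetical on-demand rate, similar spec

AMORTISATION_HOURS = 3 * 365 * 24  # 3-year hardware lifetime

local_hourly = (LOCAL_SERVER_COST_USD / AMORTISATION_HOURS
                + LOCAL_RUNNING_COST_USD_PER_HOUR)

for utilisation in (0.05, 0.25, 0.75, 1.0):
    hours_used = utilisation * AMORTISATION_HOURS
    # Cloud: we only pay for the hours we actually compute
    cloud_total = CLOUD_COST_USD_PER_HOUR * hours_used
    # Local: the hardware costs the same whether it's crunching numbers or idle
    local_total = local_hourly * AMORTISATION_HOURS
    winner = "cloud" if cloud_total < local_total else "local"
    print(f"utilisation {utilisation:>4.0%}: cloud ${cloud_total:>9,.0f} "
          f"vs local ${local_total:>9,.0f} -> {winner}")
```

With these assumptions, intermittent workloads favour the cloud, while boxes that are number crunching most of the time favour local hardware.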
All the decisions we make, such as whether to use the cloud (or a hybrid solution) and which managed services to use, depend on our specific circumstances. It's important to estimate the types of workloads we'll run in order to make the right decision. One thing is for certain: the cloud has definitely given us more choices to make!