
Computing Trends Back to the Future – A Perspective on Computing

Solutions Review’s Expert Insights Series is a collection of contributed articles written by industry experts in enterprise software categories. In this feature, VMware’s VP of Strategy Joshua Burgin offers commentary on computing trends and how your organization can stay on the cutting edge.

The internet is getting old. Next year marks 35 years from dialing up to web-based computing. We’re making incredible progress in connecting systems together – but we’re making things more complex, too. Fortunately, we’re building better technology – and I’ve never been more excited about what’s coming.

The First Data Centers – Computing at Scale, But in Big Converted Rooms in Old Buildings

The first web browser had been publicly available for less than five years by the time I joined Amazon. We were already moving toward what would become a distributed systems architecture.

  • We ran a monolithic website: a single web server with “online” and “offline” directories. We ran on bare-metal hardware running Unix key-value databases for the catalog data and a large database on the backend to store and process orders.
  • Networking was for systems and network engineers, who handled the specialized hardware (switches and routers) that connected the physical hosts to each other and the internet.
  • As for data, there was a tightly controlled system that managed credit card data – akin to early network partitioning, least privilege and RBAC, but in a more physical sense.

All this constituted a kind of architecture. But the bottlenecks were obvious: the presentation layer, business logic, and data were all intermingled, which meant that adding functionality often required being an expert in all things – systems administration, database engineering, programming.


A Manifesto for Distributed Computing

It wasn’t long before we ran out of capacity on the biggest computers made at the time. We needed scale, which meant a fleet of front-end web servers, which meant we needed to build a distributed system.

At Amazon, engineers wrote a “Distributed Computing Manifesto.” The objective was to separate the presentation layer, business logic and data, while ensuring that reliability, scale, performance and security met an incredibly high bar and kept costs under control. This removed bottlenecks to software development by scaling our monolith into what we would now call a service-oriented architecture.

Computing at Massive Scale – Why So Hard?!

Programming patterns and abstractions in both hardware and software have solved lots of problems.

One problem we haven’t solved is distributed systems. Distributed systems are about designing for failure – and systems fail all the time. There are whole fields of computer science dedicated to the trade-offs between consistency, availability, and partition tolerance.
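As a toy illustration of those trade-offs, many replicated stores use quorum reads and writes: with N replicas, choosing a write quorum W and read quorum R such that R + W > N guarantees every read set overlaps the most recent write set. A minimal sketch (the function name and values are illustrative, not from any specific system):

```python
# Quorum overlap rule used by many replicated key-value stores:
# with N replicas, if R + W > N, every read quorum must intersect
# every write quorum, so a read can always see the latest write.

def quorums_overlap(n: int, w: int, r: int) -> bool:
    """True if any read quorum of size r must intersect any write quorum of size w."""
    return r + w > n

# N=5 replicas: W=3, R=3 guarantees overlap.
print(quorums_overlap(5, 3, 3))  # True
# W=2, R=2 does not – a read could miss the latest write entirely.
print(quorums_overlap(5, 2, 2))  # False
```

The rule is simple arithmetic, but tuning W and R is exactly the consistency-versus-availability dial: larger quorums give stronger reads at the cost of tolerating fewer failed replicas.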

And service-oriented architectures can actually make things extraordinarily more complex: with dozens of services talking to each other – and absent observability tools and dashboards – it’s hard to know whether a problem lies in networking, databases, or something else.

Towards Distributed Systems

With virtualization, cloud, and containers, we’re running applications inside multiple isolated user-space instances – containers – to which the kernel allocates only a portion of the system’s resources.

We’ve now reached the point of making it easy to break complex monolithic applications into smaller, modular microservices – fully encapsulated into one neat container.

We Can Go Further

We’re ready for the broad-based use of distributed systems for commercial interests. We have these elastic, microservices-based, containerized applications running at scale, and we need something to orchestrate them.

Enter Kubernetes to deploy applications into the right containers, autoscale, execute efficient bin packing, decommission containers that aren’t in use, and monitor health. And Kubernetes will continue to improve, given its large community and ecosystem.
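The “bin packing” mentioned here is a classic scheduling problem: place workloads onto the fewest nodes that can hold them. Kubernetes’ real scheduler is far more sophisticated (filtering, scoring, affinity rules), but a first-fit-decreasing heuristic sketches the core idea – the CPU requests and capacities below are hypothetical:

```python
# First-fit decreasing: a classic bin-packing heuristic.
# Sort workloads by size (largest first), then place each on the
# first node with enough remaining capacity, opening a new node
# only when none fits.

def first_fit_decreasing(requests, node_capacity):
    """Assign CPU requests to nodes; returns one list of requests per node."""
    nodes = []  # each entry is the list of requests placed on that node
    for req in sorted(requests, reverse=True):
        for node in nodes:
            if sum(node) + req <= node_capacity:
                node.append(req)
                break
        else:
            nodes.append([req])  # no existing node fits: open a new one
    return nodes

# Six pods with CPU requests, nodes with 4.0 cores each:
placement = first_fit_decreasing([2.0, 1.5, 1.0, 3.0, 0.5, 2.5], 4.0)
print(len(placement))  # 3 nodes used
```

Sorting first matters: placing large workloads before small ones leaves the small ones to fill the gaps, which is why the heuristic packs well in practice.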

And now that we’re operating across multiple clouds, on-prem, and hybrid environments, we’re able to address more specific needs, like operating at the edge.

What’s Next

These are the next biggest challenges to address – and the opportunities I see for entrepreneurially minded companies.


Cluster State Management

A lot of folks love to talk about how complex and frustrating etcd is. But really, etcd is a distributed system, and distributed systems are inherently complex and frustrating. The opportunity is to make this layer self-healing and largely invisible to customers.

Service Discovery & Networking

With thousands of microservices, managing service discovery and appropriately designing and segmenting our network still require specialized knowledge. There’s a variety of service meshes that aim to solve the traffic management, security, and observability challenges introduced by microservices and distributed architecture. And of course there’s still the interaction with the physical network. Likely there’s more than one solution to this problem – because it crosses so many domains – but interoperability will be key, not sloughing off the complexity onto platform operators like we see now.
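At its core, service discovery is a registry mapping service names to live endpoints. A minimal in-memory sketch of that idea follows – real systems (DNS-based discovery, a mesh control plane) add health checking, TTLs, and load balancing, and every name and address here is hypothetical:

```python
# Minimal in-memory service registry: the core abstraction behind
# service discovery. Services register their endpoints on startup,
# deregister on shutdown, and clients look up endpoints by name.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> set of "host:port" endpoints

    def register(self, name: str, endpoint: str) -> None:
        self._services.setdefault(name, set()).add(endpoint)

    def deregister(self, name: str, endpoint: str) -> None:
        self._services.get(name, set()).discard(endpoint)

    def lookup(self, name: str) -> list:
        """Return all known endpoints for a service, sorted for stability."""
        return sorted(self._services.get(name, ()))

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
registry.deregister("orders", "10.0.0.5:8080")
print(registry.lookup("orders"))  # ['10.0.0.6:8080']
```

The hard parts a mesh has to solve start exactly where this sketch stops: detecting dead endpoints the service never deregistered, securing who may register, and spreading traffic across the returned list.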


Databases

Most folks are still running databases the old way. Using a database as a managed service from one of the cloud providers or vendors makes some of these problems go away, but we still need automated deployment, fine-grained resource allocation, efficient use of resources, and portability.


Storage

Container-attached storage is in its infancy. We need to take all the innovations we’ve seen in cloud storage – especially around object and block storage – and bring them fully into the container world.


Security

This is harder than ever. It’s no longer feasible just to have perimeter security or lock down every port. Traditional security tools were not designed to monitor running containers, and namespaces are not a security boundary. I see the next frontier of container-aware security focusing less on the specific infrastructure and more on the end-to-end application operating across many underlying resources.

A fundamental theme of the past 35 years has been the ever-increasing scale of a simple objective: to connect systems together, and then to abstract away the infrastructure and other complexities from our developers and end-users.

Looking forward, I’m excited to see the industry take the innovations from containers – strong isolation, fast startup and efficient use of resources – and fully build out the world of micro-VMs.

  • Let’s containerize databases and storage more fully.
  • Let’s make applications fully instruction-set agnostic so they can be migrated to Arm.
  • Let’s evolve networking and security to be “application” and container aware.
  • Let’s go beyond logs, metrics, and traces and build systems that learn and adapt to prevent failures – auto-healing rather than requiring the same kind of manual intervention (re-deploy, restart) that we’ve been doing for 35 years.
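The auto-healing idea in that last point can be sketched as a supervisor loop: probe a component’s health and restart it automatically after repeated failures, rather than paging a human to do the same restart. The probe and restart callables below are hypothetical stand-ins for real health checks and orchestrator APIs:

```python
# Supervisor sketch: restart a component after consecutive failed
# health probes, instead of waiting for a human to notice and act.

def supervise(probe, restart, max_failures: int, checks: int) -> int:
    """Run `checks` probes; call restart() after `max_failures`
    consecutive failures. Returns how many restarts were triggered."""
    consecutive, restarts = 0, 0
    for _ in range(checks):
        if probe():
            consecutive = 0  # healthy probe resets the failure streak
        else:
            consecutive += 1
            if consecutive >= max_failures:
                restart()
                restarts += 1
                consecutive = 0
    return restarts

# Simulated component that fails its 3rd-5th health checks:
results = iter([True, True, False, False, False, True])
triggered = supervise(lambda: next(results), lambda: None,
                      max_failures=3, checks=6)
print(triggered)  # 1
```

Real systems layer smarter policies on top of this skeleton – exponential backoff, crash-loop detection, and learning which remediation (restart, reschedule, roll back) actually fixes a given failure signature.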

The world produces quintillions of bytes of data each day. The opportunity ahead for modernization in technology is too big to fathom. But our ability to capture that opportunity will increasingly depend on whether we can fully abstract away the underlying complexity from developers so they can focus on building applications.

