Running the Numbers: Where to Deploy Your Application


As part of Solutions Review’s Premium Content Series—a collection of contributed columns written by industry experts in maturing software categories—Josh Johnson of Akamai Technologies weighs the best place for app deployment and concludes that “it depends.”

Application developers used to have a simple choice when it came to determining where to execute their code: the server or the client. But with the emergence of cloud computing and edge computing, that choice has expanded into a continuum of options. An application can run in a single location, across a handful of regions, or in hundreds or even thousands of edge locations. With all these options, it’s not surprising that developers are asking themselves which location is best for their application.

The short answer is: it depends. The choice of where to execute logic depends on what you’re trying to achieve with your application. Let’s take a look at the different strengths and limitations of each compute model and how they lend themselves—or not—to particular application functions.

App Deployment: Weigh Your Options


Cloud App Deployment

Hosting an application in the cloud offers the flexibility to run apps and storage in an environment that can easily scale vertically. Need more compute or storage resources? No problem: just pay for more. This makes the cloud ideal for data-heavy applications, such as a large, centralized database.

However, horizontal scaling is limited, reducing your ability to place application logic close to the user. You could host your application across perhaps a dozen regional cloud locations, but for some users this creates a long round trip: the request must travel to a distant application, and the response must travel all the way back. That delay is not acceptable for applications that require low latency, such as streaming applications that deliver personalized content.

Distributed Cluster App Deployment

Another option is hosting your application across a distributed cluster of virtual machines (VMs) or containers. This approach may involve VMs in hundreds of locations, extending horizontal scaling beyond what a traditional cloud model can provide. Vertical scaling is not quite as flexible as in the cloud, since hardware must be added at every distributed location. However, a distributed system can place applications closer to users, potentially offering a latency advantage over a limited number of cloud locations.

This distributed approach lends itself to applications designed to serve groups of users clustered together in specific regions. An example would be video transcoding, where you are delivering localized content to large numbers of users in hundreds of media markets nationally. With distributed VMs located in each market, the result could be “close to live” latency for those users—ideal for sporting events, for example.

Edge Computing App Deployment

Modern edge computing allows code execution in thousands of points worldwide using lightweight or serverless platforms. This offers extensive horizontal scalability, placing compute resources very close to the end user. On the other hand, vertical scalability is limited. This makes edge computing ideal for applications that require very low latency and user-specific functionality, such as delivering personalized web content.

Consider the example of A/B testing, where one user may receive different content than another user at the same URL based on their distinct profiles. Both variants can be cached at the edge, where the logic is performed to determine which experience to serve to the user. The personalization functionality—including accessing test configuration data, reading cookies, adding request headers, etc.—is performed at the edge in milliseconds, serving up the appropriate page without delay.
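To make this concrete, here is a minimal sketch of the edge-side bucketing logic such a test might use. The cookie name, config shape, and function names are illustrative assumptions, not a specific vendor API; a real edge worker would read the cookie from the platform’s request object.

```javascript
// Deterministically assign a visitor to a variant. Returning users keep
// the variant stored in their cookie; new users are hashed into a bucket
// so the same user always lands on the same variant.
function pickVariant(cookies, config) {
  // Honor an existing, still-valid assignment.
  if (cookies.abVariant && config.variants.includes(cookies.abVariant)) {
    return cookies.abVariant;
  }
  // Hash the user ID to spread new traffic evenly across variants.
  let hash = 0;
  for (const ch of String(cookies.userId || "anonymous")) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return config.variants[hash % config.variants.length];
}

// Headers the edge would attach before serving the cached variant,
// pinning the assignment for future requests.
function buildVariantHeaders(variant) {
  return { "x-ab-variant": variant, "set-cookie": `abVariant=${variant}` };
}
```

Because the assignment is computed at the edge and both variants are already cached there, the user sees a personalized page without a round trip to the origin.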

Another ideal application for the edge is security. Implementing traffic filtering at the edge can block suspicious activity before it has a chance to infiltrate the network, while also reducing network traffic and the workload placed on centralized servers.
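The filtering itself can be a small, fast decision function. The sketch below assumes a simplified request shape and an invented rule set (example IP prefixes from documentation ranges, a few path heuristics); production filtering would use a managed rule engine rather than hand-rolled patterns.

```javascript
// Illustrative block lists -- real deployments would use threat feeds
// and a WAF rule set, not hard-coded values.
const BLOCKED_IP_PREFIXES = ["203.0.113.", "198.51.100."];
const SUSPICIOUS_PATH_PATTERNS = [/\.\.\//, /\/etc\/passwd/, /<script/i];

// Decide at the edge whether a request should be dropped before it ever
// reaches the origin; return a verdict plus a reason for logging.
function filterRequest(req) {
  if (BLOCKED_IP_PREFIXES.some((p) => req.clientIp.startsWith(p))) {
    return { allow: false, reason: "blocked-ip" };
  }
  if (SUSPICIOUS_PATH_PATTERNS.some((re) => re.test(req.path))) {
    return { allow: false, reason: "suspicious-path" };
  }
  return { allow: true, reason: "ok" };
}
```

Rejecting the request here means the origin never spends cycles on it, which is exactly the traffic- and workload-reduction benefit described above.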

It’s Not an “Either/Or” Choice

As we’ve seen, each of these compute options has its own, distinct strengths and limitations. But that doesn’t mean that it’s an “all or nothing” choice. We can combine different compute models, using each to perform the compute workloads that make sense for that technology.

For example, in a content delivery scenario, you could deploy JavaScript code at the edge to determine which local channels the user should receive based on their location. However, the actual transcoding of the streaming content could then be handled by the appropriate delivery platform in the regional distributed system or even in the cloud.
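A sketch of that edge-side routing step might look like the following. The market table, endpoint URL, and function name are invented for illustration; the point is that the edge only maps a viewer to a lineup and a regional endpoint, while the heavy transcoding runs elsewhere.

```javascript
// Hypothetical mapping from media market to local channel lineup.
const CHANNELS_BY_MARKET = {
  "new-york": ["WNBC", "WABC"],
  "los-angeles": ["KNBC", "KABC"],
};

// Edge-side logic: resolve the viewer's market to a channel lineup and
// the regional endpoint that serves the already-transcoded streams.
function resolveLineup(geo) {
  // Fall back to a default market for unrecognized locations.
  const market = geo.market in CHANNELS_BY_MARKET ? geo.market : "new-york";
  return {
    channels: CHANNELS_BY_MARKET[market],
    // Hypothetical regional delivery host chosen by market.
    streamEndpoint: `https://stream.example.com/${market}`,
  };
}
```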

Another example would be video watermarking to help combat piracy. In this scenario, video content is streamed from the centralized cloud or distributed platform, while edge compute handles the task of encoding a unique, invisible mark into each copy of the video. This mark can be used forensically to trace pirated copies back to the user who downloaded the content without authorization.

More Options, Better Online Experiences

The key to determining the optimal division of app resources is thinking critically about each application’s purpose and the desired user experience. When large compute workloads are required and latency is not critical, cloud is king. When computing needs to serve large clusters of users located in a defined geographic region, distributed computing offers some attractive advantages. When one-to-one personalization and low latency are the goals, the edge is unmatched. Having a clear understanding of the strengths of each deployment model is crucial for developing applications that deliver the online experiences today’s users expect.
