Akamai Stock Surges 27% on $1.8B Anthropic Cloud Deal

When Akamai Technologies revealed a $1.8 billion, seven-year cloud infrastructure agreement with a client it identified only as “a leading frontier model provider,” the market reacted with a ferocity rarely seen for a company with nearly three decades of history. Shares surged 27% in a single session — the largest single-day rally in Akamai’s 28-year existence. Bloomberg soon confirmed what many suspected: the customer was Anthropic, the artificial intelligence company behind the Claude family of large language models. The Akamai-Anthropic deal instantly became the largest contract in Akamai’s history, and it signaled something much larger than a single transaction. A business built on speeding up web pages had just been validated as a serious player in the AI infrastructure race.


The Anatomy of the Largest Contract in Akamai’s History

Seven years is an eternity in technology. Most cloud service agreements run for one to three years, with optional renewals that rarely get exercised in full. Akamai and Anthropic committed to a full seven-year term, a duration that provides revenue visibility the company’s legacy content delivery business has never enjoyed. The legacy CDN business operates on shorter cycles — monthly or annual commitments — and faces persistent price compression as competitors like Cloudflare and Fastly drive down margins. The Akamai-Anthropic deal changes that dynamic entirely.

Revenue from the commitment is not expected to begin in earnest until the fourth quarter of 2026. In that initial period, Akamai expects to recognize approximately $20 million to $25 million. The ramp reflects the time required to deploy the physical infrastructure — servers, networking gear, and cooling systems — that Anthropic’s workloads will demand. Akamai will need to build out capacity across its global network of more than 4,100 locations in 130 countries, a process that does not happen overnight.

The contract follows a $200 million, four-year cloud services agreement that Akamai signed in February 2026 with another unnamed US technology company. Under that deal, the customer will use a multi-thousand NVIDIA Blackwell GPU cluster for AI training and inference workloads. Together, the two contracts represent $2 billion in committed cloud revenue from customers that did not exist in Akamai’s pipeline two years ago. That is a remarkable shift for a company whose revenue was still dominated by content delivery through 2023.

Why Seven Years Matters

Long-term commitments in cloud infrastructure are rare because technology changes so quickly. A seven-year deal assumes that the hardware and software stack deployed today will remain useful for most of the next decade. For Anthropic, that assumption is a bet on Akamai’s ability to upgrade and refresh its infrastructure over the contract’s life. For Akamai, it is a bet that Anthropic’s compute needs will continue growing at a pace that justifies the commitment. Both parties are essentially saying that AI inference workloads will remain compute-intensive enough to warrant dedicated infrastructure for the foreseeable future.

The visibility that a seven-year contract provides has immediate effects on how Akamai manages its capital expenditures. The company can now plan multi-year data center expansions with confidence that a major customer will fill that capacity. That reduces the financial risk of building ahead of demand, a challenge that has tripped up many cloud infrastructure providers. Akamai’s CFO can model revenue with far greater certainty than the company’s CDN business ever allowed.

The Stock Market’s 27% Reaction: A Repricing, Not a Surge

When a stock rises 27% in a single day, the natural assumption is that investors are reacting to news. But the magnitude of the move suggests something deeper: a repricing of the entire company based on a new narrative. Akamai had been valued as a mature, slow-growth infrastructure business with a declining legacy product and two promising but unproven growth engines in cybersecurity and cloud computing. The Akamai-Anthropic deal provided the proof point that the cloud pivot was real and that the company could win business from the most demanding AI companies in the world.

The 27% rally added roughly $4 billion to Akamai’s market capitalization. That is more than double the value of the contract itself, which is worth $1.8 billion over seven years. Investors are not pricing the deal in isolation. They are pricing the pipeline of similar deals that the contract signals. If Akamai can win Anthropic, the logic goes, it can win other frontier AI companies that face the same compute constraints.
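The back-of-the-envelope arithmetic behind that comparison can be checked directly. The 27% move and the roughly $4 billion figure come from the reporting above; the implied pre-rally market capitalization below is derived, not reported:

```python
# Sanity check on the repricing claim. Inputs are the article's figures;
# the implied pre-rally market cap is derived from them.
rally_pct = 0.27
market_cap_added = 4.0e9   # ~$4B added in the one-day rally
contract_value = 1.8e9     # $1.8B over seven years

implied_pre_rally_cap = market_cap_added / rally_pct
ratio = market_cap_added / contract_value  # value added vs. total contract

print(f"Implied pre-rally market cap: ${implied_pre_rally_cap / 1e9:.1f}B")
print(f"Value added vs. contract value: {ratio:.1f}x")
```

The ratio comes out above 2x, which is what supports the claim that investors are pricing a pipeline of similar deals rather than this contract alone.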

Akamai’s cloud infrastructure services revenue grew 40% year over year to $95 million in the quarter that included the deal announcement. That growth rate is exceptional by any standard, especially for a company whose legacy CDN business declined 7% in the same period. The cloud business is still small relative to the company’s overall revenue — approximately $95 million per quarter versus roughly $1 billion in total quarterly revenue — but it is growing at a pace that can transform the company’s revenue mix within three to five years.
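A rough compounding sketch shows why that growth rate can shift the revenue mix on the timeline described. The sustained 40% annual growth rate and the flat $1 billion quarterly total are simplifying assumptions for illustration, not forecasts:

```python
# Project cloud revenue forward at a constant 40% annual growth rate.
# Starting figures are from the article; holding total revenue flat is a
# simplifying assumption to show the mix shift, not a forecast.
cloud_q = 95e6     # cloud infrastructure revenue per quarter today
total_q = 1.0e9    # rough total quarterly revenue today

for year in range(1, 6):
    cloud_q *= 1.40
    print(f"Year {year}: cloud ~${cloud_q / 1e6:.0f}M/quarter "
          f"({cloud_q / total_q:.0%} of today's total)")
```

Under these assumptions, cloud revenue passes half of today's total quarterly revenue around year five, consistent with the three-to-five-year transformation window in the paragraph above.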

Cybersecurity Remains the Revenue Anchor

Cloud computing may be the growth story, but cybersecurity remains Akamai’s largest business segment. The security division generates approximately $590 million per quarter, accounting for about 55% of total revenue, and is growing at 11% year over year. That growth rate is respectable but not explosive. The security business benefits from the same global edge network that powers Akamai’s CDN and cloud services, creating a natural competitive advantage. Akamai can offer customers a combined security and cloud computing package that smaller providers cannot match.

The security business also provides a buffer against the decline in CDN revenue. As content delivery becomes more commoditized, the security segment’s higher margins and recurring subscription model give Akamai the financial stability to invest in cloud infrastructure. The company can afford to take the long view on cloud computing because security revenue provides a reliable foundation.

The Pivot from Content Delivery to Cloud Computing

Akamai was founded in 1998 at the Massachusetts Institute of Technology by Daniel Lewin and Tom Leighton, along with a team of MIT researchers. The company’s original insight was that the internet’s architecture — designed for resilience, not speed — created bottlenecks that made web pages load slowly. Akamai’s solution was to cache content on servers placed at the edge of the network, closer to users. The company grew rapidly as the web expanded, becoming the dominant content delivery network for the world’s largest websites and streaming services.

For two decades, Akamai operated the world’s largest content delivery network, caching and distributing web pages, video streams, and software downloads. At its peak, the CDN business handled roughly 30% of all global web traffic. But the business model had a fundamental weakness: it was a commodity. As competitors emerged and customers became more price-sensitive, Akamai’s margins compressed. The company needed a new growth engine.

Tom Leighton, who served as chief scientist before becoming CEO in 2013, led the diversification effort. The first major pivot was into cybersecurity, which began with a series of acquisitions and organic product development. The security business now protects enterprises from distributed denial-of-service attacks, web application threats, and credential abuse. It is a natural extension of Akamai’s edge network, which provides the scale to absorb and mitigate attacks that would overwhelm smaller providers.

The second pivot, into cloud computing, began in earnest with the $900 million acquisition of Linode in 2022. Linode was a respected but relatively small cloud provider that offered virtual machines, storage, and networking at competitive prices. Akamai saw an opportunity to differentiate by combining Linode’s cloud infrastructure with its own edge network, creating a platform that could run applications at the edge with lower latency than centralized cloud providers like AWS or Azure.

Why Edge Computing Matters for AI Inference

AI inference — the process of running a trained model to generate predictions or responses — is fundamentally different from AI training. Training requires massive clusters of GPUs running for weeks or months, consuming enormous amounts of power and generating tremendous heat. Inference, by contrast, needs to happen quickly, often in real time, and the latency between the user and the model determines the quality of the experience.

Akamai’s edge network, with thousands of locations distributed around the world, is ideally suited for inference workloads. A user in Tokyo querying a Claude model should not have to wait for the request to travel to a centralized data center in Virginia or Oregon. With Akamai’s edge infrastructure, the inference can happen on a server in Tokyo, reducing latency from hundreds of milliseconds to single digits. That difference matters for conversational AI, code generation, and real-time translation.
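A quick physics check shows why geography dominates this latency gap. The sketch below assumes light travels through fiber at roughly two-thirds of c (a common approximation) and uses approximate great-circle distances; real round trips are longer still because of routing, queuing, and processing:

```python
# Lower bound on network round-trip time from distance alone, assuming
# signals propagate through fiber at ~200,000 km/s (about 2/3 of c).
# City-pair distances are approximate great-circle figures.
C_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a given distance."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

print(f"Tokyo -> Virginia (~11,000 km): {min_rtt_ms(11_000):.0f} ms minimum RTT")
print(f"Tokyo -> in-city edge (~50 km): {min_rtt_ms(50):.1f} ms minimum RTT")
```

Even before any server-side processing, the transpacific round trip alone costs on the order of a hundred milliseconds, while an in-region edge hop is well under a millisecond — which is why moving inference to the edge, rather than faster hardware alone, closes most of the gap.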

At NVIDIA’s GTC event in March 2026, Akamai announced that it would deploy thousands of NVIDIA RTX PRO 6000 GPUs and build what it described as the industry’s first global-scale implementation of NVIDIA’s AI Grid. The AI Grid architecture pushes inference workloads to edge locations, reducing the distance data must travel and lowering the total cost of inference. Akamai is betting that the market for edge inference will grow faster than centralized cloud inference as AI applications become more interactive and latency-sensitive.

Anthropic’s Compute Crisis: Why It Signed with Akamai

Anthropic’s decision to sign a $1.8 billion contract with Akamai reflects the single most important constraint in the current AI infrastructure market: demand for compute exceeds the capacity of any single provider. Anthropic, like its competitors OpenAI and Google DeepMind, faces a constant struggle to secure enough computing power to train and deploy its models. The company’s Claude models are among the most capable in the world, but they require enormous amounts of GPU compute to operate.

Dario Amodei, Anthropic’s chief executive, disclosed that the company experienced “80x growth” in annualized revenue and usage in the first quarter of 2026. That growth rate is staggering by any measure. A company growing at that pace cannot afford to wait for a single cloud provider to build capacity. It must secure compute from every available source, including Akamai, which is a relatively new entrant in the cloud infrastructure market.

Anthropic’s strategy is to diversify its compute supply across multiple providers and architectures. The company runs Claude across Google’s custom Tensor Processing Units, Amazon’s Trainium and Inferentia chips, and NVIDIA’s GPUs. It is also exploring the possibility of building its own custom chips, following the example of Google and Amazon. The Akamai deal adds another layer of diversification, providing compute capacity that is geographically distributed and optimized for inference rather than training.

The company also signed a separate agreement to take all of SpaceX’s Colossus 1 data center capacity, adding more than 300 megawatts and over 220,000 NVIDIA GPUs to its compute footprint. That deal, combined with the Akamai contract, gives Anthropic one of the largest and most diverse compute infrastructures of any AI company in the world. The company is effectively building a distributed supercomputer across multiple providers and geographies.

The 80x Growth Problem

Growing annualized revenue and usage by 80x in a single quarter creates a set of problems that most companies will never experience. The most immediate problem is infrastructure: Anthropic must double its compute capacity roughly every two weeks to keep pace with demand. That rate of expansion is far faster than any single cloud provider can build data centers. The company must therefore work with multiple providers simultaneously, signing long-term contracts to secure capacity before it is built.
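The “doubling roughly every two weeks” figure follows directly from the growth rate. A quick check, treating the 80x as growth over one roughly 13-week quarter (as the paragraph does) and assuming smooth exponential growth:

```python
import math

# Doubling time implied by 80x growth over one quarter (~13 weeks),
# assuming smooth exponential growth over the period.
growth_factor = 80
quarter_weeks = 13

doublings = math.log2(growth_factor)          # doublings needed to reach 80x
weeks_per_doubling = quarter_weeks / doublings

print(f"Doublings per quarter: {doublings:.1f}")
print(f"Weeks per doubling:    {weeks_per_doubling:.1f}")
```

The result lands at roughly two weeks per doubling, matching the figure in the text.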

The second problem is cost. Compute is Anthropic’s largest expense by a wide margin. The company spends billions of dollars per year on GPU clusters, networking equipment, and data center power. The $1.8 billion Akamai deal represents a significant portion of Anthropic’s infrastructure budget, but it is only one of several large contracts the company has signed. The total cost of Anthropic’s compute infrastructure likely exceeds $10 billion over the next several years.

The third problem is latency. As Claude usage grows, Anthropic must ensure that inference responses remain fast for users around the world. Centralized data centers in a few locations cannot provide low-latency responses to users in Asia, Africa, and South America. Akamai’s edge network, with servers in 130 countries, provides a solution that centralized cloud providers cannot easily replicate.

Concentration Risk: The Single-Customer Debate

Every investor who looked at the Akamai-Anthropic deal asked the same question: what happens if Anthropic decides to build its own infrastructure or switches to a different provider? A single customer representing a significant portion of a company’s growth pipeline creates concentration risk. If Anthropic reduces its commitment or fails to grow as expected, Akamai’s cloud business would face a significant headwind.

The $1.8 billion contract is large enough to distort Akamai’s financial picture. The company’s cloud infrastructure services revenue was $95 million in the most recent quarter, or approximately $380 million on an annualized basis. The Anthropic contract, if fully executed, would add roughly $257 million per year in revenue on average, expanding the cloud business by roughly two-thirds. That level of dependence on a single customer is unusual for a company of Akamai’s size and maturity.
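The run-rate comparison works out as follows. The figures come from the paragraph above; spreading the contract evenly across seven years is a simplification, since the actual ramp is back-loaded:

```python
# Compare the contract's average annual value to the current cloud run rate.
# Figures from the article; straight-line averaging is a simplification.
quarterly_cloud_rev = 95e6              # most recent quarter
annualized_cloud_rev = quarterly_cloud_rev * 4

contract_total = 1.8e9
contract_years = 7
contract_annual = contract_total / contract_years

print(f"Annualized cloud revenue: ${annualized_cloud_rev / 1e6:.0f}M")
print(f"Contract per year (avg):  ${contract_annual / 1e6:.0f}M")
print(f"Uplift vs. run rate:      {contract_annual / annualized_cloud_rev:.0%}")
```

Averaged across the term, the contract alone is worth about two-thirds of the entire current cloud run rate, which is why it dominates the concentration-risk discussion.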

However, the risk may be less severe than it appears. The contract is structured as a seven-year commitment, which means Anthropic cannot simply walk away. Even if Anthropic’s growth slows or the company decides to build its own infrastructure, it would still be obligated to pay for the capacity it reserved. The contract provides Akamai with downside protection that shorter-term agreements do not offer.

Moreover, the deal signals that Akamai’s cloud platform is competitive with offerings from Amazon, Google, and Microsoft. If Akamai can win business from Anthropic, it can win business from other large AI companies and enterprises. The company’s pipeline of major enterprise customers, as Tom Leighton described it, is strong and growing. The Anthropic deal may be the first of many large cloud contracts, not the last.


The Broader Enterprise Opportunity

Akamai’s cloud platform is designed to appeal to enterprises that need to run AI inference workloads at the edge. These enterprises include financial services companies that need real-time fraud detection, healthcare providers that need low-latency diagnostic tools, and manufacturers that need to run computer vision models on factory floors. The edge computing model is particularly attractive for industries that cannot tolerate the latency of centralized cloud processing.

The enterprise market for edge AI inference is still in its early stages, but it is growing rapidly. Gartner estimates that by 2028, more than 50% of enterprise-generated data will be processed outside centralized data centers or cloud environments. That shift creates a massive opportunity for providers like Akamai that have the infrastructure to support edge workloads. The Anthropic deal validates the edge AI thesis and gives Akamai a reference customer that other enterprises will want to emulate.

The Competitive Landscape: Akamai vs. the Hyperscalers

Akamai’s cloud computing business competes directly with Amazon Web Services, Microsoft Azure, and Google Cloud, which together control more than 65% of the global cloud infrastructure market. Competing against the hyperscalers is a daunting challenge for any company, especially one that entered the cloud market relatively late. Akamai does not have the breadth of services that AWS offers or the enterprise relationships that Microsoft has cultivated over decades.

Akamai’s competitive advantage lies in its edge network. The hyperscalers have centralized data centers in a few dozen regions, while Akamai has thousands of edge locations in 130 countries. For workloads that require low latency — such as AI inference, real-time video processing, and interactive gaming — Akamai’s distributed architecture provides a meaningful performance advantage. The company can deliver inference responses in single-digit milliseconds, while centralized cloud providers often require 50 to 100 milliseconds or more.

Akamai also differentiates on price. The company’s cloud platform is built on commodity hardware and open-source software, which keeps costs lower than the hyperscalers’ proprietary infrastructure. Akamai charges significantly less for compute and storage than AWS or Azure, making it an attractive option for cost-conscious enterprises. The trade-off is that Akamai offers fewer services and less automation, which may deter customers that need a fully managed platform.

The Linode acquisition gave Akamai a solid foundation in cloud computing, but the company has invested heavily in expanding its capabilities since 2022. Akamai now offers managed Kubernetes, object storage, bare metal servers, and a range of networking services. The platform is not as comprehensive as AWS, but it is competitive for the use cases that matter most to Akamai’s target customers: AI inference, edge computing, and content delivery.

The NVIDIA Partnership

Akamai’s partnership with NVIDIA is a critical component of its AI strategy. The NVIDIA RTX PRO 6000 GPUs that Akamai is deploying are designed for professional visualization and AI inference workloads. They are not the same as the H100 or B200 GPUs that hyperscalers use for training, but they are well-suited for inference tasks that require high throughput and low latency.

Akamai’s deployment of NVIDIA’s AI Grid architecture is a bet on distributed inference. Instead of running all inference workloads in a centralized data center, the AI Grid distributes models across edge locations, allowing inference to happen as close to the user as possible. That architecture reduces latency, lowers bandwidth costs, and improves the user experience for interactive AI applications.

The partnership also gives Akamai access to NVIDIA’s software stack, including CUDA, TensorRT, and Triton Inference Server. These tools allow Akamai to optimize model performance on its infrastructure, reducing the cost of inference for customers. Akamai can offer enterprises a complete inference platform that combines hardware, software, and networking in a single package.

The Nebius Acquisition and the Inference Optimization Market

In a related development that underscores the value of inference optimization, Nebius — a European cloud infrastructure company — acquired Eigen AI for $643 million in early 2026. Eigen AI specializes in optimizing inference performance, reducing the cost and latency of running AI models in production. The acquisition signals that inference optimization is becoming a valuable and competitive market.

Akamai is well-positioned to benefit from the same trend. The company’s edge network provides a natural platform for inference optimization, and its partnership with NVIDIA gives it access to cutting-edge hardware and software. Akamai could potentially acquire an inference optimization company of its own to strengthen its capabilities, or it could build the technology internally. Either way, the Nebius acquisition validates the thesis that inference optimization is a growing market with significant value.

The inference optimization market is driven by the same dynamic that drives the compute market: demand for AI inference is growing faster than supply. Companies that can reduce the cost of inference by 20% or 30% gain a significant competitive advantage. Akamai, with its distributed edge infrastructure and NVIDIA partnership, is well-positioned to offer inference optimization services that centralized cloud providers cannot match.

What the Deal Means for AI Infrastructure

The Akamai-Anthropic deal is a signal that the AI infrastructure market is undergoing a fundamental shift. The hyperscalers — Amazon, Google, and Microsoft — have dominated the market for AI compute, but they are struggling to keep up with demand. AI companies like Anthropic are being forced to diversify their compute supply across multiple providers, including smaller players like Akamai and specialized providers like CoreWeave and Lambda.

That diversification is good for the market as a whole. It reduces the concentration risk that comes from relying on a single cloud provider, and it creates competition that drives down prices and improves quality. AI companies that can access compute from multiple providers are less vulnerable to price increases, service disruptions, or strategic changes by any single provider.

Akamai’s success in winning the Anthropic deal also demonstrates that there is room in the AI infrastructure market for companies that are not hyperscalers. The key is differentiation: Akamai’s edge network provides a capability that the hyperscalers cannot easily replicate. Other infrastructure providers will likely follow a similar strategy, focusing on specific use cases or geographies where they can offer unique value.

The deal also highlights the importance of long-term commitments in the AI infrastructure market. AI companies need predictable compute capacity to plan their product roadmaps and manage their costs. Cloud providers need predictable revenue to justify the capital expenditures required to build data centers. Long-term contracts align the interests of both parties and create a stable foundation for growth.

The Future of Akamai’s Cloud Business

Akamai’s cloud infrastructure services revenue is growing at 40% year over year, and the Anthropic deal will accelerate that growth significantly. The company is on track to become a meaningful player in the cloud computing market, particularly in the edge inference segment. The question is whether Akamai can sustain its growth momentum and win additional large contracts from other AI companies and enterprises.

The company’s pipeline of major enterprise customers, as described by Tom Leighton, suggests that the Anthropic deal is not an isolated event. Akamai is competing for cloud contracts with some of the largest companies in the world, and its edge computing capabilities give it a unique value proposition. If Akamai can convert even a fraction of its pipeline into signed contracts, the cloud business could grow to rival the cybersecurity business in revenue within five years.

Akamai’s legacy CDN business will continue to decline as content delivery becomes more commoditized, but the decline is manageable. The company’s cybersecurity and cloud computing businesses are growing fast enough to offset the CDN decline and drive overall revenue growth. Akamai is successfully transforming itself from a CDN company into a diversified infrastructure provider, and the Anthropic deal is the most visible proof of that transformation.
