How Tech Giants Compete in Cloud Markets

Explore top LinkedIn content from expert professionals.

Summary

Tech giants compete in cloud markets by offering powerful infrastructure and specialized tools that help businesses run applications, store data, and build artificial intelligence models remotely. In this fast-growing industry, companies like Amazon, Microsoft, Google, and others use strategic partnerships, custom technology, and AI integration to attract customers and stay ahead of rivals.

  • Invest in innovation: Cloud providers are constantly developing unique chips and AI models to improve performance and meet surging demand for computing power.
  • Build strategic partnerships: Companies often collaborate—even with competitors—to secure access to essential hardware and services, ensuring their platforms can handle advanced workloads.
  • Embrace flexibility: Businesses are increasingly using multiple cloud providers to avoid shortages and manage costs, driving operational complexity but expanding their access to resources.
Summarized by AI based on LinkedIn member posts
  • Shelly Palmer

    Professor of Advanced Media in Residence at S.I. Newhouse School of Public Communications at Syracuse University

    382,953 followers

    Yesterday, Reuters reported that OpenAI finalized a cloud deal with Google in May. This might look like routine tech news. It is not. This is a strategic inflection point in the AI infrastructure wars. OpenAI, whose ChatGPT threatens the core of Google Search, is now paying Google billions of dollars to power its growth. This was not a partnership of choice. It was a partnership of necessity. Since ChatGPT launched in late 2022, OpenAI has struggled to meet soaring demand for computing power. Training and inference workloads have outpaced what Microsoft’s Azure alone can support. OpenAI had to expand. Google Cloud was the solution. For OpenAI, the deal reduces its dependency on Microsoft. For Google, it is a calculated win. Google Cloud generated $43 billion in revenue last year, about 12 percent of Alphabet’s total. By serving a direct competitor, Google is positioning its cloud business as a neutral, high-performance platform for AI at scale. The market responded. Alphabet shares rose 2.1 percent on the news. Microsoft fell 0.6 percent. There are only a handful of true hyperscalers in the U.S. AWS, Azure, and GCP dominate, with Oracle and IBM trailing behind. The appetite for compute is growing faster than any one company can satisfy. In this new phase of the AI era, exclusivity is a luxury no one can afford. Collaboration across competitive lines is inevitable. -s
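As a quick sanity check, the revenue figures in the post are internally consistent; a minimal sketch using only the post's numbers (the Alphabet total is merely what those numbers imply, not a figure from the post):

```python
# Back-of-the-envelope check of the revenue figures quoted above.
google_cloud_revenue_b = 43.0    # Google Cloud revenue last year, $B (from the post)
cloud_share_of_alphabet = 0.12   # ~12% of Alphabet's total (from the post)

# Implied Alphabet total revenue, $B
implied_alphabet_total_b = google_cloud_revenue_b / cloud_share_of_alphabet
print(f"Implied Alphabet total revenue: ${implied_alphabet_total_b:.0f}B")  # ≈ $358B
```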

  • Saanya Ojha

    Partner at Bain Capital Ventures

    79,808 followers

In true Silicon Valley fashion, the AI arms race is getting down to the silicon itself.🤺 The Big 3 hyperscalers (Amazon, Microsoft, and Google), traditionally NVIDIA’s biggest customers, are encroaching on its core turf by developing their own AI chips. Meanwhile, NVIDIA, the juggernaut of GPUs, is pushing into hyperscaler territory with DGX Cloud, offering AI infrastructure that could, in theory, make it less reliant on Big Tech clouds. Why does this matter? Because the silicon layer is a battleground for billions. 💸 Hyperscalers are tired of footing NVIDIA's massive GPU bill, so they’re investing big in in-house silicon to cut costs and assert control. Amazon’s Inferentia and Trainium chips, Google’s TPUs, and Microsoft’s Maia project are all about building tech stacks with minimal dependency on outside hardware. The goal? Price control and performance tailored to hyperscaler needs. For NVIDIA, this is about strategic survival. Its business model relies on selling chips that empower the same hyperscalers who are now racing to break free. DGX Cloud and partnerships with Oracle, Google, and Microsoft (ironically) are NVIDIA’s way of expanding beyond hardware sales into high-margin, AI-driven cloud services. NVIDIA is doubling down on services, building out a powerful software ecosystem, and offering a soup-to-nuts solution for enterprises wanting AI access without the infrastructure burden. If hyperscalers get their chips right, NVIDIA's dominance could be challenged. But if NVIDIA’s DGX Cloud gains traction, it’s a warning shot that it can play in hyperscaler territory too—and may lure AI workloads directly onto its infrastructure. The stakes have never been higher, so let the chips fall where they may.🌐

  • Jason Saltzman

    Head of Insights @ CB Insights | Former Professional 🚴♂️

    36,146 followers

    Hyperscalers are fighting the cloud wars. Startups are fighting for compute. Our latest business relationship data shows GCP taking 38% of new AI startup relationships, AWS with 30%, Azure with 8%, and… 25% of AI startups taking a multi-cloud approach. In most cases, this isn't a sophisticated infrastructure strategy. It's a response to a fundamental supply constraint: there simply aren't enough GPUs to meet demand. Hyperscaler strategies: Google Cloud is making an aggressive play for startups by embedding Gemini into its solutions, creating native AI builders on GCP. Their recent deal with Cipher signals they're serious about expanding compute capacity to support this strategy. If you're trying to capture more of the value chain by betting on startups building with your LLM, you need to invest in the infrastructure to back it. AWS has scale and the largest startup footprint, but without a proprietary LLM driving demand, they're competing primarily on compute access and availability. When GPU supply is tight everywhere, being “Switzerland” has its advantages. Microsoft largely sits out the early-stage startup battle. With existing enterprise distribution and key partnerships with OpenAI and now Anthropic, they can focus on inference and deployment rather than competing for training workloads. Many early-stage AI startups are locked to a single provider because of startup programs, credit packages, and early partnerships. But, as compute costs scale and availability remains constrained, that 25% multi-cloud number is likely to grow. Companies are increasingly willing to add operational complexity if it means access to the compute they need. Compute remains the bottleneck. Navigating the supply-demand imbalance – through infrastructure investment, partnerships, and strategic positioning – will determine the next 12, 18, 24 months of growth for both startups and cloud providers.

  • Tomasz Tunguz
    405,328 followers

What force could dethrone AWS after more than a decade of unchallenged dominance? For years, Amazon Web Services ruled the cloud infrastructure market. It was the unquestioned default choice for every startup. Then OpenAI released GPT-4. Microsoft’s exclusive partnership with OpenAI transformed Azure from a second-place player into the obvious choice for AI-first companies. With this week’s earnings, we are seeing the ultimate impact of that strategic decision. The numbers reveal a market in transition. AWS generates $30.6B in quarterly revenue compared to Azure’s $22.9B and Google Cloud’s $12.5B, but absolute size masks the real story of momentum shifting beneath the surface. Since GPT-4’s launch, Azure has consistently added more to its ARR than AWS. In two of the previous eight quarters, Google has booked more new ARR than Amazon. Jamin Ball’s data highlights the trend. Azure surged from 35.8% market share in Q1 2022 to 46.5% during the GPT-4 launch in Q2 2023, seizing first place through its OpenAI advantage. Google Cloud has captured 6.4 percentage points of market share since Q1 2022, growing from 19.1% to 25.5% in Q2 2025. Both Microsoft and Google have stronger AI value propositions than Amazon, with the OpenAI models and Gemini models respectively. And it shows in their growth rates: Microsoft’s and Google’s growth rates now exceed 39% and 32%, respectively, and are accelerating. Meanwhile, Amazon’s growth rate is flat at 17%. The market explosion tells an even more dramatic story. Total quarterly ARR additions grew from $5.9B in Q1 2022 to $21.4B in Q2 2025—a nearly four-fold increase that reflects AI’s transformative impact on enterprise spending. Put another way, Google’s new ARR bookings in the last quarter are the size of the whole industry’s bookings just three years ago. With Azure and Google Cloud Platform growing faster than AWS, the market may be heading from one dominant incumbent toward three roughly equal players.
The next trillion dollars in cloud revenue will flow to the platforms that best integrate AI into every layer of their stack.
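The momentum figures above can be verified in a few lines; a minimal sketch using only the numbers quoted in the post:

```python
# Sanity checks on the cloud momentum figures quoted above (all inputs from the post).
aws_q, azure_q, gcp_q = 30.6, 22.9, 12.5   # quarterly revenue, $B

# Each provider's share of big-three quarterly revenue
total = aws_q + azure_q + gcp_q
for name, rev in [("AWS", aws_q), ("Azure", azure_q), ("GCP", gcp_q)]:
    print(f"{name}: {rev / total:.1%} of big-three quarterly revenue")

# Growth in total quarterly ARR additions, Q1 2022 -> Q2 2025
arr_add_2022, arr_add_2025 = 5.9, 21.4     # $B
print(f"ARR additions grew {arr_add_2025 / arr_add_2022:.1f}x")  # ~3.6x, the near four-fold jump cited
```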

  • David Linthicum

    Top 10 Global Cloud & AI Influencer | Enterprise Tech Innovator | Strategic Board & Advisory Member | Trusted Technology Strategy Advisor | 5x Bestselling Author, Educator & Speaker

    194,392 followers

NVIDIA Just Proved It: In the Cloud, It Pays More to Be a Partner Than a Competitor

    NVIDIA’s quiet restructuring of its DGX Cloud efforts is more than an org chart tweak—it’s a strategic admission that being the arms dealer is far more profitable than trying to be a new army. Let’s be candid: trying to stand up a true public cloud that competes with AWS, Microsoft, and Google was never going to be NVIDIA’s highest-ROI path. The hyperscalers are its largest customers, its biggest route to market, and the primary monetization channel for its GPUs and AI software stack. My read is that there were very direct, offline conversations between NVIDIA and the major cloud providers. The underlying message was likely simple: do you want to sell us tens of billions of dollars’ worth of GPUs and platform software, or do you want to fight us for the same enterprise AI workloads? From that angle, the risk–reward equation becomes obvious. Running a global cloud means massive capital investment, operational complexity, regulatory exposure, and, most importantly, channel conflict with the very companies that drive most of your demand. Partnering means high-margin silicon, systems, and software, sold into every major cloud, amplified by co-selling and deep technical integration. One path pits you against your best customers. The other path makes those customers your force multipliers. By refocusing DGX Cloud internally and moving away from the idea of being a “fourth hyperscaler,” NVIDIA is acknowledging where its true leverage lies: owning the AI acceleration layer that everyone else builds on, rather than fighting to own the entire cloud stack. Let the hyperscalers own the data centers and the customer relationships; NVIDIA will own the infrastructure that makes their AI stories possible. That’s not a retreat. It’s a smarter, more scalable bet on where the real long-term value is in the cloud ecosystem.
#NVIDIA #CloudComputing #AIInfrastructure #Hyperscalers #AWS #Azure #GoogleCloud #DGXCloud #CloudStrategy #DigitalTransformation #EnterpriseIT #AIPlatforms #TechStrategy

    Nvidia reportedly backs away from its effort to make its own public cloud, team reorg eases friction with customers — chipmaker shifts unit's focus to internal R&D

  • Ben Baldieri

    GPU Project Finance | AI Infrastructure | Solutions with Panchaea | Insights with The GPU

    14,322 followers

GPU clouds aren’t competing with each other. They’re competing with the hyperscalers. They just don’t realise it yet. Right now, most GPUaaS platforms are still playing the infrastructure game:
    • Rent out hardware
    • Compete on price
    • Win on availability
    That works today because the hyperscalers aren’t playing that game. They’re not trying to win GPU/h customers. They charge $6–$13 per GPU/h for H100s, not because they can, but because they don’t want to sell you bare metal. That’s not their business model. Their margins are in platform spend, not infrastructure. But that’s going to change. Because the hyperscalers are already:
    • Designing their own chips
    • Tuning them to the workloads their own customers already run
    • Investing billions into infrastructure that looks a lot more like what the neoclouds are building
    When those chips hit production and enterprise customers really start entering the market at scale? When the custom chips beat H100s or H200s on real enterprise workloads? Then the hyperscalers will enter the infrastructure game, because the economics will finally make sense. And who do you think wins then? Enterprise customers are already on Amazon Web Services (AWS), Microsoft Azure, Google Cloud. They already have billing, integration, compliance, and workflows sorted. And if the hyperscalers suddenly offer more performant hardware, with simpler onboarding and no need for custom setups? Why wouldn’t they switch back? That’s the real risk. Unless you’ve got something those customers can’t get from AWS, Azure or GCP, you're just a bridge. A temporary workaround. A margin play waiting to be crushed. So what’s the solution? Pick a vertical. A niche. A use case. Talk to those customers. Understand what their business actually needs. Then build up the stack in that direction. Offer not just GPUs, but the platform, tooling, services, and support they need to get real outcomes, faster, cheaper, and more transparently than the hyperscalers ever will.
And the deeper into the niche you go? The less competition you’ll have. Go deep enough and you’re not selling GPUs anymore. You’re running a micromonopoly. And that’s where your workloads and your margins live. Some playing the infrastructure game will survive, of course, and some will even thrive. But when victory here is a function of:
    • Speed to market
    • Cost of capital
    Thinking you can win on both against some of the biggest and most profitable businesses in history is bold at best, irresponsible at worst. Happy to be wrong here, so let me know your thoughts!
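To make the pricing gap concrete, a rough cost comparison. The $6–$13/GPU-h range is from the post; the $2.50/GPU-h neocloud rate is a hypothetical figure chosen only for illustration:

```python
# Illustrative cost gap between hyperscaler and neocloud H100 rental pricing.
# $6-$13/GPU-h is the range quoted in the post; $2.50 is a hypothetical
# neocloud rate used for illustration only.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(rate_per_gpu_hour: float, gpus: int = 8) -> float:
    """Monthly cost of running `gpus` GPUs around the clock."""
    return rate_per_gpu_hour * gpus * HOURS_PER_MONTH

for label, rate in [("hyperscaler (low)", 6.0),
                    ("hyperscaler (high)", 13.0),
                    ("neocloud (hypothetical)", 2.50)]:
    print(f"{label}: ${monthly_cost(rate):,.0f}/month for an 8-GPU node")
```

At these rates an always-on 8-GPU node runs roughly $35k–$76k per month on a hyperscaler versus about $15k at the hypothetical neocloud price, which is the margin play the post describes.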

  • Michael Westerweel

    Mr. Marketplaces | Profitability | ChannelEngine Platinum | Mirakl | Public speaker | Co-founder & CEO @ ChannelMojo | Founder @ Marketplace Meetups

    14,648 followers

    Amazon might drop $10 billion into OpenAI. But not to chat with ChatGPT. This is about chips. Prestige. And crushing Azure’s monopoly over frontier AI. AWS wants OpenAI to stop hugging Nvidia and start cuddling Trainium. Why? Because those gleaming new data centers don’t fill themselves. Quick reminder. Microsoft already holds 27% of OpenAI. They’ve got resale rights to GPT-4 Turbo, GPT-5 and whatever comes next. If you're an AWS exec, that's a massive hole in your Bedrock story. Enter the workaround: don’t resell the models. Just own part of the company. Let OpenAI run its workloads on Trainium. Brag about that. Use the tech inside Ads, Alexa and cashierless stores. Quietly level the playing field. 🍿 Now imagine being a seller watching this unfold. The AI wars are shaping retail infrastructure more than any product launch or peak season forecast. Big cloud moves are about to hit every touchpoint in ecommerce. 💰 AWS isn't investing to play nice. It's investing to make Bedrock look like the real neutral option. 🔋 Trainium just became the hottest chip AWS ever made. 🧠 OpenAI gains chip diversification. Think Nvidia + Google + AWS in its stack. 💼 Sellers will feel it in pricing tools, search ranking, ad targeting and chatbot workflows. 🔐 Microsoft still holds the resale keys. But internal access? That’s still game-changing. And don’t overlook this. A $10B equity check from Amazon would trigger massive regulatory heat on circular funding models. Cloud platforms backing AI startups, which then rent from the same clouds, isn't going unnoticed in DC or Brussels. Zoom out for a sec. Retail operators are now competing on latency, model quality, and inference cost per token. Not just logistics or ACoS. This AI power struggle isn’t abstract. It's shaping how fast your product gets recommended. Or whether it gets recommended at all. Watch what happens when OpenAI quietly optimizes for Trainium next quarter. That’s the tell. #ecommerce #marketplaces #AIcommerce #AWS #OpenAI

  • Sharad Bajaj

    VP Engineering, Microsoft | Agentic AI & Data Platforms | Building Systems that Make Decisions, Not Predictions | Ex-AWS | Author

    27,788 followers

Sunday AI Pulse

    1. OpenAI signed a multiyear cloud deal with Amazon worth about $38 billion, locking in massive compute capacity and signalling continued hyperscaler competition over AI infrastructure. This is not just a vendor choice. It reshapes where large models run and who controls the physical stack.
    2. Meta announced a sweeping plan to invest roughly $600 billion in U.S. infrastructure and jobs over the next three years, with a major focus on AI data centers. This underlines how big tech is shifting from model R&D to a race for physical capacity and nationwide deployment.
    3. Microsoft expanded commercial programs and deals this week, from a $9.7 billion cloud contract tied to AI needs to a new Agentic Launchpad in the UK with NVIDIA to accelerate agentic AI startups. The pattern is clear. Cloud providers are bundling compute, go-to-market, and engineering support to turn models into businesses.
    4. Big money is still flowing. PitchBook and coverage this week show venture capital and corporate budgets concentrating on AI infrastructure and enterprise AI, though the returns calculus is getting tighter as scrutiny on governance and deployment grows. Expect capital to chase both scale and defensible enterprise moats.
    5. Small player to watch. Daylight, a Tel Aviv cybersecurity startup launched in 2025, secured a large preemptive funding package this week. Their focus on AI-driven managed detection and response highlights how startups are emerging to solve AI-native security needs as models and infra scale. Early bets here are worth watching for enterprise risk management.

    My takeaways for leaders and builders

    1. This week’s headlines are a reminder that AI is shifting from algorithmic novelty to industrial strategy. The competition is now about data center footprint, network partnerships, and supply chain for compute, not just model accuracy.
    2. If you run product or engineering, your near-term decisions should prioritize integration points. Where does your model run? Who owns the data path? How will you operate when a provider changes terms or capacity is constrained?
    3. For founders and VPs thinking about hiring or fundraising, the playbook matters. If you are building vertical workflows and strong operating models around data and automation, you are building something that survives a shift in who controls the lowest layers.
    4. Security and governance are urgent. As infra scales, so does attack surface and operational complexity. Expect new categories of startups and internal teams focused on AI reliability and detection.

    What stood out for you this week? Are you seeing the same infrastructure-centric shift in your org, or is your focus still primarily on models and features? #AI #Infrastructure #EnterpriseAI #AIAgents #DataAndAI #Startups

  • Jeff Yelton

    President and CEO

    6,689 followers

The Cloud Isn’t Slowing Down — It’s Running Out of Power

    AWS, Microsoft, and Google all reported this week, and the story is remarkably consistent:
    1. They aren’t short on demand.
    2. They’re short on capacity.
    This isn’t 2001’s telecom bust where “dark fiber” sat unused. Every GPU that comes online is fully booked. Power is now the constraint, not customers. AWS added 3.8 GW of power in the last 12 months and plans to double capacity by 2027. Bedrock could become its next EC2 moment. Microsoft is doubling its data-center footprint in two years, fueled by 900M+ AI users and OpenAI’s $250B Azure commitment. Google raised CapEx to $93B as Gemini-powered enterprise demand explodes.
    Three takeaways:
    1. Capacity is the moat. Whoever controls scalable power wins.
    2. Silicon is the margin unlock. Custom chips separate profit leaders from infrastructure providers.
    3. Agents are the next workload. The next cloud race is about inference, not training.
    Data centers are the new oil fields. GPUs are the rigs. And the hyperscaler that owns both will define the next decade of enterprise computing. #Cphere #JustTheFacts #AIInfrastructure #CloudComputing #PrivateEquity #DigitalStrategy #EnterpriseGrowth
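The 3.8 GW figure can be translated into rough accelerator counts; a back-of-the-envelope sketch in which the ~1.4 kW all-in draw per accelerator (chip, host, cooling, facility overhead) is an assumption for illustration, not a number from the post:

```python
# Rough capacity implied by the 3.8 GW figure above. The ~1.4 kW per-accelerator
# all-in draw (GPU + host + cooling/facility overhead) is an assumed figure.
added_power_gw = 3.8          # AWS power added in 12 months (from the post)
kw_per_gpu_all_in = 1.4       # assumed facility-level kW per accelerator

gpus_supported = added_power_gw * 1e6 / kw_per_gpu_all_in  # GW -> kW, then divide
print(f"~{gpus_supported / 1e6:.1f}M accelerators supportable by {added_power_gw} GW")  # ~2.7M
```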

  • Aaron Ginn

    CEO & Co-Founder @ Hydra Host | Forbes 30 under 30

    8,489 followers

Public cloud won’t win the AI race. Amazon Web Services built the modern cloud. It turned data centers into elastic infrastructure and gave America the compute backbone of the digital era. But now, the company that invented the category is losing altitude. The reason is simple: cloud is no longer the core layer of innovation in a GPU-defined era. The moat is gone. Customers want chips, not cloud. Last week’s 15-hour outage symbolized a decade of drift, the stagnation of the old cloud monopolies. AWS’s market share has fallen from nearly 50% in 2018 to 38% today. Rivals like Microsoft, Google Cloud, and upstarts like CoreWeave and Oracle are outpacing it in AI deals, chip strategy, and model support. The deeper issue isn’t just technology but bureaucracy and hesitation—the symptoms of a CPU-era monopoly struggling to adapt. AWS missed early investment in Anthropic when compute demand was obvious and only wrote a $4 billion check after Google had already secured the partnership. AWS is behaving like a late adopter in a market it once defined. Now Amazon is reorganizing, swapping executives, and rushing out AI tools like Bedrock, SageMaker, and Q. But the trend line is clear: the center of gravity in compute is shifting away from public cloud. Its last play is its own silicon, a Hail Mary to stay relevant. But production volumes from TSMC are small compared with NVIDIA and AMD. Once you factor in that AWS’s custom chips will offer lower performance, that’s a drop in the bucket in the coming wave of AI compute demand. The next era of infrastructure leadership won’t come from whoever has the most data center space or best cloud fabric. It will come from whoever can convert power into usable compute fastest with the lowest unit costs possible. AWS taught the world how to rent CPU compute. Now, it has to relearn how to compete in a GPU world.
