Most AI conversations in boardrooms do not start with models. They usually start with, “Where do we run this?”
The same three providers still dominate that decision: AWS, Microsoft Azure, and Google Cloud. They control more than sixty percent of global cloud infrastructure spend, with AWS around 30 percent, Azure 20 percent, and Google roughly 13 percent.
On paper, they all do everything. Compute, storage, GPUs, managed Kubernetes, security tools, glossy AI services. In practice, they are splitting the enterprise AI market along three different instincts:
- AWS: breadth, maturity, and scale for complex estates.
- Azure: the default choice inside Microsoft-first enterprises.
- Google Cloud: the sharp end of data, analytics, and AI-native workloads.
Your job is not to pick a winner in the “AWS vs Azure vs Google Cloud” debate. It is to work out which bias matches your own reality.
Cloud Wars 2025 in a nutshell
If you strip away marketing, three signals matter for enterprise AI:
- Market share and ecosystem gravity
- AI engagement and capability focus
- Regional strength, especially in Asia and Singapore
Market share and ecosystem momentum
Recent data for Q2 2025 indicates:
- AWS around 29 to 30 percent of global cloud infrastructure
- Azure roughly 20 to 22 percent
- Google Cloud about 12 to 13 percent
The big three together control well over sixty percent of spend. That concentration has real effects. Talent, third party tools, and partner ecosystems tend to follow the largest platforms.
AI engagement: who is actually being used for AI
An IoT Analytics study on “cloud AI engagement” showed a different picture from raw market share:
- Microsoft punched above its weight. Its share of AI workloads outpaced its share of total cloud.
- Google also over-indexed on AI relative to its infrastructure share.
- AWS under-indexed slightly, with a large base of infrastructure workloads that are not yet AI-heavy.
In plain language: Azure and Google Cloud are disproportionately chosen when teams say “this project is about AI”. AWS still runs a massive amount of “everything else”.
AWS vs Azure vs Google Cloud for AI: how they actually differ
You can summarise the “enterprise cloud comparison” for AI in one line:
- AWS is the heavy-duty toolbox.
- Azure is the Microsoft-optimised operating environment.
- Google Cloud is the AI-native analytics lab that grew up.
Under that, the differences get specific.
AWS: scale, services, and deep infrastructure for AI
Strengths for AI workloads
- Breadth of services. AWS still offers the most extensive catalogue. From EC2, EKS, and managed databases to niche data and ML utilities, it has a service for almost every pattern.
- Mature GPU and accelerator options. Strong support for NVIDIA GPUs, Trainium and Inferentia, and tight integration with SageMaker, Bedrock, and the newer S3 Vectors for RAG and agentic architectures.
- Global reach. For organisations spanning multiple continents, AWS regions and availability zones are hard to beat.
Caveats
- Complexity tax. With maturity comes clutter. Teams with uneven AWS experience often end up with fragmented architectures and surprisingly high bills.
- AI experience is catching up, not leading. AWS is investing heavily in generative AI, but from a perception standpoint it is still seen as the infrastructure leader more than the “AI thought leader”.
Good fit when
- You have large, mixed workloads already on AWS and want to layer AI.
- You need fine control over networking, security, and multi-account governance.
- Your teams already speak “AWS” fluently and can exploit the toolbox rather than drown in it.
Azure: the “home ground” for Microsoft-driven enterprises
Strengths for AI workloads
- Native integration with Microsoft 365 and Power Platform. If your organisation lives in Outlook, Teams, and SharePoint, Azure becomes the most straightforward path to embed AI into daily work.
- Strong enterprise and hybrid story. Azure Arc, Azure Stack, and deep Active Directory integration make it attractive where on-prem and regulatory constraints still dominate.
- Copilot and Azure OpenAI. Microsoft’s aggressive push with Copilot and its OpenAI partnership has normalised AI inside office tools before many organisations even finished drafting their AI policy.
Caveats
- Licensing and lock-in concerns. When everything is Microsoft, pricing and flexibility can become opaque, which regulators in Europe are already scrutinising.
- Operational sprawl. Many enterprises discovered they had Azure “by accident” through M365 and then layered infrastructure on top without a clean landing zone.
Good fit when
- You are already deep in the Microsoft ecosystem and want AI to feel native, not bolted on.
- You care about hybrid cloud, identity, and governance as much as GPU counts.
- You want business users in finance, HR, or operations to access AI through familiar tools rather than custom apps.
Google Cloud: AI, data, and analytics as the starting point
Strengths for AI workloads
- Data and analytics DNA. BigQuery, Looker, and Google’s data stack have long been the benchmark for analytics. This heritage now feeds directly into Vertex AI and its generative capabilities.
- AI focus. Google Cloud leads in perception for AI and ML services, with strong engagement from teams that build RAG systems, recommendation engines, and experimentation-heavy workloads.
- Sensible price–performance. Many engineering teams report better bang for buck on certain GPU and storage configurations, although cost comparison is always workload-specific.
Caveats
- Smaller enterprise footprint. Google Cloud still trails AWS and Azure in general-purpose enterprise adoption. That affects partner ecosystems and internal comfort in some conservative organisations.
- Change management. Moving serious workloads to Google sometimes requires a cultural shift. Your team needs to think in terms of SQL, data models, and experimentation, not only VM migration.
Good fit when
- Data, analytics, and experimentation are the backbone of your AI roadmap.
- You are comfortable going with a challenger that is strong exactly where you plan to compete.
- You want a cloud that feels built for AI-first products, not retrofitted.
Strategic tradeoffs C-levels should actually care about
The typical “AWS vs Azure vs Google” blog stays at commodity level. Compute. Storage. Regions.
For enterprise AI, leaders should care about a different set of questions.
1. Where will your talent thrive or suffocate?
- Existing skill stack. If your architects and engineers are fluent in one cloud, moving pure infrastructure elsewhere just to “chase AI” will slow you down.
- Citizen developer ecosystem. Azure plus Power Platform vs Google plus Workspace vs AWS with its growing low-code story. Which environment will your non-technical teams actually use?
- Partner network. In Singapore and the wider ASEAN region, there are more ready-made partners for AWS and Azure, with Google Cloud closing the gap in analytics and AI.
2. How do you want to balance control and convenience?
- Full control. Rolling your own on EC2, Kubernetes, and various data stores gives control but increases operational load. Classic AWS pattern.
- Managed AI platforms. Vertex AI or Azure AI services reduce cognitive overhead but lock you deeper into that vendor.
- Compliance posture. Data residency, PDPA, MAS, GDPR, sector-specific rules. The more regulated you are, the more you must think about where models run, where logs live, and what cross-border access looks like.
3. Single cloud vs multicloud vs “anchored plus escape hatch”
The new AWS–Google multicloud connectivity announcement is not just a networking story. It is an organisational design story.
- Single cloud gives simplicity. One set of contracts, one control plane, one security posture.
- Multicloud by design gives leverage and resilience. It also multiplies your governance and skills requirements.
- Anchored plus escape hatch is how many high-maturity teams quietly operate. They pick a strong primary cloud, then design critical workloads so they can fail over to another provider during major outages or geopolitical shifts.
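The escape-hatch idea can be made concrete at the engineering level: route model calls through one thin interface so a secondary provider can take over when the anchor fails. The sketch below uses hypothetical stub providers rather than real SDK clients; in practice the stubs would wrap calls to something like Bedrock or Vertex AI.

```python
# Minimal sketch of the "anchored plus escape hatch" pattern.
# Provider objects here are hypothetical stubs, not real SDK clients.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Provider:
    name: str
    invoke: Callable[[str], str]  # prompt -> completion


class AnchoredClient:
    """Try the anchor cloud first; fail over to the escape hatch."""

    def __init__(self, anchor: Provider, escape_hatch: Provider):
        self.anchor = anchor
        self.escape_hatch = escape_hatch

    def complete(self, prompt: str) -> tuple[str, str]:
        try:
            return self.anchor.name, self.anchor.invoke(prompt)
        except Exception:
            # Anchor outage: reroute to the secondary provider.
            # A real implementation would also alert ops and record the event.
            return self.escape_hatch.name, self.escape_hatch.invoke(prompt)


# Stubbed providers standing in for real per-cloud SDK calls.
def flaky_anchor(prompt: str) -> str:
    raise TimeoutError("anchor region unavailable")


def healthy_fallback(prompt: str) -> str:
    return f"fallback answer to: {prompt}"


client = AnchoredClient(
    anchor=Provider("aws", flaky_anchor),
    escape_hatch=Provider("gcp", healthy_fallback),
)
used, answer = client.complete("summarise Q2 cloud spend")
print(used)  # -> gcp
```

The design point is that the failover logic lives in one place, so the organisation pays the multicloud complexity cost once, in a seam it controls, rather than in every workload.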
Singapore and ASEAN lens: what changes in this region
If you operate out of Singapore or the wider region, a few factors sharpen the comparison.
1. Government, compliance, and Smart Nation
- Singapore’s Smart Nation and its digital government projects have normalised public cloud inside highly regulated ministries and statutory boards.
- MAS TRM and PDPA drive much tighter governance on data handling, logging, and incident response.
- Hyperscalers now have region-specific architectures and blueprints that align with those rules, but implementation quality still depends on your integrator.
2. Regional investments
- AWS continues to pour capital into Singapore data centres, betting on ASEAN as a growth engine.
- Microsoft has strong government and enterprise ties, especially through M365 and local partner ecosystems.
- Google Cloud is building influence through AI initiatives, such as AI centres of excellence programmes in partnership with governments and local industry.
Competitor intelligence: what their cloud choice tells you
If you are mapping competitors, treat “cloud for AI” as a signal, not gossip.
- “On AWS with heavy AI investments” often means they have large, legacy, or mixed estates. They are likely strong in operations and infrastructure, sometimes slower on greenfield AI experimentation.
- “On Azure with Copilot everywhere” suggests they are threading AI into knowledge work, sales, and operations rather than building completely new AI products.
- “On Google Cloud for core workloads” usually signals a strategic bet on data, analytics, and AI-heavy products. Watch their experimentation velocity.
Also watch what they are not doing.
- If a competitor is stuck in a pure SaaS world with no serious IaaS or PaaS footprint, their AI story is probably constrained by their vendors.
- If they are loudly “multicloud”, look for where the real centre of gravity sits. Billing, core data platforms, and identity often betray the true anchor.
How to choose: five practical scenarios
Here is a simple decision lens you can take into your next steering committee.
Scenario 1: Microsoft-centric enterprise, AI as augmentation
- Your people live in Excel, Outlook, Teams, and SharePoint.
- You want AI to show up as Copilot, not as a separate portal.
Likely centre of gravity: Azure
Use Azure for core workloads and AI, but maintain an “escape lane” for specific AI or data projects that may fit better on Google Cloud.
Scenario 2: Mixed estate, heavy legacy, global footprint
- Multiple regions, with several business units already running workloads on AWS.
- You want to deploy AI services close to data and reduce data movement.
Likely centre of gravity: AWS
Consolidate the messy sprawl into a coherent landing zone, bring in Bedrock, S3 Vectors and managed ML where it helps, and design high-value AI workloads with portability in mind.
Scenario 3: AI-first product company
- Your competitive moat depends on models, data, and experimentation speed.
- You care more about analytics and MLOps than about traditional ERP hosting.
Likely centre of gravity: Google Cloud
Build around BigQuery, Vertex AI, experiment tracking, and data governance. Keep a minimalist footprint on AWS or Azure where customer requirements dictate.
Scenario 4: Regulated organisation in Singapore
- MAS, healthcare, education, or sensitive public sector work.
- Board and regulators care deeply about data sovereignty and operational risk.
Likely move: pick a primary cloud, then design AI workloads to be auditable, explainable, and fail-safe.
Here, your cloud choice matters less than your design discipline. Landing zones, IAM, logging, KMS, and data residency are the true risk levers.
Scenario 5: Leadership wants leverage and resilience
- You are worried about outages, concentration risk, or geopolitics.
- You do not want your AI roadmap tied to one vendor’s pricing mood.
Likely move: anchored multicloud
Pick a strong anchor (often AWS or Azure), then deliberately place selected AI workloads on another provider. Use the new generation of multicloud networking and observability tools to keep that complexity tolerable.
What to do in the next 90 days
A few concrete steps you can take without rewriting your entire strategy.
Map your current gravity.
- Where is your data?
- Which cloud already runs mission-critical systems?
- Where do your engineers actually feel at home?
Decide what “AI” means for you.
- Is it copilots for staff?
- Is it production-grade AI products?
- Is it analytics and decision support?
Run one serious, bounded pilot per cloud.
Even if you expect to consolidate, it is worth running a single AI pilot on each platform with identical goals. The lived experience will cut through vendor decks quickly.
Align certifications and capability building.
Use your chosen direction to shape internal certification paths across AWS, Azure, and Google Cloud. That creates real, portable value for both your people and your organisation.
Design your escape hatch now, not later.
Whatever you pick, decide how you would exit or rebalance if regulations, outages, or pricing shifted sharply.
Picking a cloud is not the strategic decision. How you use it is.
“AWS vs Azure vs Google Cloud” is not a beauty contest.
All three are capable. All three are investing heavily in AI.
The strategic difference sits in how you:
- Shape your AI use cases.
- Concentrate or spread your data.
- Build teams that can wield these platforms with discipline instead of improvisation.
If you want a neutral, Singapore-aware view on how to structure your AI and cloud roadmap, that is the kind of work we do daily with enterprise teams who are tired of vendor theatre.
Talk to us about mapping your cloud for AI strategy before you sign the next multi-year commit.
