The promise of artificial intelligence to transform the global economy is often cast in sweeping, global terms. But new research from London Business School suggests that who adopts AI may matter just as much as how powerful the technology becomes. Rather than delivering broad-based gains, early evidence indicates that AI could amplify existing competitive advantages, allowing already dominant firms to pull further ahead while smaller rivals struggle to keep pace. As policymakers, investors and executives race to understand AI’s real-world impact, this emerging divide is raising a critical question: will AI level the playing field, or entrench a new corporate elite?
How early adopters of AI are reshaping competition and market structure
As machine learning tools and generative models move from pilot projects to core infrastructure, the businesses that embrace them first are quietly rewriting the rules of rivalry. Early adopters are using AI not only to cut costs but to redraw industry boundaries, turning routine operations into high-speed, data-rich systems that are difficult for slower competitors to imitate. In sectors from logistics to retail banking, these firms are building proprietary data assets, automated decision engines and AI-augmented talent pipelines that create a widening gap in capabilities. The result is a new kind of competitive asymmetry: those who move early gain compound advantages in insight, speed and experimentation, leaving latecomers to compete on shrinking margins and legacy processes.
These shifts ripple out into market structure, often concentrating power in the hands of a few AI-enabled incumbents or insurgent entrants. Distinctive patterns are emerging:
- Data-rich incumbents use AI to deepen customer lock-in and optimise pricing at scale.
- Digital-native challengers deploy AI to bypass traditional bottlenecks such as call centres or branch networks.
- Specialist suppliers build narrow AI tools that become industry standards, creating new “choke points” in value chains.
| Firm Type | AI Advantage | Market Impact |
|---|---|---|
| Incumbent with legacy scale | Automation of complex, high-volume workflows | Higher barriers to entry |
| Digital-first entrant | AI-driven personalisation and rapid iteration | Disruption of mid-tier rivals |
| Specialist AI provider | Core algorithms and APIs | New strategic bottlenecks |
Why firm capabilities and data access determine who gains from AI
Some companies walk into the AI era with an arsenal: rich proprietary datasets, cloud-ready infrastructure, and teams that understand how to turn algorithms into alpha. Others arrive with only off-the-shelf tools and thin data trails. The result is a widening performance gap, where firms that already excel at integrating technology into workflows can extract far more value from AI than competitors. These organisations are not just automating tasks; they are reshaping decision-making, product design and pricing in real time, using models trained on data that rivals simply cannot see.
Access to data and the ability to exploit it now function as a competitive filter. Firms with strong in-house capabilities can combine AI with their own operational, customer and market information to build systems that are both defensible and hard to imitate. By contrast, businesses relying solely on public or generic datasets risk commoditised outcomes and thinner margins.
- Proprietary data turns AI outputs into unique strategic assets.
- Technical depth allows faster experimentation and deployment.
- Process maturity ensures models actually change frontline behaviour.
- Governance strength builds trust in AI-driven decisions.
| Firm type | Data access | AI payoff |
|---|---|---|
| Frontier adopters | Deep, proprietary, integrated | Market-shifting gains |
| Fast followers | Mixed internal and external | Incremental efficiency |
| Late adopters | Mostly public, fragmented | Limited, imitable benefits |
How regulators and policymakers can prevent an AI-driven concentration of power
Public authorities can shape whether AI amplifies existing monopolies or fuels broader competition by intervening early where structural risks are highest. That means scrutinising data deals and cloud partnerships that effectively lock startups into the ecosystems of a handful of tech giants, and applying merger control to acquisitions of AI-native firms before their technology is quietly folded into already dominant platforms. Regulators can also demand algorithmic openness for high-impact systems, requiring firms to disclose model capabilities, training data provenance, and evaluation metrics to independent auditors under strict confidentiality, rather than to the market at large. This shifts oversight from voluntary “ethics washing” to enforceable standards, particularly in sectors like finance, healthcare and labour platforms where AI can entrench gatekeeping power.
Policymakers can further rebalance the field by investing in shared infrastructure that lowers the fixed costs of entry. Open compute credits for research labs, public datasets with clear licensing, and interoperability requirements for foundational models help smaller players plug into the AI economy without surrendering control to incumbents. Targeted measures can include:
- Access mandates for critical cloud and chip resources on fair, reasonable and non-discriminatory terms.
- Data portability rights so companies and consumers can move training data between providers.
- Public-interest sandboxes where innovative AI services are tested under supervision, not under the shadow of big-tech infrastructure.
| Policy Tool | Main Target | Power Effect |
|---|---|---|
| Merger scrutiny | Big-tech acquisitions | Prevents silent roll-ups |
| Compute access rules | Cloud & chip giants | Widens AI participation |
| Transparency mandates | High-risk AI systems | Enables real oversight |
Strategies for lagging firms to catch up and deploy AI responsibly
For organisations that have hesitated, the path forward is less about chasing headlines and more about building disciplined capacity. Late adopters can start by ring-fencing modest “learning budgets” and forming cross-functional squads that pair domain experts with data scientists, while embedding legal, compliance and HR from day one. Instead of rushing into full-scale change, they can pilot models on well-bounded use cases, such as document summarisation, demand forecasting or customer-service triage, where risks are easier to monitor and measure. Transparent governance frameworks, including clear model ownership, audit trails and bias-testing protocols, help ensure that speed does not come at the expense of accountability. Crucially, leaders must treat AI as a management challenge, not just a technical one, aligning incentives so that employees are rewarded for surfacing risks as readily as for finding efficiencies.
Closing the gap also means investing deliberately in people and partnerships. Firms that are behind can move faster by leveraging trusted vendors, open-source tools and cloud platforms instead of building everything in-house, while simultaneously upskilling staff in data literacy, prompt design and critical evaluation of AI outputs. To avoid exacerbating inequalities inside the firm, training needs to reach frontline workers as much as senior managers, supported by clear communication about how AI will reshape roles rather than replace them in the dark. The table below illustrates how cautious adopters can map priorities across three dimensions (technology, people and governance) to catch up without cutting corners:
| Dimension | First 6 Months | Next 12 Months |
|---|---|---|
| Technology | Run 2-3 low-risk pilots using external platforms | Integrate successful pilots into core workflows |
| People | Launch basic AI literacy and ethics training | Create internal “AI champions” in each business unit |
| Governance | Set up an AI steering committee with clear guardrails | Embed impact assessments and regular model audits |
- Start small, measure hard: Focus on pilots with clear metrics for value and risk.
- Co-create with employees: Involve staff in design to boost adoption and surface ethical concerns.
- Document everything: Maintain records of data sources, model choices and decision rationales.
- Review continuously: Treat AI systems as living products that require ongoing oversight and revision.
Conclusion
As the dust settles on the hype surrounding artificial intelligence, one conclusion is becoming unavoidable: its economic impact will be shaped less by the technology itself than by who controls and deploys it. The London Business School research underscores that AI is not a rising tide lifting all boats, but a selective current, accelerating those already best positioned to exploit it.
For policymakers, that raises urgent questions about competition, skills and the concentration of power in a handful of firms. For managers, it turns AI from a buzzword into a strategic choice: whether to invest, partner or risk falling behind those who do. And for workers, it signals that the future of jobs will depend not only on what AI can do, but on which employers are driving its adoption and how they choose to use it.
AI may be a general-purpose technology, but its consequences will be anything but general. As adoption spreads, the real fault lines in the global economy may run not between humans and machines, but between the firms that harness AI and everyone else.