Imagine booking a flight online. You check the price, then check again a few hours later, and it has gone up. Now imagine not just one airline but all the major airlines subtly raising their prices in sync, even though no human from one airline spoke to a human from another. This is a new challenge called algorithmic collusion. It happens when sophisticated AI programmes, designed to set the “best” price for a company, learn to react to each other in ways that lead to everyone’s prices climbing higher, without any direct, old-fashioned price-fixing agreement. It is like a silent, invisible understanding among machines.
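The dynamic can be sketched in a toy simulation. The sellers, prices and rules below are purely illustrative assumptions, not any real pricing system: two algorithmic sellers each react only to the rival’s last published price, matching any increase and occasionally probing a small rise of their own. Neither is instructed to collude, yet prices ratchet upward in lock-step.

```python
# Toy illustration only: a stylised sketch of tacit algorithmic coordination.
# Each seller's rule is simple: follow a rival's price rise, or probe a small
# increase of its own. No agreement, no communication -- just reaction.

def next_price(own, rival, step=1.0, ceiling=100.0):
    """Return this seller's next price given its own and the rival's last price."""
    if rival > own:
        return min(rival, ceiling)    # follow the rival's increase
    return min(own + step, ceiling)   # otherwise, test a small rise

a, b = 50.0, 50.0                     # both sellers start at the same fare
for _ in range(30):                   # thirty rounds of simultaneous repricing
    a, b = next_price(a, b), next_price(b, a)

print(a, b)  # both prices have climbed together, with no explicit agreement
```

Because each probe is matched rather than undercut, the “competition” between the two programmes produces steadily rising prices, which is exactly the pattern regulators worry about.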
This “silent understanding” poses a unique problem for India as AI integrates deeper into our daily lives and economy. We want to embrace the benefits of AI but we also need to ensure fair play in our markets. How are other countries dealing with this concern?
Regulating the “invisible hand”
In the European Union, regulators are concerned about algorithmic collusion. They are trying to determine if existing laws designed for human agreements can cover these AI-driven patterns. They are exploring whether even unintentional coordination by algorithms should be considered anti-competitive and are considering new rules that could compel companies to prove their pricing algorithms are not behaving suspiciously. This highlights the need for a clear legal framework that can adapt to AI’s novel behaviours.
The United States often takes a more reactive approach. Its focus is usually on proving a direct agreement or a clear intent to collude. For algorithmic collusion, this means it is much harder to prove unless there is evidence that humans designed the algorithms to fix prices or knowingly allowed them to do so. Regulators might wait for clear harm to occur before stepping in. This shows that relying solely on traditional anti-collusion laws could leave us playing catch-up in the face of rapidly evolving AI.
China has been quicker to issue specific rules for algorithms, including those that influence pricing. Its regulations aim to prevent “unreasonable price discrimination” or “monopolistic practices” by algorithms, and are often linked to broader national economic and social goals. China’s swift action highlights the value of being able to introduce targeted rules when a clear competitive harm from AI is identified.
Challenges in regulating the “black box”
A significant hurdle in tackling algorithmic collusion stems from the “black box” nature of many advanced AI systems. Unlike human decisions, which can be questioned and explained, the intricate internal workings and decision-making processes of these algorithms are often opaque even to their creators. It is difficult to pinpoint why an AI agent arrived at a particular price, making it challenging to distinguish between legitimate parallel competitive behaviour and subtle collusive adaptation.
This lack of transparency complicates investigations, as there might be no “smoking gun” of human intent or explicit agreement. In addition, assigning liability becomes complex when the collusive outcome arises from autonomous machine learning rather than direct human instruction. This opacity means traditional antitrust tools struggle to identify and analyse such hidden coordination.
India’s crossroads
When clear risks like algorithmic collusion emerge, India should be ready to introduce specific narrowly tailored regulations that directly address the competitive harm. For instance, China’s regulations on recommendation algorithms aim to prevent anti-competitive behaviour and require some transparency in how algorithms influence market outcomes.
This approach of targeted competition rules will interact significantly with other broader AI regulations being contemplated for India such as those under the proposed Digital India Act. For example, if broader AI regulations mandate a certain level of explainability for high-risk AI systems (akin to the Chinese approach), this directly aids the Competition Commission of India (CCI).
Companies might be required to provide insights into how their algorithms arrive at decisions, even if full transparency is not possible. This means the CCI, when assessing potential collusion, would have crucial information helping it understand whether algorithmic designs inadvertently lead to coordinated prices, or whether a firm adequately monitored its AI for such tendencies. Such a layered regulatory environment, where explainability from broader AI norms supports competition enforcement, will be vital to navigating the “black box” problem.
The CCI is no stranger to adapting to technological shifts, having successfully applied its existing legal framework across diverse digital market scenarios. Its ongoing market study on AI, complemented by the Chairperson’s public comments committing to intervention only after a thorough understanding of AI’s functioning, signals preparedness for future AI governance in India.
This stance should be complemented by other sectoral regulators to ensure a whole-of-government approach to regulating AI. By decoding the functioning of AI and fostering an agile regulatory ecosystem, India can ensure that both consumers, such as those looking for ticket prices, and companies deploying AI for price monitoring are better off, celebrating innovation without sacrificing competition and consumer welfare.
The author is partner with Axiom5 Law Chambers LLP. Views are personal.

