The Securities and Exchange Board of India (Sebi) is stepping up efforts to regulate the use of artificial intelligence (AI) across the country’s financial markets, proposing that entities bear full responsibility for the outcomes generated by AI tools.
In a consultation paper released on Wednesday, the regulator laid out accountability requirements for market infrastructure institutions and intermediaries that use AI, focusing on data privacy, security, and integrity.
Sebi’s move comes amid a sharp rise in AI adoption across the financial services industry. AI-driven tools are increasingly deployed to enhance investor services, compliance processes, market analysis, and investment strategies.
Recognising both the opportunities and risks of AI, Sebi’s proposal aims to safeguard investor interests in a rapidly digitising market.
“Every person regulated by Sebi that uses such artificial intelligence tools and techniques while conducting its activities in the securities markets and for servicing its clients, regardless of the scope and size of adoption, shall be solely responsible for all the consequences of such use,” Sebi stated in its paper, emphasising that regulated entities would be accountable for the security and integrity of investor data.
The proposal represents a significant development in India’s financial regulation landscape, as it comes at a time when global regulators are also grappling with the governance of AI in financial systems.
Similar moves are underway in the European Union, where the proposed AI Act seeks to establish comprehensive accountability standards, and in the United States, where the Securities and Exchange Commission has issued guidance on AI use.
For Sebi, the push for AI accountability extends a trend of proactive oversight in financial technology. Sebi has already mandated AI use reporting for brokers, depositories, and mutual funds to gain insights into the extent of AI adoption in the market.
However, the new proposal goes further by explicitly assigning responsibility for any decisions or actions taken based on AI outputs, regardless of the degree of AI adoption.
“While recognising the need for intermediaries to embrace the latest techniques and AI tools, it is equally important to ensure the protection of investors with the usage of such tools,” Sebi noted, underscoring that AI should be used in ways that prioritise data security and prevent misuse of sensitive information.
Sebi’s paper highlighted that AI is transforming various aspects of the financial sector, allowing for more precise market analysis, stock selection, and investment decisions. In today’s markets, AI algorithms are increasingly relied upon to inform trading strategies, manage risks, and analyse portfolios, making the stakes higher for accurate and secure AI outputs.
Yet, alongside these advancements, the regulator pointed out that such extensive use of data-driven tools necessitates stringent safeguards to protect investors.
For instance, AI-generated decisions might carry unintended risks if they are based on flawed data or assumptions, something that the new guidelines intend to mitigate by holding firms accountable for all AI outcomes. “The entity…shall be responsible if the output arising from the usage of such tools and techniques is relied upon or dealt with,” Sebi’s proposal reads.
By placing accountability on regulated entities, Sebi aims to foster a culture of transparency and responsibility in AI usage, a step that could make India’s capital markets more secure and investor-friendly. The proposal mandates that all market players using AI must not only comply with existing laws but also implement safeguards to protect investor data and monitor AI-driven outcomes for accuracy and reliability.
While industry players may have mixed reactions to the proposed framework, Sebi is seeking feedback from stakeholders before moving to implement the rules. Some market participants may raise concerns about the potential operational and compliance costs associated with these guidelines, particularly smaller entities that lack the resources to develop robust oversight mechanisms.
However, investor advocacy groups are likely to support the move, as it seeks to address the growing concerns around data privacy and AI reliability.
With this proposal, Sebi positions itself as a leader in AI governance, signalling to other regulators that advanced technology in finance must be managed with a proactive, investor-centric approach.
The regulator’s consultation paper is open for public comment, with final guidelines expected to follow based on feedback. If implemented, Sebi’s accountability framework could set a precedent for responsible AI use in finance, ensuring that innovation advances alongside robust investor protections.