Recent scrutiny and restrictions placed on Grok AI in parts of Southeast Asia point to gaps in governance rather than a rejection of artificial intelligence, a cybersecurity executive said, as regulators in the region assess the social and security risks posed by generative AI tools.
Takanori Nishiyama, senior vice president for Asia-Pacific and Japan at Keeper Security, said actions taken or considered by authorities in countries including Indonesia, Malaysia and the Philippines reflected growing concern over how AI systems are deployed and controlled.
He said AI tools increasingly act autonomously, process sensitive data and interact with critical operational systems, making them comparable to a new class of digital identity, one that often operates beyond the reach of traditional security and oversight frameworks.
The issue is particularly pronounced in Asia-Pacific, where regulatory approaches to AI vary widely across borders, Nishiyama said. Singapore has moved towards structured assessment frameworks such as AI Verify, while Japan has favoured a softer regulatory approach focused on innovation, creating uneven risk exposure for organisations operating across multiple jurisdictions.
From a cybersecurity standpoint, Nishiyama said the core challenge lies not in the AI models themselves but in how access, identity and decision-making are governed once systems are deployed.
He warned that unregulated or informal use of AI tools within organisations can introduce unmanaged credentials, expose sensitive datasets and create gaps in accountability that are difficult to audit, particularly for enterprises and public sector bodies.
The risks also extend to end-users, he said, citing the potential for poorly governed AI systems to leak personal data, generate misleading information or be manipulated to carry out unauthorised actions, all of which could undermine public trust.
As AI adoption accelerates, Nishiyama said the focus should shift from outright bans to enforceable safeguards that balance innovation with accountability.
He said measures such as identity-first security, least-privilege access, full auditability and human oversight for high-risk actions would be essential for organisations seeking to deploy AI responsibly while complying with evolving regulation across the region.