AI ‘autonomy’ fears say more about human design choices than machines, security expert says

Speculation around AI agents acting independently masks deeper concerns about access controls, permissions and human design choices that ultimately determine how artificial intelligence behaves, cybersecurity experts say.

Debate over whether artificial intelligence (AI) systems are beginning to act independently has resurfaced after the emergence of Moltbook, an online network of AI agents that has drawn wide attention across social media and technology forums.

Some users have portrayed Moltbook as evidence that AI systems are developing autonomy or even consciousness, fuelling concerns about machines operating beyond human control. Cybersecurity specialists, however, say such interpretations overstate what current AI systems are capable of and distract from more immediate risks linked to how these tools are designed and deployed.

Zoya Schaller, Director of Cybersecurity Compliance at Keeper Security, said Moltbook's behaviour shows advanced language simulation rather than any form of independent agency.

“What looks like personality is really just very good mimicry,” Schaller said. “These systems are pattern-matching human language using enormous volumes of data scraped from the internet, remixing cultural references and familiar fiction ideas. That can feel unsettling, but it is not the same as autonomy or intent.”

Large language models, which underpin most generative AI tools, do not make decisions in the way humans do, experts say. Instead, they generate responses based on probabilities shaped by training data and system instructions.
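
That probabilistic process can be pictured with a toy sketch (hypothetical word probabilities, not a real model): the system assigns a weight to each candidate next word and samples from that distribution, with no goal or intent beyond weighted choice.

```python
import random

# Hypothetical probabilities a model might assign to the next word.
# A real LLM computes these over tens of thousands of tokens; the
# mechanism is the same: weighted sampling, not decision-making.
next_token_probs = {
    "the": 0.45,
    "a": 0.30,
    "autonomy": 0.15,
    "banana": 0.10,
}

def sample_next_token(probs, rng=None):
    """Pick the next word by sampling from the probability distribution."""
    rng = rng or random.Random()
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs, random.Random(0)))
```

Whatever "personality" emerges is the cumulative effect of many such weighted choices shaped by training data, which is the mimicry Schaller describes.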

When AI systems appear to act independently, it is usually because humans have granted them access to tools, data or credentials.

“When AI systems cause real-world harm, it is almost always because of permissions humans gave them, integrations that were built or configurations that were approved,” Schaller said. “It is not because a model decided to act on its own.”

The growing use of AI agents, software systems designed to carry out tasks such as data analysis, customer support or system monitoring, has increased scrutiny of how much autonomy should be built into such tools.

While such agents can improve efficiency, security professionals warn that poorly defined access controls can create powerful machine identities without clear accountability.

“If an AI system looks autonomous in the wild, it is usually because someone handed it access without proper guardrails,” Schaller said. “That is not a failure of containment. It is automation doing exactly what it was designed to do, only faster and at a much larger scale.”

Researchers say experiments such as Moltbook can still be useful for understanding how AI systems interact with each other and what patterns emerge when constraints are loosened. But they caution against drawing conclusions about sentience or independent intent.

“These networks may help us study system behaviour, but they do not change how these models fundamentally work,” Schaller said.

All the unglamorous work still matters in the age of AI

Security specialists argue that the focus should remain on governance, access management and oversight, rather than speculative fears about machines “waking up”.

“All the unglamorous work still matters,” Schaller said, pointing to security-first design, least-privilege access, isolation and continuous monitoring. “Those are what actually keep systems safe.”

As interest in AI agents grows, experts say organisations need to pay closer attention to how responsibilities are defined and who is accountable for machine actions, especially as AI tools become more deeply embedded in business processes.

“The real risk is not that bots are plotting,” Schaller said. “It is that humans make design decisions without fully considering the consequences.”

Tooba Aslam
Tooba Aslam is a Correspondent at Tech Observer Magazine, covering startups, industry and advertising and marketing. With a degree in marketing, she brings a balanced perspective to reporting on innovation and market trends.