Leading artificial intelligence companies have been found leaking sensitive credentials on GitHub, underscoring the growing risk of unmanaged machine identities as AI and automation expand, according to Shane Barney, Chief Information Security Officer at Keeper Security.
Barney was responding to a recent report by Wiz that identified exposed API keys, tokens and other programmatic secrets across major AI developers. He said such exposures show how quickly machine-to-machine connections multiply as development scales and automation deepens.
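Scanners of the kind Wiz used typically work by matching source files and commit history against known credential formats. The sketch below is a minimal, illustrative version of that idea; the rule names and regular expressions are assumptions for demonstration, not Wiz's actual detection rules, and production scanners (such as gitleaks or truffleHog) maintain far larger, vendor-specific rule sets.

```python
import re

# Illustrative patterns only; real secret scanners ship hundreds of
# vendor-maintained rules with entropy checks and allow-lists.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs for suspected secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# A fabricated, non-functional key in the AWS access-key format:
sample = 'config = {"aws_key": "AKIAABCDEFGHIJKLMNOP"}'
print(scan_text(sample))  # flags the AKIA-prefixed string
```

Running a check like this in pre-commit hooks or CI is one common way teams catch credentials before they reach a public repository.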
“Each of these credentials represents an access pathway that, if left unsecured, can expose sensitive systems or data,” Barney said. He noted that as organisations adopt AI and cloud-native development, the number of non-human accounts continues to increase, often beyond the reach of conventional identity and access management systems.
Barney said that when visibility into machine-based credentials is limited, risk spreads quietly across otherwise well-protected systems. He called for sustained visibility and control through enterprise-wide secrets management, continuous monitoring and automated credential rotation.
“Reducing that risk requires continuous oversight and least-privilege access policies that contain exposure without slowing innovation,” he said. “Treating machine-based credentials with the same rigour applied to human users strengthens both resilience and operational trust.”
He added that combining Privileged Access Management with secrets management could further improve governance by enforcing strict access boundaries and accountability for elevated permissions.
“The Wiz findings serve as a reminder that as technology becomes more intelligent and interconnected, security must advance at the same pace,” Barney said. “The fundamentals still apply: know what identities exist, understand what they can access and ensure those privileges are tightly governed.”