Teramind Inc., a leader in workforce intelligence and user behaviour analytics, has launched its new AI Governance platform. This innovative solution aims to provide crucial behavioural oversight for artificial intelligence agents and tools now integrated into the modern workforce, addressing a growing "governance gap" as AI becomes more autonomous.
Key Takeaways
- More than 80% of workers use unapproved AI tools on the job.
- One-third of employees have shared proprietary data with unsanctioned AI services.
- Nearly half of workers (49%) conceal their AI usage from IT departments.
- AI-associated breaches and data leaks now cost over $650,000 per incident.
The Rise of Agentic AI and the Governance Challenge
As AI agents and tools increasingly operate alongside human employees, acting almost as team members, the need for closer scrutiny has become paramount. Teramind’s internal research highlights a significant trend: more than 80% of workers now use unapproved AI tools in their professional roles. This widespread adoption is compounded by risky practices: one-third of users report sharing proprietary data with unsanctioned services, and 49% of employees actively hide their AI use from IT teams. Together, these figures point to a critical "governance gap" rather than a technological one.
Isaac Kohen, chief product officer at Teramind, emphasised this point, stating, "This isn’t a technology gap — it’s a governance gap. The answer isn’t less AI. It’s governed AI."
Teramind's Solution for AI Visibility and Control
Responding to the escalating demand for AI oversight, Teramind has built the platform to run on an organisation’s existing infrastructure, requiring no additional deployment and offering immediate visibility into the AI layer. The platform captures prompts, responses, and autonomous behaviours from a wide range of AI tools, including popular services such as ChatGPT, Microsoft Copilot, and Google Gemini, as well as harder-to-detect "shadow AI" tools.
Teramind’s solution provides a comprehensive 360-degree view of an organisation’s risk profile concerning AI tools, both known and unknown. It achieves this by logging all AI activity through text capture, screen recording, optical character recognition, and full transcripts of agentic actions. This detailed logging aims to shed light on the risks associated with AI, especially concerning the potential exposure of sensitive information and the actions of rogue AI within company boundaries.
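The kind of activity record this sort of logging implies can be sketched as a simple data structure. The fields and names below are illustrative assumptions for the purpose of the example, not Teramind's actual schema or API:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIActivityRecord:
    """Illustrative audit-log entry for one AI interaction.

    All field names here are hypothetical; they are not
    Teramind's schema.
    """
    user: str                  # employee associated with the session
    tool: str                  # e.g. "ChatGPT", "Microsoft Copilot"
    prompt: str                # text submitted to the AI tool
    response_excerpt: str      # captured portion of the AI's reply
    agentic_actions: list = field(default_factory=list)  # autonomous steps observed
    sanctioned: bool = True    # whether the tool is on the approved list
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a "shadow AI" event, where an unapproved tool
# receives proprietary data and should be flagged for review.
record = AIActivityRecord(
    user="jdoe",
    tool="UnknownChatTool",
    prompt="Summarise our Q3 revenue spreadsheet",
    response_excerpt="Here is a summary of the figures...",
    sanctioned=False,
)
print(asdict(record)["sanctioned"])  # False -> flag for review
```

Structuring each captured interaction this way is what would let a governance tool distinguish sanctioned use from shadow-AI use and assemble the kind of audit trail the article describes.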
Addressing Regulatory Compliance and Risk Mitigation
Teramind underscores its commitment to compliance by ensuring its own AI system produces automatic audit trails that satisfy stringent regulatory and industry frameworks, including FedRAMP, SOC 2, ISO 27001, the EU AI Act, and HIPAA. This focus is crucial: AI-associated breaches and data leaks are estimated to cost organisations more than $650,000 per incident, and the full impact of potential exposure and rogue AI actions can be even more significant.
The launch of Teramind’s AI Governance platform signifies a proactive step towards enabling organisations to harness the power of AI responsibly, ensuring that the benefits of these advanced tools are realised without compromising security or compliance.