Security Stop-Press: Invisible GenAI Usage Poses Security Risks for Businesses

A new report has revealed that 89 per cent of enterprise Generative AI (GenAI) usage happens without IT oversight, thereby exposing organisations to data leaks and unauthorised access. Many employees use GenAI tools through personal accounts, making security enforcement nearly impossible.


The Enterprise GenAI Data Security Report 2025 by LayerX highlights that while GenAI adoption is growing, most usage remains invisible to IT. According to the report, nearly 72 per cent of employees access these tools outside corporate controls, and only 12 per cent of corporate users authenticate via Single Sign-On (SSO).

It’s likely to be a similar story in smaller businesses too, where IT security is even less likely to be enforced.

The main concern with these findings is data exposure. For example, employees frequently paste sensitive business information, customer data, and proprietary code into GenAI tools, with an average of four pastes per day. Without security measures, organisations risk losing control over critical data.

Tools that work through browser and app plug-ins or extensions, such as Grammarly, present risks too, since they read everything typed into the browser and the contents of open documents. While the software is in use, these apps and extensions collect large amounts of personal information, in the form of the documents you create and the text you type, and send this data to their servers in order to correct spelling errors and offer writing suggestions.

While GenAI apps and browser extensions may avoid capturing dedicated text fields for sensitive information such as passwords, Social Security numbers, or banking details, the same information simply written into the body of a document may well be transmitted to their servers, whether the user intends it or not.

Mitigate the risks

To mitigate the risks, businesses should consider implementing a robust education programme to train employees on data protection best practices and the responsible use of AI tools. The aim is to ensure employees are made fully aware of the data security implications and risks associated with pasting text from confidential documents into GenAI tools, such as ChatGPT.

Implementing robust security policies and web traffic/content monitoring should also be considered.
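
As an illustration of what pattern-based content monitoring can involve, below is a minimal Python sketch of the kind of check a data loss prevention (DLP) filter might apply to outbound text before it reaches a GenAI service. The patterns and names here are illustrative assumptions for this sketch, not any specific product's rules, and real tooling would use far more sophisticated detection.

```python
import re

# Illustrative patterns only (assumptions for this sketch); production DLP
# tools use far more robust detection than simple regular expressions.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API key or secret": re.compile(r"\b(?:api[_-]?key|secret)\s*[:=]\s*\S+",
                                    re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    # Example paste an employee might attempt to send to a GenAI tool.
    paste = "Summarise this: contact jane@example.com, api_key=sk-12345"
    matches = flag_sensitive(paste)
    if matches:
        print("Paste blocked; matched:", ", ".join(matches))
    else:
        print("Paste allowed")
```

In practice, a check like this would sit in a browser extension or secure web gateway rather than a standalone script, but the principle is the same: inspect text before it leaves the organisation's control.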

Without action, GenAI is likely to continue as a growing security blind spot.
