By David Pounder
This blog is part of Gate 15’s blog series “Riding the Tiger: AI Threats and Opportunities”, highlighting the essential considerations for organizational leaders and security professionals.
Artificial intelligence (AI) is transforming the workplace, enabling employees to automate tasks, analyze vast datasets, and generate content or code with unprecedented speed and efficiency. While these advancements drive productivity and innovation, they also introduce new risks, perhaps the most impactful of which is the insider threat. An insider threat in this space is an employee who can leverage AI tools to exfiltrate data, manipulate information, or evade security controls far more effectively than before. Insiders can also accidentally compromise data through misuse of AI tools and processes. Traditional security strategies often fall short against these sophisticated threats, making detection, prevention, and response increasingly challenging.
A simple rule: if an individual has been granted access to the organization in any way, they should be considered an insider and must be accounted for when evaluating risk. These insiders can intentionally or unintentionally misuse their access to harm the organization, whether by stealing sensitive information, sabotaging operations, or facilitating external attacks.
Organizations must recognize that AI-equipped insiders can operate with greater speed and subtlety. For example, an employee using AI-driven scripts might extract confidential information rapidly or easily alter system logs to cover their tracks. This shift demands a cultural and technological response, as the risk landscape evolves alongside AI capabilities.
Case Study. Several examples already highlight how AI has taken insider threats from theoretical risk to operational reality, forcing organizations to adapt their defenses in real time. In 2023, a software engineer at a technology firm used a generative AI tool to write code for a legitimate project but secretly embedded scripts that siphoned sensitive source code and proprietary algorithms to a personal cloud account. Leveraging AI’s ability to automate data packaging and obfuscation, the employee evaded traditional security alerts, resulting in the theft of valuable intellectual property before internal monitoring systems detected the anomaly. This case illustrates how AI can amplify the effectiveness and stealth of insider threats, requiring organizations to rethink both technical controls and employee oversight.
Other potential scenarios include:
- A financial analyst uses an AI-powered tool to automate report generation. While this boosts efficiency, the same tool could be misused to scan and extract sensitive client data in bulk, bypassing manual oversight.
- An employee intent on stealing proprietary information develops prompts that search data from across the organization. Without proper governance or access controls, this information could be surfaced to an employee who otherwise has no business need for it.
- A marketing employee might use a generative AI tool to draft client proposals but could also exploit the same tool to aggregate and export proprietary customer lists—circumventing traditional monitoring by disguising the data as innocuous content.
- A system administrator may deploy an AI script to automate routine maintenance, yet with minor tweaks, they could program the tool to identify and extract sensitive intellectual property or passwords.
- A well-meaning employee uses an AI-powered chatbot to quickly answer a client’s technical question and, without realizing it, uploads confidential design documents to the tool for context. The AI service stores or processes this sensitive data externally, inadvertently exposing proprietary information and creating a data leakage risk—despite the employee having no malicious intent.
- Finally, an employee could use AI to review thousands of rows of spreadsheet data. While this can be efficient and significantly reduce processing time, an employee who fails to check the outputs could misrepresent the data to senior leadership, creating risk.
Best Practices. The excitement and potential of AI have caused many organizations to dive full-speed into adoption and usage; governance and accountability, however, have been left behind. It is therefore important that organizations build governance that defines clear policies on sanctioned AI tools, data access levels, and human-in-the-loop requirements to prevent unintentional leaks or unauthorized data processing. In addition, organizations are encouraged to consider the following areas:
- Implement AI-Powered Behavioral Analytics: Use User and Entity Behavior Analytics (UEBA) to establish a baseline of normal activity and detect anomalies, such as unusual data access patterns or rapid data transfers that may indicate AI-assisted theft (a minimal baseline-and-deviation sketch follows this list).
- Establish AI-Specific Governance: Ensure artificial intelligence is used responsibly, securely, and in alignment with business objectives and risk tolerance. Governance should clearly define which AI tools are sanctioned for use, what data types may be accessed or processed by those tools, and the roles and responsibilities of employees, managers, and security teams in overseeing AI usage.
- Deploy Data Loss Prevention (DLP) for AI: Configure DLP solutions to monitor and block sensitive data from being pasted into or uploaded to unauthorized public AI tools (see the content-check sketch after this list).
- Enforce Least Privilege and Just-in-Time Access: Restrict access to sensitive data based on role, using just-in-time access to ensure users have access only when necessary, reducing the risk of misuse (see the time-boxed access sketch after this list).
- Monitor for Shadow AI Use: Identify and block unapproved AI applications (“Shadow AI”) to prevent data leakage and unauthorized usage (see the proxy-log sketch after this list).
- Employee Training on AI Risks: Train staff on the risks of entering company data into generative AI tools, on phishing, and on the proper use of sanctioned AI tools.
- Integrate HR Data for Risk Assessment: Correlate employee behavior with HR metrics (e.g., negative performance reviews or pending departures) to identify potential insider risks.
- Strengthen Privileged Access Management (PAM): Apply strict monitoring to high-level accounts, which, if compromised or misused, can cause maximum damage when combined with AI tools.
- Foster Collaboration: Increase collaboration among information technology, security, HR, and compliance to create balanced policies that support innovation while safeguarding the company’s people, places, and data.
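
To make the UEBA recommendation concrete, here is a minimal sketch of the baseline-and-deviation idea in Python. The daily transfer volumes, the `is_anomalous` function name, and the three-standard-deviation threshold are illustrative assumptions, not any vendor’s method; a production UEBA platform models many more signals than one metric.

```python
# Minimal UEBA-style baseline check (a sketch, assuming hypothetical
# per-user daily data-transfer volumes pulled from your own telemetry).
from statistics import mean, stdev

def is_anomalous(history_mb, today_mb, threshold=3.0):
    """Flag today's transfer volume if it sits more than `threshold`
    standard deviations above the user's historical baseline."""
    if len(history_mb) < 2:
        return False  # not enough history to establish a baseline
    baseline = mean(history_mb)
    spread = stdev(history_mb)
    if spread == 0:
        return today_mb > baseline  # flat history: any increase is notable
    z_score = (today_mb - baseline) / spread
    return z_score > threshold

# Example: a user who normally moves ~50 MB/day suddenly moves 900 MB,
# the kind of rapid bulk export an AI-assisted script makes trivial.
history = [48, 52, 47, 55, 50, 49, 53]  # hypothetical daily MB totals
print(is_anomalous(history, 900))  # True: worth an analyst's review
```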
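The DLP control can be pictured as a content check that runs before text leaves for an external AI tool. This sketch assumes a hypothetical gateway hook that can see the outbound payload; the `SENSITIVE_PATTERNS` rules are placeholders for illustration, not vetted production signatures.

```python
# Minimal DLP-style content check (a sketch; patterns are illustrative).
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "internal_marker": re.compile(r"CONFIDENTIAL|INTERNAL ONLY", re.IGNORECASE),
}

def should_block(payload):
    """Return the names of any sensitive patterns found in the payload."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(payload)]

# Example: an employee pastes internal material into an unsanctioned tool.
prompt = "Summarize this: CONFIDENTIAL design doc, contact 123-45-6789"
hits = should_block(prompt)
if hits:
    print(f"Blocked upload to unsanctioned AI tool; matched: {hits}")
```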
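Just-in-time access can likewise be thought of as time-boxed grants rather than standing permissions. The sketch below uses a hypothetical in-memory grants table purely to show the check; real deployments would rely on a PAM or identity-governance product rather than code like this.

```python
# Minimal just-in-time access check (a sketch with a hypothetical
# grants table; not a substitute for a real PAM/IGA product).
from datetime import datetime, timedelta, timezone

grants = {  # user -> (resource, grant expiry)
    "alice": ("client-db", datetime.now(timezone.utc) + timedelta(hours=1)),
}

def has_access(user, resource):
    """Allow access only while a time-boxed grant is active."""
    grant = grants.get(user)
    return bool(grant and grant[0] == resource
                and grant[1] > datetime.now(timezone.utc))

print(has_access("alice", "client-db"))  # True within the one-hour window
print(has_access("bob", "client-db"))    # False: no standing access
```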
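Shadow AI monitoring often starts with comparing web proxy or DNS logs against the sanctioned-tool list defined in governance. The log format and every domain below are illustrative placeholders, not a vetted inventory of AI services.

```python
# Minimal shadow-AI detection over proxy logs (a sketch; all domains
# below are hypothetical examples).
SANCTIONED_AI = {"approved-ai.example.com"}
KNOWN_AI_DOMAINS = {"approved-ai.example.com", "chat.example-llm.com",
                    "free-codegen.example.net"}

proxy_log = [
    ("alice", "approved-ai.example.com"),
    ("bob", "free-codegen.example.net"),   # unapproved AI tool
    ("carol", "intranet.example.com"),
]

shadow_ai_events = [(user, domain) for user, domain in proxy_log
                    if domain in KNOWN_AI_DOMAINS
                    and domain not in SANCTIONED_AI]

for user, domain in shadow_ai_events:
    print(f"Shadow AI use: {user} -> {domain} (not on sanctioned list)")
```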
Ultimately, adapting to AI-driven threats requires both cultural and technological shifts. By proactively updating defenses and educating employees, organizations can harness the benefits of AI while minimizing risks.
Look for our next post in this series as Gate 15 explores Business Continuity & Resilience: AI’s Double-Edged Impact, examining the positive and negative impacts AI has in the realm of business continuity!
Gate 15 works across Critical Infrastructure sectors to help organizations protect their people, places, data, and dollars. The threat environment is constantly shifting, and we are here to boost your resilience with plans, exercises, threat analysis, and operational support against both emerging and enduring threats. Contact our team at Gate15@gate15.global to see how we can assist you in delivering on your mission. Join Gate 15’s Resilience and Intelligence Portal (the GRIP)! Sign up today to stay informed of what’s new in all-hazards homeland security and join us in securing America’s people, places, data, and dollars.
Gate 15: Technology-enhanced, human-driven, homeland security risk management.

Understand the Threats.
Assess the Risks.
Take Action.
