Shadow AI refers to the use of generative AI tools that are not formally approved, governed, or visible to security teams.
As employees adopt new AI tools to improve productivity, usage often happens outside of sanctioned platforms. This creates blind spots where sensitive data may be shared without oversight or enforcement.
The Shadow AI page in Verax provides visibility into these tools and allows you to control their usage.
Note
Shadow AI discovery requires AI traffic to be routed through Verax. Tools that do not pass through Verax cannot be detected or controlled.
What Is Shadow AI?
Shadow AI includes any generative AI tool that:
Is used by employees without formal approval
Is not explicitly governed by organizational policies
Falls outside of existing security controls
Examples include:
Public AI chat tools
Browser-based AI assistants
Embedded AI features in third-party applications
How Verax Detects Shadow AI
Verax monitors AI-related traffic routed through the platform and identifies generative AI tools based on observed usage patterns.
When Verax detects usage of a generative AI tool, it records and displays that tool on the Shadow AI page, even if the tool has not been explicitly integrated or approved.
This allows security teams to discover AI usage that would otherwise remain hidden.
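The exact detection logic is internal to Verax. Purely as an illustration, the sketch below shows one way proxied traffic could be matched against a catalog of known generative AI endpoints; the catalog, function names, and categories here are hypothetical, not Verax's actual implementation.

```python
# Illustrative sketch only -- not Verax's actual detection logic.
# Assumes AI traffic is routed through a proxy, so each outbound request's
# host is visible for inspection.

from datetime import datetime, timezone

# Hypothetical catalog mapping known AI hostnames to a tool name and category.
KNOWN_AI_ENDPOINTS = {
    "chat.openai.com": ("ChatGPT", "Native AI"),
    "api.anthropic.com": ("Claude", "Native AI"),
    "copilot.github.com": ("GitHub Copilot", "Code Assistant"),
}

detected_tools: dict[str, dict] = {}

def record_ai_usage(request_host: str) -> None:
    """Match a proxied request's host against the catalog and record usage."""
    match = KNOWN_AI_ENDPOINTS.get(request_host)
    if match is None:
        return  # Not a recognized generative AI endpoint.
    name, category = match
    detected_tools[name] = {
        "category": category,
        "last_used": datetime.now(timezone.utc),  # Shown as "Last Used"
    }
```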
Understanding the Shadow AI Page
The Shadow AI page lists generative AI tools detected in your environment.
For each tool, the following information is displayed:
Name
The name of the generative AI tool.
Category
The type of tool, such as Native AI, Code Assistant, or Platform.
Last Used
The most recent time the tool was observed in use.
Allow
Indicates whether the tool is currently allowed or blocked.
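As an illustration only, the following sketch models one row of the Shadow AI page using the columns described above. The field names and example values are hypothetical and do not represent a Verax API object.

```python
# Hypothetical representation of one row on the Shadow AI page.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ShadowAITool:
    name: str            # Name of the generative AI tool
    category: str        # e.g. "Native AI", "Code Assistant", "Platform"
    last_used: datetime  # Most recent observed usage
    allowed: bool        # Current state of the Allow toggle

example_row = ShadowAITool(
    name="ChatGPT",
    category="Native AI",
    last_used=datetime(2024, 5, 1, 14, 30),
    allowed=False,  # Detected but not explicitly integrated, so blocked by default
)
```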
Allowing and Blocking Tools
By default:
Generative AI tools explicitly integrated with Verax are allowed
All other detected tools are blocked
You can control access using the Allow toggle:
Allow enabled
The tool is permitted for use.
Allow disabled
The tool is blocked.
Blocking a tool prevents users from interacting with it through Verax.
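The sketch below illustrates this default behavior: integrated tools are allowed, and all other detected tools are blocked unless the Allow toggle has been enabled. The tool names and data structures are assumptions made for the example; this is not Verax's enforcement code.

```python
# Illustrative sketch of the default allow/block behavior described above.

integrated_tools = {"Azure OpenAI"}   # Explicitly integrated with Verax
allow_overrides = {"ChatGPT": True}   # Allow toggle set by an administrator

def is_tool_allowed(tool_name: str) -> bool:
    """Integrated tools are allowed by default; everything else is blocked
    unless the Allow toggle has been enabled for it."""
    if tool_name in allow_overrides:
        return allow_overrides[tool_name]
    return tool_name in integrated_tools

print(is_tool_allowed("Azure OpenAI"))  # True  -- integrated, allowed by default
print(is_tool_allowed("ChatGPT"))       # True  -- Allow toggle enabled
print(is_tool_allowed("Midjourney"))    # False -- detected but not allowed
```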
Relationship to Rules
The Shadow AI page provides tool-level visibility and control.
For more granular enforcement, Verax rules allow you to:
Apply access controls based on users or groups
Restrict usage by tool category
Enforce conditions such as sensitivity or topic similarity
Allowing a tool on the Shadow AI page does not bypass rule enforcement. All allowed tools are still evaluated against your configured rules.
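To make the evaluation order concrete, the following sketch shows the Allow toggle checked first, with allowed traffic still passing through rule evaluation. The rule structure shown is an assumption for illustration and does not reflect Verax's rule schema.

```python
# Illustrative only: the Allow toggle is checked first, and allowed traffic
# is still evaluated against every configured rule.

def evaluate_request(tool_allowed: bool, request: dict, rules: list) -> str:
    # A blocked tool never reaches rule evaluation.
    if not tool_allowed:
        return "blocked: tool not allowed"
    # Allowed tools remain subject to all configured rules.
    for rule in rules:
        if rule["condition"](request):
            return f"blocked: rule '{rule['name']}' matched"
    return "allowed"

rules = [
    {"name": "No sensitive data", "condition": lambda r: r.get("sensitive", False)},
]

print(evaluate_request(True, {"sensitive": True}, rules))    # blocked by rule
print(evaluate_request(True, {"sensitive": False}, rules))   # allowed
print(evaluate_request(False, {"sensitive": False}, rules))  # blocked: tool not allowed
```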
Security Considerations
Allowing a generative AI tool that is not explicitly governed may introduce risk.
Important considerations:
Newly discovered tools may not provide the same data protections as sanctioned tools
Content sent to these tools may not be sanitized or controlled
Allowing a tool expands the AI attack surface
Security teams should review newly detected tools before allowing them.
Recommended Workflow
A common workflow for managing Shadow AI is:
Review newly detected tools
Identify tools that require governance
Block tools that present unacceptable risk
Allow approved tools
Create rules to enforce how approved tools can be used
Summary
The Shadow AI page helps you identify and control generative AI tools that operate outside traditional security boundaries.
By combining discovery with rule-based enforcement, Verax enables organizations to reduce Shadow AI risk while maintaining visibility and control over AI usage.