Building Secure AI Agents: Browser Automation Without Compromising Credentials
How to design AI agent systems that can browse, authenticate, and act, without credential sprawl or security vulnerabilities
AI agents that browse the web need to log in. But how do you give an agent credentials without creating a security nightmare?
The dream: An AI agent that can research competitors, check dashboards, fill forms, and complete workflows, all autonomously.
The reality: Every site the agent touches needs authentication. And every authentication is a potential security failure.
This is the credential sprawl problem. And it's getting worse as agents get more capable.
The Agent Credential Problem
Scenario: Research Agent
You build an agent that researches your competitors weekly:
- Logs into Crunchbase → API key in `CRUNCHBASE_API_KEY`
- Logs into LinkedIn → Username/password in `LINKEDIN_PASSWORD`
- Logs into internal dashboard → Token in `DASHBOARD_TOKEN`
- Logs into news sites → Various paywall credentials
Each credential is:
- Stored somewhere (env vars? config files?)
- Accessible to the agent process
- Potentially logged or leaked
- Unclear who/what used it when
The Sprawl Multiplies
Add another agent:
- Marketing agent → Needs Facebook, Twitter, LinkedIn
- Finance agent → Needs bank portals, accounting software
- Support agent → Needs Zendesk, Intercom, email
Each with their own credential storage. Each with their own security model. Each a potential breach.
Current Approaches (And Why They Fail)
Approach 1: Environment Variables
# agent.py
import os
linkedin_user = os.environ['LINKEDIN_USER']
linkedin_pass = os.environ['LINKEDIN_PASS']
# Use credentials...
Problems:
- Credentials in shell history
- Leaked to child processes
- Hard to rotate
- No audit trail
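The child-process leak is easy to demonstrate: subprocesses inherit the parent's environment by default, so every tool the agent shells out to can read its secrets. A minimal illustration (the credential value here is made up):

```python
import os
import subprocess
import sys

# Simulate an agent process holding a credential in its environment.
os.environ["LINKEDIN_PASS"] = "SuperSecret123!"

# Any child process the agent spawns inherits that environment by default,
# so the secret is visible to every subprocess, logger, and shell tool.
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('LINKEDIN_PASS', ''))"],
    capture_output=True,
    text=True,
).stdout.strip()

print(leaked)  # the child reads the secret verbatim
```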
Approach 2: Config Files
{
  "linkedin": {
    "username": "user@company.com",
    "password": "SuperSecret123!"
  }
}
Problems:
- Committed to git (accidentally)
- Plaintext on disk
- Synced to cloud storage
- No access control
Approach 3: Hardcoded
# DON'T DO THIS
password = "SuperSecret123!"
Problems:
- In git forever
- Visible to anyone with code access
- Never rotates
- No accountability
Approach 4: Cloud Secrets Managers
import boto3

client = boto3.client('secretsmanager')
secret = client.get_secret_value(SecretId='linkedin/creds')
Better, but:
- Cloud vendor lock-in
- Complex IAM setup
- Network latency
- Still no approval gates
The Secure Agent Architecture
What if agents could authenticate without ever seeing credentials?
The Vault-Per-Agent Pattern
┌──────────────────────────────────────────────────────────────────┐
│                        AGENT ORCHESTRATOR                        │
│                  (Clawdbot / Agent Orchestrator)                 │
└────────────────────────────────┬─────────────────────────────────┘
                                 │
             ┌───────────────────┼───────────────────┐
             ▼                   ▼                   ▼
     ┌──────────────┐    ┌──────────────┐    ┌──────────────┐
     │   Research   │    │  Marketing   │    │   Finance    │
     │    Agent     │    │    Agent     │    │    Agent     │
     └──────┬───────┘    └──────┬───────┘    └──────┬───────┘
            │                   │                   │
            ▼                   ▼                   ▼
     ┌──────────────┐    ┌──────────────┐    ┌──────────────┐
     │  Bitwarden   │    │  Bitwarden   │    │  Bitwarden   │
     │    Vault     │    │    Vault     │    │    Vault     │
     │    (read:    │    │    (read:    │    │    (read:    │
     │   research)  │    │  marketing)  │    │   finance)   │
     └──────────────┘    └──────────────┘    └──────────────┘
Each agent has scoped vault access:
- Research agent: Can read Crunchbase, LinkedIn, news credentials
- Marketing agent: Can read social media credentials
- Finance agent: Can read banking, accounting credentials
No agent sees another agent's credentials. No credentials in environment or code.
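The scoping rule itself can be expressed as plain data. A minimal sketch, where agent and collection names are illustrative rather than a real Browser Secure config:

```python
# Hypothetical scope map: which vault collections each agent may read.
# Names are illustrative, not part of any real configuration.
VAULT_SCOPES = {
    "research-agent": {"crunchbase", "linkedin", "news-sites"},
    "marketing-agent": {"facebook", "twitter", "linkedin-marketing"},
    "finance-agent": {"bank-portal", "accounting"},
}

def can_access(agent_id: str, collection: str) -> bool:
    """Deny by default: an agent reads only its own collections."""
    return collection in VAULT_SCOPES.get(agent_id, set())

print(can_access("research-agent", "crunchbase"))   # True
print(can_access("research-agent", "bank-portal"))  # False
```

The deny-by-default lookup means an unknown agent, or a typo in an agent ID, gets access to nothing rather than everything.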
Browser Secure for AI Agents
Browser Secure implements this architecture:
1. Scoped Vault Access
# Research agent vault unlock
export BW_SESSION_RESEARCH=$(bw unlock --raw --organization research)
# Marketing agent vault unlock
export BW_SESSION_MARKETING=$(bw unlock --raw --organization marketing)
Each agent only accesses their assigned vault.
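From agent code, the scoped session can be passed explicitly on every Bitwarden CLI call, so one agent can never silently reuse another's session. A small helper sketch, assuming the `BW_SESSION_<AGENT>` variables exported above (`bw list items --search` and the global `--session` option are standard Bitwarden CLI usage):

```python
import os
import subprocess

def bw_list_items(agent: str, search: str) -> str:
    """Query the Bitwarden CLI using the session scoped to one agent.

    Assumes BW_SESSION_<AGENT> was exported as shown above; passing it
    via --session means no other agent's session is ever used.
    Not invoked here, since it requires a logged-in bw CLI.
    """
    session = os.environ[f"BW_SESSION_{agent.upper()}"]
    result = subprocess.run(
        ["bw", "list", "items", "--search", search, "--session", session],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```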
2. Auto-Credential Discovery
# Agent needs to log into LinkedIn
browser-secure navigate https://linkedin.com --auto-vault
# Browser Secure:
# 1. Extracts domain (linkedin.com)
# 2. Searches vault for matching credentials
# 3. Presents options interactively (or uses pre-configured)
# 4. Fills credentials without agent ever seeing them
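The discovery step above can be sketched in a few lines: extract the page's hostname, then match it against each vault item's saved URIs. This is a simplified sketch; real Bitwarden URI matching also handles subdomains, ports, and configurable match types:

```python
from urllib.parse import urlparse

def match_vault_items(url: str, items: list[dict]) -> list[dict]:
    """Return vault items whose saved URI matches the page's hostname."""
    host = urlparse(url).hostname or ""
    matches = []
    for item in items:
        for uri in item.get("uris", []):
            # Fall back to the raw string for bare hostnames like "linkedin.com".
            if host == (urlparse(uri).hostname or uri):
                matches.append(item)
                break
    return matches

# Illustrative vault data, not real credentials.
vault = [
    {"name": "LinkedIn", "uris": ["https://linkedin.com"]},
    {"name": "Crunchbase", "uris": ["https://crunchbase.com"]},
]
print([i["name"] for i in match_vault_items("https://linkedin.com/login", vault)])
# ['LinkedIn']
```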
3. Approval Gates
Agents shouldn't blindly authenticate:
| Action | Approval |
|---|---|
| Navigate, extract, screenshot | None |
| Fill forms, click buttons | Prompt |
| Submit login | Always require approval |
| Delete, purchase, sensitive | 2FA required |
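The table above maps naturally onto a small policy function. A sketch using illustrative action names (not Browser Secure's actual internal identifiers) that defaults unknown actions to the strictest gate:

```python
from enum import Enum

class Approval(Enum):
    NONE = "none"          # navigate, extract, screenshot
    PROMPT = "prompt"      # fill forms, click buttons
    ALWAYS = "always"      # submit login
    TWO_FACTOR = "2fa"     # delete, purchase, sensitive

# Hypothetical policy table mirroring the gates above.
POLICY = {
    "navigate": Approval.NONE,
    "extract": Approval.NONE,
    "screenshot": Approval.NONE,
    "fill_form": Approval.PROMPT,
    "click": Approval.PROMPT,
    "submit_login": Approval.ALWAYS,
    "delete": Approval.TWO_FACTOR,
    "purchase": Approval.TWO_FACTOR,
}

def required_approval(action: str) -> Approval:
    """Unknown actions fall back to the strictest gate, never the loosest."""
    return POLICY.get(action, Approval.TWO_FACTOR)

print(required_approval("extract").value)       # none
print(required_approval("submit_login").value)  # always
```

The fail-closed default matters: a new action type added to the agent should require 2FA until someone deliberately loosens its gate.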
# Agent wants to log in
browser-secure act "submit the login form"
# Browser Secure prompts:
# "Agent is attempting to log into linkedin.com. Approve? (y/n)"
You maintain control. Agents assist; they don't act autonomously on sensitive operations.
4. Audit Everything
Every agent action is logged:
{
  "event": "AGENT_AUTHENTICATION",
  "agentId": "research-agent-01",
  "sessionId": "bs-20260211054500-abc123",
  "site": "linkedin.com",
  "action": "submit_login",
  "approvedBy": "human@company.com",
  "timestamp": "2026-02-11T05:45:00Z",
  "chainHash": "sha256:a3f5b2..."
}
Immutable. Tamper-evident. Queryable.
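Chain-hashing is what makes the log tamper-evident: each entry's hash covers the previous entry's hash, so editing any past entry invalidates every hash after it. A minimal sketch (the exact field layout Browser Secure hashes is an assumption here):

```python
import hashlib
import json

def chain_hash(prev_hash: str, event: dict) -> str:
    """Hash the previous entry's hash together with the new event."""
    payload = prev_hash + json.dumps(event, sort_keys=True)
    return "sha256:" + hashlib.sha256(payload.encode()).hexdigest()

log = []
prev = "sha256:genesis"
for event in [{"action": "navigate"}, {"action": "submit_login"}]:
    prev = chain_hash(prev, event)
    log.append({"event": event, "chainHash": prev})

# Verification replays the chain; any edited entry breaks every later hash.
replay = "sha256:genesis"
for entry in log:
    replay = chain_hash(replay, entry["event"])
    assert replay == entry["chainHash"]
print("chain verified")
```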
Real-World Example: Research Agent
Let's build a secure research agent step by step.
Step 1: Create Agent Profile
# Create isolated Chrome profile for this agent
browser-secure profile --create "Research-Agent-01"
This profile:
- Only has research-related extensions (Bitwarden)
- No personal cookies or history
- Can be wiped without affecting other agents
Step 2: Configure Vault Access
In Bitwarden, create a collection:
- Name: Research Agent Credentials
- Items: Crunchbase API key, LinkedIn creds, news site logins
- Access: Only the research agent's API key
Step 3: Agent Workflow
# research_agent.py - orchestrated by Clawdbot

def weekly_competitor_research():
    # Unlock vault (done once per session)
    browser_secure.auth(vault="research")

    # Navigate to Crunchbase
    browser_secure.navigate("https://crunchbase.com", auto_vault=True)

    # Agent extracts data (read-only, no approval needed)
    competitors = browser_secure.extract(
        "list all companies that raised Series A this week in fintech"
    )

    # Navigate to LinkedIn for deeper research
    browser_secure.navigate("https://linkedin.com", auto_vault=True)

    # Agent needs to log in - approval gate triggers
    # Human approves via notification
    browser_secure.act("log in with saved credentials")

    # Continue research...
    for company in competitors:
        data = browser_secure.extract(
            f"find employee count and recent news for {company}"
        )
        save_to_report(data)

    browser_secure.close()
Step 4: Human Oversight
Clawdbot: Research agent wants to log into LinkedIn. Approve?
[Approve] [Deny] [Always approve for this session]
You: Approve
Clawdbot: Agent logged in. Continuing research...
The agent never sees the password. You maintain control. Everything is audited.
Multi-Agent Orchestration
With Agent Orchestrator, coordinate multiple secure agents:
You: /ao spawn researcher browser-secure
Clawdbot: Spawned researcher. Status: running.
You: /ao spawn marketer browser-secure
Clawdbot: Spawned marketer. Status: running.
You: /ao status
Clawdbot:
Running: 2
- researcher: Running | Vault: research | Last: extracted competitor data
- marketer: Running | Vault: marketing | Last: posting to LinkedIn
Each agent:
- Has isolated browser profile
- Accesses scoped vault
- Requires approval for authentication
- Logs all actions
The Security Model
┌──────────────────────────────────────────────────────────────────┐
│ 1. AGENT ISOLATION                                               │
│    • Separate Chrome profiles per agent                          │
│    • No shared cookies/session state                             │
│    • Profile wiped on session end                                │
└────────────────────────────────┬─────────────────────────────────┘
                                 │
                                 ▼
┌──────────────────────────────────────────────────────────────────┐
│ 2. VAULT SCOPING                                                 │
│    • Each agent has vault collection access                      │
│    • Credentials never in agent memory                           │
│    • Retrieved on-demand by Browser Secure                       │
└────────────────────────────────┬─────────────────────────────────┘
                                 │
                                 ▼
┌──────────────────────────────────────────────────────────────────┐
│ 3. APPROVAL GATES                                                │
│    • Navigation/extraction: Agent can act freely                 │
│    • Authentication: Requires human approval                     │
│    • Sensitive actions: 2FA required                             │
└────────────────────────────────┬─────────────────────────────────┘
                                 │
                                 ▼
┌──────────────────────────────────────────────────────────────────┐
│ 4. AUDIT TRAIL                                                   │
│    • Every action logged with agent ID                           │
│    • Chain-hashed for tamper evidence                            │
│    • Queryable for compliance                                    │
└──────────────────────────────────────────────────────────────────┘
Comparison: Insecure vs Secure Agents
| Aspect | Insecure Agent | Secure Agent |
|---|---|---|
| Credentials | Env vars, config files | Vault-scoped, encrypted |
| Access | All credentials, all the time | On-demand, approved |
| Audit | None | Full chain-hashed trail |
| Isolation | Shared browser state | Profile per agent |
| Approval | Autonomous | Human-in-the-loop for auth |
| Breach impact | All credentials compromised | Scoped to agent's vault |
When to Build Secure Agents
Always Secure
- Accessing production systems
- Financial operations
- Customer data
- Competitive intelligence
- Any compliance-regulated industry
Can Be Lighter
- Public data scraping (no auth)
- Internal testing environments
- Demo/prototype agents
- Read-only public APIs
Getting Started
1. Install Browser Secure
git clone https://github.com/NMC-Interactive/browser-secure.git
cd browser-secure && npm install && npm run build && npm link
2. Set Up Vault
# Bitwarden (recommended)
brew install bitwarden-cli
bw login
export BW_SESSION=$(bw unlock --raw)
3. Create Agent Profile
browser-secure profile --create "My-First-Agent"
4. Build Your Agent
# Use Browser Secure CLI from your agent code
import subprocess

def secure_navigate(url, site=None):
    cmd = ["browser-secure", "navigate", url]
    if site:
        cmd.extend(["--site", site])
    else:
        cmd.append("--auto-vault")
    subprocess.run(cmd, check=True)

def secure_extract(instruction):
    result = subprocess.run(
        ["browser-secure", "extract", instruction],
        capture_output=True,
        text=True,
        check=True
    )
    return result.stdout
5. Orchestrate with Clawdbot
You: Spawn a research agent to check competitor pricing
Clawdbot: /ao spawn researcher browser-secure
Clawdbot: Agent spawned. Navigating to competitor site...
Clawdbot: Agent needs to log in to see pricing. Approve?
You: Approve
Clawdbot: Agent logged in. Extracting pricing data...
Clawdbot: Done. Report saved.
The Future: Agent-Native Security
Weβre moving toward a world where:
- Every agent has a vault → Scoped, encrypted, audited
- Authentication is delegated → Agents request, humans approve
- Actions are immutable → Complete audit trail
- Breach impact is minimized → Scoped credentials, limited blast radius
Browser Secure is a step toward this future. Built for the OpenClaw ecosystem. Designed for developers who want powerful agents without sacrificing security.
Summary
Building secure AI agents requires:
- Vault-scoped credentials → Never env vars or config files
- Agent isolation → Separate profiles per agent
- Approval gates → Human-in-the-loop for authentication
- Audit everything → Immutable, queryable logs
- Orchestration → Coordinate multiple secure agents
The agents you build today will have access to increasingly sensitive systems. Build them securely from the start.
Resources
- Browser Secure GitHub
- Agent Orchestrator
- Secure Browser Automation Credentials
- Chrome Profile Automation
Building AI agents? Do it securely. Browser Secure gives you vault-backed authentication with human approval gates, because autonomous shouldn't mean unaccountable.