
# Install via Homebrew

brew install valtorai/tap/valtor

# Initialize (one-time setup)

valtor init

# Start the proxy

valtor start

WORKS WITH

Claude Code
Claude Web
ChatGPT Web
Grok

Paste your API keys freely into LLMs.
Without getting fired.

Your secrets never leave your machine. Use ChatGPT, Claude, or any LLM with real credentials — we mask them before they hit the cloud, and unmask them when they come back.

Sound familiar?

We've all been there. You just want to get things done, but security gets in the way.

The Lazy Dev

"Just give me the full curl command with my actual GitHub token. I'm too lazy to replace <TOKEN> every single time."

The Cloud Engineer

Juggling SAS tokens, Azure secrets, AWS keys? Want a working curl command back instead of a <YOUR_KEY> placeholder? Now you get one.

The Enterprise Dev

"No LLMs at work" policies are stuck in 2020. Your company can embrace AI without risking secrets. This is the revolution enterprises need.

How does it work?

Sure, masking secrets with regex is easy. Everyone learned that in primary school.
But here's where it gets interesting...

We don't just mask. We unmask too.

Valtor runs as a transparent proxy on your machine. When you send a message to ChatGPT with your real AWS key, we swap it with a harmless token. When ChatGPT responds with that token, we swap it back to your real key.

Zero impact on your experience. You see your real secrets. The LLM never does.

Secrets stay local
Full functionality

// YOU TYPE

"Use this token: ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u"

// VALTOR MASKS

"Use this token: a7x9k-is_request_id"

// CHATGPT SEES & RESPONDS

"curl -H 'Auth: a7x9k-is_request_id'"

// YOU SEE

"curl -H 'Auth: ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u'"
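The round trip above can be sketched in a few lines of Python. This is a toy illustration, not Valtor's internals: the function names, the `tok_` placeholder format, and the single GitHub-token regex are all invented here; real detection covers far more secret formats.

```python
import re
import secrets

# Local map from placeholder back to the real value; it never leaves your machine.
vault: dict[str, str] = {}

# Illustrative pattern: classic GitHub personal access tokens.
TOKEN_RE = re.compile(r"ghp_[A-Za-z0-9]{36}")

def mask(outgoing: str) -> str:
    """Swap each detected secret for a harmless placeholder before it leaves."""
    def swap(match: re.Match) -> str:
        placeholder = f"tok_{secrets.token_hex(4)}"
        vault[placeholder] = match.group(0)
        return placeholder
    return TOKEN_RE.sub(swap, outgoing)

def unmask(incoming: str) -> str:
    """Swap placeholders back to real values in the LLM's reply."""
    for placeholder, real in vault.items():
        incoming = incoming.replace(placeholder, real)
    return incoming
```

Masking the prompt and unmasking the reply is a clean round trip: the provider only ever sees the placeholder, while you read back the real token.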

Your prompt (chatgpt.com)

I need to clone my private repo. My GitHub token is ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u. Give me the exact git clone command.

ChatGPT responds (unmasked)

# Clone your private repo:
git clone https://ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u@github.com/user/repo.git

The token never reached OpenAI's servers. They only saw a7x9k-is_request_id.

Don't trust us. Verify it.

Here's how to prove it's actually working:

1. Turn on the proxy

Run valtor start and have a conversation with any LLM, including sensitive values.

2. Ask the LLM to repeat your secrets

You'll get the expected response with your actual tokens. Everything works perfectly.

3. Turn off the proxy

Run valtor stop and ask ChatGPT the same question again.

4. See the difference

ChatGPT will return random IDs like a7x9k-is_request_id instead of your actual values. That's what OpenAI actually stored.
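Those four steps boil down to one checkable property: the provider only ever stores the placeholder, yet with the proxy on you still read your real value. Here is a toy simulation of that property (all names invented; an echo stand-in plays the LLM, not the Valtor internals):

```python
import re
import secrets

TOKEN_RE = re.compile(r"ghp_[A-Za-z0-9]{36}")  # illustrative pattern
vault: dict[str, str] = {}       # placeholder -> real value, local only
provider_log: list[str] = []     # everything the "provider" ever received

def send(prompt: str, proxy_on: bool) -> str:
    """Simulate one message to an echo LLM, with masking on or off."""
    if proxy_on:
        def swap(match: re.Match) -> str:
            placeholder = f"tok_{secrets.token_hex(4)}"
            vault[placeholder] = match.group(0)
            return placeholder
        prompt = TOKEN_RE.sub(swap, prompt)
    provider_log.append(prompt)      # the provider stores what it was sent
    reply = provider_log[0]          # "repeat my token": it echoes what it stored
    if proxy_on:                     # unmasking happens only while the proxy runs
        for placeholder, real in vault.items():
            reply = reply.replace(placeholder, real)
    return reply
```

With the proxy on, the echo comes back with your real token (step 2); with it off, the same question returns only the stored placeholder (step 4), and the provider's log never contained the secret at all.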

And that's the beauty of it.

Ready to use LLMs without paranoia?

Get started in under 60 seconds.