# Install via Homebrew
brew install valtorai/tap/valtor
# Initialize (one-time setup)
valtor init
# Start the proxy
valtor start
Coming Soon
Linux support is in development. Star us on GitHub to get notified.
Coming Soon
Windows support is in development. Star us on GitHub to get notified.
Your secrets never leave your machine. Use ChatGPT, Claude, or any LLM with real credentials — we mask them before they hit the cloud, and unmask them when they come back.
We've all been there. You just want to get things done, but security gets in the way.
"Just give me the full curl command with my actual GitHub token. I'm too lazy to replace <TOKEN> every single time."
Dealing with SAS tokens, Azure secrets, AWS keys? Want a working curl command without the LLM handing back <YOUR_KEY>? Now you can.
"No LLMs at work" policies are stuck in 2020. Your company can embrace AI without risking secrets. This is the revolution enterprises need.
Sure, masking secrets with regex is easy. Everyone learned that in primary school.
But here's where it gets interesting...
Valtor runs as a transparent proxy on your machine. When you send a message to ChatGPT with your real AWS key, we swap it with a harmless token. When ChatGPT responds with that token, we swap it back to your real key.
Zero impact on your experience. You see your real secrets. The LLM never does.
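The swap is easiest to see in code. Valtor's internals aren't shown here, so this is a minimal sketch under stated assumptions: secrets are detected by pattern (the `SECRET_PATTERNS` list, the `SecretMasker` class, and the `vtk_` alias format are all hypothetical illustrations, not Valtor's actual API), each real secret maps to a stable placeholder, and responses are unmasked by reversing that map.

```python
import re
import secrets

# Hypothetical detection patterns for illustration only.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{30,}"),  # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
]

class SecretMasker:
    """Sketch of a mask/unmask table a proxy like Valtor could keep."""

    def __init__(self):
        self._real_to_alias = {}
        self._alias_to_real = {}

    def mask(self, outgoing: str) -> str:
        """Replace real secrets with harmless placeholders before the request leaves."""
        for pattern in SECRET_PATTERNS:
            for match in pattern.findall(outgoing):
                if match not in self._real_to_alias:
                    alias = f"vtk_{secrets.token_hex(5)}"  # hypothetical alias format
                    self._real_to_alias[match] = alias
                    self._alias_to_real[alias] = match
                outgoing = outgoing.replace(match, self._real_to_alias[match])
        return outgoing

    def unmask(self, incoming: str) -> str:
        """Swap placeholders back to real secrets in the response."""
        for alias, real in self._alias_to_real.items():
            incoming = incoming.replace(alias, real)
        return incoming
```

Because the same secret always maps to the same alias, the LLM can echo the placeholder anywhere in its reply (a curl command, a config file) and the proxy restores the real value on the way back.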
// YOU TYPE
"Use this token: ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u"
// VALTOR MASKS
"Use this token: a7x9k-is_request_id"
// CHATGPT SEES & RESPONDS
"curl -H 'Auth: a7x9k-is_request_id'"
// YOU SEE
"curl -H 'Auth: ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u'"
I need to clone my private repo. My GitHub token is ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u. Give me the exact git clone command.
# Clone your private repo:
git clone https://ghp_R8x2mK9vLpQ4nJ7wT1yF3hB6cD0eA5sG2u@github.com/user/repo.git
The token never reached OpenAI's servers. They only saw a7x9k-is_request_id.
Here's how to prove it's actually working:
Turn on the proxy
Run valtor start and have a conversation with any LLM that includes sensitive values.
Ask the LLM to repeat your secrets
You'll get the expected response with your actual tokens. Everything works perfectly.
Turn off the proxy
Run valtor stop and ask ChatGPT the same question again.
See the difference
ChatGPT will return placeholder IDs like a7x9k-is_request_id instead of your actual values. That's all OpenAI ever stored.
And that's the beauty of it.