Stop Calling OpenAI from the Client: A Frontend Dev’s Guide to AI Security
You're a frontend developer. You build beautiful interfaces in React, Svelte, or Vue. You want
to add some AI magic to your app. You install the OpenAI SDK, drop in your API key, and... wait,
why is the SDK refusing to run in the browser unless you set dangerouslyAllowBrowser: true?
It's the classic dilemma: you want to move fast and stay in your frontend comfort zone, but the security requirements of LLM APIs seem to demand a full-stack architecture.
The Danger Zone: NEXT_PUBLIC_OPENAI_KEY
We've all been tempted. You create a .env file, prefix your key with NEXT_PUBLIC_ or VITE_, and call it a day. It works on localhost. It
works on Vercel. But it's a ticking time bomb.
Here is the reality: anything prefixed with NEXT_PUBLIC_ or VITE_ is inlined into your JavaScript
bundle at build time and shipped to every user's browser. Anyone can:
- Right-click > "Inspect Element"
- Go to the "Sources" or "Network" tab
- Search for "sk-"
- Copy your key and run up unlimited token usage on your credit card.
Bots scrape the web 24/7 looking for these keys. It's not a matter of if your key will be stolen, but when.
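To see how trivial the theft is, here is a toy sketch: the string below stands in for a line of a built production bundle with an inlined (fake) key, and the check is the same substring search an attacker runs in DevTools or a scraper bot runs at scale against your deployed JavaScript.

```typescript
// What a built bundle looks like once NEXT_PUBLIC_OPENAI_KEY has been
// inlined — the (fake) key is plain text in the shipped file.
const bundledLine = 'headers:{Authorization:"Bearer sk-live-REDACTED"}';

// The same search an attacker runs in the Sources tab, or a bot runs at scale:
const leaked = bundledLine.includes("sk-"); // true
```

No decompiling, no exploit, no skill required — just a text search.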
The Backend Gap
The standard advice is "spin up a backend." But for a frontend developer, that often sounds like:
"Just learn Node.js, set up an Express server, configure CORS, handle rate limiting, set up a Redis instance for caching, integrate a database for user credits, and deploy it all to AWS."
That is a lot of friction just to hide one string. It kills momentum. You stop building your product and start building infrastructure.
The Proxy Pattern: Security Without the Server
There is a middle ground. You don't need a full backend; you just need a Proxy.
A proxy server sits between your frontend and the AI provider. It acts as a shield. Instead of your frontend holding the keys, the proxy holds them.
How the Proxy Pattern Works
- Frontend: Sends a request to the Proxy (not OpenAI). It includes a User Token (JWT) to prove who the user is.
- Proxy: Validates the User Token. Checks whether that user has enough credits.
- Proxy: Injects the secret OpenAI API Key into the request.
- Proxy: Forwards the request to OpenAI.
- OpenAI: Returns the response to the Proxy.
- Proxy: Streams the response back to your Frontend.
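The steps above can be sketched as a single server-side function. Everything below is illustrative — the in-memory token store, the credit count, and the name buildUpstreamRequest are stand-ins for this example, not real PromptProxy internals — but the shape of the logic is the point: validate, meter, inject, forward.

```typescript
// Minimal sketch of the proxy's server-side logic (illustrative only).
interface User {
  id: string;
  credits: number;
}

// Stand-in for real JWT verification plus a credits database.
const users = new Map<string, User>([["jwt-abc", { id: "u1", credits: 5 }]]);

// The secret key lives only here, on the server — never in the bundle.
const OPENAI_KEY = "sk-server-side-secret";

function buildUpstreamRequest(
  userToken: string,
  body: string
): { headers: Record<string, string>; body: string } {
  const user = users.get(userToken); // 1. validate the user token
  if (!user) throw new Error("401: invalid token");
  if (user.credits <= 0) throw new Error("402: out of credits"); // 2. meter usage
  user.credits -= 1;
  return {
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_KEY}`, // 3. inject the secret key
    },
    body, // 4. forward the request body to OpenAI unchanged
  };
}
```

Because the key is injected server-side, it never travels to the browser at any point in the round trip.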
This is exactly what PromptProxy does. It gives you that "backend" security layer without you having to write or host any backend code.
Code Comparison
Let's look at the code. It's surprisingly similar, but the security model is completely different.
❌ The "I hope nobody looks" Approach
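Here is a sketch of what the insecure version typically looks like. The sk-REDACTED literal stands in for what a NEXT_PUBLIC_ variable becomes once the build inlines it.

```typescript
// ❌ Illustrative sketch. This literal stands in for what
// process.env.NEXT_PUBLIC_OPENAI_KEY becomes after the build inlines it.
const OPENAI_KEY = "sk-REDACTED";

// The key rides along in every request — visible in the Network tab.
function authHeaders(): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${OPENAI_KEY}`,
  };
}

async function chat(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: authHeaders(),
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data: any = await res.json();
  return data.choices[0].message.content;
}
```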
✅ The PromptProxy Approach
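And a sketch of the proxied version. The endpoint URL and request shape here are assumptions for illustration, not PromptProxy's documented API — the point is what changes: the browser sends the user's own JWT, and no provider key ever appears client-side.

```typescript
// ✅ Illustrative sketch. PROXY_URL and the request shape are assumptions
// for this example — check the PromptProxy docs for the real endpoint.
const PROXY_URL = "https://proxy.example.com/v1/chat/completions";

// The browser sends the user's own JWT — no provider key in sight.
function proxyHeaders(userToken: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${userToken}`,
  };
}

async function chat(prompt: string, userToken: string): Promise<string> {
  const res = await fetch(PROXY_URL, {
    method: "POST",
    headers: proxyHeaders(userToken),
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data: any = await res.json();
  return data.choices[0].message.content;
}
```

Side by side, only the URL and the credential change — which is why switching is a small diff, not a rewrite.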
Conclusion
You don't have to become a backend engineer to secure your AI application. By using the Proxy Pattern, you can keep your development velocity high and your API keys safe.
Stop exposing your keys. Switch to PromptProxy today and secure your frontend in minutes.