AI Proxy Gateway

Change Log v0.1.8: The AI Proxy Gateway
The biggest headache when building production-grade AI applications isn't writing the code; it's the infrastructure. Juggling multiple API keys for different models, managing separate billing dashboards for OpenAI, Anthropic, Fireworks, and Groq, and constantly worrying about your keys leaking to GitHub is a nightmare.
With version 0.1.8, Gor://a is officially no longer just an AI code builder. We are now a full-scale AI infrastructure provider.
We have engineered and deployed the Gorilla AI Proxy Gateway. Here is how it transforms the developer experience for your deployed applications.
One Key to Rule Them All (gb_live_)
Starting today, every Gor://a user is automatically provisioned a cryptographically secure, master API key (starting with gb_live_).
You no longer need to create third-party developer accounts or manage external billing cycles. Your gb_live_ key grants your generated applications instant access to industry-leading LLMs, high-fidelity image generation, speech-to-text, and background removal capabilities—all routed through our backend switchboard.
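Because every key shares the gb_live_ prefix, your app can sanity-check a key before using it. A minimal sketch — the prefix comes from this release, but the check itself (and the idea of validating client-side at all) is just an illustration:

```javascript
// Illustrative sketch: a quick sanity check for the key format.
// The gb_live_ prefix is documented in this release; requiring extra
// characters after the prefix is our assumption, not a published rule.
function looksLikeGorillaKey(key) {
  return (
    typeof key === "string" &&
    key.startsWith("gb_live_") &&
    key.length > "gb_live_".length
  );
}

console.log(looksLikeGorillaKey("gb_live_abc123")); // true
console.log(looksLikeGorillaKey("sk-12345"));       // false
```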
Drop-in OpenAI Compatibility
We built the Gorilla Proxy to be a 1:1 structural replica of the official OpenAI API.
When the Gor://a Coder builds your app, it uses standard, battle-tested npm packages (like the official openai SDK). By simply pointing the baseURL to https://app.gorillabuilder.dev/api/v1, your app seamlessly communicates with our proxy. This means zero hallucinated endpoints and perfect code stability.
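In practice, pointing the official openai SDK at the proxy is just `new OpenAI({ apiKey, baseURL: "https://app.gorillabuilder.dev/api/v1" })`. For clarity, here is the same call sketched without the SDK, as a raw request following the standard OpenAI chat-completions contract — the model name is a placeholder, and the helper function is ours, not part of any SDK:

```javascript
// Sketch: build a chat-completions request against the Gorilla proxy.
// Shape follows the standard OpenAI API contract the proxy replicates.
// buildChatRequest is an illustrative helper; the model is a placeholder.
function buildChatRequest(apiKey, model, messages) {
  return {
    url: "https://app.gorillabuilder.dev/api/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage (Node 18+ has global fetch):
// const { url, options } = buildChatRequest(
//   process.env.GORILLA_API_KEY,
//   "some-model",
//   [{ role: "user", content: "Hello!" }]
// );
// const res = await fetch(url, options);
```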
The Gorilla Token Economy
We have unified the cost of running complex AI apps into a single currency: Gorilla Credits. When your deployed app makes an API request, our backend instantly calculates the payload, routes it to the optimal provider, and deducts the precise cost from your balance:
LLM Chat (Powered by OpenRouter): 0.5 credits per API token.
Image Generation (Powered by Fireworks AI): 250 credits per image.
Speech-to-Text (Powered by Whisper-v3): 100 credits per estimated minute.
Background Removal (Powered by RemBG): 100% Free.
Text-to-Speech: Routed to the browser's native window.speechSynthesis (Web Speech API) for zero-latency, zero-cost voice output.
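The pricing above is simple enough to estimate up front. A minimal sketch — the rates come straight from the table, but the function and field names are our own, not part of any Gorilla SDK:

```javascript
// Illustrative cost estimator mirroring the published credit pricing.
// Only the rates come from the changelog; everything else is assumed.
const RATES = {
  llmPerToken: 0.5,     // LLM chat: 0.5 credits per API token
  imagePerImage: 250,   // Image generation: 250 credits per image
  sttPerMinute: 100,    // Speech-to-text: 100 credits per estimated minute
  backgroundRemoval: 0, // Background removal (RemBG): free
  tts: 0,               // Text-to-speech: browser-native, zero cost
};

function estimateCredits(usage) {
  return (
    (usage.llmTokens ?? 0) * RATES.llmPerToken +
    (usage.images ?? 0) * RATES.imagePerImage +
    (usage.sttMinutes ?? 0) * RATES.sttPerMinute
  );
}

console.log(estimateCredits({ llmTokens: 1000, images: 2, sttMinutes: 3 })); // 1300
```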
Bulletproof Security & Local IDE Injection
API key leaks are a multi-million-dollar problem in the software industry. We engineered a dual-layer security model to ensure your gb_live_ key is never compromised:
WebContainer Memory Injection: When testing apps live inside the Gor://a Builder IDE, your key is injected directly into the virtual Node.js process memory. It is never written to the virtual file system.
Vercel Production Handoff: The AI Coder is strictly forbidden from writing your key into vercel.json or .env files. Instead, during the GitHub push, we provide a secure, copyable handoff screen so you can paste your key directly into Vercel's encrypted environment variables. Your keys never touch a public or private repository.
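Once the key lives in Vercel's encrypted environment variables, your deployed app reads it at runtime and never from a checked-in file. A minimal sketch, assuming you named the variable GORILLA_API_KEY (the name is your choice, not a platform requirement):

```javascript
// Sketch: read the gb_live_ key from the runtime environment rather than
// any file in the repository. GORILLA_API_KEY is an assumed variable name;
// use whatever you configured in Vercel's environment settings.
function getApiKey(env = process.env) {
  const key = env.GORILLA_API_KEY;
  if (!key || !key.startsWith("gb_live_")) {
    throw new Error("Missing or malformed GORILLA_API_KEY");
  }
  return key;
}
```

Failing fast on a missing or malformed key surfaces configuration mistakes at startup instead of as cryptic 401s later.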
Zero Vendor Lock-in
Because our proxy handles the routing, Gor://a has completely abstracted the upstream AI providers. If a faster, cheaper, or smarter model drops tomorrow, we simply update the master routing logic on our servers. Every single application deployed by our users will receive the upgrade instantly, with zero code changes required on your end.
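Conceptually, the routing layer described above is a server-side lookup table that maps a request type to an upstream provider. This is purely an illustration of the idea — none of it is Gor://a's actual server code; the provider names follow the changelog, and the aliases are invented:

```javascript
// Purely illustrative model-alias routing table. Provider names come from
// the changelog's pricing section; the upstreamModel aliases are invented.
const ROUTES = {
  chat: { provider: "openrouter", upstreamModel: "default-chat-model" },
  image: { provider: "fireworks", upstreamModel: "default-image-model" },
  stt: { provider: "whisper-v3", upstreamModel: "whisper-large-v3" },
};

function routeRequest(kind) {
  const route = ROUTES[kind];
  if (!route) throw new Error(`Unknown request kind: ${kind}`);
  return route;
}

// Swapping an upstream model is a one-line change on the server; deployed
// apps keep calling the same proxy endpoint and pick it up instantly.
```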
What's Next?
The backend plumbing is in place. The AI understands your coding preferences. The proxy is live and billing correctly. You now have an enterprise-grade AI assembly line.
Head to your dashboard to deploy your first proxy-powered full-stack application!