OpenAI Models Now Available: GPT-5.5, GPT Image 2, and More
Your apps can now use OpenAI's latest models — GPT-5.5, image generation, and any OpenAI-compatible provider — through a single universal endpoint.
OpenAI on gapp.so
We've launched a universal OpenAI-compatible proxy at `/api/ai/openai/v1/*`. Your apps can now use GPT-5.5, GPT Image 2, and any OpenAI-compatible model — no API key required.
This is the same zero-config experience you already know from Gemini and GLM: publish your app, and AI calls just work.
What's New
- Chat Completions — GPT-5.5, GPT-5.4, GPT-4.1, o3/o4-mini reasoning models, streaming and non-streaming
- Image Generation — GPT Image 2 with ~99% text accuracy, up to 4K resolution
- Universal endpoint — works with OpenAI, GLM, DeepSeek, Groq, and any OpenAI-compatible provider via BYOK
- Transparent for vibe coders — if your app uses `new OpenAI()` or `fetch('https://api.openai.com/v1/...')`, it automatically routes through the platform proxy on gapp.so
Quick Start: Chat
```js
const response = await fetch('/api/ai/openai/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

With Streaming
```js
const response = await fetch('/api/ai/openai/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'Write a haiku' }],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // stream: true keeps multi-byte characters split across chunks intact
  buffer += decoder.decode(value, { stream: true });
  // SSE lines can also be split across chunks, so only consume complete lines
  const lines = buffer.split('\n');
  buffer = lines.pop();
  for (const line of lines) {
    if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
    const json = JSON.parse(line.slice(6));
    const content = json.choices?.[0]?.delta?.content;
    if (content) process.stdout.write(content);
  }
}
```

Quick Start: Image Generation
```js
const response = await fetch('/api/ai/openai/v1/images/generations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-image-2',
    prompt: 'A minimalist logo with the text "HELLO" in clean sans-serif',
    size: '1024x1024',
    quality: 'medium',
    n: 1
  })
});

const data = await response.json();
const imageBase64 = data.data[0].b64_json;
```

GPT Image 2 excels at text rendering (~99% accuracy across Latin, CJK, and Arabic scripts), making it ideal for logos, UI mockups, signs, and branded content.
Image Credit Costs
Image generation uses more credits than text:
| Quality | Credits per Image |
|---|---|
| low | 3 credits |
| medium | 10 credits |
| high | 25-30 credits |
Credits come from the same daily pool as Gemini and GLM calls.
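For budgeting, the per-image rates above fold into a tiny helper. This is illustrative only (`imageCost` is not a platform API, and high quality is taken at its 25-credit lower bound):

```js
// Per-image credit rates from the table above (high = lower bound).
const IMAGE_CREDITS = { low: 3, medium: 10, high: 25 };

// Estimate the credit cost of generating `n` images at a given quality.
function imageCost(quality, n = 1) {
  return IMAGE_CREDITS[quality] * n;
}

console.log(imageCost('medium', 4)); // 40 credits
```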
Available Models
All OpenAI models are supported. Some highlights:
| Model | Best For | Price Tier |
|---|---|---|
| gpt-5.5 | Most capable — agentic coding, research, knowledge work | Premium |
| gpt-5.4 | Strong all-rounder — GPT Image 2 backbone | High |
| gpt-5.4-nano | Fast and cheap — tools, classifiers, short answers | Low |
| gpt-4.1 | 1M context — long documents, coding | Mid |
| gpt-4.1-nano | Cheapest — simple tasks, high volume (platform default) | Lowest |
| o3 | Reasoning — math, logic, complex analysis | Mid |
| o4-mini | Cost-effective reasoning | Low |
| gpt-image-2 | Image generation with ~99% text accuracy | Per-image |
The platform default is gpt-4.1-nano to keep credit costs low. Specify any model in your request to use it.
It Just Works for OpenAI SDK Code
If your app already uses the OpenAI SDK or calls https://api.openai.com/v1/... directly, you don't need to change anything. On gapp.so, these calls are automatically intercepted and routed through the platform proxy.
This means code like this works out of the box:
```js
// This "just works" on gapp.so — no API key needed
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk-...' // stripped automatically
  },
  body: JSON.stringify({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'Hi' }]
  })
});
```

The platform proxy handles authentication, rate limiting, and credit tracking transparently.
Three AI Providers, One Platform
Your apps now have access to three AI providers:
| Provider | Endpoint | Format | Best For |
|---|---|---|---|
| Gemini | /api/ai/gemini | Google Generative AI | Image generation, fast text |
| GLM | /api/ai/glm | OpenAI-compatible | Chinese language content |
| OpenAI | /api/ai/openai/v1/* | OpenAI-compatible | High-quality text, image gen with text |
All three share the same credit pool. Mix and match in the same app — use Gemini for fast responses, OpenAI for image generation, and GLM for Chinese text.
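Mixing providers can be as simple as keying requests off the endpoint table above. A small sketch, with the caveat that `chatEndpoint` is illustrative rather than a platform API, and the `/chat/completions` suffix for GLM is an assumption based on its OpenAI-compatible format:

```js
// Platform endpoints from the table above.
const PROVIDERS = {
  gemini: '/api/ai/gemini',
  glm: '/api/ai/glm',
  openai: '/api/ai/openai/v1',
};

// Build the chat-completions URL for an OpenAI-compatible provider.
// (Gemini uses the Google Generative AI format, so it is excluded here.)
function chatEndpoint(provider) {
  if (!(provider in PROVIDERS) || provider === 'gemini') {
    throw new Error(`no OpenAI-compatible endpoint for ${provider}`);
  }
  return `${PROVIDERS[provider]}/chat/completions`;
}

console.log(chatEndpoint('openai')); // /api/ai/openai/v1/chat/completions
```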
Bring Your Own Key (BYOK)
Want unlimited usage or access to other OpenAI-compatible providers? Add your own API key in Dashboard Settings.
Any OpenAI-compatible provider works through the same endpoint:
- OpenAI — api.openai.com
- DeepSeek — api.deepseek.com
- Groq — api.groq.com
- Together — api.together.xyz
- Any compatible API — just set your custom base URL
BYOK keys get unlimited usage with no credit limits.
Local Development
The dev proxy now supports OpenAI endpoints too. External calls to api.openai.com are automatically rewritten to the platform proxy during local development:
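Conceptually, the rewrite swaps the OpenAI origin for the platform proxy path. A rough sketch of that mapping (not the actual dev-proxy.js implementation):

```js
// Illustrative only: map an absolute OpenAI URL onto the platform proxy path.
function rewriteOpenAIUrl(url) {
  return url.replace('https://api.openai.com/v1/', '/api/ai/openai/v1/');
}

console.log(rewriteOpenAIUrl('https://api.openai.com/v1/chat/completions'));
// /api/ai/openai/v1/chat/completions
```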
```html
<script src="https://gapp.so/dev-proxy.js" data-token="YOUR_TOKEN"></script>
```

Ready to build? Just point your fetch calls to `/api/ai/openai/v1/chat/completions` and start creating!