4 min read

OpenAI Models Now Available: GPT-5.5, GPT Image 2, and More

Your apps can now use OpenAI's latest models — GPT-5.5, image generation, and any OpenAI-compatible provider — through a single universal endpoint.

feature · ai · openai · image-generation

OpenAI on gapp.so

We've launched a universal OpenAI-compatible proxy at `/api/ai/openai/v1/*`. Your apps can now use GPT-5.5, GPT Image 2, and any OpenAI-compatible model — no API key required.

This is the same zero-config experience you already know from Gemini and GLM: publish your app, and AI calls just work.


What's New

  • Chat Completions — GPT-5.5, GPT-5.4, GPT-4.1, o3/o4-mini reasoning models, streaming and non-streaming
  • Image Generation — GPT Image 2 with ~99% text accuracy, up to 4K resolution
  • Universal endpoint — works with OpenAI, GLM, DeepSeek, Groq, and any OpenAI-compatible provider via BYOK
  • Transparent for vibe coders — if your app uses `new OpenAI()` or `fetch('https://api.openai.com/v1/...')`, it automatically routes through the platform proxy on gapp.so

Quick Start: Chat

const response = await fetch('/api/ai/openai/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);

With Streaming

const response = await fetch('/api/ai/openai/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'Write a haiku' }],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const text = decoder.decode(value, { stream: true });
  for (const line of text.split('\n')) {
    if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
    const json = JSON.parse(line.slice(6));
    const content = json.choices?.[0]?.delta?.content;
    if (content) process.stdout.write(content);
  }
}
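One caveat with the loop above: decoded chunks are not guaranteed to end on line boundaries, so a `data:` line split across two reads would fail to parse. A small buffering parser avoids this (a sketch; `createSSEParser` is not a platform API):

```javascript
// Buffers partial SSE lines across chunks so a "data: {...}" line split
// between two reads is still parsed whole.
function createSSEParser(onContent) {
  let buffer = '';
  return function feed(chunkText) {
    buffer += chunkText;
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the (possibly incomplete) last line
    for (const line of lines) {
      if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
      const json = JSON.parse(line.slice(6));
      const content = json.choices?.[0]?.delta?.content;
      if (content) onContent(content);
    }
  };
}

// A delta split across two network chunks still comes out intact:
let out = '';
const feed = createSSEParser((c) => { out += c; });
feed('data: {"choices":[{"delta":{"content":"Hel');
feed('lo"}}]}\n');
// out === 'Hello'
```

Feed each `decoder.decode(...)` result into the parser instead of splitting the chunk directly.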

Quick Start: Image Generation

const response = await fetch('/api/ai/openai/v1/images/generations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-image-2',
    prompt: 'A minimalist logo with the text "HELLO" in clean sans-serif',
    size: '1024x1024',
    quality: 'medium',
    n: 1
  })
});

const data = await response.json();
const imageBase64 = data.data[0].b64_json;

GPT Image 2 excels at text rendering (~99% accuracy across Latin, CJK, and Arabic scripts), making it ideal for logos, UI mockups, signs, and branded content.

Image Credit Costs

Image generation uses more credits than text:

| Quality | Credits per Image |
|---------|-------------------|
| low     | 3 credits         |
| medium  | 10 credits        |
| high    | 25-30 credits     |

Credits come from the same daily pool as Gemini and GLM calls.
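If your app lets users pick a quality level, it can be worth estimating spend before firing the request. A sketch of the table above as a lookup (`estimateImageCredits` is illustrative, not a platform function; the high tier uses the upper bound, 30, to budget conservatively):

```javascript
// Estimate image-generation cost from the credit table above.
function estimateImageCredits(quality, n = 1) {
  const perImage = { low: 3, medium: 10, high: 30 };
  if (!(quality in perImage)) throw new Error(`unknown quality: ${quality}`);
  return perImage[quality] * n;
}

// estimateImageCredits('medium', 4) === 40
```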


Available Models

All OpenAI models are supported. Some highlights:

| Model        | Best For                                                | Price Tier |
|--------------|---------------------------------------------------------|------------|
| gpt-5.5      | Most capable — agentic coding, research, knowledge work | Premium    |
| gpt-5.4      | Strong all-rounder — GPT Image 2 backbone               | High       |
| gpt-5.4-nano | Fast and cheap — tools, classifiers, short answers      | Low        |
| gpt-4.1      | 1M context — long documents, coding                     | Mid        |
| gpt-4.1-nano | Cheapest — simple tasks, high volume (platform default) | Lowest     |
| o3           | Reasoning — math, logic, complex analysis               | Mid        |
| o4-mini      | Cost-effective reasoning                                | Low        |
| gpt-image-2  | Image generation with ~99% text accuracy                | Per-image  |

The platform default is gpt-4.1-nano to keep credit costs low. Specify any model in your request to use it.
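Since every request names its model explicitly in the `model` field, the choice can live in one place. A minimal sketch (`pickModel` and the tier names are illustrative; the model IDs come from the table above):

```javascript
// Map a coarse "tier" to a model ID; the default mirrors the platform
// default, gpt-4.1-nano.
const MODELS = {
  default: 'gpt-4.1-nano',  // cheapest, platform default
  longContext: 'gpt-4.1',   // 1M context
  reasoning: 'o3',
  best: 'gpt-5.5',
};

function pickModel(tier = 'default') {
  return MODELS[tier] ?? MODELS.default;
}

// pickModel('reasoning') === 'o3'
```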


It Just Works for OpenAI SDK Code

If your app already uses the OpenAI SDK or calls https://api.openai.com/v1/... directly, you don't need to change anything. On gapp.so, these calls are automatically intercepted and routed through the platform proxy.

This means code like this works out of the box:

// This "just works" on gapp.so — no API key needed
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk-...'  // stripped automatically
  },
  body: JSON.stringify({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'Hi' }]
  })
});

The platform proxy handles authentication, rate limiting, and credit tracking transparently.


Three AI Providers, One Platform

Your apps now have access to three AI providers:

| Provider | Endpoint              | Format               | Best For                                |
|----------|-----------------------|----------------------|-----------------------------------------|
| Gemini   | /api/ai/gemini        | Google Generative AI | Image generation, fast text             |
| GLM      | /api/ai/glm           | OpenAI-compatible    | Chinese language content                |
| OpenAI   | /api/ai/openai/v1/*   | OpenAI-compatible    | High-quality text, image gen with text  |

All three share the same credit pool. Mix and match in the same app — use Gemini for fast responses, OpenAI for image generation, and GLM for Chinese text.
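When mixing providers in one app, the endpoint paths from the table above can sit behind a single helper. A sketch (`buildEndpoint` is illustrative; the chat-completions suffix for OpenAI follows the quick-start examples above):

```javascript
// Resolve a provider name to its platform proxy path.
function buildEndpoint(provider) {
  switch (provider) {
    case 'gemini': return '/api/ai/gemini';
    case 'glm':    return '/api/ai/glm';
    case 'openai': return '/api/ai/openai/v1/chat/completions';
    default: throw new Error(`unknown provider: ${provider}`);
  }
}
```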


Bring Your Own Key (BYOK)

Want unlimited usage or access to other OpenAI-compatible providers? Add your own API key in Dashboard Settings.

Any OpenAI-compatible provider works through the same endpoint:

  • OpenAI — api.openai.com
  • DeepSeek — api.deepseek.com
  • Groq — api.groq.com
  • Together — api.together.xyz
  • Any compatible API — just set your custom base URL

BYOK keys get unlimited usage with no credit limits.


Local Development

The dev proxy now supports OpenAI endpoints too. External calls to api.openai.com are automatically rewritten to the platform proxy during local development:

<script src="https://gapp.so/dev-proxy.js" data-token="YOUR_TOKEN"></script>

Ready to build? Just point your fetch calls to /api/ai/openai/v1/chat/completions and start creating!

Ready to share your creation?

Publish your AI-built app and get a landing page in seconds.

Submit Your App