Multi-model, one chat.
Swap between GPT-4o, Claude Opus, o4-mini, and the rest without losing context. Run two models side by side when you want a second opinion. One conversation, six minds.
· A thoughtful AI chat
Switch between GPT and Claude mid-conversation. Branch to explore alternatives without losing the thread. Attach files, ground with web search, keep every word. Three free messages a day — upgrade for unlimited.
Today's chat
Pick any model.
Powered by the frontier
· Why quiet
Onboarding wizards, autoplay demos, robot mascots, banners promising AGI by Thursday. Pop-ups asking if you're really sure you want to leave. We skipped all of it. A chat app should be a room to think in — nothing between you and the model.
What you need stays: both frontier families in one place, conversations that branch when you want to explore, web search when you need citations, files when you need context. What you don't need is nowhere to be found. No newsletter traps, no AI “employees,” no dashboards you didn't ask for.
· Features
Six features, one account. Each does its job, stays out of your way, and disappears when you close the tab.
Switch among GPT-4o, Claude Opus, o4-mini, and the rest with full context intact. Run two side by side when you want a second opinion.
Fork any reply to chase a different angle. Both threads persist — swap back when you want the original.
Native grounding pulls live sources mid-answer. Every claim links back to where it came from.
Drag in a PDF, a screenshot, a design file. The model reads it, you keep working.
Every chat is searchable, exportable, yours. We don't train on any of it.
Three messages a day on every model, no card required. Pro when you're ready, not before.
· In motion
Run GPT and Claude on the same input. Keep the one you like, branch the one you don't, or take a piece of each.
Not a mockup — this is how the compare view behaves. Interaction lives in the app.
· Use cases
oran.chat is built for thinking work. Here's what different kinds of thinking look like inside it.
Draft in Claude's voice. Sharpen in GPT's. Branch to try a different tone without losing the original line.
Force two models to argue. Synthesize the disagreement. Use the places they agree as your shortest path.
One cleaner place to think out loud with GPT and Claude. No tab-switching, no premium paywall before you've even tried it.
· Pricing
Two tiers. Same features, different generosity. Everything you actually need is in both columns.
Enough to know whether you like it.
Quota resets at midnight local.
For the days you'd rather not count.
Cancel any time, keep your history.
· FAQ
OpenAI's GPT-4o, GPT-4.1, and o4-mini, plus Anthropic's Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5. New models show up within days of release.
Because then you can't compare. In oran.chat, one conversation can run on whichever model suits the turn — branch to another when you want a second read, swap mid-thread when the tone is wrong. One history. One search bar. No duplicate subscriptions.
Three messages every day, on every model including Opus. No credit card, no lock-out after a trial. Quota resets at midnight local. If you hit the limit, the models stay visible — they just wait until tomorrow.
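The quota is simple enough to sketch. A minimal illustration of the reset logic (hypothetical, not our production code; the constant and function names are made up):

```python
from datetime import datetime, timedelta

DAILY_FREE = 3  # free messages per day, every model included

def messages_remaining(sent_today: int) -> int:
    """How many free messages are left today."""
    return max(0, DAILY_FREE - sent_today)

def seconds_until_reset(now: datetime) -> float:
    """Seconds until the quota resets at the next local midnight."""
    next_midnight = (now + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return (next_midnight - now).total_seconds()
```

No rolling windows, no surprise meters: count to three, wait for midnight.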
Unlimited messages, priority routing during busy hours, longer context windows, higher file-upload limits, and early access to new models as they ship. TODO: final feature copy + price.
No. Conversations stay private to your account. OpenAI and Anthropic don't train on API traffic by default, and we don't log content for model training. You can export or delete anything, any time.
At any reply, fork a new thread. The parent conversation stays exactly where it was — the branch gets its own context, its own model choice, its own future. Swap between branches like tabs; nothing is lost.
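Conceptually, a conversation with branches is just a tree of replies. A toy sketch of the idea (illustrative only; the real data model is more involved):

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str    # "user" or "assistant"
    text: str
    model: str   # each branch can pick its own model
    children: list["Turn"] = field(default_factory=list)

def fork(reply: Turn, model: str, text: str) -> Turn:
    """Branch a new thread off any reply; the parent keeps its own future."""
    branch = Turn(role="user", text=text, model=model)
    reply.children.append(branch)
    return branch
```

Swapping between branches is just pointing the view at a different child; nothing in the parent is touched.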
Yes. Every chat exports to Markdown from the settings panel. Bring your history anywhere; it's yours.
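What an export looks like is easy to picture. A rough sketch of turning a chat into Markdown (the format shown is illustrative, not the exact file we produce):

```python
def to_markdown(title: str, turns: list[tuple[str, str]]) -> str:
    """Render (speaker, text) pairs as a portable Markdown transcript."""
    lines = [f"# {title}", ""]
    for speaker, text in turns:
        lines.append(f"**{speaker}:** {text}")
        lines.append("")
    return "\n".join(lines)
```

Plain text in, plain text out: no proprietary format sitting between you and your own words.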
Still curious? hello@oran.chat
· A quiet invitation
No credit card. No onboarding wizard. You're one link away from a chat with every model that matters.