Multi-Model Intelligence
Choose from 15+ AI providers including OpenAI GPT-5.2, Claude Opus 4.5, Google Gemini 3, DeepSeek R1, Grok 4.1, and more. Run models locally with Ollama, LM Studio, or llama.cpp.
Your API keys. Your choice. Direct access to the latest models with no middleman markup.
Download Free

Bring Your Own Key
Use your own API keys from any provider. Full control over costs with no middleman markup. Pay directly to the AI provider at their rates.
Run 100% Locally
Use Ollama, LM Studio, or llama.cpp for completely offline, private AI assistance. Your code never leaves your machine.
Switch Instantly
Change models mid-conversation based on the task. Use GPT-5.2 for reasoning, Grok 4.1 Fast for speed, or Kimi K2 for long context.
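Under the hood, "switching models" with most cloud providers amounts to changing one field in an OpenAI-style chat request, with your own key sent straight to the provider. A hedged sketch (the task-to-model table is an illustrative assumption, not Sidian's actual routing):

```python
import json

# Illustrative task-to-model table; use whichever models you hold keys for.
MODEL_FOR_TASK = {
    "reasoning": "gpt-5.2",
    "speed": "grok-4.1-fast",
    "long-context": "kimi-k2",
}

def build_chat_request(task: str, user_message: str, api_key: str) -> dict:
    """Build an OpenAI-style chat completion request for the given task.

    Only the "model" field changes between tasks; the same payload shape
    works across providers that expose OpenAI-compatible endpoints.
    """
    return {
        "headers": {
            "Authorization": f"Bearer {api_key}",  # your key, sent directly
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": MODEL_FOR_TASK[task],
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

# Same conversation, different model: only MODEL_FOR_TASK[task] changes.
req = build_chat_request("speed", "Rename this variable everywhere", "sk-...")
```

Because the payload shape is shared, swapping providers mid-conversation is a matter of pointing the same request at a different endpoint with a different key.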
15+ Cloud AI Providers
OpenAI
GPT-5.2, GPT-5.2-Codex, o1, o3-mini, GPT-4o
Anthropic
Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5
Google
Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro
Google Vertex
Vertex AI with Gemini 3, PaLM, Codey models
Microsoft Azure
Azure OpenAI with GPT-5.2, GPT-4o
DeepSeek
DeepSeek-R1-0528, DeepSeek V3, DeepSeek Coder
Mistral
Mistral Large 3, Codestral 25.08, Devstral 2
Groq
Ultra-fast Llama 3.3, Mixtral, Gemma
xAI (Grok)
Grok 4.1 Fast, Grok 4 Heavy, Grok 3 (2M context)
Qwen
Qwen 2.5, Qwen Coder (Alibaba Cloud)
Kimi
Kimi K2 Thinking, Kimi K2-Instruct, Kimi Linear (256K context)
MiniMax
MiniMax M2.1, MiniMax-M1 (1M context), Speech 2.6
Z.ai
GLM-4.7, GLM-Image, GLM-4.6V (multimodal)
OpenRouter
Access 200+ models from all providers
LiteLLM
Unified API for 100+ LLM providers
Run Models on Your Machine
Ollama
Llama 3.3, DeepSeek-R1-Distill, Mistral, Qwen, Grok 3
LM Studio
Any GGUF model, open-source LLMs
llama.cpp
Direct llama.cpp server integration
vLLM
Self-hosted high-performance inference
Right Model for the Right Task
Different tasks play to different model strengths. Sidian makes it easy to switch models based on what you're working on.
Complex Architecture & Reasoning
Use GPT-5.2, Claude Opus 4.5, or o1 for deep reasoning
Ultra-Fast Responses
Use Groq or Grok 4.1 Fast (2M context) for blazing speed
Long Codebase Analysis
Use Kimi K2 (256K), MiniMax-M1 (1M), or Gemini 3 Pro
Code Generation
Use Codestral 25.08, GPT-5.2-Codex, or DeepSeek Coder
Click any model to switch instantly
Zero Code Retention
Sidian never stores your code on its servers. With cloud providers, your code goes directly to them through their official APIs. With local models, nothing ever leaves your machine.
- API keys stored locally, encrypted on your device
- Direct API calls to providers, never proxied
- 100% offline mode with Ollama, LM Studio, or llama.cpp
- No telemetry or usage tracking in local mode
Ollama • DeepSeek-R1-Distill-70B
- Network activity: zero external connections
- Code transmitted: none, all processing local
- Sidian server connection: disabled (offline mode)
Get Started in 30 Seconds
Setting up your preferred AI provider is quick and simple.
Open Settings
Go to Settings → AI Providers in Sidian. You'll see all available providers listed.
Add Your API Key
Paste your API key from the provider's dashboard. Keys are encrypted and stored locally.
Start Coding
That's it! Switch between models anytime from the model selector in the chat panel.
Get started with Sidian today
Join the next generation of developers using AI-powered coding. Download now and experience the future of development.