AI Providers

Multi-Model Intelligence

Choose from 15+ AI providers including OpenAI GPT-5.2, Claude Opus 4.5, Google Gemini 3, DeepSeek R1, Grok 4.1, and more. Run models locally with Ollama, LM Studio, or llama.cpp.

Your API keys. Your choice. Direct access to the latest models with no middleman markup.

Download Free

Bring Your Own Key

Use your own API keys from any provider. Keep full control over costs, with no middleman markup: you pay the provider directly at its rates.
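Concretely, "bring your own key" means the editor calls the provider's public API with your credentials and nothing in between. A minimal sketch of a direct OpenAI-style Chat Completions request (the endpoint and `Authorization: Bearer` header are OpenAI's documented format; the model name is illustrative):

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a direct Chat Completions request: your key goes only to the provider."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # sent straight to the provider, never proxied
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid key, e.g. from the OPENAI_API_KEY env var):
#   with urllib.request.urlopen(build_chat_request(key, "gpt-4o", "Hello")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request goes straight to the provider's endpoint, you are billed at the provider's own rates with no markup.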

Run 100% Locally

Use Ollama, LM Studio, or llama.cpp for completely offline, private AI assistance. Your code never leaves your machine.
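Local mode works the same way, just pointed at your own machine. As a sketch, Ollama exposes an OpenAI-compatible endpoint on localhost (port 11434 is Ollama's default; the model name assumes you have pulled one, e.g. `ollama pull llama3.3`):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint; all traffic stays on localhost
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_local_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request to a locally running Ollama daemon: no API key, no cloud."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires a running Ollama daemon and a pulled model):
#   with urllib.request.urlopen(build_local_request("llama3.3", "Hello")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Since the endpoint resolves to localhost, the prompt and your code never cross the network boundary of your machine.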

Switch Instantly

Change models mid-conversation based on the task. Use GPT-5.2 for reasoning, Grok 4.1 Fast for speed, or Kimi K2 for long context.

Cloud Providers

15+ Cloud AI Providers

  • OpenAI: GPT-5.2, GPT-5.2-Codex, o1, o3-mini, GPT-4o
  • Anthropic: Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5
  • Google: Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro
  • Google Vertex: Vertex AI with Gemini 3, PaLM, Codey models
  • Microsoft Azure: Azure OpenAI with GPT-5.2, GPT-4o
  • DeepSeek: DeepSeek-R1-0528, DeepSeek V3, DeepSeek Coder
  • Mistral: Mistral Large 3, Codestral 25.08, Devstral 2
  • Groq: ultra-fast Llama 3.3, Mixtral, Gemma
  • xAI (Grok): Grok 4.1 Fast, Grok 4 Heavy, Grok 3 (2M context)
  • Qwen: Qwen 2.5, Qwen Coder (Alibaba Cloud)
  • Kimi: Kimi K2 Thinking, Kimi K2-Instruct, Kimi Linear (256K context)
  • MiniMax: MiniMax M2.1, MiniMax-M1 (1M context), Speech 2.6
  • Z.ai: GLM-4.7, GLM-Image, GLM-4.6V (multimodal)
  • OpenRouter: access 200+ models from all providers
  • LiteLLM: unified API for 100+ LLM providers

Local Providers

Run Models on Your Machine

  • Ollama: Llama 3.3, DeepSeek-R1-Distill, Mistral, Qwen, Grok 3
  • LM Studio: any GGUF model, open-source LLMs
  • llama.cpp: direct llama.cpp server integration
  • vLLM: self-hosted, high-performance inference

Smart Switching

Right model for the right task

Different tasks require different strengths. Sidian makes it easy to switch models based on what you're working on.

Complex Architecture & Reasoning

Use GPT-5.2, Claude Opus 4.5, or o1 for deep reasoning

Ultra-Fast Responses

Use Groq or Grok 4.1 Fast (2M context) for blazing speed

Long Codebase Analysis

Use Kimi K2 (256K), MiniMax-M1 (1M), or Gemini 3 Pro

Code Generation

Use Codestral 25.08, GPT-5.2-Codex, or DeepSeek Coder
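The pairings above amount to a routing table from task type to preferred model. As an illustrative sketch (the task labels, model identifiers, and fallback order are hypothetical; only the groupings come from the list above):

```python
# Task-to-model routing table mirroring the pairings above.
# Model identifiers are illustrative, not official API model strings.
PREFERRED_MODELS = {
    "reasoning":    ["gpt-5.2", "claude-opus-4.5", "o1"],
    "speed":        ["groq-llama-3.3", "grok-4.1-fast"],
    "long-context": ["kimi-k2", "minimax-m1", "gemini-3-pro"],
    "codegen":      ["codestral-25.08", "gpt-5.2-codex", "deepseek-coder"],
}

def pick_model(task: str, available: set) -> str:
    """Return the first preferred model for `task` that the user has configured."""
    for model in PREFERRED_MODELS.get(task, []):
        if model in available:
            return model
    raise LookupError(f"no configured model suits task {task!r}")
```

For example, `pick_model("codegen", {"deepseek-coder"})` falls through the first two preferences and returns `"deepseek-coder"`.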

Model selector: Claude Opus 4.5 (active), GPT-5.2 / GPT-5.2-Codex, Grok 4.1 Fast (2M context), Kimi K2 Thinking, Gemini 3 Pro (Deep Think), Ollama (local). Click any model to switch instantly.

Privacy First

Zero Code Retention

Sidian never stores your code on our servers. With cloud providers, your code goes directly to them via their official APIs. Use local models and nothing ever leaves your machine.

  • API keys stored locally, encrypted on your device
  • Direct API calls to providers, never proxied
  • 100% offline mode with Ollama, LM Studio, or llama.cpp
  • No telemetry or usage tracking in local mode
Local Model Active: Ollama • DeepSeek-R1-Distill-70B

  • Network activity: zero external connections
  • Code transmitted: none; all processing stays local
  • Sidian server connection: disabled (offline mode)

Get Started in 30 Seconds

Setting up your preferred AI provider is quick and simple.

1. Open Settings: go to Settings → AI Providers in Sidian. You'll see all available providers listed.

2. Add Your API Key: paste your API key from the provider's dashboard. Keys are encrypted and stored locally.

3. Start Coding: that's it! Switch between models anytime from the model selector in the chat panel.

Get started with Sidian today

Join the next generation of developers coding with AI. Download now and experience the future of development.