The software has been tested on macOS (Sequoia) and Linux. Windows builds were not actively tested for this release.
## Requirements

- macOS, Linux, or Windows
- minimum 4 GB RAM for interactive browsing mode (requires local inference via Ollama with `qwen2.5:3b`)
## macOS / Linux

    # Core (API + local models)
    curl -fsSL https://swizzlm.com/install.sh | bash

    # Full (core + browser automation + Ollama)
    curl -fsSL https://swizzlm.com/install.sh | bash -s -- --full
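The full install uses the `bash -s -- --full` idiom: `-s` tells bash to read the script from stdin, and everything after `--` becomes the script's positional parameters. A minimal demonstration of the mechanism, using a stand-in one-line script rather than the real installer:

```shell
# `bash -s -- ARGS` runs a script read from stdin with ARGS as its positional
# parameters; this is how the one-liner forwards --full to the downloaded installer.
echo 'echo "installer received: $1"' | bash -s -- --full
# prints: installer received: --full
```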
## Windows

### via PowerShell

    # Core
    irm https://swizzlm.com/install.ps1 | iex

    # Full
    iex "& { $(irm https://swizzlm.com/install.ps1) } -Full"
### via Winget

    winget install SwizzLM.Swizz
The binary is installed to `~/.swizz-llm/bin/swizz` (macOS/Linux) or `%LOCALAPPDATA%\swizz-llm\bin\swizz.exe` (Windows) and is automatically added to your PATH.
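If the installer cannot update your shell profile, the directory can be added to PATH by hand. A sketch for macOS/Linux; the profile file name is an assumption (use `~/.zshrc` for macOS's default zsh):

```shell
# Make the swizz binary reachable in the current session
export PATH="$HOME/.swizz-llm/bin:$PATH"

# Persist it for future sessions (adjust the profile file for your shell)
echo 'export PATH="$HOME/.swizz-llm/bin:$PATH"' >> ~/.bashrc

# Confirm the directory is now searched
echo "$PATH" | tr ':' '\n' | grep swizz-llm
```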
## Ollama (optional)

Ollama runs local models and powers interactive browsing. The app auto-starts Ollama if the binary is on your PATH. The `--full` install script installs Ollama and pulls the required model automatically.
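If you installed only the core package, the model can be pulled by hand with standard Ollama CLI commands. A sketch, guarded so it degrades gracefully when Ollama is not installed:

```shell
# Pull the model used by interactive browsing mode, if Ollama is on PATH
if command -v ollama >/dev/null 2>&1; then
  ollama pull qwen2.5:3b   # downloads the model on first run
  ollama list              # confirm qwen2.5:3b appears in the local model list
else
  echo "ollama not found on PATH"
fi
```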
## LM Studio (optional)

LM Studio is a desktop app for running local models. Installing it adds more options for local inference.
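LM Studio exposes an OpenAI-compatible HTTP server (default port 1234) that local tools can point at. A sketch for checking whether it is reachable, assuming the server has been started from the LM Studio UI:

```shell
# List models served by LM Studio's local server, if it is running
if curl -s --max-time 2 http://localhost:1234/v1/models >/dev/null 2>&1; then
  curl -s http://localhost:1234/v1/models
else
  echo "LM Studio server is not running on port 1234"
fi
```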