 ____     ___      _          _        ____  
/ ___|     / _ \    | |       / \      |  _ \ 
\___ \    | | | |   | |      / _ \     | |_) |
 ___) | _ | |_| | _ | |___  _  / ___ \  _ |  _ < 
|____/ (_) \___/ (_)|_____|(_)/_/   \_\(_)|_| \_\
Systematic Online or Local Autonomous Robot
Say its name. It listens. SOLAR is a voice-activated AI agent that runs fully on your Windows machine — handling calls, sending messages, writing code, and controlling your desktop through natural speech. No cloud required.
// voice commands
// capabilities
Every feature below is implemented in real, running Python — not a roadmap.
SOLAR spots incoming-call notification windows via EnumWindows. It screenshots the notification, runs it through a vision model to read the caller's name, then asks you by voice whether to answer — or declines with a custom voicemail, all autonomously.

Configured through config.yaml, SOLAR listens 24/7 using fully offline Vosk STT — no internet, no cloud microphone, zero latency.

Ask it to schedule something and it generates an .ics file and opens it straight in your calendar app.

All inference runs through Ollama at localhost:11434. Vision tasks go to the multimodal model, code to Malicus7862 DeepSeek Coder, writing to DeepSeek v2, and general chat to the main model. No API keys, no data leaving your machine.

// runtime flow
Every interaction starts with your voice. SOLAR routes it through local models — no cloud required.
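The routing step can be sketched in a few lines. This is a minimal illustration, not SOLAR's actual code: the `ROUTES` table and the `pick_model` / `ask_ollama` names are hypothetical, and the model tags are placeholders for whatever is configured in config.yaml. The endpoint is Ollama's standard `/api/generate`.

```python
import json
import urllib.request

# Hypothetical task-to-model table mirroring the stack described above;
# these tags are placeholders, not the exact models SOLAR ships with.
ROUTES = {
    "vision": "llava",
    "code": "deepseek-coder",
    "writing": "deepseek-v2",
    "chat": "llama3",
}

def pick_model(task: str) -> str:
    """Return the local Ollama model tag for a task category."""
    return ROUTES.get(task, ROUTES["chat"])

def ask_ollama(task: str, prompt: str) -> str:
    """POST to the local Ollama generate endpoint (requires a running server)."""
    body = json.dumps(
        {"model": pick_model(task), "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything resolves to `localhost:11434`, swapping a model means editing one tag in config — no key rotation, no network egress.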
// ai stack
Pick fully local Ollama models or Ollama's cloud models — STT and TTS are always local either way.
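A sketch of what the model-selection portion of config.yaml might look like; the key names and model tags below are assumptions for illustration, not SOLAR's exact schema.

```yaml
# Illustrative only: keys and tags are assumptions, not SOLAR's schema.
models:
  chat: llama3            # general conversation (or an Ollama cloud model)
  vision: llava           # screenshots and caller-ID reads
  code: deepseek-coder    # code generation
  writing: deepseek-v2    # long-form writing
speech:
  stt: vosk               # always local
  tts: kokoro             # always local
```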
// quick start
Pick the install that fits your hardware. Both handle dependencies and config automatically.
🛡️ Both scripts require Administrator PowerShell · Vosk STT & Kokoro TTS are shared between both setups
Every line of SOLAR is public. Fork it, extend it, add new skills in skills.py, or swap in different Ollama models. The project is built to be hacked on.
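To make "add new skills in skills.py" concrete, here is one way a trigger-phrase registry could look. This is a hypothetical sketch assuming a decorator-based design; SOLAR's actual extension API in skills.py may differ.

```python
# Hypothetical plugin-style skill registry; the @skill decorator and
# dispatch() helper are illustrative, not SOLAR's real API.
SKILLS = {}

def skill(trigger: str):
    """Register a handler for a spoken trigger phrase."""
    def register(fn):
        SKILLS[trigger] = fn
        return fn
    return register

@skill("what time is it")
def tell_time(_command: str) -> str:
    from datetime import datetime
    return datetime.now().strftime("It's %H:%M.")

def dispatch(command: str) -> str:
    """Route a transcribed command to the first matching skill."""
    for trigger, fn in SKILLS.items():
        if trigger in command.lower():
            return fn(command)
    return "No matching skill."
```

With a layout like this, adding a capability is one decorated function — the voice loop just passes each transcript to `dispatch`.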