[SKV-4350] Add OLLAMA_SUPPORTS_VISION env var, update docs (#4351)

Co-authored-by: Suchintan <suchintan@users.noreply.github.com>
Aaron Perez
2025-12-22 19:20:31 -06:00
committed by GitHub
parent d57ff99788
commit 9645960016
6 changed files with 11 additions and 3 deletions


@@ -546,10 +546,11 @@ Recommended `LLM_KEY`: `GEMINI_2.5_PRO_PREVIEW`, `GEMINI_2.5_FLASH_PREVIEW`
| `ENABLE_OLLAMA` | Register local models via Ollama | Boolean | `true`, `false` |
| `OLLAMA_SERVER_URL` | URL for your Ollama server | String | `http://host.docker.internal:11434` |
| `OLLAMA_MODEL` | Ollama model name to load | String | `qwen2.5:7b-instruct` |
| `OLLAMA_SUPPORTS_VISION` | Enable vision support | Boolean | `true`, `false` |
Recommended `LLM_KEY`: `OLLAMA`
Note: Ollama models are text-only by default. Set `OLLAMA_SUPPORTS_VISION=true` for vision-capable models like qwen3-vl, llava, etc.
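Putting the variables above together, a local Ollama setup in the `.env` file might look like the following sketch (the values are illustrative examples, not defaults):

```shell
# Illustrative .env excerpt for a local Ollama backend
ENABLE_OLLAMA=true
OLLAMA_SERVER_URL=http://host.docker.internal:11434
OLLAMA_MODEL=qwen2.5:7b-instruct
# Set to true only when the model above is vision-capable (e.g. llava)
OLLAMA_SUPPORTS_VISION=false
LLM_KEY=OLLAMA
```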
##### OpenRouter
| Variable | Description | Type | Sample Value |