Pi Agent · a38ea1db51
feat: implement Ollama HTTP client for Model Gateway (issue #39)
Add an async HTTP client wrapping the Ollama REST API with:
- OllamaClient with generate, generate_stream, chat, embed, list_models, is_healthy
- NDJSON streaming parser for /api/generate streaming responses
- Serde types for all Ollama API endpoints
- OllamaError enum with Http, Api, Deserialization, StreamIncomplete variants
- OllamaClientConfig for timeout and connection pool settings
- Integration into ModelGatewayServiceImpl (constructor now returns Result)
- 48 tests (serde round-trips for the types, wiremock HTTP mocks, error handling, config)
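The NDJSON streaming parser above has to cope with HTTP chunks that split a JSON object across reads, and with a stream that ends mid-object (the `StreamIncomplete` case). A minimal sketch of that reassembly logic, using std only — the type and method names here are illustrative, not the actual crate API:

```rust
/// Buffers raw HTTP chunks and yields complete NDJSON lines.
/// Ollama's /api/generate streams one JSON object per line, but a
/// chunk boundary can fall anywhere, including inside an object.
struct NdjsonBuffer {
    buf: String,
}

impl NdjsonBuffer {
    fn new() -> Self {
        NdjsonBuffer { buf: String::new() }
    }

    /// Feed a raw chunk; return every newline-terminated line
    /// (each one a complete JSON object, ready for deserialization).
    fn feed(&mut self, chunk: &str) -> Vec<String> {
        self.buf.push_str(chunk);
        let mut lines = Vec::new();
        while let Some(pos) = self.buf.find('\n') {
            let line: String = self.buf.drain(..=pos).collect();
            let line = line.trim_end().to_string();
            if !line.is_empty() {
                lines.push(line);
            }
        }
        lines
    }

    /// Leftover bytes after the stream closes mean it ended
    /// mid-object — the situation StreamIncomplete reports.
    fn is_incomplete(&self) -> bool {
        !self.buf.trim().is_empty()
    }
}

fn main() {
    let mut b = NdjsonBuffer::new();
    // A JSON object split across two HTTP chunks:
    let first = b.feed("{\"response\":\"Hel");
    assert!(first.is_empty()); // no complete line yet
    let second = b.feed("lo\",\"done\":false}\n{\"response\":\"\",\"done\":true}\n");
    assert_eq!(second.len(), 2); // both objects now complete
    assert!(!b.is_incomplete());
    println!("parsed {} objects", second.len());
}
```

In the real client each returned line would be handed to a serde deserializer; keeping the reassembly separate from deserialization is what makes the `Deserialization` and `StreamIncomplete` error variants distinguishable.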

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 13:56:53 +01:00

llm-multiverse

System for orchestrating local LLM agents

Languages
- Rust 60.3%
- Python 37.5%
- Shell 1.9%
- Dockerfile 0.3%