shahondin1624
  • Germany
  • Joined on 2026-01-30
6c38044904 feat: semantic cache (#32): main.rs
9f49cab2da feat: semantic cache (#32): memory.proto
3d0addcd0a feat: semantic cache (#32): _index.md
02e27a6a60 feat: semantic cache (#32): prompt.rs
c9677d7d5b feat: semantic cache (#32): mod.rs
13cebc225f feat: semantic cache (#32): issue-031.md
2da5e97bf9 feat: semantic cache (#32): similarity.rs
35eb00b3aa feat: semantic cache (#32): mod.rs
d6e5902a59 feat: semantic cache (#32): issue-032.md
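The commits above (in particular similarity.rs) suggest a similarity-based semantic cache. A minimal sketch of how such a cache could work, assuming cosine similarity over embeddings with a hit threshold; all names (`SemanticCache`, `lookup`) and the threshold value are illustrative, not the repository's actual API:

```rust
// Hedged sketch of a semantic cache: entries are keyed by an embedding
// vector, and a lookup returns the cached response whose embedding is
// most similar to the query embedding, provided the cosine similarity
// clears a threshold. All identifiers here are hypothetical.

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

struct SemanticCache {
    entries: Vec<(Vec<f32>, String)>, // (embedding, cached response)
    threshold: f32,
}

impl SemanticCache {
    fn new(threshold: f32) -> Self {
        Self { entries: Vec::new(), threshold }
    }

    fn insert(&mut self, embedding: Vec<f32>, response: String) {
        self.entries.push((embedding, response));
    }

    /// Returns the cached response most similar to `query`, or None
    /// if no entry clears the similarity threshold (a cache miss).
    fn lookup(&self, query: &[f32]) -> Option<&str> {
        self.entries
            .iter()
            .map(|(e, r)| (cosine(query, e), r.as_str()))
            .filter(|(s, _)| *s >= self.threshold)
            .max_by(|a, b| a.0.total_cmp(&b.0))
            .map(|(_, r)| r)
    }
}

fn main() {
    let mut cache = SemanticCache::new(0.9);
    cache.insert(vec![1.0, 0.0], "answer A".into());
    cache.insert(vec![0.0, 1.0], "answer B".into());
    // Near-duplicate of the first embedding: cache hit.
    assert_eq!(cache.lookup(&[0.99, 0.05]), Some("answer A"));
    // Dissimilar to both entries: cache miss, fall through to the model.
    assert_eq!(cache.lookup(&[0.7, 0.7]), None);
    println!("ok");
}
```

On a miss, a real implementation would call the model, then insert the new (embedding, response) pair so near-duplicate prompts hit the cache next time.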
shahondin1624 created branch feature/issue-32-semantic-cache in llm-multiverse/llm-multiverse 2026-03-09 23:45:34 +01:00
shahondin1624 opened issue llm-multiverse/llm-multiverse#120 2026-03-09 23:08:33 +01:00
Tech debt: minor findings from issue #31 review
shahondin1624 closed issue llm-multiverse/llm-multiverse#31 2026-03-09 23:08:17 +01:00
Implement extraction step (model call for relevant segment)
shahondin1624 commented on issue llm-multiverse/llm-multiverse#31 2026-03-09 23:08:17 +01:00
Implement extraction step (model call for relevant segment)

Implementation complete in PR #119.

Extraction step: Post-retrieval LLM call via Model Gateway Inference RPC extracts relevant segments from memory corpus.

  • ExtractionClient with prompt…
shahondin1624 created pull request llm-multiverse/llm-multiverse#119 2026-03-09 23:08:09 +01:00
feat: implement extraction step (#31)
8471c5b19f feat: extraction step (#31): service.rs
b6fdf531f8 feat: extraction step (#31): main.rs
41211d20a2 feat: extraction step (#31): config.rs
52c0cba16d feat: extraction step (#31): lib.rs
20e1bba174 feat: extraction step (#31): memory.proto