dev-python/local-llm
Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints.
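Since the server exposes OpenAI-style endpoints, it can be queried with a standard HTTP client. Below is a minimal sketch of a chat-completion request; the host, port, and model name are assumptions for illustration and should be adjusted to match your Docker container's configuration.

    # Minimal sketch of calling a Local-LLM server's OpenAI-style endpoint.
    # BASE_URL and the model identifier are assumptions, not values taken
    # from the package itself.
    import requests

    BASE_URL = "http://localhost:8091"  # assumed host/port for the Docker container

    response = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        json={
            "model": "local",  # hypothetical model identifier
            "messages": [{"role": "user", "content": "Hello!"}],
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])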
Reverse Dependencies
Reverse dependencies can be conditional on your USE flags, the ebuild version, and sometimes other installed packages. Please keep this in mind.

