gpo.zugaina.org

dev-python/llmlingua

To speed up LLM inference and improve the model's perception of key information, LLMLingua compresses the prompt and the KV cache, achieving up to 20x compression with minimal performance loss.
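The package wraps the upstream LLMLingua library. Below is a minimal usage sketch based on the project's README; the PromptCompressor class, the compress_prompt() call, and the returned dictionary key are assumptions taken from those upstream docs rather than from this ebuild:

    # Minimal sketch, assuming the upstream LLMLingua API (PromptCompressor /
    # compress_prompt) as documented in the project's README.
    from llmlingua import PromptCompressor

    context = "Long retrieved documents or chat history to be compressed ..."
    question = "What does the report conclude?"

    compressor = PromptCompressor()        # default constructor fetches a compression model
    result = compressor.compress_prompt(
        context,
        question=question,
        target_token=200,                  # rough token budget for the compressed prompt
    )
    print(result["compressed_prompt"])     # shortened prompt to pass on to the LLM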

  • llmlingua-0.2.1
    Keywords: ~amd64 ~x86
    USE flags: dev quality python_targets_python3_11 python_targets_python3_12 python_targets_python3_13 python_targets_python3_14
    License: MIT
    Overlay: pypi

USE Flags

dev
* This flag is undocumented *
quality
* This flag is undocumented *
python_targets_python3_11
Build with Python 3.11
python_targets_python3_12
Build with Python 3.12
python_targets_python3_13
Build with Python 3.13
python_targets_python3_14
Build with Python 3.14
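
Since the package is keyworded ~amd64/~x86 and is built only for the interpreters selected by the python_targets flags, installing it on Gentoo would roughly follow the sketch below. The file locations are the standard Portage ones; the chosen Python version and the overlay setup are assumptions that should be matched to the actual system:

    # /etc/portage/package.accept_keywords/llmlingua -- accept the testing keyword
    dev-python/llmlingua ~amd64

    # /etc/portage/package.use/llmlingua -- pick the interpreter actually in use
    dev-python/llmlingua python_targets_python3_12

    # with the pypi overlay added (e.g. via app-eselect/eselect-repository):
    # eselect repository enable pypi
    emerge --ask dev-python/llmlingua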