sci-misc/ik_llama-cpp
llama.cpp fork with additional SOTA quants and improved performance
ik_llama-cpp-9999
USE flags: curl openblas +openmp blis rocm cuda vulkan flexiblas wmma +amdgpu_targets_gfx908 +amdgpu_targets_gfx90a +amdgpu_targets_gfx942 +amdgpu_targets_gfx1030 +amdgpu_targets_gfx1100 +amdgpu_targets_gfx1101 +amdgpu_targets_gfx1200 +amdgpu_targets_gfx1201 amdgpu_targets_gfx803 amdgpu_targets_gfx900 amdgpu_targets_gfx906 amdgpu_targets_gfx940 amdgpu_targets_gfx941 amdgpu_targets_gfx1010 amdgpu_targets_gfx1011 amdgpu_targets_gfx1012 amdgpu_targets_gfx1031 amdgpu_targets_gfx1102 amdgpu_targets_gfx1103 amdgpu_targets_gfx1150 amdgpu_targets_gfx1151
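The USE flags above can be set per-package in Portage. A minimal sketch, assuming ROCm hardware (the file name is conventional and the chosen flags are illustrative; pick the amdgpu_targets_* flag matching your GPU architecture):

```
# /etc/portage/package.use/ik_llama-cpp
# Illustrative: enable ROCm with rocWMMA and one GPU target, disable CUDA/Vulkan
sci-misc/ik_llama-cpp rocm wmma amdgpu_targets_gfx1100 -cuda -vulkan
```

Flags prefixed with `+` in the listing are enabled by default and only need to be listed here to disable them.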
License: MIT
Overlay: guru
Runtime Dependencies
ik_llama-cpp-9999
curl? ( net-misc/curl:= )
openblas? ( sci-libs/openblas:= )
openmp? ( llvm-runtimes/openmp:= )
blis? ( sci-libs/blis:= )
flexiblas? ( sci-libs/flexiblas:= )
rocm? (
	>=dev-util/hip-6.3:=
	>=sci-libs/hipBLAS-6.3:=
	wmma? ( >=sci-libs/rocWMMA-6.3:= )
)
cuda? ( dev-util/nvidia-cuda-toolkit:= )
vulkan? ( media-libs/vulkan-loader )
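Only the -9999 live version is listed, and live ebuilds carry no stable keywords, so the package must usually be unmasked before installing. A hedged sketch of the install flow from the guru overlay (file names are conventional, not mandated):

```
# Illustrative: accept the unkeyworded live ebuild, then build it
echo "sci-misc/ik_llama-cpp **" >> /etc/portage/package.accept_keywords/ik_llama-cpp
emerge --ask sci-misc/ik_llama-cpp
```

This assumes the guru overlay is already enabled (e.g. via `eselect repository enable guru` followed by an `emaint sync -r guru`).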

