sci-ml/flash-attn
Flash Attention: Fast and Memory-Efficient Exact Attention (Python component).
Version: flash-attn-2.8.3
Keywords: ~amd64
USE flags: cuda rocm python_single_target_python3_11 python_single_target_python3_12 python_single_target_python3_13 debug
License: BSD
Overlay: tatsh-overlay
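Because the package lives in tatsh-overlay rather than the main Gentoo tree, the overlay has to be enabled and synced before emerging. A minimal sketch using app-eselect/eselect-repository, assuming tatsh-overlay appears in Gentoo's repositories index (run as root):

    eselect repository enable tatsh-overlay
    emerge --sync tatsh-overlay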
USE Flags
cuda
Global: Build CUDA binaries.
rocm
Global: Enable ROCm (AMD GPU compute) support.
python_single_target_python3_11
Build for Python 3.11 only.
python_single_target_python3_12
Build for Python 3.12 only.
python_single_target_python3_13
Build for Python 3.13 only.
debug
Global: Enable extra debug codepaths, like asserts and extra output. If you want to get meaningful backtraces, see http://www.gentoo.org/proj/en/qa/backtraces.xml
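The version above carries only the ~amd64 keyword, so a stable amd64 system has to accept it explicitly, and the USE flags are set per package. A minimal sketch of the portage configuration, with hypothetical file names and an example flag selection (pick the one python_single_target_* flag matching your system Python, and cuda or rocm to match your GPU):

    # /etc/portage/package.accept_keywords/flash-attn
    sci-ml/flash-attn ~amd64

    # /etc/portage/package.use/flash-attn
    sci-ml/flash-attn cuda python_single_target_python3_12

    # then build the package
    emerge --ask sci-ml/flash-attn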