sci-ml/flash-attn
Flash Attention: Fast and Memory-Efficient Exact Attention (Python component).
flash-attn-2.8.3 ~amd64
USE flags: cuda rocm python_single_target_python3_11 python_single_target_python3_12 python_single_target_python3_13 debug
License: BSD
Overlay: tatsh-overlay
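For context, a minimal repos.conf sketch for adding this overlay manually (the checkout location is illustrative, and the sync-uri assumes the overlay's usual GitHub home):

[tatsh-overlay]
# Illustrative path; any local checkout location works
location = /var/db/repos/tatsh-overlay
sync-type = git
# Assumed upstream URL for the tatsh-overlay repository
sync-uri = https://github.com/Tatsh/tatsh-overlay.git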
Runtime Dependencies
flash-attn-2.8.3
python_single_target_python3_11? ( dev-lang/python:3.11 )
python_single_target_python3_12? ( dev-lang/python:3.12 )
python_single_target_python3_13? ( dev-lang/python:3.13 )
python_single_target_python3_11? ( sci-ml/einops[python_targets_python3_11(-)] )
python_single_target_python3_12? ( sci-ml/einops[python_targets_python3_12(-)] )
python_single_target_python3_13? ( sci-ml/einops[python_targets_python3_13(-)] )
cuda? ( sci-ml/caffe2[cuda,flash] )
rocm? ( sci-ml/caffe2[rocm] )
sci-ml/pytorch[python_single_target_python3_11(-)?,python_single_target_python3_12(-)?,python_single_target_python3_13(-)?]
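As a usage sketch, installing this package with CUDA support might look like the following, assuming the overlay is listed in the eselect-repository index and the package is still keyworded ~amd64 (file names under /etc/portage are illustrative):

# Enable and sync the overlay (assumes app-eselect/eselect-repository is installed)
eselect repository enable tatsh-overlay
emaint sync -r tatsh-overlay

# Accept the testing keyword and enable CUDA support for this package
echo "sci-ml/flash-attn ~amd64" >> /etc/portage/package.accept_keywords/flash-attn
echo "sci-ml/flash-attn cuda" >> /etc/portage/package.use/flash-attn

# Build and install; the Python interpreter is selected via the global
# PYTHON_SINGLE_TARGET setting (python3_11/3_12/3_13, per the flags above)
emerge --ask sci-ml/flash-attn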