dev-python/tensor-parallel
Automatically shard your large model across multiple GPUs; works without torch.distributed.
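To illustrate what "sharding" a model means here, the following is a toy NumPy sketch of column-wise tensor parallelism, the idea behind splitting a linear layer across devices. This is a conceptual example only, not this package's API; the arrays and split stand in for per-GPU shards.

```python
import numpy as np

# Toy sketch of column-wise tensor parallelism (the concept, not the API).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of activations
W = rng.standard_normal((8, 6))   # full weight matrix of a linear layer

# "Shard" W column-wise across two workers (one slice per GPU in practice).
W0, W1 = np.hsplit(W, 2)

# Each worker computes its partial output independently...
y0 = x @ W0
y1 = x @ W1

# ...and the partial outputs are concatenated (an all-gather in practice).
y = np.concatenate([y0, y1], axis=1)

# The sharded computation matches the unsharded one.
assert np.allclose(y, x @ W)
```

Because each shard only needs its own slice of the weights, no single device has to hold the full layer, which is what lets a large model fit across several GPUs.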
USE Flags
dev (this flag is undocumented)
python_targets_python3_11 (this flag is undocumented)
python_targets_python3_12 (this flag is undocumented)
python_targets_python3_13 (this flag is undocumented)
python_targets_python3_14 (this flag is undocumented)
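On Gentoo, the python_targets_* flags above are normally selected per package in /etc/portage/package.use. A sketch, assuming a standard Portage setup (the file name and chosen target are illustrative):

```
# /etc/portage/package.use/tensor-parallel  (hypothetical file name)
# Build this package for Python 3.12:
dev-python/tensor-parallel python_targets_python3_12
```

Re-emerging dev-python/tensor-parallel afterwards rebuilds it for the selected Python target.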

