distributed-llm-pretraining-torchtitan
Facilitates distributed LLM pretraining in PyTorch with TorchTitan, composing data, tensor, and pipeline parallelism to scale training across multiple GPUs.
distributed-llm-pretraining-torchtitan (5 files)
Install this skill with one command
/learn @fabioeducacross/distributed-llm-pretraining-torchtitan
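For context, below is a minimal, generic PyTorch sketch of the kind of sharded data-parallel training loop this skill builds on. It is not TorchTitan's own API: TorchTitan drives runs via torchrun plus a TOML job config and layers tensor and pipeline parallelism on top, whereas this sketch shows only plain FSDP with a toy model and synthetic data (all sizes, hyperparameters, and the filename are placeholders).

```python
# Minimal, generic PyTorch FSDP sketch -- illustrative only, not TorchTitan's API.
# Model size, hyperparameters, and the synthetic batch below are placeholders.
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in for an LLM; a real pretraining run would build a decoder-only
    # transformer and wrap its blocks individually with an auto-wrap policy.
    model = torch.nn.Transformer(
        d_model=512, nhead=8, num_encoder_layers=2, num_decoder_layers=2
    ).cuda()
    model = FSDP(model)  # shard parameters, gradients, and optimizer state across ranks
    optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

    for step in range(10):
        # Synthetic batch of shape (seq_len, batch, d_model); real runs stream tokenized text.
        src = torch.randn(16, 4, 512, device="cuda")
        tgt = torch.randn(16, 4, 512, device="cuda")
        loss = model(src, tgt).pow(2).mean()  # placeholder loss for illustration
        loss.backward()
        optim.step()
        optim.zero_grad()
        if dist.get_rank() == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=2 fsdp_sketch.py` on a multi-GPU host (the script name is hypothetical).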