training-llms-megatron
Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use it when training models larger than 1B parameters, when you need maximum GPU efficiency (47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. Production-ready framework use...
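As a rough illustration of how those parallelism dimensions are wired together, here is a minimal sketch of initializing Megatron-Core's parallel state. The sizes are placeholder assumptions, and the exact keyword arguments can vary across megatron-core versions; this is not the skill's own code.

```python
# Minimal sketch: setting up Megatron-Core model parallelism.
# The sizes below are placeholder assumptions. Tensor x pipeline x
# context x expert x data parallel sizes must multiply to the world size.
import torch
from megatron.core import parallel_state

# Assumes launch via torchrun so RANK/WORLD_SIZE env vars are set, e.g.:
#   torchrun --nproc-per-node=8 init_parallel.py
torch.distributed.init_process_group(backend="nccl")

parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=2,    # shard each layer's weights across 2 GPUs
    pipeline_model_parallel_size=2,  # split the layer stack into 2 stages
    context_parallel_size=1,         # shard the sequence dim for long contexts
    expert_model_parallel_size=1,    # distribute MoE experts across GPUs
)
# Remaining ranks form the data-parallel dimension. Sequence parallelism
# is typically toggled on the model config (e.g. sequence_parallel=True
# in TransformerConfig) rather than here.
```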
Facilitates training of large language models using NVIDIA Megatron-Core with advanced parallelism for optimal GPU efficiency.
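To put the 47% MFU figure in perspective: MFU (Model FLOPs Utilization) is the ratio of the FLOPs a training run actually achieves to the hardware's aggregate peak. Below is a back-of-envelope sketch of the standard calculation; the model size, throughput, and GPU count are illustrative assumptions, not measurements from this skill.

```python
# Back-of-envelope MFU (Model FLOPs Utilization) estimate.
# All inputs below are illustrative assumptions, not measured numbers.

def mfu(num_params: float, tokens_per_sec: float, num_gpus: int,
        peak_flops_per_gpu: float = 989e12) -> float:
    """MFU = achieved model FLOPs/s divided by aggregate peak FLOPs/s.

    Uses the common ~6 FLOPs per parameter per token for one training
    step (forward + backward); 989e12 is the H100 dense BF16 peak.
    """
    achieved_flops_per_sec = 6 * num_params * tokens_per_sec
    return achieved_flops_per_sec / (num_gpus * peak_flops_per_gpu)

# e.g. a 70B-parameter model pushing 570k tokens/s across 512 H100s
print(f"MFU ~ {mfu(70e9, 5.7e5, 512):.1%}")  # ~47%
```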
Install this skill with one command:

/learn @davila7/distributed-training-megatron-core
Category: development
Updated: March 16, 2026
Repository: davila7/claude-code-templates (22.3K GitHub stars)
Security: audited on Feb 28, 2026; no security issues detected