training-llms-megatron by davila7

Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models larger than 1B parameters, when you need maximum GPU efficiency (47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. A production-ready framework used for Nemotron, LLaMA, and DeepSeek.
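The parallelism strategies named above are typically selected via launch-time flags. A minimal sketch of a Megatron-LM launch, assuming the standard `pretrain_gpt.py` entry point and `torchrun`; the degrees shown are illustrative, not a tuned configuration, and a real run needs additional model and data arguments:

```shell
# Illustrative only: 8-way tensor parallelism x 4-way pipeline parallelism,
# with sequence parallelism enabled and 2-way context parallelism.
# Adjust the sizes to your cluster topology and model size.
torchrun --nproc_per_node=8 pretrain_gpt.py \
    --tensor-model-parallel-size 8 \
    --pipeline-model-parallel-size 4 \
    --sequence-parallel \
    --context-parallel-size 2 \
    --expert-model-parallel-size 1
```

Note that the product of the parallelism degrees (times the data-parallel degree) must match the total number of GPUs in the job.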

DevOps
15.7K Stars
1.4K Forks
Updated Jan 12, 2026, 05:31 AM

Why Use This

This skill provides specialized capabilities for training large language models with NVIDIA Megatron-Core.

Use Cases

  • Pretraining or fine-tuning models in the 2B-462B parameter range
  • Configuring tensor, pipeline, sequence, context, and expert parallelism across multi-GPU clusters
  • Maximizing hardware efficiency (MFU) during large-scale training

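The MFU figure quoted above (47% on H100) is model FLOPs utilization: achieved model FLOPs per second divided by the hardware's peak. A minimal sketch using the common ~6-FLOPs-per-parameter-per-token approximation; the function name and the example numbers are illustrative, not taken from the skill:

```python
def mfu(num_params: float, tokens_per_sec: float, peak_flops_per_sec: float) -> float:
    """Model FLOPs utilization: achieved model FLOPs / hardware peak.

    Uses the common approximation of ~6 FLOPs per parameter per token
    (forward + backward pass), ignoring attention-specific terms.
    """
    achieved_flops_per_sec = 6.0 * num_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops_per_sec

# Illustrative numbers: a 70B-parameter model processing 1.1e3 tokens/s
# per GPU on an H100 with ~989 TFLOP/s peak BF16 throughput gives an
# MFU of roughly 47%, in line with the figure quoted above.
print(f"MFU: {mfu(70e9, 1.1e3, 989e12):.2%}")
```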
Skill Snapshot

Auto scan of skill assets; informational only.

Valid SKILL.md — validated against the SKILL.md specification

Source & Community

Skill Version: main

Skill Stats

  • SKILL.md: 367 lines
  • Total files: 1
  • Total size: 0 B
  • License: MIT