HydPy is hosting an NVIDIA Nemotron™ 3 Super Workshop – India

This interactive workshop dives deep into Nemotron 3 Super, NVIDIA’s newly released open hybrid Mamba-Transformer Mixture-of-Experts (MoE) model with 120B total parameters (12B active). Designed to solve complex, dense technical problems autonomously, Nemotron 3 Super handles long-context analysis, precise reasoning, and coding while remaining computationally efficient. Throughout the workshop, participants will explore the model’s architectural innovations, including Latent MoE, Multi-Token Prediction (MTP), and its hybrid Mamba-Transformer backbone, and learn how to customize, optimize, and deploy it using NVIDIA’s fully open weights, datasets, and recipes.
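For orientation before the session, the snippet below is a minimal sketch of one common way to pull open weights from the Hugging Face Hub and run a quick generation with the transformers library. The repository name, precision, and device settings are illustrative assumptions, not the workshop's official setup.

```python
# Minimal sketch: loading an open-weights model and running one generation.
# NOTE: the repository ID below is hypothetical -- replace it with the actual
# Nemotron 3 Super checkpoint name once published.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Nemotron-3-Super"  # hypothetical identifier, for illustration only

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",      # use the checkpoint's native precision
    device_map="auto",       # spread layers across available GPUs (requires accelerate)
    trust_remote_code=True,  # hybrid Mamba-Transformer blocks may ship custom modeling code
)

prompt = "Explain the trade-offs of a Mixture-of-Experts layer in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```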
