Course curriculum

Introduction

    1. Course Introduction

    2. NeurIPS LLM Efficiency Challenge

    3. NeurIPS LLM Efficiency Challenge Q&A

    4. Hands On LLM Fine-tuning

    5. Start Your Experiments!

LLM Evaluation

    1. Introduction to LLM Evaluation

    2. Demystifying Perplexity

    3. HumanEval and LLM Performance Analysis

    4. LLM Benchmarks

    5. Deep Dive into HELM

    6. Chatbot Arena

    7. Use Case Specific Benchmarks

    8. Evaluating LLM Apps

    9. Conclusions

    10. LLM Evaluation Q&A

Data for Training LLMs

    1. Introduction to Data for Training LLMs

    2. Find Out More about MosaicML

    3. Friendly Advice

    4. Evaluation

    5. How Much Data?

    6. Data Sources & Cost

    7. Q&A

    8. Which Data?

    9. Q&A

    10. Logistics of Data Loading

    11. Conclusions

Training & Fine-tuning Techniques

    1. Introduction to Training & Fine-tuning Techniques

    2. Hardware Requirements

    3. Q&A

    4. Memory Usage

    5. What Should You Train?

    6. Q&A

    7. Training Observability

    8. Fine-tuning

    9. Q&A

Conclusion

    1. Course Assessment

    2. Resources for Further Learning

About this course

  • Free
  • 37 lessons
  • 4 hours of video content

Your Goals

Sign up for this free Weights & Biases course to:

  • Learn the fundamentals of large language models

    Find out about the types of LLMs, model architectures, parameter sizes and scaling laws.

  • Curate a dataset and establish an evaluation approach

    Learn how to find or curate a dataset for LLM training. Dive into the evaluation metrics for various LLM tasks and compare model performance across a range of benchmarks (a perplexity sketch follows this list).

  • Master training and fine-tuning techniques

    Learn hands-on advanced training strategies like LoRA, prefix tuning, prompt tuning, and Reinforcement Learning from Human Feedback (RLHF). A minimal LoRA sketch also follows this list.
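
To give a flavor of the evaluation topics, here is a minimal sketch of computing perplexity with the Hugging Face transformers library. GPT-2 and the sample sentence are illustrative stand-ins, not the course's exact setup.

```python
# Minimal perplexity sketch (assumes the Hugging Face transformers library;
# GPT-2 is only a small illustrative stand-in model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Large language models are trained on vast amounts of text."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean
    # cross-entropy loss over the predicted next tokens.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

# Perplexity is the exponential of the mean cross-entropy loss.
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```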
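
And as a taste of the fine-tuning techniques covered, here is a minimal LoRA configuration sketch using the Hugging Face peft library. The base model and hyperparameters below are illustrative assumptions, not the course's recipe.

```python
# Minimal LoRA sketch using the Hugging Face peft library.
# Base model and hyperparameters are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the updates
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
# Only the small adapter matrices train; the base weights stay frozen,
# which is why LoRA fits on far more modest hardware.
model.print_trainable_parameters()
```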

Guest Instructors

Jonathan Frankle

Chief Scientist at MosaicML

Jonathan Frankle is Chief Scientist at MosaicML, where he leads the company's research team toward the goal of developing more efficient algorithms for training neural networks. During his PhD at MIT, he empirically studied deep learning with Prof. Michael Carbin, specifically the properties of sparse networks that allow them to train effectively (his "Lottery Ticket Hypothesis" - ICLR 2019 Best Paper). In addition to his technical work, he is actively involved in policymaking around challenges related to machine learning. He will be joining the computer science faculty at Harvard in the fall of 2023. He earned his BSE and MSE in computer science at Princeton and has previously spent time at Google Brain, Facebook AI Research, and Microsoft as an intern, and at Georgetown Law as an Adjunct Professor of Law.

Weiwei Yang

Principal SDE Manager at Microsoft Research

Weiwei Yang is a Principal Software Development Engineering Manager leading an applied machine learning team at Microsoft Research (MSR). Her research interests lie in resource-efficient learning methods inspired by biological learning. Weiwei aims to democratize AI by addressing sustainability, robustness, scalability, and efficiency in ML. She has successfully applied her research to organizational science, countering human trafficking, and stabilizing energy grids. Before joining Microsoft Research, Weiwei worked extensively in Bay Area startups and managed several engineering teams.

Mark Saroufim

PyTorch Engineer at Meta

Mark Saroufim is an engineer on the PyTorch Applied AI team, where he maintains and contributes to a wide variety of PyTorch projects. His interests lie in broadly improving the performance and usability of real-world ML systems. Mark will be setting up the evaluation pipeline and answering technical questions from the community, as well as reproducing winning models.

Weights & Biases Instructors

Darek Kłeczek

Machine Learning Engineer

Darek Kłeczek is a Machine Learning Engineer at Weights & Biases, where he leads the W&B education program. Previously, he applied machine learning across supply chain, manufacturing, legal, and commercial use cases. He also worked on operationalizing machine learning at P&G. Darek contributed the first Polish versions of BERT and GPT language models and is a Kaggle Competition Grandmaster.

Ayush Thakur

Machine Learning Engineer

Ayush Thakur is a Machine Learning Engineer at Weights & Biases and a Google Developer Expert in Machine Learning (TensorFlow). He is interested in everything computer vision and representation learning. For the past seven months he has been working with LLMs, covering RLHF and the how and what of building LLM-based systems.

Learn to harness the power of LLMs with our comprehensive course. Discover the importance and history of LLMs, and explore their architecture, training techniques, and fine-tuning methods. Gain hands-on experience with practical recipes from Jonathan Frankle (MosaicML) and other industry leaders, and learn cutting-edge techniques like LoRA and prefix tuning. Perfect for machine learning engineers, data scientists, researchers, and NLP enthusiasts. Stay ahead of the curve and become an expert in LLMs.

Course Reviews

Hear from the certification takers

5-star rating

Amazing Free Course

Geonmo Gu

I learned a lot of invaluable insights from this course. In particular, Jonathan Frankle from MosaicML gives excellent tips about training LLMs. Two things that I agree with the most are: start small, and don't start training if you don't have an evaluation dataset. Thanks a lot for offering this amazing course!

4-star rating

More of it!

Chinonso Odiaka

What a great course. For anybody relatively new to deep learning and NLP, I promise you that you won't be overwhelmed. Great content.

5-star rating

Heartfelt Thanks for Outstanding LLM Tuning Training

Andy Andurkar

I am grateful for the insightful LLM tuning training you provided. Your expertise and teaching have enhanced my understanding. Thank you for your generosity and dedication to sharing knowledge.

5-star rating

It was really great

Somesh Fengade

Thanks for this comprehensive course.

Prerequisites

  • Working knowledge of machine learning

  • Intermediate Python experience

  • Familiarity with DL frameworks (PyTorch/TensorFlow)

Be first in line to unlock your LLM potential and earn your certificate.