Deep Learning with MLflow
4 Likes
153 Views
Jun 18, 2024
In this video, we explore the integration of MLflow into the fine-tuning process of pre-trained language models. Fine-tuning pre-trained Large Language Models (LLMs) on private datasets is an excellent customization option for increasing a model's relevance to a specific task. For example, a base pre-trained transformer model without specialized fine-tuning will struggle with certain tasks (e.g., understanding complex legal language and performing legal analysis). In such scenarios, fine-tuning a model while using tracking tools like MLflow helps ensure that every aspect of the training process (metrics, parameters, and artifacts) is reproducibly tracked and logged, allowing tuning iterations to be analyzed, compared, and shared. We discuss MLflow 2.12 and the recently introduced MLflow Deep Learning features for tracking all the important aspects of fine-tuning a large language model for text classification, including automated logging of training checkpoints to simplify resuming training. Blog Post: https://mlflow.org/blog/deep-learning...


VectorLab
