Friday, December 6, 2024

What are SFT, PPO, and DPO?

In the context of reinforcement learning (RL) and fine-tuning, SFT (Supervised Fine-Tuning), PPO (Proximal Policy Optimization), and DPO (Direct Preference Optimization) are approaches used for training large language models (LLMs). Here's what each one does and how they relate:


1. SFT (Supervised Fine-Tuning)

  • What it is: SFT is the process of fine-tuning a pre-trained language model on labeled datasets. The model is trained to predict the correct output (e.g., a response or classification) given an input.
  • Role: This step establishes a baseline model that learns task-specific patterns from curated datasets.
  • Relation to RL: SFT is typically used as a precursor to preference-based alignment methods like PPO or DPO. While SFT relies on explicit supervision, those methods rely on feedback or preference signals. A minimal SFT training step is sketched below.
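
As a rough illustration, here is a minimal SFT-style training step in PyTorch with Hugging Face transformers. The model name, the prompt/response pair, and the learning rate are placeholders, not a recommended recipe; real SFT pipelines use curated datasets, batching, and usually mask the loss on prompt tokens.

```python
# Minimal SFT sketch: next-token prediction on a (prompt, response) pair.
# Model name, data, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt, response = "Translate to French: Hello", " Bonjour"
batch = tokenizer(prompt + response, return_tensors="pt")

# Labels equal the input ids: the model is trained to predict each next token.
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```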

2. PPO (Proximal Policy Optimization)

  • What it is: PPO is an RL algorithm that optimizes a policy using rewards. In the context of LLMs, it is used in RLHF (Reinforcement Learning from Human Feedback) to align models with human preferences.
  • How it works:
    • A reward model (often trained on human preference data) provides feedback on the quality of model outputs.
    • PPO adjusts the model to maximize these rewards while maintaining the stability of updates (ensuring the policy does not diverge too far from the original SFT model).
  • Relation to SFT: PPO fine-tunes the SFT model further by incorporating reward signals to improve alignment with human preferences (see the sketch of the PPO objective after this list).
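
A conceptual sketch of the PPO objective used in RLHF-style fine-tuning, written in plain PyTorch. The per-token log-probabilities and advantages are assumed to come from a rollout and reward-model stage that is not shown; the clipping range and KL coefficient are illustrative values, not tuned settings.

```python
# Sketch of the clipped PPO objective with a KL penalty toward the SFT
# reference model. Log-probs and advantages are assumed to come from a
# separate rollout + reward-model stage (not shown).
import torch

def ppo_loss(logprobs_new, logprobs_old, logprobs_ref, advantages,
             clip_eps=0.2, kl_coef=0.1):
    # Probability ratio between the current policy and the policy
    # that generated the rollouts.
    ratio = torch.exp(logprobs_new - logprobs_old)

    # Clipped surrogate objective: take the more pessimistic of the
    # unclipped and clipped terms, then negate to turn it into a loss.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()

    # Simple per-token KL estimate that discourages drifting too far
    # from the frozen SFT reference model.
    kl_penalty = (logprobs_new - logprobs_ref).mean()

    return policy_loss + kl_coef * kl_penalty
```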

3. DPO (Direct Preference Optimization)

  • What it is: DPO is a method designed to align models directly with preference data without requiring a reward model or RL algorithms like PPO. It uses preference pairs (e.g., output A is preferred over B) to optimize the model.
  • How it works:
    • Instead of learning a separate reward function, DPO directly optimizes the model to generate preferred outputs based on preference labels.
    • It avoids the complexities of RL (e.g., policy constraints in PPO).
  • Relation to PPO:
    • Both aim to align models with human preferences.
    • DPO is simpler and more efficient but may not achieve the same level of performance in complex scenarios.
  • Relation to SFT: Like PPO, DPO starts from an SFT model and fine-tunes it further using preference data (the DPO loss is sketched below).
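
The DPO loss itself is compact enough to write out. A minimal sketch, assuming you already have the summed log-probabilities of the chosen and rejected responses under both the policy being trained and the frozen SFT reference model; beta is the usual temperature-like hyperparameter.

```python
# Sketch of the DPO loss on a batch of preference pairs.
# Inputs are summed log-probs of each full response under the policy
# being trained and under the frozen SFT reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit "rewards": log-ratio of policy to reference for each response.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Maximize the margin between chosen and rejected via a logistic loss.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Note that no reward model or sampling loop appears here: the reference log-probs stand in for the reward model, which is what makes DPO simpler to run than PPO.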

Summary of Relationships

  • SFT: Foundation, establishes a baseline model for downstream fine-tuning.
  • PPO: Uses RL to improve alignment by optimizing the policy against a reward model trained on human preferences.
  • DPO: Simplifies preference optimization, directly aligning the model with preference data without the need for a reward model or traditional RL.

Together, these methods form a pipeline where SFT provides a task-specific base, and PPO or DPO refine it for alignment and preference optimization.
