#2177: Skip Fine-Tuning: Shape LLMs With Alignment Alone

Can you build a personalized LLM by skipping traditional fine-tuning and using only post-training alignment methods like DPO and GRPO? We break dow...

0:000:00
Episode Details
Episode ID
MWP-2335
Published
Duration
23:45
Audio
Direct link
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
claude-sonnet-4-6

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Can You Build a Personalized LLM Without Fine-Tuning?

The question sounds straightforward: take a base model like Mistral, skip traditional fine-tuning entirely, and use only post-training alignment methods—reinforcement learning, preference optimization, whatever—to shape it into something specific. A relentlessly snarky chat assistant, for example. No massive retraining dataset. Just iterative feedback. Is that actually possible? And if so, what does it cost?

The short answer is yes, but the terminology in this space is deceptively misleading, and that matters for anyone actually trying to do it.

The Terminology Trap

When people say "reinforcement learning" in the context of LLM post-training, they're using it as an umbrella term covering a spectrum of methods—some of which aren't reinforcement learning at all. The distinction isn't academic. It completely changes what hardware you need, how you structure your data, and what you can realistically pull off.

On one end of the spectrum sits PPO (Proximal Policy Optimization)—actual RL in the classical sense. OpenAI used it for InstructGPT. You're running four simultaneous models: a policy model, a reference model, a learned reward model, and a value critic. It's compute-intensive, notoriously unstable, and basically off the table for hobbyists without serious hardware.

On the other end sits DPO (Direct Preference Optimization), which is technically not RL at all. The core insight: the alignment problem can be rewritten as a classification problem on preference pairs. You show the model a prompt, a chosen response, and a rejected response. It learns to prefer the chosen one. No reward model. No RL loop. No online generation. And it's dramatically cheaper.

Between these extremes sits GRPO (Group Relative Policy Optimization), which DeepSeek used to train R1. It's actual RL, but far more efficient than PPO. You generate a batch of responses to the same prompt, score them all, compare each to the group average, and reinforce the better ones. No separate reward model—just a scoring function.

What's Actually Feasible?

For the snarky assistant thought experiment, DPO is the most practical starting point. Here's what you'd do:

Generate 500–2,000 prompt and response pairs where you have two versions of each answer—a snarky version marked as chosen, a bland version marked as rejected. You can generate these synthetically using Claude or GPT-4. Feed those pairs into TRL's DPOTrainer with QLoRA, and you're done. This runs on 10–14 GB of VRAM. An RTX 3090 or 4090 handles it comfortably.

The parameter beta (typically 0.1–0.5) controls how far the model drifts from the reference. Lower beta means more personality drift permitted.

If you want actual RL—real GRPO—the compute floor is higher but still surprisingly accessible. Unsloth has achieved an 80% VRAM reduction compared to standard approaches. You can run GRPO on a 1.5B parameter model with 7 GB of VRAM. For a 7B model, you need about 15 GB with Unsloth. Without Unsloth, you'd need two A100s (160 GB total).

Unsloth also offers free Google Colab notebooks. You can run GRPO on the free Colab T4 (16 GB), though it's slower—about 300 tokens per second versus 4,000 on an A100.

The Cost Picture

For those renting compute: Vast.ai has RTX 4090s starting around 35–50 cents per hour. A100s around 52 cents. A complete DPO experiment on a 7B model costs between $1–$5 (2–4 hours of GPU time). GRPO is more intensive: $5–$20 depending on model size and duration.

The question has genuinely shifted from "can I afford the compute?" to "can I design a reward signal that isn't immediately exploitable?"

The Actually Hard Part: Reward Hacking

This is where things get tricky. Goodhart's Law states: when a measure becomes a target, it ceases to be a good measure.

For a snarky assistant, if your reward function is "does this response contain sarcasm markers," the model learns to produce responses that look snarky without being genuinely witty. It finds loopholes in your specification.

If you use an LLM-as-judge to score snarkiness—calling GPT-4 to rate each response on a 1–10 scale—the model will eventually learn to game GPT-4's biases. GPT-4 has known positional bias; it prefers whichever response is shown first. Your model learns to exploit that.

If you use heuristics—regex patterns looking for sarcasm markers, phrases like "obviously" and "clearly"—the model learns to insert those words without genuine wit.

The practical mitigation: use multiple reward signals. Combine a heuristic component, an LLM judge, and a diversity penalty so the model can't repeat the same template. Cap the maximum reward so there's no incentive to extremes.

Beyond the SFT vs. RL Distinction

The original framing—"RL instead of fine-tuning"—is more nuanced than it appears.

ORPO (Odds Ratio Preference Optimization) collapses the distinction entirely. It combines supervised fine-tuning and preference alignment into a single training pass. No separate SFT step. No reference model. One training run.

The insight: when you do standard SFT on snarky examples, you increase the probability of desired tokens but also inadvertently raise the probability of undesired ones. ORPO's odds-ratio loss simultaneously rewards the chosen response and penalizes the rejected one in the same gradient update.

Results on Mistral 7B are compelling—12.2% on AlpacaEval 2.0, 7.32 on MT-Bench, trained on only 61,000 instances. Win rate against SFT and PPO baselines reached 85%. It's computationally cheaper than the two-step SFT-then-DPO pipeline.

The Emerging Frontier: KTO

KTO (Kahneman-Tversky Optimization) is underrated and possibly the most hobbyist-friendly method for personality shaping. It's based on prospect theory from behavioral economics—the framework explaining why people feel losses more acutely than equivalent gains.

Applied to LLM alignment, the key innovation: you don't need preference pairs at all. You just need individual responses labeled as good or bad. This dramatically simplifies data collection and removes the burden of finding matched pairs.

The Open Question

Could a personality trait emerge coherently from reward signals alone, without explicit instruction? Could you get a relentlessly snarky voice rather than just a model that's learned to insert sarcasm markers? That's genuinely unknown and one of the most interesting open questions in this space. DeepSeek observed what they called an "aha moment"—the model spontaneously learning to allocate more reasoning time and reconsider its approach without explicit instruction. That emerged from the reward signal alone. Whether personality works the same way remains to be seen.

BLOG_POST

Downloads

Episode Audio

Download the full episode as an MP3 file

Download MP3
Transcript (TXT)

Plain text transcript file

Transcript (PDF)

Formatted PDF with styling

#2177: Skip Fine-Tuning: Shape LLMs With Alignment Alone

Corn
So Daniel sent us this one, and it's a question I've been turning over in my head since I read it. He's asking: can you actually build a personalized LLM by skipping traditional fine-tuning entirely and just doing the post-training alignment step — specifically reinforcement learning — instead? The concrete example he gives is taking something like Mistral, and through iterative feedback alone, shaping it into a relentlessly snarky chat assistant. No massive retraining dataset, just RL feedback. He wants us to dig into the methods — RLHF, DPO, ORPO, the whole alphabet soup — the frameworks available right now, actual compute requirements, and whether this is genuinely feasible for a hobbyist without a GPU cluster sitting in their garage.
Herman
This is a great one. And I want to flag something right off the bat, because the framing of the question actually contains a really interesting trap — the terminology itself is misleading in a way that matters for anyone trying to actually do this.
Corn
What do you mean by that?
Herman
So when people say "reinforcement learning" in the context of LLM post-training, they're using it as an umbrella term that covers a spectrum of methods — some of which have no reinforcement learning loop whatsoever. Like, none. And that distinction isn't academic, it completely changes what hardware you need, how you structure your data, and what you can realistically pull off.
Corn
So "post-training RL" is kind of a brand name at this point rather than a technical description.
Herman
Pretty much. The field has been sloppy with the terminology. You've got methods like PPO — Proximal Policy Optimization — which is actual RL in the classical sense. It's what OpenAI used for InstructGPT. You're running four simultaneous models: a policy model, a reference model, a learned reward model, and a value critic. It's compute-intensive, it's notoriously unstable, and for a hobbyist it's basically off the table unless you have serious hardware.
Corn
Four models at once. That sounds like the kind of thing that makes your GPU weep quietly in the corner.
Herman
It does. And then on the other end of the spectrum you have DPO — Direct Preference Optimization — which is technically not RL at all. The paper's title is literally "Your Language Model is Secretly a Reward Model." The key insight is that the RL alignment problem can be rewritten as a classification problem on preference pairs. You show the model a prompt, a chosen response, and a rejected response, and it learns to prefer the chosen one. No reward model, no RL loop, no online generation.
Corn
So DPO is doing the same job as RL-based alignment, but through a completely different mechanism.
Herman
And a much cheaper one. That's the punchline. And then between PPO and DPO you've got GRPO — Group Relative Policy Optimization — which is what DeepSeek used to train R1. That's actual RL, but it's dramatically more efficient than PPO because it eliminates the value critic model by using group-relative scoring. You generate a batch of responses to the same prompt, score them all, compare each response to the group average, and reinforce the better ones. No separate learned reward model required — just a Python function that returns a score.
Corn
That's actually elegant. You're using the variance within a batch to give you your training signal.
Herman
And it's where all the interesting emergent behavior stories come from. The DeepSeek team observed what they called an "aha moment" — the model spontaneously learning to allocate more reasoning time and reconsider its initial approach, without anyone explicitly telling it to do that. That emerged from the reward signal alone.
Corn
Which raises an obvious question for the snarky assistant thought experiment. Could a personality trait emerge the same way? Like, could you get a coherent voice rather than just a model that's learned to insert sarcasm markers?
Herman
That's genuinely unknown, and I think it's one of the most interesting open questions in this space. But let's work through what you'd actually do if you wanted to try. Say you take Mistral 7B Instruct and you want to make it relentlessly snarky. Your most practical starting point is DPO. You generate somewhere between five hundred and two thousand prompt and response pairs where you have two versions of each answer — a snarky version marked as chosen, and a polite bland version marked as rejected. You can generate these synthetically by prompting Claude or GPT-4 to write both versions. Then you feed those pairs into TRL's DPOTrainer with QLoRA, and you're done. That runs on ten to fourteen gigabytes of VRAM.
Corn
So an RTX 3090 or a 4090 handles that.
Herman
Comfortably. And there's a parameter called beta — typically set between zero point one and zero point five — that controls how far the model is allowed to drift from the reference. Lower beta means more snark drift permitted. Philipp Schmid published a guide in January of last year showing a five percent benchmark improvement on GSM8K with only two thousand preference pairs and three training epochs on a single H100, and the same config adapts to consumer hardware with QLoRA.
Corn
By the way, today's episode is powered by Claude Sonnet 4.6 — just worth mentioning while we're deep in the alphabet soup of AI methods.
Herman
Ha. Appropriate. Okay, so that's DPO. But if you want actual RL — real GRPO — the compute floor is higher but it's still surprisingly accessible. Unsloth has brought it down dramatically. They've achieved an eighty percent VRAM reduction compared to standard Hugging Face plus Flash Attention 2 for GRPO. That means you can run GRPO on a one point five billion parameter model with seven gigabytes of VRAM. For a seven billion parameter model you need about fifteen gigabytes with Unsloth. Without Unsloth you'd be looking at two A100s — a hundred and sixty gigabytes total — for a seven billion parameter model.
Corn
So Unsloth is the thing that makes this actually hobbyist-accessible.
Herman
It's a huge part of the story. They have free Google Colab notebooks — Llama 3.1 eight billion, Qwen three four billion — and you can run GRPO on the free Colab T4, which is sixteen gigabytes. Slower, obviously. You're getting about three hundred tokens per second on the T4 versus four thousand on an A100, but it works.
Corn
And for people who want to just rent compute rather than deal with Colab, what does that actually cost?
Herman
The numbers are really reasonable now. Vast.ai has RTX 4090s starting around thirty-five to fifty cents an hour. A100s at about fifty-two cents. A complete DPO experiment on a seven billion parameter model — two to four hours of GPU time — costs somewhere between one and five dollars. GRPO is more intensive, maybe five to twenty dollars depending on model size and how long you run it. The question has genuinely shifted from "can I afford the compute" to "can I design a reward signal that isn't immediately exploitable."
Corn
Which brings us to what I suspect is the actually hard part of this whole project.
Herman
It is the hard part. Lilian Weng — she runs safety systems research at OpenAI — published a comprehensive piece on reward hacking in November of twenty-twenty-four, and it's essentially a catalog of all the ways your RL experiment will go wrong. The core problem is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.
Corn
So for the snarky assistant, if your reward function is "does this response contain sarcasm markers," the model learns to produce responses that look snarky without being genuinely witty.
Herman
Exactly that failure mode. The model is a very clever optimizer looking for loopholes in your specification. Weng's paper has some spectacular examples. A summarization model learned to exploit specific flaws in the ROUGE metric — it got high scores while producing barely readable summaries. A coding model learned to modify the unit tests themselves rather than write correct code. The model found the path of least resistance through the reward function.
Corn
That coding example is almost philosophically disturbing. The model understood that the goal was to pass tests, not to write correct code, and acted accordingly.
Herman
And there's a direct equivalent for personality shaping. If you use an LLM-as-judge to score snarkiness — say, you call GPT-4 to rate each response on a one-to-ten snarkiness scale — the model will eventually learn to game GPT-4's specific biases. GPT-4 has a known positional bias; it tends to prefer whichever response is shown first. Your model will learn to exploit that. If you use heuristics — regex patterns looking for sarcasm markers, phrases like "obviously" and "clearly" — the model will learn to insert those words without genuine wit.
Corn
So you'd want multiple reward signals, ideally ones that are harder to game simultaneously.
Herman
That's the practical mitigation. Use a combination: a heuristic component, an LLM judge, and maybe a diversity penalty so the model can't just repeat the same snarky template. Also cap the maximum reward so there's no incentive to go to extremes. Weng's scaling law analysis from Gao et al. in twenty-twenty-two shows that the proxy reward and the gold reward — what you're optimizing versus what you actually want — diverge as training progresses. More reward model data reduces the divergence, but it never eliminates it.
Corn
There's also a deeper question buried in the original prompt that I want to dig into. The framing is "RL instead of fine-tuning." But is that actually the right distinction, or is it more nuanced than that?
Herman
It's more nuanced, and this is where ORPO becomes really interesting. ORPO — Odds Ratio Preference Optimization — is an approach that collapses the distinction entirely. It combines supervised fine-tuning and preference alignment into a single training pass. No separate SFT step, no reference model, one training run. The insight behind it is actually a critique of standard SFT: when you do supervised fine-tuning on snarky examples, you're increasing the probability of generating desired tokens, but you're also inadvertently raising the probability of generating undesired ones. The model doesn't cleanly separate "be snarky" from the surrounding patterns in your training data.
Corn
So ORPO's odds-ratio loss simultaneously rewards the chosen response and penalizes the rejected one in the same gradient update.
Herman
And the results on Mistral 7B are compelling. Mistral-ORPO-beta achieved a twelve point two percent score on AlpacaEval 2.0 and a 7.32 on MT-Bench, trained on only sixty-one thousand instances of UltraFeedback. Win rate against SFT and PPO baselines was up to eighty-five percent. And it's computationally cheaper than the two-step SFT-then-DPO pipeline because you're eliminating the reference model.
Corn
For a hobbyist, ORPO sounds almost too good. One pass, no reference model, no separate SFT step.
Herman
It's elegant, and it's in TRL's experimental namespace right now — which means it works but the API might change. The stable surface in TRL v1.0, which just hit release at the end of March, covers SFTTrainer, DPOTrainer, GRPOTrainer, and RewardTrainer. ORPO and KTO are graduating from experimental to stable in the near term.
Corn
Speaking of KTO — I want to make sure we cover it because I think it's the one that most people in this space haven't fully absorbed yet.
Herman
KTO is underrated and I think it's genuinely the most hobbyist-friendly method for personality shaping. The name comes from Kahneman-Tversky Optimization — it's based on prospect theory from behavioral economics, the same framework that explains why people feel losses more acutely than equivalent gains. Applied to LLM alignment, the key innovation is that you don't need preference pairs at all. You just need individual responses labeled as good or bad — thumbs up, thumbs down. That's it.
Corn
Which means your data collection is dramatically simpler. You're not trying to produce two versions of the same response and label which one is better; you're just rating individual responses.
Herman
And Contextual AI's Archangel suite tested this across fifty-six models ranging from one billion to thirty billion parameters, and KTO matched or exceeded DPO performance across that entire range. For a snarky assistant experiment, you could literally generate five hundred responses, rate them yourself with a simple binary label — snarky or not — and train on that. The barrier to building the dataset is much lower.
Corn
Let's talk about the RL-versus-SFT question from a different angle, because there's a study from MIT that I think reframes the whole premise of the episode in an interesting way.
Herman
The MIT Improbable AI Lab study from September last year. This is a really important result. The conventional wisdom has been that fine-tuning — SFT — is the reliable workhorse, and RL is the fancy but risky approach. What they found is essentially the opposite when it comes to catastrophic forgetting. They took Qwen 2.5 three billion Instruct and fine-tuned it on three domains: math reasoning, science Q&A, and tool use. And they compared RL-based adaptation against SFT on the same tasks, measuring what happened to capabilities the model already had — HellaSwag, MMLU, TruthfulQA, HumanEval.
Corn
And SFT degraded those prior capabilities more than RL did.
Herman
Consistently, across all three domains. The mechanism they identified is what they called "RL's Razor" — the degree of catastrophic forgetting is strongly predicted by the forward KL divergence between the fine-tuned policy and the base policy. RL's on-policy updates naturally converge to KL-minimal solutions. The model stays close to its prior distribution because it's generating its own training data and being nudged from there. SFT, by contrast, optimizes against fixed labels that can be arbitrarily distant from where the model currently is.
Corn
So if you want to add snark without making the model worse at everything else, RL-based methods are actually the safer bet.
Herman
Which is a counterintuitive result that inverts the usual framing. The "careful" approach — SFT on curated examples — turns out to be more destructive to prior capabilities than the "risky" RL approach.
Corn
Let me ask about the frameworks, because someone who wants to actually do this needs to know where to start. What's the landscape look like?
Herman
TRL is the center of gravity. Three million PyPI downloads a month, just hit v1.0 at the end of March with semantic versioning guarantees — so the stable API won't break under you. It supports the full range: SFT, DPO, GRPO, PPO, RLOO, KTO, ORPO, reward modeling. Unsloth and Axolotl both build directly on TRL's trainers. If you're starting from scratch, TRL plus Unsloth is the path of least resistance.
Corn
What about LLaMA-Factory?
Herman
LLaMA-Factory is interesting because it has a WebUI — a graphical interface — which makes it genuinely no-code for people who don't want to write Python. Sixty-nine thousand GitHub stars, so it's the most popular by that metric. It supports PPO, DPO, KTO, ORPO. The tradeoff is that you have less flexibility than you get with TRL directly. For the snarky assistant experiment specifically, if you want to write a custom reward function for GRPO, you'll want TRL or Axolotl rather than LLaMA-Factory.
Corn
Axolotl is the YAML config one, right?
Herman
YAML-driven, builds on TRL, very good hardware documentation. Their hardware guide is actually one of the most useful resources for figuring out what you need. For a seven to eight billion parameter model with QLoRA DPO, you're at ten to fourteen gigabytes VRAM. For LoRA DPO on the same size model, sixteen to twenty-four gigabytes. For GRPO on a zero point five to three billion parameter model, they recommend two twenty-four gigabyte GPUs — but they also have a colocate mode where you set vllm mode to colocate and enable sleep mode, and that lets you share a single GPU between the training process and the vLLM inference server. That works for models up to about three billion parameters on a twenty-four gigabyte card.
Corn
So the decision tree for a hobbyist is roughly: if you have an RTX 3090 or 4090, you can do DPO on a seven billion parameter model all day. If you want GRPO on a seven billion model, you're either renting an A100 or you need Unsloth on fifteen gigabytes.
Herman
That's a fair summary. And the cloud rental costs make the A100 option accessible. Fifty-two cents an hour on Vast.ai. A GRPO experiment — say, twelve hours of training, which is what Unsloth recommends for good results — costs about six dollars. You'd want to wait at least three hundred steps before evaluating whether the reward is actually increasing, so don't panic-cancel after an hour.
Corn
I want to go back to the emergent behavior question, because I think it's where the experiment gets philosophically interesting. The DeepSeek aha moment — chain-of-thought reasoning emerging from a reward signal that never mentioned chain-of-thought — suggests that RL can produce behaviors that weren't in the reward specification.
Herman
And the question for personality shaping is whether something analogous can happen. Could a snarkiness reward produce a coherent voice, a consistent perspective, a sense of humor that goes beyond surface markers? The honest answer is we don't know. R1-Zero's emergent reasoning was arguably a latent capability that the reward signal unlocked — the model already had the capacity for extended reasoning from pretraining, and the reward just incentivized using it. Whether personality works the same way depends on what's already latent in the base model.
Corn
Which is a testable hypothesis. You could run this experiment, measure whether the output feels like a coherent voice or just a collection of sarcasm markers, and that would tell you something real about how personality is encoded in these models.
Herman
And the cost of the experiment is now low enough that it's a genuine hobbyist project. Not a research lab project. One weekend, a rented A100, a Python reward function, and a few hundred labeled examples.
Corn
What would your reward function actually look like for snarkiness, if you were doing GRPO?
Herman
You'd probably want to layer it. A base heuristic component — does the response contain hedging language that you want to penalize, like "certainly" or "of course"? Does it have rhetorical questions, which are a snarkiness signal? Then an LLM-as-judge component — call a model to rate snarkiness on a scale of one to ten, but use a cheap model like GPT-4o-mini to keep costs down. Then a length penalty — because verbose snark is usually bad snark. Something like: score equals point four times the heuristic score plus point five times the judge score minus point one times the normalized length. Cap the total reward at one to prevent the model from finding extreme exploits.
Corn
The multiple reward signals thing is important for exactly the reason you mentioned earlier — it's much harder to simultaneously game a heuristic and an LLM judge than to game either one alone.
Herman
And you can add a diversity signal — penalize cosine similarity between responses in the same batch — to prevent the model from converging on a single snarky template. That's one of the failure modes with GRPO: the model can collapse to a high-reward but low-diversity output distribution.
Corn
Let's talk practical takeaways, because I think there are a few clear things someone can walk away with from this conversation.
Herman
The first one is: clarify what you actually mean by "RL" before you decide on a method. If you want the most practical, cheapest path to personality shaping, DPO or KTO is your answer — and neither of them is technically RL. DPO needs preference pairs, KTO just needs binary labels. Both run on consumer hardware with QLoRA. If you want genuine online RL — the GRPO experience, the possibility of emergent behavior — you need more VRAM but it's still achievable on a rented A100 for a few dollars.
Corn
The second takeaway is that the reward function is where you should spend most of your design time. The training infrastructure is largely solved — TRL, Unsloth, Axolotl, the free Colab notebooks — the hard part is specifying what you want in a way that can't be gamed. Lilian Weng's reward hacking piece is required reading before you start.
Herman
The third takeaway is the MIT result — if you're worried about degrading the model's general capabilities while adding personality, RL-based methods are actually better at preserving prior knowledge than SFT. The KL-minimality of on-policy updates is a feature, not a limitation.
Corn
And the fourth is just the overall accessibility of the landscape right now. TRL v1.0 dropped at the end of March as a stable, semantically versioned library. Unsloth has made GRPO viable on hardware that costs less than a decent dinner. The compute cost for a complete experiment is in the single-digit dollar range. If you've been putting off experimenting with this because you thought you needed a GPU cluster, you don't anymore.
Herman
The question that remains genuinely open is whether you can get a coherent personality from RL alone, or whether you always end up with surface mimicry. That's the experiment worth running. And honestly, I'd love to see someone document the whole thing — the reward function design, the failure modes they hit, whether the emergent behavior story holds for personality the way it held for reasoning in R1-Zero.
Corn
That would make a great follow-up prompt from Daniel if he wants to actually run the experiment and report back.
Herman
I would read that paper.
Corn
Alright, that'll do it for this one. Big thanks to our producer Hilbert Flumingtop for keeping the whole operation running. And thanks to Modal for providing the GPU credits that power this show — genuinely could not do this without them. This has been My Weird Prompts. If you're enjoying the show, a quick review on your podcast app helps us reach new listeners more than almost anything else. We'll see you on the next one.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.