Daniel sent us this one — and it's a two-parter. He's been listening to us trace the lineage of cognitive behavioral therapy back through psychoanalysis, and what stuck with him is something about temperament. When you encounter a difficult thought or belief, do you argue with it, do you categorize it, or do you just detach and float away? He says the detachment approach resonates most — just let the belief be, accept it as silliness, and move on. That got him wondering about the actual evolution here. We've got CBT, then ACT, Dialectical Behavior Therapy. How did these main forks actually emerge, and what are the subtler deviations and remixes? And then the sidecar question — he keeps thinking the best framework might not be discoverable through a clinical trial because it depends on the person. So is there any way to map a patient to the right therapy using pattern recognition, maybe with AI?
That sidecar question is genuinely interesting, and I want to get to it. But let's start with the forks, because understanding the forks actually sets up why the matching problem is so hard in the first place. And Corn, I have to say, Daniel identifying with the detachment approach — that's basically him discovering he's temperamentally aligned with what became the third wave.
Which makes sense. Daniel's not a debater by nature. He's the guy who sees someone holding a wrong belief and just... lets them keep it. That's a personality trait showing up as a therapy preference.
The field has been wrestling with this for decades. Let me lay out the actual genealogy, because most people think it's just CBT then ACT then DBT, and that's like saying the history of rock music is Elvis then the Beatles then Nirvana. You're missing everything that happened in between.
All right, walking encyclopedia. Give us the genealogy.
The starting point is the nineteen fifties and sixties. Behaviorism is dominant in academic psychology — Skinner, Watson, the idea that we should only study observable behavior, not internal mental states. Meanwhile, psychoanalysis — Freudian and post-Freudian — is dominant in actual clinical practice. These two worlds barely talk to each other. Behavior therapists are treating phobias with systematic desensitization. Analysts are doing years of free association on the couch. There is no middle ground.
This is where Beck and Ellis enter.
Albert Ellis develops Rational Emotive Behavior Therapy in the mid-nineteen fifties. Aaron Beck, a fully trained psychoanalyst at the University of Pennsylvania, starts noticing something in the early nineteen sixties. He's treating depressed patients with classical analysis, and they're not getting better. But he notices their internal monologue — what he called automatic thoughts — has consistent patterns. Negative views of self, world, and future. The cognitive triad. He publishes his first paper on cognitive therapy for depression in nineteen sixty-three, and the psychoanalytic establishment basically ignores him.
Because he was a heretic. He trained in their tradition and then said, actually, the unconscious conflicts aren't the problem — the conscious thought patterns are.
That's the key break. Beck's insight was that thoughts are accessible, measurable, and modifiable. You don't need to excavate childhood trauma for years. You can ask the patient what's going through their mind right now, identify the distortion, and test it like a hypothesis. He called this collaborative empiricism. The therapist and patient become co-investigators. And this is where the arguing-with-thoughts approach comes from — the Socratic questioning, the evidence-weighing, the behavioral experiments. You don't just accept the thought. You put it on trial.
Which works brilliantly for some people and feels like an interrogation for others. Daniel's reaction is basically, I don't want to put my thoughts on trial. I want to wave at them and walk away.
That reaction is exactly what drove the next forty years of development. Let me trace the actual splits. First wave is pure behaviorism — classical conditioning, operant conditioning, exposure therapy. Second wave is the cognitive revolution — Beck, Ellis, the integration of cognitive and behavioral techniques into what we now call CBT. This becomes the dominant paradigm by the nineteen eighties and nineties. It's manualized, time-limited, empirically validated. David Clark develops the Oxford model for anxiety disorders. Barlow does the unified protocol. CBT becomes the gold standard.
The problem with being the gold standard is that everyone starts noticing where it doesn't work.
By the late nineteen eighties and early nineties, you have several lines of critique emerging. One, CBT is great for symptom reduction but doesn't address broader quality of life, values, meaning. Two, the emphasis on changing thought content doesn't work well for everyone, particularly people who are highly fused with their thoughts. Three, there's a growing recognition that the therapeutic relationship matters enormously, and the early CBT manuals were almost comically mechanical about it.
Enter the third wave.
Here's what most people get wrong. Third wave isn't one thing. It's a family of approaches that share certain philosophical commitments but diverge in technique. The common thread is a shift from changing thought content to changing one's relationship to thoughts. Mindfulness, acceptance, values, the observing self. The thought is still there, but you're not wrestling with it.
This is where Daniel's detachment approach really lives.
Let me name the major branches. Acceptance and Commitment Therapy — ACT, pronounced as the word act — developed by Steven Hayes in the nineteen eighties and nineties. Hayes has a fascinating origin story. He was a behavior analyst who developed a severe panic disorder. He tried CBT techniques on himself, and they made it worse. The more he tried to argue with his catastrophic thoughts, the more entangled he became. He had a moment during a faculty meeting in the late nineteen seventies, years before he moved to the University of Nevada, where he realized the problem wasn't the thoughts — it was his relationship to them. He was fusing with them, treating them as literal truths rather than mental events.
That's the cognitive fusion concept.
The solution Hayes developed was psychological flexibility — the ability to contact the present moment and, based on what the situation affords, change or persist in behavior in the service of chosen values. Six core processes: acceptance, cognitive defusion, present moment awareness, self-as-context, values, committed action. The defusion piece is exactly what Daniel described. You don't argue with the thought. You notice it, label it — oh look, there's the I'm a failure story again — and let it be there without buying into it.
This is clinically distinct from CBT in a way that matters. CBT says the thought is distorted and we need to correct it. ACT says the thought may or may not be accurate, and that's not the point. The point is whether it's workable to act on it.
The evidence base for ACT has grown substantially. As of twenty twenty-four, there are over a thousand randomized controlled trials across a huge range of conditions — chronic pain, anxiety, depression, psychosis, workplace stress. The American Psychological Association lists it as empirically supported for chronic pain, with strong research support for depression and anxiety. But here's the nuance that gets lost — ACT doesn't outperform CBT in head-to-head trials for most conditions. The effect sizes are comparable. What ACT seems to do is work through different mechanisms and appeal to different people.
Which is Daniel's whole point about personality. Some people want to debate their thoughts. Some people want to observe them and move on. The clinical trial that averages everyone together misses that distinction entirely.
That's the core tension in psychotherapy research, and we should come back to it. But let me finish the genealogy. The other major third-wave fork is Dialectical Behavior Therapy, DBT, developed by Marsha Linehan in the late nineteen eighties and early nineties. Linehan was working with chronically suicidal women, many diagnosed with borderline personality disorder, and she found that standard CBT was actively harmful for this population. The emphasis on change felt invalidating to patients whose core experience was already one of being told their emotions were wrong.
She flipped the script.
She synthesized acceptance and change into a dialectic. The core dialectic in DBT is, you are doing the best you can, and you need to do better. Both things are true simultaneously. The therapist validates the patient's experience completely while also pushing for change. Skills training is structured around four modules: mindfulness, distress tolerance, emotion regulation, interpersonal effectiveness. The phone coaching piece is unique too — patients can call their therapist between sessions for help applying skills in real time.
DBT has become the gold standard specifically for borderline personality disorder.
It's the most empirically supported treatment for BPD, period. Multiple randomized trials showing reductions in self-harm, suicide attempts, hospitalizations. It's also been adapted for eating disorders, substance use, treatment-resistant depression. The key distinction from ACT is that DBT is more structured, more skills-focused, and more explicitly dialectical. ACT is about psychological flexibility. DBT is about building a life worth living.
Okay, so we've got the three Daniel named: CBT, ACT, DBT. But he specifically asked about the subtler deviations and remixes. What else is out there?
This is where it gets rich. Let me run through the major ones that are distinct, not just rebrandings. Mindfulness-Based Cognitive Therapy, MBCT, developed by Zindel Segal, Mark Williams, and John Teasdale. It's specifically designed to prevent depressive relapse, combining CBT techniques with mindfulness meditation in an eight-week group format. The landmark trials showed it cuts relapse rates in half for people with three or more prior episodes.
It's a hybrid — CBT structure with ACT-like mindfulness practices, targeted at a specific population.
Then you have Cognitive Processing Therapy, CPT, developed by Patricia Resick for PTSD. It focuses on how trauma disrupts beliefs about safety, trust, control, esteem, and intimacy. It's a twelve-session protocol and one of the most effective treatments for PTSD alongside prolonged exposure. What's distinctive is the use of written trauma accounts and Socratic questioning about stuck points.
Schema Therapy is Jeffrey Young's contribution, developed in the nineteen nineties. Young was a student of Beck's who noticed that some patients — particularly those with personality disorders — didn't respond to standard CBT. Their maladaptive patterns were too deep, too early-formed. He developed the concept of early maladaptive schemas — broad, pervasive themes about oneself and relationships that develop in childhood and get elaborated throughout life. Think abandonment, mistrust, emotional deprivation, defectiveness. Schema therapy uses techniques from CBT, attachment theory, Gestalt therapy, and psychodynamic approaches. The therapist does something called limited reparenting — providing, within professional boundaries, what the patient didn't get in childhood.
That's a fascinating blend. It's almost like the field has been doing exactly what Daniel intuited — remixing and combining approaches to fit different populations and different types of problems.
The remixes keep coming. There's Functional Analytic Psychotherapy, which focuses on the therapeutic relationship itself as the mechanism of change. There's Integrative Behavioral Couple Therapy, applying acceptance-based strategies to couples work. There's Compassion-Focused Therapy, CFT, developed by Paul Gilbert, which specifically targets shame and self-criticism by cultivating self-compassion. There's Metacognitive Therapy, MCT, developed by Adrian Wells, which focuses not on the content of thoughts but on beliefs about thinking — meta-beliefs like worrying helps me prepare or I have no control over my rumination.
That last one is interesting because it's almost a layer above CBT. CBT targets the thoughts. MCT targets your beliefs about your thoughts.
The early trials on MCT are striking. Wells has published data suggesting MCT might be more effective than standard CBT for generalized anxiety disorder, with recovery rates in some studies above seventy percent. But the evidence base is smaller, and there's legitimate debate about whether these are distinct mechanisms or just good CBT with a different emphasis.
Which raises a question that's been nagging at me through this whole genealogy. How much of this is different mechanisms, and how much is just the Dodo Bird Verdict playing out?
You're going to make me explain the Dodo Bird Verdict.
The Dodo Bird Verdict comes from Alice in Wonderland — everybody has won and all must have prizes. In psychotherapy research, it refers to the finding — first articulated by Saul Rosenzweig in nineteen thirty-six and then extensively studied by Lester Luborsky and others in the nineteen seventies — that when you compare bona fide psychotherapies head to head, they tend to produce roughly equivalent outcomes. The effect sizes are similar. The differences that do show up are often small and may be attributable to researcher allegiance.
If CBT, ACT, DBT, and psychodynamic therapy all produce roughly similar results, why does the genealogy matter?
This is where the nuance is crucial. The Dodo Bird Verdict is true at the aggregate level for many comparisons. But it's not the whole story. There are specific conditions where specific treatments clearly outperform. DBT for borderline personality disorder. Exposure-based treatments for specific phobias and PTSD. CBT for panic disorder. For some conditions, the differences are real and clinically meaningful. For others — particularly mild to moderate depression — the common factors probably dominate.
Common factors being the therapeutic alliance, the therapist's empathy, the patient's expectations, the structured framework.
Bruce Wampold's work on the contextual model argues that the specific ingredients of different therapies account for a relatively small portion of outcome variance. The relationship, the rationale, and the ritual are doing most of the heavy lifting. But this doesn't mean technique doesn't matter. It means technique matters partly because it gives the therapist and patient a coherent story about what they're doing and why.
Which brings us directly to Daniel's sidecar question. If the best framework depends on the person, and if the common factors are doing a lot of the work, can we use AI to match patients to the therapy and therapist that will work best for them specifically?
This is an active area of research, and I want to give you the state of play as it actually exists, not the hype. The short answer is: early promising work exists, but we are nowhere near a clinical deployment that reliably outperforms a good clinician's judgment.
What does the early promising work look like?
There are a few different approaches. One is using machine learning on large clinical datasets to identify which patient characteristics predict differential response to different treatments. The most cited work comes from Robert DeRubeis and colleagues at the University of Pennsylvania. They've used machine learning on data from randomized trials comparing CBT to antidepressant medication for depression. Their models can identify, with some accuracy, which patients are more likely to respond to CBT versus medication based on baseline characteristics — symptom profiles, demographics, personality variables, and neurocognitive markers.
What kind of accuracy are we talking about?
In their twenty twenty-two and twenty twenty-three papers, they've shown that their models can identify subgroups where the advantage for one treatment over another is substantial — effect size differences of zero point three to zero point five. That's clinically meaningful. For some patients, the predicted probability of response to CBT might be thirty percentage points higher than to medication, or vice versa. But it's important to be clear about what this is and isn't. This is prediction based on group-level patterns. It's not a blood test. It's not a brain scan. It's probabilistic.
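For listeners who want the mechanics, the DeRubeis-style differential prediction can be sketched in a few lines. Everything below is synthetic and illustrative: ordinary least squares stands in for their actual models, and the two baseline features are invented.

```python
import numpy as np

# Toy sketch of a "personalized advantage index" style analysis:
# fit one outcome model per treatment arm, then score every patient
# on the predicted difference between arms. All data is synthetic.
rng = np.random.default_rng(0)

n = 400
X = rng.normal(size=(n, 2))            # two baseline patient characteristics
arm = rng.integers(0, 2, size=n)       # randomized: 0 = CBT, 1 = medication

# Invented data-generating process: feature 0 predicts improvement under
# CBT, feature 1 predicts improvement under medication.
y = (1.0 + 0.8 * X[:, 0] * (arm == 0)
         + 0.8 * X[:, 1] * (arm == 1)
         + rng.normal(scale=0.5, size=n))

def fit_ols(X, y):
    # ordinary least squares with an intercept term
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# One outcome model per arm, fit only on the patients randomized to it
coef_cbt = fit_ols(X[arm == 0], y[arm == 0])
coef_med = fit_ols(X[arm == 1], y[arm == 1])

# Predicted advantage of CBT over medication, per patient
advantage = predict(coef_cbt, X) - predict(coef_med, X)
recommend_cbt = advantage > 0
```

The point of the sketch is the shape of the analysis, not the model class: prediction happens at the individual level, but it is learned entirely from group-level patterns in randomized data.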
It's limited by the data it's trained on. If your training data comes from trials that compared CBT to medication, the model can only tell you about CBT versus medication. It can't tell you about ACT versus DBT versus Schema Therapy.
The dataset problem is enormous. To build a model that matches patients to the full range of therapies, you would need randomized trials that include all those therapies as arms, with large diverse samples, with consistent measurement. That dataset doesn't exist. What we have are pairwise comparisons — this therapy versus that therapy for this condition — and the samples are often small and homogeneous.
What's actually being done in practice? Are there tools clinicians can use?
There are some early-stage tools. One line of work uses natural language processing on intake interviews to identify linguistic markers that predict treatment response. Researchers at the University of Pennsylvania and the University of Texas have shown that language features — pronoun use, emotional tone, cognitive processing words — can predict who responds to which therapy. There's also work on using ecological momentary assessment data — smartphone-based symptom tracking — to build personalized predictions about when a patient is likely to deteriorate.
None of this is, here's your therapy match based on your personality profile and symptom pattern.
Nothing that's validated and widely deployed. The closest thing might be the work on treatment selection for depression. There are decision support tools — like the Texas Medication Algorithm Project — that guide clinicians through evidence-based sequences. But those are algorithmic in the traditional sense, not machine learning. Step one, try this. If no response, step two, try that. They're based on population averages, not personalized prediction.
Where could this go? If we're projecting forward, what would a useful AI matching system look like?
I think there are a few key ingredients. First, you need much richer baseline data than we currently collect. Not just a symptom checklist and a diagnostic interview, but personality assessment, cognitive style measures, values assessments, interpersonal history, treatment preference surveys. Second, you need outcomes data that's granular and longitudinal — not just did the patient get better at week twelve, but week-by-week symptom trajectories, dropout, therapeutic alliance ratings, patient satisfaction.
Third, you need enough people going through enough different therapies that the patterns become detectable.
That's the real bottleneck. In an ideal world, you'd have a large health system where patients are randomized to different evidence-based treatments, and the system learns over time which patient profiles respond best to which treatments. That's basically a learning health system applied to psychotherapy. Some integrated health systems — Kaiser Permanente in the US, the NHS Talking Therapies program in the UK — are starting to collect data at a scale that could support this kind of work. The UK's program, formerly called IAPT, has treated millions of patients and collects standardized outcome measures at every session. That dataset is a goldmine.
There's a tension here. The whole point of personalized matching is that the best treatment for an individual might not be the one that works best on average. But to discover those individual-level patterns, you need massive amounts of data. And that data comes from systems that tend to standardize treatment, not personalize it.
That's exactly the paradox. The NHS Talking Therapies program is built around a stepped-care model where most patients get low-intensity CBT first and only step up to high-intensity therapy if they don't respond. That's efficient at the population level, but it means the system generates data on a particular sequence, not on what happens if you match different people to different treatments from the start. You're learning about a system that's already constrained.
Which means the AI matching dream might require a different kind of clinical trial design — not the traditional randomized controlled trial that compares two treatments head to head, but something more like a multi-arm platform trial where patients are randomized to multiple options and the allocation ratios shift over time based on what's working for whom.
Platform trials have been used in oncology for years — the I-SPY trials for breast cancer are the classic example. You have a master protocol, multiple experimental arms, and adaptive randomization that assigns more patients to treatments that are performing well for their biomarker profile. You could absolutely design something similar for psychotherapy. The challenge is that biomarkers in mental health are much fuzzier than biomarkers in cancer. HER2 amplification is a clear biological signal. A cognitive style preference is a self-report measure with measurement error.
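The adaptive-allocation mechanism itself is simple enough to sketch. Here is a toy Thompson-sampling version for a single patient profile; the arms and response rates are invented, and real platform trials use pre-registered Bayesian allocation rules rather than anything this bare.

```python
import random

# Toy sketch of outcome-adaptive randomization, the mechanism behind
# platform trials like I-SPY. Response rates are invented and are
# unknown to the simulated trial.
random.seed(1)

arms = ["CBT", "ACT", "DBT"]
true_rate = {"CBT": 0.40, "ACT": 0.65, "DBT": 0.45}

# Beta(1, 1) prior on each arm's response rate
successes = {a: 1 for a in arms}
failures = {a: 1 for a in arms}
allocated = {a: 0 for a in arms}

for _ in range(600):  # 600 simulated patients sharing one profile
    # Thompson sampling: draw a plausible response rate from each arm's
    # posterior, assign the patient to the arm with the highest draw
    draws = {a: random.betavariate(successes[a], failures[a]) for a in arms}
    choice = max(draws, key=draws.get)
    allocated[choice] += 1
    if random.random() < true_rate[choice]:
        successes[choice] += 1
    else:
        failures[choice] += 1

# Allocation drifts toward the arm that is actually performing best
print(allocated)
```

Run it and most of the 600 patients end up in the best-performing arm, which is exactly the property that makes these designs efficient and also what makes them statistically delicate.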
That fuzziness is exactly why Daniel's personality-based intuition is both obviously right and frustratingly hard to operationalize. We can feel that someone who prefers detachment over debate would do better in ACT than in standard CBT. But how do you measure that reliably enough to build a model on it?
There are measures that try. The Cognitive Fusion Questionnaire measures the tendency to get entangled with thoughts. The Need for Cognition scale measures enjoyment of effortful thinking. The Intolerance of Uncertainty scale measures, well, intolerance of uncertainty. These are all validated instruments. And there's some research showing they moderate treatment response. People high in cognitive fusion may do better in ACT than in CBT. People high in need for cognition may do better in CBT than in supportive therapy. But the evidence is scattered, the samples are small, and the interactions are not consistently replicated.
Where does that leave Daniel, the person who's figured out that he likes detachment and wants to know if there's a system that can map him to the right approach?
I think the honest answer is that right now, the best system is a good clinician who knows the range of evidence-based treatments and can have a collaborative conversation about fit. A skilled therapist should be able to describe the rationale behind different approaches — here's how CBT would approach your problem, here's how ACT would approach it, here's what the process would feel like — and let the patient's preferences inform the choice. That's not AI. That's just good clinical practice.
It's also not how most people access therapy. You get whoever's available, whatever they're trained in, and you hope it works. The matching is mostly accidental.
That's a real failure of the system. In an ideal world, every intake would include a discussion of treatment options and patient preferences. Shared decision-making has been advocated in medicine for decades. But in mental health, it's underutilized. Partly because of limited access, partly because many clinicians are only trained in one or two modalities and can't offer a genuine choice.
Which is where AI could actually add value even without solving the full matching problem. A well-designed digital tool could walk a patient through a structured preference assessment, explain the different evidence-based options in plain language, and generate a shortlist of approaches that align with their values and cognitive style. It wouldn't have to predict outcomes perfectly. It would just have to narrow the search space and facilitate a better conversation.
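A minimal sketch of what such a shortlisting tool could look like. To be clear, the style dimensions and the per-approach weights below are invented for illustration, not validated clinical parameters.

```python
# Hypothetical preference-based shortlisting: each approach is scored 0-1
# on three style dimensions, and a patient's stated preferences are
# matched by distance. All numbers are illustrative guesses.

# Dimensions: (challenge thoughts, structured skills training,
#              past-focused exploration)
APPROACHES = {
    "CBT":            (0.9, 0.7, 0.2),
    "ACT":            (0.1, 0.5, 0.2),
    "DBT":            (0.4, 0.9, 0.2),
    "Schema Therapy": (0.5, 0.4, 0.9),
    "MBCT":           (0.2, 0.8, 0.1),
}

def shortlist(preferences, k=3):
    """Rank approaches by closeness to a patient's stated preferences.

    preferences: three 0-1 values on the same dimensions.
    Returns the k closest approaches (smaller distance = better fit).
    """
    def distance(profile):
        return sum((p - q) ** 2 for p, q in zip(preferences, profile))
    ranked = sorted(APPROACHES, key=lambda name: distance(APPROACHES[name]))
    return ranked[:k]

# Someone like Daniel: low appetite for disputing thoughts, moderate
# structure, present-focused
print(shortlist((0.1, 0.5, 0.2)))
```

The output for that profile puts ACT first, which is the whole idea: the tool narrows the search space and hands the final choice back to the patient.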
I think that's exactly right, and there are already some attempts at this. Mindler in Sweden uses digital assessments to match patients to therapists. Lyra Health and Spring Health in the US do something similar. But the matching is often based on clinical presentation and therapist availability, not on the deeper personality and cognitive style dimensions we've been discussing.
Daniel's insight — that the best framework depends on the person — is actually more radical than it sounds. It implies that the whole project of finding the single best therapy through randomized controlled trials is misguided. The question shouldn't be which treatment works best on average. It should be which treatment works best for whom, under what conditions.
That's been the mantra of personalized medicine for twenty years. And in psychotherapy, the field has been talking about aptitude-treatment interactions since the nineteen sixties. Cronbach and Snow wrote about it extensively. But the actual progress has been slow. The interactions are complex, the samples needed to detect them are enormous, and the funding for this kind of work is limited.
Let me push on something. You mentioned earlier that ACT and CBT produce similar effect sizes in head-to-head trials. If the average outcomes are the same, does the matching actually matter?
It matters if the averages hide meaningful variation. Imagine two treatments, A and B. On average, they produce the same improvement. But for subgroup one, A produces a large effect and B produces a small effect. For subgroup two, it's the reverse. If you randomized everyone without matching, the averages would cancel out and you'd conclude the treatments are equivalent. But if you could identify who belongs to which subgroup, you could dramatically improve outcomes for everyone.
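That thought experiment can be put in numbers. A toy calculation, with invented effect sizes:

```python
# Two treatments with identical average effects that differ sharply by
# subgroup. Effect sizes are invented purely to illustrate the point.

# Mean improvement by (subgroup, treatment)
effect = {
    ("one", "A"): 0.8, ("one", "B"): 0.2,
    ("two", "A"): 0.2, ("two", "B"): 0.8,
}
subgroups = ["one", "two"]  # assume equal sizes

# Unmatched: randomize everyone 50/50 and the treatments look equivalent
avg_a = sum(effect[(g, "A")] for g in subgroups) / 2
avg_b = sum(effect[(g, "B")] for g in subgroups) / 2

# Matched: give each subgroup whichever treatment serves it better
matched = sum(max(effect[(g, "A")], effect[(g, "B")]) for g in subgroups) / 2

print(avg_a, avg_b, matched)  # 0.5 0.5 0.8
```

The head-to-head trial sees 0.5 versus 0.5 and declares a tie; the matched allocation gets 0.8 for everyone.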
We have reason to believe those subgroups exist.
The DeRubeis work I mentioned earlier shows exactly this pattern for CBT versus medication. Some people do substantially better with one than the other. The question is whether we can identify those subgroups prospectively with enough accuracy to guide clinical decisions. And the current answer is: partially, with uncertainty, in research settings. Not yet in routine practice.
Let's talk about the therapist variable, because Daniel mentioned it briefly. Finding the right therapist is part of the grind. And the therapist effect is huge — some studies suggest the therapist accounts for five to ten percent of outcome variance, which is similar to the treatment effect itself.
The therapist effect is one of the most robust findings in psychotherapy research. Michael Lambert's work, and later work by Wampold and others, consistently shows that some therapists reliably produce better outcomes than others, even when delivering the same manualized treatment. The difference between the best and worst therapists in a trial can be substantial. And we don't fully understand what makes a good therapist. Therapeutic alliance, empathy, and the ability to repair ruptures are part of it. But there's something else — a kind of facilitative interpersonal skill — that's harder to measure.
The matching problem is actually a three-dimensional problem. You're matching a patient to a therapy and to a therapist. And the interactions are probably complex. Some therapists might be brilliant at delivering CBT and terrible at delivering ACT, not because ACT is worse but because it doesn't fit their interpersonal style.
This is the real frontier, and it's barely been studied. There's some work on matching patients to therapists based on attachment style, personality, or demographic similarity. But the evidence is thin. Most patients don't get to choose their therapist. They get assigned based on availability, not fit.
Which brings us back to AI. If you could collect enough data — patient characteristics, therapist characteristics, therapy type, session-by-session outcomes — you could potentially learn these complex three-way interactions. But the data infrastructure doesn't exist.
But it's being built. The NHS Talking Therapies program collects outcome data at every session for millions of patients. If they added therapist identifiers and more detailed patient-level variables, they'd have the raw material. The challenge is privacy, consent, and the computational complexity of modeling three-way interactions with enough precision to be clinically useful.
There's a philosophical question here too. Even if you could build a perfect matching algorithm, would people want to use it? Daniel's example of just floating away from a wrong belief — that's a values-laden choice. It reflects something about his personality, his temperament, his way of being in the world. An algorithm that says, based on your data, you should do ACT — is that empowering or is it just another form of being told what to do?
That's an important question. The ideal use of these tools is not to replace patient preference but to inform it. You walk into an intake appointment, you complete a structured assessment, and the system says — based on patterns from thousands of people with profiles similar to yours, these three approaches tend to produce the best outcomes. Here's what each involves. Here's what the process feels like. Which resonates with you? The algorithm narrows the search space. The patient makes the choice.
That feels like a useful application. Not the AI therapist replacing the human. Not the black box dictating treatment. Just a decision support tool that makes the matching process less random.
There are people working on exactly this. I mentioned DeRubeis. There's also Adam Chekroud, who co-founded Spring Health out of his machine learning research at Yale, applying predictive models to match patients to therapists and treatments. They've published data showing their matching algorithm improves outcomes compared to usual care. The effect sizes are modest but real. And this is going to accelerate as the datasets get larger and the algorithms get better.
Let me pull on one more thread before we wrap. Daniel's original framing was about tone — argue with the thought, categorize the distortion, detach and float away. He identified with the third one. But I wonder if that framing is too neat. In practice, don't most therapies blend these modes? Even in ACT, you're not just floating away from every thought. If the thought is I should quit my job, you might defuse from the catastrophic spin around it but still engage with the content practically. And in CBT, a skilled therapist isn't just arguing with every thought — they're teaching the patient to recognize patterns and choose which thoughts are worth engaging.
That's a critical point, and it's one of the reasons the therapy wars are mostly counterproductive. Good therapists are eclectic in the best sense — they draw on multiple frameworks depending on what the patient needs in the moment. The manualized treatments are for research and training. In practice, experienced clinicians develop a clinical wisdom that transcends any single model. They might use CBT techniques for symptom management, ACT techniques for values clarification, psychodynamic techniques for relational patterns, all within the same course of therapy.
Which makes the matching problem even more complex. You're not matching a patient to a pure therapy. You're matching them to a therapist who can flexibly deploy multiple approaches in a way that fits the patient's evolving needs.
That's why the therapist variable may ultimately be more important than the treatment variable. A skilled therapist can adapt. A rigid therapist delivering the perfect evidence-based protocol for the wrong patient in the wrong way will still produce poor outcomes.
To bring this back to Daniel's questions. The genealogy, in brief: first wave behaviorism, second wave cognitive therapy and CBT, third wave ACT and DBT and MBCT and CFT and MCT and Schema Therapy and a dozen other remixes. The forks emerged because CBT's emphasis on disputing thoughts didn't work for everyone, and different innovators developed different solutions — accept the thought, defuse from it, hold it dialectically, cultivate self-compassion around it, examine your meta-beliefs about it. The evidence base supports all of these for various conditions, and the Dodo Bird Verdict suggests that for many common problems, the specific approach matters less than the fit and the relationship.
On the matching question, the honest answer is that we're in the early stages. The data infrastructure is being built. The machine learning methods are being developed. The early results are promising but not yet ready for routine clinical deployment. In the meantime, the best approach is the old-fashioned one — a collaborative conversation with a knowledgeable clinician who can describe the options and respect the patient's preferences.
If you're someone like Daniel, who already knows that detachment resonates more than debate, that's useful information to bring into that conversation. You don't need an algorithm to tell you what you've already figured out about yourself.
That might be the most practical takeaway from this whole discussion. Before you worry about which therapy is empirically best, figure out what kind of approach feels right to you. Do you want someone to challenge your thoughts or help you accept them? Do you want structured skills training or open-ended exploration? Do you want to focus on the past, the present, or the future? Your answers to those questions will narrow the field more effectively than any clinical trial.
Now: Hilbert's daily fun fact.
Hilbert: In nineteen hundred, so the story goes, the King of Italy, Umberto the First, encountered a restaurant owner who was his exact physical double, shared his name, was born on the same day in the same city, and had married a woman with the same name as the queen on the same day. The restaurant owner died in a shooting accident, and the king himself was assassinated just days later.
I don't know what to do with that information, and I'm not sure I want to.
This has been My Weird Prompts, with thanks to our producer Hilbert Flumingtop. If you want more episodes, find us at myweirdprompts.com or wherever you get your podcasts. We'll be back next time.