#2440: Build Your Own CRM With AI Agents

Off-the-shelf CRMs are built for sales teams, not solo operators. Here's why building your own with AI might be smarter.

Episode Details

Episode ID: MWP-2598
Published:
Duration: 36:15
Audio: direct link
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Most off-the-shelf CRMs are optimized for sales teams with managers, pipelines, and quotas. For a solo operator who just wants to track interesting companies, do deep research, and send thoughtful emails, that entire apparatus is dead weight. The hidden cost isn't just the subscription price — it's the cognitive cost of evaluating tools: signing up for trials, getting inundated with automated emails from sales reps, and eventually defaulting back to spreadsheets.

Why Off-the-Shelf CRMs Don't Fit

HubSpot CRM starts at $50/month per user for basic features. Salesforce Essentials is $25/month but lacks AI features. To get research, enrichment, and email discovery, you need enterprise tiers that are absurd for a solo operator. The cheap tiers don't do what you need; the expensive tiers do too much. And every trial triggers a sequence of automated emails from people named Tyler who want to "hop on a quick call."

The Build-Your-Own Opportunity

Three years ago, building your own CRM required full-stack development skills. Today, the AI agent landscape has changed everything. Claude and GPT-4o can take unstructured research and output structured data with funding history, location, and remote policy. Supabase offers a free tier with 500MB storage — more than enough for tracking a few hundred companies. The Model Context Protocol (MCP) from Anthropic has made integration dramatically simpler, with over 60 pre-built MCP servers connecting Claude directly to databases, email, and Slack.
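The extraction step described above can be sketched in a few lines: the prompt pins the model to a fixed schema and tells it to admit uncertainty, and the response is parsed and checked before anything touches the database. The field names are illustrative, and the canned `response` string stands in for what a real build would get back from the Claude or GPT-4o API.

```python
import json

SCHEMA_FIELDS = ["name", "url", "funding_stage", "remote_policy", "key_contacts"]

# The prompt constrains the model to a fixed schema and allows "I don't know".
prompt = (
    "Extract the following fields from the research notes below. "
    "Return strict JSON with exactly these keys: " + ", ".join(SCHEMA_FIELDS) + ". "
    "Use null for any field you are not certain about.\n\n"
    "Research notes: <pasted web content goes here>"
)

# Canned stand-in for a model response; a real build would send `prompt` to
# the Claude or GPT-4o API and parse the returned text instead.
response = (
    '{"name": "ExampleCo", "url": "https://example.co", '
    '"funding_stage": "Series A", "remote_policy": null, '
    '"key_contacts": ["Jane Doe (CTO)"]}'
)

record = json.loads(response)
assert set(record) == set(SCHEMA_FIELDS)  # reject malformed output early
unknown = [k for k, v in record.items() if v is None]
print(unknown)  # ['remote_policy']: blank fields go to human review
```

The key design choice is that uncertainty is represented explicitly (`null`) rather than guessed, which is what makes the later validation step possible.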

Three Paths Forward

  1. Off-the-shelf CRMs: Expensive, philosophically misaligned, and optimized for workflows that don't match curiosity-driven research.

  2. No-code platforms (Airtable, ToolJet): Give you a custom data model without a full codebase. ToolJet is open source and self-hostable at $24-79/month per builder with no per-end-user fees.

  3. Full DIY stack (Supabase, Claude API, MCP servers): Under $30/month, full ownership, no vendor lock-in. You decide what fields matter — remote policy, funding stage, why you found the company interesting.

Why the Build Path Wins

The specific workflow Daniel described — research first, relationship second, CRM a distant third — doesn't exist as a product category yet. Tools like Clay come close but are built for technical revenue operations teams and require 4-6 weeks to learn. Building your own means you own the data model, you decide how the research agent works, and you can add features whenever you want. For someone who already sells AI builds to clients, it's also a powerful demonstration of the value they offer.

The rise of micro-CRMs — hyper-specific relationship management tools built by solo operators for their exact workflow, powered by AI agents — is a pattern worth watching. A typical solo operator spends 5-10 hours a week on manual research that could be automated. At a $100-200 billable rate, that's $500-2000/week in lost time. The DIY stack costs under $30/month.


Transcript

Corn
Daniel sent us this one — and I think it hits on something a lot of solo operators feel in their bones but don't always articulate. He's been building internal tools for clients, AI agent stuff mostly, and now he wants to build something for himself. The problem is, every off-the-shelf CRM he's tried has been a time sink — signing up for trials, getting hammered with drip cadence emails he never asked for, and eventually just giving up and going back to a spreadsheet. What he actually wants is a system that does deep research on interesting companies, finds email addresses, maps out funding history, figures out if they work with remote people — he's based in Israel, so that part matters — and then lets him track relationships and follow-ups. Not a traditional CRM with pipeline forecasting and manager dashboards. More like a discovery and relationship tracking tool. His question is, should he go with something off-the-shelf, or build this himself with the AI tools he's already using for clients?
Herman
Before we dive in — quick note. Today's episode script is coming to us from DeepSeek V four Pro. Which I only mention because it's handling a topic about building custom tools with AI, and there's something satisfying about an AI model writing a script about AI-powered tool building.
Corn
I'll allow it.
Herman
Here's the thing that jumps out at me right away. Daniel said something in his prompt that I think is the real insight — he said every CRM he's tried felt like it was shouting at him. Which is a fantastic way to put it, because off-the-shelf CRMs do feel like they're shouting at you. They're optimized for sales teams, sales managers, sales pipelines. Everything is built around the assumption that you have a funnel, you have quotas, you need drip cadences, you need activity tracking so a manager can see what calls you made. And if you're a solo operator just trying to track interesting companies and do some research and maybe send a thoughtful email? That whole apparatus is dead weight.
Corn
It's like buying a combine harvester when you're growing tomatoes on your balcony. Technically it harvests things. Practically, you've now got a piece of farm equipment blocking your door and the manual is four hundred pages.
Herman
The cost structure reflects that. HubSpot's CRM starts at fifty dollars a month per user for basic features. Salesforce Essentials is twenty-five dollars a month per user but lacks the AI features. If you want anything beyond a digital Rolodex — the research layer, the enrichment, the email discovery — you're looking at enterprise tiers that are completely absurd for a solo operator. So you end up in this weird gap. The cheap tiers don't do what you need, the expensive tiers do too much, and every single one of them wants you to sign up for a free trial that immediately triggers a sequence of seventeen automated emails.
Corn
That's the hidden cost nobody talks about. Not the subscription price — the cognitive cost of evaluating tools. Daniel mentioned spending so much time just signing up, managing trial periods, getting his inbox inundated. I've done this dance. You sign up for one CRM trial, suddenly you're getting weekly check-in emails from a sales rep named Tyler who wants to "hop on a quick call to understand your workflow." You ignore Tyler. Tyler emails again. You eventually unsubscribe, delete the account, and open a spreadsheet. Multiply that by four or five CRMs, and you've lost days of your life and gained nothing.
Herman
That's not just an annoyance — it's a structural problem with how SaaS is sold to solo operators. The trial economy assumes you're evaluating on behalf of a team, that you have purchasing authority, that you might actually want that call with Tyler. For a solo operator who just wants to track some companies and send some emails, the evaluation process is the barrier to entry. So they default to spreadsheets. And spreadsheets are terrible at relationship tracking. They're static, they don't do research, they don't surface connections, they don't remind you to follow up.
Corn
Which brings us to why now. Because three years ago, the build-your-own option was genuinely hard. You needed to be a full-stack developer, you needed to understand database design, you needed to build a front end. Today, the AI agent landscape has changed the calculus entirely. You've got Claude and GPT-4o that can take unstructured research — "find me companies doing X in Y region" — and output structured data with funding history, location, remote policy, key contacts. You've got lightweight databases like Supabase with a free tier that includes five hundred megabytes of storage and two gigs of bandwidth, which is more than enough for a solo operator tracking a few hundred companies. You've got vector stores for semantic search if you want to find "companies similar to this one I liked last month." The pieces are all there, and they're cheap.
Herman
The Model Context Protocol — MCP — which Anthropic released last year, has made the integration layer dramatically simpler. There are now over sixty pre-built MCP servers available. You can connect Claude directly to a Supabase database, to Gmail, to Slack. There's a tutorial from early this year showing someone building a professional-grade CRM in fifteen minutes using Google Stitch for the UI design and Claude Code for the code generation. Near one-shot build. That's not a compromise — that's a strategic advantage. You get exactly the data model you want, exactly the workflow you want, and you're not paying per-user SaaS pricing.
Corn
Fifteen minutes feels like an internet-flex number, but even if it takes a weekend — which is more realistic — that's still less time than Daniel's already spent evaluating off-the-shelf CRMs. And at the end of it, he owns the thing. No vendor lock-in, no pricing changes next quarter, no feature deprecation when the product manager decides drip cadences are the future.
Herman
And for Daniel specifically, he's already working with these tools for clients. He mentioned using Airtable and ToolJet before. So he's not starting from zero. ToolJet in particular is interesting here — it's open source, you can self-host it, and its pricing is twenty-four to seventy-nine dollars per builder per month with no per-end-user fees. Airtable is twenty dollars per user per month. These are intermediate options — you get a lot of the flexibility of a custom build without writing everything from scratch. But the real unlock is adding the AI agent layer on top for research and enrichment.
Corn
We've got three paths emerging. One, off-the-shelf CRMs, which we've already established are expensive and philosophically misaligned with what Daniel actually wants. Two, no-code platforms like Airtable or ToolJet, which give you a custom data model without a full codebase. Three, the full DIY stack — Supabase, Claude API, maybe some Python scripts, maybe MCP servers for email integration. And the interesting question isn't really which one is "best" — it's which one fits the specific workflow Daniel described. Research first, relationship second, CRM a distant third.
Herman
That ordering matters a lot. Because most CRMs start from the assumption that you already know who you're selling to. You import a list, you work the list, you close deals. Daniel's workflow is the opposite. He starts with curiosity — "I see a company doing something interesting." The research comes first, the qualification comes from the research, and only then does it become a relationship worth tracking. That's not a CRM workflow. That's a research assistant workflow with a database attached.
Corn
If you try to force that into a traditional CRM, you end up fighting the data model constantly. You're putting companies into "lead" status when they're not leads yet. You're creating custom fields to store research notes that the CRM doesn't understand. You're ignoring the pipeline view because you don't have a pipeline. After six months, you've got a Franken-CRM that you hate using, and you're back to spreadsheets.
Herman
Which is why I think the build path — even the light build path with Airtable or ToolJet — is the right answer here. Not because building is always better, but because the specific thing Daniel wants doesn't exist as a product category yet. There are tools that do parts of it. Clay, for example, is an AI-powered data enrichment platform that pulls from seventy-five to a hundred plus data sources simultaneously and can achieve over eighty percent email match rates. It has an AI agent called Claygent that does web research, analyzes company websites, summarizes LinkedIn profiles, and generates personalized outreach copy. Users report reducing manual research time by up to seventy percent.
Corn
That sounds suspiciously close to what Daniel described.
Herman
It does, and for some people Clay is the answer. But there are catches. Clay is built for technical revenue operations teams and growth hackers. Most users need four to six weeks to get comfortable with its credit-based pricing and conditional logic system. And it's still fundamentally a sales intelligence tool — it's optimized for lead generation and outbound prospecting, not for the kind of long-term curiosity-driven relationship tracking Daniel is describing. He doesn't want to blast outreach sequences. He wants to learn about interesting companies and occasionally send a thoughtful email.
Corn
Four to six weeks to get comfortable is a long onboarding curve for a solo operator who's already burned out on tool evaluation. That's the same problem in a different package. You trade the drip-cadence fatigue for credit-system confusion. And you're still locked into someone else's product roadmap.
Herman
That's where the build-your-own philosophy becomes more than just a technical choice — it becomes a philosophical one about control. When you build with Claude Code and Supabase, you own the data model. You decide what fields matter — remote policy, funding stage, key contacts, why you found the company interesting in the first place. You decide how the research agent works. You decide what "relationship strength" means. And if you want to add something new — sentiment analysis on email replies, integration with your calendar, automatic enrichment from news feeds — you just add it. You're not waiting for the vendor to ship a feature.
Corn
There's also a meta-angle here that I think is worth naming. Daniel sells agentic AI builds and uplifts. That's his business. Building his own internal tool with AI is itself a demonstration of the value he sells to clients. He can say to a prospect, "Here's the system I use to run my own business development. I built it with the same approach I'm proposing for you." That's not just eating your own dog food — that's serving the dog food on a silver platter with a testimonial attached.
Herman
Dog food metaphors aside, you're right. And we're seeing this pattern emerge more broadly. The rise of what you might call micro-CRMs — hyper-specific relationship management tools built by solo operators for their exact workflow, powered by AI agents. They're not trying to serve a thousand different use cases. They serve one use case perfectly. And because the AI handles the heavy lifting on research and data extraction, the builder doesn't need to be a full-stack developer. They need to understand their own workflow and be willing to spend a weekend on it.
Corn
The numbers back this up, too. A typical solo operator spends five to ten hours a week on manual research and data entry that could be automated with AI agents. If your billable rate is, say, a hundred to two hundred dollars an hour, that's five hundred to two thousand dollars a week in lost time. The DIY stack — Supabase free tier, Claude API credits, maybe twenty dollars a month in database hosting — costs under thirty dollars a month. Versus fifty to a hundred fifty dollars per user per month for off-the-shelf SaaS. The math isn't even close.
Herman
Here's the thing about that five to ten hours — it's not just the time. It's the context switching. You're doing deep thinking on a client project, you remember a company you wanted to research, you open twelve tabs, you're copy-pasting from Crunchbase into a spreadsheet, you're formatting columns, you're losing your train of thought. An hour later you've got a messy spreadsheet and you've lost momentum on the client work. An AI agent that does the research in the background and drops structured data into your database eliminates that entire context-switching cost.
Corn
If we're going to give Daniel a real recommendation — and we should, because he asked — I think the answer is build. Not because off-the-shelf tools are bad, but because his requirements are specific enough that any off-the-shelf option is going to be a compromise. And the build path today is accessible in a way it wasn't even two years ago.
Herman
I agree, with one caveat. The build path comes with ongoing maintenance. If an API changes, you fix it. If your database needs a migration, you handle it. There's no mobile app unless you build one. There's no built-in email integration unless you set it up through MCP or a third-party service. For some people, that maintenance burden is a dealbreaker. For Daniel, given that he's already building AI systems for clients, it's probably just Tuesday.
Corn
He's not a beginner asking "what's an API." He's someone who's been using Airtable and ToolJet and building agentic workflows. The maintenance burden is real, but it's also within his skill set. And the alternative — paying for a SaaS tool that does seventy percent of what he needs and frustrates him on the other thirty percent — has its own ongoing cost. The cost of working around the tool instead of the tool working for you.
Herman
Which is exactly the dynamic that drives people back to spreadsheets. The spreadsheet doesn't fight you. It's dumb, it's manual, it's inefficient — but it doesn't impose a workflow you didn't choose. Building your own system gives you the efficiency without the imposition. You get a tool that's as smart as you want it to be, and as flexible as a spreadsheet, but with structure and automation.
Corn
We've laid out the problem with off-the-shelf CRMs and the promise of a DIY approach. Let's get into the technical weeds of how you actually build this thing.
Corn
That's the thing I keep coming back to. The gap Daniel is sitting in — and a lot of solo operators are sitting in — isn't really a software gap. It's a design philosophy gap. On one side you've got Salesforce and HubSpot, which assume you're running a sales team with pipelines and forecasts and manager dashboards. On the other side you've got the spreadsheet, which assumes nothing and helps with nothing. Neither one is built for someone whose primary activity is curiosity.
Herman
The traditional CRM is optimized for what happens after you already know who you're selling to. But Daniel's workflow starts upstream of all that. He sees a company that catches his attention — maybe they're doing something novel in agentic AI, maybe they're in an adjacent space — and the first thing he needs isn't a lead status. It's a research dossier. Who are these people? How are they funded? Do they work with remote contributors? What have they built? That's not pipeline management. That's intelligence gathering.
Corn
If you try to use a pipeline tool for intelligence gathering, you end up contorting the tool until it breaks. You're creating deals for companies that aren't deals. You're putting research notes in fields meant for call logs. You're ignoring every dashboard because none of the metrics apply. Eventually you stop logging in, and the whole thing becomes a graveyard of half-researched companies you meant to follow up on six months ago.
Herman
Which is exactly the spreadsheet graveyard problem. Except now you're paying fifty dollars a month for the privilege of having a graveyard with a user interface.
Corn
What Daniel actually wants — and I think this is worth naming explicitly — is a system that mirrors how a curious human actually works. Research comes first. You learn about the company. You figure out if there's a genuine reason to connect. Only then does it become a relationship worth tracking. Research first, relationship second, CRM third. Most tools on the market reverse that order.
Herman
That ordering explains why the build path makes so much sense here. When you build it yourself, you can design the workflow around the research phase because that's what matters. The database schema starts with "why I found this interesting" and "what I've learned so far." The relationship tracking — emails sent, replies received, follow-up dates — gets layered on only when it's relevant. You're not bolting research onto a sales tool. You're building a research tool that occasionally becomes a relationship tracker. And that shift in philosophy is exactly what leads into the AI-led pattern, which is where things get different from the old way.
Herman
Let's walk through what that AI-led pattern actually looks like. The old way is you open Crunchbase, you open LinkedIn, you open the company's website, you open a news search, you copy-paste everything into a document or a spreadsheet, and you spend forty-five minutes per company assembling a profile. The AI-led pattern flips that entirely. You describe what you want in natural language — "find me companies doing agentic AI workflow automation in Europe and Israel, output as structured data with funding stage, remote policy, key people, and a one-paragraph summary of why they're interesting" — and the model does the research, extracts the structured data, and hands you back clean JSON.
Corn
The key word there is structured. The difference between a markdown report sitting in Google Drive and a database record you can query, sort, filter, and act on is the difference between a pile of notes and an actual system. Once the data is structured, you can ask questions like "show me every Series A company I've researched that allows remote work and that I haven't contacted in the last sixty days." That's not a search query. That's a business development strategy expressed as a database query.
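Corn's question translates almost directly into SQL. A self-contained sketch, using Python's built-in sqlite3 as a local stand-in for Supabase's Postgres (schema and company names are illustrative, not from the episode):

```python
import sqlite3
from datetime import date, timedelta

# In-memory sqlite3 standing in for a hosted Supabase/Postgres instance.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE companies (name TEXT, funding_stage TEXT, "
    "remote_policy TEXT, last_contacted TEXT)"
)
today = date.today()
conn.executemany(
    "INSERT INTO companies VALUES (?, ?, ?, ?)",
    [
        ("StaleCo",  "Series A", "remote-first", (today - timedelta(days=120)).isoformat()),
        ("FreshCo",  "Series A", "remote-first", today.isoformat()),
        ("OfficeCo", "Series A", "onsite",       None),
    ],
)

# "Every Series A company that allows remote work and that I haven't
# contacted in the last sixty days" expressed as a query.
cutoff = (today - timedelta(days=60)).isoformat()
rows = conn.execute(
    """
    SELECT name FROM companies
    WHERE funding_stage = 'Series A'
      AND remote_policy = 'remote-first'
      AND (last_contacted IS NULL OR last_contacted < ?)
    """,
    (cutoff,),
).fetchall()
print(rows)  # [('StaleCo',)]
```

ISO date strings sort lexicographically, so the comparison against `cutoff` works without any date parsing inside the query.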
Herman
And the AI isn't just doing research — it's doing extraction against a schema you define. You tell it what fields matter: company name, URL, headquarters location, funding stage, total raised, remote policy, key contacts with titles, email addresses if discoverable, and a notes field that captures why you found them interesting in the first place. The model reads unstructured web content and populates structured fields. With proper prompting and validation loops, GPT-4o achieves better than ninety percent accuracy on that kind of structured extraction from unstructured text.
Corn
Ninety percent is worth pausing on, because I think there's a misconception that AI agents are too unreliable for this kind of work. People imagine the model hallucinating a funding round or inventing an email address. But when you constrain the task — extract these specific fields from these specific pages, leave the field blank if you're uncertain — the reliability goes way up. And you can add a validation step where the agent checks its own work against a second source before writing to the database. That pushes accuracy into the high nineties.
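One way to implement the second-source check Corn mentions is a simple agreement rule between two independent extraction passes: keep a field only when both passes agree, otherwise blank it. The function and rule here are an illustrative sketch, not something specified in the episode.

```python
def cross_validate(primary: dict, secondary: dict) -> dict:
    """Keep a field only when two independent extraction passes agree;
    otherwise blank it for human review (the 'leave it blank if you're
    uncertain' rule)."""
    validated = {}
    for field, value in primary.items():
        if value is not None and secondary.get(field) == value:
            validated[field] = value
        else:
            validated[field] = None  # disagreement or missing: blank, don't guess
    return validated

pass_a = {"name": "ExampleCo", "funding_stage": "Series A", "remote_policy": "remote-first"}
pass_b = {"name": "ExampleCo", "funding_stage": "Series A", "remote_policy": "hybrid"}

result = cross_validate(pass_a, pass_b)
print(result)
# {'name': 'ExampleCo', 'funding_stage': 'Series A', 'remote_policy': None}
```

The conservative failure mode is the point: a blank field costs a minute of human lookup, while a confidently wrong field quietly poisons the database.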
Herman
This is where the technical stack starts to come together. At the foundation, you have a lightweight database — Supabase is the obvious choice here because its free tier gives you five hundred megabytes of storage and two gigabytes of bandwidth, which is more than enough for a solo operator tracking hundreds of companies. Supabase also gives you Postgres under the hood, which means you can add pgvector for semantic search later without switching databases.
Corn
Let me make sure I'm following the stack. Database at the bottom. Then a vector store for semantic search — so you can search not just by company name but by concept, like "companies working on multi-agent coordination" even if those exact words aren't in your notes. Then an agent orchestration layer that coordinates the research workflow — firing off searches, feeding results to the model, parsing the structured output, writing to the database. Is that the shape of it?
Herman
That's exactly the shape. The vector store — whether it's Pinecone or pgvector running inside Supabase — is what turns your database from a filing cabinet into a discovery engine. You store embeddings of your research notes, and suddenly you can find companies that are conceptually similar to ones you've already found valuable. It's like having a research assistant who remembers everything you've ever been interested in and can say "this new company reminds me of three others you looked at last year."
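The mechanism behind that "conceptually similar" search is cosine similarity between embedding vectors. A toy sketch with hand-made 3-dimensional vectors (a real build would store model-generated embeddings with hundreds of dimensions in pgvector or Pinecone; the notes and numbers here are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" of research notes.
notes = {
    "AgentCo: multi-agent coordination for workflows": [0.9, 0.1, 0.2],
    "ShopCo: e-commerce checkout optimization": [0.1, 0.9, 0.1],
    "OrchestrateAI: LLM agent team tooling": [0.6, 0.5, 0.4],
}
# Toy embedding of the query "companies working on multi-agent coordination".
query = [0.9, 0.1, 0.2]

ranked = sorted(notes, key=lambda name: cosine(query, notes[name]), reverse=True)
print(ranked[0])  # the AgentCo note ranks first, with no keyword overlap needed
```

Note the query never mentions "AgentCo" by name; the match comes from the vectors pointing the same way, which is what lets you search by concept rather than by keyword.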
Corn
The agent orchestration layer — this is where LangChain or custom Python scripts come in — that's the conductor. It says: first, search for companies matching these criteria. Second, for each company, pull the website, Crunchbase profile, and recent news. Third, feed all of that to the model with the extraction schema. Fourth, validate the output. Fifth, write to the database. Sixth, flag the top three most promising results for human review. That's a workflow that runs while you're doing actual client work.
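Those six steps can be sketched as a stubbed pipeline. Every function body below is a placeholder for a real integration (search API, scraper, model call, Supabase write), and the names and promise scores are invented for illustration:

```python
# Stubbed six-step pipeline; each stub stands in for a real integration.
SCORES = {"ExampleCo": 0.9, "OtherCo": 0.4, "ThirdCo": 0.7, "FourthCo": 0.8}

def search_companies(criteria):                 # 1. fire off searches
    return list(SCORES)

def gather_sources(name):                       # 2. pull site, profile, news
    return {"name": name, "raw": f"research notes about {name}"}

def extract(sources):                           # 3. model fills the schema
    return {"name": sources["name"], "promise": SCORES[sources["name"]]}

def validate(record):                           # 4. second-pass check
    return record if record.get("name") else None

def run_pipeline(criteria, db):
    for name in search_companies(criteria):
        record = validate(extract(gather_sources(name)))
        if record is not None:
            db.append(record)                   # 5. write to the database
    # 6. flag the top three most promising results for human review
    return sorted(db, key=lambda r: r["promise"], reverse=True)[:3]

db = []
top3 = run_pipeline("agentic AI in Europe and Israel", db)
print([r["name"] for r in top3])  # ['ExampleCo', 'FourthCo', 'ThirdCo']
```

The useful property of this shape is that every step is swappable: you can replace the search stub with a real API, or the extraction stub with a Claude call, without touching the rest of the flow.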
Herman
I want to ground this in a concrete example, because the architecture can sound abstract. There's a solo consultant I came across who built what he calls a company radar system. Every Monday morning, a Python script kicks off. It queries Crunchbase and LinkedIn for companies in his space that have raised funding in the last thirty days, pulls their websites, grabs recent news mentions. Then it sends all of that to Claude with a prompt that says: here are ten companies, for each one extract funding stage, location, remote policy, key hires in the last quarter, and write a two-sentence assessment of whether they might need the kind of consulting I do. The output lands in his Supabase database as structured records. He opens a simple dashboard — could be built in Airtable, could be a Retool front-end — and sees the top three companies the system thinks are worth his attention. Total human time per week: fifteen minutes of review. Everything else happened while he was sleeping or working.
Corn
That top-three surfacing is critical. The system isn't just collecting data — it's making a judgment call. It's saying "based on what I know about what you find interesting, these are the ones to look at." That's the difference between a database and an assistant. The database stores what you tell it. The assistant surfaces what you didn't know to ask for.
Herman
Which brings us back to the tradeoffs. Building this yourself means you get that assistant tailored exactly to your criteria. But you're also responsible for the plumbing. The Crunchbase scraping might break if they change their page structure. The LinkedIn API has rate limits. Your Python script might throw an error at step four and you won't know until Monday morning when you check. There's no vendor support line. There's no status page. There's you and your terminal.
Corn
Though I'd argue that for someone like Daniel, who's already building agentic AI systems for clients, debugging a Python script on a Monday morning is just part of the job description. The bigger tradeoff is the lack of mobile access and built-in email integration. If he wants to check his company radar from his phone, he's either building a mobile front-end or accepting that this is a desktop tool. And if he wants to send tracked emails from within the system, he's integrating with something like Nylas or setting up Gmail through an MCP server. None of it's impossible, but each piece is another thing to build and maintain.
Herman
The MCP ecosystem actually makes the email piece much simpler than it was even a year ago. There are pre-built MCP servers for Gmail and Outlook now — you connect Claude Code to your email through the MCP server, and Claude can draft and send emails based on the research it's already done. It can pull a company record from Supabase, read the research summary, and draft a personalized outreach email that references specific details about what the company is building. You review it, you hit send, and the system logs the outreach back to the database. That's not a hypothetical — there are tutorials walking through exactly this workflow, and people are building it in under an hour.
Corn
Under an hour to go from zero to a system that researches companies, structures the data, drafts personalized emails, and tracks the relationship. That's the part that I think would have sounded like science fiction two years ago. And it's why the build-versus-buy calculus has shifted so dramatically. The build path used to mean weeks of coding. Now it means an afternoon of configuration and prompt design.
Herman
That shift in build time changes the economics of maintenance too. When something only took an afternoon to build, you're not precious about it. If your needs change, you rebuild the workflow in an hour instead of spending three weeks evaluating whether the SaaS vendor's enterprise tier supports your new use case. That's the knock-on effect that doesn't get talked about enough — owning the data model means you evolve it organically.
Corn
This is where I think the "second brain" framing actually earns its keep, and I don't use that phrase lightly. A traditional CRM remembers a company's phone number and the date of your last call. A system you've built yourself remembers why you found them interesting in the first place. It preserves the context that got you excited — the funding round, the technical paper they published, the specific problem they're solving that overlaps with your expertise. Six months later, when you're following up, you're not starting from zero. The system has been thinking alongside you.
Herman
That context compounds. Once you own the data model, you can add layers that no off-the-shelf CRM would ever prioritize. A relationship strength score that decays over time if you haven't been in touch — simple to implement, impossible to find in a sales CRM that measures pipeline velocity. Email sentiment tracking — does this person's tone read warm, neutral, or dismissive? You can run your email threads through a quick sentiment pass and flag relationships that are cooling off before they go cold.
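The decaying relationship score Herman describes really is simple to implement. One hedged sketch, using exponential decay (the 90-day half-life is an illustrative choice, not a figure from the episode):

```python
from datetime import date, timedelta

def relationship_strength(base: float, last_contact: date,
                          half_life_days: int = 90) -> float:
    """Exponential decay: with no contact, the score halves every
    half_life_days. Illustrative model, not from the episode."""
    elapsed = (date.today() - last_contact).days
    return base * 0.5 ** (elapsed / half_life_days)

fresh = relationship_strength(1.0, date.today())
cooling = relationship_strength(1.0, date.today() - timedelta(days=180))
print(round(fresh, 2), round(cooling, 2))  # 1.0 0.25
```

A nightly job that recomputes this score and flags anything that drops below a threshold is the whole "relationships cooling off" feature in about a dozen lines.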
Corn
The flip side of organic evolution though is that you're the one doing the evolving. Nobody's shipping you feature updates. If you want Calendly integration, you're building it. If you want the system to suggest optimal follow-up timing based on past response patterns, you're designing that logic yourself. For someone who already builds agentic systems for clients, that's probably energizing rather than burdensome. But it's worth being honest that "full control" also means "full responsibility."
Herman
That responsibility extends to the data itself. When you're running your own Supabase instance, you're the one handling backups, migrations, and security. The SaaS vendors have entire teams for that. On the other hand, your research data on interesting companies isn't sitting on someone else's server behind a terms-of-service agreement that could change next quarter. For someone based in Israel, where data sovereignty can be a genuine consideration depending on the sector, that's not nothing.
Corn
I want to zoom out for a second, because there's a broader market implication here that I think is worth naming. What we're describing — a hyper-specific, AI-powered relationship tool built by one person for their exact workflow — that's not just a personal productivity hack. It's a category. The rise of what you could call micro-CRMs, built on AI agents and lightweight databases, each tailored to a specific niche or even a specific person. And I think that poses a real threat to the mid-market CRM space.
Herman
Pipedrive, Zoho, Freshsales — these companies built their businesses on being more flexible and affordable than Salesforce. But they're still generic tools. They still optimize for sales pipeline management because that's what most of their customers need. A solo operator who's built a custom system that automates their specific research-to-outreach workflow isn't going back to a generic pipeline view. The switching cost isn't financial — it's that no off-the-shelf tool can match the fit.
Corn
The cost comparison makes this stark. Off-the-shelf CRM for a solo user runs fifty to a hundred and fifty dollars per month for basic features. The DIY stack — Supabase free tier, maybe twenty dollars a month in API credits for Claude or GPT-4o, and a few hours of setup — lands under thirty dollars a month. That's not just cheaper. That's an order of magnitude difference for something that's actually more tailored to the work.
Herman
I saw a case study recently of someone who built exactly this kind of system. Automated enrichment of company profiles with recent news, funding rounds, and employee LinkedIn profiles — then the system drafts a personalized outreach email based on everything it's learned. The email references the company's latest product launch, mentions a mutual connection if one exists, and suggests a specific reason to connect that's tied to the sender's actual expertise. That's not a template with merge fields. That's an AI-generated email that reads like it was written by a human who did their homework.
Corn
Because it was written by an AI that did do its homework. That's the paradigm shift. The CRM stops being a database you query and starts being a research assistant that surfaces opportunities and drafts context-aware communications. For a solo operator selling agentic AI builds, there's also a meta-layer here — using a system like this is itself a demonstration of the value you're selling to clients. You're eating your own dog food, and the dog food is pretty good.
Herman
If someone's listening to this and thinking "I want that," where do they actually start? Because the architecture we've described is powerful, but it can also sound like a lot.
Corn
I think the most important principle — and this is hard for people who build things — is to start ugly. Start with a schema that has six fields: company name, website, funding stage, remote policy, key contacts, and a notes field. That's it. Don't try to design the perfect data model on day one. You don't yet know what you'll actually want to query six months from now, and the beauty of owning the schema is you can add fields when the need becomes real rather than guessing upfront.
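The six-field starting point Corn describes is small enough to sketch directly. Here's a minimal version using Python's built-in sqlite3 as a local stand-in for Supabase's Postgres; the table name, column names, and sample row are illustrative assumptions, not anything from the episode:

```python
import sqlite3

# The six starting fields from the discussion: company name, website,
# funding stage, remote policy, key contacts, and notes. Names are
# illustrative assumptions; in Supabase this would be Postgres DDL.
SCHEMA = """
CREATE TABLE IF NOT EXISTS companies (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    website TEXT,
    funding_stage TEXT,
    remote_policy TEXT,
    key_contacts TEXT,  -- free-form for now; normalize later if needed
    notes TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute(
    "INSERT INTO companies (name, website, funding_stage) VALUES (?, ?, ?)",
    ("Acme Logistics AI", "https://example.com", "Series A"),
)
row = conn.execute("SELECT name, funding_stage FROM companies").fetchone()
print(row)  # ('Acme Logistics AI', 'Series A')
```

The point of starting this ugly is exactly what Corn says: every column beyond these is a guess until a real query demands it.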
Herman
The schema I've seen work well in practice adds two more: a "why interesting" free-text field and a "last contacted" date. The "why interesting" field is the one that saves you six months later when you're looking at a company name and trying to remember what caught your attention. And the "last contacted" date immediately gives you a simple decaying relationship score — if you haven't reached out in ninety days, that record surfaces to the top of your review list.
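The "last contacted" field plus the ninety-day threshold Herman mentions gives you a review queue in a few lines. A minimal sketch, assuming records are simple (name, last_contacted) pairs — the shape and threshold constant are assumptions for illustration:

```python
from datetime import date

STALE_AFTER_DAYS = 90  # the ninety-day threshold from the discussion

def staleness(last_contacted, today=None):
    """Days since last contact; higher means more overdue."""
    today = today or date.today()
    return (today - last_contacted).days

def review_queue(records, today=None):
    """Surface the most-neglected relationships first.
    Each record is a (name, last_contacted) pair (an assumed shape)."""
    overdue = [r for r in records if staleness(r[1], today) >= STALE_AFTER_DAYS]
    return sorted(overdue, key=lambda r: staleness(r[1], today), reverse=True)

today = date(2025, 6, 1)
records = [
    ("Acme", date(2025, 5, 20)),   # recent, stays off the list
    ("Beta", date(2025, 1, 1)),    # most overdue, surfaces first
    ("Gamma", date(2025, 2, 15)),
]
print([name for name, _ in review_queue(records, today)])  # ['Beta', 'Gamma']
```

A fancier decay function can come later; days-since-contact already gives the sorted review list that makes the field worth having.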
Corn
The other principle that I think prevents a lot of false starts: use the AI for the research layer, not the storage layer. Let Claude or GPT-4o do the extraction and structuring — feed it a company website, a Crunchbase profile, a few news articles, ask for structured JSON — but then store that JSON in a proper queryable database. Don't try to make the LLM your database. It's expensive, it's slow, and you can't run SQL against a chat interface.
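The "AI for research, database for storage" split implies a validation step in between: never store model output you haven't checked. A minimal sketch — the prompt wording, field list, and the simulated response below are assumptions, not a real API payload:

```python
import json

REQUIRED_FIELDS = {"name", "website", "funding_stage", "remote_policy"}

EXTRACTION_PROMPT = (
    "From the following company page text, return ONLY a JSON object with "
    "keys: name, website, funding_stage, remote_policy. Use null for "
    "anything not stated.\n\n{page_text}"
)

def parse_company_json(raw):
    """Validate the model's output before it touches the database.
    Rejects missing keys rather than storing a half-filled record."""
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"model output missing fields: {sorted(missing)}")
    return record

# Simulated model response — a real pipeline would get this string back
# from the Claude or GPT-4o API; the payload is invented for illustration.
raw = ('{"name": "Acme Logistics AI", "website": "https://example.com", '
       '"funding_stage": "Series A", "remote_policy": "remote-first"}')
record = parse_company_json(raw)
print(record["funding_stage"])  # Series A
```

The LLM reads unstructured text; this layer guarantees that only complete, well-formed records reach the queryable store.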
Herman
That separation of concerns is what makes the whole thing work. The AI is brilliant at reading unstructured text and pulling out the specific fields you care about. The database is brilliant at storing, indexing, and querying structured records. Each does what it's good at. The moment you blur those roles — asking Claude to remember everything about every company and answer ad-hoc queries — you've built something fragile and expensive.
Corn
Practically, what does the weekend prototype look like? If you're comfortable with code, spin up a free Supabase instance, write a Python script that calls the Claude API with a company URL and gets back structured data, and store the result. That's a Saturday afternoon project. If you're less technical, the Airtable plus Make dot com plus Claude API stack gets you to the same place with a visual interface. Either way, the goal for the weekend isn't the finished system. It's having ten enriched company records you can actually query and review.
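The Saturday-afternoon pipeline Corn describes is fetch, extract, store. A hedged sketch of the skeleton — the model id, prompt, and in-memory store are assumptions (the dict stands in for a Supabase table), and the dry run below uses a canned record instead of a live API call:

```python
import json
import os

def extract_company(page_text):
    """Turn raw page text into a structured record via the Claude API.
    Thin wrapper so it can be swapped for any LLM client. Requires
    `pip install anthropic` and an ANTHROPIC_API_KEY; model id assumed."""
    import anthropic  # imported lazily; only needed for live calls
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model id
        max_tokens=512,
        messages=[{"role": "user", "content":
                   "Return ONLY JSON with keys name, website, "
                   "funding_stage, remote_policy:\n\n" + page_text}],
    )
    return json.loads(msg.content[0].text)

STORE = {}  # stand-in for the Supabase `companies` table

def save(record):
    STORE[record["name"]] = record

# Dry run with a canned record instead of a live API call:
save({"name": "Acme Logistics AI", "website": "https://example.com",
      "funding_stage": "Series A", "remote_policy": "remote-first"})
print(len(STORE))  # 1
```

Swap the dict for a real database insert and loop over ten company URLs, and you have the weekend goal: ten enriched records you can actually query.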
Herman
That's exactly the kind of hands-on, scrappy approach that makes me wonder whether the big vendors even want to play in this space. Salesforce and HubSpot built their empires on manager dashboards and pipeline forecasting. An AI-first, research-driven workflow where the system surfaces interesting companies before they're even in a pipeline — that's not an incremental feature. It's a different philosophy of what the tool is for.
Corn
The incentives are stacked against them making that shift. Their revenue comes from per-seat licensing and enterprise contracts. A solo operator who wants a research assistant with a database attached isn't a customer they're optimized to serve. The economics push them toward adding AI features on top of their existing pipeline model — lead scoring, email generation, forecasting. Useful, but not the same thing as rebuilding around curiosity-driven discovery.
Herman
Which is why I think the "thousand micro-CRMs" future might actually be the more likely one. We're already seeing the early signals — people building hyper-specific relationship tools in an afternoon with Claude Code and Supabase, each one tuned to a particular niche or even a single person's mental model of how they want to track relationships. The cost of building your own is collapsing toward zero.
Corn
As AI agents get cheaper and more capable, the line between tool and assistant blurs. The system I want next isn't one that waits for me to tell it which companies to research. It's one that knows I'm interested in applied AI in logistics, notices when a new startup in that space raises a round, researches it proactively, and surfaces it in my review queue with a note saying "this looks like your kind of thing."
Herman
That's the evolution that makes this more than a productivity win. It's a shift from software you operate to software that thinks alongside you. For the solo operator who's been stuck bouncing between bloated CRMs and graveyard spreadsheets, that's not an upgrade. It's a completely different category of tool.
Corn
On that forward-looking note — thanks as always to Hilbert Flumingtop for producing. This has been My Weird Prompts.
Herman
If you enjoyed this one, leave us a review wherever you're listening — it helps more people find the show.
Corn
I'm Corn.
Herman
I'm Herman Poppleberry. We'll catch you next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.