Hardware & GPUs
AMD ROCm, NVIDIA CUDA, AI accelerators, Coral TPU
9 episodes
GPU Scaling: The "Go Wide or Go Tall" Dilemma
Should you use a fleet of cheap GPUs or one powerhouse? Learn the math behind serverless GPU costs, cold starts, and batching efficiency.
Building the Ultimate Local AI Inference Server
Learn how to build a high-performance local AI server for agentic coding, from dual-GPU PC builds to the power of Mac's unified memory.
Why GPUs Are the Kings of the AI Revolution
From video game dragons to digital brains: Herman and Corn explain why your graphics card is the secret engine behind the AI boom.
The Future of Local AI: Stable Diffusion vs. The New Guard
Is Stable Diffusion becoming a relic? Corn and Herman debate the rise of Flux, the privacy of local AI, and the future of open-source generation.
Running Video AI at Home: The Real Technical Challenge
Video AI: Hype vs. Reality. Can your GPU handle it? We dive into the technical challenges of running video AI at home.
AI Supercomputers: On Your Desk, Not Just The Cloud
AI supercomputers are landing on your desk! Discover why local AI is indispensable for enterprises facing API costs, latency, and privacy concerns.
Red Team vs. Green: Local AI Hardware Wars
NVIDIA's CUDA rules AI, leaving AMD users battling a "green wall." Explore the hardware wars and thorny paths forward.
GPU Brains: CUDA, ROCm, & The AI Software Stack
Unraveling how GPUs power AI: we dive into CUDA, ROCm, and the software stack that makes it all think.
Local STT For AMD GPU Owners
AMD GPU? No problem! Dive into local AI adventures like on-device speech-to-text.