I've been working on a side project: a web app built on React 19, TypeScript, and MUI on the frontend, with Node.js/Express and Firebase/Firestore on the backend, deployed on Cloud Run. The project is at a stage where there's real code, real data, and real users, which makes me much more careful about what I let an AI assistant touch without oversight.

GitHub Copilot, like AI coding assistants in general, is genuinely useful, but out of the box it has no knowledge of your project's conventions, your architecture boundaries, or which quality concerns actually matter. Ask it to "fix the login page" and it might improve the UI while quietly bypassing an auth check. Ask it to "add a new field to the database" and it won't know whether existing records need a migration. What I wanted was something closer to a team of specialists, each one with deep knowledge of their domain, that I could invoke by name, with consistent behaviour across every session....
A few months ago, while looking through one of my kids' wardrobes, I stumbled upon a forgotten treasure: a Lego Mindstorms EV3 robot, gathering dust and long outgrown. The sight instantly sparked an idea: could I use this old robot as a playground to learn more about AI, agents, LLMs, and the Model Context Protocol (MCP)? I wanted a hands-on project that would let me experiment with these concepts in a tangible, physical way.

## Why I Started: Learning by Doing

My main motivation was curiosity. I'd been reading about AI agents and the MCP ecosystem, and I wanted to see what it would take to give a physical robot a "brain" powered by modern AI. The goal was to bridge the gap between software and hardware, and to see how far I could push the boundaries of what's possible with open-source tools and a bit of creative hacking.

## Dusting Off the Robot

The EV3 had been sitting unused for years, but after a quick change of batteries and some fiddling, it was ready for action. The robot runs ev...