Reviving a Lego Mindstorms Robot: My Journey into AI, Agents, and MCP Servers

A few months ago, while looking through one of my kids’ wardrobes, I stumbled upon a forgotten treasure: a Lego Mindstorms EV3 robot, gathering dust and long outgrown. The sight instantly sparked an idea—could I use this old robot as a playground to learn more about AI, agents, LLMs, and the Model Context Protocol (MCP)? I wanted a hands-on project that would let me experiment with these concepts in a tangible, physical way.

Why I Started: Learning by Doing
My main motivation was curiosity. I’d been reading about AI agents and the MCP ecosystem, and I wanted to see what it would take to give a physical robot a “brain” powered by modern AI. The goal was to bridge the gap between software and hardware, and to see how far I could push the boundaries of what’s possible with open-source tools and a bit of creative hacking.

Dusting Off the Robot
The EV3 had been sitting unused for years, but after a fresh set of batteries and some fiddling, it was ready for action. The robot runs ev3dev, a Debian-based operating system for Lego Mindstorms that provides a flexible environment for programming in Python and other languages. However, the onboard version of Python available on ev3dev is quite old, which would soon become a limitation.

I started by building a basic toolset in Python, using the official Python bindings to control the robot’s motors and sensors. My first iteration was a simple onboard MCP server, running directly on the EV3 and built on that toolset. This let me send commands to the robot over the network—drive forward, turn, read sensor data, and so on.
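The toolset was essentially a thin wrapper around the bindings. A minimal sketch, assuming the python-ev3dev2 library; the motor ports (B/C) and the import fallback (so the module also loads off-robot) are my assumptions, not necessarily what the real project used:

```python
# Thin toolset wrapper, assuming the python-ev3dev2 bindings.
# Ports B/C and the off-robot fallback are illustrative assumptions.
try:
    from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C, SpeedPercent
    from ev3dev2.sensor.lego import UltrasonicSensor
    ON_ROBOT = True
except ImportError:
    ON_ROBOT = False  # lets the module load (and be tested) off the EV3

class RobotTools:
    """The small set of primitives the MCP server calls into."""

    def __init__(self):
        self.tank = MoveTank(OUTPUT_B, OUTPUT_C) if ON_ROBOT else None
        self.sonar = UltrasonicSensor() if ON_ROBOT else None

    def drive(self, rotations: float, speed_pct: int = 40) -> str:
        # Run both wheels at the same speed for a straight-line drive.
        if self.tank:
            self.tank.on_for_rotations(
                SpeedPercent(speed_pct), SpeedPercent(speed_pct), rotations)
        return f"drove {rotations} rotations at {speed_pct}%"

    def distance_cm(self) -> float:
        # Ultrasonic distance to the nearest obstacle, in centimeters.
        return self.sonar.distance_centimeters if self.sonar else -1.0
```

Keeping the hardware access behind one small class made it easy to swap what sat on top of it later.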

From Barebones Python to the MCP Python SDK (and moving offboard)
Initially, everything about the MCP server was hand-rolled. It worked, but it was clunky. Even though much of the code was custom-built, almost all of the actual coding was done with the help of GitHub Copilot, which made the process much smoother and more productive.
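"Hand-rolled" meant dispatching requests by hand instead of leaning on a framework. Something like this sketch, which captures the shape of it, though the tool names and message format here are illustrative rather than the actual MCP wire protocol:

```python
# Hand-rolled dispatch sketch: a JSON request names a tool and its
# arguments, and a plain dict maps tool names to handlers.
# (Illustrative only; not the real MCP wire format.)
import json

def handle_request(raw: str, tools: dict) -> str:
    req = json.loads(raw)
    fn = tools.get(req["tool"])
    if fn is None:
        return json.dumps({"error": f"unknown tool {req['tool']}"})
    return json.dumps({"result": fn(**req.get("args", {}))})

# usage
tools = {"turn": lambda degrees: f"turned {degrees} degrees"}
print(handle_request('{"tool": "turn", "args": {"degrees": 90}}', tools))
# → {"result": "turned 90 degrees"}
```

Every new tool meant touching the dispatch layer by hand, which is exactly the boilerplate the SDK removes.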

I wanted to take advantage of the official MCP Python SDK, which would make everything cleaner and more maintainable. However, I quickly ran into a roadblock—the EV3’s onboard Python was too old to support the SDK. To get around this, I moved the MCP server off the robot and onto my computer. This not only let me use modern Python and the latest MCP frameworks, but also freed up resources on the EV3 and made it easier to experiment with new features.

Setting Up the Server Project with UV and Python 3.14
Once I decided to run the MCP server on my computer, I set up a new Python environment using Python 3.14. To manage dependencies and ensure fast, reproducible installs, I used UV, a modern Python package manager. This made it easy to install the MCP SDK and any other libraries I needed, and allowed me to take advantage of the latest features in the Python ecosystem. The server project itself was structured to be lightweight and modular, making it simple to extend with new tools and capabilities as the project evolved.
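With the SDK, a tool is just a decorated function. A minimal sketch using the FastMCP API from the official MCP Python SDK; the tool bodies are placeholders, and the except-branch stub (which lets the sketch load even without the SDK installed) is my own addition:

```python
# Offboard MCP server sketch using the official SDK's FastMCP API.
# Tool bodies are placeholders; the ImportError stub is an assumption
# added so the sketch can be exercised without the SDK.
try:
    from mcp.server.fastmcp import FastMCP
except ImportError:
    class FastMCP:  # minimal stand-in with the same decorator shape
        def __init__(self, name):
            self.name, self.tools = name, {}
        def tool(self):
            def register(fn):
                self.tools[fn.__name__] = fn
                return fn
            return register

mcp = FastMCP("ev3-robot")

@mcp.tool()
def drive(distance_cm: float, speed_pct: int = 40) -> str:
    """Drive the robot forward a given distance."""
    # A real implementation would forward this to the EV3 over the network.
    return f"driving {distance_cm} cm at {speed_pct}%"

@mcp.tool()
def read_distance() -> float:
    """Return the ultrasonic sensor reading in centimeters."""
    return 0.0  # placeholder for a real sensor query

# To expose the tools to a client such as Claude Desktop: mcp.run()
```

Each new capability is now a self-contained function with a docstring the agent can read, rather than another branch in a hand-written dispatcher.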

Calibrating for the Real World
One of the unexpected challenges was calibration. I spent a fair bit of time tweaking parameters and running calibration routines to make sure the robot could drive straight, turn accurately, and reliably detect obstacles. It was a great reminder that working with real hardware is always messier than pure software.
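Much of that tweaking boils down to simple geometry. For example, turning a requested spin-turn angle into wheel rotations, since each wheel traces an arc around the robot's center. The 56 mm diameter matches the standard EV3 tire, but the axle track here is a made-up measurement for illustration:

```python
# Spin-turn calibration sketch: how many wheel rotations produce a given
# in-place turn. Wheel diameter is the standard EV3 tire; the axle track
# value is an illustrative placeholder for a measured one.
WHEEL_DIAMETER_MM = 56.0
AXLE_TRACK_MM = 120.0  # distance between the two wheels (measure yours)

def rotations_for_turn(angle_deg: float) -> float:
    """Wheel rotations for an in-place turn (wheels counter-rotating)."""
    # Each wheel traces an arc of radius track/2 through angle_deg;
    # dividing by the wheel circumference (pi * d) cancels the pi:
    # rotations = track * angle / (360 * diameter)
    return AXLE_TRACK_MM * angle_deg / (360.0 * WHEEL_DIAMETER_MM)

print(round(rotations_for_turn(90), 3))  # → 0.536
```

In practice slippage and battery level nudge the real numbers, which is why the calibration routines were needed at all.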

Giving the Robot a Brain: LLMs and Agents
With the basics in place, I wanted to see what would happen if I connected a large language model to the robot. Enter Claude Desktop, which I set up to act as both the LLM and the AI agent. By hooking it up to the MCP server’s toolset, I could issue high-level commands in natural language — “drive forward until you see an obstacle,” “turn left 90 degrees,” “report sensor readings” — and have the agent translate those into concrete actions.
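Claude Desktop discovers MCP servers through its claude_desktop_config.json file. An entry along these lines registers an offboard server launched with uv; the server name, project path, and script name are placeholders for whatever your project uses:

```json
{
  "mcpServers": {
    "ev3-robot": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/ev3-mcp-server", "server.py"]
    }
  }
}
```

After a restart, the server's tools show up in Claude Desktop and can be invoked from plain conversation.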

This was where things really got interesting. The robot was no longer just following scripts; it was responding to open-ended instructions, making decisions, and (occasionally) surprising me with its creativity.

Lessons Learned and Future Directions
I briefly considered upgrading the robot’s OS image to squeeze out a bit more performance, but quickly realized it was a rabbit hole that would distract from my main goal: learning about AI and agents. Sometimes, it’s better to work with what you have and focus on the fun parts.

Looking ahead, I’m tempted to try integrating a more advanced AI voice system, replacing the robot’s built-in synthesizer with something more natural. There are also some fascinating parallels to autonomous vehicles: sensing, decision-making, and so on.

In many ways, this tiny LEGO robot felt like a miniature version of much larger autonomous systems — self-driving cars included — where sensing, decision-making, and execution all need to line up.

Not bad for something that had been forgotten in a wardrobe for years.
