You know what’s annoying?

Telling your smart home AI the same thing. Every. Single. Day.

“Hey, 100 ppm on the NOx sensor is normal for my house.”

Next day.

“The NOx is at 98 ppm, that seems elevated…”

NO. I JUST told you that yesterday. AND THE DAY BEFORE THAT.

This actually drove me crazy. So I built something.

The problem with every Home Assistant AI integration

They’re all stateless. Every conversation starts from absolute zero. Your request gets piped to ChatGPT or Claude, you get an answer, done. No memory. No learning. No context carried over between sessions. It’s like having a very smart assistant with severe amnesia — useful in the moment, useless over time.

Home Mind

Home Mind is an AI assistant for Home Assistant that learns and remembers across conversations.

Here’s what that actually looks like in practice:

Day 1:

You: What's the NOx sensor reading?
AI:  100 ppm. That's typically elevated for indoors...

You: Actually, 100 is normal for my home. Remember that.
AI:  Got it. I'll remember 100 ppm is within normal range for your NOx sensor.

Day 2, fresh conversation:

You: How's the air quality?
AI:  NOx is at 98 ppm, which is within normal parameters for your home.

No reminder needed. It just knows.

The same works for device nicknames, user preferences, corrections, sensor baselines — anything you tell it gets stored and used in future conversations.

How the memory works

The memory layer is Shodh Memory, a cognitive memory backend that does some genuinely interesting stuff:

  • Semantic search — understands meaning, not just keywords. “NOx sensor” and “air quality” are connected concepts, not separate text buckets.
  • Hebbian learning — memories that get activated together strengthen together, like an actual brain.
  • Natural decay — old, unused memories fade out. Your home doesn’t need to remember forever that you had guests over last year.
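
To make those two ideas concrete, here's a tiny illustrative sketch of Hebbian strengthening and time-based decay. This is not the actual Shodh Memory API — the class and function names are invented for illustration — but it captures the mechanics: memories retrieved together reinforce each other, and memories nobody touches fade out.

```python
import math
import time

# Illustrative only -- NOT the real Shodh Memory API.
# Two ideas: co-activation strengthens links (Hebbian),
# and unused memories decay toward zero over time.

class Memory:
    def __init__(self, text):
        self.text = text
        self.strength = 1.0
        self.last_used = time.time()

def co_activate(a, b, rate=0.1):
    """Hebbian rule: memories retrieved together get stronger together."""
    a.strength += rate * b.strength
    b.strength += rate * a.strength
    a.last_used = b.last_used = time.time()

def current_strength(m, half_life_days=30.0):
    """Exponential decay: a memory untouched for one half-life
    is worth half as much at retrieval time."""
    age_days = (time.time() - m.last_used) / 86400
    return m.strength * 0.5 ** (age_days / half_life_days)
```

So "NOx sensor" and "air quality" memories that keep firing together climb in strength, while last year's guest-visit note quietly sinks below the retrieval threshold.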

On top of that sits the Home Mind server, which orchestrates between the LLM, Shodh, and the Home Assistant API. The whole thing runs locally on your network.

Multi-LLM support

This was a big addition in recent versions. You’re not locked into one provider:

  • Anthropic Claude (default) — Claude Haiku is fast and cheap for this kind of use
  • OpenAI — drop-in alternative
  • Ollama — fully local inference, no API key, no cloud, no data leaving your house

That last one is important if you care about privacy. Your home context, your sensor data, your preferences — all processed locally on your own hardware. I run it with llama3.1 and it works well for everyday queries.
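
Switching providers is a config change, not a code change. The variable names below are a hypothetical sketch — check the repo's .env.example for the real ones:

```shell
# Hypothetical .env sketch -- the actual variable names live in .env.example.
LLM_PROVIDER=ollama               # anthropic | openai | ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.1
# ANTHROPIC_API_KEY=...           # only needed for the Anthropic provider
# OPENAI_API_KEY=...              # only needed for the OpenAI provider
```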

The architecture

HA Assist (Voice + Text)
        ↓
HA Custom Component
        ↓
Home Mind Server
        ↓
Shodh Memory + LLM API + HA REST API

Works with voice through the Wyoming protocol, or text through the Assist interface. Whatever STT setup you already have — local Whisper, cloud — it doesn’t matter.
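
The per-request flow through that stack can be sketched in a few lines. This is a simplified illustration, not Home Mind's actual internals — the function is invented, and the real server does more (streaming, tool calls, conversation history):

```python
# Illustrative request flow -- names are made up, not the real Home Mind API.
# search, get_states, complete, and store stand in for Shodh Memory,
# the HA REST API, the configured LLM, and the memory write path.

def handle_utterance(text, search, get_states, complete, store):
    """One turn: recall -> observe -> think -> remember."""
    # 1. Semantic search over stored memories relevant to the utterance.
    context = search(text)
    # 2. Live entity state from Home Assistant's REST API.
    states = get_states()
    # 3. Ask the configured LLM, grounded in memory + current state.
    reply, new_facts = complete(text, context, states)
    # 4. Persist anything worth remembering for future conversations.
    if new_facts:
        store(new_facts)
    return reply
```

Step 1 is why Day 2 works: the "100 ppm is normal" memory is retrieved and injected before the LLM ever sees your question.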

Custom AI personality

You can give Home Mind a custom persona without touching the core system. Set a name, a tone, a communication style. The built-in smart home capabilities (tool usage, memory, device control) get appended after your custom prompt — so you’re shaping who it is, not replacing how it actually works.

Example:

You are Ada, a concise and slightly sarcastic smart home assistant
who doesn't waste words.

Can be set per-request, via the HA integration config, or as a server-level default.
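
The layering described above — your persona first, the built-in capability prompt appended after it — looks roughly like this. The CAPABILITIES text here is invented for illustration; the real prompt lives in the server:

```python
# Sketch of persona layering -- CAPABILITIES is a placeholder, not the
# actual built-in prompt Home Mind appends.

CAPABILITIES = (
    "You can call Home Assistant tools to read sensors and control devices. "
    "You can store and recall memories across conversations."
)

def build_system_prompt(persona=None):
    """Custom persona (if any) comes first; built-in capabilities follow."""
    base = persona.strip() + "\n\n" if persona else ""
    return base + CAPABILITIES

prompt = build_system_prompt(
    "You are Ada, a concise and slightly sarcastic smart home assistant "
    "who doesn't waste words."
)
```

Because the capabilities are appended rather than replaced, a bad persona prompt can make the assistant rude, but it can't make it forget how to control your lights.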

Current state

Currently at v0.12.0, seven releases in. What works today:

  • Voice control via HA Assist
  • Cognitive memory with Shodh
  • Streaming responses
  • HACS integration
  • Multi-LLM support (Anthropic, OpenAI, Ollama)
  • Local inference via Ollama
  • Custom system prompts
  • Persistent conversation history (SQLite)
  • Automatic memory cleanup

What’s still coming: multi-user support (OIDC) and an HA Add-on package so you don’t need Docker.

Getting started

You need Docker, a Home Assistant instance with a long-lived access token, and an LLM API key (or Ollama running locally).

git clone https://github.com/hoornet/home-mind.git
cd home-mind
cp .env.example .env
# edit .env with your HA URL, token, and LLM key
./scripts/deploy.sh

Then install the HACS component from https://github.com/hoornet/home-mind-hacs, point it at your server, set it as your conversation agent, done.

Full docs on the GitHub wiki.


If it saves you from repeating yourself one more time, buy me a coffee. ☕

Questions or feature ideas? Open a GitHub issue.