If I am not mistaken, the tradeoff is losing add-ons but being able to install other services.
So… what is your experience? Are add-ons useful/common for your use case?
I'm running the Docker version as I'm also using the RPi for other things, like imageview and Pi-hole. I don't really miss add-ons; the only annoying thing is that most documentation assumes you're running HA OS.
But if you don’t plan to use it for anything else than HA, I’d go for HA OS.
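For what it's worth, the Container install itself is only a couple of lines. A minimal sketch based on the official image (the timezone and config path are just examples):

```
# Home Assistant Container: host networking, config persisted on the host
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  --network=host \
  -e TZ=Europe/Amsterdam \
  -v /opt/homeassistant/config:/config \
  ghcr.io/home-assistant/home-assistant:stable
```

Updating is then "pull the new image and recreate the container", which is exactly the part HA OS and the add-on store automate for you.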
I run mine in a VM.
I was sceptical about running an OS that I can't run my normal updates and automations on, but HA OS has been rock solid and easy. Plus you get a few more features.
I second that; I just put it in a VM on my Proxmox host. Zero issues so far.
You can go Supervised! You still have most of the operating system available for your own needs and you can still use add-ons. I've been using it for years and it works like a charm.
HA OS is the way to go.
You don’t want to have to think about it. HA OS just works. You set it up and let it run.
There's no sense in trying to shoehorn other things into it. You don't want to do too much on the Pi anyway, because it'll slightly lower the responsiveness of Home Assistant. If you want a server that does things, buy a separate NAS and run it alongside HA OS.
This is what I do with a Pi running HAOS and a Synology ds920+ running backups and everything else. It’s been rock solid, gives me a decent backup solution, my home automation is stable and responsive and no-fuss, and plenty of options for tinkering. Highly recommend.
I use a ton of add-ons; they're really practical because they also embed themselves easily into the rest of Home Assistant. I would go for HA OS. But I do wish there was an add-on to install random Docker images.
I recommend HA OS. What happened to me is that I used docker, got everything set up how I liked it, then had to move over to HA OS when I needed a specific add on and didn’t have any other solution.
If you don't already have a plan for other services, it might not make sense to use Docker either.
I’ve run both, and the OS version is much more stable and easier to keep running. Whether you use an rpi or a VM, use the dedicated OS and save yourself the heartache of trying to get your hardware working with docker.
You can also run HA OS in a VM, then you still get add-ons, from what I understand.
Home Assistant in Docker is definitely not for the faint of heart! The networking requirements are actually quite intense, and really don't map well to virtual networks like the ones Docker uses.
… among other issues
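To give a sense of what "doesn't map well" means in practice: the two usual pain points are multicast discovery and USB radio passthrough, both of which you have to wire up yourself. A rough compose sketch (the device path is just an example):

```
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host              # mDNS/SSDP discovery doesn't cross the default bridge network
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0   # Zigbee/Z-Wave stick, if you have one
    volumes:
      - ./config:/config
    restart: unless-stopped
```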
HAOS on a Pi; I've tried the Docker thing time and time again, and the next chance I get I'm blowing it all away and starting on real hardware again.
I have HA running in Docker on a Pi 3 and Z-Wave JS running in another container on the same Pi. Added a PurpleAir integration for outdoor air quality, the National Weather Service, some local sensors, and SQL to get data from another node. People have made me paranoid about SD card failures, so I regularly image it to my main server. I mostly use HA to visualize environmental data, but it also runs the lights in a hydroponic farm, and the house during vacations, via Z-Wave outlets. Have not tried to integrate it with Google or Amazon.
The only inconveniences I've found with Docker are that you can't restart HA from its web interface and that, if you update regularly, old images quickly fill a smaller card, so you have to remember to purge.
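Something along these lines covers both chores (the block device and hostname are placeholders; check yours with lsblk):

```
# Drop images no longer used by any container
docker image prune -a

# Image the whole SD card to another machine over SSH; imaging a live system
# can give an inconsistent copy, so stopping HA first is safer
ssh pi@raspberrypi "sudo dd if=/dev/mmcblk0 bs=4M status=progress | gzip -c" > ha-sd-backup.img.gz
```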
It's now possible to restart HA from the web interface.
I pulled the latest HA version based on your comment in this old thread, and you're right! There is a restart button now. Thanks.
Running HAOS on a bare Pi, imho, you get the most performance, and support if it goes wrong.
Running on more powerful hardware (x64 host), VM all the way. It’s so much easier when you can snapshot, move VMs around, and split out components when needed.
It's not super hard to set up manually with Docker or Podman, but you have to deal with integrating and updating the add-ons yourself. I ran out of CPU on a Pi 4 (due to a buggy websocket client in the end) and moved to a small form factor x86_64 server under Rocky Linux. I ran it manually using just containers (Podman in this case) and it worked fine, but integrating and updating the equivalent of add-ons was a lot of manual plumbing work that I don't find much fun anymore.
I switched back to HAOS, but under KVM (rough sketch of that setup below). This for me is the best of both worlds: I get the fully managed/integrated work of Frenck and friends for HA, and I can still access and manage the machine normally (and use it for other services).
There's nothing remotely realtime about the Python code in core HA; it works well in a reasonably provisioned VM (4 cores, 8 GB RAM) backed by a good SSD. There is some religion in the community about not using VMs: it is a layer of complexity and I understand why folks on Discord don't want to help people with it, but technically it works well for this class of app.
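If anyone wants to try the same, here is a sketch of the libvirt side, assuming virt-install and the qcow2 KVM image from the HAOS release downloads (VM name, bridge, and paths are placeholders):

```
# Unpack the HAOS KVM image and import it as a UEFI guest
xz -d haos_ova-*.qcow2.xz

virt-install \
  --name haos \
  --memory 8192 --vcpus 4 \
  --disk /var/lib/libvirt/images/haos_ova.qcow2,format=qcow2,bus=virtio \
  --import \
  --boot uefi \
  --os-variant generic \
  --network bridge=br0,model=virtio \
  --graphics none --noautoconsole
```

HAOS expects UEFI boot, so --boot uefi needs the OVMF firmware package installed on the host.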
I'd always run HAOS. When you need Docker containers that are not available as add-ons, I would look for a machine that can run Proxmox, so you can run a Docker VM and a HAOS VM in parallel.
I'm running Supervised in Docker. I don't remember exactly how I managed to set it up, but it wasn't hard. I use add-ons, and also have other things running on my Pi.
There's nothing you can do with add-ons that you can't do with Docker, but there are many things you can do with Docker that you can't do with add-ons.
Add-ons are marginally easier to set up, but if you have technical skills, Docker is also not a lot of work. You can use something like Portainer to get a similarly easy interface. So I think it comes down to whether you have the technical skills for Docker.
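Portainer itself is just one more container; the standard install is roughly (CE edition, HTTPS UI on port 9443):

```
# Portainer CE: web UI for managing the containers on this host
docker volume create portainer_data
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Then the UI is at https://your-host:9443 and you can pull new images and recreate containers from there.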
I run HA Supervised and I do both, but the system does complain that I do that.