

Why not open a PR to make it configurable? The maintainer is super active and friendly.
Yeah those are good points. Also noticed the CDN thing, it’s a bit annoying for a privacy-first project… But should be an easy fix 😄
Stirling’s backend is Java. So, yeah, heavy and slow sounds about right.
Ah, thanks for mentioning. Yep, they have a docker image; as mentioned, a nixpkg will be available soon™; and frankly, you can just build / download the release artifacts and put them on any static host.
Please read the title of the post again. I do not want to use an LLM. Self-hosted is bad enough, but feeding my data to OpenAI is worse.
Yep, that’s the idea! This post basically boils down to “does this exist for HASS already, or do I need to implement it?” and the answer, unfortunately, seems to be the latter.
Thanks, had not heard of this before! From skimming the link, it seems that the integration with HASS mostly focuses on providing Wyoming endpoints (STT, TTS, wakeword), right? (Un)fortunately, that’s the part that’s already working really well 😄
However, the idea of just writing a stand-alone application with Ollama-compatible endpoints, but not actually putting an LLM behind it, is genius; I had not thought of that. That could really simplify stuff if I decide to write a custom intent handler. So, yeah, thanks for the link!!
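Roughly what I’m picturing (a rough Python sketch, assuming Ollama’s default port and its /api/tags + /api/chat endpoints; whether the HASS integration needs more response fields or streaming would still need checking):

```python
# Rough sketch of an "LLM-shaped" intent handler: speaks a minimal subset of
# Ollama's HTTP API (/api/tags, /api/chat) but runs custom logic instead of a model.
# Streaming (NDJSON) responses are not implemented, and the response fields below
# are only the ones I'd expect a client to need; the real integration may want more.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_intent(text: str) -> str:
    """Placeholder for the actual intent matching (fuzzy templates, rules, ...)."""
    return f"(pretend I matched an intent for {text!r})"


class FakeOllama(BaseHTTPRequestHandler):
    def _send_json(self, payload: dict) -> None:
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/api/tags":
            # advertise a single fake "model" so clients can discover it
            self._send_json({"models": [{"name": "intent-handler:latest"}]})
        else:
            self.send_error(404)

    def do_POST(self):
        if self.path != "/api/chat":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        user_text = request["messages"][-1]["content"]
        self._send_json({
            "model": request.get("model", "intent-handler"),
            "message": {"role": "assistant", "content": handle_intent(user_text)},
            "done": True,
        })


if __name__ == "__main__":
    # 11434 is Ollama's default port, so clients can point at this unchanged
    HTTPServer(("0.0.0.0", 11434), FakeOllama).serve_forever()
```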
Thanks for your input! The problem with the LLM approach, for me, is mostly that I have so many entities that HASS exposing them all (or even just the subset I really, really want) is already enough to slow everything to a crawl and to get bad results from all models I’ve tried. I’ll give the model you mentioned another shot, though.
However, I really don’t want to use an LLM for this. It seems brittle and like overkill at the same time. As you said, intent classification is a wee bit older than LLMs.
Unfortunately, the sentence template matching approach alone isn’t sufficient, because quite frequently the STT output is imperfect. With Home Assistant, for example, the intent “turn off all lights” is currently not understood if STT produces “turn off all light”. And sure, you can extend the template for that. But what about the next slightly mangled variant?
A human would go “huh? oh, sure, I’ll turn off all lights”. An LLM might as well. But a fuzzy matching / closest Levenshtein distance approach should be more than sufficient for this, too.
Basically, I generally like the sentence template approach used by HASS, but it just needs that little bit of additional robustness against imperfections.
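Something like this little sketch is what I have in mind (standard library only; the templates and intent names are made up, and real HASS sentence templates also handle slots/entities, which this ignores):

```python
# Standard-library sketch: pick the closest template sentence instead of requiring
# an exact match. difflib's ratio() is not exactly Levenshtein distance, but it is
# close enough to show the idea. Templates and intent names here are made up.
from difflib import SequenceMatcher

TEMPLATES = {
    "turn off all lights": "lights_off_all",
    "turn on all lights": "lights_on_all",
    "what time is it": "get_time",
}

def best_intent(spoken: str, cutoff: float = 0.6):
    scored = [
        (SequenceMatcher(None, spoken.lower(), template).ratio(), intent)
        for template, intent in TEMPLATES.items()
    ]
    score, intent = max(scored)
    return intent if score >= cutoff else None

print(best_intent("turn off all light"))        # imperfect STT still matches
print(best_intent("ugh, turn off all lights"))  # extra filler still matches
```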
Thanks for sharing your experience! I have actually mostly been testing with a good desk mic, and expect recognition to get worse with room mics… The hardware I bought is a set of Seeed ReSpeaker mic arrays; I am somewhat hopeful about them.
Adding a lot of alternative sentences does indeed help, at least to a certain degree. However, my issue is less with “it should recognize various different commands for the same action”, and more “if I mumble, misspeak, or add a swear word on my third attempt, it should still just pick the most likely intent”, and that’s what’s currently missing from the ecosystem, as far as I can tell.
Though I must concede, copying your strategy might be a viable stop-gap solution to get rid of Alexa. I’ll have to play around with it a bit more.
That all said, if you find a better intent matcher or another solution, please do report back as I am very interested in an easier solution that does not require me to think of all possible sentences ahead of time.
Roger.
Never heard of Willow before - is it this one? Seems there is still recent activity in the repo - did the creator only recently pass away? Or did someone continue the project?
How’s your experience been with it?
And sure, will do!
That is actually a really interesting approach to moderation, huh.
Disagree. CSS allows you to do whatever you want with it, usually with just a handful of lines. The “it’s so difficult to center things!” meme is, well, a meme.
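For example, centering something both horizontally and vertically is a couple of declarations (just one of several ways to do it):

```css
/* center any child of .parent on both axes */
.parent {
  display: grid;
  place-items: center;
}
```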
Lol, exact same situation here.
Quick question, did the migration to continuwuity break calls for you as well?
Grew up on it. My dad set up an Ubuntu 4.10 PC for my brother and me when we were 3 and 5 (no internet, obv), and it stuck.
Used Windows for a brief time in high school to be able to play online with friends.
Went right back to Linux when going to university. Will never change back, both for ideological reasons and because Linux is just better.
Next step: NixOS on a phone
A substantial number of open source devs will probably just give up working on their projects if they can no longer be installed by most users.
That will also affect Graphene users.
Graphene will also only work until Google one day says “You know what… No!” and stops allowing it on their (new) hardware. I don’t think that’s far in the future.
TBH, it sounds like you have nothing to worry about then! Open ports aren’t really an issue in and of themselves; they are problematic because the software listening on them might be vulnerable, and (standard) ports can reveal the nature of the application, making it easier to target specific software with an exploit.
Since a bot has no way of finding out what services you are running, it could only attack Caddy - which I’d put down as a negligible danger.
My ISP blocks incoming data to common ports unless you get a business account.
Oof, sorry, that sucks. I think you could still go the route I described, though: for your domain example.com and example service myservice, listen on port :12345 and drop everything that isn’t requesting myservice.example.com:12345. Then forward the matching requests to your service’s actual port, e.g. 23456, which is closed to the internet.
Edit: and just to clarify, for a second service otherservice, you do not need to open a second port; stick with the one, but in addition to myservice.example.com:12345, also accept requests for otherservice.example.com:12345, and proxy those to the (again, closed-to-the-internet) port :34567.
The advantage here is that bots cannot guess from your ports what software you are running, and since Caddy (or any of the mature reverse proxies) can be expected to be reasonably secure, I would not worry about bots being able to exploit the reverse proxy’s port. Bots also no longer have a direct line of communication to your services. In short, the routine of “let’s scan ports; ah, port x is open indicating use of service y; try automated exploit z” gets prevented.
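For reference, a Caddyfile sketch of that setup (the hostnames and ports are just the example values from above; TLS is left out, and with 80/443 blocked you’d probably need the ACME DNS challenge for certificates, so treat this as a starting point, not a drop-in config):

```
# One externally-open port, routed by hostname; everything else is dropped.
:12345 {
	@myservice host myservice.example.com
	reverse_proxy @myservice localhost:23456

	@otherservice host otherservice.example.com
	reverse_proxy @otherservice localhost:34567

	# requests not addressed to a known hostname get the connection closed
	@unknown not host myservice.example.com otherservice.example.com
	abort @unknown
}
```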
I am scratching my head here: why open up ports at all? Is it just to avoid having to pay for a domain? The usual way to go about this is to only proxy 443 traffic to the intended host/vm/port based on the (sub)domain, and just drop everything else, including requests on 443 that do not match your subdomains.
Granted, there are some services actually requiring open ports, but the majority don’t (and you mention a webserver, where we’re definitely back to: why open anything beyond 443?).
Client side, under advanced:
That’s a setting
While I don’t like it, it’s not hidden either:
https://bentopdf.com/privacy.html
There should definitely be an option to disable this for self-hosting, but if it’s just a counter for how often each tool is used by all users combined… Eh…
(Stirling also has something similar)