- Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
- Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
- AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
I’m confused by this revelation. What did everybody think the box was?
Magic
In all reality, it is a ChatGPTitty "fine"tune on some datasets they cobbled together for VQA and Android app UI driving. They did the initial test finetune, then apparently the CEO or whatever was drooling over it and said “lEt’S mAkE aN iOt DeViCe GuYs!!1!” after their paltry attempt to racketeer an NFT metaverse game.
Neither this nor Humane do any AI computation on device. It would be a stretch to say there’s even a possibility that the speech recognition could be client-side, as they are always-connected devices that are even more useless without Internet than they already are with.
Make no mistake: these money-hungry fucks are only selling you food cans labelled as magic beans. You have been warned and if you expect anything less from them then you only have your own dumbass to blame for trusting Silicon Valley.
If the Humane could recognise speech on-device, and didn’t require its own data plan, I’d be reasonably interested, since I don’t really like using my phone for structuring my day.
I’d like a wearable that I can brain dump to, quickly check things on without needing to unlock my phone, and use to keep on top of my schedule. Sadly for me, it looks like I’ll need to go the DIY route with an esp32 board and an e-ink display, and drop any kind of stt + tts plans.
Latte Panda 2 or just wait a couple years. It’ll happen eventually because it’s so obvious it’s literally unpatentable.
I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that’s worse than a $150 phone in almost every way.
The Rabbit OS is running server side.
I don’t know how much work they put into customizing it, but being derived from Android does not mean it isn’t custom. Ubuntu is derived from Debian; that doesn’t mean it isn’t a custom OS. The fact that you can run the apk on other Android devices isn’t a gotcha. You can run Ubuntu .deb files on other Debian distros too. An OS is more of a curated collection of tools, and you should not be going out of your way to make applications for a derivative OS incompatible with other OSes derived from the same base distro.
I would expect bespoke software and OS in a $200 device to be way less impressive than what a multi billion dollar company develops.
Without thinking it through, I would have expected some more custom hardware, with some on-device AI acceleration happening. For someone to go and purchase the device, it should have been more than just an Android app.
The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.
No small startup is going to revolutionize this space unless some kind of new physics is discovered.
I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general-purpose chip by several orders of magnitude, at least for inference. And I don’t even think it would be difficult to convert a NN design into a chip, or that it would need to be made on a bleeding-edge node to get that much more performance. The trade-off would be that it can only run a single NN (or any NN that one could be adjusted to behave identically to; e.g., to remove a node you could just adjust the weights so that it never fires).
So I’d say it’s more accurate to put it as “the easiest/cheapest way to do an AI device is to use a standard SoC”, but the best way would be to design a custom chip for it.
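The "adjust the weights so a node never fires" trick from the comment above can be sketched in a few lines of numpy. This is a toy illustration, not anything from Rabbit's stack: a tiny ReLU MLP with made-up weights, where zeroing a hidden unit's incoming weights and bias (so it always outputs 0) has the same effect as zeroing its outgoing weight.

```python
import numpy as np

# Toy 2-layer MLP: 3 inputs -> 4 hidden (ReLU) -> 1 output.
# All weights are random/illustrative, not from any real model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
b1 = rng.normal(size=4)        # hidden-layer biases
W2 = rng.normal(size=(1, 4))   # output-layer weights

def forward(x, W1, b1, W2):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden activations
    return W2 @ h

x = np.array([0.5, -1.0, 2.0])

# "Remove" hidden node 2 by zeroing its incoming weights and bias:
# ReLU(0) = 0, so the unit never fires for any input.
W1_pruned, b1_pruned = W1.copy(), b1.copy()
W1_pruned[2, :] = 0.0
b1_pruned[2] = 0.0

# Equivalently, zero the node's outgoing weight instead.
W2_pruned = W2.copy()
W2_pruned[0, 2] = 0.0

out_incoming = forward(x, W1_pruned, b1_pruned, W2)
out_outgoing = forward(x, W1, b1, W2_pruned)
# Both edits silence node 2's contribution identically.
assert np.allclose(out_incoming, out_outgoing)
```

Either edit leaves the network's wiring (and hence a hypothetical hardwired layout) untouched while making the node inert, which is why a fixed-function chip could still tolerate this kind of pruning.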
They’re not a chip ~~manufacturer~~ designer though, and modern phone processors are already fast enough to do near-real-time text generation and fast image generation (20 tokens/second on Llama 2, ~1 second for a distilled SD 1.5, on a Snapdragon 8 Gen 3).

Unfortunately, the cheapest phones with that processor seem to be about $650, while the Rabbit R1 costs $200 and uses a MediaTek Helio P35 from late 2018.
Neither AMD nor nVidia are chip manufacturers. They just design them and send them off to TSMC or Samsung to get made.
The hardware seems very custom to me. The problem is that the device everyone carries is a massive superset of their custom hardware making it completely wasteful.
Custom hardware and software I guess?
Qualcomm is listed as having $10 billion in yearly profits (Intel has ~$20B, Nvidia has ~$80B), while the news articles I can find about Rabbit say it’s raised around $20 million in funding ($0.02 billion). It takes a lot of money to make decent custom chips.
Running the Spotify app and dozens of others on a custom software stack?
Same. As soon as I saw the list of apps they support, it was clear to me that they’re running Android. That’s the only way to provide that feature.
Most of the processing is done server side though.
Isn’t Lemmy supposed to be tech savvy? What do people think the vast majority of Linux OSs are? They’re derivatives of a base distribution. Often they’re even derivatives of a derivative.
Did people think a startup was going to build an entire OS from scratch? What would even be the benefit of that? Deriving from Android is the right choice here. The R1 is dumb, but this is not why.
It could have had local AI with some special AI chip not found in all Android phones, but since it all runs in the cloud, privacy is a real problem.