• 0 Posts
  • 160 Comments
Joined 1 year ago
Cake day: September 9th, 2023


  • And even in a prototype bus from the 1950s, the Gyrobus, which used an electrically charged flywheel that was also (to some degree) regeneratively recharged when braking:

    Rather than carrying an internal combustion engine or batteries, or connecting to overhead powerlines, a gyrobus carries a large flywheel that is spun at up to 3,000 RPM by a “squirrel cage” motor.[1] Power for charging the flywheel was sourced by means of three booms mounted on the vehicle’s roof, which contacted charging points located as required or where appropriate (at passenger stops en route, or at terminals, for instance). To obtain tractive power, capacitors would excite the flywheel’s charging motor so that it became a generator, in this way transforming the energy stored in the flywheel back into electricity. Vehicle braking was electric, and some of the energy was recycled back into the flywheel, thereby extending its range.

    Source: Wikipedia: Gyrobus
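
    For a rough sense of how much energy such a flywheel can hold, the standard formulas are E = ½Iω² with I = ½mr² for a solid disc. The mass and radius below are purely illustrative assumptions (they are not given in the quoted passage); only the 3,000 RPM figure comes from the quote.

```python
import math

# Hypothetical illustrative numbers -- not from the quoted passage:
mass_kg = 1500.0   # assumed flywheel mass
radius_m = 0.8     # assumed flywheel radius (modeled as a solid disc)
rpm = 3000.0       # spin speed mentioned in the quote

# Moment of inertia of a solid disc: I = 1/2 * m * r^2
inertia = 0.5 * mass_kg * radius_m ** 2

# Angular velocity in rad/s
omega = rpm * 2.0 * math.pi / 60.0

# Stored kinetic energy: E = 1/2 * I * omega^2
energy_j = 0.5 * inertia * omega ** 2
energy_kwh = energy_j / 3.6e6

print(f"{energy_j / 1e6:.1f} MJ = {energy_kwh:.1f} kWh")
```

    With these assumed numbers that works out to roughly 24 MJ (about 6.5 kWh), which gives an idea of why the range between charging stops was limited.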



  • Altering the prompt will certainly give a different output, though. Ok, maybe “think about this problem for a moment” is a weird prompt; I see how it actually doesn’t make much sense.

    However, including something along the lines of “think through the problem step-by-step” in the prompt really makes a difference, in my experience. The LLM will then, to a higher degree, include sections of “reasoning”, thereby arriving at an output that’s more correct or of higher quality.
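
    As a minimal sketch of what I mean, here is one way to prepend such an instruction to a prompt. The message shape follows the common OpenAI-style chat format; the exact wording of the instruction and the function name are just my own illustration, not any official API:

```python
def build_step_by_step_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with a 'reason step-by-step' system instruction."""
    system_instruction = (
        "Think through the problem step-by-step before giving your final answer."
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list would be passed as the `messages` argument
# of a chat-completion call.
messages = build_step_by_step_messages("How many prime numbers are below 50?")
```
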

    This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works: it “thinks” about the prompt behind the scenes, presenting to the user only the resulting output and a generated summary, hidden by default, of the secret raw “thinking”.

    Of course, it’s unnecessary - maybe even stupid - to include nonsense or small talk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a great difference.












  • That’s movements and genres, though.

    Stop describing your paintings using sentences like “the Mona Lisa meets Monet’s water lilies”.

    or

    Stop describing your music using sentences like “similar to Bohemian Rhapsody but with drum rhythms inspired by Drop It Like It’s Hot”.

    would be somewhat more like it.

    These sentences, funnily enough, sound close to something I would write in experimental prompts for a txt2img or txt2music AI model.




  • Have a look at LibreELEC (“just enough OS for Kodi”) for the Pi - at least if you plan on using it primarily for running Kodi as a “casting receiver”. (LibreELEC even supports Docker containers as Kodi add-ons, if you need the Pi to run more than just Kodi.)

    • Kodi will natively play some types of links shared to it through the “Kore” remote app’s “Play on Kodi” in the share menu.
    • Even more types of links are supported with the right add-ons: YouTube links through the Invidious add-on (or the YouTube add-on with your own token), local broadcasters’ VoD content, some paid streaming services, and many more - but it’s a bit hit and miss…
    • With DLNA enabled in Kodi, many more types of stream URLs can be extracted from websites and sent to Kodi with apps like “Web Video Caster”. I think this one has the option of routing the stream through the phone (which is only necessary for some types of streams).
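
    Under the hood, remote apps like Kore talk to Kodi over its JSON-RPC API; a sketch of the kind of `Player.Open` request they send is below. The host, port, and stream URL are hypothetical, and this assumes Kodi’s “remote control via HTTP” setting is enabled:

```python
import json

def kodi_play_payload(stream_url: str) -> str:
    """Build a Kodi JSON-RPC 'Player.Open' request to play a stream URL."""
    request = {
        "jsonrpc": "2.0",
        "method": "Player.Open",
        "params": {"item": {"file": stream_url}},
        "id": 1,
    }
    return json.dumps(request)

# This payload would be POSTed to http://<kodi-host>:8080/jsonrpc
# (hypothetical host/port; default web server port is configurable in Kodi).
payload = kodi_play_payload("https://example.com/stream.m3u8")
```
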

    Depending on your media consumption habits and requirements, it might not be the perfect solution, but possibly a prettttty good one for a Raspberry Pi.