Cryptography nerd

Fediverse accounts:
@[email protected] (main)
@[email protected]
@[email protected]

Lemmy moderation account: @[email protected] - [email protected]

@[email protected]

Bluesky: natanael.bsky.social

  • 0 Posts
  • 245 Comments
Joined 1 year ago
Cake day: January 18th, 2025

  • It’s actually kinda easy. Neural networks are just weirder than usual logic gate circuits. You can program them just the same and insert explicit controlled logic and deterministic behavior. To somebody who doesn’t know the details of LLM training, it wouldn’t look much different: it would still be packaged as a bundle of node weights and work with the same interfaces.

    The reason this doesn’t work well if you try to insert strict logic into a traditional LLM, despite the node properties being well known, is how intricately interwoven and mutually dependent all the different parts of the network are (that’s why it’s a LARGE language model). You can’t just arbitrarily edit anything, insert more nodes, or replace logic; you don’t know what you might break. It’s easier to place the inserted logic outside of the LLM network and train the model to interact with it (“tool use”).
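    The “program them just the same” point can be sketched with hand-set weights: a neuron is a weighted sum plus a threshold, so you can write in exact, deterministic logic gates without any training. This is a minimal illustration only; the function names and weight values here are made up for the example.

```python
def neuron(inputs, weights, bias):
    """Step-activation neuron: fires 1 iff the weighted sum plus bias exceeds 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def and_gate(a, b):
    # Fires only when both inputs are 1: 1 + 1 - 1.5 = 0.5 > 0
    return neuron([a, b], [1, 1], -1.5)

def or_gate(a, b):
    # Fires when either input is 1: 1 - 0.5 = 0.5 > 0
    return neuron([a, b], [1, 1], -0.5)

def xor_gate(a, b):
    # XOR isn't linearly separable, so it takes two layers:
    # (a OR b) AND NOT (a AND b)
    return and_gate(or_gate(a, b), 1 - and_gate(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b), xor_gate(a, b))
```

    The same node type, wired by hand instead of trained, behaves as a fixed circuit; the hard part described above is doing this surgery inside a network whose existing weights you didn’t choose.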

  • On a company device the owner (the company) is the end, and you’re just given the task of operating it.

    It varies between jurisdictions, but in general, you had better believe they have every right to investigate any suspicions regarding how company assets (work devices) are used and whether their agents appear unprofessional when using official company communication channels (literally your work phone number, which is what’s used in RCS messages).

    In plenty of places there are still privacy rights for employees, but their main purpose is generally to prevent overbearing surveillance and to protect the contents of your personal communication channels (like if you’re using personal webmail on a work device).