Cryptography nerd

Fediverse accounts:
[email protected] (main)
[email protected]
[email protected]

Lemmy moderation account: @[email protected]

@[email protected]

Bluesky: natanael.bsky.social

  • 0 Posts
  • 118 Comments
Joined 6 months ago
Cake day: January 18th, 2025

  • This case didn’t cover the copyright status of outputs. The ruling so far is just about the process of training itself.

    IMHO, generative ML companies should be required to build a process that tracks the influence of distinct training samples on each output and informs users of its potential licensing status (one possible shape is sketched after this comment).

    Division of liability / licensing responsibility should depend on who contributes what to the prompt / generation. The less it takes for the user to trigger the model to generate an output clearly derived from a protected work, the more liability lies on the model operator. If the user couldn’t have known, they shouldn’t be liable. If the user deliberately used jailbreaks, etc., the user is clearly liable.

    You get a weird edge case when users unknowingly copy prompts containing jailbreaks, though.

    https://infosec.pub/comment/16682120
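
    As a rough illustration (entirely hypothetical; no existing model API exposes this), a generation service could attach an attribution record to each output, listing the training samples that most influenced it and their license status:

    ```typescript
    // Hypothetical attribution record a generative-ML service could attach to each output.
    // Nothing here corresponds to a real API; it only sketches per-sample influence
    // tracking plus a licensing verdict surfaced to the user.

    type LicenseStatus = "public-domain" | "permissive" | "licensed" | "protected" | "unknown";

    interface SampleInfluence {
      sampleId: string;        // identifier of the training sample
      influence: number;       // estimated contribution to this output, 0..1
      license: LicenseStatus;  // licensing status of that sample
    }

    interface OutputAttribution {
      outputId: string;
      topInfluences: SampleInfluence[];  // highest-influence samples, sorted descending
      requiresLicense: boolean;          // true if a dominant influence is protected or licensed
    }

    // Toy policy: flag the output if any single protected sample dominates it.
    function assessOutput(outputId: string, influences: SampleInfluence[], threshold = 0.2): OutputAttribution {
      const top = [...influences].sort((a, b) => b.influence - a.influence).slice(0, 10);
      const requiresLicense = top.some(
        (s) => s.influence >= threshold && (s.license === "protected" || s.license === "licensed"),
      );
      return { outputId, topInfluences: top, requiresLicense };
    }

    // Example: one protected sample contributes 35% of the output, so the user gets warned.
    const report = assessOutput("gen-42", [
      { sampleId: "img-001", influence: 0.35, license: "protected" },
      { sampleId: "img-002", influence: 0.05, license: "permissive" },
    ]);
    console.log(report.requiresLicense); // true
    ```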

  • Access controls are the big difference. Apps with sensitive data can choose to hide stuff from a system-wide search API, even on a per-item level. And even if something was previously accessible, it can be (drumroll) recalled. Exposure only happens when a search is made.

    Microsoft Recall is all or nothing. Once something has been displayed, Recall has it and you can’t selectively erase it; exposure is immediate. Your only options are to purge the whole database or leave it all in there, and apps can’t retroactively flag stuff. (See the sketch after this comment for the per-item contrast.)

    … But leaving AI summaries on by default was a very stupid move by Apple.
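
A minimal sketch of that per-item model, assuming a hypothetical system-wide search index (the names are invented, not Apple’s or Microsoft’s real APIs): the app decides what gets indexed, can exclude sensitive items up front, and can later recall something it already exposed, while exposure only happens at query time.

```typescript
// Hypothetical system-wide search index with app-controlled, per-item access.
// None of these names are real OS APIs; this only illustrates the contrast with
// an all-or-nothing screenshot database like Recall.

interface SearchableItem {
  id: string;
  appId: string;
  text: string;
  sensitive: boolean; // the app marks sensitive content so it is never indexed
}

class SystemSearchIndex {
  private items = new Map<string, SearchableItem>();

  // The app opts items in; sensitive items are never stored in the index.
  index(item: SearchableItem): void {
    if (!item.sensitive) this.items.set(item.id, item);
  }

  // The app can retroactively pull back ("recall") a single item it previously exposed.
  recall(id: string): void {
    this.items.delete(id);
  }

  // Exposure only happens here, at query time, and only over what is still indexed.
  search(query: string): SearchableItem[] {
    return [...this.items.values()].filter((i) => i.text.includes(query));
  }
}

// Usage: a messaging app indexes some messages, keeps one out, and recalls another later.
const index = new SystemSearchIndex();
index.index({ id: "m1", appId: "chat", text: "meeting at 10", sensitive: false });
index.index({ id: "m2", appId: "chat", text: "one-time password 4821", sensitive: true }); // never exposed
index.index({ id: "m3", appId: "chat", text: "project codename", sensitive: false });
index.recall("m3"); // selectively erased, unlike purging a whole Recall database
console.log(index.search("meeting").length); // 1
```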