  • There are a lot of assumptions laced into that about LLMs reliably getting better over time…

    But so far they have gotten steadily better, so I suppose there’s enough fuel for optimists to extrapolate that out into a positive outlook.

    I’m very pessimistic about these technologies, and I feel like we’re at the top of the sigmoid curve for “improvements,” so I don’t see LLM tools getting substantially better than this at analyzing code.

    If that’s the case, I don’t feel like having hundreds and hundreds of false security reports creates the mental arena that lets researchers actually spot the one genuine report among all the slop.


  • It found it 8/100 times when the researcher gave it only the code paths he already knew contained the exploit. Essentially leading it down the garden path.

    The test with the actual full suite of commands passed in the context only found it 1/100 times, and we didn’t get any info on the number of false positives they had to wade through to find it.

    This is also assuming you can automatically and reliably filter out the false positives.

    He even says the ratio is too high in the blog post:

    That is quite cool as it means that had I used o3 to find and fix the original vulnerability I would have, in theory, done a better job than without it. I say ‘in theory’ because right now the false positive to true positive ratio is probably too high to definitely say I would have gone through each report from o3 with the diligence required to spot its solution.




  • The blog post from the researcher is a more interesting read.

    Important points here about benchmarking:

    o3 finds the kerberos authentication vulnerability in the benchmark in 8 of the 100 runs. In another 66 of the runs o3 concludes there is no bug present in the code (false negatives), and the remaining 28 reports are false positives. For comparison, Claude Sonnet 3.7 finds it 3 out of 100 runs and Claude Sonnet 3.5 does not find it in 100 runs.

    o3 finds the kerberos authentication vulnerability in 1 out of 100 runs with this larger number of input tokens, so a clear drop in performance, but it does still find it. More interestingly however, in the output from the other runs I found a report for a similar, but novel, vulnerability that I did not previously know about. This vulnerability is also due to a free of sess->user, but this time in the session logoff handler.

    I’m not sure a signal-to-noise ratio of 1:100 is uh… Great…
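
    To put those numbers in perspective, here’s the arithmetic as a quick Guile Scheme sketch (the variable names are mine, just restating the counts quoted above):

      ;; Counts from the 100-run o3 benchmark with the hand-narrowed context.
      (define true-positives 8)    ; runs that found the Kerberos bug
      (define false-negatives 66)  ; runs that concluded there was no bug
      (define false-positives 28)  ; runs that reported a nonexistent bug

      ;; Precision: of the runs that reported a bug at all, how many were right?
      (exact->inexact (/ true-positives
                         (+ true-positives false-positives)))
      ;; => 0.2222..., so roughly 3 of every 4 bug reports are false alarms,
      ;; and that's the easier, hand-narrowed setup, not the full-context 1/100 one.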


  • As a Guix user and package maintainer, I’m ecstatic.

    I’m so proud of the community for rallying around everyone’s needs and pain points and making this decision. It removes so much friction for Guix users and will hopefully smooth out the package maintenance process a great deal. Email is simple, but trying to communicate code changes over it can be very complex and time-consuming.

    If you’re curious about functional packaging systems, grab Guix on your distro and give it a try! There’s a tiny example of a package definition below.

    Special shout out to anyone burnt out on Nix lang. Come feel the warm embrace of Scheme’s parentheses. :)
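
    To give a flavor of those parentheses: a Guix package definition is just a Scheme value. Here’s a minimal sketch along the lines of the GNU Hello example from the Guix manual; the sha256 hash is a placeholder, since guix download prints the real one:

      ;; Minimal package definition; save to a file and try: guix build -f file.scm
      (use-modules (guix packages)
                   (guix download)
                   (guix build-system gnu)
                   ((guix licenses) #:prefix license:))

      (package
        (name "hello")
        (version "2.12.1")
        (source (origin
                  (method url-fetch)
                  (uri (string-append "mirror://gnu/hello/hello-"
                                      version ".tar.gz"))
                  (sha256
                   ;; Placeholder; run guix download on the uri for the real hash.
                   (base32 "0000000000000000000000000000000000000000000000000000"))))
        (build-system gnu-build-system)
        (synopsis "Hello, GNU world: an example GNU package")
        (description "GNU Hello prints a friendly greeting.")
        (home-page "https://www.gnu.org/software/hello/")
        (license license:gpl3+))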



  • It’s not quite cut and dried, as there are also recent decisions by the Supreme Court:

    Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith (2023) - “At issue was the Prince Series created by Andy Warhol based on a photograph of the musician Prince by Lynn Goldsmith. It held Warhol’s changes were insufficiently transformative to fall within fair use for commercial purposes, resolving an issue arising from a split between the Second and Ninth circuits among others.”

    Jack Daniel’s Properties, Inc. v. VIP Products LLC (also 2023) - “The case deals with a dog toy shaped similar to a Jack Daniel’s whiskey bottle and label, but with parody elements, which Jack Daniel’s asserts violates their trademark. The Court unambiguously ruled in favor of Jack Daniel’s as the toy company used its parody as its trademark, and leaving the Rogers test on parody intact.”

    The aforementioned Rogers test was quoted in both decisions, but with pretty different interpretations of what “parody” covers.

    One thing seems to be key: intent. As long as an AI isn’t purposefully trained to mimic a style, then it’s probably safe, but things like style LoRAs and style CLIP encodings are likely gonna be decided on whether the Supreme Court decided to have lunch that day.



  • This isn’t quite correct either.

    The reality is that there are a bunch of court cases and laws still up in the air about what AI training counts as, and until those are resolved, the most we can offer is conjecture and vague moral posturing.

    The closest we have are likely the court decisions on music sampling, and so far those haven’t been consistent, mostly hinging on “intent” and “effect on sales of the original.” So based on that logic, whether or not AI training counts as copyright infringement is likely going to come down to whether shit like “ghibli filters” actually provably (at least as far as a judge is concerned) fucks with Ghibli’s sales.