• papertowels@lemmy.one · 1 year ago

      Just chipping in with a technical answer: a model can know what thing A is, also be shown a thing B, and compose the two. Otherwise models would never be able to depict anything that doesn’t exist yet.

      In this particular case, there’s stock imagery of children online and there are naked adults online, so a model can combine the two.

      This case seems to be AI fear-mongering; the dude had actual CP…

        • papertowels@lemmy.one · 1 year ago

          The backbone of your claim is that models don’t know the difference between a child’s naked body and an adult’s, yes?

          What happens if you ask ChatGPT “what are the anatomical differences between human child and adult bodies?”

          I’m sure it’ll give you an accurate response.

          https://www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense/

          To test DALL·E’s ability to work with novel concepts, the researchers gave it captions that described objects they thought it would not have seen before, such as “an avocado armchair” and “an illustration of a baby daikon radish in a tutu walking a dog.” In both these cases, the AI generated images that combined these concepts in plausible ways.

            • Mr_Dr_Oink@lemmy.world · 1 year ago

              Didn’t they then post a link showing that DALL·E can combine two different things into something it’s never seen before?

              Did you read the whole comment? Even if the text-model part is irrelevant, the DALL·E part is not.

                • lolcatnip@reddthat.com · 1 year ago

                  I’m sorry if I’m not buying your defense of CSAM.

                  Thanks for making it clear you’re either arguing in bad faith, or that you’re incapable of talking about actual issues the moment anyone mentions CSAM.

                • Mr_Dr_Oink@lemmy.world · 1 year ago

                  I’m sorry? My defense of CSAM?

                  What defense of CSAM?

                  Do you require mental assistance? You appear to be having some kind of aneurysm…

            • papertowels@lemmy.one · 1 year ago

              The original comment said it’s impossible for a model to be able to produce CP if it was never exposed to it.

              They were uninformed, so as someone who works with machine learning, I informed them. If your argument relies on ignorance, it’s a bad argument.

              Re: the text model, someone already addressed this. If you’re going to make arguments and assumptions about things I share without reading them, there’s no need for me to bother with my time. You can lead a horse to water, but you can’t make it drink.

              Have a good one!

            • Player2@sopuli.xyz · 1 year ago

              Just as all the words you used to compose that sentence already existed and yet you made the sentence yourself, language models can take tokens that they know generally go together and build original sentences from them. Your argument amounts to saying that because a dictionary exists, authors are lying when they claim to have written something.
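
              To make that concrete, here’s a toy sketch (nothing like a real LLM, purely a hypothetical illustration): even a tiny bigram model trained on two sentences can walk word transitions it has seen and emit a sequence that appears in neither training sentence.

```python
import random

# Toy illustration only: learn which word follows which ("bigrams")
# from a two-sentence corpus, then generate by walking those transitions.
corpus = ["the cat sat on the mat", "the dog sat on the rug"]

bigrams = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, []).append(b)

# Generate up to six words starting from "the"; stop if we reach a word
# that never had a successor in the training data.
random.seed(0)
word, out = "the", ["the"]
while word in bigrams and len(out) < 6:
    word = random.choice(bigrams[word])
    out.append(word)

print(" ".join(out))
```

              Every adjacent word pair in the output was seen in “training,” yet the sentence as a whole (e.g. “the cat sat on the rug”) need never have appeared in the corpus — the same compositional point in miniature.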

              • Seasoned_Greetings@lemm.ee · 1 year ago

                Hey, just so you know, this guy is a crazy troll. He’s clocked 130 comments on his 9-hour-old profile, and almost all of them are picking fights and deflecting. Save yourself the trouble. His go-to line is “I don’t remember that.”