I really hope they die soon; this is unbearable…

  • ohshit604@sh.itjust.works · 2 days ago

    For a while my GoAccess instance wasn’t working properly, so I couldn’t visualize my access logs from Traefik. I got lazy trying to fix it and left it as is. In the meantime, though, I wasn’t too lazy to set up Synapse and begin federating on my home network.

    Finally fixed my GoAccess today, only to be surprised to see the Synapse hits labelled as crawlers: well over a million of them.
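
    (For anyone wanting to do the same, pointing GoAccess at Traefik’s access log looks roughly like this. Traefik writes Common Log Format by default; the log path is an assumption about your setup:)

    ```sh
    # Render an HTML report from Traefik's access log (CLF by default).
    goaccess /var/log/traefik/access.log --log-format=COMMON -o report.html
    ```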

  • sudoer777@lemmy.ml · 2 days ago

    I’m okay with a few crawlers, but not what’s effectively a DDoS attack by AI companies that abuse my resources, generate terabytes of traffic, and crash my server while costing me money. I use Anubis now, which sucks from an accessibility standpoint, but at least I’m not dealing with their malicious traffic anymore.

    • antrosapien@lemmy.ml · 2 days ago

      I’ve wasted about a week, spread over a few months, trying to set up Anubis in front of Pangolin with Traefik, without any success. Starting from scratch every time.

    • hoppolito@mander.xyz · 3 days ago

      I ended up adding go-away in front of my code forge and anything serving dynamic content, and it turned out to be way less of a hassle than I feared: just two redirects and a couple of custom rules.

      If you already have Traefik routing to your services, it shouldn’t be too tough to add the extra layer of indirection (even more so if everything is containerized). Something like the sketch below.
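
      A minimal sketch of the idea using Traefik’s file provider. The hostname, service names, and ports here are assumptions, and go-away itself still needs to be configured to forward verified traffic to the real upstream:

      ```yaml
      http:
        routers:
          forge:
            rule: "Host(`git.example.com`)"
            service: goaway            # all traffic hits the challenge proxy first
        services:
          goaway:
            loadBalancer:
              servers:
                - url: "http://go-away:8080"  # go-away then proxies verified
                                              # requests on to the code forge
      ```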

  • eli@lemmy.world · 3 days ago

    I ended up just pushing everything behind my tailnet and only leaving my game server ports open (which are non-standard ports).

  • punrca@piefed.world · 3 days ago

    It’s best to use either Cloudflare (best IMO) or Anubis.

    1. If you don’t want any AI bots at all, you can set up Anubis (open source; requires the end user to have JavaScript enabled): https://github.com/TecharoHQ/anubis

    2. Cloudflare automatically sets up a managed robots.txt file to block AI crawlers (though you can configure it to allow “AI search” for better SEO); a hand-rolled equivalent is sketched below. Eg: https://blog.cloudflare.com/control-content-use-for-ai-training/#putting-up-a-guardrail-with-cloudflares-managed-robots-txt

    Cloudflare also has an “AI Labyrinth” option that serves a maze of fake data to AI bots that don’t respect the robots.txt file.
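
    If you’d rather not use Cloudflare, a hand-rolled robots.txt covering some of the documented AI crawlers looks like this (only a sketch; the user-agent list is far from complete, and ill-behaved bots ignore it anyway):

    ```
    User-agent: GPTBot           # OpenAI's crawler
    Disallow: /

    User-agent: CCBot            # Common Crawl, widely used for training sets
    Disallow: /

    User-agent: Google-Extended  # gates AI-training use of Google's crawl
    Disallow: /

    User-agent: *                # still allow ordinary search indexing
    Allow: /
    ```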

    • AHemlocksLie@lemmy.zip · 3 days ago

      Pretty sure I’ve repeatedly heard about the crawlers completely ignoring robots.txt, so does Cloudflare really do that much?

      • tomjuggler@lemmy.world · 2 days ago

        Yes, Cloudflare blocks agents completely if they ignore its restrictions. The key is scale: Cloudflare has a bird’s-eye view of traffic patterns across millions of sites and can do statistical analysis to determine who is a bot.

        I hate the necessity, but it works.

      • Sv443@sh.itjust.works · 3 days ago

        Like a lock on a door, it stops the vast majority but can’t do shit about the actual professional bad guys.

  • x00z@lemmy.world · 3 days ago

    50% of my traffic is scrapers now. I really want to block them but I also want my content to be indexed and used for LLMs. At the moment there isn’t really an in-between way of doing that. :(

    (This is me knowing full well that they fuck up power grids and memory chip supplies; I’m just hoping that gets better soon.)

      • x00z@lemmy.world · 3 days ago

        I work on a project that has a lot of older, less technical, and international users who could use some extra help. We’re also not always found by the people who would benefit from our project. https://keeperfx.net/

      • lost_screwdriver@thelemmy.club · 3 days ago

        That they don’t become lie machines. Propaganda, lies, and fake news from all sorts of sources get spammed across the internet. If AI picks them up, it just spreads the misinformation further, especially if all the trustworthy or useful sources block the crawlers.

        • poVoq@slrpnk.net · 3 days ago

          This will just make them sound more believable when they hallucinate. LLMs conceptually cannot be made not to lie, even if all the info they’re trained on is 100% accurate.

  • A_A@lemmy.world · 3 days ago

    Friday at 4:30 p.m. … funnily enough, nobody is trying to answer your question 😋

    • Ephera@lemmy.ml · 3 days ago

      They cause a huge amount of load, degrading the service for everyone else. I’m also guessing the time ranges in the graph where there’s no data are when OP’s server crashed from the load and had to restart.

      That kind of shit can easily trigger alerting and looks just like a DDoS attack. I’d be pissed, too, if I dropped everything to figure out why my server was going down and it turned out to be not even proper criminals, just some Silicon Valley cunts.

      • zr0@lemmy.dbzer0.com · 3 days ago

        Thanks for taking the time to explain. I have multiple public-facing services and have never had any load issues just because of some crawlers. That’s why I always wonder why people get so mad at them.

        • hoppolito@mander.xyz · 2 days ago

          I’ve been providing hosting for a few FOSS services, relatively small scale, for around 7 years now, and thought the same for most of that time. People were complaining about their servers being hit, but my traffic was alright and the server seemed beefy enough to have plenty of headroom.

          Then, like a month or two ago, ~~the fire nation attacked~~ the bots came crawling. I had sudden traffic spikes of up to 1000x, memory was hogged, and the CPU could barely keep up. The worst was the git forge: bots continuously hammering away at public repos, requesting diffs between random commits, repeatedly building out history graphs for different branches, and so on, all fairly expensive operations.

          After the server was brought to its knees multiple times over a couple of days, I had to block public access. Only with a proof-of-work wall in front could I finally open it up again without destroying service uptime. And even weeks later they were still trying to reach project diffs whose links they had collected earlier; it was honestly crazy.

          • zr0@lemmy.dbzer0.com · 2 days ago

            That’s very interesting, as if only certain types of content get crawled. May I ask what software you used, and whether you had a reverse proxy in front of it?

            • hoppolito@mander.xyz · 2 days ago

              The code forge is Gitea/Forgejo, and the proxy in front used to be Traefik. I also tried fail2ban in front for a while, but the issue was that everything appeared to come from different IPs.

              The bots were hitting my other public services pretty hard too, but nowhere near as bad. I think it’s a combination of two things:

              • most things I host publicly besides git are smaller or static pages, so they’re served quickly and don’t drain resources as much
              • they try to hit every ‘exit node’ (i.e. link) off a page, and on a repo with a couple hundred or more commits, all the individual commits and pairwise diffs add up fast (see the quick arithmetic below)
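
              To put a rough number on it (illustrative only, assuming the forge exposes a compare page for any pair of commits):

              ```
              commits:         300
              commit pages:    300
              pairwise diffs:  300 * 299 / 2 = 44,850 distinct compare URLs
              ```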

              A small but interesting observation: they also seemed to ‘focus’ on specific projects. So my guess is you get unlucky once by having a large-ish repo targeted for crawling, and then they just get stuck in there, lost in the maze of possible pages. On the other hand, that may make targeted blocking of certain routes more feasible…

              I think there’s a lot to be gained here by everybody pooling their knowledge, but on the other hand it’s an annoying topic, and most self-hosting (including mine) is done as a hobby afaik, so most peeps will slap an Anubis-like PoW in front and call it a day.

              • zr0@lemmy.dbzer0.com · 2 days ago

                Those are some very good and helpful insights, thank you very much for sharing. I was also hosting Forgejo with Traefik as the reverse proxy. However, my Forgejo was locked down, which is probably why I had no bot attack.

                Some thoughts:

                • fail2ban works very well for malicious requests, i.e. things that get logged somewhere.
                • CrowdSec has an AI Bot Blocklist, which they offer for free if you host a FOSS project.
                • I am developing a tool that blocks CIDR ranges by country directly via ufw (rough sketch of the idea below). Maybe blocking countries helps in such a case, but not everyone wants to block whole countries.
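
                The core of it is just a loop over a CIDR list (a minimal sketch; country.txt is a placeholder for a GeoIP-derived dump, one CIDR per line):

                ```sh
                # Deny every range in the list; ufw handles CIDR notation natively.
                while read -r cidr; do
                  sudo ufw deny from "$cidr" to any
                done < country.txt
                ```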

    • GreenKnight23@lemmy.world · 3 days ago

      it’s pretty rare for dumbasses to point themselves out these days.

      you’re doing god’s work, son. keep it up!

  • e8CArkcAuLE@piefed.social · 4 days ago

    that’s the kind of shit we pollute our air and water for… and it properly seals and drives home the fuckedness of our future and planet.

    i totally get you sending them to nepenthes though.

  • Thorry@feddit.org · 4 days ago

    Yeah, I had the same thing. All of a sudden the load on my server was super high and I thought there was a huge issue. So I looked at the logs and saw an AI crawler absolutely slamming my server. I blocked it so it only got 403 responses, but it kept on slamming. So I blocked the IPs it was coming from in iptables; that helped a lot. My little server was getting about 10,000 times its normal traffic.
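
    (For reference, that iptables blocking looks roughly like this; the range below is a documentation placeholder, not a real scraper network:)

    ```sh
    # Drop the crawler's source range outright instead of serving 403s.
    sudo iptables -A INPUT -s 203.0.113.0/24 -j DROP
    # Persist across reboots, e.g. on Debian/Ubuntu:
    sudo apt install iptables-persistent && sudo netfilter-persistent save
    ```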

    I sorta get that they want to index stuff, but why absolutely slam my server to death? Fucking assholes.

    • Ephera@lemmy.ml · 3 days ago

      My best guess is that they don’t just index things, but rather download straight from the internet when they need fresh training data. They can’t really cache the whole internet after all…

      • Spice Hoarder@lemmy.zip · 3 days ago

        The sad thing is that they could cache the whole internet if there were a checksum protocol.

        Now that I’m thinking about it, I actually hate the idea that there are several companies out there with graph databases of the entire internet.
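
        (HTTP actually has something close to this in ETags and conditional requests; a polite crawler could revalidate instead of re-downloading, provided the server sets an ETag. The value below is made up for illustration:)

        ```sh
        curl -I https://example.com/page             # response includes: ETag: "abc123"
        curl -H 'If-None-Match: "abc123"' https://example.com/page
        # -> 304 Not Modified, no body re-sent
        ```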

      • Techlos@lemmy.dbzer0.com · 3 days ago

        Bingo. Modern datasets are a list of URLs with metadata rather than the files themselves, so every new team or individual wanting to work with the dataset becomes another DDoS participant.
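
        (Illustrative of the shape only; the fields are made up. Each row is just a pointer, so every training run re-fetches the bytes from the origin server:)

        ```json
        {"url": "https://example.com/post/123", "text": "snippet…", "fetched": "2025-01-01"}
        ```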