• FaceDeer@kbin.social · 6 months ago

    It’s the “peer-reviewed” part that should be raising eyebrows, not the AI-generated part. How the gibberish images were generated is secondary to the fact that the peer reviewers just waved the obvious nonsense through without even the most cursory inspection.

    • Nawor3565@lemmy.blahaj.zone · 6 months ago

      Another article said that one of the reviewers did bring up the nonsense images, but he was completely ignored. Which is an equally big problem.

      • bedrooms@kbin.social · 6 months ago

        It’s how this publisher works. They make it insanely difficult for reviewers to reject a submission.

    • oyfrog@lemmy.world · 6 months ago

      I’ve heard some of my more senior colleagues call Frontiers a scam over its editorial practices even before this.

      It’s actually frustratingly common for some reviewer comments to be completely ignored, so it’s possible someone raised a flag and no one did anything about it.

      • Jesusaurus@lemmy.world · 6 months ago

        Frontiers has something like a 90%+ publish rate, which for any “peer-reviewed” journal is ridiculously high. They have also been in previous scandals where a large portion of their editorial staff were sacked (no pun intended).

    • MotoAsh@lemmy.world · 6 months ago

      Some of the reviewers have explained that the software they use doesn’t even load the images. So unless a picture is a cited figure, it might not get reviewed directly.

      I can kind of understand how something like this could happen. It’s like doing code reviews at work. Even if a logical bug is obvious once the code is running, it might still be very difficult to spot when simply reviewing the changed code.

      We have definitely found funny business that made it past two reviewers and the original developer, even things far more visible than logical bugs, and nobody’s even using machine models to shortcut it!

      Still, that only offers an explanation. It doesn’t make it acceptable.

        • MotoAsh@lemmy.world · 6 months ago

          Yea, “should be”, but as said, if it’s not literally directly relevant even while being in the paper, it might get skipped. Lazy? Sure. Still understandable.

          A more apt coding analogy might be code reviewing unit tests. Why dig into the unit tests if they’re passing and it seems to work already? Lazy? Yes. Though it happens far more than most non-anonymous devs would care to admit!

  • Froyn@kbin.social · 6 months ago

    I enjoy reading between the lines. “Had the rat penis not gone viral, the paper would not have been retracted”

  • EmptyRadar@kbin.social · 6 months ago

    We’re in that interim period where people don’t understand the technology at all but still think it’s capable of anything, so even people who absolutely should know better are going to be misusing it.