Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a utf8 notepad before?
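A minimal sketch of why such a watermark would be fragile, assuming (hypothetically) it relies on zero-width Unicode code points like U+200B — any plain-text round trip that drops these characters destroys it:

```python
# Hypothetical zero-width code points a hidden-character watermark might use.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def strip_hidden(text: str) -> str:
    """Remove zero-width characters, as a paste into a plain-text editor might."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

watermarked = "Hel\u200blo wo\u200crld\ufeff"
print(strip_hidden(watermarked))  # -> Hello world
```

The visible text survives untouched; only the invisible payload is gone.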

    • Thorny_Insight@lemm.ee · 1 month ago
      A few years ago the output of GPT was complete gibberish, and a few years before that, even producing such gibberish would’ve been impressive.

      It doesn’t take anyone’s job until it does.

      • bionicjoey@lemmy.ca · 1 month ago
        A few years ago the output of GPT was complete gibberish

        That’s not really true. Older GPTs were already really good. Did you ever see SubredditSimulator? I’m pretty sure that first came around like 10 years ago.

        • Thorny_Insight@lemm.ee · 1 month ago
          The first time I saw text written by GPT, it all seemed alright at first glance, but once you actually started reading, it was immediately obvious it had no idea what it was talking about. It was grammatically correct nonsense.

    • Angry_Autist (he/him)@lemmy.world · 1 month ago
      LLMs aren’t going to take coding jobs; there are special-purpose AIs being trained for that. They write code that works but doesn’t make sense to human eyes. It’s fucking terrifying, but EVERYONE just keeps focusing on the LLMs.

      There are at least 2 more dangerous model types being used right now to influence elections and manipulate online spaces, and ALL anyone cares about is their fucking parrot bots…