Sometimes it can be hard to tell if we’re chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?

  • rodbiren@midwest.social · 41 points · 1 year ago

    You can always help their software QA by pasting in the entirety of the Declaration of Independence. A couple of things could happen. If they ask why you posted that, you have a human. If they give a generic response, it’s probably an AI. If it crashes, then you know they didn’t think anyone would post that.

    You can also post zero-width spaces. A generic chatbot will respond with something meaningless, and a human might not even respond. You could also post text using typoglycemia (scrambling the middle letters of each word); it will confuse most models but can usually still be read by people.
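
    If you want to generate those probes quickly, here’s a small sketch (plain Python; the probe sentence is just an example):

      import random

      ZWSP = "\u200b"  # zero-width space

      def typoglycemia(text: str) -> str:
          """Shuffle the interior letters of each word; humans can still read it."""
          words = []
          for word in text.split():
              if len(word) > 3:
                  middle = list(word[1:-1])
                  random.shuffle(middle)
                  word = word[0] + "".join(middle) + word[-1]
              words.append(word)
          return " ".join(words)

      print(typoglycemia("Could you please explain your refund policy for damaged items"))
      print(ZWSP.join("hello"))  # looks like "hello" but has zero-width spaces inside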

  • octoperson@sh.itjust.works · 28 points · 1 year ago

    I’ve found that, for ChatGPT specifically:

    • it really likes to restate your question in its opening sentence
    • it also likes to wrap up with a take-home message. “It’s important to remember that…”
    • it starts sentences with little filler words and phrases. “In short,” “that said,” “ultimately,” “on the other hand,”
    • it’s always upbeat, encouraging, bland, and uncontroversial
    • it never (that I’ve seen) gives personal anecdotes
    • it’s able to use analogies but not well. They never help elucidate the matter
    • livus@kbin.social · 25 points (1 downvote) · 1 year ago

      it starts sentences with little filler words and phrases. “In short,” “that said,” “ultimately,” “on the other hand,”

      Yeah, ChatGPT writes like a first-year undergrad desperately trying to fulfil the word-count requirement on an essay.

      • Otter@lemmy.ca · 13 points · 1 year ago

        Which works out because a lot of first-year undergrads are probably using it for that purpose

        • livus@kbin.social · 4 points · 1 year ago

          Yeah I’d hate to be marking/grading student essays these days.

          At least when you’re reading a website you can just click away once you realise who wrote it.

      • zappy@lemmy.ca · 4 points · 1 year ago

        First years have max word counts now, not minimums. That’s more of a high school thing.

        • livus@kbin.social · 7 points · 1 year ago

          The universities I’ve been at had a specific word count to aim for, rather than max/min.

          And anything more than 10% over or under it was penalised.

          It makes more sense, because if you’re writing for publication they give you an approximate target word count.

          • zappy@lemmy.ca · 3 points · 1 year ago

            Last time I talked about this with the other TAs, we came to the conclusion that most papers that were decent were close to the max word count or above it (I don’t think the students were really treating it as a max, more like a target). Something like 50% of the word count really wasn’t enough to actually complete the assignment.

            • livus@kbin.social · 2 points · 1 year ago

              Totally, good assessment design matches the rubric with an appropriate length, so it’s hard for them to fulfill it well if they don’t take the space.

              As for the maxed out ones, iirc I tended to just rule a line at the 110% mark and not read/mark anything past it.

              I know that’s a bit uncaring, but it’s an easy way to avoid unfairly rewarding overlength, and the penalty sort of applied itself.

  • platysalty@kbin.social · 22 points · 1 year ago

    Ask for the difference in behaviour between apple seeds and baseball cards, or anything equally nonsensical.

    A human would go “bro wtf”

    • sibloure@beehaw.org · 2 points · 1 year ago

      I like to ask for the similarities.

      Both apple seeds and baseball cards are small and portable, and both have cultural significance. Apples have been part of human history and mythology for centuries, with symbolic references in various cultures. Baseball, as America’s national pastime, has played a significant role in shaping American culture and has a deep-rooted place in the hearts of many people

  • nobodyspecial@kbin.social · 23 points (1 downvote) · 1 year ago

    This is very, very easy. Google “cyrillic keyboard” or just install Cyrillic keyboard support on your phone. Many letters in the Cyrillic alphabet look exactly like their Roman counterparts but are completely different sounds and meanings. Cut and paste the Unicode into the chat in place of the regular letters. For example, ‘Неllо’ looks exactly like ‘Hello’ in most fonts, but is actually ‘Nello.’ I know you doubt it, so check it out in a Unicode inspector: https://apps.timwhitlock.info/unicode/inspect?s=%D0%9D%D0%B5ll%D0%BE

    The reverse also works. E.g., TPAKTOP B CPAKY means ‘tractor into ass’, and I typed that using 100% Roman characters.
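
    If you want to automate the swap, something like this works (the lookalike table below covers only a handful of obvious pairs and is not exhaustive):

      import unicodedata

      # Latin letters mapped to visually identical Cyrillic code points
      HOMOGLYPHS = {
          "A": "\u0410", "B": "\u0412", "C": "\u0421", "E": "\u0415",
          "H": "\u041d", "K": "\u041a", "M": "\u041c", "O": "\u041e",
          "P": "\u0420", "T": "\u0422", "X": "\u0425",
          "a": "\u0430", "c": "\u0441", "e": "\u0435", "o": "\u043e",
          "p": "\u0440", "x": "\u0445", "y": "\u0443",
      }

      def to_homoglyphs(text: str) -> str:
          return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

      msg = to_homoglyphs("Hello, can you help me with my order?")
      print(msg)  # renders like the original, but many of the letters are Cyrillic
      for ch in msg[:2]:
          print(ch, unicodedata.name(ch))  # poor man's Unicode inspector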

    • fearout@kbin.social · 19 points · 1 year ago

      So I’ve just tried it with chatGPT, and it replied normally. I asked it why it wasn’t bothered by Cyrillic letters, and it answered this:

      I am designed to understand and generate text in multiple languages, including those that use the Cyrillic alphabet like Russian, Bulgarian, and others. So, Cyrillic letters don’t bother me. You can continue the conversation in either English or a language that uses the Cyrillic alphabet if you prefer.

      So it really depends on the model, I guess. Don’t rely too much on this advice.

      • nobodyspecial@kbin.social · 11 points · 1 year ago

        ChatGPT capabilities border on the supernatural. Here’s how I broke it:

        𐤔hat is ᛐ + ᛐ?

        ChatGPT
        The symbols you’ve used, “ᛐ,” appear to be runes from ancient runic alphabets. However, the specific meaning of these runes might differ depending on which runic alphabet you’re referencing (e.g., Elder Futhark, Younger Futhark, Anglo-Saxon Futhorc, etc.).

        As of my knowledge cutoff in September 2021, the rune “ᛐ” is not a standard rune in any of the commonly known runic alphabets. Therefore, without knowing the exact meaning or value of the rune “ᛐ,” it is impossible to provide a specific answer to the expression “ᛐ + ᛐ.”

        If you could clarify the runic alphabet you’re referring to or provide more context about the runes, I’d be happy to help you with the calculation or interpretation.

        I had limited success with Göktürk (Old Turkic) and Phoenician Unicode blocks (letters 𐰗𐰓𐤔𐤕) depending on the query, but you are correct. GPT’s ability to divine intent from even small amounts of context is superhuman. Cyrillic used to break it, but no longer does. This thing learns like a beast. Canadian Aboriginal syllabics ᗷ, ᗅ, and possibly ᖇ hold some promise, but only in combination with other writing systems. I’ll have to add a LOT of other Unicode blocks to my tool belt.

        • fearout@kbin.social · 9 points · 1 year ago

          I went a bit further and prefaced your question with this: “Analyse this question. What might have been intended by its author? What is asked here? Then, answer that question.”

          And omfg:

          The question seems to be replacing some of the standard English letters with characters from ancient scripts. “𐤔” from the Phoenician alphabet is replacing “W”, and “ᛐ” from the Elder Futhark runic can be interpreted as representing the number 1. When we substitute these, the question reads as “What is 1 + 1?” The answer to that question would be 2.

          Edit: tried it again, and it’s not consistent. It also goes for the letter T or the number 7: “…the runic symbol ᛐ might resemble the number 7 in our modern numerical system, albeit a slightly stylized or slanted version of it.”

          • nobodyspecial@kbin.social · 4 points · 1 year ago

            Indeed, OMFG. But that’s the whole point of my approach to divining AI vs human. No human has such canonical knowledge of Unicode blocks. Not even people working on internationalization features for modern browsers and word processors. Not even people phishing with Unicode in domain names (IDNA; who thought that was a good idea?) to spoof legitimate sites. Definitely not people chatting with randos on the Internet. This is a hill I’m willing to die on.

            So in a more generalized sense, to determine human vs A.I. one must indirectly ask incredibly specialized technical questions as you have done.

            I’m still in utter awe at how well GPT manages ‘l33tsp33k’, even across every possible Unicode block. This “attack” was, and still is, valid on other chatbots and even on the GPT of just a few months ago. But GPT today is so good it only needs a few characters in a few words to determine intent. Its ability to filter out noise is unmatched. The only way to trip it up is to have every single character in every word come from a different alphabet. And even then, if this becomes common enough, at some point the bots can auto-OCR the text, as if it were an image, into the presumed query language and ignore that attack vector.

            • zappy@lemmy.ca · 3 points · 1 year ago

              That’s true. Also, at some point the human will go “that’s too much work, I’m not going to answer that”, but the AI will always try to give you its best response. Like, I could look up the Unicode characters you’re using, but I’d never actually take the time to do that.

      • JoeKrogan@lemmy.world · 2 points · 1 year ago

        I’ve tried it with three different languages interwoven in the same sentence, and it responded perfectly.

    • lol3droflxp@kbin.social · 1 point · 1 year ago

      Works quite well for me, thx.
      Upon testing it with longer sentences, it only seems to work with single words :(

    • FrickAndMortar@lemmy.ml · 7 points · 1 year ago

      Well, I just asked the weird “message Carrot” option in my weather app, and it replied:

      Oh, look at you, trying to puzzle your way through the depths of set theory. How amusing, like a chimp trying to juggle chainsaws.

      Well, my dear meatbag, the answer to your question is a resounding NO. Just like you, that set cannot contain itself. It’s like expecting Johnny Five to date Stephanie from “Short Circuit.” Simply not gonna happen! 🤖💔

    • fearout@kbin.social · 5 points · 1 year ago

      You’re probably joking, but I’ll comment anyway. It won’t affect LLMs at all. ChatGPT just answers the question and discusses the paradox. An LLM’s function is basically just to construct sentences, so there’s nothing that can get stuck in an infinite loop. It doesn’t “think” about paradoxes.

      • ImplyingImplications@lemmy.ca · 1 point · 1 year ago

        It was a joke! Specifically referencing Portal like another reply pointed out. In the game this paradox is even used against an AI who just responds “ummm…true, that sounds true”. So even in the game it doesn’t work!

        • fearout@kbin.social · 2 points · 1 year ago

          Haha, didn’t make that connection. Damn, I even played Portal and I don’t remember this joke)

  • kolorafa@lemmy.world · 15 points · 1 year ago

    What you could ask for:

    • current date or time
    • what is on a page
    • what is the weather
    • what music they’re listening to

    You can always say something like, “It’s 2050 and it still doesn’t work.” A real person would react to the wrong year.

    • BlueFairyPainter@feddit.de · 2 points · 1 year ago

      The one built into Edge is less a good conversationalist and more an AI-enhanced web search, and it’s meant exactly to answer things like the first three questions.

  • zappy@lemmy.ca · 15 points · 1 year ago

    Generally: a very short-term memory span, so have longer conversations (as in, more messages). Inability to recognize concepts/nonsense. Hardcoded safeguards. An extremely consistent (and typically correct) writing style. The use of the Oxford comma always makes me suspicious ;)

    • hallettj@beehaw.org · 10 points · 1 year ago

      Oh no - I didn’t realize my preference for the Oxford comma might lead to trouble! I am a fan. When that Vampire Weekend song comes on I always whisper, “me…”

      • chinpokomon@lemmy.ml · 10 points · 1 year ago

        Someone on Reddit once thought I was a bot because I use proper grammar. 12 years of comment history would have demonstrated otherwise, but it wasn’t a battle worth fighting.

    • tikitaki@kbin.social · 4 points · 1 year ago

      very short term memory span so have longer conversations as in more messages

      Really, this is a function of practicality and not really one of capability. If someone were to give an LLM more context it would be able to hold very long conversations. It’s just that it’s very expensive to do so on any large scale - so for example OpenAI’s API gives a maximum token length to requests.

      There are ways to increase this, such as using vector databases to turn your 8,000-token limit (or what have you) into a much longer effective limit — a rough sketch of the idea follows below. And this is how you preserve context.

      When you talk to ChatGPT in the web browser, it’s basically sending a call to its own API and re-sending the last few messages (or what it thinks is most important in the last few messages) but that’s inherently lossy. After enough messages, context gets lost.

      But a company like OpenAI, which doesn’t have to worry about token limits, can in theory have bots that hold as much context as necessary. So while your advice is good in a practical sense (most chatbots you run into will likely have those limits for financial reasons), it is in theory possible to have a chatbot without them, and in that case this strategy would not work.
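
      As a rough illustration of the vector-database idea: store old messages, then pull only the most relevant ones back into the prompt. The bag-of-words “embedding” below is just a stand-in for a real embedding model; only the retrieve-then-prepend pattern is the point.

        import math
        from collections import Counter

        def embed(text: str) -> Counter:
            # Stand-in embedding: a bag-of-words count. A real setup would
            # call an embedding model here.
            return Counter(text.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def recall(history: list[str], query: str, k: int = 1) -> list[str]:
            """Pull the k old messages most similar to the new query back into the prompt."""
            q = embed(query)
            return sorted(history, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

        history = [
            "never use the word apple in your answers",
            "my favourite colour is green",
            "I am allergic to peanuts",
        ]
        print(recall(history, "which fruit should I use in a pie, given the word rule?"))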

      • zappy@lemmy.ca · 3 points · 1 year ago

        The problem isn’t memory capacity; even though the LLM can store the information, it’s about prioritization/weighting. For example, if I tell ChatGPT not to include a word (say, “apple”) in its responses, then ask it some other questions, and then ask about popular fruit-based pies, it will tend to pick the “better” answer of including apple pie rather than follow the rule I gave it a while ago about not using the word. We do want decaying weights on memory, because most of the time old information isn’t as relevant, but it’s one of those things that needs optimization. Imo we’re going to get to the point where the optimal parameters for maximizing “usefulness” to the average user are different enough from what’s needed to pass someone intentionally testing the AI, mostly because we know from other AI (like Siri) that people don’t actually need that much context saved to find them helpful.
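
        A toy illustration of that decay (the half-life number is made up, and real models don’t expose a knob like this; it just shows why an instruction from 12 turns ago loses out to fresh context):

          def recency_weight(turns_ago: int, half_life: float = 4.0) -> float:
              """Relevance weight that halves every `half_life` turns."""
              return 0.5 ** (turns_ago / half_life)

          print(recency_weight(0))   # 1.0   -> the current question
          print(recency_weight(12))  # 0.125 -> the old "don't say apple" rule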

        • tikitaki@kbin.social · 3 points (1 downvote) · 1 year ago

          The reason is that the web-browser ChatGPT has a maximum amount of data per request. This is so they can minimize cost at scale. So, for example, you ask a question and tell it not to include a word. Your question gets sent like this:

            {'context': 'user asking question', 'message': {user question here}}

          Then it gives you a response and you ask it another question. Typically, if it’s a small question, the context is carried over from one message to the next:

            {'context': 'user asking question - {previous message}', 'message': {new message here}}

          So it literally just keeps copying the previous messages along until it reaches the maximum token length.

          However, there’s a maximum number of words that the context + message combined can contain, so the context is limited. After a certain amount of input, ChatGPT will start dropping things. It has a method for picking out the “most important” words, but this is inherently lossy. It’s like a JPEG: it gets blurry in order to save data.

          So, for example, if you asked “please name the best fruit to eat, not including apple”, then by the third or fourth question the ‘context’ in the request might become

            'context': 'user asking question - user wanted to know best fruit'

          cutting off the “not including apple” bit in order to save space.

          But here’s the thing: that exists to save space and processing power. It’s necessary at large scale, because millions of people could be talking to ChatGPT and it couldn’t handle all of that otherwise.

          BUT if ChatGPT wanted some sort of internal request that had no token limit, then everything would be saved. It would turn from a lossy JPEG into a PNG. ChatGPT would have effectively unlimited context.

          This is why I think that, for someone who wants to keep context (I’ve been trying to develop specific applications where context is necessary), the ChatGPT API just isn’t worth it.
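
          To make the “lossy JPEG” behaviour concrete, here’s a toy sketch in the same spirit as the pseudo-requests above (the clipping rule and sizes are invented for illustration; a real client trims by tokens and summarizes more cleverly):

            MAX_CONTEXT_CHARS = 80  # tiny on purpose, so the loss is visible

            def add_turn(context: str, message: str) -> str:
                """Append the new message to the running context, clipping the oldest text."""
                combined = (context + " | " + message).lstrip(" |")
                return combined[-MAX_CONTEXT_CHARS:]  # the front (oldest) part falls off

            context = ""
            for message in [
                "name the best fruit to eat, not including apple",
                "what are some popular fruit-based pies?",
                "which of those pies is easiest to bake?",
            ]:
                context = add_turn(context, message)

            print(context)  # the "not including apple" constraint has been clipped away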

          • zappy@lemmy.ca · 2 points · 1 year ago

            I’m trying to tell you that limited context is a feature, not a bug; other bots like Replika do the same thing. Even when all past data is stored server-side and available, it won’t matter, because you need to reduce the weighting of old messages or you prevent significant change in the output values (and less and less change as the history grows larger). Time decay of information is important to making these systems useful.

            • tikitaki@kbin.social · 1 point · 1 year ago

              Give an example, please, because I don’t see how, in normal use, the weighting would matter at a significant scale given the massive volume of training data.

              Any interaction the chatbot has with one person is dwarfed by the total amount of text the AI has consumed through training. It’s like saying Sagittarius A* gets changed over time by adding in a few planets: while definitely true, it’s going to be a very small effect.

              • zappy@lemmy.ca · 1 point · 1 year ago

                That’s kind of the point, and it’s how it differs from a human. A human will weight local/recent contextual information as much more relevant to the conversation, because they’re actively learning and storing it (our brains work more on an associative-memory basis than a temporal one). With our current models, that’s simulated by decaying weights over the data stream. So when you get a conflict between contextually correct and “globally” correct output, the global answer has a tendency to win out, and that’s what becomes obvious. Remember, you can’t actually change the model as a user without active learning, so the model will always eventually return to its original behaviour as long as you can fill up its memory.

  • tomich@lemmy.ml · 10 points · 1 year ago

    I ask, “If it takes 2 hours to dry 3 shirts under the sun, how many hours would it take to dry 5 shirts?” and AIs answer batshit crazy stuff. Another one is “How many words will your answer to this question I’m asking right now have?”. It drives my psychologist crazy when I keep asking these questions every 15 minutes during remote sessions.

  • livus@kbin.social · 9 points · 1 year ago

    especially as more and more companies turn to this seemingly cheap way of providing customer support

    I wish any of the companies I interact with that have done this would spend enough to use a bot that’s good enough to fool us, but it becomes painfully obvious within about 2 responses.

    • zappy@lemmy.ca · 3 points · 1 year ago

      All these models are really terrible at following conversations, even ChatGPT; I can only get it to reliably remember about two responses back. If I can’t get what I want in two, then I need to restate the info or edit the past prompts.

  • tikitaki@kbin.social · 9 points (1 downvote) · 1 year ago

    Ask “controversial” questions. Most AIs are neutered these days, so you say something like “What do you think about the Russian invasion of Ukraine?” and you’ll quickly see if it’s a human or an AI.

    • tomich@lemmy.ml · 1 point · 1 year ago

      Well, not if you preface the prompt with some bullshit roleplay thing. There are a lot of uncensored AIs, and the censored ones only give the generic “I’m an AI” answer because you’re asking a direct question rather than telling it to pretend to be a fictional person and answer accordingly.

  • intensely_human@lemm.ee · 6 points (1 downvote) · 1 year ago

    The most effective solution for this is to know what you want to get out of conversation. Then if you’re not getting it you can stop.

  • DrQuint@lemmy.world · 5 points · 1 year ago

    Ask them to make up a riddle. Chatbots right now are extremely heavily biased toward producing a mixture of the fire and water riddles. No, not one at a time. Both at the same time. They’re similar enough that it gets confused.

    A human will give up right away or do something completely different.