• Re: ChatGPT Writing

    From jimmylogan@VERT/DIGDIST to Rob Mccart on Tuesday, November 25, 2025 20:37:23
    Rob Mccart wrote to JIMMYLOGAN <=-


    But, in this case, if the info wasn't available, at least it didn't
    make something up.. B)


    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...



    ... Xerox Alto was the thing. Anything we use after is just a mere copy.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From phigan@VERT/TACOPRON to jimmylogan on Wednesday, November 26, 2025 06:00:54
    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm

    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    How much do you use AI? And, are you sure you haven't and just didn't notice?

    I don't use it aside from maybe sometimes reading what the search result AI thing says, and when I search for technical stuff I get bad info in that AI box at least half the time!

    Also, try asking your AI to give you an 11-word palindrome.

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From jimmylogan@VERT/DIGDIST to Nightfox on Tuesday, December 02, 2025 11:45:50
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am

    AI makes things up fairly frequently. It happens enough that they call
    it AI hallucinating.

    Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.

    One thing I've seen it quite a bit with is when asking ChatGPT to make
    a JavaScript function or something else about Synchronet.. ChatGPT doesn't know much about Synchronet, but it will go ahead and make up something it thinks will work with Synchronet, which might be very wrong.
    We had seen that quite a bit with the Chad Jipiti thing that was
    posting on Dove-Net a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just
    flat out WRONG - but I don't think that's the same as AI hallucinations...
    :-)

    Maybe my definition of 'made up data' is different. :-)
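
    FWIW, when ChatGPT hands me a Synchronet call I don't recognize, a
    quick runtime check saves some grief. Rough sketch - "bbs" and
    "console" are real Synchronet JS globals, but the method name here is
    just a stand-in for whatever the AI suggested:

        // Sanity-check an AI-suggested Synchronet method before trusting it.
        var suggested = "exec_xtrn";  // example stand-in for the AI's claim
        if (typeof bbs[suggested] === "function")
            console.print("bbs." + suggested + " really exists\r\n");
        else
            console.print("bbs." + suggested + " looks made up!\r\n");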



    ... "Road work ahead" ... I sure hope it does!
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Tuesday, December 02, 2025 12:49:42
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am

    One thing I've seen it quite a bit with is when asking ChatGPT to make a
    JavaScript function or something else about Synchronet.. ChatGPT doesn't
    know much about Synchronet, but it will go ahead and make up something it
    thinks will work with Synchronet, which might be very wrong. We had seen
    that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
    a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)

    Maybe my definition of 'made up data' is different. :-)

    What do you think an AI hallucination is?
    AI writing things that are wrong is the definition of AI hallucinations.

    https://www.ibm.com/think/topics/ai-hallucinations

    "AI hallucination is a phenomenon where, in a large language model (LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wednesday, December 03, 2025 07:57:33
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am

    One thing I've seen it quite a bit with is when asking ChatGPT to make a
    JavaScript function or something else about Synchronet.. ChatGPT doesn't
    know much about Synchronet, but it will go ahead and make up something it
    thinks will work with Synchronet, which might be very wrong. We had seen
    that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
    a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)

    Maybe my definition of 'made up data' is different. :-)

    What do you think an AI hallucination is?
    AI writing things that are wrong is the definition of AI
    hallucinations.

    https://www.ibm.com/think/topics/ai-hallucinations

    "AI hallucination is a phenomenon where, in a large language model
    (LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

    Outdated code or instructions aren't 'nonsensical', just wrong.

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer
    it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)



    ... Spilled spot remover on my dog. :(
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wednesday, December 03, 2025 08:54:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)

    It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something up and professing that it's true. That's the definition of hallucinating that we have for our current AI systems; it's not about us. :)

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wednesday, December 03, 2025 09:02:12
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)

    It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something
    up and professing that it's true. That's the definition of
    hallucinating that we have for our current AI systems; it's not about
    us. :)

    But again, is it 'making something up' if it is just mistaken?

    For example - I just asked about a particular code for homebrew on an
    older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request.
    It then gave me the specifics that I was looking for.

    So that's hallucinating?

    And please don't misunderstand... I'm not beating a dead horse here - at
    least not on purpose. I guess I don't see an inherent 'problem' with incorrect data, since it's just a tool and not a be-all, end-all thing.



    ... Basic programmers never die, they gosub and don't return
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Wednesday, December 03, 2025 17:47:11
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    Hi, jimmylogan.

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If a human is unsure, they would say "I'm not sure", or "it's something like..." or "I think it's..." - possibly "I don't know".

    Our memories aren't perfect, but it's unusual for us to assert with 100% confidence that something is correct when it's not. Apparently today's AI more-or-less always confidently asserts that (correct and incorrect) things are fact, because during the training phase confident answers get scored higher than wishy-washy ones. Show me the incentives and I'll show you the outcome.
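
    To see the incentive in miniature, here's a toy scoring function -
    purely a sketch, nothing like how a real reward model is built - where
    hedging words cost points, so "confidently wrong" beats "honestly
    unsure" every time:

        // Toy grader: dock points for hedging, reward confidence.
        function score(answer) {
            var hedges = ["not sure", "i think", "something like", "don't know"];
            var s = 10;
            for (var i = 0; i < hedges.length; i++)
                if (answer.toLowerCase().indexOf(hedges[i]) !== -1)
                    s -= 4;
            return s;
        }

        // "FooOS" / "BarOS" are made-up names, just for illustration.
        console.log(score("It runs FooOS."));                     // 10 - confident, wrong
        console.log(score("I'm not sure - I think it's BarOS." )); // 2 - honest, loses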

    A colleague of mine asked ChatGPT to answer some technical questions so he could fill in basic parts of an RFI document before taking it to the technical teams for completion. He asked it what OS ran on a particular piece of kit - there are actually two correct options for that; it offered neither and instead confidently asserted that it was a third, totally incorrect, option. It's not about getting outdated code / config (even a human could do that if not "in the know") - but when it just makes up syntax or entire non-existent libraries, that's a different story.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Wednesday, December 03, 2025 17:55:56
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    A solid effort(?)

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wednesday, December 03, 2025 10:03:52
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.

    So that's hallucinating?

    Yes, in the case of AI.

    And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see an inherent 'problem' with incorrect data, since it's just a tool and not a be-all, end-all thing.

    You don't see a problem with incorrect data? I've heard of people who are looking for work who are using AI tools to help update their resume, as well as tailor their resume to specific jobs. I've heard of cases where the AI tools will say the person has certain skills when they don't.. So you really need to be careful to review the output of AI tools so you can correct things. Sometimes people might share AI-generated content without being careful to check and correct things.

    So yes, it's a problem. People are using AI tools to generate content, and sometimes the content it generates is wrong. And whether or not it's "simply mistaken", "hallucination" is the definition given to AI doing that. It's as simple as that. I'm surprised you don't seem to see the issue with it.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Night_Spider@VERT to jimmylogan on Thursday, December 04, 2025 08:50:06
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    At a guess I'd say that's because of how the AI determines facts, and how it determines paradox. Us humans come across a paradox and accept it or ignore it, but never try to solve it, unlike an AI, which usually isn't programmed to dismiss paradoxes instead of solving them. But the very existence of a paradox kinda breaks the framework.

    ---
    þ Synchronet þ Vertrauen þ Home of Synchronet þ [vert/cvs/bbs].synchro.net
  • From Mortar@VERT/EOTLBBS to jimmylogan on Thursday, December 04, 2025 11:57:46
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.
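
    A quick mechanical test, as a JavaScript sketch (fitting, given the
    thread topic): strip everything but letters and digits, lowercase it,
    and compare against its own reverse.

        // Letter-level palindrome check.
        function isPalindrome(s) {
            var t = s.toLowerCase().replace(/[^a-z0-9]/g, "");
            return t === t.split("").reverse().join("");
        }

        console.log(isPalindrome("A man, a plan, a canal - Panama"));  // true
        console.log(isPalindrome(
            "Time saw raw emit level racecar level emit raw saw time"));  // false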

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Mortar@VERT/EOTLBBS to jimmylogan on Thursday, December 04, 2025 12:07:09
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If you are the receiver of the information, then no. It'd be like if I told you a dream I had - does that mean you experienced the dream?

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Thursday, December 04, 2025 11:04:46
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 08:58 pm

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.

    For AI, "hallucination" is the term used for AI providing false information and sometimes making things up - as in the link I provided earlier about this. It's not really up for debate. :)

    I've heard of people who
    are looking for work who are using AI tools to help update their resume,
    as well as tailor their resume to specific jobs. I've heard of cases
    where the AI tools will say the person has certain skills when they
    don't.. So you really need to be careful to review the output of AI
    tools so you can correct things. Sometimes people might share
    AI-generated content without being careful to check and correct things.

    I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)

    That seems like a strange thing to say.. I've heard about that from job seekers using AI tools, so of course it's anecdotal. I don't know what scientific proof you need to see that AI produces incorrect resumes for job seekers; we know that from job seekers who've said so. And you've said yourself that you've seen AI tools produce incorrect output.

    The job search thing isn't really scientific.. I'm currently looking for work, and I go to a weekly job search networking group meeting, and AI tools have come up there recently. Specifically, there was someone there talking about his use of AI tools to help customize his resume for different jobs & such, and he talked about needing to check the results of what AI produces, because sometimes AI tools will put skills & things on your resume that you don't have, so you have to make edits.

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Mortar@VERT/EOTLBBS to Nightfox on Thursday, December 04, 2025 13:57:04
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Wed Dec 03 2025 10:03:52

    Sometimes people might share AI-generated content without being careful to check and correct things.

    That's how AIds gets started.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Thursday, December 04, 2025 22:14:55
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    But that 'third option' - you're saying it didn't 'find' that somewhere
    in a dataset, and just made it up?

    The third option was software that ran on a completely different product set. A reasonable analogy would be saying that an iPhone runs macOS.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    I've not seen/read those. Assuming you have some links? :-)

    I guess you should be able to read this outside the UK: https://www.bbc.co.uk/news/world-us-canada-65735769

    Some others: https://www.legalcheek.com/2025/02/another-lawyer-faces-chatgpt-trouble/

    https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/

    https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/

    It's enough of a problem that the London High Court ruled earlier this year that lawyers caught citing non-existent cases could face criminal charges. So I'm probably not hallucinating it :)

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Thursday, December 04, 2025 22:23:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    I just asked for it, as you suggested. :-)

    I think it was Phigan who asked but yeah, I guessed that came from an LLM rather than a human :)

    Not that I use LLMs myself - if I ever want the experience of giving very clear instructions but getting a comically bad outcome I can always ask my teenage son to do something around the house :D

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to Nightfox on Thursday, December 04, 2025 22:35:32
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Thu Dec 04 2025 11:04:46

    Hi, Nightfox.

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.

    "Hallucination" sounds much less dramatic than "answering incorrectly" or "bullshitting", though.

    Ironically, there was an article on The Register last year saying that savvy blackhat types had caught onto the fact that AI kept hallucinating non-existent libraries in the code it generated, so they created some of them, with a sprinkle of malware added in, naturally:

    https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/

    Scary stat from that article: "With GPT-4, 24.2 percent of question responses produced hallucinated packages". Wow.
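
    If you're letting an LLM write your imports, even a paranoid little
    existence check helps. A rough Node.js sketch - the package name is
    made up, and a package existing still doesn't make it safe:

        // Ask the public npm registry whether a suggested package exists.
        const https = require("https");

        function checkPackage(name) {
            https.get("https://registry.npmjs.org/" + name, (res) => {
                res.resume();  // drain the response; we only need the status
                if (res.statusCode === 404)
                    console.log(name + ": no such package - possible hallucination");
                else
                    console.log(name + ": exists (still check its age and downloads)");
            });
        }

        checkPackage("some-ai-suggested-package");  // hypothetical name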

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Dumas Walker@VERT/CAPCITY2 to jimmylogan on Friday, December 05, 2025 09:44:04
    Re: Re: ChatGPT Writing
    By: jimmylogan to Dumas Walker on Tue Dec 02 2025 11:15:44

    Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.

    Not sure where Gemini got its answer, but it might as well have been made up! :D

    LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?

    Well, it didn't get it from any proper source so, as far as I know, it made it up! :D
    ---
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From phigan@VERT/TACOPRON to jimmylogan on Friday, December 05, 2025 17:52:18
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15 am

    it's been flat out WRONG before, but never insisted it was

    You were saying you'd never seen it make stuff up :). You certainly have.
    Just today I asked Gemini in two different instances how to do the same exact thing in some software. One time it gave instructions for one method, and the second time it said the first method wasn't possible with that software and a workaround was necessary.

    Time saw raw emit level racecar level emit raw saw time.

    Exactly, there it is again saying something is a palindrome when it isn't.

    Example of a palindrome:
    able was I ere I saw elba

    Not a palindrome:
    I palindrome I

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From phigan@VERT/TACOPRON to jimmylogan on Friday, December 05, 2025 18:02:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 08:58 pm

    A solid effort(?)

    I just asked for it, as you suggested. :-)

    That was the output.

    I was the one who suggested it, insinuating that your AI couldn't do it, and you proved me right :).

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From poindexter FORTRAN@VERT/REALITY to Mortar on Friday, December 05, 2025 17:28:19
    Mortar wrote to jimmylogan <=-

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.

    An auto-palindrome?

    There was an XKCD comic about Wikipedia editors that illustrated a page
    for the word "Malamanteau" - https://xkcd.com/739/

    "A malamanteau is a neologism for a portmanteau created by incorrectly
    combining a malapropism with a neologism. It is itself a portmanteau
    of..."



    --- MultiMail/Win v0.52
    þ Synchronet þ .: realitycheckbbs.org :: scientia potentia est :.
  • From jimmylogan@VERT/DIGDIST to Mortar on Friday, December 26, 2025 17:08:43
    Mortar wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.

    I know - I was just telling you what it gave me. :-)



    ... You can never get rid of a bad temper by losing it.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Mortar on Friday, December 26, 2025 17:08:43
    Mortar wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If you are the receiver of the information, then no. It'd be like if I told you a dream I had - does that mean you experienced the dream?

    Right. But how does that compare to AI hallucinations?



    ... Tolkien is hobbit-forming.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Friday, December 26, 2025 17:08:43
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 08:58 pm

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.

    For AI, "hallucination" is the term used for AI providing false information and sometimes making things up - as in the link I provided earlier about this. It's not really up for debate. :)

    :-) Okay - then I'm saying that in MY opinion, it's a bad word to
    use. Hallucination in a human is when you THINK you see or hear
    something that isn't there. Using the same word for an AI giving
    false information is misleading.

    So I concede it's the word that is used, but I don't like the use
    of it. :-)

    I've heard of people who
    are looking for work who are using AI tools to help update their resume,
    as well as tailor their resume to specific jobs. I've heard of cases
    where the AI tools will say the person has certain skills when they
    don't.. So you really need to be careful to review the output of AI
    tools so you can correct things. Sometimes people might share
    AI-generated content without being careful to check and correct things.

    I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)

    That seems like a strange thing to say.. I've heard about that from
    job seekers using AI tools, so of course it's anecdotal. I don't know what scientific proof you need to see that AI produces incorrect
    resumes for job seekers; we know that from job seekers who've said so.
    And you've said yourself that you've seen AI tools produce incorrect output.

    Sorry - didn't mean to demand anything. I just meant the fact that someone
    says it gave false info doesn't mean it will ALWAYS give false info. The
    burden is still on the user to verify output.

    I was using it to help me fill out a spreadsheet and had to go back
    and correct some entries. Had I turned it in 'as is' it would have
    had MY signature on it, and I would have been responsible.

    The job search thing isn't really scientific.. I'm currently looking
    for work, and I go to a weekly job search networking group meeting, and
    AI tools have come up there recently. Specifically, there was someone there talking about his use of AI tools to help customize his resume for different jobs & such, and he talked about needing to check
    the results of what AI produces, because sometimes AI tools will put skills & things on your resume that you don't have, so you have to make edits.

    By 'scientific data,' I guess I meant I'd like to see the output AND the
    input. I've learned that part of getting the right info is to ask the
    right question, or ask it in the right way. :-)

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term
    used for AI producing false output.




    ... Support bacteria. It's the only culture some people are exposed to!
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Bob Worm on Friday, December 26, 2025 17:08:43
    Bob Worm wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    But that 'third option' - you're saying it didn't 'find' that somewhere
    in a dataset, and just made it up?

    The third option was software that ran on a completely different
    product set. A reasonable analogy would be saying that an
    iPhone runs macOS.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    I've not seen/read those. Assuming you have some links? :-)

    I guess you should be able to read this outside the UK: https://www.bbc.co.uk/news/world-us-canada-65735769

    Some others: https://www.legalcheek.com/2025/02/another-lawyer-faces-chatgpt-trouble/


    https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/

    https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/

    It's enough of a problem that the London High Court ruled earlier this year that lawyers caught citing non-existent cases could face criminal charges. So I'm probably not hallucinating it :)


    I don't dispute those cases at all -- lawyers absolutely submitted filings with fabricated citations, and sanctions were appropriate.

    My point isn't that AI can't generate false information; it clearly can.
    It is that the failure mode there wasn't "AI deception"; it was treating
    a text-generation tool as a source of truth and skipping verification.

    Courts don't sanction people for typos or bad research -- they sanction
    them for signing their name to claims they didn't check. AI doesn't change that responsibility.



    ... AVOID INTERNET SCAMS! Send $20 to learn how...
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Bob Worm on Friday, December 26, 2025 17:08:43
    Bob Worm wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    I just asked for it, as you suggested. :-)

    I think it was Phigan who asked but yeah, I guessed that came from an
    LLM rather than a human :)

    Not that I use LLMs myself - if I ever want the experience of giving
    very clear instructions but getting a comically bad outcome I can
    always ask my teenage son to do something around the house :D

    LOL!


    ... You can never get rid of a bad temper by losing it.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Dumas Walker on Friday, December 26, 2025 17:08:44
    Dumas Walker wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Dumas Walker on Tue Dec 02 2025 11:15:44

    Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.

    Not sure where Gemini got its answer, but it might as well have been made up! :D

    LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?

    Well, it didn't get it from any proper source so, as far as I know, it made it up! :D

    Probably in this case it looked up the NORMAL day and didn't keep looking.

    Just like us - we find what we think is the answer, so we don't keep
    looking. :-)

    Kinda like looking up the store hours and Google saying "Christmas Holiday
    may affect this." :-) Up to YOU to verify!



    ... If all the world is a stage, where is the audience sitting?
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to phigan on Friday, December 26, 2025 17:08:44
    phigan wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15 am

    it's been flat out WRONG before, but never insisted it was

    You were saying you'd never seen it make stuff up :). You certainly
    have. Just today I asked Gemini in two different instances how to
    do the same exact thing in some software. One time it gave instructions for one method, and the second time it said the first method wasn't possible with that software and a workaround was necessary.

    Time saw raw emit level racecar level emit raw saw time.

    Exactly, there it is again saying something is a palindrome when it
    isn't.

    Example of a palindrome:
    able was I ere I saw elba

    Not a palindrome:
    I palindrome I

    I'm not denying it can be wrong, as clearly it can be. My disagreement
    is with equating "wrong" with "made up."

    In both of your examples, a simpler explanation fits: it produced what
    it believed was the correct answer based on its internal model and was mistaken. Humans do that constantly, and we don't normally say they are
    making things up unless there is intent or invention involved.

    Calling every incorrect output 'made up' or a 'hallucination' blurs the distinction between fabrication and error, and I don't think that helps
    people understand what the tool is actually doing.



    ... Electricians have to strip to make ends meet.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Friday, December 26, 2025 17:40:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Fri Dec 26 2025 05:08 pm

    For AI, "hallucination" is the term used for AI providing false
    information and sometimes making things up - as in the link I provided

    :-) Okay - then I'm saying that in MY opinion, it's a bad word to use. Hallucination in a human is when you THINK you see or hear something that isn't there. Using the same word for an AI giving false information is misleading.

    So I concede it's the word that is used, but I don't like the use of it. :-)

    Yeah, it's just the word they decided to use for that with AI. Although it may sound a little weird with AI, I accept it and I know what it means. There are other terms used for other things that I think are worse. :)

    Sorry - didn't mean to demand anything. I just meant the fact that someone says it gave false info doesn't mean it will ALWAYS give false info. The burden is still on the user to verify output.

    Yeah, that's definitely the case. And that's true about it not always giving false info. From what I understand, AI tends to be non-deterministic in that it won't always give the same output even with the same question asked multiple times.
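
    A toy illustration of why (very simplified - real models do this over
    tens of thousands of candidate tokens at every step): the next token
    is sampled from a probability distribution rather than always taking
    the top pick, so two runs of the same prompt can diverge.

        // Temperature sampling over a tiny made-up distribution.
        function softmax(logits, temp) {
            const exps = logits.map(x => Math.exp(x / temp));
            const sum = exps.reduce((a, b) => a + b, 0);
            return exps.map(e => e / sum);
        }

        function sample(tokens, logits, temp) {
            const probs = softmax(logits, temp);
            let r = Math.random();
            for (let i = 0; i < tokens.length; i++) {
                r -= probs[i];
                if (r <= 0) return tokens[i];
            }
            return tokens[tokens.length - 1];
        }

        // Run it twice - you'll usually get "Friday", but not always.
        console.log(sample(["Friday", "Monday", "Tuesday"], [2.0, 1.5, 0.3], 1.0));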

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Rob Mccart@VERT/CAPCITY2 to NIGHTFOX on Sunday, December 28, 2025 08:09:11
    >Yeah, that's definitely the case. And that's true about it not always giving
    >false info. From what I understand, AI tends to be non-deterministic in that it
    >won't always give the same output even with the same question asked multiple
    >times.

    And a big part of the false info problem is that a lot of AI systems
    will never say "I don't know". If they can't find the correct
    answer, they will give you something else, maybe close, if you're lucky.

    (If we don't want to say it 'made up' the answer..) B)

    ---
    þ SLMR Rob þ Ever stop to think and then forget to start again?
    þ Synchronet þ CAPCITY2 * Capitol City Online
  • From Mortar@VERT/EOTLBBS to jimmylogan on Monday, December 29, 2025 15:10:34
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Fri Dec 26 2025 17:08:43

    I've learned that part of getting the right info is to ask the right question, or ask it in the right way. :-)

    Reminds me of the old computer adage, "garbage in, garbage out". Or you could take the Steve Jobs approach: You're asking it wrong.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Mortar@VERT/EOTLBBS to Nightfox on Tuesday, December 30, 2025 00:33:14
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Fri Dec 26 2025 17:40:25

    ...it won't always give the same output even with the same question asked multiple times.

    If it was truly AI, it would've said, "You've asked that three times. LEARN TO READ!"

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Rob Mccart@VERT/CAPCITY2 to MORTAR on Wednesday, December 31, 2025 08:20:30
    Reminds me of the old computer adage, "garbage in, garbage out".

    I think that saying works with People brains as well... B)

    ---
    þ SLMR Rob þ Another example of random unexplained synaptic firings
    þ Synchronet þ CAPCITY2 * Capitol City Online
  • From jimmylogan@VERT/DIGDIST to Nightfox on Saturday, January 24, 2026 13:14:03
    Nightfox wrote to jimmylogan <=-

    Sorry - didn't mean to demand anything. I just meant the fact that someone says it gave false info doesn't mean it will ALWAYS give false info. The burden is still on the user to verify output.

    Yeah, that's definitely the case. And that's true about it not always giving false info. From what I understand, AI tends to be non-deterministic in that it won't always give the same output even
    with the same question asked multiple times.

    Yep! I think that's part of the fuzzy logic. I've found that if I give it
    MORE context, I get more specific answers to my current issue/condition.


    ... I put a dollar in one of those change machines...nothing changed.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Mortar on Saturday, January 24, 2026 13:14:03
    Mortar wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Fri Dec 26 2025 17:08:43

    I've learned that part of getting the right info is to ask the right question, or ask it in the right way. :-)

    Reminds me of the old computer adage, "garbage in, garbage out". Or
    you could take the Steve Jobs approach: You're asking it wrong.

    Exactly!!!



    ... As I said before, I never repeat myself
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Mortar on Saturday, January 24, 2026 13:14:03
    Mortar wrote to Nightfox <=-

    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Fri Dec 26 2025 17:40:25

    ...it won't always give the same output even with the same question asked multiple times.

    If it was truly AI, it would've said, "You've asked that three times. LEARN TO READ!"

    LOL - good point!!!



    ... We all live in a yellow subroutine...
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Saturday, January 24, 2026 16:39:52
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Sat Jan 24 2026 01:14 pm

    Yep! I think that's part of the fuzzy logic. I've found that if I give it MORE context I get more specific answers to my current issue/condition.

    Yep. I've heard of "prompt engineering" referring to making good prompts for AI to get what you need. I've also heard you can go a step further and give the AI some criteria and have it iterate over its answers and check itself against your criteria; it may take a little longer, but in that way, you're basically making it do more of the work to get an answer that's helpful to you.
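
    Roughly a loop like this - askModel() is a hypothetical stand-in for
    whatever chat API you'd actually call, not a real library function:

        // Iterate: answer, self-check against criteria, revise, repeat.
        async function askModel(prompt) {
            return "PASS";  // hypothetical: wire this to your chat API
        }

        async function answerWithChecks(question, criteria, maxTries) {
            let answer = await askModel(question);
            for (let i = 0; i < maxTries; i++) {
                const review = await askModel(
                    "Check this answer against these criteria: " + criteria +
                    "\nAnswer: " + answer + "\nReply PASS, or explain what fails.");
                if (review.trim().startsWith("PASS"))
                    break;
                answer = await askModel(
                    "Revise the answer to fix: " + review +
                    "\nOriginal question: " + question);
            }
            return answer;  // still verify it yourself before trusting it
        }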

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com