• Pliny@lemmy.fmhy.ml
    1 year ago

    I agree that right now the notion is very dystopian, and with the current iteration of chatbots it doesn’t seem like a realistic long-term solution. But you only have to think a few years down the line, when LLMs have been fine-tuned for this specific use case and AI is as ubiquitous in our society as the iPhone is now, to see how it will become totally normalised.

    • subito@beehaw.org
      1 year ago

      I think it’s dangerous to try to cure loneliness with an AI, regardless of sophistication and tuning, because you end up with a human who’s been essentially deceived into feeling better. Not only that, but they’re going to eventually develop strong emotional attachments to the AI itself. And with capitalism as the driving force of society here in the U.S., I can guarantee you that every abusive, unethical practice surrounding these AIs will become normalized too.

      I can see it now: “If you cancel your $1,000-a-year CompanionGPT, we can’t be held responsible for what happens to your poor, lonely grandma…” Or it will be even more direct and say to the old, lonely person: “Pay $2,500 or we will switch off the ‘Emotional Support’ module on your AI. We accept PayPal.”

      Saying AIs like this will be normalized doesn’t mean they’re an ethical thing to build. Medical exploitation is already normalized in the U.S. Not only is this dystopian, it’s downright unconscionable, in my opinion.

    • Pliny@lemmy.fmhy.ml
      1 year ago

      And just to be clear, I don’t see it as a substitute for human interaction, more of a bolt-on that helps people day to day.