We’ve learned to make “machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”
Great comment. I do find the octopus example somewhat puzzling, though perhaps that’s just the way the example is set up. I, personally, have never encountered a bear; I’ve only read about them and seen videos. If someone had asked me for bear advice before I’d ever read about them or seen videos, I wouldn’t know how to respond. I might be able to infer what to do from ‘attacked’ and ‘defend’, but I think that’s possible for an LLM as well. So I’m not sure this example offers a salient difference between the octopus and me before I learnt about bears.
Although there are definitely elements of bullshitting there - I just asked GPT how to defend against a wayfarble with only deens on me, and some of the advice was good (e.g. general advice for being attacked, like staying calm and creating distance), and then there was this response, which implies some sort of inference:
“6. Use your deens as a distraction: Since you mentioned having deens with you, consider using them as a distraction. Throw the deens away from your position to divert the wayfarble’s attention, giving you an opportunity to escape.”
But then there was this obvious example of bullshittery:
“5. Make noise: Wayfarbles are known to be sensitive to certain sounds. Clap your hands, shout, or use any available tools to create loud noises. This might startle or deter the wayfarble.”
So I’m divided on the octopus example. It seems to me that there’s potential for that kind of inference and that point 5 was really the only bullshit point that stood out to me. Whether that’s something that can be got rid of, I don’t know.
It’s implied in the analogy that this is the first time Person A and Person B are talking about being attacked by a bear.
This is a very simplistic example, but A and B might have talked a lot about swatting mosquitos when attacked by them, and about how undesirable bears are.
So the octopus develops a “dial” for being attacked (swat the aggressor) and another “dial” for bears (they are undesirable). Maybe there’s also a third dial for mosquitos being undesirable: “too many mosquitos”.
So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you live in the real world and have stood face to face with a bear, experiencing first-hand what that might be like and gaining experience and, perhaps more importantly, context grounded in reality.
ChatGPT might get it right some of the time, but a broken clock is also right twice a day, and that doesn’t make it useful.
Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.