Talking dogs and chatbots?
Alex Monaghan posted on LinkedIn:
LaMDA tells us it’s sentient, and people believe it because it sounds convincing, even though we know this is complete nonsense. ChatGPT tells us there is legal precedent, and we believe it because its response is beautifully formatted and referenced, even though it has made up all the data. A good UI is no guarantee of good execution.
I can throw a ball for my dog, and tell it to fetch the ball, and order me a pizza before it comes back. The dog will happily disappear, and return with the ball. Should I assume that the pizza has been ordered? No!
I replied:
By default, we assume that an agent that appears to be following the implicit rules of dialogue is actually following those rules. If the dog had been a magical talking dog, had said “OK” before chasing the ball, and had said nothing on its return, we would have assumed that it did order the pizza.
If it didn’t, it should have said so. We would have assumed that the magic that made the dog able to talk also made it able to understand and work with the conventional patterns of obligations and commitments that result from conversations.
Since ordinary dogs are pack animals, it’s reasonable to assume that a talking dog would have an understanding of social relationships and obligations, but of course that could be wrong. It could turn out that magical talking dogs are happy to engage in conversation, perfectly fluent and idiomatic, but bad at understanding what they have implicitly committed themselves to. It may turn out that the talking dog who says “OK” isn’t aware that this signals agreement to a plan and an obligation to report back if the plan fails.