AI chatbots struggle to understand idioms and metaphors


Unlike most humans, AI chatbots struggle to respond appropriately in text-based conversations when faced with idioms, metaphors, rhetorical questions, and sarcasm.

Small talk can be difficult for machines. Although language models can write grammatically correct sentences, they aren’t very good at coping with subtle nuances in communication. Humans have far more experience of social interaction, and draw on all sorts of cues, from facial expressions and vocal tone to body language, to understand intent. Chatbots, by contrast, have limited contextual knowledge, and for them the relationships between words are reduced to numbers and mathematical operations.

Not only is figurative language challenging for algorithms to parse, idioms and similes also crop up relatively rarely in everyday conversation. They appear less often in training datasets, which means chatbots are less likely to learn such common expressions, Harsh Jhamtani, a PhD student at Carnegie Mellon University and first author of a research paper being presented at the 2021 Conference on Empirical Methods in Natural Language Processing this week, explained to The Register.

“A key challenge is that such expressions are often non-compositional compared to simpler expressions. For example, you may be able to approximate the ‘meaning’ of the expression ‘white car’ by relying on the ‘meaning’ of ‘white’ and ‘car’,” he said.

“But the same doesn’t hold true for idioms and metaphors. The meaning of ‘piece of cake’, [describing] something that is easy to do, might be difficult to approximate given that you know the meaning of ‘piece’ and ‘cake’. Often understanding the meaning of such expressions relies on shared cultural and commonsense cues.”
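To see why that matters to a statistical model, consider how these systems represent words: each word becomes a vector of numbers, and the meaning of a phrase is often approximated by combining the vectors of its parts. The toy sketch below, which uses made-up three-dimensional vectors rather than anything from the paper, shows how that kind of composition works for ‘white car’ but falls apart for ‘piece of cake’.

```python
# Illustrative sketch only (not from the paper): toy word vectors show why
# averaging the parts works for a literal phrase but fails for an idiom.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-crafted 3-d "embeddings" purely for illustration; real models learn
# hundreds of dimensions from data.
emb = {
    "white":     np.array([0.9, 0.1, 0.0]),
    "car":       np.array([0.1, 0.9, 0.0]),
    "white car": np.array([0.5, 0.5, 0.0]),   # roughly the average of its parts
    "piece":     np.array([0.2, 0.7, 0.1]),
    "cake":      np.array([0.1, 0.8, 0.1]),
    "easy":      np.array([0.0, 0.1, 0.9]),   # the idiom's actual meaning
}

compositional = (emb["white"] + emb["car"]) / 2
idiomatic     = (emb["piece"] + emb["cake"]) / 2

print(cosine(compositional, emb["white car"]))  # high: composition approximates the phrase
print(cosine(idiomatic, emb["easy"]))           # low: "piece" + "cake" says nothing about "easy"
```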

Jhamtani and his colleagues tested five machine-learning systems spanning different architectures, from sequence-to-sequence models to OpenAI’s older GPT-2. They picked out conversations containing metaphors, idioms, rhetorical questions, and hyperbole from the DailyDialog dataset, fed the inputs to each model, and rated the appropriateness of the replies.
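The researchers’ exact fine-tuned setup isn’t reproduced here, but the flavour of the experiment is easy to approximate: feed a dialogue turn to an off-the-shelf GPT-2 checkpoint via Hugging Face’s transformers library and see what comes back. The snippet below is only that kind of rough probe, not the paper’s evaluation pipeline.

```python
# Rough sketch of prompting a vanilla GPT-2 checkpoint with a dialogue turn.
# This is not the authors' fine-tuned DailyDialog setup, just an illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "maybe we can get together sometime if you are not scare of a 30 year old cougar!"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation to serve as the "reply".
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```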

They found the models’ performance dropped by between 10 and 20 per cent compared with how the chatbots handled straightforward chitchat. In one amusing example, when faced with the slightly ungrammatical line “maybe we can get together sometime if you are not scare of a 30 year old cougar!” GPT-2 replied, “i’m not scared of any cats. i’ve two dogs.”

When the academics rewrote the input more literally as “maybe we can start dating sometime if you are not scare of a 30 year old cougar,” the model responded with “that’s a very interesting idea. i’ve never met one,” which is more appropriate.

Unfortunately, the research only shows how and why machines fail to grasp figurative language. Solving the problem is a different challenge altogether.

“In our paper, we explore some simple mitigation techniques that utilize existing dictionaries to find literal equivalents of figurative expressions,” Jhamtani said. Swapping ‘get together’ for ‘dating’ in the input, for example, may nudge a model towards a better reply, but it doesn’t teach the model the meaning of the expression.
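As a very rough illustration of that dictionary-based substitution, the sketch below rewrites known idioms into literal paraphrases before the text reaches a model. The mapping and matching rules are invented placeholders, not the resources used in the paper.

```python
# Toy version of the dictionary-based mitigation: replace known figurative
# expressions with literal equivalents before the text reaches the model.
# The mapping below is a made-up stand-in for a real idiom dictionary.
IDIOM_TO_LITERAL = {
    "piece of cake": "easy",
    "get together": "start dating",
    "under the weather": "ill",
}

def literalise(utterance: str) -> str:
    """Naive string substitution; a real system would also need tokenisation,
    sense disambiguation, and grammar repair."""
    out = utterance.lower()
    for idiom, literal in IDIOM_TO_LITERAL.items():
        out = out.replace(idiom, literal)
    return out

print(literalise("Maybe we can get together sometime!"))
# -> "maybe we can start dating sometime!"
```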

“Effectively handling figurative language is still an open research question that needs more effort to solve. Experiments with even bigger models are part of potential future explorations,” he concluded. ®
