A RECENT study has revealed shifting attitudes toward how we communicate with artificial intelligence, with nearly half of respondents believing that manners matter when it comes to robots.
The poll, which surveyed 2,000 US adults, indicates that a growing number of Americans believe in being “nice” and “polite” towards online chatbots and virtual assistants like Alexa and Siri.
48% of Americans say they’ve adopted a “polite manner” when it comes to working or chatting with artificial technology, including automated services and chatbots.
Commissioned by Talker News and conducted by Talker Research, the study highlighted Americans’ behavioral habits when interacting with “a robot of some kind.”
The study also found that younger generations are more likely to be polite when interacting with AI.
56% of Gen Z respondents claim being polite is their “default style” when “talking” to or using automated technology.
Millennials are also likelier to be well-mannered during their AI encounters, with 52% of Americans born between 1981 and 1996 claiming to be nice to technology.
However, the study also showed that courteous behavior is markedly less common among older generations.
Only 44% of Gen X reported they adopt a more respectful tone with AI, while just 39% of Baby Boomers say the same.
The survey also asked respondents who reported being polite why they chose to act that way toward someone – or rather something – that “wasn’t human.”
68% of those respondents said it was simply “their way” of being, or that they were taught to treat others – even AI – the way they would like to be treated.
Meanwhile, 29% of self-described polite users said they felt everyone “deserves” to be treated with dignity and respect, human or not.
According to Microsoft’s Kurtis Beavers, a director on the design team for Microsoft Copilot, an AI “companion,” manners do matter when it comes to interacting with artificial intelligence.
“Using polite language sets a tone for the response,” he said, as reported by Microsoft.
Using basic etiquette when interacting with AI “helps generate respectful, collaborative outputs.”
Beavers explains that large language models (LLMs), a form of generative AI, are trained on human conversations.
America’s Approach to Artificial Intelligence
2,000 US adults were surveyed about their approaches to AI.
These users were specifically asked whether they were “nice” and “polite” to artificial technology, including chatbots, virtual assistants like Alexa and Siri, or other robots.
- I’m polite – I say hello, please, and thank you, and ask things nicely in general: 44%
- I’m not polite, but I’m to the point – I ask and expect an answer: 25%
- I’m sometimes impolite – I’ll swear or be abrupt with AI: 7%
- None of the above: 24%
Thus, if an AI encounters a polite human, it’s more than likely to be polite back.
Beavers also suggested that generative AI “mirrors” the user’s input in its responses, including its “level of professionalism, clarity, and detail.”
“It’s a conversation,” he said, adding that it’s up to the human user to “set the vibe.”
Still, not everyone who interacts with AI is compassionate.
500 US adults – or 25% of the surveyed users – say they’re not necessarily polite to AI, but instead take a more “functional” approach.
They may not say “please” or “thank you,” but they keep most emotion out of the encounter.
27% of surveyed users believe that it’s “ok” to be rude or mean to artificial technology like Alexa or Siri.
They reason that the technology harbors “no feelings,” and thus won’t resent or dislike being spoken down to – or shouted at.
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal responses to this: in 2016, the EU adopted legislation to protect personal data (the General Data Protection Regulation), and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as an AI dispensing incorrect health advice.
Surprisingly, men and women both tend to agree that AI “deserves manners,” with 49% of men and 47% of women responding they’re polite to AI.
However, men are more likely to feel at ease with being rude to or swearing at AI.
34% of men claimed they’re fine with being impolite to artificial tech, compared to just 20% of women.
Ultimately, just like with the (human) individuals you encounter in real life, it pays to be nice to artificial intelligence.
Nearly two in five Americans – 39% – believe that one day, our past behavior towards artificial technology, chatbots, and the like will be “taken into account somehow.”
And, while AI can’t necessarily be “mean” to you or take offense to your tone yet, it may not always be the bigger “person.”
Just as with humans, kindness counts – and a few “pleases” and “thank yous” go a long way.