
WASHINGTON (AP) — When disinformation researcher Wen-Ping Liu studied China’s efforts to influence Taiwan’s recent elections using fake social media accounts, he noticed something unusual about the most successful profiles.

They were female, or at least they appeared to be. Fake profiles claiming to be women received more engagement, more attention and more influence than supposedly male accounts.

“Pretending to be a woman is the easiest way to gain credibility,” said Liu, an investigator with Taiwan’s Ministry of Justice.

For everyone from Chinese and Russian propaganda agencies to online scammers and AI chatbots, it pays to present as female online. It's evidence that while technology grows more sophisticated, the human brain remains surprisingly easy to hack, thanks in part to age-old gender stereotypes that have migrated from the real world to the virtual one.

Humans have long attributed human characteristics such as gender to inanimate objects; ships are one example. So it makes sense that human-like features would make fake social media profiles or chatbots more attractive. But questions about how these technologies can reflect and reinforce gender stereotypes are coming to the fore as more voice assistants and AI-powered chatbots enter the market, further blurring the lines between man (and woman) and machine.

“You want to add a little emotion and warmth, and you can do that easily by choosing a woman’s face and voice,” says Sylvie Borau, a marketing professor and online researcher in Toulouse, France. Her work has shown that Internet users prefer “female” bots and find them more human than the “male” versions.

People tend to perceive women as warmer, less threatening and more approachable than men, Borau told The Associated Press. Men, on the other hand, are often perceived as more competent, but also more threatening or hostile. For this reason, many people are consciously or unconsciously more willing to engage with a fake account posing as a woman.

When OpenAI CEO Sam Altman went looking for a new voice for the company's ChatGPT AI program, he turned to Scarlett Johansson, who voiced the eponymous AI assistant in the movie "Her." Johansson said Altman told her that users would find her voice "soothing." She refused his request and threatened to sue when the company settled on a voice she described as "eerily similar." OpenAI shelved the new voice.

Feminine profile pictures, especially those that show women with flawless skin, full lips and big eyes in provocative outfits, can be another online lure for many men.

Users also treat bots differently depending on their perceived gender: Borau’s research has found that “female” chatbots are far more likely to experience sexual harassment and threats than “male” bots.

Female social media profiles receive, on average, more than three times as many views as male ones, according to an analysis of over 40,000 profiles conducted for AP by Cyabra, an Israeli technology company specializing in bot detection. Female profiles that say they are younger receive the most views, Cyabra found.

“If you create a fake account and pretend to be a woman, the account will have a greater reach than if you pretend to be a man,” Cyabra’s report says.

The online influence campaigns of countries like China and Russia have long used fake women to spread propaganda and disinformation. These campaigns often exploit people’s views of women. Some pose as wise, caring grandmothers who dispense down-to-earth wisdom, while others imitate young, conventionally attractive women who enjoy talking politics with older men.

Last month, researchers at NewsGuard found that hundreds of fake accounts, some with AI-generated profile pictures, were being used to criticize President Joe Biden. The campaign emerged after some Trump supporters began posting personal photos announcing that they “will not vote for Joe Biden.”

While many of the posts were authentic, more than 700 came from fake accounts. Most of the profiles claimed to be young women in states like Illinois or Florida; one was called PatriotGal480. But many of the accounts used nearly identical language and had profile photos that were AI-generated or stolen from other users. And while the researchers couldn’t say for sure who was running the fake accounts, they found dozens with ties to countries like Russia and China.

X removed the accounts after NewsGuard contacted the platform.

A UN report suggested there’s an even more obvious reason why so many fake accounts and chatbots are female: they were created by men. The report, titled “Are Robots Sexist?”, examined gender differences in the tech industry and concluded that greater diversity in programming and AI development could lead to fewer sexist stereotypes being embedded in products.

For programmers who want to make their chatbots as human as possible, Borau says, this presents a dilemma: If they choose a female persona, are they promoting sexist views about real women?

“It’s a vicious circle,” Borau said. “Humanizing AI could dehumanize women.”
