Google's Top Search Results Might Be Wrong

Spammers and scammers are using artificial intelligence (AI) tools to create content designed to trick people into downloading malware and viruses, and Google's algorithms are ranking some of those pages in top spots, ahead of legitimate information and listings.

AI adds a new layer to the problem of erroneous search rankings, with misleading targeted ads and low-quality websites engineered to surface at the top of the results page.

The Wall Street Journal reported that this can lead to scams intended to steal credit-card numbers and other personal information.

Nicole Nguyen, a personal tech columnist at the Journal, provided several disturbing examples. The most concerning surfaced in the results of a search for how to change her default Google account.

The top result was an article posted on LinkedIn under the byline of Morgan Mitchell, identified as a content manager at Adobe.

It turns out that Mitchell, credited as the author of 150 articles in Q&A format, does not exist. Many of those articles include customer-service phone numbers, but none of the numbers belongs to Google or Adobe.

Mark Williams-Cook, a search-engine specialist and director at marketing agency Candour, told the WSJ that to rank high in search results, spammers now publish posts on established and authoritative sites that Google tends to favor, such as LinkedIn, Reddit and Quora.

“Web spam is not new, but the tools are, and they have lowered the barrier for entry,” Williams-Cook told the WSJ.

After Adobe confirmed that no one named Morgan Mitchell works at the company, Williams-Cook said he suspected the profile and its posts were created with AI, which makes it easy to generate content and post it to social networks without it being flagged as misinformation or spam.

Spammers also are using AI-generated images to create unique dating profiles, according to findings from Satnam Narang, a senior staff engineer at Tenable, as reported by Bloomberg.

Narang found a Bumble profile for an “attractive 36-year-old brunette woman” named Megan. He ran the photo through an AI-detection website and concluded that Megan was likely a fake.

Amid all this, OpenAI has introduced Sora, a model being trained to understand and simulate the physical world in motion. It generates video from text, raising the risk of even more convincing fraudulent content.

The Federal Trade Commission (FTC) is taking steps to make fraudulent AI impersonation illegal and is seeking public comment on a supplemental rule that would prohibit the impersonation of individuals. The proposed rule change, announced Thursday, would extend existing protections and clarify how close a resemblance an image must bear to qualify as impersonation.

The changes would also make it possible for the agency to more quickly target the makers of the tools used in these types of scams. An existing rule covers the impersonation of government and businesses, but it does not extend to private individuals; the proposed expansion of the final impersonation rule would close that gap.

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” said FTC Chair Lina M. Khan. “With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever.”
