Most people use ChatGPT to answer simple queries, draft emails, or produce useful (and useless) code. But spyware companies are now exploring how to use it and other emerging AI tools to surveil people on social media.
In a presentation at the Milipol homeland security conference in Paris on Tuesday, online surveillance company Social Links demonstrated ChatGPT performing “sentiment analysis,” in which the AI assesses the mood of social media users and highlights commonly discussed topics within a group. That analysis can then help predict whether online activity will spill over into physical violence and require law enforcement action.
Founded by Russian entrepreneur Andrey Kulikov in 2017, Social Links now has offices in the Netherlands and New York. Meta dubbed the company a spyware vendor in late 2022, banning 3,700 Facebook and Instagram accounts that Social Links had allegedly used to repeatedly scrape the two sites. Social Links denies any link to those accounts, and the Meta claim hasn’t harmed its reported growth: sales executive Rob Billington said the company had more than 500 customers, half of them based in Europe and just over 100 in North America. That Social Links is using ChatGPT shows how OpenAI’s breakout tool of 2023 can empower a surveillance industry keen to tout artificial intelligence as a tool for public safety.
But using AI tools like ChatGPT to augment social media surveillance will likely “scale up individualized monitoring in a way that could never be done with human monitors,” Jay Stanley, senior policy analyst at the American Civil Liberties Union, told Forbes.
That’s a problem not just because this kind of technological eavesdropping could amplify inaccuracies or biases. It could also chill online discourse by making everyone feel “that they’re being watched, not necessarily by humans, but by AI agents that have the ability to report things to humans who can bring consequences down on your head,” Stanley added.
ChatGPT maker OpenAI didn’t respond to requests for comment. Its usage policy says it does not allow “activity that violates people’s privacy,” including “tracking or monitoring an individual without their consent.”
“We strictly adhere to OpenAI policies,” said Social Links communications director Hector Talavera. “We only use ChatGPT for analyzing text, including summarizing content, identifying topics, classifying text as positive, neutral, or negative, and evaluating sentiment for various elements in the text.” (He also reiterated a previous statement in which the company said it has never had ties to the Russian government and that Kulikov condemned Russia’s invasion of Ukraine.)
Meta spokesperson Ryan Brack said the company has a team of more than 100 people focused on combating unauthorized scraping. “We enforce our policies against unauthorized scrapers, including legal action, when we find our terms have been violated,” Brack said.
Social Links’ Talavera said Meta had made an error in tying it to the 3,700 banned accounts, adding that the company does not mass-scrape social media. Instead, a user asks Social Links’ software to retrieve posts relevant to their investigation and then analyzes them within its interface. The data is stored on the user’s computer, not on a Social Links server. “It is more convenient and effective than using Google and web browsers, but basically, it is the same,” Talavera added.
It remains a powerful tool for monitoring people’s activities online, one the company promotes as useful for tracking protest movements. In a conference demo, Social Links analyst Bruno Alonso used the software to evaluate the online reaction to a controversial deal recently cut by Spain’s acting prime minister to give amnesty to Catalan politicians who’d attempted to gain regional independence in 2017. The tool scanned Twitter for posts containing keywords and hashtags, including “amnesty,” and ran them through ChatGPT, which assessed their sentiment as positive, negative or neutral, displaying the results in an interactive graph. The tool can also quickly summarize and dissect online discussions on major platforms like Facebook, picking out commonly discussed topics.
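Social Links hasn’t published how its pipeline is wired together, but the workflow Alonso demonstrated (filter posts by keyword, hand each one to ChatGPT for a one-word sentiment label, then tally the results) can be approximated in a short script. The sketch below uses OpenAI’s Python client; the model choice, prompt, and sample posts are assumptions for illustration, not the company’s actual code.

```python
# A minimal sketch of a keyword-filter-then-classify sentiment pipeline,
# using OpenAI's Python client (openai>=1.0, OPENAI_API_KEY set in the
# environment). The model name, prompt, and posts are illustrative
# assumptions, not Social Links' implementation.
from collections import Counter

from openai import OpenAI

client = OpenAI()

def classify_sentiment(post: str) -> str:
    """Ask the model to label a single post as positive, negative, or neutral."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of the user's text. Reply with "
                    "exactly one word: positive, negative, or neutral."
                ),
            },
            {"role": "user", "content": post},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    # Fall back to neutral if the model strays from the three labels.
    return label if label in {"positive", "negative", "neutral"} else "neutral"

# Hypothetical posts, pre-filtered for keywords such as "amnesty".
posts = [
    "The amnesty deal is a betrayal of the rule of law.",
    "Amnesty is a sensible step toward reconciliation.",
    "Parliament debates the amnesty bill on Tuesday.",
]

print(Counter(classify_sentiment(p) for p in posts))
# e.g. Counter({'negative': 1, 'positive': 1, 'neutral': 1})
```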
It’s also possible within Social Links to search for facial recognition matches once the tool has flagged someone as expressing “negative” sentiment on social media, according to Alonso’s presentation. The user can take a face and, using Social Links’ own algorithms, look for matches across social media, giving police a wider view of the individual’s identity. “The possibilities are really endless,” Alonso added. Talavera said the company doesn’t store facial images but scours the web for matches. He likened it to a reverse image search on Google, though Social Links also uses its own facial recognition software, which can look through public social media groups and users’ photo collections.
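For readers curious what “looking for matches” typically means computationally, the standard technique is to reduce each face to an embedding vector and rank candidates by similarity. The toy sketch below shows that general idea with random stand-in embeddings and cosine similarity; it is not Social Links’ proprietary algorithm, and every name and number in it is an assumption.

```python
# A toy illustration of the general face-matching idea: rank candidate
# profiles by cosine similarity between face embeddings. Illustrative only;
# the dimensions, names, and data here are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(seed=42)

# Stand-ins for 128-dimensional embeddings that a face recognition model
# would extract from profile photos found across public social media.
gallery = {f"profile_{i}": rng.normal(size=128) for i in range(1_000)}
query_embedding = rng.normal(size=128)  # embedding of the flagged face

# Rank the gallery and keep the five closest candidates for human review.
top_matches = sorted(
    ((name, cosine_similarity(query_embedding, emb)) for name, emb in gallery.items()),
    key=lambda pair: pair[1],
    reverse=True,
)[:5]
print(top_matches)
```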
The demonstration was one of many focused on artificial intelligence at Milipol, a sprawling trade show where companies market weaponry and surveillance technology to law enforcement and homeland security agencies. Andy Martin, a British engineer for Israeli smartphone forensics giant Cellebrite, said large language models like ChatGPT would be hugely beneficial for all manner of law enforcement operations, from combing through call records to find anomalies in a person’s “pattern of life” to “augmented interviews,” in which the AI feeds the interviewer information during an interrogation.
He warned, however, that law enforcement must be transparent with its use of AI because of its reliability and bias issues. “There is never going to be a way of making AI unbiased,” he said, noting, as have others, that technologies programmed by humans reflect human fallibility.
Also in attendance: Italian surveillance company Cy4gate, whose new Gens.AI tool creates convincingly human social media profiles from a list of characteristics. Not only does the AI render legitimate-looking avatars, it can also set them loose on platforms like Facebook and Telegram as autonomous AI personas. Such fake profiles are used by undercover investigators to learn more about a criminal suspect, though typically they’re controlled by humans, not AI. Cy4gate claims its avatars are realistic enough to “get close to the target and build a trust relationship” while avoiding detection. One avatar generated by Gens.AI and showcased at Milipol – a female personal trainer in her early 30s – was live across social media sites, including Facebook, where it had been active since 2019.
Meta outlaws undercover police accounts; soon after Forbes flagged the avatar’s account to Meta, it was taken down.
Combining the likes of Social Links and Gens.AI raises the very real possibility of a bizarre AI echo chamber, in which AI social media surveillance software made by one spyware company monitors AI personas created by another, with ChatGPT and other large language models in the mix. In short: the dawn of AI surveillance cannibalism.