What are the algorithms that will choose the next president like?
Data science and artificial intelligence are playing an ever bigger role in democratic elections: algorithms are becoming more important in political campaigns, even if they still cannot rule a nation.
The short story “Franchise” by Isaac Asimov, published in 1955, imagines the first electronic democracy: the most advanced computer in the world (Multivac) determines the vote of an entire nation with the involvement of a single human voter.
Even if we have not yet arrived at that unsettling future, data science and artificial intelligence are playing an ever bigger role in democratic elections.
Examples include the presidential campaigns of Barack Obama and Donald Trump, the Danish Synthetic Party, and the massive data breach suffered by the Macron campaign.
Monitoring opinion: sentiment analysis
Barack Obama’s 2012 presidential campaign in the United States was one of the first political campaigns to successfully combine big data methods with social network analysis.
Throughout his campaign (and many others since), traditional surveys of voter intention, based on phone calls or in-person interviews, were complemented by social media analysis.
These analytics provide a low-cost, almost real-time way to gauge voter mood. Natural Language Processing (NLP) techniques, notably sentiment analysis, are used for this purpose.
These methods examine the content of tweets, blog posts, and similar sources, and attempt to determine whether the opinions expressed there are favorable or unfavorable toward a given candidate or campaign message.
Their biggest issue is sampling bias: the most active social network users tend to be young and tech-savvy and do not accurately reflect the population as a whole.
This makes it difficult for these tools to accurately predict election outcomes, even though they are quite helpful for studying voting patterns and public opinion.
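As an illustration of how such tools work, here is a minimal lexicon-based sentiment scorer in Python. The word list and example messages are toy data invented for this sketch; production systems use far larger lexicons or trained language models.

```python
# Minimal sketch of lexicon-based sentiment analysis for campaign messages.
# The lexicon and example tweets are toy data for illustration only.

import re

# Hypothetical word-polarity lexicon (real systems use thousands of entries).
LEXICON = {
    "great": 1.0, "hope": 0.8, "win": 0.6, "support": 0.5,
    "corrupt": -1.0, "lies": -0.9, "disaster": -0.8, "fail": -0.6,
}

def sentiment_score(text: str) -> float:
    """Average polarity of the lexicon words found in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

tweets = [
    "Great rally tonight, so much hope for a win!",
    "Another disaster, more lies from this corrupt campaign.",
]
for t in tweets:
    label = "favorable" if sentiment_score(t) > 0 else "unfavorable"
    print(f"{label}: {t}")
```

Aggregating such scores over millions of posts gives the cheap, near real-time mood tracking described above, but the sampling-bias caveat still applies: the scores only describe whoever is posting.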
Election campaign interference: the case of Donald Trump
More worrying than analyzing sentiment on social networks is using those same networks to shape opinion and steer the vote.
Donald Trump’s presidential campaign in the 2016 US election is a well-known example: big data analysis and psychographic profiling played a significant role in a win that the polls had failed to foresee.
It was not mass manipulation; rather, different voters received different messages depending on assumptions about their receptivity to certain arguments.
Alongside ordinary communications from the candidate, they also received skewed, fragmented, and occasionally contradictory material.
The task was entrusted to Cambridge Analytica, a company later embroiled in a scandal over the improper harvesting of data from millions of Facebook users.
Cambridge Analytica’s technique rests on Michal Kosinski’s psychometric research, which showed that a profile built from a small number of a user’s likes can be as accurate as an assessment made by their family or friends.
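To make the idea concrete, here is a toy sketch of predicting a personality trait from likes. The page names and weights are invented for illustration; Kosinski’s actual models are regressions trained on millions of real like vectors.

```python
# Illustrative sketch of predicting a psychological trait from page likes,
# in the spirit of Kosinski's research. Pages, weights, and users are
# invented toy data; real models are trained on millions of like vectors.

# Hypothetical learned weights: how much liking each page shifts the
# predicted "openness" trait (positive = more open).
WEIGHTS = {
    "philosophy_page": 0.75,
    "indie_films": 0.5,
    "travel_blog": 0.25,
    "monster_trucks": -0.25,
    "reality_tv": -0.5,
}
BIAS = 0.0

def predict_openness(likes: set[str]) -> float:
    """Linear score: sum of the weights of the pages the user liked."""
    return BIAS + sum(WEIGHTS.get(page, 0.0) for page in likes)

user_a = {"philosophy_page", "indie_films"}    # score 1.25 -> high openness
user_b = {"reality_tv", "monster_trucks"}      # score -0.75 -> low openness
print(predict_openness(user_a), predict_openness(user_b))
```

Once each voter has such a trait score, a campaign can route different messages to different psychological profiles, which is exactly the micro-targeting described above.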
The issue with this strategy is not the use of technology itself, but the covert character of the campaign, the psychological manipulation of impressionable voters through direct appeals to their emotions, and the deliberate dissemination of false information through bots.
Emmanuel Macron experienced this during the 2017 French presidential election: just two days before the vote, his campaign suffered a massive email theft.
A swarm of bots then spread material that purported to contain proof of wrongdoing but ultimately turned out to be fraudulent.
Government and politics: The Synthetic Party
The prospect that we could one day be governed by artificial intelligence (AI) is no less unsettling than the scenarios above.
In Denmark’s most recent parliamentary elections, the Synthetic Party, led by artificial intelligence and represented by a chatbot named Leader Lars, declared its intention to run for office.
There are people behind the chatbot, notably the MindFuture foundation for art and technology.
Although the Synthetic Party looks extravagant (with proposals as audacious as a universal basic income of more than €13,400 per month, double the average salary in Denmark), it has sparked discussion about whether an AI might one day run a country.
Could a modern, well-trained, and resource-rich AI govern us?
Looking back at recent developments in artificial intelligence, we can see that they arrive quickly, one after another, especially in natural language processing since the introduction of transformer-based architectures.
One of the most impressive and cutting-edge examples is ChatGPT, created by OpenAI.
This chatbot can generate text, carry out complex tasks such as writing computer programs from a few prompts, and respond intelligently to practically any question posed in natural language.
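At the heart of those transformer architectures is the attention mechanism. The following is a minimal sketch of scaled dot-product attention (assuming NumPy is available); models like ChatGPT stack many attention heads and layers on top of this one operation.

```python
# Minimal sketch of scaled dot-product attention, the core operation of
# transformer architectures. Shapes and values are toy examples.

import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    return softmax(scores) @ V       # weighted average of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 tokens, embedding dimension 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one context-aware vector per token
```

Each output row mixes information from every other token, which is what lets these models handle long-range context in natural language.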
Corruption-free, but not transparent
The use of AI in governance would have several benefits. On the one hand, an AI is far more capable than any human of processing data and knowledge to make decisions.
On the other, it would not be swayed by personal interests and would be (in theory) immune to corruption.
Chatbots merely respond: they rely on the data they are given and deliver answers. They do not have the freedom to act or think “spontaneously.”
Rather than active or directing agents, it is better to think of these systems as oracles that can answer queries such as “What do you think would happen if…?” or “What would you recommend in case of…?”
Numerous scientific studies have examined the potential drawbacks and risks of this kind of intelligence, based on large neural networks.
A major issue is the lack of transparency in the decisions they make. They often function as “black boxes,” leaving us in the dark about the reasoning they followed to arrive at a result.
Not to mention that humans operate these systems and, through the texts used to train the AI, may have introduced certain biases, whether knowingly or unwittingly.
Moreover, as many ChatGPT users have discovered, the AI is not immune to providing inaccurate information or recommendations.
Technological advances give us a glimpse of an AI that may eventually “rule us,” though for now not without the necessary human oversight. The debate should soon shift from the technological level to the ethical and social one.