Governments can and do use artificial intelligence to direct their citizens and their politics. But are we ready for how far this could go?
Governments have access to large amounts of data that they can – and often do – use to analyze and predict the behavior of their citizens with artificial intelligence (AI) techniques.
However, while AI can help policymakers by providing highly accurate predictions, identifying trends and patterns, uncovering complex associations and improving efficiency, it may also introduce risks to the privacy and security of citizens and threaten free decision-making in society.
Researchers from three universities in Spain explored these risks in a study that interviewed government officials about their institutions’ use of artificial intelligence. One councilor said AI helped his city predict outcomes and make better decisions during the recent COVID-19 pandemic. “The use of artificial intelligence to predict possible infections and deaths has been used with statistical models. These models have helped us to improve both health care and the movement of people in cities when a lockdown was necessary,” the councilor said. However, the same official also noted: “The use of apps to track the location of users’ devices, albeit still anonymously, has highlighted the need to regulate the use of both artificial intelligence technology and other similar technologies.”
Another Spanish politician interviewed said: “We use artificial intelligence to predict possible criminal acts in the city. When artificial intelligence and our analyses tell us that there is a neighborhood where serious crimes, such as murders, may be committed, we increase the number of police patrols in these neighborhoods.”
The recent exponential growth in the use of AI saw the emergence of the new field of behavioral data science, which combines techniques from behavioral science, psychology, sociology, economics and business, and uses the processes of computer science, data-centric engineering, statistical models, information science and/or mathematics to understand and predict human behavior using AI.
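The study does not describe any particular model, but the statistical prediction of behavior mentioned above is often done with standard classification techniques. As a purely illustrative sketch (all features, data, and the choice of logistic regression are hypothetical, not taken from the research discussed here), a minimal behavioral-prediction model in Python might look like this:

```python
import math
import random

random.seed(0)

# Hypothetical synthetic data: two made-up behavioral features per person,
# and a binary outcome generated from a known underlying rule plus noise.
def make_point():
    x1, x2 = random.random(), random.random()
    y = 1 if x1 + x2 + random.gauss(0, 0.2) > 1.0 else 0
    return (x1, x2), y

train = [make_point() for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a logistic-regression model by plain batch gradient descent.
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(2000):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in train:
        err = sigmoid(w1 * x1 + w2 * x2 + b) - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    n = len(train)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# Predicted probability that a new (hypothetical) individual
# exhibits the behavior, given their feature values.
high = sigmoid(w1 * 0.9 + w2 * 0.8 + b)
low = sigmoid(w1 * 0.1 + w2 * 0.1 + b)
```

The point of the sketch is only that, given enough behavioral data, such models output per-person probabilities – which is precisely what raises the surveillance and privacy questions discussed below.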
While this predictive power can be deployed to better design and implement policies, as the first councilor noted, privacy concerns are growing. As more data is obtained from citizens, predictions may soon become nearly as accurate as direct observation, which would heighten concerns about state surveillance. Governments with this kind of intelligence could risk infringing on privacy and hampering free decision-making in society.
These technologies can also be used unlawfully to change the behavior of citizens, including influencing election results. For example, behavioral data from US Facebook users was analyzed using behavioral prediction algorithms developed by Cambridge Analytica and employed to influence the outcome of the 2016 US presidential campaign between Donald Trump and Hillary Clinton.
Many questions remain regarding the risks to citizen privacy posed by government use of AI and behavioral data science. These include: the ethics of collecting and analyzing data generated unintentionally by citizens; how the results the government obtains from these data analyses should be explained to citizens; and whether (and how) such analysis may violate individuals’ privacy.
Governments can better achieve the UN Sustainable Development Goal of effective, accountable and responsive institutions if they use AI to improve services to citizens and society, and adopt ethical principles and values to guarantee citizens’ privacy. Solutions could include developing legislation on AI and behavioral data science to limit potentially unethical uses and prevent improper use of this technology. Effective government practices and policies will help citizens have more confidence in the use of AI, behavioral data science, and mass analytics of collective behavior and intelligence. In today’s global culture, where the Internet is the primary tool for communication, decisions based on data and behavioral analytics have become essential for public actors. However, with technology legislation often lagging behind, many societies are currently underprepared for this inevitable future.
Jose Ramon Saura is an Associate Professor of Digital Marketing at Rey Juan Carlos University in Spain. His research explores theoretical and practical perspectives of digital marketing and user-generated content (UGC), focusing on data mining, knowledge discovery, and information science. He has worked with a wide range of companies, including Google, Deloitte, L’Oréal, Telefónica, and MRM/McCann. He declares no conflict of interest.
Originally published under Creative Commons by 360info™.