“We call on the UN to invest in data-driven predictive methods for promoting peace”, Nature researchers on the eve of ViEWS conference

  • 4 min read
  • 16 Oct 2018


Yesterday, in an article published in Nature, the international journal of science, prominent political researchers Weisi Guo, Kristian Gleditsch, and Alan Wilson discussed how artificial intelligence can be used to predict outbreaks of violence, potentially saving lives and promoting peace. The article sets the stage for the ongoing two-day ViEWS conference, organized by Uppsala University in Sweden, which focuses on Violence Early-Warning Systems.

According to the researchers, governments and international communities can often flag spots that may become areas of armed violence using algorithms that forecast risk, similar to the methods used for forecasting extreme weather. These algorithms estimate the likelihood of violence by extrapolating from statistical data and analyzing text in news reports to detect tensions and military developments. Artificial intelligence is now poised to boost the power of these approaches.
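
To make the idea concrete, here is a minimal sketch of the kind of risk-forecasting approach described above: a classifier trained on country-month indicators to estimate the probability of violence. The feature names and data below are invented purely for illustration; real early-warning systems such as ViEWS use far richer data and models.

```python
# Illustrative sketch only: a toy conflict-risk forecaster in the spirit of
# the approaches described above. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical country-month features:
#   past_conflict_events, news_tension_score, troop_movements
X = rng.random((500, 3))
# Hypothetical label: 1 if armed violence occurred in the following month
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
     + 0.1 * rng.normal(size=500) > 0.6).astype(int)

model = LogisticRegression()
model.fit(X, y)

# Estimated probability of violence for a new (hypothetical) country-month
new_region = np.array([[0.8, 0.7, 0.4]])
print("Estimated risk of violence:", model.predict_proba(new_region)[0, 1])
```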

Existing AI systems in this area include Lockheed Martin’s Integrated Crisis Early Warning System, the Alan Turing Institute’s project on global urban analytics for resilient defense, which studies the mechanics that cause conflict, and the US government’s Political Instability Task Force.

The researchers believe artificial intelligence will help conflict models make more accurate predictions. This is because machine learning techniques can surface more information about the wider causes of conflicts and their resolution, and can support theoretical models that better reflect the complexity of social interactions and human decision-making.

How AI and predictive methods could prevent conflicts


The article describes how AI systems could help prevent conflicts and inform actions that promote peace. Broadly, the researchers suggest the following measures to improve conflict forecasting:

  1. Broaden data collection
  2. Reduce unknowns
  3. Develop theories
  4. Set up a global consortium


Ideally, AI systems should be capable of offering explanations for violence and providing strategies for preventing it. However, this may prove difficult because conflict is dynamic and multidimensional, and the data currently collected is narrow, sparse, and disparate.

AI systems need to be trained to make inferences. Presently, they learn from existing data, test whether their predictions hold, and then refine the algorithms accordingly. This assumes that the training data mirrors the situation being modeled, which is often not the case in practice, and the mismatch can make predictions unreliable. Another important aspect the article describes is modeling complexity: the AI system should decide where it is best to intervene for a peaceful outcome, and how much intervention is needed.
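
To illustrate the train-test-refine loop and why a mismatch between training data and the modeled situation matters, here is a minimal sketch (using the same invented features as the earlier example) that evaluates a model on a later period whose dynamics have shifted. This is an assumption-laden toy, not how any of the systems named in the article actually work.

```python
# Illustrative sketch: evaluating a forecast model on a later time period
# than it was trained on. If the underlying dynamics shift between periods,
# out-of-sample accuracy drops, signalling that the training data no longer
# mirrors the situation being modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def simulate(n, shift=0.0):
    """Generate hypothetical country-month features and violence labels.
    `shift` crudely models a change in conflict dynamics over time."""
    X = rng.random((n, 3))
    logits = 0.6 * X[:, 0] + 0.3 * X[:, 1] - shift * X[:, 2]
    y = (logits + 0.1 * rng.normal(size=n) > 0.5).astype(int)
    return X, y

X_train, y_train = simulate(1000, shift=0.0)   # "historical" period
X_test, y_test = simulate(300, shift=0.5)      # "future" period, changed dynamics

model = LogisticRegression().fit(X_train, y_train)
print("Out-of-sample AUC:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```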

The article also urges conflict researchers to develop a universally agreed framework of theories to describe the mechanisms that cause wars. Such a framework should dictate what sort of data is collected and what needs to be forecast.

They have also proposed that an international consortium be set up to develop formal methods for modeling the steps a society takes toward war. The consortium should involve academic institutions, international and government bodies, and industrial and charity interests in reconstruction and aid work. All research done by its members should use open data, be reproducible, and include benchmarks for results. Ultimately, their vision for the proposed consortium is to “set up a virtual global platform for comparing AI conflict algorithms and socio-physical models.”

They concluded, saying, “We hope to take the first steps to agree to a common data and modeling infrastructure at the ViEWS conference workshop on 15–16 October.”

Read the full article in Nature.

Google Employees Protest against the use of Artificial Intelligence in Military.

‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter

Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles