
Military artificial intelligence: safety and ethical issues explained

By Connor Freitas

It has been described as the battlefield of the 21st century, but some critics fear that military artificial intelligence could increase the number of unlawful killings.

The march towards military artificial intelligence (AI) seems inevitable. Beijing is hoping to establish technological dominance in this space – and Chinese newspapers have already set out a stark vision of what this could look like. Imagine swarms of drones that automatically attack enemies, or malware that infiltrates rivals' networks and steals intelligence.

Situation rooms where commanders decide the best course of action could become a thing of the past as artificial intelligence becomes more prevalent in army ranks. AI-enabled smart weapons could analyse situations and complete missions independently, and some military newspapers in China have set out a vision of technology capable of winning a war before it even begins.

All of this has created a worldwide race for supremacy in what is being dubbed “the battlefield of the 21st century”. Military AI is also being taken seriously by another superpower, the United States, which recently advertised a job vacancy for a military ethicist capable of navigating the countless grey areas associated with this technology.

When it comes to military artificial intelligence, ethics are a huge issue. The scenes depicted in The Terminator may look slick and cool but, in the real world, great thought needs to be given to ensuring this technology is applied safely and lawfully. Plus, with countries around the world pursuing these capabilities – each with a different agenda – a lack of standardisation could result in a worrying absence of ground rules when such weaponry is deployed for the first time. Experts have warned that different cultures have different values, meaning some nations may not share the ethical considerations of others.

Here, we’re going to look at the implications of military AI, and examine the ethics and safety issues associated with it.

The battle for military AI

As Russian President Vladimir Putin once opined, whoever becomes the leader in AI will become the ruler of the world. At first glance, it seems appealing to send robots on to battlefields instead of humans – but unless these machines are intelligent enough to accurately assess threats and navigate their surroundings, the exercise would be fruitless. Although AI systems have been shown to outperform even the most seasoned military pilots when pitted against them in simulated environments, the challenge comes in unfamiliar settings where these computers have never been tested before.

Military artificial intelligence can also prove useful in analysing the vast amounts of data gathered during surveillance operations. Just look at the US, whose drones captured 37 years' worth of footage in 2011 alone. Again, ethical concerns are never far away – just as Facebook, Google and Amazon are scrutinised over how they collect and use data, armies are likely to come under pressure to ensure these impressive capabilities are not abused.

Human rights organisations including the American Civil Liberties Union (ACLU) fear that AI in army infrastructure could increase the number of unlawful killings and civilian casualties – and, to compound the problem, information about such operations may not enter the public domain because of mounting secrecy. In a statement, the ACLU expressed particular concern about the relaxation of standards in the US military for lethal strikes. Whereas “near certainty” that a target was present used to be mandatory, this has since been downgraded to “reasonable certainty” – creating a risk that individuals could be misidentified and innocent bystanders killed.

Some researchers argue that the military community is best placed to establish the ethical boundaries for the use of AI, pointing to the long precedent of international agreements concerning the rules of war. Of course, there will always be the risk of rogue nations failing to follow such frameworks – in a similar vein to North Korea, which has been accused of pursuing a weapons of mass destruction programme despite international condemnation and economic sanctions. There are also calls for AI practitioners, the people who bring the software to life, to play a central role in weighing these ethical considerations. To this end, there are proposals for ethics to be taught as part of AI courses.

If done correctly, there is potential for artificial intelligence to help the military achieve its goals in a more ethical way – improving precision, reducing bloodshed and equipping officials with better information about the threats facing national security. Conversations at an international level, before military AI goes mainstream, may be pivotal to achieving this.

FURTHER READING: Cyber crimes, North Korea and crypto: should we be worried?

FURTHER READING: Saudi Arabia oil attack: The impact on oil prices
