Artificial Intelligence


Artificial intelligence is essential to almost every computer function, from web search to video games, and to tasks such as filtering spam email, focusing cameras, translating documents and responding to voice commands on smartphones. AI has been developed and applied for several decades, and the experience with early systems raises many questions about its full-scale integration into defence systems.

Artificial Intelligence | What could possibly go wrong?


In simple terms, if we fail to align the objectives of an AI system with our own, it could spell trouble for us. Exercising sound judgement is still a significant challenge for machines.

  • Recent advances in robotic automation and autonomous weapon systems have taken military conflict to a whole new level. Unmanned helicopters and land vehicles are constantly being tested and upgraded, and the precision with which these systems can perform military operations is unparalleled.
  • Emerging weapons built on deep learning can ‘correct’ mistakes and even learn from them, maximising tactical efficiency. The level of security built into their design makes them near-impossible to hack and, in some cases, even to ‘abort’ mid-operation. This could result in mass casualties in a situation that was otherwise controllable.
  • An obvious issue is that, in the wrong hands, AI could have catastrophic consequences. Although present systems do not have much ‘independence’, the growing levels of intelligence and autonomy make a malfunctioning AI with disastrous consequences a plausible scenario.

Artificial Intelligence | Who is accountable in case of a mistake?


Autonomous vehicles and weapon systems bring forth the issue of moral responsibility. The primary questions concern the delegation of lethal force to AI systems.

  • If an AI system carries out operations autonomously, what consequences would it face under criminal justice or the laws governing war crimes? As a machine, it cannot be charged with a crime. How would accountability play out if a fully AI-integrated military operation went awry?

Artificial Intelligence | Problems with commercialisation

  • Today’s wars are not fought entirely by national armies. Private military companies play an active role, supplementing armies, providing tactical support and much more. It will not be long before autonomous technologies are commercialised and no longer restricted to government contracts.
  • There is no dearth of private military companies that would jump at the opportunity to grab a share of this technology. The very notion of private armies with commercial objectives wielding autonomous weapons is a dangerous one. Armed with an exceedingly efficient force, they could tip the balance of war in favour of the highest bidder.
  • There are also concerns about the same technology being transferred to terrorist groups; the implications of such a transfer could be horrendous for global peace.

Artificial Intelligence | Case Study


In September 1983, Stanislav Petrov, a Lieutenant Colonel in the Soviet Air Defence Forces, was the duty officer at the command centre for the Oko nuclear early-warning system. The system reported a missile launch from the United States, followed by as many as five more. Petrov judged the reports to be a false alarm and did not pass them up the chain of command as a genuine attack. His decision is credited with having prevented a full-scale nuclear war.

Subsequent investigations revealed a fault in the satellite warning system. Petrov’s judgement in the face of unprecedented danger showed extraordinary presence of mind. Can we trust a robot or an autonomous weapon system to exercise such judgement and make a split-second decision of this kind?

Artificial Intelligence | Way Forward

There is a heightened need for strict regulation of AI integration into weapon systems. Steps should also be taken to introduce a legal framework that keeps people accountable for AI operations and any faults that arise.

AI, as an industry, cannot be stopped. Some of these challenges may seem distant, some even far-fetched; however, it is foreseeable that we will eventually face them, and it would be wise to steer present-day research in an ethical direction so as to avoid potential disasters. A probable scenario is one in which AI systems operate as team-players rather than as independent systems.

Artificial Intelligence | Conclusion

War has changed. It is no longer about nations, ideologies and ethnicities. It is an endless series of proxy battles fought by man and machine.

“If we are serious about developing advanced AI, this is a challenge that we must meet. If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness.” Nick Bostrom and Eliezer Yudkowsky, in the paper “The Ethics of Artificial Intelligence”
