Should we fear Artificial Intelligence?

Introduction –

Artificial intelligence refers to a system with human-level intelligence, i.e., one that can perform multiple tasks as easily as a human can and can engage in a “thought” process that closely resembles a human’s. Should one fear Artificial Intelligence? As with most things, the answer is both yes and no.

Why must we not fear it?

  • Beginning with why one need not “fear” Artificial Intelligence: such systems are actually pretty dumb. Even the most intelligent systems today possess only narrow, task-specific intelligence, which means they can perform one task, sometimes better than any human can, but only that one task.
  • Any task that such a system is not specifically designed for, however simple it may seem to us, it would find impossible to undertake. A program that can beat a grandmaster at chess cannot, for instance, recognise a cat in a photograph.
  • Artificial general intelligence, however, has so far remained theoretical, and is possibly decades away from being developed in any concrete form, if at all. Any fear of a super-intelligent system turning on humans in the near future is therefore quite baseless.

Why must we fear it?

  • First, and most importantly, jobs. A May 2017 study by Lawrence Mishel of the Economic Policy Institute argues that, in the past, automation did not have a negative effect on the job market but actually increased the number of available jobs. Even if that pattern holds, there can be no doubt that at least some jobs will be negatively affected by Artificial Intelligence, and the nature of those jobs, and of the jobs that may replace them, if any, is hazy at best. It is this lack of clarity that one must be wary of.
  • Second, the use of Artificial Intelligence in weapons, leading to ‘autonomous weapons’. It is doubtful whether a machine given the ability to make life-and-death decisions on the battlefield can adequately account for subjective principles of war such as proportionality and precaution. The underlying issue here is not that weaponized Artificial Intelligence would be smart, but that it would not be smart enough.
  • Third, privacy and data security. It must be remembered that the entire Artificial Intelligence ecosystem is built on the availability of vast amounts of data, and improving these systems requires the continued availability of such data. Constant inputs and feedback loops are required to make Artificial Intelligence more intelligent (a rough sketch of what such a loop looks like follows this list). This raises the question of where the required data comes from, and who owns and controls it. The possible authoritarian implications, ranging from indiscriminate surveillance to predictive policing, can be seen in the recent plan released by China’s State Council to make China an Artificial Intelligence superpower by 2030.
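
To make the “feedback loop” point concrete, here is a minimal, illustrative sketch in Python using scikit-learn’s incremental SGDClassifier. The data is synthetic and the setup is hypothetical, not drawn from any system discussed above; the point is simply that the model only improves while fresh labelled data keeps arriving.

```python
# Illustrative only: an online learner that improves as new data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()           # a linear classifier that supports incremental training
classes = np.array([0, 1])        # possible labels must be declared for partial_fit

for batch in range(5):
    # Each batch stands in for a fresh round of collected user data (hypothetical).
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic ground-truth labels
    model.partial_fit(X, y, classes=classes)  # the "feedback loop": learn from new data
    print(f"after batch {batch + 1}: accuracy {model.score(X, y):.2f}")
```

Stop the stream of batches and the model stops improving. The same dynamic, at a vastly larger scale, is what makes continuous data collection, and hence privacy, a live policy issue.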

Conclusion –
It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of Artificial Intelligence. It is not going to go “rogue” and turn on humans (at least not in the near future), and talk of such theoretical existential risks must not blind policymakers, analysts, and academics to the very real issues raised by Artificial Intelligence.