What is Artificial Intelligence?
Artificial Intelligence, or AI, is the theory and development of computer systems capable of performing tasks that normally require human intelligence. In short, it is the field of science that creates intelligent machines that think and act like humans.
Many people welcome the influx of artificial intelligence machines that can help human beings with tasks that are too time-consuming or too complex for humans to do, but some actually fear that AI robots may replace the human race.
In fact, there are already horror stories about AI, such as reports of a Google AI becoming "highly aggressive" in competitive tests. And in 2018, a driver crashed and died while using Tesla's Autopilot because he wasn't paying attention to the road.
What is ethics, and why is it essential to AI?
Ethics, for its part, is a set of moral principles that governs behaviour. There is a great deal of discussion regarding the ethical issues that may arise with the influx of AI machines.
Ethics is vital for AI because these intelligent computer systems have the potential to cause harm to individuals and to society. It is also relevant to question the moral status of the machines themselves.
The ethics of AI can be divided into machine ethics, which pertains to the moral behaviour of artificial moral agents or AMAs, and roboethics, which has to do with the moral conduct of human beings while they design, create, treat, and use AI machines.
Ethical issues that need to be addressed in AI
AI machines are likely to be biased because people are biased too. AI systems rely on data, and that data can carry inherent, often unintentional, bias. The people who train AI systems, and the training process itself, may unconsciously pass their own biases on to the systems they build.
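To make this concrete, here is a toy sketch (not from any real system; all records and group names are hypothetical) of how a "model" that simply learns from past decisions will reproduce whatever skew those decisions contained:

```python
# Toy illustration of historical bias leaking into a model.
# The "model" is just a per-group approval rate learned from past decisions.
from collections import defaultdict

# Hypothetical historical records: (applicant_group, past_decision)
history = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "approve"),
]

stats = defaultdict(lambda: [0, 0])  # group -> [approved_count, total_count]
for group, decision in history:
    stats[group][1] += 1
    if decision == "approve":
        stats[group][0] += 1

def predict(group):
    """Approve if the historical approval rate for this group exceeds 50%."""
    approved, total = stats[group]
    return "approve" if approved / total > 0.5 else "reject"

print(predict("group_a"))  # the model simply inherits the historical skew
print(predict("group_b"))
```

Nothing in the training code mentions the groups explicitly, yet the learned behaviour treats them differently, because the data did.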
AI has both a constructive and a destructive impact. For example, many people may lose their jobs to AI machines in the near future, although the creation of AI may also lead to new jobs and added social benefits in the longer term.
AI systems should be able to explain themselves, especially when justifying decisions or conclusions that affect many people. For example, there should be a concrete explanation as to why an AI machine in a bank rejects a loan application.
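One minimal way to get such an explanation is to use a scoring model whose per-feature contributions can be reported alongside the decision. The sketch below is purely illustrative: the feature names, weights, and threshold are all hypothetical, not taken from any real bank's system.

```python
# A minimal sketch of an "explainable" loan decision: a linear score whose
# per-feature contributions double as the explanation.
# All weights and the threshold are hypothetical.
WEIGHTS = {"income": 0.5, "credit_score": 0.3, "debt_ratio": -0.4}
THRESHOLD = 0.6

def decide(applicant):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    return decision, contributions

decision, why = decide({"income": 0.9, "credit_score": 0.8, "debt_ratio": 0.7})
print(decision)
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    # Negative contributions pushed the application toward rejection.
    print(f"  {feature}: {contribution:+.2f}")
```

Real explainability is much harder for complex models, but the principle is the same: the system should be able to point at what drove the outcome.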
Despite the laws of robotics, some people could use AI machines to their advantage, or even abuse them, based on their personal intentions. For example, hacking an autonomous car could turn it into a weapon, which is why robust security controls are essential.
Questions arise, such as who should be liable when something terrible happens. Should it be the owner of the machine, the manufacturer, the designer, or the user? The AI machine itself cannot be prosecuted, so it has to be one of the above.
Measures established to address ethics in AI
Given the risks posed by AI machines, some companies have stepped up to address the ethics of AI. The cars produced by Tesla now function semi-autonomously while collecting data that feeds the machine-learning algorithms; software updates are then dispatched to all existing cars so that they can more accurately perceive and react to their surroundings.
Google and DeepMind have an ethics board and charter. Microsoft has its own AI principles and has had an AI ethics committee since 2018. Amazon, with help from the National Science Foundation, is currently sponsoring research on "fairness in artificial intelligence". Facebook has co-funded AI ethics research in Germany.
Technology is now gearing towards artificial intelligence, but the ethical aspects need to be considered, and as soon as possible. This will also influence how readily AI is accepted.
- Anirudh is the Editor in Chief and Main Writer at Clickdotme. He does not like describing himself in the third-person and had a hard time coming up with these two sentences!