Over the past two decades, humanity has made significant technological advances, and at the forefront of these is the widespread use and proliferation of artificial intelligence, or AI.
AI has been a concept in computing circles and academia for almost as long as computers themselves. In fact, one of the most famous historical measures for judging the intelligence of a machine is the Turing Test, developed by the noted computer science pioneer Alan Turing in 1950.
Widespread use and research of artificial intelligence, however, remained largely confined to major government organisations and educational institutions until the late 1990s and early 2000s, when advances in computing technology turned many previously hypothesised approaches to artificial intelligence and machine learning into reality. In particular, implementing artificial neural networks at scale became feasible.
Artificial neural networks are not a novel concept in computer science. The basic idea stems from the theory that biological (and thus human) brains could be emulated by artificial means, using circuits and transistors to mimic neurons and synapses, which operate as electrical impulses travelling along pathways. This idea was proposed more than 70 years ago, but due to the hardware and technology limitations of the time, such approaches remained largely confined to the theoretical realm.
In the last decade, however, many advances (such as advanced multithreaded GPU architectures, the widespread availability of high-speed memory and a paradigm shift towards open-source software) have made it possible to create large-scale artificial neural networks which could, in theory, mimic biological brains and, by extension, intelligence.
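To make the neuron analogy concrete, here is a minimal illustrative sketch (not taken from any particular system): a single artificial neuron computes a weighted sum of its inputs and "fires" when that sum crosses a threshold, loosely mimicking a biological neuron receiving impulses along its synapses.

```python
# Minimal artificial neuron (a perceptron): a weighted sum of inputs
# passed through a step activation, loosely mimicking a biological
# neuron that "fires" once its combined inputs cross a threshold.

def neuron(inputs, weights, bias):
    """Return 1 if the weighted input exceeds the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: a neuron that fires only when both inputs are active.
weights = [0.6, 0.6]
bias = -1.0
print(neuron([1, 1], weights, bias))  # fires: 0.6 + 0.6 - 1.0 > 0
print(neuron([1, 0], weights, bias))  # silent: 0.6 - 1.0 <= 0
```

Large-scale networks chain millions of such units together and learn the weights from data, which is where the GPU and memory advances mentioned above come in.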
These advances have made neural networks, as well as some other artificial intelligence approaches, available for many commercial applications, including automated facial recognition and natural language processing for automatic speech recognition and translation. These are, at face value, very useful tools to have, especially for a developing nation with a large, relatively young population. A few examples of how this technology is used today:
● Facial recognition is now used at many international airports to speed up the security check, immigration and boarding processes, and it is much more effective and less error-prone than the manual, labour-intensive systems it replaced
● Automatic machine translation has allowed people from all over the world to enjoy foreign movies and TV shows in their own language without waiting for human-created dubs or subtitles (which may or may not ever be produced)
● Machine learning approaches have tremendously sped up the search for vaccines and treatments for new diseases, a topic which is very relevant in the current worldwide situation
● Robotic manufacturing is now the norm in most modern factories, and AI is accelerating this trend by replacing labour-intensive parts of the manufacturing process with intelligent robots
The advancement of AI has ushered in many benefits for society at various levels, but has also raised some serious issues, which we will address below:
Ethics
The ethics of AI is a topic which has received significant press coverage lately, and it is one of the most pressing issues for AI researchers and industry alike. The core question that needs to be answered with regard to ethics in artificial intelligence is: "What is the right thing to do, and can we trust an artificial intelligence system to make this decision?"
In the context of a developing nation, there are many areas which could benefit from the application of AI and automation; however, answering this question correctly is a major challenge that must be addressed. Self-driving cars offer a good example. Since they became commercially available, the decision-making processes of the AI systems controlling them have come under close scrutiny, especially in adverse situations: if a self-driving bus full of passengers were about to collide with a smaller vehicle due to road conditions, would it decide to collide (and risk injuring pedestrians) or veer off and potentially drive into a ditch, endangering its own passengers? These sorts of issues need to be properly addressed before such technology is made available on a large scale.
Legal issues
Artificial intelligence could do much to speed up the notoriously slow legal processes of developing nations; however, once again, there are issues that need to be tackled.
For one, what algorithms does an AI system use in making decisions that directly affect people's lives? And in the earlier self-driving example, if there were an accident in which the artificial intelligence system failed, who would be held legally liable? The manufacturer? The programmer? The developer of the original AI algorithms? This question of legal responsibility is a huge risk that many organisations, businesses and individuals would be reluctant to take on.
Another problem that may arise from the use of AI in legal frameworks is the evolution of the laws themselves. In the mid-1800s in the USA, there were laws that made harbouring fugitive slaves a crime; until a few decades ago, activities which are perfectly normal and legal today were considered crimes. There is no reason to believe that laws will not continue to evolve, and AI systems trained on obsolete laws and cases may be heavily biased towards precedents that are no longer applicable. Proper governance strategies and guidelines would need to be in place before such technologies are used in domains where decisions have great ramifications.
Fairness
A lot of coverage has recently been given to the question of whether algorithms and intelligent systems can be truly fair. One famous example is the use of facial recognition by law enforcement agencies. In recent years, such systems were found to be inherently biased against some sections of the population, and upon further investigation, the bias seemed to stem from the fact that the data sets used to train the systems reflected the (perhaps subconscious) biases of the original developers.
Very recently, many major companies like IBM, Amazon, Microsoft and Google have restricted the use of their facial recognition technologies by law enforcement. This has highlighted a disturbing truth about how AI systems are developed, one familiar to those in computing academia: "an AI system is only as good as the data we train it with." A very notable example of this is the Google image-labelling problem: a few years ago, Google's machine vision system incorrectly classified apes as black people and vice versa. Google's solution? Remove apes (gorillas, chimpanzees, etc.) from the training dataset.
From the perspective of Bangladesh, fairness is a major issue in many aspects of daily life: who gets the job? Who gets admitted to the college? Who gets chosen for a random check? Who gets selected for the team?
When developing AI systems to support decision-making on these (and other) issues, it is very important to determine whether the data used to train the system reflects any (known or unknown) biases.
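As a hedged sketch of what such a check might look like (the records and the 0.8 threshold below are illustrative assumptions, not from this article), one common heuristic compares the rate of favourable outcomes across groups in the training data and flags the dataset when the ratio between the lowest and highest rates falls below four-fifths:

```python
# Illustrative training-data bias check: compare positive-outcome rates
# across groups and flag disparities below a "four-fifths" threshold.
# The data and threshold are made-up examples, not real guidance.

def selection_rates(records):
    """Map each group to its rate of positive outcomes."""
    totals, positives = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if selected else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring records: (group, was_hired)
data = [("A", 1)] * 6 + [("A", 0)] * 4 + [("B", 1)] * 3 + [("B", 0)] * 7
ratio = disparate_impact(data)
print(round(ratio, 2))   # group B hired at half the rate of group A
print(ratio >= 0.8)      # False: this dataset warrants investigation
```

A failing ratio does not prove unfairness on its own, but it is a cheap first signal that the training data may encode the kind of historical bias described above.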
Reliability
Any system developed and used widely by people is eventually prone to failure. When we make an expensive electronics purchase, say a TV, an AC or a fridge, we will usually not buy it if no warranty is offered. Similarly, there will be hesitation about adopting AI systems if no warranty is offered, and the consequences of critical systems failing due to an AI fault can be catastrophic.
The singularity
One fear that many people have about AI is the emergence of what is known as a technological singularity: the idea that a super-intelligent AI system will be developed which can create other AI systems, evolve itself beyond our control and eventually wipe us out. This concept has been around for more than a century and is often depicted in movies (The Matrix, Terminator, etc.) as the "super-intelligent killer robots take over the world" trope.
While this is (currently) still far-fetched, the rate of technological advancement might make such scenarios (albeit less dramatic ones) a possibility within our lifetime. Putting AI systems in control of weaponry and military infrastructure (and, more importantly, the associated decision-making) is something we can all agree the technology is not yet mature enough to handle.
The way forward
The way to progress, as we have seen with many technologies over the previous centuries, is by making mistakes and learning from them. Nuclear power, automobiles, aviation and mobile phones have all had pitfalls along the way but eventually emerged as widely used technologies that work for the benefit of mankind. Artificial intelligence, too, is one such technology, but it would serve us well to learn from past mistakes in adopting new technology and to employ well-thought-out methods for adoption.
The author is a PhD candidate at the School of Computer Science and Engineering, Florida Institute of Technology. His research interests include machine learning, IoT and wireless sensor networks.