This is a reflection on the future of artificial intelligence and its associated ethical issues, supported by the literature. References are listed at the end.
Abstract
Artificial intelligence is changing industries and communities around the world in many areas, so ethical principles have to be taken into account for a balanced evolution of society. Bias, transparency, explainability and fairness are some of the key areas that developers of AI algorithms must consider. Possible future problems, such as unemployment and threats to democracy, as well as problems that are already appearing, like threats to privacy, must be addressed now, before it is too late to prevent them. For this, research and education in AI are essential.
INTRODUCTION
Artificial Intelligence (AI) can be defined as a field that combines computer science with robust datasets to enable problem-solving. To this end, logic-based techniques and advanced analytical techniques, such as machine learning and deep learning, are applied to interpret, support and automate decisions. AI makes it possible to reduce costs and increase productivity in industry, thanks to the accuracy and precision it provides. [2][5][9]
Artificial intelligence is radically changing industries and communities around the world. It has become ingrained in many parts of today's society, from video viewing recommendations and autonomous cars to online purchase recommendations, advertisements, fraud detection, generative AI chatbots like ChatGPT, and many others.
Over time, more companies started to use AI to address specific problems. By 2025, according to Gartner, 50% of businesses will have platforms for using AI, whereas in 2020 just 10% of companies used it. But this is just the beginning, because the ways we work, live, and interact with others are expected to change even more. On one hand, it is anticipated that this new revolution will enhance and improve our lives and societies. On the other, it could result in significant changes to our way of life and societal standards.
The time available to comprehend these technologies' influence and anticipate their unfavourable implications is shrinking. Therefore, it is very important to take ethical principles into account and not let the evolution of technology continue to outpace its proper regulation. [5][9][8][17]
THE FUTURE OF AI
At leading firms, there is currently a transition of artificial intelligence from an additional feature to a key feature, as AI becomes more integral to company operations and strategy. AI will reach into and expand across many areas of interest, bringing innovation and progress to each of them. Some examples of what is expected to happen are the following:
- Medicine and Healthcare: AI can help automate repetitive tasks to save time and support evidence-based decisions by identifying risk factors; it can even anticipate outcomes using patient-specific algorithms. It is expected to help prevent around 86 percent of errors in the healthcare sector while, at the same time, reducing costs. It will also permit a better understanding of the different elements (such as birthplace, diet and pollution) that affect a person's health, even making it possible to discover when a person is most likely to develop a chronic illness and to offer pre-emptive treatment to stop its advancement. This insight makes it possible to turn undesirable trajectories into better clinical outcomes. [8][6]
- Retail: Retailers saved over $340 billion in 2022 by implementing AI throughout all of their business operations, according to a Capgemini study. Amazon is already assessing the safety of delivering items with drones, which could become a reality within the next ten years. AI will also lead to a more autonomous and individualized shopping experience, with dressing rooms equipped with screens, virtual racks customized according to data-defined personas, and more personalization based on past behaviours and trends. [6]
- Banking: The value of AI in banking is expected to reach $300 billion by the end of 2030, according to IHS Markit's AI in Banking research. It will lead to lower costs, greater productivity and better customer experiences, and it will dominate sectors like business intelligence and security in the coming ten years. Another advance involves robo-advisors in wealth management, which could revolutionize the banking industry by saving time for consumers and wealth managers alike. [6][10]
- Education: AI-enhanced education will help teachers deliver high-quality education more widely. This follows from the fact that AI lowers teachers' workload, giving them time to concentrate on student learning, which improves learning outcomes. However, every student is distinct and follows a different learning path, which can become problematic if the AI does not take it into account; students in the primary grades are especially affected. Overall, AI in education is promising, especially considering recent advancements in reinforcement learning methods, but learners and educators must always be at the heart of AI development for it to have a positive impact. [4][12]
Nowadays, all existing AI is characterized as Weak AI or Narrow AI: it is programmed to complete particular tasks. Contrary to what the name implies, this form of AI is anything but weak, as it supports potent applications like Apple's Siri, Amazon's Alexa, IBM's Watson, OpenAI's ChatGPT, Google's Gemini and autonomous vehicles.
The other type of AI is named Strong AI, with two distinct forms: Artificial General Intelligence and Artificial Super Intelligence. A computer with general intelligence would, theoretically, be capable of learning, problem solving and future planning, having an intelligence comparable to that of a human, and possibly self-awareness. Super intelligence would be even more intelligent and capable than humans. Although there are no real-world applications of Strong AI, researchers are already competing to develop it, aiming to create an AI that can handle a variety of jobs efficiently. Experts say that superintelligence may be achievable in less than 30 years; however, they also estimate that the chance of things going badly is about one third. [9][17][11]
IMPORTANCE OF REGULATION AND ETHICS
Despite the many advances that AI is yet to bring to society, ethics and regulation cannot be left aside. We can look at regulation and ethics from three perspectives: a micro-perspective, corresponding to algorithms and organizations; a meso-perspective, concerning employment; and a macro-perspective, concerning democracy and peace. [7]
3.1 Micro-Perspective: Algorithms and Organizations
Although AI is fundamentally objective and without prejudice, this does not necessarily mean that AI systems are always free from bias. In reality, any bias in the raw data used to train an AI system endures and may even be amplified by the nature of the technology. This is a serious problem, because biased AIs produce results that cannot be generalized widely, leading to various kinds of errors.
For example, research has shown that, due to the images used to train algorithms for self-driving cars, their sensors are better at recognizing lighter-skinned people than darker-skinned ones. To combat these problems, generally accepted standards can be established for the training and testing of AI algorithms, a process similar to the consumer and safety testing protocols used for physical products. With such a process, regulation can remain stable even as AI technology changes over time. [7][18]
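Such a testing protocol could include, for instance, a simple audit that compares a model's accuracy across demographic groups. The sketch below is a minimal illustration only: the data, the group labels, and the helper names `group_accuracy` and `disparity` are hypothetical, not taken from any established standard.

```python
def group_accuracy(y_true, y_pred, groups):
    """Accuracy of a classifier computed separately for each group."""
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

def disparity(acc_by_group):
    """Largest accuracy gap between any two groups (0 means perfect parity)."""
    values = list(acc_by_group.values())
    return max(values) - min(values)

# Illustrative data: a detector that is more accurate for group "A".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = group_accuracy(y_true, y_pred, groups)
print(acc)             # {'A': 1.0, 'B': 0.5}
print(disparity(acc))  # 0.5 -- a gap this large would fail a parity audit
```

A standardized audit of this kind would additionally fix the test datasets, the groups to compare, and an acceptable disparity threshold, which is precisely where generally accepted criteria are needed.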
Techniques such as Deep Learning, a crucial method used by many AI systems, are essentially black boxes, making it hard to extract explanations from the algorithms. It can be simple to evaluate the quality of an algorithm's output, for example by observing the correct classifications on a test set, but the way those results are reached is mostly opaque.
This problem is accentuated in cases of intentional opacity (for example, when a company wants to keep an algorithm secret), technical illiteracy, or application scale (when various methods and programmers are involved). This is one of the big challenges for the future implementation of AI in many areas, because, for many applications, these justifications are crucial. For example, in medicine and healthcare, professionals must accept responsibility for their choices; thus, they cannot accept a machine-generated diagnosis or treatment without being informed of the logic behind it. This is a very important aspect that still needs to be addressed. [2][12][3]
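One model-agnostic way to peek into such a black box is permutation feature importance: scramble one input feature and measure how much the model's accuracy drops. The sketch below is a minimal, self-contained illustration, assuming a hypothetical black-box classifier `model` that can only be queried, never inspected; averaging over all permutations keeps this toy example deterministic.

```python
from itertools import permutations

def model(x):
    """Hypothetical black-box classifier: we can query it but not inspect it."""
    return 1 if 2.0 * x[0] + 0.1 * x[1] > 1.0 else 0

def accuracy(X, y):
    """Fraction of rows the model classifies correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Average accuracy drop when one feature's column is permuted.

    A large drop means the model relies heavily on that feature;
    a drop near zero means the feature barely influences predictions."""
    base = accuracy(X, y)
    column = [row[feature] for row in X]
    drops = []
    for perm in permutations(column):  # exhaustive: fine for tiny examples
        X_perm = [list(row) for row in X]
        for row, value in zip(X_perm, perm):
            row[feature] = value
        drops.append(base - accuracy(X_perm, y))
    return sum(drops) / len(drops)

# Toy dataset: feature 0 decides the label, feature 1 is irrelevant.
X = [[0, 5], [0, -5], [1, 5], [1, -5]]
y = [model(row) for row in X]

print(permutation_importance(X, y, 0))  # 0.5 -- the model depends on feature 0
print(permutation_importance(X, y, 1))  # 0.0 -- feature 1 does not matter
```

In practice one would sample a handful of random shuffles rather than enumerate every permutation, and libraries such as scikit-learn ship a ready-made `permutation_importance` helper; importance scores of this kind give only a partial explanation, which is why the accountability concerns above remain open.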
It is also important to encourage industry players to create the most transparent and accountable algorithmic systems possible, with features that allow users to contest algorithm-based judgments and fix erroneous data. Big data technologies must be used responsibly and ethically, so companies should be held accountable for the choices they make with the help of AI, and perhaps for algorithmic errors too. With this in mind, organizations should provide better ways for people to access their data and explain how their information is used to inform decisions. Data science experts should continue to establish new best practices for the fair and ethical use of artificial intelligence, and perhaps swear to a moral code, comparable to that of lawyers or medical professionals. [12]
3.2 Meso-Perspective: Employment
Many jobs are at risk of being replaced by AI and AI-based automation technology due to the rapid growth of machine learning, automation, and robotics, with special emphasis on highly qualified professionals and white-collar workers: those who, in an office or other administrative setting, perform desk, management, or administrative work. However, the replacement of jobs is not a recent occurrence. It has happened in the past, where jobs like switchboard operators, elevator operators, and typists vanished due to technological development.
Every time a technological revolution has taken place, people have worried about technical unemployment and technological job obliteration, yet so far the new jobs created have compensated for the jobs that were lost. This time, with the AI revolution, it is still unclear whether new jobs will be generated in other sectors to accommodate the individuals who lose theirs. This concerns both the potential number of new jobs, which might be far lower than the number of jobs lost, and the degree of competence necessary to perform them. [17][7]
If higher unemployment becomes a reality, it will lead to much less disposable income among the general population, which, in turn, can create big problems in society. To prevent this, regulation can help: businesses can be required to invest in employee training for positions that cannot be automated, and states may also choose to restrict automation. Another possibility is to limit the number of hours employees can work each day, making it possible to share the remaining labour among the workers. Finally, even a Universal Basic Income can be considered. [7][2][12]
3.3 Macro-Perspective: Democracy and Peace
One aspect remains to be addressed: who will watch over the AI practices employed by governments and other entities, the creators of the regulations themselves? For example, China uses AI, Big Data and surveillance in its social credit system, whose objective is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”, as written in the Planning Outline for the Construction of a Social Credit System in China (2014), something that could be very undesirable from a western perspective.
In contrast, San Francisco has outlawed face recognition technology, researchers are exploring solutions that make people undetectable to automatic surveillance cameras, and the European Union has introduced the General Data Protection Regulation (GDPR), which severely restricts how personal information can be stored and processed. The latter stands in contrast to China's and, to a lesser extent, the United States' efforts to lower obstacles for businesses using AI. It may slow down AI research in the EU, revealing that an equilibrium must be reached between personal privacy and economic progress. [14][15][16][7]
To permit a sustainable development of AI, one that leads to democracy, peace and prosperity, education and research on the topic are fundamental. It is therefore essential to improve funding for AI research, particularly basic research, with government involvement where corporate investment is lacking, and particularly for topics that promote accountability and fairness, including research into minimizing algorithmic unfairness. To comprehend these difficulties, it will be crucial to bring together computer scientists, social scientists, and scholars of the humanities. From an educational point of view, it is very important to educate the general public on artificial intelligence topics and on how these issues affect them, mainly by explaining the difference between Narrow AI and Strong AI. [13][12]
Finally, worldwide coordination of legislation will be required, in which both governments and industries are heard. Due to the nature of AI, a limited solution that covers only some nations while leaving out others is unlikely to be successful in the long term. [12][2]
CONCLUSION
The presence of AI in our society is constantly increasing, and this trend will only accelerate in the coming years and decades, with growing influence in many areas of society, such as medicine and healthcare, retail, banking and education, among others. This advance is already revealing several problems that have to be investigated for a more promising future, such as bias, privacy, transparency, explainability and fairness, among others.
The future of AI is still full of mysteries that only time will answer. However, we have to be prepared for all possible outcomes so that we can respond to them in the best way and avoid problems. Consequently, the study of ethics and the application of the right regulations are essential and must be a focus of industries, organizations and governments, with an emphasis on being sufficiently exact to guarantee the safeguarding of ethical values, such as security, privacy and safety, and sufficiently broad to allow for future evolution in the quickly changing world we live in. Finally, all of this must be done on a global scale, not just within specific organizations or governments.
REFERENCES
[1] Mike Bechtel. 2022. The Future of AI | Deloitte US. (2022). Retrieved July 25, 2022 from https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html.
[2] Alan Bundy. 2017. Preparing for the future of Artificial Intelligence. (2017). doi: 10.1007/s00146-016-0685-0.
[3] Jenna Burrell. 2016. How the machine ’thinks’: Understanding opacity in machine learning algorithms. doi: 10.1177/2053951715622512.
[4] Muhammad Ali Chaudhry and Emre Kazim. 2022. Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021. AI and Ethics, 2, 1, 157–165. doi: 10.1007/s43681-021-00074-z.
[5] Gartner. 2021. What Is Artificial Intelligence (AI) | Gartner. (2021). Retrieved July 25, 2022 from https://www.gartner.com/en/topics/artificial-intelligence.
[6] Sakshi Gupta. 2021. How Will Artificial Intelligence Affect Our Lives in the Future? (2021). Retrieved July 25, 2022 from https://www.springboard.com/blog/data-science/artificial-intelligence-future/.
[7] Michael Haenlein and Andreas Kaplan. 2019. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61, 4, (July 2019), 5–14. doi: 10.1177/0008125619864925.
[8] J. Matthew Helm, Andrew M. Swiergosz, Heather S. Haeberle, Jaret M. Karnuta, Jonathan L. Schaffer, Viktor E. Krebs, Andrew I. Spitzer and Prem N. Ramkumar. 2020. Machine Learning and Artificial Intelligence: Definitions, Applications, and Future Directions. Current Reviews in Musculoskeletal Medicine, 13, 1, 69–76. doi: 10.1007/s12178-020-09600-8.
[9] IBM. 2022. What is Artificial Intelligence (AI)? IBM. (2022). Retrieved July 25, 2022 from https://www.ibm.com/cloud/learn/what-is-artificial-intelligence.
[10] IHS Markit. 2022. News Release IHS Markit Online Newsroom. (2022). Retrieved July 25, 2022 from https://news.ihsmarkit.com/prviewer/release_only/slug/technologyglobal-business-value-artificial-intelligence-banking-reach300-billion-203.
[11] Vincent C. Müller and Nick Bostrom. 2016. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Springer, Cham, 555–572. doi: 10.1007/978-3-319-26485-1_33.
[12] Cecilia Munoz, Megan Smith, and DJ Patil. 2016. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President of the USA, May. https://www.whitehouse.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf.
[13] Cornelius Puschmann and Jean Burgess. 2014. Metaphors of big data. International Journal of Communication, 8, 1, 1690–1709.
[14] The Economist. 2016. China invents the digital totalitarian state. The Economist. (2016). Retrieved July 25, 2022 from https://www.economist.com/briefing/2016/12/17/china-invents-the-digital-totalitarian-state.
[15] The New York Times. 2019. San Francisco Bans Facial Recognition Technology- The New York Times. (2019). Retrieved July 25, 2022 from https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.
[16] Simen Thys, Wiebe Van Ranst, and Toon Goedeme. 2019. Fooling automated surveillance cameras: Adversarial patches to attack person detection. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2019-June, (June 2019), 49–55. isbn: 9781728125060. arXiv: 1904.08653. doi: 10.1109/CVPRW.2019.00012.
[17] Weiyu Wang and Keng Siau. 2019. Artificial intelligence, machine learning, automation, robotics, future of work and future of humanity: A review and research agenda. Journal of Database Management, 30, 1, (Jan. 2019), 61–79. doi: 10.4018/JDM.2019010104.
[18] Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. 2019. Predictive Inequity in Object Detection. http://arxiv.org/abs/1902.11097 arXiv: 1902.11097