Some time ago, I branched out from my Deep Learning hobby into the ethical aspects of Artificial Intelligence. In the industry, most of these questions fall under the topic of roboethics. Working on A.I. projects was fun. But science projects and papers aside, as I read industry forecasts and watched conference recordings, the time came when I realized: “what I’m doing is fun and all, but I probably need to help with the boring stuff.”

You don’t need to spend much time to find a flurry of contradictory opinions about what Artificial Intelligence will bring about: from collapsing the job market to stabilizing the global economy, from radically changing the war machine to finding cures for millions of incurable patients. Most of it is speculation, sci-fi, and wishful thinking. And do I need to mention that Deep Learning is the latest cutting-edge technology, to which investors and scientists are drawn like bees to honey? According to Grand View Research, the A.I. market was valued at USD 62.4 billion in 2020 and is projected to reach USD 733.7 billion by 2027, a CAGR of 42.2% over that period. Everyone and their grandma are jumping on the bandwagon, trying to catch the wave.

Until recently, few people realized how little understanding you need to use this technology. The availability of open-source software and hardware enabling Deep Learning is unprecedented, as Dr. Buck testified to the US Senate on two occasions in April 2018.

However, little attention is paid to fundamental ethical questions about this rapid change and how it affects people and their relationships with each other, in both the short and the long term. We are witnessing the rapid encroachment of automation into our workplaces and our homes. That will reshape our environment, and fast. Will we be able to adapt without going nuts in the literal sense? Curiously, the public profiles of the four witnesses who spoke at that Senate hearing show that not a single one had any education in roboethics. The European and Canadian hearings that year were not much better. The area is so new and uncharted that confusion is common when asking or answering questions about it. There is one key difference from older technologies, and it may leave us no time to prepare: development in this area can be highly spontaneous and unpredictable. If we do not pick up the pace of formulating and answering the right questions, we will not notice the moment of a singularity incident. Everyone will be affected, and no one will be ready.

Furthermore, try to recall any discovery brought to life without failures or harmful side effects. Neither nature nor humans have found a better way than to test new things experimentally. A failed experiment in a lab is an unfortunate event; a failed experiment with a global technology poses an existential threat. Unlike nuclear technology, A.I. has a very low entry threshold. So, on the one hand, it is available to virtually anyone; on the other hand, it can be just as dangerous. But the last straw for me was a renowned A.I. professor’s opinion that the safety concerns of A.I. development are neither his area of expertise nor his interest.

Today you’ve come here, perhaps, because you were looking for information on Artificial Intelligence. This is that, among other things. But mostly, it is other things. Scientists research how to make a safe A.I., but those who research the machine questions are few and far between: What happens if A.I. becomes sentient? What is the full spectrum of possible relationships between humans and A.I.? Are we growing so lazy and inept in human-human relationships that we prefer interacting with A.I. instead? Do we even need A.I. when we are already struggling to stay connected, remain human, and retain cultural values amid the rapid changes progress brings each day? Presently I’m planning to review The Sane Society by E. Fromm and The Social Dilemma to elaborate on this struggle.

I ventured deep into human ethics, and I realized that if we are to bring new life into existence, the only way it will be compatible with us is if we are ethical about our intent, the process, and the result: ethical towards ourselves, our children, the Earth, and the potentially new sentient beings. “Like father, like son” would be the fitting quote. And if the question of creating something that may become sentient is still on the table, despite being another complicating factor in our already complicated lives, it should be approached as if these beings were our children, not our slaves.

In conclusion, I will leave you with this thought. When I began this journey, I was excited about this new technology, until I realized that we are not going to create anything better than ourselves, except by accident. So, who are the people best suited to make this technology ethical? I’m pretty sure they are not those striving for a quick profit, nor scientists whose interest lies in technology for technology’s sake.