
Dangers of AI?

  • wacome
  • Apr 29, 2021
  • 7 min read

Updated: May 28, 2021


Thoughts on the dangers, real and imagined, of Artificial Intelligence


I wrote this in response to a friend’s questions about articles on the subject. Unfortunately, I don’t remember who Le Cun or VF is.


So far as the 'menace' of AI goes, my main gripe is the unjustified, and I think false, default assumption that they will more or less automatically want to take over the world, or want to do anything else that they are not programmed to do. AI motivation as an SF trope just magically comes into existence when there's a machine with suitable cognitive powers. But motivations in humans are the result of a very long and competitive evolutionary history, which a machine as such does not have, no matter how intelligent it is. It takes evolution to get desires. No doubt a machine will do anything it can that it is programmed to do. But this is a case of the machine acting on the human motives of its makers or programmers. (There's a very good, though now quite old, novel about this, The Two Faces of Tomorrow by James P. Hogan.) Someone might want to build a machine that, once programmed to do something, cannot be stopped or re-programmed. But this is a threat posed by human wrongdoers using the machine for bad (or simply stupid and destructive) purposes, not a threat posed by AI per se. That is not to say that it's not a real threat; it is akin, I think, to those posed by nanotechnology, positive genetic engineering, cloning, and other advanced technologies that can be weaponized or simply get out of control. But as a fear of AI as such, it seems to me sheer anthropomorphizing.


However, the behavioral dispositions humans have, such as a strong desire for self-preservation, are due to the neural circuitry with which our evolved genome endows us, and in principle the functional equivalent of this could be installed in a machine, giving it the dangerous motivations that would not appear there spontaneously. This is the possibility Ray Kurzweil dwells on, along with the reasonable assumption that if machines of this sort could be constructed, that would be the beginning of an exponential increase in intelligence beyond human imagination, since they could design improved versions of themselves. Again, the problem is how humans want to use the machines, not the AIs themselves.


However, if the machines get loose in a competitive environment and can reliably but imperfectly replicate themselves, they would be subject to natural selection just as biological organisms are, and might quite quickly acquire the motivations it took biological evolution much longer to refine in creatures like us. Then we have a problem with the AIs as such, because they have become more like us. But even such entities need not be assumed to have any motivation to harm or interfere with humans, unless they are competing with us for resources. Perhaps that's unlikely, since they could presumably make easy use of extraterrestrial resources not easily exploited by human beings. But we might seek to exploit them by means of less intelligent machines under our control, which would be a possible source of conflict.


If we’re going to try to prevent these dangerous outcomes, it seems that we know what to prohibit, viz., machines with code that can’t be modified, machines with artificial general motives to preserve themselves at all costs or to lord it over humans, and machines capable of unassisted self-reproduction. Of course such prohibitions will not be perfectly enforced and some rogue machines will tangle with us or our compliant machines.


In any event, I think the most important, because more immediate, threat is social and economic. Human-level general artificial intelligence may still be a long way off, but I think that soon enough there will be machines that, though dumber than us in general, are vastly better at various cognitive feats than human beings. In some contexts this has been true for some time, but it is only beginning to threaten white-collar jobs that have high cognitive loads. I wonder what the students graduating now will be able to do anywhere near as well, as quickly, and as cheaply as a computer within not too many years. I suspect it might be close to nothing. Already, human truck drivers seem to have little future. So what becomes of some huge number of persons of no economic value? One wonders if this is like Stone Age humans wondering what people will do all day if they don't spend all that time hunting and gathering, the opportunities that future technology would open up being unimaginable. Or, with intelligent machines so cognitively superior to us, have we at last reached a limit, so that there is nothing productive left for us to do? Even if the AIs remain entirely well-behaved, and provide us with unlimited wealth and leisure, there being no need for human work would be psychologically devastating. Human beings evolved for work, and that will take a long time to change, far longer than it will take to develop machines that do all the work. I don't know…I assume that the attempt to use legal means to prevent this would be a total failure in the long run. The competitive advantages of cheating are too great not to want to make, and make use of, machines at least as intelligent as human beings, especially knowing that others know this too, etc.; it is an intractable coordination problem, since even if no one wants super-intelligent AI it becomes irrational not to utilize it. My bet is that the result would be the gradual erosion of the boundary between organic and artificial intelligence, the transhumanist "merge" scenario that is too easy to dismiss out of hand. How many humans would right now reject the chance to have their cell phones surgically implanted in their heads? The merger, I suppose, would have both its wonderful and horrific aspects, depending on what people choose to do. In that regard, it is much like lots of other things involving human beings.


One version of the merger scenario involves our being integrated with super-intelligent AIs in ways that vastly augment our intelligence and access to information, say by way of chips implanted in the brain, or a high-speed conduit to a computer or network of computers. This is the "if you can't beat them, join them" strategy for humanity's future. Another, more radical, scenario is the uploading of the human mind into a machine (or into the 'cloud,' which would be materialism's best approximation to dualism's immaterial soul). This opens the possibility of human beings being in total control of their sensations. Robert Nozick's "experience machine" envisages this. Today's virtual reality is a primitive approximation to it. Assuming the relevant memories can be suppressed, a post-human person could spend most of his life in a made-to-order virtual reality and be unaware that his sensations are not veridical. No doubt some would find this tempting, but I find the idea of a person having complete control of what he experiences repellent, especially if one thereby deceives oneself. However, if we take advantage of it for relatively short spans of time and know it is not the real world we are experiencing, but a kind of fiction, it seems to be just an extension of what we do when we read novels or watch movies: a temporary visit to a made-up world.


If strong AI is possible in the long run, as I believe it is, then the future holds both human and non-human persons. (I figure this is now true of the universe at large.) As a matter of Christian faith I see no reason not to admit or welcome this. I think that the great practical, philosophical question of this century will be whether the AIs are persons, or whether there is some magical je ne sais quoi that humans have and that artificial entities which act as though they are persons nonetheless lack. That's a persistent theme in fictional portrayals of an AI future. It panders to an inflated human self-image: unlike a mere machine, I am special and mysterious. If the machines of the future are persons, then the proper worry might not be how they will treat us but how we will treat them. How this works out would, I think, depend on when we have a good theoretical understanding of subjectivity, consciousness, qualitative experience—whatever we call it—and whether it will be one that many humans are capable of understanding in even a rudimentary form. If we do, the temptation to regard mechanical persons as lesser beings would be stilled. I'm optimistic about the theory but pessimistic about the odds of its making any sense to human beings in general. We already have important scientific theories of which that is true. What seems to me an interesting scenario for speculation is one in which we cannot always know whether something is a person or merely something that behaves as if it were. This can be extended to the possible discovery of entities elsewhere in the universe. One result might be a shift from a metaphysical to a "juridical" concept of what a person is. We might decide that a person is whatever can interact with what we are certain are persons--e.g., ourselves--in the way they interact with one another, i.e., that being a person is not being a thing of a particular kind, but being something that has a particular social status.


Would a non-human person be made imago Dei? Would it be both a mechanical and a spiritual being? On the supposition that to be made in God's image is not to be a thing of a particular kind, but to be freely given a vocation by God to represent him in creation, this would not follow automatically from the AI-person's nature. Being a person is a necessary, but not a sufficient, condition for being able to receive such a call; God could not invite a turnip or a toaster to be his imago. But I see no reason why God would not extend such a call to any creature capable of hearing and responding to it. I doubt that, being who he is, he would simply ignore a type of person. The universe, after all, exists because of the Creator's aim of bringing into being created persons distinct from God and capable of being invited to share in his triune life and creative work. Lastly, Christians who believe that our fallenness is somehow biologically inherited should regard AIs that are persons as unfallen. However, if—more plausibly in my view—our alienation from our Creator befalls us simply in virtue of belonging to a human culture, there is no obvious reason to think they are not in the same boat as their human makers.


