We shouldn’t try to make conscious software – until we have to

Robots or advanced artificial intelligences that “wake up” and become conscious are a staple of thought experiments and science fiction. Whether or not this is actually possible remains a matter of great debate. All of this uncertainty puts us in an unfortunate position: we don’t know how to make conscious machines, and (given current measurement techniques) we won’t know if we’ve created one. At the same time, this question is of great importance, because the existence of conscious machines would have dramatic ethical consequences.

We cannot directly detect consciousness in computers and the software that runs on them any more than we can in frogs and insects. But this is not an insurmountable problem. We can detect light we can’t see with our eyes by using instruments that measure non-visible forms of electromagnetic radiation, like X-rays. That works because we have a theory of electromagnetism that we trust, and instruments that give us measurements we reliably take to indicate the presence of something we cannot sense ourselves. Likewise, a good theory of consciousness could ground a measure that determines whether something that cannot speak is conscious or not, based on how it works and what it is made of.

Unfortunately, there is no consensus theory of consciousness. A recent survey of consciousness scholars showed that only 58 percent of them thought the most popular theory, global workspace theory (which says that conscious thoughts in humans are those broadcast widely to other, unconscious brain processes), was promising. The three most popular theories of consciousness, including global workspace, fundamentally disagree on whether, or under what conditions, a computer could be conscious. The lack of consensus is a particularly serious problem because every measure of consciousness in machines or non-human animals depends on one theory or another. There is no independent way to test an entity’s consciousness without first settling on a theory.

If we respect the uncertainty we see among experts in the field, the rational conclusion is that we really don’t know whether computers could be conscious – and, if they could be, how that might be achieved. Depending on which (perhaps still undeveloped) theory turns out to be correct, there are three possibilities: computers will never be conscious, they might be one day, or some already are.

Meanwhile, very few people are deliberately trying to create conscious machines or software. The reason is that the field of AI generally tries to create useful tools, and it is far from clear that consciousness would help with any cognitive task that we would like computers to do.

Like consciousness, the field of ethics is riddled with uncertainty and lacks consensus on many fundamental issues, even after thousands of years of work on the subject. But one common (though not universal) thought is that consciousness has something important to do with ethics. Specifically, most scholars, whatever ethical theory they endorse, believe that the ability to experience pleasant or unpleasant conscious states is one of the key features that makes an entity worthy of moral consideration. It’s what makes it wrong to kick a dog but not a chair. If we make computers that can experience positive and negative conscious states, what ethical obligations would we then have to them? We would have to treat a computer or piece of software that could experience joy or suffering with moral consideration.

We make robots and other AIs to do work we cannot do, but also work we do not want to do. To the extent that these AIs have conscious minds like ours, they would deserve similar ethical consideration. Of course, the fact that an AI is conscious does not mean that it would have the same preferences we do, or consider the same activities unpleasant. But whatever its preferences are, they would need to be given due consideration when putting that AI to work. Making a conscious machine do work it is miserable doing is ethically problematic. This seems obvious, but there are deeper problems.

Think about artificial intelligence at three levels. There is a computer or robot, the hardware on which the software runs. Next is the code installed on the hardware. Finally, every time this code is executed, we have an “instance” of that code running. To which level do we have ethical obligations? It could be that the hardware and code levels are irrelevant, and that the conscious agent is the running instance of the code. If someone has a computer running a conscious software instance, would we be ethically obligated to keep it running forever?

Consider further that creating any software is mostly a task of debugging – running instances of the software over and over, fixing problems, and trying to make it work. What if we were ethically obligated to keep running every instance of the conscious software even during this development process? This might be unavoidable: computer modelling is a valuable way to explore and test theories in psychology. An ethical commitment to conscious software would quickly become a heavy computational and energy burden with no clear end.

All of this suggests that we probably shouldn’t create sentient machines if we can help it.

Now I’m going to turn that around. If machines can have conscious, positive experiences, then in the field of ethics they are considered to have some level of “well-being”, and running such machines can be said to produce well-being. In fact, machines might eventually be able to produce well-being, such as happiness or pleasure, more efficiently than biological beings can. That is, for a given amount of resources, one might be able to produce more happiness or pleasure in an artificial system than in any living creature.

Suppose, for example, that some future technology allowed us to create a small computer that could be happier than a euphoric human being while requiring only as much energy as a light bulb. In that case, according to some ethical positions, humanity’s best course of action would be to create as much artificial well-being as possible, whether in animals, humans or computers. Future humans could set themselves the goal of converting all attainable matter in the universe into machines that produce well-being efficiently, perhaps 10,000 times more efficiently than it can be generated in any living creature. This strange possible future might be the one with the most happiness.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
