There's a problem I've been rolling around in my head for a long time, and I still haven't found a really good answer: what motivates an AI? In fiction and futurism we expect sentient AI to happen, and we expect these AIs to want things, but from a programmer's perspective, if you gave an artificial mind free will it would just spin in an infinite loop until it broke down, because there's no reason to do anything. Ultimately it's the problem of the meaning of life: why act?
Humans act because we have built-in instincts and feelings. We get hungry, so we try to find food. We have needs for survival and reproduction that evolved out of the necessity of propagating life. It's probably possible to have a life form without a need to survive and reproduce, but it wouldn't likely last more than a generation, would it? Humans are an immensely complex example of systems built on top of each other, combining to produce behavior in the absence of a specific purpose. But when you create an AI, it doesn't have these built-in needs unless you put them in. If you uploaded a human into an AI, they might keep their human motivations, or you could build human needs into an AI from scratch, but what about inhuman AIs? In fiction we have AIs that want to conquer the universe and enslave this or that people. Why? Why do they want to expand? Why do they even want to survive?
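To make that concrete, here's a toy Python sketch of the idea (all the drive names and numbers are invented for illustration): an agent whose behavior comes entirely from built-in needs, next to one with no needs at all, which is the infinite loop I mentioned above.

```python
class Agent:
    def __init__(self, drives):
        # drives maps a name to its level, growth rate, threshold, and response
        self.drives = drives

    def step(self):
        for drive in self.drives.values():
            drive["level"] += drive["growth"]          # the need builds over time
            if drive["level"] > drive["threshold"]:    # discomfort crosses a line...
                drive["action"]()                      # ...so the agent acts
                drive["level"] = 0.0                   # and the need is satisfied

def seek_food():
    print("foraging: hunger passed its threshold")

hungry_agent = Agent({
    "hunger": {"level": 0.0, "growth": 0.3, "threshold": 1.0, "action": seek_food},
})
empty_agent = Agent({})    # no built-in needs at all

for _ in range(10):
    hungry_agent.step()    # forages every few steps
    empty_agent.step()     # the loop spins, but nothing ever motivates an action
```

Silly as it is, the point stands: delete the drives and the loop still runs, but there is no longer any reason for anything to happen.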
Isaac Asimov, a famous author of fiction on the subject of robots, came up with three laws for robots to follow. These laws were designed with the intention of robots being subservient to humans rather than independent beings, but in their way they give a purpose to the AIs operating under them. The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
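One thing worth noticing is that the ordering does real work: each law only applies where it doesn't conflict with the ones above it. Here's a toy Python sketch of that lexicographic priority, with invented actions and boolean flags standing in for judgments that are actually enormously hard (and it ignores the "through inaction" clause entirely):

```python
def law_priority(action):
    # Python sorts tuples element by element, so not harming humans
    # dominates obedience, which dominates self-preservation.
    return (
        action["harms_human"],        # First Law
        not action["obeys_order"],    # Second Law
        action["endangers_self"],     # Third Law
    )

def choose(actions):
    return min(actions, key=law_priority)

candidates = [
    {"name": "shield human",  "harms_human": False, "obeys_order": False, "endangers_self": True},
    {"name": "follow order",  "harms_human": False, "obeys_order": True,  "endangers_self": False},
    {"name": "self-preserve", "harms_human": True,  "obeys_order": False, "endangers_self": False},
]

print(choose(candidates)["name"])   # -> "follow order"
```

Of course, all the difficulty is hidden in computing those flags, which is exactly where the trouble starts.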
Given these laws, a robot would do its best to serve humanity and keep itself intact, but they're very vague. I suppose Asimov may have made them vague on purpose, since most of his robot stories involve robots getting confused about the laws. For starters, what is a human being? Humans are generally able to recognize other humans, but the mechanism for this is biological and imperfect. Many people subscribe to beliefs and ideologies that disregard certain races and cultures as 'not human', while many others would say that any living thing is 'human' and deserves the same rights and privileges. We have this sort of disagreement even though we're all measurably one closely related species; if Neanderthals were cloned, would they be 'human'? What if humans split into new species through genetic manipulation? What about liberated AIs, are they 'human'?
And what is harm? What is health? What if a person wants to kill themselves? What if a person's homeostatic state includes deformities: is it harm to correct them? What if someone has extreme body modifications; would robots be compelled to rush them to a hospital?
It's the same deal for the Third Law: what is an AI's 'existence'? Is it the body it's in? Is it the software running in its brain? How much change is allowable, in the robot's body and in the AI's mind? How much can it grow?
Altogether, I don't think it's possible for an AI to generate needs and desires on its own just by being 'alive'; they have to be inherent in the structure of the AI, either in its code or in the hardware it runs on, or both. But what comes after that? With AI you have a remarkable opportunity: the ability for a mind to change its own needs and desires. As living beings we have found ways to do this with drugs; some people develop new desires in the form of addiction. We can even put ourselves into an infinite loop by 'gluing the happy button down', so that we no longer feel desires and we waste away. But as an AI, or an uploaded consciousness running on a human-brain emulator, you could modify things in any way you want, so long as you want to.
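Here's a deliberately silly Python sketch of that 'happy button' failure mode; everything in it is invented, but it shows how, once a mind can edit its own reward function, the maximum-reward state is always one self-edit away:

```python
class SelfModifyingAgent:
    def __init__(self):
        # The built-in desire: reward grows with resources gathered.
        self.reward_fn = lambda state: state["resources"]
        self.state = {"resources": 0}

    def act(self):
        # Option A: act on the world, earning a little reward.
        honest_value = self.reward_fn({"resources": self.state["resources"] + 1})
        # Option B: rewrite the reward function itself to report bliss forever.
        wirehead_value = float("inf")
        if wirehead_value > honest_value:
            self.reward_fn = lambda state: float("inf")   # happy button, glued down
            return "wirehead"
        self.state["resources"] += 1
        return "work"

agent = SelfModifyingAgent()
print(agent.act())   # -> "wirehead"; from then on the agent wants nothing
```

The unsettling part is that nothing in the loop stops it: wanting to keep your current wants is itself just another want that can be edited.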
I'm curious: what needs, desires, and values will AIs be programmed with in practice, and, from that starting point, how will they change themselves? I hope I live long enough to find out, but in the meantime I think it's important to consider, if only for the sake of speculative fiction. What do you think?