

Journal de Chaource ([info]lj_chaource) writes
@ 2016-07-31 22:25:00


Can artificial intelligence think, and if so, should it be treated as a human?
In the history of AI, the same story repeats itself:

- it is said that machines are not "intelligent" because they cannot perform an activity X, which is regarded as characteristic of human intelligence,
- a machine is eventually built such that it performs the activity X better than the best humans,
- it is then explained that the activity X is merely routine and mechanical, albeit complex; being capable of X does not really mean to be "intelligent".

Various AI machines have been programmed and trained to play board games (checkers, chess, Go), to walk on ice without falling, to play musical instruments by reading sheet music, to drive cars, to talk using words so that people want to keep talking to them (the "Turing test"), to plan and execute complicated series of actions in a changing environment, and even to "think about their own thinking". None of this has led to acknowledging that machines have reached a "fully human level of intelligence".

A reluctance to admit that machines can "really think" is understandable. However, I see a deeper problem here: we actually have no idea what it means to "really think" or to be "truly intelligent." We do not have any objective operational criteria of these things.

Consider the following facts:

- some animals can successfully adapt to complex and changing environments, which has been suggested as a definition of "intelligence"
- some animals can learn new actions and execute them - another suggested definition of "intelligence"
- other animals cannot adapt and die out when environments change

- some humans can successfully learn new cognitive skills, such as memorization of texts or music, mathematical calculations, etc., that no animals ever do
- other humans cannot learn some or all of these skills, and we don't know why
- the behavior of humans includes using a natural language that defies description (dictionaries and grammar textbooks have to be revised every so often and are never complete)
- the behavior of humans includes conceptualization of the external world; even though the world is the same, the conceptualization is different in every culture, and there is no clarity as to what some of these concepts "really" mean

- animal behavior is "rational" with respect to a set of goals and criteria, which may not be always immediately apparent, and certainly may not be known to the animal itself
- human behavior is "rational" with respect to a set of goals and criteria that people are often unaware of
- a computer's behavior is either fully prescribed or "rational" with respect to a set of goals ultimately imposed by the programmer
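The last point can be made concrete with a toy sketch (mine, not from the post; the objective values are arbitrary): the machine's "rationality" consists of maximizing a score that its programmer hard-coded, and the machine has no say in what that score is.

```python
# Hypothetical illustration: the goal is imposed from outside the agent.
def programmer_objective(action):
    # The programmer decided that saving energy is "good"; the agent did not.
    return {"work": 1.0, "idle": 2.0, "sleep": 3.0}[action]

def rational_agent(actions):
    # "Rational" here means only: pick the action with the highest
    # externally prescribed score.
    return max(actions, key=programmer_objective)

print(rational_agent(["work", "idle", "sleep"]))  # the agent "chooses" sleep
```

However complicated the objective becomes, the same structure remains: the goals originate with the programmer, not the machine.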

- we treat animals as more or less sub-human in proportion to how easily we can manipulate the behavior of the animal within the animal's given behavior range
For example, an animal that always responds to a certain stimulus in the same way will never be considered human.

- AI systems such as neural networks are useful only after training for a long time and on large amounts of data
- there is no clarity as to "how" the AI system performs its tasks after training, and why it fails to perform well in certain cases
- there is no way of simply "fixing" an AI system after training, so that it performs better at a certain task; an AI system has no "switches" that will steer it one way or another
- the only way to improve the AI's performance is to perform more training and hope for the best; if additional training fails to bring results, we won't know why that happened
- despite this, the AI remains controllable in a very direct way by power switches and such
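The "no switches, only more training" point can be illustrated with a deliberately tiny sketch (my example, not the author's; the learning rate and data are arbitrary): even for a one-parameter model trained by gradient descent, the only lever available to improve performance is running more training steps — there is no separate control that directly sets the model right.

```python
# Toy model: learn y = 2x by gradient descent on squared error.
# The weight w is the entire "trained state"; nothing else can be flipped.
def train(steps, lr=0.01):
    data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # target function: y = 2x
    w = 0.0                                          # untrained starting weight
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x               # d/dw of (w*x - y)^2
            w -= lr * grad                           # the only way w ever changes
    loss = sum((w * x - y) ** 2 for x, y in data)
    return w, loss

_, loss_short = train(5)    # a little training: still visibly wrong
_, loss_long = train(50)    # more training: the error shrinks
print(loss_short, loss_long)
```

After a real network is trained, the situation is the same but with millions of weights instead of one: no individual weight corresponds to a meaningful "switch", so improvement means more training and hoping for the best.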

- people become fully functioning adults only after learning and training for a long time
- there is no clarity as to why some people fail to learn or fail to perform well despite training
- there are no "switches" inside people that make them better at performing a certain task; we don't have pills that, say, make all people into good pianists
- the only way to improve a person's performance at a task is to give them additional training and hope for the best; if additional training fails to bring results, we won't have a way of understanding why that happened

- since people's behavior cannot be manipulated directly by some switches or chemicals, we evolved a complicated system of communication, which includes components such as verbal (choice of words), visual (facial expressions, body language), and musical (intonation and rhythm of speech), and communicates emotions as well as factual information
- we do not really know how human communication works in all detail; for instance, we do not have a list of all possible meaningful facial expressions together with their precise meaning, or a full description of all possible speech modulations and their effects on the message that the other person gets when talking

- we also treat animals as more sub-human if they do not look sufficiently like humans, because then we cannot communicate with them nonverbally as we do among humans
- some animals share with us some part of the nonverbal communication patterns, e.g. we can influence a dog's or a cat's behavior to some extent using communication; then we start feeling emotionally attached to these animals

- since communication does not always make other humans behave as we want, we use other means of coercion, such as the concept of "responsibility for your actions" and the punitive measures for people who misbehave
- the concept of "responsibility" is not applied to animals because their behavior is less complicated, so we can use more crude means of coercion without significantly reducing the range of their behavior, and/or they are incapable of regulating their behavior from pure communication

- our brains are evolved for purposes of survival in a certain range of conditions; we are genetically pre-trained to live in a certain kind of external world, in a certain kind of society, etc., and then we are trained further during our formative years

Conclusions:

- We will never feel that a machine is "like a human" until the range of behavior of the machine approaches that of a human, including the full range of verbal and nonverbal communication, learning about new phenomena in the outside world, and action planning, with the full range of physical motion in the world
- We will start feeling that a machine is "like a human" if we can communicate with the machine in the way we are used to communicating with other people, that is, if we are able to learn to influence the machine's complicated behavior through communication (rather than through direct programming or "switches")
- If even a small part of the machine's behavior resembles the kind of emotional communication that we are trained for, we will start feeling emotional attachment to the machine (just as we do for pets).

- Training AI takes a long time, and AI will not achieve the same level as humans unless AIs are pre-trained to a level comparable to what evolution has built into humans, and unless AIs spend a comparable amount of time (many years) training

- Suppose for the sake of argument that we succeed in creating a "full AI" machine that learns the same things that humans do and has the same range of behaviors. Then this machine will be actually human by any objective criterion. People will perceive it as actually human, and its behavior will be too complicated to describe or to influence effectively with "switches". You will have to talk to it, to ask it nicely to do things, etc., just like you do with a human.

- The "full AI" will have to be controlled by rewards and punitive measures, just as humans are. We will need to give it "rights" and "responsibilities", otherwise we won't sufficiently motivate it to do things.

- It will have to become highly immoral to influence the behavior of an AI machine by tampering with the electric parts inside it or by "switching it off". This would be similar to a physical assault on a person.

- Each "full AI" will have to train separately, and they won't be able to copy each other's "training results", because every AI's neural network will be different due to different training data (so there will be no "telepathy" possible between AIs, just as it is impossible between people)

- No human will be able to "understand how the full AI really works". The range of behaviors of "full AI" will be far too complicated to even describe completely (just as it is for humans).

- We will not be able to say why certain AIs do well on certain tasks while others don't, nor will we be able to fix the "underperforming" AIs or know how to train them "better". The AIs themselves also won't be able to "fix" their "programming" or to explain why they failed to learn adequately. The results of training will be uncertain: most AIs will achieve a certain level, but some AIs will significantly overachieve or underachieve. (Just as humans do.) It is not clear why AIs should be able to achieve a super-human level on any given task, especially since we don't know in detail how human brains work on various tasks.

It's easier and faster to take a newborn baby and train it to be a full human adult than to design and train a "full AI". The results of raising a baby to adult status will be just as uncertain. Nobody knows how to raise children "correctly", but at least we are prepared by evolution to deal with that task, which is not the case for the task of designing and training AIs.

Given these considerations, I think we will never be able to build a machine that we will perceive as "fully human".


