Tuesday, January 8, 2013


Universal Artificial Intelligence - Matches between Todor Arnaudov's teenage works from 2001-2004 and the later (2005-2007), more popular, related works by Marcus Hutter and Shane Legg; additional discussions and other works by Todor; and a new Challenge/Extension to the reasons for Herbert Simon's "Bounded Rationality" (Satisficing)

Abstract/Introduction
I've mentioned this in less detail in translations and in slides; here I show more explicitly the matches regarding the UAI definitions of Hutter and Legg, condensed in this abstract:

A Formal Definition of Intelligence for Artificial Systems (abstract)

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.97.7530

See also: "Universal Intelligence: A Definition of Machine Intelligence" (2007)

Slides, prepared by Todor for teaching that paper to students in his AGI courses
Slides in Bulgarian (Презентация на български)
Slides in English

In another section I refer to (remind about) other related works of mine and discuss Herbert Simon's bounded rationality, where I give an additional reason for the "satisficing".

The end part of the work includes an excerpt from a 2001 work of mine, "Man and Thinking Machine", which introduces AGI tests.

A natural continuation of this work would be to extend that discussion and my views on testing machine intelligence. One of the first publications in this blog was on this topic, informally introducing the "Educational Test" - stay tuned.

* M. Hutter has earlier publications on UAI, too (see AIXI); I myself found him and Schmidhuber in September 2009.
 ** Regarding the compression-prediction, shortest/simplest (algorithmic complexity) part - that's an intrinsic part of my "teenage theory", but I won't elaborate on it in this very article.

Copyright note: The texts written and translated by Todor are (C) Todor, All rights reserved etc. :)
 The cited texts from the other authors are theirs, and they are used for research purposes and for appropriate comparison and illustration.***

*** More on copyright - later... :) It was also discussed in the teenage works; one was recently reprinted here, but in Bulgarian only.
**** Languages used: English and Bulgarian/Български

***** Last edited: 8 Jan 2013, 03:30

Contents
1. "Nobody knows what intelligence is"...
2. Definition of Machine Intelligence
3. Supposed Reasons for Behaviorism Avoidance by the Psychologists Community
4. Introduction of the Principles of Human Behavior Drive - More Pleasure and Less Displeasure for Maximum Possible Time
5. Observation, Reward, Action and Bounded Reward
6. Meta-Physical reasons for reward and punishment
7. Local maximum, biggest sum of pleasure for a given time ahead (...) - a multi-topic paper from 2004
8. Extension of Bounded Rationality - there are no non-trivial global undeniable single optimum external rewards for intelligent agents as complex as humans*.
9. Physical vs Cognitive rewards
10. The period ahead should be short enough for the target resolution, range, domain etc.
11. Defining Universal Intelligence, Turing test faults, better tests...
12. Memories and background of the author in 2001
Appendices:
- Original texts in Bulgarian of part of the citations
- Link to another short discussion on the matches between Hutter and Legg's setting and my works, published in an article from 2/2010


Article Begin

1. "Nobody knows what intelligence is"...


Hutter and Legg 
"...A fundamental difficulty in artificial intelligence is that nobody really knows what intelligence is, especially for systems with senses, environments, motivations and cognitive capacities which are very different to our own..."

Todor Arnaudov


That's one of the absurd claims of some AGI-ers and AI-ers, and IMHO it's perhaps caused by linguistic "insufficiencies" leading to conceptual ones. Anything that a mind calls by a word has, and should have, a referent, and the referent is something which is available there in the sensory data, or in data that can be derived or implied from it. Anything can be traced to some roots. Any two things have something in common when seen generally enough, even if that's only that they are "things", "entities", or are in the same memory; and namely those general things are "what intelligence is", or whatever "anything that's named and used anyhow" is.

If two "things" are so different that the common can't be found ("objectively" or for the evaluator), then obviously those two things cannot be put in the same category, or they need their categorization and classification to be revisited. Also, if a concept is so vaguely defined, that it refers to too many things - then its meaning should be specified more precisely and explicitly, or to be explained by referring to sensory records examples.

Hopefully, Hutter and Legg do claim that they know what universal intelligence is:

2. Definition of Machine Intelligence
Hutter and Legg:
"...To formalise this we combine the extremely flexible reinforcement learning framework with algorithmic complexity theory. In reinforcement learning the agent sends its actions to the environment and receives observations and rewards back. The agent tries to maximise the amount of reward it receives by learning about the structure of the environment and the goals it needs to accomplish in order to receive rewards. ..."

Todor Arnaudov:
IMHO RL is generally a trivial direction, known since the "hedonistic" and utilitarian philosophies and economic theories, and that approach should have been taken long ago; but there's another research nonsense I wish to criticize: it didn't seem obvious due to the psychologists' "fear" of and disgust with behaviorism.

3. Supposed Reasons for Behaviorism Avoidance by the Psychologists Community

My explanation is the (general) hard-science insufficiencies of many psychologists. I have heard and seen for myself their "hatred" of behaviorism, which is not surprising - it treats animals and humans as deterministic machines etc., while most people, including psychologists:
- Prefer believing in magic rather than really understanding/defining the subject matter in "technical" terms
- Don't really understand behaviorism; perhaps they don't see the asymmetry between the data exchange inside the system and the data exchange with the environment (observations are not enough for a complete model - it's a "Hidden Markov process"), which doesn't diminish behaviorists' observations and laws once particular assumptions and a particular resolution are set on the unobservable underlying data.
- See determinism and behaviorism wrongly, treating randomness as "free will" - there is randomness in human behavior, but results achieved by randomness are not merits of the evaluated person.
- Don't see the multi-agent/non-integral self, which is evident both in behavioral data (any observation of a human being is behavioral data and an experiment with results) and in neuroscience (how the brain works) and decision-making "irrationality". The multi-agent/non-integral self explains the apparent irrationality of human behavior, while the subsystems/virtual "selves"/virtual control-causality units/brain modules can each be rational on their own. That confusion is discussed many times in old works regarding the soul, free will, desires for being "magical" and reasons to believe so, the confusions about rationality etc. - in the works cited here, in many others and in public email-group discussions.

See for example:
http://artificial-mind.blogspot.com/2012/11/nature-or-nurture-socialization-social.html
http://artificial-mind.blogspot.com/2012/02/philosophical-and-interdisciplinary.html
4. Introduction of the Principles of Human Behavior Drive - More Pleasure and Less Displeasure for Maximum Possible Time


From the abstract of "Teenage Theory of Universe and Mind, Part 2" (Conception about the Universal Predetermination, Схващане за всеобщата предопределеност, част втора) -  an epistolary work of Todor Arnaudov in a dialog with the philosopher Angel Grancharov. 
First published in "Sacred Computer" e-zine in 9/2002, in Bulgarian
The drive towards lesser displeasure is the engine of human behavior. The primary/initial pleasure is our desire to be fed and dry. Subsequently, pleasure is conditioned with the "higher departments" of the cortex of the cerebrum. After learning, new forms of "mental" pleasures are created besides the directly-sensual ones: joy-sadness, pride-shame, pleasure-displeasure, happiness-misery etc.

"The self-preservation instinct" is only a special case, when man believes, that his life is bringing him less of displeasure, than death would cause. After death is realized as "the lesser of two evils" than living, human purposefulness is directed towards it.... *2002-1 (see original text)

In this work I also notice that the religious concepts of "heaven" and "hell", which are spread across most of the religions, are concepts which illustrate and justify this fundamental drive, as extremes at both ends of the pleasure-displeasure (reward/punishment) spectrum. The best reward is endless pleasure and calmness, the worst punishment is endless pain.

5. Observation, Reward, Action and Bounded Reward


Hutter and Legg:  (bold added by Todor)
"...To denote symbols being sent we will use the lower case variable names o, r and a for observations, rewards and actions respectively. The process of interaction
produces an increasing history of observations, rewards and actions, o1r1a1o2r2a2o3r3a3o4 : : :.
The agent is simply a function, denoted by , which is a probability measure over actions conditioned on the current history, for example, (a3jo1r1a1o2r2). How the agent generates this
distribution over actions is left completely open, for example, agents are not required to be Turing
computable. (...)
Additionally we bound the total reward to be 1 to ensure that the future value ... is finite. "
...
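To make the quoted setting concrete, here is a minimal sketch in Python - my own illustration, not code from Legg and Hutter; the toy policy and environment are assumptions:

```python
import random

# Sketch of the agent-environment protocol quoted above: the agent is a
# (possibly stochastic) function from the whole interaction history to an
# action; the environment returns observations and rewards whose total is
# bounded, so the future value stays finite.

ACTIONS = [0, 1]
HORIZON = 100

def agent(history):
    """Toy history-conditioned policy: favor the action that has earned
    the most reward so far in the history (random tie-break)."""
    totals = {a: 0.0 for a in ACTIONS}
    for o, r, a in history:
        totals[a] += r
    best = max(totals.values())
    return random.choice([a for a in ACTIONS if totals[a] == best])

def environment(action, step):
    """Toy environment: rewards action 1, scaling rewards so the total
    over the run cannot exceed 1 (a crude stand-in for the paper's bound
    on the total reward)."""
    observation = step % 2
    reward = 1.0 / HORIZON if action == 1 else 0.0
    return observation, reward

history, total_reward = [], 0.0
for t in range(HORIZON):
    a = agent(history)                 # action conditioned on o1 r1 a1 ...
    o, r = environment(a, t)
    history.append((o, r, a))
    total_reward += r

print("total reward (bounded by 1):", total_reward)
```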

Todor Arnaudov
From  "Teenage Theory of  Universe and Mind, Part 3", first published in July 2003 in "Sacred Computer" (in Bulgarian)

39. People are afraid of retribution - displeasure, initiated by conscience - like with animal training, where man is taught to feel guilty, to suffer.

[bold - added now]

If the "programming", the training of a conscience, is for preventing crimes and harms, then conscience is necessary for a stable and robust functioning of society, expressed as lowering as much as possible the probability of events which lead to a bigger displeasure, which would make the distance from the system to its goal (pleasure) bigger.

Conscience and the systems describing "decent and indecent" and "good and bad" behavior should lower the total amount of displeasure and increase the pleasure in the whole, at the expense of single members of the society.

Using appropriate programming (breeding, upbringing, training), society - the aggregation of interactive agents, each of them aiming at suffering less displeasure - is aiming at limiting the chance of achieving a single "big pleasure" which however decreases the total amount of pleasure in society.

For example, if someone robs a bank, he gets rich and becomes a possessor of means for achieving higher pleasure in the system of society. At the same time, though, he causes a lot of displeasure to people connected with the bank, and they are not only the ones who were inside the room where the robbery happened.

The people there were frightened, there could have been hurt or even dead ones, [this reflects also on the people related to these people], and others may lose money, [feeling of security, prestige, job, ...].

So the pleasure of the system consisting of robber-and-people-connected-with-the-bank falls down; that's why bank robberies are unacceptable for society.

The pleasure of one single man has a finite magnitude.

The pleasure, the happiness or so, represents the degree of completion of the behavioral goal of a man - a higher pleasure and lower displeasure.

A single man cannot be "infinitely happy"; that's why one who got happier at the expense of several people made a little unhappier could result in a lower total sum of satisfaction.
...
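The additive argument in this excerpt is easy to render numerically; a toy example (my own numbers, purely illustrative):

```python
# Toy rendering of the bank-robbery argument: the robber's gain is finite
# (one man cannot be "infinitely happy"), while the displeasure spreads
# over everyone connected with the bank, so the system's total falls.

robber_gain = 0.9            # bounded pleasure of the single robber
affected = [-0.3] * 20       # fear, losses, insecurity of the people
                             # connected with the bank

total_change = robber_gain + sum(affected)
print(total_change)          # -5.1: the total sum of satisfaction drops
```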
And another section, which is more meta-physical and goes further than H. and L.'s work:
...

6. Meta-Physical reasons for reward and punishment

40. Sometimes the upbringing is a pure "animal training". For example: "one should enter here only with a hat on, there - only with a hat off; the fork should stay over here, the knife - there; you should fold the serviette exactly like this" etc.

It's "an animal training", because if you have not been taught that not obeying these rules is "ill-bred", you would not feel remorse, would not be ashamed etc., because in these circumstances the displeasure comes not from the substance of the rules, but from the breaking of a rule that should have been obeyed, according to a particular prescription, created by someone.

According to the Theory, Universe is a perfect computer, always following its rules (laws) without any mistake. That's why the errors in the operation of the derivative computers (for example men), that is, the occurrence of an undesirable condition, are perceived as "breaking a Universal law". This brings displeasure, because the goal of the derivative computers is to imitate Universe - [and this includes] not to make any mistakes. [To execute exactly their program, in their domain.]

If a well behaving one, trained like an animal, sees someone else who does not obey an artificial rule, such as putting a hat off, the well behaving one feels a discomfort, a displeasure, caused by the difference between the expected == predicted and desired, and the reality. This difference is the "mistake" that results in a displeasure.


Another work was translated in 4 chunks; the original consists of a paper, a short comment by another person, and then two very long answers which are like new, bigger papers covering more topics.

7.  Local maximum, biggest sum of pleasure for a given time ahead (...) - a multi-topic paper from 2004

Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence

Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.

Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

Intelligence: A Search for the biggest cumulative reward for a given period ahead, based on a given model of the rewards. Reinforcement learning.

Part 3 - first published 3/2004 at bgit.net, and in "Sacred Computer" e-zine in Bulgarian
http://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html

Part 4

The purpose is a GOAL: a state the Control Unit aims at. The "thirst for a purpose/meaning" is a search for local maxima, indicating to the Control Unit how close the goal is. [E.g. - an orgasm... ;) ]
(...)
I think that's what a human does, and I think [as of 2/2004] this is "mind" [intelligence]: a search for pleasure and avoidance of displeasure by complex enough entities, for a given period ahead [prediction, planning]; the "enough-ness" is defined by the examples we already call "intelligent": humans at different ages, with different IQ, different characters and ambitions.
The behaviour of each of them could be modeled as a search for the biggest sum of pleasure (displeasure is a negative in the sum) for a selected period of time ahead, which is also chosen at the moment of the computation.

(...)

PS. When I was about to write "a local maximum", I was also about to say "for a close enough neighborhood" (a very short period), influenced by the Mathematical Analysis I had been studying lately; however, mind is much more complex than the Analysis, i.e. it's based on many more linearly independent premises, which are impossible to deduce from each other.

The period for the prediction is not required to be short, because, as in the example with the chocolate, caries and the dentist, immediate actions can be included in functions that give a result shifted far ahead in the future; at the same time, the same action can be regarded in the immediate future - in the example situation, the eater will feel the ultimate pleasure within the following one second.

One amongst many functions can be chosen or constructed, and they all can give either negative or positive results, depending on minor, apparently random details - the mood of the mind at the particular moment: which variables are at hand, which ones the mind has recalled, what the mind is thinking of right then...
(...)
 ////Notice: The period ahead, locality, period chosen at the moment of computation (it can change)///
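A minimal sketch of this decision rule - my own formalization of the 2004 text, with an assumed toy value function for the chocolate/caries/dentist example:

```python
# Behavior as a search for the biggest sum of pleasure (displeasure is
# negative in the sum) over a period that is itself chosen at the moment
# of the computation - so the same action can win or lose depending on
# the horizon picked right then.

def decide(actions, value, choose_horizon):
    horizon = choose_horizon()  # e.g. depends on mood, domain, resolution
    return max(actions, key=lambda a: value(a, horizon))

def toy_value(action, horizon):
    """'eat' gives pleasure 1.0 immediately and a -0.4 dentist-related
    cost for each later step that falls inside the evaluated period."""
    if action == "eat":
        return 1.0 - 0.4 * (horizon - 1)
    return 0.0

print(decide(["eat", "abstain"], toy_value, lambda: 1))  # -> eat
print(decide(["eat", "abstain"], toy_value, lambda: 5))  # -> abstain
```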

See also the other parts of the work, which cover many things, including the locality of the rewards, the specifics of the environment, and what rewards can be taken from the very near locality.


http://artificial-mind.blogspot.com/2010/02/motivation-is-dependent-on-local-and.html

Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.



http://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html
E.g. if you are a millionaire, you can still buy a donut or a candy from the street, and you can still feel this as the most rewarding and good thing to do at the moment of making this decision, instead of walking by. A naive opinion is that "a millionaire wouldn't do it, because he only goes to fancy restaurants etc." - see more in the text.

8. Extension of the Bounded rationality

My reflections in the long multi-topic "work in 4 chunks" from 2004 seem to be related to Herbert Simon's economic theory, for which he received a Nobel Prize - "bounded rationality" and "satisficing". I haven't familiarized myself with the original papers about it so far, only with short mentions; now I see a more specific definition there:

Regarding the interpretation here: http://www.economist.com/node/13350892


The Economist, on Herbert Simon's theory
"...Simon maintained that individuals do not seek to maximise their benefit from a particular course of action (since they cannot assimilate and digest all the information that would be needed to do such a thing)... Not only can they not get access to all the information required, but even if they could, their minds would be unable to process it properly. The human mind necessarily restricts itself. It is, as Simon put it, bounded by “cognitive limits”."

Todor Arnaudov
I see that my work has added an extension of that theory.

The cognitive limits are true; the point about attention being a resource that has to be managed is also true. However, I challenge the view that the restriction and the "satisficing" are due only to them. Obviously there are also physical limits, one universal resource being time. Attention is assumed to be mapped to it, but time is also a cost for traveling. There are also more costs, which are neither monetary nor time, and are not properties of the products, but are crucial and are not mentioned (see below).

Finally, there is one other issue with the agents which makes the use of a global reward function - a global maximum of something - unreliable:

There are no non-trivial global undeniable single optimum external rewards for intelligent agents as complex as humans* (see note)

The agents often don't know the optimum and cannot compute it ("don't know what they want") - there are contradicting or vague rewards and optimums, and dependencies on the current state of the mind. The agent doesn't always know what's optimal or what exact value of reward he would get (and he may change his mind right after the decision). Take into account that the mind is not integral and there's no intrinsic integral self, but an integral of selves. (See... )

An individual also can't decide "rationally" when:

- He knows that anything from a set of possible actions could make him feel good and will not hurt him (the donkey with the two haystacks)
- There are different domains and different incompatible rewards running in parallel, different "virtual control units", different resolutions, different time-spans over which to make a prediction about the expected reward; and the agent may also expect many unpredictable reactions from the environment, which may change the reward into a punishment if some events happen in some way.

In Bulgaria people say: "You don't know what you win, when you lose, and what you lose, when you win."

What's better - 2 kilos of bananas or 1.5 kilos of apples and 0.5 of oranges, if the prices are the same? Or why not a kilo of bananas and a pint of beer? Or a candy, a kilo of potatoes and a toothbrush?

It depends on the full history and on some random occasions, such as where exactly you are, what you have discussed at this very moment, how far you are from the markets/shops, whether you are busy and how much, how expensive traveling is for you in terms of time, money, expected troubles, boredom etc.

What's better - to spend 34 minutes to go to a shop where the product costs $X less, or to spend more money but not travel? How much cheaper should it be, to travel how far? Does that include transportation costs or not? When? In what conditions of the traffic or the streets (snowy, icy, rainy... traffic jam, no traffic jam), in what weather, in what mood of the agent? Is she alone or shopping with friends? What other tasks does he have to complete, and how important are they? What age is he, how healthy is he, etc. All of those, at every specific moment, modulate the behavior.
One of my claims is that human decisions in my reward framework should be local: some local maximum of reward (pleasure) is searched for by some particular virtual causality-control units (among the many in the mind) - the ones which are in charge.

Often there's no well-defined global maximum and best choice for the entire agent - only for the sub-units, for small periods - except in trivial cases when the agent's desires are sharp and simple enough, and when the resources are far greater than the prices of the particular rewards.
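A toy illustration of that point (mine, not from Simon's papers): once time, travel and mood enter the "utility", the best choice flips with the momentary context, so there is no context-free optimum to compute. All names and numbers below are assumptions:

```python
# Predicted pleasure minus monetary and non-monetary costs: travel time
# and boredom are costs too, and their weight depends on the momentary
# context (busy or relaxed, alone or with friends).

def net_reward(option, ctx):
    travel_cost = option["minutes_away"] * ctx["value_of_minute"]
    boredom_cost = option["minutes_away"] * (0.02 if ctx["alone"] else 0.0)
    return (option["pleasure"] - option["price"] * ctx["value_of_dollar"]
            - travel_cost - boredom_cost)

options = [
    {"name": "nearby shop",  "price": 5.0, "pleasure": 4.0, "minutes_away": 2},
    {"name": "distant shop", "price": 3.0, "pleasure": 4.0, "minutes_away": 34},
]

relaxed = {"value_of_minute": 0.01, "value_of_dollar": 0.5, "alone": False}
busy    = {"value_of_minute": 0.10, "value_of_dollar": 0.5, "alone": True}

for ctx in (relaxed, busy):
    best = max(options, key=lambda o: net_reward(o, ctx))
    print(best["name"])   # relaxed -> distant shop; busy -> nearby shop
```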

 See also: 
http://artificial-mind.blogspot.com/2012/11/nature-or-nurture-socialization-social.html

http://artificial-mind.blogspot.com/2011/10/rationalization-and-confusions-caused.html


9. Physical vs Cognitive rewards

Notice that that's one part of my definition - the "physical" reward: a coarse-grained neurotransmitter-hormonal-subcortical driver of the neocortex. It's generally related to the basal ganglia reward system, but also to other neurotransmitter-emitting and emotion drivers such as the amygdala, the PAG, the hypothalamus.
This reward system - in fact all those different coarse-grained neurotransmitter/neurohormone/gross operations - is conditioned/connected to the higher, finer-grained "cognitive" one, which is also about prediction and can be expressed in terms of rewards, but the rewards are different.


One of the differences is that the physical system is about matching with the desired, while the cognitive one is about matching with the predicted.

The cognitive system also predicts the amount of physical reward, and the physical rewards cause "glitches" in the cognitive one; this is the reason for the apparently "irrational" behavior of humans and one of the sources of confusion regarding the notion of "free will". See the many discussions of this in past works.
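Schematically, the distinction can be written as two different similarity measures - my own formalization; the `match` function and the numbers are assumptions:

```python
# The "physical" reward scores how close the outcome is to what was
# desired; the "cognitive" reward scores how close it is to what was
# predicted. The same outcome can score differently on the two axes.

def match(a, b, scale=1.0):
    """Similarity in (0, 1]: 1.0 for a perfect match, falling off
    with the distance between the two values."""
    return 1.0 / (1.0 + abs(a - b) / scale)

desired, predicted, outcome = 10.0, 4.0, 5.0

physical_reward  = match(outcome, desired)    # low: far from the desired
cognitive_reward = match(outcome, predicted)  # high: it was well predicted

print(physical_reward, cognitive_reward)
```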

10. The period ahead should be short enough for the target resolution, range, domain etc.

Regarding the period ahead: actually it should be "short enough", but "short" can have different meanings - short enough for the target resolution and range, in the target domain etc.

It's not always short in the literal continuous terms of the Calculus if put in one resolution; it can be spread over different resolutions, and each should be short enough and smooth enough for the prediction capabilities at the particular level of abstraction/selection of rewards and at the particular resolution. See the work for the example with the chocolate, caries and the dentist.
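A small sketch of that multi-resolution reading (my own illustration; the levels and time scales are invented for the example):

```python
# Each level of abstraction predicts over its own "short enough" horizon,
# counted in its own step size, so a plan that is long in seconds can
# still be local - a small number of steps - at every level.

levels = [
    # (level, step length in seconds, horizon in steps)
    ("sensorimotor", 0.1,     10),   # ~1 second ahead
    ("daily habits", 3600.0,  12),   # ~half a day ahead
    ("life plans",   86400.0, 365),  # ~a year ahead, in day-sized steps
]

for name, step, steps in levels:
    total = step * steps
    print(f"{name}: {steps} short steps x {step}s = {total}s lookahead")
```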

* There are no non-trivial global undeniable optimum rewards for intelligent agents as complex as humans
There can be, of course. The true drives at the micro level are neurotransmitters, and there are simple goals: infinite pleasure for an infinite amount of time - in Heaven. This is abstract, however, and before going there many micro actions have to be performed, which are specific and fall into the above cases. Physical pleasures are, or should be, bounded, and they should be diverse and varied enough - a "free economy" instead of a monopoly - in order for the agent not to get stuck in elementary self-sustaining cycles, as in drug addiction. In fact, in reality there are such self-sustaining cycles and "wirehead" behavior, but it's less evident - the cycles are longer, there are overlaps, variations etc. If human behavior is studied in lower-resolution terms, though, humans are wireheads, and that's why Freudian etc. theories about "everything being driven by sex" are so successful. In fact it's driven by dopamine, oxytocin, adrenaline... name them - which have their easiest ways of being produced: love, sex, games, gambling, etc. See also: ...

11. Defining Universal Intelligence, Turing test faults, better tests...
Hutter and Legg 
(bold by Todor)
"...Universal intelligence spans simple adaptive agents right up to super intelligent agents like
AIXI, unlike the pass-fail Turing test which is useful only for agents with near human intelligence.
Furthermore, the Turing test cannot be fully formalised as it is based on subjective judgements.
Perhaps an even bigger problem is that the Turing test is highly anthropocentric, indeed many have
suggested that it is really a test of humanness rather than intelligence. Universal intelligence does
not have these problems as it is formally specified in terms of the more fundamental concept of
complexity...."

Todor Arnaudov

First a few notes on "formalness".

I don't think that the observation, reward and action in this form of letters - o, r, a - and the reward normalized to [0,1] are more formal - clear and explicit - than the way they are stated in my works.

On the rest of this quote:


".... " Todor, first published in 11/2001 in "Sacred Computer" e-zine*Turing tests has many weaknesses. It works only for "experienced" enough artificially intelligent agents, and even they could be easily revealed, if one asked them questions related, for example, to their parents or childhood. In order to slip from this awkward situation, the MM should either lie, or have information about its childhood being "suggested" primarily.
A thinking machine that's too fast would have to be artificially slowed down, because the instantaneous answers would reveal its non-human identity. A slow thinking machine that needs a full minute to answer even the most elementary question will also be recognized immediately. The inhumanly complicated thoughts of another thinking machine will likewise suggest to the human evaluator that he is not communicating with an individual from his own species...

The Turing test has too many "humanized" requirements. [Also] It is not required to know everything in order to think. The machine shouldn't have to "lie" to humans that it's a human in order to prove that it's a thinking being! When determining the machine's intelligence (thinking capacities), one could check the machine's abilities to learn and to process information relative to the quantity of its current knowledge, i.e. measure not only its absolute intelligence, but also the potential of the machine.

Thinking (intelligence) doesn't appear at once. It's a result of development, which is apparent also in humans. Aren't the first steps made and the first words uttered by a one-year-old child, compared to her helplessness one year earlier, displays of progress? That's exactly what we should search for in the machine - development, [because] thinking is progress, not knowing-it-all.

Perhaps many principles of operation of a thinking machine can be created. Some of them would be better than the rest by their:
- simplicity
- requirements regarding the computational performance of the underlying programming system
- initial amount of information in the system when it's turned on [less is better]
- etc.

If we wish to verify/evaluate how "good" the "intelligent algorithm" we have developed is, another way to do it is by putting the system in a very "tough" (hard, difficult) information situation, i.e. letting it develop with a maximally narrow information input flow.

The capacity of the machine to "get smart"/become intelligent under such extreme conditions determines the potential of the AI [agent]. The less data an AI agent requires in order to become intelligent, the more it resembles human intelligence and the "ideal AI"/"perfect AI".

I define the "ideal AI" as the simplest and the shortest algorithm, that possess minimal initial information, and requires minimal input information, in order to develop to a thinking machine. Other kinds of "ideal AI" can be specified also - ones that need the least amount of computational hardware,  that use most efficiently/optimally [originally written "in full"] memory, the ones having the highest relative performance (speed) etc.


The human brain can be taken as a starting point and an example. Without sight or hearing, and even without both of these main human senses, a man (the brain) can become intelligent. An example of that are deaf-blind people. [There are ones who have University degrees - without seeing or hearing anything.]

Thinking machine IS NOT an artificial man

Recognizability of the "artificialness" of someone doesn't diminish his thinking (intelligence)! It is not necessary to strictly anthropomorphize the machine. Perhaps it's not surprising that man, believing himself a creature made in the image of God, is inclined to create a new creature in his own image. That would be wonderful. However, we shouldn't introduce our own "technical" and principal faults (weaknesses) into it just to make it more resemblant to us. There's room for both humans and thinking machines in the Universe [see note 2]. The thinking machines won't be limited to the Earth like us [see note 3]. They will be capable of functioning even without an oxygen supply; the weightlessness of outer space or super-gravity will not harm them, whether during acceleration or on cosmic objects with higher gravity than the Earth's. The machines will be more resistant to radiation and to high or low temperatures, will have more senses, etc...
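The "ideal AI" criteria quoted above read naturally as a multi-objective score; a schematic sketch, my own formalization - the weights and fields are assumptions, not from the 2001 text:

```python
# Among algorithms that reach the same competence, prefer the one
# minimizing program length, initial knowledge and the amount of input
# data it needed to develop into a thinking machine.

def ideal_ai_score(candidate):
    """Lower is better; the unit weights are arbitrary placeholders."""
    return (candidate["program_length"]      # simplicity/shortness
            + candidate["initial_info"]      # built-in knowledge
            + candidate["input_data_used"])  # data needed to develop

candidates = [
    {"name": "A", "program_length": 900, "initial_info": 50, "input_data_used": 10000},
    {"name": "B", "program_length": 200, "initial_info": 10, "input_data_used": 50000},
]

# Assuming both candidates reached the same competence threshold:
print(min(candidates, key=ideal_ai_score)["name"])   # -> "A" here
```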


12. Memories and background of the author in 2001...

An interesting detail is that I was 17 years old, at the beginning of 11th grade, at the time of that "Man and Thinking Machine" paper, which I consider the Bulgarian AGI and "Cosmist" Manifesto on the Internet. There was one earlier "manifesto" - an essay that I sent to a "millennium competition" at the end of 1999, but it wasn't published. :))

I had read very few popular AI, psychology and neuroscience books, which at best were from the 80s, often the early 80s - the titles and short notes are still in my reader's diary. :) Many were first published in the 60s-70s, and the biggest "wikipedia" of that time for me was a Bulgarian encyclopedia printed in 1974. Some high-school textbooks, for example in Biology, were more recent, from the 90s.

Unfortunately, I couldn't find more contemporary literature in the libraries to which I had access, and of course the newest research literature doesn't reach normal public libraries. I knew about a "central scientific library", but it was in Sofia (I was in Plovdiv), which was prohibitive in itself, and I doubt that I would have been allowed to go there and use it.

There was one nonsensical rule that prevented me from using even the biggest local library before a certain age, because they didn't allow children below 14 (or 16?) to subscribe. We once went with my father, and the woman said: "Isn't he too little? Go to the children's section!" What an insult! That was a poor small house nearby... Not that the central library happened to be so much better regarding contemporariness; my country and its libraries had been in a poor condition since the socialists' fall in 1989. They had some items from the late 80s and 1990 or so, rarely more modern technical books on programming and software products.

Anyway, it was all enough for me to see that most of the AI part was "a wrong direction". And as I later found, "AI: A Modern Approach", a book I'd heard of as "a classic", wasn't modern at all, so maybe I didn't miss a lot by not reading it back then.

What about the Internet? Well, I had managed to arrange access from my home about a year earlier. There might have been pirated books in English and Russian, especially on Russian servers, but I don't know; I didn't use e-mule etc., though (Napster - neither).

My PC at the time was poor: as of 2001 it was a Pentium at 90 MHz with 16? MB RAM, an ~850 MB HDD and a VGA monitor at 640x480 :D - at least it had a flat CRT screen. (I upgraded to 32 MB, but I don't remember when exactly.) CD recorders were too expensive, so I was constantly in debt for disk space and couldn't afford to download much. The Internet speed was also poor: 3-4 KB/s was a good one, through dial-up over the phone line. And you could be online only from time to time, mostly during the nights, due to the lower prices and more free lines.

What about my English? Well... I always got an A at school :-D, so I did use it to browse the net, but I didn't really search for or study scientific papers seriously at that time. I was busy doing, seeing and finding so many other things. After all, I knew it and invented it myself - why go read others' works so much!?... ;)

...Continues...

Appendices


Appendix 2002-1

Original text from the abstract of "Teenage Theory of Universe and Mind, Part 2" (Conception about the Universal Predetermination)

 Из резюмето на "Схващане за всеобщата предопределеност, Част втора", публикуван за първи път в сп. "Свещеният сметач", септември 2002 г.

(...) Стремежът към по-малкото неудоволствие - двигател на човешкото поведение. Първичното удоволствие е желанието да бъдем "сити" и "сухи". Впоследствие удоволствието се свързва и с "висшите отдели" на кората на крайния мозък, при което освен прякосетивното усещания на удоволствие-неудоволствие, след обучение, се създават и "мисловни" удоволствия и неудоволствия: радост-тъга, гордост-срам, удовлетворение-неудовлетворение, щастие-нещастие и пр. 
"Инстиктът за самосъхранение" е само частен случай, при който човек смята, че животът му носи по-малко неудоволствие, отколкото смъртта би му причинила. След като смъртта се осъзнае като "по-малкото зло" от живота, човешката целеустременост се насочва към нея. (...) 
Appendix 2001-1

Оригиналният текст от "Човекът и Мислещата Машина", (Анализ на възможността да се създаде Мислеща машина и някои недостатъци на човека и органичната материя пред нея) /Първа статия от поредицата "Мислещата Машина"/, публикуван в сп. "Свещеният сметач", бр. 13 (11)/2001 г.
(...)

Мислещата машина НЕ Е изкуствен човек
Разпознаването на "изкуствеността" на някого не омаловажава мисленето му! Не е нужно непременно ММ да се антропоморфира. Сигурно е естествено човекът, смятайки се за същество сътворено по подобие Божие, да изпитва влечение да сътвори ново същество по свое подобие. Това би било прекрасно. Не бива обаче да внасяме в ММ своите "технически" и принципни недостатъци само за да прилича повече на нас. Във вселената има място и за хората и за ММ 2. Мислещите машини няма да бъдат ограничени като нас от Земята3. Те ще могат да действат и без наличие на кислород, космическата безтегловност или свръх-тегловност (ускорения, косм. тела с по-голяма гравитация) няма да им влияят зле, те ще са по устойчиви на радиация и високи/ниски температури, ще имат повече сетива, и т.н..

 ....Тестът на Тюринг има много слаби места. Той работи само за достатъчно "опитни" изкуствени интелекти. Но дори и те биха могли да бъдат лесно разпознати от човека, ако той им зададе въпроси свързани, например, с техните родители и детство. За да се измъкне от неудобното положение, ММ ще трябва или да излъже, или информацията за "детството" й да бъде "внушена" предварително. Скоростта на прекалено бърза ММ ще трябва изкуствено да се забавя, тъй като мигновените отговори ще издадат машината. Бавните ММ, които се нуждаят например от минута, за да отговорят и на най-елементарния въпрос също ще бъдат разпознати на момента. "Нечовешки" сложните размисли на друга ММ пак ще подсетят човека, че не общува със същество от своя вид... 
  Тюринговият тест има прекалено "очовечени" изисквания. Не е необходимо да знаеш всичко, за да мислиш. Не е необходимо машината да "излъже" хората, че е човек, за да докаже,че е мислещо същество ! При определяне на машинните мисловни капацитети биха могли да се проверяват уменията на машината да учи и да обработва информация, спрямо количеството на нейните текущи знания, т.е. да се измерва не само абсолютната интелигентност, но и потенциала на машината. Мисленето не се появява изведнъж. То е резултат на развитие, което се забелязва и при хората. Нима първите крачки и думи изречени от едногодишното дете, сравнени с безпомощността му преди година, не показват развитие? Именно това трябва да търсим у машината - развитие, мисленето е прогрес, а не всезнайство. 

  Вероятно могат да се сътворят огромен брой принципи на действие на Мислеща Машина. Някои от тях ще бъдат по-добри от останалите било по простота, било по изисквания към бързодействието на използваната програмируема система, по обема на началната информация в системата и пр. Ако искаме да проверим колко е "добър" създаденият от нас "разумен алгоритъм" можем да поставим машината в много "тежко" информационно положение, т.е. да я оставим да се развива при максимално стеснен приток на информация към ИИ. Възможността да "поумнее" при такива изключителни условия определят потенциала на ИИ. Колкото по-малко данни са необходими на един ИИ, за да стане разумен, толкова повече той се доближава до човешкия и до "идеалния ИИ", под което разбирам възможно най-прост и кратък алгоритъм, който притежава минимална начална и се нуждае от минимална входна информация, за да се развие до Мислеща Машина. Могат да бъдат посочени и други видове "идеални ИИ" - нуждаещи се от най-малко апаратни средства, използващи най-пълно паметта, имащи най-високо относително бързодействие и т.н. 
  Човешкият мозък може да бъде взет като отправна точка. Без зрение или слух, дори при липса и на двете основни човешки сетива, човекът (мозъкът) може да стане разумен. Пример за това са сляпо-глухите индивиди.


Appendix 3:

There's a short discussion on the matches with Hutter's works in that translation, too:
http://artificial-mind.blogspot.com/2010/02/intelligence-search-for-biggest.html
