Tuesday, August 27, 2013


Issues on the AGIRI AGI email list and the AGI community in general - an Analysis


Please, see the essay: Issues-AGI-community-AGIRI-8-2013.pdf [the link is gone, I've got to find the file]

"...Note: That post refers also to general problems in the AGI community, including the academic one – such as [that] average researchers are not generally intelligent enough and they do not have a complete grasp of all the areas they should have....
(...)

"- Multi-intra-inter-domain blindness/insufficiency [see other posts from Todor on the list] - people claim they are working on understanding "general" intelligence, but they clearly do not display traits of general/multi-inter-disciplinary interests and skills.

[ Note: Cognitive science, psychology, AI, NLP/Computational Linguistics, Mathematics, Robotics … – sorry, that's not general! General is being adept, fluent and talented in music, dance, visual arts (all), acting, story-telling, and all kinds of arts; in sociology, philosophy; sports … (…) ... + all of the typical ones + as many as possible other hard sciences and soft sciences and languages, and that is supposed to come from fluency in learning and mastering anything. That's something typical researchers definitely lack, which impedes their thinking about general intelligence. ](...) "

+ a lot more, see the work

8/8/2013


Todor
“Twenkid” Arnaudov


From: Todor Arnaudov <xxxx>
To: xxx@listbox.com
Subject: Re: [agi] Student here, please advise.
Date: Thu, 8 Aug 2013 00:51:15 +0300

A post to the AGIRI AGI list (minor corrections of typos, a few added/changed words etc., plus several additional notes):
http://www.listbox.com/member/archive/303/2013/08/search/dHdlbmtpZA/sort/time_rev/page/1/entry/2:3/20130807175128:7EE86152-FFAB-11E2-A2BE-8DED093DF8D9/

Note: That post refers also to general problems in the AGI community, including the academic one – such as that average researchers are not generally intelligent enough and do not have a complete grasp of all the areas they should have.

@Emahn
Check this course program – AFAIK it was the first University course on AGI; it suggests which topics/fields should be studied and in what sequence:
http://artificial-mind.blogspot.com/2010/04/universal-artificial-intelligence.html



@All
>http://www.listbox.com/member/archive/303/2013/08/sort/time_rev/page/1/entry/0:28/20130802164109:D9F5ED2C-FBB3-11E2-95E0-C8B515BE0CB4/


Ben> Off this list, AGI folks have shared ideas and inspired each other's research plenty...
Ben> It may be the case on this list that people don't get inspired by each other's work though, I dunno -- I haven't actually monitored this list carefully for some years...


Todor:
Well, that's how I see the case here.

  • Sometimes points of view do cross, but it tends to be one-sided: the other side does not agree, has no reciprocal interest ("he's the only one with that idea"), or doesn't accept that there are similarities ("they are superficial ones")
  • There's no ranking: too many leaders, zero followers, and no way to judge who should be trusted
  • Very few make an effort to get to know others' work beyond the brief, superficial exchanges in emails; furthermore:


  • Long emails are often blamed ("you talk too much"), just ignored, or answered in two lines, even if the author clearly has something to say. In fact the readers can't concentrate on the subject matter, "don't have time", or don't care: the author lacks the authority needed to justify the time and effort they are expected to invest, and the other readers believe themselves far superior
  • The participants' backgrounds are severely inconsistent, and what's worse, there are severe gaps in those backgrounds which the "sufferers" are not aware of, not willing, or not able to fill, yet they act as if the gaps weren't there

The above is connected to:

- Multi-intra-inter-domain blindness/insufficiency [see other posts from Todor on the list] - people claim they are working on understanding "general" intelligence, but they clearly do not display traits of general/multi-inter-disciplinary interests and skills.

[ Note: Cognitive science, psychology, AI, NLP/Computational Linguistics, Mathematics, Robotics … – sorry, that's not general! General is being adept, fluent and talented in music, dance, visual arts (all), acting, story-telling, and all kinds of arts; in sociology, philosophy; sports … (…) ... + all of the typical ones + as many as possible other hard sciences and soft sciences and languages, and that is supposed to come from fluency in learning and mastering anything. That's something typical researchers definitely lack, which impedes their thinking about general intelligence. ]

Some can't play or remember a tune of 10 or 20 tones themselves, or can't draw a face or dance the simplest dance, but they consider music or art to be "a hard problem".

They should not be considered qualified to make such claims, yet they make them anyway...

Moreover, since "the AGI problem is an unsolved problem", it is hard to prove them wrong in terms they would understand. And because many pseudo AGI-ers are too weak – not generally intelligent (versatile) themselves – to realize that their non-versatile intelligence keeps them from seeing the trivialness of some problems which are unreachable for their minds, they would rather believe they know better, because they have other credentials and external social ranks: PhDs, conference publications. It doesn't matter that those publications obviously yield zero meaningful results.

Probably most of the researchers in all fields fall into this poor group. Musicians and artists know that what they do is not a hard problem; it's rather an easy one. They say Mozart composed "on the fly" – well, actually every improvising musician does. [The author does, too.]

-- MUSICAL IMPROVISATION

You need to be able to recognize the scale – in case you're jamming with a bass or a piano – or to select a scale if you lead, and then you should follow it. While playing your line you should remember a few past tones, your direction (up/down/jump/change of direction), and maybe a few melodic patterns you've already played in the session, and you have to be able to see a few tones ahead within the frame of the scale: what you're going to play next.

Sometimes there can be a modulation – a scale change. Some positions are better for a change than others, depending on the tones, the rhythm and the pace. It's not much.

The performer may also "interpret" with "emotion", which is just varying some of the parameters of the tone: adding vibrato or tremolo, changing the pace (slowing down, speeding up), sliding, de-tuning etc.

That's all there is to the "MAGIC" of creativity in improvisation. It's profitable for professional musicians to keep the audience believing that they are "magicians".
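The process described above – follow a scale, remember a few past tones, move mostly stepwise with occasional jumps – can be sketched in a few lines of code. This is only a toy model under assumed conventions: tones as MIDI note numbers, a C-major scale, ad-hoc step probabilities and a four-tone memory are all illustrative choices, not a real improvisation engine:

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave, MIDI note numbers

def improvise(scale, length=16, seed=0):
    """Follow the scale, remember a few recent tones, and move mostly
    by small steps up/down with an occasional jump."""
    rng = random.Random(seed)
    pos = rng.randrange(len(scale))   # start somewhere in the scale
    recent = []                       # short memory of recent tones
    melody = []
    for _ in range(length):
        # mostly stepwise motion, occasionally a jump of up to 3 degrees
        step = rng.choice([-1, -1, 1, 1, 1, rng.randint(-3, 3)])
        pos = max(0, min(len(scale) - 1, pos + step))
        tone = scale[pos]
        # avoid circling on the same tone too long
        if recent.count(tone) > 2:
            pos = (pos + 1) % len(scale)
            tone = scale[pos]
        melody.append(tone)
        recent = (recent + [tone])[-4:]
    return melody

print(improvise(C_MAJOR))
```

Every generated tone stays inside the chosen scale; "expression" (vibrato, tempo changes) would be a second layer of parameters on top of this, as the text notes.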

...

-- CONTINUES...

"General" is often used as something "not specified", "vaguely defined", instead of "universal" and "versatile".

See also: http://artificial-mind.blogspot.com/2013/02/versatile-self-improvement-vsi-vi-vlsi.html

I've discussed this issue many times; it hurts all researchers in the AGI field. Subjects that are trivial for an appropriate mind are assumed to be "hard problems" – then what about the really hard problems for the people who solve those pseudo-hard ones?

-- CATEGORIES of participants

For example, one active guy here seems to be just a mathematician – I guess also a sort of low-level, small-scale programmer (not a system-wide, cross-module software architect); more precisely, he's doing discrete mathematics and some... arithmetic? This is supposed to count as fully qualified for AGI...

Others demonstrate clearly philosophical traits. Philosophy is general per se, but taken alone it lacks the "practitioner" part: the average philosopher's work didn't lead to practical inventions, because philosophy is usually too general and ungrounded – it's "theory without practice" and is not applicable without being married to other fields.

When those fields are inaccessible to the philosopher's mind, only general words or concepts remain. The philosopher may "feel" some kind of introspective understanding, but she can't express it in "convertible" terms, so it stays "inside".

See also: Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies
http://artificial-mind.blogspot.com/2011/10/rationalization-and-confusions-caused.html

There's another issue: the other participants do not understand the deep philosophical implications that the philosopher aims to communicate, and sometimes assumes to be obvious, because the audience are not philosophers and do not see as deeply or as generally.

There are also social thinkers – that goes together with philosophy – but again, without access to the "harder" sciences it obviously can't produce a runnable machine/theory, and most of the "technical" thinkers are in another category and don't get the social guys the way those expect.

Others are aware of neuroscience and have insights, and sometimes build bridges to other fields in which they are also adept. But if the rest of the audience is unaware of the neuroscience, lacks the breadth of knowledge, or can't focus and invest in understanding everything required (which would have made the links obvious), they simply won't get the insights of the neuroscientifically aware ones, or the links to philosophy, mathematics, linguistics, etc. The audience may lack the appropriate background in those other fields, too, which makes it worse:

The author may be blamed for "sharing ideas without research" or for just "fantasizing", when the real situation could be that the audience haven't read the papers which the author has, and which she assumed the others were aware of.

Most people in general have weak or missing visual imagination – even the best one is more modest than the graphics of an 8-bit PC. What about the gap between the ones who "see things" and can transform them mentally, and the ones who don't, and speak in boring strings of words without grounding or visual mappings? That's yet another point where contact between minds becomes impossible.

Some people think verbally and strictly logically, which in this case means sequentially and over-generalized, without reference to specifics and with no chance of seeing how those generalized symbols were born from pixels and sounds. They rarely cite specific data, and they cannot see what the visually minded say or imply in terms of transformations.

Other people think, or try to express their ideas, visually, but without generalizing: they work with unrelated images and fail to see concepts, just a bunch of sensory inputs which "can't be generalized – it's an endless variety, because the endless variety can't be generalized in its original form".

The above leads to mistaking specific concepts and concrete sensory input for generalizations – a mess which the sufferers take as "proof that AGI is magic, an impossible, unsolved problem". What's actually true is that if the observer can't generalize a given set of samples, the samples should just be kept untouched: they are samples of "specifics". Only the properties which are common are generalized.
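The last point can be illustrated with a toy sketch (the dictionary encoding and the sample attributes here are hypothetical, chosen only to show the idea): only the attribute values shared by all samples survive into the "generalization", while the rest remain specifics of each sample.

```python
def generalize(samples):
    """Keep only the attribute/value pairs common to all samples."""
    common = dict(samples[0])
    for s in samples[1:]:
        common = {k: v for k, v in common.items() if s.get(k) == v}
    return common

# Hypothetical samples: each differs in color, shares flight and leg count.
birds = [
    {"flies": True, "legs": 2, "color": "red"},
    {"flies": True, "legs": 2, "color": "gray"},
    {"flies": True, "legs": 2, "color": "black"},
]
print(generalize(birds))  # only the shared properties survive
```

If the samples share nothing, the result is empty – exactly the case where, as the text says, the set should simply be kept as ungeneralized specifics.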

...

-- WORKABLE THEORIES and IMPLEMENTATIONS

Some people do try to work on workable theories and implementations, but this list is a home of the poorest and loneliest ones in the AGI community, even though some of them were pioneers of the new wave of that community, long before the "institutionalized" researchers took it up as "prestigious".

The list's researchers' poverty impedes their opportunities and motivation for concentrated work and for producing academic-style materials; many believe the mainstream academic system (including many aspects of the peer-reviewed journals etc.) is intrinsically corrupt and have left it for "political" reasons.

Moreover, even if they do know how, or have the potential, to develop working machines, it is a big effort that may take a lot of time before they could have a complete system coded and running. That they haven't produced visible results already doesn't imply they wouldn't after years of accumulating critical mass, as long as they could keep working.
Besides, they are supposed to be 10, 100 or 1000 times more capable than the normally funded and organized academic/industrial competition. The current ones can't afford to visit appropriate conferences or to travel around research centers, and are alienated.
They need much broader knowledge and skills, have to acquire new knowledge and skills in a shorter time, and have to work much faster, because:
-- they can't afford truly focused work: too many other troubles, too many sub-problems they have to solve alone, a lot of time wasted trying to find partners or to develop some "booster-funding" technologies, plenty of frustration from the isolation and from the helplessness against all the problems [including the dumb financial ones etc.] they have to solve [implement] alone (or give up);
-- they do not have students, partners or "slaves" to hand the dirty work to [or barely have, and it's hard to motivate anyone without funding].

Overall, they have to hit 100 or 1000 targets with one bullet, or they "die out" [in the race].
Welcome to the list of the losers... :))
However, some of these "losers", due to the extreme requirements they face, may really be 50 or 100 times more productive, more knowledgeable and more unconventional than the "ordinary" funded and supported competition, and may have guts and balls that the others lack.

Otherwise they would have given up and become part of the existing institutes – "institutionalized" – or of the "AI" establishment. But they are not from those institutes, because when they proclaimed that "AI was wrong" they were outsiders already, heading in new directions.

Furthermore, those brave ones have to believe in, and find a way to build, a thinking machine on cheap, old and slow hardware; otherwise they have one more reason to give up to the supercomputer owners and the rich institutionalized researchers...

[Yes, one can rent a server in the cloud, but that poses other problems (privacy, communication overhead, access to sensors etc.), and it's not cheap for poor independent researchers anyway.]
However, there's a catch here which might favor the poor ones: the rich, pampered ones' power has the side effect of dimming their minds so much that they may invent and throw hydrogen PFLOPS bombs to solve "cockroach problems" – just to show off their power – instead of using a couple of $0.10 or $0.00 MFLOPS jars from the garbage can to catch the little insects alive, study them thoroughly through the glass, and then let them continue living free... :))
That's why the "battle" is not decided already...
[Music goes here]
Terminator 2 Score ''Helicopter / Tanker Chase'': http://www.youtube.com/watch?v=h80Sm9jW7Ms
=== Todor "Tosh" Arnaudov ===

.... Author of the world's first University courses in AGI (2010, 2011): http://artificial-mind.blogspot.com/2010/04/universal-artificial-intelligence.html


