Lojban In General

An open source AI research & development project uses lojban

posts: 4740

I have long daydreamed of an idea similar to his: developing robotic
sensing and movement systems. The idea is to create a robot with a
very stable form, such as a quadruped or hexapod, controlled remotely
by a computer. The computer would try to extrapolate a 3D model of the
robot's surroundings from side-mounted cameras. It would then vary
and evolve its visual-recognition software according to how well its
hypotheses hold up when the robot moves to look from a different
perspective. It would also evolve movement and navigation strategies
from the robot's attempts to move through the environment.

It's kind of an extension of this idea:

http://www.youtube.com/watch?v=RZf8fR1SmNY&NR=1

That having been said, it raises another can of worms about how to
make sure the AI won't try to kill us. I will hasten to interject that
disclaimer before Robin does.

-Eppcott


On Wed, Jan 27, 2010 at 10:35 AM, Super-User <lojban-out@lojban.org> wrote:
>
> An open source AI research & developing project uses lojban
>
> Author: Super-User
>
> Hi!
> I started an open source AI research & developing project which aims at making AI that could be finally considered as human. In addition, I'm going to make lojban as AI's first language.
> If you're interested in it, please visit:
> http://gpai.cc
>
>
>
>
> To unsubscribe from this list, send mail to lojban-list-request@lojban.org
> with the subject unsubscribe, or go to http://www.lojban.org/lsg2/, or if
> you're really stuck, send mail to secretary@lojban.org for help.
>
>



posts: 99 United States

The interview with Jürgen Schmidhuber featured on Slashdot is nifty:
http://hplusmagazine.com/articles/ai/build-optimal-scientist-then-retire

On Thu, Jan 28, 2010 at 12:51, Super-User <lojban-out@lojban.org> wrote:
>
> Re: An open source AI research & developing project uses lojban
>
> Author: Super-User
>
> Em... that's a good idea. I accept it.
>
>
>
>
>



How do you make sure your children won't kill you?

codrus

On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold <matt.mattarn@gmail.com> wrote:

>
>
> That having been said, it raises another can of worms about how to
> make sure the AI won't try to kill us.
>

posts: 493

Well, there are many methods, each more questionable than the last. I for
one am in favor of the "ball and chain" method.

That's an excellent point, codrus. I know you were joking, Matt, but it's
always funny how many people see "I, Robot" and suddenly believe that all AI
research is bad and will lead mankind to its doom. I think Hollywood has
been a leading cause of our overextended fear of the unknown
(especially the sci-fi-related unknown). Kind of reminds me of
http://dresdencodak.com/2009/09/22/caveman-science-fiction/

On Thu, Jan 28, 2010 at 3:00 PM, chris kerr <letsclimbhigher@gmail.com> wrote:

> How do you make sure your children won't kill you?
>
> codrus
>
>
> On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold <matt.mattarn@gmail.com> wrote:
>
>>
>>
>> That having been said, it raises another can of worms about how to
>> make sure the AI won't try to kill us.
>>
>

posts: 86 United States

It doesn't help that the movie suggested that the Three Laws of
Robotics would lead to the end of humanity, and that only a robot which did
NOT have to follow said laws could stop it. Because it had a friggin' heart.

As someone who has read (nearly?) all of Asimov's books on the subject of
robots, the Three Laws, and the ways they can be subverted, I find it
painfully obvious that the movie has that exactly backwards.

Assuming a way could be found to hardwire the Three Laws *as stated*
into an AI, that would successfully keep our children from killing us.

On Thu, Jan 28, 2010 at 2:10 PM, Luke Bergen <lukeabergen@gmail.com> wrote:

> Well, there are many methods, all more questionable than the last. I for
> one am in favor of the "ball and chains" method.
>
> That's an excellent point codrus. I know you were joking Matt but it's
> always funny how many people see "i Robot" and suddenly believe that all AI
> research is bad and will lead mankind to his doom. I think Hollywood has
> been a leading cause in us all having an over-extended fear of the un-known
> (especially sci-fi related unknown). Kind of reminds me of
> http://dresdencodak.com/2009/09/22/caveman-science-fiction/
>
>
> On Thu, Jan 28, 2010 at 3:00 PM, chris kerr <letsclimbhigher@gmail.com> wrote:
>
>> How do you make sure your children won't kill you?
>>
>> codrus
>>
>>
>> On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold <matt.mattarn@gmail.com> wrote:
>>
>>>
>>>
>>> That having been said, it raises another can of worms about how to
>>> make sure the AI won't try to kill us.
>>>
>>
>


--
mu'o mi'e .aionys.

.i.a'o.e'e ko klama le bende pe denpa bu

posts: 17

It reminds me of a genetic algorithm experiment I wrote some time ago as a proof of concept to myself. I created a 3D environment with a lumpy landscape, some basic physics (gravity, Newtonian motion) and placed a target in it that'd move to a new random location if touched. Then I allowed a population of randomly generated GAs to operate a virtual tank, one at a time, within the environment. A simple fitness function was chosen to determine the best drivers: the more times the target was hit within a given timeframe, the more chance that that GA's bitstring (its DNA equivalent) would get to "breed" and its offspring would form the next generation.

In generation 1 most of the drivers just sat still, or twitched about, or would spin around in circles, but every now and then, by pure fluke, a driver would hit the gas and haphazardly nail a target. This went on for a couple more generations, but then amazingly by about the 7th or so there'd be emergent behaviours that really surprised me - drivers that'd hit every target while executing a series of deft manoeuvres and handbrake turns. All this from artificial evolution alone - albeit in a limited environment.

The mechanics of the driver were very simplistic: just a couple of formulae that were analogous to hard-wired Bayesian networks, and a set of associated weights and thresholds (represented by a bitstring) that the GA supervisor would modify through breeding. Inputs to the drivers were environmental measurements such as velocity, angle to target, distance to target, height of surface below tank, and the heights of the surface in front, behind and to the sides of the tank.
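
For illustration, a driver along those lines might look like the following (a from-scratch Python sketch, not the original DarkBASIC code; the 8-bit weight encoding and the throttle/steering output split are my assumptions):

```python
def decode_weights(bits, n_weights, bits_per_weight=8):
    """Map each 8-bit chunk of the GA bitstring to a weight in [-1, 1]."""
    weights = []
    for i in range(n_weights):
        chunk = bits[i * bits_per_weight:(i + 1) * bits_per_weight]
        raw = int("".join(map(str, chunk)), 2)  # 0..255
        weights.append(raw / 127.5 - 1.0)
    return weights

def drive(weights, inputs):
    """Turn environmental measurements into (throttle, steering) commands.

    inputs: velocity, angle to target, distance to target, surface
    heights around the tank, etc. Each output is a clamped weighted
    sum -- the hard-wired formula whose weights the GA supervisor
    modifies through breeding.
    """
    half = len(weights) // 2
    throttle = sum(w * x for w, x in zip(weights[:half], inputs))
    steering = sum(w * x for w, x in zip(weights[half:], inputs))
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(throttle), clamp(steering)
```

The point is that the controller itself is dumb; all of the apparent skill lives in the evolved weights.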

Breeding was a wheel-of-fortune-style selection. Imagine a vast wheel divided into as many evenly sliced pieces as there were total targets hit in the Nth generation. Then imagine that each driver has a number of those slices allocated to it equal to the number of targets it hit. Now, to find a breeding pair and generate one offspring, the wheel is spun twice and the two bitstrings indicated (on occasion this meant the same one twice) are merged by randomly selecting between them from one end to the other, with a little random mutation thrown in for good measure. This is repeated until the (N+1)th set of bitstrings is ready to roll, and then the fitness evaluation begins again.
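
That selection-and-breeding step can be sketched like so (again a hypothetical Python reconstruction; the mutation rate and the fallback for a generation with zero hits are my own choices, not details from the original):

```python
import random

def roulette_select(population, hits):
    """Spin the wheel: each bitstring owns as many slices as targets it hit."""
    total = sum(hits)
    if total == 0:  # e.g. generation 1, where nobody may have scored
        return random.choice(population)
    spin = random.uniform(0, total)
    for bits, h in zip(population, hits):
        spin -= h
        if spin <= 0:
            return bits
    return population[-1]

def breed(a, b, mutation_rate=0.01):
    """Merge two parents by picking bits at random, with a little mutation."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in child]

def next_generation(population, hits):
    """Spin the wheel twice per offspring until the next set is ready."""
    return [breed(roulette_select(population, hits),
                  roulette_select(population, hits))
            for _ in range(len(population))]
```

Note that a driver which hit nothing gets no slices at all, so its genes can only survive through mutation or the zero-hit fallback.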

Sadly I doubt I have the source any more (I was also checking out DarkBASIC's 3D abilities at the time; a horribly amateurish language), but it's relatively trivial to recreate.

Now, relating this to the lojban-speaking AI may be significantly less trivial but still perhaps worth a look. I have a few ideas for the language interface for example, that utilise neural nets with evolved topologies, and am happy to expand those ideas, but not so eager to do the work involved...

I think the key to good AI isn't so much in worrying about the mechanics of how a robot would reach its goal (it can be tooled up to figure that out for itself) as in how you define a complex goal or set of goals (the fitness function) in the first place. And no, that goal does not have to default to "kill"; that's just laziness :-)

kozmikreis



On 27 Jan 2010, at 20:13, Matt Arnold wrote:

> I have long daydreamed of an idea similar to his, to develop robotic
> sensing and movement systems. The idea is to create a robot with a
> very stable form, such as a quadruped or hexaped, controlled remotely
> by a computer. The computer would try to extrapolate a 3D model of the
> robot's surroundings from the side-mounted cameras. It would then vary
> and evolve its visual recognition software by how well its hypotheses
> hold up when the robot moves to look from a different perspective. It
> would also evolve movement navigation strategies from the robot's
> attempts to move through the environment.
>
> It's kind of an extension of this idea:
>
> http://www.youtube.com/watch?v=RZf8fR1SmNY&NR=1
>
> That having been said, it raises another can of worms about how to
> make sure the AI won't try to kill us. I will hasten to interject that
> disclaimer before Robin does.
>
> -Eppcott
>
>
> On Wed, Jan 27, 2010 at 10:35 AM, Super-User <lojban-out@lojban.org> wrote:
>>
>> An open source AI research & developing project uses lojban
>>
>> Author: Super-User
>>
>> Hi!
>> I started an open source AI research & developing project which aims at making AI that could be finally considered as human. In addition, I'm going to make lojban as AI's first language.
>> If you're interested in it, please visit:
>> http://gpai.cc
>>
>>
>>
>>
>>
>>
>
>
>




posts: 4740

Luke,

I follow Dresden Codak as well. I really like that particular strip.

Of course, systems that perform non-linguistic tasks, with no
socialization, need no more than an incredibly simple system of
motivation: "You're programmed to pilot this vehicle to wherever the
passenger says." It is absurd to think such a system has any reason to
take on any other task, such as overthrowing the human race; it has no
motivation to do so. But Super-User is talking about a
human-equivalent AI, which means deliberately creating a motivational
system complex enough to want to do _anything_.

With human-equivalent AI, we face a question of our own motivation.
What's the point in creating a new person with human rights? What is
to be gained from that? If it is made to be like us, with our own
instinct for self-improvement, and is allowed to self-improve to
become smarter than us, why should it use that power to act in our
best interests? How are we better off creating someone who has an
advantage over us? We have a bad enough class system as it is.

Suppose we carefully limit the human-equivalent AI to stay at the
level of an imbecile human. Ethically, you wouldn't do that to a
human. Do we then have an obligation to keep the computer running
forever? Is shutting off the computer an act of murder? It's best to
decide what one thinks of these issues before, not after.

-Eppcott


On Thu, Jan 28, 2010 at 3:10 PM, Luke Bergen <lukeabergen@gmail.com> wrote:
> Well, there are many methods, all more questionable than the last.  I for
> one am in favor of the "ball and chains" method.
> That's an excellent point codrus.  I know you were joking Matt but it's
> always funny how many people see "i Robot" and suddenly believe that all AI
> research is bad and will lead mankind to his doom.  I think Hollywood has
> been a leading cause in us all having an over-extended fear of the un-known
> (especially sci-fi related unknown).  Kind of reminds me
> of http://dresdencodak.com/2009/09/22/caveman-science-fiction/
>
> On Thu, Jan 28, 2010 at 3:00 PM, chris kerr <letsclimbhigher@gmail.com>
> wrote:
>>
>> How do you make sure your children won't kill you?
>>
>> codrus
>>
>> On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold <matt.mattarn@gmail.com>
>> wrote:
>>>
>>>
>>> That having been said, it raises another can of worms about how to
>>> make sure the AI won't try to kill us.
>
>

