Are robots necessarily mindless?

For the love of the pursuit of knowledge

Moderator: Moderators

User avatar
McCulloch
Site Supporter
Posts: 24063
Joined: Mon May 02, 2005 9:10 pm
Location: Toronto, ON, CA
Been thanked: 3 times

Are robots necessarily mindless?

Post #1

Post by McCulloch »

twobitsmedia wrote:I am not a mindless robot like you seem to be claiming to be.
Are robots necessarily mindless? Is it possible that consciousness could emerge from a collection of processing units which individually are mindless? What is mind? Where did it come from? Could it happen again? Can we make it happen?
Examine everything carefully; hold fast to that which is good.
First Epistle to the Church of the Thessalonians
The truth will make you free.
Gospel of John

User avatar
QED
Prodigy
Posts: 3798
Joined: Sun Jan 30, 2005 5:34 am
Location: UK

Post #11

Post by QED »

Furrowed Brow wrote: The problem I have with neural nets is that they are programmes that run on Boolean logic gates. This kind of "neural" activity is to real neurons as the cyber realm Second Life is to real life. It is a model of the real thing. Just as the wicked Queen's poison apple in the Disney movie Snow White is not a real apple.

So I'd say the threshold is a mechanism that at the level of physical computation is a neural mechanism. Neurons, not logic gates. Neurons organised to follow the logical syntax of predicate logic is where the bar should be set for the presence of a mind.
You seem to be talking as though real neurons have mysterious computational abilities that cannot be emulated by any known analog or digital method... is that right? Dharmendra Modha of the Cognitive Computing group at the IBM Almaden Research Center doesn't seem to share that view at all. He is currently in the business of cortical simulation of small mammals.

User avatar
Furrowed Brow
Site Supporter
Posts: 3720
Joined: Mon Nov 20, 2006 9:29 am
Location: Here
Been thanked: 1 time
Contact:

Post #12

Post by Furrowed Brow »

QED wrote:You seem to be talking as though real neurons have mysterious computational abilities that cannot be emulated by any known analog or digital method... is that right? Dharmendra Modha of the Cognitive Computing group at the IBM Almaden Research Center doesn't seem to share that view at all. He is currently in the business of cortical simulation of small mammals.
It is proven that no Boolean machine, regardless of how complex or how sophisticated, can decide the polyadic segment of predicate logic. That is Church's Theorem, 1936.

Neurons don't have mysterious abilities. I'm saying that the way neurons are constructed (they are weighted with a negative charge, so a positive input has to reach a threshold before the neuron fires an output) affords an opportunity for a logical syntax that can support predicate logic. The syntax I'm on about is detailed on that crazy website. But standard Boolean logic gates and Boolean algebra systematically fail to support that syntax.
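
To make the threshold idea concrete, here is a toy sketch in Python (an illustration only, not the syntax from the website). This much a Boolean gate can also do, of course; the claim is that thresholds afford something more:

# Toy threshold unit: a resting bias below zero, so summed positive
# input must clear the threshold before the unit fires.
# (Illustrative only; real neurons are far richer than this.)

def threshold_neuron(inputs, weights, bias=-1.0):
    """Fire (return 1) only if the weighted input overcomes the bias."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Neither input alone clears the bias, but the pair together does.
print(threshold_neuron([1, 1], [0.6, 0.6]))  # 1: 0.6 + 0.6 - 1.0 > 0
print(threshold_neuron([1, 0], [0.6, 0.6]))  # 0: 0.6 - 1.0 <= 0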

Put it this way: if we set the bar at computing a generalised predicate logic, then no Boolean computer will ever jump it. But humans seem to be able to think with predicate logic without too much trouble. Take the following putative theorem (i.e. asserted as true on no assumptions):

|- ~(Ax)(Ey).Rxy

As human thinkers we can compose sentences of that form without too much trouble and see that they are invalid pretty much as soon as we compose the semantic interpretation. For instance, that formula can be interpreted as

"not-(every natural number is divisible without remainder)".

Under this interpretation the formula has to be false, because every number can be divided by 1 without remainder. Therefore the formula cannot be a theorem. But the disproof for this formula falls into an infinite regress. It is undecidable. We cannot prove the formula is not a theorem. The only way we can get a Boolean machine to run this kind of formula is to write an ad hoc subroutine that tells the machine to cut its losses after x amount of time. The formalism of standard predicate logic continually bumps into this kind of problem. Church's theorem just affirms that there will always be some undecidable arguments regardless of any new notation or rules added to make the system more expressive.
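
The ad hoc cutoff looks something like this in practice. A schematic Python sketch, where expand() and is_refutation() are made-up placeholders standing in for a real prover's machinery:

# Schematic of the ad hoc patch: search for a disproof, but give up
# after a fixed budget, because the search may never terminate.
# expand() and is_refutation() are hypothetical placeholders.

def bounded_disproof_search(formula, expand, is_refutation, max_steps=10_000):
    frontier = [formula]
    for _ in range(max_steps):
        if not frontier:
            return "no disproof found"  # search space exhausted
        node = frontier.pop()
        if is_refutation(node):
            return "disproved"
        frontier.extend(expand(node))
    return "gave up"  # the ad hoc patch: cut losses after max_steps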

However, I reckon (and okay, it's just me doing the reckoning) that neural networks built from real neurons, or some isomorphic mechanics, can support a logical syntax that affords a fully generalised predicate logic. Now if that were true, then neuron-like machinery is necessary for a mind, and Boolean logic gates are the wrong tool completely. And that is the point. To escape the remit of Church's result, and gain a fully computable predicate logic without undecidable propositions, requires a mechanism neither isomorphic with Boolean algebra nor with standard predicate calculus as it is usually conceived.

I do not disagree that a Boolean machine can model and mimic parts of a predicate syntax, but it can never embody it, because a Boolean machine requires ad hoc patches to fix problems that a predicate machine would handle without any ad hoc interventions. Of course this could all be a wild goose chase, and I'm wrong. Maybe bonkers. But this is where my mind's at, man. :dizzy:

Maybe I should get back on my train. :roll:

User avatar
QED
Prodigy
Posts: 3798
Joined: Sun Jan 30, 2005 5:34 am
Location: UK

Post #13

Post by QED »

Hold the train! There's something I want to ask you...
Furrowed Brow wrote: Neurons don't have mysterious abilities. I'm saying that the way neurons are constructed, that is to say they are weighted with a negative charge, so a positive input has to reach a threshold before the neuron fires an output, affords an opportunity for a logical syntax that can support predicate logic. The syntax I'm on about is detailed on that crazy website. But standard Boolean logic gates, and Boolean algebra systematically fail to support that syntax.
OK, so Siegelmann plays around with analog recurrent neural network models and manages to compute more than the Turing machine. Does this help?

User avatar
Confused
Site Supporter
Posts: 7308
Joined: Mon Aug 14, 2006 5:55 am
Location: Alaska

Post #14

Post by Confused »

LOL, completely off the subject, but if alexiarose were here, I can imagine her comments on the three geek squad stooges here, or whatever she called your trio.

I am going to have to send a copy of this thread to her for humorous purposes.
What we do for ourselves dies with us,
What we do for others and the world remains
and is immortal.

-Albert Pine
Never be bullied into silence.
Never allow yourself to be made a victim.
Accept no one person's definition of your life; define yourself.

-Harvey Fierstein

User avatar
Furrowed Brow
Site Supporter
Posts: 3720
Joined: Mon Nov 20, 2006 9:29 am
Location: Here
Been thanked: 1 time
Contact:

Post #15

Post by Furrowed Brow »

QED wrote:Your concept of a semantic thinker sounds like it's designed to cope with a different kind of meaning to the one I'm working with.
I think so. As I say I'm setting the bar a tad higher.

With respect to Menant's Paramecium that developed a motor response to acidity: I do not disagree it is processing information. But I certainly disagree that the critter in any sense forms a representation of the reality with which it interacts. It is not following a syntax that goes 'if x is nasty then swim away'. True, its behaviour might be described in those terms, but that is our symbolic interpretation, not the Paramecium's. So I do not equate behaviour, and motor responses, with the capacity to symbolise; and I am equating having a mind with having a symbolic mind.

When I was studying we were always talking about bees and the bee dance. A bee finds some pollen, flies back to the hive, does a dance that somehow encodes where it has been and that the other bees should check it out, and off they buzz, following the instruction encoded in the dance. Certainly information is being transferred here. Again I think it stretches credibility (and the imagination) to say the bee has got a mind. Though again some have argued that maybe hives, termite colonies, etc. have a mind. Again I'd disagree.

I'd say a necessary requirement is the ability to process a logical syntax. However, not any bunch of rules will do when it comes to representing reality. In fact I'd say not even the ability to process Boolean operations is sufficient. The bar should be set at the predicate calculus, all segments. But of course it is proved that standard predicate calculus, and any system as powerful as standard arithmetic that can express polyadic relationships between variables, is not decidable.

Now I'm not sure exactly what Siegelmann has managed to compute, but it is well known that neural networks are much better at pattern recognition than a Boolean machine. However, neural networks are not programmed. They are not even given a logical syntax to start them off. The nearest thing to programming is selecting the weights of their cells; after that the networks are left to work things out for themselves. In this sense, a logical syntax is not imposed on the network. However, it seems Siegelmann is thinking in terms of encoding binary 1 and 0 with a neuron, so that is a basic syntax. However, even if neural networks can achieve more than a Turing machine, that is not sufficient to indicate the capacity for a mind.
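
For anyone who hasn't seen it, the weight-selecting business looks roughly like this toy perceptron sketch in Python (an illustration only). The update rule is the only thing anybody writes down; the behaviour the network ends up with is learned from examples, not programmed:

# Bare-bones perceptron: the "programming" is just initial weights and
# an update rule; the final behaviour emerges from the examples shown.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learns OR from examples rather than from an explicit rule.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(data))  # weights that implement OR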

Again, I don't think neural networks reach the bar of having a mind unless they are representing reality to themselves. To do that, I say, the bar must be set at the ability to follow a predicate logic syntax. Moreover, there is a syntax that can be defined and which neural networks are well suited to follow. And the syntax I have in mind is not binary. There is a third value. Moreover, the neural network has to be processed by neurons or a neuron-like mechanism and not Boolean logic gates emulating neural activity. So even a neural network that is a routine running on a Boolean machine is only ever a model, and cannot have a mind. The basic circuitry has to be non-binary and non-Boolean.
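
I won't reproduce my syntax here, but for a flavour of what a third truth value can look like, here is the standard textbook example, Kleene's strong three-valued logic (an off-the-shelf illustration only, not my actual syntax):

# Kleene's strong three-valued logic (K3): a standard example of a
# non-binary truth system, sketched with min/max over three values.

T, U, F = 1, 0.5, 0  # true, undefined, false

def k3_not(a):
    return 1 - a

def k3_and(a, b):
    return min(a, b)

def k3_or(a, b):
    return max(a, b)

print(k3_and(T, U))  # 0.5: true AND undefined is undefined
print(k3_or(T, U))   # 1:   true OR anything is true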

However, if I am wrong, and there is no way to compute a full predicate logic (and anyone who knows anything about logic will tell you I am wrong), then I have set the bar impossibly high. Then I suspect having a mind is reliant on any old logical grammar that can generate enough complexity. There is probably no such thing as a mind with an "I". Just a collection of complex subroutines that work together. Different grammars and different mechanical systems can then all lay claim to having a mind if they are sufficiently complex. In which case Menant's Paramecium's motor response could indeed be meaningful, and termite hills could have minds.

User avatar
QED
Prodigy
Posts: 3798
Joined: Sun Jan 30, 2005 5:34 am
Location: UK

Post #16

Post by QED »

I can see that you're going out of your way to explain your position here Furrowed -- I want to thank you for that. I only wish that I could grasp the significance of our apparent ability to process all segments of predicate calculus as easily. Is it absolutely clear that it is true and not an illusion that we have such an ability?

I suppose Siegelmann can model reality in systems with real numbers to arbitrary precision -- as nature does. I can understand the difference this would make in chaotic systems and I can see the difference it would make in neural systems. Is there anything that strikes you about this?
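
To illustrate the chaotic-systems point, here is a toy Python sketch: two logistic-map trajectories that start a trillionth apart part company within a few dozen steps, so the precision carried by the system genuinely changes its behaviour.

# Two chaotic trajectories differing by 1e-12 in their start point
# diverge visibly within a few dozen iterations (logistic map, r = 4).

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.1:
        print(f"trajectories diverged by step {step}")  # around step 40
        break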

User avatar
alexiarose
Site Supporter
Posts: 562
Joined: Tue Dec 11, 2007 8:21 am
Location: Florida

Re: Are robots necessarily mindless?

Post #17

Post by alexiarose »

McCulloch wrote:
twobitsmedia wrote:I am not a mindless robot like you seem to be claiming to be.
Are robots necessarily mindless? Is it possible that consciousness could emerge from a collection of processing units which individually are mindless? What is mind? Where did it come from? Could it happen again? Can we make it happen?
I have a question. Let's look at the same "programming" in the human. From the point at which an embryo has its genetic information programmed into it, it continues to replicate and so on throughout fetal development. The infant isn't born with all the information needed to survive. It requires input from the external environment to make use of the programs genetics provided it with. As the infant progresses through childhood development, it is expanding on the initial information programmed into the embryo by genetic sequences.

The mind is nothing more than a computational compilation of genetic sequences. Failure in some of these sequences leads to neurological disorders.

If we took a robot from scratch, loaded it with the required information to get started, then gave it the ability to expand upon that information, how is this AI any different from childhood development?

The question I see as a roadblock is the emotional component. I can't see why one could not be developed to learn consciousness. Just as an infant has no concept of it and develops it from external stimuli, so would the AI.

If we can find scientific data that would support emotions being an innate function, one with a genetic component, then with the advancement of technology, couldn't we theoretically program the same infantile components of emotions into an AI to expand upon from external stimuli?
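
As a purely schematic illustration of the seed-plus-expansion idea (all names here are made up, not any real system):

# Toy sketch: a tiny innate "genetic" seed plus associations grown
# from external stimuli. Purely illustrative; InfantileAI is made up.

class InfantileAI:
    def __init__(self):
        self.innate = {"loud_noise": "startle"}  # the built-in seed
        self.learned = {}                        # grown from stimuli

    def experience(self, stimulus, outcome):
        self.learned[stimulus] = outcome         # crude "development"

    def react(self, stimulus):
        return self.innate.get(stimulus) or self.learned.get(stimulus, "explore")

ai = InfantileAI()
print(ai.react("warm_light"))  # "explore": nothing known yet
ai.experience("warm_light", "approach")
print(ai.react("warm_light"))  # "approach": learned from stimuli
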
It's all just one big puzzle.
Find out where you fit in.

User avatar
alexiarose
Site Supporter
Posts: 562
Joined: Tue Dec 11, 2007 8:21 am
Location: Florida

Post #18

Post by alexiarose »

QED wrote:
Furrowed Brow wrote: The problem I have with neural nets is that they are programmes that run on Boolean logic gates. This kind of "neural" activity is to real neurons as the cyber realm Second Life is to real life. It is a model of the real thing. Just as the wicked Queen's poison apple in the Disney movie Snow White is not a real apple.

So I'd say the threshold is a mechanism that at the level of physical computation is a neural mechanism. Neurons, not logic gates. Neurons organised to follow the logical syntax of predicate logic is where the bar should be set for the presence of a mind.
You seem to be talking as though real neurons have mysterious computational abilities that cannot be emulated by any known analog or digital method... is that right? Dharmendra Modha of the Cognitive Computing group at the IBM Almaden Research Center doesn't seem to share that view at all. He is currently in the business of cortical simulation of small mammals.
There is no reason to believe that neuron synapses cannot be emulated just as cardiac impulses are with pacemakers.
It's all just one big puzzle.
Find out where you fit in.

User avatar
alexiarose
Site Supporter
Posts: 562
Joined: Tue Dec 11, 2007 8:21 am
Location: Florida

Post #19

Post by alexiarose »

QED wrote:
Furrowed Brow wrote: The problem I have with neural nets is that they are programmes that run on Boolean logic gates. This kind of "neural" activity is to real neurons as the cyber realm Second Life is to real life. It is a model of the real thing. Just as the wicked Queen's poison apple in the Disney movie Snow White is not a real apple.

So I'd say the threshold is a mechanism that at the level of physical computation is a neural mechanism. Neurons, not logic gates. Neurons organised to follow the logical syntax of predicate logic is where the bar should be set for the presence of a mind.
You seem to be talking as though real neurons have mysterious computational abilities that cannot be emulated by any known analog or digital method... is that right? Dharmendra Modha of the Cognitive Computing group at the IBM Almaden Research Center doesn't seem to share that view at all. He is currently in the business of cortical simulation of small mammals.

BTW: HOWDEE DOODIE MOE!!!!
It's all just one big puzzle.
Find out where you fit in.

byofrcs

Post #20

Post by byofrcs »

Maybe, depending upon the degree of engagement with humanity that we expect to engineer in the AI.

Today there is some 'spark' missing from modern AI. There is some trick to bringing all the bits together to create an artificial mind which can engage with humanity as a peer. Assuming naturalism, this spark is potentially achievable (the obverse being that, assuming the supernatural, we could have problems). Raymond Kurzweil, in his book The Singularity is Near, has presented a compelling argument based on the brute force of sheer computations per second, on storage, and on our technological advances (Moore's Law), but I believe that we can short-circuit having to wait the next 35 years until 2045 or so, when he predicts that we would have sufficient bang per buck to replicate a human brain in silicon. Kurzweil is aware that software develops at a much slower rate, and this is where I think my proposal adds value.

My proposal is that one approach that will work is to use genetic algorithms to design the mind, using an artificial world with a fitness test based on the design of a human mind. There may be better designs, but humans are the least alien design that we can relate to, though this is assuming we wish to interact with these minds as equals, and I do not see why this cannot be so (religious and other cultural prejudices aside). The word design is used in the sense of the blindly evolved structure that we have to date. It is a design, but evolution is the designer.
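
In skeleton form the loop is the standard genetic-algorithm one. A schematic Python sketch; the fitness function is the entire unsolved problem, so it is stubbed out here:

import random

# Schematic GA skeleton for the proposal above. mind_fitness() is the
# hard open problem and is deliberately left unimplemented.

def mind_fitness(genome):
    raise NotImplementedError("the hard part: score how mind-like the behaviour is")

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=100, genome_len=64, generations=1000):
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=mind_fitness, reverse=True)
        parents = scored[:pop_size // 2]               # selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return population[0]  # best of the last scored generation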

Thus if we used an artificial world which ran at many times real-time, we could play forward an evolutionary tree of life until we found a conscious mind within a round of investor funding (or other such short-lived period ;) ). Ideally, as our technology got better, there would be a convergence of the design of a conscious mind with the low-cost technology at or even before the predicted dates. We can almost guarantee that the hardware will deliver, as to date there has not been any lapse in the growth of storage or of CPU, but software certainly develops at human speeds, and the future is less clear for that.

The problems are that:

- we do not know enough about the human mind for it to be fully useful as a template for selecting an AI mind,

- I'm uncertain of the timeframes needed. If we expect reasonable results in 5-10 years then this needs a near billion-to-one ratio, i.e. 1 second of wall time needs to simulate a billion seconds of AI-world time if we assume a 5-billion-year evolutionary cycle (see the arithmetic sketch after this list). If we assume we have a mechanism for replication, we can short-circuit the design to the last few hundred million years. On the face of it that looks doable on clusters, but as many have found, it gets harder as the resolution of what you are simulating increases. Maybe we simulate at a constant rate and a low resolution of interactions, and then, as CPU grows over the next 5 years, we increase the resolution as the cluster grows in size.

- Are we ready for what we create? I'm confident we could easily extend existing human rights legislation to cover AI minds, and that this could also be useful as a stalking horse in regimes that currently deny human rights. But let's face facts: even the UK only completed human rights legislation in 1998/2000, whilst the US was actively segregating humans based on color less than 40 years ago, and those are post-Enlightenment 1st world nations. Many religions would also have problems with extending rights to AI minds and would actively destroy or deny rights to AI minds, as they deny rights to humans.
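
To check the speed-up arithmetic from the second point above:

# Back-of-envelope check of the billion-to-one figure.
evolution_years = 5e9    # assumed length of the evolutionary replay
wall_clock_years = 5     # optimistic project duration

ratio = evolution_years / wall_clock_years
print(f"required speed-up: {ratio:.0e} to 1")  # 1e+09 to 1
# Each wall-clock second must simulate about a billion seconds of
# AI-world time, matching the figure quoted above.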

Overall I feel that it would be a short sharp shock to humanity, and a welcome change. I am of the opinion that robots need not be mindless if we do not want slaves, any more than we would consider humans to be mindless unless we wanted slaves. There are gradations of intelligence, and existing animal rights laws (using the model of the New Zealand Animal Welfare Act 1999), plus the proposals of the Great Ape Project, plus the 1948 UN Universal Declaration of Human Rights, can be applied to a wide variety of machine intelligences, from cat/dog scale pet-like autonomous machines through to thinking machines.

Or naturalism is wrong and we need a supernatural soul. Without evidence of such a soul it looks unlikely that the supernatural is relevant to this engineering problem.

Post Reply