Scientific determinism.

For the love of the pursuit of knowledge


bluethread
Savant
Posts: 9129
Joined: Wed Dec 14, 2011 1:10 pm

Scientific determinism.

Post #1

Post by bluethread »

If all is matter and motion in a closed system, isn't everything that happens predetermined by a chain reaction of cause and effect, including the being that considers itself able to make independent choices? If so, isn't that being engaging in a faith-based lifestyle? If not, what is it that enables that being to make independent choices?

Metadian
Student
Posts: 27
Joined: Sat Dec 30, 2017 5:15 pm
Location: Spain

Re: Scientific determinism.

Post #11

Post by Metadian »

[Replying to post 6 by bluethread]
bluethread wrote: How could that have occurred any other way?

Maybe it couldn't have, I do not deny that. In fact I am a determinist.
bluethread wrote: There are some who disagree.

Well, true, some people may believe we do not want things. I believe my point stands though, let me elaborate after I answer your questions.
bluethread wrote: why don't the immutable scientific principles dictate neurologic predisposition?

They do.
bluethread wrote: where does consciousness come from, if not the interactions of matter and motion within the brain in reaction to external stimuli, which are themselves the result of cause and effect according to immutable scientific principles?

Yes, exactly from there.
bluethread wrote: what assures us that scientific principles do not do the same with all outcomes?

Nothing.
bluethread wrote: Then, if one holds that there is nothing that does not act according to some principle, known or unknown, how is it actually "free"? Is it not the inevitable result of those principles interacting with one another?

Because "free" does not invariably and universally mean "inevitable".

All our expressions of counterfactuals/"could've been" are in a special irrealis verbal tense, even, so that we know the ontological referent of these propositions is not the same as "I see a red tree". We already know it is different, philosophers delve into how and why it's different.
bluethread wrote: So, you are saying "freedom" refers to specific interactions and not scientific principles.

Freedom is not like either "gravity" or "planet". We presume an objective reality that we perceive exists in the space-time continuum, and we give it a name: a planet, which is an actual thing. Gravity, however, as a concept, does not refer to any actual thing - even if it is real. It is an intellectual construct with which we mean a certain relationship between actual objects. This relationship, like all relationships, is not just a function of the actual things that (we presume) exist, but a function of our own mind's categories of understanding. It is a model. Scientific principles are all models of this kind.

Freedom is a concept, like gravity, but it is not a single concept. It is a generally positive value whose meaning depends on the individual and the culture: what is free for one can be unfree for another, and vice versa. Thus my reflection is that, to most people, "free" does not seem to mean what it means to philosophers/theologians, but rather, in a more general way, "uncoerced", "not affected or minimally affected by the wills of others". It does not speak of what these wills are or how deterministic/indeterministic the universe in which they emerge as phenomena is.
bluethread wrote: Then what do you call things resulting from all other factors? Are there things that are not governed by the scientific principles of the universe?

Which factors do you mean? I'd say that, definitionally, no, but this is tautological. If our understanding of the universe broadens, then we include this understanding in our models, and name it a 'principle' or 'law'. But this doesn't mean that we're more or less tightly governed, any more than putting on better glasses makes a star shinier.

I think you should pay attention to these, as language reveals a lot:

...as little more than personal perception...
...dictate neurologic predisposition...
...not governed by the scientific principles of...
...interactions of matter and motion within the brain in reaction to external stimuli...
how is it actually "free"

I spoke about language, about the nature of our intelligence for a reason, and I used the wording "as free as we can be" for a reason: this means that not more free, but also not less free - or more unfree.

We are human, and common humans, before we are philosophers. We live certain experiences that define how we think about other things, including metaphysical things. The mundane often serves as a guide to extrapolate and work problems out, but the danger here is importing assumptions into areas where the premises they need do not hold. "Dictate", "govern", "external" - all are modeled on the way we speak about privations of our social, political and personal liberties. We think of foreign forces as "external", we think of autocrats and oligarchs as "dictating" tyrants, etc. - because they impose their will upon us.

From this, we translate the feelings this produces in us (of being constrained, unfree, unrealized) into areas where it doesn't necessarily have a meaning. Saying that my brain, the seat of my will, "is dictated", is indirectly suggesting that the way something "imposes" on my brain to create a will-capable substrate, is comparable to the way my brain can impose my will on other people (through behavior). How much sense does this really have? Or, similarly, saying that the "laws/principles of the universe" "govern" us like moral principles or actual laws influence our behavior, by restraining our freedom... how accurate is it, other than an easy metaphor?

The really interesting part is when we think about our own minds, because it is very natural that we would think this way. We're built for answers; we're problem-solving machines. And we can think about ourselves, making these models. So if we look at the billiard table and do the math, we find out how input + rules creates an output, and then we extrapolate that to ourselves as objects: input + rules creates an output, our neurology is predetermined, so our behavior must be too. The billiard balls, if they were capable of choice, would seem to get only one outcome; we, who think about different choices, must too. The brain reacts to this realization by believing its freedom "less real", or entirely a mirage.

But again... how accurate is it? The only thing you're measuring this with is your expectation of how free you should be, according to a particular conception of free (ie "independent" of "external" factors). When you realize that the universe/cosmos is just defined as 'everything' (= that which has nothing external to it), then it becomes obvious that such an idea of "freedom" is, literally, a round circle. And even after that realization, you will make choices and experience wants, and the usage of 'free' will still have the meaning its usage gives it (independent relative to something else), and people will feel free and unfree in accordance with this meaning (you'll feel freer the less influenced you are, or believe you are, by other wills, or the less "external"/illegitimate you believe those wills to be, eg your parent vs a soldier from an occupying country).
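The "input + rules creates an output" picture from the billiard example can be made concrete with a toy sketch. Everything here (the update rule, the numbers) is invented purely for illustration; the point is only that in a deterministic system, identical initial conditions plus fixed rules always yield identical histories:

```python
# Toy deterministic system: a fixed rule applied to a state.
# The rule itself is made up for illustration only.
def step(state):
    """One tick of 'cause and effect': the next state is fully
    fixed by the current one."""
    return (3 * state + 7) % 101

def history(initial, ticks):
    """Run the chain of causes forward from an initial state."""
    states = [initial]
    for _ in range(ticks):
        states.append(step(states[-1]))
    return states

# Identical initial conditions always produce identical futures:
run1 = history(42, 10)
run2 = history(42, 10)
assert run1 == run2  # determinism: no run 'could have been' different
```

Nothing in such a system picks among alternatives; the "alternatives" exist only in the model-maker's head, which is the sense in which counterfactual talk differs from talk about actual states.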

-
Divine Insight wrote:In other words, if we say, "I want that, so I'll choose that", the choice may appear to be a free will choice, but if the choice was based on the want then the real question is whether we actually have any freedom in what we want before we even make any conscious decisions at all.

I don't believe we have a choice in our wants (to quote Schopenhauer, "Man can do what he wills, but he cannot will what he wills"), but I also don't believe that this is how we intuitively define freedom (or should). I do believe that it is natural to feel unfree upon some metaphysical realizations, but that these are also a product of the way our biology and mind works. Please see my reply to OP above if you want more detail.

William
Savant
Posts: 14187
Joined: Tue Jul 31, 2012 8:11 pm
Location: Te Waipounamu
Has thanked: 912 times
Been thanked: 1644 times

Post #12

Post by William »

[Replying to post 9 by Divine Insight]
Divine Insight wrote: Well, the universe has clearly evolved physically in a predetermined way based on the laws of physics. That much we can be pretty confident about. But then again, the universe doesn't think, so there isn't any question of whether the universe has any "will" at all, much less a "free will".
Is there any particular reason why you believe this is true?

How do you know the universe doesn't think?

Divine Insight
Savant
Posts: 18070
Joined: Thu Jun 28, 2012 10:59 pm
Location: Here & Now
Been thanked: 19 times

Re: Scientific determinism.

Post #13

Post by Divine Insight »

Metadian wrote:
Divine Insight wrote:In other words, if we say, "I want that, so I'll choose that", the choice may appear to be a free will choice, but if the choice was based on the want then the real question is whether we actually have any freedom in what we want before we even make any conscious decisions at all.

I don't believe we have a choice in our wants (to quote Schopenhauer, "Man can do what he wills, but he cannot will what he wills"), but I also don't believe that this is how we intuitively define freedom (or should).
So, linguistically, when we ask the question "Do we have free will?", then if Schopenhauer is correct, the answer is no.

Granted the OP of this thread doesn't ask about free will, but rather asks if we can make "independent choices".

The question then should be "Independent from what?"

Independent from our will?

I would suggest that if we ask that specific question, the answer is clearly, "No, we cannot make choices independent from our will." At least not knowingly, on purpose. For even if we purposefully choose to do something against what we believe to be our will, that itself would then instantly have become our will.

So we can then be certain that we can never knowingly make a choice that is "free" from our will.

So then the question reverts right back to the question of "Free Will". Are we freely in control of what we "will"? According to Schopenhauer we aren't. And I tend to agree with that sentiment.

In other words, our "will" may be in control of what our brains choose to do. But there is no way that our brains could be in control of what we "will". Then the question becomes, "Can any agent be in control of what we will beyond what has been physically determined by the physical structure and wiring of our brain?"

It seems to me that the answer to that question needs to be, "No". For if we try to imagine yet another agent that is the source of "will" we instantly create a need for an infinite regression of such sources.

So the idea that the buck stops with the physical structure and wiring of the brain seems inevitable. So that has to be the source of our "will". And that would be physically determined by both the physical structure of the brain, as well as how it has been 'wired' or 'programmed' during the course of our life experience.

By golly I think we've nailed this one down.

In fact, I don't see how that could not be the answer to the question of the origin of "will" and whether or not it is free from the physical structure and wiring of the brain.

So we can take this to the bank as at least one question answered.
[center]
Spiritual Growth - A person's continual assessment
of how well they believe they are doing
relative to what they believe a personal God expects of them.
[/center]

Metadian
Student
Posts: 27
Joined: Sat Dec 30, 2017 5:15 pm
Location: Spain

Re: Scientific determinism.

Post #14

Post by Metadian »

[Replying to post 13 by Divine Insight]
I agree, it seems that some arrangements of matter in this universe are different from others in that they can generate will - or, more generally, a mind, subjectivity, awareness, a function of which is volition.

A word I often use for this is "noogenic". So the meaningfulness of an "agent" certainly stops at noogenic matter. No agent's choices could be, by definition, independent of that on which his noogenic matter (brain, etc.) depends. And noogenic matter depends on whatever matter depends on, because it is matter. This is why "God's mind" is often said by theists to be immaterial, so as to avoid this limitation - but then I also bring my ignosticism to the table and ask, "what is a matterless mind? how is this noogenic, how does it generate a mind? how do you know it does so, when all minds we know are emergent from brains?"

Divine Insight
Savant
Posts: 18070
Joined: Thu Jun 28, 2012 10:59 pm
Location: Here & Now
Been thanked: 19 times

Re: Scientific determinism.

Post #15

Post by Divine Insight »

Metadian wrote: [Replying to post 13 by Divine Insight]
I agree, it seems that some arrangements of matter in this universe are different from others in that they can generate will - or, more generally, a mind, subjectivity, awareness, a function of which is volition.
The only problem I have with this view is that in order for anything to be able to actually have an experience, there must necessarily be something innate in the stuff of which the brain is made that can have an experience.

In other words, everything you've listed - a mind, subjectivity, awareness, volition - offers no explanation of just what it is that is able to experience these things.

In other words, in physics we have something akin to the following:

Four basic forces:

1. Gravity
2. Electromagnetism
3. Strong Nuclear force
4. Weak Nuclear force.

In addition to these things we also recognize the following:

1. A concept of "energy" (which is basically ill-defined beyond an ability to "do work")
2. A concept of "matter" (which is often thought of as a particular standing wave pattern of energy for the most part)
3. And of course "Space-time" as a fabric which is also ill-defined. A fabric of what exactly? A fabric of energy?

Finally we have a few more things physics has recognized as fundamental properties of reality.

1. The Pauli Exclusion Principle - (which basically describes why certain physical structures cannot occupy the same physical state)
2. The Heisenberg Uncertainty Principle - (which ultimately leads to conservation principles of the symmetry of various physical properties.)
3. Entanglement - (which implies that there may be a deeper interconnection between all phenomena than might readily be apparent)

There could possibly be more added to this list, but I'm not sure if we can add anything useful for this topic. Hopefully this will become apparent with my next question.

Which of the above primal physical properties of this material universe can explain how anything can actually have an experience? Or even any combination of the above?


In other words, the things you have described could be properties of a "Zombie". A zombie could have a mind, subjectivity, awareness, a function of which is volition, but yet not have any actual experience of any of this.

In fact, in this case the so-called "awareness" wouldn't be any different from my laptop computer being "aware" that I am pressing keys on the keyboard to type in these words. Clearly the computer has to be "aware" of the keyboard, but this doesn't mean that the computer is having any actual experience of this.

So the question becomes, "Exactly what is it that is having an experience?"

Or if you prefer:

How can we explain how anything could have an experience using the list of known physics properties of the world?

~~~~~

Now please note: I'm not arguing that, just because it's not clear how physics could ever explain how anything could have an experience, we must automatically start talking about imaginary non-physical "souls" having the experience.

So I'm not asking this question in an effort to defend or support a notion of a non-physical agent that might be having the actual experience. I'm just asking that from a pure physics perspective, how do we explain how anything could have an experience?

What you have described thus far could be true of a "Zombie". A zombie could basically be a computer that can do everything you have described. And all of that can be explained with known physics.

But the very moment we claim that this "Zombie" can actually experience this, we can no longer explain it using primal physics assumptions and premises.

So it seems to me that we've got a very long way to go yet before we can lay claim to having an explanation for human experience. I'm not saying that it can't someday be had. But I am saying that it appears that we are far from being there at this point in time.

Metadian
Student
Posts: 27
Joined: Sat Dec 30, 2017 5:15 pm
Location: Spain

Re: Scientific determinism.

Post #16

Post by Metadian »

[Replying to post 15 by Divine Insight]

You are focusing on the matter/energy, but I think it is actually not the composition of something that makes it noogenic, but the arrangement. For instance, the same atoms in my brain, arranged differently, could result in me being dead, presumably having no subjective experiences.

Likewise I believe that AIs and robots could be conscious. This sounds sci-fi, but let me put it another way. Some scientists have mapped the neural networks of cockroaches and put them on cockroach-robots, made of metal instead of organic matter. I believe this cockroach-robot to be as sentient as real cockroaches. Let me put it yet another way: if we had sufficient nanotechnology for molecular surgery, we could replace our entire "cell anatomy" with other materials; for instance, just as we put in a hip-replacement prosthetic, we could swap the fatty membranes in a neuron for an analogous material.

I believe this would not significantly alter our experience/subjectivity/I-ness, because, why would it? We naturally replace the component atoms and molecules in each cell already as we age, in 7 years we are brand-new, made of shiny new pieces, but our "I-ness" or sense of self persists.

I don't believe my computer is "aware" of my keyboard, even though it receives its input. Similarly, I don't believe that a jellyfish is in any meaningful sense "aware" of the images that its visual receptors are, sometimes, able to create, which it gets as input. Neuroscientists agree with this possibility. And it's easily thinkable: I have input about my blood electrolytes and gases, but I don't consciously feel them - this is rooted in more primitive neurological structures (eg the hypothalamus) rather than the more complex neurological structures that allow awareness (the brain hemispheres).

This is not because they -the jelly, or the laptop- aren't made of noogenic stuff, but because the arrangement is not complex enough, I believe. The computer code and the jelly's nervous net are too simple for something like thought, just like a moss is too simple for something like "a flower".

Technically, rather than saying a certain (biological/synthetic) system is aware or non-aware, I think it exists on a continuum, from no awareness to full awareness. You can certainly be aware/sentient without being intelligent, so that, for instance, many fish in fact have an experience of pain, pleasure, cold, hot, etc., without significant complex problem-solving abilities (ie they rely more on instinct than on associative, flexible, creative learning).

I agree we have no way of solving the hard problem of consciousness yet, but at the same time I'm skeptical that it's really a problem - that brain/mind duality is a problem. We would have to presuppose that philosophical zombies are possible, ie that an android with my exact neural network could lack subjective experience. Is this another round circle, an illusion of the mind's bafflement when it - a natural subject - starts thinking about itself and models itself the best it can?
Divine Insight wrote: I'm just asking that from a pure physics perspective, how do we explain how anything could have an experience?
Maybe that's the problem, the perspective - how does a macro-economist explain inflation from micro-economics? We're equipped to analyze things, but our understandings have limits. Our ability to focus is singular. If you've heard of emergentism (which generally I'm opposed to, as I think any "whole" is composed of its "parts", because if anything is missing, then it's also a part), it would be content with shrugging off this question as "well, that's because it's not the actual level at which the (mind) phenomenon emerges".

Divine Insight
Savant
Posts: 18070
Joined: Thu Jun 28, 2012 10:59 pm
Location: Here & Now
Been thanked: 19 times

Re: Scientific determinism.

Post #17

Post by Divine Insight »

Metadian, I agree with much of what you said, but I disagree with your conclusions. And I feel that you are missing the essence of the underlying problem.
Metadian wrote: You are focusing on the matter/energy, but I think it is actually not the composition of something that makes it noogenic, but the arrangement. For instance, the same atoms in my brain, arranged differently, could result in me being dead, presumably having no subjective experiences.
This is fine and dandy, but then you need to explain how an "arrangement" can have an experience. And what that could possibly mean in any rational sense.

That may be the answer. But if it is, we are a very long way from showing that this is the case.

Metadian wrote: Likewise I believe that AIs and robots could be conscious. This sounds sci-fi, but let me put it another way. Some scientists have mapped the neural networks of cockroaches and put them on cockroach-robots, made of metal instead of organic matter. I believe this cockroach-robot to be as sentient as real cockroaches. Let me put it yet another way: if we had sufficient nanotechnology for molecular surgery, we could replace our entire "cell anatomy" with other materials; for instance, just as we put in a hip-replacement prosthetic, we could swap the fatty membranes in a neuron for an analogous material.
Again, I have no problem with this, but keep in mind that in all of this you are still arranging matter/energy of this universe. In fact, if what you have described is in fact doable (which I also would not doubt), then what we end up with is the observation that awareness (or subjective experiences) arises solely from specific electromagnetic patterns, with total independence from whatever underlying physical materials are being used to give rise to those specific electromagnetic patterns.

So if this is the case, then we appear to have narrowed down the ability to have a sentient experience to electromagnetism. And specifically to a precise configuration of patterns.

I would even suggest that we are quite likely to be on the correct path with this. Conscious awareness, or subjective experience, may very well be totally dependent upon, and potentially a property of specific electromagnetic activity, and/or patterns.

However, even if we have confirmed this "observation" (which we haven't yet confirmed although it certainly appears to be promising), we still have not yet explained why electromagnetism should be able to have an experience. The questions then become:

1. Is this an innate property of electromagnetism itself to be able to have an experience?

Or

2. Is it solely the pattern of activity that is having an experience? And if so, what the heck would that even mean?


How does a pattern have an experience? :-k

But at least at this point, we would be down to the above two questions, and we might then be able to determine which of them is the case.
Metadian wrote: I believe this would not significantly alter our experience/subjectivity/I-ness, because, why would it? We naturally replace the component atoms and molecules in each cell already as we age, in 7 years we are brand-new, made of shiny new pieces, but our "I-ness" or sense of self persists.
I agree. However, this is not nearly as straight-forward as you might at first think.

What do you even mean when you speak of "I-ness"? Keep in mind that there are two distinctly different ways to think of "I-ness". If you are familiar with Buddhism you are already aware of this. If you are not, the point I'm making here is that there is a huge difference between the "I" that is having the experience, and the "I" that the experience itself has historically created during its existence.

In other words, there is the "ego" (or the individual social self) that is identified by name, occupation, life experience, etc. And then there is the purest "I", what the Buddhists call the "True I". And that is the actual core of the entity that is having the experience.

In other words, imagine you are hit on the head and knocked unconscious. When you awaken you find yourself in a state of complete amnesia. So much so that you have not only lost memory of your name, occupation, life experience, etc., but when you look in a mirror you don't even recognize the body reflecting back at you.

Are you still then a sentient "I"? Clearly you are, since you are looking in a mirror at a body you don't even recognize. So in this case you are having an experience without any associated predefined "ego".

It is that "I" that we need to understand. How is it that this undefined "I" can be having any sentient experiences at all? If a "pattern" defines this undefined "I", then this specific pattern could be the basis for ALL "I's". So this is the pattern of electromagnetic activity that we would need to focus on. If we could understand how that pattern works, and even create it from scratch, then we would not only understand how ALL sentient beings become sentient, but we would be able to create brand new "baby" sentient entities: living sentient entities that are "born" with no knowledge at all, just like a human baby, and that would need to be taught everything from birth via experience in order to build their own individual "egos".

It's this underlying undefined "I" that would need to be discovered and explained.

And then the question becomes "3. Does such a pattern even exist?"

In other words, we need to go back to questions #1 and #2 and decide which of those is true. If #1 is true and having an experience is an innate property of electromagnetic activity regardless of patterns, then trying to search for a fundamental pattern that gives rise to a naked "I" would be futile, because no such pattern would be required. The ability to have an experience would be an innate property of electromagnetism (just like charge, etc.)

On the other hand, if we are able to determine that question #2 is the significant question, then searching for this fundamental pattern that produces the "naked I", (without any need for a complex ego), would become the Holy Grail of science at that point.

And that might even be the answer. Actually, if questions #2 & #3 are the significant questions to ask, then when we find this primal naked pattern that is required for subjective experience to exist, we should be able to analyze that pattern and explain how it creates subjective experience.

I would be all for that if that's the case. My only point is that we are very far from being able to say that this is the answer.
Metadian wrote: I don't believe my computer is "aware" of my keyboard, even though it receives its input. Similarly, I don't believe that a jellyfish is in any meaningful sense "aware" of the images that its visual receptors are, sometimes, able to create, which it gets as input. Neuroscientists agree with this possibility. And it's easily thinkable: I have input about my blood electrolytes and gases, but I don't consciously feel them - this is rooted in more primitive neurological structures (eg the hypothalamus) rather than the more complex neurological structures that allow awareness (the brain hemispheres).
As far as I can see, this is nothing more than a potential difference in how we might use the term "aware".

You seem to be using the term "aware" as nothing more than a synonym for subjective experience. Having worked on automatic systems and robotics I have often used the term "aware" to ask if a non-sentient machine is "aware" of a signal. :D

So this is just a matter of using different semantics for a single word. We certainly don't want to get lost in irrelevant confusion over the semantics of words. Those types of discussions are best handled by sitting down and coming to a consensus on how we will define specific terms before we begin any deeper analysis of the problems.

The last thing we would need is to have semantics become our nemesis.
Metadian wrote: This is not because they -the jelly, or the laptop- aren't made of noogenic stuff, but because the arrangement is not complex enough, I believe. The computer code and the jelly's nervous net are too simple for something like thought, just like a moss is too simple for something like "a flower".
I would suggest that complexity alone cannot explain subjective experience or sentience. And my reason for this is because of what I had already stated earlier about the difference between the "I" we associate with a complex ego, versus the far simpler primal "I" that merely has self-awareness or subjective-experience.

So looking to a certain level of complexity to explain subjective experience is most likely a futile path that would ultimately distract from the real issues.

Metadian wrote: Technically, rather than saying a certain (biological/synthetic) system is aware or non-aware, I think it exists on a continuum, from no awareness to full awareness. You can certainly be aware/sentient without being intelligent, so that, for instance, many fish in fact have an experience of pain, pleasure, cold, hot, etc., without significant complex problem-solving abilities (ie they rely more on instinct than on associative, flexible, creative learning).
Again, I have no problem with this, other than to say that here you appear to be confusing intelligence (the capacity for high-level logical reasoning) with primal awareness. In other words, if a fish can already have an experience of pain, pleasure, cold, hot, etc., then it already has "sentience" at that point. Never mind the fact that it can't make lofty logical analyses of its experiences.

It already has what we are searching for! Sentient subjective experience!

Therefore we should be studying fish sentience instead of human sentience, since that will be more likely to lead us to this "primal pattern" that is the foundation of sentient experience. All the higher logic that is overlaid in a human brain is basically superfluous to explaining sentient experience if a fish already has sentient experience. :D
Metadian wrote: I agree we have no way of solving the hard problem of consciousness yet, but at the same time I'm skeptical that it's really a problem - that brain/mind duality is a problem. We would have to presuppose that philosophical zombies are possible, ie that an android with my exact neural network could lack subjective experience. Is this another round circle, an illusion of the mind's bafflement when it - a natural subject - starts thinking about itself and models itself the best it can?
Ok, here's something for you to consider now.

To begin with, I hope you are familiar with the difference between a digital computer and an analog computer. Hopefully you know that a digital computer operates by a CPU (Central Processing Unit) that processes machine-coded instructions one at a time, sequentially following a program in memory. Given this (and our previous hypothesis that sentience is a "pattern" of activity), we can then see that this simple digital computing machine could never create a larger "pattern" of activity. All it does is process a single machine-coded instruction at a time.

An analog computer, on the other hand, has no CPU. Instead it is a "Neural Network" of simultaneously interacting feedback systems. It processes everything simultaneously, not one machine instruction at a time. In fact, an analog computer has no "machine instructions" at all. The human brain is a biological analog computer. It is not a digital computer running a CPU that executes machine codes.
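The contrast above can be sketched in code (a toy illustration only; the function names and the tiny two-unit "network" are invented for this example, and real CPUs and nervous systems are vastly richer than this):

```python
# "Digital": a CPU steps through a program strictly one instruction at a time.
def run_digital(program, state):
    for instruction in program:        # sequential: nothing happens "at once"
        state = instruction(state)
    return state

# "Analog"/neural: every unit updates simultaneously as a function of the
# whole previous state -- a network of mutually interacting feedback paths.
def step_analog(weights, state):
    # each unit sees every other unit's *previous* value at the same instant
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

# Digital run: two instructions applied in order: (3 + 1) * 2
final = run_digital([lambda s: s + 1, lambda s: s * 2], 3)   # 8

# Analog step: a 2-unit mutual-feedback loop; both units update in parallel,
# here simply exchanging each other's values.
state = step_analog([[0.0, 1.0], [1.0, 0.0]], [2.0, 5.0])    # [5.0, 2.0]
```

The point of the sketch is only the shape of the computation: in `run_digital` there is never a global "pattern" in flight, just one instruction at a time, whereas in `step_analog` the whole state participates in every update.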

Now having explained the above, I hold that if you were to program a digital computer to simulate your personality and behaviors to the point where you yourself could not tell this digital computer apart from yourself, there would still be no reason to think that this digital simulation is having an experience, any more than to think that your laptop computer is having an experience.

So duplicating the complexity of who you are is not sufficient to create a sentient life-form. <---- my assertion based on why a digital computer could not become sentient.

However, if you duplicate your entire neural network precisely as it is (i.e. onto another analog computer), then you would have every reason to think that this computer should be just as sentient as you are, and it should actually be having the same level of subjective experience.

NOTICE THIS ----> : This should be TRUE regardless of whether it is question #1 or question #2 from earlier that turns out to be the significant one. In other words, an exact duplicate of your biological analog brain should produce another sentient being whether sentience is innate to electromagnetism or innate to a specific pattern, because in this case BOTH of these conditions have been satisfied.

Also, note that in the case of the digital computer simulation, there would be no reason to think that sentience should automatically follow even if sentience is innate to electromagnetism. The reason being that even if sentience is innate to electromagnetism, the electromagnetism would not be having the same experience in a digital computer that only processes one machine code at a time. There may be some primal "experience" going on at the level of the CPU activity, but it's not going to carry over to any awareness of the larger program that is basically sitting in static memory for the most part. It would only be a primal awareness of individual machine code instructions being processed, which wouldn't have any larger meaning or larger awareness.

So I hold that if you want to duplicate your sentient experience you're going to at least need to do it using an analog computer. You'd never do it on a digital machine no matter how complex your program or data memory becomes. You might make a convincing "simulation", but that's all it could ever be. It could never become sentient itself. <--- My claim based on the arguments I've been presenting all along.
Metadian wrote:
I'm just asking that from a pure physics perspective, how do we explain how anything could have an experience?
Maybe that's the problem, the perspective - how does a macro-economist explain inflation from micro-economics? We're equipped to analyze things, but our understandings have limits.
I don't think we should need to. Inflation may not be dependent on micro-economics. Why should you think it should be? This is a bad analogy. Inflation can certainly be explained using a larger picture (possibly caused by human greed?) :D
Metadian wrote: Our ability to focus is singular. If you've heard of emergentism (which generally I'm opposed to, as I think any "whole" is composed of its "parts", because if anything is missing, then it's also a part), it would be content with shrugging off this question as "well, that's because it's not the actual level at which the (mind) phenomenon emerges".
Well, I think we can first recognize what it is specifically that we are trying to get at.

Previously you had mentioned a fish merely having simple experiences whilst a human clearly has far more advanced analytical skills that allow the analysis of those experiences. But that appears to me to already be getting off track. These would be two entirely different things.

If the ability to have an experience is what we are interested in, then we're already there with the fish. Getting bogged down in the complexity of human intelligence would be a distraction from our original goal.

In fact, there is no mystery concerning human intelligence. Human intelligence can be explained using simple physics. Keep in mind, we have absolutely no problem accepting intelligent non-sentient robots already. So intelligence and the ability to process logic can already be explained via basic physics. So we're already past that. That's not a problem. Logic circuits don't violate known physics.

Having an EXPERIENCE of them does.
[center]Image
Spiritual Growth - A person's continual assessment
of how well they believe they are doing
relative to what they believe a personal God expects of them.
[/center]

Metadian
Student
Posts: 27
Joined: Sat Dec 30, 2017 5:15 pm
Location: Spain

Re: Scientific determinism.

Post #18

Post by Metadian »

[Replying to post 17 by Divine Insight]
Oh nice, I liked the dialectic trimming you did there, saves us computing time and increases machine efficiency :D. It is true that the particulars of human-level complexity are trivial. And yes, I very much love getting semantic non-issues out of the way, when conceptual coextension is already clear; we're on the same page on 'awareness' now.

If I understood you correctly, you're proposing a dichotomy whereby 'noogenicity' is either (1) a fundamental property of all matter/energy or (2) a property of specific combinations of matter/energy (patterns), like only a certain set of notes creates a pleasant-sounding symphony. And both these possibilities require physics to explain it from matter/energy. Please see below the quote.

You've also presented a distinction between an "ego", a sort of experientially cumulative continuity, and the "true I-ness", the actual awareness/subjectivity we're talking about (a simple ability to experience - to experience qualia).

I'm reminded of these "thought experiments" that test the survival instinct, and offer potential death vs alternatives like being re-built remotely after being destroyed in a "transporter" (with other atoms), so as to see if you equate death with your body or with this sort of "continuity" or "illusion of self" (which I have sometimes called 'mnesic continuity', as it disappears with amnesia). On a radical interpretation, you could say that each time you cease consciousness you "die" (like sleep, but let's say a 20-year coma), because when you wake up the matter in your body is not exactly the same, and thus it's not fundamentally different from being re-built in the long run. This also calls back to the Ship of Theseus paradox; if an old ship's parts are replaced, and then the old parts are put together to form a "new" (but materially, old) ship, then which is the "original"? If the transporter failed to destroy and yet it did create, which is the true "self", are both the same identity, at that moment? If you rebuild a copy of me with my actual matter from 10 years back, are there two 'me's? It all depends on what "system of matter" you interpret as being legitimate - for some people the matter-of-the-old-me clone would in fact be more "I" than a clone of me built of Andromedan matter, which would be the "impostor". I guess there's a reason they call it the "illusion of self"!

I think this true "I-ness" that you obtain by looking at a mirror is more basic, but I still don't feel comfortable equating it with a freshly-shocked octopus or a freshly-shocked jellyfish (ie, 'resetting' their neural networks), looking at anything or experiencing anything. Human "I-ness" presupposes adequate human upbringing, that is defined by our social nature and requires affective interaction, stimulating problem-solving, etc. This would be part of the human complexity and possibly nit-picking vocabulary again, so I will say no more. I do agree on the part that any sentience possesses this similar, more primitive ability to experience qualia.

Lastly (I didn't know this, so thanks for the instruction), you've introduced a digital/analog distinction, whereby digital machines are built in such a way that they can simulate certain behaviors (get an input, give an output), but it takes analog machines to actually mimic a nervous system's functioning. The latter would experience like we do, and the former would be philosophical zombies.

With a sufficiently big capacity for storage, I think you could imitate any level of complexity simply by mapping input-to-output to the number of situations that the system you're trying to mimic could potentially experience. This is finite, but would get exponentially bigger as it is more intelligent because the more intelligent a biological system is, the more flexible and associative it is in its problem-solving, thus more difficult to imitate reliably. I agree however that it is ultimately finite and a "digital" zombie code can be programmed for any analog neural network which is putatively having subjective experience.
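A rough back-of-envelope sketch of that exponential growth (the numbers here are hypothetical, chosen only to illustrate the scaling, not taken from anything in the thread):

```python
# A zombie's lookup table must cover every input sequence it might face.
# If each exchange has k possible inputs and the zombie must react
# correctly over a dialogue of length n, the table needs k**n entries:
# finite, as argued above, but growing exponentially with flexibility.
def table_entries(k, n):
    return k ** n

# e.g. a modest 100 possible utterances per conversational turn:
print(table_entries(100, 1))   # 100
print(table_entries(100, 5))   # 10000000000 (ten billion entries)
```

So the imitation is always possible in principle, but the storage cost explodes as the mimicked system becomes more flexible and associative.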
Divine Insight wrote: I don't think we should need to. Inflation may not be dependent on micro-economics. Why should you think it should be? This is a bad analogy. Inflation can certainly be explained using a larger picture (possibly caused by human greed?) :D

Maybe I didn't provide enough context here. Macro-economics and micro-economics were an analogy to sciences that build upon the units of each other, such as psychology/neurobiology on anatomy, histology, cell biology, molecular biology and ultimately chemistry and physics.

I was pointing at a phenomenon that was easily modeled by one of the sciences in a higher place in the hierarchy, but very complex or impossibly complex for us to "build from" the units at a much lower level. Basically, like trying to explain human greed all the way down, using mathematical models of molecular interaction. It is easy to explain with psychological theories, but hard to explain with chemical formulas.

The comment I made on this is that while I don't believe it is a meaningless task, or in fact, impossible, I recognize the possibility that such a perspective requires computational power (an ability to process and integrate information) beyond our cognitive limits.

That is, phenomena we could call emergent from lower levels (such as a 'mind', emergent from the brain's neurons, on the psychobiological organ/system level, down to the cell level), when analyzed through the models of lower levels (eg atoms and fundamental forces), would leave us unable to make the meaningful connections geared at answering the scientific questions ('how'). That they are ultimately built of these units doesn't imply we're prepared to understand the phenomena that emerge from them on the level of those fundamental units.

The 'how' on the neurological level is easy: just find the places and mechanisms (the brainstem, hypothalamus, hemispheres; with noradrenergic neurotransmitters, causing a nervous impulse, etc), and you have awareness - meaning an internal representation of external information, encoded into the system - just like you have a muscle twitch. Without those structures leading to that activation, the brain is asleep/unconscious, and those internal representations of external reality are missing in the key areas or in the key association circuits leading to thought/action (in fact, they may even be stopped at the spine/brainstem and not reach the cortex). You may ask, "But isn't this just information, that a philosophical zombie could have, too, inside their brain?" - the reason I don't think so is that I'd expect a zombie to be what you described, merely an input/output circuit with a really big database, but little association (no thought) and absolutely no external modeling (internal representations of reality) perfected through evolution. And we do have prominent association cortices, which, when damaged, can leave us unable to interpret what we can perceive (visual agnosia). Assuming brains such as mine can be non-noogenic, when I know mine to be, seems to me more irrational than the opposite.

I'm unsure where this leaves the philosophical questions ('why'), though, but I think the less daring position is to say that "having qualia violates the laws of physics" is as unwarranted as saying the question is meaningless. How can you know it violates something when it may well be an intrinsic part of the nature whose laws it is supposedly violating? Or do you mean known, or specific laws?

User avatar
Divine Insight
Savant
Posts: 18070
Joined: Thu Jun 28, 2012 10:59 pm
Location: Here & Now
Been thanked: 19 times

Re: Scientific determinism.

Post #19

Post by Divine Insight »

These are interesting conversations. Unfortunately I don't have time right now to continue these in great detail. So for now, I would just like to comment on something you said about the "True I", and then clarify something I had perhaps "mistakenly" said about subjective experience "violating" physics. That was actually a poor choice of terms on my part, and I'll explain that after your second quote below.

First, on the topic of the nature of the "True I".
Metadian wrote: If the transporter failed to destroy and yet it did create, which is the true "self", are both the same identity, at that moment? If you rebuild a copy of me with my actual matter from 10 years back, are there two 'me's? It all depends on what "system of matter" you interpret as being legitimate - for some people the matter-of-the-old-me clone would in fact be more "I" than a clone of me built of Andromedan matter, which would be the "impostor". I guess there's a reason they call it the "illusion of self"!
I have thought about this problem quite a bit. In fact, this is an extremely interesting problem no matter how we view it. This also goes deep into the Buddhist concept of "Emptiness" which is actually a controversial concept even within Buddhism.

You mention the Star-Trek type transporter problem and whether or not the "True I" is actually transported in this process. Would the "True I" pass through, or not?

I have concluded that it most definitely would not. And here is my reasoning:

Suppose we have two set-ups. One is a "Transporter", the other is a "Cloning" machine. Both of these machines, take your original pattern and reproduce it in another location. The only difference is that the Transporter destroys the original, whilst the Cloning machine does not.

Well, already we can see that in the case of the Cloning machine, the original person is going to continue to view themselves as the original "True I". And when the Cloning process is over, the "True I" exits the original chamber to go over and meet the "Cloned I". You now have two totally independent "I's", each as valid as the other. The only reason we call one the "original" and the other the "clone" is because of which chamber they emerged from. But other than this they are basically identical. Nonetheless, the original isn't prepared to die just because he has been cloned.

Now we move over to the Transporter machine. Well duh? All we are doing here is making sure that we kill the original in the process. :D

So it should be clear that the original never "makes it through" to the other side. However, if we then go speak to the one who was "Transported", he will report that he's just fine and that the process works perfectly. He was in the other chamber a moment ago, and now he is here.

But clearly the "Original I" was destroyed, and we can know this because in the Cloning process it wasn't.

Now the question comes up: Should it matter? Did anyone actually die in the Transporter process?

This is where the concept of Buddhist Emptiness comes into play. In a sense, we could say, no, no one actually died, because there was never anyone there in the first place. :D

However, if we're going to go that far, then we can say that if someone steps into a Transporter machine and is destroyed and not transported or copied, then it must also be true that no one has actually died.

In a sense, Buddhism would say that this is correct, because even the "True I" is ultimately an illusion. And this becomes extremely confusing because it would seem then that Buddhism could not say anything against murder. After all, if no one dies when we kill someone then how could it be wrong to kill?

Thinking about these issues can drive a person to insanity. There's no question about that. However, if we ask this same question in terms of a purely secular materialistic perspective, then the same conclusion must ultimately be had. Killing a person cannot be wrong because it was nothing more than a fleeting pattern to begin with.

If you create an inorganic analog brain, into which you can download the entire content and patterns of your biological brain, and you do so, but in the process you destroy the original organic brain, did you just commit suicide? Obviously the newly downloaded inorganic brain is going to believe it is you.

Like in the transporter example above, the inorganic brain is going to be convinced that the process worked! It's going to wake up and say, "Hey it worked! I'm now in this inorganic body!".

But is that true? Or did the original person actually die in the process? And if so, should it be considered murder when an exact copy was created in the process? At least in this case we can point to a living entity that at least "believes" that it was previously the organic person who has survived the process.

These types of questions are almost impossible to even try to answer with any real certainty.
Metadian wrote:
I'm unsure where this leaves the philosophical questions ('why'), though, but I think the less daring position is to say that "having qualia violates the laws of physics" is as unwarranted as saying the question is meaningless. How can you know it violates something when it may well be an intrinsic part of the nature whose laws it is supposedly violating? Or do you mean known, or specific laws?
First off I said that logic circuits don't violate the laws of physics while experience does.

My mistake. What I actually meant to say is that logic circuits can be fully explained using basic physics, whilst the phenomenon of experience cannot be explained using basic physics.

Also you say, "having qualia violates the laws of physics", but using the term "qualia" here is a bit misleading because the term itself already implies that qualities are being "perceived". In other words, it already implies that "subjective awareness" is occurring.

This goes back to what we mentioned before about keyboard input into a computer. The fact that a human body can have sensors, and that quality stimuli can be input to the brain, can itself be explained via basic physics. Nothing unexplained there.

The mystery there then becomes the question of why a human brain experiences this input, whilst a mere computer only reacts to it using logic circuits which can also be explained using basic physics.

In other words, there's no problem here until we need to explain how a brain is actually having an "experience" rather than just reacting to input like a zombie.

Keep in mind that we have no problem explaining the existence of a zombie in terms of basic physics. So it's not until something actually has an experience that we start to run out of physics explanations.

It's not that this "violates" physics, but rather it can't be explained by physics. That's what I was trying to get to before.

~~~~~~

Now it may be possible that the phenomenon of subjective experience will someday be explained in terms of basic physics. Perhaps in the form of some type of analog feedback circuits. In fact, my hunch is that this will be the secular explanation if one can ever be found.

My only position right now is that we aren't currently able to make that explanation. I think it would be great if we could. In fact, if we could explain subjective experience in terms of an electromagnetic pattern of an analog feedback circuit, that would probably be one of the greatest discoveries mankind has ever made. We would have finally brought this question to completion using secular science.

I'm just saying that we aren't anywhere near that point yet. And it's not clear that this will be the answer. But I would suggest that this is certainly a path worthy of investigation to be sure.

Metadian
Student
Posts: 27
Joined: Sat Dec 30, 2017 5:15 pm
Location: Spain

Re: Scientific determinism.

Post #20

Post by Metadian »

[Replying to post 19 by Divine Insight]
Don't worry about it, reply in whatever depth you have time/energy for!

You raise two interesting points about "I-ness" again and I'd like to mix up the criticism with some science-fiction for illustration.

First, on the Buddhist belief system and dying: I would disagree that "not dying" in the transporter/cloning process would also take away meaning from "actually dying" in the failed-transporter/disintegrating machine.

Let me specify more about this "system of matter" (SM) I mentioned before. Remember Theseus' ship, and now imagine that I stored all pieces, old and new, in some room. The set of all these pieces would be the SM. This is distinct from, and broader than, the matter that is at one given moment (t=0) actually composing the body (m1). Thus at t=10 (years), we'd talk about a different body, materially speaking (m2). Both m1 and m2 are formed by a finite amount of ordinary matter, and at the time of my natural death, it would still be finite and defined, for instance SM = {m1, m2, m3, ..., m_n}.

I argue that the illusion of self is a product of mnesic continuity, cumulative experience stored as memory. What matters is the arrangement, not the fact that at one given time (m_x) this matter is one concrete set of atoms or another. Thus the clone rightly believes that it is an "I", an embodiment of "I", just like the post-transport creature.

Naturally, and normally, our SM is spatially close and naturally continuous. That is, the matter that will compose me in 20 years is from Earth, and it will be replaced in a gradual fashion by the internal organization of my body. The transporter is different in that it is abrupt, spatially distant and discontinuous. The SM is an abstraction of an m1 found on Earth, and an m2 found somewhere else, with no smooth transition.

This perception of "antinaturality" is what I argue makes us dissociate the "I", 'fetishize' our matter instead of the arrangement, and ultimately fear death when going into the transporter: the death of a version of "I". But what emerges is, I believe, meaningfully an "I". With a machine that only disintegrates m1 but doesn't build m2, I would say death of the SM happens - of all the versions of "I" who believe they are "I" - and thus an "actual" death has taken place; nobody would have I's memories, experience, etc. I'd hold a funeral, which I wouldn't after each usage of the transporter.

Secondly, I wanted to address the "inorganic copy". I'm reminded of a Black Mirror (British sci-fi show, focusing on technological dystopia, though this episode was a love story) episode called San Junipero, which dealt with the subject of virtual reality, 'uploading of personality', and euthanasia. Old people who barely moved or couldn't move spent progressively increasing hours in a virtual reality, through avatars of their younger selves, in an eternal party city of a chosen decade (the 80s, the 90s). When they died, they were offered the chance of "uploading to the cloud" - basically keeping a copy of their personality uploaded in the servers. They could disconnect whenever they wanted if they got tired, it was not a prison of any sort.

Our senses can be deceived, for instance we could be made to feel warmth or touch by stimulating specific neurons. A computer can also be "tricked" into believing it has input it hasn't. We could upload our neural network to a virtual reality, à la Matrix, and it would also be a version of "I" in a meaningful sense. Whether doing this by destroying the original body is ethical ("destructive upload") is up for bioethicists of the future... though for sure it could be considered a form of (bodily) murder if it is done against someone's will. They'd lose an irretrievable and unique version of their self, and also the original one in a chronological-ontological sense.

*

Well, this is a very interesting part and it gives us something to work with. In the other post I presented you with the internal, structural difference for subject vs zombie. I apologize if the neurobiological jargon was excessive, but my purpose was conveying something about feedback logical circuits, in fact. I fear that we may have to bring back some of the "extra human complexity", though, because it may be scientifically relevant.

To cut to the chase, imagine first the internal structure of a zombie/digital machine.

It could have a smaller or bigger memory, which means it can mimic things worse or better. But this memory has a certain quality: it is non-semantic, a simple pattern match. It has no subjective meaning inside the mind, because this mind has no thought, no association, no creative problem-solving. It takes an input, matches it to an output, and expresses that output outwardly. What an intelligent mind could store as an efficient usage of resources, the zombie would likely store in very inefficient and redundant ways, because it just takes in situations and reacts in a raw, pre-programmed way. And it is limited by the finiteness of its storage.

If I ask a zombie, "what do you think of flowers?", they could be programmed to answer "I like them". If I ask again, "what do you think about that?", they could answer "They are beautiful", if I ask again, "What do you think about that?", they could answer, "Beautiful things are nice". Etc, etc, in these cases "about that" becomes, "what do you think about [subject]".

Nevertheless, I could ask you, thinking-you "what do you think about that, about [thinking that]?", and you could answer: "I think that liking them is unfortunate, because I am allergic", "I think that I find them beautiful, because I associate them with things that please me, like femininity and purity", or, "I think that aesthetics are an innate category that allow us to confirm order and health and that's why it's pleasing." Theoretically, this has no upper limit, as I can just tell you "What do you think about that thought", taking the last thought as the level of analysis. And you could answer, "I think I keep hating myself for having allergies because my mother was always angry with me that we couldn't go out in spring", "I think I keep repeating an arbitrary common narrative about purity/femininity that doesn't convince me, but it seems smart" or "Order and health are clearly evolutionary benefits so I think it was rational for me to think so in light of Darwinism"

These reflect a recurrence, a meta-cognition, 'thinking about thinking' (organically, it means there's some feedback loop, of course), that could only be 'mimicked' by the zombie in the form of that inefficient storage I mentioned before. They'd have to map each possible question to each possible recurrence, one after another, and one after another, to their answer. Thinking-you, however, associates and has a semantic language. It has true complex, flexible intelligence; not merely an astounding variety of simple reactivity. These things are organically different, and they are reflected in the structure of the brain; the neurology explains the 'how', and observation suggests that this is how awareness works in the phenomenon of minds.
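That difference between exhaustive storage and recursive generation can be caricatured in a few lines of code (a toy sketch only; the table entries and function names are invented for illustration and aren't meant as a serious model of either mind):

```python
# Zombie: every level of "thinking about thinking" must be pre-stored,
# one canned entry per depth of recursion. Past the table, silence.
zombie_table = {
    0: "I like them",
    1: "They are beautiful",
    2: "Beautiful things are nice",
}

def zombie_answer(depth):
    return zombie_table.get(depth, "...")   # no entry -> no reaction

# Thinker: one small rule applied recursively -- no upper limit in
# principle, because each answer is generated from the previous thought.
def thinker_answer(depth, thought="flowers"):
    if depth == 0:
        return f"I think about {thought}"
    return f"I think about [{thinker_answer(depth - 1, thought)}]"

print(zombie_answer(5))     # prints "..." -- the table is exhausted
print(thinker_answer(2))    # nested reflection, generated on the fly
```

The zombie's competence is bounded by how many recurrences were mapped in advance; the thinker's is bounded only by how long it keeps applying the same associative rule to its own last thought.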

As I said, this happens in our association cortex, because it's what allows us to understand and think - and when it's missing, what gives us agnosia. Association cortex is what lets you know that the shape of the eyes, nose and mouth constitutes a 'face', and endows that concept of 'face' with the subjective meaning or symbolism that allows you to relate it to other thoughts, emotions, etc. It stands between sensory and motor areas, the thinking between the feeling and the doing/speaking, and it is broadly responsible for "mental synthesis" or "imagination" (eg imagine a beach and describe it with the "eye of the mind") as well as "mental acts" (count to ten in silence).
