explaining consciousness

I think most people who have read Daniel Dennett's book Consciousness Explained would agree that, at best, he made a good attempt to explain the problem away. I think The Mind's I, which he put together with Douglas Hofstadter, was much better; in particular I liked the story A Conversation with Einstein's Brain (*), because it points out what the problem is actually about (x). I recommend it as a first step into this topic; after all, it was my first step ...

(*) The online version of the story is full of typos; you might want to read the actual book to enjoy it.

(x) In short: If my conscious experience is the result of some complex software running on the neural network that is my brain, then the physical details should not matter as long as the underlying computer is equivalent to a universal Turing machine. But such a machine can be realized in many different ways, e.g. as a complicated state machine described in a large book. Can a book be conscious? What happens to my consciousness if this computer pauses or halts forever? Where exactly is my consciousness if the pages of the book are distributed around the world? etc.
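To make the "state machine in a book" idea concrete, here is a toy sketch: a Turing machine is fully specified by a finite transition table, i.e. exactly the kind of thing that could in principle be printed on the pages of a (very large) book and executed by hand. The particular machine below is invented purely for illustration; it increments a binary number.

```python
# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move).
# This table *is* the machine -- printing it in a book loses nothing.
TABLE = {
    ("right", "0"): ("right", "0", +1),  # scan right to the end of the number
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),  # hit the blank: start adding 1
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, carry propagates left
    ("carry", "0"): ("done",  "1",  0),  # 0 + carry = 1, finished
    ("carry", "_"): ("done",  "1",  0),  # ran off the left end: new leading 1
}

def run(tape_str, state="right", pos=0, max_steps=1000):
    """Execute the table on a tape until the machine reaches 'done'."""
    tape = dict(enumerate(tape_str))     # sparse tape; missing cells are blank
    for _ in range(max_steps):
        if state == "done":
            break
        symbol = tape.get(pos, "_")
        state, write, move = TABLE[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

print(run("1011"))  # 1011 + 1 = 1100
```

The point of the footnote survives the sketch: nothing in this table computes faster or slower, or "pauses", in any way that the table itself records, which is what makes the question of where the consciousness would reside so awkward.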


11 comments:

CapitalistImperialistPig said...

Your post has a better link - I want to read Dennett and Hofstadter.

But is there really such a thing as a "pure" coincidence?

wolfgang said...

>> pure coincidence

I became a solipsist a while ago, trying to find a consistent interpretation of quantum theory.
And in my solipsistic world all coincidences are pure.

wolfgang said...

Btw since your (?) blog posts suggest a reduced sense of self, I recommend this book.

Lee said...

Whatever "sense of self" or "I" is, it doesn't seem to take much of the brain for it to persist. I've seen many dementia patients in my life and the sense of "I" is there when everything else of their conscious self is lost. It seems to persist up to the point where the is no longer capable of sustaining life.

Lee said...

That should be "the brain is no longer capable of sustaining life."

wolfgang said...

Many years ago I broke an arm and it had to be fixed under anesthesia. But I had eaten before, so they did not want to give me the full dose in case I threw up.
I remember that pain and sensory inputs were reduced, but it was clear to me that "I" was there, so I think this is a very robust, basic feeling and I assume animals have it as well.

But I suspect that Wittgenstein's proposition 7 applies and we may never have a mathematical/physical derivation of "self" ... after all, many people, from Daniel Dennett to cip, seem not to understand what we are even talking about.

Lee said...

>> and I assume animals have it as well.

Yeah, but I wonder where the line is drawn. I think all living cells respond to their surroundings and make decisions based on their recent past and present environmental circumstances. Is it possible they all have a sense of self?

Even though I agree that other animals probably have some sort of sense of self, I don't think it is a feeling we can understand. We tend to anthropomorphize everything because that is all we understand.

I also wonder how much the sense of self varies from person to person. I think you have pointed out before that it may vary more than one would expect.

Maybe someday we'll have some sort of mechanism that allows us to answer some of these questions in at least a qualitative way. Who knows?

wolfgang said...

>> it may vary more than one would expect
Yes, and it even varies within the same human quite a bit, even when we are fully awake ...

wolfgang said...

... for future reference, this is how Penrose's collaborator sees it.
I think it is interesting that attempts so far to simulate the neural network of worms have failed.

Lee said...

Thanks for the link. I'd stopped reading that thread a while ago so I would have missed it.

It seems to me that his thinking is at least somewhat similar to yours. You've been saying for years that we are never going to understand the measurement problem without taking quantum gravity into account.

The inability to model the behavior of C. elegans seems like it should be troubling from a lay point of view. However, as a non-expert I find it really difficult to determine which arguments should be taken seriously. Most of my life I just took it for granted that the human brain is nothing more than a Turing machine configured in such a way that what we call consciousness can emerge. I have no idea if that's true or not anymore. The only thing I become more certain of in my old age is my own ignorance.

wolfgang said...

My own two-sentence summary of the Penrose argument is here and one can read more about efforts to simulate C. elegans here and there.