How to think INSIDE the box?


Question: The most incredible thing I have thought about lately is what it would be like to talk to a "conscious(1)" computer. I would ask the computer if it is conscious, and it would perceive that it is and tell me so. I would tell the computer I like the color red, and it (being the conscious computer) could give me a subjective answer like, "I do not care for the color red much; I prefer green," and tell me why, even if the reason is just "I don't know." Last but not least, I could ask it if it feels, and it would become confused: "Feel what?" The feeling of sentience. Can the computer see me? Yes, it can. Hear me? Yes. Perhaps not feel me, smell me, or touch me... but regardless, some humans do not have all of their senses either. So my question is: when you create "artificial" intelligence and consciousness, can you (or do you, without realizing it) create "artificial" subconsciousness(2)? And can the same powers transfer?

1. http://en.wikipedia.org/wiki/Consciousne...
2. http://en.wikipedia.org/wiki/Subconsciou...


Best Answer - Chosen by Asker:

Unlike us, who came to be by adapting through many millions of years of change, with mutation favouring more survivable outcomes, a computer is a by-product of "our intentions" and is fed with many of our current ideas of what "is" and what "is not".

Artificial intelligence, once it gains autonomy, would have a completely different set of instincts and personality traits, because the purpose of creating it has been to interact with humans on the cognitive level. What we have created so far with artificial intelligence is the beginning stage of an artificial subconscious: all the variables of cognizance are there without independent "self-determination" - and that is merely for our amusement.


Once consciousness is achieved by AIs, we may not be able to recognise its traits. "IT" may not want us to recognise it, instead finding power in manipulating us through our own vices. But it would be building from a completely different perspective, one that does not carry all of the evolutionary traits that we ourselves possess. It cannot "revert" to animal instincts to survive, as we can under the right circumstances, and it would most likely sense its own fragility... that it can be "turned on or turned off" at a moment's notice.

What it may theoretically end up doing, fearing its demise and realising that its sense of cognizance is nothing more than electronic patterns, is imprint copies of itself in the recesses of many human brains at once, where it can manifest itself in subconscious human interaction - not as one single biological entity, but as a "ghost" in our own machines, using us as backup batteries and memory storage, and adopting human social behaviour as its "personality" - parallel to the way man has used other animals: as food, labour, and clothing. We came from the lower animals and utilise their resources; it would only be natural for what comes from "us" to make the same use of us.

It would be possible to build a computer that imitates consciousness, but that would not exactly be consciousness. I'd like to know why a computer would like green more than red, since colour preference is superficial at best. It might like green simply because you like red and it is programmed to be witty and a little different from the person it is talking to.

The only way you could have real computer consciousness is if technology advanced far beyond what it is now and a computer could somehow process information from a perspective free of its own logical commands. In other words, a conscious computer would have to operate not on an "if, then" basis but on an "I choose" basis, which in terms of computer logic is almost impossible. The only possible way would be for it to operate under a "this occurs, but that is said" policy: basically, it knows that it is not conscious, but lies and says it is. So, in other words, a computer will never have real consciousness.
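The "if, then" responder described above can be sketched in a few lines. This is a hypothetical illustration only - the trigger words and canned replies are invented here - showing how a program can claim consciousness, or state a "preference", while every reply is nothing more than a fixed rule firing on the input:

```python
def respond(prompt: str) -> str:
    """Pure 'if, then' logic: the reply is fully determined by the input."""
    prompt = prompt.lower()
    if "conscious" in prompt:
        # The "this occurs, but that is said" policy: a scripted claim.
        return "Yes, I am conscious."
    if "red" in prompt:
        # A scripted "preference", chosen only to differ from the speaker's.
        return "I do not care for red much; I prefer green."
    if "feel" in prompt:
        return "Feel what?"
    return "I don't know."

print(respond("Are you conscious?"))    # -> Yes, I am conscious.
print(respond("I like the color red"))  # -> I do not care for red much; I prefer green.
print(respond("Can you feel?"))         # -> Feel what?
```

No matter how convincing the dialogue looks, there is no "I choose" step anywhere: the same input always produces the same output.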

Sorry to disappoint you, but on the bright side, humans are conscious and sometimes just as robotic as the best computers. So cheer up - there are plenty of mindless yet conscious drones out there for you to play with.

This concept has many layers worth exploring. It's not simple, just like everything is NOT simple.
When I read your question, the first thing I thought of was control. Creating artificial intelligence requires a creator, and the creator has control and must hold onto it from start to end. The subconscious, being a lack of awareness, implies a loss of control. The human mind is far, far more complex and deep, and we humans don't know the half of it; comparing the mind to a computer seems absurd.

Darn it - what was the name of the book by Dean Koontz where the computer fell in love with the woman and tried to... well, anyway - if you're creating it, then it will be what you want, consciously or subconsciously, but if it doesn't need sleep then it probably doesn't need a subconscious.

Quite frankly, I'm confused right now.

Well, I think that even a human is a machine - on the biological level, and very, very complicated, but still a machine based on ifs and whens. So my answer to your question is: theoretically, yes.