Saturday, August 22, 2015

Perspective

I think it was in November 2013, on the old JREF board, that I started discussing artificial intelligence with posters like FudBucker, RussDill, Brian-M, and Skeptic Ginger. Somewhere during that, I decided to try to figure out how the human mind works. I thought about it for the next 14 months, then worked on it continuously for the 7 months after that. Except for one discussion on Facebook, all of my interaction has been here. I wouldn't have gotten anywhere without this board. However, I'm now not sure where I got to.

I didn't find what I was looking for. I still don't know the nature of conscious experience or qualia, and I can't say whether Strong or Weak AI theory is true. What I derived were bounding equations and interactions that describe how information processing is related to consciousness. If they hold, they would make it possible to specify a human-class AI in detail. It isn't a simulation at the neuronal scale; it's basically a functional model. The equations don't depend on any particular substrate. As far as I can tell, you could build a human-class AI out of clockwork parts if you could get it to conform to the boundary limitations.

I figured out relationships to prejudice, creativity, and the evolutionary aspects of the brain. The model falsifies Dualism, Idealism, and William Lane Craig's description of God. I have not thought of any way to falsify the simulation argument Nick Bostrom gave back in 2003 (http://ift.tt/1KcAi3D); in fact, if you can create a human-class AI, that would seem to bolster his argument.

Dennett's argument about Strong AI was that you could not truly get a Chinese Room to do all of the things a real human mind can do. The best current example of this is probably Watson, which is good at associating information but doesn't actually understand any of it. And, of course, Watson would never pass for human, not even for a six-year-old child. In fact, Watson can't even match the problem-solving ability of an adult rat, which is theoretically about 1/1000th that of a human. So if my model actually does work like a human brain, I can't use a lack of functionality to argue that it can't be conscious.

Maybe I should be happy. Instead, I have mostly just been depressed. It's possible that my ideas are just another case of Dunning-Kruger. Until recently, I was well aware of the gaps in how to get a computer to act like a brain. Then, about two weeks ago, something occurred to me that seemed like the missing piece of the puzzle. I've thought it over repeatedly since then and can't find any issues. But even if it is right, I didn't find any big answers, and what I did find is probably going to irritate a lot of people. It disproves life after death. It disproves pretty much all of the current theories of cognition, including Global Workspace, Integrated Information Theory, Neural Coalition Theory, and the Multiple Drafts Model. It counters Harris' arguments about free will, which I suppose is curious, since it also reduces free will to a mathematical model.

Then you have people like Hawking, Gates, and Musk warning that this is a genie we can't put back into the bottle. I think about how the Greek mathematicians thought the dodecahedron was somehow too big an idea for ordinary people to deal with, and I wonder what they would have thought of children rolling 12-sided dice to play Dungeons & Dragons. And I think about Hofstadter's statement that he has seen no progress in AI theory in the past 30 years.

Perspective would be helpful.


via International Skeptics Forum http://ift.tt/1KcAi3F

Aucun commentaire:

Enregistrer un commentaire