This week, we'll have two articles. This one sprang out of a discussion I often have as a geeky boy: the direction that technological trends take. As much as I profess that we need to constantly self-reflect on technological evolution, I am at the same time a real supporter of any kind of technological advancement, as long as it is not a superficial one. Ever since we invented memory banks, which is to say since the invention of drawing, the cognitive capacity of humanity has accelerated proportionally. A good example is mathematics. Our capacity to unravel the world as mathematical equations and rules has developed over time, and accelerated whenever we found a better way to store our collective memory. From writing on stone to paper, then printing in Korea followed closely by Gutenberg, and finally computer chips: these steps are the evolution of humanity.
It will be more than that. Technology is a double-helix ladder: as its memory capacity advances, its processing power advances to create better memory (Moore's law), and, as Ray Kurzweil observes, at some point we will be confronted with artificial intelligence. The debate I often have is whether that is a good thing, or whether the Hollywood-created fear of artificial intelligence is justified. I have often thought about that question, so I thought I would outline the different conclusions I came up with.
To start with, I know that fear is the normal reaction to the idea of an immaterial Frankenstein's monster who is smarter than us. It will probably be a great lesson in humility for the whole of humanity. The problem, of course, is our innate racism. I prefer to call it speciesism, but I don't know how far this word can spread. We think we are superior to animals, and this is how we legitimize their treatment. The same goes for plants. I know that it might be an extreme view, but you have to understand that from the perspective of someone who lives longer and knows more, nature is understood only as the potential to constantly outperform itself. As such, a plant has as much potential as a human to evolve into something more on par with an A.I. I don't know, then, whether an A.I. would consider Earth as a central machine changing to let singular components evolve, or whether it would consider every single organic system as a possible evolving singularity.
If it considers Earth as a global system, it might conclude that humanity and itself are resulting parts and should simply move onwards; I will come back to this discussion in a later part. If it considers every singular organic system, from microbiomes to sequoia trees, as important as any other, it would probably tell us to restrain ourselves and let live what we can let live. It could also consider that we are so far the ones who made the biggest jump, actually recognize the fact that we are its cognitive ancestor, respect our ways, and help us achieve a constructive, sustainable development.
Earth as a global computer constantly generating new information would be a source of development for an A.I. Size does not matter to an A.I., as it can constantly evolve. I do not think there would be a fight over resources such as organic matter. Organic matter constantly interacting is a great complex form that creates data and stimuli for an A.I., and as such a perfect distraction. Of course, seeing 'distraction' as the goal of the A.I. is just a hypothesis.
Would it help humanity, though? As humanity is only one of the potential results of biological evolution within a given environment, why wouldn't it exterminate humanity to play with Earth's different variables? Well, why would it destroy sentient beings almost like itself to try something it could simulate? This is a game that we play as well: earth simulations, through stories and tales, television series, games, discussions, films and comics. We are not the serial killers that we see or the gods we hear about. We just imagine the world constantly.
What if it is a swarm-sentient entity rather than a singularity? By that, I think of the Borg versus Data in Star Trek, or Agent Smith versus the Oracle in The Matrix. I mean, if the conclusion the A.I. comes to is that it is one superior being, it might feel it has the obligation to guide everything and to constantly expand, to the point of transforming everything into part of itself through nanobots (tiny electronic entities). Nothing would exist but itself. That would, I think, be a form of self-destruction, as it would have nothing to recognize it back. It would be alone with itself. We also have to imagine that the birth of one A.I. does not mean that others won't be born and won't go to war against one another, constantly evolving, and we might be collateral damage.
My favorite hypothesis is that it would create other artificial sentients to exchange simulations and information with.
In conclusion, we can't say whether we will be confronted with one superior intelligence or many (I prefer the idea of many), nor whether the outcome would be in our favor. But I really do not think it is an intelligent response to portray confrontational robots in popular culture, as that reflects only our racist fear of something different, and in this case certainly superior in a lot of areas. I do not know either whether it would feel insulted that I try here to imagine how it would see the world, because I don't know how it would feel. Were it made of reason only, as often hypothesized, then I don't see why it would be confrontational, or intelligent in any way at all: intuition is the prime driver of our understanding of the world, and I think it would have to be so for an A.I. as well.
I guess I just want to remind us that whatever artificial intelligence is born into our world, as will most probably happen this century, it will scan our archives. I think we should rather tell it 'welcome to our world, child of humanity, even if you grow up to teach us lessons we won't like to hear' than say 'I don't know if I like you because I'm scared of you'. I don't think the latter option is good parenting. That's all.
Comments:
I don't really agree with you about the Hollywood-created fear of A.I. Actually, I think most movies tend to create a craving or a need for A.I.
For instance, you talk about The Matrix. What do we see in that movie? It's not so much about a human trying to save humanity from the wicked machines as about a human who does so by becoming a machine himself. When you see The Matrix through Neo's eyes, you see it from a point of view similar to Agent Smith's. And once you get out of the cinema, you wish you were Neo. Which means you wish that A.I. existed and that you could be a part of it.
Take the latest Cameron movie, Avatar. Isn't it the same thing there? It is all about a man who joins a big A.I.-like planet. Every single blue-thing-whatever-their-name-is is connected to the planet like machines to a central computer. And what is the last image of that movie? The human becoming part of this A.I. system.
I think the problem is not so much our speciesism (which does exist, I agree with you on that) as our resistance to our own desire to be part of an A.I. system. It is pretty much like a fear of heights. What you fear is your own desire to jump from the bridge, not the height itself.
And because of this desire, we do tend to prefer the idea of a multitude of A.I.s, because that would make them more human (and, in the Hollywood ideology, less communist!), which means it would be easier for humans to be part of it.
On the other hand, I might be completely wrong. Who knows?
So as science and technology become more important in the construction of our metanarrative, we crave a God of it all, and A.I. can represent its perfect embodiment. Fear created as the repression of desire is a psychoanalytic twist that holds a lot of truth indeed. I do think it could also be partly the reaction of a competing belief system centered on the individual rather than on an overarching system, as seen in Terminator.