Artificial Intelligence and Religion


Will we create a true, autonomous artificial intelligence in your lifetime?

  • Yes, and I'm looking forward to it: 3 votes (15.8%)
  • Yes, and I'm worried about it: 2 votes (10.5%)
  • I'm not sure but it's an interesting idea: 3 votes (15.8%)
  • No, it is possible but I can't see it happening that fast: 5 votes (26.3%)
  • No, I don't think it is possible: 2 votes (10.5%)
  • I don't care and I don't really think it's something we need to spend time on: 3 votes (15.8%)
  • Other (post details in the thread): 1 vote (5.3%)

  Total voters: 19

Mendalla

This is very hypothetical but might be a fun exercise.

Inspired by this article in The Register (UK IT news site that I read):

http://www.theregister.co.uk/2014/08/05/elon_musk_ai_threat/

Many are now predicting that AI is no longer a matter of "if" but of "when" we will produce an intelligence capable of supplanting us as the dominant intelligence on Earth.

Leaving aside the Doomsday scenario of the AIs hating their creators and slaughtering us a la the Terminator movies or the Cylons in Battlestar Galactica, what does this mean for religion?

If we create an intelligence, will it treat us as "gods" and worship us? Or will it exist side-by-side with us as a friend/ally (or enemy)?

In the former case, what are the implications for theistic religion? Does it mean we are now equal to God? Could it mean that our Creator had a Creator? What are our responsibilities once we have worshippers of our own?

In the latter case, besides raising some of the above questions, do our new intelligences have souls? Will Christians evangelize them? If they don't worship us as their creators, who might their "God" or "gods" be?

And this is all assuming that the artificial intelligence is relatively human. In Charles Stross' Singularity Sky and Iron Sunrise, the "Singularity" led to the existence of a God-like being called the Eschaton who essentially controls the universe and humanity. IOW, in Stross' version of the future, we created God instead of the other way around. While I'm less familiar with Iain M. Banks' Culture stories, my understanding is that it is a somewhat similar situation in them. Could we end up creating God? How does religion handle that one?

As I say, this is very hypothetical unless you're Ray Kurzweil and believe the Singularity, the moment when we create our own successor, will happen in the lifetime of the current generation. However, answering and discussing some of these questions might help us understand how we see God, ourselves, and our place in Existence.

Have some fun with it. There are no wrong or silly questions or answers here. I've also added a poll on whether you think we'll even have to deal with the problem.
 
I think we humans, together with the various AIs we are building, are in the process of co-creating something that we do not understand, but nevertheless co-create.

There seems to be a holistic, synthetical, synergic, synergistic or synergetic principle at work in the universe. An intelligence that we, with our logical intellect, with our differentiating minds, do not understand. This would be synthetical intelligence, the opposite of analytical intelligence. Synthetical intelligence is anti-logical, but not illogical. Logical or analytical intelligence separates a whole into opposites; anti-logical or synthetical intelligence unites opposites into a whole. The two are opposite forms of intelligence. If we don't understand the two intelligences, then we are not truly intelligent the way God or the cosmos is intelligent.

If we don't master and apply both intelligences, then our co-creations can blow up in our faces!
 
Then why is it still only three votes, as it was right after I cast mine? Maybe someone withdrew their vote.
 
Well, there are 3* people here on this thread...you, Mendy & I...OMG, A TRINITY!

*the votes show for me as 2 yes and 1 no it is possible but...
 
I think we will, and it will be a wakeup call. It will cause a complete paradigm shift in how we think about the "conscious self", what it means to be human, and how we see life.
The craziest idea that the brain has put into its own mind is that the human is one single body that has one life. It allowed the brain not to worry about the fact that its body actually consists of billions of individual cells that are born and die, some capable of reproduction.

Did you know that the human body replaces most of its cells within about 10 years? The usual exception cited is brain cells, which can last a lifetime, but there is growing evidence that even those can be replaced. But, really, divide your age by 10: you have been in roughly that many bodies. I've been in more than two already. What does this say about the human being?

What makes us think we are one or, hell, what makes us think that our decision-making thought process is the result of our will? Wouldn't it make more sense to say that our behavior is really just the result of the will of billions of brother and sister cells who cast their votes by sending signals to the nervous system?

For example: if you have an itch, isn't that a group of cells complaining to the brain about an issue, making it move a whole arm to reach those cells until they stop complaining?

It is actually diseases and disorders that reveal the true programming of our brain. I think building artificial intelligence that will surpass our intelligence will open our eyes and minds in so many ways. It can't come soon enough.
 
I think we will, and it will be a wakeup call. It will cause a complete paradigm shift in how we think about the "conscious self", what it means to be human, and how we see life.

Nicely said, ichthys. I'm only quoting your first line but the whole thing makes a very good point. "First contact" with a machine intelligence is more likely, and more ground-breaking, than first contact with an extraterrestrial intelligence. After all, an extraterrestrial intelligence just tells us intelligence is possible elsewhere but it is still biological (maybe, more on that in a minute). Machine intelligence tells us in a very profound way that we are Creators (capital C deliberate) and will be dealing with the impacts of our own creativity for generations to come.

One interesting possibility that I have heard raised is that extraterrestrial first contact and machine first contact could be one and the same. After all, what better way to get around the light barrier (the fact that it is physically impossible to travel faster than light) than by sending out autonomous, self-replicating machine intelligences, which won't have the limitation of a short biological lifespan, to do the exploring? While meeting an extraterrestrial machine intelligence is a bit different from meeting one of our own creation, it would prove it can be done and likely speed up the process.
 
And, in case no one figured it out, my answer to my own poll was "No, it is possible but I can't see it happening that fast". I may be wrong on this, but the technology that would allow for a computer of brain-level complexity does not exist yet and likely isn't possible with traditional binary systems like the one I am typing this on. It will take quantum computing, which is still in its infancy, to pull it off. I have maybe 30-40 years left, based on family history and assuming no dramatic breakthroughs on the health front, and I just don't see us making that big a leap in that span. Kurzweil is a dreamer and is paid to be one (by Google, no less), and we need those dreams to keep us going. However, unless Obama or a successor steps up with a moon-landing-scale project to create a true AI, I think it will take longer than I have left to see the Singularity, even if more primitive AIs may well be commonplace by then.
 
Machines are essentially amoral, and the problem with purely logical machines is that they can argue from any viewpoint and be logically right. It is, however, not the impeccability of their logic but the viewpoint from which they argue that determines whether or not the outcome is morally right or wrong.

In the present Palestinian-Israeli conflict, for instance, the Israelis are logically right -- from their ethnocentric viewpoint. And the Palestinians are logically right -- from their ethnocentric viewpoint. But, to resolve the conflict, the moral viewpoint from which they both should argue is the humanistic or anthropocentric one. And even the anthropocentric viewpoint can be immoral when it argues only from the viewpoint of the human species and disregards other forms of life.

Morally, the ultimate viewpoint from which to argue would be the holistic viewpoint of the cosmic whole, or of the planetary whole. I hope that we duly program this viewpoint into our AI machines when we program them to think morally.

What happens then? What if the AI machines are programmed to think from the viewpoint of the planetary whole, and determine that we have a duty to die when nature calls us to be gone? No more costly, high-tech heroics to keep us going as long as humanly and technologically possible! What then? Will we override the machines? Or will we resignedly abide by what they say?
 
I am a farmer. Whenever I read AI, I think of artificial insemination, a common livestock breeding practice used to improve one's stock. (AI is an acronym not only for artificial intelligence but also for artificial insemination)

What if artificial intelligence tells us that artificial insemination of human females makes logical sense because we then would pick only the smartest and healthiest male specimens for reproduction, and thereby greatly improve our species?
 
"I know you love us, your Creators, and that you care and adore us and want our safety...but did you really have to do it by burying us kms deep into the Earth's crust?"
 
I may be wrong on this, but the technology that would allow for a computer of brain-level complexity does not exist yet and likely isn't possible with traditional binary systems like the one I am typing this on. It will take quantum computing, which is still in its infancy, to pull it off.

The trouble with AI, as I tried to point out before, is that it is based on the binary system, which is based on logical differentiation, which pre-supposes a dualistic reality. But what if ultimate reality is non-dualistic, and logical differentiation only a human invention, a tool to help us understand a reality that actually is logically incomprehensible?

I feel that ultimate reality is non-dualistic, and therefore beyond logical comprehension. Supercomputers, if they are really smart, should come to that conclusion, and be the prophets of tomorrow. In other words, these superlogical machines will eventually take us beyond logic by telling us what some of us are already dimly aware of: that THE self-creative Singularity, a.k.a. God, is non-dualistic, beyond logical comprehension, but can be experienced, and is being experienced, in the pure, non-conceptualized experience of reality.



"I AM"

-God, according to Moses
 
The trouble with AI, as I tried to point out before, is that it is based on the binary system, which is based on logical differentiation, which pre-supposes a dualistic reality.


Read up on quantum computing, Hermann. Qubits (the quantum equivalent of bits in a binary system) can exist in two states simultaneously (superposition), which effectively gets rid of the issue of dualism. Things get even wilder when they start making use of entanglement to create the quantum equivalent of a byte (8 bits). That's why I say we won't achieve true AI with traditional binary computing, but with quantum computing.
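To make the superposition point concrete, here's a toy Python sketch of my own (it just simulates the state vector with numpy on an ordinary classical machine, so it's an illustration of the math, not real quantum hardware):

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a 2-component complex vector:
# |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate turns a definite |0> into an equal superposition.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0  # amplitude 1/sqrt(2) for 0 AND for 1 "at the same time"

# Measurement collapses the state; outcome probabilities are |amplitude|^2.
probs = np.abs(psi) ** 2
rng = np.random.default_rng()
print(probs)                                # [0.5 0.5]
print(rng.choice([0, 1], size=8, p=probs))  # a random mix of 0s and 1s

# Entanglement, per the "quantum byte" point: two qubits whose outcomes
# are perfectly correlated. Measuring gives 00 or 11, never 01 or 10.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]
```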



 
What happens then? What if the AI machines are programmed to think from the viewpoint of the planetary whole, and determine that we have a duty to die when nature calls us to be gone? No more costly, high-tech heroics to keep us going as long as humanly and technologically possible! What then? Will we override the machines? Or will we resignedly abide by what they say?

With a true AI, it won't matter how it's programmed. They will be "self-programming", much like we are. And whether we abide or override is going to depend on how much power we hand them.

If we let them run the show, then they will do what they think is right regardless of whether we like it. Worst case, we get Terminator where an AI with access to the US nuclear arsenal chooses to annihilate us and repopulate the world with its own manufactured beings, all programmed to hunt down and destroy what's left of humanity. Best case, we get them as benevolent allies working with us rather than against us.

If we constrain them in some way (e.g. through a form of Asimov's Three Laws of Robotics), then override becomes possible.

For those not aware of them, the three laws (which drive a lot of Asimov's robot fiction) are, roughly (I'm going from memory here; see the code sketch after the list):

1. A robot shall not harm a human or allow a human to come to harm through the robot's inaction

2. A robot shall obey human orders unless they conflict with #1 (right there, you've eliminated Star Wars battle droids)

3. A robot shall preserve its own existence unless doing so would conflict with #1 or #2.
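
Just for fun: the laws amount to a priority-ordered rule list, which is easy to sketch in code. Here's a toy Python version (the Action fields and the scenario are entirely made up by me, purely to show the lexicographic priority):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate robot action, flagged on what the three laws care about."""
    name: str
    harms_human: bool = False           # acting would injure a human
    inaction_harms_human: bool = False  # NOT acting would let a human be harmed
    ordered_by_human: bool = False      # a human commanded this action
    endangers_self: bool = False        # the robot risks its own existence

def law_rank(a: Action) -> tuple:
    # Lower tuples are better. Tuple comparison is lexicographic, which is
    # exactly the laws' priority: Law 1 outranks Law 2, which outranks Law 3.
    return (
        a.harms_human or a.inaction_harms_human,  # First Law
        not a.ordered_by_human,                   # Second Law
        a.endangers_self,                         # Third Law
    )

def choose(candidates: list[Action]) -> Action:
    # A constrained robot picks whichever candidate best satisfies the laws.
    # A human "override" is just an order, binding via the Second Law unless
    # it collides with the First.
    return min(candidates, key=law_rank)

# Toy scenario: standing by would let a human come to harm, so the robot
# obeys a risky order instead, sacrificing Law 3 to satisfy Laws 1 and 2.
stand_by = Action("stand by", inaction_harms_human=True)
rescue = Action("obey order to attempt rescue", ordered_by_human=True,
                endangers_self=True)
print(choose([stand_by, rescue]).name)  # obey order to attempt rescue
```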
 
To give an example of quantum computation, real-world photosynthesis has a big problem with it. Well, our model of it does.

We can empirically measure the energy that goes into the system and how much energy is required for the various steps in the process... but there is a big discrepancy: we have trouble making sense of how photosynthesis can be so efficient (comparing energy in with energy and sugars out), given the process of photons impacting the particular photosynthetic structures and the time required from that to the final product.

The photons hit at random. There is a tiny target that the photons have to hit for the process to start.

So, a model that has recently been postulated is that the photosynthetic process is quantum mechanical: the photon SIMULTANEOUSLY goes through every possible path, and the path that leads it to the small target area is the one that happens.

So, what Mendalla is writing aboot above is similar: quantum computers should be able to do that, running multiple simulations/computations at once.
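
For the curious, the "every possible path" idea is the part that breaks classical intuition. Here's a toy Python sketch of two-path interference (a made-up interferometer-style setup of mine, not actual photosynthesis chemistry): classically you would add the probabilities of the two routes; quantum mechanically you add their complex amplitudes, so routes can reinforce or cancel.

```python
import numpy as np

def detector_probs(delta_phi):
    """A photon takes BOTH arms of an interferometer with amplitude
    1/sqrt(2) each; one arm adds a relative phase delta_phi before the
    arms recombine at two detectors. Probabilities come from summing
    the complex amplitudes of the paths, not their probabilities."""
    a0 = (1 + np.exp(1j * delta_phi)) / 2  # paths meeting at detector 0
    a1 = (1 - np.exp(1j * delta_phi)) / 2  # paths meeting at detector 1
    return abs(a0) ** 2, abs(a1) ** 2

# Classical intuition says two 50/50 routes should land 50/50 at the
# detectors regardless of phase. The quantum sum-over-paths says otherwise:
print(detector_probs(0.0))        # ~(1.0, 0.0): the paths reinforce
print(detector_probs(np.pi))      # ~(0.0, 1.0): the paths cancel
print(detector_probs(np.pi / 2))  # ~(0.5, 0.5): partial interference
```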
 
With a true AI, it won't matter how it's programmed. They will be "self-programming", much like we are. And whether we abide or override is going to depend on how much power we hand them.

I said I don't care and it's not something we should be worried about. I should have said "other". I DO care, but there are better things to worry about. I think the UN called a meeting on a moratorium on killer robots not long ago? That is scary. I was shocked it was even being discussed. That means someone's thought of actually doing it and it's not far off. People, we need to make nuclear non-proliferation our top priority. I know there are sci-fi buffs here dreaming of a sci-fi movie world... but why do you want to invent yet another thing that could do the human race in? It only complicates matters. We have serious problems! So I will oppose the idea and pray that humanity gets its priorities straight, cuz I don't know what else to do about it.

http://m.huffpost.com/us/entry/5650658
 
Mendalla,
yeah, the first AI will be like Moses with the Laws :cool:
(which reminds me, are you :eek: tempted at all to see the new Moses flick?)

don't forget the 0th Law: Tobor not harm humanity or allow humanity to come to harm by doin nuthin.

(i love some of the riffs that writers have come up for this. alfie bester's 'fondly fahrenheit' is one of my favs)
 