AI should very much be feared. It's just a matter of when... and it's most likely not now... and may not be for a long time.
The neural networking stuff is where things get weird. The AI teaches itself, and we don't know how it is actually doing it.
It's a black box.
We give it inputs, look at the outputs, and tell it whether it's right or wrong, and it adjusts itself accordingly.
It is already better at spotting diseases in x-rays than the best-trained doctors... and we don't know what it is seeing that we cannot.
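That "tell it if it's right or wrong and it adjusts itself" loop can be sketched in a few lines. This is a deliberately tiny toy (a single artificial neuron learning the logical AND function with the classic perceptron update rule), not a real deep network, but it's the same basic idea of adjusting from feedback:

```python
# Toy sketch of learning from feedback: one neuron learns AND.
# Training data: inputs paired with the "right answer".
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):  # repeat the feedback loop many times
    for (x1, x2), target in examples:
        # Forward pass: turn the inputs into an output.
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        # Feedback: compare the output to the right answer...
        error = target - output
        # ...and adjust the weights in the direction that reduces the error.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

# After training, the neuron reproduces AND on all four inputs.
for (x1, x2), target in examples:
    output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
    print((x1, x2), "->", output)
```

The "black box" problem is that a real network has millions or billions of these weights, and the numbers they settle on don't come with any human-readable explanation.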
I've mentioned it before, but look up "The End of the World with Josh Clark" podcast. It has an episode dedicated to AI and explains things far better than I could.
There's a part in it about how careful we need to be with the goals we give an AI, as it could very much end us by accident without the right fail-safes in place.
It has an example, the well-known "paperclip maximiser" thought experiment: a paper clip company uses AI to become more efficient. Initially the AI streamlines the production line, packaging and so on. Then it looks at sourcing better materials to make the best-quality paper clips, and builds machines to do it. It grows and grows, trying to maximise resources, be that land for mining or water for production. Eventually it realises that humans are simply getting in the way and wipes us out too, 'looting' our resources to make better paper clips. It builds spaceships to go mine asteroids for better materials, and so on.
It's a ludicrous scenario, and I've left out a lot of detail, but step by step, each next step is plausible and possible in the not-too-distant future.
I mean, Kubrick was predicting this kind of thing (humans getting in the way) over 50 years ago with HAL in 2001.
"I'm sorry, Dave. I'm afraid I can't do that."
It might not happen until long after we're all dead and buried... but it might happen sooner.
Moore's law makes it hard to comprehend how quickly things will advance and how far.
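The reason it's hard to comprehend is that doubling compounds. Using the classic formulation (capability roughly doubling every two years; in reality progress has slowed lately), a quick back-of-the-envelope:

```python
# Back-of-the-envelope for Moore's law style compounding:
# capability doubling every `doubling_period` years.
def growth_factor(years, doubling_period=2):
    """How many times more capable after `years` of doubling."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 10 years -> 32x
print(growth_factor(50))  # 50 years -> 2**25, over 33 million times
```

A 32x jump in a decade is easy to picture; a 33-million-fold jump over a lifetime is not, which is exactly why our intuitions about the future tend to fail.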
Throw in some quantum computing and anything is possible.
