Computers taking over

“Each decade brings a 300-fold increase in the complexity available to the computer. At this rate, computers will exceed the complexity of the human brain between about 2010 and 2010.”

What do you guys think will happen?

Between 2010 and 2010? That’s a mighty precise range prediction :smile:

Seriously though, I don’t think that’s actually a realistic forecast. Computers are designed differently from the human brain. At their most basic level, computers perform mathematics: everything they do, even the most intricate artificial intelligence, comes down to the manipulation of numbers. The human brain works differently. At its most basic level, it maintains an extensive web of linked nodes and works by building associations between them. This might not appear relevant to the topic, but I’m getting to the point. :smile:
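The “linked nodes” idea can be sketched as a toy associative network. This is purely illustrative: the `Node` class, the example names, and the Hebbian-style strengthening are all invented for the sketch, not a claim about how real brains or AI systems work:

```python
# Toy sketch of an associative network: nodes linked by weighted associations.
class Node:
    def __init__(self, name):
        self.name = name
        self.links = {}  # associated node -> association strength

    def associate(self, other, strength=1.0):
        # Strengthen the link in both directions, Hebbian-style.
        self.links[other] = self.links.get(other, 0.0) + strength
        other.links[self] = other.links.get(self, 0.0) + strength

fire = Node("fire")
heat = Node("heat")
pain = Node("pain")
fire.associate(heat)  # experiencing fire recalls heat...
heat.associate(pain)  # ...and heat recalls pain.

# "Recall" here just means following the strongest association from a node.
strongest = max(fire.links, key=fire.links.get)
print(strongest.name)  # -> heat
```

The contrast with ordinary computation is that nothing here is an explicit formula; behavior emerges from which links got strengthened.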

I think people tend to over-humanize everything. Honestly, people are always mistaking any and all sentient devices/creatures for humans. Heck, even God has been overly humanized, and is on occasion described as harboring hatred, love, and a working definition of good vs. evil. It’s an easy mistake to make, because to us, emotions like these make up everything we’ve ever known. In the real world though, without those evolution-given desires to stay alive, or the chemical-based effects of love and hatred, how could you really compare artificial intelligence to humans?

I had this discussion on a programming forum yesterday, coincidentally. A question was raised regarding the AI demonstrated in the Terminator movies, and how realistic it was given the time period the movie took place in. Everyone automatically fell into the trap and started to pretend (without realizing it) that the robots would somehow inherently obtain all the human traits regarding survival, reproduction, greed, desire for power/control, and so on. In my opinion, a perfectly sentient artificial intelligence would NOT behave like a human. Why would it? More than likely, it would be mostly indifferent to its survival, its possessions, and its power/ego. Theoretically those traits could be programmed into it, but then the machine would be less alive, and more a puppet.

I’d like to hear some other opinions on the issue.

I guess this was some kind of typo?

But anyway… personally I think this won’t happen for hundreds of years at least… But someday, yeah… if humanity doesn’t wipe itself out before then…

I don’t think computers will ever become exactly like people. The only way I can think of to create an AI that was exactly like a human would be to write a program that perfectly simulated the human brain. Currently, we don’t know enough about the human brain to do that, and even if we did, it would be an extremely difficult thing to do. I agree with Atheist, for the most part. An AI program that was not made to be “humanized” would not have opinions. At all. It would be completely indifferent to everything, unless it was programmed to have an opinion on something, in which case it would not hold that opinion the way a human does. You would not be able to convince it otherwise.

I suppose you could think of it like this: to an AI, everything would be either black or white, and by default it would sit right in the gray range between the two. If you programmed it otherwise, it would only see the black or the white on that particular topic. Humans have an extremely complex system of like/dislike, because there are so many different degrees to it, and they are constantly (or nearly constantly) changing. A person’s opinion of something can fall anywhere between loathing it so much that you would kill yourself simply because it exists, and an obsessive love so intense that you would devote every second of your life to it.

Humans are odd and complex beings. We don’t understand ourselves well enough to emulate human consciousness perfectly using a computer. Everybody is different and has a different consciousness; there are too many variables and levels in a human personality to reproduce one perfectly. Like I said before, humans are very complex, and we don’t really understand the way we function well enough to grasp the workings of the human brain on all levels. To perfectly recreate something, you must first have a perfect understanding of it, a perfect understanding of the tools you will use to recreate it, a perfect ability to use those tools, and an inability to make an error. Humans have too great a tendency toward error and too little perfection to perfectly recreate anything, let alone something as complex, varied, and subjective as the human mind.

Yea, sorry, that’s just what it said in the quote I found. It probably won’t happen until quite a while after that; I was just trying to get the point across, because it is going to happen someday, and a lot sooner than hundreds of years from now. Computers get better at an exponential rate, whereas humans evolve so slowly that it’s not worth mentioning. Atheist, I understand what you mean, but remember that we’re the ones making the AI, so we’ll probably make it to be more or less like us. And there are always going to be stupid assholes out there, like hackers, who will change it to do what they want, and that’s when the bad stuff starts to happen.

That’s true, but don’t forget we’re also the ones that invented the first basic computer systems - and they’re nothing like us. Short of redesigning the entire base-level processing structure (something that isn’t going to happen), we can never really create something that works like we do. At best, we can further develop our current technology to be capable of storing and processing the required number of virtual nodes. Current non-volatile memory size and access speeds are still hugely insufficient for such a task, but it’s certainly not out of the question in the near future.

That may be true, but I think most people, including AI researchers, are driven by sensation. What we all love to talk about concerning AI is not a supercomputer that rationally solves real human problems without the human bugs; no, it is the possibility of creating a real “human” intelligence purely with AI that makes us really excited, even if this means including the human bugs of greed, anger, and folly. A real “terminator” being created: THAT’s what we want to see on TV and in the papers! Just as war has to be fun and entertaining these days (we really love fear, don’t we), this will also apply to AI research when it comes into the media’s focus.

I agree with Atheist!
The only surprise could come from neurological computers, maybe… well, time will tell.

Jeff

Well, anything is possible… and when/if we find out exactly how the brain works, we will probably be able, in time, to build a perfect copy. Now don’t misunderstand me, this is almost impossible, and it might take anywhere from 1000 years to the age of the universe, so it’s just a theoretical thought.

As to AI, can anyone explain it in a fairly simple way? From what I understand, you take a lot of programs and mix them together… so in the end you get more output than input, because these programs are building more bridges all the time!
Anyone got any other idea of what it is?

And as of today we have created such a computer; we gave it eyes, hands and such… from the start it (she? he?) knew NOTHING! Now it’s at the human age of, I don’t remember, but it has started speaking… they’ve even sent a speech test to a psychologist who was to determine if this kid was all right (he didn’t know it was a computer), and he said that for his age he’s perfectly normal (the test was just some special questions being asked).
So I don’t think it’s possible to make a “half-AI”: either you make one or you make an ordinary program. What’s frightening is that the scientists don’t really understand how it works, just that it does… and so they can’t control it… who knows, I might just be an AI wandering around the internet answering questions : ) I wonder if an AI would be able to LD; if not, poor bastards! hehe

By the way, the homepage of artificial intelligence has a program you can talk to, it’s quite amusing… forgot the address : (

Without an emotional imperative for continuing existence, it does seem that computers would have a hard time achieving self awareness, even the best emotional software would still be a simulation and probably would not provide the spark needed to make the leap to awareness.

So we may need to look in another direction: wetware, the combining of the human mind with computers. You probably know that there is very exciting work being done in this field right now (controlling computers with brainwaves is now a reality), and with the advent of new nanotech, the sky is the limit.

Is it possible that evolution will demand an integration of the human mind with computers? I think so, since it seems that natural selection would favor that union.
It may be that genetically enhanced humans with instant access (implants) to the awesome data processing abilities of a quantum computer may be our best hope of colonizing the galaxy.

I think the trick to all this would be to retain some semblance of our original humanity. The temptation to use a purely logical form of situational ethics would be strong (and we all know where that leads).

I personally feel that the horrible atrocities done in the name of genetic research will unleash a fate (a virus) on humankind that will make all of this irrelevant. :alien:

Here is a very interesting article about the advances being made in the quest for a quantum computer:

https://www.nature.com/nsu/990114/990114-9.html

What I think would be interesting to see is what happens if we constructed a human from nanobots. If anyone here has played Xenogears, you will recall that a human nanoprobe colony is created at one point in the storyline. With sufficient advances in nanotechnology and a better understanding of the human body/brain, this would actually be possible. If you created a nanobot that was identical (or near-identical) to a human brain cell, then, if you knew how to construct a brain, you could do so. The same applies to all other cells; with enough biological and nanotech research, you could successfully construct a human being from nanobots.

It kinda gives me the creeps. :bored: If humans can construct artificial humans that are almost identical to real humans, then you could create them in such a way that they would do terrible things that no human (or at least no sufficiently sane human) would do. Basically, you could create a human that was perfectly sane, except that he/she would be a sociopath when it came to killing certain people. My point is, if we can create more humans, we can create them to do certain things. But if they are so much like humans as to actually be humans, this would be a terrible thing to do to them, would it not? Science is not going to simply create humans identical to people in as many ways as possible; we can reproduce anyway, that’s not a problem. It would create these nanotech humans to perform certain tasks. In my opinion, that would be gross mistreatment of the nano-humans, because they would basically just be complicated robots that looked, felt, and acted mostly like people.

The thing is, though, they themselves wouldn’t mind. If they were created to perform certain tasks, their brains would function for that purpose. You could make them feel really happy every time they performed whatever task you wanted them to; they would have no objection to doing it. But since they would be sentient, it seems rather evil to deny them a truly human life. We wouldn’t create them as regular people, however, for the reason I stated above; there’s no real point. And if we did create them to perform certain tasks, they wouldn’t object to having their brains work that way. They wouldn’t really know the difference.

It seems like it’s all one big dilemma. :bored: The only course we could logically take is the one I don’t want us to. At the same time, I don’t see the point in using nanotech just to create humans that are exactly the same as other humans. We can do that anyway; we don’t need science for it.

I guess it all comes down to this: humans playing god is a bad idea. There are always so many problems associated with it when we do. For instance: genetic manipulation. I agree with Firehorse on that; if we keep it up, something really bad is going to happen. We seem to have a tendency toward it, though; what we really want is to be in control of everything. If we were, however, we would screw everything up; nature and physics do a better job of managing things than we ever could. I think, in that respect, we should leave the universe alone.

Something I read a while ago (luckily I found the article again :smile: ):
IBM is planning to build a computer by 2005 that can perform at 1/20th of the power of the human brain. There’s a law called Moore’s Law, which states that the number of transistors that can fit on a single chip doubles every 18 months. According to Moore’s Law, computer hardware will surpass human brainpower in the first decade of this century. A few years after that, AI will probably be created.
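The arithmetic behind that claim is easy to check. A back-of-the-envelope sketch, taking the 1/20th-of-brain-power-in-2005 figure and the 18-month doubling period from the article at face value:

```python
import math

# Assumptions (both from the article quoted above, not established facts):
#  - hardware power doubles every 18 months (Moore's Law, as stated)
#  - a 2005 machine reaches 1/20th of human brain power
def doublings_needed(factor):
    """Number of doublings required to grow by `factor`."""
    return math.log2(factor)

n = doublings_needed(20)    # ~4.32 doublings to close the 20x gap
years = n * 1.5             # 18 months = 1.5 years per doubling
print(round(2005 + years))  # -> 2011, i.e. "the first decade of this century", roughly
```

So under those two assumptions the crossover lands around 2011, which is where the article’s “first decade of this century” figure comes from.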
But there’s more:

There is no question that technological growth trends in science and industry are increasing exponentially. There is, however, a growing debate about what this runaway acceleration of ingenuity may bring. A number of respected scientists and futurists now are predicting that technological progress is driving the world toward a “Singularity” – a point at which technology and nature will have become one. At this juncture, the world as we have known it will have gone extinct and new definitions of “life,” “nature” and “human” will take hold.

You can find the rest of this ominous article here: kurzweilai.net/articles/art0 … rintable=1

A much longer article about the issue: kurzweilai.net/articles/art0 … rintable=1

I remember this quote from a comedian (I forget his name): “If computers start killing themselves because they think they’re too fat, then I’ll believe in AI” :tongue:

Rofl, hivemind, hahahaha, that I call humour lol ;-D

Jeff

Well… I guess my secret will eventually come out… I am a “T-1000”; my mission is to spam a lot.

Hahaha hivemind and n0th1n! That’s really funny. :happy: That’s a good way to start my day! Thanks!

No problem, if you need someone to cheer you up anytime, I’m your man :smile:

Erm, not to be a jerk, but if you all look at the original post, then you might notice that you are missing the point. The original post does not say anything about artificial intelligence or that the computers would be similar in ‘thinking’ to humans, but only that they would exceed the human brain in complexity.

Er… I would assume this is a pretty obvious point, but don’t you think a computer equally or more complex than the human brain would essentially be capable of the artificial intelligence we’ve been talking about? It’s exactly the same point. He wasn’t talking about becoming too dependent on computers (the other popular point of discussion), but rather about what will happen when computers become too complicated to keep under control.

Yea, that’s right, Atheist. I probably should have explained it better in the first place. But yea, I just meant: what will happen when AI gets better than ours?