A common rule of thumb holds that a computer would need to process at least a trillion calculations per second to come close to replicating the human brain. Increasingly, though, scientists are realizing that this is not enough: things we do subconsciously are incredibly difficult for machines, which do not yet function on a subconscious level.
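The trillion-calculations figure can be put in rough context with a back-of-envelope comparison. The biological numbers below (neuron count, synapses per neuron, firing rate) are commonly cited order-of-magnitude assumptions, not figures from the article:

```python
# Back-of-envelope sketch: a "one trillion calculations per second"
# machine vs. a crude estimate of the brain's synaptic activity.
# All biological figures are order-of-magnitude assumptions.

NEURONS = 8.6e10            # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1e3   # ~1,000 synapses each (low-end assumption)
AVG_FIRING_HZ = 10          # ~10 signals per second per neuron (assumed)

brain_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ
machine_ops_per_sec = 1e12  # the "trillion calculations" rule of thumb

ratio = brain_events_per_sec / machine_ops_per_sec
print(f"Estimated synaptic events/sec: {brain_events_per_sec:.1e}")
print(f"Brain-to-machine ratio: {ratio:.0f}x")
```

Even with these deliberately conservative assumptions, the brain's raw event rate comes out hundreds of times higher than the rule-of-thumb machine, which suggests why raw speed alone falls short.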
Tyson Durst of the University of Alberta proposes a new field of study, "artificial neuroscience," that would explore not only the possibilities and limits of artificial intelligence but also the ways in which sentient machines would interact with humans. Asimov's laws of robotics aside, would AI machines subscribe to the same ethics and morals as humans? Would they perceive the world differently, and if so, how? Would sentient machines ultimately view themselves as oppressed and campaign for their freedom?
Such questions have traditionally belonged to the realm of science fiction. But as thinkers like Eliezer Yudkowsky of the Singularity Institute argue, we need to start considering them now, because by the time we develop AI and face very real ethical crises, it will be too late.
Sources: Asimovlaws.com, Univ. of Alberta