IF YOUR ROBOT COMMITS MURDER, SHOULD YOU GO TO PRISON?



Roll back several decades. What do you see? Mankind had just tapped into the power of the transistor. We started combining multiple silicon transistors to perform basic binary operations.
We did not stop there, however. Mankind's best minds worked together to innovate further. Transistors kept shrinking – getting faster and more power-efficient. Transistor counts on a chip doubled roughly every couple of years, validating Moore's law again and again. Apart from exponentially faster computing systems, the current pace of technological progress also teases us with fully autonomous machines, or robots. These fully autonomous machines will not tire, will not rest and will be highly efficient. As they come to 'life', many robots are expected to replace humans in simple tasks. There is even speculation that they could take on much more complex roles – such as replacing our police force.
But what if autonomous robots end up at the other end of this spectrum? What if, instead of being law enforcement agents, robots end up committing crimes? What consequences follow? Should we hold a human individual responsible for a robot's actions? How reliable must a robot be before we can trust it?
The legal implications of this issue will be many. Beyond them lies a more pressing, personal question: how will we accept robots in critical places such as hospitals? We simply do not react the same way when a robot is performing critical surgery. Yes, robots are highly efficient, but efficiency alone isn't adequate in critical situations. An autonomous machine may make choices in the middle of surgery based on probabilities of success – but if the robot fails simply because the odds were against it, do you hold it responsible for negligence?
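To make that dilemma concrete, here is a minimal sketch, in Python, of how a purely probability-driven choice might look. The intervention names, the probabilities and the confidence threshold are all hypothetical, invented for illustration rather than taken from any real surgical system.

```python
# Hypothetical sketch: an autonomous surgical assistant picking between
# interventions purely on estimated probability of success.
# Every name and number here is illustrative, not from a real system.

CONFIDENCE_FLOOR = 0.80  # below this, defer to a human surgeon (assumed policy)

def choose_intervention(options):
    """Return the intervention with the highest estimated success probability,
    or None to signal that a human should take over."""
    best_name, best_probability = max(options.items(), key=lambda kv: kv[1])
    if best_probability < CONFIDENCE_FLOOR:
        return None  # even the best option is too uncertain
    return best_name

options = {
    "repair_vessel": 0.92,   # estimated probability of success
    "bypass_vessel": 0.85,
}

choice = choose_intervention(options)
print(choice or "escalate to human surgeon")  # prints: repair_vessel
```

If the 0.92 option fails, the sketch shows why 'negligence' is such an awkward fit: the machine did exactly the arithmetic it was told to do.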

In my opinion, we're pretty far off from achieving true consciousness in these mechanical beings. A robot is a machine at the end of the day – a machine that works on pre-fed instructions and rules. We can create robots that obey specific codes or laws so that these autonomous machines never break civil conduct – Asimov's Three Laws of Robotics being the classic example. You might argue that we do not need jurisdiction over robots, for they will never break this code of conduct. But as machines become increasingly complex and autonomous, they will gain the ability to interpret laws and rules as they like. An autonomous machine can think and evolve – that is where most of today's research on artificial intelligence is focused.
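A minimal sketch of that idea, assuming nothing beyond plain Python: every proposed action is checked against a hard-coded list of rules before it is executed, loosely in the spirit of Asimov's laws. The rules and the action fields are invented purely for illustration.

```python
# Hypothetical sketch of a rule-checked robot: every proposed action must pass
# a fixed list of constraints before it may run. The rules and action fields
# are invented for illustration, loosely echoing Asimov's laws.

RULES = [
    ("must not harm a human",        lambda a: not a.get("harms_human", False)),
    ("must obey its owner's orders", lambda a: a.get("ordered_by_owner", False)),
    ("must not destroy itself",      lambda a: not a.get("self_destructive", False)),
]

def is_permitted(action):
    """Return (allowed, reason); the action is vetoed by the first rule it breaks."""
    for description, check in RULES:
        if not check(action):
            return False, f"blocked: {description}"
    return True, "permitted"

print(is_permitted({"ordered_by_owner": True}))                       # (True, 'permitted')
print(is_permitted({"ordered_by_owner": True, "harms_human": True}))  # (False, 'blocked: must not harm a human')
```

The catch is that a flag like harms_human has to be computed by something; once the machine itself decides what counts as harm, the guarantee in this sketch quietly evaporates.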
Preparing for the worst then – how is society supposed to react to robotic crimes?

Who's Responsible for Robotic Crimes?

In many cases, legal theory suggests possible approaches to problems that will require further work to evaluate. It lets us frame certain classes of ethical and legal problems in well-defined terms. Once robots are a sizeable portion of human society, there will be an inevitable need to regulate their actions and the consequences that follow. For that, we're going to need a revamp of our existing laws.


Robots as Quasi-persons?
We could treat robots as quasi-persons. Globally, our current legal system makes sure that the entity ultimately responsible is always a human or a corporation. We are a long way from calling a robot a human, so this could be a workable solution. Quasi-personhood is a simple concept, and minor children are its prime example.
Minors do not enjoy the full privileges of legal personhood. They cannot sign contracts and cannot enter into various legal arrangements; they can do these things only through their parents or lawful guardians. The same reasoning could be applied to robots, treating them as quasi-agents. In that case, the individual who grants a robot permission to act on their behalf is legally responsible for all of its actions. If robots are commercially adopted on a global scale, owners become the entities answerable for potential robot crimes.
This kind of legal setup may not be suited for mass adoption, however, primarily because it shields manufacturers and organizations and instead puts the burden of robotic conduct on owners. That could, in turn, lower adoption of robots by the masses.
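To see how quasi-agency might be wired into software rather than only into statute, here is a small sketch in which every action a robot takes is logged against the legal person who authorized it, so liability always traces back to a human or a corporation. The class and field names are assumptions made for illustration.

```python
# Hypothetical sketch of the quasi-agent idea: a robot acts only on behalf of a
# registered legal person (its owner), and every action is recorded against
# that person. Names and fields are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LegalPerson:
    name: str  # a human or a corporation - the entity that answers in court

@dataclass
class Robot:
    serial: str
    owner: LegalPerson
    audit_log: list = field(default_factory=list)

    def act(self, description: str) -> dict:
        # The action is recorded against the owner, never against the robot itself.
        entry = {
            "robot": self.serial,
            "action": description,
            "responsible_party": self.owner.name,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        self.audit_log.append(entry)
        return entry

owner = LegalPerson(name="Jane Doe")
bot = Robot(serial="RX-7", owner=owner)
print(bot.act("administered medication"))  # responsibility traces back to Jane Doe
```

Whether owners would accept that arrangement is exactly the adoption problem described above.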

Regarding crime and punishment, it doesn't make sense to physically punish a robot. Even if a robot has a body, torture and punishment are baseless because robots have no emotions. Sure, punishment might render the robot unable to achieve its goal or task – but somehow, this method seems incomplete.
We cannot solve practical, ethical and meta-ethical problems by legal theory alone. There is still a long way to go, of course, and our laws will adapt as artificial intelligence itself evolves.

