
Stuart Russell

By: Nate McGowan

Stuart Russell is an experienced and acclaimed figure in the artificial intelligence community. He holds a degree in physics from the University of Oxford and a Ph.D. in computer science from Stanford University. Russell is currently a Professor of Computer Science, specializing in artificial intelligence, at the University of California, Berkeley. Along with Peter Norvig, the Director of Research at Google, Inc., Russell wrote the world-renowned computer science textbook Artificial Intelligence: A Modern Approach.[1]


Russell’s current research focuses on algorithmic bias and the ethics of decisions made by artificial intelligence. On the topic of motorist deaths in partially self-driving cars, Russell stated that building AI systems that help humans without unintended consequences is difficult because human society is incapable of defining what it wants. Programming machines to maximize the happiness of the majority of people is therefore problematic in itself.[2] Furthermore, ethics and morals are defined relative to different nations, communities, and individuals; a company cannot program a system to comply with the morals of everybody.


In his article “The ethics of AI: how to stop your robot cooking your cat,” John C. Havens of The Guardian wrote about Russell’s 2015 speech at the Centre for the Study of Existential Risk at the University of Cambridge. Russell introduced his method for teaching ethics to AI, called inverse reinforcement learning (IRL): robots would observe human behavior and build an understanding of human ethics and morals by learning which behaviors stem from those values. He states, “For instance, if we run out of meat when cooking we know not to cook our pet cat, but this is a value we would need to program in a kitchen robot's algorithm.”[3] Russell believes that companies should design AI not to act on the ethics of the programmers but to learn from the behaviors of the users. This becomes problematic, however, when people with poor ethics demonstrate their behaviors to the AI and the kitchen robot ends up cooking the cat anyway.
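The core idea of IRL described above can be illustrated with a deliberately tiny sketch. This is not Russell's actual formulation, just a toy illustration under simplifying assumptions: the hidden human reward is linear in two made-up "kitchen" features, and we estimate it by comparing the feature counts of observed human behavior against behavior chosen at random (the direction used in max-margin style IRL). All names and numbers here are hypothetical.

```python
# Toy inverse reinforcement learning (IRL) sketch (illustrative only).
# The agent never sees the human's reward; it infers which features the
# demonstrator values by comparing demonstrated behavior to random behavior.
import numpy as np

# Hypothetical kitchen actions described by two features: [food, pet].
features = {
    "cook_groceries": np.array([1.0, 0.0]),
    "cook_pet":       np.array([0.0, 1.0]),
}

# Observed human demonstrations: people always cook groceries, never the pet.
demonstrations = ["cook_groceries"] * 50

# Expert feature expectations: average feature vector of demonstrated actions.
expert_fe = np.mean([features[a] for a in demonstrations], axis=0)

# Baseline: feature expectations of an agent choosing uniformly at random.
random_fe = np.mean(list(features.values()), axis=0)

# Crude reward estimate: weight each feature by how much more often the
# expert exhibits it than chance would.
reward_weights = expert_fe - random_fe

# The inferred reward now prefers cooking groceries over cooking the pet,
# even though "do not cook the cat" was never explicitly programmed.
best_action = max(features, key=lambda a: float(features[a] @ reward_weights))
print(best_action)  # cook_groceries
```

The point of the sketch is the one Russell makes: the prohibition on cooking the cat is never written down anywhere; it falls out of watching what humans actually do.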


Russell and many others have recognized this issue, especially on the topic of autonomous weapons. In the short film “Slaughterbots,” made by Russell and the Future of Life Institute, a dreaded possible reality is depicted in which fully autonomous drones are militarized and bought by the wrong people for acts of terrorism. Russell states that the video “shows the results of integrating and militarizing technologies that we already have.” He goes on, choosing his words carefully: “allowing machines to choose to kill humans will be devastating to our security and freedom.”[4] Russell stands by the advancement of AI and its integration into humans’ daily lives, but only to a certain extent. Giving AI too much power and allowing it to choose to carry out acts of violence is not using AI to our benefit; it will only put us in danger.


[Photo: Stuart Russell. Source: berkeley.edu]

References

[1] Stuart Russell's résumé, Professor of Computer Science and Engineering, University of California, Berkeley.


[2] Waters, Richard. "Frankenstein Fears Hang Over AI." FT.com (2017). ProQuest. Web. 6 Feb. 2019.


[3] "The ethics of AI: how to stop your robot cooking your cat; By tracking how people live their values, businesses can and must instil ethical frameworks into the technologies of the future." Guardian [London, England], 23 June 2015. Academic OneFile, http://link.galegroup.com/apps/doc/A419095395/AONE?u=tel_a_belmont&sid=AONE&xid=abb5bb74. Accessed 7 Feb. 2019.


[4] “Slaughterbots.” The Future of Life Institute, 13 November 2017. https://www.youtube.com/watch?v=HipTO_7mUOw
