
Alan Winfield

By: Max Kelley

Alan Winfield is a professor of robot ethics at the University of the West of England (UWE), Bristol.[1]  Winfield received his PhD in Digital Communications from the University of Hull and shortly afterwards founded a company, APD Communications Ltd, which specialized in software development for safety-critical communication systems.[2]  In 1993, he co-founded the Intelligent Autonomous Systems Laboratory at UWE, which later became the Bristol Robotics Laboratory.  His research focuses mainly on exploring the limits of artificial intelligence, particularly in mobile robots.  Near the beginning of his career, Winfield concentrated mainly on the engineering side of robotics, but over time his focus has shifted increasingly toward ethics.[3]


Winfield is very outspoken, both on his own blog[4] and Twitter feed and in interviews with traditional media outlets.  One subject that comes up often in these interviews is the driverless car and the ethical decisions that will need to be made by the AI “driving” the vehicle.  One of Winfield’s main arguments is that we, as a society, need to agree on how these robots will be programmed to deal with such situations.  He says that we need to “own the responsibility” for any cases where the rules set out for the machine lead to consequences we did not expect or intend.  He also argues for more regulation: much of the work in this field is currently being done by private companies such as Google and Facebook, with no standardization or stringent oversight.  He likens a driverless car to the autopilot systems in commercial aircraft and says that any ‘autopilot’ installed in a road vehicle should have to go through just as much testing and prove itself just as reliable.

 

One of the main building blocks of ethical decision-making, Winfield says, is the ability to predict the future.  This is an ability that we, as humans, have, but robots do not.  When we see that something bad is about to happen, we can imagine various ways of stepping in and work out how best to intervene.  A robot cannot do that unless it is programmed to.  In essence, the robot has to simulate various courses of action, without actually carrying them out, until it finds the one that causes the least harm to the humans around it.[5]  This forms the basis for the ethical decisions the robot needs to make.
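To make that "simulate, then act" loop concrete, here is a minimal sketch of the idea.  Everything in it is invented for illustration: the candidate actions, the harm scores, and the function names are assumptions, not Winfield's actual consequence engine.  It only shows the shape of the approach he describes: evaluate each possible action in an internal model, then carry out the one predicted to cause the least harm.

```python
# Minimal sketch of the "simulate before acting" idea described above.
# The world model, candidate actions, and harm scores are hypothetical;
# a real system would run a physics/world simulation, not a lookup table.

from dataclasses import dataclass


@dataclass
class Outcome:
    action: str
    harm_to_humans: float  # predicted harm on an arbitrary 0-1 scale


def simulate(action: str) -> Outcome:
    # Hypothetical internal model: predict what happens if the robot
    # takes this action, without actually taking it.
    predicted_harm = {
        "do_nothing": 0.9,     # the human walks into the hazard
        "block_path": 0.2,     # the robot intercepts the human
        "warn_and_wait": 0.4,  # the robot signals but may be ignored
    }[action]
    return Outcome(action, predicted_harm)


def choose_least_harmful(actions: list[str]) -> str:
    # Evaluate every candidate action in simulation, then pick the one
    # whose predicted outcome harms the humans around the robot least.
    outcomes = [simulate(a) for a in actions]
    return min(outcomes, key=lambda o: o.harm_to_humans).action


if __name__ == "__main__":
    print(choose_least_harmful(["do_nothing", "block_path", "warn_and_wait"]))
    # -> block_path
```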

 

Clearly, Winfield has many ideas about how the fields of artificial intelligence and robotics should develop.  He is one of the most respected names in the business, and has every right to be.

 


[Image: Alan Winfield. Source: bbc.co.uk]

References

[1] “Robot Ethics in the 21st Century - with Alan Winfield and Raja Chatila.” The Royal Institution.  Accessed 02/08/19. https://youtu.be/z3VHbLeq0BU

 

[2] “Professor Alan FT Winfield: Biography.” Accessed 02/08/19. http://www.cems.uwe.ac.uk/~a-winfield/shortbiog.htm

 

[3] “HARDtalk Alan Winfield.” BBC News. Accessed 02/08/19. https://youtu.be/z5cW76iRDpo

 

[4] Winfield’s Blog: http://alanwinfield.blogspot.com/

 

[5] “Robot Ethics in the 21st Century - with Alan Winfield and Raja Chatila.” The Royal Institution.  Accessed 02/08/19. https://youtu.be/z3VHbLeq0BU
