Alumna Dr Chiara Picardi is a Research Associate in the Assuring Autonomy International Programme (AAIP). Alumni Voices spoke to Chiara about the importance of taking a multidisciplinary and inclusive approach to Artificial Intelligence so that it can be used safely in the world.
During my PhD in Electronics I developed a machine learning system able to classify different grades of cognitive impairment in patients with Parkinson’s disease. Once I obtained my PhD, I moved to Computer Science, where my work now focuses on the safety of autonomous systems involving robotics, machine learning and artificial intelligence. In particular, I work to ensure that when a system goes into production it is actually safe for people. We know that autonomous systems are very exciting and can be beneficial in many different domains, but we need to be sure that they are safe before they are used in the real world, especially in safety-critical domains like healthcare.
Any system able to make decisions autonomously using artificial intelligence needs to be safe. Ensuring that the system will not be harmful when its decision is not correct is crucial in safety-critical domains like healthcare, where a decision can impact the life of a patient. The automotive industry is another safety-critical domain, where a wrong decision by a self-driving car can have catastrophic consequences, as unfortunately happened in an Uber accident.
At the moment I’m involved in a multidisciplinary project designing a humanoid robot. The aim of the robot (called DAISY) is to help with the initial triage of patients arriving at the A&E department of a hospital. DAISY will produce a report based on questions (identifying the patient’s family history and symptoms) and vital-sign measurements such as blood pressure and body temperature. The report contains suggestions for treatment and further examinations and will be sent to a senior doctor. The robot could speed up the A&E triage process, reducing waiting times, which are a big problem for the NHS at the moment. In future we hope that DAISY can be used in York Hospital as part of a clinical trial.
“The fact is that in the real world, real problems require multiple perspectives.”
I collaborate externally with the NHS through an emergency consultant on the DAISY project. The emergency consultant is crucial to the project because he gives us the domain knowledge that we lack when designing the algorithm implemented in the robot. The DAISY project also involves an internal collaboration with the Law Department and an external collaboration with the University of Southampton. These external and internal collaborations are important to make sure that the systems we design will be accepted by the different stakeholders and that we consider the appropriate laws and standards during the design process. The fact is that in the real world, real problems require multiple perspectives. Computer scientists can’t design a robot to work in a hospital without any clinical knowledge or knowledge of the laws and standards to be followed. The University of York helped me to find external and internal collaborators with different views and expertise, being open to working not only with academia but also with industrial partners. I think it is absolutely necessary for a project to involve people with the different kinds of expertise it requires, because no one knows everything.

The AAIP gives you so many opportunities to collaborate with different people from a wider community. Through the fellowship scheme, other academics and industrial collaborators can come and work with us on a project for a defined period, giving us the opportunity to learn what they’re doing and to produce joint publications. Demonstrator projects funded by the AAIP have also given me the opportunity to work with industry on exciting projects, like a wildfire detection system implemented via satellite; we can collaborate with a really wide community. I think the best thing about the programme is the continual opportunity to broaden my knowledge and find collaborators.
Inclusivity brings different perspectives. In a project with disabled young people we realised how important and valuable their opinions are in designing the technology of the future. They had a different perspective, pointing out factors which we had not considered before. I personally identify as disabled, so I can understand the importance of different points of view. I think the more diverse the scientists involved, the better the design. If it were down to me, I would include the wider community more in the design of systems employed in society, but this is not necessarily easy.
I think that one of the most important ethical questions is how to make sure that the decisions made by an autonomous system are fair. This means that they should not be based on characteristics like skin colour or gender. We know that autonomous systems learn from data, and sometimes they can find patterns in the data that lead them to learn something unethical and prejudiced. Data should include enough samples to represent the whole of society, including minorities; otherwise the machines could be prejudiced or not work appropriately. In my job, assuring the quality and completeness of data is really important.
“We know that autonomous systems learn from data, and sometimes they can find patterns in the data that lead them to learn something unethical and prejudiced.”
In safety-critical applications, it’s really important that the decisions made by an autonomous system are not harmful. What we need to do is be sure that an autonomous system is safe enough to go into production, because we don’t want bad consequences. ‘Safe enough’ means that, where possible, we need to be sure that the decision made is correct, and where that is not possible, that mechanisms are in place to avoid bad outcomes. For example, for a self-driving car we need to be certain that a pedestrian crossing the road will always be spotted, leading to the decision to stop. To help ensure that a pedestrian is never missed, the other sensors present in the car can be used. In healthcare, a diagnosis made by a system detecting skin cancer should be checked by a clinician to confirm that the autonomous system is correct.
When I started here I really liked the campus and the community of the University; for that reason I didn’t want to leave after my PhD, so I decided to continue working here. The best thing for me as a student was campus life and the community of the University of York. Now, as a member of staff, I like the possibility of collaborating with a variety of people from different industries and walks of life.
If you would like to find out more about the Assuring Autonomy International Programme, please follow this link.