Ageing populations have prompted engineers to design robots that can assist with a variety of everyday needs. These robots also have the potential to help patients suffering from various disabilities, or people with reduced mobility following an accident or stroke. However, the introduction of such devices, and the resulting human–robot interactions, have produced mixed reactions within societies. In this report, we sketch three important issues regarding the use and design of service robots.
IDEAS AND DISCUSSION
The first concern is the acceptability of building a care robot in a humanoid style. We believe that there is a trade-off in this regard. On the one hand, a humanoid robot facilitates trust and intuitive interaction; on the other, humans tend to treat a humanoid robot like a living being and may become too attached. Thus, humanoid robots may in the long run harm human relationships.
The second question to be answered is whether these robots should replace the human assistants currently overseeing patient care. The advantage of robots is that they can deliver repetitive care over longer periods and more reliably. Nevertheless, human contact is a fundamental aspect of care, and patients are more responsive to humans in certain situations. Therefore, rather than replacing human caregivers, robots should only augment their capabilities.
The final important aspect to discuss is whether such robots invade the privacy and autonomy of the person they are supposed to assist. Although a robot is an object, constant monitoring and intrusion into private spaces might make patients uncomfortable. Thus, a robot should be personalized over time, using any available inputs, so that it can respond better to specific patient needs, make decisions autonomously, and ask permission for certain actions. In addition, if doubts arise as to the proper course of action, humans should have the final decision.
Care robots that can offer various services to elderly people or people with reduced mobility have great potential for use in hospitals, nursing homes, or private apartments. They should nevertheless always aim to augment human care. Possible dehumanizing effects on the treatment process should be mitigated by programming the robots around specific patient needs.
Group members: Ekin Basalp, Anna-Maria Georgarakis, Eduard Villaronga, Jan Burri, Fabienne Forster, Vanessa Rampton
The growing interest in developing care robots especially designed to support humans in their daily tasks is fed mainly by the increasing proportion of elderly people in our society. Equipped with a wide range of sensors, the most recent technologies promise robots programmed with a large array of skills, ranging from assistance with handling tasks to calling emergency services in case of an accident. At the forefront are humanoid robots designed to look and behave as humanly as possible.
Despite all the possible benefits of using robots to perform care-giving tasks, major ethical issues are beginning to emerge. Should care robots look like humans? Can their appearance and behaviour make them interactive enough to provide humans with companionship, at the cost of isolating them from other people? Several problems arise from this emotional attachment: for example, the robot may break down or lose the user-specific information on which the simulated relationship between human and machine depends. In addition, this robotic solution could drive public policies towards less humanistic approaches to elderly people in general. Is the elderly population prepared to face a loss of privacy and personal liberty? The loss of privacy and independence involved in living in an asylum-like environment might be less intrusive than a camera and a machine overseeing the person. Are robot users conscious of all the information collected about them? While more and more companies are working to release sophisticated systems capable of acting “courteous, friendly and affable as a gentleman”, many ethical questions remain unanswered. So far, the use of care robots is mostly limited to research labs, and it remains an open question how human behaviour may be affected by the continuous use of robotic systems, and how mature the technology ought to be before its application is allowed.
Autonomous or semi-autonomous vehicles are expected to fundamentally change transport as we know it by providing the opportunity to reorganize traffic and minimize unforeseeable human interactions. Today, owners of the latest Tesla cars receive a large variety of driver-assistance systems, such as forward-facing radar, to improve the safety of the car's passengers and the road users around it. However, the car is not limited to semi-autonomous driving: it is also equipped with a fully autonomous driving mode that the driver can activate manually. Although the autonomous system is still under development and lacks regulatory approval, the owner of the car has full access to this mode. Tesla states that it is not “yet” permissible to use this function without the driver's supervision, while at the same time, as the manufacturer, it gives every driver the option to use the autonomous driving mode by designing and delivering the car in a particular way. So is Tesla being negligent, and therefore jointly responsible, in the case of an accident that occurs during the use of the autonomous driving system? Does Tesla influence the driver's decision-making and nudge them towards the irresponsible use of that mode simply by providing it? And if so, in what way does Tesla influence the driver's decision-making process?
A car is an instrument that can not only harm its owner but also poses a threat to other vehicles and pedestrians in its surroundings. In several respects, it can be compared to a gun. If the owner of a gun lends it to a friend who is not capable of using it, or is not allowed to use it, and this friend commits a crime with it, the original owner can be accused of negligence: we would hold him partially responsible for giving away his gun. Since autonomous driving systems are not yet fully tested, and it is still unclear how they behave in extreme situations, only trained Tesla test drivers should be able and permitted to activate this mode. However, Tesla gives this possibility to the public by including these systems in all of its cars, rather than removing the hardware or software before the car is delivered, or implementing it only in test cars. So in the case of an accident caused by unintended behaviour of the system, both Tesla and the driver should be held responsible. In the former case, this is because the company advertises the use of these functions and gives ordinary people, who will very likely try them out, the opportunity to do so.
Nevertheless, Tesla can legitimize its decision by stating that its cars are sold to mature and autonomous people in possession of free will. The manufacturer therefore simply expects that owners will not use those functions in any way that deviates from the terms and conditions, even if they have access to a technology that allows them to do so. In this way, it can shift the entire responsibility onto the owner.