Humans + robots = Privacy disaster?
Posted: June 10, 2025
Consider a world in which robots serve as companions to the elderly, provide healthcare services, help travelers navigate through airports, work as personal assistants, care for children, cook meals, and act as interaction avatars. Now consider that – though still somewhat of a novelty – robots are, indeed, beginning to provide these and other services. The future is now.
To perform these services, a robot must have the ability to interact with humans and other robots. It must be able to quickly react to unanticipated events, “read” human emotions and reactions, and pursue goals.
Sometimes called a “social robot,” this type of Artificial Intelligence (AI)-enabled machine typically uses a combination of cameras, sensors, and microphones that enables it to react to the visual and auditory environment in human-like ways. This means that privacy, along with other ethical considerations, is a threshold issue to discuss and solve before the novelty of social robots transforms into a common occurrence.
Moreover, people do seem to care about privacy related to social robots. One interesting study found that privacy had a direct impact on whether a participant intended to use a social robot for an intended purpose: “We found that the privacy manipulation had a pronounced effect on robot use intentions. Respondents in the more invasive scenario were significantly less likely to be willing to use such a robot, controlling a range of predictors.”
Regardless of the impact of privacy on consumer acceptance of social robots in day-to-day life, privacy issues are guaranteed to affect compliance with current data protection requirements and to raise the possibility of new regulations specifically targeted at robotics. Following are a few privacy concerns that bear discussion as social robots become an everyday part of our society.
Data collection & transparency
One of the privacy concerns about social robots is that, by necessity, they collect a massive amount of data about the world around them, including about the people around them. Unlike in the online world, where a comprehensive online privacy notice can serve to communicate the data that an organization collects and how it uses and shares that information, there is no similar notice mechanism for that social robot helping a family find an airport terminal gate. As one source asks, “When robots are everywhere, what happens to the data they collect?”
There are workable solutions to the need for transparency. Venues that deploy social robots, such as airports or medical centers, can post signage describing robot use and data collection/use practices, similar to CCTV disclosures. Social robots themselves could carry notices or display QR codes that link to a notice. Transparency is a solvable problem; the challenge will be establishing a standard for the mechanisms and contents of notice.
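As a sketch of what a QR-linked notice might point to, a robot could expose a small machine-readable payload alongside a per-robot URL. The field names, base URL, and payload structure below are illustrative assumptions, not an established standard:

```python
import json
from urllib.parse import urlencode

# Hypothetical machine-readable privacy notice a social robot could expose.
# All field names and values here are illustrative, not a standard schema.
NOTICE = {
    "controller": "Example Airport Robotics Ltd.",
    "data_collected": ["video", "audio", "location"],
    "purposes": ["navigation assistance", "safety"],
    "retention_days": 30,
    "contact": "privacy@example.com",
}

def notice_url(robot_id: str, base: str = "https://example.com/robot-notice") -> str:
    """Build the URL a robot's QR code would encode, tagged with its ID."""
    return f"{base}?{urlencode({'robot': robot_id})}"

if __name__ == "__main__":
    # The QR code on the robot would encode this URL; the landing page
    # could render NOTICE for humans and serve it as JSON for machines.
    print(notice_url("gate-helper-07"))
    print(json.dumps(NOTICE, indent=2))
```

Tagging the URL with a robot identifier lets one notice page serve many robots while still telling a data subject which specific machine collected their data.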
Consent to collect and delegated consent?
The question of consent regarding social robots’ collection and use of personal data may be less straightforward to answer than that of transparency.
First, depending on the application, a social robot may need to collect and analyze video images, voice recordings, motion recordings, facial geometry, and other pieces of personal data for every person in its immediate vicinity. It is unlikely that all those individuals will have had the opportunity to agree to the robot’s data collection and use.
In work settings such as hospitals, where employees must interact with or be in the vicinity of a social robot, and so have their personal data collected and used by that robot, there may be no viable option for an employee who opts out (or declines to opt in) but wishes to keep their job – and that assumes the employer/employee power imbalance even allows for freely given consent.
More novel, as social robots become increasingly sophisticated and operate as agents on behalf of human beings, is the pressing question of whether a robot can provide consent for a human being.
For example, take the case of a social robot that operates as an administrative assistant. The social robot may respond to communications as part of that duty, including agreeing to or denying requests. However, questions remain as to whether a machine’s consent is valid, and whether that consent is binding on the individual who authorized the social robot to give it.
As another case in point, given that social robots are beginning to fill caregiving and companion roles for individuals with dementia, it may be useful for those caregiving social robots to have the power to agree to or refuse actions on the individual’s behalf (or on behalf of the human with legal authority to make decisions for the individual).
The individual in question may not be able to give informed consent themselves, but a typical caregiver frequently faces the need to give consent on their patient’s behalf related to proceeding with medical procedures, sharing information with other caregivers, granting access to the individual’s home, and many other activities. A social robot in a similar role would face the need to do the same.
Automated decisions
In a way, every decision that a social robot makes is an automated decision because no human intervention is typically involved. Given that many jurisdictions apply special protections and requirements to automated decision making, it will be interesting to see how regulators and consumers alike interpret these special protections as applied to social robots.
For example, the social robot leading a family through a congested airport makes hundreds, if not thousands, of small decisions without contacting a human being for validation or input. Conducting a Data Protection Impact Assessment (DPIA) for all these decisions may not be practical. Similarly, a data subject interacting with a social robot may not have visibility into every one of those hundreds of decisions, or an opportunity or natural mechanism to object to each decision the robot makes.
Security
As one source mentions, “Large-scale data breaches pose a significant risk, particularly when robots operate in sensitive areas such as hospitals, banks, or government facilities.” Even without exposure to these sensitive areas, if we think about a social robot that gathers information about every person with whom it comes into contact – location, interactions, driving behavior, social behavior, shopping behavior – there could be real harm to data subjects if that information got into the wrong hands. Moreover, social robots may themselves pose security risks if they enter otherwise secure facilities with the intent to do harm.
Fortunately, though physical and information security are both constantly evolving fields, strong security protocols (encryption, firewalls, access controls, etc.) apply as well to social robots as to other types of applications – provided robotics companies treat security as a priority.
Final thoughts
As unbelievable as it may seem, social robots are beginning to fill roles that place them in our everyday lives – collecting and using data all the while. The benefits social robots may provide could be substantial, including protecting and caring for vulnerable groups of people.
At the same time, there are interesting privacy questions that we will need to address in parallel with the advancement of social robot use to balance those benefits with the risk to personal data. How to provide transparency about social robot data collection and handling, how to obtain appropriate consent for that collection, and how to provide strong security protections are all critical points to discuss. Additionally, deciding whether and how to apply automated decision-making protections to social robot activities, and whether and how social robots can (or cannot) provide consent on a human’s behalf, will fuel privacy professionals’ debates for some time to come.
A privacy professional’s AI checklist
Though AI technology and legislation are rapidly evolving, there is enough of a trend for savvy businesses to get ahead of the AI train. To help your organization make privacy-sensitive, future-proof AI decisions, use our AI top-10 checklist to support:
- Identifying data goals, strategy, and tactics
- Determining legal basis
- Solving transborder data flow concerns
- Considering data sets