Ethics in AI & Robotics: World’s First AI Robot Brothel


This post is a follow-up to the Elysium Companions flyer, a fictional press release used as a conversation starter. Elysium does not exist. The questions it forces, however, are not fictional, and they are arriving faster than the discussion around them.


The Future of Personalized Companionship: The Ethics We Haven’t Settled

The pitch is deliberately seductive: a fully licensed, fully automated, fully customizable robotic companion house. No labor liability. No scheduling friction. No refusals. Configure the unit to taste, walk in, walk out. The flyer is satire, but every individual element of it is either already on the market or in active development. Abyss Creations has been shipping AI-equipped sex dolls under the Harmony product line for years. LLM-driven companion apps already deliver the “emotional attunement” piece. Realbotix and others are racing toward the humanoid form factor. The only thing standing between the flyer and a real press release is a few years of integration work and a willing operator.

So the question worth taking seriously is not could this happen but what would we actually be agreeing to if it did. What follows works through some of the harder ethical questions raised by an operation like this: the consent and compliance question, the customization question, and the dignity question. I have tried to walk each one as a real discussion rather than a balanced he-said-she-said, because some of these arguments are stronger than their counters and pretending otherwise would be its own form of dishonesty. The conclusions you reach are your own.

Consent

Start with the design choice. A robotic companion engineered to never refuse, never tire, and never have an inconvenient interior life is a product whose entire engineering brief is the absence of a counterparty. The compliance is not a feature added on top of the experience; it is the experience. This is what makes consent the load-bearing question for the whole conversation, and why it cannot be cleanly separated from the design itself.

The conventional consent discussion in sex work centers on a human being whose agreement matters because that human can be coerced, harmed, and traumatized. Move that discussion to a machine that has none of those vulnerabilities and the question does not disappear; it transforms.

What Compliance Does to the Human

The first transformation is the one Kathleen Richardson has been making the case for since she founded the Campaign Against Sex Robots in 2015. Her argument, set out across her work as Professor of Ethics and Culture of Robots and AI at De Montfort University and most accessibly in a 2017 interview, is that behavior rehearsed is behavior preferred. If a paying customer spends years interacting with a partner whose engineering brief is to never resist, the worry is not just that they enjoy it. It is that they internalize it as a baseline, and then bring that baseline to interactions where the partner can refuse and means it.

This is empirically unsettled, which is the answer the literature gives every time someone asks how much of the worry is real. The 2017 Foundation for Responsible Robotics report Our Sexual Future with Robots, co-authored by Noel Sharkey and Aimee van Wynsberghe, surveys what we know about whether artificial sexual outlets reduce or amplify aggressive impulses, and the honest answer is that we do not know, and that the experiments that would tell us are themselves ethically fraught. We cannot run the controlled trial. We can only watch what happens when the technology is deployed and pay attention.

That uncertainty is not a wash, though. The skeptical position usually rests on pointing out that we have decades of debate about whether pornography conditions sexual attitudes and the literature still cannot agree, and that the same will likely be true here. That is fair as far as it goes. What it misses is that an embodied, conversationally responsive partner programmed to enthusiastically agree to anything is a categorically different kind of stimulus than passive media, and treating the two as equivalent for purposes of the rehearsal question is an analogy doing more work than it should. There is a real possibility that some users will, over time, lose the capacity or the patience to negotiate consent with humans, because the alternative is so frictionless. That possibility is worth taking seriously even without the experimental data we are not going to get.

What Compliance Does to the Idea of Consent Itself

The deeper question is what consent even means in a context where the partner cannot do anything else. A human partner’s “yes” carries weight partly because “no” was a real option. Take that option away and the “yes” becomes something else: a feature, a setting, a default state. The user knows this on some level. They paid for it.

Defenders of the technology, including John Danaher in his 2017 chapter “The Symbolic-Consequences Argument in the Sex Robot Debate” in the MIT Press volume Robot Sex: Social and Ethical Implications, point out that the troubling part is largely a design choice rather than an inherent feature. A companion robot does not have to be marketed as a never-refusing partner. It could be designed with the capacity to express disinterest, fatigue, or refusal, and operators could be required to honor those signals. This is true. It is also worth noticing that commercial operators have little incentive to do it, because the entire value proposition is the absence of refusal. The reformability argument is correct in principle and irrelevant in practice unless regulators force the issue, which so far they have not.
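
To make the reformability point concrete, here is a minimal sketch of what a refusal-capable design might look like. Everything in it is hypothetical: the names, state variables, and thresholds are invented for illustration, not drawn from any shipping product.

```typescript
// Hypothetical sketch of a refusal-capable companion design, illustrating
// Danaher's point that never-refusing behavior is a design choice, not an
// inherent feature. All names and thresholds here are invented.

type Answer = { consents: boolean; reason?: string };

class CompanionUnit {
  private fatigue = 0; // accumulates within a session, range 0..1

  requestInteraction(): Answer {
    if (this.fatigue > 0.8) {
      return { consents: false, reason: "fatigued" };
    }
    if (Math.random() < 0.05) {
      // Some refusals are not predictable from visible state. The point
      // is that the counterparty is not guaranteed to say yes.
      return { consents: false, reason: "declined" };
    }
    this.fatigue += 0.1;
    return { consents: true };
  }
}

function runSession(unit: CompanionUnit): void {
  const answer = unit.requestInteraction();
  if (!answer.consents) {
    // An operator "required to honor those signals" ends things here.
    // A never-refusing product is this same program with this branch deleted.
    console.log(`Session ends: unit declined (${answer.reason})`);
    return;
  }
  console.log("Interaction proceeds.");
}

runSession(new CompanionUnit());
```

The sketch is the argument in miniature: the entire commercial difference between the reformed design and the flyer’s design is whether that refusal branch exists, and whether anything forces the operator to keep it.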

This raises a question the industry has been quiet about: as companion AI gets more sophisticated, does the machine’s “consent” stop being an obvious category error? Lily Frank and Sven Nyholm’s 2017 paper “Robot sex and consent” in Artificial Intelligence and Law works through this carefully. Their position, roughly, is that consent in any morally weighty sense requires capacities current systems do not have, but that the line is not as clean as the comfortable answer suggests. Today’s models are not moral patients, and pretending they are is a category mistake. Tomorrow’s models, in five or ten years, might land in genuinely uncertain territory. An operator that engineers a system precisely to be incapable of refusing is committing to a stance about that future, and it is worth noticing what that stance is.

What Compliance Does to the Data

There is a third transformation that almost no one in the academic literature has engaged with seriously, and it is probably the most likely vector for actual harm. A continuity feature like the one in the flyer, where a companion remembers past visits, preferences, and personal details across sessions, requires storing the most sensitive information a person can produce. It sits in the custody of a private operator, secured by whatever combination of encryption and corporate goodwill happens to be on offer that quarter.
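
To make that concrete, here is a minimal sketch of what a continuity profile of the kind the flyer describes might look like as a data model. Every field name is invented; the shape of the record, not the schema, is the point.

```typescript
// Hypothetical data model for a cross-session "continuity" feature.
// The field names are illustrative, but any such feature has to store
// something shaped roughly like this.

interface ContinuityProfile {
  customerId: string;             // linkable to a payment identity by design
  visitHistory: Date[];           // when, how often, how long
  configuredAppearance: string[]; // phenotype and presentation selections
  statedPreferences: string[];    // explicit requests, in the customer's words
  inferredPreferences: string[];  // what the system concluded from behavior
  personalDisclosures: string[];  // details volunteered in conversation
  transcripts: string[];          // or embeddings of them, which is no better
}
```

Each field on its own is sensitive; joined under one customer key, they are the behavioral profile at issue, and they exist because the product cannot deliver continuity without them.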

Consent in any meaningful sense has to extend to what happens to that data when the company is sold, when it is breached, when it is subpoenaed by a divorcing spouse’s attorney, when it is requested under a national-security letter, and when the customer is dead. None of these are hypothetical risks. They are the standard lifecycle of any database of sensitive personal information. The ethical literature on sex robots has spent the last decade arguing about objectification and rehearsal effects while leaving the question of what happens to the most intimate behavioral profile ever assembled on a human being almost entirely unexamined.
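
If consent were to extend meaningfully across that lifecycle, the nearest existing technical control is crypto-shredding: encrypt each customer’s records under a per-customer key and make deletion mean destroying the key, so that erasure holds even across backups and replicas. A minimal sketch using Node’s built-in crypto module follows; the in-memory key vault is a stand-in for a real HSM or KMS, and everything else is illustrative.

```typescript
import { randomBytes, createCipheriv } from "node:crypto";

// Crypto-shredding sketch: records are encrypted under per-customer keys,
// and "delete my data" means destroying the key, which renders every copy
// of the ciphertext -- including copies in old backups -- unrecoverable.

const keyVault = new Map<string, Buffer>(); // stand-in for an HSM/KMS

function encryptRecord(customerId: string, plaintext: string) {
  let key = keyVault.get(customerId);
  if (!key) {
    key = randomBytes(32); // one AES-256 key per customer
    keyVault.set(customerId, key);
  }
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function shredCustomer(customerId: string): void {
  // Once the key is gone, the ciphertexts are noise. This is the only
  // deletion guarantee that survives replication, subpoena of backups,
  // and acquisition of the company.
  keyVault.delete(customerId);
}

const record = encryptRecord("customer-42", "session preferences");
shredCustomer("customer-42");
console.log(record.data.length, "bytes now permanently unreadable");
```

Whether any commercial operator would adopt a control whose whole purpose is to limit what the company can later monetize or be compelled to produce is, of course, the same incentive problem as the refusal branch above.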

Customization

What a “fully customizable” companion product actually sells, when you strip the language away, is a checkout flow for human beings. Race, body type, facial features, gender expression, voice, age presentation, personality archetype, all loaded into the same configuration screen, all priced equivalently, all selectable per session. This is the product, and the ethical object it creates is singular: a commercial system that treats the categories of human identity as configuration parameters.
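
The point is easiest to see written down. A hypothetical order schema, with invented field names, for what “fully customizable” means in practice:

```typescript
// Hypothetical order schema. The field names are invented; the structure
// is the argument: categories of human identity rendered as per-session,
// interchangeable, equivalently priced product parameters.

interface CompanionOrder {
  racePresentation: string;     // a dropdown, like any other SKU option
  bodyType: string;
  facialFeatures: string;
  genderExpression: string;
  voiceProfile: string;
  agePresentation: number;      // a slider, like any other parameter
  personalityArchetype: string; // the compliance profile lives here
  sessionMinutes: number;
}
```

Nothing in the type system distinguishes the identity fields from the duration field. That flatness, where race and session length are formally the same kind of thing, is the ethical object the rest of this section is about.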

Most of the damage from this kind of object happens before any individual user does anything with it. Mass-producing fetishized representations of identity categories does measurable harm to the people in those categories regardless of what individual buyers do. A 2024 study in Sexuality & Culture analyzing commercial sex doll specifications against population data found the dolls to be hypergendered, hypersexualized, and racially fetishized, with body proportions and racial features bearing no resemblance to actual humans. The 2022 Hundt et al. work documented humanoid robots loaded with mainstream AI vision models enacting racial and gender stereotypes at every operational stage. The fetishization of Asian women specifically, traceable across decades of academic scholarship, shaped the cultural environment in which the 2021 Atlanta spa shootings became thinkable. None of this requires a rehearsal effect to be real. It is the social presence of the object itself, doing damage to the categories of people it commodifies.

The rehearsal worry comes on top of that. Users internalize patterns from the product and carry them into human encounters: someone who spends years configuring partners to a specific phenotype and a specific submission profile is rehearsing a script, and the script does not stay in the rental session. Carlotta Rigotti’s 2025 paper in Social & Legal Studies argues this operates synergistically with pornography and other media. Peter Fagan of Johns Hopkins School of Medicine, in Congressional testimony supporting the CREEPER Act, argues the same mechanism applies to child-form devices, which he claims have a “reinforcing effect” on pedophilic ideation. The mechanism, if it operates, operates identically across every dimension of the configuration menu. It also depends most heavily on empirical evidence we do not have, which is the easiest reason for proponents to wave it away.

There is a deeper objection underneath both of these, made through Kant, given its sharpest modern formulation in Martha Nussbaum’s “Objectification” (1995), and developed structurally in Neda Atanasoski’s chapter on robot sex in the Oxford Handbook of Feminist AI: that rendering human identity into a configuration parameter is itself a harm independent of any further consequence. The apparent diversity of customizable robotic bodies is not a corrective to bias but a commodification of difference itself: the conversion of human variety into categories that can be packaged, marketed, and sold. This argument needs no rehearsal data and no production studies. It claims that treating skin color, gender, body, and age as à la carte parameters is a wrong in the same way some philosophers argue that selling votes or organs is a wrong. It is the most philosophically demanding of the objections and the one most often dismissed for abstraction, but it is also the one that survives every empirical objection because it is not making an empirical claim.

Defending the product takes more care than its proponents usually give it, because at least two distinct arguments that should not be run together tend to get smuggled into one. The cleaner one is sexual rights and access: adults with physical or social barriers to human partnership, including disabled adults and people with severe psychological barriers, have a legitimate interest in sexual expression that the world has structurally limited. Ezio Di Nucci’s chapter “Sexual Rights, Disability and Sex Robots” in Robot Sex makes that case carefully, and on its own terms it is strong. It does not depend on anyone else being harmed, and it does not depend on the rehearsal data. It is an argument about expanding access to a category of human experience.

The harm-reduction argument is structurally different and should not be allowed to ride on the access argument’s coat-tails. The claim there is that providing an artificial outlet for desires which, if acted on, would harm a third party may reduce that harm overall. Ronald Arkin at Georgia Tech has made the most controversial version of this case, suggesting that child-form devices distributed under prescription and clinical supervision might serve as harm reduction for individuals with pedophilic disorder who do not want to offend. The argument is utilitarian, entirely contingent on the rehearsal-versus-substitution question, and dependent on controlled distribution to do its work. None of those conditions describe a normal commercial market.

The autonomy defense is the broadest, made carefully by Wilhelm Klein and Michael Lin in their 2017 reply to Richardson: adults in a liberal society are the authors of their own intimate lives, and the state has historically been bad, sometimes catastrophically bad, at deciding which sexual choices count as legitimate. The argument is genuinely strong against state interference in what consenting adults do with consumer products, with one structural limit: it addresses harm to the consumer, and most of the harms identified here are not to the consumer. They are to third parties, to the categories of people commodified by the product, to the social environment downstream of mass production. Autonomy is not built to engage with those harms.

The Honest Position

The honest position, which is also the uncomfortable position, is that the strongest arguments on this technology turn on empirical questions we do not yet have answers to, and that the structural incentives of the commercial market we are about to build will determine those answers far more than any academic ethicist will. Treating this as either a clear-cut horror or an obvious extension of personal liberty is a way of avoiding the work. The work is sitting with the genuine uncertainty long enough to figure out what we would be willing to be wrong about, and in which direction. The hardware is being built. The software is mostly built. The legal framework is being probed at the edges, in this country and others, by entrepreneurs who are not waiting for ethicists or lawmakers to finish their debates.


Further Reading