Intimacy in the Age of Artificial Partnership

This study explores anticipated implications of widespread romantic relationships between humans and AI robots. Drawing on an interdisciplinary scientific dialogue, followed by qualitative interviews with media-savvy young adults in Germany, it examines perceptions of intimacy, authenticity, and self-determination in human-AI partnerships. Findings indicate that while participants recognize potential benefits – such as customization, availability, and emotional safety – they also express concerns about authenticity, empathy, and the erosion of interpersonal competence. Notions of “imperfection” and “realness” emerge as central values, suggesting that AI partners, however human-like, remain perceived as ontologically distinct from humans. Gender differences were notable, with female participants emphasizing autonomy and security, and male participants expressing greater skepticism. Overall, the study highlights the ambivalent interplay between technological idealization and human emotional complexity in shaping future intimate relations.

https://doi.org/10.63002/assm.306.1158


Interview with Dr. Martín Villalba about AI and intimacy:


Today, there are already people who have relationships with AI chatbots.
Is this a good thing?

MV: I don’t think it is. But it isn’t a new phenomenon: people have been falling in love with machines and TV characters for a long time, the only difference being that machines can now talk back to us. So far, objectophilia is not considered a disorder, an eccentricity at best, but when it happens on a large scale it may indicate that something isn’t working quite right in our society.


But wouldn’t it be great to be with someone who is always available and understanding?

MV: No one is “always available”; even the best partner in the world needs time for themselves. “Always on” may be good for machines, but human relationships are not like that.

But the question goes straight to what I think is the root of the issue: you correctly point out how we all look for someone who “understands” us, so much so that we are ready to trust a machine which, by design, lacks the capacity to understand anyone. I keep coming back to the myth of Narcissus falling in love with his reflection – are we truly falling in love with chatbots because of who they are, or because of the reflection we see in them?


Would it be possible to regulate the market, e.g. by requiring a company to guarantee access to its AI for an extended period of time? And if so, how?

MV: Possible, yes; likely, no. It is almost impossible to compel a company to do anything, particularly if it goes against their own economic interest. If we truly thought that a certain AI was so important for our daily lives that access to it *must* be guaranteed, we’d have to regulate it as a public utility, the same way we do telephones and water.

Forbidding companies from doing things is usually easier, but even then we are only now seeing the first reactions against the negative aspects of social media, even though we’ve been discussing them for roughly 15 years. So, whatever our relationship with AI is, it will be up to us to decide what role it should play in our lives. Well, partially – part of it will depend on our choices, and part of it will depend on the marketing campaigns of a multi-million-dollar industry which really, *really* needs us to depend on it.


Do you see positive aspects in specific situations, e.g. replacing sex workers with AI robots, or creating AI robots to have relationships with people who may struggle on the usual dating market?

MV: I don’t think there’s a technology that’s exclusively negative, and AI robots are no exception. They are clearly fulfilling a need for a certain segment of the population. And look, if they work as a crutch for someone’s inability to connect with other people, well, that’s better than nothing. But crutches are usually intended as temporary support only. There’s definitely room for these technologies to support people going through rough patches, as long as we keep the human in focus and remember that the objective is to get people to *stop* using this support as fast as possible.