Lensa App’s Magic Avatars Spark Controversy Over Sexualized Results

AI-generated art has been dominating headlines and forum discussions in the last few weeks. Much of the frenzy involved debates on whether this technological advancement will negatively impact artists. A big part of this discussion centers around how much of AI technology’s “art” is made up of a patchwork of real — stolen — artwork. 

After all, the artificial intelligence we have today isn’t actually intelligent yet. It’s just fancy machine learning that has been fed a massive amount of data, which it uses along with some complicated algorithms to solve a complex mathematical problem. In the case of AI art, the model is trained on millions of images sourced from the web, and those images become its pool of references.

Even though this debate is important, and ongoing, the smartphone app Lensa recently found itself at the heart of another potentially worrying facet of AI-generated art. Apparently, the app has been oversexualizing avatars of women, while depicting men in stylized artwork or high-end careers.

Lensa: Cute Avatars or Problematic AI Gimmick?

Prisma Labs launched Lensa, an AI-powered photo editing app with a subscription monetization model, in 2018. The app’s popularity shot up in late November 2022 after it introduced a “Magic Avatar” feature, which uses Stable Diffusion image synthesis to let people create custom avatars of themselves.

If you upload various pictures of yourself, the app can theoretically generate a limitless supply of custom cartoon avatars of your face. It combines your photos with a model trained on a massive cache of images scraped from the web to generate these avatars.

While many people have complained the avatars usually don’t even look like them, the bigger realization is that the app tends to sexualize images of women (and children). Plenty of women have voiced their concern over how the app displayed their likeness with naked bodies or scant clothing, often styled in sexualized poses.

Lensa app Magic Avatar results depicting avatars of an Asian woman
An example of the type of results women get when using the app, as shown by MIT Technology Review.

Olivia Snow shared her experience on Wired, writing, “I received several fully nude results despite uploading only headshots.” This sentiment is shared by many people, especially women, who complained the app would add cartoonishly sexualized features like giant breasts to their avatars and often put them in sultry poses. 

The problem seems to be worse among traditionally oversexualized demographics like Asian women. Melissa Heikkilä from MIT Technology Review wrote about how the app clearly only recognized her Asian descent and so modeled her avatars on anime and, judging by the content she received, “most likely porn.” Her white female colleague got comparatively fewer sexualized results. Snow also says one woman told her the app would often give her avatars an ahegao face.

Men, on the other hand, didn’t seem to have the same struggles. Instead, their Lensa avatars usually depicted them in high-end careers, as stylized character portraits, or in majestic poses wearing armor.

Variety of male avatars created using Lensa app
The results men get from the app tend to differ greatly from the results women get, as shown by MIT Technology Review.

Children Aren’t Safe Either

While its depiction of women is concerning, even worse, as Snow explores in detail in her Wired article, is the app’s sexualization of photos of children. Lensa’s terms of service direct you to submit only photos containing “no nudes” and also specify “no kids, adults only.”

Snow decided to violate this last rule by uploading 10 childhood photos of herself, and the results were truly harrowing. Even though Snow didn’t upload any photos with suggestive undertones, the Lensa app wasted no time in displaying her childhood face (or at least a likeness) in the nude with the breasts of an adult woman.

Even though this technically violated the app’s terms, it’s common for young children and teenagers to use apps that have age restrictions. Social media is a common example: most social media apps have an age restriction of 13, yet they’re inundated with young kids.

The result is a now-viral app that may get an influx of young people using its Magic Avatar feature, only to get back images of themselves that make them feel violated and uncomfortable. On top of that, as Snow points out, these images can be used by their peers as a form of bullying. Not only that, but they may also skew all of the app’s results toward younger-looking faces as its database fills with this kind of content.

Sadly, Prisma Labs doesn’t seem to be enforcing its terms of service, as it doesn’t prevent people from uploading nudes or images of children. It doesn’t remove them either. What’s even more worrying is that the terms don’t prohibit you from uploading someone else’s photos. Either Lensa’s developers didn’t consider this issue at all or it doesn’t concern them. An easily implementable solution would be to only accept selfies taken from within the app when you want to create a new avatar.

This all begs the question, then: Why is this happening in the first place?

Why Is Lensa Sexualizing Its Female Users?

We’ve established what’s wrong with Lensa, so it’s a good time to explore what’s causing this potentially dangerous development in AI art. As I mentioned above, Lensa uses Stable Diffusion to create its avatars. 

Megan Fox Instagram post depicting Lensa Magic Avatar result
Even Megan Fox joined in on the Lensa trend, but complained about its less than stellar results.

Stable Diffusion is an open-source text-to-image AI model released by Stability AI (among others) in August 2022. It generates images from text prompts using a model trained on LAION-5B, a massive open-source data set of image-text pairs scraped from the internet. This training data contains a large amount of sexualized images, since these types of images represent a large segment of the available imagery on the web.
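To make the mechanics concrete, here’s a minimal sketch of what calling Stable Diffusion looks like through Hugging Face’s open-source diffusers library. This is not Lensa’s actual pipeline (Lensa also fine-tunes the model on the selfies you upload, which isn’t shown), and the model ID and prompt are purely illustrative, but it shows how every generated image starts from a text prompt interpreted by a model trained on LAION-5B.

```python
# Minimal sketch of text-to-image generation with Stable Diffusion using the
# Hugging Face diffusers library. Illustrative only: this is not Lensa's
# pipeline, which also fine-tunes the model on your uploaded selfies.
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion 1.x checkpoint (the generation Lensa reportedly uses).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is strongly recommended

# Every image starts from a text prompt; what these words "look like" to the
# model is whatever its LAION-5B training data taught it, biases included.
prompt = "stylized fantasy portrait of a person wearing ornate armor"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("avatar.png")
```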

While sexualized images of men obviously also feature in this dataset, research has shown the internet leans heavily toward pornified content featuring women. This likely isn’t news to you, but it does explain why Stable Diffusion’s data set skews heavily in this direction, and why AI art generators trained on this data end up displaying women in an oversexualized way.

Another notable aspect of this is how the AI learns to “see” women. Because the model learns by pairing images with text descriptions, those pairings shape which concepts it associates with which imagery. For example, women may see significantly different results depending on how their uploaded images resemble the sets of images the model associates with terms like “beauty.”
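As a rough illustration of that text-image association: Stable Diffusion 1.x conditions its image generation on a CLIP text encoder, and you can use CLIP directly to see which captions a model most strongly associates with a photo. The sketch below is illustrative only; the file path and captions are placeholders, not anything taken from Lensa.

```python
# Rough illustration of the text-image association models like Stable
# Diffusion 1.x rely on (SD 1.x conditions on a CLIP text encoder).
# Scoring a photo against candidate captions shows which concepts the model
# links to it. "your_selfie.jpg" and the captions are placeholders.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("your_selfie.jpg")
captions = [
    "a professional headshot",
    "a fantasy character portrait",
    "a glamour model posing",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher probability means a stronger learned association between the photo
# and that caption, which is the kind of link that shapes generated avatars.
probs = outputs.logits_per_image.softmax(dim=1)[0]
for caption, p in zip(captions, probs):
    print(f"{p.item():.2f}  {caption}")
```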

The examples I mentioned above support this, as Asian women got very different results from white women. Many women of color also reported getting results that anglicized their features, whitening their skin tones and changing their hair to match European styles. This is likely because the internet favors a Western image of beauty, and the AI picked up on that.

Lensa isn’t the only AI art generator that uses Stable Diffusion, which means it’s not the only app suffering from these types of problems. Lensa, however, is based on Stable Diffusion 1.x, whose hyper-sexualized training data is now outdated. Stability AI released Stable Diffusion 2.x in November 2022, which filters NSFW material out of its training set, since it’s clearly become a big issue.
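For a sense of what that training-set filtering involves, here’s a hedged sketch: LAION publishes predicted “unsafe” scores alongside its image-text metadata, and filtering essentially means dropping pairs above some threshold before training. The file name, column name, and threshold below are illustrative assumptions, not Stability AI’s exact recipe.

```python
# Hedged sketch of the kind of training-data filtering Stable Diffusion 2.x
# applied: drop image-text pairs an NSFW classifier flags as likely unsafe.
# Assumes a LAION-style metadata shard with a predicted-unsafe score column;
# the file name, column name ("punsafe"), and threshold are illustrative.
import pandas as pd

metadata = pd.read_parquet("laion_metadata_shard.parquet")  # placeholder shard

UNSAFE_THRESHOLD = 0.1  # keep only pairs the classifier rates as likely safe
filtered = metadata[metadata["punsafe"] < UNSAFE_THRESHOLD]

print(f"kept {len(filtered)} of {len(metadata)} image-text pairs")
filtered.to_parquet("laion_metadata_shard_filtered.parquet")
```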

Despite these shortcomings, Lensa’s developers have attempted to respond to the backlash.

Lensa Attempts to Change Its Ways

Given the app’s quick rise in popularity and people’s subsequent ire at paying for avatars that made them feel violated, the topic naturally blew up across the internet. Prisma Labs has since responded by denying culpability. On an external FAQ site, the company admits “occasional sexualization is observed across all gender categories, although in different ways.”

Prisma Labs CEO Andrey Usoltsev has also spoken to media sites like TechCrunch to reiterate why the company can’t be held accountable for the avatars it creates. Prisma Labs seems unwilling to admit to having any control over the type of content its app produces. Instead, it simply says the output is the result of people violating its terms of service, as well as of the data set feeding the app’s content.

“Neither us, nor Stability AI could consciously apply any representation biases; to be more precise, the man-made unfiltered data sourced online introduced the model to the existing biases of humankind. The creators acknowledge the possibility of societal biases. So do we.”

— Prisma Labs CEO Andrey Usoltsev

Yet, social biases have been a hot topic in AI art generation for quite some time now. Researchers in the fields of machine learning and AI training recognize these biases and actively discuss how to work around and mitigate them. 

When creating something like Stable Diffusion or the Magic Avatars feature, developers have to decide which data sets to use and how to implement them in their models. They also decide whether to take any steps to mitigate certain biases (or introduce new ones).

By not actively addressing these biases in the first place, despite having knowledge of them, Prisma Labs accepted them as a given. It’s also possible the company simply didn’t prioritize dedicating any resources to improving its AI art model in this particular way. The result is widespread criticism, bad publicity (which may not be bad depending on your view), and plenty of official backpedaling.

Apparently, the company is now working on a fix to prevent the Magic Avatar feature from generating nude avatars on the app. It remains to be seen if it will be able to manage this successfully. At the end of the day, though, this has sparked an important and necessary conversation around both AI art generation and the internet as a whole when it comes to how both portray people.

The Future of Sexualizing Women Online

Lensa’s Magic Avatar feature isn’t so much the problem as a symptom of it. Women and children are among the most vulnerable demographics when it comes to online exploitation, and developments like these amplify it.

No one, at this point in time, moderates the content AI art generators create, and it’s getting easier to create realistic deepfakes. While deepfake technology has been used for a host of other purposes, it’s primarily used to create pornographic content using the faces of women without their consent.

What’s worse, the technology is becoming more accessible through tools like Lensa and others created specifically for this purpose, which won’t be named here. On top of that, apps like Lensa give developers an excellent way to gather even more examples of your face.

Tweet with a screenshot of Prisma Labs terms and conditions in black and white
You’re handing over full ownership rights of your photos to this company.

In its terms of service, Lensa explicitly states it now owns the photos you provide it in perpetuity, with permission to use and modify them. The company has made it clear it uses your photos to train its AI as well, giving it the ability to use a likeness of your face to generate avatars for other people or in any other way it pleases.

Protect Your Identity — You Only Have One

When a cybercriminal compromises your passwords, you can change them. You can’t do that with your face, especially if it’s uploaded to a service whose terms explicitly say it owns anything you provide it. Once your identity has been compromised, it’s extremely difficult to rectify the situation because the internet so rarely forgets anything. Your best bet is to take as many precautions as you can to avoid potentially harmful situations like identity theft, doxxing, swatting, and now, deepfakes.

This means avoiding apps that can literally steal your face — like Lensa. Right now, this may not seem like much to worry about, but deepfake technology is improving at an alarming rate and becoming more widely available. Imagine someone using your face to commit identity theft — they could do just about anything. You might not think of yourself as a big target, but identity theft can happen to anyone.

With the advent of Web3 and the metaverse, and with online interactions becoming increasingly normalized, people could use your face in VR games and VR pornography, or pose as you on dark web sites while looking for illegal services. They could make videos using your face to blackmail you, and even pose as you during online interviews to secure a job with a company for the purpose of extortion or even corporate espionage.

These examples may seem far-fetched, but cybercriminals come up with inventive ways to steal people’s identities and commit fraud, extortion, and other crimes every day. Think about the different ways you can already be targeted today — many of them may have seemed improbable just a decade ago.

Criminals already steal people’s photos and online accounts to pose as them while they scam other people. This technology just takes it one step further and gives them the ability to use your identity in new ways. On the other side of that coin, some people might even steal your face simply to hide their own, as the constant surveillance we experience today sparks a subculture of people unwilling to use their real likeness on the web. 

Just like cybercriminals, you need to keep adapting your strategies and updating your toolset to stay safe online. This means limiting who has access to your social profiles, using tools to secure your devices, and staying on top of scams. You can use CyberGhost VPN to secure your connection, preventing outsiders from hijacking your session and gaining access to everything you do online — including when you upload your selfies to the cloud. No one’s getting through our unbreakable 256-bit AES encryption.
