By Atreya Mathur
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” —Eliezer Yudkowsky
Again and again, artificial intelligence (AI) has demonstrated its sheer power to create and tell stories by making visual art, writing poems and code, composing music, and even testing astrological compatibility. Or has it? AI seems to be (machine) learning and doing it all; perhaps it has even taken a step further, playing a little on the human psyche by creating “magic avatars” that envisage who one may want to be. If one has ever imagined what they might look like as Monet’s or Van Gogh’s muse, or animated by artists from Disney or Pixar, AI has it covered. Now one can get stunning portraits of all these and many more at the low cost of $10 and likely a few morals here and there — if one is willing to ignore some major ethical red flags (as tempting as that may be…) as well as concerning legal and privacy issues.
In the recent past, AI-generated art has become increasingly ubiquitous owing to quick turnaround times and detailed prompts that let users collaborate with the software to create artwork. With the accelerated rate of improvement and enhanced neural networks, AI is becoming more talented, more quickly. AI software (or the people behind the code), like DALL·E 2 among others, is now being accused of stealing artists’ protected works without consent to generate “new” images. Only days after South Korean illustrator Kim Jung Gi passed away (October 3, 2022), his work was fed into an AI model and reproduced. Greg Rutkowski, a 34-year-old Polish artist, also stated that AI models should exclude the work of living artists after learning that thousands of AI-generated images were copying his fantasy style and that his name had been searched over 93,000 times while the images were being produced. Lensa’s “magic avatars” feature is one such AI model accused of copying artists’ work to create AI-generated portraits. The magic avatars grant instant gratification to those who want to see themselves exactly as they desire, making the app an instant darling of the digitally savvy… while likely drawing on the styles of real, living artists and our contemporaries, leading those artists and artists’ estates to ask for accountability.
Screenshot of the download window for Lensa AI on the iOS App Store
What is Lensa?
Launched in 2018, Lensa is a product of Prisma Labs, a company based in Sunnyvale, California, that recently topped the iOS App Store’s free chart. Though it was created in 2018, the application did not become popular until Prisma Labs introduced its “magic avatar” feature in 2022. Lensa uses artificial intelligence to digitize and generate users’ portraits in a variety of categories, from anime to fantasy to what the company calls “stylish,” which most closely resembles an oil painting. The app itself is free, but the portraits require an in-app purchase. With a seven-day “free trial,” users can upload 10 to 20 selfies and then select a package of unique avatars: 50 for $3.99, 100 for $5.99, or 200 for $7.99. A year-long subscription is $35.99.
How does AI create the avatars?
To create “magic avatars,” Lensa uses Stable Diffusion, an open-source AI deep learning model that draws from a database of art scraped from the internet. Stable Diffusion is funded and distributed by Stability AI, the company founded by Emad Mostaque in 2020, and was released to the public only in August 2022. Stable Diffusion draws from a database called LAION-5B, which includes 5.85 billion image-text pairs filtered by a neural network called CLIP (also open-source). Other recent applications to employ Stable Diffusion include Canva. Researchers and tech experts Andy Baio and Simon Willison conducted an independent analysis of 12 million images used to train Stable Diffusion, identifying the websites the images were pulled from, along with the artists, famous faces, and fictional characters found in the data. They employed Willison’s Datasette project to build a data browser to explore the images and traced their origins to platforms like Blogspot, Flickr, DeviantArt, Wikimedia, and Pinterest, with roughly half of the collection coming from just 100 domains and Pinterest the single largest source. This essentially implies that the AI has been trained on unadulterated internet images with minimal filters and restrictions, taken from across the internet regardless of whether they are the copyright-protected works of other artists. Stability AI removed “illegal content” from Stable Diffusion’s training data, including child sexual abuse material. Additional changes to its policies were made in late 2022 to make it harder for Stable Diffusion to generate certain types of images, including nude and pornographic output, photorealistic pictures of celebrities, and images that mimic the artwork of specific artists, as in the case of Greg Rutkowski. But who makes these decisions as to which artists are fair game and which are off limits? Perhaps it should not be AI…
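The CLIP filtering step described above can be pictured with a toy sketch: CLIP maps an image and its caption into a shared vector space, and image-text pairs whose embeddings are too dissimilar are dropped from the dataset. The vectors and threshold below are made-up stand-ins for illustration (real CLIP embeddings have hundreds of dimensions), not actual model outputs.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for one image and two candidate captions.
image_embedding = [0.9, 0.1, 0.3]
caption_embedding = [0.8, 0.2, 0.4]    # describes the image well
unrelated_caption = [-0.7, 0.9, -0.2]  # unrelated text

THRESHOLD = 0.3  # an illustrative cutoff in the spirit of LAION's CLIP-score filter

def keep_pair(img_vec, txt_vec, threshold=THRESHOLD):
    """Keep an image-text pair only if its similarity clears the cutoff."""
    return cosine_similarity(img_vec, txt_vec) >= threshold

print(keep_pair(image_embedding, caption_embedding))  # True: matching pair kept
print(keep_pair(image_embedding, unrelated_caption))  # False: mismatched pair dropped
```

The point of the sketch is that filtering is purely geometric: nothing in it asks whether the underlying image is copyrighted, which is why copyrighted works can pass straight through into the training set.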
What makes the avatars so “magical”?
Unlike filters or photo-editing applications that merely edit or modify photos, Lensa generates images that do not necessarily look “real,” but rather lean into a new kind of photo distortion rooted in other-worldliness and dreams. The application requires a minimum of 10 photos (with a maximum of 20) and demonstrates examples of “good” and “bad” selections of “selfies” to upload. A good selection is an up-close selfie that showcases natural features, while a bad selection is a distanced pose or a group photo. There are explicit instructions not to upload any group photos or photos with any sort of nudity. (It does seem concerning to note that while no images with nudity are uploaded, the AI-generated images can contain nudity…) After the photos are selected, the application takes up to 20 minutes to generate the portraits in 10 styles: fantasy, fairy princess (or prince), focus, pop, stylish, anime, light, kawaii, iridescent, and cosmic.
The “portraits” bear a striking similarity to the user of the application, but there is something both dream-like and dystopian in the similarities and differences of the output. In the example below, Lensa accurately captured the user’s dark hair with bangs and brown eyes. What was most unsettling was the accuracy with which it captured the user’s “winged eyeliner,” red lips, and somewhat closed-mouth smile, present in many of the uploaded photos and in real life. The differences, whether in hair length, clothing, or pose, also appeared deliberate, as if imagining something of a fantasy.
LensaAI generated “magic avatars”
Ethical, moral and legal concerns
While millions of users around the world began generating and falling in love with their vanity… and narcissus-like magic avatars, concerns grew within artist communities online. Not only were these AI-generated portraits taking away commission opportunities from digital artists, but the works of some of those artists, who rely on commissions, were being used to train the AI model that generated the portraits, often without permission.
Screenshot of tweets by Prisma Labs
A number of artists spoke out against Lensa, including Jon Lam, who stated: “Lensa uses Stable Diffusion which is still using Datasets from stolen data and art all over the internet. This is how it knows how to mimic art styles. It’s unethical, and Big Tech is behind this ripping off artists everywhere for $8 a pop. This is what normalizing data/art thievery looks like. It’s malicious apps disguised as fun trends. If you are an artist, or truly appreciate us, Stop messing with this.” Digital artist Meg Rae posted a warning stating: “Do not use the Lensa app’s ‘Magic Avatar’ generator. It uses Stable Diffusion, an AI art model, to sample artwork from artists that never consented to their work being used. This is art theft.”
As mentioned earlier, Lensa employs a copy of the open-source neural network model Stable Diffusion to train its AI, meaning anyone has access to the open-source data without restriction. The model taps into a pool of billions of images from all corners of the internet, compiled into a dataset called LAION-5B. Stable Diffusion then uses these images to learn techniques that it applies to generate new works, which Lensa claims “are not replicas of any particular artist’s artwork.” While this is ethically dubious, the copyright law regarding these datasets is still murky. LAION’s website states that the datasets are simply indexes to the internet, i.e. lists of URLs to the original images together with the ALT texts found linked to those images. While LAION downloaded the pictures and calculated CLIP embeddings to compute similarity scores between pictures and texts, it subsequently discarded all the photos. Because the datasets contain only URLs of images, they serve as indexes to the internet, which arguably do not violate copyright law. It may be interesting to compare this to the US Court of Appeals decision in Perfect 10, Inc. v. Amazon.com, Inc. (9th Cir. 2007), holding that Google’s creation and display of thumbnail images did not infringe copyright and that Google was not responsible for the copyright violations of other sites it framed and linked to. The rationale was that Google does not store the images; its own page simply provides HTML instructions that direct a user’s browser to access and display a third-party website. Scraping public images from the internet, even copyrighted ones, to create something transformative would likely be fair use and serve as a defense against copyright infringement, but fair use has so far been applied to human-made works rather than to something created by a machine. In fact, the generated images themselves are not copyright protected unless human authorship in the magic avatars can be proved.
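LAION’s claim that its datasets are “indexes to the internet” can be illustrated with a toy record: each entry is a URL plus its ALT text, not the image itself. The field names and values below are illustrative assumptions, not LAION’s actual schema.

```python
# Toy LAION-style index entries: pointers and metadata only, no pixels.
# Field names and URLs here are illustrative assumptions, not the real schema.
index = [
    {"url": "https://example.com/a.jpg", "alt": "oil painting of a castle"},
    {"url": "https://example.com/b.jpg", "alt": "portrait in fantasy style"},
]

def search_alt_text(index, term):
    """Return the URLs whose ALT text mentions the search term."""
    return [entry["url"] for entry in index if term in entry["alt"]]

# Anyone training a model would fetch each URL themselves; the index itself
# stores no image data, which is the crux of the "index, not copy" argument.
print(search_alt_text(index, "fantasy"))  # ['https://example.com/b.jpg']
```

This is why the legal question turns on the downloading done at training time rather than on the dataset itself: distributing a list of pointers is a different act from reproducing the pictures they point to.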
In addition, the open-source nature of Stable Diffusion means that any copyright infringement is the end-user’s responsibility. Even if AI art can clear these legal obstacles, the ethics are of course still deeply concerning.
Lensa’s app has been trained on artwork created and posted by artists across the internet, and some artists claim this not only devalues their work, with the AI mass-producing 50–100 images at a fraction of the cost of a commission, but also potentially appropriates their work, including their signatures. Artists and others pointed out that in the AI-generated images one could see mangled fragments of artists’ original signatures in the corners of the portraits, as seen in the images below. Counterarguments were made that these were not actually signatures: “This is the AI noticing that its training dataset always has signatures and reproducing that element.” One person pointed out that it was “entirely possible that these are watermarks from photography studios, which would be more likely since people are seeding this AI with photos,” while another reiterated that “copyright applies just as much to photos as it does to drawings and paintings” and that regardless, this work could be infringing an artist’s rights. Another commented that the worst part is that “future updates can be tweaked to avoid this.” It is interesting to think back to a simple rule-of-thumb jest attributed to Bob Oliver: “if you steal from one man, it’s plagiarism. If you steal from several, it’s research.” And who is better at research than a machine capable of processing hundreds of thousands of images? Is this theft, or is it simply “research” to create something new?
Screenshot of tweets by Lauryn Ipsum with signature fragments of artists on LensaAI generated “magic avatars”
Example of an AI-generated “magic avatar” with fragmented signature of artist on the top left
Artists in online communities like DeviantArt, which produce the kind of art Lensa refers to, usually self-regulate. If someone posts art that looks like another artist’s work, that person is usually criticized for copying and ostracized from the community. But it is more difficult to attribute responsibility when an algorithm generates the artwork. As of now, original artists are not receiving any payment from Lensa for the use of any images. And, concerningly, if people become accustomed to paying so little for so many portraits, it may be a challenge for artists to produce artwork and be paid their dues for the same. Who can compete with a machine making seemingly intricate portraits?! Is this the dawn of a new prêt-à-… fashion? The ultimate Vanitas?
In December 2022, a digital artist named Ben Moran tweeted that moderators of r/Art (a 22-million-member art forum on Reddit) had banned Moran from the subreddit for breaking its “no AI art” rule. Moran had posted an image of their digital illustration, titled “a muse in warzone,” and moderators removed it and banned them from the subreddit, stating it was an AI-designed or AI-generated piece. Moran responded that they could provide process files or the PSD file of the painting to prove that Moran was the artist and had not used any AI-supported technology. Moran further stated that the punishment was “not right” and provided a link to their portfolio on DeviantArt. A moderator for r/Art replied that they did not believe Moran and that “Even if you did ‘paint’ it yourself, it’s so obviously an AI-prompted design that it doesn’t matter. If you really are a ‘serious’ artist, then you need to find a different style, because A) no one is going to believe when you say it’s not AI, and B) the AI can do better in seconds what might take you hours. Sorry, it’s the way of the world.” Moran’s response: “Being accused of being an AI artwork is just like telling me that I’m a random guy and all of my job is just typing some words to have a painting in one or two hours. I think it’s not a good comparison.” Since AI is churning out artwork at a fraction of the time and cost, and websites are (with good intentions) trying to ban AI works to protect artists, who is able to differentiate between AI artwork and human-produced work, as in Moran’s case? Are human artists being reprimanded and devalued for work they have been creating long before AI?
Additionally, Lensa’s updated privacy policy provides more detail on privacy rights for residents of California, Colorado, Connecticut, Utah, and Virginia, the only five states with comprehensive privacy laws, some of which go into effect in the new year. For example, users in those states can request information about what user data is collected and have it deleted. The legal team at Prisma Labs decided to add the state-specific section for the benefit of its core user base and after conducting a review of soon-to-be-required legal notices.
Finally, while users may or may not own the rights to the photos generated by “magic avatars,” individuals may still have a right of publicity. The right of publicity prevents someone’s likeness, including their image, from being used commercially without permission. By granting rights to images through these applications, a user could end up seeing their face on the developer’s website or in marketing materials without having granted explicit permission.
Screenshot of tweets of Prisma Labs
The issue with artificial intelligence is that there really seems to be no precedent… yet. (No doubt in time there will be more lawsuits and complaints to peruse!) AI is doing more than we know, and a majority of it remains unregulated. There are no laws that strictly lay down standards for ownership of work or liability and accountability for actions. Terms and conditions, privacy policies, and good practices help ensure that some standards are followed and that basic violations of privacy do not go ungoverned, but they can be vague and riddled with loopholes. It is important to note that one cannot copyright a “style” of work, only a piece of work itself. If the AI-produced work is ‘transformed enough’ from any original source input, it will be challenging for an artist to claim infringement. However, if the AI work is substantially similar to an artist’s prior work, or appears to be copied, then infringement may be present and legal remedies would likely be available. “Theft” of artwork through machine learning, at least at this point, seems to lack legal backing, though ethical considerations must be taken into account. While the law does not prohibit sampling work to transform it (under the fair use doctrine), is it moral to continue engaging with AI models to purchase mass-produced and cheap art? Or are different “fair use” standards required for AI-generated artwork?
Will AI artwork ever truly replace traditional art or the work of digital artists? While it may be relatively simple to make an aesthetically pleasing artwork using AI, it is still difficult to create a very specific work, with a specific subject and context, even with detailed text prompts. So while apps like Lensa may be fun and trendy in the short run, the personality of the artist remains an important context for their work, especially if commissioned. It is interesting to consider whether Lensa or similar apps could replace the market: would a person who wants a high-quality commissioned portrait rather employ a human artist, or would they choose AI? It seems unlikely that AI would carry the same prestige or value, but the situation remains challenging for artists who feel increasingly ripped off.
As of now, behind all the AI software is a human-run company which can be held accountable and liable for violations of any laws. At a minimum, perhaps these companies should seek informed consent for the data they use to train their machine learning algorithms, as the artworks are not public property just because they happen to be publicly available online.
Read more: What else is AI up to these days?
Screenshots from the Co-Star App
Co-Star: AI is now being used to chart out astrological stars and predict compatibility. After one inputs their information, including their place and time of birth, Co-Star gives detailed daily readings and compares the user’s astrological chart with those of friends on the application to guide relationships. While access to most information is free, for more detailed readings one can make an “offer” of a certain sum of money, from $1 to $20, to receive the full and “complex” reading. See more here: https://www.costarastrology.com/
Images generated on DALL•E 2 using text prompt: oil painting of a robot holding a paintbrush and painting a portrait
DALL•E 2: An AI art platform that creates images from text descriptions in seconds. One can input a detailed text prompt for which an image is generated. Users are allotted 50 credits per month to generate a number of images at no cost. See more here: https://openai.com/dall-e-2/
DoNotPay: An artificial intelligence bot is set to defend a human in court for the first time ever in February 2023. The world’s first “robot lawyer” will help a defendant fight a traffic ticket in court. The AI bot developed by DoNotPay will run on the defendant’s smartphone, listen to court arguments in real time, and advise the defendant on what to say via an earpiece. The defendant will say only what the AI instructs them to say in court. To use the service, one inputs basic information about a specific legal issue, and the information is processed using AI to generate a legal document tailored to those needs. The service is available for $36 and bypasses the hefty fees usually charged by lawyers. DoNotPay was initially developed to help people contest parking tickets in London; since its launch in 2015 as a chatbot, it has expanded to cover a variety of legal issues. See more here: https://donotpay.com/
- Atreya Mathur, Art-istic or Art-ificial? Ownership and copyright concerns in AI-generated artwork, Center for Art Law (November 2022), available at https://itsartlaw.org/2022/11/21/artistic-or-artificial-ai/
- Haje Jan Kamps, UPDATED: It’s way too easy to trick Lensa AI into making NSFW images, TechCrunch (December 2022), available at https://techcrunch.com/2022/12/06/lensa-goes-nsfw/
- Hillary K. Grigonis, New AI Apps Raise Questions About Copyright and More, Rangefinder (December 2022), available at https://www.rangefinderonline.com/news-features/industry-news/new-ai-apps-raise-questions-about-copyright/
- Madeline Garfinkle, What is Lensa AI? And Does it Pose Privacy and Ethical Concerns?, Entrepreneur (December 2022), available at https://www.entrepreneur.com/business-news/what-is-lensa-ai-app-and-is-it-dangerous-for-your/441148#:~:text=Lensa%20uses%20artificial%20intelligence%20to,portraits%20come%20at%20a%20cost
- Nicolas Clark, Lensa’s viral AI art creations were bound to hypersexualize users, Polygon (December 2022), available at https://www.polygon.com/23513386/ai-art-lensa-magic-avatars-artificial-intelligence-explained-stable-diffusion
About the Author
Atreya Mathur is the Director of Legal Research at the Center for Art Law. She was the inaugural Judith Bresler Fellow at the Center (2021-22) and earned her Master of Laws from New York University’s School of Law where she specialized in Competition, Innovation, and Information Laws, with a focus on copyright, intellectual property, and art law.