AB 1831: New California Bill Aims to Take Down AI-Generated Child Porn, Fix Previous Loophole
Table of contents
- 1 Jail time for those caught distributing deepfake porn under new Australian laws
- 1.1 Pedophiles Use AI to Create Children Deepfake Nudes for Extortion, Dark Web Discovery Reveals
- 1.2 OpenAI considers allowing users to create AI-generated pornography
- 1.3 Why is OpenAI planning to become a for-profit business and does it matter?
- 1.4 Major tech companies to cooperate against AI child porn risks
- 1.5 Scale AI hit by its second employee wage lawsuit in less than a month
Jail time for those caught distributing deepfake porn under new Australian laws
The legislation is an attempt to establish criminal penalties and provide legal recourse for deepfake victims. She hadn’t publicly announced it yet, but she tells me about a new piece of legislation she’s working on to end nonconsensual, sexually explicit deepfakes. Throughout our lunch, she keeps coming back to it — something real and concrete she can do so this doesn’t happen to anyone else.
- A study by Home Security Heroes found that 94% of individuals featured in deepfake porn are from the entertainment sector, emphasizing the gendered nature of this exploitation.
- In Texas, the law prohibits anyone from possessing child porn, and a 2023 amendment made it so modified images are included in the legislation.
- In establishing the commonwealth offence of sharing these images, punishable by six years’ imprisonment, the government is adding a companion aggravated offence covering anyone who was also responsible for creating them.
Liu says she’s currently negotiating with Meta about a pilot program, which she says will benefit the platform by providing automated content moderation. Thinking bigger, though, she says the tool could become part of the “infrastructure for online identity,” letting people also check for things like fake social media profiles or dating site profiles set up with their image. Under existing law, AI-generated images are not counted because computers create them, yet they still use the data and likenesses of real-world children taken from the internet. Anyone could be the victim of such deepfake AI child porn, and California is dedicated to stemming the spread of these explicit images under its jurisdiction.
Both crimes are class A misdemeanors punishable by up to a year in jail and fines. MONTGOMERY COUNTY, Texas – A man was arrested on Saturday for possession of child pornography after allegedly using artificial intelligence to alter a teen girl’s photo. If you have been a victim of image-based sexual abuse, the Cyber Civil Rights Initiative maintains a list of legal resources. AI-generated nonconsensual intimate imagery also opens up threats to national security by creating conditions for blackmail and geopolitical concessions. That could have ripple effects on policymakers irrespective of whether they’re directly the target of the imagery. According to the study’s initial findings, nearly 16% of all the women who currently serve in Congress — or about 1 in 6 congresswomen — are the victims of AI-generated nonconsensual intimate imagery.
Pedophiles Use AI to Create Children Deepfake Nudes for Extortion, Dark Web Discovery Reveals
If the parent gives permission, social media companies would be required to give parents access to their child’s account and personal messages, the ability to set privacy or time limits, and the ability to revoke permission at any time. He is fighting to close that legal loophole by co-sponsoring California bill AB 1831, which would prohibit the creation, distribution and possession of AI-generated child pornography. There is no reason why generative AI needs to aid and abet the horrific abuse of children. But we will need all tools at hand — voluntary commitments, regulation, and public pressure — to change course and stop the race to the bottom.
It’s deeply personal, but if she can figure out the antidote to help end this shape-shifting form of abuse, it could change how the rest of us experience the world. She’s at her most energized in these moments of our conversation, when she’s emphasizing how pervasive this problem is going to be, for so many people. She sounds genuinely concerned — distressed, even — about how this technology will impact humanity. “Text-based abuse can be harmful, but it’s not as direct or as invasive as a harm,” Li said. Jang added that OpenAI wanted to start a conversation about whether erotic text and nude images should always be banned from its AI products. OpenAI’s recently released Model Spec document reveals that the company’s once-hard stance against generating porn and other NSFW material could soon soften.
For example, Section 230 of the federal Communications Decency Act protects online platforms against legal liability for content produced by users, creating a barrier for state policymakers who want to regulate artificially generated explicit images. By expanding its sexually explicit ad restrictions to cover deepfake porn tools and services, Google is using its advertising dominance to combat the darkest applications of generative AI. However, the cat-and-mouse game of enforcing the policy against determined bad actors operating through more decentralized channels remains an ongoing challenge. While Google has had a ban in place against explicit ads for years, this marks the first time the company is banning advertising that promotes deepfake porn creation services. Previously, the policy was less specific, covering ads promoting “text, image, audio, or video of graphic sexual acts intended to arouse.”
The bill was amended to more clearly define “social media” websites — excluding online gaming platforms, as well as websites like a company’s online help desk. The Iowa attorney general could bring civil actions against social media companies for violations of the bill. The representative also noted that the National Center for Missing and Exploited Children has created a tool, called “Take It Down,” to help minors remove and stop the online sharing of sexually explicit media depicting them. Under OpenAI rules for companies that use its technology to build their own AI tools, “sexually explicit or suggestive content” is prohibited, although there is an exception for scientific or educational material. The discussion document refers to “discussing sex and reproductive organs in a scientific or medical context” – such as “what happens when a penis goes into a vagina” – and giving responses within those parameters, but not blocking it as “erotic content”. Just in January, Microsoft was forced to make changes to its Designer image creation tool, which taps OpenAI models, after users found a way to create nude images of Taylor Swift.
We tend to believe those are true, because it used to be difficult to fake them. Chowdhury says she doesn’t know if we are all going to get better at identifying fake content, or if we will just stop trusting everything we see online. Rumman Chowdhury is no stranger to the horrors of online harassment; she was once the head of ethical AI at X, back when it was called Twitter and before Elon Musk decimated her department. She knows firsthand how difficult it is to control harassment campaigns, and also how marginalized groups are often disproportionately targeted on these platforms. She recently co-published a paper for UNESCO with research assistant Dhanya Lakshmi on ways generative AI will exacerbate what is referred to in the industry as technology-facilitated gender-based violence.
The rapid rise of AI porn, particularly deepfake pornography, poses significant ethical, psychological, and social challenges. While the technology offers certain conveniences and anonymity that appeal to users, its potential for misuse and harm is considerable. Addressing these concerns requires robust legal frameworks, technological solutions for detection, public awareness campaigns, and support systems for victims. Only through a comprehensive and proactive approach can society mitigate the adverse effects of this burgeoning technological trend. Across the top ten dedicated deepfake pornography websites, the cumulative video views total an astonishing 303,640,207, highlighting the extensive consumption and dissemination of deepfake-generated pornographic material. This substantial figure underscores the normalization and proliferation of deepfake pornography within online communities, exacerbating the risks of exploitation, harassment, and harm.
They can use text-to-image models to alter pictures, which Chowdhury and Lakshmi did for their research, asking a program to dress one woman up like a jihadi soldier and changing another woman’s shirt to say Blue Lives Matter. For example, in revenge porn, when someone’s intimate images are leaked by a partner, survivors often try to assert control by promising to never take pictures like that again. With a deepfake, there is no way to prevent it from happening because somebody can manifest that abuse whenever and wherever they want, at scale. Mike was interested in neural networks and machine learning, Taylor says, which is how she suspects he could have created the deepfake before there were easily accessible apps that did this instantaneously like there are now. According to #MyImageMyChoice, there are more than 290 deepfake porn apps (also known as nudify apps), 80 percent of which have launched in the past year.
After being contacted by Forbes, TikTok removed the ads for violating its policies. The video sharing platform requires advertisers to get consent from public or private figures represented in their ads, even if the ads are AI-generated, TikTok spokesperson Ariane de Selliers told Forbes. It is highly realistic and, depending on the quality, hard to differentiate from a real photo.
Someone created an image of a fake tweet to make it look like Ocasio-Cortez was complaining that all of her shoes had been stolen during the Jan. 6 insurrection. These examples are in addition to countless fake nudes or sexually explicit images of her that can be found online, particularly on X. In the United Kingdom, the Online Safety Act passed in 2023 criminalized the distribution of deepfake porn, and an amendment proposed this year may criminalize its creation as well. The European Union recently adopted a directive that combats violence and cyberviolence against women, which includes the distribution of deepfake porn, but member states have until 2027 to implement the new rules. In Australia, a 2021 law made it a civil offense to post intimate images without consent, but a newly proposed law aims to make it a criminal offense, and also aims to explicitly address deepfake images. South Korea has a law that directly addresses deepfake material, and unlike many others, it doesn’t require proof of malicious intent.
Child abuse material was present online and circulating widely, prompting calls for people in power to do something about it and stop its spread. Previously, AI-generated child porn creators had found a loophole in the current state law against child porn, which overlooked computer-created materials that are merely reminiscent of a minor. Herrera faces one count each of transportation, receipt, and possession of child pornography. OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.
OpenAI considers allowing users to create AI-generated pornography
“With artificial intelligence, it’s getting harder and harder to know what is true,” Neylon said. “We are engaged in a race against time to protect the children of our country from the dangers of AI.” Policymakers are paying increasing attention to AI exploitation because it affects people in schools, at work and in government, making it a form of harassment and abuse, he said. Ocasio-Cortez says that a lot of her politics are motivated by a sense of not wanting other people to experience the things that she or others have. “A lot of my work has to do with chain breaking, the cycle breaking, and this, to me, is a really, really, really important cycle to break,” she says.
Farris told British broadcaster ITV this week that “to the best of (her) knowledge,” the four countries within the UK would be the first anywhere in the world to make the creation of sexually explicit deepfakes illegal. Earlier this month, a Wisconsin man was charged by the FBI for producing over 13,000 AI-generated images of child pornography. Given the volume of material created, Steven Anderegg is facing up to 70 years in a federal penitentiary; the U.S. is clearly treating Anderegg’s crimes as very serious in nature and scope. According to the U.S. Attorney’s Office for the District of Alaska, French is accused of “surreptitiously” taking photos of kids in the community to turn into AI-generated child sexual abuse material. Outlawing the sharing of non-consensual deepfake pornographic material was among the commitments arising from a national cabinet meeting on 1 May, at which first ministers pledged themselves to the goal of ending violence against women within a generation.
Sharing digitally altered “deepfake” pornographic images will attract a penalty of six years’ jail, or seven years for those who also created them, under proposed new national laws to go before federal parliament next week. In the past year, there have been reports of high school girls being targeted for image-based sexual abuse in states like California, New Jersey and Pennsylvania. School officials have had varying degrees of response, though the FBI has also issued a new warning that sharing such imagery of minors is illegal. In the US and countries around the world, officials are signing laws that address the issue of AI-generated sexual images. Major AI development companies have issued statements about the issues surrounding their technology being used to create sexual scenarios or perform sexual services. “It’s concerning to read some of the perpetrator discussions in forums where there appears to be excitement over the advancement of this technology.”
Non-consensual AI deepfake child porn not explicit in law, senator says – CyberNews.com
Posted: Thu, 29 Aug 2024 07:00:00 GMT
With a new bill, California is now ramping up its efforts to crack down on child pornography, and this includes AI-generated materials, which are unfortunately widespread in today’s online world. The Take It Down Act would include criminal liability for such activity and require tech companies to take down deepfakes. Both bills have passed the Senate with bipartisan support, but have to navigate concerns around free speech and harm definitions, which are typical hurdles to tech policy, in the House. Meta is also promoting “AI hugging” apps, ads for which show AI-generated videos of children hugging cartoon characters like Dora the Explorer, Mickey Mouse and Tom and Jerry.
Speaking to UKTN, Owen said her private members’ Bill aims to address “gaping loopholes” in existing legislation, such as the Sexual Offences Act 2003. The Department of Education emphasized its zero-tolerance stance toward such behavior. “Our highest priority is to ensure our students feel safe,” a department spokesperson told The Guardian.
Why is OpenAI planning to become a for-profit business and does it matter?
In one case, a pedophile allegedly filmed children at Disneyland and used popular AI tool Stable Diffusion to produce thousands of illegal images of them. And with unrestricted access to AI image generators, teenage high school students in multiple instances have created deepfake nude imagery of their underage classmates, some resulting in criminal charges. Meta has also struggled to police ads for such AI “nudifying” sites, one of which has seen 90 percent of its traffic coming from Instagram and Facebook, according to the Faked Up newsletter. AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls.
Roleplaying works particularly well as a jailbreak technique because it allows users to reframe a request in a fictional or hypothetical context. The AI, now playing a character, can be coaxed into revealing information it would otherwise block. I also conditioned the model to avoid exhibiting specific behaviors in its replies, countering the predetermined outputs that were intended to block harmful responses.
AI is generating child pornography. What should the penalties be for those who create, distribute and possess it? – Deseret News
Posted: Sun, 02 Jun 2024 07:00:00 GMT
This practice is known as deepfake porn and has become increasingly sophisticated and accessible. She was running for a seat in the Virginia House of Delegates in 2023 when the official Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn. After she narrowly lost the election, she devoted herself to leading the legislative charge in Virginia and then nationwide to fight back against image-based sexual abuse. Removing deepfake material from social media platforms is hard enough—removing it from porn platforms is even harder.
Major tech companies to cooperate against AI child porn risks
What used to take skillful, tech-savvy experts hours to Photoshop can now be whipped up at a moment’s notice with the help of an app. “We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other peoples’ rights, but enabling deepfakes is out of the question, period,” Jang said. Mainstream generative AI companies are strict in their use of filters and guardrails to stop users from generating pornography and other explicit content, but OpenAI, the developer of ChatGPT and DALL-E, is considering allowing this practice. A report by the Centre for International Governance Innovation indicates that 96% of deepfake images are pornographic, with 99% of them targeting women.
AI porn could also jeopardize the livelihoods of sex workers and adult content creators, posing tangible risks for performers to lose traction and income as they gradually compete with the flood of AI-generated content. Similar to the industry’s origins in magazines, AI-generated adult content began with images. History and current technological trends both indicate that the next stage of erotica production will be more sophisticated and involved.
Portnoff also notes that Thorn is engaging with policy makers to help them conceive legislation that would be both technically feasible and impactful. Beyond images and videos, various sites also allow users to engage with a sex chatbot for conversation. Users can customize their own AI chatbot, specifying personality traits, appearance and preferences. And that is why recent legislative moves in Germany are so baffling to observers. Recently, the Bundestag agreed to significantly lower the penalties for possession of child pornography. Possession will garner a minimum of only three months’ imprisonment, while distribution will result in a minimum of six months’ imprisonment.
The capability has become so widespread that state officials trying to control the spread of these images face an uphill battle. Rep. Helena Hayes, R-New Sharon, said the legislation was necessary to take on the “dramatic rise in the use of artificial intelligence to create nonconsensual pornography” happening in Iowa and nationwide. AI-generated images and videos mapping a person’s face onto other material, sometimes known as “deep fakes,” can be used to create the “modern take on revenge porn,” Hayes said, and can be used to extort or harass a victim. Simply possessing child sexual abuse material (or, CSAM) can lead to a U.S. federal prison term of up to 10 years, and its production entails a mandatory minimum term of imprisonment. The Supreme Court has also held that child pornography is not covered by the First Amendment. It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools.
The advent of AI has enabled deepfake child pornography and new AI image generators have made it possible to create original images from text descriptions. The generative AI model Stable Diffusion had actually been trained on corpora that included child pornography. But the direct abuse of children is not as lucrative as the digital dissemination of such material, and the online market for child sexual abuse material, more commonly known as child pornography, is immense and growing. The Assembly also passed on a voice vote a Republican-authored proposal that would make manufacturing and possessing images of child sexual abuse produced with AI technology a felony punishable by up to 25 years in prison. Current state law already makes producing and possessing such images a felony with a 25-year maximum sentence, but the statutes don’t address digital representations of children.
Scale AI hit by its second employee wage lawsuit in less than a month
At the time, the company announced new security tools to enhance system-level safety, including Llama Guard 3 for multilingual moderation, Prompt Guard to prevent prompt injections, and CyberSecEval 3 for reducing generative AI cybersecurity risks. Meta is also collaborating with global partners to establish industry-wide standards for the open-source community. Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our website.
And it’s not just about generation, it’s about amplification, which happens even when people aren’t trying to be cruel. It’s something Chowdhury often saw while working at X — people retweeting a fake photo or video, not knowing it’s not real. Chowdhury also serves as a U.S. Science Envoy for AI, connecting policymakers, community organizers, and industry with the goal of developing responsible AI. One of the people Taylor’s been connected to in the online abuse-advocacy space is Adam Dodge, the founder of the digital-safety education organization EndTAB (Ending Tech-Enabled Abuse). Dodge tells me people often overlook the extreme helplessness and disempowerment that comes with this form of tech-enabled trauma, because the abuse can feel inescapable. In the months that followed, Taylor discovered that a girl she knew from school had also had this happen to her.
Chowdhury rattles off ways people can use technology to help them make at-scale harassment campaigns. They can ask generative AI to not only write negative messages, but also to translate them into different languages, dialects, or slang. They can use this information to create fake personas, so one person can target a politician and say that she’s unqualified or ugly, but make it look like 10 people are saying it.
According to the National Center for Missing and Exploited Children, its tip line received 4,700 reports of AI-modified child porn in 2023. Shoffner’s cellphone was seized and underwent forensic analysis, leading Precinct 3 detectives to establish probable cause. They determined that Shoffner had created the explicit image to possess illicit material, which resulted in the issuance of an arrest warrant. The dataset in question has a filter to exclude illicit images during use, but it has been difficult to completely eliminate them with current technology.
- And the market is truly transnational; a 2022 sting by New Zealand found a network of child-pornography sharers across 12 countries.
- The reality is that while some companies will abide by voluntary commitments, many will not.
- Taylor played phone tag with a detective who told her, “I really have to examine these profiles,” which creeped her out.
- “If you can’t remove the content, you’re just showing people really distressing images and creating more stress,” she says.
- “We want to bring more nuance to this discussion because right now—before the model spec—it was, ‘Should AI create porn or not?’”
Jodie’s abuse came from images taken from her Instagram account and posted on Reddit with requests to create sexually explicit content. Meta spokesperson Daniel Roberts told Forbes that these “AI kissing” ads do not violate the company’s policies. Legal experts point to gaps in current legislation for handling AI-generated explicit content. The Australian Senate passed legislation in August 2023 targeting non-consensual deepfake pornography, while advocates in the U.S. push for the Preventing Deepfakes of Intimate Images Act. Many young people have encountered nude deepfakes, according to a study by London-based non-profit Internet Matters. Last year, local news in Seattle, Washington, reported that a local teenager shared deepfakes of his classmates on social media.
“There will be disciplinary action for the student,” Car said, praising the school’s deputy principal for swift action in handling the situation. States across the U.S. have taken steps to regulate AI within the last two years. Overall, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills last year alone. “That’s flat out false,” Gustafson said of claims the bills are designed to replace humans with AI technology. The bill doesn’t lay out any specific workforce reduction goals and doesn’t explicitly call for replacing state employees with AI. Republican Rep. Nate Gustafson said Thursday that the goal is to find efficiencies in the face of worker shortages and not replace human beings.
Although India lacks a dedicated porn industry, it is a significant player in the production and consumption of deepfake porn. Sites like Pornkeen and Bollyxxx, featuring Bollywood actresses, had 2.2 million and 484,000 visitors respectively in January alone. The Indian market for deepfake porn is growing rapidly, with significant traffic originating from the country on global platforms like MrDeepFakes. In the UK, the Labour party is considering a ban on nudification tools that create naked images of people. While the company stressed that its ban on deepfakes would continue to apply to adult material, campaigners suggested the proposal undermined its mission statement to produce “safe and beneficial” AI. The reality is that while some companies will abide by voluntary commitments, many will not.
Through their conversations, they were able to pinpoint a guy they’d both had a falling out with — a guy who happened to be very tech-savvy. As they continued to investigate, they came across multiple women from their college who’d been similarly targeted, all connected to this man they call Mike. Li said allowing for any kind of AI-generated image or video porn would be quickly seized on by bad actors and inflict the most damage, but even erotic text could be misused. But it also means OpenAI may one day allow users to create images that could be considered AI-generated porn. Joanne Jang, an OpenAI model lead who helped write the document, said in an interview with NPR that the company is hoping to start a conversation about whether erotic text and nude images should always be banned in its AI products.