Over the past few years, we’ve seen AI-generated content move from experimental research in computer science labs to one of the engines of digital content creation.
Synthetic media offers significant opportunities for responsible, creative use across society. However, it can also cause harm. As the technology becomes increasingly accessible and sophisticated, its potential impacts, both beneficial and harmful, grow. As the field matures, synthetic media creators, distributors, publishers, and tool developers need to agree on and follow best practices.
With the Framework, AI experts and industry leaders at the intersection of information, media, and technology are coming together to take action for the public good. This diverse coalition has worked together for over a year to create a shared set of values, tactics, and practices to help creators and distributors use this powerful technology responsibly as it evolves.
PAI’s Responsible Practices for Synthetic Media offers recommendations for three categories of stakeholders contributing to the societal impact of synthetic media:
Built around the core concepts of consent, disclosure, and transparency, the Framework outlines key techniques for developing, creating, and sharing synthetic media responsibly.
Along with stakeholder-specific recommendations, the Framework asks organizations to:
PAI’s Responsible Practices for Synthetic Media is a living document. While it is grounded in existing norms and practices, it will evolve to reflect new technology developments, use cases, and stakeholders. Responsible synthetic media, infrastructure development, creation, and distribution are emerging areas with fast-moving changes, requiring flexibility and calibration over time. PAI plans to conduct a yearly review of the Framework and also to enable a review trigger at any time as called for by the AI and Media Integrity Steering Committee.
The Partnership on AI’s (PAI) Responsible Practices for Synthetic Media is a set of recommendations to support the responsible development and deployment of synthetic media.
These practices are the result of feedback from more than 100 global stakeholders. They build on PAI’s work over the past four years with representatives from industry, civil society, media/journalism, and academia.
With this Framework, we seek to:
The intended stakeholder audiences are those building synthetic media technology and tools, or those creating, sharing, and publishing synthetic media.
Several of these stakeholders will launch PAI’s Responsible Practices for Synthetic Media, formally joining this effort. These organizations will:
PAI will not be auditing or certifying organizations. This Framework includes suggested practices developed as guidance.
Synthetic media presents significant opportunities for responsible use, including for creative purposes. However, it can also cause harm. As synthetic media technology becomes more accessible and sophisticated, its potential impact also increases. This applies to both positive and negative possibilities, examples of which we only begin to explore in this Framework. The Framework focuses on how best to address the risks synthetic media can pose while ensuring its benefits can be realized responsibly.
Further, while the ethical implications of synthetic media are vast, implicating elements like copyright, the future of work, and even the meaning of art, the goal of this document is to target an initial set of stakeholder groups identified by the PAI AI and Media Integrity community that can play a meaningful role in: (a) reducing the potential harms associated with abuses of synthetic media and promoting responsible uses, (b) increasing transparency, and (c) enabling audiences to better identify and respond to synthetic media.
For more information on the creation, goals, and continued development of PAI’s Responsible Practices for Synthetic Media, see the FAQ.
Those building technology and infrastructure for synthetic media, creating synthetic media, and distributing or publishing synthetic media will seek to advance ethical and responsible behavior.
Here, synthetic media, also referred to as generative media, is defined as visual, auditory, or multimodal content that has been generated or modified (commonly via artificial intelligence). Such outputs are often highly realistic, would not be identifiable as synthetic to the average person, and may simulate artifacts, persons, or events. See Appendix A for more information on the Framework’s scope.
PAI offers recommendations for different categories of stakeholders with regard to their roles in developing, creating, and distributing synthetic media. These categories are not mutually exclusive. A given stakeholder could fit within several categories, as in the case of social media platforms. These categories include:
Responsible categories of use may include, but are not limited to:
These uses often involve gray areas, and techniques for navigating these gray areas are described in the sections below.
The following techniques can be deployed responsibly or to cause harm:
For examples of how these techniques can be deployed to cause harm and an explicit, nonexhaustive list of harmful impacts, see Appendix B.
Those building and providing technology and infrastructure for synthetic media can include: B2B and B2C toolmakers; open-source developers; academic researchers; synthetic media startups, including those providing the infrastructure for hobbyists to create synthetic media; social media platforms; and app stores.
Disclosure can be direct and/or indirect, depending on the use case and context:
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation. (Note: The ability to add durable disclosure to synthetic media is an open challenge where research is ongoing).
Those creating synthetic media can range from large-scale producers (such as B2B content producers) to smaller-scale producers (such as hobbyists, artists, influencers and those in civil society, including activists and satirists). Those commissioning and creative-directing synthetic media also can fall within this category. Given the increasingly democratized nature of content creation tools, anyone can be a creator and have a chance for their content to reach a wide audience. Accordingly, these stakeholder examples are illustrative but not exhaustive.
Disclosure can be direct and/or indirect, depending on the use case and context:
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.
Those distributing synthetic media include both institutions with active, editorial decision-making around content that mostly host first-party content and may distribute editorially created synthetic media and/or report on synthetic media created by others (i.e., media institutions, including broadcasters) and online platforms that have more passive displays of synthetic media and host user-generated or third-party content (i.e., social media platforms).
Disclosure can be direct and/or indirect, depending on the use case and context:
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.
Channels (such as media institutions) that mostly host first-party content and may distribute editorially created synthetic media and/or report on synthetic media created by others.
Channels (such as platforms) that mostly host third-party content.
20. Clearly communicate and educate platform users about synthetic media and what kinds of synthetic content are permissible to create and/or share on the platform.
While this Framework focuses on highly realistic forms of synthetic media, it recognizes that the threshold for what is deemed highly realistic may vary based on an audience’s media literacy and across global contexts. We also recognize that harms can still be caused by synthetic media that is not highly realistic, such as in the context of intimate image abuse. This Framework has been created with a focus on audiovisual synthetic media, otherwise known as generative media, rather than synthetic text, which presents different benefits and risks. However, it may still provide useful guidance for the creation and distribution of synthetic text.
Additionally, this Framework only covers generative media, not the broader category of generative AI as a whole. We recognize that these terms are sometimes treated as interchangeable.
Synthetic media is not inherently harmful, but the technology is increasingly accessible and sophisticated, magnifying potential harms and opportunities. As the technology develops, we will seek to revisit this Framework and adapt it to technological shifts (e.g., immersive media experiences).
List of potential harms from synthetic media we seek to mitigate:
Synthetic media, also referred to as generative media, is visual, auditory, or multimodal content that has been artificially generated or modified (commonly through artificial intelligence). Such outputs are often highly realistic, would not be identifiable as synthetic to the average person, and may simulate artifacts, persons, or events.
PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action is a set of recommendations to support the responsible development and deployment of synthetic media. The intended audiences are those creating synthetic media technology and tools or creating, sharing, and publishing synthetic media content. The Framework builds on PAI’s work over the past four years with industry, civil society, media/journalism, and academia to evaluate the challenges and opportunities for synthetic media.
What PAI is not doing:
Think of this document like a constitution, not a set of laws. We provide recommendations to ensure that the emerging space of responsible synthetic media has a set of values, tactics, and practices to explore and evaluate. This document reflects the fact that responsible synthetic media (and its associated infrastructure development, creation, and distribution) is an emerging area with fast-moving developments requiring flexibility and calibration over time.
Synthetic media presents significant opportunities for responsible use, including for creative purposes. However, it can also cause harm. As synthetic media technology becomes more accessible and sophisticated, its potential impact also increases. This applies to both positive and negative possibilities, examples of which we only begin to explore in this Framework. The Framework focuses on how best to address the risks synthetic media can pose while ensuring its benefits can be realized responsibly.
We recognize, however, that many institutions collaborating with us are explicitly working in the creative and responsible content categories. In the Framework, we include a list of harmful and responsible content categories, and we explicitly state that this list is not exhaustive, often includes gray areas, and that specific elements of the Framework apply to responsible use cases as well.
This Framework has been created with a focus on visual, auditory, or multimodal content that has been generated or modified (commonly via artificial intelligence). Such outputs are often highly realistic, would not be identifiable as synthetic to the average person, and may simulate artifacts, persons, or events. However, the Framework may still provide useful guidance for the creation and distribution of synthetic text.
Additionally, this Framework focuses on highly realistic forms of synthetic media, but it recognizes that the threshold for what is deemed highly realistic may vary based on an audience’s media literacy and across global contexts. We also recognize that harms can still be caused by synthetic media that is not highly realistic, such as in the context of intimate image abuse. In addition, this Framework only covers generative media, not the broader category of generative AI as a whole. We recognize that these terms are sometimes treated as interchangeable.
PAI developed the Responsible Practices for Synthetic Media from January 2022 to January 2023, through:
Development timeline, 2022
One of the expectations of Framework supporters is the submission of a case example, in which the organization reflects on how the Framework can be applied to a synthetic media challenge it has faced or is currently facing. By collecting real-world use cases against which to pressure-test the Framework, we can see how its principles stand up to technological advances and to evolving public understanding of AI-generated and modified content.
Those that join the Framework effort will explore case examples or analysis related to the application of its recommendations as part of the Framework Community of Practice. Over the course of each year, PAI will host convenings where the community applies the Framework to these cases, as well as additional public cases identified by PAI staff. The 11 case studies we published in March 2024 provide industry, policymakers, and the general public with a shared body of case material that puts the Framework into practice. These case studies allow us to pressure test the Framework and to further operationalize its recommendations via multistakeholder input, especially when applied to gray areas. The case studies also provide us with opportunities to identify what areas of the Framework can be improved upon to better inform audiences.
Although regulation and government policy are emerging in the synthetic media space, the Framework exemplifies a type of norm development and public commitment that can help to strengthen the connection between policies, entities, and industries that are relevant to responsible synthetic media. While we have intentionally limited the involvement of policymakers in drafting the Framework, we have thought about its development as a complement to existing and forthcoming regulation, as well as intergovernmental and organizational policies on AI, mis/disinformation, and synthetic and generative media. For example, we have thought about the Framework alongside the EU AI Act, the EU Code of Practice on Disinformation, as well as the introduction of the Deepfake Task Force Act in the U.S. Following the launch of the Framework, we plan to engage the policy community working on and around AI, mis/disinformation, and synthetic and generative media policy, including through a policymaker roundtable on the Framework in 2023.
PAI worked with over 50 global institutions in a participatory, year-long drafting process to create the current Responsible Practices for Synthetic Media. Participating stakeholders included the broader AI and media integrity field of synthetic media startups, social media platforms, AI research organizations, advocacy and human rights groups, academic institutions, experiential experts, news organizations, and public commenters.
Development timeline, 2022
The Framework is not a static document, but a living one. You can think of the Framework like a constitution, and not a set of laws, providing the burgeoning generative AI space with a set of guidelines for ethical synthetic media. PAI will revise the Framework each year in order to reflect new technology developments, use cases, and stakeholders. Part of that evolution will be informed by case examples from the real-world institutions building, creating, and sharing synthetic media. Institutions that join the Responsible Practices for Synthetic Media will provide yearly reports or analysis on synthetic media cases and how the Framework can be explored in practice. These cases will be published and inform the evolution of synthetic media policymaking and AI governance.
2023 was the year the world woke up to generative AI, and 2024 is the year policymakers will respond more firmly. In the past year, Taylor Swift fell victim to non-consensual deepfake pornography and a misleading political narrative. A global financial services firm lost $25 million due to a deepfake scam. And politicians around the world have seen their likenesses used to mislead in the lead-up to elections.
In the U.S., on the heels of a White House Executive Order, NIST will be “identifying the existing standards, tools, methods, and practices… for authenticating content and tracking its provenance, [and] labeling synthetic content.” This policy momentum is taking place alongside real-world creation and distribution of synthetic media. Social media platforms, news organizations, dating apps, courts, image generation companies, and more are already navigating a world of AI-generated visuals and sounds that is changing hearts and minds, even as policymakers try to catch up.
How then can AI governance capture the complexity of the synthetic media landscape? How can it attend to synthetic media’s myriad uses, ranging from storytelling to privacy preservation, to deception, fraud, and defamation, taking into account the many stakeholders involved in its development, creation, and distribution? And what might it mean to govern synthetic media in a manner that upholds the truth while bolstering freedom of expression? To spur innovation while reducing harm?
What follows is the first known collection of diverse examples of the implementation of synthetic media governance that responds to these questions, specifically through Partnership on AI’s (PAI) Responsible Practices for Synthetic Media. Here, we present a case bank of real-world examples that help operationalize the Framework, highlighting areas where synthetic media governance can be applied, augmented, expanded, and refined in practice.
These eleven stakeholders are a seemingly eclectic group; they vary along many axes implicating synthetic media governance. But they’re all integral members of a synthetic media ecosystem that requires a blend of technical and humanistic might to benefit society. As Synthesia rightfully notes in their case, “No single stakeholder can enact system-level change without public-private collaboration.”
Some of those featured are Builders of technology for synthetic media, while others are Creators, or Distributors. Notably, while civil society organizations are not typically creating, distributing, or building synthetic media (though that’s possible), they are included in the case process; they are key actors in the ecosystem surrounding digital media and online information who must have a central role in AI governance development and implementation.
Read together, the cases emphasize distinct elements of AI policymaking and seven emergent best practices we explore below. They exemplify key themes that support transparency, safety, expression, and digital dignity online: consent, disclosure, and differentiation between harmful and creative use cases.
The cases not only provide greater transparency on institutional practices and decisions related to synthetic media, but also help the field refine policies and practices for responsible synthetic media, including emergent mitigations. Secondarily, the cases may support AI policymaking overall, providing broader insight about how collaborative governance can be applied across institutions.
The cases also accentuate several themes we put forth when we launched the Framework in 2023, like:
Here, we offer emergent best practices from across cases, followed by brief analysis about the goals of the transparency case development, what PAI learned throughout the process, and how the cases will inform future policy efforts and multistakeholder work on synthetic media governance.
Cases that primarily focus on this theme: Synthesia, Respeecher, TikTok, Bumble
Several cases respond to a hotly debated question: which stakeholders in the technology pipeline are responsible for content monitoring and moderation? The debate typically includes some suggesting that moderation by Builders or Creator platforms would stifle innovation and expression, thereby putting too much power in the hands of a few institutions. However, others argue that failing to moderate at the model, technology development, and even infrastructure layer makes it harder to prevent harm downstream.
One of the most public examples of this debate took place far upstream from the institutions featured in these cases, but it illustrates the tradeoffs: in 2019, the CEO of Cloudflare, an internet security company, reversed course and terminated service to 8chan, an online message board that allowed “extremists to test out ideas, share violent literature, and cheer on the perpetrators of mass killings.” In explaining his decision, and his conflictedness, Cloudflare’s CEO mapped out the many institutions undergirding the Internet while questioning how to balance freedom of expression with safety, and what roles those institutions should play in doing so.
Builders, Creators, and policymakers often face a similar conflict. In our cases, though, several Builder and Creator platforms engaged in normative content moderation (or training data decision making, which in essence affects content development) to support harm mitigation, even though they are not Distributors of content, the stakeholders typically assumed to be responsible for moderating content and on whom much regulatory activity is focused. By doing so, they provide a degree of redundancy in content moderation systems further downstream, possibly minimizing the harmful content that eventually reaches audiences.
For instance, Synthesia, a Builder of synthetic media technology, has implemented detection and moderation capabilities at the point of creation. As they note, “Until recently, most content moderation has happened at the point of distribution: a user of digital creation tools could create content without any restrictions.” As with all content moderation, there is inevitably ambiguity in content evaluations, and they differentiate between “obviously harmful content,” “obviously harmless content,” and “gray zone” content — for which they provide a few examples. However, this moderation taking place before content gets to social media platforms can help support harm mitigation further downstream, though it should be pursued transparently in order to illuminate the often subjective decision-making that takes place when moderating gray area content.
For example, Synthesia describes choices it made about misleading videos on sexual health or cryptocurrency, and how, by thwarting their development, it provides meaningful support for the social media platform moderation processes that might otherwise need to filter out this harmful content. Given such a fast-moving field, and the limits of moderation on social media platforms, this might meaningfully reduce the spread of harmful content downstream (when done transparently).
Adobe, as a Builder, also took steps to build in technological affordances that affect what content is included in and accessible via its models. It is working to enable and protect creators by letting them attach a “Do Not Train” tag to the metadata of their work, so that specific content is kept out of the technology driving synthetic media and products further downstream do not then distribute such content.
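To make the mechanism concrete, here is a minimal sketch of how a training-data pipeline might honor such an opt-out signal. The `do_not_train` field and JSON sidecar format are hypothetical illustrations, not Adobe’s actual Content Credentials schema; the point is simply that a creator-set flag can be checked before an asset ever enters a training set.

```python
import json
from pathlib import Path


def load_training_candidates(manifest_dir: Path) -> list[dict]:
    """Collect asset records whose (hypothetical) metadata permits training."""
    allowed = []
    for sidecar in manifest_dir.glob("*.json"):
        record = json.loads(sidecar.read_text())
        # Respect the creator's opt-out: skip flagged assets, and treat a
        # missing preference conservatively by excluding the asset as well.
        if record.get("do_not_train", True):
            continue
        allowed.append(record)
    return allowed
```

A pipeline filter like this is only one layer; the same flag would ideally travel with the file so that products further downstream can also detect and respect it.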
Notably, while the CBC is not a Builder, its decision as a potential Distributor not to proceed with synthetic media for a storytelling use case stemmed from the lack of responsibility taken on this task by the software provider, pointing to how Distributors may rely on the content and data decisions made by Builders when considering whether to create and distribute synthetic content.
Just like with more canonical content moderation conducted by Distributors, any moderation should be conducted transparently and mindfully, so as to not stifle innovation and creative expression by those using these tools. We recommend that Builders making content moderation decisions at that stage of development document their actions and disclose their practices, and note that many policies include content moderation transparency stipulations (and they should continue being refined).
Many cases talked about the need to balance creative expression and safety/harm mitigation. Some provided discrete examples of content that blurs the line between these categories, while others offered only broad acknowledgment of the common tension between these values.

TikTok emphasized their goal of supporting creative expression alongside harm mitigation. WITNESS analyzed the ways in which the creative and the harmful might blur, describing how a specific creative project intended to “stir the conscience” could also create unintended harm, noting “a serious possibility that artistic projects that lack prior consent and/or fail to clearly communicate their synthetic nature to audiences [can cause unintentional harm]”; Respeecher underscored how they maintain a role as a company devoted to creativity that often serves the entertainment industry and supports accessibility, while also exploring how they acknowledge, and then seek to mitigate, the harmful impacts of synthetic media. Adobe described safety mechanisms in their models that also serve those looking to create using their technology. Even Bumble discussed the often blurry line between those using synthetic media to create fraudulent profiles to defraud users and a non-malicious use like “a member [uploading] a photo of themselves to their profile that has been digitally altered to show them in a location they’ve never been to before.” Synthesia highlights the “gray zone” as part of their analysis, including examples of such content related to sexual health and cryptocurrency contexts.
The cases that explicitly explain the specific gray areas, rather than overarchingly describing this tradeoff as a concept, help the field understand tradeoffs and how decisions are being made at institutions that implicate the distribution and spread of speech. They have several benefits: they serve as a model for other institutions looking for guidance around exact or analogous scenarios, support broader openness by institutions in this sector, and help users and audiences navigate interactions with the institution in a more informed way.
While it is difficult for institutions to build out a comprehensive set of all of the decisions they have made related to gray area cases, a best practice approach to sharing edge cases and tricky calls must be pursued to ensure that the field is adequately balancing creative expression and harm mitigation. And, of course, different institutions and individuals may have varied perspectives on the appropriate balance between these two considerations. Further, while we encouraged institutions to ground cases in real-world examples of these gray areas, to begin building up these more specific case resources, it will likely take more than this voluntary case study exercise to ensure they are shared at scale, and over time.
Cases that primarily focus on this theme: Adobe, BBC, CBC, OpenAI
If Builders implemented more consistent and standardized indirect disclosures (signals, not themselves user-facing, that convey whether a piece of media is AI-generated or AI-modified based on information about the content’s origin), Distributors would have clearer indications that content has been AI-generated, and thus could moderate more easily and support content transparency.

Take, for example, Adobe’s exploration of Content Credentials. Content Credentials are signals that allow consumers of content to understand the origins of and changes made to digital files; they are built on the C2PA standard, incorporating both invisible watermarking and cryptographically signed metadata. At present, such protocols are baked into Adobe Firefly (and, as of this year, OpenAI’s DALL·E), thereby enabling social media platforms and content distributors to know when content has been synthesized using those technologies, a step in the right direction for wider adoption.
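To illustrate the general idea behind cryptographically signed provenance metadata, here is a simplified sketch in which a tool maker signs a small disclosure manifest and a distributor verifies it later. This is not the actual C2PA data model or Content Credentials API (the real standard binds a much richer manifest to the asset itself and relies on certificate chains); the manifest fields below are hypothetical, and the example uses an Ed25519 signature from the `cryptography` library purely for illustration.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical provenance manifest recording that an asset was AI-generated.
manifest = {
    "generator": "example-image-model",
    "ai_generated": True,
    "created": "2024-03-01T12:00:00Z",
}
payload = json.dumps(manifest, sort_keys=True).encode()

# The Builder signs the manifest at creation time...
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(payload)

# ...and a Distributor holding the matching public key can later confirm that
# the disclosure metadata was not stripped or altered in transit.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, payload)
    print("provenance manifest intact")
except InvalidSignature:
    print("manifest altered or signature invalid")
```

The design point the cases emphasize is interoperability: a signal like this is only useful downstream if Builders emit it consistently and Distributors agree on how to read it.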
Baking in such signals of indirect disclosure at the model development stage could also support those distributing content who must deal with identifying harmful synthetic media.
Bumble explicitly describes how such shared standards for indirect disclosure could support them, as a potential Passive Distributor of synthetic media: “[The C2PA standard] would solve detection issues outlined [in the case] and establish trust in the image at every step — all the way from creation to when it’s uploaded on a platform. However, this approach would require industry-wide support in order to reliably use it, as well as an invaluable and forward-thinking proof of concept.”
TikTok, another Distributor, echoes this sentiment: “If Builders would implement more content provenance/metadata or watermarking techniques in their models, it would greatly benefit our detection and labeling efforts.”
While there will always be bad actors ignoring such guidance, and artifact-level signals are only one part of media literacy, these realities should not paralyze the field into passivity; Builders and Creator platforms should adopt indirect disclosures to support Distributors adjudicating content, thereby mitigating harm.
How the field communicates about the impact of methods for evaluating content is just as important as their technical robustness and design. Several cases explored disclosure methods for supporting audience understanding of whether content has been AI-generated, often through labels attached to individual pieces of content. However, many institutions also highlighted how labels applied to specific artifacts did not just have an impact in that particular instance, but were also related to broader societal attitudes and understanding of AI. For instance, societal understanding of what it means to manipulate media, concern that content is synthetic, or belief that labels are applied inaccurately might affect the impact a specific label attached to a particular artifact has on audiences. This underscores how vital broader literacy and educational campaigns are to field-wide efforts to uphold the truth and to mitigate harm from synthetic media.
For example, OpenAI recognized that any decisions they made about image provenance signals would exist amidst a context where policymakers and the public might be overconfident in the accuracy and utility of such signals. Furthering materials and public education about the limitations of indirect disclosure methods is a vital prerequisite for their widespread adoption in a manner that serves the public interest. It is also a variable that can affect institutional decision making when implementing different synthetic media governance tactics.
Adobe, writing about their experience designing their Content Credentials, also underscored the ways in which public education is vital for artifact-level interventions to work. They highlight the need for future details on “how to accurately create a meaningful and comprehensive disclosure,” particularly since AI-generated modifications, especially those that are low stakes and do not mislead or cause harm, will soon be so ubiquitous that their prevalence could affect how labels, and the absence of labels, signal content credibility or authenticity.
TikTok further emphasized the relationship between artifact-level interventions and broader education, stating, “our disclosure efforts cannot be separated from our efforts to be transparent with our users about what content is created with AI, and to provide users with information and guidance around why we label AIGC, and why we ask them to do the same.”
Lack of public education about AI capabilities affected how the CBC, for example, chose to proceed when exploring synthetic media implementation in its reporting for a story that required anonymizing a subject. In other words, partially because audiences were not yet comfortable with and well-versed in what synthetic media is and is not, the CBC was understandably reluctant to implement synthetic media in the newsroom; in its view, broader public literacy would be needed before experimenting more readily with AI technologies moving forward. Notably, the BBC did not consider this to be a concern when adopting AI-driven privacy methods for storytelling about Alcoholics Anonymous.
Many in the field agree that we need broader public education, but how society learns about AI, and the impact of such efforts, is rarely described in detail. Based on PAI’s previous research on how societal attitudes towards manipulated media labels are often connected to the public’s understanding of the institutions involved in their deployment, we are interested in future work that engages with civic institutions — e.g., libraries — and other spaces inhabited by trusted intermediaries that would support audience education about AI. Further, community-centric disclosures that do not get applied solely by technology platforms and large institutions might support greater trust in AI literacy and labels.
Of course, companies Building, Creating, and Distributing synthetic content still have a role to play in educating their audiences about synthetic content and direct disclosures, and they should do so openly and share access to data about the impact of different direct disclosure and education approaches.
Connected to Best Practice 2, one of the major mitigations for ensuring that the line between creative content and harmful content does not blur involves disclosure. Even artistic examples of synthetic media should require disclosure by default, though such disclosures should ultimately preserve, rather than threaten, artistic expression and the creative process.

TikTok and Adobe notably described the development of methods that enable creators to disclose that content has been AI-generated. For TikTok, this is a toggle that creators can use to self-disclose that content has been AI-generated; in the case of Adobe, it takes shape through Content Credentials. WITNESS’ case describes how such disclosure should accompany creative projects developed with synthetic media in order to mitigate the unintended consequences of such content, concepts they have elaborated upon previously.
Respeecher’s case explained how labeling is useful for creative content but must also not come at the expense of creative expression; as they note, for creative contexts like art and entertainment, “overt labeling of a character’s voice as synthetic may detract from the user experience, [and] creators have expressed concerns that such labels could disrupt narrative immersion or artistic expression.”
The BBC acted as a Creator and Distributor of synthetic media for privacy preservation by obfuscating the faces of subjects in a documentary on Alcoholics Anonymous. They included two forms of disclosure for that project: at the beginning, the narrator disclosed aloud that the project used synthetic media, and whenever a subject appeared on screen, they were accompanied by a caption disclosing that the image was AI-modified. Other projects that have employed privacy preservation via synthetic media, like the documentary film Welcome to Chechnya, used halos above the heads of synthetically altered subjects to convey that they had been edited using AI. Creators can therefore consider labeling as part of their creative act.
Respeecher meaningfully highlights the tension that may exist between transparency values and storytelling efforts that require suspension of disbelief. However, taken together, the cases imply a broader benefit to labeling content when, per much of WITNESS’ work, it is done in a manner that does not detract from the goals of the creative pursuit. Ultimately, creative uses of synthetic media should be labeled in a manner that does not jeopardize the storytelling or artistic goals of the project.
Cases that primarily focus on this theme: D-ID, WITNESS, PAI
Consent proved challenging for many institutions across cases. While legal boundaries offer some guidance, responsible creation requires more than achieving the legal bare minimum around topics like intellectual property, and the Framework begins to provide this guidance. WITNESS suggested that consent is even more vital when real people are depicted, advocating for an amendment to the Framework that emphasizes the benefit of “seeking consent when the likeness of real people is directly involved in the input or output of the AI-generation process.” They go on to highlight that this should not be mandatory, since there are “some circumstances in which consent may not be pertinent, feasible, or even needed.”

The WITNESS case, alongside the D-ID case, dealt with creative projects involving real people who could not provide consent, either because they were no longer alive or because they had been kidnapped, and both provide insight into how to navigate this scenario.
D-ID, writing about a particularly sensitive context (domestic violence), spoke with the immediate family of the featured individual, who was no longer alive. Of course, they first needed to deem the social impact goals of educating the public about domestic abuse via the project worth the potential emotional tumult of reaching out to families. They even went a step further to bolster consent, allowing the families to actively participate in “co-creating the content and scripts” for the development of the media. This takes informed and active consent to the next level: consent not just to the fact that a creator is using the likeness of their kin, but to how that likeness is being used.
The WITNESS case also offers guidance for how creators can navigate consent when subjects have been kidnapped or killed. As they note, “although there is no clear-cut way to know the preferences of the deceased or missing, contacting relatives, a person’s estate, or next-of-kin could be a proactive step in that direction.” This approach has been adopted in prior situations, for example by Propuesta Cívica when they constructed a deepfake of murdered journalist Javier Valdez. Interestingly, this example relied upon footage from an archive, raising questions about whether an archive can grant a creator consent to use footage of the individuals depicted within it, serving as a proxy for those individuals’ families. Archives of the future might consider stipulations for those submitting material that address whether or not the archive can be used for creating synthetic media.
For more details on best practices for informed consent for audio-visual content more broadly, see this 2-page guide from WITNESS. Ultimately, Creators using synthetic media for expressive purposes should seek consent, especially when their projects feature real people, and even if those real people themselves cannot grant consent.
As WITNESS has noted in a previous report, “for many democratic societies with a tradition of free speech, an individual’s ‘public’ or ‘private’ status is important when considering whether their consent is necessary before they become the target of a cultural work. Somebody whose words and actions are of legitimate public interest and concern is generally deemed to merit less control over their likeness than an everyday private citizen.”

PAI’s case study brought this premise into focus, helping provide insight into a thorny question posed in the WITNESS report: in what cases, if any, is consent needed to target individuals in positions of power? The PAI case focused on instances of synthetic media depicting public, political figures around elections, including one that was informational and ostensibly received the figure’s consent, and others that did not. While these examples were not satirical, they did feature politicians, individuals whom people should be able to deepfake in order to satirize, but not to, as the PAI Framework suggests, “[Manipulate] democratic and political processes, including deceiving a voter into voting for or against a candidate, damaging a candidate’s reputation by providing false statements or acts, influencing the outcome of an election via deception, or suppressing voters.”
The public status of the politicians in the PAI elections case highlights the ways in which consent might take shape differently depending on the type of political speech one is producing with synthetic media. For example, in the U.S., a jurisdiction with very pronounced speech protections, there are clear categories, like interfering with election processes, that are outright limited. The Biden robocall example featured in the PAI case is clear because it featured misleading content describing inaccurate processes for voting. It is also possible, though, to imagine a satirist producing a deepfake video of Joe Biden making fun of his gaffes by depicting him in the Oval Office giving a speech, including content that touches upon topics related to voting practices, thereby presenting a less clear-cut scenario than the actual robocall example. This could indeed be satirical, but it could also be used as satirical cover by those looking to mislead the electorate on where to vote. While bad actors might not follow guidance around consent, and in the case of Biden, power and public figure status are very clear-cut, those looking to satirize public figures should consider power, the status of the individual, and potential harm when determining consent practices. Doing so can support harm mitigation without stifling the project.
The PAI case meaningfully notes, though, that those Building synthetic media, like OpenAI, have implemented content moderation practices in text-to-image software like DALL·E that prevent individuals from creating synthetic media depicting public figures, like Barack Obama. This likely derives from OpenAI’s risk assessment of the harmful consequences of content creation, but it notably stifles creative and satirical expression too. Greater transparency about how OpenAI and others who enact similar filters and content refusals weighed variables like public vs. private figure status, risk of harm (via threat models), and power against creative potential would support the responsible use of synthetic media. Downstream, Creators must consider these variables when determining consent practices for synthetic content.
The case studies in this collection offer the AI field greater transparency into synthetic media governance, highlighting how PAI’s Responsible Practices for Synthetic Media can be applied, augmented, expanded, and refined for use in practice. While we plan to conduct more detailed follow-up about the case process and the lessons learned for multistakeholder AI governance, we reflect briefly here on several aspects of the governance process, including accountability, transparency, adaptability, and complexity.
Voluntary frameworks for AI governance are often (understandably) critiqued for providing a facade of rigor while demanding little real commitment. Many have written on the attempts by technology companies in particular to tout voluntary governance that serves their interests in order to stave off government regulation. This is often true.
At the same time, our years of work on synthetic media have made clear that, in the absence of specific government regulation on synthetic media that can keep pace with the field’s development, and given the appetite from stakeholders across sectors for guidance on synthetic media practices informed by an ecosystem perspective, PAI could provide a basis for how institutions across the AI field consider and behave around values like transparency, digital dignity, safety, and expression. This could also provide a foundation of policies that have been tested in practice and that can inform regulatory momentum.
Enforcing a reporting requirement was one way for us to remedy the typical lack of accountability for voluntary governance frameworks. We were honest about our inability to strictly mandate guidelines, but we could enforce adherence to providing case studies, in which institutions offer transparency about how they are approaching our guidance. We hoped that doing so might not only deepen adherence to our practices and principles across Framework supporters, but also provide transparency about how they did so, giving civil society and the field at large foundational material to support holding institutions to account.
In the future, we hope to consider how to enable civil society organizations beyond PAI to pressure test the cases and advocate for more specific details from the case writers in media and industry.
These eleven examples provide a rich tapestry of the challenges and opportunities synthetic media governance presents. We were struck by the variety across cases. Some include specific artistic examples, while others focus on broad tradeoffs that implicate AI model development, or specific considerations of news organizations using synthetic media. While they cannot cover the entire surface area of synthetic media impacts, by providing a body of, in essence, case law for synthetic media, we offer the field a starting point for navigating their own synthetic media challenges. For example, if one is navigating a creative project that deals with posthumous consent, they can consult the D-ID or WITNESS cases.
Notably, these cases required enormous effort and time across PAI staff and Framework supporters, and we are interested in developing methods for collecting cases and instances of synthetic media decision making that might not require long-form writing — something akin to the AI Incident Database that was created at PAI. Starting with the level of depth exemplified in the cases, though, provides a useful foundation for understanding the complexity of case examples featuring synthetic media challenges and opportunities, and also allows us to put them in context and dialogue with other actors in the synthetic media pipeline.
Another benefit of the case process was pointing out ways that the Framework can be augmented or adapted over time. A key principle of the Framework’s launch was that, in direct response to the rapid pace of AI development, we would revise the Framework. Several details emerged throughout the case reflection process that will inform future versions of the Framework, including but not limited to:
The transparency afforded by these cases is a step in the right direction for the field. Most directly, the cases reveal instances of synthetic media development, creation, and distribution that shed light on institutional practices and tactics. In addition, the manner in which the institutions described and analyzed their decision making, and chose to share it, offers its own form of transparency into institutional practice.
One of the benefits, and challenges, of an open-ended case template was that institutions had quite a bit of flexibility in how they could focus and describe their cases; one could focus on something as broad as general policy development or as specific as a particular gray area case that prompted debate, with varying levels of detail (though PAI pushed emphatically for more detail across cases, with methods we will describe in more depth in future reporting). This flexibility was both practical (enabling us to learn more about how institutions would respond to our first foray into case studies of this sort) and useful (since we were interested in learning more about many levels of implementation of Framework principles and practices).
We were particularly heartened by the cases that offered frank introspection, acknowledging when the institution changed course and meaningfully describing why, as in OpenAI’s account of how it navigated its text detection decision making. This is the type of honest reflection we hope to promote, stylistically and substantively, in all future versions of the cases.
One of the trickiest realities of the case study effort is that universal themes emerged, but so too did distinct, case-specific elements.
Ecosystem actors face similar value tradeoffs regardless of their positions in the synthetic media pipeline, but their specific institutional considerations, and even specific case considerations, need to guide their responses to those tradeoffs. This makes the job of those creating frameworks that move beyond merely stating “do no harm,” and thus apply across specific cases and sectors, quite tricky: they need to balance degrees of flexibility and specificity that prove useful for the real-world examples the field encounters. Our hope is that this exercise meaningfully highlights the complexities of synthetic media governance while also producing tangible recommendations that work across cases and underscore the utility of an ecosystem approach. While these cases are focused on synthetic media, they touch vast societal dynamics, ranging from freedom of speech and the meaning of harm to transparency, creative endeavor, and consent, topics that each warrant their own specific analysis.
The utility of a case exercise, then, lies not only in the coherent themes across cases, but also in the distinct facets that take shape in individual cases. Thus, we encourage institutions to pay attention to their distinct considerations when making decisions about synthetic media governance. Meaningful synthetic media governance should be useful for specific institutions as well as for the broader ecosystem of institutions and stakeholders.
Government regulation and policy are key complements to the Synthetic Media Framework and governance activities at PAI more broadly. Our hope is that policymakers not only learn from the emergent best practices in these cases, but also consider:
We plan to report in more depth on PAI’s analysis of the case study process soon. In the coming months, PAI will be working to analyze and refine this case study process for the eight additional institutions that have joined the Framework. Through our engagement with policymakers, including the AI Safety Institute at NIST in the U.S., we will be sharing insights from these case studies and this exercise in synthetic media governance with the policy community. Further, we hope to drill deeper into some of the open questions underscored in the cases, just as we further operationalized key elements of the Framework, like indirect disclosure methods, through multistakeholder convening and collaboration.
We look forward to sharing more insights about the cases, how they were developed, and how they have impacted the field in the coming months. If you’re interested in learning more about the PAI Synthetic Media Framework, please sign up here.