Over the past few years, we’ve seen AI-generated content move from experimental research in computer science labs to one of the engines of digital content creation.
Synthetic media offers society significant opportunities for responsible and creative use. However, it can also cause harm. As the technology becomes increasingly accessible and sophisticated, its potential for harm, as well as for responsible and beneficial use, increases. As this field matures, synthetic media creators, distributors, publishers, and tool developers need to agree on and follow best practices.
With the Framework, AI experts and industry leaders at the intersection of information, media, and technology are coming together to take action for the public good. This diverse coalition has worked together for over a year to create a shared set of values, tactics, and practices to help creators and distributors use this powerful technology responsibly as it evolves.
PAI’s Responsible Practices for Synthetic Media offers recommendations for three categories of stakeholders contributing to the societal impact of synthetic media:
Based around the core concepts of consent, disclosure, and transparency, the Framework outlines key techniques for developing, creating, and sharing synthetic media responsibly.
Along with stakeholder-specific recommendations, the Framework asks organizations to:
PAI’s Responsible Practices for Synthetic Media is a living document. While it is grounded in existing norms and practices, it will evolve to reflect new technology developments, use cases, and stakeholders. Responsible synthetic media, infrastructure development, creation, and distribution are emerging areas with fast-moving changes, requiring flexibility and calibration over time. PAI plans to conduct a yearly review of the Framework and also to enable a review trigger at any time as called for by the AI and Media Integrity Steering Committee.
The Partnership on AI’s (PAI) Responsible Practices for Synthetic Media is a set of recommendations to support the responsible development and deployment of synthetic media.
These practices are the result of feedback from more than 100 global stakeholders and build on PAI’s work over the past four years with representatives from industry, civil society, media/journalism, and academia.
With this Framework, we seek to:
The intended stakeholder audiences are those building synthetic media technology and tools, or those creating, sharing, and publishing synthetic media.
Several of these stakeholders will launch PAI’s Responsible Practices for Synthetic Media, formally joining this effort. These organizations will:
PAI will not be auditing or certifying organizations. This Framework includes suggested practices developed as guidance.
Synthetic media presents significant opportunities for responsible use, including for creative purposes. However, it can also cause harm. As synthetic media technology becomes more accessible and sophisticated, its potential impact also increases. This applies to both positive and negative possibilities — examples of which we only begin to explore in this Framework. The Framework focuses on how to best address the risks synthetic media can pose while ensuring its benefits are able to be realized in a responsible way.
Further, while the ethical implications of synthetic media are vast, implicating elements like copyright, the future of work, and even the meaning of art, the goal of this document is to target an initial set of stakeholder groups identified by the PAI AI and Media Integrity community that can play a meaningful role in: (a) reducing the potential harms associated with abuses of synthetic media and promoting responsible uses, (b) increasing transparency, and (c) enabling audiences to better identify and respond to synthetic media.
For more information on the creation, goals, and continued development of PAI’s Responsible Practices for Synthetic Media, see the FAQ.
Those building technology and infrastructure for synthetic media, creating synthetic media, and distributing or publishing synthetic media will seek to advance ethical and responsible behavior.
Here, synthetic media, also referred to as generative media, is defined as visual, auditory, or multimodal content that has been generated or modified (commonly via artificial intelligence). Such outputs are often highly realistic, would not be identifiable as synthetic to the average person, and may simulate artifacts, persons, or events. See Appendix A for more information on the Framework’s scope.
PAI offers recommendations for different categories of stakeholders with regard to their roles in developing, creating, and distributing synthetic media. These categories are not mutually exclusive. A given stakeholder could fit within several categories, as in the case of social media platforms. These categories include:
Responsible categories of use may include, but are not limited to:
These uses often involve gray areas, and techniques for navigating these gray areas are described in the sections below.
The following techniques can be deployed responsibly or to cause harm:
For examples of how these techniques can be deployed to cause harm and an explicit, nonexhaustive list of harmful impacts, see Appendix B.
Those building and providing technology and infrastructure for synthetic media can include: B2B and B2C toolmakers; open-source developers; academic researchers; synthetic media startups, including those providing the infrastructure for hobbyists to create synthetic media; social media platforms; and app stores.
Disclosure can be direct and/or indirect, depending on the use case and context:
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation. (Note: The ability to add durable disclosure to synthetic media is an open challenge where research is ongoing).
Those creating synthetic media can range from large-scale producers (such as B2B content producers) to smaller-scale producers (such as hobbyists, artists, influencers, and those in civil society, including activists and satirists). Those commissioning and creative-directing synthetic media can also fall within this category. Given the increasingly democratized nature of content creation tools, anyone can be a creator and have a chance for their content to reach a wide audience. Accordingly, these stakeholder examples are illustrative but not exhaustive.
Disclosure can be direct and/or indirect, depending on the use case and context:
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.
Those distributing synthetic media fall into two groups: institutions with active, editorial decision-making around content, which mostly host first-party content and may distribute editorially created synthetic media and/or report on synthetic media created by others (i.e., media institutions, including broadcasters); and online platforms with more passive displays of synthetic media, which host user-generated or third-party content (i.e., social media platforms).
Disclosure can be direct and/or indirect, depending on the use case and context:
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.
Channels (such as media institutions) that mostly host first-party content and may distribute editorially created synthetic media and/or report on synthetic media created by others.
Channels (such as platforms) that mostly host third-party content.
20. Clearly communicate and educate platform users about synthetic media and what kinds of synthetic content are permissible to create and/or share on the platform.
While this Framework focuses on highly realistic forms of synthetic media, it recognizes that the threshold for what is deemed highly realistic may vary based on an audience’s media literacy and across global contexts. We also recognize that harms can still be caused by synthetic media that is not highly realistic, such as in the context of intimate image abuse. This Framework has been created with a focus on audiovisual synthetic media, otherwise known as generative media, rather than synthetic text, which presents different benefits and risks. However, it may still provide useful guidance for the creation and distribution of synthetic text.
Additionally, this Framework only covers generative media, not the broader category of generative AI as a whole. We recognize that these terms are sometimes treated as interchangeable.
Synthetic media is not inherently harmful, but the technology is increasingly accessible and sophisticated, magnifying potential harms and opportunities. As the technology develops, we will seek to revisit this Framework and adapt it to technological shifts (e.g., immersive media experiences).
List of potential harms from synthetic media we seek to mitigate:
Synthetic media, also referred to as generative media, is visual, auditory, or multimodal content that has been artificially generated or modified (commonly through artificial intelligence). Such outputs are often highly realistic, would not be identifiable as synthetic to the average person, and may simulate artifacts, persons, or events.
PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action is a set of recommendations to support the responsible development and deployment of synthetic media. The intended audiences are those creating synthetic media technology and tools or creating, sharing, and publishing synthetic media content. The Framework builds on PAI’s work over the past four years with industry, civil society, media/journalism, and academia to evaluate the challenges and opportunities for synthetic media.
What PAI is not doing:
Think of this document like a constitution, not a set of laws. We provide recommendations to ensure that the emerging space of responsible synthetic media has a set of values, tactics, and practices to explore and evaluate. This document reflects the fact that responsible synthetic media (and its associated infrastructure development, creation, and distribution) is an emerging area with fast-moving developments requiring flexibility and calibration over time.
We recognize, however, that many institutions collaborating with us are explicitly working in the creative and responsible content categories. In the Framework, we include a list of harmful and responsible content categories, and we explicitly state that this list is not exhaustive, often includes gray areas, and that specific elements of the Framework apply to responsible use cases as well.
This Framework has been created with a focus on visual, auditory, or multimodal content that has been generated or modified (commonly via artificial intelligence). Such outputs are often highly realistic, would not be identifiable as synthetic to the average person, and may simulate artifacts, persons, or events. However, the Framework may still provide useful guidance for the creation and distribution of synthetic text.
Additionally, this Framework focuses on highly realistic forms of synthetic media, but recognizes that the threshold for what is deemed highly realistic may vary based on an audience’s media literacy and across global contexts. We also recognize that harms can still be caused by synthetic media that is not highly realistic, such as in the context of intimate image abuse. In addition, this Framework only covers generative media, not the broader category of generative AI as a whole. We recognize that these terms are sometimes treated as interchangeable.
PAI developed the Responsible Practices for Synthetic Media from January 2022 to January 2023, through:
Development timeline, 2022
One of the expectations of Framework supporters is the submission of a case example, in which the organization reflects on how the Framework can be applied to a synthetic media challenge it has faced or is currently facing. By collecting real-world use cases against which to pressure-test the Framework, we can see how its principles hold up against technological advancements and evolving public understanding of AI-generated and modified content.
Those that join the Framework effort will explore case examples or analysis related to the application of its recommendations as part of the Framework Community of Practice. Over the course of each year, PAI will host convenings where the community applies the Framework to these cases, as well as additional public cases identified by PAI staff. The 11 case studies we published in March 2024 provide industry, policy makers, and the general public with a shared body of case material that puts the Framework into practice. These case studies allow us to pressure test the Framework and to further operationalize its recommendations via multistakeholder input, especially when applied to gray areas. The case studies also provide us with opportunities to identify what areas of the Framework can be improved upon to better inform audiences.
Although regulation and government policy are emerging in the synthetic media space, the Framework exemplifies a type of norm development and public commitment that can help strengthen the connections among the policies, entities, and industries relevant to responsible synthetic media. While we have intentionally limited the involvement of policymakers in drafting the Framework, we have treated its development as a complement to existing and forthcoming regulation, as well as to intergovernmental and organizational policies on AI, mis/disinformation, and synthetic and generative media. For example, we have considered the Framework alongside the EU AI Act, the EU Code of Practice on Disinformation, and the introduction of the Deepfake Task Force Act in the U.S. Following the launch of the Framework, we plan to engage the policy community working on and around AI, mis/disinformation, and synthetic and generative media policy, including through a policymaker roundtable on the Framework in 2023.
PAI worked with over 50 global institutions in a participatory, year-long drafting process to create the current Responsible Practices for Synthetic Media. Participating stakeholders included the broader AI and media integrity field of synthetic media startups, social media platforms, AI research organizations, advocacy and human rights groups, academic institutions, experiential experts, news organizations, and public commenters.
The Framework is not a static document, but a living one. You can think of the Framework like a constitution, not a set of laws, providing the burgeoning generative AI space with a set of guidelines for ethical synthetic media. PAI will revise the Framework each year to reflect new technology developments, use cases, and stakeholders. Part of that evolution will be informed by case examples from the real-world institutions building, creating, and sharing synthetic media. Institutions that join the Responsible Practices for Synthetic Media will provide yearly reports or analysis on synthetic media cases and how the Framework can be explored in practice. These cases will be published and will inform the evolution of synthetic media policymaking and AI governance.
The best practices outlined in PAI’s Synthetic Media Framework will need to evolve with both the technology and the information landscape. Thus, to understand how the principles can be applied in the real world, we required all 18 of the Framework supporters to submit an in-depth case study exploring how they implemented the Framework in practice.
In March 2024, ten Framework supporters delivered case studies, with PAI drafting one of our own. This set of case studies, and the accompanying analysis, focused on transparency, consent, and harmful/responsible use cases.
In November 2024, another five Framework supporters developed case studies, specifically focused on an underexplored area of synthetic media governance: direct disclosure — methods to convey to audiences how content has been modified or created with AI, like labels or other signals — and PAI developed policy recommendations based on insights from the cases.
The cases not only provide greater transparency into institutional practices and decisions related to synthetic media, but also help the field refine policies and practices for responsible synthetic media, including emergent mitigations. Importantly, the cases may also support AI policymaking more broadly, offering insight into how collaborative governance can be applied across institutions.
Read PAI’s analysis of these cases
Read PAI’s policy recommendations from these cases
The case-submitting organizations are a seemingly eclectic group, yet all are integral members of a synthetic media ecosystem that requires a blend of technical and humanistic expertise to benefit society.
Some of those featured are Builders of technology for synthetic media, while others are Creators or Distributors. Notably, while civil society organizations do not typically create, distribute, or build synthetic media (though they may), they are included in the case process: they are key actors in the ecosystem surrounding digital media and online information and must have a central role in developing and implementing AI governance.