#6C Regulating Deepfakes - Realistic? Redundant? Or Risky?: Part III

Note: This post addresses the United States of America exclusively, since laws and regulatory bodies differ internationally.
Artificial intelligence and the creation of deepfakes are running rampant across the World Wide Web, leading many people to question whether regulations can be put in place. If regulations are established, will they be enforceable? Is it pointless to regulate something that you can’t necessarily detect? Is the onslaught of this content too great to surmount? Who has the normative authority to regulate AI?
Looking at these questions from a broad perspective, it may seem hopeless. Nevertheless, we can strike a balance between innovation and regulation by remembering the values we hold in other areas of life and medical practice, and from there create context-specific regulations that take old concepts and apply them to new technologies, like medical deepfakes.
With that, let’s dive into our last installment on deepfakes and consider some ways to create safeguards for the development and use of these tools in healthcare. Check out the previous post, Regulating Deepfakes: Balancing Perspectives in Healthcare, to learn more about the technologies these regulations address.
Data Protections
The chance to use anonymized, deepfake-generated images for research purposes has promising potential to advance treatments, but the safety of patient data needs to be ensured at all costs. The Office for Civil Rights (OCR) is responsible for enforcing HIPAA’s protections and investigating violations, such as breaches of the privacy and security of protected health information (PHI). HIPAA separates Covered Entities (e.g., hospitals) from their Business Associates (vendors/collaborators) but holds both groups accountable for following the rules on PHI use and its limitations. Under the “minimum necessary” standard, users must work with as little PHI as needed to accomplish their goal, which can limit the effectiveness and discovery capacity of the AI by restricting the dataset.
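To make the “minimum necessary” idea concrete, here is a minimal sketch, assuming a hypothetical imaging-record schema, of how a pipeline might strip direct identifiers and coarsen ages before records ever reach a deepfake training set. The field names and whitelist are illustrative only; real compliance decisions belong to privacy officers and IRBs, not to code alone.

```python
# Illustrative sketch only: keeping just the fields a research question needs,
# in the spirit of HIPAA's "minimum necessary" standard. The schema and the
# ALLOWED_FIELDS whitelist are hypothetical, not a compliance guarantee.

ALLOWED_FIELDS = {"modality", "body_region", "finding_label", "age_bucket"}

def minimum_necessary(record: dict) -> dict:
    """Return a copy of the record containing only whitelisted, non-identifying fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def bucket_age(age: int) -> str:
    """Coarsen an exact age into a decade range; ages 90+ are grouped together."""
    return "90+" if age >= 90 else f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"

if __name__ == "__main__":
    raw = {
        "patient_name": "Jane Doe",   # direct identifier: dropped
        "mrn": "123456",              # direct identifier: dropped
        "age_bucket": bucket_age(47),
        "modality": "MRI",
        "body_region": "brain",
        "finding_label": "no acute findings",
    }
    print(minimum_necessary(raw))
    # {'age_bucket': '40-49', 'modality': 'MRI', 'body_region': 'brain', 'finding_label': 'no acute findings'}
```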
To allow for greater use of data, companies making these deepfake medical records have to work under the Institutional Review Boards (IRBs) of their collaborators. While this provides necessary oversight and protection, it also means there may be variability between institutions. For further protection, the OCR should establish a Special Division dedicated to overseeing standardization of PHI use in AI and deepfake development, one with the normative and legal power to enforce penalties, conduct unannounced audits, and take other actions necessary to protect patient data nationwide.
Human-in-the-Loop Verification and Accreditation
If deepfakes will be used as an interface with patients and not just a tool for doctors, the first step is to establish oversight bodies and roles for people who will supervise the deployment of deepfakes in healthcare. This approach is referred to as “human-in-the-loop”: active human participation in final set-up, tuning, and testing that improves AI decision-making outcomes and mitigates risks. Depending on the circumstance, these overseers will need expert knowledge, such as an accredited MD/DO or another practitioner’s degree, to ensure that mistakes and issues can be effectively detected, vetted, and addressed.
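As a thought experiment, the sketch below shows what such a gate could look like in software, assuming a hypothetical review queue in which nothing a deepfake system generates reaches a patient until a credentialed reviewer signs off. The class names and fields are illustrative, not any vendor’s real API.

```python
# Minimal human-in-the-loop sketch (hypothetical workflow, Python 3.10+):
# every generated clip is held for review by an accredited clinician.

from dataclasses import dataclass, field

@dataclass
class GeneratedClip:
    clip_id: str
    patient_id: str
    approved: bool = False
    reviewer: str | None = None

@dataclass
class ReviewQueue:
    pending: list[GeneratedClip] = field(default_factory=list)
    credentialed_reviewers: set[str] = field(default_factory=set)  # e.g., licensed MD/DO IDs

    def submit(self, clip: GeneratedClip) -> None:
        """Model output is never released directly; it waits here for human review."""
        self.pending.append(clip)

    def approve(self, clip_id: str, reviewer: str) -> GeneratedClip:
        """Only a credentialed reviewer can release a clip to the patient-facing system."""
        if reviewer not in self.credentialed_reviewers:
            raise PermissionError(f"{reviewer} is not an accredited reviewer")
        clip = next(c for c in self.pending if c.clip_id == clip_id)
        clip.approved, clip.reviewer = True, reviewer
        self.pending.remove(clip)
        return clip

# Usage sketch:
# queue = ReviewQueue(credentialed_reviewers={"dr_smith_md"})
# queue.submit(GeneratedClip("clip-001", "patient-42"))
# released = queue.approve("clip-001", "dr_smith_md")
```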
Regulations for human-in-the-loop credentials and personnel requirements can be instituted by the regulatory boards overseeing the hospitals that would implement these technologies. This could be enforced by bodies like the Centers for Medicare & Medicaid Services (CMS) and the Occupational Safety and Health Administration (OSHA), which provide the standards on which medical license accreditors base their assessments. CMS and OSHA could require human-in-the-loop oversight and personnel vetting in the deployment of these systems, which would in turn affect whether accreditation is granted, leading hospitals either to comply or to avoid using this tech.
Additionally, companies producing these deepfakes can be held accountable through legal liability. If a company is liable when a deepfake malfunctions or is used to spread misinformation, it will make sure, both in its own hiring and in its contracts with the organizations using its software, that human-in-the-loop oversight is in place and that new hires are vetted and approved by people with the relevant expertise.
Informed Consent and Abstinence
Patients should still be able to express preferences about the interpersonal side of their care. Standards should be in place so that a “deepfake doctor” option can be made available, but it should never be the first or only option. Patients should also be notified if translation deepfakes are being used, and they should be asked to consent to this technology. The consent interface can explain the benefits that deepfake translation offers for clear communication with the provider, but since the patient’s lips and voice are being digitally manipulated to achieve this, the patient should be told whether their facial biometrics and the manipulated image/video will be retained long-term as training data, how that data will be protected, and whether it will be deleted after the appointment. Deepfake translation would be a great step forward for health literacy, but the data collection involved must be explicitly disclosed and consented to at the beginning of care delivery. This can be guided by the Department of Health and Human Services’ standards for informed consent and the Health Information Technology for Economic and Clinical Health (HITECH) Act.
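As one way to picture it, here is a minimal sketch of the disclosures such a consent interface might record. The schema is hypothetical; in practice the fields and wording would need to be aligned with HHS informed-consent standards and HITECH requirements, not improvised by developers.

```python
# Hypothetical consent record for deepfake translation; field names are illustrative.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DeepfakeTranslationConsent:
    patient_id: str
    deepfake_translation_offered: bool   # patient told the feature exists and is optional
    biometrics_used_for_training: bool   # will face/voice data be retained as training data?
    data_protection_summary: str         # how the manipulated media will be protected
    deleted_after_appointment: bool      # or retained, per the stated retention policy
    consent_given: bool
    timestamp: str = ""

    def __post_init__(self):
        # Record when consent was captured, before care delivery begins.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

if __name__ == "__main__":
    record = DeepfakeTranslationConsent(
        patient_id="example-001",
        deepfake_translation_offered=True,
        biometrics_used_for_training=False,
        data_protection_summary="Encrypted at rest; access limited to the care team.",
        deleted_after_appointment=True,
        consent_given=True,
    )
    print(asdict(record))
```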
Final Thoughts
Deepfakes are here and they are viral! As harrowing as that is, deepfake technology can still be used in ways that help patient-physician communication, improve medical research, and offer new opportunities for patient-specific care. We must remember, however, that deepfakes are not autonomous. They cannot assume legal liability or ethical responsibility; they do not have agency; and they do not have the means to remedy harms. They are only a tool. It is our duty, then, to ensure that the humans who have created and/or are implementing deepfakes are held accountable if harm is inflicted by these technologies, whether by the regulations we have now or the ones we still need to create.
Under the current regulatory landscape, AI companies developing these devices need to be held to a higher standard of liability and accountability as “AI healthcare agents” begin serving in patient-facing roles. Regulatory incentives, both positive and negative, such as penalties and audits, will encourage companies either to produce the best product possible or to give a project up in order to mitigate harms. It is not worth risking the physical, mental, and emotional well-being of patients on a poor product, which means oversight is necessary. The Hippocratic Oath of every physician, “do no harm,” should be the same oath taken by these developers, and with it the same liability for medical malpractice. Under this common mission, this common responsibility, and this common liability, deepfakes for healthcare will allow us to advance medical research and serve patients in an affordable, accessible, and accountable way.
Author

— by Alyssa Montalbine, MS, Graduate Student, Emory and Georgia Tech, 10/2025
_______________________
Continue the conversation! Please email us your comments to post on this blog. Enter the blog post # in your email Subject.