
individual or provide a reasonable basis to believe it can be used to identify the individual. Such information held or transmitted in any form or media by a “covered entity” or “business associate” is protected under the privacy rule. A “covered entity” under the privacy rule means a health plan, a health care clearinghouse or a health care provider that transmits any health information in electronic form in connection with certain transactions under the rule.6 “Business associates” include persons or organizations that perform functions on behalf of covered entities where such functions involve the use or disclosure of PHI.7

GENERAL USE RESTRICTIONS ON PROCESSING PHI

In the absence of express patient consent for specific processing activities, the privacy rule generally permits a covered entity to use or disclose PHI only for treatment, payment or health care operations as set forth in the regulations, with some limited exceptions.8 Treatment is broadly defined as the provision, coordination or management of health care and related services.9 Payment includes a range of activities, from health plan coverage activities and risk adjusting to billing, claims adjustment, collection activities and receipt of payment.10 Health care operations include, among other things, a host of internal (and sometimes external) quality assessment and improvement activities.11 The permissible uses and disclosures may be for purposes of the covered entity’s own activities or for those of another covered entity having a relationship with the individual whose information is used or disclosed.12 Covered entities are generally limited to processing PHI for these purposes unless a patient consents in writing to other uses of their PHI.13

Many uses of AI can be adequately categorized as a treatment, payment or health care operation because the regulations are agnostic as to the manner of permissible processing. However, some AI use cases do not fall squarely within those categories. One example arises where PHI from multiple patients is combined into a single data set that is then used to train AI to perform automated diagnostic functions. Another arises where such data sets are used to create data analytics14 for purposes unrelated to treatment, payment or health care operations, such as finding potential participants in a clinical study or trial. For such uses, it may be necessary to de-identify the data before it is processed.
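The general rule just described can be pictured as a decision gate. The following minimal Python sketch is illustrative only, not legal advice: the purpose labels and function name are assumptions invented for this example, and the rule's many other exceptions and conditions are omitted.

```python
# Illustrative sketch only, not legal advice. A coarse gate encoding the
# general rule above: properly de-identified data falls outside HIPAA's
# privacy rule, while PHI may be processed for treatment, payment or
# health care operations, or with the patient's written authorization.
# The purpose labels are hypothetical, and other exceptions are omitted.

PERMITTED_PURPOSES = {"treatment", "payment", "health_care_operations"}

def may_process(purpose: str, written_authorization: bool,
                deidentified: bool) -> bool:
    """Rough first-pass screen for a proposed data use."""
    if deidentified:
        return True  # no longer PHI once properly de-identified
    return purpose in PERMITTED_PURPOSES or written_authorization

# Training a diagnostic model on raw multi-patient PHI without written
# authorization fails the screen; de-identifying the data first passes it.
print(may_process("ai_model_training", False, False))  # False
print(may_process("ai_model_training", False, True))   # True
```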
DE-IDENTIFICATION OF PHI

Generally, “de-identification” is a process whereby all personally identifiable information, and information that could be used to identify an individual, is removed from the data set to be processed. De-identified health information may typically be used or shared without restriction because such data is no longer considered PHI under HIPAA regulations; it generally neither identifies nor provides a reasonable basis to identify an individual.15 HIPAA regulations permit the de-identification of PHI through either formal determination by a qualified expert or the removal of specified identifiers.16

The Expert Determination Method

The first method requires a qualified individual with appropriate knowledge of and experience with statistical and scientific principles for rendering information not individually identifiable. The expert must apply such methods to determine that the risk is “very small” that the information could be used to identify an individual and must document the methods and results justifying that determination.17 Experts need not be connected to the health care field and may come from statistical, mathematical or other scientific domains.18 Assessment of identification risk is fact-dependent, and an acceptably “very small” risk turns on the ability of an anticipated recipient to identify an individual in the circumstances at hand. Principles relied on to determine risk include replicability, data source availability and distinguishability.19 The expert often examines the degree to which a data set can be “linked” to a data source that reveals the identity of the corresponding individuals.20 At times, the expert may recommend modifying a data set to mitigate the risk, such as adjusting certain features or values in the data so that no unique, identifiable elements remain.21
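Expert determination is a documented statistical judgment rather than a single algorithm, but one of the risk principles above, distinguishability, can be made concrete in code. The Python sketch below is an assumption-laden illustration: the field names are hypothetical, and a real assessment weighs replicability and data source availability as well.

```python
# Illustrative sketch only: one input to an expert's risk assessment is
# distinguishability - how many records share each combination of
# quasi-identifying fields. Field names here are hypothetical.
from collections import Counter

QUASI_IDENTIFIERS = ("birth_year", "sex", "zip3")  # assumed generalized fields

def smallest_group_size(records: list[dict]) -> int:
    """Size of the rarest quasi-identifier combination. A result of 1
    means some record is unique on these fields, so an anticipated
    recipient could more easily link it to an identified data source."""
    groups = Counter(tuple(r[f] for f in QUASI_IDENTIFIERS) for r in records)
    return min(groups.values())

records = [
    {"birth_year": 1961, "sex": "F", "zip3": "731", "dx": "E11.9"},
    {"birth_year": 1961, "sex": "F", "zip3": "731", "dx": "I10"},
    {"birth_year": 1947, "sex": "M", "zip3": "740", "dx": "J45"},
]
print(smallest_group_size(records))  # 1 -> the third record is unique
```

A smallest group size of 1 signals a record that stands alone on these fields; the expert might then recommend the kind of modification described above, such as generalizing birth_year into a range, before accepting the risk as “very small.”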

The Safe Harbor Method

The second, “safe harbor” method requires the removal of 18 specific identifiers22 (see the sidebar), in combination with the covered entity having no “actual knowledge” that any remaining information could be used to re-identify an individual.23 The identifiers of the individual, as well as those of the individual’s relatives, employers or household members, must be removed. “Actual knowledge” in this context means “clear and direct knowledge that the remaining information could be used, either alone or in combination with other information, to identify an individual who is a subject of the information.”
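As a rough illustration of the safe harbor mechanics, the following Python sketch drops identifier fields from a record. The field names are hypothetical and cover only some of the 18 categories listed in the sidebar; a real pipeline must address every category, including identifiers embedded in free-text notes, and the “actual knowledge” condition is a human judgment that cannot be automated away.

```python
# Illustrative sketch only: field-level removal for a hypothetical record
# layout. It covers only a few of the 18 safe harbor identifier categories
# and ignores free-text fields, where identifiers often hide.

IDENTIFIER_FIELDS = {
    "name", "street_address", "city", "zip_code",      # names, small-area geography
    "birth_date", "admission_date", "discharge_date",  # date elements
    "phone", "fax", "email", "ssn",                    # contact and government numbers
    "medical_record_number", "health_plan_id",         # assigned record numbers
    "device_serial", "url", "ip_address",              # device and web identifiers
}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record without the listed identifier fields."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

patient = {
    "medical_record_number": "MRN-004512",
    "name": "Jane Doe",
    "zip_code": "73102",
    "diagnosis_code": "E11.9",
    "lab_value": 6.8,
}
print(strip_identifiers(patient))  # {'diagnosis_code': 'E11.9', 'lab_value': 6.8}
```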