April 8, 2026

Generative AI for Privilege Review: Impressions and Considerations

eData Edge: Navigating the Everchanging World of eDiscovery

While it has become commonplace to use technology assisted review (TAR) tools (both traditional predictive coding relying on machine learning and newer artificial intelligence (“AI”) methods) to review documents for responsiveness, many attorneys still hesitate to use this technology for privilege review. TAR has not proven as effective at identifying privileged documents as it has at identifying responsive ones, so by default many practitioners still rely on linear, document-by-document privilege review by human reviewers.

Some generative AI software now allows practitioners to use the power of large language models (LLMs) to review documents for privilege. Relativity aiR for Privilege is one such tool, which the author recently used to assist with a large document review in an antitrust litigation. The experience demonstrated that generative AI can provide significant benefits in streamlining privilege review. While it does not eliminate the need for human review, it can offer substantial cost savings relative to a traditional linear review with contract attorneys (approximately 75% in this case). However, depending on the size of your dataset and the amount of human review conducted, aiR may not be more cost-effective than alternative approaches. A few considerations to bear in mind:

  • Using aiR for Privilege requires significant investment in up-front preparation time: While you do not need to draft a prompt to use aiR for Privilege, the user should still expect a significant up-front investment of time and involvement by attorneys knowledgeable about privilege. As an initial stage of the project, you must determine whether each law firm, attorney, and third party identified in the document set is aligned with or adverse to your client, and thus whether that entity’s presence on a communication confers or breaks privilege (or is neutral). This information is fed into the LLM and is critical to the generative AI’s privilege predictions on individual documents. The work is time-consuming and requires some document-specific research to determine the role played by different entities. This step took close to two weeks of focused effort by a firm attorney on a dataset of roughly 500,000 documents, comparable to the up-front time required for a firm attorney “subject matter expert” to train a TAR model. While this up-front time is a necessary evil, it appears to have been a worthwhile investment given the helpful output aiR for Privilege provides, at least relative to what you would ordinarily obtain from a traditional TAR approach.
  • aiR for Privilege predictions can streamline privilege review: aiR for Privilege results provide detailed information that can assist the user in making judgments about privilege. In addition to providing an overall privilege prediction for each document, aiR assigns documents to privilege categories and provides rationales and other considerations for the AI’s privilege assessment. While the author did not rely on the rationales and considerations to a meaningful extent, the categorization was very useful in targeting and prioritizing subsequent review and quality control (“QC”) of documents in certain categories.
  • Human review is still required to validate aiR results: Relativity markets aiR for Privilege as a tool that prioritizes recall over precision to ensure that no privileged documents are missed. In the author’s experience, that claim bore out. aiR for Privilege performed a broad “first pass” review to identify potentially privileged documents, which was beneficial, but it also meant significant human review and QC was required to validate and confirm the AI’s privilege calls. This validation relied on sampling documents from both the privileged and not-privileged populations, using different markers of privilege to target the human QC. In addition to the documents identified as privileged and not privileged, there are a number of “in between” privilege categories that are more nuanced and require more thorough human review. For example, one category includes documents sent to or from an attorney where no apparent legal advice is sought or provided, while another includes documents that contain legal “jargon” but no apparent attorney. There is also a “borderline” category where aiR found the privilege status of the document unclear. The amount of human review and QC required will depend on the specific circumstances of your matter, how aggressive or conservative you want to be, and how much risk of inadvertent disclosure of privileged documents you are willing to tolerate.
  • aiR can provide significant cost savings, but savings depend on the circumstances of your case: While the author achieved a 75% cost savings by using aiR (relative to the estimated cost of a full linear review), whether you can achieve similar savings depends on the circumstances of your case. Relativity charges on a per-document basis for each document run through the aiR LLM, so there can be a sizable cost before any human review or QC takes place, depending on the volume run through the LLM. Thus, aiR may not be cost-effective in matters with very large document sets or where significant human review and QC will be necessary. Conversely, given the significant investment in up-front preparation time, aiR may not be worthwhile for the smallest document sets either. In either situation an alternative approach may be more cost-effective. For example, some vendors use predictive AI models to develop a classifier that makes privilege predictions. That approach would not provide the detailed rationale and other considerations that generative AI creates for each document, but this additional information may not be necessary depending on the case.
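The entity classification step described in the first bullet can be pictured as a simple lookup that is built once, up front, and then consulted for every document. The sketch below is illustrative only: the entity names, role labels, and function are hypothetical and do not reflect the actual input format of Relativity aiR for Privilege.

```python
# Illustrative sketch of the up-front entity classification step.
# All entity names and role labels here are hypothetical; Relativity's
# actual aiR for Privilege inputs may differ.

# Each law firm, attorney, or third party is classified by its
# relationship to the client, which determines whether its presence
# on a communication confers privilege, breaks it, or is neutral.
ENTITY_ROLES = {
    "counsel@alignedfirm.com": "confers",   # client's outside counsel
    "gc@client.com": "confers",             # in-house attorney
    "analyst@client.com": "neutral",        # non-attorney employee
    "rep@thirdpartybank.com": "breaks",     # unrelated or adverse third party
}

def screen_participants(participants):
    """Return the assumed privilege effect of each participant on a document."""
    return {p: ENTITY_ROLES.get(p, "unknown") for p in participants}

doc_participants = ["gc@client.com", "rep@thirdpartybank.com"]
effects = screen_participants(doc_participants)

# A single privilege-breaking participant flags the document for closer review.
needs_review = "breaks" in effects.values()
```

The document-specific research described above is what populates a mapping like this; the value of the two weeks of attorney effort is that every downstream privilege prediction can draw on it.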
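The cost trade-off in the final bullet can be made concrete with a rough break-even calculation. Every dollar figure and review rate below is a hypothetical assumption chosen for illustration; none of it is actual Relativity pricing or the real costs of the matter discussed above.

```python
# Hypothetical break-even sketch comparing aiR-assisted privilege review
# to a full linear review. All figures are illustrative assumptions,
# not actual Relativity pricing or real matter costs.

def linear_review_cost(n_docs, docs_per_hour=50, rate_per_hour=75.0):
    """Cost of document-by-document review by contract attorneys."""
    return n_docs / docs_per_hour * rate_per_hour

def air_assisted_cost(n_docs, per_doc_fee=0.10, prep_cost=25_000.0,
                      qc_fraction=0.15, docs_per_hour=50, rate_per_hour=75.0):
    """Per-document LLM fee + up-front entity preparation + targeted human QC."""
    llm_cost = n_docs * per_doc_fee
    qc_cost = linear_review_cost(n_docs * qc_fraction, docs_per_hour, rate_per_hour)
    return llm_cost + prep_cost + qc_cost

n = 500_000  # roughly the dataset size discussed in this post
linear = linear_review_cost(n)
assisted = air_assisted_cost(n)
savings = 1 - assisted / linear
```

Under these assumed inputs the savings happen to work out to 75%, but the break-even point shifts quickly with the per-document fee, the fixed preparation cost, and the fraction of documents that still need human QC, which is why aiR can be uneconomical at both the very large and the very small ends of the document-volume spectrum.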

Practitioners using AI tools for privilege review should also keep a few professional responsibility considerations in mind. Because cloud-based AI tools require feeding confidential client data into a third-party platform, it is important to review the vendor’s data-handling terms before getting started, including whether client data is retained or used for model training and whether client notification is warranted. The supervising attorney also needs a working understanding of how the tool functions and what its limitations are; the entity classification work and QC steps described above go a long way toward satisfying that competence requirement. And the human review described in this post is not just good practice: it reflects the attorney’s supervisory responsibility for the privilege classifications the AI generates.

© Arnold & Porter Kaye Scholer LLP 2026 All Rights Reserved. This Blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.