Four Data Handling Considerations Legal Teams Must Address with AI Vendors
As law firm managing partners and C-suite leaders press their teams to accomplish more with fewer resources, artificial intelligence (AI) has become a common topic in vendor contract negotiations. When all lawyers need to do is query a chatbot for redlining help or ask AI to draft contract language in seconds, what's not to like? Plenty, if teams don't account for how AI vendors handle the data they submit.
Like any repository of electronically stored information, AI programs and servers can become fertile ground for security and confidentiality risks. However, unclear regulations and wide-ranging service terms can obscure how these vendors approach data handling. Some AI vendors, including ChatGPT creator OpenAI, suggest they have broad rights to use and review any data they receive, whether for internal training or for modeling generative AI prompt responses. This means that whenever lawyers send confidential information to a vendor, there will likely be invisible eyes, real or artificial, peering into the contents of their AI prompt screens.
Unfortunately for lawyers, this arrangement can tee up potential ethics violations involving attorney-client confidentiality, data security concerns and improper disclosures of confidential information. The risks are so notable that the Biden Administration issued an executive order in October 2023 covering guidelines around the safe, secure and trustworthy use of AI, along with what companies should do to protect and secure customer information.
Therefore, firms and departments must communicate their data handling expectations as clearly as possible to prospective AI vendors, especially in light of the ever-evolving AI legal and compliance requirements. Some suggested steps are below.
1. Ask for the vendor's AI policies
AI data handling policies can vary from vendor to vendor. Naturally, legal teams must review and negotiate these policies from the outset to ensure client or company data remains secure.
Lawyers can find instructive guidance by reviewing a vendor's terms of use and service terms. These often cover the basic parameters of how vendors will use and process data online. For most industry-agnostic AI vendors, lawyers should prepare to wade through unclear, open-ended language around data ownership and usage rights, ambiguities that will likely be resolved only once U.S. courts weigh in. In this sense, firms and departments should work with AI companies that fully understand the legal sector and other compliance-heavy industries.
Other important policies may require more investigation and discussion. Lawyers should make the most of their demo sessions and discovery calls to inquire about a vendor's data security safeguards and security certifications. They should also ask how vendors segregate and anonymize uploaded information, if at all. These steps can shed light on how AI companies store, read and review sensitive client data and whether it is at risk of reuse or inadvertent disclosure.
With a vendor's AI policies in hand, law firms and general counsel can better understand how the vendors they're vetting protect sensitive data through encryption and access controls, leverage user authentication and authorization protections, and address built-in biases that affect an AI program's work product. They will also see how their potential vendors maintain secure client data environments. These materials should also show how thoroughly vendors review, share and model prompt answers drawn from submitted materials.
2. Negotiate robust provisions and warranties into your vendor contracts
Although global regulations are still catching up to AI's nuances, lawyers and in-house teams should take advantage of existing protections when finalizing their vendor agreements. Legal teams should make their vendor-customer agreements as comprehensive as possible to protect their interests. Teams looking to square away data handling issues should consider discussing the following during negotiations:
- Compliance-with-law clauses: Although the laws around AI are still evolving, negotiating a substantial compliance-with-law clause can help encourage compliance with existing regulations. Under such a clause, all parties must follow best practices and applicable rules when performing on their contracts. The clause should also outline the remedies, penalties and indemnities that arise if a vendor is not compliant.
- Noninfringement clauses: Companies that license AI from vendors (or even use vendor APIs to bolster in-house solutions) should also ensure their partners agree to noninfringement clauses covering the use of third-party information. Although there is ongoing debate over whether "fair use" defenses apply to how AI programs train on proprietary or copyrighted information, noninfringement clauses can offer some protection and indemnification against impermissible client data sharing.
- Data privacy compliance protections: Lawyers should also ask vendors to integrate strong warranty and representation language requiring compliance with applicable data privacy laws. Specific clauses tailored to relevant privacy laws can assure lawyers that vendors will handle client data in line with regulatory requirements and reduce breach liability exposure. Government contractors may also want to confirm whether their vendor candidates follow the National Institute of Standards and Technology's voluntary Artificial Intelligence Risk Management Framework and relevant government data security rules.
- Confidentiality provisions: Law firms may use AI to review specific types of discovery for privilege issues and the like. While it is safer to send non-confidential documents to support the machine learning process (more on that in a bit), lawyers should still stress-test pertinent confidentiality provisions in case their vendors mishandle data. In any event, law firms and companies should have, and provide to vendors, strict procedures outlining how to share, label, designate and monitor confidential materials.
3. Be strategic in exporting sample documents and templates
Even with the most airtight of vendor-client agreements, lawyers experimenting with AI should still work to mitigate serious data exposure risks. These concerns start with deciding which types of documents AI programs should analyze in the first place.
Companies and firms should think through their AI productivity goals before mapping out their machine learning strategy. Do those goals center on increasing contract drafting automation? If so, consider which of the company's routine agreements already circulate publicly, and have the AI program master those clauses. Do the objectives lean toward streamlining eDiscovery? If so, training the AI on non-confidential materials from past cases could give it a crash course on evidentiary privileges, duplicate file identification and important file types. If teams need AI to locate data within Elite 3E and other enterprise software, they should be selective about the billing information and internal data they introduce to the model. Above all, parties should control, redact and remove any sensitive information in documents AI vendors receive, as the sketch below illustrates.
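As a minimal illustration of that last point, the Python sketch below masks a few common sensitive patterns before a document leaves the firm. The patterns, labels and sample text are hypothetical; real-world redaction should rely on vetted PII-detection tooling and human review rather than regex alone.

```python
import re

# Illustrative patterns only; production redaction should use a vetted
# PII-detection library plus human review, not regex alone.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched sensitive values with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane Roe at jroe@example.com or 555-867-5309. SSN: 123-45-6789."
    print(redact(sample))
    # Contact Jane Roe at [REDACTED EMAIL] or [REDACTED PHONE]. SSN: [REDACTED SSN].
```

A pass like this is best treated as a backstop: the primary control remains choosing non-confidential source documents in the first place.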
4. Make sure all documents submitted to the vendor are "clean" and fully optimized for machine learning
Data cleansing is critical to facilitating and speeding up the machine learning process. Redundant data and corrupted files can diminish an AI model's ability to recognize patterns and produce informed output. Forcing AI programs to wade through cumbersome raw data can lead to unwieldy results. Even worse, it can increase the likelihood of inaccurate or hallucinated prompt answers.
The answer to ensuring a clean user experience? Programs with battle-tested validation scripts and customizable queries. With this software, legal teams can ensure any information used to train their AI programs is of the highest possible quality. Helm360's Digital Eye, for example, can help teams identify corrupted data and related issues and inconsistencies. The program can also share action steps on what teams must do to rectify any corruption issues.
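For teams that want a first-pass sanity check before engaging such tools, here is a generic Python sketch (not based on Digital Eye or any vendor product) that flags exact duplicates, empty files and unreadable files in a folder of export-ready text documents. The folder path and the .txt-only scope are illustrative assumptions.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash file contents so byte-identical duplicates can be flagged."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def audit_corpus(folder: str) -> None:
    """Flag exact duplicates and files that fail a basic read check."""
    seen: dict[str, Path] = {}
    for path in sorted(Path(folder).rglob("*.txt")):
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            print(f"UNREADABLE (possible corruption): {path}")
            continue
        if not text.strip():
            print(f"EMPTY: {path}")
            continue
        digest = file_digest(path)
        if digest in seen:
            print(f"DUPLICATE of {seen[digest]}: {path}")
        else:
            seen[digest] = path

audit_corpus("./training_docs")  # hypothetical folder of export-ready documents
```

A check like this catches only the crudest problems (byte-identical copies and undecodable files); near-duplicates and substantive inconsistencies still call for dedicated validation software.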
Tailoring data handling measures to AI technology
Most law firms and some in-house departments must turn to vendors and third parties to jumpstart their AI journey. In doing so, they must be mindful of how they prepare and share their data, as well as whether their chosen vendors and partners adhere to AI best practices.
Through proper data preparation, careful contract drafting, comprehensive data quality vetting and robust compliance protocols, law firms and in-house departments can safely leverage AI with the help of their vendor networks. Most importantly, they can cover their bases and mitigate significant liability in this ever-evolving area.