Management Assistance Program

ChatGPT’s Business Plan and Confidentiality

By Julie Bays, OBA Management Assistance Program Director

As AI tools continue to evolve, so do the subscription options available to professionals who rely on them. Earlier this year, OpenAI rolled out a business-grade version of ChatGPT, now called the ChatGPT Business Plan. It replaces what was formerly known as the “Team” plan, offering stronger security controls, administrative oversight, and clear data-handling assurances. For lawyers, these upgrades are more than conveniences; they tie directly into our duties under ORPC 1.6 to safeguard client information.

The Business Plan requires at least two users and includes access to advanced models, collaboration tools, and the ability to create and share custom GPTs across a firm. Most importantly, conversations and uploaded content are not used to train OpenAI’s models. This alone makes the plan a noticeable step up from the individual version of ChatGPT Plus, which is designed for personal use and lacks the same business-level controls.

Another benefit for small and mid-sized firms is administrative management. The Business Plan allows a firm to maintain centralized billing, manage users, and apply consistent settings across accounts. These features help ensure that everyone in an office is working with the same privacy standards and tool configurations, which can make a meaningful difference when building secure internal systems.

As AI tools become part of everyday legal workflows, choosing the right subscription tier matters. Tools like ChatGPT can enhance efficiency, improve drafting accuracy, and help lawyers better serve clients. But when client information is involved, the security terms of the subscription must be as strong as the technology itself.

If your firm is experimenting with AI or considering broader adoption, the Business Plan is the more appropriate option. It aligns more closely with your professional obligations, provides added administrative protection, and supports the responsible use of AI in a law practice.

Before subscribing to any AI product, begin your evaluation with ORPC 1.6 in mind.

Checklist: Assessing AI Subscriptions Using ORPC 1.6

  1. Confidentiality of inputs
  • Does the provider explicitly state that business-plan data is not used for model training?
  • Can the firm control who can access chats, uploads, and custom GPTs?
  2. Administrative oversight
  • Can the firm manage user accounts, settings, and billing in one place?
  • Can privacy settings be applied consistently across all users?
  3. Data handling and retention
  • Does the provider explain retention practices and allow the firm to delete data?
  • Is there a clear statement about whether the vendor can review customer data?
  4. Security posture
  • Is data encrypted at rest and in transit?
  • Does the provider hold industry-recognized security certifications?
  5. Vendor reliability
  • Are the terms of service appropriate for professional use?
  • Can the firm obtain documentation to support compliance reviews?
  6. Incident response and breach notification
  • Does the vendor have a clear process for notifying your firm in the event of a data breach or security incident?
  7. Third-party integrations
  • Are there controls in place for managing integrations with other software or services?
  • Are those integrations subject to the same privacy and security standards?