
ChatGPT IP Issues FAQ

By: Christopher Heer, Rares Minecan | Last updated: August 8, 2024

Sections:

ChatGPT Basics

ChatGPT & Intellectual Property Law

ChatGPT & Privacy Concerns


ChatGPT Basics

1. What is ChatGPT?

ChatGPT is a generative AI technology. It can create a wide variety of output based on input received from the user. It was launched in November 2022 by OpenAI, an American artificial intelligence research organization. As of the date this FAQ was written, “Free,” “Plus,” “Team,” and “Enterprise” versions of ChatGPT are available, each with varying features. The latest model of ChatGPT available as of the date this FAQ was written is GPT-4o, which OpenAI states can reason across audio, vision, and text in real time and is a step towards more natural human-computer interaction.

It is important to note that ChatGPT is not the only generative AI technology of its kind, with Google’s Gemini, Microsoft’s Copilot, Meta AI, and Anthropic’s Claude among the most popular alternatives. While many of the issues discussed in this FAQ may also broadly apply to the use of these alternatives, they may not apply in exactly the same way, due both to differences in how those alternatives operate and to the differing legal terms governing their use.

2. How does ChatGPT work?

ChatGPT is a fine-tuned large language model that was originally trained to produce text output but is now also capable of producing output in a variety of other formats. At a basic level, ChatGPT attempts to understand the input received from the user and then outputs the string of words and phrases that it predicts will best respond to that input, based on the data it was trained on. According to OpenAI, ChatGPT was trained and developed using information that is publicly available on the internet, information that is licensed from third parties, and information that users or human trainers provide. With respect to information that is publicly available on the internet, this refers only to information that is freely and openly available, and does not include, for example, information behind paywalls. Presumably, most other freely accessible online sources are therefore fair game for the training of ChatGPT.
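
For readers who want a more concrete sense of what “predicting the next words” means, the following is a purely illustrative Python sketch. The vocabulary, probability table, and function name below are invented for demonstration; a real large language model learns such probabilities from vast amounts of training data rather than from a hand-written table.

```python
# Toy illustration of next-word prediction. The probability table below is
# invented for demonstration; a real model learns these values from training data.
next_word_probabilities = {
    ("intellectual",): {"property": 0.85, "curiosity": 0.10, "rigour": 0.05},
    ("intellectual", "property"): {"law": 0.60, "rights": 0.30, "office": 0.10},
}

def predict_next_word(context):
    """Return the most probable next word for the given context, if any."""
    distribution = next_word_probabilities.get(tuple(context))
    if distribution is None:
        return None
    # Greedy decoding: choose the highest-probability continuation.
    return max(distribution, key=distribution.get)

words = ["intellectual"]
while (next_word := predict_next_word(words)) is not None:
    words.append(next_word)

print(" ".join(words))  # prints: intellectual property law
```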

Furthermore, ChatGPT was specifically optimized for human-like dialogue using Reinforcement Learning from Human Feedback (RLHF), a method that uses human demonstrations and preference comparisons to guide the model towards desired behavior. In this case, because the models were trained on vast amounts of human-written data from the internet, including conversations, ChatGPT’s desired behavior is to produce human-like responses.

3. In what ways can ChatGPT be used?

From its inception, ChatGPT could be used in a variety of different ways depending on the context and user’s needs. Below is a non-exhaustive list of some of the ways ChatGPT has been reported to be used:

  • ChatGPT can be used broadly to brainstorm and generate ideas. In the context of personal use, ChatGPT could therefore be used to generate a variety of content, whether it is for a blog, a social media account, a news account, and so on.
  • In an educational context, ChatGPT has a variety of uses. It can assist in finding and summarizing research, explaining complex topics, solving math problems, and writing and debugging lines of code. It can also generate multiple-choice questions and generally be used as a tool to assist with studying.
  • ChatGPT can also be implemented in a business context. It can be used as an AI chatbot, as it is able to answer questions and offer recommendations based on what customers are looking for (a minimal sketch of such an integration follows this list). It may also be used to automate certain tasks, generate reports, and create content for a business.
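
As an illustration of the customer-facing chatbot use case mentioned above, the sketch below uses OpenAI’s Python SDK. It is a minimal example under stated assumptions, not a production integration: the model name, system prompt, and retailer are placeholders chosen for illustration, the openai package must be installed, and an API key must be available in the OPENAI_API_KEY environment variable.

```python
# Minimal customer-support chatbot sketch using the OpenAI Python SDK.
# Assumptions: the "openai" package is installed, OPENAI_API_KEY is set, and
# "gpt-4o" is an available model for the account in question.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a support assistant for a hypothetical outdoor-gear retailer. "
    "Answer customer questions and recommend suitable products."
)

def answer_customer(question: str) -> str:
    """Send a single customer question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_customer("Which sleeping bag would you recommend for winter camping?"))
```

As discussed in the privacy section of this FAQ, any customer information passed to such an integration may be processed by OpenAI, so a business should review the applicable terms and privacy settings before deploying it.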

In brief, ChatGPT can be used in a vast number of different ways. In addition, with every new iteration of the model, the ways in which ChatGPT can be used appear to be growing more numerous, diverse, and creative.

4. What are some limitations of ChatGPT?

Despite the numerous ways ChatGPT can be used to assist users, it has several limitations that users should be aware of. First, ChatGPT does not necessarily have live access to the internet, and its training data extends only to a fixed cutoff date (2021 for the earlier publicly available models). It may therefore not be helpful to users seeking more recent information. Second, because of the variety of sources used to train the model, the output it provides may contain errors or biases. Third, ChatGPT frequently does not provide sources for the information contained in its output, and when asked to do so, sometimes provides inaccurate or incorrect sources. Other limitations to note include its potential inaccuracy, lack of logical reasoning, inability to understand context, and the usage limits of the free version of the model. While not exhaustive, these limitations highlight just some of the reasons why OpenAI includes several disclaimers in its Terms of Use for the ChatGPT service, including that:
  • The output may not always be accurate, and accordingly, should not be relied on as a sole source of truth or factual information, or as a substitute for professional advice.
  • The output should be evaluated for accuracy and appropriateness for your use case, including human review as appropriate, before using or sharing the output.
  • The output created may be incomplete, incorrect, or offensive.

5. Does ChatGPT collect and store personal data?

In general, ChatGPT collects data from users in order to further train the model and improve its performance. The data collected includes user input, chat history, and user preferences. Conversations between ChatGPT and users are reviewed by AI trainers to improve the systems and to ensure that the content complies with OpenAI’s policies and safety requirements. Users of ChatGPT can see a history of their conversations within the interface, and although specific prompts cannot be deleted from the history, users are able to delete their data by following the process laid out on the service.

More recently, OpenAI has stressed that it does not actively seek out personal information to train its models, but that personal information is at times incidentally included. OpenAI stresses that any personal information will be used exclusively to train the model and will not be used to build profiles of people or to attempt to contact or advertise to them. Additionally, OpenAI states that it takes several steps to limit the use of personal information and to comply with privacy laws. With respect to the former, OpenAI removes websites that aggregate large volumes of personal information from its training data and trains its models to reject requests for personal information. With respect to the latter, training data is stated to be obtained and used lawfully. In addition, users in certain jurisdictions can object to the processing of their personal information by the model through OpenAI’s privacy portal.

ChatGPT & Intellectual Property Law

1. How does intellectual property law interact with technologies like ChatGPT?

ChatGPT falls under the umbrella of generative AI technologies, a term used to describe software that can produce output in various forms, such as text, images, and audio, based on input from a user. Generative AI technologies have become increasingly popular in recent years, and due to their accessibility and the quality of output they can produce, they are frequently used in both private and commercial contexts to create, or assist in creating, a variety of different works.

Some of the key questions regarding the intersection of intellectual property law and the use of technologies such as ChatGPT therefore include who owns the output that is created by ChatGPT, whether the output could be the subject of a copyright or trademark infringement claim, and to what extent ChatGPT can be used in the process of logo creation and branding of a business. This section of the FAQ explores these questions.

2. Does the user own the copyright to the output from ChatGPT?

It is currently unclear who owns the copyright to content generated by an AI model, such as the output from ChatGPT. Because the Canadian Copyright Act (the “Act”) does not explicitly address generative AI creation, many copyright issues pertaining to AI-generated content remain unresolved. One of the key issues in this space is whether the user can be considered an author for the purposes of the Act, or whether the AI program itself should be granted authorship. Without authorship, there is also no corresponding ownership for copyright purposes. The Act provides protection to authors but does not explicitly define the term, which is perhaps exactly why this issue arises with respect to AI-produced output. “Author,” for the purpose of copyright law, has essentially become synonymous with “creator,” which would include, for example, the person who writes a book or a poem, or the person who takes a picture or otherwise creates a piece of artwork that eventually becomes the subject of copyright protection.

This issue of authorship and ownership arises because it is unclear whether the user, by providing the input, would satisfy the requirement under Canadian copyright law that a work be original. To be original, the creation of a work must be the result of an exercise of both skill and judgement. Due to the current lack of clarity, the issue of authorship may ultimately be settled based on the degree of human involvement in the process of creating the output.

In the 2023 Consultation on Copyright in the Age of Generative Artificial Intelligence (the “Consultation”) published by the Government of Canada, the topics of authorship and ownership of AI-generated works were discussed at some length. The Consultation highlights that although the Act does not define “author,” the jurisprudence suggests that authorship is attributed to a natural person who exercises skill and judgement in creating the work. A human may therefore contribute sufficient skill and judgement to a work produced with the assistance of generative AI technologies, but this may not be the case if the work is produced solely by generative AI technologies, with only a short set of instructions provided by the user. Furthermore, the Consultation recognized that, in response to the 2021 Consultation on a similar topic, many stakeholders found it premature for Canada to take a position on authorship and ownership of AI-generated works. In response, the Canadian Government continues to invite stakeholders to share their views and present evidence with respect to the uncertainty surrounding authorship and ownership of AI-generated works, whether the Government should propose any clarifications or modifications to copyright ownership and authorship, and whether those clarifications or modifications should be informed by the approaches of other jurisdictions. The Government is specifically seeking the views of stakeholders on three approaches to authorship and ownership of AI-generated works. These approaches are:

  • Clarify that copyright protection applies only to works created by humans;
  • Attribute authorship of AI-generated works to the person who arranged for the work to be created (an approach informed by the UK’s copyright framework); or
  • Create a new and unique set of rights for AI-generated works.

These approaches are not exhaustive, with the Government specifically stating that the possibility of other approaches is not foreclosed. It is clear, based on how vastly different the proposed approaches are, that the question of copyright ownership and authorship of AI-generated works in Canada is still very much up in the air.

3. Does OpenAI assign the rights to the AI-generated content to the user?

Under the “Content” section of the Terms of Use, OpenAI states, with respect to ownership of content, that “as between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.” Therefore, it appears that OpenAI does assign the rights to the AI-generated content to the user. However, the language “to the extent permitted by applicable law” adds an element of uncertainty with regard to what OpenAI actually assigns to the user, especially given that the applicable law in this case remains quite unclear. Furthermore, it is possible that no rights are actually assigned if ownership is settled only “as between you and OpenAI,” as stipulated by the Terms of Use.

4. Can the output from ChatGPT be subject to a copyright infringement claim?

The short answer is yes. As outlined in section 27 of the Copyright Act, “it is an infringement of copyright for any person to do, without the consent of the owner of the copyright, anything that by this Act only the owner of the copyright has the right to do.” The Act goes on to list some of the things only the copyright owner has the right to do, such as selling or renting, distributing, exposing or exhibiting in public, or possessing for one of these purposes a copy of the work in question. It is also important to keep in mind that reproducing a “substantial part” of a copyrighted work may constitute infringement, with what counts as a “substantial part” being further explored in the Canadian jurisprudence. Because of the data and information ChatGPT relies on to provide its output, there is a chance that the output may be similar enough, or identical, to existing copyrighted material to constitute an infringement of copyright.

The 2023 Consultation adds to this topic by highlighting just how difficult it may be to establish a viable copyright infringement claim of the kind discussed above. First, the Consultation recognizes that, because of the novelty of AI technologies, Canadian courts have not yet rendered decisions regarding copyright infringement involving them. It then goes on to highlight the difficulties of establishing such a claim, focusing on:

  • The difficulty a copyright owner who is alleging infringement would have in identifying the person or persons responsible and establishing liability in court;
  • The difficulty of determining infringement as the level of human involvement in the AI-generated work becomes increasingly complex to determine; and
  • The difficulty a plaintiff would have in establishing that the infringing party had access to the original copyrighted work, that the original work was the source of the copy, and that all or a substantial portion of the work was reproduced.

Even though there exists no precedent for such an infringement claim, and the difficulties of such a claim are highlighted by the Consultation, it may be prudent to use ChatGPT in compliance with the Terms of Use, which may include reviewing and rewriting or revising the output generated by ChatGPT as necessary.

5. ChatGPT has used data or content to train its model without permission. Can any action be taken in response?

Given the novelty of generative AI technologies and of the copyright-related questions pertaining to their use, it is unclear whether any action can be taken in the first place, or whether such an action would have reasonable prospects of success. The question appears to be too novel for Canadian courts to have weighed in on yet.

With that being said, the U.S.A. has seen a multitude of lawsuits against OpenAI and other generative AI providers in the last year, which users may want to follow. Below is a non-exhaustive list of just some of the lawsuits initiated against OpenAI or the providers of other similar generative AI technologies:

  • In January of 2023, Getty Images announced a lawsuit against a similar generative AI company, Stability AI, alleging that Stability AI copied and processed millions of its images without obtaining the proper licensing.
  • In June of 2023, a class action was filed in the federal court in San Francisco by several authors claiming that the large language models created by OpenAI are “not only an infringement of authors’ rights, but the case represents a larger fight for preserving ownership rights for all artists and other creators”.
  • In September of 2023, the Authors Guild, a trade organization representing authors, also filed a class action against OpenAI, later adding Microsoft as a defendant. Similar to the second case mentioned, the Authors Guild case alleges that OpenAI engages in “systematic theft on a mass scale” of the plaintiffs’ work.
  • In December of 2023, the New York Times filed a lawsuit alleging that OpenAI used its entire library of articles to train ChatGPT, and accusing OpenAI of having a “business model based on mass copyright infringement”.

It will certainly be interesting to see how these lawsuits play out, the various arguments that different proponents of each side will advance, and which copyright principles, statutory provisions, and arguments will be decisive in each of the cases. Although the Canadian copyright system differs from the American copyright system in some ways, the results of these lawsuits will undoubtedly offer some insight into the prospects of success of a similar suit in the Canadian legal system.

For the time being, as per the Terms of Use of OpenAI, users may send a notice if they believe that their intellectual property rights have been infringed. OpenAI may then delete or disable content that they believe violates the terms or is alleged to be infringing and will terminate accounts of repeat infringers where appropriate. The notice must include several details, including a signature of the person authorized to act on behalf of the owner of the copyright interests, a statement made in good faith that the disputed use is not authorized by the copyright owner, and a statement that the notice is accurate, and that the notice is being sent by the copyright owner or someone authorized to act on the copyright owner’s behalf.

While this mechanism does not provide the systemic relief that the lawsuits potentially could, nor is it necessarily a foolproof way to deal with potential copyright infringement, it may nonetheless offer some preliminary relief if a user notices their data or content being used in a way that they believe infringes their copyright.

6. Can ChatGPT be used to generate a name/logo for a prospective business? Are there any implications of doing this with respect to trademarks?

The use of ChatGPT for assistance in coming up with a logo and/or branding for a business may have trademark implications. While ChatGPT and other similar generative AI technologies can assist in creating some interesting output and sparking creative ideas, or can outright create a logo and/or branding, it is important to note that a new trademark cannot be confusingly similar to an existing trademark (Trademarks Act, s. 6). Using a trademark that is confusingly similar to an existing trademark may mean that the trademark is unregistrable, as this is something the Canadian Intellectual Property Office will consider when reviewing an application. Additionally, using a confusingly similar trademark may result in trademark infringement, which can lead to a costly dispute resolution process and potentially costly re-branding. A user may therefore wish to carefully review any output created by ChatGPT that may be used in the context of a trademark. It would also be prudent to have an intellectual property lawyer review any output from the generative AI technology to assess any potential risks related to registrability and infringement.

ChatGPT & Privacy Concerns

1. Is input to ChatGPT confidential?

As part of OpenAI’s commitment to safe and responsible AI, conversations between the user and ChatGPT are reviewed to improve the service and to ensure that the content complies with OpenAI’s policies and safety requirements. Furthermore, conversations may also be used to further train the models. Therefore, to some extent, confidentiality is lost as soon as a conversation with ChatGPT has started. Although OpenAI provides options that users can use to attempt to remedy this, such as opting out of training through the privacy portal, turning off training for their ChatGPT conversations, or disabling chat history, these options come with shortcomings. If a user chooses to turn off chat history, for example, OpenAI will still retain new conversations for 30 days, and review them if needed to monitor for abuse, before permanently deleting them. It therefore seems that, no matter what privacy option is selected by the user, OpenAI will potentially have access to the conversation.

The implication of this is that any personal, confidential, or sensitive information input by a user may be accessed by OpenAI and could potentially appear in future outputs to other users. Users should therefore refrain from inputting any personal, confidential, or otherwise sensitive information into ChatGPT. While it would be prudent for all users to follow this suggestion, it may be especially pertinent for users who regularly handle private and confidential information, or who have professional or ethical obligations. For example, users who are under non-disclosure agreements, or who hold confidential information, trade secrets, or other proprietary information, should be especially careful to ensure that they do not use ChatGPT in relation to this information in a way that would lead to a loss of confidence or a breach of professional and/or ethical obligations.

2. Is ChatGPT safe to use? Has the Government of Canada established that ChatGPT is safe to use with respect to privacy and confidentiality?

Each user should be aware of, and factor in, the limitations and potential privacy concerns of ChatGPT when deciding whether it is safe to use in their specific context. Users should also be careful to use the technology in a way that minimizes risk, as discussed throughout this FAQ. On the service side, OpenAI takes a variety of security measures related to ChatGPT. These measures include audits, data encryption, and access to a security portal which highlights many more of its security measures, including risk profiles, data security, app security, data privacy, endpoint security, policies, and so on. That said, OpenAI, like most other service providers, is susceptible to data breaches. In March of 2023, for example, OpenAI reported a bug in an open-source library it used which allowed unauthorized users to see the beginning of someone else’s conversations, account details, and even the last four digits of credit card numbers. The possibility of a data or security breach is another factor users need to consider when determining whether ChatGPT is safe for their specific case.

On a system-wide level, although the Canadian government has not taken an explicit stance, it is important to note that in April 2023, the Office of the Privacy Commissioner of Canada launched a joint investigation into OpenAI’s ChatGPT with the provincial privacy authorities of Quebec, British Columbia, and Alberta. The investigation was launched in response to a complaint alleging the collection, use and disclosure of personal information without consent. The privacy authorities will investigate, among other things, whether OpenAI:

  • Has obtained valid and meaningful consent for the collection, use, and disclosure of the personal information of individuals based in Canada via ChatGPT;
  • Has respected its obligations with respect to openness and transparency, access, accuracy, and accountability; and
  • Has collected, used and/or disclosed personal information for purposes that a reasonable person would consider appropriate, reasonable or legitimate in the circumstances, and whether this collection is limited to information that is necessary for these purposes.

As of the date this FAQ was written, the investigation is ongoing.

3. What are the risks of implementing ChatGPT or similar generative AI technologies in my business?

The potential risks surrounding the limitations, privacy, and confidentiality concerns of ChatGPT have been canvassed throughout this FAQ. It would be prudent for a business that is contemplating incorporating ChatGPT into its operations to assess the variety of privacy-related risks that exist within its workplace and use cases, and subsequently to consider in what ways ChatGPT should be restricted in the workplace, if not prohibited.

Conclusion

If you have any further questions related to ChatGPT or similar generative AI technologies, or any intellectual property or privacy concerns related to these technologies, please contact us for a complimentary and confidential initial telephone appointment with a member of our team.

