3 Reasons to Pair Caution with Artificial Intelligence
Since its November 30, 2022 launch by OpenAI, a San Francisco entity (and former non-profit), ChatGPT has rapidly become one of the most commonly referred-to “artificial intelligence” tools. Able to conduct meaningful conversations with its users, draft documents, and fix software code, ChatGPT and its successive upgrades are increasingly used in the marketplace across several different industries. And with little regulation curtailing or monitoring their use (excepting Illinois’ labor relations laws), the continued use and development of copy-cat tools will only accelerate.
However, behind every “free” tool lurk hidden costs and risks. Most notably, those using artificial intelligence (AI) tools to improve workplace productivity need to consider the risk of intellectual property disputes, the inadvertent disclosure of confidential information, and the accuracy of the data feeding the AI tools.
1. Intellectual Property Disputes: ChatGPT and similar AI tools draw on pre-selected data to build the bank of knowledge from which they respond to queries, according to algorithms developed by each tool’s creator. ChatGPT is then trained to select sources from that data repository when responding to user queries. Therefore, the work product ChatGPT produces is unlikely to be original.
ChatGPT’s creators disclaim rights to, and responsibility for, the output its users generate, assigning users all right, title, and interest in that output. This isn’t a gift from ChatGPT’s creators; it’s a risk-mitigation tactic, as the product’s output may contain copyright violations or plagiarism. And with ChatGPT conditioning its use on limitation-of-liability and indemnity provisions, unsuspecting users may inadvertently reproduce copyrighted work for profit, believing it to be original, at their own risk and without recourse against ChatGPT.
2. Confidentiality Breaches: Depending on the nature of the work and the purpose for which AI tools are used, employers that permit (or fail to prohibit) employee use of AI tools risk disclosing confidential information. Information users input into ChatGPT becomes part of its data repository, thereafter available to other third-party users. Consider this: with ChatGPT readily available to anyone with an internet connection, an employee inputting market-sensitive data on a large-scale merger of two public companies would almost certainly violate the confidentiality terms often agreed to during negotiations and due diligence examinations.
3. Lack of Accuracy: ChatGPT and other AI tools continue to build their databases of information through user input. Accordingly, the data becomes increasingly less accurate unless all users input exclusively empirically correct data; absent that, output degrades over time. And as time passes, the data may not be current, as most AI tools are not directly connected to internet outlets posting updated research, news, and legal changes.
Presently, there are several suits pending against AI tool producers for copyright infringement, including suits against producers who boast the ability to filter out copyrighted or trademarked materials. And the U.S. District Court for the Central District of California recently cautioned a party for drafting pleadings in a manner that “read like what an artificial intelligence tool might come up with if prompted” (Pedro Hernandez v. San Bernardino County, EDCV 22-1101 JGB (SPx) (C.D. Cal. 2023)). So while fun, AI tools are best left out of the professional workplace until conditions improve.