With the rapid proliferation of new issues, regulations, opinions, and threats, cyber law requires vigilant research. This week, that research is dedicated to artificial intelligence (AI). The ultimate Gordian knot, AI is a necessity for competing in the global economy, yet it also carries real dangers.
Below are four noteworthy developments concerning the growing use of AI across multiple business sectors:
1. Voluntary HIPAA Data Breaches: The healthcare industry uses AI tools to create predictive disease patterns and diagnostics. However, the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule is inherently in tension with AI. AI makes predictions based on prior data accumulated by the tool, which is then compared against the present data set for analysis. Any data presented to an AI tool is permanently ingested and used to improve the tool's future predictions. Healthcare entities that fail to anonymize data prior to utilizing AI tools are voluntarily creating data breaches, punishable by the Office for Civil Rights.
2. Intellectual Property Protection: Software developers are strongly advised against utilizing AI tools to enhance or fill gaps in their work, unless they are indifferent to forfeiting their intellectual property rights. In May, the Northern District of California dismissed software developers' claims of intellectual property theft against GitHub, a wildly popular open-source software hub owned by Microsoft. In Doe v. GitHub, two AI tools, Copilot and Codex, were used to fill in coding gaps for other software developers. Copilot and Codex were trained on publicly available code, including code from GitHub repositories. However, "Copilot and Codex were not programmed to treat attribution, copyright notices, and license terms as legally essential." Therefore, the programs illegally repurposed code from thousands of software developers in violation of the developers' copyright protections and licensing agreements. Still, the Court dismissed the claims because there was no actual or imminent injury, despite the developers' ability to identify several instances in which Copilot's output matched copyrighted code.
3. Discriminatory Practices: Since California enacted the California Consumer Privacy Act in 2018, individual states have begun passing their own privacy laws that mimic California's efforts. Recent consumer privacy laws in Connecticut, Delaware, Indiana, Montana, Texas, and Virginia also include prohibitions against automated profiling, preventing businesses such as financial institutions, prospective employers, and health care entities from denying services and employment opportunities based on AI profiles of consumers. For example, Colorado and Connecticut require data controllers to perform assessments on data profiling tools (AI) to determine the risk of unfair or deceptive treatment of consumers. In short, states seek to prevent racial and gender discrimination at the hands of AI. In 2019, Illinois, a lesser-known leader in the technology law space, prohibited the use of artificial intelligence to analyze job applicants absent the applicant's prior consent (820 ILCS 42/5).
4. Expert Litigation: On November 9, 2023, the Bankruptcy Court for the Southern District of New York excluded an economist's expert witness report generated by artificial intelligence. In In re Celsius Network, Case No. 22-10964(MG), the parties disputed the value of cryptocurrency, with one creditor arguing for an increased valuation. In support of its argument, that creditor produced an expert report and testimony from Hussein Faraj. Although Faraj guided the report's creation, he admittedly used AI to generate the 172-page report, claiming a human-generated report would have required more than 1,000 hours. Despite permitting Faraj's expert testimony, the court excluded the report, citing that "there were no standards controlling the operation of the artificial intelligence that generated the report," and that the report contained several errors and lacked peer review.