Here lies my English 102: Research Reasoning and Writing final paper on the subject of Artificial Intelligence and Intellectual Property Rights. It was a real joy taking Professor Oldham’s course at Shoreline Community College. The freshman English course focuses on writing revisions, while non-judgmental grading opens up space for budding writers to develop their own style. Attendance was weighted heavily in the grade, with pop quizzes to match. Each week the professor delivered impactful lessons in an energetic and engaging fashion that kept me coming back for more.
Does US Intellectual Property Law
Protect Creators’ Rights in the
Age of Machine Learning?
It depends on what you think creators’ rights are.
The Learning Algorithm
Advancements in machine learning, built with digital neural networks based on the way biological neurons connect and communicate (Xiong et al.), are impacting markets in human-generated creative works. While each nation has its own laws concerning the use and misuse of intellectual property, how is US intellectual property law meant to protect these markets, and how do learning algorithms threaten that protection?
Learning algorithms analyze the relationships among billions or trillions of examples of images, audio, and written language, from which they can be prompted to extrapolate and generate new examples in the same vein (Roose). These prompts can mix and match ideas, such that even a novice prompter can ask for output that includes an Epic Rap Battle between Santa Claus and the Tooth Fairy over who gives the best gifts, alongside Python code to build a website selling mail-order exotic snake and spider snacks (“Snakes, Spiders, and Santa Rap”).
“The artificial neural network was first proposed almost 80 years ago” (Xiong et al.), but recent advances in high-speed computing — driven by computer gaming, cloud computing, and cryptocurrencies — have accelerated machine-learning systems in the last five years, powering devices and services we rely on daily at work, home, and play (Littman et al. SQ2). Conceptualization, once considered the domain of human creatives and innovators, is now also the work of machines, whose generated designs build upon and compete with human-generated designs in the open marketplace.
While some machine-generated works have won awards for their creativity (Schneider 113), many digital platforms report being overwhelmed with ersatz works of dubious quality, limiting market access for humans (“SFWA Comments on AI” pars. 5-6). Generative AI systems owe their creative abilities to creative humans whose words and images were digitized and sampled to train learning algorithms. “Creative workers alone are expected to provide the fruits of their labor for free, without even the courtesy of being asked for permission. Our rights are treated as a mere externality” (par. 3).
The backlash came swiftly on the heels of the release of image generators like DALL-E and text generators like ChatGPT: generative AI is a legal problem, to be decided by the courts (Wiggers). The US Copyright Office was quick to rule that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author” (US Copyright Office).
A vital open question remains, however: US copyright law was intended to encourage humans to share ideas, so that we might all benefit from them together, to enhance and benefit our commonwealth (US House of Representatives). Deep learning models are trained on corpora of literature and archives of digital photos and art, data-mined from the public internet (Quang 1407). Do the same laws apply to machines when they learn, even artificially? And how?
How does an idea become property?
Ideas are the raw material from which arts are drawn and literature is written. They are accessible to everyone, freely distributed in dictionaries and thesauruses, encyclopedias and almanacs, available at every public library in a wide variety of colors and formats. When you or I take these ideas and in our mind’s eye reshuffle them like a deck of playing cards, each outcome is determined by our unique experiences and personalities. We call this an “original work” and it is protected by the legal system of the United States of America:
“Copyright protection subsists, in accordance with this title, in original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” (US House of Representatives §102)
When you remix ideas to create something new, be it a rocket pogo stick, a meme about the Seattle Freeze with a picture of a polar bear being given the cold shoulder by a group of hipster penguins wearing flannel, an electric truck shaped like an irregular polygon, a social media app for people who enjoy Harry Potter fan fiction, or a song about a lonely guitar-playing stockbroker with an aching artificial hip, your unique expression of those ideas is protected, but not the ideas themselves. Another creator can take those same ideas and create duplicates, even trying to mimic your creation, and their unique expression will be protected just as strongly as yours (Library of Congress ArtI.S8.C8.1.1).
Say you invented an electric truck using a new type of motor, though, or a machine that fills bananas with marshmallow and then coats them in chocolate and candy sprinkles. That’s more than just your individual expression, isn’t it? It is, according to US patent and trademark law. US patent law protects utility, “for inventing a new or improved and useful process, machine, article of manufacture, or composition of matter”; design, “for inventing a new, original, and ornamental design for an article of manufacture”; and genetically modified plants (though we are making strictly non-GMO candied bananas here). Were you to trademark your candied-banana logo and packaging, the unique symbols and word(s) you applied for would be protected as well (US Patent and Trademark Office).
Nabisco’s Oreo cookies offer an example of trademark distinguishing two very similar products. Sunshine Biscuits’ Hydrox® cookies, once a staple in America’s cupboards and lunchboxes, were invented, baked, and sold years before Oreos appeared alongside them and eventually became the more popular brand. The dispute between the cookie-makers continues to this day, over 100 years later, and has been taken to extremes: “Oreo stockers… push Hydrox® aside when they place Oreo boxes on the shelves.” The key, according to intellectual property law, is that the cookies themselves and the processes to make them weren’t precisely alike — a selling point for the kosher Hydrox® — and thus were not infringing (Sales).
The recipe for each cookie may be similar, but in the case of “secret recipes,” US trade secret policy, administered by the US Patent and Trademark Office, protects your recipe from prying eyes. It does not, however, prohibit other cookie makers from independently discovering or reverse engineering your recipe; it only prevents outright theft, where the offending party intends or knows “that the offense will, injure any owner of that trade secret” (“Trade Secret Policy”).
It’s possible that even a registered trademark with years of regular use can lose its protection by being too successful and implanting itself in the public consciousness. Xerox is a cautionary example: the company’s name became so prevalent as a generic term for photocopying a document that the word risked losing its distinctiveness, though the company never claimed the photocopying process itself and retains trademark protection for its corporate identity.
The act of xeroxing someone else’s work is also an example of how an invention can facilitate copying of copyrighted materials and yet be protected itself, because it can be used to create transformative works: “While the Xerox machine is a tool for making exact copies, it often facilitates transformative creativity from innumerable writers, artists and musicians” (Silbey). This is strikingly similar to the way generative AI models can be trained on publicly available copyrighted materials, yet human users are able to direct them to create new, transformative works.
In the landmark decision of Sony Corporation of America v. Universal City Studios, the Supreme Court determined that, in the case of video tape recording, copyright law “does not support respondents’ novel theory that supplying the ‘means’ to accomplish an infringing activity and encouraging that activity through advertisement are sufficient to establish liability for copyright infringement” (“Sony Corp. of America”). These examples are milestones that help us identify the copyright-infringement potential of generative AI.
Infringement: Who Decides?
Few creators will be at the table when trillions of US dollars are wagered by Silicon Valley venture capitalists entranced by the seductive pull of the one-armed bandit of artificial intelligence and its promise of fortune, even as much of that money is used to scrape our portfolios for examples of our work to train the machines that would replace us (Littman et al. SQ11). Yet the voices of creators cry out to be heard: “Some content creators, including publishers, have complained about potential IPR infringements for using their copyrighted content without permission” (Carugati).
At the end of 2023, nearly one year after public release of generative AI’s poster child, ChatGPT (“Introducing ChatGPT”), governments from around the world gathered to discuss policies to address generative AI and its impacts on society and markets. In the United Kingdom, on the first of November, The Bletchley Declaration by Countries Attending the AI Safety Summit called for worldwide cooperation to address potential threats and equitably distribute the benefits of AI (“The Bletchley Declaration”), and in the US, one day earlier, the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence took bolder steps by defining objectives and setting deadlines (“Executive Order”).
Vice President Kamala Harris attended the summit at Bletchley Park — where Alan Turing and others laid the foundations of computer science while breaking the ciphers the Axis powers used to encode secret messages during WWII (“Alan Turing FAQs”) — to provide US support for The Bletchley Declaration on AI Safety. As Harris spoke before delegations from 27 other countries, the public’s growing outcry against the corporate ownership of generative AI models was generating a “moral panic” (Naughton) in the press and elsewhere, putting governments in the position of trying to please competing interests. “The twin subliminal convictions are that the technology is advancing at an unprecedented speed and that it’s out of control” (Naughton). Governments rely on taxes — and politicians on campaign donations — from tech giants like Microsoft, Amazon, Meta, X, Google, and Apple, which seek to cement their dominance in nascent generative AI markets (Naughton; Thompson; Sinofsky).
Much of the hubbub centers disproportionately on science-fiction scenarios of AI destroying humanity or replacing it atop the intelligence pyramid. More prevalent in the public consciousness, though, are concerns about job-market disruptions caused by generative AI’s ability to reproduce, and thus replace, creative work that was once the sole domain of human creators. The assumption that market disruptions accompany major new technological advances has certainly held true throughout history. The assumption that all market disruptions are inherently bad for those being served by those markets has not, even when disruption has eliminated those markets altogether.
In fact, a study of AI labor displacement between 2013 and 2019 across 701 detailed occupations has already been conducted, and its findings are in line with expectations: AI does indeed disrupt markets. However, “digital skills will be the bridge for human labor and AI to work together” (Chen), meaning that training programs, when applied to workforces impacted by technological advances, allow impacted laborers to benefit from those disruptions. We already have major job-retraining efforts in place to re-skill coal, oil, and gas infrastructure laborers, empowering them to benefit from advances in clean power-generation technologies (“Clean Energy”). Similar job-retraining efforts can benefit creatives, whether by training them to incorporate generative AI into their work or to elevate their work to levels unreachable by generative AI systems (Chen).
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” Sen. Josh Hawley of Missouri remarked at the May hearing before the Senate Judiciary subcommittee on privacy, technology and the law (O’Brien). Tremendous assumptions are being made about this advancing technology and its effects on our lives, with very little evidence being presented for its actual impacts before we are fully aware of its current capabilities. “I would argue that we do not even have a reliable measure of AI capabilities let alone the velocity, direction, or acceleration of those capabilities,” cautions Steven Sinofsky, former Microsoft executive (Naughton; Sinofsky).
The Science Fiction and Fantasy Writers Association makes several assumptions in its letter to the US Copyright Office. Its two claims, that generative AI amounts to a “denial-of-service attack” (“SFWA Comments on AI” par. 7) on writers’ markets and that the training and output of generative AI models do not respect “the rights of creative workers” (“SFWA Comments on AI”), are not aligned with precedent.
The first seems reasonable on its face: human writers are being “increasingly crowded out in online marketplaces such as Amazon, where there is a well-known issue of large numbers of AI-generated book-length works for sale” (“SFWA Comments on AI”). The desire of many human writers not to have to compete with works written in whole or in part by generative AI is clear; however, the remedy the SFWA suggests is extreme.
“Work incorporating AI without that clear provenance should be treated as presumptively infringing on the copyright of unknown authors,” is their standard (“SFWA Comments on AI” par. 19). Works created in violation of this standard would be subject to lawsuit, with damages going to human writers. While the outcome, profit sharing and monetary judgments for anyone whose creative works were used to train generative AI, is laudable and desirable, it is contradicted by decades of case law deciding fair use and infringement liability (Silbey; “Sony Corp. of America”).
Historically, new technologies have caused anxiety at their introduction to the art world. When the personal computer and desktop publishing were introduced, traditional typographers and graphic designers were shocked to find their acquired skills no longer afforded them a guaranteed livelihood. They had to expand their knowledge and learn to incorporate the new technology into their work in order to stay productive and competitive. “Ultimately, I think they realized that they were going to have to expand their business. … Quite a few of them thought, ‘I need a new business strategy, or I am going to go out of business.’” (McJones and Spicer 45)
“The Biden administration,” in its recent Executive Order, “is operating with a fundamental lack of trust in the capability of humans to invent new things, not just technologies, but also use cases, many of which will create new jobs.” (Thompson) Digital artists, who found ways to build audiences through social media, now rally those audiences to their defense from what they see as a threat to their livelihood. “Syed says that for things such as fan-art, self-published books, logos and family portraits, people may turn to AI. ‘These clients will usually care more about saving money than the quality of the finished product,’ she says. ‘They will prefer to use AI if it means keeping costs low. So a lot of these small jobs will vanish.’” (Shaffi)
Meanwhile, court dockets are filling up with copyright lawsuits by struggling artists and writers, blaming tech giants for “stealing” their works without their consent to train generative AI models that will compete in the market with their works. OpenAI insists that copyright law favors its outlook, which is that teaching algorithms to draw and write, even using copyrighted material, falls under the domain of “fair use.” The Congressional Research Service’s report to Congress quotes OpenAI’s defense of using copyrighted material to train ChatGPT: “[w]ell-constructed AI systems generally do not regenerate, in any nontrivial portion, unaltered data from any particular work in their training corpus.” Thus, OpenAI states, infringement “is an unlikely accidental outcome” (Zirpoli). This is analogous to the standard set with the Xerox photocopier, that photocopies of copyrighted material can be transformative and thus are protected. Even with intentional outcomes, courts have determined that responsibility for infringement lies in the actions of the user, not the capabilities of the product (Silbey).
As beleaguered creators call for strict punishment of generative AI manufacturers, it is important that we recognize the dangers of overprotecting creative works. In a recent court case, copyright law was used to protect an author from harassment by members of the notorious Kiwi Farms group (Belanger). A member of the group, “a web forum accused of fomenting harassment campaigns” (Brown), encouraged other members to freely share the author’s works in a way that would prevent the author from earning money from publication of his work. The ruling, and the disagreement around it, mirrors in some ways the actions of anti-generative-AI activists at the SFWA, who believe that their only protection from having their livelihoods subsumed is copyright law, forcefully applied, such that the precedent of “fair use” is diminished. “We definitely don’t want more copyright doctrines that facilitate pernicious removals,” noted Technology & Marketing Law Blog publisher Prof. Eric Goldman (Belanger).
Attempts to quell speech that is harmful to society are laudable but have a history of backfiring. “As Cloudflare itself pointed out in an August 31 blog post, the company’s private decisions to terminate services to 8chan and the Daily Stormer were followed by ‘a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us’” (Brown). What we use against others to protect ourselves, they can turn against us to protect themselves. This is what it means to live in a fair and just society where the laws apply to all equally.
And yet, giving free rein over our very personal data to organizations ranging from the shadowy Kiwi Farms to well-funded corporations like OpenAI, Google DeepMind, Stability AI, Midjourney Inc., Inflection AI, Anthropic, and an ever-growing list of “Surveillance Capitalists” threatens to undermine democratic protections like the right to privacy. “We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content” (Zuboff). Social psychologist Shoshana Zuboff makes clear the stakes are high: “Democracy is on the ropes in the UK, US, many other countries. Not in small measure because of the operations of surveillance capitalism” (Kavenna).
With democracy itself in crisis (Zuboff) and our livelihoods at stake (“SFWA Comments on AI”), what then do we do? “Alarmists or ‘existentialists’ say they have enough evidence” to pause the production of generative AI models until society can better prepare itself to manage the harmful impacts of the technology. “If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns” (Sinofsky).
Intellectual property law is meant to protect innovators (US House of Representatives). Critics say government and business are moving too slowly and without sufficient purpose (Naughton). Even the founder of OpenAI, CEO Sam Altman, is open about his concerns. In his recent testimony before Congress he warned, “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too” (O’Brien).
Where Do We Go from Here?
There is no short supply of lofty propositions, as evidenced in communiqués from individuals, academia, government, and the media. What we lack are integrated solutions that address quantifiable areas of impact, on which we have precious little data. Surveillance capitalists are not the sole purveyors of generative AI systems, and proposals to protect us from them can undermine solutions meant to prevent our reliance on them: “Authorities should work with policymakers during the legislative process, and with competent authorities during the enforcement phase, to ensure that they do not impose undue restrictions that would deter the development of open-source models in a way that would favour the use of closed-source models” (Carugati).
While ethicists point to biases in generative AI as a serious problem, they also suggest that the work of addressing those biases is itself a source of employment, one that could mitigate both bias and job displacement: “One essential pathway toward fixing AI’s flaws is to build a more diverse pipeline to tech careers, one that includes women and people of color” (Gibson). This is a solution already well underway, with ongoing success, though we clearly still have a long way to go.
Unions are effectively advocating for workers and bargaining with corporate employers for better contracts that address impacts on employment that may someday come from generative AI (Alexander). Today, Microsoft and the AFL-CIO, the US’s largest labor federation, entered into “a joint commitment to respect the right of employees to form or join unions, to develop positive and cooperative labor-management relationships, and to negotiate collective bargaining agreements that will support workers in an era of rapid technological change” (Bishop).
The Internet Archive is a digital library, a repository of data collected from the internet as well as an archive of physical information that has been digitized and made available to anyone with internet access. It is also likely to be affected by efforts to clamp down on the concept of fair use, which affords it legal protection to maintain its library of copyrighted materials. It, like the SFWA, submitted comments to the US Copyright Office, but instead of asking for stronger copyright laws, it asked for protection from overzealous protectors of markets for creative works, while also recommending ways to alleviate the pressure on those markets through education and retraining efforts (Bailey).
“Regulation of Artificial Intelligence should be considered holistically — not solely through the isolated lens of copyright law. As explained in the Library Copyright Alliance Principles for Artificial Intelligence and Copyright, ‘AI has the potential to disrupt many professions, not just individual creators. The response to this disruption (e.g., support for worker retraining through institutions such as community colleges and public libraries) should be developed on an economy-wide basis, and copyright law should not be treated as a means for addressing these broader societal challenges.’ Going down a typical copyright path of creating new rights and licensing markets could, for AI, serve to worsen social problems like inequality, surveillance and monopolistic behavior of Big Tech and Big Media.” (Bailey)
Court cases once thought to be the nail in the coffin of generative AI are being steadily dismissed. “The judge found that their generative AI output was likely not similar enough to the artists’ work to infringe” (Brittain), a refrain heard often in other court cases deciding in favor of disruptive technologies. This refrain is most likely the cause of creators’ dissatisfaction with the legal system: artists don’t feel that the intellectual property system protects them, leading to demands for new protections. But “[c]opyright, especially copyright maximalism, has done a terrible job of preventing artist exploitation” (Lane). Is it possible that new protections are needed, but that we are reaching for the wrong ones? That rather than making copyright itself more restrictive, thereby handing more power to Big Tech and Big Media, we should be taking other action to make arts and letters a more viable career path?
Perhaps a solution like the one proposed by the SFWA could work? Wouldn’t we all welcome a solution that offers a method to compensate everyone who participates in the training of generative AI systems — even those of us who are not quite lucky enough to be professional science fiction and/or fantasy writers? “As writers of science fiction and fantasy, we are confident that the boundless ingenuity of the researchers who brought these AI systems to the world will find ways to do their work that fully respect our rights.” (“SFWA Comments on AI”)
Perhaps we can spread the wealth in the form of a Universal Basic Income to everyone whose writing, images, and contributions to the pool of human knowledge were used to train these disruptive new technologies? “The application of AI is bound to affect labor markets, and thus social security must concern the weak low-skilled labor as well as building the strong bottom line for whole labor force. Basic income support could provide a comprehensive social safety net against the risk of AI shock.” (Chen)
In his own thought-provoking essay on the matter, tech analyst Ben Thompson offers another angle that includes something for everyone, from technologists eager to put the pedal to the metal, to ethicists eager to find intersectional solutions to intersectional problems, to advocates of free markets in ideas and products.
“Innovation — technology, broadly speaking — is the only way to grow the pie, and to solve the problems we face that actually exist in any sort of knowable way, from climate change to China, from pandemics to poverty, and from diseases to demographics. To attack the solution is denialism at best, outright sabotage at worst. Indeed, the shoggoth to fear is our societal sclerosis seeking to drag the most exciting new technology in years into an innovation anti-pattern.” (Thompson)
Generative AI poses new challenges to an already-challenged social and economic environment brought about by climate catastrophe, pandemic fatigue, and social media manipulation. We’ve lost trust in each other’s solutions, because we are cynical about each other’s motivations, while being perhaps overly certain of the purity of our own. It is our lack of trust in each other that we are now confronted with.
In his essay on Trust in AI, security researcher Bruce Schneier suggests,
“A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.” (Schneier)
It is an intriguing suggestion that deserves serious consideration. Could a conglomerate of federal, state, and local public libraries, presidential libraries, scientific institutions, and other stores of public data collate their holdings into one foundational model whose inputs and outputs belong to us all? From a data-provenance point of view the idea is appealing, and the outputs would enrich us all as a society rather than enriching only the shareholders of Microsoft or Google. For those distrustful of Big Tech and Big Media, Big Government must seem the only authority left to which to appeal, but is a call for government intervention likely to inspire trust at a time when “public trust in government” is “near historic lows” (“Public Trust”)? Will creators trust government to use their data without their consent where they did not trust tech companies? If the US government is to be the steward of the public’s data (a stance that has met with fierce resistance since America’s founding), it would have to come about, as Sinofsky says, through the use of “democracy to validate those concerns” (Sinofsky).
It is still unknown what conclusions the US Copyright Office will draw from public comments. While courts have so far found that current intellectual property law does not weigh in favor of an individual creator’s right to determine whether another human or machine learns from their work, the US Copyright Office is empowered to administer and enforce current copyright law as it is written by Congress (US House of Representatives). Should it determine that “fair use” does not cover the use of copyrighted works for training generative AI systems, and should Congress not pass new copyright law explicitly allowing it, then in the unlikely event the US Supreme Court has nothing to say on the matter, generative AI systems trained on copyrighted material will become more difficult to build, but systems will continue to be trained on licensed material. OpenAI has already partnered with the Associated Press, “licensing part of AP’s text archive” in exchange for “technology and product expertise” (“AP, OpenAI”).
There is one other way to view the juncture at which we find ourselves. What if this is an opportunity to set aside our notions of ownership completely? “Generative AI should be understood as a fundamental disruption of the construct of ownership and identity. Despite the strategies of big technological corporations, it may well prepare the ground for concepts that effectively encourage and reward processes of cross-creation, rather than silo thinking and solo-acting” (Schneider).
Professor Lawrence Lessig has long fought to protect creators’ rights. In 2004 he wrote Free Culture: The Nature and Future of Creativity, and in its introduction he foresaw the future we find ourselves living in today. He echoes not only the SFWA, but also the Internet Archive, and even OpenAI:
“The hard question is therefore not whether a culture is free. All cultures are free to some degree. The hard question instead is “How free is this culture?” How much, and how broadly, is the culture free for others to take and build upon? Is that freedom limited to party members? To members of the royal family? To the top ten corporations on the New York Stock Exchange? Or is that freedom spread broadly? To artists generally, whether affiliated with the Met or not? To musicians generally, whether white or not? To filmmakers generally, whether affiliated with a studio or not?” (Lessig 30)
Are we so worried about the “top ten corporations on the New York Stock Exchange” (Lessig 30) that we’re willing to hand the freedom of creativity, and the capability to earn a living from it, to anyone, trustworthy or not, so long as they aren’t Big Tech? We’re presented with a pivotal choice: continue playing defense against Google, Microsoft, or Meta, or pivot toward a proactive, shared vision for our future? One in which we collaboratively build trustworthy tech together? One in which our collective rights transcend the objectives of corporate America or the protection of a privileged class of creators, whether determined by the SFWA, the AFL-CIO, Disney/Pixar, or the directives of the US Copyright Office?
The underlying challenge, as so many have pointed out here, is a crisis of trust: in Big Tech, in Big Media, in Big Government, the very institutions that propose to be the solution, if only we would leave it up to them. To navigate this crisis, we must rebuild trust not only in our institutions, but also in each other. Isn’t that the essence of the democratic process we all claim to believe in? Is it not through our common pursuit of goals, and the compromises we make along the way, that we achieve the most desirable outcomes for all?
Change is inevitable. How we react to change is a well-established measure of character, and we have much to prove about our collective ability to collaborate for positive change. With the promise of generative AI, as well as its potential for harm, we have a new opportunity for the collective growth that only change brings. Let’s not squander it. Humanity doesn’t have the luxury of inaction.
“Alan Turing FAQs.” Bletchley Park Trust, https://bletchleypark.org.uk/our-story/alan-turing-faqs/. Accessed 5 Nov. 2023.
Alexander, Bryan. “SAG-AFTRA President Fran Drescher: AI Protection Was Nearly ‘Deal Breaker’ in Actors Strike.” USA Today, 10 Nov. 2023. https://www.msn.com/en-us/tv/news/sag-aftra-president-fran-drescher-ai-protection-was-nearly-deal-breaker-in-actors-strike/ar-AA1jJQjL. Accessed 12 Nov. 2023.
“AP, Open AI [sic] agree to share select news content and technology in new collaboration.” Associated Press, 13 July 2023. https://www.ap.org/press-releases/2023/ap-open-ai-agree-to-share-select-news-content-and-technology-in-new-collaboration. Accessed 11 Dec. 2023.
Bailey, Lila. “Internet Archive Comments In Response To The Copyright Office Study On Artificial Intelligence.” Internet Archive, 30 Oct. 2023. https://archive.org/details/internet-archive-comments-in-response-to-the-copyright-office-study-on-artificial-intelligence/. Accessed 10 Nov. 2023.
Belanger, Ashley. “Kiwi Farms ruling sets ‘dubious’ copyright precedent, expert warns.” Ars Technica, 18 Oct. 2023, https://arstechnica.com/tech-policy/2023/10/court-ruling-may-doom-kiwi-farms-but-set-dubious-copyright-precedent/. Accessed 22 Oct. 2023.
Bishop, Todd. “Microsoft tries to address AI labor concerns with new AFL-CIO pact and ‘neutrality framework’.” GeekWire, 11 Dec. 2023. https://www.geekwire.com/2023/microsoft-tries-to-address-ai-labor-concerns-with-new-afl-cio-pact-and-neutrality-framework/. Accessed 11 Dec. 2023.
“The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.” Gov.uk, 1 Nov. 2023, www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023. Accessed 5 Nov. 2023.
Brittain, Blake. “US judge trims AI copyright lawsuit against Meta.” Reuters, 9 Nov. 2023. https://www.reuters.com/legal/litigation/us-judge-trims-ai-copyright-lawsuit-against-meta-2023-11-09/. Accessed 12 Nov. 2023.
Brown, Elizabeth Nolan. “Kiwi Farms Is Back.” Reason, 7 Oct. 2022. https://reason.com/2022/10/07/kiwi-farms-is-back/. Accessed 12 Nov. 2023.
*†Carugati, Christophe. Competition In Generative Artificial Intelligence Foundation Models. Bruegel, 2023, JSTOR. http://www.jstor.org/stable/resrep52128. Accessed 11 Nov. 2023.
*†Chen, Ni, et al. “Can Digital Skill Protect Against Job Displacement Risk Caused by Artificial Intelligence? Empirical Evidence from 701 Detailed Occupations.” PLoS ONE, vol. 17, no. 11, Nov. 2022. PubMed. https://doi.org/10.1371/journal.pone.0277280. Accessed 9 Dec. 2023.
“Clean Energy Technology.” Shoreline Community College, 2023. https://www.shoreline.edu/programs/clean-energy-technology/. Accessed 11 Dec. 2023.
Gibson, Lydialyle. “Bias in Artificial Intelligence.” Harvard Magazine, 2 Aug. 2021. https://www.harvardmagazine.com/2021/08/meredith-broussard-ai-bias-documentary. Accessed 12 Nov. 2023.
“Introducing ChatGPT.” OpenAI, 30 Nov. 2022, https://openai.com/blog/chatgpt. Accessed 5 Nov. 2023.
Kavenna, Joanna. “Shoshana Zuboff: Surveillance capitalism is an assault on human autonomy.” The Guardian, 4 Oct. 2019, https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy. Accessed 23 Oct. 2023.
Lessig, Lawrence. Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. Penguin Press, 2004.
Library of Congress. “ArtI.S8.C8.1.1 Origins and Scope of the Power.” Cornell Law School Legal Information Institute, https://www.law.cornell.edu/constitution-conan/article-1/section-8/clause-8/origins-and-scope-of-the-power. Accessed 9 Oct. 2023.
†Littman, Michael L., et al. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford University, Sept. 2021, http://ai100.stanford.edu/2021-report. Accessed 23 Oct. 2023.
*†McJones, Paul, and Dag Spicer. “The Advent of Digital Typography.” IEEE Annals of the History of Computing, vol. 42, no. 1, Jan. 2020, pp. 41–50. EBSCOhost, https://doi-org.ezproxy.shoreline.edu/10.1109/MAHC.2019.2929883.
Naughton, John. “AI is not the problem, prime minister – the corporations that control it are.” The Guardian, 4 Nov. 2023. https://www.theguardian.com/commentisfree/2023/nov/04/ai-is-not-the-problem-prime-minister-but-the-corporations-that-control-it-are-rishi-sunak. Accessed 5 Nov. 2023.
O’Brien, Matt. “WATCH: OpenAI CEO Sam Altman testifies before Senate Judiciary Committee.” PBS Newshour, 15 May 2023, Updated 16 May 2023. https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee. Accessed 6 Nov. 2023.
*Roose, Kevin. “How Does ChatGPT Really Work?” The New York Times, vol. 172, no. 59752, 2023, EBSCOhost. very-long-link. Accessed 4 Nov. 2023.
Sales, Ben. “Hydrox, Original Kosher Sandwich Cookie, Accuses Oreo of Sabotage.” The American Israelite, 15 Aug. 2018. very-long-link. Accessed 9 Oct. 2023.
*Schneider, Florian. “Imaginary Property: The Rise of Artificial Intelligence Is Disrupting Creative Processes and Challenging Constructs of Ownership and Intellectual Property.” Architectural Review, Sept. 2023, pp. 110–14. EBSCOhost, very-long-link.
“SFWA Comments on AI to US Copyright Office.” Science Fiction & Fantasy Writers Association, 2 Nov. 2023. https://www.sfwa.org/2023/11/03/sfwa-comments-on-ai-to-us-copyright-office/. Accessed 12 Nov. 2023.
Shaffi, Sarah. “‘It’s the opposite of art’: why illustrators are furious about AI.” The Guardian, 23 Jan. 2023. https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai. Accessed 12 Nov. 2023.
Silbey, Jessica. “How Xerox’s Intellectual Property Prevented Anyone From Copying Its Copiers.” Smithsonian Magazine, 2 Jul. 2019. https://www.smithsonianmag.com/innovation/how-xeroxs-intellectual-property-prevented-anyone-from-copying-copiers-180972536/. Accessed 12 Nov. 2023.
Sinofsky, Steven. “211. Regulating AI by Executive Order is the Real AI Risk.” Hardcore Software, 1 Nov. 2023. https://hardcoresoftware.learningbyshipping.com/p/211-regulating-ai-by-executive-order. Accessed 10 Nov. 2023.
“Snakes, Spiders, and Santa Rap” prompt and output. ChatGPT Plus, OpenAI, 4 Nov. 2023, https://chat.openai.com/c/151b7066-7dbf-4fe8-a030-5c7350f46e04.
Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417 (1984). Supreme Court of the US. https://supreme.justia.com/cases/federal/us/464/417/. Accessed 12 Nov. 2023.
Thompson, Ben. “Attenuating Innovation (AI).” Stratechery, 1 Nov. 2023. https://stratechery.com/2023/attenuating-innovation-ai/. Accessed 10 Nov. 2023.
“Trade Secret Policy.” US Patent and Trademark Office, https://www.uspto.gov/ip-policy/trade-secret-policy. Accessed 23 Oct. 2023.
US Copyright Office. “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.” Copyright.gov, 16 Mar. 2023, https://copyright.gov/ai/ai_policy_guidance.pdf. Accessed 23 Oct. 2023.
US Executive Office of the President. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The White House, 30 Oct. 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. Accessed 5 Nov. 2023.
US House of Representatives. “Copyright Law of the United States (Title 17).” The Library of Congress, Dec. 2022, https://www.copyright.gov/title17/, Section 106. Accessed 9 Oct. 2023.
US Patent and Trademark Office. “Patent Essentials.” USPTO.gov, https://www.uspto.gov/patents/basics/essentials. Accessed 9 Oct. 2023.
Wiggers, Kyle. “The current legal cases against generative AI are just the beginning.” TechCrunch, 27 Jan. 2023, https://techcrunch.com/2023/01/27/the-current-legal-cases-against-generative-ai-are-just-the-beginning/. Accessed 23 Oct. 2023.
CRAP Analysis (Updated!): Although this source (Wiggers) is from the current year, 2023, the rapid pace of the field means it is no longer a current analysis; it is nonetheless relevant, as it establishes a point on the timeline of legal analysis of generative AI’s effects on intellectual property law. According to K&L Gates, an American multinational law firm, “While the outcomes of these early generative AI cases are far from certain, preliminary indications suggest that courts are not succumbing to the hype and rhetoric, and are approaching generative AI claims with a healthy level of skepticism” (https://www.klgates.com/Recent-Trends-in-Generative-Artificial-Intelligence-Litigation-in-the-United-States-9-5-2023). TechCrunch’s reputation is solid for technology coverage, but may not be as solid for legal coverage. Authoritativeness may also be lacking in the legal area, though the article cites reputable legal sources such as Bloomberg Law. Currency is the real weakness here. The purpose seems to be to warn companies interested in building a business on these technologies that the pending legal challenges have some legitimacy. It would have been better if the article had dug deeper into previous examples of failed regulation of audio- and video-tape copying technologies and the failed attempts at regulating encryption technologies.
†*Xiong, Hui, et al. “Digital Twin Brain: A Bridge between Biological Intelligence and Artificial Intelligence.” Cornell University Library, arXiv.org, 3 Aug. 2023, ProQuest. very-long-link. Accessed 5 Nov. 2023.
†Zirpoli, Christopher T. “Generative Artificial Intelligence and Copyright Law.” Congressional Research Service, 29 Sept. 2023, https://crsreports.congress.gov/product/pdf/LSB/LSB10922. Accessed 9 Oct. 2023.
Zuboff, Shoshana. “You Are the Object of a Secret Extraction Operation.” New York Times, 12 Nov. 2021. https://www.nytimes.com/2021/11/12/opinion/facebook-privacy.html. Accessed 12 Nov. 2023.
*Sources gathered using the SCC Library databases, with database in bold.
Image by DALL-E 3 and ChatGPT Plus: “Here’s an abstract illustration that captures the themes of technological evolution, intellectual property, and the challenges of generative AI, along with the concluding emotions of your paper. This image is designed to fit the style of your technology blog and is on a white background, ready for you to make transparent as needed.”