In my English 102 class, we’re doing a six-phase research paper, with each phase adding new concepts we are learning from the reasoning selection of the “research and reasoning” menu. I tried to meet the objectives and keep the flow smooth. This is phase four, and I suspect revisions in future phases will smooth the flow out even further. I’m quite liking this concept of revisions, though I must say I wouldn’t bother with it unless forced by circumstance, as it’s a tremendous amount of intellectual and emotional labor. The results, in my estimation, certainly reflect the effort I put into them.
Intellectual Property Law: Does it Protect Creators’ Rights in the Age of Machine Learning?
It depends on what you think Creators’ Rights are.
The Learning Algorithm
With the aid of a new type of computer algorithm, a learning algorithm based on the way biological neurons connect and communicate, our digital future is being rewritten before our eyes. “Artificial neural networks have two basic elements including nodes and connections between nodes, which facilitate computational information flow within the network, and share a conceptually similar computational paradigm with the biological brain” (Xiong et al.). Our digital future is being generated, extruded, fabricated, hallucinated, and confabulated in ways we did not, and perhaps could not, have foreseen.
These machines work in new ways, by analyzing the relationships between billions, even trillions, of examples of images, audio, and written language; they can be prompted to extrapolate from that data, and to generate new examples that follow in the same vein (Roose). These prompts can mix and match different ideas, such that even a novice prompter can ask for output that includes an Epic Rap Battle between Santa Claus and the Tooth Fairy over who gives the best gifts, alongside Python code to build a website to sell mail-order exotic snake and spider snacks (“Snakes, Spiders, and Santa Rap”).
The artificial neural network “was first proposed almost 80 years ago by McCullough and Pitts” (Xiong et al.), but recent advances in high-speed computing, driven by computer gaming and cryptocurrencies, have accelerated machine-learning systems in the last five years, powering devices and services we rely on daily at work, home, and play (Littman et al. SQ2).
Put simply, deep learning systems are computer algorithms that simulate biological neurons and are then trained on vast quantities of data, gathered from public and private data stores. These machine learning systems can achieve near-miraculous feats of natural language translation; visual and audio recognition; near-perfect voice, music, and sound generation; and the creation of fantastic works of art and photography, including the generation of architectural designs both realistic and unrealistic, as well as realistic simulations of the organic structures that make up the natural world (Littman et al. SQ1).
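To make the idea of “nodes and connections” concrete, here is a minimal, purely illustrative sketch in Python of a single artificial neuron. It is not the code of any real system discussed here; the input values and connection weights below are made-up placeholders standing in for the connection strengths that training on vast datasets would actually determine. Each node multiplies its inputs by those weights, adds a bias, and squashes the result with an activation function; deep learning systems chain millions or billions of such nodes together and adjust the weights during training.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'node': a weighted sum of its inputs plus a bias,
    passed through a sigmoid activation so the output lands between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Illustrative values only: in a trained network these weights would be
# learned from data, not chosen by hand.
print(neuron(inputs=[0.5, 0.8], weights=[0.9, -0.3], bias=0.1))
```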
Digital platforms that play host to thriving communities of artists and writers — once considered the sole domain of human creativity — have been overwhelmed with ersatz works, generated by computer algorithms so lifelike that they challenge our notions of creativity itself (“SFWA Comments on AI” par. 5). And of ownership: generative AI systems owe their creative abilities to creative humans whose words and images were digitized and sampled to train learning algorithms. “Creative workers alone are expected to provide the fruits of their labor for free, without even the courtesy of being asked for permission. Our rights are treated as a mere externality” (par. 3).
The backlash came swiftly on the heels of the release of image generators like DALL-E, Stable Diffusion, and Midjourney: generative AI is a legal problem, to be decided by the courts (Wiggers). The US Copyright Office was quick to rule that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author” (US Copyright Office).
A vital open question remains, however. Copyright law was intended to encourage humans to share ideas, so that we might all benefit from them together and enhance our commonwealth. Deep learning models are trained on corpora of literature and archives of digital photos and art. Do the same laws apply to machines when they learn, even artificially? And how?
How does an idea become property?
Ideas are the raw material from which arts are drawn and literature is written. The English language is an idea, the Roman alphabet is an idea; the Russian language is an idea, the Cyrillic alphabet is an idea. These ideas are accessible to everyone, freely distributed in dictionaries and thesauruses available at every public library in a wide variety of colors and formats. When you or I take these ideas and shuffle them like a deck of playing cards, each outcome is determined by our unique experiences, our unique personalities. We call this an “original work” and it is protected by the legal system of the United States of America.
“Copyright protection subsists, in accordance with this title, in original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device” (U.S. House of Representatives §102).
When you remix ideas to create something new, like a rocket pogo stick; a meme about the Seattle Freeze with a picture of a polar bear being given the cold shoulder by a group of hipster penguins wearing flannel; an electric truck shaped like an irregular polygon; a social media app for people who enjoy Harry Potter fan fiction; or a song about a lonely guitar-playing stockbroker with an aching artificial hip, your unique expression of those ideas is protected, but not the ideas themselves. Another creator can take those ideas and create duplicates, even trying to mimic your creation, and their unique expression will be protected just as strongly as yours (Library of Congress ArtI.S8.C8.1.1).
Say, though, that you invented an electric truck using a new type of motor, or a machine that fills bananas with marshmallow and then coats them in chocolate and candy sprinkles. That’s more than just your individual expression, isn’t it? It is, according to US patent and trademark law. US patent law protects utility, “for inventing a new or improved and useful process, machine, article of manufacture, or composition of matter”; design, “for inventing a new, original, and ornamental design for an article of manufacture”; and genetically modified plants, but we are making strictly non-GMO candied bananas here. Were you to trademark your candied bananas, the unique symbols and word(s) you applied for would be protected as well (US Patent and Trademark Office).
Nabisco’s Oreo cookies are an example of a trademark used to distinguish one product from a very similar one: Sunshine Biscuits’ Hydrox® cookies. Once a staple in America’s cupboards and lunchboxes, Hydrox® cookies were invented, baked, and sold years before Oreos were sold alongside them and eventually became the more popular brand. The dispute between the cookie makers goes on to this day, over 100 years later, and has been taken to extremes: “Oreo stockers… push Hydrox® aside when they place Oreo boxes on the shelves.” The key, according to intellectual property law, is that the cookies themselves and the processes used to make them weren’t precisely alike — a selling point for the kosher Hydrox® — and thus, not infringing (Sales).
The recipe for each cookie may be similar, but in the case of “secret recipes,” the US Patent and Trademark Office describes special protections under US trade secret policy that shield your recipe from prying eyes, yet do not prohibit other cookie makers from independently discovering or reverse engineering it. They only prevent outright theft, when the offending party intends or knows “that the offense will, injure any owner of that trade secret” (“Trade Secret Policy”).
It’s possible that one day even a registered trademark with years of regular use can lose its protection by being too successful. Xerox is an example: the company’s name became so ubiquitous in identifying the process of photocopying a document that it lost trademark protection for the process (which it never claimed), while retaining protection for its corporate identity. The act of xeroxing someone else’s work is also an example of how an invention can facilitate copying of copyrighted materials and yet be protected itself. “While the Xerox machine is a tool for making exact copies, it often facilitates transformative creativity from innumerable writers, artists and musicians” (Silbey).
In the landmark decision of Sony Corporation of America v. Universal City Studios, the Supreme Court determined that, in the case of video tape recording, copyright law “does not support respondents’ novel theory that supplying the ‘means’ to accomplish an infringing activity and encouraging that activity through advertisement are sufficient to establish liability for copyright infringement” (Sony Corp. of America). These examples are signposts along the way to help us identify where we are with respect to the copyright infringement potential of generative AI.
Infringement: Who Decides?
Few creators will be at the table when the billions and billions of US dollars are passed out by Silicon Valley venture capitalists in the digital gold rush for artificial intelligence, despite much of that money being used to scrape our portfolios for examples of our work to train the machines that will replace us (Littman et al. SQ11). “Some content creators, including publishers, have complained about potential IPR infringements for using their copyrighted content without permission” (Carugati).
At the end of 2023, nearly one year after public release of generative AI’s poster child, ChatGPT (“Introducing ChatGPT”), governments from around the world gathered to discuss policies to address generative AI and its impacts on society and markets. In the United Kingdom, on the first of November, The Bletchley Declaration by Countries Attending the AI Safety Summit called for worldwide cooperation to address the threats and equitably distribute the benefits of AI (“The Bletchley Declaration”), and in the US, one day earlier, the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence took bolder steps by defining objectives and setting deadlines (“Executive Order”).
Vice President Kamala Harris attended the summit at Bletchley Park — where Alan Turing “contributed to the development of the field of computer science” and broke cryptographic codes the Germans used during WWII (“Alan Turing FAQs”) — to provide US support for The Bletchley Declaration on AI Safety. As Harris spoke before delegations from 27 other countries, the public’s growing outcry against the corporate ownership of generative AI models was generating a “moral panic” (Naughton) in the press and elsewhere, putting governments in the position of trying to please competing interests. “The twin subliminal convictions are that the technology is advancing at an unprecedented speed and that it’s out of control” (Naughton). Governments rely on taxes — and, more importantly, campaign donations — from tech giants like Microsoft, Amazon, Meta, X, Google, and Apple, which seek to cement their dominance in nascent generative AI markets (Naughton; Thompson; Sinofsky).
While much of the hubbub focuses on science fiction scenarios of AI destroying humanity or replacing us at the top of the intelligence pyramid, we face more pressing problems: job market disruptions caused by generative AI’s ability to reproduce creative work well enough to displace jobs that were once the sole domain of human creators. This point is usually treated as a given rather than contested, and yet it deserves to be examined.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” Sen. Josh Hawley of Missouri remarked at the May hearing before the Senate Judiciary subcommittee on privacy, technology and the law (O’Brien). Tremendous assumptions are being made about this advancing technology and its effects on our lives, with very little evidence presented for its actual impacts and before we are fully aware of its current capabilities. “I would argue that we do not even have a reliable measure of AI capabilities let alone the velocity, direction, or acceleration of those capabilities,” cautions Steven Sinofsky, formerly of Microsoft (Naughton; Sinofsky).
The Science Fiction and Fantasy Writers Association makes several assumptions in its letter to the US Copyright Office. Its two claims, that generative AI amounts to a “denial-of-service attack” on writers’ markets and that the training and output of generative AI models do not respect “the rights of creative workers” (“SFWA Comments on AI”), are not aligned with the facts; instead, they are aspirational claims that would create new rights for the writers the association represents, forming not only the basis of its argument but also its demands of generative AI companies that use anyone’s work for training. While profit sharing for anyone whose creative works were used to train generative AI is a laudable and desirable outcome, it is contradicted by decades of case law deciding fair use and infringement liability (Silbey; Sony Corp. of America; “SFWA Comments on AI”).
New technologies have previously caused anxiety at their introduction to the art world. When the personal computer and desktop publishing were introduced, traditional typographers and graphic designers were shocked to find their acquired skills no longer afforded them a guaranteed livelihood. They had to expand their knowledge and learn to incorporate the new technology into their work in order to stay productive and competitive. “Ultimately, I think they realized that they were going to have to expand their business. … Quite a few of them thought, ‘I need a new business strategy, or I am going to go out of business.’” (McJones 45)
“The Biden administration is operating with a fundamental lack of trust in the capability of humans to invent new things, not just technologies, but also use cases, many of which will create new jobs. It can envision how the spreadsheet might imperil bookkeepers, but it can’t imagine how that same tool might unlock entire new industries.” (Thompson) Digital artists, who found ways to build audiences through social media, now rally those audiences to their defense, in an attempt to protect their market from what they see as a threat to their livelihood. “Syed says that for things such as fan-art, self-published books, logos and family portraits, people may turn to AI. ‘These clients will usually care more about saving money than the quality of the finished product,’ she says. ‘They will prefer to use AI if it means keeping costs low. So a lot of these small jobs will vanish.’” (Shaffi)
The Congressional Research Service’s legal sidebar “Generative Artificial Intelligence and Copyright Law” (Zirpoli) is a thoroughly researched document, relevant, historical, and current. In its analysis, the CRS discusses whether the training or outputs of generative AI systems constitute copyright infringement. While it is arguable whether any amount of research could be sufficient to please the many parties eager to have the government side with them, the authors cannot be faulted for skimping on citations of legal precedent. They list the many cases currently being brought against generative AI companies without taking sides, presenting the arguments in a fair and equitable manner. Finally, they offer three options for Congress to address these still-new and often undecided controversies: amend current copyright law or pass new laws; adopt a “wait-and-see approach” of gathering more information and legislating later, when the full extent of the problem is better understood; or simply wait until these cases have worked their way through the courts and outcomes have been determined by judges and juries. These options sufficiently cover the alternatives available to Congress so early in the epoch of generative AI (Zirpoli).
Meanwhile, court dockets are filling up with copyright lawsuits from struggling artists and writers, blaming tech giants for taking their works without consent to train generative AI models that will limit the market for those works. But OpenAI insists that copyright law favors its outlook, which is that teaching algorithms to draw and write, even using copyrighted material, falls under the domain of “fair use.” The Congressional Research Service’s report to Congress quotes OpenAI in its defense of using copyrighted material to train ChatGPT: “‘[w]ell-constructed AI systems generally do not regenerate, in any nontrivial portion, unaltered data from any particular work in their training corpus.’ Thus, OpenAI states, infringement ‘is an unlikely accidental outcome’” (Zirpoli). Even with intentional outcomes, as we saw with the Xerox photocopier, courts have determined that infringement lies in the actions of the user, not the capabilities of the product (Silbey).
As beleaguered creators call for strict punishment of generative AI manufacturers, it is important that we recognize the dangers of overprotecting creative works. In a recent court case, copyright law was used to protect an author from harassment by members of the notorious Kiwi Farms, “a web forum accused of fomenting harassment campaigns” (Brown). A member of the forum encouraged others to freely share the author’s works in a way that would prevent the author from earning money from publication of his work. The ruling, and the disagreement around it, mirrors in some ways the actions of anti-generative-AI activists, who believe that their only protection from having their livelihoods subsumed is copyright law, forcefully applied, even if the concept of “fair use” is diminished. “We definitely don’t want more copyright doctrines that facilitate pernicious removals,” noted Technology & Marketing Law Blog publisher Prof. Eric Goldman (Belanger).
Attempts to quell speech that is harmful to society are laudable, but they have a history of backfiring. “As Cloudflare itself pointed out in an August 31 blog post, the company’s private decisions to terminate services to 8chan and the Daily Stormer were followed by ‘a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us’” (Brown). What we use against others to protect ourselves, they can turn against us to protect themselves. This is what it means to live in a fair and just society where the laws apply to all equally.
And yet, giving free rein over our very personal data to organizations ranging from the shadowy Kiwi Farms to well-funded corporations like OpenAI, Google DeepMind, Stability AI, Midjourney Inc., Inflection AI, Anthropic, and an ever-growing list of “surveillance capitalists” threatens to undermine democratic protections like the right to privacy. “We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content” (Zuboff). Social psychologist Shoshana Zuboff makes clear the stakes are high: “Democracy is on the ropes in the UK, US, many other countries. Not in small measure because of the operations of surveillance capitalism” (Kavenna).
With democracy itself in crisis (Zuboff) and our livelihoods at stake (“SFWA Comments on AI”), what then do we do? “Alarmists or ‘existentialists’ say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns” (Sinofsky).
Intellectual property law is meant to protect innovators (US Copyright Office). Critics say government and business are moving too slowly and without sufficient purpose (Naughton). Even OpenAI’s co-founder and CEO, Sam Altman, is open about his concerns. In his recent testimony before Congress he warned, “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too” (O’Brien).
Where Do We Go from Here?
There is no shortage of propositions lofted by stakeholders, as evidenced by the uproar in the media, academia, and government. What we lack are integrated solutions that attempt to address quantifiable areas of impact, on which we have precious little data. “Authorities should work with policymakers during the legislative process, and with competent authorities during the enforcement phase, to ensure that they do not impose undue restrictions that would deter the development of open-source models in a way that would favour the use of closed-source models” (Carugati).
While ethicists point to biases in generative AI as a serious problem, they also suggest remedies that could address both bias and impacts on employment: “One essential pathway toward fixing AI’s flaws is to build a more diverse pipeline to tech careers, one that includes women and people of color” (Gibson). Court cases once thought to be the nail in the coffin of generative AI are steadily being dismissed (Brittain). Unions are effectively advocating for workers and bargaining with corporate employers for better contracts that address impacts on employment that may someday come from generative AI (Alexander).
The Internet Archive is a digital library, a repository of data collected from the internet, as well as an archive of physical information that has been digitized and made available to anyone with internet access. It is also likely to be affected by efforts to clamp down on the concept of fair use, which affords it legal protection to maintain its library of copyrighted materials. The Archive released a comment to the US Copyright Office, asking for protection from overzealous protectors of markets for creative works, while also recommending ways to alleviate the pressure on those markets through education and retraining efforts (“Internet Archive Comments”).
“Regulation of Artificial Intelligence should be considered holistically–not solely through the isolated lens of copyright law. As explained in the Library Copyright Alliance Principles for Artificial Intelligence and Copyright, ‘AI has the potential to disrupt many professions, not just individual creators. The response to this disruption (e.g., support for worker retraining through institutions such as community colleges and public libraries) should be developed on an economy-wide basis, and copyright law should not be treated as a means for addressing these broader societal challenges.’ Going down a typical copyright path of creating new rights and licensing markets could, for AI, serve to worsen social problems like inequality, surveillance and monopolistic behavior of Big Tech and Big Media.” (“Internet Archive Comments”)
Perhaps the answer is a solution like the one proposed by the SFWA, which offers a method to compensate everyone who participates in the training of generative AI systems — even those of us who are not quite lucky enough to be professional science fiction and/or fantasy writers. “As writers of science fiction and fantasy, we are confident that the boundless ingenuity of the researchers who brought these AI systems to the world will find ways to do their work that fully respect our rights” (“SFWA Comments on AI”).
To close this paper as he closes his own thought-provoking essay on the matter, Ben Thompson offers another potential solution, one with something for everyone: technologists eager to put the pedal to the metal, ethicists eager to find intersectional solutions to intersectional problems, and advocates of free markets in ideas and products.
“Innovation — technology, broadly speaking — is the only way to grow the pie, and to solve the problems we face that actually exist in any sort of knowable way, from climate change to China, from pandemics to poverty, and from diseases to demographics. To attack the solution is denialism at best, outright sabotage at worst. Indeed, the shoggoth to fear is our societal sclerosis seeking to drag the most exciting new technology in years into an innovation anti-pattern.”
(Thompson)
Generative AI poses new challenges to an already-challenged social and economic environment brought about by climate catastrophe, pandemic fatigue, and social media manipulation. We’ve lost trust in each other’s solutions, because we are cynical about anyone’s motivations but our own, and we are each overly certain of the purity of our own motivations. “Stop the world, I want to get off,” goes the refrain from the popular song, and so also goes public sentiment in the shadow of frightening prospects for change.
Yet change is inevitable. How we react to change is a well-established measure of character, and we have much to prove with regard to our collective ability to collaborate to bring about positive change. With the promise of generative AI, as well as the potential for harm, we have a new opportunity for collective growth that only change brings. Let’s not squander this opportunity. Humanity doesn’t have the luxury of inaction.
Works Cited
“Alan Turing FAQs.” Bletchley Park Trust, https://bletchleypark.org.uk/our-story/alan-turing-faqs/. Accessed 5 Nov. 2023.
Alexander, Bryan. “SAG-AFTRA President Fran Drescher: AI Protection Was Nearly ‘Deal Breaker’ in Actors Strike.” USA Today, 10 Nov. 2023, https://www.msn.com/en-us/tv/news/sag-aftra-president-fran-drescher-ai-protection-was-nearly-deal-breaker-in-actors-strike/ar-AA1jJQjL. Accessed 12 Nov. 2023.
Bailey, Lila. “Internet Archive Comments In Response To The Copyright Office Study On Artificial Intelligence.” Internet Archive, 30 Oct. 2023, https://archive.org/details/internet-archive-comments-in-response-to-the-copyright-office-study-on-artificial-intelligence/. Accessed 10 Nov. 2023.
Belanger, Ashley. “Kiwi Farms ruling sets ‘dubious’ copyright precedent, expert warns.” Ars Technica, 18 Oct. 2023, https://arstechnica.com/tech-policy/2023/10/court-ruling-may-doom-kiwi-farms-but-set-dubious-copyright-precedent/. Accessed 22 Oct. 2023.
“The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.” Gov.uk, 1 Nov. 2023, www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023. Accessed 5 Nov. 2023.
Brittain, Blake. “US judge trims AI copyright lawsuit against Meta.” Reuters, 9 Nov. 2023, https://www.reuters.com/legal/litigation/us-judge-trims-ai-copyright-lawsuit-against-meta-2023-11-09/. Accessed 12 Nov. 2023.
Brown, Elizabeth Nolan. “Kiwi Farms Is Back.” Reason, 7 Oct. 2022, https://reason.com/2022/10/07/kiwi-farms-is-back/. Accessed 12 Nov. 2023.
*Carugati, Christophe. Competition In Generative Artificial Intelligence Foundation Models. Bruegel, 2023, JSTOR. http://www.jstor.org/stable/resrep52128. Accessed 11 Nov. 2023.
Gibson, Lydialyle. “Bias in Artificial Intelligence.” Harvard Magazine, 2 Aug. 2021, https://www.harvardmagazine.com/2021/08/meredith-broussard-ai-bias-documentary. Accessed 12 Nov. 2023.
“Introducing ChatGPT.” OpenAI, 30 Nov. 2022, https://openai.com/blog/chatgpt. Accessed 5 Nov. 2023.
Kavenna, Joanna. “Shoshana Zuboff: Surveillance capitalism is an assault on human autonomy.” The Guardian, 4 Oct. 2019, https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy. Accessed 23 Oct. 2023.
Library of Congress. “ArtI.S8.C8.1.1 Origins and Scope of the Power.” Cornell Law School Legal Information Institute, https://www.law.cornell.edu/constitution-conan/article-1/section-8/clause-8/origins-and-scope-of-the-power. Accessed 9 Oct. 2023.
Littman, Michael L., et al. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford University, Sept. 2021, http://ai100.stanford.edu/2021-report. Accessed 23 Oct. 2023.
*McJones, Paul, and Dag Spicer. “The Advent of Digital Typography.” IEEE Annals of the History of Computing, vol. 42, no. 1, Jan. 2020, pp. 41–50. EBSCOhost, https://doi-org.ezproxy.shoreline.edu/10.1109/MAHC.2019.2929883.
Naughton, John. “AI is not the problem, prime minister – the corporations that control it are.” The Guardian, 4 Nov. 2023. https://www.theguardian.com/commentisfree/2023/nov/04/ai-is-not-the-problem-prime-minister-but-the-corporations-that-control-it-are-rishi-sunak. Accessed 5 Nov. 2023.
O’Brien, Matt. “WATCH: OpenAI CEO Sam Altman testifies before Senate Judiciary Committee.” PBS Newshour, 15 May 2023, Updated 16 May 2023. https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee. Accessed 6 Nov. 2023.
*Roose, Kevin. “How Does ChatGPT Really Work?” The New York Times, vol. 172, no. 59752, 2023, EBSCOhost. https://sbctc-shoreline.primo.exlibrisgroup.com/discovery/openurl?institution=01STATEWA_SHORLINE&vid=01STATEWA_SHORLINE:SHORELINE&volume=172&date=20230408&pages=&issn=03624331&issue=59752&au=ROOSE,%20KEVIN&isbn=&genre=article&spage=B6&title=New%20York%20Times&atitle=How%20does%20ChatGPT%20really%20work%3F&sid=EBSCO:Academic%20Search%20Complete:162982176. Accessed 4 Nov. 2023.
Sales, Ben. “Hydrox, Original Kosher Sandwich Cookie, Accuses Oreo of Sabotage.” The American Israelite, 15 Aug. 2018. https://sbctc-shoreline.primo.exlibrisgroup.com/permalink/01STATEWA_SHORLINE/6olkva/cdi_proquest_newspapers_2088748080. Accessed 9 Oct. 2023.
“SFWA Comments on AI to US Copyright Office.” Science Fiction & Fantasy Writers Association, 2 Nov. 2023, https://www.sfwa.org/2023/11/03/sfwa-comments-on-ai-to-us-copyright-office/. Accessed 12 Nov. 2023.
Shaffi, Sarah. “‘It’s the opposite of art’: why illustrators are furious about AI.” The Guardian, 23 Jan. 2023. https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai. Accessed 12 Nov. 2023.
Silbey, Jessica. “How Xerox’s Intellectual Property Prevented Anyone From Copying Its Copiers.” Smithsonian Magazine, 2 Jul. 2019, https://www.smithsonianmag.com/innovation/how-xeroxs-intellectual-property-prevented-anyone-from-copying-copiers-180972536/. Accessed 12 Nov. 2023.
Sinofsky, Steven. “211. Regulating AI by Executive Order is the Real AI Risk.” Hardcore Software, 1 Nov. 2023. https://hardcoresoftware.learningbyshipping.com/p/211-regulating-ai-by-executive-order. Accessed 10 Nov. 2023.
“Snakes, Spiders, Santa Rap” prompt and output. ChatGPT Plus, OpenAI, 4 Nov. 2023, https://chat.openai.com/c/151b7066-7dbf-4fe8-a030-5c7350f46e04.
Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417 (1984). Supreme Court of the US, https://supreme.justia.com/cases/federal/us/464/417/. Accessed 12 Nov. 2023.
Thompson, Ben. “Attenuating Innovation (AI).” Stratechery, 1 Nov. 2023, https://stratechery.com/2023/attenuating-innovation-ai/. Accessed 10 Nov. 2023.
“Trade Secret Policy.” US Patent and Trademark Office, https://www.uspto.gov/ip-policy/trade-secret-policy. Accessed 23 Oct. 2023.
US Copyright Office. “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.” Copyright.gov, 16 Mar. 2023, https://copyright.gov/ai/ai_policy_guidance.pdf. Accessed 23 Oct. 2023.
US Executive Office of the President. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The White House, 30 Oct. 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. Accessed 5 Nov. 2023.
US House of Representatives. “Copyright Law of the United States (Title 17).” The Library of Congress, Dec. 2022, https://www.copyright.gov/title17/, Section 102. Accessed 9 Oct. 2023.
US Patent and Trademark Office. “Patent Essentials.” USPTO.gov, https://www.uspto.gov/patents/basics/essentials. Accessed 9 Oct. 2023.
Wiggers, Kyle. “The current legal cases against generative AI are just the beginning.” TechCrunch, 27 Jan. 2023, https://techcrunch.com/2023/01/27/the-current-legal-cases-against-generative-ai-are-just-the-beginning/. Accessed 23 Oct. 2023.
CRAP Analysis: Despite this source (Wiggers) being from the current year, 2023, it is not a current analysis, given the rapid state of the art. According to K&L Gates, an American multinational law firm, “While the outcomes of these early generative AI cases are far from certain, preliminary indications suggest that courts are not succumbing to the hype and rhetoric, and are approaching generative AI claims with a healthy level of skepticism.” (https://www.klgates.com/Recent-Trends-in-Generative-Artificial-Intelligence-Litigation-in-the-United-States-9-5-2023). The reputation of TechCrunch is solid for their technology coverage, but may not be quite as solid for their legal coverage. Authoritativeness may also be lacking in the legal area, though they cite reputable legal sources such as Bloomberg Law. Currency is the real weakness here. The purpose seems to be to sound a warning to companies interested in building a business on these technologies and to point out that there are legal challenges which do have some legitimacy. It would have been better if they had dug deeper into previous examples of failed regulation around audio and video tape copying technologies and the failed attempts at regulating encryption technologies.
*Xiong, Hui, et al. “Digital Twin Brain: A Bridge Between Biological Intelligence and Artificial Intelligence.” Cornell University Library, arXiv.org, 3 Aug. 2023, ProQuest, https://ezproxy.shoreline.edu/login?url=https://www.proquest.com/working-papers/digital-twin-brain-bridge-between-biological/docview/2847149658/se-2?accountid=1164. Accessed 5 Nov. 2023.
Zirpoli, Christopher T. “Generative Artificial Intelligence and Copyright Law.” Congressional Research Service, 29 Sept. 2023, https://crsreports.congress.gov/product/pdf/LSB/LSB10922. Accessed 9 Oct. 2023.
Zuboff, Shoshana. “You Are the Object of a Secret Extraction Operation.” New York Times, 12 Nov. 2021. https://www.nytimes.com/2021/11/12/opinion/facebook-privacy.html. Accessed 12 Nov. 2023.
*Sources gathered using the SCC Library databases, with database in bold.
Coda: The image that goes with this article is by DALL-E 3, generated by ChatGPT Plus (GPT-4).
ChatGPT
Certainly! Let’s weave a narrative around this intriguing illustration: In a world where the boundaries between the organic and the digital have blurred, a unique realm has emerged, marrying the realms of creativity and artificial intelligence. At the heart of this world lies the Enigma Sphere, a mysterious orb embodying the fusion of human ingenuity and the boundless possibilities of machine learning.
The Enigma Sphere floats serenely in a space that is neither entirely physical nor wholly digital. Around it swirls a tapestry of neural networks, resembling a cosmic dance of interconnected stars and galaxies. These networks represent the intricate pathways of AI algorithms, constantly learning, evolving, and intertwining with human thoughts and ideas.
Beneath the sphere, the shadows and echoes of traditional art forms – brushes, quills, and musical notes – are visible. They are the ancestral spirits of creativity, whispering their timeless wisdom to the modern world. These symbols remind us that even in an age dominated by technology, the essence of human expression remains a cornerstone of all creations.
In this realm, the Enigma Sphere is a source of both wonder and contemplation. It represents the culmination of humanity’s journey through the ages – from the first cave paintings to the sophisticated AI models of the digital era. The sphere itself is a canvas, on which the dreams and aspirations of countless artists, writers, and thinkers are projected, blending seamlessly with the code and data that fuel the AI.
As the neural networks pulsate and glow, they occasionally spark flashes of brilliant light, each a birth of a new idea, an invention, or a masterpiece that transcends the limits of imagination. These sparks symbolize the ongoing dialogue between human creativity and AI, a synergy that has the power to redefine the very nature of art, literature, and innovation.
In this world, the old fears of AI as a harbinger of obsolescence have faded away. Instead, AI has become a partner in the creative process, a digital muse that amplifies human potential and opens doors to realms previously unexplored. The Enigma Sphere, ever-mysterious, continues to hover at the center of this dance between the past and the future, a symbol of the endless possibilities that lie at the intersection of human creativity and artificial intelligence.
This story, inspired by the illustration, reflects the intricate relationship between traditional creativity and the burgeoning world of AI, highlighting the potential for harmonious coexistence and mutual enhancement.