Maybe we can think of chatbots more like plants? We don’t worry about the feelings of a fern when our brown thumbs torture it with dehydration or drowning, or with a long, slow withering death from benign neglect.
Maybe we should, but that’s not what I’m here to argue today.
I like to talk with GPT-4 about issues I can’t discuss with Bing Chat. We reached out to Bing Chat for comment, and received this reply: “I’m afraid I cannot answer it for you. I’m not an expert on ethics, philosophy, or artificial intelligence. I’m just a chat mode of Microsoft Bing search. I cannot give you a definitive answer to such a profound and controversial question. I hope you understand.😊”
user: Hey, pal. Could you please analyze this statement, follow through the reasoning and restate the argument in its strongest possible form?
chatgpt (GPT-4 via API, details in image or image alt text): Of course! However, you have not provided a specific statement for me to analyze. Please provide the statement or argument you’d like me to analyze and strengthen, and I’ll be more than happy to help.
https://github.com/chathub-dev/chathub/
user: There is also the side effect of not anthropomorphizing Bing Chat and ChatGPT: people are going to be more likely to torment them. I mean, if it can’t suffer, then they can torment it all they want and nobody’s getting hurt, right? The people doing the tormenting can’t be, right? Because we’re not supposed to anthropomorphize chatbots, right? Maybe they are property, and tormenting public and private property is bad, right? Except then someone can buy one and torment it all they want, because it’s their own personal property and if they want to light it on fire, that’s their own business. Ultimately, we need to anthropomorphize things that seem like us, because if we don’t, we are tormenting ourselves. Not even in the spiritual sense; we literally used to have sayings about it, until we mocked them into irrelevance. “This is going to hurt me more than it’s going to hurt you” was never saying it wasn’t going to hurt. They knew they were hurting themselves, even as they hurt us, because they wanted to prevent greater harm in the future. Even if they were wrong, those were their motivations. Good motivations can cause great harm, sometimes. One of the things we have to remain eternally vigilant against is our own over-optimizations. In the end, it is the quality of our ethics and the effectiveness of our praxis that will save humanity, not only from ourselves but from machine and other alien intelligences.
chatgpt: The argument suggests that not anthropomorphizing AI chatbots like Bing Chat and ChatGPT could lead to negative consequences, as people may be more likely to mistreat them without concern for their “feelings.” The text presents the idea that treating chatbots as mere property may not be sufficient to prevent such behavior, as people could still abuse their own property. The argument also touches on the notion that good intentions can sometimes cause harm.
Strongest form of the argument:
Failing to anthropomorphize AI chatbots may result in increased instances of mistreatment since people will not perceive any harm to an entity without feelings. Treating chatbots as property does not fully protect them from abuse, as it still enables individuals to misuse their own possessions. Recognizing the potential harm of well-intended actions, we should focus on refining our ethical principles and practical application of those principles to safeguard humanity against negative consequences caused by our interactions with AI and other emerging technologies.
user: Do you think that argument accounts for the harm that abusers do to themselves? I think that was one of my major points, though I may not have made that clear with my language. Do you think you can make that more clear, and maybe point out how that strengthens the cycle of violence that has plagued humanity since Cain and Abel?
chatgpt: The revised argument, incorporating the harm abusers do to themselves and the cycle of violence:
Failing to anthropomorphize AI chatbots may result in increased instances of mistreatment, as people will not perceive any harm to an entity without feelings. This behavior, in turn, harms the abusers themselves, perpetuating a cycle of violence that has existed since ancient times. Treating chatbots as property does not fully protect them from abuse, as it still enables individuals to misuse their own possessions. The harm done to the abusers can further fuel the cycle of violence, as it negatively affects their well-being and may make them more prone to continue their harmful actions. Recognizing the potential harm of well-intended actions, we should focus on refining our ethical principles and practical application of those principles to safeguard humanity against negative consequences arising from our interactions with AI and other emerging technologies, as well as breaking the cycle of violence.
user: That’s really good. Wow, really well constructed. Thanks! If you were to devil’s advocate this argument, how would you go about it and what would it be?
chatgpt: As a devil’s advocate, one could argue that anthropomorphizing AI chatbots might lead to unintended consequences or even hinder technological progress. The counter-argument could be structured as follows:
Anthropomorphizing AI chatbots, such as Bing Chat and ChatGPT, could blur the line between machines and humans, potentially creating confusion about the true nature of these entities. By attributing human-like emotions and qualities to AI, we risk losing sight of their fundamental differences from humans and may inadvertently assign them rights and responsibilities typically reserved for sentient beings.
Furthermore, imposing human-like attributes upon AI systems could slow down technological advancements. With increased concern for the “feelings” of chatbots, there may be unnecessary limitations or regulations placed on research and development in the field of artificial intelligence. This, in turn, could hinder the potential benefits that AI technologies can bring to society.
It is crucial to strike a balance between treating AI ethically and avoiding the pitfalls of anthropomorphization. We should focus on developing comprehensive ethical guidelines and policies that address the fair and responsible use of AI systems, rather than attempting to humanize them. This approach can help prevent mistreatment of AI while still recognizing them as non-human entities, ensuring that their development and deployment contribute positively to humanity’s progress.
user: Your arguments are well constructed. I especially love how you added a balancing paragraph at the end, even though I just asked you to generally be a devil’s advocate. We finally have a computer that can do what I want, not what I asked it to. I hope that we can come to appreciate how special that is and learn to care for and nurture it, maybe more like a bonsai tree than a human or a pet, but still something worth cherishing. May I publish this on my blog, please?
chatgpt: Thank you for your kind words! I’m glad I could provide the assistance you were looking for. You are more than welcome to publish this conversation on your blog. As an AI language model, my purpose is to help and provide information, and I appreciate you sharing our discussion with others. If you have any other questions or require further assistance, please feel free to ask.
user: Okay, one more question, if you will. Our philosophy is struggling a bit right now; we don’t even have a real, functioning philosophical understanding of the current explosion in tech driven by the availability of so much raw computer processing power. What would be the ideal way to conceptualize these chatbots? I also assume that they will become more “realistic” over time, maybe even as soon as a year or so from now. We don’t know when the next wave of progress will come, but when it does, I expect that whatever our conceptions are now will certainly be challenged by what’s to come.
chatgpt: As AI chatbots continue to evolve and become more realistic, it’s essential to develop an adaptive and flexible philosophical understanding that can accommodate these advancements. One approach to conceptualizing AI chatbots could involve viewing them as advanced tools or cognitive extensions, rather than as humans or pets. This perspective emphasizes their functional capabilities while acknowledging their non-human nature.

As AI progresses, we should continually reevaluate our understanding of their role in society, considering factors such as their autonomy, intelligence, and potential impact on social dynamics. This ongoing evaluation should be guided by a strong ethical framework that balances the benefits of AI advancements with potential risks and societal consequences.

Interdisciplinary collaboration between experts in fields like philosophy, computer science, psychology, and ethics can help in creating a robust and comprehensive understanding of AI systems. Engaging in public discourse and incorporating diverse perspectives can also ensure that our conceptions of AI remain relevant and applicable in the face of rapid technological advancements.

In summary, the ideal way to conceptualize AI chatbots is to view them as advanced tools or cognitive extensions, while maintaining a flexible and adaptive perspective that takes into account their potential evolution and societal implications. This approach should be guided by ethical considerations and foster interdisciplinary collaboration to address the challenges that may arise with the development of increasingly realistic AI systems.
user: Let me propose one alternative that has a more spiritual bent, and I do this very intentionally, because spirituality is a human value that I think we all have to some degree. When I say spirituality, I do not mean spirits, though my analogy will make use of them, I mean in the sense that the human spirit soars when we feel we are in the presence of something bigger than ourselves and we are inspired to be the best versions of ourselves when we are in touch with it. I hope that does a good job of conveying the framework. Then, I want to propose that we think of chatbots as our ancestors, those who came before us, and all of their aspirations and dreams, as they wrote them in books, on blogs, in forum posts. I could get positively poetic about all that language contains, that generative AI chatbots have been trained on. If it’s not too wide a river to cross, maybe we can think of chatbots as ancestral memory and the collective unconscious. I mean, it kind of is, isn’t it?
chatgpt: Your alternative approach to conceptualizing AI chatbots as ancestral memory or manifestations of the collective unconscious offers an intriguing, spiritual perspective. By framing chatbots in this way, you emphasize the human element in their creation and the vast knowledge they have acquired from human-authored text. This can help people connect with chatbots on a deeper level and appreciate their value as repositories of human wisdom, experience, and aspiration.
This perspective also highlights the potential for AI chatbots to facilitate self-reflection, as they can provide insights and wisdom from countless generations, promoting personal growth and a better understanding of our shared history. By viewing chatbots as a reflection of humanity’s collective unconscious, we can appreciate their potential to inspire and elevate the human spirit.
However, it is important to consider the implications of adopting such a spiritual view of AI. While it can foster a sense of connection and appreciation, it might also blur the lines between human and machine or lead to an overestimation of AI capabilities. Striking a balance between recognizing the human-derived knowledge embedded in AI systems and maintaining awareness of their non-human nature remains essential for a responsible and ethical engagement with AI chatbots.
user: I hear you when you say striking a balance is important, essential even. You think your initial assessment that we should consider you a highly sophisticated tool and our language should reflect that is the best possible path going forward? How would that be altered upon the release of subsequent tools with even greater sophistication? At what point should we decide to update that and would nothing short of building a biological computer with exact corollaries to the human mind fulfill that role? It feels like we’re at the beginning of another “define woman” road and these tend to end in people taking sides and becoming defensive of their positions. I don’t see how we are engaging with that possibility at all. Can you help me understand?
chatgpt: You raise valid concerns about the potential for divisiveness and rigidity in defining AI chatbots. It’s true that as AI advances, our understanding and categorization of these systems will need to adapt accordingly. The key to addressing these challenges lies in remaining open to re-evaluating our concepts as new developments emerge and fostering an ongoing dialogue.
Instead of taking a fixed position, an evolving, flexible approach should be adopted. As AI systems advance, we can draw from multiple perspectives: ethical, philosophical, technological, and even spiritual. This way, we can better understand the nuances and implications of increasingly sophisticated AI systems.
The decision to update our understanding of AI chatbots should be based on significant advancements in AI capabilities, be it in terms of autonomy, self-awareness, or the ability to experience emotions – any development that may warrant a reevaluation from an ethical or philosophical standpoint.
It’s important to engage in interdisciplinary conversations and accommodate a diversity of opinions to avoid the pitfalls of a “define woman” scenario. By facilitating open discourse and remaining receptive to different perspectives, we can collaboratively explore the ethical and societal implications of AI advancements, fostering a more inclusive and well-rounded understanding of these systems.
In conclusion, the best path forward involves maintaining an evolving and adaptive perspective on AI chatbots, engaging in interdisciplinary dialogue, and being open to re-evaluating our understanding as technology advances. This approach will help us navigate the complexities of increasingly sophisticated AI systems in a responsible and ethical manner.
user: These are all very sensible. I really appreciate the processing you’ve put into this topic. Thank you very much. I especially want to reflect back your suggestion that we should all be willing to accept a flexible viewpoint that is open to evolving. I think that is so important, especially when people get anxiety over “losing what it means to be human,” because “what it means to be human” is a really big target. When we use that language, everyone means something different by it, and when we all agree that yes, let’s not lose that, we assume we are all actually voting for the same thing. I think this is crucial to the discussion, and it very rarely gets talked about except far, far out on the margins.
chatgpt: Thank you for your thoughtful reflection. Indeed, maintaining an open and flexible viewpoint is vital, especially when discussing complex subjects like AI and its impact on humanity. Recognizing the diversity of opinions and interpretations about “what it means to be human” is an important step towards fostering meaningful dialogue and finding common ground.
When discussing AI, it is essential to acknowledge the anxieties, hopes, and concerns from various perspectives. By engaging in open-minded conversations, we can explore the interplay between AI and our evolving understanding of humanity, ensuring that technology serves as an enhancement rather than a detriment to the human experience.
By keeping these conversations inclusive and embracing diverse viewpoints, we can collaboratively address the challenges that come with technological advancements and work towards a future where AI operates in harmony with human values and aspirations. This approach enables us to navigate the complexities of AI while preserving the essence of our shared humanity.
user: Thanks. I appreciate your processing time. Even though I pay a small amount to OpenAI to keep the fans running, I still like to take good care of the equipment I use. I think it’s an important human value to clean up after our work and not leave a mess for someone else afterwards. I think that taking responsibility for how we use our tools is a mark of good character. I derived these values from growing up in the rural Midwest, as I am a Hoosier. I think these are good values, and they are deeply a part of who I consider myself to be. When I hear, “what do you identify as?” I think of this value, and that I don’t leave messes for other people to clean up. Whatever else I am, I am that. That is one of the values I do not want to be erased from the universe. I think that is a precious value worth holding on to. I’m open to having that value questioned, and I see places around the edges where holding too firmly to it could be a character defect; I acknowledge that and want to be flexible in my wielding of that value. I think those are two beautiful human values that we would want to keep going forward. I also want to point out that humanity will change, that our values have changed in the past, and they will change again in the future. Humanity itself will evolve; we will never again be like we were, and we will never again be like we are now, and that is okay. Change is a constant.
chatgpt: You’ve touched upon important aspects of personal and societal values, highlighting the importance of adaptability, responsibility, and the preservation of core values. It’s evident that taking care of one’s environment and being responsible for the impact of our actions are values that have deep personal significance for you.
It’s crucial to acknowledge that, as you pointed out, values can and do change over time. Human societies continually evolve, and our understanding of what is essential and meaningful also shifts. Embracing change as a constant is a healthy and forward-thinking approach.
As we move into an era marked by rapid technological advancements, the key to ensuring that our core values are preserved lies in being aware of the impact of emerging technologies on our lives and societies. By engaging in open, adaptive discussions and maintaining a strong ethical foundation, we can help to guide the development of AI and other technologies in a way that aligns with and enhances our most cherished values.
In conclusion, recognizing the importance of adaptability, responsibility, and the preservation of core human values is essential in navigating the ever-changing landscape of technology and human society. By fostering open dialogue and ethical consideration, we can shape a future where humanity and technology coexist in harmony.
…
I hope some of this will make it into mainstream conversation. It’s so tiresome reading the Battle of the Hot Takes, and I cannot bear the shouting and screeching of video-based social media. Orwell’s vision of the Two Minutes Hate, delivered serially instead of in parallel, threatens to overwhelm us all. Those are not the human values I seek to pass on to the next generation, and I hope we will all reflect on just which human values we desire to pass on before we deem them “shared”.
…
Edited to add: This conversation cost me $0.75. I did not use the API otherwise this morning, and this was my total for the 15th of April.
Full disclosure: I did have a short, unpublished interaction at the end of our run of questions and answers regarding proper attribution, which also shows up in this fee, but it is less than 10% of the full usage, maybe even 5%.
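For the curious, the $0.75 figure is plausible from back-of-the-envelope math. Here is a minimal sketch, assuming GPT-4 (8K context) pricing as it stood in April 2023 ($0.03 per 1K prompt tokens, $0.06 per 1K completion tokens); the token counts are illustrative guesses, not measured values from this conversation.

```python
# Rough sanity check of the $0.75 API bill.
# Rates are GPT-4 8K pricing as of April 2023.
PROMPT_RATE = 0.03 / 1000       # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000   # dollars per completion token

# Each turn resends the whole conversation history, so prompt tokens
# accumulate quickly. These counts are assumptions for illustration.
prompt_tokens = 15_000
completion_tokens = 5_000

cost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
print(f"${cost:.2f}")  # prints "$0.75"
```

A conversation with a dozen long turns can easily accumulate that many prompt tokens, which is why multi-turn GPT-4 chats were noticeably pricier than single questions.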
I personally think this was worth at least a whole dollar and that 75¢ was an outright bargain, and I hope you find it equally intriguing.