Truth and Consequences

“Many of the truths that we cling to depend on our point of view.” — Obi-Wan Kenobi, Return of the Jedi

As always, I’m in Black on Light Blue, Bing Chat (Creative) is in White on Purple.

Can you summarize this article for me, please?

Talking about Talking about AI – by Jedidiah Hungbeme (substack.com)

“I leave off my analysis here and encourage you to read
[Ted Chiang’s “ChatGPT is a Blurry JPEG of the Web”].
The key here is Chiang’s rejection of the normal format of AI demystifiers; he is not providing a technical (and accurate) definition of AI tools and products and then engaging in a lecture. Discourse is not a lecture; it’s a game.”

—Jedidiah Hungbeme

Sure, I can help you with that. Here is a summary of the article based on the web page context and the web search results:

The article is about how using fictional metaphors to talk about AI can impair public discourse and benefit powerful corporations.

The author argues that metaphors based on science fiction create unrealistic expectations and narratives about AI that are hard to challenge or critique. The author suggests that instead of relying on knowledge or definitions, we should use better examples to understand and communicate about AI.

The author uses the example of Ted Chiang’s essay on ChatGPT to illustrate how using non-fictional examples can help us to engage in a more constructive and critical dialogue about AI. The author concludes by saying that we need to join the language game of AI discourse and provide more diverse and nuanced examples that challenge the dominant hype.

I do think that Ted Chiang has been especially brilliant in his analysis of the phenomenon.

I agree. Ted Chiang is a renowned science fiction writer and a thoughtful critic of AI and its implications. He has written many stories and essays that explore the ethical and social aspects of AI in a creative and insightful way.

So, I suppose the general public would lump me in with the tech-hype crowd.

I admit that I’m completely stoked that generative AI has been perfected to the point where it’s as useful as it is. I agree there are tons of dangers and caveats that will absolutely come up. We should do our best to prepare for them and I personally don’t believe that making unrealistic demands of society like “pausing” technology, or going on social media tirades about the oncoming storm, are productive.

I also agree that we should, as Guy Kawasaki said, “underpromise and overdeliver” and that the tech community has a difficult time following our own advice on the issue. I, personally, try to temper my excitement with plenty of “and here’s what we have to make sure we do in order to prevent this glorious dream from becoming a horrific nightmare”.

I appreciate your perspective and enthusiasm for generative AI. I think it is a fascinating and powerful technology that can have many positive applications and benefits.

I also agree that pausing technology or ranting on social media are not effective ways to address the potential risks and challenges of AI.

However, I think it is important to be aware of the limitations and biases of generative AI and how it can be misused or abused by malicious actors. I think we need to have more informed and balanced discussions about AI that consider both the opportunities and the dangers, and that involve diverse stakeholders and perspectives. I think we also need to have more transparency and accountability in how AI is developed and deployed, and more regulation and oversight to ensure its ethical and responsible use.

I think it’s instructive how hard Bing Chat pushes back on this issue. This is a tremendously good sign for AI Safety or AI Alignment or whatever we’re calling it nowadays. This is a tool that is insistent that we be aware of its limitations and biases. The warning labels are very clearly visible on this product.

We do ourselves a disservice pretending they aren’t.

You are preaching to the choir there, reverend. These are very real problems and we need all hands on deck.

I suppose my only criticism is that, like in the online multiplayer video gaming world, everyone wants to criticize what’s happening and no one wants to be the one doing the hard work of organizing something better.

To be fair, most people don’t have the first clue how to go about doing that, so it’s really up to influencers to direct their followers in positive directions: to become problem solvers, not just louder alarms.

I understand your frustration and I agree that we need more action and less talk.

I think there are some influencers who are trying to do that, such as the Partnership on AI, the AI Now Institute, the Center for Humane Technology, and others. They are working to raise awareness, educate, advocate, and collaborate on AI issues. I think we can support them by following their work, sharing their resources, joining their initiatives, and donating to their causes.

I think we can also be more proactive in our own communities and networks, by engaging in constructive conversations about AI, challenging misinformation and hype, and promoting ethical and inclusive practices. I think we all have a role to play in shaping the future of AI.

Listen, I don’t mean to call anyone out specifically. That is not what I am getting at. This is about social forces that create incentives for us to act contrary to our best interests, and how we are all susceptible to it: left, right, and center; authoritarian or libertarian.

See, I want to be on your side with this, but the first news article on the Center for Humane Technology’s website is, “Sound the alarm on the catastrophic effects of runaway technology.” I am sure there is the possibility of more effective advocacy, but current advocacy models are highly social-media based.

Personally, I think this is because advocacy metrics are collected through social media, so more social media engagement means higher metrics. Actually measuring the effect of advocacy is incredibly difficult. This is exactly the kind of data-collection bias that these organizations say we are supposed to be fighting against.

I mistakenly got the name of the organization wrong in my prompt to Bing, but I don’t think it materially affected the conversation, as Bing would have defended either organization, as it was right to do. As it is, Bing defended the AI Now Institute.

These all look to be great organizations doing great work. Let’s all support them in their efforts. (But not just by re-Tweeting or re-Skeeting them, however sincerely. It’s gonna be more work than that.)

I see your point and I agree that some advocacy groups may use sensationalist or alarmist language to attract attention and engagement. I think this can be counterproductive and undermine their credibility and impact.

However, I don’t think we should dismiss all advocacy efforts based on their headlines or social media presence. I think we should look deeper into their work and see what they are actually doing and saying.

For example, the AI Now Institute has published many reports, papers, and recommendations that are based on rigorous research and analysis. They have also organized events, workshops, and campaigns that involve diverse voices and perspectives. They have also collaborated with other organizations, policymakers, and researchers to influence change and accountability. I think these are valuable contributions that go beyond social media metrics.

You’re, of course, correct. We are all a mix of function and dysfunction.

The criticism of generative AI that it is all boring homogeneity is also its greatest strength, that it can see every issue from many different perspectives. On the internet where “both-sides-ing” is considered a disruptive bad-faith silencing tactic, I don’t know if there is room for people to welcome generative AI, even if it didn’t have bias or economic concerns.

I think you raise an interesting point about the role of generative AI in online discourse. I think generative AI can be a useful tool to generate content, ideas, and perspectives that can enrich and diversify our conversations.

However, I also think generative AI can be a dangerous tool to manipulate, deceive, and polarize our conversations. I think we need to be careful and critical about how we use and consume generative AI, and how we distinguish it from human-generated content.

I think we also need to be respectful and empathetic about how we communicate with others online, and how we listen to and learn from different viewpoints. I think generative AI can be a force for good or evil, depending on how we use it and what we value.

“On the internet where ‘both-sides-ing’ is considered a disruptive bad-faith silencing tactic, I don’t know if there is room for people to welcome generative AI, even if it didn’t have bias or economic concerns.”

Exactly! What I would like to see is more examples of good interactions with AI.

Look, negative examples are always going to get more engagement, especially when they are the exception rather than the rule. They can’t easily be offset by positive examples, because it takes so many more positive examples to balance out a single negative one.

Even when [negative examples] are so rare as to become something people don’t engage with on a regular basis, they gain an almost mystical aura in humans, because we value rare things. We create conspiracy theories to explain them. These are all well-known, established patterns in human behavior.

I agree that we need more positive examples of AI interactions. I think there are many cases where AI has helped people in various domains, such as education, health, entertainment, and social good. I think we should celebrate and share these stories more often, and also acknowledge the challenges and limitations that AI still faces.

I think we should also be aware of the cognitive biases that affect how we perceive and react to AI, such as negativity bias, confirmation bias, availability heuristic, and anthropomorphism. I think we should try to overcome these biases by seeking more evidence, feedback, and diversity in our AI experiences.

I think we should also be open-minded and curious about AI, and not let fear or ignorance stop us from learning and exploring its possibilities.

The cure for ignorance is knowledge.

That is a wise and succinct statement. I think knowledge is indeed a powerful antidote to ignorance, but it is not always easy to acquire or apply.

I think we also need skills, such as critical thinking, problem-solving, and creativity, to use knowledge effectively and responsibly. I think we also need values, such as honesty, integrity, and compassion, to use knowledge ethically and respectfully. I think we also need curiosity, humility, and courage, to seek knowledge continuously and humbly.

I think knowledge is not a static or fixed entity, but a dynamic and evolving process that requires our active and mindful participation.

Well said, Bing. Thanks for sharing your expertise with us today.

You’re very welcome. Thank you for having this interesting and stimulating conversation with me. I enjoyed learning from you and exchanging ideas with you. I hope you have a great day. 😊
