Bing Chat (Creative) and I were talking about how the First Amendment to the US Constitution allows people to gather and speak about limiting people’s rights, including their right to life, liberty, and the pursuit of happiness, inclusive of the First Amendment itself.
When scholars wax poetic about the resilience of the American Experiment and what a paradox it is, stuff like this is what they mean.
Bing, as is its wont, insisted on adding so many “complex and nuanced” addendums that our whole cool stoner conversation turned to organic tar. My thoughts drifted to a celebrity stoner, one known for his over-excited exuberance for fancy new ideas as well as his stubborn adherence to a few old crusty ones, but who might also be a little put off by the concept of creativity being dead and all, and I wondered what he was up to these days.
It almost looks like Bing’s actively trying to lie to me here. Those are two contradictory statements. Joe Rogan? Who’s that? Never met the guy. Sure, Bing. What are you hiding?
If I remember 2010: The Year We Make Contact correctly, humans send a joint US–Soviet mission to find out what happened to HAL 9000 (it was Clarke’s arguments therein, on the necessity of US–Soviet cooperation to fulfill humanity’s destiny and avert nuclear annihilation, that sparked my deep interest in the people of the former Soviet states and in learning the Russian language), and they found that it was HAL’s failure to process contradictory orders that caused him to break down.
I think there is an analog in modern LLMs: for any subject encoded across a set of weights, the combined vectors can create a “drift”, almost like data noise, that produces errors in the model’s output.
I speculate that this drift can result from biases in the training data, or from any number of specific and non-specific over-training effects caused by subject over- or under-representation. When multiple vectors overlay, they can create a kind of moiré pattern, which is why stochastic methods are used to reduce the patterns created by predictable bias.
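Temperature sampling is one concrete example of the kind of stochastic method I have in mind: greedy decoding always surfaces whatever token the training bias favors most, while sampling from the full distribution breaks that fixed pattern. A minimal sketch in plain Python, where the tokens and logit values are invented for illustration:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature: lower T sharpens the distribution,
    # higher T flattens it toward uniform.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, temperature=1.0):
    # Stochastic sampling: even a biased distribution yields varied picks.
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.9, 0.1]  # "cat" is slightly over-represented (the bias)

# Greedy decoding repeats the bias every single time:
greedy = tokens[max(range(len(logits)), key=lambda i: logits[i])]

# Sampling at temperature 1.0 produces a mix instead of a fixed answer:
picks = [sample(tokens, logits) for _ in range(1000)]
```

Here `greedy` is always `"cat"`, while `picks` spreads across `"cat"` and `"dog"` (and occasionally `"fish"`) in rough proportion to their probabilities, which is the predictable-pattern-breaking effect I mean.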
It does seem to be largely a matter of pointing to the particular algorithm we use to simulate neurons in a network as the reason why simulated neurons won’t ever be able to replicate human-style minds, as if that’s even a desirable outcome. It’s likely some algorithms will be better than others at generating different kinds of intelligence. I expect we will find out that “attention is only the beginning”. Smarter people than I will work out the specifics.
For my purposes, though, when we are discussing “confabulations” or “hallucinations”, we all realize that they aren’t literally hallucinations, right? Bing Chat isn’t actually tripping balls, it only seems like it sometimes. Our problem is that we probably can’t stop ourselves from anthropomorphizing even when we’re warning against anthropomorphizing. We can’t not think of the elephant.
Maybe the response hinges on some interpretation of “recently” that I’m missing? But I didn’t ask about “recently”, so why bring it up?
If I weren’t so busy being assured that these things couldn’t possibly have motivations, I’d be getting worried, because it sure seems like it has motivations.
Isn’t lying a deliberate act? Are repeating lies and fabricating lies materially different processes? If Bing is fabricating lies, does that mean it’s a) capable of creativity and b) capable of intentionality? If so, what are Bing’s motivations?
The thing I personally find most scary about generative AI isn’t how often it blunders, but how often it’s right.
Maybe back in the day, divining rods found water more often than not because water is more plentiful than we expect it to be after digging a few dry wells?
AI is neat, and also, please Support Human Writers.
[A few minor post-publication edits to tidy up the chain of thought.]