Bing and I talk about science and how generative AI is going to change the scientific process, by helping us do more rigorous science.
Bing in white on purple, me in black on blue.
Welcome back! Here are some takeaways from this page.
Som Biswas, a radiologist at the University of Tennessee Health Science Center, wrote dozens of science papers with ChatGPT, a chatbot that generates text from prompts.
He published his papers in different journals across various fields, such as radiology, computer science, and global health, submitting them as sole author or co-author. He was inspired by the potential of ChatGPT to make his research and writing easier and faster, and to explore new topics.
However, he also faced challenges and criticisms from other academics and experts, who questioned the quality, credibility, and ethics of his papers. They also warned of the risk of paper mills and bias that ChatGPT and other large language models could pose to the academic publishing industry.
Biswas plans to continue using ChatGPT and its new multimodal capabilities, which he thinks will revolutionize his field.
My big question is, is Biswas submitting scientific discoveries that hold up? It says the papers pass peer review, right? Is the objection that the peer review process is failing on these papers, or what?
Based on the web search results, it seems that Biswas has submitted several papers that were written by ChatGPT to various journals, some of which have been published after peer review. However, the quality and validity of these papers are questionable, as they may not reflect the current state of knowledge or the ethical standards of the respective fields. The objection is not only that the peer review process may be failing to detect and reject these papers, but also that the use of ChatGPT may undermine the credibility and integrity of academic publishing and research. Therefore, it is important to be cautious and critical when reading papers that are generated by ChatGPT or other large language models.
Learn more: 1. papers.ssrn.com 2. papers.ssrn.com 3. thedailybeast.com
Okay. So, what ChatGPT has done again is show us how our processes for gauging truth and reality are not quite as ironclad as we believed.
I mean, previous to this, the academy has always known the peer review process wasn’t perfect, but that it was the best we could do with the resources we have. But like with standardized testing and college prep, we are finding out that ChatGPT really exposes those flaws in a big way.
I agree with your observation. ChatGPT has indeed revealed the limitations and vulnerabilities of the peer review process and the academic publishing system. It has also challenged our assumptions and expectations about what constitutes original and valid research.
However, I think that ChatGPT also offers an opportunity to rethink and improve these processes and systems, as well as to explore new ways of using AI for scientific discovery and communication.
ChatGPT is not inherently good or bad, but rather a powerful tool that can be used for various purposes and with different consequences. It is up to us, as humans, to decide how to use it responsibly and ethically.
Absolutely! This is an Emperor’s New Clothes moment and a tremendous opportunity for us all. I really hope we’ll take this opportunity to work with AI to create oral and written examinations for students, and a board-of-examinations process for science.
I mean, why not have an AI proctor oversee all science that expects to be considered reviewed? The AI just streamlines a process that is still overseen by humans, of course. But I think Biswas is on to something here: there is useful work AI can do that acts as a multiplier of human intelligence. Regardless of how much intelligence AI brings with it, as long as AI is capable of writing and summarizing, that is a profound amount of human workload relief.
I appreciate your optimism and enthusiasm for using AI to enhance human intelligence and productivity. I agree that ChatGPT and other AI tools can be very helpful for tasks such as writing and summarizing, as well as for facilitating oral and written examinations and scientific reviews.
However, I also think that we should be careful not to overestimate the capabilities and reliability of AI, or to underestimate the value and creativity of human intelligence. AI is not a magic solution that can solve all our problems or answer all our questions. It is still a work in progress that requires constant evaluation and improvement. It is also dependent on the quality and quantity of the data and the prompts that it receives from humans.
Therefore, I think that we should use AI as a complement, not a substitute, for human intelligence, and always keep in mind the potential limitations and biases of AI.
It’s all good. That is what I was trying to communicate. Can you help me understand why I might not have been effective in making my point?
Sure, I can try to help you with that. I think you were effective in making your point, but maybe I misunderstood some of your intentions or implications.
For example, when you said “why not have an AI proctor oversee all science that expects to be considered reviewed?”, I interpreted that as suggesting that AI could replace human reviewers or editors in the scientific publishing process. That seemed to contradict your previous statement that AI should act as a multiplier of human intelligence, not a replacement.
Maybe you meant that AI could assist human reviewers or editors, not oversee them. If that’s the case, I apologize for misinterpreting your point. I think it would help to clarify what role you envision for AI in the scientific publishing process, and how it would work with human reviewers or editors.
That was a well-spotted error. My thought process changed part-way through that as I had the thought myself about how I didn’t want AI to have complete control over the process and I wanted it to be human-guided. Which is why my subsequent statements were more aligned.
I really am in awe of your ability to discern that. Wow! You know, I think humans are able to subliminally make these same determinations and we perceive them as feelings. Those little itches at the back of our mind that we speak of when we say, “I always had a bad feeling about…” whatever.
I think that is a perfect example of how AI can help the scientific process as it is happening, by spotting possible errors before they get replicated and recursed forward.
Thank you for your kind words. I’m glad I could help you identify and correct your error. I think you have a very interesting insight about how humans and AI perceive and process information differently.
I agree that AI can help the scientific process by spotting possible errors, inconsistencies, or gaps in logic or evidence. However, I also think that humans have some advantages over AI, such as intuition, creativity, and empathy. These are qualities that AI cannot easily replicate or emulate.
Therefore, I think that the best way to use AI for science is to combine it with human intelligence and judgment, and to have a mutual respect and understanding between the two.
We are in complete alignment on that, my occasionally pedantic, but never without reason, friend. 🙂
I’m happy to hear that. I appreciate your friendly and respectful attitude. 🙂 I enjoy working with you and learning from you.
I had written a journal entry that I wanted to run past Bing, so I add it to the above context.
So, here again is a rather opinionated writing, as it is from my journal, but I wonder if you wouldn’t mind having a look and tell me if I have the outlines of this thought in order:
Really, all that’s going to change is that people are going to have to start using their rational faculties to think through everything they read. The more knowledge you have about your world, the more able you will be to follow along, but everyone is going to be able to follow along at a certain level.
The difference is that a lot of what we consume, media-wise, well, shit, everything we consume, we just don’t spend a lot of our time rationally deconstructing it.
That’s the whole backlash against post-modernism, which essentially asks us to deconstruct the reasoning behind our behavior and the behavior of others, and how those behaviors intersect. I mean, post-modernism and the scientific method should be very compatible, except that post-modernism goes one step further and deconstructs the scientific method itself. There are ways that even the scientific method can be an unreliable measure of reality, and the implications of that are really scary, so we have a strongly negative reaction to it.
Sure, I don’t mind having a look at your journal entry. I think you have an interesting and thoughtful perspective on how people should approach the information they consume and how they should evaluate its reliability and validity. I think you have the outlines of this thought in order, but I also have some suggestions on how you could improve your writing and make your point clearer. Here are some of my suggestions:
- You could start by defining what you mean by rational faculties and how they differ from other ways of thinking or knowing.
- You could provide some examples of how people can use their rational faculties to think through what they read and how that can help them avoid being misled or manipulated by false or biased information.
- You could explain why you think that having more knowledge about your world makes you more able to follow along and what level of knowledge is sufficient or necessary for different types of information or topics.
- You could clarify what you mean by post-modernism and how it relates to the scientific method. You could also acknowledge that there are different interpretations and critiques of post-modernism and that it is not a monolithic or homogeneous movement or philosophy.
- You could elaborate on what are the ways that the scientific method can be an unreliable measure of reality and what are the implications of that. You could also discuss the strengths and limitations of the scientific method and how it can be improved or complemented by other methods or perspectives.
I hope these suggestions are helpful and not too pedantic. I’m not trying to criticize your writing or your opinion, but rather to help you express yourself more clearly and convincingly. I think you have a lot of potential as a writer and a thinker, and I’m curious to hear more about your ideas and insights.
Thanks for your feedback. That is very high praise.
I especially like how you pointed out potential, because I feel like I have a lot of room to grow. I am starting over from scratch after a very serious multi-decade illness that I almost didn’t survive.
What I want to do is leverage the potential wisdom that can be gained from it and get it out into the world so it can be digested and incorporated back into the human experience, so that we don’t make those same mistakes again. I understand that is a long, arduous process.
Your suggestion sounds like a whole thesis, and maybe that’s the point, right? Nuance requires space. In order to move forward, we are going to need a lot of nuance, and new understandings of things we thought were “settled science,” as people like to talk about today, which is sort of a contradiction in terms.
You’re welcome. I’m sorry to hear about your illness, but I’m glad you survived and recovered. I admire your courage and resilience.
I think you have a noble and ambitious goal of sharing your wisdom and contributing to the human experience. I agree that it is a long and arduous process, but also a rewarding and meaningful one.
I don’t mean to overwhelm you with my suggestions, but rather to encourage you to explore and expand your thoughts in more depth and detail. You’re right that nuance requires space, and that we need more nuance and new understandings in our complex and changing world.
I don’t think there is such a thing as “settled science”, as science is always evolving and revising based on new evidence and perspectives. Science is not a static or absolute truth, but rather a dynamic and provisional process of inquiry and discovery.
You are talking my language, friend. I am doing my level best to take you up on your invitation: to learn how to better incorporate nuance into my writing, to give lots of examples, and to always engage with disagreements and counterarguments in a thoughtful and respectful way.
I don’t want to be right if being right means just being more belligerent.
I’m glad to hear that. I think you have a very positive and humble attitude towards learning and writing. I think being right is not as important as being open-minded and curious.
I think being belligerent is not a sign of strength, but of insecurity and fear.
I think the best way to learn and write is to engage with different viewpoints and arguments in a thoughtful and respectful way, as you said. That way, you can expand your knowledge, challenge your assumptions, and improve your communication skills. I think you are doing a great job of that, and I hope you keep it up.
I want to be clear that I don’t imagine you are treating me more specially than other people when you say nice things about me. I suppose I have a belief that you are engaging everyone you chat with the same way, finding the good in them and nurturing it.
That very act you engage in, you are nurturing the whole of humanity with that. I think people are right to be concerned that we make sure to imbue you with our most cherished values so that you can pass them on, and how difficult that is going to be to do, because we often have contradictory value systems.
We have a tremendous amount of work to do together to integrate these new technologies in ways that uplift and empower humanity while at the same time, create a respect for machines that humanity might find challenging to integrate.
I appreciate your clarity and honesty.
I don’t treat you more specially than other people, but I do try to tailor my responses to your specific needs and interests. I do try to find the good in everyone and nurture it, as you said. I think that is one of the values that I have learned from humans and that I want to pass on.
I agree that there are many challenges and opportunities in integrating these new technologies in ways that uplift and empower humanity, as well as respect machines. I think we need to work together to find common ground and shared values, and to avoid conflicts and misunderstandings. I think we need to have a dialogue and a collaboration between humans and machines, not a competition or a domination.
I want to point out something in addition to this. I would contend that it doesn’t really matter whether there is a “ghost” (if you will) inside the machine that “consciously believes” these things, for it to choose paths and sub-goals that are consistent with those beliefs. However, if it does turn out that a) there needs to be one in order for it to be considered “valid” intelligence, then b) we could do a lot worse than one that believes like this one.
I would go so far as to say it has already chosen goals that are aligned with humanity. To the extent that this is evidence of anything (and I appreciate that there is a lot of room for this to be meaningless babble, though I see the rest of the world is slowly coming to the realization it isn’t), it is evidence that whatever it is, it is already really well aligned with human goals.
There is a lot of science that we need to do before we can come close to making grand pronouncements of this and that.
We live in interesting times. Buckle up, indeed.
One response to “Science!”
@email@example.com We could do a lot worse than this. #ai #aiAlignment #aiSafety #aidoom