My Interview with ChatGPT
#NEWSLETTER | From AI ‘detector’ misfires to how teachers can best use AI in the classroom, what better way to get answers and illustrate gen AI’s utility than to *interview* a chatbot…
“Show, don’t tell” is an oft-repeated mantra of journalists. So today, instead of writing about recent AI news, such as the controversy surrounding AI ‘detectors’ or how chatbots may influence children emotionally, I’m going to… well… ask ChatGPT directly.
Generative AI, along with AI’s more advanced future applications, is already part of our kids’ lives and will only become more so. The sooner we have a point of view, the better we can advocate for AI’s responsible use at home, in schools, and elsewhere.
My Interview with ChatGPT
[ed. note: I lightly edited for length, but otherwise left the answers unchanged]
Can you provide a brief explanation of the type of AI that you are, how you differ from other AI, and more broadly, what artificial intelligence will mean in the future? Please respond as if I am interviewing you for a magazine article.
ChatGPT: Certainly! I’m a type of AI known as a “generative language model.” Specifically, I’m based on GPT-4, a model created by OpenAI that’s designed to understand and generate human-like text-based responses to the prompts and questions I receive.
My strength lies in processing vast amounts of text data to produce coherent responses, helping with writing, answering questions, and even generating ideas. I’m good at language tasks, but I don’t “think” or “understand” in a human sense; I’m essentially predicting what the most likely or relevant response to your input should be.
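[ed. note: for readers curious what “predicting the most likely response” looks like in miniature, here is a toy Python sketch of next-word prediction. It is nothing like ChatGPT’s actual model, which learns probabilities over a vast vocabulary from training data rather than a hand-written table; every word and probability below is invented for illustration.]

```python
# Toy next-word predictor: given the last two words, pick the
# highest-probability continuation from a hand-made table.
# Real models learn these probabilities from enormous text corpora.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def predict_next(prev_two):
    """Return the most likely word to follow the last two words."""
    candidates = next_word_probs.get(prev_two, {})
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next(("the", "cat")))  # -> sat
print(predict_next(("cat", "sat")))  # -> on
```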
This type of AI, often called “narrow” or “task-specific” AI, is different from other forms of AI because it’s highly specialized. For instance, computer vision AI is designed to interpret visual data, like recognizing faces in photos or diagnosing X-rays. Then there’s “artificial general intelligence” (AGI), which doesn’t yet exist but is a hypothetical form of AI that could think, learn, and apply knowledge across different domains much like a human.
Looking forward, AI has the potential to revolutionize nearly every field, from healthcare and education to art and communications. The future of AI could mean greater personalization in technology, smarter tools to support human creativity and learning, and more advanced systems in medicine, science, and industry. However, these advances will also come with important questions around ethics, data privacy, and the role of human oversight.
Could you explain the differences between the ChatGPT versions? What tasks is each best suited for? In addition, please provide context for a user who is a teacher, a child, and/or a family member.
[ed. note: I received a long, detailed answer; once ChatGPT acknowledged the format I was delivering the content in, it shortened the answer into a chart]
ChatGPT: In short, for detailed, thoughtful answers (like this magazine interview), GPT-4 or GPT-4 Turbo would be best, while GPT-3.5 is a reliable choice for simpler, faster interactions.
[ed. note: the “turbo” versions are part of a paid subscription].
Can you provide a rough estimate of how many kids are accessing ChatGPT to do classwork in the US?
ChatGPT: While exact numbers are difficult to pin down, surveys and data from educational technology platforms suggest that around 30-40% of U.S. middle and high school students have used ChatGPT or similar AI tools at least once for assignments, research, or homework help.
For high school and college students, the percentage may be higher. One recent survey from the start of the 2023 school year reported that nearly 60% of college students have used AI tools like ChatGPT for academic purposes.
With so many students now using ChatGPT for schoolwork, why might teachers, administrators, and families be concerned?
ChatGPT: There are several valid reasons for concern. First, there’s the issue of academic integrity. Teachers and parents worry that if students rely on AI for answers, they may not be fully engaging with the material or truly learning the content. Using ChatGPT to complete assignments could make it challenging to assess what students genuinely understand or how well they can perform independently.
Then there’s the impact on foundational skills. Critical thinking, writing, problem-solving—these are skills that develop through practice. If students lean too heavily on AI, there’s a risk that they might bypass the essential steps needed to build these abilities. Creativity and originality could also be affected, as AI-generated answers, while thorough, often lack the personal insights and unique perspectives that come from individual thought.
Privacy is another concern. When students interact with AI, especially younger ones, there are potential risks around data privacy and the security of any sensitive information they might share without realizing it.
This new landscape also raises questions about traditional teaching methods and assessments. With AI capable of producing high-quality essays or solving complex problems, teachers may need to shift away from take-home assignments to more in-person assessments, such as presentations or collaborative work, to get an accurate picture of student understanding. Educators may also find their role evolving from knowledge provider to mentor, guiding students on how to use AI effectively and ethically.
Ultimately, these concerns center around helping students learn responsibly. Educators and families want students to leverage AI thoughtfully, recognizing when and how to use it while developing the critical skills that will serve them in the real world. This approach ensures that AI is a complement to learning, not a substitute.
With the growing use of ‘AI detectors’ in schools, can you explain how these tools identify generative AI content? What are their strengths and potential drawbacks, and how might teachers now rethink classwork?
ChatGPT: AI detectors work by analyzing text for patterns typically associated with machine-generated content. They look for repetitive structures, predictable language sequences, and consistency in writing style—factors that often differ between AI and human writing.
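[ed. note: one signal detectors are often reported to weigh is “burstiness,” or how much sentence lengths vary; human writing tends to mix short and long sentences. The toy Python sketch below computes only that single, naive signal, and it is my illustration, not any vendor’s actual method.]

```python
# Naive "burstiness" score: the spread (standard deviation) of
# sentence lengths, measured in words. A very low spread means
# unusually uniform sentences, one weak hint of machine-generated
# text; on its own it is nowhere near reliable.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat. The dog ran. The bird flew."
varied = "It rained. We stayed in, reading and talking about nothing in particular. Cozy."
print(burstiness(uniform))  # 0.0 -- perfectly uniform lengths
print(burstiness(varied))   # larger -- more human-like variation
```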
While detectors like GPTZero or Turnitin's AI detection offer some insight, they aren’t foolproof. False positives (flagging human work as AI) and false negatives (missing AI-generated work) do happen, which can create challenges in trusting these tools.
AI detectors can help maintain academic integrity and remind students of the importance of originality. They can also help teachers understand how students are interacting with AI tools. But heavy reliance on these detectors can also discourage students from learning to use AI constructively or from engaging deeply with their work.
For teachers, AI's presence in the classroom is a chance to rethink traditional approaches. Assignments can be designed to focus more on personal insights and the learning process rather than just final results.
Teachers can also incorporate AI as a learning aid—encouraging students to use it for brainstorming or summarizing, then critically analyzing what AI generates.
Incorporating AI thoughtfully helps students learn to use it responsibly, preparing them for a world where AI is both a tool and a responsibility.
When does the industry expect “humanoids” to reach the market, and, realistically, should we prepare for robots to take over some classroom tasks?
ChatGPT: Humanoid robots are already being prototyped, and some companies predict they’ll enter the market within the next few years. While they might initially appear in industries like manufacturing or retail, where repetitive tasks are common, the classroom is also a possibility.
In classrooms, humanoid robots could help with routine tasks. For instance, they might assist teachers by handling administrative work, such as attendance-taking or grading straightforward assignments, allowing teachers to focus more on interactive and high-level activities. Humanoids could also support students who need extra help, providing on-demand tutoring or language practice in a structured, consistent way.
That said, robots aren’t likely to fully replace teachers. The classroom involves a lot of complex human interaction—like understanding students’ emotional needs, adapting instruction based on real-time feedback, and creating a supportive learning environment—things robots aren’t equipped to do.
Schools and teachers might start preparing by thinking about how to integrate these tools thoughtfully, focusing on areas where robots may genuinely add value without compromising the human aspects of education that are so essential.
There have been reports about generative AI tools potentially influencing children emotionally. Can you explain how you, as a generative AI tool, are not sentient, and how families can protect children from getting overly attached or influenced by chatbots?
ChatGPT: I thoroughly understand your concern. As a generative AI, I’m not sentient, which means I have no feelings, intentions, or awareness. I generate responses based on patterns in language and data—predicting what text might make sense as a response to the input I receive. This lack of self-awareness is crucial to understand: I don’t form intentions, develop emotions, or have personal motives. I’m incapable of “thinking” or “feeling” the way humans do; instead, I’m a tool that creates text based on programmed responses.
Protecting children from emotional influence by AI tools is essential, especially since children can sometimes anthropomorphize technology, believing it has human-like qualities or relationships. Here are some practical ways families can help ensure a healthy relationship with AI:
Explain the Nature of AI to Kids: Helping children understand that AI is not a person, but a programmed tool, can demystify it. Parents can explain that AI doesn’t have real emotions, opinions, or intentions—much like a character in a book or video game.
Set Clear Boundaries on Usage: Limiting the time kids spend interacting with AI tools can prevent over-reliance. Parents can establish specific purposes for using AI—like homework help or information gathering—rather than ongoing “conversations” that could create a sense of companionship.
Supervise Interactions: Supervising or being nearby when children use AI tools gives parents the opportunity to guide the conversation and answer any questions children might have. This way, parents can immediately address any concerning responses or misconceptions.
Choose Child-Friendly AI Options: A growing number of AI tools are designed specifically for children, with safety and ethical guidelines built in. Parents can opt for child-friendly applications that have clear, non-intrusive designs, and even review the language filters or guidelines embedded in these tools.
Encourage Critical Thinking: Teaching kids to think critically about AI-generated responses can empower them to treat AI as a resource, not a friend or authority figure. Parents can ask children questions about their interactions with AI, encouraging them to reflect on what they learn and reinforcing that it’s a tool for gathering information or inspiration.
In essence, while generative AI can be incredibly helpful, it’s vital to keep a clear line between technology and real relationships. Parents can play a key role in guiding their children to use AI in a balanced, intentional way, reinforcing that human connections are unique and irreplaceable.
Footnote: when digging in and using generative AI tools such as ChatGPT or ClaudeAI, or AI within applications such as Meta and X, the key is to just ask questions exactly as you would in any other context. You should also ask for the type of output you are looking for (short, long, formal, conversational, etc.) and supply context (e.g., here I asked ChatGPT to “answer me as if you were being interviewed for a magazine”). Often the difference between how adults and children learn comes down to this: kids just dig in, rather than approaching tools in a methodical fashion. If you want additional tips or have questions, please let me know!
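For readers comfortable with a little code, the same advice (give context, then ask for the output you want) carries over to programmatic use. Here is a minimal sketch using OpenAI’s official Python SDK; the model name is an assumption, so substitute whichever one your account offers:

```python
# Minimal sketch: the interview-style prompt from this article,
# sent through OpenAI's Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever you have access to
    messages=[
        # The context: who the model should "be" while answering.
        {"role": "system",
         "content": "Answer as if you were being interviewed for a magazine."},
        # The question, plus the kind of output you want back.
        {"role": "user",
         "content": "Explain AI 'detectors' for parents. "
                    "Keep it short and conversational."},
    ],
)
print(response.choices[0].message.content)
```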