AI for Everyone: Practical Ways Your Student Can Leverage AI
AI is becoming increasingly embedded in American education. Students are using large language models (LLMs) to brainstorm topics, paraphrase assigned readings, complete assignments, craft essays, prepare for exams, and more. It’s not remotely surprising that young people are the trendsetters. Students are far more open to innovative tech solutions, and when it comes to adopting this new technology, they are running circles around their teachers and parents.
A Reading Edge
My private tutoring students are all using AI in pursuit of their educational goals. One of my college students told me about his productivity gains from a PDF reader/analyzer, Honeybear.AI. This software will process an assigned reading and generate analysis in several formats: a summary, bullet points of key ideas, or a comprehensive outline.
My high school students are using OpenAI’s ChatGPT in a similar fashion. GPT-4 can almost instantly analyze massive documents (GPT-4 Turbo can handle a 300-page document!) and provide a synopsis, analysis, or outline.
There are times when it makes absolute sense to read all 200 pages assigned for a particular class; at other times, reading a 15-page synopsis will provide the appropriate level of detail for a class discussion and free up lots of time for other tasks and pursuits.
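For the technically curious, here is a rough sketch of what that kind of summarization request looks like behind the curtain. This uses OpenAI’s Python library; the model name, file name, and word target are my own illustrative choices, not a prescription:

```python
# A minimal sketch: ask GPT-4 Turbo for a synopsis of a long assigned reading.
# Assumes the OpenAI Python library (v1+) is installed and an API key is set
# in the OPENAI_API_KEY environment variable. The model name, file name, and
# length target below are illustrative choices.
from openai import OpenAI

client = OpenAI()

# Load the assigned reading from a plain-text file.
with open("assigned_reading.txt", "r", encoding="utf-8") as f:
    reading = f.read()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system",
         "content": "You are a study assistant. Summarize accurately and flag the key arguments."},
        {"role": "user",
         "content": "Summarize this reading in about 1,500 words, ending with "
                    "a bulleted list of the main ideas:\n\n" + reading},
    ],
)

# Print the synopsis the model returns.
print(response.choices[0].message.content)
```

Most students, of course, never touch code like this; they simply paste the reading into the chat window. The underlying transaction is the same: the model ingests the document and hands back a digest.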
Productivity Gains
Students who learn to leverage these newly emerging tools will have clear advantages. A landmark study conducted by the Boston Consulting Group found that consultants who leveraged AI tools for 18 tasks typically undertaken by BCG consultants “outperformed those who did not, by a lot. On every dimension. Every way [they] measured performance.” Those using AI tools finished 12.2% more tasks, completed tasks 25.1% more quickly, and produced work of 40% higher quality.
Another compelling finding was that this tech helped the lowest performers the most! The lowest-performing consultants improved by 43%, while the top half improved by 17%, bringing final performance much closer to parity. This has massive implications for our lowest-performing students. Many other educational interventions have allowed the most advantaged to gain even more advantage, but not with AI, at least not according to these early studies. It seems AI can actually close the gap.
Conversational Learning
Like millions of people exploring this technology, I’ve been paying my $20 monthly fee to access GPT-4. I have it as an app on my phone, and at times, instead of going to Google to learn, I have full-blown conversations with this AI.
When I have a question about neurochemistry, yield curves in the bond market, or the campaigns of Alexander the Great, I begin a discussion on my phone; the conversation is transcribed and saved, so it is waiting for me when I log into OpenAI on my computer. I can ask clarifying questions and go down intellectual rabbit holes, learning in a conversational format.
For some students, this type of learning/review is absolute gold.
GPT-4 as the Great Explainer
One of the most powerful aspects of an LLM is the ability to modulate tone and reading level for various audiences. It can imitate Shakespeare or Seinfeld or a fifth-grade teacher. If a concept is presented in a way that is too complicated to understand, the bot, when skillfully prompted, can bring the reading level down so that the student can better grasp the content.
“Can you explain that more simply, in a way a 10-year-old would understand?”
“Can you give me another example of how this works?”
There’s no shame or stigma in asking the bot to communicate with you exactly at your level and meet you where you are. For some students who get lost in class, this digital teacher’s assistant can be a powerful resource.
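For readers who like to see the mechanics, here is a minimal sketch of how that “meet me at my level” request can be made repeatable. The function name, model choice, and default grade level are my own illustrative assumptions:

```python
# A minimal sketch of dialing the reading level up or down on demand.
# The function name, model choice, and default grade level are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def explain_simply(question: str, grade_level: int = 5) -> str:
    """Ask the model to answer at a specified reading level, with one example."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": f"Explain concepts at roughly a grade-{grade_level} reading "
                        "level, and include one concrete, everyday example."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(explain_simply("Why does the bond market's yield curve sometimes invert?"))
```

In the ChatGPT app itself, the plain-English follow-up questions above accomplish exactly the same thing.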
Noted Limitations, but Greater Accuracy with Each Successive Iteration
Large language models such as Anthropic’s Claude, Google’s Bard, and the heavyweight GPT-4 are famously prone to factual errors. They are statistical prediction machines, and they sometimes make mistakes.
However, as the training data sets for LLMs grow larger and human corrective feedback is enhanced, the errors are systematically reduced, as has taken place with OpenAI’s recent iterations of ChatGPT. Additionally, we are beginning to see the inevitable merger of LLMs with other forms of artificial intelligence that have more durable working models of reality.
LLMs have historically been very weak in the domain of mathematics, but that is about to change. Google has now found a way to merge an LLM with a rules-based engine to create a geometry solver that is superior to most humans on the planet at solving geometry problems. Similarly, the world was abuzz late last year when news leaked of a mathematics breakthrough over at OpenAI, involving a new framework called Q-star, which may form the foundation of GPT-5 and future iterations of the software.
Once the AIs can reliably do math, the doors open to the hard sciences and many more applications in education and beyond.
The Highly Controversial Ability to Complete or Edit Entire Assignments
Many people have no ethical concerns when a student reads CliffsNotes or an AI-generated synopsis of a text assigned for class. They see no real harm when a student receives homework help or brainstorms ideas for a project or assignment using an AI, but the ethical waters get muddier when it comes to using AI to generate entire written classroom assignments from whole cloth.
Many secondary and post-secondary students are using LLMs to complete take-home assignments. One of the high school students I work with tells me that half of the kids in his grade regularly use GPT-4 to complete assignments. A survey of 1,000 college students found that 30% of those surveyed have used ChatGPT to complete written assignments, and of that group, close to 60% use it on more than half of their written assignments.
An undergraduate at Columbia, Owen Terry, wrote a piece in the Chronicle of Higher Ed, “You have no idea how much we’re using ChatGPT.” Similarly, Maya Bodnick, a Harvard sophomore (and niece of Meta’s former COO Sheryl Sandberg), allowed OpenAI’s GPT-4 to complete all of her assignments for two weeks (after advising her professors she was conducting an experiment). GPT-4 went to work, and she received four As and two Bs, along with high praise from multiple professors on the quality of her work. A chatbot was able to attain a 3.57 GPA at our nation’s most prestigious institution of higher learning. Let that sink in.
There’s a misconception that teachers and professors can easily detect content generated by the LLMs. In short, they cannot. AI-formulated text is nearly impossible to detect given our current technologies. There are simply too many false positives (indicating a text was machine-written when it was, in fact, human-written) for detectors to have any true value.
OpenAI’s AI-generated text detector, AI Classifier, was accurate a meager 26% of the time before the company discontinued the program. A group of AI researchers examined the accuracy of multiple AI detectors and found that applying a light paraphraser on top of an LLM reduces detection accuracy to roughly 50%, a coin flip. One of my colleagues teaching intro classes at UGA told me that it is quite easy to pick up AI-crafted content when a struggling student magically begins to submit high-quality, polished work well above their prior skill level. But that’s the only real tell, and it doesn’t apply to students submitting AI-generated content from the beginning of the semester.
Some students are not using the bots to write assignments, but to proofread them and improve grammar, brevity, clarity, and tone. One Spanish professor at the University of California complained to a colleague (who presented a webinar on the expanding use of AI) that all of her students’ assignments were suddenly coming in error-free, with perfect comma usage and flawless grammar. This trend can deprive students of corrective feedback and the chance to improve their Spanish skills.
If you are considering using AI for your academic work, make sure you are aware of the rules and guidelines laid out by your school. Some institutions are embracing generative AI, while others are fiercely resisting its broader adoption. Pay attention to the rules, as they will likely be rapidly evolving.
Students, Parents, and Teachers Need to Play Around More with These Tools
These AI bots are rapidly evolving. There are certain tasks they cannot yet do, and many that they can do with real facility. It’s important for all stakeholders to experiment with the new tools and learn their strengths and weaknesses.
Will AI Strengthen or Weaken Students?
There is an ongoing debate about whether these new tools are going to unleash productivity gains or weaken our foundational skills. Will we become better writers or will we be reduced to editing content generated by our machines?
I’m personally a huge fan of making students more efficient and giving them back time for things they enjoy that enrich their lives. That’s one of my major pitches for helping students with their Executive Functioning skills—to free up time for things they love. I’ve been at that game for nearly 23 years.
I’ve seen firsthand how too much homework can squeeze the joy out of being a student. And not all homework is conducive to student learning. These emerging AI tools will create efficiencies for students and free up time for other pursuits, but the jury is still out on how they will affect the depth of student learning and understanding.
Questions? Need some advice? We're here to help.
Take advantage of our practice tests and strategy sessions. They're highly valuable and completely free.