Desirable difficulties
Education researchers have this term “desirable difficulties” which describes this kind of effortful participation that really works but also kind of hurts. And the risk with AI is that we might not preserve that effort, especially because we already tend to misinterpret a little bit of struggling as a signal that we are not learning.
–Joss Fong
Desirable difficulty could be an alternate name for this newsletter and many other projects I’ve done over the years. I have struggled against the discomfort of writing…beginning in high school. But through the act of struggling to write again and again, over time I became familiar with the feeling and began to trust it as the way through to clarify my thinking and eventually craft something that might be worth reading.
Writing about AI in education today falls into that category of “desirable difficulty.” On days when the volume of AI content to engage with feels more like parachuting into an avalanche, I’m glad I chose three other interconnected topics of focus in my tag line to complement technology…nature, kids, well-being. But this week, the technology topic, AI in particular, is challenging me to glean some recommendations for readers from the ongoing deluge.
Joss Fong, science and tech producer at Vox, interviewed students and teachers about the use of AI chatbots in school in this 17-minute video. She spoke with students from 8th grade to grad school, and with teachers, professors, and learning experts “to see how students can be strategic in the age of AI.” It’s a great piece with some fun animations, and I really enjoyed the mix of student and educator voices.
So start here:
After the video, if you’re up for some desirable difficulty reading, Substack is serving up some thoughtful writers on the topic of AI and education.
Eric Hudson, who writes Learning on Purpose, put together a playlist for educators, Back to School with AI. He tipped me off to the Vox video (thanks, Eric); it’s the first resource on his playlist. He writes:
I hope you use this playlist to deepen both your knowledge of generative AI and the conversations you have with colleagues and students about it.
I like his focus on conversations—with colleagues and students.
Make sure to read his newest post, Six AI ideas we need to let go of, written since he put the playlist together.
Also on Substack, check out Marc Watkins, writer of Rhetorica. He is assistant director of academic innovation at the University of Mississippi, where he directs the AI institute for teachers. In August he wrote an important piece for The Chronicle of Higher Education:
Why We Should Normalize Open Disclosure of AI Use: It’s time we reclaim faculty-student trust through clear advocacy — not opaque surveillance.
Marc makes a strong case for both faculty and students disclosing generative AI use to restore trust.
Here’s an excerpt:
Open disclosure is a reset, an opportunity to start over. It is a means for us to reclaim some agency in the dizzying pace of AI deployments by creating a standard of conduct. If we ridicule students for using generative AI openly by grading them differently, questioning their intelligence, or presenting other biases, we risk students hiding their use of AI. Instead, we should be advocating that they show us what they learned from using it. Let’s embrace this opportunity to redefine trust, transparency, and learning in the age of AI.
And then there is Ethan Mollick, author of the Substack One Useful Thing. I’ve written about Ethan’s work frequently—for example, in this post, …we’re all discovering this together, back in June 2024. He published a great piece on August 30, 2024—Post-apocalyptic education: What comes after the Homework Apocalypse.
Ethan titles the last section of the post (after the section The Illusions), “Encouraging, not replacing, thinking.” He follows the section title with this statement that I really appreciated:
To do so we need to center teachers in the process of using AI, rather than just leaving AI to students (or to those who dream of replacing teachers entirely).
Ethan urges us to engage in “a fundamental reimagining of how we teach, learn, and assess knowledge.” Give it a read, and follow his Substack if you aren’t already.
Out beyond Substack…UNESCO published Guidance for generative AI in education and research (2023). Reflecting UNESCO’s international focus on ethics and inclusive digital futures, the authors cover what generative AI is and how text and image GenAI models work; emerging EdGPT and its implications; controversies around generative AI in the education context; recommendations for regulation and policy frameworks; facilitating creative use in education and research; and GenAI and the future of education and research. Tables include Co-designing uses of GenAI to facilitate inquiry or project-based learning (table 6, p. 33) and Co-designing uses of GenAI to support learners with special needs (table 7, p. 34). The document is 44 pages.
Sticking with the desirable difficulties theme here, you might not read the whole report, but at least read the short summary and the foreword. Then scroll the table of contents to see which section is most relevant to your own concerns. Share it with a colleague and set up a time to discuss what you learned.
Have you checked on the migration of Northern Bald Ibises this week? Scroll down here to read the latest diary entry. And watch the latest video clip on Instagram of the birds in flight on their way to Andalusia. The Waldrapp team and the ibises are currently my most inspiring role models in the practice of desirable difficulties.
Be well. Thank you for your curiosity and for making time to read The Interconnect. Welcome to the new readers this week. I’m glad you’ve found your way here, and I hope you stay a while. Please consider sharing this post with family, friends, teachers, and/or colleagues who might appreciate it and put the resources to work.