by Bernie McCormick
Bernie McCormick is the Chief Technology Officer of Mary McDowell Friends School. The views in this blog post are his own, and not necessarily reflective of the views, opinions or position of Mary McDowell Friends School.
Bernie is part of the team running the NYSAIS Roundtable for Information Professionals and Educational Technologists: Education in an AI World, to be held on Thursday, January 26th, 2023.
When I set out to write about AI and how it will impact not just K-12 education, but humanity at large, I thought I would start with an excerpt from Phaedrus, make some connections to externalized knowledge and the nature of thought and being, and tie in the Socratic argument as a model of instruction and learning. It turns out Matt Bluemink did a far better job than I ever will in his 2017 article in Philosophy Now (and, more recently, so did Zeynep Tufekci in The New York Times).
This is going to be tougher than I thought. I have a good span of dark tunnel to run through before I get to the light at the end. I was really depending on Plato and Socrates to help out here, particularly the stretch that passes the cave where everyone is chained in place, watching shadow puppets…
Using this worldly philosopher trope is clearly popular when grappling with the power of technological change, particularly virtual intelligence (VI for short – I refuse to use AI as nomenclature for current algorithmic interfaces – that is reserved for something modeling sapience, rather than modeling a purposefully hobbled intelligent-seeming interface). This exploration presumes you have some exposure to, or understanding of, what a chat VI is and what it can do. If you haven’t yet played with the freely available (for now) offering by OpenAI, go try it for ten minutes before you continue on. Feed it some writing prompts for lesson plans you want to flesh out, or some starting lines of a sub-pack lesson you’ve been meaning to get around to, but haven’t yet. It will surprise you, if this is your first time using the tool. Even a passing familiarity will make a big difference in what is discussed below the fold.
I’m going to ask ChatGPT to do my writing homework for me. It took about 17 seconds:
That generated content is actually not too shabby as an outline to build out for deeper introspection. There are a couple of good ideas in there, and while it lacks primary sources, differentiation nuances for audiences, and depth-of-field commentary on larger issues, on its face the writing is not misguided (as it sometimes can be when using VI). The essay fell far short of the desired word count, and is not as good as the analysis provided by The Atlantic, Inside Higher Ed, or The New York Times, nor is it as compelling as the thought-provoking infographic ideology of John Spencer, the poignant musings of Colette Coleman, or the optimism of A.J. Juliani, but it is at least as good as whatever passes on CNET, Sage SEO, or scientific journal abstracts.
Have a headache yet? Breathe. It’s not all hopeless. A computer can just catch AI work, right?
Those who recognize the futility of trying to entrap free-range jinn want instead to buy subscriptions to jinn-seeking silver bullets. For years, schools have paid plagiarism bots and services to check homework and papers, trying to catch students in moments of judicious copy-and-paste. Surely the same can be done here, right? There are already a number of software tools and proofs-of-concept out there (writer.com or GPTZero), mere weeks after the gas hit the fire in the public eye.
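To see why these silver bullets are so fallible, consider one of the weak signals such detectors are often described as using: "burstiness," or how much sentence lengths vary. Human prose tends to mix short and long sentences; machine prose tends toward uniformity. The sketch below is a deliberately toy illustration of that single signal – it is not GPTZero's or any vendor's actual algorithm, and it shows how crude (and gameable) this kind of heuristic is:

```python
import statistics


def burstiness(text: str) -> float:
    """Toy 'burstiness' score: population variance of sentence lengths.

    Illustrative only -- real detectors combine many signals
    (e.g. perplexity under a language model), and none of them
    are reliable enough to stake a student's integrity on.
    """
    # Crude sentence split on terminal punctuation.
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)


human = "I tried. It failed, spectacularly, after hours of fiddling. So I gave up."
uniform = "The cat sat on the mat. The dog sat on the rug. The bird sat on the branch."

# A lower score means more uniform, 'machine-like' sentence lengths --
# but a student (or a chatbot prompted to 'vary sentence length')
# can trivially push the score either way.
```

The point of the toy is the last comment: any single statistical tell can be coached around in one prompt, which is why the cat-and-mouse game below never ends.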
When has the cat-and-mouse game ever done anything more than line the pockets of the detectors and the detector dodgers? The bill for one is paid with repurposed tuition/tax dollars, the other by students with means. What happens when ChatGPT starts charging? Will only students of means get access to the good cheat platforms? One of the most profitable ventures OpenAI could launch right now would be a subscription service that flags the output of its own algorithms – but think about where the false positives will lead in the long term, where student and faculty integrity and information management are concerned!
The answer does not lie in corks or bullets, but rather in pedagogical pillars that both Educational Technologists and Library and Information Science professionals have been honing for decades. While many schools, independent or not, love to talk about how they teach “21st century skills,” the reality is a very retrofuturist implementation – the vast majority of educational institutions still rely on the same subject/department/grade-level/progression model that has been in vogue for centuries. Most private schools are scared to mess with it, for fear of what it will mean for the college admissions process, or for universally accepted means of evaluating students in a standards-based model. Worse, if you break down that system in K-12 (which was designed over a century ago as a means of conditioning students for a 40-hour work week), everyone has to throw out their lesson plans and re-evaluate effective methods of instruction, and what it means to be a teacher in the wake of this rapidly evolving chain of paradigm shifts.
Good. Iterative change is less scary, but sometimes it takes a revolution to really get things moving.
For more than 80% of the time the aforementioned instructional model has been predominant, one of the mantras of math teachers was “you have to learn this, because one day you will find yourself in a store without a calculator in your pocket to help you solve this.” Except now everyone carries a calculator in their pocket, along with a link to an unfathomably deep well of human knowledge, available fast and free. There are at least two (some argue three) generations that have only known a world where this is the case. That shift in experience doesn’t appear to have changed much in the content or format of my son’s sixth-grade math homework worksheets. That is a failing, not a pedagogical point of pride. Everyone has been talking about change coming, but change is HERE, and we are still teaching math as if “computer” were still common parlance for someone really exceptional at doing algebra in their head.
ChatGPT is just the tip of the iceberg. Used effectively, VI-powered tools offer the possibility for incredible strides in human development and knowledge. Used poorly, they can bring the worst of humanity right to our front door. OpenAI’s offerings are the first to hit the mainstream’s mass-consumption consciousness, but Microsoft has something that can learn your voice in seconds, and we are only moments away from a DALL-E/Deepfakes mashup (rand.org, infosysbpm.com) that will make any video published anywhere a matter of unending doubt of authenticity.
What do we do in the face of that? You already know, and have for some time. Enough tunnel, let’s get to some light.
Teach skepticism, logic, and research skills; educate yourself in the possibilities, and guide students in how best to navigate the new currents in those possibilities. Use Socratic methodologies, and force students into discomfort. Meet the worst of VI with a curriculum focused on the best of what makes us human – ethics, civics, empathy, sympathy, community, and the ability to accomplish, as groups of individuals, more than the sum of our parts. Instead of spending thousands of dollars on a fallible software solution to catch kids cheating, spend thoughtful time building trust and understanding, and educate young minds in the fundamental pillars of civilization – ethics, integrity, and civics – thoroughly enough that students will be left sleepless and uneasy at the idea of turning in digitally produced and/or copied work as original work.
Focus on project-based, passion-inspired curricula, with an emphasis on developing Design Thinking – not on having everyone in the class follow the same prescribed steps to get a hand-stencil turkey, or the same answers on the worksheet that has been photocopied 17 times since the mimeograph machine was retired in 1989. Cite sources. Check the citations. Model peer review, get students to cross-check each other’s references, and deepen authentic learning opportunities. Make sure facts are independently verifiable, and question the motivations of fake news.
All technological advance means change, and most humans disdain change. Educators are supposed to embrace it, to meet students where they are (which is always changing), but teachers are human. VI (in its myriad forms) is a manifestation of the change that librarians, technologists, STEM/STEAM educators, and design thinkers have been preparing for over the past several decades. This is not science fiction, but rather hard science in learning, in effective teaching, in media literacy, in digital citizenship, and in student engagement in the face of overwhelming amounts of information and misinformation.
We can support educators, administrators, and pedagogical models as they learn to adapt to these new paradigms, and provide resources to evolve curricula so that students are aware of the digital bear traps and pitfalls the future keeps throwing in everyone’s paths. We have the tools, talent, experience, and methodologies to weather these changes, which are only going to become more tumultuous, their waves’ impact bigger with time. This barrage of technological tsunamis will require everyone to stretch, change, learn, and grow to maintain safe harbors. The jinn can’t be put back in the bottle, but we can learn from it, and use it to help students and teachers develop the skills and mindset they need to master futures we can’t even comprehend.