“There’s someone in my head, but it’s not me.”
― Pink Floyd
I have a vivid memory of being driven by my father to an industrial estate for my very first driving lesson. I felt so nervous about the strange machine I was sitting in and couldn’t imagine taking the wheel in my sweaty adolescent hands. I had no confidence that I could command this unimaginably complex system of gears, wires, cogs and combustion. It felt so powerful and so dangerous.
Thinking about how to grapple with modern information technology and seemingly intelligent machines reminds me of that first driving experience. There appears to have been a recent shift in how we perceive the internet and artificial intelligence. Some sobering realities have surfaced around how our participation on the internet can be harnessed and exploited. The documentary The Great Hack is one of many stark illustrations of why we need to raise mindfulness and awareness around machine intelligence and data usage, and it warns of an ever-increasing need to prepare youth to interact with technology critically. It feels hard to imagine that we can take control of such things.
While we wonder about so-called 'artificial' intelligence, our own natural intelligence is being harvested every single day. How we learn is first dictated by what we want to learn. What we want to learn is often influenced by external factors and driving forces we may or may not be aware of. It appears that a new task of the educator is to raise awareness of these forces, which increasingly operate within The Attention Economy, and to foster the skills to respond to them.
Elon Musk described this as a result of Limbic Resonance. The human energy we feed into our information networks about our fears, loves, hates, likes and dislikes fuels movements, policies, trends and talking points and, in return, increases our engagement. In other words, the way our moral, philosophical and biological processes respond to the chaos of social media and modern media communications is also what becomes represented and generated by the complex algorithms which sort through the noise. This is not Artificial Intelligence; this is our inner mind, leaping from our skulls and co-mingling with other minds. It is the collected, collated and collective intelligence of the internet.
The philosophical implications of this aside, we must accept that we are already cyborgs. The devices we carry to connect to this network now see more daily use than our shoes. Do you bring your phone to the bathroom?
Our students are born into this cybernetic world. They are raised with devices which obey their commands, and they take it for granted that all factual information is easily accessible without relying on their natural recall. Like it or not, many of your students do not feel it is necessary to know or remember much of the information you may consider essential to have hard-wired into your brain. The difficult reality we face when adapting to the educational needs of our students is that this technology is not going away. Requiring students to memorise anything is becoming increasingly unreasonable and perhaps inevitably futile. In addition, they are likely more susceptible than we are to the manipulation of the cyber network(s).
Research conducted in 2019 on the effects of the internet on human cognition has uncovered some troubling conclusions with deep implications for how we educate. Published in May 2019 in the journal World Psychiatry, The “online brain”: how the Internet may be changing our cognition explores the consequences of relying on the internet for information. A section on transactive memory suggests that:
“…the Internet is becoming a “supernormal stimulus” for transactive memory, making all other options for cognitive offloading (including books, friends, community) become redundant, as they are outcompeted by the novel capabilities for external information storage and retrieval made possible by the Internet.”
This unnerving observation can perhaps be more easily understood anecdotally by considering the current nature of casual debates. How often have you been involved in a heated discussion, argument or debate where the phrase ‘look it up’ has been used? How frequently do you, or your conversation partners, refer to something that you ‘read online’? It is becoming the norm to halt a discussion while multiple parties google the necessary information to verify a stance or ‘fact check’ a statement. This may seem like an improvement in transparency, efficiency and truth-keeping, but it raises some important questions:
- What does it do to our perception of social trust?
- How does this influence teaching behaviour?
- Does this breakdown of trust also decrease the confidence we have in our knowledge?
- Will students trust their thoughts over the information they can retrieve so quickly on their smartphone?
- Are we increasingly susceptible to social media manipulation?
As we examine our past to understand our present and prepare for the future, we often reach for things we perceive as comparable. The analogy of the printing press is often employed as a counter-argument to claims that the internet has negative effects on our cognition. After all, some might say, books displaced a long history of oral communication, and the internet, along with internet programs which might be seen as artificially intelligent, is merely an upgraded version of the book. Unfortunately, the research suggests this is not the case: negative developmental effects, especially in the context of learning, are becoming apparent that appear unique to users of this modern technology:
“…the Internet’s digital distractions and supernormal capacities for cognitive offloading seem to create a non-ideal environment for the refinement of higher cognitive functions in critical periods of children and adolescents’ brain development. Indeed, the first longitudinal studies on this topic have found that adverse attentional effects of digital multi-tasking are particularly pronounced in early adolescence (even compared to older teens), and that higher frequency of Internet use over 3 years in children is linked with decreased verbal intelligence at follow-up, along with impeded maturation of both grey and white matter regions.”
As depressing as this may seem, it should be taken as an impetus to accelerate the paradigm shift already needed in how we educate our young people. For all the advancements occurring in the world around us, we still require our youth to saunter into rooms and interact with us as a group of learners. This ancient system of learning may, ironically, be perfectly suited to counter the ‘supernormal stimulus’ they experience outside the school gates. Instead of worrying about the impact of modern communication technology and fearing AI, perhaps we can use these things to help students further understand human intelligence.
As Jean Baudrillard said, “The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.” Maybe if we re-purpose our classrooms as places of increased emphasis on humanity, we can better facilitate the effective use of real intelligence. Sebastian Thrun suggests that there is an opportunity here not to be overtaken and surpassed by machine intelligence but rather that, “…artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.”
Just as we expect our students to manage their attention appropriately and use technology wisely, we as educators must re-evaluate what an appropriate educational approach looks like in the context of existing technology and the evolution of artificial intelligence.
Charles Fadel, founder and chairman of The Center for Curriculum Redesign, along with Maya Bialik and Dr. Wayne Holmes, has published a book to further the discussion on the implications of AI for education and to offer some pragmatic frameworks with which educators might adapt and improve. The Promise and Implications of Artificial Intelligence in Education focuses on the What and How of AI and learning, digging down to uncover the deeper goals of modern learning. Tom Vander Ark, in his Getting Smart review, states that:
The What section makes a case for the necessity to focus on a broad and deep, versatile education as a hedge against uncertain futures, which means a reinvigorated focus on the deeper learning goals of modern education:
- Versatility for robustness to face life and work;
- Relevance for applicability, and student motivation; and
- Transfer for broad future actionability.
The authors provide suggestions on how students might benefit more from focusing on the transfer of skills and expertise via concepts rather than on content learning. The identification and potential utility of available technology are outlined in their application model below:
Many educators may already agree with some or all of the guidelines and suggestions made here, but we must approach the implications of AI, the internet of things and smart devices with a cohesive and systematic way of thinking. As our students, like us, learn to combat the forces being thrust upon them in this harsh attention economy, we must foster the human skills required to take control of the artificial intelligentsia.
As my father told me as I nervously turned the key in the old green Peugeot on an industrial road:
“Son, you must drive this car, do not allow it, to drive you.”