
2026 CNIE Conference | Congrès du RCIE 2026
June 18, 2026
The Canadian Network for Innovation in Education (CNIE|RCIE) is excited to host its 2026 Annual Conference online on 18 June 2026. CNIE|RCIE members are invited to submit abstracts for practical workshops as well as paper and poster presentations on ongoing research or projects related to instructional design, pedagogical approaches, and educational technology innovation, centred on the theme of “Reclaiming Human Intelligence.” Potential topics may include (but are not limited to):
- Instructional design approaches to facilitate “reclaiming human intelligence”
- Pedagogical approaches to facilitate “reclaiming human intelligence”
- Assessment that facilitates “reclaiming human intelligence”
- Innovative uses of digital technologies to facilitate “reclaiming human intelligence” for in-person, blended, or distributed learning
- New, ongoing, or recently completed research related to instructional design, pedagogical approaches, or leveraging digital technologies to facilitate “reclaiming human intelligence” for in-person, blended, or distributed learning
- Instructional design pilot projects, proof-of-concept projects, or implementation projects aimed at “reclaiming human intelligence” for in-person, blended, or distributed learning
- Practical workshops or digital resource presentations related to “reclaiming human intelligence” for in-person, blended, or distributed learning
Presentations will be for 20-minute timeslots on 18 June 2026.
See the Call for Proposals and submission form.
Registration Information – forthcoming
Keynote Session – “Reclaiming Human Intelligence while Taming the Artificial”
Peter Mozelius, Mid Sweden University, Sundsvall, Sweden
In Stockholm, I am a member of a chess club that this year has a split chairmanship between a human and an AI bot. As in many other cities around the world, there are currently more AI activities than AI strategies, and a recently launched coffee shop chain is run by an AI boss named Mona. Besides sending emails and orders to the employed baristas when they are off duty, Mona has ordered a huge number of canned tomatoes and 6,000 napkins. This brings to mind the paperclip problem, a thought experiment by the Swedish philosopher Niklas Boström (Nick Bostrom), in which we are asked to imagine an Artificial General Intelligence (AGI) tasked with one simple goal: ‘Maximize the production of paperclips’. Regardless of resource type, if there is a way of turning something into paperclips, the fictive AGI will find it and execute it. Just as we still struggle to define human intelligence, it is hard to reach consensus on a definition of AI. Around the same time as AI reached Nobel prize level last year, reports appeared on how the increasing amount of AI slop is threatening the future of the Internet.
A related AI issue is the alignment problem: the difficulty of building artificial intelligence systems that are aligned with human values. Machine learning systems, in particular, are difficult to tame and control. Taming AI is the main challenge at the University of Toronto’s Schwartz Reisman Institute (SRI), where an interdisciplinary team mixes members from STEM, social science, and humanities disciplines. When the Google bot Gemini was prompted for a definition, the answer was: “Taming AI involves ensuring artificial intelligence systems are safe, ethical, and controllable through robust governance, transparency, and human oversight”. One attempt to tame AI is the European AI Act, in which AI risks are divided into different levels that require different types of intervention. One of many areas that have run wild under the influence of AI is education, where research studies have reported on AI cheating and found that overuse of generative AI can lead to metacognitive laziness.
To look on the bright side, the concerns about AI in education could prompt a revision of activities, assessment, and pedagogical models that should have started far earlier. Teaching and learning activities and assessment could partly be tamed by clear instructions and by keeping the human in the loop. A promising attempt to create clear instructions for the use of AI, with the aim of transparency and trust, is the AI Assessment Scale (AIAS). In the same way as SRI tames AI with human oversight, there is a strong renaissance of oral examination and of focusing on the process rather than the product in learning activities and assessment. Regarding pedagogy, the traditional model could be stretched toward andragogy and heutagogy depending on the learner group. Where generative AI can contribute to equity in education is in the field of inclusion, where text-to-speech and features for handling sign language are examples of useful resources. Finally, there will be examples of how adult learning in online environments can be carried out as communities of inquiry, with synchronous online activities such as workshops, game-based learning, and knowledge cafés.
ABOUT: Peter Mozelius is an Associate Professor and Researcher at the Department of Education, Psychology and Social Work at Mid Sweden University in Sundsvall, Sweden. Before shifting to Education, Peter worked at various departments of Computer and Systems Sciences, which is also the field in which he holds a PhD. His research interests lie in technology-enhanced learning, AI in education, game-based learning, and lifelong learning. Besides his research, Peter teaches various courses on AI in education and academic writing.
