
Designing for Trust in the Age of AI

  • Writer: Noni Harrison

[Image: Two hands shaking against a light purple background, one with nails painted in matching purple, suggesting agreement or partnership.]

I’ve recently submitted the final draft of my GARC fellowship report and am now preparing my presentation for the ICGS conference in Toronto in June. So, what better timing than to pause and reflect on what I’ve learnt over these last 14 months? These ideas stem from my action research; from dialogue with teachers, students, other research practitioners, and academics; and from my role in leading the AI strategic direction at St Rita’s College. Across all of this, a consistent pattern has emerged. Student engagement with AI is shaped less by access, capability, or policy, and more by trust. More specifically, what I’ve found is that students’ use of generative AI sits at the intersection of three domains: trust in themselves as learners, trust in their teachers as supporters of their learning, and trust in AI. These domains interact in ways that directly influence whether AI is used to support learning or to replace it.

 

Trust as the Driver of AI Use

When students lack confidence in their own capacity to navigate difficulty, they are more likely to defer to AI. When they are unsure whether their teacher will support them through that difficulty, they are more likely to seek alternative pathways. When trust in AI is miscalibrated, students may either accept flawed outputs without interrogation, place greater confidence in AI than in their own thinking, or disengage from AI support altogether. The issue, then, is not simply about misuse. It is about the conditions that shape use in the first place.

 

This has shifted my attention away from questions of control and towards questions of design. If we want students to engage with AI in ways that support learning, then we need to carefully consider how trust is built across these domains. Students need to develop confidence in their own thinking, experience their teachers as consistent and responsive supports, and learn to position AI as something to be engaged with critically rather than deferred to.

 

The Role of Cognitive Struggle

Cognitive struggle sits at the centre of this. Not as a generalised principle, but as something that must be deliberately sustained and supported. Students who trust themselves as learners and trust that they have support to persist are more willing to work through uncertainty. They test ideas, revise thinking, and tolerate ambiguity as part of the process. Students who do not hold this trust are more likely to seek to resolve that discomfort as quickly as possible. AI can provide that resolution, but its impact is not uniform. As Lodge and Loble (2026) suggest, the distinction between beneficial and detrimental offloading is critical. In some contexts, using AI to offload lower-order or extraneous aspects of a task can support learning by allowing students to focus on more complex thinking. In others, it can displace the very cognitive work required to build understanding. What matters is not simply whether AI is used, but what is being offloaded, when it is occurring, and whether the learner is in a position to evaluate and build on the output.

 

What Students’ Use of AI Reveals

This became particularly clear for me when listening to students describe their own decisions about AI use. Recently, I was working with a Year 12 Physics class on research strategies and how to leverage AI ethically and effectively within their Claim to Research task. During this session, I spoke with two students about their views on using AI.

 

The first student asked whether she could use AI to support her Literature assessment (yes, we were in Physics, but she was asking about AI so I went with it). As any responsible teacher would, I first redirected her to discuss this with her teacher. However, given my role, I continued the conversation. She explained that she recognised her own limitations in Literature and wanted help to interpret her teacher's feedback and improve her creative writing. At the same time, she was concerned that using AI might lead to academic misconduct, or that the work would no longer feel like her own. Her hesitation was not only about whether she could trust the tool, but also about whether she could trust herself to use it in ways that maintained ownership of her thinking, which highlights the role of intrapersonal trust in how students engage with AI. She wanted help but was uncertain where that help could sit without compromising her ownership.

 

The second student offered a contrasting perspective. She stated clearly that she does not use AI. Her reasoning was not grounded in policy or fear of misconduct, but in a strong sense of trust in her own ability. She expressed confidence in her capacity to meet the demands of the task, to challenge herself, and to apply strategies for improvement without external assistance. For her, AI was unnecessary because she trusted her own thinking and ability to persist with challenge.

 

A third perspective came from a Year 10 History student during a discussion about AI use in inquiry. She explained that when students do not have trust or confidence in themselves, they are more likely to rely on AI, and not always in ways that support learning. She went on to say that when students feel they cannot ask their teacher for help, or that feedback is unclear or inaccessible, they will turn to AI instead.

 

Where Trust Breaks Down: A Problem and an Opportunity

These insights present both a problem and an opportunity. The problem is not the use of AI itself. It is that AI is filling gaps where trust in the learning environment is fragile. In these cases, its use is less about enhancement and more about substitution. The opportunity, however, is significant. If AI is revealing where students feel uncertain, unsupported, or disconnected from the learning process, then it is also providing a signal for where teaching can respond. It invites a closer examination of how feedback is given, how accessible support is, and how classrooms are positioned as spaces where students can ask for help without hesitation. It also reinforces the importance of explicitly teaching students how to use AI in ways that support, rather than replace, their thinking. It provides us, as educators, with ample insight to engage with intentional design and instructional moves to build supportive learning environments.

 

Designing for Trust: How, When, and Who?

Within this context, my approach to introducing AI into teaching and learning has increasingly been guided by three questions: how, when, and who?

 

How: The Role of AI in the Learning Process

The question of how focuses on the function AI is serving within the learning process. When AI is used to prompt reflection, generate questions, or provide feedback that requires interpretation, it can extend students’ thinking. When it is used to generate complete responses, it can interrupt that process. This distinction requires explicit teaching. It cannot be assumed that students will recognise the difference. How are we integrating AI into teaching and learning experiences?

 

When: Timing Within the Learning Process

The question of when is about timing in the learning process, and the answer will differ for different students. The point at which AI is introduced into a task matters. In the early stages of learning, where students are forming initial understanding, the introduction of AI can narrow thinking prematurely. Students benefit from generating their own questions, exploring ideas, and engaging with uncertainty before external input is introduced. In later stages, where thinking is being refined or extended, AI can be used more productively to challenge assumptions, test interpretations, and identify gaps. The same tool, introduced at a different point, can support or disrupt learning. When in the learning cycle or unit of work is AI best placed to support learning?

 

Who: Readiness in the Learning Process

The question of who centres the learner and the teacher. Students are not equally positioned to use AI in ways that support learning. Prior knowledge, confidence, and familiarity with the learning process all influence how AI is used. A student with strong domain knowledge may use AI to extend or critique their thinking. A student who is still developing foundational understanding may use it to bypass that development or, with support, use it to help them progress. This is where the addition of contextual knowledge to TPACK (Mishra, 2019) becomes critical (this framework really does stand the test of time for meaningfully integrating technology into learning). Decisions about AI cannot be separated from an understanding of the learner and their position within the learning process. This is where developmentally appropriate instructional moves are needed. Who is best placed to use AI at this stage – the student or the teacher?

 

Extending the Work: AI as an Inquiry Coach

My GARC research has provided a structure for exploring these ideas through the design of an AI-supported inquiry coach. Positioned within the Information Search Process and underpinned by academic buoyancy, this model does not remove cognitive challenge. Instead, it increases students’ access to the strategies required to navigate that challenge. Prompts that normalise uncertainty, encourage persistence, and guide next steps appear to strengthen students’ capacity to remain engaged, particularly in the more uncertain phases of inquiry.

 

It is not surprising, then, that Guided Inquiry Design as a pedagogical approach continues to hold strong in this context. In a landscape increasingly characterised by overwhelming quantities of information and, simultaneously, by knowledge deficits, the need for guided inquiry is amplified. Guided Inquiry Design provides a framework that supports sustained engagement, builds information literacy, and develops students’ capacity to think critically about the information they encounter. It requires both explicit teaching and open curiosity. It also creates the conditions in which students’ trust can be developed across all three domains: in themselves as learners, in their teachers as supporters, and in the tools they are using.

 

Designing the Conditions for Learning

What continues to surface through this work is that the integration of AI is less about the tool and more about the conditions we create around it. Trust, timing, and purpose are central. When these are aligned, AI can be positioned in ways that support students to think more deeply and persist more effectively. When they are not, it becomes another avenue away from the cognitive work that learning requires.

 

At this stage, my thinking is less concerned with what AI can do, and more focused on how students experience learning in its presence. The challenge is to ensure that AI strengthens students’ trust in themselves as learners, their trust in their teachers as sources of support, and their ability to engage critically with the outputs it produces.

 

©2020 by Noni Harrison.
