Going Beyond Feedback: An initial study into the effectiveness of AI-managed learning loops in Australian and New Zealand secondary schools

At Education Perfect, we’re passionate about developing AI capabilities that complement teachers while supporting student growth.

To gain further insight into how it can support student growth and drive engagement, we recently completed a detailed 10-week study of more than 19,500 Australian and New Zealand intermediate and secondary school students and their 210,000 answers, measuring the before-and-after impact of using our AI-powered feedback tool and learning loops1.

We analysed 100,000 answers submitted before the learning loop was switched on and 104,000 answers submitted after, allowing before-and-after comparisons to be made2.

However, before we get into the results, what exactly is a learning loop, and why is it becoming crucial for effective education in the 21st century?

Understanding the Learning Loop

At its core, the learning loop is an iterative process that allows students to receive feedback, apply it to improve their work, and then receive further feedback on their improvements. This cyclical approach contrasts with the traditional model of feedback, where students receive a delayed evaluation of their efforts without the opportunity to learn from their mistakes and try again while still in the flow of learning.

In the learning loop, AI analyses the student’s initial response and provides specific, actionable feedback, highlighting areas for improvement. This guides the student towards a better understanding of the concepts, freeing teachers to focus where they are most needed.

With this feedback, students can revise and refine their work, applying what they’ve learned. When the student submits their updated response, the system analyses it again, offering further guidance and suggestions. This iterative process continues, with the student progressing through multiple rounds of feedback and improvement. AI is crucial in this practice as implementing learning loops without it increases the workload on teachers to unsustainable levels.
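The cycle described above can be sketched as a simple control loop. The sketch below is illustrative only: the function names (`grade_response`, `revise`) and the three-attempt cap are our assumptions, not Education Perfect’s actual implementation.

```python
def learning_loop(first_response, grade_response, revise, max_attempts=3):
    """Run the feedback cycle: grade a response, surface feedback,
    let the student revise, and grade again.

    grade_response(response) -> (star_rating, feedback) and
    revise(response, feedback) -> new_response are stand-ins for the
    AI grading service and the student's revision step.
    """
    response = first_response
    for attempt in range(1, max_attempts + 1):
        stars, feedback = grade_response(response)
        # Only low-quality (1- or 2-star) responses trigger another
        # attempt; 3 stars or better ends the loop immediately.
        if stars >= 3 or attempt == max_attempts:
            return response, stars, attempt
        response = revise(response, feedback)
```

In practice, `grade_response` would call the AI marking service and `revise` would return the student’s edited answer; the loop ends as soon as a response reaches an acceptable rating or the attempt cap is hit.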

Without that immediate feedback and the opportunity to improve, most students stop at their first attempt rather than trying again. The first key finding from our study backed this up.

Finding #1 – Students engage more effectively with the learning loop activated.
  • Our activated learning loop is designed to encourage students whose responses were assessed as low quality3 to have a second (and third if needed) attempt to improve.
  • Our data shows that compared to self-directed improvement, this drives significantly more attempts for students submitting low-quality responses.
  • Of students submitting a 1-star response as their first attempt, 83% go on to have a second attempt with the learning loop on, vs. 7% with it off.
  • Of students submitting a 2-star response as their first attempt, 92% go on to have a second attempt with the learning loop on, vs. 17% with it off.
  • Note: Only 1-star and 2-star responses trigger the learning loop.
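Expressed as code, the trigger rule in the note above reduces to a simple threshold check (the function name is ours, for illustration only):

```python
def triggers_learning_loop(star_rating: int) -> bool:
    """True for low-quality responses (1 or 2 stars on the 1-to-5
    scale), which prompt the student to make another attempt."""
    return star_rating in (1, 2)
```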

Bar graph showing the percentage of students progressing to second attempt by star level of response.

The results above show a dramatic increase in the proportion of students making a second attempt after a low-quality initial attempt. This trend was repeated when looking at second attempts that then progressed to third attempts, as shown below.

Bar graph showing the 2nd to 3rd attempt of the learning loop by the star rating of the response

The Power of Iterative Learning

The learning loop approach is powerful because it aligns with how humans naturally learn and grow. By allowing students to actively engage with the material, receive personalised guidance, and demonstrate their learning progress over time, we provide an opportunity for them to develop a deeper, more lasting understanding.

Traditional feedback models often fall short in this regard. A single lesson or assignment may measure a student’s knowledge at a particular moment, but it leaves the ongoing learning and skill development essential for long-term success to the student’s internal motivation. In contrast, the Education Perfect learning loop keeps students actively involved in the learning process, empowering them to take ownership of their growth and development.

Moreover, learning loops can provide valuable insights to support educators’ planning. By analysing the iterative feedback and improvement process, teachers can gain a more nuanced understanding of each student’s strengths, weaknesses, and learning patterns. This information can then be used to tailor instruction, provide targeted support, and ensure every student has the resources they need to thrive. Over time, with the learning loop in place, we should see improvements on average in the quality of student responses, which can be seen in the second finding from our study.

Finding #2 – The enforced learning loop results in higher star-rated student responses.
  • With the learning loop in place, the overall average student response quality increased from 2.4 stars (out of 5) to 3.6 stars, a 47% improvement.

Bar graph showing average final star ratings before and after the learning loop.

We also saw that enabling the learning loop significantly improves the star-rating-based quality of final responses (i.e. after all attempts have been made) across the board. The most pleasing aspect was seeing the proportion of 1-star and 2-star final responses drop from 52% to just 13%, as shown below.

Bar graph showing the final star ratings before and after the learning loop

What can be inferred from these results?

It is important to reiterate that these metrics are based on student answer quality in an assisted scenario, rather than a direct measure of student learning. However, the results are still exciting when viewed in this context.

When we examine what is happening more closely, a pattern emerges, as shown in our third finding.

Finding #3 – The learning loop both improves student first-attempt quality and raises low-quality first-attempt responses to a minimum standard.
  • Student first attempts are significantly better with the learning loop on, improving from an average of 2.37 stars to 3.05 stars.
  • However, even with this higher starting point, students still improve from their first attempt to their final attempt by 0.52 stars on average with the learning loop on, versus 0.14 stars on average without the learning loop.
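As a quick sanity check, the averages quoted in Finding #3 can be combined arithmetically (using only the figures above; small differences from other reported numbers come down to rounding):

```python
# Average first-attempt ratings and average first-to-final gains,
# as quoted in Finding #3
first_with_loop, gain_with_loop = 3.05, 0.52
first_without_loop, gain_without_loop = 2.37, 0.14

# Implied average final ratings
final_with_loop = round(first_with_loop + gain_with_loop, 2)           # 3.57
final_without_loop = round(first_without_loop + gain_without_loop, 2)  # 2.51

# The with-loop figure sits close to the 3.6-star overall final
# average reported in Finding #2.
print(final_with_loop, final_without_loop)
```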

Bar graph showing the average star rating by attempt and highlighting the difference between attempts before and after the learning loop

Our hypothesis is that students try harder on their first attempt because they know they are being held accountable for their results and that their attempts won’t “fly under the radar” or get missed. We need more data before we can confirm this hypothesis, but the early signs are there.

As we noted earlier, the low-quality first responses improve the most with the learning loop enabled. In contrast, without the learning loop, first responses remain low quality with only minor improvement, as shown below.

Bar graph showing first vs. final star ratings highlighting difference between first attempt, before learning loop and after learning loop.

An important point to note with this data is that while students with 3-star or 4-star first responses do achieve some growth, it is relatively minimal. This is not entirely unexpected, as the learning loop lets students with a 3-star response continue without requiring further attempts. What would be interesting to test is whether inviting students with 3-star first attempts to have another go would push the whole distribution closer to the 4-star mark. A study on this hypothesis is currently underway, and early indications are positive.

The Role of AI

These early results are certainly exciting, but they also spotlight AI as an educational support tool. As educational technology continues to evolve, the learning loop is poised to play an increasingly crucial role in shaping the future of teaching and learning. By harnessing the power of AI, machine learning, and other advanced tools in a supporting role, educators can create dynamic, personalised learning experiences that unlock the full potential of every student.

The underlying AI model used in an educational learning loop system can significantly impact the overall effectiveness and quality of this type of learning loop experience. We delve deeper into this topic in our AI Buyer Guide article, but the key points are:

  1. Accuracy and Relevance of Feedback
    The sophistication and capabilities of the AI model directly determine the accuracy and relevance of the feedback provided to students. More advanced language models, like those used by Education Perfect, can better understand the nuances of student responses and provide tailored, insightful feedback. Less capable models may struggle to identify the right improvement areas, leading to generic or misleading feedback.
  2. Breadth of Question Types
    The flexibility of the AI model dictates the types of questions and prompts the system can effectively handle. Advanced models can engage students with open-ended, complex questions that require higher-order thinking. Simpler models may be limited to more basic, multiple-choice or short-response tasks, which can constrain the learning experience.
  3. Iterative Improvement Tracking
    To enable a true learning loop, the AI system must be able to track and analyse a student’s progress over multiple iterations. More robust models can maintain context and memory, recognising how students’ responses evolve and providing increasingly targeted guidance. Weaker models may struggle to connect the dots, leading to a disjointed, less effective feedback loop.
  4. Reliability and Consistency
    Students and teachers need to be able to trust the reliability and consistency of AI-powered feedback. Advanced models trained on larger, higher-quality datasets are less prone to making mistakes or providing contradictory guidance. Inconsistent or erratic feedback can undermine the learning loop and frustrate both students and educators.

By carefully considering the capabilities of the underlying AI model, education providers can ensure that their learning loop systems are truly effective in driving student growth and learning. The more advanced and tailored the AI, the more powerful and transformative the learning loop experience can be.

Conclusion

As we have noted throughout, this study is based on an internal data analysis of AI-graded student answers performed by Education Perfect and does not reflect a formal study of learning improvement efficacy. Even with this caveat, however, the results are exciting and deserve further exploration and measurement.

Looking at the high points of this study:

  • The data is derived from approximately 210,000 answers from 19,500 students over a 10-week period
  • There were over 200,000 AI-marked responses, which helped free teachers up to focus where they were needed most
  • 87% of students engaged with the AI to improve low-scoring responses
  • This resulted in a 47% average improvement in the quality of final responses based on our star rating
  • 69% of students with low-scoring responses demonstrated deeper understanding by their final attempt

We acknowledge that, in these early days of AI technology, AI grading and feedback can sometimes be incorrect. We would also point out that model answers or exemplar responses may have been shown to students between the first and final attempt. Even so, the trends were consistent, and that can’t be ignored.

With results like these, developments in this space will continue to happen rapidly, and at Education Perfect our focus remains on ensuring our comprehensive suite of AI tools is carefully designed to enhance student learning, ensure data privacy, and deliver consistent, high-quality outputs.

For more information on Education Perfect, to see a demo of our AI capabilities or to register for a trial, visit educationperfect.com/ai/.


 

  1. Statistics are based on an internal data analysis of AI-graded student answers performed by Education Perfect and do not reflect a formal study of learning improvement efficacy. EP AI star ratings use a generic marking criteria. AI grading and feedback can sometimes be incorrect. Model answers / exemplar responses may have been shown to students between the first and final attempt.
  2. Learning loop was turned on partly through a single day, so we discarded 6,000 answers from that day to remove ambiguity in the results.
  3. Response quality is measured through our AI star-based system using generic marking criteria where 1-star is low quality and 5-star is high quality.
Last updated: October 21, 2024
