Student safety interventions, prompted by technology that helps school leaders prevent students from harming themselves or others, increased dramatically after COVID-19 arrived in the U.S. and forced schools to close physical classrooms.
In its annual Student Safety Report, Gaggle, which combines artificial intelligence and trained safety experts in a student safety solution designed to prevent student suicide, bullying, inappropriate behaviors, school violence, and other harmful situations, detailed those interventions.
According to the report, which analyzes incidents detected using Gaggle’s solution, school and district educators were able to save the lives of 927 students during the 2019–20 school year.
This number represents an overall increase of 28 percent over the preceding school year, but the rate shifted sharply once the COVID-19 pandemic began: lives saved were up 11 percent before the pandemic and up 32 percent during it.
“We’re able to have a presence in a space that normally we wouldn’t be part of. We’re not in their homes with them, but when students are creating these calls for help, we want to be able to act and provide the necessary support,” said Dr. Adrian Palazuelos, superintendent of the Fillmore Unified School District, in the report.
During the 2019–20 school year, Gaggle reported and alerted educators to:
● 64,000 references to suicide or self-harm in students’ online activity. Of these, more than 5,600 were serious enough to merit immediate attention by the district
● 38,000 references to violence toward others. More than 1,600 warranted an immediate call to prevent a potential incident
● 18,000 instances of nudity or sexual content, of which more than 2,400 were identified as child pornography
“With school now taking place in our students’ living rooms and bedrooms, safety is more important than ever,” said Jeff Patterson, Gaggle’s founder and CEO. “Many educators are concerned that without in-person school, they may not be able to identify students in abusive situations or those suffering from mental illness.”
Student safety intervention in action
Northern York County School District (NYCSD) in Pennsylvania implemented the Gaggle student safety platform, and administrators said they knew that if the solution saved just one life, it would be more than worth the investment.
In late 2019, Gaggle flagged a student document: a suicide note written to his family, outlining his plan and saying goodbye. Gaggle immediately contacted the school’s administrators, and within half an hour of the alert, school leaders and local police were at the student’s home to perform a wellness check. The student received the help he needed to address his struggles, and officials believe they saved his life that day.
When the East Irondequoit Central School District in New York implemented Gaggle, the goal was to keep students safe in the digital world. What the district encountered was a months-long investigation resulting in the arrest and conviction of a child predator.
Gaggle intercepted pornographic content sent to an 11-year-old sixth-grade student, blocked its delivery, and quarantined the file to keep it out of the district’s system.
“The file was sent to our police department for their investigation,” said Christine Osadciw, executive director of technology for the district. “That was their proof. If we didn’t have that video, I don’t know if the predator would have been caught.”
After four months, a man from Michigan was arrested for the crime. “As a tech director, it’s difficult. With all of these different online tools that our students have access to, it gets harder and harder to keep them safe,” said Osadciw.
Gaggle’s student safety solution analyzes and reviews the use of online tools within Google’s G Suite, Microsoft Office 365, Google Hangouts, Microsoft Teams, and the Canvas learning management system for more than 4.5 million students across the United States. Machine learning technology watches for specific words and phrases that might indicate potentially harmful behavior. When a match surfaces, the content is evaluated by a trained safety professional, who determines whether it poses a threat and how serious that threat is.
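That description amounts to a two-stage pipeline: automated matching first, human judgment second. The sketch below illustrates the general shape of such a pipeline in Python. It is not Gaggle’s implementation; the phrase lists, names, and severity tiers are invented for illustration, and a real system would rely on trained machine-learning models rather than static patterns.

```python
import re
from dataclasses import dataclass

# Hypothetical phrase lists, invented for this sketch. The system the
# article describes uses trained ML models, not static patterns; these
# simply stand in for whatever the model would match.
PATTERNS_BY_SEVERITY = {
    "urgent": [r"\bsuicide note\b", r"\bkill (?:myself|them)\b"],
    "review": [r"\bhate myself\b", r"\bhurt (?:him|her|them)\b"],
}

@dataclass
class Flag:
    severity: str   # "urgent" flags merit immediate attention
    matched: str    # the word or phrase that triggered the flag
    excerpt: str    # surrounding context for the human reviewer

def scan_text(text: str) -> list[Flag]:
    """Stage 1: automated scan of student content for flagged phrases."""
    flags = []
    for severity, patterns in PATTERNS_BY_SEVERITY.items():
        for pattern in patterns:
            for m in re.finditer(pattern, text, re.IGNORECASE):
                lo, hi = max(0, m.start() - 30), m.end() + 30
                flags.append(Flag(severity, m.group(), text[lo:hi]))
    return flags

def queue_for_human_review(flags: list[Flag]) -> None:
    """Stage 2: every automated match goes to a trained reviewer, who
    decides whether it is a real threat; urgent matches jump the queue."""
    for flag in sorted(flags, key=lambda f: f.severity != "urgent"):
        label = "IMMEDIATE" if flag.severity == "urgent" else "standard"
        print(f"[{label}] {flag.matched!r} in context: ...{flag.excerpt}...")

if __name__ == "__main__":
    sample = "I can't do this anymore. Consider this my suicide note."
    queue_for_human_review(scan_text(sample))
```

Keeping the automated stage broad and the human stage authoritative matches the proportions in the report, where only a fraction of flagged items (for example, 5,600 of the 64,000 suicide or self-harm references) warranted immediate action.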