In a photo taken last March, a teenage boy is sitting at his desk with a plastic pellet gun that looks a lot like an AR-15. The airsoft rifle is propped up on the arm of a chair, pointing at the ceiling, and the boy, Eric, is looking at the camera. We're not using his last name to protect his privacy.
Eric's friend took the picture. At the time, Eric says, he didn't realize his friend had captioned the photo "Don't come to school Monday" and had sent it to others on Snapchat.
"I don't think he really had the intention of getting me in trouble," Eric says, explaining his friend's post as "dark humor."
But Eric, who is now in 10th grade, did get in trouble.
Someone reported the photo to Miami-Dade County Public Schools, where Eric is a student. Two police officers and Eric's principal took him out of class to question him. "I was terrified," Eric says. "They think that I wanted to shoot up the school, and I didn't. I didn't want to at all."
Eric was recommended for expulsion. His parents fought it, explaining that he didn't take the picture, caption it or send it to anyone himself. Ultimately, he was moved from his A-rated magnet school to a different school with a C rating.
His parents want to transfer him somewhere else. But they worry about what his record now says. Eric's dad, Ricardo, says, "Anybody that doesn't know the story will read this and say, 'There's no way in the world I'm gonna put this child in my school.' "
A spokesman for Miami-Dade schools said the district takes threats seriously, investigates them thoroughly and disciplines students when necessary.
For many students, this new school year will mean more reasons to watch what they do or say online. Spurred in part by the school shooting in Parkland, Fla., a year and a half ago, schools nationwide are collaborating with law enforcement in new ways in efforts to avoid the kind of tragedies that, while still rare, are far too familiar. They're investing in new security technologies that scan social media posts, school assignments and even student emails for potential threats.
The companies behind these technologies say they are saving lives. Privacy hawks and advocates for vulnerable students, such as those with disabilities, worry that the new surveillance tools could threaten students' privacy and have far-reaching implications.
Florida out in front
Not surprisingly, Florida is at the forefront of this new wave of school security efforts.
Since the Parkland shooting, the Miami-Dade school district has opened a police command center with live video feeds from 18,000 cameras located in its public schools. It has installed GPS tracking on every school bus. There is an app through which the public can report threats.
And, on Aug. 1, the state launched an ambitious data repository called the Florida Schools Safety Portal. It's intended to collect information from school discipline records, law enforcement and mental health and child welfare systems and display it all in one place, alongside tips sent in by the public and automatic scans of social media posts for potential threats.
It's the state's answer to a problem that policymakers have been trying to solve since the Feb. 14, 2018, mass shooting at Marjory Stoneman Douglas High School in Parkland, which left 17 people dead and 17 injured. Various people had concerns that the confessed shooter was dangerous, but they weren't always comparing notes.
The system has been controversial from the start. State advocates for students with disabilities and students with mental illnesses raised alarms that children would be unfairly stigmatized and tracked for actions rooted in their conditions.
Social Sentinel, one of the largest companies in the business of scanning social media for schools, declined to be a part of the portal, because, it said in a statement, "we did not feel comfortable participating in an extensive database of student profiles." The firm is still serving schools in Florida — it just won't link its results to the portal.
The portal just launched, but officials involved in the process have already started to question whether it will work as designed: as a one-stop shop for information. Because of privacy concerns, no one will be able to access the data without authorization. Only a police officer can see the law enforcement records, and only a school official can see the discipline records. So instead of integrating data, the portal essentially keeps it siloed.
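In software terms, that siloing comes from role-based access rules: each role is authorized to see only certain categories of records, so no single user can pull up the full picture. The short Python sketch below is a hypothetical illustration using invented roles and record categories, not the portal's actual design or code.

```python
# Hypothetical sketch of role-based record access, illustrating how a shared
# portal can still leave data siloed. Roles, categories and records here are
# invented for illustration; this is not the Florida portal's implementation.

from dataclasses import dataclass

@dataclass
class Record:
    student_id: str
    category: str   # e.g. "law_enforcement", "discipline", "tip"
    body: str

# Which record categories each role is authorized to view.
ROLE_PERMISSIONS = {
    "police_officer":  {"law_enforcement", "tip"},
    "school_official": {"discipline", "tip"},
}

def visible_records(records: list[Record], role: str) -> list[Record]:
    """Return only the records this role may see; everything else stays hidden."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [r for r in records if r.category in allowed]

records = [
    Record("s-001", "law_enforcement", "prior police contact"),
    Record("s-001", "discipline", "suspension, 2018"),
    Record("s-001", "tip", "anonymous tip via app"),
]

# A police officer and a school official querying the same student
# see different slices of the data; no single user sees the whole picture.
print([r.category for r in visible_records(records, "police_officer")])
print([r.category for r in visible_records(records, "school_official")])
```

As the sketch suggests, the restriction is enforced every time someone reads the data, which protects privacy but also prevents the cross-referencing the portal was pitched to provide.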
"The Department of Education and all the other stakeholders that were tasked with doing this have done the absolute best possible job they could with what they were asked to ... accomplish," Bob Gualtieri, a Florida sheriff and the chair of the state's Marjory Stoneman Douglas High School Public Safety Commission, said at a public meeting Aug. 15. "What they were asked to accomplish is, in essence, the impossible."
Initially set up to investigate the Parkland shooting, the commission's brief has grown to encompass school safety across the state, including the portal effort.
False alarms
Late one night last spring, after an event, David Cittadino got an alert that someone might want to hurt his students.
Cittadino is the superintendent of schools in Old Bridge, N.J. Safety has been an increasing concern in this suburban township, as it has been around the United States. Cittadino's first year as assistant superintendent was 2012, the year of the Sandy Hook school shooting. That same year, in his own small community, two young graduates of Old Bridge High School were shot and killed by a troubled veteran at a local supermarket.
In response, the school district has "hardened" schools, Cittadino says: more police, more security measures at the doors.
But, he adds, he has seen a new urgency in the past year. "Things changed after Parkland," he says. He remembers a school board meeting with members of the public asking, "What are you going to do?"
For his district, as for hundreds more around the country, the answer was new technology. Cittadino dug through his old phone messages and returned a call to Social Sentinel. It offers school professionals "Total Awareness," its website says.
Social Sentinel scans public posts to social media for potential threats to a school community. The company won't say exactly how it identifies which accounts to scan. When a threat is found, it is shared automatically with district officials and sometimes with the police.
A different company with a parallel mission, called Gaggle, scans learning software for similar threats. That means emails sent by students and faculty on school accounts, school assignments written with Google Docs or within the student software Canvas, and even calendar entries made in Microsoft Office. Gaggle safety experts manually review each alert before passing it along.
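Neither company discloses exactly how its detection works, but the basic shape of such a system, automated scanning that feeds a human review queue, can be sketched. The Python below is a hypothetical illustration only; the watch-list terms, data sources and escalation logic are invented and are not either vendor's actual approach.

```python
# Hypothetical sketch of a scan-then-review pipeline like the ones described
# above: automated flagging of student text, with every flag routed to a human
# reviewer before anything is escalated. The keyword list is invented.

from dataclasses import dataclass, field
from queue import Queue

FLAG_TERMS = {"kill", "shoot", "hurt myself"}  # invented example terms

@dataclass
class Alert:
    source: str          # e.g. "email", "google_doc", "social_post"
    text: str
    matched: set = field(default_factory=set)

def scan(source: str, text: str) -> Alert | None:
    """Flag text containing any watch-listed term; return None if clean."""
    hits = {term for term in FLAG_TERMS if term in text.lower()}
    return Alert(source, text, hits) if hits else None

review_queue: Queue[Alert] = Queue()

for source, text in [
    ("google_doc", "My essay on To Kill a Mockingbird"),
    ("email", "i'm going to hurt myself after school"),
]:
    alert = scan(source, text)
    if alert:
        # A human reviewer decides what, if anything, gets escalated.
        review_queue.put(alert)

while not review_queue.empty():
    a = review_queue.get()
    print(f"needs human review: {a.source!r} matched {a.matched}")
```

Note that in this toy example the essay title "To Kill a Mockingbird" trips the filter right alongside the genuinely alarming message, the kind of false flag that critics quoted later in this story say such systems generate in volume.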
Gaggle reported that between July and December 2018, it found 51,000 examples of what it called "questionable content" — most often bullying or sexual content, less often self-harm and least often threats of violence to others.
On its website, Gaggle claims that it has "helped districts save 722 students from carrying out an act of suicide."
That figure, says Gaggle CEO Jeff Patterson, is based "on the severity of the incident, the specificity of it and the imminent nature." He gives an example: "I'm getting on the bus, my parents aren't home and I'm going to kill myself." Stories like these are what sell these technology products. But details often can't be shared with the public or the press.
Sarah Trimble-Oliver is the chief information officer of Cincinnati Public Schools, which is a customer of Gaggle. The school district has about 36,000 students, and last year about 90 serious incidents came to its attention through Gaggle. In one such case, "it actually came through as an alert for self-harm," Trimble-Oliver says. "We did find that there was some actual planning for self-harm and harm to others."
Software offerings like these promise to partially automate school safety, giving school leaders like Cittadino and Trimble-Oliver peace of mind. They're meant to help administrators answer the question "What are you going to do to prevent the next incident?"
Cittadino points out that shootings like the ones in Parkland and Dayton, Ohio, were preceded by threatening statements made online. "For every incident we're reading about, not just the ones at schools, there was a social media footprint that led to these tragedies — people putting it on social media, dealing with feelings of loss, shut out by society, left alone, seeking revenge," he says.
Indeed, after the Columbine High School shooting, the U.S. Secret Service studied shootings committed by adolescents. In 81% of cases, at least one person knew the shooter was planning or thinking about committing violence.
But what happened that night in the spring of 2018 — the incident that Cittadino remembers as proving the usefulness of his security system, the reason that Social Sentinel connected him to a reporter — also shows the drawbacks of this move toward high-tech surveillance in schools.
First of all, the post, which Cittadino paraphrases as "I would not have a problem with taking out a bunch of people all at once, and I would have no remorse for it," didn't come from a current student. It was from an account that Social Sentinel connected to Old Bridge schools based on its algorithm.
Second, after Cittadino contacted police, they showed up at the person's house and determined that it was not a serious threat. It was more like someone venting emotions, he says.
In other words, as a result of this system, a school official experienced anxiety and sent the police to a young person's home late at night, with unknown repercussions for that person.
Cittadino sees a success story, a potential crisis averted. Social Sentinel sees the validation of its model. Amelia Vance, a student privacy advocate with the Future of Privacy Forum, sees a false alarm that burdens school and law enforcement resources, even as it infringes on civil liberties and free speech.
"There's no proven information showing that social media monitoring is useful," she says. "We have a lot of data showing it overwhelms with false flags."
No easy answers
Here's the hard truth: School shootings of any kind — and mass shootings in general — are still so rare that there is no evidence that any particular security measure will reduce them. That was the conclusion of a review of the literature published this year by Jagdish Khubchandani, a professor at Ball State University.
Whether you're talking about locked doors to the building, security cameras, metal detectors, more police officers or random checks of lockers, none of it has been shown to improve safety. Proving that any of them does, says Khubchandani, would require randomly assigning similar schools to one measure or another and then following up for years.
Newer high-tech alert systems, as well as facial and voice recognition, have no evidence behind them either, Khubchandani tells NPR. Still, he holds out hope: "If ... shooters have these warning signs, it does seem like a new out-of-the-box approach. It could be promising."
But he also sees a drawback. "I hope that the social media monitoring does not make criminals out of a bunch of students who are having problems."
School leaders, for their part, feel bound to do something to help. In fact, according to Vance, the student privacy advocate, in today's climate they may face legal liabilities if they don't — if they miss something, if something happens and they should have known. And technology companies, having taken millions of dollars from investors, are offering solutions for that anxiety. But it's not clear that students are any safer as a result.