There’s a set of questions that comes up with grim hindsight after a shooting like the one in Uvalde, Texas: Were there signs? Did we miss them? Could we have caught this? An entire industry has sprung up claiming that it has the answer: software that scans social media for threats.

Ari Sen, a computational journalist for the Dallas Morning News, has reported that the Uvalde Consolidated Independent School District purchased one of these social media monitoring services, called Social Sentinel, a few years ago. Right now, Ari says it’s hard to know if the software was active in Uvalde at the time of the shooting—the school district hasn’t answered that yet.

But the bigger question is whether the posts on the gunman’s now-removed Instagram page—including many photos of AR-15-style rifles—would even have been flagged by the software. If the answer is no, why are schools spending millions of dollars on these tools, and why does the industry claim they help protect students?

On Friday’s episode of What Next: TBD, I spoke with Ari Sen about what threat surveillance software promises and how it falls short. Our conversation has been edited and condensed for clarity.

Lizzie O’Leary: Can you explain what Social Sentinel is and who uses it?

Ari Sen: Social Sentinel is a social media monitoring technology. It’s used by dozens of colleges and hundreds of school districts all around the country. What they claim to do is to scan billions of social media posts with really sophisticated AI to identify threats of potential violence or self-harm. Now, some of the reporting that I’ve done suggests that these models may not be very sophisticated or that this might be a really hard problem to solve even if the models are very sophisticated.

In your reporting, you found that several school districts that bought this software, spending between $1 and $2 per student, weren’t getting all that much for their money.

Most of the school districts that we talked to that had used Social Sentinel did not find the service to be useful. I contacted every school we could find in the state of Texas that had used Social Sentinel or the three other social media monitoring services we studied. Over 200 school districts had used one of these four services since 2015. Most of them did not respond to my questions, but a handful, maybe five or six, did. Four or five of those said, “We canceled the service after a year. We didn’t find it to be useful,” or, “We found something else, like an anonymous reporting tool or a team of humans monitoring this stuff, that was as good as or better than the Social Sentinel service.”

One thing that I’ve heard a lot, not only from school districts but from colleges, is that 90 percent, 99 percent of the stuff that they were getting from the Social Sentinel service was false alerts. I’ve seen stuff like song lyrics, Bible verses, obvious jokes. If you just think about the way that people talk on social media, it’s a lot of sarcasm. It’s a lot of irony. It’s a lot of hyperbole. That can be really difficult for machine learning models to catch in general and particularly the less sophisticated stuff.

Do you have any examples of posts that got flagged where you thought, “Oh, come on. That’s someone tweeting lyrics”?

There is a college in Florida that I was able to get some flagged tweets from. Somebody tweeted the lyrics to the 2010 B.o.B song “Airplanes.” I think it picked up on the phrase “shooting stars.” Obviously, we’ve seen people tweeting about their favorite characters on TV shows: “If X character doesn’t get together with Y character, I’m literally going to die,” things like that. There’s a really funny tweet from one of these Florida colleges about Hamburger Helper and how Hamburger Helper needs to accept that it needs help.

They thought that was a mental health problem?

Evidently. Like I said, it’s hard to inspect these machine learning models. We don’t know for sure what exactly is going on behind the scenes there. But I am able to look at some of the things that they have flagged, and they don’t seem to be threatening at all. What we’ve heard anecdotally from schools and colleges is like, “Yeah, most of what we were getting is just not actionable.”

Is the algorithm searching for keywords? Does it look for shoot, kill, stab?

If you looked at Social Sentinel, the way they talked about the service early on, it very much sounds like a keyword-based service. They talk about how they have thousands of terms that they’re able to flag to school districts. The company now says that they have very sophisticated machine learning models. They have these eight different machine learning models that are able to classify text appropriately.

It’s also unclear exactly how these models work because the companies treat their algorithms as proprietary. They also say it would defeat the purpose of their work to disclose too much.

We don’t know what training data went into the models, or whether that training data has been audited for racial bias. All of that is opaque to us. It really raises the question: If schools are going to use this for such a serious and important purpose, should there be some transparency about the models, the training data, and how effective they are?

Moreover, machine learning models often struggle with slang and the way kids talk. That can mean posts from students of color are disproportionately flagged by the algorithms.

There was a really interesting paper by some UMass Amherst researchers a couple years ago where they took African American Vernacular English and they plugged it into language identification machine learning models. Obviously, what it should spit out is that this is English. In actuality, one of these models flagged that language as Dutch with 99 percent confidence. So these models do poorly on non-Anglicized English text in general and may exhibit biases from the data they were trained on. If you look at Social Sentinel’s claims, for example, on their website, they say, “We don’t perpetuate any biases.” The experts that I’ve talked to have said that’s very difficult to do if the underlying models you’re using behind the scenes have these sorts of biases built in.

While Social Sentinel claims it covers almost all of social media, your reporting and work from BuzzFeed News suggests it mostly just monitors Twitter. Do you think these services can even keep up with how students use social media as they jump from platform to platform?

Obviously, you have the problem of young people hopping between different services. The big thing now is TikTok, for example. Maybe 10 years ago, it was Facebook. It’s hard for these monitoring services to keep up. Then you also have the ways in which language changes naturally over time. And, as we were just talking about, language differs very widely across groups and geographies. The way people talk in California is not the same way people talk in North Carolina.

I saw that the Uvalde shooter was using a service called Yubo, which I suspect these companies are not monitoring.

I hadn’t even heard of Yubo. One of the things I’ve seen in my reporting on Social Sentinel is that police chiefs going back to 2015 were constantly bugging Social Sentinel: Can you add this platform? Can you add this platform? Sometimes they did, and sometimes they didn’t. But it’s very, very difficult to keep up with the fast-paced nature of how young people are acting online.

Listening to you, there seems to be a pretty substantial body of evidence that these services are largely ineffective. Do you think that’s a fair assessment? And if it is, why are they still being touted as a solution?

We haven’t really been able to identify a clear case of this service working. We have heard some anecdotes about maybe some of the other services preventing kids from harming themselves. But I think the question we have to ask is, is it worth the privacy invasion? I found in my reporting last year that most of the time students and parents weren’t told at all that these services were in place and had no way to opt in or opt out. What needs to happen is a more open conversation about, “This is the service that we’re using, this is why we’re using it, and these are the things that it looks for.”

If these schools and universities are under such pressure to do something, but the debate around guns is either a nonstarter where they are or completely out of their hands, maybe this software feels like a reed they can grasp.

We’re obviously having a larger political conversation about what gun restrictions we do and don’t want. But schools are desperate to do something to protect their kids, whether that’s safety drills, monitoring services like the ones we’ve been talking about here, or physical security measures like metal detectors. One of the things the Uvalde shooting shows us is that even in school districts that have all of those things in place, all the training and all the officers, these measures can and do fail. So there needs to be a different conversation about which measures are effective and whether new approaches are needed to tackle this problem.

These kinds of programs ingest a tremendous amount of information and data; to work well, they have to. It does make me wonder what other reasons a school district or university might have for wanting this information, or how else they might use it.

Some of what I’ve been seeing in my reporting, particularly at the college level, is that colleges are adopting these services to monitor protests and activism. Obviously, that’s very chilling. In 2016, a company called Geofeedia got caught monitoring Black Lives Matter protesters. But Geofeedia is obviously not the only player out there. My reporting has suggested that these other services, particularly Social Sentinel, may be used at the college level to monitor protests and activism.

What did the schools say when you asked them about this?

I have contacted every college that we know of that’s used Social Sentinel and asked about this question specifically. A lot of them don’t want to talk about this. We haven’t really heard a full-throated defense of, “We’re monitoring this protest to keep students safe.” A lot of them are very tight-lipped, so we have to rely on documents and whistleblowers inside of the company to give us information.

Does the company say, “Yeah, we know our stuff is being used to monitor protesters”?

The company fervently denies any ability or use of the service to monitor protesters in any way, and they have since the beginning. But that claim is very dubious.

It’s worth remembering that most of these services are being paid for with public money. An investigation by BuzzFeed News examined contracts from 130 schools and found that they collectively spent $2.5 million on social media monitoring over five years. If you’re listening to this and you’re a parent, a teenager, or a college student, what other kinds of questions do you think you should be asking your educators and your administration about these services?

Well, first of all, I think it’s just important to know whether the service is in place or not. For example, when I was reporting on these four social media monitoring companies last year, I discovered that my high school had used Gaggle, one of the monitoring services. I knew from previous reporting that my undergraduate institution, UNC Chapel Hill, used Social Sentinel. First of all, we should just ask the campus police department, the school administrators, “What service are you using? What does it monitor for, and why are you using it?”

The next questions are: Is it effective? Is it doing what it set out to do, what they claimed it could do when they were marketing the service to you? If it’s not, then people really have to ask why we’re still using it if it doesn’t do what they said it would.
