A few minutes before 8 a.m. Monday, millions of New Yorkers' phones screeched almost simultaneously. They all received the same notification: Ahmad Khan Rahami was wanted.
That's it. No links, no pictures, no further context — no one to call except 911. I got the alert while feeding my toddler breakfast, and in my pre-coffee haze, I glanced at my phone and mistook it for an Amber Alert. Elsewhere in the city, riders in packed subway cars must have looked up from their phones and regarded one another warily. Might one of their fellow passengers be Ahmad Khan Rahami? Young men with brown skin might well have wondered: Might one of my fellow passengers mistake me for Ahmad Khan Rahami?
The alert appears to have been the first of its kind, the New York Times reports. That is, it's the first time the Wireless Emergency Alerts system has been used as a sort of virtual "WANTED" poster, as opposed to its more familiar uses in weather emergencies or child abductions. The alert went out throughout New York City, and the decision to use the system for that purpose came from the office of Mayor Bill de Blasio.
By noon, the suspect in question had been arrested. There's no evidence, at this point, that the mobile push notification helped authorities find him.
De Blasio's press secretary, Eric Phillips, said on Twitter that the ability to use mobile push notifications in a manhunt is an "important added capacity" for law enforcement:
But others criticized authorities' decision to use the system in this way. In New York magazine, Brian Feldman called it "an extremely bad push alert to blast across the greater New York area":
It provides no useful contextual information, warns of no imminent danger. It essentially deputizes the five boroughs and encourages people to treat anyone who looks like he might be named "Ahmad Khan Rahami" with suspicion. In a country where people are routinely harassed and assaulted for just appearing to be Muslim, this is remarkably ill-advised.
Feldman is right that the notification was seriously flawed. And yet I think Phillips is also right that the ability for authorities to reach people on their cellphones could be important, if used judiciously.
It's a tenet of good crime reporting that you don't describe a suspect unless you have enough information that people could realistically distinguish that individual from others of similar age, race, build, etc. So to enlist the public in a hunt for, say, a "28-year-old male with dark skin, medium build, and brown facial hair" would be dangerous folly. You're asking people to go after a stereotype, not an individual.
On the other hand, if you have a clear photo of the suspect's face, you publish it, while describing the suspect in as much detail as possible. Countless crimes have been solved because a member of the public happened to spot a suspect whose face they had seen in the news. There's still a real risk of false positives, which has to be taken seriously. But depending on the severity of the crime, it could be outweighed by the public safety interest of catching the perpetrator.
In this case, the notification included neither a description nor a face, but a name. That's better than a vague description, because it identifies an individual rather than a stereotype. No doubt there are people in New York City and the surrounding area who know Rahami personally but were not aware that he was wanted. If the notification reached those people, it could spur them to provide information that would help authorities track him down.
But if a name is better than a vague description, it's still precious little to go on for the millions of New Yorkers who don't happen to know Rahami. Without a face to go with it, it simply encourages people to view any young man who looks like he might have such a name as a potentially deadly terrorist. That's deeply unfair, and it could lead to innocent people getting hurt.
Granted, the alert did not omit the face out of ignorance or malevolence. Due to various technical constraints, the system's geographic targeting is poor, and it is limited to text-only messages of 90 characters or fewer. That means the mayor's office couldn't have included the suspect's face even if it had wanted to — which it surely did, since authorities intentionally spread the image on social media before sending the notification.
That leaves two open questions. First, was the mayor's office right to send this alert, given the constraints and the risk of casting suspicion on innocent people? And second: If the system were to allow authorities to convey greater detail, including links or images, would that be a good thing?
I don't think there are easy answers to either question. But on the first, I lean toward "no," while acknowledging that it's far easier for me to criticize such a decision than it was for them to make it.
True, authorities were under tremendous pressure to do whatever they could to find the suspect before anyone else got hurt. But a system this crude, intrusive, and potentially harmful should not be employed on an ad hoc basis. There should be clear, well-thought-out policies in place to ensure that it's used as carefully, as sparingly, and as effectively as possible. Those policies should be debated in public and codified before the system is used in a new way. And out of that process should come an answer to the second question.
If authorities have the right man, we can all be grateful for their investigative work and thankful that he's no longer in a position to endanger innocent people. Next time, let's hope the authorities take a little more care not to inadvertently endanger innocent people themselves.