Opinion | Instagram users need a way to report death threats

(Washington Post staff illustration; iStock)


Sherry Hakimi is the founder and executive director of genEquality, an organization that promotes gender equality and inclusion.

Friday, October 14, was a hectic day for me. It started with a meeting with Secretary of State Antony Blinken and ended with death threats.

I was one of a small group of Iranian-American women invited to meet with senior State Department officials to discuss the growing women-led protest movement in Iran. Beforehand, the secretary's office asked whether we would agree to have news media present while the secretary made a statement. Anticipating no harm, I said yes.

Unfortunately, I was wrong. Over the next week, I learned firsthand how the policy and design decisions of social media platforms, such as Instagram, directly affect the personal safety of users.

Shortly after photos from the State Department meeting appeared in the news, my Twitter mentions (normally nonexistent, as I barely use Twitter) spiked. Tweets poured in containing misinformation, hate, harassment, threats of violence and vile insults. That night, the attacks spread to Instagram. The next morning, I woke up to a slew of message requests on my private Instagram account. Mixed in with the hateful and harassing messages were death threats.

As someone who readily describes herself as a "politics nerd" — certainly not a public figure — I was stunned.

Not knowing how to handle death threats, I followed the platform's user interface, hoping it would guide me to an appropriate response. Instagram offers three options at the bottom of a message request: block, delete or accept.

Obviously, "block" was my only viable option. It wouldn't make sense to "accept" hate and threats, and "delete" without blocking would allow those harassers (or bots) to come back. No thanks.

Tapping "block" brings up a menu with options to "ignore," "block account" or "report." Tapping "report" leads to a new menu, which says: "Select an issue to report."

Here I encountered a new problem (on top of the death threats): None of the reporting categories adequately captures the seriousness of a death threat.

So I flagged all of the messages under the two closest options: "violence or dangerous organizations" and "bullying or harassment." Then I waited. Two days later, even after contacting a friend who works at Meta and accepting her offer to file an internal escalation, no response had come. Threatening message requests kept arriving. Finally, a private security professional showed me how to change my Instagram settings to stop message requests altogether.

It should be noted that in many places, local police do not treat threats made on social media platforms as falling within their jurisdiction. When my local officers learned that the death threats had come via Instagram, they told me they had no expertise in the matter, advised me to contact Meta, and hung up.

By Wednesday — four days later — I still hadn't heard from anyone at Meta who wasn't my friend or a friend of a friend. Coincidentally, Nick Clegg, Meta's president of global affairs, was speaking that Friday at the Council on Foreign Relations (CFR), of which I am a member. So, through a combination of coincidence, privilege and nerve, I had the opportunity to ask Clegg why Meta doesn't have a "death threat" option in its reporting process.

I am grateful to him for taking my question seriously. He sounded genuinely sorry, if unsure what to say or do, and his team quickly followed up. Yet the larger problem with the reporting process needs to be addressed at scale.

There are at least three ways Meta could redesign its approach and its choice architecture to better uphold its community guidelines and improve user safety. First, add a "death threat" option to the categories of issues that can be reported. Twitter's reporting process, for example, includes a category for "threatening me with violence." In most jurisdictions, a death threat is a criminal offense, especially when conveyed in writing. It should be categorized as such.

Second, assemble a team dedicated to monitoring and handling death threat reports. If local police will not claim jurisdiction over threats made on its platforms, Meta must step in. Users who threaten harm should be identified and banned from the platforms without delay. And beyond enforcing its own community guidelines, the company should work with law enforcement to ensure that incidents are properly reported and that local laws are followed.

Third, and perhaps the easiest and quickest fix: Under the settings for privacy and messages, make "don't receive" the default. Receiving message requests from strangers should be an opt-in setting, not an opt-out one. In his remarks at CFR, Clegg mentioned that Meta uses nudge-theory principles to guide users toward best practices. It took me three days to learn that I had the option not to receive message requests from people I don't follow. At a minimum, "don't receive" should be the default for users with private accounts.

As someone whose life was turned upside down by death threats on a Meta platform, I have one more message for the company: Please hurry up and fix this.
