To many on Twitter, images of Foley's killing spread a terrorist's message, while images of Brown's death shine a light on a perceived injustice.
"They're letting the masses decide what should be up and what should not be up," said Ken Light, a professor of photojournalism at the University of California, Berkeley. "When it's discovered, it needs to be dealt with promptly. The beheading video should never go viral."
The dilemma faced by Twitter, a proponent of free speech and distributor of real-time information, isn't much different from that of a newspaper or broadcaster, according to Bruce Shapiro, executive director of the Dart Center for Journalism & Trauma at Columbia Journalism School.
"Twitter's situation is exactly like that of a news organization," Shapiro said. "Freedom of the press and freedom of expression doesn't mean that you should publish every video no matter how brutal and violent."
The incidents also happened just after Robin Williams' daughter, Zelda, said she was quitting Twitter after receiving abusive messages following his death.
"In order to respect the wishes of loved ones, Twitter will remove imagery of deceased individuals in certain circumstances," the San Francisco-based company said in a policy that was enacted last week. "When reviewing such media removal requests, Twitter considers public interest factors such as the newsworthiness of the content and may not be able to honor every request."
Twitter's software isn't designed to automatically filter all inappropriate content. The company's Trust and Safety team works in all time zones to stamp out issues once they're discovered, according to Nu Wexler, a spokesman for the company. Twitter uses image-analysis technology to track and report child exploitation images, Wexler said.
Twitter doesn't specifically prohibit violent or graphic content on its site; its terms of service bar only "direct, specific threats of violence" and "obscene or pornographic images." It may need to go further if Facebook's experience is any guide.
In October, around the time Twitter started displaying images automatically in people's timelines, Facebook was dealing with an uproar over a separate beheading video that was spreading around its site. The company resisted taking it down until user complaints intensified, including from British Prime Minister David Cameron. Then Facebook changed its policies.
"When we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video," the Menlo Park, California-based company said at the time. Facebook said it "will remove content that celebrates violence."
Now that Twitter is encouraging images and video, it will also need to take another look at its rules, according to Columbia's Shapiro.
"I don't think a blanket rule is the point," Shapiro said. "You do need a company policy that recognizes that violent images can have an impact on viewers, can have an impact on those connected to the images, and can have an impact on the staff that have to screen this stuff. You can't ignore Twitter's role in spreading these images."
-Bloomberg