"We quickly removed both the shooter's Facebook and Instagram accounts and the video," a Facebook spokesperson said. "We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware."

Experts say the Christchurch video highlights a fatal flaw in social media companies' approach to content moderation. "It's very hard to prevent a newly-recorded violent video from being uploaded for the very first time," Peng Dong, the co-founder of content-recognition company ACRCloud, tells TIME. The way most content-recognition technology works, he explains, is based on a "fingerprinting" model. Social media companies looking to prevent a video from being uploaded at all must first upload a copy of that video to a database, allowing new uploads to be compared against that footage.

Even when platforms have a reference point, the original offending video, users can manipulate their version of the footage to circumvent upload filters, for example by altering the image or audio quality.
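The fingerprinting model described above can be sketched in a few lines: a known video is reduced to a set of chunk hashes stored in a database, and each new upload is scored against that set. This is a minimal illustration, not any platform's actual system; real content-recognition services use perceptual hashes designed to survive re-encoding, while the exact SHA-256 hashing here (and the toy `fingerprint`/`match_score` helpers) are stand-ins chosen for clarity. Note how a slightly altered copy already slips past this naive exact-match filter, which is precisely the weakness the article describes.

```python
import hashlib

CHUNK_SIZE = 4  # bytes per chunk; real systems fingerprint video frames or audio windows


def fingerprint(data: bytes) -> set[str]:
    """Hash fixed-size chunks of the byte stream into a fingerprint set."""
    return {
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    }


def match_score(upload: bytes, database: set[str]) -> float:
    """Fraction of the upload's chunks already present in the database."""
    chunks = fingerprint(upload)
    return len(chunks & database) / len(chunks) if chunks else 0.0


# The platform first stores a fingerprint of the known offending video...
known_video = b"frame1frame2frame3frame4"
db = fingerprint(known_video)

# ...an exact re-upload matches fully, but an altered copy (e.g. degraded
# image or audio quality) shares few chunk hashes and evades the filter.
print(match_score(b"frame1frame2frame3frame4", db))  # 1.0
print(match_score(b"frameXframeYframeZframeW", db))  # well below 1.0
```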