Options for Removing Objectionable Content on Social Media Platforms
In today’s environment, most of us have a photograph or video on social media that we would rather not have everyone see. Maybe it’s an unflattering photograph, an embarrassing video, a post featuring our minor children or an old photo with an ex-flame. When someone shares a photo or video of an individual without their consent, the consequences can range from mere annoyance to emotional distress or even damage to one’s reputation and employment prospects. The direct approach is to reach out and ask the user who posted the content to remove it from the social media platform. However, if the third party refuses to remove the photo or video, there are other, albeit limited, options for redress.
Section 230 of the Communications Decency Act protects social media platforms from certain liability for content posted on their sites. However, users of social media sites do not have an unfettered First Amendment right to post anything they want. We have previously discussed legal remedies for copyright or trademark infringements, and there are procedures to report and request the removal of illicit materials such as defamatory or obscene content or media posted by minors. An individual’s rights and remedies are more limited, however, when their image is posted on the Internet without their consent.
Reporting Violative Content to Social Media Sites
While each social media platform has its own terms of use, the policies are generally comparable across platforms, including Instagram, Facebook, WhatsApp, Snapchat, TikTok, X (formerly Twitter) and others.
Instagram’s policies provide a representative example of the terms and conditions users automatically agree to when they access social media platforms. Users who fail to adhere to Instagram’s Community Guidelines may have their content deleted, their accounts disabled or other restrictions imposed on their accounts. Anyone can report posts or profiles that violate Instagram’s Community Guidelines, including content that features spam, nudity or sexual activity, hate speech or symbols, violence, terrorist organizations, hate groups, bullying or harassment, the illegal sale of goods, intellectual property violations, suicide or self-harm, eating disorders, scams or fraud and false information. The first step is to preserve evidence of the offensive behavior or content, particularly in the case of an account or content involving bullying and harassment, to assist the site in taking appropriate action against the offender. If Instagram does not initially remove the reported content, the complainant may request a second review, which is usually completed within 24 hours of the request. If Instagram reviews the content and still decides that the content or user does not violate the Community Guidelines, the complainant may appeal to the Oversight Board, which has the discretion to decide whether to review the decision. The complainant has 15 days to appeal to the Oversight Board, after which the prior decision is final.
While waiting for a determination from the platform on whether the content violates the site’s terms of use, or certainly if the platform decides not to remove the content, there are self-help measures. The aggrieved party should remove or block the objectionable content to prevent it from appearing on their own profile and in the feed visible to their contacts. For example, when an individual blocks an objectionable user on Instagram, that user’s likes and comments are removed from the individual’s content, and the user can no longer mention the individual, tag them or message them in Instagram’s Direct inbox. Individuals can also remove and block others from accounts beyond social media, from streaming music services like Spotify to fitness tracking apps like Fitbit and Strava.
Defamation and Privacy and Publicity Rights
To support a defamation claim, the content posted must be derogatory or damaging to the reputation of its subject. A user can be held liable for defamation if (1) the content was defamatory, (2) the content was false, (3) the content concerned the plaintiff, (4) the defendant had some degree of fault and (5) the content was damaging to the plaintiff’s reputation. See the Restatement (Second) of Torts § 558. For the related privacy tort of publicity given to private life, the material publicized must be highly offensive to a reasonable person and not of legitimate concern to the public. Id. § 652D.
The test for invasion of the right to privacy includes unreasonable intrusion upon another’s seclusion, appropriation of another’s name or likeness without their consent, unreasonable publicity given to another’s private life and publicity that unreasonably places the subject in a false light. To establish a privacy claim, the victim must be identifiable in the content. Moreover, if a photograph or video is taken in a public rather than a private setting, it is unlikely that an invasion of privacy claim can be established. As the Restatement (Second) of Torts § 652D, comment c, summarizes: “[c]omplete privacy does not exist in this world except in a desert, and anyone who is not a hermit must expect and endure the ordinary incidents of the community life of which he is a part.”
Strategies for Removing Copyrighted Content from the Internet
While registration is not a prerequisite for copyright protection, there are numerous benefits to registering a work with the Copyright Office, including creating a public record of ownership, allowing the copyright owner to bring an infringement suit, establishing prima facie evidence of a valid copyright and establishing eligibility for statutory damages and attorneys’ fees. For a discussion of copyright protection and registration of photographs, see here.
The Digital Millennium Copyright Act (“DMCA”) requires online service providers to have policies in place to address infringement of copyrighted works in order to qualify for the statute’s safe harbor. Social media platforms must expeditiously take down infringing material once notified of it, ensure they are not receiving a direct financial benefit from the infringing material, designate an agent to receive copyright infringement claims, adhere to the notice-and-takedown procedures set forth in Section 512(c) of the DMCA and implement a repeat infringer policy. If a third party uploads a copyright owner’s photo, video or music, the copyright owner can report the poster to the social media platform and have the content removed. Third parties can be held liable for copyright infringement even if the content was recorded on the user’s own device (for example, a song playing in the background of a video taken at a party or concert), credit is given to the copyright owner in the caption, a disclaimer is included and the user did not profit from the content. If the poster repeatedly posts infringing content, their account may be disabled or their page removed under the platform’s repeat infringer policy. To file a DMCA claim, the reporting party must be the copyright owner or someone authorized to act on the owner’s behalf. Section 512(h) of the DMCA also grants copyright owners the power to subpoena online platforms to obtain identifying information about an anonymous infringer on their sites. The subject of a photograph or video is not covered by the DMCA unless they are acting under the authority of the copyright owner. Compliance with these procedures protects social media platforms from money damages stemming from infringing activity on their platforms.
Policies for Removing Images of Minors
There are also ways to compel the removal of photographs and images that feature subjects under 18. For example, Google allows the removal of images of individuals under the age of 18 from Google search results upon the request of the individual, their parents or their guardian, except in cases of “compelling public interest or newsworthiness.” While Google cannot remove an image from the website the search results link to, it can remove the image from its Images tab and from thumbnails in any feature on Google Search. As noted above, the best way to get the image removed from the website itself is to contact the website owner directly and request that the image be taken down. The WHOIS database, an Internet record of domain name owners and their contact information, is one resource for identifying the website owner.
Ex-friends or significant others are not always the ones posting embarrassing or harmful content. “Sharenting,” the use of social media by parents to share news and images about their children, can also have consequences for children’s privacy and emotional well-being. The first generation of children born in the Facebook era is coming of age now, and the childhoods of many of them have been documented on the Internet by their parents. By posting photographs of their children, parents put them at risk of facial recognition tracking, stalking, digital kidnapping and other privacy and security harms. Putting aside the worst-case scenarios, a child may simply be embarrassed or teased by peers over certain photos or videos. With years of evidence of the potential harm, parents are strongly encouraged to refrain from posting images that include their child’s face, to make their social media accounts private or to wait until their child is old enough to give informed consent to the post.
Conclusion
Social media platforms allow individuals to share moments from their lives, strengthen relationships with friends and family and create a timeline of important events, milestones and memories. However, because the remedies for removing objectionable content are limited, it is crucial to review privacy settings, protect personal information and be mindful of one’s social media presence. Please reach out to Lutzker & Lutzker for guidance on protecting privacy and intellectual property on social media.