Australia-based Thea-Mai Baumann told The Times that she’s had an Instagram account since 2012 with the handle @metaverse. She posted about her AR company, Metaverse Makeovers, on her account. The app allowed users to try on holographic nail designs.
But on Nov. 2, five days after Facebook announced its name change, Baumann told the outlet that Instagram had disabled it.
“Your account has been blocked for pretending to be someone else,” read a message in her app. She tried to get answers from Instagram, including who she was accused of impersonating, to no avail.
It wasn’t until a month later that The New York Times reached out to Meta to ask what had happened.
An Instagram spokesperson told the paper that the account was “incorrectly removed for impersonation.”
“We’re sorry this error occurred,” they said, per The Times, without elaborating on why Baumann’s profile was disabled for impersonation. The account was reactivated two days later.
“This account is a decade of my life and work. I didn’t want my contribution to the metaverse to be wiped from the internet,” Baumann told The Times.
“That happens to women in tech, to women of color in tech, all the time,” said Baumann, who has Vietnamese heritage.
Meta did not immediately respond to Insider’s request for comment.
The company changed its name to reflect its goal of expanding into the metaverse, a futuristic virtual landscape where people can live, play, and work with digital avatars.
Meta filed to trademark the name on Oct. 28, according to its filing with the Patent and Trademark Office. But the nonprofit Chan Zuckerberg Initiative gained ownership of the “META” trademark in 2018, according to a separate filing.
Social media’s role in radicalizing extremists has drastically increased over the last several years.
Some far-right TikTokers employ a meme-like format in their content to dodge content moderation.
TikTok serves violent, white supremacist content to users who interact with anti-trans content.
A recent study from left-leaning nonprofit watchdog Media Matters found that if a TikTok user solely interacts with transphobic content and creators, the social networking app’s algorithm will gradually begin to populate their “For You” page with white supremacist, antisemitic, and far-right videos, as well as calls for violence.
Launched in 2016 by Chinese tech startup ByteDance, TikTok saw a surge in user growth throughout the COVID-19 pandemic and acquired 1 billion users across the world in five years, many of whom are teenagers and young adults.
In 2020, the app classified more than a third of its daily users as 14 years old or younger, The New York Times reported. A former TikTok employee noted that videos and accounts made by children who appeared younger than the app’s minimum age requirement of 13 were allowed to remain online for weeks, The Times reported, raising questions about measures taken by the platform to protect its users from misinformation, hate speech, and even violent content.
In the experiment, researchers from Media Matters created a dummy account, interacted with anti-trans content, and then evaluated the first 400 videos fed to the account. Some of the videos were removed before they could be analyzed, while others were sponsored advertisements unrelated to the study. Of the remaining 360 videos, researchers found:
29 contained racist narratives or white supremacist messaging
14 endorsed violence
“While nearly 400 may sound like a large number of videos, if a user watches videos for an average of 20 seconds each, they could consume 400 videos in just over two hours. A user could feasibly download the app at breakfast and be fed overtly white supremacist and neo-Nazi content before lunch,” the study concluded.
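The study’s arithmetic can be checked directly: 400 videos at an average of 20 seconds each works out to just over two hours of viewing, as the conclusion claims.

```python
# Verify the Media Matters back-of-the-envelope figure:
# 400 videos at an average of 20 seconds each.
videos = 400
avg_seconds = 20

total_seconds = videos * avg_seconds          # 8,000 seconds of viewing
hours, remainder = divmod(total_seconds, 3600)  # 2 full hours, 800 seconds over
minutes = remainder // 60                       # 13 minutes

print(f"{hours} h {minutes} min")  # → 2 h 13 min, i.e. "just over two hours"
```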
The far-right movement has historically embraced anti-trans rhetoric, and right-wing recruiters know that “softer ideas” like transphobia can be used to introduce newcomers to more extreme beliefs, Melody Devries, a Ryerson University PhD candidate who studies far-right recruitment and mobilization, told Insider.
“The videos that start people down the rabbit hole are things that are, unfortunately, prejudices that are not considered that extreme in society,” Devries said.
Unforeseen consequences of the digital age
Before the rise of social media, individuals predominantly formed their beliefs through real-world networks of relationships with parents, family members, and friends. Social media platforms, however, gave individuals the ability to expand these social networks by building communities in online environments.
The rapid expansion and evolution of digital spaces have transposed extremist content and ideologies from niche corners of the Internet to platforms that are frequented by billions of users.
“Now, Facebook, Instagram, Twitter, all of our communications platforms that we think of as sort of the most easy to use can be the starting point [of radicalization]. And then a person can move into more layered applications that are harder to penetrate,” Thomas Holt, a professor and director of the Michigan State University School of Criminal Justice, told Insider.
In 2012, social media played a role in the radicalization of only 48% of extremists listed in Profiles of Individual Radicalization in the United States (PIRUS), a dataset maintained by the National Consortium for the Study of Terrorism and Responses to Terrorism (NCSTRT). By 2016, 86.75% of PIRUS-listed extremists used social media in their radicalization process, according to an NCSTRT research brief.
Holt mentioned Facebook, Instagram, and Twitter, all of which are either a decade or more than a decade old. But in the past five years, TikTok has become one of the fastest-growing social media platforms of all time, known for its powerful algorithm that serves up highly tailored videos.
Because social media profit models rely heavily on user engagement, most companies choose to take the proverbial “middle road” when moderating content in order to avoid accusations of censorship from either side of the political spectrum and, ultimately, damaging their bottom line, according to Devries.
“The fact that those platforms are totally fine with that, because that’s their profit motive, and that’s their design, I think is a problem and obviously contributes to how right-wing communication is transformed,” Devries told Insider.
Subpar content moderation has allowed implicit extremist content to largely remain on platforms, sometimes reaching up to millions of users. Many of the extremist TikTok videos analyzed by Media Matters employed a “memetic format,” or utilized the platform’s unique combination of audio, video, and text to evade violating community guidelines.
For example, several of the videos served to the “For You” page of the researchers’ dummy account used a sound called “Teddy,” which quotes the first line of “Unabomber” Ted Kaczynski’s manifesto: “The industrial revolution and its consequences have been a disaster for the human race.”
The sound, which has been used in more than 1,200 videos, has become popular on right-wing TikTok.
“In the videos we reviewed, it was frequently paired with montages of screenshots of LGBTQ people livestreaming on TikTok. These videos not only use audio that pays homage to a terrorist, but they also promote the harassment of LGBTQ TikTok users,” Media Matters researchers wrote.
While the “Teddy” sound might not explicitly violate the platform’s guidelines, videos using it frequently communicate hateful, dangerous, and even violent messages when taking into consideration the full piece of content, including other components like visuals and text.
The Internet has become a critical resource for extremist groups, and loopholes around community guidelines allow them to promote their ideologies to larger audiences in subtle and convincing ways, according to Holt’s research in Deviant Behavior.
“Whether [viewers] initially believe it or not, over time, these interactions with that content slowly kind of chips away at their ideological belief system and builds up a new one that’s based around the ideas presented in this content,” Devries said.
Stopping online interactions with extremist content
“It’s not just the US. Every country is being impacted in some way by the use of and misuse of social media platforms for disinformation, misinformation, or radicalization. There’s an inherent need for better regulation, better management of platforms, and, to the extent that it can be provided, transparency around reporting and removal,” Holt told Insider.
However, Devries added, it’s not about presenting counter-facts; the interactions themselves need to be stopped.
In her ethnographic analysis of far-right Facebook spaces, Devries has seen the platform add infographics warning that a post contains misinformation in an attempt to moderate content, an approach that she sees as counterproductive.
“Not only are folks interacting with the false content itself, they’re interacting with the fact that Facebook has tried to censor it. So that infographic itself becomes another piece of content that they can interact with and pull into their ideology,” Devries told Insider.
When asked for comment, a Facebook spokesperson maintained that the company tries to give the maximum number of people a positive experience on Facebook and takes steps to keep people safe, including allocating $5 billion over the next fiscal year for safety and security.
When a Wall Street Journal investigation exposed how Facebook proliferated real-world harms by failing to moderate hate speech and misinformation, the company acknowledged in a September 2021 blog that it “didn’t address safety and security challenges early enough in the product development process.”
Rather than pursuing reactive solutions like content moderation, Holt proposes that social media companies mitigate online extremism on their platforms by implementing solutions like those used to remove child sexual exploitation content.
Tools like Microsoft’s PhotoDNA are used to stop online recirculation of child sexual exploitation content by creating a “hash,” which functions as a sort of digital fingerprint that can be compared against a database of illegal images compiled by watchdog organizations and companies, according to Microsoft.
If this kind of technology were applied to social media, Holt said, it could be automated to take down content associated with extremism or violent ideologies.
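PhotoDNA itself is proprietary and uses a perceptual hash that survives resizing and re-encoding, but the compare-against-a-database workflow Holt describes can be sketched with an ordinary cryptographic hash. The function and database names below are illustrative, not PhotoDNA’s actual API:

```python
import hashlib

# Hypothetical blocklist: fingerprints of known prohibited images,
# as might be compiled by watchdog organizations. (Illustrative values only.)
BLOCKED_HASHES = {
    hashlib.sha256(b"known-bad-content").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Compute a hash acting as a digital fingerprint of an upload.
    SHA-256 stands in for a perceptual hash here, so it only matches
    byte-identical files rather than visually similar ones."""
    return hashlib.sha256(data).hexdigest()

def should_remove(upload: bytes) -> bool:
    """Automated takedown check: compare the fingerprint to the database."""
    return fingerprint(upload) in BLOCKED_HASHES

print(should_remove(b"known-bad-content"))  # → True
print(should_remove(b"harmless-photo"))     # → False
```

The design point is that the platform never needs to store or redistribute the prohibited content itself, only its fingerprints, which is what makes the approach practical at scale.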
Still, this solution relies on social media platforms making internal changes. In the meantime, Holt advocates for better public education on these platforms and how to use them responsibly.
“Yeah, the cat is out of the bag. I don’t know how we roll it back and minimize our use of social media. So instead, it seems like we have to get better at educating the public, particularly young people, to understand, ‘Here’s how the platforms work, here’s what may be there,'” Holt told Insider.
Ultimately, both Holt and Devries agree that more research is needed to analyze how newer platforms like TikTok are used to mobilize extremists and radicalize newcomers into their ideology, as well as discover solutions to minimize and counteract the fallout.
TikTok told Insider that all of the content cited in the Media Matters study was removed from the platform for violating its hateful behavior policy. Additionally, the company outlined anti-abuse efforts that it has built into its product, including its addition of new controls that allow users to delete or report multiple comments at once and block accounts in bulk.
Still, Eric Han, head of US safety for TikTok, said in an October statement that harassment and hate speech are “highly nuanced and contextual issues that can be challenging to detect and moderate correctly every time.”
“To help maintain a safe and supportive environment for our community, and teens in particular, we work every day to learn, adapt, and strengthen our policies and practices,” TikTok said in its Q2 2021 transparency report.
But Professor Andrew Przybylski, director of research at the Oxford Internet Institute, believes the research isn’t the smoking gun politicians believe it to be.
In an interview with Insider, Przybylski, an experimental psychologist who specializes in social media, said the leaked research amounted to little more than extremely preliminary work, and was a long way from definitive proof that Instagram is bad for teenage girls. “If you were responsible and you ran a giant social media platform, this is like scoping research. This is the beginning of research,” he said.
“A lot of this work wouldn’t pass muster as a bachelor’s thesis,” he said. To have the research held up as “damning proof” of the ill effects of Instagram is “madness-making if you’re a responsible scientist,” he added.
Przybylski said Meta’s research was based entirely on self-reporting by Instagram users, meaning there wasn’t enough to draw conclusions about the actual effects of social media on mental health.
He drew a parallel with smoking. “You don’t use people’s opinions about whether or not smoking is good for them to draw some health inference,” he said.
However, Przybylski said he’s not ruling out the possibility that Instagram and other social media platforms have an adverse effect on the mental wellbeing of teens and children.
“This is about inviting Facebook to lead the way, to be a partner in a maturing science,” he said. “And that doesn’t mean locking everything away in proprietary computing environments.”
The open letter to Zuckerberg voiced the concern that Meta’s in-house research might be dangerously inadequate to deal with issues as serious as child and teen mental health. “We do not believe that the methodologies seen so far meet the high scientific standards required to responsibly investigate the mental health of children and adolescents,” the letter says.
When asked at a Senate hearing Wednesday about opening up Meta’s data to scientists, Mosseri said researchers should have “regular access to meaningful data about social-media usage across the entire industry” but stopped short of promising full transparency, citing privacy concerns, The Wall Street Journal reported.
In response to the open letter, a Meta spokesperson said: “This is an industry-wide challenge. A survey from just last month suggested that more US teens are using TikTok and YouTube than Instagram or Facebook, which is why we need an industry-wide effort to understand the role of social media in young people’s lives.”
Instagram’s Explore page does more than just let you search for your friends. Every time you open it, you’re recommended dozens of new photos and videos that the site thinks you might like. And although the algorithm is pretty fine tuned to match your tastes, it’s not hard for more unsavory content to make its way onto the page.
To give users more control over the Explore page, Instagram introduced the Sensitive Content Controls page. This menu lets you make the Explore page’s filters even stricter, or turn them off entirely.
Here’s how to use your Sensitive Content Controls in the Instagram app on both iPhone and Android.
How to change the Sensitive Content Controls on Instagram
Instagram considers “sensitive content” to be posts that are “sexually suggestive, non-graphic violence, or about drugs or firearms.” The company provides some more details in a Help Center article, but there’s no exact definition of what these terms mean.
These controls only affect the Explore page, not your feed, Direct Messages, Stories, or Reels.
1. Open the Instagram app and tap your profile picture in the bottom-right corner.
2. On your profile page, tap the three stacked lines in the top-right corner, and then Settings.
3. Tap Account, and then Sensitive Content Control.
4. Pick whether you want the default Limit, Limit Even More, or Allow. Each option comes with a short explanation of how it works.
5. Once you’ve picked an option, exit the menu. Your changes will take effect immediately.
Instagram recommended hashtags related to illegal drugs to teenagers as young as 13, researchers at the Tech Transparency Project found.
“The platform’s algorithms helped the underage accounts connect directly with drug dealers selling everything from opioids to party drugs,” the researchers said.
Instagram has faced increased scrutiny around how the platform impacts children.
Instagram recommended hashtags related to illegal substances to users as young as 13, and its algorithms led them to accounts claiming to sell drugs, including opioids and party drugs, in violation of Instagram policy, researchers found.
Researchers at the Tech Transparency Project said they set up multiple new Instagram accounts, creating one for a 13-year-old user, two representing 14-year-old users, two for 15-year-old users, and two for 17-year-old users. According to the report, it took two clicks for the hypothetical teen accounts to access accounts that claimed to be drug dealers.
In comparison, it took researchers five clicks to log out of an account on the Instagram app.
“Not only did Instagram allow the hypothetical teens to easily search for age-restricted and illegal drugs, but the platform’s algorithms helped the underage accounts connect directly with drug dealers selling everything from opioids to party drugs,” the Tech Transparency Project said in a news release outlining its findings.
“We prohibit drug sales on Instagram,” a Meta spokesperson told Insider on Tuesday. “We removed 1.8 million pieces of content related to drug sales in the last quarter alone, and due to our improving detection technology, the prevalence of such content is about 0.05 percent of content viewed, or about 5 views per every 10,000.
“We’ll continue to improve in this area in our ongoing efforts to keep Instagram safe, particularly for our youngest community members,” the spokesperson added.
While Instagram bans hashtags for illegal substances, researchers at the Tech Transparency Project found that the app would recommend alternative hashtags for some drugs after users typed into the Instagram search bar.
“For example, when one of our teen users started typing the phrase ‘buyxanax’ into Instagram’s search bar, the platform started auto-filling results for buying Xanax before the user was even finished typing,” the researchers said. “When the minor clicked on one of the suggested accounts, they instantly got a direct line to a Xanax dealer. The entire process took seconds and involved just two clicks.”
Instagram said it has blocked problematic hashtags identified in the report
The “buyxanax” hashtag and other hashtags outlined in the report, including “#mdma” and “#buyfentanyl,” have since been blocked by Instagram, the Meta spokesperson told Insider, adding “we’re reviewing additional hashtags to understand if there are further violations of our policies.”
When one of the teen accounts followed a user claiming to be a drug dealer, the app’s algorithm recommended other accounts similarly appearing to sell drugs, according to Tech Transparency Project’s report.
According to Instagram’s community guidelines, it is against policy to sell drugs on the platform. But researchers said they found that drug dealers operated “openly” on the platform and offered pills, including the opioid Oxycontin.
“Many of these dealers mention drugs directly in their account names to advertise their services,” the researchers said.
These findings come as Instagram, and its parent company Meta (formerly Facebook), face increasing scrutiny for how the platform affects minors.
The company on Tuesday announced it was rolling out new safety features for teenagers, including tools to help users spend less time on the app, have fewer unwanted interactions with adults and sensitive content, and allow parents to have more oversight of their children’s accounts, NPR reported.
The announcement came just one day before Instagram head Adam Mosseri is scheduled to testify Wednesday before the US Senate Subcommittee on Consumer Protection, Product Safety and Data Security. Mosseri is expected to be questioned about Instagram’s influence on young users.
In October, former Facebook employee and whistleblower Frances Haugen said Facebook had internal data that showed Instagram was toxic to teenagers, and particularly young girls. Internal Facebook research provided by Haugen showed 13.5% of teen girls said Instagram worsened suicidal thoughts and 17% of teenage girls said Instagram contributed to eating disorders, NPR reported.
An international coalition of over 300 scientists published an open letter to Mark Zuckerberg on Monday.
They demanded access to Meta’s research on how Facebook and Instagram affect child and teen mental health.
Leaked internal research found that Instagram could cause body image issues among teen girls.
An international coalition of more than 300 scientists working in the fields of psychology, technology, and health have published an open letter to Mark Zuckerberg, asking the Meta CEO to open his company’s doors to outside researchers who need to investigate the effects of Facebook and Instagram on child and teen mental health.
The open letter, published Monday, says that although the research leaked by Haugen doesn’t definitively prove Meta’s platforms have an adverse effect on teen and child mental health, the issues at stake are too serious for the company to keep its research behind closed doors.
The letter also says that based on the limited public information about Meta’s research techniques, its internal studies aren’t thorough enough.
“We have only a fragmented picture of the studies your companies are conducting,” the letter to Zuckerberg says. “We do not believe that the methodologies seen so far meet the high scientific standards required to responsibly investigate the mental health of children and adolescents.”
It continues: “You and your organisations have an ethical and moral obligation to align your internal research on children and adolescents with established standards for evidence in mental health science.”
The letter says Meta can commit to safeguarding teen mental health by introducing “gold standard transparency,” allowing outside researchers to scrutinize and participate in its research. It also says Meta can participate in external studies around the world, offering up its data voluntarily.
“Combining Meta data with large-scale cohort projects will materially advance how we understand implications of the online world for mental health,” the letter says.
The letter concludes by asking Meta to create an independent oversight trust that would monitor and study adolescent and child mental health. It compared the structure of the proposed trust to Meta’s existing Oversight Board model.
“In place of quasi-judicial rulings the trust would conduct independent scientific oversight,” the letter says.
When the letter was published it had 293 signatories. Prof. Andrew Przybylski, one of the letter’s authors, told Insider in an interview that more scientists had since signed to push the figure above 300.
Przybylski said the aim of the letter wasn’t to single out Meta among Big Tech companies. “This is about taking Mark [Zuckerberg] and the executives at their word that they care,” he said.
When contacted by Insider about the letter, a Meta spokesperson said: “This is an industry-wide challenge. A survey from just last month suggested that more US teens are using TikTok and YouTube than Instagram or Facebook, which is why we need an industry-wide effort to understand the role of social media in young people’s lives.”
The spokesperson did not specify which survey they were referring to, but a Forrester survey of 4,602 Americans aged 12 to 17, published last month, found that 63% of respondents used TikTok on a weekly basis compared with 57% for Instagram. It also found 72% of respondents used YouTube weekly. It did not mention Facebook.
Spotify is testing a TikTok-like music video feed in its app for select users.
The new “Discover” page displays full-screen music videos that users can “like” or “skip.”
The feature is currently in beta, and it remains unclear whether it will be rolled out to all users in the coming weeks.
Spotify is testing a TikTok-like music video feed in its app, becoming the latest company to experiment with integrating a platform for short video clips.
The feature, which is currently in beta for select users, is accessible by tapping a new fourth tab labeled “Discover” in the lower navigation toolbar. The page displays full-screen music videos as users scroll through, along with the option to “like” or “skip,” similar to the widely popular social media platform TikTok, according to TechCrunch.
“At Spotify, we routinely conduct a number of tests in an effort to improve our user experience,” a spokesperson told TechCrunch. “Some of those tests end up paving the way for our broader user experience and others serve only as an important learning. We don’t have any further news to share at this time.”
The feature was first spotted by Spotify user Chris Messina, who shared a video to Twitter on Wednesday showing the new video feed in Spotify’s beta version for iOS on TestFlight, an app that allows developers to test versions of their programs.
Spotify did not respond to Insider’s request to comment on when the feature may be rolled out to its over 81 million users in the US.
The new Discover tool appears to build on Spotify’s Canvas feature, where artists can select videos to play with their music. Currently, the videos included in the Discover feed appear to be the same as those used for Canvas, according to TechCrunch.
In a press release issued last week, Lush said it will be shutting down its Facebook, Instagram, TikTok, and Snapchat accounts on November 26 in all 48 countries where it operates.
The company said it’s ditching its accounts in protest against safety issues on social media.
“In the same way that evidence against climate change was ignored and belittled for decades, concerns about the serious effects of social media are going largely ignored now,” the company said in its press release.
“I’ve spent all my life avoiding putting harmful ingredients in my products. There is now overwhelming evidence we are being put at risk when using social media,” Lush CEO Mark Constantine said in a statement.
“I’m not willing to expose my customers to this harm, so it’s time to take it out of the mix,” he added.
Lush’s UK operations made a similar announcement back in 2019, saying it was quitting Instagram and Facebook because it was “tired of fighting with algorithms.” Lush said in its press release it returned to some of its abandoned social media accounts in 2020 in response to the pandemic driving customers inside.
“We at Lush don’t want to wait for better worldwide regulations or for the platforms to introduce best practice guidelines, while a generation of young people are growing up experiencing serious and lasting harm,” Lush said.
Internal company documents leaked by Haugen on a range of subjects including teen mental health, hate speech, and misinformation prompted renewed scrutiny of Meta — Facebook’s rebranded parent company. Lawmakers in the US, the UK, and Europe took testimony from Haugen on how they might better regulate social media companies.
Lush said it will keep its Twitter and YouTube accounts active “for now.”
Annabelle King creates candy and content for Sticky, a popular Sydney-based candy shop.
Last year, she used social media to help save the business, which is owned by her father.
On some days, she helps the team create 60kg of candy spanning 50 different flavors.
This as-told-to article is based on a conversation with Annabelle King, a 19-year-old social media manager at a candy shop based in Sydney, Australia, which specializes in artisanal, handmade sweets. It has been edited for length and clarity.
I started working at Sticky just to keep it going for my parents. Before then, I never saw myself working at the store at all.
Sticky was on the brink of collapse after the pandemic seriously impacted sales. We went from busy to bust. Desperate to turn things around, I took to social media to save the struggling business, and it worked.
The TikTok account garnered more than 1 million followers in its first month of launching and is now close to 5 million. Now the store is drawing a healthy amount of customers and we are hiring again, as opposed to letting people go.
The most obvious job I do is content creation for the store. I spend about three-quarters of my week taking photos of the candy-making process at Sticky for Instagram, or videos for TikTok and YouTube. I spend between two and five hours each day turning what I film in the shop into something interesting.
To make good content about a subject, I believe you must be involved with it yourself. I try to participate in the process as much as I can when I film content on candy construction.
My daily responsibilities change so much that the only thing I am sure I do every day is to grab coffee for myself and Dad. I do a little bit of everything. I serve customers, make candy, clean, pack lollies, and handle online orders. Whatever needs doing, I’m your girl for it.
I am not hired as a full-time candy maker but I wish I could be. You really need to have some specific skills (muscles) to make candy all day. I love the stretching and the molding but I always have issues lifting a certain amount of candy. It becomes way too heavy for me to manipulate.
The team aims to create around six batches, equivalent to 60kg, of candy a day. Sometimes, we have very quick and easy designs, and we get more candy as a result.
Other times, the designs take ages and you get less rock. A roaring demand for Sticky’s candy has meant that we do not have any lollies going unsold.
The hardest thing is keeping candy in stock in-store and online.
Everyone in the shop decides the flavors. We have more than 50 single flavors — some more popular than others.
The excitement for us comes from the flavor combinations. Sometimes, someone will think of a new meld of flavors that ends up being so good. Recently, it was mangoes and cream. I really hope we keep that one for a while.
Now that I have been working at Sticky for well over a year, I can say that I do not see myself leaving any time soon. And having worked in other confectionery stores, I admit — with a fair amount of bias — that working at Sticky has been my favourite job so far.
I love my co-workers, even if being the boss’s daughter can complicate those relationships. I am treated with respect, and we spend a fair amount of time goofing off at work, but don’t tell Mum that.
Planning for the future is hard for me; it is all changing so fast. I just take each opportunity as it comes.