An artist says her @metaverse Instagram account was disabled after Facebook rebranded to Meta — and that she only got it back after asking the company what happened

Facebook CEO Mark Zuckerberg announcing the Meta rebrand on Thursday.

  • An Instagram user with the @metaverse handle says her account was disabled after Facebook rebranded.
  • The platform said her account was taken down because she was impersonating someone.
  • It wasn’t until a month later that Instagram apologized and reinstated her account.

An artist told The New York Times that Instagram disabled her @metaverse account after its parent company Facebook rebranded to Meta.

Australia-based Thea-Mai Baumann told The Times that she’s had an Instagram account since 2012 with the handle @metaverse. She posted about her AR company, Metaverse Makeovers, on her account. The app allowed users to try on holographic nail designs.

But Instagram disabled the account on Nov. 2, five days after Facebook announced its name change, Baumann told the outlet.

“Your account has been blocked for pretending to be someone else,” read a message in her app. She tried to get answers from Instagram, including who she was accused of impersonating, to no avail.

It wasn’t until a month later that The New York Times reached out to Meta to ask what had happened.

An Instagram spokesperson told the paper that the account was “incorrectly removed for impersonation.”

“We’re sorry this error occurred,” they said, per The Times, without elaborating on why Baumann’s profile was disabled for impersonation. The account was reactivated two days later.

“This account is a decade of my life and work. I didn’t want my contribution to the metaverse to be wiped from the internet,” Baumann told The Times.

“That happens to women in tech, to women of color in tech, all the time,” said Baumann, who has Vietnamese heritage.

Meta did not immediately respond to Insider’s request for comment.

The company changed its name to reflect its goal of expanding into the metaverse, a futuristic virtual landscape where people can live, play, and work with digital avatars.

“Now, we have a new north star,” Zuckerberg said when he announced the rebrand in late October. “From now on, we are going to be Metaverse first, not Facebook first.”

This isn’t the first kerfuffle involving Meta’s name change and companies operating with a similar brand. 

The company had to settle for the @wearemeta handle on Instagram because a Denver-based motorbike magazine already held @meta.

And in early November, Zach Schutt, founder of Arizona-based electronics startup Meta PC, told Insider that the company had filed for the “Meta” trademark in August. The firm had been using the brand since November 2020.

Meta filed to trademark the name on Oct. 28, according to its filing with the Patent and Trademark Office. But the nonprofit Chan Zuckerberg Initiative gained ownership of the “META” trademark in 2018, according to a separate filing.


From transphobia to Ted Kaczynski: How TikTok’s algorithm enables far-right self-radicalization

Hands swiping on a phone with the TikTok logo and a distorted background.
  • Social media’s role in radicalizing extremists has drastically increased over the last several years.
  • Some far-right TikTokers employ a meme-like format in their content to dodge content moderation.
  • TikTok serves violent, white supremacist content to users who interact with anti-trans content.

A recent study from left-leaning nonprofit watchdog Media Matters found that if a TikTok user solely interacts with transphobic content and creators, the social networking app’s algorithm will gradually begin to populate their “For You” page with white supremacist, antisemitic, and far-right videos, as well as calls for violence.

Launched in 2016 by Chinese tech startup ByteDance, TikTok saw a surge in user growth throughout the COVID-19 pandemic and acquired 1 billion users across the world in five years, many of whom are teenagers and young adults.

In 2020, the app classified more than a third of its daily users as 14 years old or younger, The New York Times reported. A former TikTok employee noted that videos and accounts made by children who appeared younger than the app’s minimum age requirement of 13 were allowed to remain online for weeks, The Times reported, raising questions about measures taken by the platform to protect its users from misinformation, hate speech, and even violent content.

In the experiment, researchers from Media Matters created a dummy account, interacted with anti-trans content, and then evaluated the first 400 videos fed to the account. Some of the videos were removed before they could be analyzed, while others were sponsored advertisements unrelated to the study. Of the remaining 360 videos, researchers found:

  • 103 contained anti-trans and/or homophobic narratives
  • 42 were misogynistic
  • 29 contained racist narratives or white supremacist messaging
  • 14 endorsed violence

“While nearly 400 may sound like a large number of videos, if a user watches videos for an average of 20 seconds each, they could consume 400 videos in just over two hours. A user could feasibly download the app at breakfast and be fed overtly white supremacist and neo-Nazi content before lunch,” the study concluded.
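As a quick sanity check on that estimate, the arithmetic holds. Here is a minimal sketch, assuming the study’s figure of a 20-second average watch time per video:

```python
# Back-of-the-envelope check of the Media Matters estimate:
# 400 videos at an assumed average watch time of 20 seconds each.
videos = 400
avg_watch_seconds = 20  # the study's stated average

total_seconds = videos * avg_watch_seconds  # 8,000 seconds
hours = total_seconds / 3600                # ~2.2 hours

print(f"{videos} videos x {avg_watch_seconds}s each = about {hours:.1f} hours")
# -> 400 videos x 20s each = about 2.2 hours, i.e. "just over two hours"
```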

The far-right movement has historically embraced anti-trans rhetoric, and right-wing recruiters know that “softer ideas” like transphobia can be used to introduce newcomers to more extreme beliefs, Melody Devries, a Ryerson University PhD candidate who studies far-right recruitment and mobilization, told Insider.

“The videos that start people down the rabbit hole are things that are, unfortunately, prejudices that are not considered that extreme in society,” Devries said.

Unforeseen consequences of the digital age

Before the rise of social media, individuals predominantly formed their beliefs through real-world networks of relationships with parents, family members, and friends. Social media platforms, however, gave individuals the ability to expand these social networks by building communities in online environments.

The rapid expansion and evolution of digital spaces have transposed extremist content and ideologies from niche corners of the Internet to platforms that are frequented by billions of users.

“Now, Facebook, Instagram, Twitter, all of our communications platforms that we think of as sort of the most easy to use can be the starting point [of radicalization]. And then a person can move into more layered applications that are harder to penetrate,” Thomas Holt, a professor and director of the Michigan State University School of Criminal Justice, told Insider.

According to the National Consortium for the Study of Terrorism and Responses to Terrorism (NCSTRT), social media’s role in extremism has drastically increased over the last several years.

In 2012, only 48% of extremists listed in Profiles of Individual Radicalization in the United States (PIRUS), an NCSTRT dataset, said that social media played a role in their radicalization. By 2016, 86.75% of PIRUS-listed extremists used social media in their radicalization process, according to an NCSTRT research brief.

Holt mentioned Facebook, Instagram, and Twitter, all of which are at least a decade old. But in the past five years, TikTok has become one of the fastest-growing social media platforms of all time, known for its powerful algorithm that serves up highly tailored videos.

It has more than 100 million daily users in the US, according to CNBC, and has recently become the focus of more scrutiny surrounding its algorithm, which Black and LGBTQ creators have said censors their voices and perpetuates targeted harassment.

How Big Tech streamlined self-radicalization

Because social media profit models rely heavily on user engagement, most companies choose to take the proverbial “middle road” when moderating content in order to avoid accusations of censorship from either side of the political spectrum and, ultimately, damaging their bottom line, according to Devries.

“The fact that those platforms are totally fine with that, because that’s their profit motive, and that’s their design, I think is a problem and obviously contributes to how right-wing communication is transformed,” Devries told Insider.

Subpar content moderation has allowed implicit extremist content to largely remain on platforms, sometimes reaching millions of users. Many of the extremist TikTok videos analyzed by Media Matters employed a “memetic format,” or utilized the platform’s unique combination of audio, video, and text to avoid violating community guidelines.

For example, several of the videos served to the “For You” page of the researchers’ dummy account used a sound called “Teddy,” which quotes the first line of “Unabomber” Ted Kaczynski’s manifesto: “The industrial revolution and its consequences have been a disaster for the human race.”

The sound, which has been used in more than 1,200 videos, has become popular on right-wing TikTok.

“In the videos we reviewed, it was frequently paired with montages of screenshots of LGBTQ people livestreaming on TikTok. These videos not only use audio that pays homage to a terrorist, but they also promote the harassment of LGBTQ TikTok users,” Media Matters researchers wrote.

While the “Teddy” sound might not explicitly violate the platform’s guidelines, videos using it frequently communicate hateful, dangerous, and even violent messages when taking into consideration the full piece of content, including other components like visuals and text.

The Internet has become a critical resource for extremist groups, and loopholes around community guidelines allow them to promote their ideologies to larger audiences in subtle and convincing ways, according to Holt’s research in Deviant Behavior.

“Whether [viewers] initially believe it or not, over time, these interactions with that content slowly kind of chips away at their ideological belief system and builds up a new one that’s based around the ideas presented in this content,” Devries said.

Stopping online interactions with extremist content

The impacts of disinformation, misinformation, and radicalization propagated by social media — insurrections, national security issues, and even genocide — have been felt throughout the globe for years.

“It’s not just the US. Every country is being impacted in some way by the use of and misuse of social media platforms for disinformation, misinformation, or radicalization. There’s an inherent need for better regulation, better management of platforms, and, to the extent that it can be provided, transparency around reporting and removal,” Holt told Insider.

However, Devries added, it’s not about presenting counter-facts; the interactions themselves need to be stopped.

In her ethnographic analysis of far-right Facebook spaces, Devries has seen the platform add infographics warning that a post contains misinformation in an attempt to moderate content, an approach that she sees as counterproductive.

“Not only are folks interacting with the false content itself, they’re interacting with the fact that Facebook has tried to censor it. So that infographic itself becomes another piece of content that they can interact with and pull into their ideology,” Devries told Insider.

When asked for comment, a Facebook spokesperson maintained that the company tries to give the maximum number of people a positive experience on Facebook and takes steps to keep people safe, including allocating $5 billion over the next fiscal year for safety and security.

When a Wall Street Journal investigation exposed how Facebook proliferated real-world harms by failing to moderate hate speech and misinformation, the company acknowledged in a September 2021 blog that it “didn’t address safety and security challenges early enough in the product development process.”

Rather than pursuing reactive solutions like content moderation, Holt proposes that social media companies mitigate online extremism on their platforms by implementing solutions like those used to remove child sexual exploitation content.

Tools like Microsoft’s PhotoDNA are used to stop online recirculation of child sexual exploitation content by creating a “hash,” which functions as a sort of digital fingerprint that can be compared against a database of illegal images compiled by watchdog organizations and companies, according to Microsoft.

If this kind of technology were overlaid onto social media platforms, Holt said, it could be automated to take down content associated with extremism or violent ideologies.
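To make the hash-and-compare workflow Holt describes concrete, here is a minimal sketch; the function names and the flagged-hash database are hypothetical, and it substitutes an exact SHA-256 hash for the perceptual hash a tool like PhotoDNA actually uses (perceptual hashes are designed to keep matching after resizing or re-encoding).

```python
import hashlib

# Hypothetical database of fingerprints for content already flagged by
# watchdog organizations and platforms (in practice, a vetted shared list).
known_flagged_hashes: set[str] = set()

def fingerprint(media: bytes) -> str:
    """Compute a digital fingerprint for a piece of uploaded media.

    Illustration only: SHA-256 matches byte-identical files, whereas
    PhotoDNA-style perceptual hashes keep matching after small edits.
    """
    return hashlib.sha256(media).hexdigest()

def should_take_down(upload: bytes) -> bool:
    """Automated check: does this upload match known flagged content?"""
    return fingerprint(upload) in known_flagged_hashes

# Example: register one flagged item, then screen new uploads.
known_flagged_hashes.add(fingerprint(b"previously flagged image bytes"))
print(should_take_down(b"previously flagged image bytes"))  # True
print(should_take_down(b"some unrelated upload"))           # False
```

Overlaying such a check on a platform, as Holt suggests, would mean fingerprinting content at upload time and routing matches to automated removal or human review.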

Still, this solution relies on social media platforms making internal changes. In the meantime, Holt advocates for better public education on these platforms and how to use them responsibly.

“Yeah, the cat is out of the bag. I don’t know how we roll it back and minimize our use of social media. So instead, it seems like we have to get better at educating the public, particularly young people, to understand, ‘Here’s how the platforms work, here’s what may be there,'” Holt told Insider.

Ultimately, both Holt and Devries agree that more research is needed to analyze how newer platforms like TikTok are used to mobilize extremists and radicalize newcomers into their ideology, as well as discover solutions to minimize and counteract the fallout.

TikTok told Insider that all of the content cited in the Media Matters study was removed from the platform for violating its hateful behavior policy. Additionally, the company outlined anti-abuse efforts that it has built into its product, including its addition of new controls that allow users to delete or report multiple comments at once and block accounts in bulk.

Still, Eric Han, head of US safety for TikTok, said in an October statement that harassment and hate speech are “highly nuanced and contextual issues that can be challenging to detect and moderate correctly every time.”

“To help maintain a safe and supportive environment for our community, and teens in particular, we work every day to learn, adapt, and strengthen our policies and practices,” said TikTok’s Q2 2021 transparency report.


Meta’s leaked internal research on child and teen mental health isn’t the smoking gun we think it is, according to a top scientist

Meta CEO Mark Zuckerberg.

  • Leaked Meta research on teen mental health isn’t the smoking gun we think it is, Andrew Przybylski said.
  • The scientist told Insider it could not be held up as “damning proof” of the ill effects of Instagram.
  • He’s among more than 300 scientists who have called on Mark Zuckerberg to open up Meta’s mental health research.

When Facebook whistleblower Frances Haugen leaked internal Instagram research about the effect of the platform on teen mental health, lawmakers seized on what appeared to be a damning statistic: “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.”

But Professor Andrew Przybylski, director of research at the Oxford Internet Institute, believes the research isn’t the smoking gun politicians believe it to be.

In an interview with Insider, Przybylski, an experimental psychologist who specialises in social media, said the leaked research amounted to little more than extremely preliminary work, and was a long way from the definitive proof that Instagram is bad for teenage girls. “If you were responsible and you ran a giant social media platform, this is like scoping research. This is the beginning of research,” he said.

“A lot of this work wouldn’t pass muster as a bachelor’s thesis,” he said. To have the research held up as “damning proof” of the ill effects of Instagram is “madness-making if you’re a responsible scientist,” he added.

Przybylski said Meta’s research was based entirely on self-reporting by Instagram users, meaning there wasn’t enough to draw conclusions about the actual effects of social media on mental health.

He drew a parallel with smoking. “You don’t use people’s opinions about whether or not smoking is good for them to draw some health inference,” he said.

However, Przybylski said he’s not ruling out the possibility that Instagram and other social media platforms have an adverse effect on the mental wellbeing of teens and children.

“This is about inviting Facebook to lead the way, to be a partner in a maturing science,” he said. “And that doesn’t mean locking everything away in proprietary computing environments.”

Przybylski is a co-author of an open letter to Mark Zuckerberg that calls on the Meta CEO to open his company’s doors to scientists to allow them to scrutinize its mental health research. The letter has more than 300 signatories.

The open letter to Zuckerberg voiced the concern that Meta’s in-house research might be dangerously inadequate to deal with issues as serious as child and teen mental health. “We do not believe that the methodologies seen so far meet the high scientific standards required to responsibly investigate the mental health of children and adolescents,” the letter says.

This week Instagram CEO Adam Mosseri appeared before Congress to testify about child safety on the platform.

When asked at a Senate hearing Wednesday about opening up Meta’s data to scientists, Mosseri said researchers should have “regular access to meaningful data about social-media usage across the entire industry” but stopped short of promising full transparency, citing privacy concerns, The Wall Street Journal reported.

In response to the open letter, a Meta spokesperson said: “This is an industry-wide challenge. A survey from just last month suggested that more US teens are using TikTok and YouTube than Instagram or Facebook, which is why we need an industry-wide effort to understand the role of social media in young people’s lives.”


How to use Instagram’s Sensitive Content Controls to filter your Explore page

You can filter your Explore page with Instagram’s Sensitive Content Controls.

  • Instagram’s Sensitive Content Controls let you limit how much graphic content appears on your Explore page.
  • Sensitive Content Controls are turned on by default, but you can make them stricter or turn them off.
  • The Sensitive Content Controls affect content that’s “sexually suggestive, non-graphic violence, or about drugs or firearms.”

Instagram’s Explore page does more than just let you search for your friends. Every time you open it, you’re recommended dozens of new photos and videos that the site thinks you might like. And although the algorithm is pretty fine-tuned to match your tastes, it’s not hard for more unsavory content to make its way onto the page.

To give users more control over the Explore page, Instagram introduced the Sensitive Content Controls page. This menu lets you make the Explore page’s filters even stricter, or turn them off entirely.

Here’s how to use your Sensitive Content Controls in the Instagram app on both iPhone and Android.

How to change the Sensitive Content Controls on Instagram

Instagram considers “sensitive content” to be posts that are “sexually suggestive, non-graphic violence, or about drugs or firearms.” Instagram provides more details in a Help Center article, but there’s no exact definition of what these terms mean.

These controls only affect the Explore page, not your feed, Direct Messages, Stories, or Reels.

1. Open the Instagram app and tap your profile picture in the bottom-right corner.

2. On your profile page, tap the three stacked lines in the top-right corner, and then Settings.

The options menu in the Instagram app, with the Settings option highlighted.
Head to your Settings menu.

3. Tap Account, and then Sensitive Content Control.

The "Settings" and "Account" menus in the Instagram app.
The content filters are in your “Account” menu.

4. Pick whether you want the default Limit, Limit Even More, or Allow. Each option comes with a short explanation of how it works.

The Sensitive Content Control page on Instagram.
Choose how filtered you want the Explore page to be.

5. Once you’ve picked an option, exit the menu. Your changes will take effect immediately.


Instagram’s algorithm directly connects teens to ‘drug dealers selling everything from opioids to party drugs,’ researchers say

Instagram promoted hashtags related to the buying of illegal substances to users as young as 13, according to new research.

  • Instagram recommended hashtags related to illegal drugs to teenagers as young as 13, researchers at the Tech Transparency Project found. 
  • “The platform’s algorithms helped the underage accounts connect directly with drug dealers selling everything from opioids to party drugs,” the researchers said.
  • Instagram has faced increased scrutiny around how the platform impacts children.

Instagram recommended hashtags related to illegal substances to users as young as 13, and its algorithms led them to accounts claiming to sell drugs, including opioids and party drugs, in violation of Instagram policy, researchers found. 

Researchers at the Tech Transparency Project said they set up multiple new Instagram accounts, creating one for a 13-year-old user, two representing 14-year-old users, two for 15-year-old users, and two for 17-year-old users. According to the report, it took two clicks for the hypothetical teen accounts to access accounts that claimed to be drug dealers.

In comparison, it took researchers five clicks to log out of an account on the Instagram app. 

“Not only did Instagram allow the hypothetical teens to easily search for age-restricted and illegal drugs, but the platform’s algorithms helped the underage accounts connect directly with drug dealers selling everything from opioids to party drugs,” the Tech Transparency Project said in a news release outlining its findings. 

“We prohibit drug sales on Instagram,” a Meta spokesperson told Insider on Tuesday. “We removed 1.8 million pieces of content related to drug sales in the last quarter alone, and due to our improving detection technology, the prevalence of such content is about 0.05 percent of content viewed, or about 5 views per every 10,000.

“We’ll continue to improve in this area in our ongoing efforts to keep Instagram safe, particularly for our youngest community members,” the spokesperson added.

While Instagram bans hashtags for illegal substances, researchers at the Tech Transparency Project found that the app would recommend alternative hashtags for some drugs after users typed into the Instagram search bar. 

“For example, when one of our teen users started typing the phrase ‘buyxanax’ into Instagram’s search bar, the platform started auto-filling results for buying Xanax before the user was even finished typing,” the researchers said. “When the minor clicked on one of the suggested accounts, they instantly got a direct line to a Xanax dealer. The entire process took seconds and involved just two clicks.”

Instagram said it has blocked problematic hashtags identified in the report

The “buyxanax” hashtag and other hashtags outlined in the report, including “#mdma” and “#buyfentanyl,” have since been blocked by Instagram, the Meta spokesperson told Insider, adding “we’re reviewing additional hashtags to understand if there are further violations of our policies.” 

When one of the teen accounts followed a user claiming to be a drug dealer, the app’s algorithm recommended other accounts similarly appearing to sell drugs, according to Tech Transparency Project’s report.

According to Instagram’s community guidelines, it is against policy to sell drugs on the platform. But researchers said they found that drug dealers operated “openly” on the platform and offered pills, including the opioid OxyContin.

“Many of these dealers mention drugs directly in their account names to advertise their services,” the researchers said.

Instagram in July announced that all Instagram accounts for users aged 16 years old or younger would be set to private by default, but researchers found that only accounts set up using the Instagram mobile app, and not Instagram’s website, were set to private. 

These findings come as Instagram, and its parent company Meta (formerly Facebook), face increasing scrutiny for how the platform affects minors. 

The company on Tuesday announced it was rolling out new safety features for teenagers, including tools to help users spend less time on the app, have fewer unwanted interactions with adults and sensitive content, and allow parents to have more oversight of their children’s accounts, NPR reported. 

The announcement came just one day before Instagram head Adam Mosseri is scheduled to testify Wednesday before the US Senate Subcommittee on Consumer Protection, Product Safety and Data Security. Mosseri is expected to be questioned about Instagram’s influence on young users.

In October, former Facebook employee and whistleblower Frances Haugen said Facebook had internal data that showed Instagram was toxic to teenagers, and particularly young girls. Internal Facebook research provided by Haugen showed 13.5% of teen girls said Instagram worsened suicidal thoughts and 17% of teenage girls said Instagram contributed to eating disorders, NPR reported.


More than 300 scientists have told Mark Zuckerberg they want access to Meta’s internal research on child and teen mental health because it doesn’t meet scientific standards

Meta CEO Mark Zuckerberg.

  • An international coalition of over 300 scientists published an open letter to Mark Zuckerberg on Monday.
  • They demanded access to Meta’s research on how Facebook and Instagram affect child and teen mental health.
  • Leaked internal research found that Instagram could cause body image issues among teen girls.

An international coalition of more than 300 scientists working in the fields of psychology, technology, and health has published an open letter to Mark Zuckerberg, asking the Meta CEO to open his company’s doors to outside researchers so they can investigate the effects of Facebook and Instagram on child and teen mental health.

The letter was written in response to internal documents leaked to The Wall Street Journal by whistleblower Frances Haugen. Internal research by Meta found that one in three teen girls said using Instagram made them feel worse about their bodies.

The open letter, published Monday, says that although the research leaked by Haugen doesn’t definitively prove Meta’s platforms have an adverse effect on teen and child mental health, the issues at stake are too serious for the company to keep its research behind closed doors.

The letter also says that based on the limited public information about Meta’s research techniques, its internal studies aren’t thorough enough.

“We have only a fragmented picture of the studies your companies are conducting,” the letter to Zuckerberg says. “We do not believe that the methodologies seen so far meet the high scientific standards required to responsibly investigate the mental health of children and adolescents.”

It continues: “You and your organisations have an ethical and moral obligation to align your internal research on children and adolescents with established standards for evidence in mental health science.”

The letter says Meta can commit to safeguarding teen mental health by introducing “gold standard transparency,” allowing outside researchers to scrutinize and participate in its research. It also says Meta can participate in external studies around the world, offering up its data voluntarily.

“Combining Meta data with large-scale cohort projects will materially advance how we understand implications of the online world for mental health,” the letter says.

The letter concludes by asking Meta to create an independent oversight trust that would monitor and study adolescent and child mental health. It compared the structure of the proposed trust to Meta’s existing Oversight Board model.

“In place of quasi-judicial rulings the trust would conduct independent scientific oversight,” the letter says.

When the letter was published it had 293 signatories. Prof. Andrew Przybylski, one of the letter’s authors, told Insider in an interview that more scientists had since signed to push the figure above 300.

Przybylski said the aim of the letter wasn’t to single out Meta among Big Tech companies. “This is about taking Mark [Zuckerberg] and the executives at their word that they care,” he said.

Instagram CEO Adam Mosseri is due to testify before Congress about children’s safety on the platform Tuesday. Instagram postponed the launch of its planned new product “Instagram for Kids” in September, citing the backlash provoked by Haugen’s leaked documents.

When contacted by Insider about the letter, a Meta spokesperson said: “This is an industry-wide challenge. A survey from just last month suggested that more US teens are using TikTok and YouTube than Instagram or Facebook, which is why we need an industry-wide effort to understand the role of social media in young people’s lives.”

The spokesperson did not specify which survey they were referring to, but a Forrester survey of 4,602 Americans aged 12 to 17, published last month, found that 63% of respondents used TikTok on a weekly basis compared with 57% for Instagram. It also found 72% of respondents used YouTube weekly. It did not mention Facebook.


Spotify is experimenting with a TikTok-like video feed to help listeners discover new artists

Spotify on a phone and a computer.
  • Spotify is testing a TikTok-like music video feed in its app for select users.
  • The new “Discover” page displays full-screen music videos that users can “like” or “skip.”
  • The feature is currently in beta, and it remains unclear whether it will be rolled out to all users in the coming weeks.

Spotify is testing a TikTok-like music video feed in its app, becoming the latest company to experiment with integrating a platform for short video clips. 

The feature, which is currently in beta for select users, is accessible by tapping a new fourth tab labeled “Discover” in the lower navigation bar. The page displays full-screen music videos for songs as users scroll through, along with options to “like” or “skip,” similar to the widely popular social media platform TikTok, according to TechCrunch.

“At Spotify, we routinely conduct a number of tests in an effort to improve our user experience,” a spokesperson told TechCrunch. “Some of those tests end up paving the way for our broader user experience and others serve only as an important learning. We don’t have any further news to share at this time.”

The feature was first spotted by Spotify user Chris Messina, who shared a video to Twitter on Wednesday showing the new video feed in Spotify’s beta version for iOS on TestFlight, an app that allows developers to test versions of their programs.

Spotify did not respond to Insider’s request to comment on when the feature may be rolled out to its over 81 million users in the US.

The new Discover tool appears to build on Spotify’s Canvas feature, where artists can select videos to play with their music. Currently, the videos included in the Discover feed appear to be the same as those used for Canvas, according to TechCrunch.

Since TikTok first launched in 2016, the app has amassed a total of 1 billion monthly users. The app’s success has prompted other social media platforms, like Instagram and YouTube, to develop their own platforms for short-form video sharing. In an effort to compete with TikTok, some social media platforms have even incentivized creators with thousands of dollars to post on their platforms.

 


3 ways to add multiple photos to your Instagram Story

  • You can add multiple photos to an Instagram Story by selecting multiple photos, creating a collage, or inserting additional photos as stickers.
  • Selecting multiple photos from your camera roll will upload each photo as its own slide in your story.
  • Instagram has a built-in collage feature called Layout that allows you to drop several photos into the same slide.

There are several different ways to add multiple photos to an Instagram Story at once. 

The first method allows you to select up to 10 photos and videos from your camera roll, which get uploaded to your Story as separate slides.

The second method is layering several photos on the same slide as stickers, and the last method is creating a collage using Instagram’s built in feature called Layout. 

Here’s how to do it all. 

How to add multiple photos to an Instagram Story

1. Open the Instagram app. Tap the plus (+) button over Your Story or swipe right on the home screen to open Stories.

Instagram homepage screenshot with the Your Story option highlighted
If you don’t currently have any live posts in your Instagram Story, the plus button will appear at the top of the screen.

2. Tap the photo icon on the bottom-left of the screen to open your Camera Roll.

Instagram post creator with the Camera Roll option highlighted
A preview of the last photo you took will appear at the bottom-left.

3. Tap Select, then select up to 10 photos and videos. Then, tap the arrow at the bottom-right of the screen to continue.

Instagram post creator with the Select option highlighted
Tap “Select” to upload more than one photo at a time.

4. On this screen, customize each photo or video with text, stickers, or drawn lines. When you’re finished, tap the arrow at the bottom-right of the screen.

5. Tap the circle next to your intended audience (Your Story or Close Friends), then tap Share.

Instagram post creator with the Share option highlighted
Close Friends will share your Stories with a smaller audience of your choosing.

How to layer multiple photos on the same screen of an Instagram Story

1. Open the Instagram app. Tap the plus (+) button over Your Story or swipe right on the home screen to open Stories.

2. Tap the photo icon on the bottom-left of the screen to open your Camera Roll. Select a photo.

3. Tap the sticker icon at the top of the screen. Scroll down and tap the photo sticker.

Instagram post creator with the sticker icon highlighted
The sticker icon also lets you add the time, temperature, location, and more.

4. Select the photo you want to insert. Repeat this process until you have all the photos you want to include.

5. Tap each photo to change the shape, drag them to change the position, or pinch them to change the size.

6. Once you’re finished, tap Your Story or Close Friends to share the post.

Instagram post creator with the Share option highlighted
Share your post once you’re finished editing.

How to post a photo collage on an Instagram Story

1. Open the Instagram app. Tap the plus (+) button over Your Story or swipe right on the home screen to open Stories.

2. Tap the Layout icon on the left side of the screen. Tap the Change Grid icon to select the option you want.

Instagram post creator with the Layout option highlighted
Layout is a collage creator built into Instagram.

3. Tap the photo icon on the bottom-left of the screen to open your Camera Roll. Select a photo to add it to your collage.

4. Once you’re finished creating your collage, tap the check button at the bottom of the screen.

5. Add any additional elements as desired (text, stickers, etc.), then tap Your Story or Close Friends to share the post.

Instagram post creator with the Share option highlighted
Share your collage after you’re finished editing.


Cosmetics company Lush says it’s shutting down its Facebook, Instagram, TikTok and Snapchat accounts because of the Facebook whistleblower

Bath products on display in a Lush cosmetics shop in London.

  • Lush announced it will shut down its Facebook, Instagram, TikTok, and Snapchat accounts.
  • The cosmetics company said the harms of social media are going “largely ignored.”
  • Lush’s UK operation announced in 2019 it would shut down some social media, but returned in 2020.

Trendy cosmetics company Lush has announced it’s quitting social media just as the holiday season kicks off.

In a press release issued last week Lush said it will be shutting down its Facebook, Instagram, TikTok, and Snapchat accounts on November 26 in all 48 countries where it operates.

The company said it’s ditching its accounts in protest against safety issues on social media.

“In the same way that evidence against climate change was ignored and belittled for decades, concerns about the serious effects of social media are going largely ignored now,” the company said in its press release. 

“I’ve spent all my life avoiding putting harmful ingredients in my products. There is now overwhelming evidence we are being put at risk when using social media,” Lush CEO Mark Constantine said in a statement.

“I’m not willing to expose my customers to this harm, so it’s time to take it out of the mix,” he added.

Lush’s UK operations made a similar announcement back in 2019, saying it was quitting Instagram and Facebook because it was “tired of fighting with algorithms.” Lush said in its press release it returned to some of its abandoned social media accounts in 2020 in response to the pandemic driving customers inside.

In its statement last week Lush said it was motivated to quit social media again following revelations from the Facebook whistleblower Frances Haugen.

“We at Lush don’t want to wait for better worldwide regulations or for the platforms to introduce best practice guidelines, while a generation of young people are growing up experiencing serious and lasting harm,” Lush said.

Internal company documents leaked by Haugen on a range of subjects including teen mental health, hate speech, and misinformation prompted renewed scrutiny of Meta — Facebook’s rebranded parent company. Lawmakers in the US, the UK, and Europe took testimony from Haugen on how they might better regulate social media companies.

Lush said it will keep its Twitter and YouTube accounts active “for now.” 


I’m a 19-year-old content creator who helped save my family’s struggling candy shop by building up a huge TikTok following. Here’s what a typical day is like.

Sticky’s social media manager, Annabelle King.

  • Annabelle King creates candy and content for Sticky, a popular Sydney-based candy shop. 
  • Last year, she used social media to help save the business, which is owned by her father.
  • On some days, she helps the team create 60kg of candy that consists of 50 different flavors. 

This as-told-to article is based on a conversation with Annabelle King, a 19-year-old social media manager at a candy shop based in Sydney, Australia, which specializes in artisanal, handmade sweets. It has been edited for length and clarity.

I started working at Sticky just to keep it going for my parents. Before then, I never saw myself working at the store at all.

Sticky was on the brink of collapse after the pandemic seriously impacted sales. We went from busy to bust. Desperate to turn things around, I took to social media to save the struggling business and it worked. 

The TikTok account garnered more than 1 million followers in its first month and is now close to 5 million. Now the store is drawing a healthy number of customers and we are hiring again, as opposed to letting people go.

The most obvious job I do is content creation for the store. I spend about three-quarters of my week taking photos of the candy-making process at Sticky for Instagram, or videos for TikTok and YouTube. I spend between two and five hours each day turning what I film in the shop into something interesting. 

To make good content about a subject, I believe you must be involved with it yourself. I try to participate in the process as much as I can when I film content on candy construction. 

My daily responsibilities change so much that the only thing I am sure I do every day is to grab coffee for myself and Dad. I do a little bit of everything. I serve customers, make candy, clean, pack lollies, and handle online orders. Whatever needs doing, I’m your girl for it. 

I am not hired as a full-time candy maker but I wish I could be. You really need to have some specific skills (muscles) to make candy all day. I love the stretching and the molding but I always have issues lifting a certain amount of candy. It becomes way too heavy for me to manipulate. 

 

The team aims to create around six batches, equivalent to 60kg, of candy a day. Sometimes, we have very quick and easy designs, and we get more candy as a result.

Other times, the designs take ages and you get less rock. A roaring demand for Sticky’s candy has meant that we do not have any lollies going unsold.

The hardest thing is keeping candy in stock in-store and online. 

Sticky’s rock candy.

Everyone in the shop decides the flavors. We have more than 50 single flavors — some more popular than others.

The excitement for us comes from the flavor combinations. Sometimes, someone will think of a new meld of flavors that ends up being so good. Recently, it was mangoes and cream. I really hope we keep that one for a while. 

Now that I have been working at Sticky for well over a year, I can say that I do not see myself leaving any time soon. And having worked in other confectionery stores, I admit — with a fair amount of bias — that working at Sticky has been my favourite job so far.  

I love my co-workers, even if being the boss’s daughter can complicate those relationships. I am treated with respect, and we spend a fair amount of time goofing off at work, but don’t tell Mum that.

Planning for the future is hard for me; it is all changing so fast. I just take each opportunity as it comes.

 

 
