A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation.

GPT-3 is a computer program that attempts to write like humans.

  • A man used artificial intelligence (AI) to create a chatbot that mimicked his late fiancée.
  • The groundbreaking AI technology was designed by the Elon Musk-backed research group OpenAI.
  • OpenAI has long warned that the technology could be used for mass information campaigns.
  • See more stories on Insider’s business page.

After Joshua Barbeau’s fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.

In a story for the San Francisco Chronicle, Barbeau detailed how Project December, software that uses artificial intelligence to create hyper-realistic chatbots, recreated the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.

It may sound like a miracle (or a Black Mirror episode), but the AI’s creators warn that the same technology could be used to fuel mass misinformation campaigns.

Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. By consuming massive datasets of human-created text (Reddit threads were particularly helpful), GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.

It’s some of the most sophisticated – and dangerous – language-based AI programming to date.

When OpenAI released GPT-2, the predecessor to GPT-3, the group warned that the technology could be used in “malicious ways.” The organization anticipated that bad actors could use it to automate “abusive or faked content on social media,” “generate misleading news articles,” or “impersonate others online.”

GPT-2 could be used to “unlock new as-yet-unanticipated capabilities for these actors,” the group wrote.

OpenAI staggered the release of GPT-2, and still restricts access to the superior GPT-3, in order to “give people time” to learn the “societal implications” of such technology.

Misinformation is already rampant on social media, even with GPT-3 not widely available. A new study found that YouTube’s algorithm still pushes misinformation, and the nonprofit Center for Countering Digital Hate recently identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the “Disinformation Dozen,” they have millions of followers.

As AI continues to develop, it will only become harder to tell what’s real, Oren Etzioni, CEO of the nonprofit AI research group the Allen Institute for AI, previously told Insider.

“The question ‘Is this text or image or video or email authentic?’ is going to become increasingly difficult to answer just based on the content alone,” he said.

Read the original article on Business Insider

‘Worldwide phenomenon’ prefab tiny home maker Nestron just started shipping overseas – see inside its $77,000 units

The Cube Two.

  • Singapore-based Nestron is now shipping its prefab tiny home Cube One and Two models to the UK.
  • The company expects to sell over 100 units in the UK by the end of 2021 following massive online interest.
  • Take a look inside the two AI-powered models with smart furniture.

Interest in prefab tiny homes skyrocketed during COVID-19.

The Cube One.

Now, one Singapore-based company is looking to capitalize on this trend by introducing its artificial intelligence-powered tiny homes to the UK.

The Cube Two.

Meet Nestron, the brains behind several wildly popular tiny homes that have since become a “worldwide phenomenon,” Choco Toh of Nestron’s marketing team told Insider in December.

Paneling the Cube One.

Source: Insider

Its tiny homes were such a hit that Nestron’s website crashed for a while, likely due to an influx of webpage visits and “extremely overwhelming” popularity, Toh said.

The interior of the Cube Two.


To expand its reach, Nestron is now in the process of preparing its debut in Northampton, UK, a little over 65 miles from London.

Cube One’s structure.

Toh says Nestron will close about 10 deals before the homes actually debut in Europe …

The interior of the Cube One.

… but estimates that by the end of the year, it’ll sell over 100 units in the UK.

The Cube Two.

“We believe with the increase in marketing activities upon our debut, there are nearly 100,000 potential users in the UK, which will bring explosive and continuous growth to our local distributors,” Toh told Insider in an email statement.

The interior of the Cube Two.

Like other companies that ship products internationally, Nestron has struggled to move its tiny homes in the face of jammed ports and shipping delays.

The models being shipped.

But before we dive into how the company is overcoming these issues, let’s take a look at the two futuristic tiny homes that will debut in the UK: the $34,000 to $52,000 Cube One and the $59,000 to $77,000 Cube Two.

The Cube Two.

These prices vary widely due to a list of possible extra add-ons, such as solar panels, heated floors, and additional smart appliances.

The interior of the Cube Two.

The Cube One is more popular with solo occupants, while the larger Cube Two has been a hit with families, couples, and as a backyard unit.

The interior of the Cube Two.

Nestron debuted both units well before its UK plans but has since made sizing changes ahead of the overseas delivery: the Cube One was enlarged by about 16.2 square feet, and the Cube Two by about 25 square feet.

The models being shipped.

Let’s take a closer look at the Cube One, which stands at about 156 square feet.

The interior of the Cube One.

This square footage holds the living room, bedroom, bathroom, and kitchen space (which comes with cabinets, a sink, and a stovetop, according to renderings of the unit).

The interior of the Cube One.

Like any typical home, the living room has a dining table and sofa, while the bedroom has a side table, closet, and of course, a bed.

The interior of the Cube One.

In the bathroom, the tiny Cube One has a shower, towel rack, and sink, all in one enclosed space.

The interior of the Cube One.

The little living unit also has necessary built-in amenities like lights, storage units, electric blinds, and a speaker.

The interior of the Cube One.

There’s even room for a modern-day must-have: air conditioning units.

The interior of the Cube One.

Now, let’s take a look at the larger Cube Two, which can accommodate three to four people with its two beds, both of which sit on opposite ends of the tiny home.

The interior of the Cube Two.

Like its smaller sibling, the almost 280-square-foot Cube Two has a living room, two beds, a kitchen, and a bathroom, all with the same furnishings as the Cube One.

The interior of the Cube Two.

However, the dining table in the Cube Two is noticeably larger, and there’s a skylight for added natural light and stargazing.

The interior of the Cube Two.

Both models come insulated and have smart home capabilities using Nestron’s “Canny,” an artificial intelligence system.

The interior of the Cube Two.

Canny can complete tasks like brewing your morning coffee or automatically adjusting your seat heights.

The interior of the Cube Two.

Everything is “smart” these days, which means the Cube One and Two can also come with motion-sensing lights and smart mirrors and toilets.

The interior of the Cube Two.

You might be wondering how Nestron plans to move its Cube One and Two tiny homes overseas in one piece. Well, let’s move on to everyone’s favorite topic: logistics, and how the company managed to ship its tiny homes despite global delays.

Cube Two’s structure.

According to Toh, Nestron has had a “solid foundation built in the industry … allowing it to have a good relationship with experienced and professional forwarding partners.”

A worker applying the insulation layer.

Despite this foundation, like other companies, Nestron has experienced delays related to the global supply chain jam, specifically congested ports in the UK.

The wiring and plumbing systems.

As a result, the company’s forwarding charges were triple what it initially expected, according to Toh.

Paneling the Cube Two.

But instead of charging its clients extra money for immediate shipping, Nestron decided it would pause shipping until costs were lowered.

Paneling the Cube One.

To bypass these congestion issues, Nestron also decided to deviate from its original plan of shipping straight to the UK.

The wiring and plumbing systems.

“In the end, [we] decided to travel over to Antwerp, Belgium, and then land in the UK,” Toh said. “This way, by the time we reach the UK port, the congestion would’ve been clear.”

One of the tiny homes under construction.

Despite this detour, shipping costs were still higher than expected, in part because the company and its distributors still wanted to make the debut timeline.

The water and sewage connection points.

“Since the demands are growing and people want to experience touch and feel with Nestron, we took the chance and sent the units off earlier this month, expecting them to arrive late July [or] early August,” Toh said.

The interior of the Cube Two.

To aid in the transportation process, the tiny homes have built-in retractable hooks to make them compatible with cranes.

The models being shipped.

The homes’ structures are also stable enough to withstand the stress of moving, according to Toh.

Cube Two’s structure.

All the little living units are also packaged in waterproof fabric, both to prevent rusting and to allow for easy inspection.

A completed Cube One in the factory.

Being in the UK will allow potential consumers to “engage with Nestron units directly,” Toh said. “The experience will definitely influence the market interest and purchase power.”

The tiny homes on a road test.


China says its fighter pilots are battling AI aircraft in simulated dogfights, and humans aren’t the only ones learning

China is using artificial intelligence to hone the skills of Chinese fighter pilots.

  • China has been pitting pilots against AI-driven aircraft in training simulations.
  • A commander told the PLA Daily that the AI aircraft were “sharpening the sword” for Chinese pilots.
  • The AI was also learning, highlighting the potential for AI systems in China’s armed forces.

Chinese fighter pilots have been battling aircraft piloted by artificial intelligence in simulated dogfights to boost pilot combat skills, Chinese media reported.

Fang Guoyu, a People’s Liberation Army Air Force brigade flight team leader and recognized fighter ace, was recently “shot down” by an AI adversary in an air-to-air combat simulation, according to China’s PLA Daily, the official newspaper of the Chinese military.

He said that early in the training, it was easy to defeat the AI adversary. But with each round of combat, the AI reportedly learned from its human opponent. After one fight that Fang won with a bit of skillful flying, the AI came back and used the same tactics against him, defeating the human pilot.

“It’s like a digital ‘Golden Helmet’ pilot that excels at learning, assimilating, reviewing, and researching,” Fang said, referring to the elite pilots who emerge victorious in the “Golden Helmet” air combat contests. “The move with which you defeated it today will be at its fingertips tomorrow.”

Du Jianfeng, the brigade commander, told the PLA newspaper that AI is increasingly being incorporated into training.

It “is skilled at handling the aircraft and makes flawless tactical decisions,” he said, characterizing the AI adversary as a useful tool for “sharpening the sword” because it forces the Chinese pilots to get more creative.

‘Sharpening the sword’

Chinese J-15 fighter jets at a military parade.

China is striving to build a modern military with the ability to fight and win wars by the middle of this century, and it has made progress in recent years in advancing its air combat element, even developing a fifth-generation stealth fighter.

But far more challenging and time consuming than closing the technology gap is cultivating the critical knowledge and experience required to effectively operate a modern fighting force.

Chinese media did not offer specifics on the simulator, so there are questions about whether the AI adversary provides training realistic enough to prepare pilots to dogfight manned aircraft.

“If it does, that’s pretty good,” retired US Navy Cmdr. Guy Snodgrass, a former TOPGUN instructor and an artificial intelligence expert, told Insider.

“If it doesn’t,” he continued, “you’re really just training human operators to fight AI, and that is probably not what they are going to be going up against” since there are currently no autonomous AI-driven fighter aircraft they would need to be prepared to fight.

“There could be a divergence between real capability in a dogfight or aerial battle versus what the AI is presenting,” he said. If that’s the case, this could be wasted effort.

If it is a high-fidelity training simulator, though, it potentially lowers the cost of the air combat training because “you’re able to get that training at a price point that’s much lower than actually putting real planes in the air,” Snodgrass said.

Chinese leader Xi Jinping has repeatedly stressed the need for realistic combat training, including simulations, to help the Chinese military overcome its lack of combat experience, but it is not clear to what extent his agenda has been implemented with training simulators like the one PLAAF pilots have been using.

‘The AI is learning and it’s getting better’

J-20 stealth fighters of PLA Air Force perform with open weapon bays during the Zhuhai Airshow.

Regardless of whether the pilots are learning anything valuable, Fang Guoyu’s recollection of his engagements with his AI adversary demonstrates that the AI is.

“AI requires feedback,” Snodgrass said. “And that’s exactly the kind of pathway you’d want to take, to use this to help train your pilots, but because your pilots are fighting against it, the AI is learning and it’s getting better.”

A next step, he explained, could then be to say, “This has performed very well in a virtual environment. Let’s put this into a manned fighter.”

China has invested heavily in AI research, and, like the US, it has been considering ways to incorporate AI – which can process information quickly and gain years of experience in a very short time – into the cockpits of its planes.

Yang Wei, chief designer for the J-20, China’s first fifth-generation stealth fighter, said last year that the next generation of fighter could feature AI systems able to assist pilots with decisions to increase their overall effectiveness in combat, the state-affiliated Global Times reported.

The US Air Force has expressed similar ideas. Steven Rogers, a senior scientist at the US Air Force Research Laboratory, told Inside Defense in 2018 that ace pilots have thousands of hours of experience. Then he asked, “What happens if I can augment their ability with a system that can have literally millions of hours of training time?”

Snodgrass explained that there are a number of different ways AI could be used to augment the capabilities of a pilot.

For instance, artificial intelligence could be used to monitor aircraft systems to reduce task saturation, especially for single-pilot aircraft, collect battlefield information, and handle target discrimination and prioritization. AI could even potentially chart out flight paths to minimize detection through electromagnetic spectrum analysis.

The US is currently pursuing several lines of effort exploring the possibilities of AI technology.

In a big event last summer, the Defense Advanced Research Projects Agency (DARPA) put an AI algorithm up against an experienced human pilot in a “simulated within-visual-range air combat” situation.

The artificial intelligence, which had already defeated other AI “pilots” in simulated dogfights and collected years of experience in a matter of months, achieved a flawless victory, winning five straight matches without the human, a US Air Force F-16 pilot, ever scoring a hit.

The point of the simulated air-to-air combat scenario was to move DARPA’s Air Combat Evolution program forward.

The agency said previously that it envisions “a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer and more effective as they orchestrate large numbers of unmanned systems into a web of overwhelming combat effects.”

It is not clear how long it would take to realize the agency’s vision for the future, but Snodgrass previously told Insider that he “would never bet against technological progress,” especially considering “all the advancements that have occurred in the last decade, in the last hundred years.”


Tesla ‘under review’ by California DMV over whether it misleads consumers with ‘full self-driving’ claims

  • California’s DMV is probing whether Tesla’s “self-driving” claims broke state law, the LA Times first reported.
  • Tesla calls its $10,000 driver-assistance software “full self-driving” – it is not.
  • Amid a number of Tesla crashes, the technology is coming under increasing scrutiny.

The California Department of Motor Vehicles is looking into whether Tesla illegally misleads consumers with its claims about its “full self-driving” technology, the LA Times reported Monday and Insider confirmed.

“DMV has the matter under review,” a DMV spokesperson told Insider. “The [state] regulation prohibits a company from advertising vehicles for sale or lease as autonomous unless the vehicle meets the statutory and regulatory definition of an autonomous vehicle and the company holds a deployment permit.”

Tesla did not respond to a request for comment.

Tesla’s FSD technology, which customers can add to their vehicles for $10,000, gives the vehicle the capability to change lanes, adjust speed, and complete some other maneuvers without assistance from the driver.

It does not make the car fully autonomous, however, according to widely accepted engineering standards, and Tesla’s own website.

But the company, and specifically CEO Elon Musk, has repeatedly made ambitious promises about FSD’s capabilities, only to push back the timing of new features while touting their claimed safety benefits.

Tesla has faced scrutiny over its driver-assistance features for years. But regulators and lawmakers have been taking an even closer look since Tesla allowed a small group of drivers to test a beta version of its newest FSD features.

The beta software has been at the center of several fatal crashes and high-profile traffic violations in recent weeks, prompting inquiries from lawmakers. Yet the company plans to roll out the software more widely even as videos posted by customers continue to show bugs that could pose major risks.


Amazon’s AI-powered cameras are a double-edged sword that could make drivers safer, but also force the company to sacrifice productivity, a transportation expert says

Amazon’s drivers must meet demanding productivity quotas as high as 300 packages per day, which drivers say require them to cut corners.

Amazon recently installed AI-powered surveillance cameras in its delivery trucks that monitor drivers’ behavior in what the company says is an effort to reduce risky driving behaviors and collisions.

Whether the cameras ultimately accomplish that goal may depend on how much productivity Amazon is willing to sacrifice in order to keep drivers safe, according to a transportation expert who studies AI-powered safety systems.

Amazon’s cameras, which are made by a startup called Netradyne, record 100% of the time the vehicle’s ignition is on, tracking workers’ hand movements and even facial expressions and audibly alerting them in real time when the AI detects what it suspects is distracted or risky driving.

Almost immediately, drivers pushed back – and one even resigned, according to the Thomson Reuters Foundation – citing concerns about the cameras eliminating virtually any privacy they once had, as well as potentially making them less productive.

Several drivers told Insider’s Avery Hartmans and Kate Taylor they’re worried about Amazon penalizing them for using their phones on the job, even though they need the devices for navigation. Others said the additional safety precautions they’re taking to avoid committing infractions, like stopping twice at an intersection or driving slower, are making it hard to keep up with the company’s notoriously demanding delivery quotas, which can run as high as 300 packages per day.

But that’s exactly the trade-off Amazon may be forced to make, Matt Camden, a senior research associate at the Virginia Tech Transportation Institute, told Insider.

“If a fleet wants to reduce risky driving behaviors, it’s critical to look at why the drivers are doing that in the first place, and usually, it’s because there’s other consequences that are driving that behavior,” such as “unrealistic delivery times,” Camden said.

“They want to keep their job. If they miss their delivery time, that’s going to look bad – they could be fired, they could lose their livelihood,” he said. “And if [the delivery time] is unrealistic, then they have to find a way to get it done.”

Instead, Camden said, companies like Amazon need to approach technology-based safety systems “from a more positive standpoint, from a training standpoint and say: ‘We’re not going to nitpick you. We just want you to be safe.'”

“Netradyne cameras are used to help keep drivers and the communities where we deliver safe,” Amazon spokesperson Alexandra Miller told Insider in a statement.

“Don’t believe the self-interested critics who claim these cameras are intended for anything other than safety,” she added.

Netradyne could not be reached for comment.

Safety first

Miller told Insider that in Amazon’s pilot test of the Netradyne cameras, which ran from April to October 2020, accidents decreased 48%, stop-sign violations decreased 20%, driving without a seatbelt decreased 60%, and distracted driving decreased 45%.

However, independent research on the Netradyne “Driveri” camera system Amazon uses, and on AI camera systems generally, is sparse.

In an informational video for its camera rollout, Amazon claimed “the camera systems” can “reduce collisions by 1/3 through in-cab warnings,” citing studies by an investment bank called First Analysis as well as VTTI, where Camden works. (First Analysis could not be reached for comment).

Amazon didn’t respond to questions about which studies it was referring to in the video.

Camden said VTTI hasn’t looked at Netradyne’s cameras specifically, but that a study it conducted in 2010 found “video-based monitoring systems” without real-time alerts or AI prevented between 38.1% and 52.2% of “safety-related events” when tested on two different companies’ delivery fleets.

But those safety benefits were a result of funneling data from the cameras to safety managers, who could then give feedback to drivers to help them drive safer.

“We can’t say that these AI-powered cameras would reduce 10%, 20%, 30%, 50% [of safety incidents],” Camden said. “We can’t get that specific number yet because we haven’t done the research, but it makes sense that in-vehicle alerts do work to address risky driving.”

Similar technologies do show promise, he said, citing VTTI research that showed real-time lane-departure warnings reducing crashes by more than 45%.

But Camden also said when VTTI did a study last year looking at why some delivery fleets are safer than others, it ultimately came down to which ones had a strong “safety culture” and were “prioritizing and valuing safety, at least on the equal level as productivity, if not higher.”

“The safest ones typically said: ‘If you’re tired, we don’t care if you miss your delivery, we want you to stop. We want you to take a break. If you have to go to the bathroom, we want you to stop and go to the bathroom. We don’t want you to feel pressured to keep going.'”

Camden said those fleets made it clear that drivers could reject unrealistic delivery times and wouldn’t be penalized if the route took longer because of traffic or construction.

“It’s easier said than done, of course, because productivity is driving the business. They have to make money, they have to keep their customers happy,” he said.

“But really, it comes down to creating the policies and the programs to support safety, support the driver, because we don’t want them speeding. We don’t want the drivers cutting corners to try to make a delivery.”


‘Liar’s dividend’: The more we learn about deepfakes, the more dangerous they become

BuzzFeed enlisted the help of comedian and Barack Obama impersonator Jordan Peele to create a deepfake video of the former president.

  • Deepfakes are on the rise, and experts say the public needs to know the threat they pose.
  • But as people get used to them, it’ll be easier for bad actors to dismiss the truth as AI forgery.
  • Experts call that paradox the “liar’s dividend.” Here’s how it works and why it’s so dangerous.

In April 2018, BuzzFeed released a shockingly realistic video of a Barack Obama deepfake where the former president’s digital lookalike appeared to call his successor, Donald Trump, a “dips–t.”

At the time, as visually convincing as the AI creation was, the video’s shock value actually allowed people to more easily identify it as a fake. That, and BuzzFeed revealing later in the video that Obama’s avatar was voiced by comedian and Obama impersonator Jordan Peele.

BuzzFeed’s title for the clip – “You Won’t Believe What Obama Says In This Video! 😉” – also hinted at why even the most convincing deepfakes so quickly raise red flags. Because deepfakes are an extremely new invention and still a relatively rare sighting for many people, these digital doppelgängers stick out from the surrounding media landscape, forcing us to do a double-take.

But that won’t be true forever, because deepfakes and other “synthetic” media are becoming increasingly common in our feeds and For You Pages.

Hao Li, a deepfake creator and the CEO and co-founder of Pinscreen, a startup that uses AI to create digital avatars, told Insider the number of deepfakes online is doubling “pretty much every six months,” most of them currently in pornography.

As they spread to the rest of the internet, it’s going to get exponentially harder to separate fact from fiction, according to Li and other experts.

“My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts” as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley’s School of Information, told Insider.

That paradox is known as the “liar’s dividend,” a name given to it by law professors Danielle Citron and Robert Chesney.

Many of the harms that deepfakes can cause – such as deepfake porn, cyberbullying, corporate espionage, and political misinformation – stem from bad actors using deepfakes to “convince people that fictional things really occurred,” Citron and Chesney wrote in a 2018 research paper.

But, they added, “some of the most dangerous lies” could come from bad actors trying to “escape accountability for their actions by denouncing authentic video and audio as deep fakes.”

George Floyd deepfake conspiracy

One such attempt to exploit the liar’s dividend, though ultimately unsuccessful, happened last year after the video of George Floyd’s death went viral.

“That event could not have been dismissed as being unreal or not having happened, or so you would think,” Nina Schick, an expert on deepfakes and former advisor to Joe Biden, told Insider.

Yet only two weeks later, Dr. Winnie Hartstrong, a Republican congressional candidate who hoped to represent Missouri’s 1st District, posted a 23-page “report” pushing a conspiracy theory that Floyd had died years earlier and that someone had used deepfake technology to superimpose his face onto the body of an ex-NBA player to create a video to stir up racial tensions.

“Even I was surprised at how quickly this happened,” Schick said, adding “this wasn’t somebody on, like 4chan or like Reddit or some troll. This is a real person who is standing for public office.”

“In 2020, that didn’t gain that much traction. Only people like me and other deepfake researchers really saw that and were like, ‘wow,’ and kind of marked that as an interesting case study,” Schick said.

But fast-forward a few years, once the public becomes more aware of deepfakes and the “corrosion of the information ecosystem” that has already polarized politics so heavily, Schick said, “and you can see how very quickly even events like George Floyd’s death no longer are true unless you believe them to be true.”

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Locking down deepfakes is impossible – inoculation is the next best bet

Citron and Chesney warned in their paper that the liar’s “dividend” – the payoff for bad actors who leverage the existence of deepfakes as cover for their bad behavior – will get even bigger as the public gets used to seeing deepfakes.

But banning deepfakes entirely could make the problem worse, according to Schick, who pointed to China, the only country with a national rule outlawing deepfakes.

“Let’s say some very problematic footage were to emerge from Xinjiang province, for instance, showing Uyghurs in the internment camps,” she said. “Now the central authority in China has the power to say, ‘well, this is a deepfake, and this is illegal.'”

Combined with Beijing’s control over the country’s internet, Schick said, “and you can see why this power to say what’s real and what’s not can be this very effective tool of coercion. You shape the reality.”

With an outright ban out of the question, the experts who spoke to Insider said a variety of technological, legal, regulatory, and educational approaches are needed.

“Ultimately, it’s also a little bit up to us as consumers to be inoculated against these kinds of techniques,” Li said, adding that people should approach social media with the same skepticism they would a tabloid, especially when a claim hasn’t been confirmed by multiple reliable news outlets or other official sources.

Schick agreed, saying “there has to be kind of some society-wide resilience building” – not only around bad actors’ ability to use real deepfakes to spread fake news, but also around their ability to dismiss real news as the product of nonexistent deepfakes.

Read the original article on Business Insider

The Tom Cruise deepfakes were hard to create. But less sophisticated ‘shallowfakes’ are already wreaking havoc

Tom Cruise onstage during the 10th Annual Lumiere Awards at Warner Bros. Studios on January 30, 2019 in Burbank. (Photo by Michael Kovac/Getty Images for Advanced Imaging Society)
  • The convincing Tom Cruise deepfakes that went viral last month took lots of skill to create.
  • But less sophisticated “shallowfakes” and other synthetic media are already creating havoc.
  • DARPA’s AI experts mapped out how hard it would be to create these emerging types of fake media.

The coiffed hair, the squint, the jaw clench, and even the signature cackle – it all looks and sounds virtually indistinguishable from the real Tom Cruise.

But the uncanny lookalikes that went viral on TikTok last month under the handle @deeptomcruise were deepfakes, a collaboration between Belgian visual-effects artist Chris Ume and Tom Cruise impersonator Miles Fisher.

The content was entertaining and harmless, with the fake Cruise performing magic tricks, practicing his golf swing, and indulging in a Bubble Pop. Still, the videos – which have racked up an average of 5.6 million views each – reignited people’s fears about the dangers of the most cutting-edge type of fake media.

“Deepfakes seem to tap into a really visceral part of people’s minds,” Henry Ajder, a UK-based deepfakes expert, told Insider.

“When you watch that Tom Cruise deepfake, you don’t need an analogy because you’re seeing it with your own two eyes and you’re being kind of fooled even though you know it’s not real,” he said. “Being fooled is a very intimate experience. And if someone is fooled by a deepfake, it makes them sit up and pay attention.”


The good news: it’s really hard to make such a convincing deepfake. It took Ume two months to train the AI-powered tool that generated the deepfakes, 24 hours to edit each minute-long video, and a talented human impersonator to mimic the hair, body shape, mannerisms, and voice, according to The New York Times.

The bad news: it won’t be that hard for long, and major advances in the technology in recent years have unleashed a wave of apps and free tools that enable people with few skills or resources to create increasingly good deepfakes.

Nina Schick, a deepfake expert and former advisor to Joe Biden, told Insider this “rapid commodification of the technology” is already wreaking havoc.

“Are you just really concerned about the high-fidelity side of this? Absolutely not,” Schick said, adding that working at the intersection of geopolitics and technology has taught her that “it doesn’t have to be terribly sophisticated for it to be effective and do damage.”

The Defense Advanced Research Projects Agency (DARPA) is well aware of this diverse landscape, and its Media Forensics (MediFor) team is working alongside private sector researchers to develop tools that can detect manipulated media, including deepfakes as well as cheapfakes and shallowfakes.

As part of its research, DARPA’s MediFor team mapped out different types of synthetic media – and the level of skill and resources an individual, group, or an adversarial country would need to create it.

A DARPA MediFor chart mapping types of synthetic media against the skill and resources needed to create them.

Hollywood-level productions – like those in “Star Wars: Rogue One” or “The Irishman” – require lots of resources and skill to create, even though they typically aren’t AI-powered (though Disney is experimenting with deepfakes). On the other end of the scale, bad actors with little training have used simple video-editing techniques to make House Speaker Nancy Pelosi appear drunk, and to incite violence in Ivory Coast, South Sudan, Kenya, and Burma.
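According to reporting at the time, the Pelosi clip was simply slowed to roughly 75% of its original speed. As an illustration of how little skill a “shallowfake” of that kind requires, a single command with the free ffmpeg tool can produce the same effect (ffmpeg must be installed; the file names here are placeholders):

```shell
# Slow the video to 75% speed and slow the audio tempo to match --
# roughly the kind of edit behind the slowed Pelosi clip. No AI involved.
# input.mp4 / output.mp4 are placeholder file names.
ffmpeg -i input.mp4 -filter:v "setpts=PTS/0.75" -filter:a "atempo=0.75" output.mp4
```

No machine learning is involved: the video timestamps are simply stretched and the audio tempo reduced to keep the two in sync.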

Schick said the Facebook-fueled genocide against Rohingya Muslims also relied mostly on these so-called “cheapfakes” and “shallowfakes” – synthetic or manipulated media altered using less advanced, non-AI tools.

But deepfakes aren’t just being used to spread political misinformation, and experts told Insider ordinary people may have the most to lose if they become a target.

Last month, a woman was arrested in Pennsylvania and charged with cyber harassment on suspicion of making deepfake videos of teen cheerleaders naked and smoking, in an attempt to get them kicked off her daughter’s squad.

“It’s almost certain that we’re going to see some kind of porn version of this app,” Schick said. In a recent op-ed in Wired, she and Ajder wrote about a bot Ajder helped discover on Telegram that turned 100,000 user-provided photos of women and underage children into deepfake porn – and how app developers need to take proactive steps to prevent this kind of abuse.

Experts told Insider they’re particularly concerned about these types of cases because the victims often lack the money and status to set the record straight.

“The celebrity porn [deepfakes] have already come out, but they have the resources to protect themselves … the PR team, the legal team … millions of supporters,” Schick said. “What about everyone else?”

As with most new technologies – from facial recognition to social media to COVID-19 vaccines – women, people of color, and other historically marginalized groups tend to bear a disproportionate share of the abuse and bias stemming from their use.

To counter the threat posed by deepfakes, experts say society needs a multipronged approach that includes government regulation, proactive steps by technology and social media companies, and public education about how to think critically and navigate our constantly evolving information ecosystem.


Winnie Lee, Appier cofounder and COO, explains how AI and predictive technologies are powering the future of business

Winnie Lee, COO and cofounder of Appier

According to a 2020 report from Grand View Research, Inc., the global artificial intelligence market size is expected to reach US$733.7 billion by 2027, of which up to 20% of revenue will come from the advertising and media sector. This is welcome news for Appier, a Taiwan-based startup that provides a suite of AI-powered products to brands and marketers looking to better understand – and predict – their customers’ needs. The company was established in 2012 by Chih-Han Yu, Winnie Lee, and Joe Su, three friends who met while working and studying in the US. Eight years later, the company has attracted upward of US$182 million in investment, making it one of Taiwan’s highest-valued startups. Insider spoke with Winnie Lee, Appier cofounder and COO, about how AI and predictive technologies are powering the future of business.

Insider: In the eight years Appier has been in operation, the company has grown rapidly to where it is now valued at over US$1 billion. What has driven this growth?

Lee: I think there are three key elements. One is technology advancement. The second is the product solutions we build. And the last one is the business execution. These three are very critical to our growth. The first two echo back to the company mindset of having market-driven innovation. We need to make sure that any innovation is really addressing the market needs. This allows us not only to drive the technology advancement within the team, but also to think from the customer’s point of view – what would really make their lives easier? We need to develop products that are solving the end user’s problem but we also need to continue to drive the AI development. Both are very crucial to us.

At the same time, Asia is composed of so many different countries and if you want to grow as a company you really need to understand each market that you enter. We have been operating in 14 cities across Asia and in each city we hire people from the local market because we believe they will understand the customers there. Through this interaction between each business team from each market to the product team, we can fast iterate our product and make sure it is meeting everyone’s needs.

Insider: Why did Appier choose the marketing and sales sector as its area of focus?

Lee: What we had observed is that as businesses started to go digital, the first piece of data they want to digitize is the customer data. As an AI company, if you have very clear goals and sufficiently relevant data then you can help those companies to utilize their data and improve continuously and can easily show your value. However, if you enter a field that has very little data then it becomes almost impossible to prove your value. That’s why we chose to start in the marketing and sales domain.

Insider: What are Appier’s strengths compared to other marketing solutions companies such as Salesforce and IBM? How does the company differentiate itself in the market?

Lee: We are different in two ways. The first is that most other companies tend to group their products by functions. We are unique in that we are trying to think from our customer’s perspective and how they can map their business challenges on a day to day basis against the solutions we can offer.

The second difference is that we are actually an AI-native company. When we started the business, our aspiration was to use AI to empower businesses to use their own data. Everything we design and everything we sell is surrounded by this idea or concept. We are almost an ‘AI as a service company’. Not just a SaaS company, but more an intelligent software company. There are a lot of companies that are trying to add AI into their software. When we design a product, the AI capabilities are already built in.

Insider: Over the past few years Appier has acquired a number of tech companies, such as Japanese AI solutions provider Emin. What is the thinking behind this acquisition strategy?

Lee: When we want to expand our product portfolio, our first logical thought is to look around in adjacent fields and see which area would make the most sense to enter first. Once we have identified a specific area, we start to scan the market. If we don’t find anything close to what we want to build then we just go ahead and build it. But if the functionality that we want is already there with another company, then we would consider whether that company would be a good target for us.

For example, Emin was in a sector that we wanted to enter, which is optimizing transactions for businesses. However, their AI technology was not as advanced. After we had acquired them, we injected our AI capabilities into the platform and now the performance of that platform has become a lot better compared to what it was before.

Insider: Predictive AI – the ability to analyze historical data to predict future behavior – is an emerging field in marketing. How do you see this evolving?

Lee: In the past few years there has been a lot of focus on solving data collection and data organization problems. But from our perspective, the challenges that marketers face is not only this but also managing this increased amount of data and being able to continuously understand their end users.

Predictive AI is the trend the industry is moving toward right now. Turning data into insight is of course important, but how to turn data into insight and then into action is even more important. Within our suite of solutions, we focus on building this predictive capability instead of just providing insight and analytics to the customers. We want to help them actually turn data into action. Reactive approaches based on historical data is useful, it’s still important, but how do you predict future behavior or the future action you should take? This proactive approach is going to be critical for most companies.
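Lee’s distinction between reactive insight and predictive action can be sketched with a toy example. The code below is purely illustrative (plain Python, not Appier’s technology): a reactive report averages a customer’s past purchases, while a simple least-squares trend line forecasts next month’s, which a marketer can act on before it happens.

```python
# Reactive vs. predictive, in miniature: summarize the past, or fit a
# trend and forecast the future. Data and method are illustrative only.

def fit_trend(y):
    """Ordinary least squares for y = a + b*t over t = 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a, b

purchases = [5, 6, 8, 9, 11, 12]          # a customer's last six months

report = sum(purchases) / len(purchases)   # reactive: average so far
a, b = fit_trend(purchases)
forecast = a + b * len(purchases)          # predictive: next month

print(f"average so far: {report:.1f}, forecast: {forecast:.1f}")
```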

Insider: The three cofounders all met while studying and working in the US. Why did you decide to bring the technology back to Asia?

Lee: Growing up here, we knew that we have very strong tech talent, a very strong software talent pool in Taiwan, which was not obvious to the rest of the world back then. In addition to the talent pool, Asia has been a great place to grow a business. There is a lot of growth and a lot of companies needing solutions like ours.


Toyota just started building a 175-acre smart city at the base of Mount Fuji in Japan. Photos offer a glimpse of what the ‘Woven City’ will look like.

The “Woven City” will eventually be home to 2,000 Toyota employees and their families, retired couples, retailers, and scientists, according to the company.

Toyota Motor Corporation started construction this week on a 175-acre smart city at the base of Japan’s Mount Fuji, about 62 miles from Tokyo, the company announced Tuesday.

The city, which Toyota has dubbed the “Woven City,” is expected to function as a testing ground for technologies like robotics, smart homes, and artificial intelligence. A starting population of about 360 inventors, senior citizens, and families with young children will test and develop these technologies.

These residents, who are expected to move into the Woven City within five years, will live in smart homes with in-home robotics systems to assist with daily living and sensor-based artificial intelligence to monitor health and take care of other basic needs, according to the company.

The eventual plan is for the city to house a population of more than 2,000 Toyota employees and their families, retired couples, retailers, and scientists. Toyota announced plans for the city last year at CES, the tech trade show in Las Vegas.

Here’s what the 175-acre smart city is set to look like when it’s finished.

Toyota’s planned 175-acre smart city will sit at the base of Mount Fuji in Japan, about 62 miles from Tokyo.

An artist’s rendering of Toyota’s planned smart city.

Called the “Woven City,” the development will feature pedestrian streets “interwoven” with streets dedicated to self-driving cars, according to press materials. The city is expected to be fully sustainable, powered by hydrogen fuel cells. 

The Woven City will function as a testing ground for technologies like robotics, smart homes, and artificial intelligence, according to the company.

Toyota officially started construction on the city in a groundbreaking ceremony on Tuesday, the company announced. The city is set to be built on the site of one of Toyota’s former manufacturing plants called Higashi-Fuji.

Toyota plans to send about 360 people to live in the Woven City to start. From there, it intends to gradually grow the population to more than 2,000.

An artist’s rendering of Toyota’s planned smart city.

The first residents will be a group of roughly 360 inventors, senior citizens, and young families with children, according to the company. These residents will move in within five years, a Toyota spokesperson told Insider last year.

Toyota has not yet revealed how these first residents will be chosen, and a spokesperson did not immediately respond to Insider’s request for more details.

Eventually, the Woven City is expected to be home to more than 2,000 Toyota employees and their families, retired couples, retailers, visiting scientists, and industry partners.

Residents will live in homes outfitted with in-home robotics technology as well as sensor-based artificial intelligence to monitor their health and take care of their basic needs.

An artist’s rendering of a home in Toyota’s planned smart city.

Despite the planned high-tech homes, Toyota says promoting human connection is a major theme of the city, though it has not released specifics on how it plans to encourage this.

Press materials indicate that the planned city will feature multiple parks and a large central plaza for social gatherings.

An artist’s rendering of Toyota’s planned smart city.

Bjarke Ingels, the famed Danish architect behind high-profile projects such as 2 World Trade Center in New York City and Google’s California and London headquarters, is responsible for the city’s design.

Buildings are to be made mostly of wood to minimize the carbon footprint.

An artist’s rendering of Toyota’s planned smart city.

Rooftops are slated to be covered in photovoltaic panels to generate solar power, supplementing the power produced by the city’s hydrogen fuel cells.

Toyota says it plans to integrate nature throughout the city with native vegetation and hydroponics, a method of growing plants without soil.

The city will be designed with three different types of streets: one for self-driving vehicles, one for pedestrians using personal mobility devices like bikes, and one for pedestrians only.

An artist’s rendering of Toyota’s planned smart city.

These three types of streets will form an “organic grid pattern” to help test autonomy, according to Toyota.

There will also be one underground road used for transporting goods. 

A fleet of Toyota’s self-driving electric vehicles, called e-Palettes, will be used for transportation, deliveries, and mobile retail throughout the city.

An artist’s rendering of Toyota’s e-Palette self-driving electric vehicle.

Toyota has not yet disclosed an estimated completion date or estimated total cost for building the Woven City. 

The Woven City joins a slew of similar smart city projects across Japan, some of which are also spearheaded by major companies.

A robot demonstrates a delivery at Panasonic’s Fujisawa Sustainable Smart Town in Fujisawa, Japan on December 9, 2020.

In 2014, electronic appliance company Panasonic opened a smart city in Japan’s Kanagawa Prefecture called the Fujisawa Sustainable Smart Town, per Tokyo Esque, a market research agency. The city is still under construction with completion expected in 2022, but more than 2,000 people live there now, according to Panasonic.

Accenture, an American-Irish consulting company, is teaming up with the University of Aizu on smart city projects in the town of Aizuwakamatsu with the goal of better using artificial intelligence in public services, the company announced in July 2020.

Local governments made more than 50 proposals for smart cities in Japan in 2020, but only a handful of those were approved, according to Tokyo Esque.

As Linda Poon reported for Bloomberg last year, critics say smart city developers should focus on the human aspect of the projects, not just the technology.

“If it’s not started from a human-centric perspective, from the bottom up as opposed to from the top down, these aren’t real cities,” John Jung, founder of the Intelligent Community Forum think tank, told Bloomberg in January 2020. “They’re not designed to get [people] to know each other.”


The 5 things everyone should know about cloud AI, according to a Sequoia Capital partner

Many people encounter cloud AI through their smart speaker

  • Konstantine Buhler, a partner at Sequoia Capital, believes “cloud is going to become AI.”
  • Buhler insists that AI is not “magic,” and that it should be demystified and measured.
  • Buhler says companies can bake AI into their processes “horizontally.” 
  • This article is part of a series about cloud technology called At Cloud Speed.

If you ask Sequoia Capital partner and early-stage investor Konstantine Buhler about the role of artificial intelligence in cloud computing, his answer is unequivocal: “Cloud is going to become AI,” he told Insider. “I mean, all of the cloud will be based on AI.”

Snowflake’s $3.4 billion initial public offering and Databricks’ $1 billion funding round over the past year suggest big things ahead for AI in the cloud, and the industry is estimated at $40 billion and climbing. Major platforms like Amazon’s AWS, Microsoft Azure, and Google Cloud – as well as a host of startups – sell cloud-based tools and services for data labeling, automation, natural language processing, image recognition, and more, making it more affordable than ever before for firms to dabble in AI.

Buhler, who has a master’s degree in artificial intelligence engineering from Stanford, revels in AI’s contributions, but also insists that the sector be demystified, and basic business fundamentals applied to it. 

His investments include CaptivateIQ, which automates business commissions, and Verkada, a security camera company that uses AI to recognize information like license plate numbers. Sequoia in general is an investor in some of the biggest names in AI, including Snowflake and Nvidia. 

“This next wave of enterprise and consumer technologies will all need AI built in,” Buhler said. “That’s going to be the standard going forward.”

AI’s ubiquity in the future is the first of a few basic lessons Buhler believes everyone should understand about AI’s impact over the next decade in the cloud. Here are the rest:

AI is not magic – it’s math

There is an (unwarranted) aura around artificial intelligence that ascribes to it supernatural brilliance.

“It seems complicated – it seems like magic of some sort, so people get intimidated and awed by it,” Buhler said. “Artificial intelligence is just more and more mathematical computations done rapidly, which at some point, for a moment, seems ‘magical.’ But it never is.”

Ordinary people should seek to understand it, because it impacts their lives. If you talk to Apple’s Siri or Amazon’s Alexa, you are conversing with AI. If your cat hops aboard a Roomba vacuum, both of you can appreciate how it “learns” to avoid objects in its path. On the other hand, a red-light camera that zooms in to read your license plate when you go through intersections late and automatically fines you might not be such a welcome innovation.
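Buhler’s point that AI is arithmetic rather than magic can be made concrete. The sketch below (plain Python, purely illustrative) computes what a single artificial “neuron” does: multiply inputs by learned weights, sum them, and squash the result. Large models repeat this step billions of times per second.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then a sigmoid "activation"
    # that squashes the result into the range (0, 1).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Toy example: two input signals and hand-picked weights.
score = neuron([0.5, 0.8], [1.2, -0.4], bias=0.1)
print(round(score, 3))  # → 0.594
```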

AI should learn from the internet revolution

Buhler believes that AI is at an inflection point similar to the one the internet reached 20 years ago. “Let’s learn a lesson from the dot-com boom,” when many overvalued companies imploded as they failed to materialize as real companies, Buhler said: “Everybody had that mentality of, ‘let’s stick internet on this thing.'”

While cloud-based tools allow companies to spin up AI models with relative ease, not every problem needs to be solved with these kinds of algorithms. 

The business case must always be there – with the customer centered – or AI will not be practical.

“When you build an artificial intelligence model, it is not about the AI: It is about the customer,” Buhler said. “The internet was a communication revolution, and AI is a computation revolution. This is a new mechanism to serve people, and you have to understand their needs, or you’re going to spend years building the wrong thing.”

Konstantine Buhler is a partner and early stage investor at Sequoia Capital.

Every company has a ‘horizontal’ AI opportunity

Buhler believes every company can bake AI into their business using the same basic “horizontal stack,” or processes that take raw data and turn it into actionable intelligence that can be used in different ways across business units. Buhler says companies like Databricks, Dataiku, DataRobot, and Domino Data Lab (“they all start with D for some reason”) help enterprises do this. 

Horizontal data processes can include data preparation (sorting text from image files, for instance), data labeling, data storage, creating algorithms that process the data, and, finally, applying the algorithms to specific business processes to help guide decision making. 

“It should be laid out that simply,” he says. That process “is all about enabling enterprises to bake artificial intelligence directly into their systems.”
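The five stages Buhler describes – preparation, labeling, storage, modeling, and decision making – can be sketched end to end in a few lines. This is a toy illustration only; every name and rule in it is invented for the example and is not drawn from Databricks, Dataiku, or any other vendor’s API.

```python
# A "horizontal" AI stack in miniature: raw data in, business action out.

raw = ["Refund please ", "Love the product", "Cancel my order", "Great support"]

# 1. Data preparation: normalize the raw text
prepared = [r.lower().strip() for r in raw]

# 2. Data labeling: mark records containing churn-risk words
RISK_WORDS = {"refund", "cancel"}
labeled = [(text, any(w in text for w in RISK_WORDS)) for text in prepared]

# 3. Data storage: in reality a warehouse; here, just a list
store = list(labeled)

# 4. "Algorithm": score new text by overlap with the risky vocabulary
def churn_score(text):
    return sum(1 for w in RISK_WORDS if w in text.lower())

# 5. Apply the model to guide a business decision
new_message = "I want to cancel and get a refund"
action = ("escalate to retention team"
          if churn_score(new_message) >= 1 else "no action")
print(action)  # → escalate to retention team
```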

AI startups can also focus on verticals 

Buhler says there are also AI startups that are providing products tailored to more specific business needs. Gong, for example, helps salespeople evaluate opportunities, while competitor Chorus turns sales conversations into data. In the financial world, the startup Vise automates investment management, while in the legal world, Ironclad helps attorneys build contracts faster. Gong, Vise, and Chorus are Sequoia portfolio companies. 

The key in picking great AI startups, Buhler says, is being able to measure how a company is helping its customers: “It has to be a real business with outputs that can be quantified.” 
