Microsoft reportedly in talks to purchase AI and speech technology firm Nuance Communications


  • Bloomberg News reported that Microsoft is in advanced talks to acquire AI voice company Nuance Communications.
  • The deal would reportedly value Nuance at $16 billion, a 23% premium to Friday’s close.
  • Nuance Communications’ speech-recognition technology was the basis for Apple’s Siri.
  • See more stories on Insider’s business page.

Microsoft is negotiating a deal to buy Nuance Communications, an artificial intelligence and speech technology company, according to a report from Bloomberg News.

Bloomberg’s anonymous sources said that the deal could be announced this week and would value Nuance at roughly $56 a share, or about $16 billion, a 23% premium to Friday’s closing price of $45.58.

Microsoft declined to comment to Bloomberg. Insider reached out to both firms for comment but did not hear back before publication.

Massachusetts-based Nuance, with a market cap just shy of $13 billion, develops the Dragon NaturallySpeaking family of voice-technology products. Its speech-recognition engine is the basis for Apple’s voice assistant Siri as well as dozens of other real-world applications for enterprise clients.

The deal would be Microsoft’s second-largest purchase ever, behind its $26.2 billion acquisition of LinkedIn in 2016. The company has also acquired companies like Skype and GitHub.

Microsoft appears to be in an acquisitive mood. The company has reportedly been in talks to buy the online-chat company Discord for more than $10 billion. It also purchased game studio ZeniMax Media, the company behind video-game classics “Fallout,” “Doom,” and “The Elder Scrolls,” for $7.5 billion in a deal that closed this year.

Microsoft failed to win a bid to acquire TikTok last year.

Nuance and Microsoft began to collaborate in 2019 on medical applications such as automatic note-taking during doctors’ visits.

According to analyst Dan Ives at Wedbush Securities, Microsoft and Nuance Communications are two of the top three tech buys heading into first-quarter earnings season. Ives highlighted Nuance’s “hospital-wide deployments” of its technology, which have created a growing revenue stream. Wedbush has a $65 price target on Nuance, above Microsoft’s reported bid.

Microsoft stock was up 15.4% for the year as of Friday’s close.

Read the original article on Business Insider

Billionaire investor Peter Thiel called out Apple and Google, warned about TikTok and bitcoin, and criticized China at a recent event. Here are the 17 best quotes.

Peter Thiel.

  • Peter Thiel called out Apple and Google for their links to China this week.
  • The billionaire tech investor issued warnings about bitcoin, TikTok, and AI.
  • Thiel wants to restrict US investors’ access to Chinese markets and vice versa.
  • See more stories on Insider’s business page.

Billionaire investor Peter Thiel warned bitcoin could serve as a “Chinese financial weapon,” criticized Apple and Google for their connections to China, and suggested TikTok should be banned in the US at a virtual event held by the Richard Nixon Foundation this week.

Thiel, the vocal libertarian who co-founded PayPal and Palantir and sits on Facebook’s board, also expressed concerns about technology theft and artificial intelligence, and called for greater restrictions on Chinese investment in the US and vice versa.

The event was called “The Nixon Seminar on Conservative Realism and National Security,” and the topic of discussion was “Big Tech and China: What do we need from Silicon Valley?”

Here are Thiel’s 17 best quotes from the seminar, lightly edited and condensed for clarity:

1. “Shockingly little innovation happens in China. But they have been very good at copying things, stealing things.”

2. “I criticized Google a few years ago for working with Chinese universities and Chinese researchers. And since everything in China is a civilian-military fusion, Google was effectively working with the Chinese military. One of the things that I was sort of told by some of the insiders at Google was they figured they might as well give the technology out the front door, because if they didn’t give it, it would get stolen anyway.”

3. “I had a set of conversations with some of the DeepMind AI people at Google. I asked them, ‘Is your AI being used to run the concentration camps in Xinjiang?’ and they said, ‘Well, we don’t know and don’t ask any questions.’ You have this almost magical thinking that by pretending everything is fine, that’s how you engage and have a conversation, and you make the world better.”

4. “If you look at the big five tech companies, Google, Facebook, Amazon, and Microsoft all have very, very little presence in China. So they aren’t a naturally pro-China constituency. Apple is probably the one that’s structurally a real problem, because the whole iPhone supply chain gets made from China.”

5. “We need to call companies like Google out for working on AI with communist China. I also think we should be putting a lot of pressure on Apple.”

6. “At Facebook, during the Hong Kong protests a year ago, the employees from Hong Kong were all in favor of the protests and free speech. But there were more employees at Facebook who were born in China than who were born in Hong Kong. And the Chinese nationals actually said that it was just Western arrogance, and they shouldn’t be taking Hong Kong’s side and things like that. The internal debate felt like people were actually more anti-Hong Kong than pro-Hong Kong.”

7. “TikTok is problematic because it has this incredible exfiltration of data about people. You are creating this incredibly privacy-invading map of a large part of the population of the Western world. It is a fairly powerful application of AI in a certain sense, as they find ways to make it especially addictive and figure out what videos to show you to keep you watching more and more. It doesn’t seem that if you shut it down, it would be an economic catastrophe.”

8. “In a totalitarian society, you have no qualms about getting data on everybody, in every way possible. That makes AI a very tricky technology, because there are a lot of ways we don’t actually want to apply it in the US or West.” – highlighting the Chinese government’s use of AI for widescale facial recognition.

9. “People often say crypto or bitcoin is a vaguely libertarian technology. If crypto is kind of libertarian, AI is kind of communist.”

10. “Even though I’m sort of a pro-crypto, pro-bitcoin maximalist person, I do wonder whether bitcoin should be partly thought of as a Chinese financial weapon against the US. It threatens fiat money, especially the US dollar, and China wants to do things to weaken the dollar. If China’s long bitcoin, perhaps the US should be asking some tougher questions about exactly how that works.”

11. “An internal stable coin in China – that’s not a real cryptocurrency. That’s just some sort of totalitarian measuring device.”

12. “Make it harder for Chinese investors to invest in the US, and perhaps we should also make it a little bit harder for American investors to invest in China. We have US investors that invest in China and become a big constituency for open capital flows. I think a decent part of the Wall Street crowd is pretty bad in this regard. I would dial it back on both sides – making it harder for US investors to invest in China is an almost equally important part of this.”

13. “China doesn’t like the US having the reserve currency, because it gives us a lot of leverage over Iranian oil supply chains and all sorts of things like that. You can think of the Euro in part as a Chinese weapon against the dollar. China would have liked to see two reserve currencies.”

14. “One of the very strange dynamics in Silicon Valley is people don’t do very much with semiconductors anymore. One of the weird problems with 20 years of intellectual property theft, and where IP doesn’t really have as much value as it used to, is that you learn not to invest in things like that.”

15. “People are too anchored to doing things that worked in the past or copying some model. Building a new search engine was the right thing for Google to do in 1999. It’s probably not the right thing to do today. It’s very hard to compete against Google by doing the exact same thing they are doing.”

16. “You can think of big tech as something that’s very natural. It’s maybe unnaturally big. It’s unhealthy. It’s too strong. But there’s something in the nature of tech to be big. Big science is actually an oxymoron. If you have some giant science factory, there’s probably not much science going on at all.” – criticizing how science has become overly institutionalized and dominated by large corporations.

17. “De-platforming President Trump was really quite extraordinary. That does feel like you really crossed some kind of Rubicon where you declare war on maybe a third, 40% of the country – that seems really crazy.”

Read the original article on Business Insider

The Tom Cruise deepfakes were hard to create. But less sophisticated ‘shallowfakes’ are already wreaking havoc

Tom Cruise onstage during the 10th Annual Lumiere Awards at Warner Bros. Studios in Burbank, California, on January 30, 2019. (Photo by Michael Kovac/Getty Images for Advanced Imaging Society)
  • The convincing Tom Cruise deepfakes that went viral last month took lots of skill to create.
  • But less sophisticated “shallowfakes” and other synthetic media are already creating havoc.
  • DARPA’s AI experts mapped out how hard it would be to create these emerging types of fake media.
  • See more stories on Insider’s business page.

The coiffed hair, the squint, the jaw clench, and even the signature cackle – it all looks and sounds virtually indistinguishable from the real Tom Cruise.

But the uncanny lookalikes that went viral on TikTok last month under the handle @deeptomcruise were deepfakes, a collaboration between Belgian visual-effects artist Chris Ume and Tom Cruise impersonator Miles Fisher.

The content was entertaining and harmless, with the fake Cruise performing magic tricks, practicing his golf swing, and indulging in a Bubble Pop. Still, the videos – which have racked up an average of 5.6 million views each – reignited people’s fears about the dangers of the most cutting-edge type of fake media.

“Deepfakes seem to tap into a really visceral part of people’s minds,” Henry Ajder, a UK-based deepfakes expert, told Insider.

“When you watch that Tom Cruise deepfake, you don’t need an analogy because you’re seeing it with your own two eyes and you’re being kind of fooled even though you know it’s not real,” he said. “Being fooled is a very intimate experience. And if someone is fooled by a deepfake, it makes them sit up and pay attention.”

Read more: What is a deepfake? Everything you need to know about the AI-powered fake media

The good news: it’s really hard to make such a convincing deepfake. It took Ume two months to train the AI-powered tool that generated the deepfakes, 24 hours to edit each minute-long video, and a talented human impersonator to mimic the hair, body shape, mannerisms, and voice, according to The New York Times.

The bad news: it won’t be that hard for long, and major advances in the technology in recent years have unleashed a wave of apps and free tools that enable people with few skills or resources to create increasingly good deepfakes.

Nina Schick, a deepfake expert and former advisor to Joe Biden, told Insider this “rapid commodification of the technology” is already wreaking havoc.

“Are you just really concerned about the high-fidelity side of this? Absolutely not,” Schick said, adding that working at the intersection of geopolitics and technology has taught her that “it doesn’t have to be terribly sophisticated for it to be effective and do damage.”

The Defense Advanced Research Projects Agency (DARPA) is well aware of this diverse landscape, and its Media Forensics (MediFor) team is working alongside private sector researchers to develop tools that can detect manipulated media, including deepfakes as well as cheapfakes and shallowfakes.

As part of its research, DARPA’s MediFor team mapped out different types of synthetic media – and the level of skill and resources an individual, group, or an adversarial country would need to create it.

A slide from DARPA’s MediFor team mapping the synthetic-media threat landscape.

Hollywood-level productions – like those in “Star Wars: Rogue One” or “The Irishman” – require lots of resources and skill to create, even though they typically aren’t AI-powered (though Disney is experimenting with deepfakes). On the other end of the scale, bad actors with little training have used simple video-editing techniques to make House Speaker Nancy Pelosi appear drunk and incite violence in Ivory Coast, South Sudan, Kenya, and Burma.

Schick said the Facebook-fueled genocide against Rohingya Muslims also relied mostly on these so-called “cheapfakes” and “shallowfakes” – synthetic or manipulated media altered using less advanced, non-AI tools.

But deepfakes aren’t just being used to spread political misinformation, and experts told Insider ordinary people may have the most to lose if they become a target.

Last month, a woman was arrested in Pennsylvania and charged with cyber harassment on suspicion of making deepfake videos of teen cheerleaders naked and smoking, in an attempt to get them kicked off her daughter’s squad.

“It’s almost certain that we’re going to see some kind of porn version of this app,” Schick said. In a recent op-ed in Wired, she and Ajder wrote about a bot Ajder helped discover on Telegram that turned 100,000 user-provided photos of women and underage children into deepfake porn – and how app developers need to take proactive steps to prevent this kind of abuse.

Experts told Insider they’re particularly concerned about these types of cases because the victims often lack the money and status to set the record straight.

“The celebrity porn [deepfakes] have already come out, but they have the resources to protect themselves … the PR team, the legal team … millions of supporters,” Schick said. “What about everyone else?”

As with most new technologies, from facial recognition to social media to COVID-19 vaccines, women, people of color, and other historically marginalized groups tend to bear a disproportionate share of the abuse and bias stemming from their use.

To counter the threat posed by deepfakes, experts say society needs a multipronged approach that includes government regulation, proactive steps by technology and social media companies, and public education about how to think critically and navigate our constantly evolving information ecosystem.

Read the original article on Business Insider

A Cathie Wood ETF bought about 800,000 shares in a Serena Williams-backed SPAC that just entered a $1.6 billion deal

Cathie Wood.


Cathie Wood’s ARK Autonomous Technology & Robotics exchange-traded fund recently bought shares in a special-purpose acquisition company that counts tennis champion Serena Williams as a board member.

The ARKQ ETF snapped up 800,494 shares in Jaws Spitfire Acquisition Corp, according to data available on ARK Invest’s website. The fund counts Tesla, JD.com, Baidu, and Alphabet among its top ten holdings.

Miami-based Jaws is led by chairman Barry Sternlicht and CEO Matthew Walters. The SPAC recently entered a merger deal with digital manufacturing firm Velo3D to take it public, valuing the combined company at $1.6 billion.

Wood and the red-hot SPAC market have both hit a rough patch. Blank-check companies have already raised $96 billion across 296 IPOs so far in 2021, according to SPACInsider.com. Blank-check stocks tumbled on Thursday after Reuters reported that the Securities and Exchange Commission has opened an inquiry into Wall Street’s SPAC frenzy and is seeking voluntary information from banks about their dealings.

Some 93% of the SPACs that went public this week – 14 out of 15 – are trading below their $10 IPO price, Dealogic data compiled by Reuters showed.

Wood is known for her investments in innovative, disruptive stocks. But her flagship $22.9 billion Innovation ETF is currently sitting on an 8% year-to-date loss after a broader pullback in high-growth stocks across multiple sectors. Meanwhile, the ARKQ ETF that bought into Jaws is up 4.6% year-to-date.

Read the original article on Business Insider

Amazon is reportedly telling delivery drivers they must give ‘biometric consent’ so the company can track them as a condition of the job

Parcels are stored in a truck in a logistics centre of the mail order company Amazon.

  • Amazon delivery drivers will reportedly lose their jobs if they don’t consent to being tracked by the company.
  • The form would allow Amazon to collect biometric data, like facial recognition, from the drivers.
  • News surfaced last month that Amazon was planning to roll out AI-powered cameras in its vehicles.
  • See more stories on Insider’s business page.

Amazon is telling its delivery drivers to sign a consent form that allows the company to track them based on biometric data as “a condition of delivering Amazon packages,” Motherboard’s Lauren Kaori Gurley reported on Tuesday.

Thousands of drivers across the US must sign the “biometric consent” paperwork this week, and if they don’t they’ll lose their jobs, according to Motherboard. The form, which was viewed by the outlet and published in the report, states that Amazon would be allowed to use “on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account.” The system would then “collect, store, and use Biometric Information from such photographs.”

The technology specifically would track a driver’s location and movement, like how many miles they drive, when they brake and turn, and how fast they are driving.

As Motherboard noted, the drivers presented with the consent form are employed by third-party delivery partners that use Amazon’s delivery stations, but the drivers are still subject to the company’s working guidelines. An Amazon delivery company owner told the outlet that one of their drivers refused to sign, citing Amazon’s micromanaging as the reason.

Amazon did not immediately respond to Insider’s request for comment.

The report comes after Amazon announced in February that it would start using cameras equipped with artificial intelligence in its trucks to track the drivers while they work. One driver, per Reuters, quit over privacy concerns regarding the new cameras. Amazon told Insider in a previous statement that the new cameras were part of an effort to invest in “safety across our operations.”

Read more: More than 40% of surveyed Amazon employees say they wished they were in a union, a new Insider survey shows

The AI cameras are able to sense if a driver is speeding, yawning, or if they’re not wearing their seatbelt, among other motions. Each truck’s system includes four cameras: one with a view of the road, two that face the side windows, and one that faces the driver.

You can read the full report on Motherboard here.

Read the original article on Business Insider

Amazon is using new AI-powered cameras in delivery trucks that can sense when drivers yawn. Here’s how they work.

  • Amazon has installed AI-equipped camera systems in all of its delivery vehicles.
  • The Netradyne system can be triggered by a yawn or by speeding.
  • The new system has sparked some backlash from workers.
  • See more stories on Insider’s business page.

In February, The Information reported on an instructional video for Amazon delivery drivers announcing a new suite of artificial intelligence-equipped cameras to surveil drivers during the entirety of their routes.

The decision sparked some backlash, and one driver told the Thomson Reuters Foundation that the policy change had driven him to quit, calling it an invasion of privacy. But how does it work?

In the introductory video shown to drivers, Amazon’s senior manager for last-mile safety, Karolina Haraldsdottir, explains how the “camera-based video safety technology” works.

The camera system, called “Driveri,” is manufactured by the AI and transportation startup Netradyne. Four cameras give 270 degrees of coverage: one faces out through the windshield, two face out the side windows, and one faces the driver.

The cameras do not upload footage automatically, Haraldsdottir stressed in the video; a live feed is only sent after the AI detects a problem. The system recognizes 16 behaviors that trigger an upload, from distracted driving to speeding to “driver drowsiness.”

A still from the instructional video on Amazon’s Netradyne camera system.

Haraldsdottir also stressed that the camera system can be used to “exonerate drivers from blame in safety incidents” and that drivers can trigger a manual upload if there is a safety issue they want to report.
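To make that event-triggered design concrete, here is a minimal sketch in Python of how such a system might decide when to send footage. The event names, data structures, and rules are hypothetical illustrations only, not Netradyne’s actual API or Amazon’s real trigger list.

```python
# Illustrative sketch only: event names and fields are hypothetical,
# not Netradyne's actual API or Amazon's real 16-behavior trigger list.
from dataclasses import dataclass
from typing import Set

TRIGGER_EVENTS = {
    "distracted_driving",
    "speeding",
    "driver_drowsiness",
    "hard_braking",
    "seatbelt_not_worn",
}  # the article says 16 behaviors trigger an upload; a few examples shown

@dataclass
class FrameAnalysis:
    detected_events: Set[str]
    manual_upload_requested: bool = False  # drivers can flag a safety issue themselves

def should_upload(frame: FrameAnalysis) -> bool:
    """Footage is only sent when the on-board AI flags a listed behavior
    or the driver manually requests an upload; otherwise it stays local."""
    if frame.manual_upload_requested:
        return True
    return bool(frame.detected_events & TRIGGER_EVENTS)

# Example: a frame where the model detects drowsiness triggers an upload.
print(should_upload(FrameAnalysis(detected_events={"driver_drowsiness"})))  # True
print(should_upload(FrameAnalysis(detected_events=set())))                  # False
```

The key point from the video is the same as in this toy version: nothing streams continuously; an upload happens only when a flagged behavior or a manual request occurs.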

In the report about a driver quitting over the new system, the former employee described it as a “sort of coercion.”

Amazon has faced controversy over claims of surveillance in the past. In January of this year, more than 200 workers signed a petition sent to the CEO Jeff Bezos asking for an end to what the employees called “labor surveillance” ahead of unionization efforts.

Read the original article on Business Insider

Amazon driver quits, saying the final straw was the company’s new AI-powered truck cameras that can sense when workers yawn or don’t use a seatbelt

  • The Thomson Reuters Foundation spoke to a former Amazon driver who quit the company.
  • He said he left after Amazon installed AI-powered cameras in delivery vehicles.
  • The decision to surveil drivers has raised questions about workers’ privacy at the tech giant.
  • See more stories on Insider’s business page.

The Thomson Reuters Foundation published a report Friday about an Amazon driver in Denver, Colorado, for whom the company’s constant AI-driven surveillance proved to be too much.

Vic, who asked the Thomson Reuters Foundation to use only his first name “for fear of retaliation,” this month quit his job delivering packages for the tech giant.

He started work in 2019 and saw Amazon’s policies change to include more active means of surveillance. First there was an app tracking his route, and then the company wanted pictures of him at the beginning of each shift on another app, he told the foundation.

But the breaking point came, he told the Thomson Reuters Foundation, when Amazon announced that it would be installing AI cameras in its fleet of delivery vehicles.

Insider reported in February that Amazon was equipping all delivery vehicles with AI camera systems called Driveri, manufactured by a company called Netradyne. The cameras are always on and scan drivers’ body language, the speed of the vehicle, and even drowsiness. The system then uses “automated verbal alerts” to tell drivers if a violation has been detected.

When Amazon announced the policy change and gave its drivers a deadline to agree to the surveillance protocols, Vic told Thomson Reuters Foundation that he decided to put in his notice.

“It was both a privacy violation, and a breach of trust,” he told the foundation. He also said that the company requiring drivers to agree to constant surveillance in order to do their jobs seemed like “a sort of coercion.”

Amazon told Insider in February that driver footage is not automatically available to Amazon, and that the “live feed” is only triggered after a safety or policy violation is detected.

Amazon responded to Insider’s request for comment about the foundation’s story with a statement saying: “We are investing in safety across our operations and recently started rolling out industry leading camera-based safety technology across our delivery fleet. This technology will provide drivers real-time alerts to help them stay safe when they are on the road.”

The company also included positive driver testimonials.

The tech giant is facing scrutiny for its employee tracking and surveillance in warehouses as a contentious union election in the company’s Bessemer, Alabama warehouse draws national attention to Amazon’s working conditions.

Read the original article on Business Insider

Everything you need to know about Neuralink, Elon Musk’s company that wants to put microchips in people’s brains

Elon Musk.

  • Neuralink is one of the stranger, more futuristic companies in Elon Musk’s portfolio.
  • It’s developing neural interface technology — a.k.a. putting microchips into people’s brains.
  • The technology could help study and treat neurological disorders. 
  • Visit the Business section of Insider for more stories.

Elon Musk is best known for high-profile companies like Tesla and SpaceX, but the billionaire also has a handful of more unusual ventures. One of them, he says, he started to one day achieve “symbiosis” between the human brain and artificial intelligence.

Neuralink is Musk’s neural interface technology company. Simply put, it is building technology that could be embedded in a person’s brain, where it could both record brain activity and potentially stimulate it.

While Musk likes to talk up his futuristic vision for the technology, merging human consciousness with AI, the tech has plenty of near-term potential medical applications such as the treatment of Parkinson’s disease.

Here’s everything you need to know about Neuralink:

Neuralink was quietly founded in 2016.

Although Musk has touted the near-term applications of Neuralink, he often links the company up with his fears about artificial intelligence. Musk has said that he thinks humanity will be able to achieve a “symbiosis” with artificial intelligence.

Musk told “Artificial Intelligence” podcast host Lex Fridman in 2019 that Neuralink was “intended to address the existential risk associated with digital superintelligence.”

“We will not be able to be smarter than a digital supercomputer, so, therefore, if you cannot beat ’em, join ’em,” Musk added.

Musk has made lots of fanciful claims about the enhanced abilities Neuralink could confer. In 2020 Musk said people would “save and replay memories” like in “Black Mirror,” or telepathically summon their car.

Experts have expressed doubts about these claims. 

In September 2020, Insider spoke to neuroscientist Prof. Andrew Jackson of the University of Newcastle. He said: “Not to say that that won’t happen, but I think that the underlying neuroscience is much more shaky.”

He added: “We understand much less about how those processes work in the brain, and just because you can predict the position of the pig’s leg when it’s walking on a treadmill, that doesn’t then automatically mean you’ll be able to read thoughts.”

Another professor, Andrew Hires, told Insider in August 2020 that Musk’s claims about merging with AI are where he goes off into “aspirational fantasy land.”

Neuralink is developing two bits of equipment. The first is a chip that would be implanted in a person’s skull, with electrodes fanning out into their brain.

The chip sits behind the ear, while electrodes are threaded into the brain.

The chip Neuralink is developing is about the size of a coin and would be embedded in a patient’s skull. From the chip, an array of tiny wires, each roughly 20 times thinner than a human hair, fans out into the patient’s brain.

The wires are equipped with 1,024 electrodes, which are able to both monitor brain activity and, theoretically, electrically stimulate the brain. This data is transmitted wirelessly via the chip to computers, where it can be studied by researchers.

 

The second is a robot that could automatically implant the chip.

Neuralink’s surgical robot.

The robot would work by using a stiff needle to punch the flexible wires emanating from a Neuralink chip into a person’s brain, a bit like a sewing machine.

Neuralink released a video showcasing the robot in January 2021.

 

Musk has claimed the machine could make implanting Neuralink’s electrodes as easy as LASIK eye surgery. While this is a bold claim, neuroscientists previously told Insider in 2019 that the machine has some very promising features.

Professor Andrew Hires highlighted one feature in particular: the machine automatically adjusts the needle to compensate for the movement of the patient’s brain, which shifts during surgery with the person’s breathing and heartbeat.

The robot as it currently stands is eight feet tall. Neuralink is developing its underlying technology, while its outward design was crafted by Woke Studios.

In 2020, the company showed off one of its chips working in a pig named Gertrude during a live demo.

The Neuralink device in Gertrude’s brain transmitted data live during the demo as she snuffled around.

The demonstration was proof of concept, and showed how the chip was able to accurately predict the positioning of Gertrude’s limbs when she was walking on a treadmill, as well as recording neural activity when the pig snuffled about for food. Musk said the pig had been living with the chip embedded in her skull for two months.

 

“In terms of their technology, 1,024 channels is not that impressive these days, but the electronics to relay them wirelessly is state-of-the-art, and the robotic implantation is nice,” said Professor Andrew Jackson, an expert in neural interfaces at Newcastle University.

“This is solid engineering but mediocre neuroscience,” he said.

Jackson told Insider following the 2020 presentation that the wireless relay from the Neuralink chip could potentially have a big impact on the welfare of animal test subjects in science, as most neural interfaces currently in use on test animals involve wires poking out through the skin.

“Even if the technology doesn’t do anything more than we’re able to do at the moment — in terms of number of channels or whatever — just from a welfare aspect for the animals, I think if you can do experiments with something that doesn’t involve wires coming through the skin, that’s going to improve the welfare of animals,” he said.

Although none of the tech Neuralink has showcased so far has been particularly groundbreaking, neuroscientists are impressed with how well it’s been able to bundle up existing technologies.

Elon Musk presenting during the 2020 demo.

“All the technology that he showed has been already developed in some way or form, […] Essentially what they’ve done is just package it into a nice little form that then sends data wirelessly,” Dr. Jason Shepherd, an associate professor of neurobiology at the University of Utah, told Insider following the 2020 demonstration.

“If you just watched this presentation, you would think that it’s coming out of nowhere, that Musk is doing this magic, but in reality, he’s really copied and pasted a lot of work from many, many labs that have been working on this,” he added.

Elon Musk has boasted multiple times that the company has put the chip in a monkey, though neuroscientists aren’t that blown away by this.

Not pictured: the monkey Neuralink has implanted a microchip into.

Elon Musk excitedly announced in Neuralink’s 2019 presentation that the company had successfully implanted its chip into a monkey. “A monkey has been able to control a computer with its brain, just FYI,” he said, which appeared to take Neuralink president Max Hodak by surprise. “I didn’t realize we were running that result today, but there it goes,” said Hodak.

Musk reiterated the claim in February 2021 with a little extra detail.

“We’ve already got a monkey with a wireless implant in their skull, and the tiny wires, who can play video games using his mind,” Musk said during a long and wide-ranging interview on Clubhouse.

 Neuroscientists speaking to Insider in 2019 said that while the claim might grab the attention of readers, they did not find it surprising or even particularly impressive.

“The monkey is not surfing the internet. The monkey is probably moving a cursor to move a little ball to try to match a target,” said Professor Andrew Hires, an assistant professor of neurobiology at the University of Southern California.

Implanting primates with neural-brain interfaces that allow them to control objects on screens has been done before, and is expected in any research that aims to one day implant technology into human brains.

Elon Musk has said human testing could start by the end of this year, but he also said that last year.

Elon Musk

Elon Musk said during an appearance on the “Joe Rogan Experience” podcast in May 2020 that Neuralink could begin testing on human subjects within a year. He made the same claim during an interview on Clubhouse in February 2021.

Previously in 2019 Musk said the company hoped to get a chip into a human patient by the end of 2020.

Experts voiced doubt about this timeline at the time, since safety testing for a neural interface device involves implanting it in an animal test subject (normally a primate) and leaving it in place for an extended period to test its longevity, as any chip would have to stay in a human patient’s brain for a lifetime.

“You can’t accelerate that process. You just have to wait — and see how long the electrodes last. And if the goal is for these to last decades, it’s hard to imagine how you’re going to be able to test this without waiting long periods of time to see how well the devices perform,” Jacob Robinson, a neuroengineer at Rice University, told STAT News in 2019.

 

 

In the near-term, the uses of a chip in someone’s brain could be to treat neurological disorders like Parkinson’s.

Close-up footage of the needle on Neuralink’s brain surgery robot.

Improved neural interface technology like Neuralink’s could be used to better study and treat severe neurological conditions such as Parkinson’s and Alzheimer’s.

Prof. Andrew Hires told Insider another application could be allowing people to control robotic prostheses with their minds.

“The first application you can imagine is better mental control for a robotic arm for someone who’s paralyzed,” Hires said in a 2019 interview with Insider, saying that the electrodes in a patient’s brain could potentially reproduce the sensation of touch, allowing the patient to exert finer motor control over a prosthetic limb.

Elon Musk also says in the long-term the chip could be used to meld human consciousness with artificial intelligence – though experts are skeptical of this.

Elon Musk

As noted above, Musk often ties Neuralink to his fears about artificial intelligence, telling “Artificial Intelligence” podcast host Lex Fridman in 2019 that the company was “intended to address the existential risk associated with digital superintelligence” and that, since humans cannot be smarter than a digital supercomputer, the best option is to achieve “symbiosis” with it.

Experts remain skeptical of the more speculative claims, such as saving and replaying memories or merging human consciousness with AI. Prof. Andrew Jackson has cautioned that the underlying neuroscience behind such claims is “much more shaky,” and Prof. Andrew Hires has described the merging-with-AI vision as “aspirational fantasy land.”

Musk has also made dubious claims about Neuralink’s medical applications, at one point claiming the technology could “solve autism.”

During an appearance on the “Artificial Intelligence” podcast with Lex Fridman in November 2019, Elon Musk said Neuralink could in future “solve a lot of brain-related diseases,” and named autism and schizophrenia as examples.

Autism is classified as a developmental disorder, not a disease, and the World Health Organization describes schizophrenia as a mental disorder.

One neuroscientist told Insider there are big ethical problems with the idea of performing brain surgery for anything other than essential treatment.

Dr. Rylie Green of Imperial College London told Insider in 2019 that the notion of performing brain surgery on a healthy person is deeply troubling.

“To get any of these devices into your brain […] is very, very high-risk surgery,” she said. “People do it because they have severe limitations and there is a potential there to improve their life. Doing it for fun is not a great idea,” she added.

Read the original article on Business Insider

The 5 things everyone should know about cloud AI, according to a Sequoia Capital partner

Many people encounter cloud AI through their smart speaker.

  • Konstantine Buhler, a partner at Sequoia Capital, believes “cloud is going to become AI.”
  • Buhler insists that AI is not “magic,” and that it should be demystified and measured.
  • Buhler says companies can bake AI into their processes “horizontally.” 
  • This article is part of a series about cloud technology called At Cloud Speed.

If you ask Sequoia Capital partner and early-stage investor Konstantine Buhler about the role of artificial intelligence in cloud computing, his answer is unequivocal: “Cloud is going to become AI,” he told Insider. “I mean, all of the cloud will be based on AI.”

Snowflake’s $3.4 billion initial public offering and Databricks’ $1 billion funding round over the past year suggest big things ahead for AI in the cloud, and the industry is estimated at $40 billion and climbing. Major platforms like Amazon’s AWS, Microsoft Azure, and Google Cloud – as well as a host of startups – sell cloud-based tools and services for data labeling, automation, natural language processing, image recognition, and more, making it more affordable than ever for firms to dabble in AI.
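As one illustration of how low that barrier has become, the short sketch below calls a managed cloud NLP service from a few lines of Python. It uses Amazon Comprehend purely as an example of the kind of hosted service the article mentions, and it assumes AWS credentials are already configured; the other major platforms offer comparable APIs.

```python
# Minimal sketch: renting a pre-trained NLP model from the cloud instead of
# building one. Amazon Comprehend is used only as an example; assumes AWS
# credentials and region are already configured for boto3.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The new dashboard makes our weekly reporting painless.",
    LanguageCode="en",
)

print(response["Sentiment"])       # e.g. "POSITIVE"
print(response["SentimentScore"])  # per-label confidence scores
```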

Buhler, who has a master’s degree in artificial intelligence engineering from Stanford, revels in AI’s contributions, but also insists that the sector be demystified, and basic business fundamentals applied to it. 

His investments include CaptivateIQ, which automates business commissions, and Verkada, a security camera company that uses AI to recognize information like license plate numbers. Sequoia in general is an investor in some of the biggest names in AI, including Snowflake and Nvidia. 

“This next wave of enterprise and consumer technologies will all need AI built in,” Buhler said. “That’s going to be the standard going forward.”

AI’s ubiquity in the future is the first of a few basic lessons Buhler believes everyone should understand about AI’s impact over the next decade in the cloud. Here are the rest:

AI is not magic – it’s math

There is an (unwarranted) aura around artificial intelligence that ascribes to it supernatural brilliance.

“It seems complicated – it seems like magic of some sort, so people get intimidated and awed by it,” Buhler said. “Artificial intelligence is just more and more mathematical computations done rapidly, which at some point, for a moment, seems ‘magical.’ But it never is.”

Ordinary people should push to understand it, because it affects their lives. If you talk to Apple’s Siri or Amazon’s Alexa, you are conversing with AI. If your cat hops aboard a Roomba vacuum, both of you can appreciate how it “learns” to avoid objects in its path. On the other hand, a red-light camera that zooms in to read your license plate when you enter an intersection late and automatically fines you might not feel like such a welcome innovation.
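To make the “it’s just math” point concrete, here is a minimal sketch of a single neural-network layer: a matrix multiplication, an addition, and a simple nonlinearity. Modern AI systems are essentially this operation repeated billions of times, very quickly; the numbers below are random placeholders rather than a trained model.

```python
# One neural-network layer is ordinary arithmetic: multiply, add, squash.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # a tiny input vector (4 made-up features)
W = rng.normal(size=(3, 4))   # "learned" weights (random here)
b = np.zeros(3)               # biases

hidden = np.maximum(0.0, W @ x + b)  # matrix multiply, add bias, apply ReLU
print(hidden)
```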

AI should learn from the internet revolution

Buhler believes AI is at an inflection point similar to the one the internet reached 20 years ago. “Let’s learn a lesson from the dot-com boom,” when many overvalued companies imploded because they never materialized into real businesses, Buhler said: “Everybody had that mentality of, ‘let’s stick internet on this thing.’”

While cloud-based tools allow companies to spin up AI models with relative ease, not every problem needs to be solved with these kinds of algorithms. 

The business case must always be there – with the customer centered – or AI will not be practical.

“When you build an artificial intelligence model, it is not about the AI: It is about the customer,” Buhler said. “The internet was a communication revolution, and AI is a computation revolution. This is a new mechanism to serve people, and you have to understand their needs, or you’re going to spend years building the wrong thing.”

Konstantine Buhler is a partner and early-stage investor at Sequoia Capital.

Every company has a ‘horizontal’ AI opportunity

Buhler believes every company can bake AI into their business using the same basic “horizontal stack,” or processes that take raw data and turn it into actionable intelligence that can be used in different ways across business units. Buhler says companies like Databricks, Dataiku, DataRobot, and Domino Data Lab (“they all start with D for some reason”) help enterprises do this. 

Horizontal data processes can include data preparation (sorting text from image files, for instance), data labeling, data storage, creating algorithms that process the data, and, finally, applying the algorithms to specific business processes to help guide decision making. 

“It should be laid out that simply,” he says. That process “is all about enabling enterprises to bake artificial intelligence directly into their systems.”
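For illustration only, the sketch below walks toy data through that horizontal flow: preparation, labeling, a stand-in “model,” and a business decision. All names, rules, and data are invented; a real stack would swap in the kinds of platforms Buhler mentions at each stage.

```python
# Illustrative "horizontal" AI pipeline with invented data and rules; each
# function stands in for a stage a real platform would provide.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Record:
    text: str
    label: Optional[str] = None   # filled in by the labeling stage

def prepare(raw: List[str]) -> List[Record]:
    # Data preparation: drop empties, normalize text.
    return [Record(text=r.strip().lower()) for r in raw if r.strip()]

def label(records: List[Record]) -> List[Record]:
    # Data labeling: a trivial keyword rule stands in for human labelers.
    for r in records:
        r.label = "complaint" if "broken" in r.text else "other"
    return records

def score(record: Record) -> float:
    # "Model": a stand-in for a trained classifier's probability output.
    return 0.9 if record.label == "complaint" else 0.1

def decide(records: List[Record]) -> List[str]:
    # Apply the model to a business process: route likely complaints to support.
    return [r.text for r in records if score(r) > 0.5]

raw_feedback = ["The device arrived broken ", "Love the new color", ""]
print(decide(label(prepare(raw_feedback))))   # ['the device arrived broken']
```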

AI startups can also focus on verticals 

Buhler says there are also AI startups that are providing products tailored to more specific business needs. Gong, for example, helps salespeople evaluate opportunities, while competitor Chorus turns sales conversations into data. In the financial world, the startup Vise automates investment management, while in the legal world, Ironclad helps attorneys build contracts faster. Gong, Vise, and Chorus are Sequoia portfolio companies. 

The key in picking great AI startups, Buhler says, is being able to measure how a company is helping its customers: “It has to be a real business with outputs that can be quantified.” 

Read the original article on Business Insider

Break Free B2B Marketing: Oliver Christie on Making Life Better With AI

Oliver Christie of PertexaHealthTech.

Just what is a B2B influencer, and what do they actually look like?

In our third season of Break Free B2B Marketing video interviews we’re having in-depth conversations with an impressive array of top B2B influencers, exploring the important issues that each expert is influential about.

Successful B2B influencers have a rare mix of the 5 Ps — proficiency, personality, publishing, promotion, and popularity — as our CEO Lee Odden has carefully outlined in “5 Key Traits of the Best B2B Influencers.”

Ticking all of those boxes and more is Oliver Christie, chief artificial intelligence (AI) officer at PertexaHealthTech, whom we’re delighted to be profiling today.

Nothing helps individuals and the businesses they work for break free from the norm quite like a tech disruption. The microprocessor. The internet. Mobile data. E-Commerce. When these technologies came onto the scene, everything changed… but what’s next?

According to Oliver Christie, it’s AI. In his own words: “Artificial Intelligence is the biggest technology disruption of our generation.” As far as he’s concerned, AI isn’t just the future, it’s the present. In today’s new episode of the Break Free B2B Marketing Interview series, Christie speaks about the role of artificial intelligence in our lives, including topics like AI and morality, bias in AI, and the direction of AI’s future.

Artificial intelligence isn’t science fiction. It’s very much a scientific reality, and Oliver Christie is one of the leading experts speaking and consulting on the topic. In today’s 31-minute interview with TopRank’s own Josh Nite, he passes some of that expertise along.

Break Free B2B Interview with Oliver Christie

If you’re interested in checking out a particular portion of the discussion, you can find a quick general outline below, as well as a few excerpts that stood out to us.

  • :55 – Introduction to Oliver Christie
  • 3:05 – Human-centric artificial intelligence
  • 4:14 – Personalization and how to avoid the “diabolical side”
  • 5:46 – The ways Oliver believes AI will impact the life of the everyday person in the next couple years
  • 7:10 – Personalization on Amazon
  • 11:13 – How AI will be reshaping business
  • 13:46 – “What’s your new question?”
  • 16:50 – How the pandemic is changing the way technology is being developed
  • 19:10 – Bias in AI
  • 22:46 – How Oliver Christie found his niche as a thought leader
  • 27:58 – The importance of being yourself

Josh: I’m really interested in what we were talking about before we started. The idea of human-centric AI. AI can feel like this distant or cold thing or something that is, you know, it’s powering my Netflix algorithm. But I don’t know how it relates to my day to day. How is it a human-centric thing? We’re thinking about people and individuals.

Oliver: Something we’re moving more and more towards is thinking about people as individuals and what matters to us. How we talk. How do we act? What are our interests? You mentioned Netflix. The algorithm which says what you should watch next. If that’s successful, you watch more. If it has an understanding of what you might like, you can see more media if you get it. If it gets it wrong, if it doesn’t know who you are, it is a turnoff and you never see the difference between that and other media services. I think that the next big leap is going to be our products and services are going to be much more reactive to who we are. How will we live? And so on. But there are some big challenges. So it’s not a quick and easy thing to do. But I think the future is pretty exciting.


Josh: Have you ever been on Amazon while not logged in? It’s such a striking thing to open an incognito window or something and you see how much personalization goes into that page and how just clueless it seems when it’s not on there.

Oliver: Amazon’s an interesting one. Its algorithm is better than nothing. And it works to a degree. Some of the time, if you match a pattern — so the music you listen to, the books you buy — if someone is quite close to that, it works. As soon as you deviate, it falls down, or as soon as you’re looking for something original, it also doesn’t work. So I think Amazon is a good example of where we are at the moment, but not where we could be next. Amazon doesn’t once ask, what are you trying to achieve in your shopping? What are you trying to do next? And I think that’s going to be one of the big shifts that will happen.

Josh: What are we trying to achieve with that shopping, though? Besides, for me, it’s filling the void of not being able to go out to a concert and having a party, having something to look forward to with deliveries coming in. What kind of intent are you thinking about?

Oliver: Imagine you had the same shopping experience and let’s say it’s for books, videos, or courses. And the simple question can be, what would you like to achieve in your career in the next six months? Where would you like to be, or what’s happening in your personal life? Want some advice and information which could be really useful? I think this sort of tailoring is where things are heading. So it’s still selling books and courses and videos and so on. But it’s understanding the intent behind content. What could this do to your career? What could this do for your family life, your love life, whatever it might be? Now, of course, we’re all locked down at the moment. So it’s a very different sort of situation. But I think some of the same things still apply. There’s going to be a back and forth: how much do you want to give up about your personal life for a better recommendation? And I think it’s kind of early in some respects. But the data in the past shows, yes, if you get something positive out of it, you’ll have to give up some of that privacy.


Keep your eye on the TopRank Marketing Blog and subscribe to our YouTube channel for more Break Free B2B interviews. Also check out episodes from season 1 and season 2.

Take your B2B marketing to new heights by checking out our previous season 3 episodes of Break Free B2B Marketing:

The post Break Free B2B Marketing: Oliver Christie on Making Life Better With AI appeared first on B2B Marketing Blog – TopRank®.