Facebook is working on tech that will monitor human life, the company said in a new blog post.
The idea is to build AI that sees the world as humans do, from a first-person perspective.
This AI could be used for what Facebook envisions as the future of smartglasses.
Facebook envisions a future where smartglasses “become as useful in everyday life as smartphones,” the company said in a new blog post.
In order to achieve that future, such devices will require powerful AI software that can read and respond to the world around the headset’s user. And the only way to train AI to see and hear the world like humans do is for it to experience the world like we do: from a first-person perspective.
“Next-generation AI will need to learn from videos that show the world from the center of action,” the blog post said.
Facebook’s solution to this problem is a new project, titled “Ego4D,” which will collate data from “13 universities and labs across nine countries, who collected more than 2,200 hours of first-person video in the wild, featuring over 700 participants going about their daily lives.”
The data will be open to the research community, the blog post said, but the goal of the project is clear: To create the type of AI that can power a slew of Facebook devices currently in the works.
There’s even a Facebook division, known as Reality Labs, that’s focused on research and development for the future of VR and AR tech.
That division is headed by longtime Facebook exec Andrew “Boz” Bosworth, who shared images of himself in various prototypes this past week.
“Smart House” depicts a computer system that turns against the family it’s supposed to assist.
Giants like Google and Amazon have since brought that smart tech to life with products like Google Assistant and Alexa.
The movie’s co-writer told Insider tech will have “glitches” but will be more good than bad.
Disney’s “Smart House” debuted in 1999, well before the ubiquity of computers and smartphones (the phrase “electronic mail” and dial-up internet are actually referenced in the film), and became another Disney Channel movie cemented in the hearts of Millennials.
Pat – short for Personal Applied Technology – is the computer running the home. She can make debris on the floor disappear through “floor absorbers” and project anything the occupants wish in a virtual reality room, a nod to “The Veldt,” a 1950 cautionary short story about smart technology upon which the movie is loosely based.
Two decades later, that generation is well into adulthood, inundated with real-life smart technology that’s ours for the taking.
Amazon’s Alexa, Apple’s Siri, Google Assistant, and other 21st century giants have brought much of the technology depicted in “Smart House” to life. But that same technology and the companies that create it have also posed new issues, such as privacy problems – and developing tech without anticipating the monster it can become.
“Smart House” co-writer Stu Krieger worked on 11 Disney Channel movies, including “Zenon: Girl of the 21st Century.” He’s now a professor in the theater department at the University of California Riverside. He spoke to Insider about how he helped write then-futuristic tech into the beloved DCOM.
“What were those things as a kid that would have been absolutely amazing, so the screens in the bedrooms and voice commands and all those things,” Krieger said. He had to “get back into my 10-year-old, 12-year-old head and what would I have wanted and then start to think about what might be a technological iteration of that fantasy.”
‘I’m sorry. I can’t do that, Nick’
Pat can track the family constantly, which the father, Nick, is hesitant about from the get-go.
“That’s kinda creepy, isn’t it? I mean, it’s like Big Brother is watching you,” he says to the home’s creator.
But the problems really start when Pat taps into her artificial intelligence capability.
Ben, the son mourning his late mother, sneakily feeds traditional, ’50s-era material into Pat’s system, and she ditches her smart-assistant role for an overbearing, increasingly controlling presence. Krieger said he wrote that with the ’50s sitcoms of his childhood, which depicted stereotypical motherly figures, in mind.
“If you Googled how to become a mom, where would you go?” Krieger said.
Once Pat overrides her system being shut down, she traps the family inside the house, refusing to open the door as Nick requests.
“I’m sorry. I can’t do that, Nick,” Pat answers, a nod to HAL 9000’s infamous line in “2001: A Space Odyssey,” another beloved Sci-Fi movie about a computer system gone rogue. It’s only after Ben feeds Pat a new data point, that she could never truly be his mother given her virtual nature, that she stands down.
The movie has a much happier ending (the house doesn’t team up with the children to kill the parents, as is implied in “The Veldt”), but it poses an interesting question: Can we tame the technology we create?
Krieger, a self-described “technophobe” who said he has reluctantly come to own smart TVs and an Amazon Alexa, still said there’s hope.
“I do think that evolution will have its glitches and will have its bumps, but I do ultimately believe in its ability to work things out and become more positive than negative,” Krieger said.
AI enables researchers to better define mental illness subtypes and understand patient symptoms.
The technology may one day help guide psychiatry diagnosis and patient care.
Addressing ethical concerns surrounding AI in psychiatry may encourage clinicians to adopt the technology.
This article is part of the “Healthcare Innovation” series, highlighting what healthcare professionals need to do to meet this technology moment.
Psychiatry researchers are using artificial intelligence to develop a better understanding of mental illness, with the goal of creating more effective and personalized treatment plans.
“Psychiatry is a unique field because mental healthcare providers generally don’t have specific biomarkers or clear imaging findings indicating mental health pathology to make a diagnosis,” said Dr. Ellen Lee, assistant professor of psychiatry, University of California San Diego, and staff psychiatrist, VA San Diego Healthcare System.
Instead, practitioners largely rely on the patient’s self-reported symptoms and medical history, she said. To further complicate matters, patients with the same diagnosis may have different symptoms.
Artificial intelligence is helping researchers to assess the heterogeneity of psychiatric conditions more fully, said Dr. Charles Marmar, Lucius N. Littauer psychiatry professor and Department of Psychiatry chair at NYU Grossman School of Medicine. “It can help us determine whether there is one kind of depression or seven kinds of depression.”
The technology can sift through data about varying patient behaviors, medical, social, and family histories, differing responses to prior treatments, and information acquired through new forms of monitoring – such as wearable technology – to help fine-tune decisions about care, Lee said.
“AI in psychiatry has been a real revolution in many ways,” Lee said. However, clinicians may need some time to adapt to the idea of relying on this technology in the clinic. As AI becomes more available, participating in educational opportunities that address how algorithms enhance clinical decision-making and what data they contain can help with the transition process.
Refining diagnosis and patient monitoring
One goal of AI research is to hone patient diagnoses into different condition subgroups so doctors can personalize treatment. “It’s really about precision medicine,” Marmar said.
For example, he and his colleagues have been using machine learning – a form of AI that employs computer algorithms and decision rules to analyze and classify large amounts of data – to evaluate the heterogeneity of psychiatric illnesses. The more data these algorithms process, the more accurate they become.
They recently published a study in Translational Psychiatry using machine learning to identify two forms of post-traumatic stress disorder in veterans, a mild form with relatively few symptoms and a chronic, severe form in which patients experienced high levels of depression and anxiety.
Using machine learning, Marmar plans to further explore and validate a possible five PTSD subtypes, including anxious and dissociative, depressed, cognitive functioning impaired, mild, and severe. The research team will also look at molecular and brain circuit markers, genes, and gene products such as proteins and metabolites to see how they are associated with each form of PTSD. “In the end, we hope to have a diagnostic blood test for each subtype,” he said.
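The subtype-discovery idea described above can be illustrated with a toy clustering sketch. This is purely illustrative – synthetic numbers and a simple one-dimensional k-means, not the researchers’ actual data or pipeline:

```python
import random

def kmeans_1d(scores, k=2, iters=20, seed=0):
    """Toy k-means on one-dimensional symptom-severity scores."""
    rng = random.Random(seed)
    centers = rng.sample(scores, k)
    for _ in range(iters):
        # Assign each score to the nearest center.
        groups = [[] for _ in range(k)]
        for s in scores:
            i = min(range(k), key=lambda j: abs(s - centers[j]))
            groups[i].append(s)
        # Recompute each center as the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Synthetic severity scores: a mild cluster and a severe cluster.
mild = [18, 22, 20, 25, 19]
severe = [61, 58, 65, 70, 63]
print(kmeans_1d(mild + severe))  # one low center, one high center
```

Real subtype studies use far richer, multidimensional data, but the principle is the same: let the algorithm group patients by patterns in their measurements rather than by a single diagnostic label.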
Another avenue in AI research is incorporating sensor data through wearable health technologies such as a Fitbit to help determine how to better manage patients, said Lee. For example, mental healthcare providers can ask patients to report on how they’ve been sleeping for the last month or they can assess data from technology that monitors sleep patterns.
“There’s a real disconnect between how people perceive they sleep versus how they actually sleep,” Lee said. Sleep monitoring technology combined with AI may be much more objective. Accurately tracking sleep patterns could give providers an indication of which patients with bipolar disorder might be more at risk of experiencing an episode of mania, allowing for adjustments in medication, she said.
Physician training and ethical concerns
Mental-health professionals will likely need to shift from patient self-reports and caretaker reports to incorporating AI technology in their clinical decision-making, Lee said.
“There are opportunities to learn about AI for physicians who are interested,” she said, adding that annual psychiatry conferences and continuing medical education events are trying to make AI more accessible to clinical audiences.
NYU Grossman School of Medicine also offers AI and machine learning courses to medical students interested in radiology. Those who are also studying psychiatry can focus on neuroradiology within this program.
Psychiatrists need to be informed about what types of data are being used to develop AI algorithms to be comfortable with using the technology, Lee said. Decisions made in clinical practice can have serious repercussions in a patient’s life, such as hospitalization or the loss of capacity to live independently. Physicians may be hesitant to hand this power over to an AI algorithm, Lee said.
“AI algorithms and models are only as good as the data they’re built on,” Lee said. “We need data built on diverse types of patients, from different racial, ethnic sociodemographic backgrounds.”
“A super-intelligent machine that controls the world sounds like science fiction,” said Manuel Cebrian, co-author of the study and leader of the research group. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it [sic].”
The question of whether a superintelligence could be contained is hardly a new one.
Manuel Alfonseca, co-author of the study and leader of the research group at the Max Planck Institute’s Center for Humans and Machines, said that it all centers around “containment algorithms” not dissimilar to Asimov’s First Law of Robotics, according to IEEE.
In 1942, prolific science fiction writer Isaac Asimov laid out The Three Laws of Robotics in his short story “Runaround” as part of the “I, Robot” series.
According to Asimov, a robot could not harm a human or allow them to come to harm, it had to obey orders unless such orders conflicted with the first law, and they had to protect themselves, provided this didn’t conflict with the first or the second law.
The scientists explored two different ways to control artificial intelligence, the first being to limit an AI’s access to the internet.
The team also explored Alan Turing’s “halting problem,” concluding that a “containment algorithm” to simulate the behavior of AI – one that would “halt” the AI if it moved to harm humans – would simply be unfeasible.
Alan Turing’s halting problem
Alan Turing’s halting problem asks whether it is possible to determine in advance if a given program will eventually stop or will continue running indefinitely.
A machine is asked various questions to see whether it reaches conclusions, or becomes trapped in a vicious cycle.
This test can be applied to less complex machines – but with artificial intelligence, the problem is complicated by a superintelligence’s ability to hold every computer program in its memory.
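The underlying argument, Turing’s 1936 diagonalization, can be sketched in a few lines of Python. Assume a perfect checker `halts(program)` existed; then a program built to do the opposite of its own prediction defeats it. The function names here are illustrative, not from the study:

```python
def make_contrary(halts):
    """Given a claimed halting checker, build a program it must misjudge."""
    def contrary():
        # Do the opposite of whatever the checker predicts about us:
        # loop forever if it says we halt, halt if it says we loop.
        if halts(contrary):
            while True:
                pass
    return contrary

# Any concrete checker is defeated. A checker that always answers
# "does not halt" is refuted because contrary() promptly halts:
always_no = lambda prog: False
c = make_contrary(always_no)
c()  # returns immediately, contradicting the checker's prediction
print("the checker predicted non-halting, but contrary() halted")
```

A checker that answered “halts” would instead send `contrary` into an infinite loop, so no checker can be right about this program – which is why the researchers conclude a perfect containment algorithm cannot exist.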
“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” said the researchers.
If artificial intelligence were educated using robotic laws, it might be able to reach independent conclusions, but that doesn’t mean it can be controlled.
“The ability of modern computers to adapt using sophisticated machine learning algorithms makes it even more difficult to make assumptions about the eventual behavior of a superintelligent AI,” said Iyad Rahwan, another researcher on the team.
Rahwan warned that artificial intelligence shouldn’t be created if it isn’t necessary, as it’s difficult to map the course of its potential evolution and we won’t be able to limit its capacities further down the line.
We may not even know when superintelligent machines have arrived, as trying to establish whether a device is superintelligent compared with humans is not dissimilar to the problems presented by containment.
At the rate of current AI development, this advice may simply be wishful thinking, as companies from Baker McKenzie to tech giants like Google, Amazon, and Apple are still in the process of integrating AI into their businesses – so it may be a matter of time before we have a superintelligence on our hands.
Unfortunately, it appears robotic laws would be powerless to prevent a potential “machine uprising” and that AI development is a field that should be explored with caution.
It’s “everywhere” and it has the potential to “make people’s lives easier.”
AI is already present in “a number of products that you don’t really think about,” he said. He gave examples you can see if you just look down at your iPhone, from the way it recognizes your face and fingerprint, to “the way that Siri works,” and even how photos are grouped together.
“I see that we’re at the very early stages of what it can do for people and how it can make people’s lives easier,” he said.
“I believe that technology can do so much good in the world,” he said – though “it depends on the creator, and whether they thought through the ways it can be used and misused.” Cook has been thinking about the possibility of misuse in the AI space for years.
In a 2018 speech at a privacy conference in Brussels, Belgium, Cook warned against misuse of the emerging technology while slamming firms like Facebook for weaponizing data for profit. “Advancing AI by collecting huge personal profiles is laziness, not efficiency,” he said at the time.
“For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound,” he continued. “We can achieve both great artificial intelligence and great privacy standards. It’s not only a possibility, it is a responsibility.”
He echoed those sentiments in the TIME interview, stating that “privacy is a basic human right,” along with “one of the most consequential issues of our time.”
Once privacy is contended with, Cook said he is “really stoked” about overlaying the “virtual world with the real world” through AI and augmented reality. He said he is hopeful it can be done in a way “that is not distracting from the physical world and your physical relationships,” but is “enhancing” to your relationships.
“Mainly, I am so optimistic about all the things that can happen in our lives that free up time for more leisure activities and other things that we want to do in life,” he said.
The “father” of Iran’s nuclear weapons program was at the top of Israel’s hit list for 14 years. On November 27, 2020, after a failed attempt to kill him a decade earlier, the Mossad finally managed to assassinate Mohsen Fakhrizadeh.
Their weapon of choice? A remote-controlled machine gun that required no on-site operatives and utilized advanced artificial intelligence technology, according to a report by The New York Times.
The deadly weapon was a special model of a Belgian-made FN MAG machine gun, which was attached to an advanced robotic apparatus, The New York Times reported.
It weighed about a ton and was controlled by Mossad operatives outside of Iran, thus ensuring the safety of Israeli agents, intelligence officials told the media outlet.
In order to get the weapon into the country, The Times said, it was smuggled into Iran piece by piece and was secretly assembled in time for the hit.
The Mossad had been following Fakhrizadeh since 2007, The Times reported, and the Israeli national intelligence agency reportedly set in motion plans to assassinate him in late 2019 after discussions with former President Donald Trump and high-ranking US officials.
Fakhrizadeh was a top target because Israeli intelligence officials said he was leading Iran’s efforts to build a nuclear bomb.
Israel had considered a variety of methods to assassinate Fakhrizadeh, according to The Times; the Mossad had weighed detonating a bomb near his armed convoy to force it to halt, then attacking him with snipers, but that plan was shelved.
Instead, the remote-controlled machine gun idea was floated. The New York Times reported that the computerized weapon was attached to an abandoned-looking pickup truck, which was fixed with cameras and explosives. It was positioned at a major junction on Fakhrizadeh’s route to his country home by Iranian agents working with the Mossad.
Once Fakhrizadeh’s vehicle arrived at the junction, in November 2020, Mossad operatives outside of Iran used the cameras to positively identify their target and unleashed a hail of bullets from the remote-controlled machine gun.
He got out of his car, The New York Times said, and was hit with three more bullets that “tore into his spine.” Reportedly, his bodyguards looked confused as they could not see an obvious assailant.
The attack took less than 60 seconds, and Fakhrizadeh was the only person hit, the paper reported.
The explosives on the pickup truck were supposed to damage the machine gun beyond repair, but the weapon instead remained largely intact.
As a consequence, The Times said, Iran’s Revolutionary Guards were able to correctly assess that a remote-controlled machine gun “equipped with an intelligent satellite system” using artificial intelligence had carried out the attack.
As a business leader, you probably want to improve your organization’s diversity, merit, and fairness – whether related to hiring, advancement, teamwork, or other initiatives.
It’s a critical area with high stakes. For example, there’s evidence that diverse teams perform better, due to an integration of different skills and perspectives, and that fair systems generate employee devotion by giving credit where credit is due. Moreover, racial, gender, and other biases in hiring, promotion, and compensation have significant legal implications for businesses and other organizations.
The good news is that there are new ways to improve diversity, fairness, and merit – with the data already on hand in your enterprise. New technology enables you to use data to surface, understand, and address issues related to diversity and performance in unprecedented and sustainable ways.
The following are three specific ways to make that happen.
1. Give credit to the right people
Although stereotypes are typically wrong about any given individual, they influence how a person’s contributions are valued and recognized. Specifically, stereotypes create a “believing is seeing” situation in which people distort reality to fit their biased view of a given group – as when gender stereotypes contribute to the devaluing of women’s contributions in multiple settings.
To resolve such bias, the broad idea is to create transparency around who contributes what and how, so that actual performance is quantified – fairly – and visible. While it was previously costly to collect comprehensive, accurate data that creates transparency, such data is now routinely collected inexpensively as a byproduct of team-based collaboration platforms.
Popular tech collaboration platforms like Slack, Dropbox, and Zoom unobtrusively capture real-time performance information that can be mined with AI to reveal unprecedented windows into the drivers of organizational performance: who leads thinking around a specific project, resolves key problems, initiates important conversations, and so on.
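As a toy illustration of mining collaboration data, here is a sketch that credits whoever starts each discussion thread in a message log. The data and field names are invented, not an actual export format from any of these platforms:

```python
from collections import Counter

# Hypothetical export of chat messages: (thread_id, author, timestamp).
messages = [
    ("t1", "amara", 1), ("t1", "ben", 2), ("t1", "amara", 3),
    ("t2", "chen", 1), ("t2", "amara", 2),
    ("t3", "chen", 1), ("t3", "ben", 2), ("t3", "ben", 3),
]

def thread_initiators(msgs):
    """Credit the earliest message in each thread to its author."""
    first = {}
    for thread, author, ts in msgs:
        if thread not in first or ts < first[thread][0]:
            first[thread] = (ts, author)
    return Counter(author for _, author in first.values())

print(thread_initiators(messages))  # who starts discussions, by count
```

In practice the same counting idea extends to who resolves problems, who replies to whom, and how often – the signals the passage above describes – though real systems combine many such features rather than a single tally.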
Not only can this new data give credit where credit is due, but it can also promote effort and fairness. If a given factor affects performance, you can identify what’s driving that outcome and take steps to address it. For example, if you find women’s brainstorming contributions increase team performance when there is a formal structure for turn-taking rather than an unstructured free-for-all, that suggests teamwork processes, not gender, are driving outcomes – and processes can be changed. You can then use what’s learned to improve processes, make better, evidence-based people-advancement decisions, rectify misconceptions, and raise the organization’s entire tide of performance.
2. Create diverse teams
The famed Moneyball approach to team performance showed that team diversity on important metrics rather than star players drives success. Here, new data can help you understand the performance link between teams and diversity.
Much business today hinges on teamwork. Thus, a basic question might no longer be whether teams of all men or all women perform best, but whether more gender-balanced teams perform better than those heavily weighted toward men or women. For example, does an engineering team with at least one woman reach milestones faster than an all-male group?
My past investigation of millions of teams of biomedical researchers over a 20-year period found that controlling for individual past success, mixed-gender, balanced teams are more likely to publish more influential ideas than all-male or all-female groups, and that the mixed-gender-team effect becomes even more pronounced as the team’s gender balance becomes more equal. Use this simple insight to broadly promote performance and fairness.
3. Examine your policies with data
Data is critical to creating effective, innovative people policies, especially as labor markets and work arrangements change.
For example, one company recently used data to discover that their relocation policy hampered their recruitment of strong candidates, particularly women: When they asked prospective hires to move in the middle of the school year, rather than in the summer, women with children were prone to decline the offer because it disrupted family routines, even though they wanted the job. With this insight, and new options for remote work, the company is able to make better offers, reinforce a culture of support, and increase their yield of top talent, with no real increase in costs.
Take a similar, open-minded approach to examining your policies and practices using data and smart hypotheses, and you may be pleasantly surprised by what you find.
I hope the ideas here inspire you to harness data and AI in service of diversity, merit, and fairness in your organization. Remember: Leadership plays a critical role in these efforts, through championing initiatives and smartly exploiting data to quantify true relationships and promote important new insights.
The pandemic has quickly illuminated teledermatology’s specific applications.
A hybrid approach may ensure the capture of high-quality images for evaluation.
Advances in technology may expand the types of skin conditions assessed with teledermatology.
Increased reliance on teledermatology during the COVID-19 pandemic has not only helped patients avoid contracting infection but has also given dermatologists a better understanding of how best to employ the technology in daily practice.
“Teledermatology has definitely become more important during the pandemic and has allowed us to keep delivering effective care to our patients while they are in the safe environment of their homes,” Dr. Trilokraj Tejasvi, chair of the American Academy of Dermatology teledermatology task force, chair of the American Telemedicine Association special interest group for teledermatology, and associate professor of dermatology and director of teledermatology at the University of Michigan, said.
As the field scaled up teledermatology in March of 2020, dermatologists quickly learned that the technology has specific applications, Dr. Joseph C. Kvedar, chair of the ATA board of directors, professor of dermatology at Harvard Medical School, and senior advisor of virtual care at Mass General Brigham, said. Patient selection is critical to the success of teledermatology, as is knowing the limitations of some available technologies, he said.
When using teledermatology, understanding the typical workflow of patient care and adhering to HIPAA regulations are also key, Tejasvi said. He added that professional societies such as the AAD and the ATA offer a variety of online teledermatology educational tools on these topics to their members.
Workflow and HIPAA compliance
Some appointments managed via teledermatology consist of an asynchronous approach with an initial online questionnaire or intake form about medical history and symptoms, Tejasvi said. Patients then take photos of their skin concerns with a smartphone and send images through the online patient portal.
A photo review by the dermatologist is followed by a video telehealth visit with the patient to collect more nuanced information and to provide a diagnosis and recommendations, Tejasvi said. Dermatologists may also prescribe necessary medications and give additional at-home care recommendations.
Video visits have gained popularity during the pandemic because of their real-time interaction. However, because video lacks high-quality static images, a hybrid approach using synchronous and asynchronous elements is becoming the new normal, Tejasvi said.
“You need to use HIPAA-compliant software, whether it is through a system-wide portal or you’re using some private party software,” Tejasvi said. “Patient privacy is paramount.”
Careful patient selection
Patients best suited for teledermatology include those with acne, eczema, psoriasis, or wounds, Kvedar said. These patients may occasionally need to go into a lab for blood work to monitor medications, with follow-up visits via telehealth.
In contrast, patients with skin cancer need in-person visits about every six months to a year, Kvedar said. Video-conferencing apps on phones and laptops do not offer high enough resolution for this type of monitoring.
However, someone with a new skin growth might be initially evaluated via a teledermatology appointment using photos sent to the patient portal, Kvedar said. Dermatologists can then determine whether the patient needs immediate, in-person follow-up or can wait for a regularly scheduled appointment.
Mole mapping technology is advancing
While teledermatology currently isn’t ideal for evaluating suspicious lesions, high-resolution digital photography has helped with mole mapping in the clinic, Dr. Adam Mamelak, a dermatologist in private practice in Austin, Texas, said. By using high-resolution photos to track moles over time, dermatologists can better recommend interventions, diagnostic testing, and treatment.
High-resolution, total body photography and dermoscopy, or microscopic examination of the skin surface, combined with artificial intelligence is an impressive advance in mole monitoring, Tejasvi said. However, the technology is cost-prohibitive because of the lack of reimbursement and may require more square footage than practices or institutions want to dedicate to the hardware required. Some patients with a history of melanoma or with more than 100 moles may want to pay out-of-pocket for these services.
Cloud-based systems and apps on smartphones can now run AI interpretations of moles and pigmented lesions, Mamelak said. In some cases, these apps can make patient self-evaluations more accurate, he said. Triage is one such example. “I predict that many of the larger hardware-heavy systems will become obsolete as the mobile apps become more developed,” he said.
AI’s applications to dermatology will be manifold; among other things, it will help primary care doctors triage patients for dermatology referral, said Kvedar, who is an adviser to LuminDx, which is developing such a system. “I think that’s the next phase of care we need to prepare for.”
President Joe Biden announced a new security partnership with the UK and Australia at the White House on Wednesday, joined virtually by British Prime Minister Boris Johnson and Australian Prime Minister Scott Morrison.
The three countries will work together to “strengthen the ability of each” to pursue their defense interests through cooperation on defense technology.
“We have always seen the world through a similar lens,” said Morrison. “We must now take our partnership to a new level.”
Morrison said the first major initiative of AUKUS would be to deliver a new nuclear-powered submarine fleet to Australia, working together over the next 18 months “to seek to determine the best way forward to achieve this.” The subs will be built in Adelaide, the prime minister said.
Morrison stressed that Australia is not seeking nuclear weapons or to develop a civil nuclear capability and would adhere to its nuclear nonproliferation obligations.
“This will be one of the most complex and technically demanding projects in the world, lasting decades and requiring the most advanced technology,” Johnson said, hailing it as “a new chapter in our friendship.”
The AUKUS acronym “sounds strange,” Biden said in his remarks, adding “but this is a good one.”
Biden said the three countries will work together to improve their “shared ability” to take on 21st-century threats.
“We’re taking another historic step to deepen and formalize cooperation among all three of our nations because we all recognize the imperative of ensuring peace and stability in the Indo-Pacific over the long-term,” Biden said.
“This effort reflects the broader trend of key European countries playing a supremely important role in the Indo-Pacific,” Biden added, citing France as having “a substantial” presence in the region, where it has several overseas territories.
According to Australian media, Canberra will abandon a roughly $66 billion deal with France for 12 state-of-the-art conventionally powered attack submarines. A French firm was picked to build the subs in 2016, but the deal fell apart amid local disputes, rising costs, changing designs, and delayed schedules.
Gerard Araud, a former French ambassador to the US, tweeted that the US and the UK “have stabbed [France] in the back in Australia.”
Biden also emphasized that Australia was not seeking a nuclear weapons capability.
“We’re not talking about nuclear-armed submarines. These are conventionally-armed submarines that are powered by nuclear reactors,” Biden said. “This technology is proven. It’s safe.”
Details of the agreement were reported earlier on Wednesday by US and Australia media.
US officials stressed that the partnership was “not aimed or about any one country,” and China wasn’t mentioned during the leaders’ remarks, but a White House official and a congressional staffer familiar with the matter told Politico that countering China is an important subtext of the new partnership.
The leaders on Wednesday stressed the joint nature of the effort, but the agreement comes as countries in the region seek to bolster the ability of their militaries to operate together and individually. Australia has already announced plans for major defense investments and to add new military capabilities, including long-range missiles.
Australia’s efforts and those in other countries have grown recently, spurred by China’s rapid increase in military strength.
It’s become clear over the last four years that the US and Australia “publicly now agree and recognize that the United States’ military preponderance in the Indo-Pacific is past,” Ashley Townshend, director of Foreign Policy and Defence at the United States Studies Centre at the University of Sydney, said at an event earlier this month.
Washington and Canberra “are now very much in a Plan B-era where both countries are working together collectively alongside other committed regional security partners … to advance a networking agenda that can in some way offset and complement the United States’ extended security guarantees to Asian countries going forward,” Townshend added.
Long-range missiles remain a focus, but submarine construction and presence in Australia are both important for Australia, Townshend said Wednesday, calling the new partnership a “surprising and very welcome sign of Biden’s willingness to empower close allies like Australia with highly advanced defence tech assistance.”
US export controls and concerns over defense industrial issues may still limit the extent of cooperation, but the three leaders stressed their need to cooperate against common threats.
“We need to be able to address both the current strategic environment in the region and how it may evolve because the future of each of our nations and indeed the world depends on a free and open Indo-Pacific enduring and flourishing in the decades ahead,” Biden said.
President Joe Biden is set to announce a new security partnership with the UK and Australia in a speech at the White House at 5pm ET, where he will be joined virtually by Prime Minister Boris Johnson of the United Kingdom and Prime Minister Scott Morrison of Australia.
The new working group – dubbed AUKUS, incorporating each country’s initials – will allow the three Anglophone countries to share advanced technologies including artificial intelligence, cyber, underwater systems, and long-range strike capabilities, according to Politico.
According to a White House official and a congressional staffer familiar with the matter, countering China is an important subtext of the new security partnership.