Balancing safety and privacy: NYC subways to test AI weapons detection
New York City will soon begin testing the use of artificial intelligence to help detect guns and other weapons at subway turnstiles. On Thursday, March 28, New York Mayor Eric Adams, D, announced that the city would be deploying the technology in a few months.
“This is a Sputnik moment, when President Kennedy said we were going to put a man on the moon and everyone responded,” Adams said. “Well, today we said we’re going to bring the technology that can identify guns and other dangerous weapons, and our private industry responded.”
However, with the new technology comes increased scrutiny of ethics and accuracy. Evolv, the company the city is partnering with, reportedly has issues with its detection systems, ranging from misidentifying umbrellas as weapons to failing to detect steel and aluminum tubes fashioned like gun barrels.
Evolv’s AI also missed knives in students’ pockets in some school districts and mistook backpacks and lunchboxes for bombs. With misidentification also comes risk, according to a legal expert who spoke with local New York City station Fox 5.
“We’re primarily concerned with false positives,” Jerome Greco, an attorney at the Legal Aid Society, said. “People being falsely identified as having a weapon, causing law enforcement to be afraid and possibly react in a way that could get themselves hurt and also the individual hurt.”
Meanwhile, a class action lawsuit filed by investors also claims that Evolv “misrepresented the efficacy of its products and deceived the general public, customers and investors.”
As Adams embraces AI, some say the mayor’s partnership with Evolv raises questions. Several of Adams’ top donors reportedly invested heavily in the company.
Despite the criticism, Adams maintains the tech will be used within the law, balancing privacy and public safety.
The testing will begin in about three months. The 90-day waiting period is mandated by the POST Act, which requires the New York Police Department to disclose its use of surveillance technology and report on its impact before deployment.
Body scanners will eventually be installed at subway stations; however, the exact number and locations have not been disclosed. The scanners use “advanced sensor technology and artificial intelligence to distinguish between weapons and everyday items,” according to Evolv’s website.
AI weapons detection is already used at popular New York City venues, including Citi Field, Lincoln Center and the Metropolitan Museum of Art.
Adams’ announcement comes less than a week after a shooting in a subway car and multiple stabbings within the subway system.
On Monday, March 25, Adams announced plans to crack down on subway violence by sending at least 800 police officers to monitor turnstiles. Hours after the announcement, a man died after being shoved onto the subway tracks and struck by a train in East Harlem.
Despite notable incidents, crime is down in the New York City subway system for the month of March. Still, Adams contends that the use of AI is necessary to keep people safe.
“I say to those who are afraid of scanners and would rather not walk through it, I’d rather you be safe,” Adams said. “So, let’s bring on the scanners. We are taking a huge step towards public safety.”
Adams also said the city will hire more mental health clinicians to work with police to move people with mental illness out of the subway system and “into care.”
A robotic future: US Army considering a platoon of steel to save lives
A proposal to embed new drone and robotics platoons within brigades U.S. Army-wide is reportedly under consideration. The Army says the platoons could save lives by further limiting human troops’ exposure to direct combat.
The number of robotic platoons embedded within units is still being determined. However, Defense One reported that, if the proposal is implemented Army-wide, the Army has the capacity to outfit 16 Robotics and Autonomous Systems (RAS) platoons.
The military has experimented with ground robots for a long time. Some are called quadruped robots, more commonly known as “robot dogs.” The kinks in the ground robots are still being worked out. According to Defense One, the ground bots had some problems sensing obstacles and didn’t entirely listen to the humans “barking” orders at them.
Currently, the Army has two RAS platoons. One platoon is in the 82nd Airborne and the other is experimental.
The robotic platoons are equipped with a number of different drones, including the Ghost-X drone. They also feature a Squad Multipurpose Transport Vehicle, which can be fitted with different tools or weapons, such as a Javelin anti-tank missile, depending on the mission.
RAS platoons are capable of scouting out locations and engaging enemy forces before any friendly humans set foot on the battlefield, helping to save American lives.
Of course, there are ethical questions that still need to be answered. For instance: Who should kill an enemy, a robot or a human?
Right now, the decision lies in the hands of the human operating the machine — at least for U.S. armed forces. There is evidence that some drones in Ukraine could be taking autonomous lethal action.
Meanwhile, the United Nations has not said much about “robotic warfare.” In Geneva in 2021, U.N. officials reportedly discussed autonomous weapons, but tabled the talks.
When it comes to drones and international law, there is no provision specifically mentioning their use in war. The Geneva Convention has been updated in the past to restrict land mines, booby traps and incendiary weapons.
White House implements new AI regulations for federal agencies
The White House’s Office of Management and Budget (OMB) released a memo outlining new artificial intelligence regulations for federal agencies, Thursday, March 28. The new rules stem from President Joe Biden’s October executive order “on the safe, secure and trustworthy development and use of artificial intelligence.”
“President Biden and I had extensive engagement with the leading AI companies to help ensure the private sector commits to the principles in the blueprint and establish a minimum baseline of responsible AI practices,” said Vice President Kamala Harris when Biden first signed the order.
Harris said the new rules will include mandatory risk reporting and transparency from AI developers.
According to the regulations, within the next 60 days, each federal agency must put a chief artificial intelligence officer in place to oversee the agency’s use of AI.
Agencies will be responsible for creating AI governance boards and must report to the federal government on how the agency is using artificial intelligence.
Federal agencies will also be required to present a yearly AI report to OMB outlining what kinds of artificial intelligence the agency is using and how it plans to mitigate the risks.
Harris said it’s imperative that federal agencies’ use of AI “do not endanger the rights and safety of the American people.”
Shalanda Young, the director of OMB, said the federal government plans to hire 100 chief AI officers by summer.
Where is AI technology now and where is it headed?: Weapons and Warfare
This episode of Weapons and Warfare takes a look at artificial intelligence and aviation. Host Ryan Robertson visits with two companies leading the charge for AI pilots to learn where the technology is now and where it might be headed as the nature of warfare evolves and competition for resources grows.
AI fighter pilot initiatives aim to reduce risk, boost air superiority
The profession of being a fighter pilot is iconic, with historical figures like the Red Baron and fictional characters like Maverick. Yet, just as the biplane gave way to the turboprop and the F-117 to the F-35, the role of these pilots is evolving. Today’s pilots of the F-22 and F-35, modern icons in their own right, may soon share the skies with artificial intelligence. Efforts are underway to integrate AI into the cockpit, aiming to reduce the risk to human pilots in future conflicts.
This next evolution in combat is being led by companies like Shield AI and EpiSci. For them, autonomous combat-ready aircraft are the way to bring maximum firepower with minimum exposure for those protecting the United States and its interests.
For former Navy SEAL and Shield AI co-founder Brandon Tseng, the need for autonomous combat aircraft is simple.
“Every single unit is able to have massive organic air assets at their disposal,” Tseng said. “That’s the shift that’s happening. And why is that important? It enables you to have air superiority on every single mission, and that enables maneuver on the ground, which is fundamentally game-changing.”
For EpiSci’s Chris Gentile, a retired Air Force fighter pilot himself, AI pilots are simply the next evolution of American combat innovation.
“The fact is warfare in general, and the American way of warfare in particular, is about using technology to realize asymmetric advantages over a foe,” Gentile said.
As he sees it, those advantages aren’t limited to the skies.
“Whether that’s a submarine, a ship, an aircraft, a weapons launch platform, something like that, we want to continue to increase that capability, continue to make each human being, each American that chooses to go into harm’s way, that much more effective, but use tools like AI and autonomy to manage their cognitive workload, make sure they’re not overwhelmed,” Gentile said.
For most people, seeing is believing. And both companies have plenty of working examples of their technology. So why aren’t they being introduced to the Department of Defense on a larger scale right now? According to Tseng, it’s a matter of resourcing.
“It’s not a technology problem,” Tseng said. “It’s a budget. It’s a resourcing. It’s a programming problem in terms of getting this capability out as fast as possible.”
Once those issues are overcome, Tseng thinks the change for operators in combat will be evident immediately.
“AI pilots paired with affordable aircraft is the most strategic conventional deterrent, since really, you know, the introduction of aircraft carriers,” Tseng said.
Earlier this month, Shield AI inked a deal with NAVAIR to put its AI in the Kratos BQM-177A, a subsonic aerial target.
Meanwhile, EpiSci landed a Small Business Innovation Research award for an AI-aided satellite project that, if successful, will help sense hypersonic vehicles and missiles.
Controversial deepfake Kari Lake video shows ease of AI disinformation
The Arizona Agenda, a newsletter that covers state politics, created a video using artificial intelligence to manipulate footage and make it seem as if Kari Lake was endorsing its coverage. The video showcased AI’s capabilities to create convincing videos that could blur the line between reality and fiction.
Stan Barnes, president of Copper State Consulting Group, told local media outlet AZFamily, “I think the Arizona Agenda, the media outlet that put that video into the public space, did everyone a favor.”
The video shows Lake’s AI counterpart appearing to support the Arizona Agenda. She also talks about the impact of AI on future elections.
The goal of making the video, according to its creators, was to demonstrate how easy it is to make fake content with AI and to highlight the challenges this brings, especially when it comes to telling the difference between real and fake videos around important events like elections.
Understanding how artificial intelligence creates these fake videos can help people notice the small details.
“Now that you know this is a deep fake, you’re probably catching a bunch of little inconsistencies that you can’t quite put your finger on,” deepfake Kari Lake said. “This is a less refined version of me, to help illustrate. My voice is pretty good, right? But my lips don’t quite sync up, my cadence isn’t natural, my skin is a little too smooth, and around the boundaries of my face, you can almost see the little glitches in the Matrix.”
The Arizona Agenda didn’t just create a deepfake; it also made a guide to help people spot fake AI content, showcasing how important it is to be aware of the dangers as AI technology becomes more accessible.
Lake’s team demanded the video be removed and threatened legal action if it wasn’t taken down, saying it was created without her permission and used to make money. In response, the Arizona Agenda stressed that its goal was to educate and engage people in a responsible way.
Georgia lawmaker creates deepfake of colleague to garner support for AI bill
Rep. Brad Thomas, R-Ga., proposed legislation aimed at banning the use of deepfakes in politics. Deepfakes utilize artificial intelligence to manipulate audio and video, raising concerns about their potential to mislead voters.
To garner support from lawmakers, Thomas presented a case to the Judiciary Committee by showcasing a deepfake video featuring the voices of Georgia state Sen. Colton Moore, R, and former Republican congressional candidate Mallory Staples. Both of the deepfaked politicians actually oppose the legislation, citing free speech and satire concerns.
The video falsely endorsed the proposed bill and Thomas emphasized the urgency of addressing this issue to prevent abuses in future elections. Thomas stressed how easily accessible these AI tools are, warning that their sophistication outpaces current legislation.
Following deliberation, the bill received bipartisan support, passing out of committee with an 8-1 vote.
Violators of the law would face penalties of prison time and fines.
Thomas acknowledged the challenges of enforcing the law but expressed confidence in the collaboration between law enforcement agencies to address election-related fraud.
Tennessee’s new ELVIS Act protects musicians from AI impersonations
Tennessee Gov. Bill Lee, R, signed a groundbreaking law Thursday, March 21, designed to shield artists from unauthorized artificial intelligence impersonations. The Ensuring Likeness, Voice and Image Security, or ELVIS, Act addresses growing concern among artists about deepfake technology and AI impersonations that mimic their voices.
This law recognizes an artist’s voice as a protected personal right and sets stricter guidelines on the use of someone’s name, image and appearance.
“The really great thing about this is Tennessee is the first in the nation to enact this legislation,” Lee said at the signing. “This will be a blueprint and we expect that it will be enacted multiple times over multiple states and, at some point, artists all across America will be protected because of what started here in the music capital of the world. We will ensure that no one can steal the voices of Tennessee artists and I believe that what we’re doing here today will ensure that no one will steal the voices of American artists once this is enacted across the country.”
The law has support from the music community. Lee, alongside stars Luke Bryan and Chris Janson, signed the act at a local honky-tonk music venue, calling it “the coolest bill signing ever.”
“What an amazing stance, or precedent, to set for the state of Tennessee, to get in front of this, to be the leaders of this and to show artists like myself, current artists, artists that are moving here, following their dreams, to know that our state protects us and what we’re about and what we work so hard for,” Bryan said.
“From Beale Street to Broadway, to Bristol and beyond, Tennessee is known for our rich artistic heritage that tells the story of our great state,” Lee added. “As the technology landscape evolves with artificial intelligence, I thank the General Assembly for its partnership in creating legal protection for our best-in-class artists and songwriters.”
The bill also received backing from the music industry and the Human Artistry Campaign, a worldwide effort by entertainment groups advocating for a thoughtful use of AI.
“This incredible result once again shows that when the music community stands together, there’s nothing we can’t do,” said Mitch Glazier, Recording Industry Association of America (RIAA) chairman and CEO. “We applaud Tennessee’s swift and thoughtful bipartisan leadership against unconsented AI deepfakes and voice clones and look forward to additional states and the U.S. Congress moving quickly to protect the unique humanity and individuality of all Americans.”
The ELVIS Act updates the Personal Rights Protection Act of 1984, which was first enacted to protect Elvis Presley’s publicity rights posthumously.
AI-powered WWII exhibit allows visitors to talk to heroes of ‘Greatest Generation’
The National WWII Museum is bringing history to life with new interactive exhibits. To create these exhibits, the museum interviewed 18 veterans from the “Greatest Generation,” including a Medal of Honor recipient who passed away in 2022, and combined their stories and images with artificial intelligence. As a result, museum visitors can now engage in AI-assisted conversations with real-life veterans.
Olin Pickens is featured in the exhibit. The 102-year-old veteran now has an avatar as part of the interactive display called “Voices from the Front.”
In 1943, Pickens’ battalion was captured by German forces in Tunisia, and he spent the rest of the war in a prison camp, according to The Associated Press. Now, through this technology, his story of survival will endure long after him.
“I’m making history to see myself telling the story of what happened to me over there,” Pickens told the AP.
In addition to troops overseas, the United States had plenty of home-front heroes during World War II, and museum visitors can hear from some of their avatars as well. Museumgoers can ask questions of a military nurse, an aircraft factory worker, and even a dancer who reportedly performed at USO shows and later became the model for the Tinkerbell character in Disney productions.
“We’re beginning to get to a time when the opportunity to speak to a real World War II veteran is more and more rare,” said Peter Crean, the vice president of the National WWII Museum. “But this will allow people for the next 100 years to talk to World War II veterans and really have a conversation, not just watch a film on TV.”
The setup is simple; people can chat with life-sized projections of real people while sitting in a chair. Introductions are made through a console instead of a handshake.
The project reportedly took four years to complete and was made possible through a donation of $1.5 million by a museum trustee and his wife. Each veteran featured in the exhibit was asked 1,000 questions about their life and experiences during World War II. The answers provide users with a vast database of responses to questions.
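The museum has not published how its system pairs visitor questions with the recorded answers, but conceptually an exhibit like this works as a retrieval system: each question is compared against the bank of prerecorded responses and the closest match is played back. A minimal sketch of that idea using simple bag-of-words cosine similarity (the prompts, clip names and matching method here are illustrative assumptions, not details from the exhibit):

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts, lowercased and stripped of punctuation.
    words = [w.strip(".,?!").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_answer(question, qa_bank):
    # Play the recorded clip whose prompt best matches the visitor's question.
    qv = vectorize(question)
    return max(qa_bank, key=lambda qa: cosine(qv, vectorize(qa["prompt"])))["answer"]

# Hypothetical prompts and clips, not actual exhibit data.
qa_bank = [
    {"prompt": "Where were you captured during the war", "answer": "clip_captured.mp4"},
    {"prompt": "What did you eat in the prison camp", "answer": "clip_food.mp4"},
]

print(best_answer("Where were you when you were captured?", qa_bank))
# Selects the clip about being captured.
```

A production system would more likely use speech-to-text plus neural sentence embeddings rather than raw word counts, but the retrieval structure is the same: a large fixed bank of recorded answers and a similarity search over it.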
According to Tech Times, the museum’s staff chose veterans whose stories covered the widest possible range of events from World War II.
The National WWII Museum in New Orleans, formerly known as the National D-Day Museum, has always been a veteran-friendly place. Many veterans reportedly volunteered at the museum, sharing their experiences with guests over the years. However, after the COVID-19 pandemic, opportunities to share those experiences have dwindled.
Now, thanks to the Voices from the Front project, future generations will have insight into what it took to be named the Greatest Generation.
Texas immigration law paused again amid legal seesaw
Hours after the Supreme Court ruled that Texas’ immigration law could take effect, it is back on hold. Also, charges have been dropped against the father of a late Marine who was arrested at the State of the Union. These stories and more highlight The Morning Rundown for Wednesday, March 20, 2024.
Texas immigration law back on hold hours after Supreme Court ruling
In a rapid succession of judicial decisions, Texas’ stringent immigration law, Senate Bill 4 (SB4), was put on hold just hours after the Supreme Court allowed it to temporarily take effect, marking a tumultuous 24 hours of rulings.
The 5th U.S. Circuit Court of Appeals intervened late Tuesday, deciding that the Texas law, which authorizes local law enforcement to detain migrants crossing the border illegally, should remain suspended. This pause sets the stage for oral arguments on Wednesday, March 20, in the appellate court, which will deliberate on whether the temporary block should be extended as the Biden administration’s challenge against SB4’s constitutionality proceeds.
Texas officials did not report any arrests during the short period SB4 was in effect. Under SB4, apprehended individuals are given the option to voluntarily leave the U.S. or face legal proceedings. Mexico’s government responded promptly to the Supreme Court’s decision, asserting it would not accept any deportees forced to cross the border under this law.
Amidst the ongoing legal battle over Texas’ immigration policy, a similar legislative proposal has successfully passed the state House in Iowa. This bill, now awaiting Governor Kim Reynolds’ signature, would take effect in July.
Trump-backed candidates sweep Ohio primary, setting stage for November
Businessman Bernie Moreno defeated a field of rivals to set up a clash with Democratic Senator Sherrod Brown. Meanwhile, Derek Merrin bested his opposition to take on U.S. Representative Marcy Kaptur.
The outcomes of these races are pivotal, with the potential to shift the balance of power in Washington. Both incumbents, Brown and Kaptur, are perceived as vulnerable in an increasingly Republican-leaning Ohio.
In his victory address, Moreno lauded Trump’s support and sought to rally the party behind him against Brown, whom he criticized as a key supporter of President Joe Biden’s policies. As the general election looms, it promises to be fiercely contested. Brown, seeking another term, intends to focus on abortion rights, a move that contrasts with Moreno’s strategy as he navigates scrutiny over his past. Brown posted on X: “The choice ahead of Ohio is clear: Bernie Moreno has spent his career and campaign putting himself first, and would do the same if elected. I’ll always work for Ohio.”
Charges dropped against father of late Marine arrested at State of the Union
Steve Nikoui, who shouted his son’s name as the president delivered the March 7 address, was removed from the House chamber and arrested after being repeatedly warned by Capitol Police. In a statement issued later that night, authorities stated that disrupting Congress and demonstrating within congressional buildings was illegal.
Nikoui reportedly said he was “thrilled and humbled” by the decision to drop the charges. The incident and its aftermath have sparked a conversation about the right of individuals to express grief and political dissent in public forums.
NYT: Saudi government plans to create $40B fund for artificial intelligence
The New York Times reports the Saudi government, through its Public Investment Fund, has engaged in preliminary talks with the American venture capital giant Andreessen Horowitz. These discussions have explored the potential for Andreessen Horowitz to establish a presence in Riyadh, the Saudi capital, as part of the kingdom’s ambitious AI investment strategy.
According to the newspaper, the Saudi AI initiative is scheduled to commence in the latter half of this year, signaling a significant acceleration of the country’s efforts to diversify its economy and reduce its dependence on oil.
If realized, this investment would catapult Saudi Arabia to the status of the world’s preeminent investor in artificial intelligence, underscoring the kingdom’s commitment to adopting cutting-edge technologies to fuel its future growth.
South Korean police dismiss bomb threat targeting MLB star Shohei Ohtani as not credible
The game, held Wednesday morning between the Dodgers and the San Diego Padres at a stadium in Seoul, proceeded without incident. This event marks a milestone for MLB, as it is the first time regular-season games are taking place in South Korea.
The threat emerged from an email sent to the South Korean consulate in Vancouver, Canada, by an individual claiming to be a Japanese lawyer. The message warned of a bomb set to detonate during the game. However, after a thorough investigation, police found no explosives at the venue. Authorities also believe the sender was responsible for similar threats last year.
Ohtani, making his debut with the Dodgers after signing a groundbreaking 10-year, $700 million contract with the team late last year, was the specific target of the threat.
Major League Baseball issued a statement confirming it is cooperating closely with local law enforcement to continue monitoring the situation vigilantly.
Finland tops World Happiness report, US drops from top 20
The Gallup survey pinpointed a notable dip in happiness among Americans under 30 as a significant factor behind the country’s slide down the rankings. In contrast, Americans aged 60 and older seem to be faring better, with the U.S. still making it into the top 10 for this age group.
Finland’s consistent top billing as the happiest country marks its seventh consecutive year at the peak of global well-being, a testament to the nation’s enduring quality of life and societal support systems.
The release of this year’s World Happiness Report aligns with the United Nations’ International Day of Happiness, offering a moment for reflection on the state of global well-being amid ongoing challenges.