Israel’s Merkava tank designed for upcoming fight with Hamas
To be successful, any Israel Defense Forces (IDF) ground invasion of Gaza will need to include armored vehicles and tanks. The IDF’s armored chariot of choice is the Merkava Mk4 main battle tank.
The Merkava is a battle-proven system, designed and tailored to meet Israel’s specific warfighting needs. In Gaza, that means urban warfare in one of the most densely populated places on the planet, where snipers, drones, mines, IEDs, RPGs and other threats lurk around every corner.
The Merkava, perhaps more than any other tank in the world, is built with crew survivability in mind. Its 1,500-horsepower V-12 diesel engine sits in the front of the tank, whereas most Western tanks have the engine in the back. The forward location adds another layer of protection between the Merkava’s four-man crew and incoming projectiles.
The underside of the tank is designed to counter mines and IEDs. The tank’s hull is covered with modular armor that can be quickly replaced if damaged.
Israel also incorporated Rafael’s Trophy Active Protection System into the design. The Trophy system provides 360-degree protection against anti-tank rockets, missiles and high-explosive rounds from enemy tanks.
Merkavas are designed to keep their crews alive, but they aren’t invincible. When Hamas launched its Oct. 7 attacks, videos quickly began circulating of a drone successfully disabling a Merkava. Within days, many of the Merkavas massed along the Israel-Gaza border were sporting “cope cages” to counter attacks from above.
In terms of offensive capabilities, the Merkava features a 120mm smoothbore main gun that fires all the standard NATO munitions and can also send LAHAT anti-tank missiles downrange. A remote-operated 7.62mm machine gun is mounted on the Merkava’s turret as well, along with internal 60mm mortar launchers for good measure.
The Merkava’s unique front-engine design created enough space inside the rear of the tank to haul extra ammo, equipment, or three to four additional soldiers into battle. Of course, the extra space can also be used to safely evacuate wounded soldiers.
Most of the tank’s crew gains access through a clamshell-style rear entrance. This also provides a safer escape route if the tank is disabled. The tank’s commander gains access through a single hatch on top of the turret.
Like most tanks, the Merkava leaves its crew with limited visibility once the hatches are buttoned up. To help get around this problem, the Merkava uses Elbit’s C4I systems to display real-time, day-or-night video feeds of what’s happening outside the tank.
The newest version of the Merkava, called the Barak, or “Lightning” in Hebrew, takes this concept of real-time display to the next level. Using a system called Iron Vision, specially designed helmets give tank commanders an external view of the tank so clear it has been described as looking through glass.
The Merkava Barak also uses AI in its targeting sensors to help the crew look, lock and launch faster.
If an enemy does manage to get a shot off at a Barak, its internal sensors can find the attack’s point of origin and adjust the tank’s turret, while also relaying target info to nearby allied assets. This makes it that much easier to engage and eliminate enemy threats.
The IDF took possession of its first batch of Merkava Baraks in September. According to reporting from Defense News, the IDF’s Armored Corps said a pair of Baraks can carry out combined-force tasks that previously required a platoon’s or even a whole company’s worth of Merkava Mk4 tanks.
SAG-AFTRA issues Halloween costume guidance to striking members
As the Hollywood actors’ strike nears its 100th day, another issue is being raised by the Screen Actors Guild: Halloween. The union, which represents the roughly 65,000 actors on strike since July, is asking its members not to dress up as characters from movies or TV shows produced by the studios they are striking against.
The union says this will send a clear message that actors won’t promote studio content without a fair contract.
Instead, SAG-AFTRA is recommending members dress in traditional Halloween costumes like ghosts, skeletons and zombies, or trick-or-treat as someone from an animated series.
The union says that if striking actors do end up dressing as characters from major-studio content, it asks only that they not post any photos to social media.
Actors have been on strike since July 14 over issues including increased pay for streaming programming and the use of artificial intelligence in Hollywood.
Last week, talks broke down between SAG-AFTRA and the Alliance of Motion Picture and Television Producers. The AMPTP said the gap between the two sides was “too great, and conversations are no longer moving us in a productive direction.”
The Writers Guild of America ratified its new contract with studios earlier this month, ending a nearly 150-day strike.
Autonomous kill: AI drones in Ukraine strike Russian targets
Ukrainian drones dealing deathblows to Russian armor and equipment is nothing new. AI-piloted drones making the decision to strike targets on their own? That’s new. Not just for the war in Ukraine, but for all humanity.
In September, Ukraine started using Saker’s Scout quadcopter drone. A month later, Ukrainian developers confirmed to Forbes the drones are now carrying out autonomous strikes on Russian forces. It’s the first confirmed use of lethal force by artificial intelligence in history.
The Saker Scout started as a reconnaissance drone helping Ukraine’s armed forces identify Russian artillery and armor, even when heavily camouflaged.
Saker said its Scout can reconnoiter a field, mark hundreds of enemy targets and relay that info to other assets in a fraction of the time it would take humans to perform the same task.
The Scout can currently identify 64 different types of Russian military equipment including trucks, tanks, APCs and launchers. Teaching the Scout to target new types of equipment is as simple as a software patch.
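To illustrate what delivering new target types “as a software patch” might look like in practice, here is a minimal, purely hypothetical Python sketch: the recognizer’s class list ships as data alongside retrained weights, so new equipment types arrive without touching the airframe or sensors. Saker has not published its implementation; every name and number below is illustrative.

```python
# Hypothetical sketch: how a drone's onboard recognizer might gain new
# target classes via a software update rather than new hardware.
# None of this reflects Saker's actual code; names are illustrative.
from dataclasses import dataclass

@dataclass
class RecognizerModel:
    version: str
    classes: list[str]  # equipment types this model can identify

    def identify(self, detection_scores: dict[str, float], threshold: float = 0.8) -> list[str]:
        """Return the known equipment types scored above the confidence threshold."""
        return [c for c, s in detection_scores.items()
                if c in self.classes and s >= threshold]

# The fielded model knows 64 Russian equipment types (abbreviated here).
fielded = RecognizerModel(version="1.0", classes=["T-72", "BMP-2", "2S19"])

# A "software patch" ships retrained weights plus an expanded class list;
# the airframe and sensors stay the same.
patched = RecognizerModel(version="1.1", classes=fielded.classes + ["TOS-1A"])

scores = {"T-72": 0.91, "TOS-1A": 0.88}
print(fielded.identify(scores))   # ['T-72']           -- new type not yet known
print(patched.identify(scores))   # ['T-72', 'TOS-1A'] -- recognized after the patch
```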
The Scout can carry about six and a half pounds of explosives and has a range of over seven miles. Once the Scout identifies a target, it can autonomously drop its ordnance or act as a spotter, relaying the information to other Ukrainian attack drones which may, or may not, be controlled by a human operator.
The Scout doesn’t need GPS to navigate and can operate in environments where radio jamming blocks communication signals. It is likely in these jammed environments that the Scout is reportedly being used, sparingly, to carry out autonomous strikes in Ukraine.
AI agents flying drones are nothing new. Straight Arrow News has reported on several different AI pilots before, like Shield AI’s Hivemind. Coincidentally, Hivemind recently completed tests in which it successfully flew swarms of V-Bats in formation. But again, those tests were focused on maneuvering. Taking humans out of the kill chain is a new development.
There are no international laws governing artificial intelligence control of lethally armed robotic weapons systems. The U.S. military mandates an “appropriate level of human judgment” before an AI agent can use force, but that’s an admittedly flexible term.
Critics say allowing so-called “Slaughterbots” onto the battlefield sets a dangerous precedent for mankind.
In an article from The Hill, the Future of Life Institute and the Arms Control Association agreed AI algorithms shouldn’t be in a position to take human life because they cannot comprehend its value. The organizations also argued that over-reliance on machines and AI agents to conduct warfare would make it easier to declare war, could lead to AI weaponry proliferating to bad actors and would increase the risk of escalation between nuclear powers.
The United Nations is scheduled to address the issue of AI warfare more directly at its next General Assembly in late October. U.N. Secretary-General António Guterres said by 2026, he wants a legally binding agreement to prohibit lethal autonomous weapons from being used without human oversight.
PimEyes: Dangerous implications of reverse image searches
A website called PimEyes allows users to reverse image search for faces, raising privacy concerns about this use of facial recognition technology. The website crawls the public web and indexes publicly accessible photos.
PimEyes users can upload a photo of someone and the website will search for other photos of that person online, along with links to the websites where they came from.
People use PimEyes for purposes such as finding attractive strangers, keeping tabs on one’s digital presence or investigating dating app matches. However, the site could also be used for stalking, surveillance or identifying dissidents.
Anyone can use a basic version of PimEyes, but paid subscriptions offer more searches and access to the image sources. Users can opt out so that their face isn’t included on the site, but some report that the feature doesn’t always work.
PimEyes said in a statement on its website that it “has never been and is not a tool to establish the identity or details of any individual,” and “the target of the PimEyes search engine is not individuals, but publicly accessible websites.”
The company lists the safeguards that are in place to minimize harm, and its CEO said it blocks access to the site in 27 countries due to concerns that authorities will target people there.
This use of facial recognition technology is certainly not new. According to The New York Times’ reporting, Big Tech companies have had it for more than a decade, but have chosen not to release it to the public, citing concerns about privacy and misuse.
Meanwhile, PimEyes launched in 2017. A company called Clearview AI provides a similar service for law enforcement.
There are no federal laws limiting the use of facial recognition technology in the United States, though some states have passed their own.
Fury, Valkyrie and the US military’s future fleet of CCAs
One crew, one craft. For decades, that has been the paradigm of air combat. But the paradigm is shifting. In future fights, pilots will face some of the fiercest skies they’ve ever seen. So, the United States Air Force is trying to help its pilots by building more crewless craft.
At this year’s Air and Space Forces Association’s Air, Space and Cyber conference in National Harbor, Maryland, several of these crewless craft were on display, including Anduril’s Fury.
Fury is a concept high-performance autonomous air vehicle from Anduril. Source: Anduril.
“Fury is a conventional takeoff and landing aircraft. We are looking to use as minimal runway as possible to provide the maximum amount of options for all of our areas that we may want to fly,” Andrew Van Timmerman said.
Callsign “Scar,” Van Timmerman is a retired Air Force fighter pilot and the current director of Air Dominance Systems at Anduril.
The company was one of nearly 200 vendors at ASC 2023. In addition to the half-scale model of Fury on display, Kratos Defense had a full-size model of its XQ-58 Valkyrie.
A full-size model of the XQ-58 Valkyrie displayed at Air and Space Forces Association’s Air, Space and Cyber 2023 Conference in National Harbor, MD.
When asked to describe the Valkyrie, Otis Winkler, the vice president of Corporate Development and National Security Programs at Kratos, said, “It is a fighter-like performance. If you want to think of it as an F-35 without a pilot in it, that’s what you’ve got. It’s got a bomb bay; you’ve got weapons on the wings themselves. Take the fight to the enemy.”
Both Fury and Valkyrie are candidates in the U.S. military’s Collaborative Combat Aircraft (CCA) program. The goal is to create thousands of highly capable robotic wingmen quickly and do it all for a fraction of the cost of traditional fighter jets. The price of a new F-35 is well north of $100 million.
“We’ve kept continuous competition across the mission systems, across the autonomy space, as well as across the air vehicle space,” said Brig. Gen. Dale White, the program executive officer for Fighters and Advanced Aircraft for the USAF.
It’s White’s job to make sure the CCA program is a success.
“It’s pretty clear to see there’s a wide variation between vehicles, sizes, weights and things of that nature,” White said while walking around the floor at ASC 2023.
Variations in the design often equate to variations in fielding as well. So, let’s look at just how Fury and the Valkyrie stack up.
For starters, Fury is still under development, while the Valkyrie was the first and, as of this article’s publication, is still the only CCA candidate flying in the real world.
“It is awesome. It’s changing the game right now, with the Air Force and with the Marine Corps,” Winkler said. “This is the way that you get more mass to the fight. It’s survivable and it’s ready today.”
Just because Fury hasn’t flown, though, doesn’t mean its designers don’t have big plans for it. Fury will still deliver fighter-like performance, and it will do so with a single, commercially available jet engine. That helps make maintenance easier and keeps costs down.
The Fury and Valkyrie both adhere to the Department of Defense’s open-architecture mandate. So, they can be kitted out with all sorts of different tech depending on the mission.
Van Timmerman said on Fury, that tech could include radar and radar jamming equipment, infrared sensors or other intelligence, surveillance, and reconnaissance sensors.
“It really is anything that you have that’s available on an existing aircraft,” Van Timmerman said. “We want to be able to provide as much plug and play utility out of the vehicle so that we can, again, provide those different mission sets to whatever the commander needs are that day.”
As mentioned above, Fury will need a runway to take off and land, albeit a much shorter one. The Valkyrie uses a turbofan engine to launch off the back of a trailer and can be recovered under a parachute once the mission is over.
“So, you can put these anywhere,” Winkler said. “We normally put them in a shipping container. Take the wings off. It’s all self-contained. You drop it either near the runway, or you can drop it in a forward operating base somewhere. Open it up with two men, slide out the rail, put the wings on with a couple of bolts, and then it launches with all the weapons on it.”
The Valkyrie can carry up to four small-diameter bombs or similarly sized munitions in its bomb bay. Its wing stations can also hold lethal payloads.
Fury won’t carry kinetic weapons. According to Van Timmerman, its payload is primarily a suite of sensors and electronic warfare systems. To maintain the unique performance of the craft, the sensors will be housed mostly in the nose of the aircraft.
Fury and Valkyrie, like all CCA candidates, will be piloted by artificial intelligence. Anduril uses its Lattice for Mission Autonomy software. The company has been refining the technology for several years, including in the Air Force’s Project Venom initiative, which puts AI agents in charge of modified F-16s while human pilots essentially observe from the cockpit.
Kratos and Shield AI announced the two companies will collaborate to put Shield AI’s Hivemind into Valkyrie. Hivemind is touted as the “world’s best AI pilot.”
CCAs may not have human pilots, but rest assured, humans will still be giving the orders.
Straight Arrow News asked Van Timmerman, a former fighter pilot, how he thinks CCAs would be most useful. He said, “We can have somebody remove themselves from inside of the loop and go on to the loop. What do I mean by that? When you are inside the loop, think about a man or woman, a person inside the cockpit. They’re in charge of the stick, and the throttle and every decision that’s made inside the aircraft.
“When you go onto the loop, you have one person that could be in charge of maybe many of these air vehicles. They may have sensors. They may just be flying around to draw attention. There are a number of different use cases you can have in an advanced autonomy enabled system.”
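To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the two control modes Van Timmerman describes: an in-the-loop pilot flies one craft directly, while an on-the-loop supervisor reviews proposals from many autonomous vehicles. This is a toy sketch of the concept only, not a depiction of any real Air Force or vendor software; all names are hypothetical.

```python
# Purely illustrative sketch of "in the loop" vs. "on the loop" control.
# Nothing here reflects any real Air Force or vendor system.

class Drone:
    def __init__(self, name: str):
        self.name = name

    def propose_action(self) -> str:
        # In a real system, this would come from the onboard autonomy stack.
        return f"{self.name}: request to reposition for sensor coverage"

def pilot_in_the_loop(drone: Drone) -> None:
    """One person, one craft: the human makes every decision directly."""
    print(f"Pilot manually flying {drone.name}; every stick input is human.")

def operator_on_the_loop(drones: list[Drone]) -> None:
    """One person, many craft: autonomy proposes, the human approves or vetoes."""
    for drone in drones:
        proposal = drone.propose_action()
        approved = True  # stand-in for the human supervisor's judgment call
        print(f"{proposal} -> {'approved' if approved else 'vetoed'} by supervisor")

pilot_in_the_loop(Drone("F-16"))
operator_on_the_loop([Drone("CCA-1"), Drone("CCA-2"), Drone("CCA-3")])
```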
While the Air Force eventually wants to select a single contractor to build its Collaborative Combat Aircraft, the open-architecture design process means the final craft may contain components and kit from several different suppliers.
People shouldn’t have to wait too long to find out who those suppliers are. The Pentagon wants the CCAs airborne by 2027, the year some analysts believe China may invade Taiwan.
‘Beware!’: Tom Hanks warns of AI ad using his likeness
Tom Hanks warned his fans on Sunday, Oct. 1, of an ad supposedly of the Oscar-winning actor promoting a dental plan. Hanks posted a screenshot of the ad, saying it was not really him, but rather an AI replica, telling everyone to “beware!”
Hanks added in the post, “There’s a video out there promoting some dental plan with an AI version of me,” making clear the video has nothing to do with him.
It is unknown where the video originated or what dental plan it is promoting, but the acting icon previously addressed AI on the “Adam Buxton Podcast” in May.
On the podcast, Hanks said it is a real possibility that AI versions of actors could be used in movies.
“I could be hit by a bus tomorrow and that’s it, but my performances can go on and on and on,” Hanks said. “And outside of the understanding that it’s been done by AI or deepfake, there’ll be nothing to tell you that it’s not me and me alone.”
The actor, known for his roles in movies like “Forrest Gump,” “Cast Away” and “Big,” made those comments weeks before Hollywood actors went on strike in July, with artificial intelligence being one of the strike’s key issues. SAG-AFTRA and the Alliance of Motion Picture and Television Producers are expected to meet Monday for the first formal bargaining talks.
Our business correspondent Simone Del Rosario has a series on the rise of artificial intelligence and what the future holds for the technology.
Shutdown averted but Congress faces new challenges: The Morning Rundown, Oct. 2, 2023
With a government shutdown averted for now, Congress faces new challenges. And an A-list celebrity is putting out a warning about artificial intelligence. These stories and more highlight The Morning Rundown for Monday, Oct. 2, 2023.
Congress passes stopgap funding bill to avert shutdown; McCarthy is called out
Congress was able to avert a government shutdown by passing a stopgap funding bill late Saturday, Sept. 30, 2023. A shutdown would have meant millions of federal employees going unpaid, but, for now, that is not the case. The bill, signed by President Joe Biden before the deadline hit, funds the government through Nov. 17, meaning Congress will have to pass another funding bill in a matter of weeks.
This stopgap bill first passed the Republican-led House, where it found more support from Democrats than Republicans in a 335-91 vote. While the bill increases federal disaster assistance by $16 billion, the amount Biden was seeking, it does not provide any additional aid to Ukraine, a White House priority opposed by many Republicans.
On Sunday, Oct. 1, the president pressed Congressional Republicans to back a bill for that very cause, saying he expected Speaker Kevin McCarthy to keep his commitment to secure the funding.
“We’re going to get it done. I can’t believe those who voted for supporting Ukraine, the overwhelming majority of the House and Senate, Democrat and Republican, will, for pure political reasons, let more people die needlessly in Ukraine,” Biden said.
When asked by CNN, McCarthy’s office declined to say whether he gave the president any confirmation on a future Ukraine deal. Meanwhile, McCarthy is facing opposition from his own party over the bill that had majority support from Democrats. Representative Matt Gaetz of Florida said on Sunday that he would try to remove the speaker from his leadership position.
“Speaker McCarthy made an agreement with House conservatives in January and since then he’s been in brazen, material breach of that agreement. This agreement that he made with Democrats to really blow past the spending guardrails we had set up is a last straw,” Gaetz said.
Gaetz announced he would be filing a motion to vacate the chair. McCarthy responded, saying, “So be it. Bring it on. Let’s get over with it and let’s start governing.”
Trump says he will attend opening of NY civil fraud trial
“I’m going to court tomorrow morning to fight for my name and reputation,” Trump posted on Truth Social on Sunday.
Security preparations were already underway in case the former president were to make an appearance.
The case was brought last year by New York Attorney General Letitia James against Trump, his eldest sons and his companies, accusing them of inflating the former president’s net worth.
Last week, the judge overseeing the case issued his first ruling in favor of the attorney general, finding Trump liable for fraud. The judge said Trump misrepresented his wealth to banks for decades, by as much as $3.6 billion.
According to court records, Trump is expected to be called as a witness later in the trial.
Newsom to appoint Butler to fill Feinstein’s seat
California Governor Gavin Newsom has announced his choice to fill the Senate seat of the late Democratic Senator Dianne Feinstein, the longest-serving female senator in U.S. history, who died last week at the age of 90.
The governor has chosen Laphonza Butler, the president of EMILYs List, a committee that works to elect Democratic women, and a former adviser to Vice President Kamala Harris.
Newsom issued a statement late Sunday night, after news of the appointment broke, saying Butler “represents the best of California and…will carry the baton left by Senator Feinstein, continue to break glass ceilings, and fight for all Californians in Washington D.C.”
The governor had previously said that he would fill any Senate vacancy with a Black woman. Butler would become only the third Black woman in history to serve in the Senate.
Newsom said the choice is an interim appointment and he would not select any of the candidates who are running to succeed Feinstein in 2024. They include Reps. Barbara Lee, Adam Schiff and Katie Porter.
Supreme Court opens new term with major cases on the docket
The Supreme Court begins its new term Monday. The nine justices, six conservative and three liberal, are prepared to tackle several major issues over the next nine months, including gun rights, social media, the power of federal agencies, electoral districts and, perhaps, abortion pills.
On Oct. 31, the court will hear arguments concerning whether the First Amendment prohibits public officials from blocking critics on social media sites like Facebook and X.
On Nov. 7, a case will be presented to the court on whether a federal law barring people under domestic violence restraining orders from owning a gun violates the Second Amendment’s right to keep and bear arms.
In addition, the Biden administration has asked the justices to hear its appeal of a ruling by the 5th U.S. Circuit Court of Appeals in New Orleans that barred telemedicine prescriptions and mail shipments of the abortion pill mifepristone.
Federal student loan payments resume after 3-year pause
Federal student loan payments resumed on Sunday, Oct. 1, after a three-year pause due to the COVID-19 pandemic. Interest on the loans began accruing again on Sept. 1.
Borrowers will receive a bill saying how much they owe each month at least 21 days before their due date. There is a yearlong grace period to help borrowers, meaning missed or late payments in the next 12 months won’t be reported to the credit bureaus, but interest will continue to accrue.
Borrowers also have the option to sign up for the new income-driven repayment program, called SAVE, which was announced after the Supreme Court struck down the Biden administration’s loan forgiveness plan. According to the Education Department, the SAVE plan will help the typical borrower save more than $1,000 per year on payments.
If you need more information on your loans, you can log on at studentaid.gov.
Tom Hanks warns ‘beware’ AI version of himself
Oscar winner Tom Hanks sent a warning to his fans not to believe everything they see. On Sunday, he posted to Instagram an image seemingly of himself with the caption: “Beware!”
It turns out it was not an actual photo of the “Forrest Gump” actor, but a computer-generated one made with artificial intelligence.
Hanks’ caption went on to explain further, saying, “There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”
Hanks spoke about the challenges AI poses for actors on a podcast earlier this year.
“I could be hit by a bus tomorrow and that’s it, but performances can go on and on and on and on. Outside of the understanding of AI and deepfake, there’ll be nothing to tell you that it’s not me and me alone. And it’s going to have some degree of lifelike quality. That’s certainly an artistic challenge, but it’s also a legal one,” Hanks said on “The Adam Buxton Podcast.”
The interview took place months before Hollywood actors went on strike in July, with artificial intelligence being one of the strike’s key issues. SAG-AFTRA and the Alliance of Motion Picture and Television Producers are expected to meet Monday for the first formal bargaining talks.
Our business correspondent Simone Del Rosario has a series on the rise of artificial intelligence and what the future holds for the technology.
ChatGPT launched an AI revolution. Here’s where we stand nearly 1 year on.
Artificial intelligence hit the mainstream like a firestorm following the release of OpenAI’s ChatGPT. Technology companies scrambled to join the AI arms race, led by Microsoft’s $10 billion investment in OpenAI. At the same time, Capitol Hill sprang into action, holding hearing after hearing over safety and regulation.
The overnight sensation of generative AI is not likely to burn out as quickly as it came on. The endless possibilities are expected to transform technology, the workforce and society at large. At this pivotal juncture, humans will shape where artificial intelligence goes from here, but many fear the direction it will take.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness.
Przegalińska is a senior research associate at Harvard University analyzing AI, robots and the future of work. She has a doctorate in the philosophy of artificial intelligence from the University of Warsaw and is an associate professor at Kozminski University.
The AI Revolution
Interest in artificial intelligence exploded when ChatGPT first hit the masses in November 2022. While AI has technically been around for decades, the sheer accessibility of directly interacting with a chatbot led to a surge in chatter, as evidenced by Google search trend data.
But it wasn’t just talk. Companies were quick to put money on the table. Nothing comes close to Microsoft’s $10 billion OpenAI investment, but tech companies, health care firms and venture capitalists were quick to ink their own deals in the first quarter of 2023. Microsoft’s move also triggered an AI search-engine race, pushing Google to release Bard, its experimental AI-powered search tool.
The Fear Factor
As humans reckon with the future of artificial intelligence capabilities, Aleksandra Przegalińska, who holds a doctorate in the philosophy of AI, says the most prevalent emotion is fear.
It is mostly a story that is infused with fear, with a sense of threat; where artificial intelligence can reach a level where it figures out that it’s also as smart as we are, perhaps even smarter, and then becomes our enemy. And I think it’s in many ways a story about our history.
Aleksandra Przegalińska, AI expert
Przegalińska said many factors play into this fear, from movies like “The Terminator” to fear spread by AI developers themselves.
This past spring, AI leaders and public figures attached their names to the following statement. Key names that signed on include OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis and Bill Gates.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Center for AI Safety
“Sam Altman is obviously telling the Congress that we should all be scared but then again, he’s incubating GPT-5 as we speak,” Przegalińska said. “This to me seems a bit strange. Either you say, ‘Okay, there is a chance that this technology will be misused and this is the way I would like to address these concerns,’ or you’re saying, ‘Well, it’s ultimately the worst thing that can happen to humanity and I just simply stop building it at all.’”
I think maybe he has some regrets being left with Twitter instead of surfing this big AI wave.
Aleksandra Przegalińska on Elon Musk advocating for an AI ‘pause,’ citing risks to society. Musk was an early investor in OpenAI.
Perhaps the biggest fear of AI is the possibility that it could replace so many livelihoods. In March, investment bank Goldman Sachs predicted that AI could automate the equivalent of 300 million full-time jobs between the U.S. and Europe.
Przegalińska, whose research at Harvard University focuses on AI and the future of work, says developers should focus on how humans can collaborate with AI to increase productivity but not replace humans altogether.
Many things can go wrong if you decide to choose that pathway of full automation.
Aleksandra Przegalińska, AI expert
“But our jobs will change and some jobs will probably disappear because of artificial intelligence,” Przegalińska said. “And I do think that politicians have to look at that as well.”
In May 2023, AI was responsible for 3,900 job cuts in the U.S., according to data from Challenger, Gray & Christmas, Inc.
When it comes to regulating AI, the U.S. is not the one setting the global groundwork. This summer, the European Union passed a draft law known as the AI Act, legislation years in the making. But it’s just a start.
“I do regret a bit that this regulation happened this late,” Przegalińska said. “Many people from the AI field have been calling for regulation before ChatGPT and way before ChatGPT. We knew already that there would be some problems because some of these systems are just not explainable. They’re like black boxes; they’re very difficult to understand and yet we use them.”
Meanwhile, lawmakers on Capitol Hill have held several hearings about the risks posed by artificial intelligence and ways to regulate its use. However, American efforts are considered to be in the early stages, and lawmakers have been criticized before for not understanding the technology they aim to regulate, as during earlier Big Tech hearings.
“There was a bit of a mismatch in terms of digital competencies,” Przegalińska said.
I do hope that this time around, the politicians will come prepared, that they will be better prepared for these types of discussions.
Aleksandra Przegalińska, AI expert
How should AI be regulated to combat deepfakes and bad actors? Click here for more.
The Uncanny Valley
How easy is it to tell what is real and what is artificial? AI today has some serious quirks, like generating eight or nine fingers on one hand. But as technology advances, it’ll get more and more difficult to separate fact from fiction.
I have my own deepfake, and it’s so good that for me, it’s even sometimes hard to figure out whether it’s her speaking or myself. Really, that’s so uncanny.
Aleksandra Przegalińska, AI expert
In real life and in movies, those in robotics have pursued building robots that look and act human, coming close to crossing the uncanny valley.
“The uncanny valley is this concept that tells us that if something resembles a human being, but not fully, then we are scared of it,” Przegalińska said. “So our probably very deeply ingrained biological response to something that looks like a human and is not a human, and we know that, is to just have this eerie sensation that this is not something we should be interacting with.”
What are the psychological effects of crossing into the uncanny valley? Click here to watch.
Full interview time stamps:
0:00-2:22 Introduction
2:23-5:00 My Unconventional Path To AI Research
5:01-9:42 How The Terminator, Media Drive Our AI Fears
9:43-13:01 Sam Altman, AI Developers Spreading Fear
13:02-14:00 Elon Musk’s Big Regret?
14:01-18:55 How ChatGPT Changed Everything
18:56-25:01 Do Politicians Know Enough About AI To Regulate?
25:02-31:48 The Dangers Of The Uncanny Valley, Deepfakes
31:49-39:27 Will AI Cause Massive Unemployment?
39:28-43:49 Answering Most-Searched Questions About AI
AI is coming for 300 million jobs. Is the future work optional?
Will generative artificial intelligence enhance the way professionals work or replace them altogether? While it is still in its early stages, generative AI is expanding automation into a much wider set of the workforce.
Goldman Sachs predicted that AI could automate the equivalent of 300 million full-time jobs between the U.S. and Europe.
“Some sort of luxury version of a utopia where we don’t work, or we choose to work whenever we feel like it, and then we rely on a very generous, universal basic income; well, that’s probably not something that’s going to happen,” said Aleksandra Przegalińska, an AI expert researching the future of work at Harvard University. “So we should push for a vision or scenario where humans are working and they are enhanced by artificial intelligence.”
By 2030, 30% of hours worked in the U.S. today could be automated, according to a recent report by McKinsey Global Institute. While AI could enhance productivity among STEM, creative, and business and legal professionals, McKinsey projects it will eliminate jobs in office support, customer service and food service.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Aleksandra Przegalińska: You touched upon a very important part of decision-making; that we have to find or strike a balance where we are still the main or key decision-makers.
For instance, I am now involved in a project that we are doing at my university in Warsaw, Kozminski University, with Harvard University. And in that project, we are looking at collaborative AI, so the type of artificial intelligence that is designed for collaboration with humans. So it does not do the job instead of you, but instead works together with you.
And that’s a very different approach that I think can be very useful in many different professions. We are now testing our tool, that is also generative, on salespeople and marketers. We’ve seen an increase in productivity, but also an increase in job satisfaction.
And this is something that we would like, right? Getting rid of some routines and really focusing in our work on something that is interesting and fun for us. So I do hope that this is the trajectory that we will choose.
And I would really look very carefully at the speed and pace of automation, finding the proper use cases for it. Not everything can be done by generative AI and also not everything should be done by generative AI. So I do think that we have a bit of road mapping to do ahead of us.
Simone Del Rosario: Do you think there’s a risk of someone taking the technology a little bit too far just because the technology can go there? And then by then, it’s too late and you can’t really figure out why it’s a better case to have humans in that role over AI.
Aleksandra Przegalińska: Well, I do think that many things can go wrong if you decide to choose that pathway of full automation. Many things can go wrong because this is just a technology. It does not have any experiences, any internal states, no affections, no emotions. It’s just designed by humans and it’s prompted by humans. So you have to know how to use it in order to use it well. If you just rely on it, usually the results are not so spectacular. So in that way, I would say that those who choose full automation will probably not be very happy with their choice.
The race to regulate AI hits a snag: politicians don’t understand the tech
Should government have a role in regulating artificial intelligence? When asked in a closed-door meeting with tech executives, Senate Majority Leader Chuck Schumer said, “every single person raised their hands, even though they had diverse views.”
The overnight sensation of ChatGPT put a timer on government oversight, as politicians scrambled to convene hearings on Capitol Hill about the need for regulation.
I just hope that this time around, we will do a better job than we did with social media.
Aleksandra Przegalińska, AI senior research associate, Harvard University
The U.S. is behind the ball on regulating AI when compared to the European Union, which passed draft legislation known as the AI Act this summer. The AI Act was first proposed by the European Commission in 2021, over a year before OpenAI released ChatGPT.
Countries and commissions face many challenges when it comes to regulating the fast-moving technology. To start, government regulation has never been able to keep up with technological advances, and politicians have regularly displayed an inability to understand the field. So why are some of tech’s biggest executives pushing for regulation?
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Simone Del Rosario: All of this conversation has really been sparked over who’s going to regulate AI and this urgency behind that effort to regulate AI. Who do you think should be regulating something like this, though? You have to admit that politicians aren’t really the most well-versed in the most groundbreaking technology.
Aleksandra Przegalińska: That is correct. And we’ve seen that with social media and the hearings of Mark Zuckerberg in the Senate. There was a bit of a mismatch in terms of digital competencies.
But ultimately, I think it has to be a collective effort of many different stakeholders because this technology is not only about technology. It’s not about IT people and how they’re going to use it, but it’s a technology that is very, very broad. It’s a general-purpose technology, you could even say.
It’s the type of technology that will penetrate so many different professions, tasks, accountants, health care professionals, different people who are working in various organizations, business, consultancy. Wherever you look, you will see AI. So in that way, I think it has to be a collective effort.
I do regret a bit that this regulation happens this late, because actually, many people from the AI field have been calling for regulation before ChatGPT, way before ChatGPT. And we knew already that there will be some problems because some of these systems are just not explainable. They’re like black boxes. They are very difficult to understand and yet we use them.
We want them to make decisions about important things like giving someone a loan in a bank or not, or declining. So we really need to understand what these systems are doing. And that has been a problem way before ChatGPT.
But now I am sort of glad that there’s at least a debate. And I do hope that this time around, the politicians will come prepared and that they will be better prepared for these types of discussions. They do have experts. They can talk to many people.
I observed what’s been going on at the White House. There was a meeting between Kamala Harris and many representatives of those companies that are building generative tools, generative AI.
There has been a hearing at the Senate where one of the senators said that Sam Altman should tell everyone how to regulate AI. And I don’t think it’s necessarily the best way to go. We need at least a couple of rounds of different consultations. Many companies have to be involved, but also NGOs, civil society, researchers who are not working in private companies but also at universities.
There are many people with good ideas so it has to be a dialogue. And I just hope that this time around, we will do a better job than we did with social media.