FTC bars Rite Aid from using AI for 5 years for ‘reckless’ surveillance
Rite Aid, the pharmacy chain that filed for bankruptcy in October, is now facing another issue. It has been banned for five years from using artificial intelligence facial recognition technology for surveillance purposes to settle charges by the Federal Trade Commission.
In a statement on Tuesday, Dec. 19, the FTC said the retailer failed to implement “reasonable procedures and prevent harm to consumers” in its use of facial recognition in hundreds of its stores.
“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”
In its complaint, the FTC said between 2012 and 2020, Rite Aid used AI to capture images of all its customers at select stores and created a database of “persons of interest” suspected of past wrongdoings like shoplifting. The system would send match alerts to Rite Aid workers, who were then instructed to tell the customers to leave the store. The FTC said this led to numerous false positives.
The FTC said Rite Aid did not inform its customers of the technology being used in its stores and instructed employees not to reveal anything to consumers or the media. In a statement, Rite Aid said it had stopped using the technology in a small group of stores more than three years ago.
“We are pleased to reach an agreement with the FTC and put this matter behind us. We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” Rite Aid said. “However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint.”
Colorado Supreme Court bars Trump from 2024 ballot: The Morning Rundown, Dec. 20, 2023
Citing the 14th Amendment, the Colorado Supreme Court ruled former President Donald Trump is ineligible to run again. And Rite Aid is accused of recklessly using AI technology on its customers. These stories and more highlight The Morning Rundown for Wednesday, Dec. 20, 2023.
Colorado Supreme Court disqualifies Trump from 2024 ballot
Colorado’s Supreme Court is the first in the nation to find that the 14th Amendment’s “insurrection clause” applies to the former president, after similar lawsuits in other states had been dismissed. The clause bars from office anyone who took an oath to uphold the Constitution and then engaged in insurrection or rebellion.
The state’s high court, consisting of all Democratic appointees, reversed a decision by a district judge last month that found while Trump incited an insurrection, Section 3 did not apply to the presidency. In its ruling, Colorado’s Supreme Court said it did not take its conclusions lightly but found Section 3 did apply to the former president.
Trump is facing federal charges concerning Jan. 6. One of the three dissenting judges said without an insurrection-related conviction, the court was depriving Trump of due process. A spokesman for the Trump campaign called the ruling a “flawed decision” and said they have full confidence that the U.S. Supreme Court will rule in their favor.
The Colorado Supreme Court’s decision won’t take effect until at least Jan. 4, giving the U.S. Supreme Court time to review the case.
Lawsuit filed over Texas’ newly signed immigration law
The law, which is set to go into effect in March, also gives judges the power to order deportations. It comes as record numbers of migrants have arrived at the U.S.-Mexico border amid a surge in illegal crossings.
“Governor Abbott’s efforts to circumvent the federal immigration system and deny people the right to due process is not only unconstitutional, but also dangerously prone to error, and will disproportionately harm black and brown people regardless of their immigration status,” said Anand Balakrishnan, senior staff attorney at the ACLU’s Immigrants’ Rights Project.
Adriana Piñon, legal director of the ACLU of Texas, added that the law wastes billions of taxpayer dollars.
“The bill overrides bedrock constitutional principles and flouts federal immigration law while harming Texans, in particular Brown and Black communities,” Piñon said. “Time and time again, elected officials in Texas have ignored their constituents and opted for white supremacist rhetoric and mass incarceration instead.”
Republican state representatives who back the law say Texas and other border states have the “absolute right” to enforce their borders.
Senate confirms top military nominees, ends Tuberville’s hold
As stalled promotions piled up, Tuberville faced bipartisan criticism that his tactic hurt military readiness and threatened national security. He dropped his hold on most promotions earlier this month but kept it in place for four-star officers.
Nearly 36 million Comcast customers affected by security breach
Comcast has announced that a security breach compromised nearly 36 million of its Xfinity customers’ accounts. According to the cable giant, hackers were able to gain access to its systems through a vulnerability in software provided by the cloud computing company Citrix.
The unauthorized access occurred between Oct. 16 and Oct. 19, more than two weeks after Citrix disclosed the software vulnerability. Customer data “likely acquired” includes usernames, contact details, and the last four digits of Social Security numbers.
Comcast said in a letter to customers that it notified authorities when it discovered the breach, adding that the software issue has been resolved. However, Xfinity users are being required to reset their passwords.
Blue Origin rocket has successful launch after 2022 crash
Amazon founder Jeff Bezos is once again reaching for the stars, as his Blue Origin rocket took another trip out of this world. The New Shepard rocket lifted off from Blue Origin’s Texas facility more than a year after engine troubles led the rocket to crash during a failed launch.
This time, the rocket successfully carried a capsule containing 33 science experiments from NASA and other groups into space for a few minutes of weightlessness. Both the capsule and the rocket then landed safely back on Earth.
Although no one was on board the tourism rocket this time, Blue Origin is looking forward to returning to passenger flights soon following this successful mission.
Investigation into new Meta smart glasses brings privacy concerns
In a recent experiment, New York Times tech columnist Brian X. Chen found that the next-generation Ray-Ban Meta smart glasses could capture hundreds of photos of individuals in parks, on trains, and on hiking trails without their knowledge. The covertly taken photos are sparking significant privacy concerns, especially now that the glasses are integrated with livestreaming and artificial intelligence technology.
“Starting in the U.S., you’re going to get this state-of-the-art AI that you can interact with hands-free wherever you go,” Meta CEO Mark Zuckerberg said when he unveiled the new glasses.
Zuckerberg also posted a video to Instagram showing how the smart glasses can help translate signs and pick out a pair of pants to match an outfit.
The glasses feature a small LED light that shines from the right frame to indicate that the glasses are recording. When the glasses take a photo, a flash goes off as well. There is also “tamper-detection technology” to prevent a user from covering the LED light with tape.
“As I shot 200 photos and videos with the glasses in public, no one looked at the LED light or confronted me about it,” Chen wrote. “And why would they? It would be rude to comment on a stranger’s glasses, let alone stare at them.”
Meta’s collaboration with Ray-Ban is just one example of tech giants launching products that change how consumers use their devices, making technology more personal and, with the help of AI, more interactive.
According to Chen, someone could unknowingly be part of that experience too, if they fail to see the LED light shining from the rim of a stranger’s glasses.
“Sleek, lightweight and satisfyingly hip, the Meta glasses blend effortlessly into the quotidian,” Chen wrote. “No one — not even my editor, who was aware I was writing this column — could tell them apart from ordinary glasses, and everyone was blissfully unaware of being photographed.”
Pope Francis warns against ethical dangers of AI, calls for regulation
Pope Francis has called for a binding international treaty to regulate artificial intelligence (AI), emphasizing the need to prevent algorithms from replacing human values. In a written message for the World Day of Peace, he warned against a “technological dictatorship” that poses a threat to humanity.
The message, titled “Artificial Intelligence and Peace,” highlights the global scale of AI and the role of international organizations in regulating its use. Pope Francis urged nations to collaborate on adopting a treaty to govern the development and application of AI.
“The global scale of artificial intelligence makes it clear that, alongside the responsibility of sovereign states to regulate its use internally, international organizations can play a decisive role in reaching multilateral agreements and coordinating their application and enforcement,” Francis wrote in the message.
“The immense expansion of technology thus needs to be accompanied by an appropriate formation in responsibility for its future development,” Francis wrote. “Freedom and peaceful coexistence are threatened whenever human beings yield to the temptation to selfishness, self-interest, the desire for profit and the thirst for power.”
This comes as governments worldwide grapple with finding a balance between the benefits and risks of AI technology. Last week, the European Union reached a provisional deal on landmark AI rules, addressing issues such as biometric surveillance and the regulation of AI systems.
Pope Francis, a vocal critic of the arms industry, has expressed serious ethical concerns about the weaponization of AI. The pope cautioned against using AI in weapons, saying it could be disastrous.
Vladimir Putin hesitates when questioned by AI doppelgänger
Four hours into his annual press conference, Russian President Vladimir Putin received a video-based question from what seemed like an AI-generated version of himself. The synthetic doppelgänger asked the real Putin about body doubles and his thoughts on artificial intelligence.
“Vladimir Vladimirovich,” the double said, according to a Reuters report and translation. “Hello. I am a student at St. Petersburg State University. I want to ask, is it true you have a lot of doubles?”
“And also,” the double continued, “How do you view the dangers that artificial intelligence and neural networks bring into our lives?”
“I see you may resemble me and speak with my voice,” the real Putin said after appearing to hesitate. “But I have thought about it and decided that only one person must be like me and speak with my voice, and that will be me.”
“That is my first double, by the way,” Putin added.
Officials and media have previously speculated about Putin using body doubles for public appearances due to health and security reasons.
In November 2023, Japanese researchers claimed that several versions of Putin exist, based on analysis of body movements, facial recognition data and voice comparisons.
The Kremlin has denied these accusations at every turn.
Putin admitted he’s been offered the use of body doubles for security reasons in the past. In a 2020 interview with a Russian news agency, Putin said he was presented with an opportunity to use a body double during Russia’s second war against Chechnya in the early 2000s — but he claims he declined it.
The Associated Press and Reuters contributed to this report.
Tesla’s humanoid robot Optimus Gen 2 shows major improvements
The newest Tesla bot prototype, known as Optimus Gen 2, is designed to handle boring, repetitive, and dangerous tasks, according to CEO Elon Musk. The humanoid robot has undergone significant advancements since its introduction in 2021 as Bumblebee.
At the time, the bot was barely able to walk around and wave at the crowd.
Optimus Gen 2 is slimmer and lighter than its predecessor, walks faster, and features human-like hands with tactile sensors capable of safely handling delicate objects.
Despite Musk’s claim that Optimus will eventually be more significant than Tesla’s car business, commercial production remains in the distant future, with an expected price of around $20,000.
While early versions may focus on factory tasks, Musk envisions future applications such as running errands, like grocery shopping, for human owners via voice commands.
Defense contractor Anduril unveils new AI-guided drone-hunting jet
While it shares a name with a classic cartoon character, a newly unveiled defense platform is light-years ahead of Wile E. Coyote’s nemesis. The Roadrunner and the Roadrunner-M — both built by defense contractor Anduril — are described by their makers as AI-guided, drone-hunting autonomous jets.
Thirty-one-year-old Palmer Luckey, the man behind the project, said that despite sounding like the newest high-tech toy, the Roadrunners are mobile, effective and make financial sense: they can be deployed easily and keep service members out of harm’s way at a cost-effective price.
Luckey is no stranger to bringing new tech to market. He is best known as the founder of Oculus and the creator of the Oculus Rift. In an interview with CNBC, Luckey said the two-year project was motivated by the increased threats to U.S. and allied troops from what are known as kamikaze drones.
The Roadrunner differs from earlier drone hunters in its ability to quickly carry out a mission and then return to its launch point, meaning it can be serviced and reloaded for another mission in a very short period of time.
It can also take off and land vertically, eliminating the need for a runway and making it versatile enough to be used in any kind of terrain.
The Roadrunner-M is a long-range option carrying a high-explosive warhead that can go after jet-powered threats, protecting the troops on the ground. The base model Roadrunner can be fitted with modular payloads that can be sent to hit a specific target before turning back for another mission.
The Roadrunners aren’t just prototypes that exist only on paper. In an interview with Yahoo Finance, Luckey said these UAVs are already in production, with an unnamed U.S. military partner soon to be taking delivery.
One reason the Roadrunner and the ‘M’ variant can be priced far below their competitors is the way Anduril develops and markets its products. Anduril builds its systems with its own money rather than seeking a defense-funded contract for production, essentially bringing a ready-made defense system to the table. That is especially important as the danger to American troops serving abroad rises.
Europe’s AI regulations face uncertain fate amid negotiations
The European Union’s once-pioneering effort to regulate artificial intelligence has hit roadblocks as the legislation nears the finish line. Lawmakers are struggling to reconcile differing perspectives on foundation models, the large systems that underpin generative AI tools like ChatGPT, according to Reuters.
The European Commission unveiled its proposal for the AI Act in 2021, and the European Parliament granted approval in June of 2023. The drafted regulations now have to move through the European Parliament, the Council and the European Commission.
However, a persistent tension between the pursuit of unhindered innovation in the competitive AI landscape and the imperative for robust regulations to prioritize safety remains a central challenge.
Obstacles emerged during EU negotiations when France, Germany and Italy advocated for self-regulation among companies developing foundation models, according to Politico. The three countries proposed a paper asserting the need for a “regulatory framework which fosters innovation and competition, so that European players can emerge and carry our voice and values in the global race of AI.”
Additional contentious issues include a proposal for a complete ban on public facial recognition.
With talks scheduled for Wednesday, Dec. 6, EU leaders face a crucial moment as they strive to finalize a version before the commencement of election campaigns next year. A failure to reach an agreement could jeopardize the legislation.
Simultaneously, while EU negotiations persist, China has surged ahead with its own AI regulation, which took effect in August.
In the United States, President Joe Biden took a significant step in October by issuing an executive order on AI safety. The order requires developers of AI systems that could threaten national security to share safety test results with the government. However, the executive order only goes so far, and President Biden called on Congress to take further action on AI regulation.
Ex-Google CEO warns of AI threats to humanity within 5-10 years
Former Google CEO Eric Schmidt warned Tuesday, Nov. 28, that artificial intelligence could pose a danger to humanity within the next five to 10 years. At the Axios AI+ Summit in Washington, D.C., Schmidt said he worries about “the point at which the computer can start to make its own decisions to do things.”
Schmidt compared the development of nuclear weapons to that of AI, saying that it took 18 years to get a treaty over test bans after Nagasaki and Hiroshima.
“We don’t have that kind of time,” Schmidt said.
Schmidt called for the creation of a body similar to the Intergovernmental Panel on Climate Change to provide guidance to policymakers, who would regulate AI. He also said that AI could have significant benefits, such as helping to improve health care and education.
However, the potential risks of AI are causing growing concern among some experts.
Reuters reported that staff researchers at OpenAI sent a letter to the board of directors warning that a significant AI breakthrough could pose a threat to humanity. This letter was allegedly one of the reasons why the board dismissed Sam Altman as the company’s CEO last month.
The AI model reportedly causing concern, called Q*, is said to have solved certain mathematical problems, potentially indicating greater reasoning capabilities than previously thought.
The Verge asked Altman about the project, to which he said he had “no particular comment on that unfortunate leak.”
It’s ChatGPT’s birthday. Here’s how it changed the AI game in 1 short year.
The final weeks of ChatGPT’s first year were mired in drama. The face of the technology, OpenAI CEO Sam Altman, was unexpectedly fired by his board and subsequently rehired after hundreds of OpenAI employees threatened to join him at Microsoft.
One day before the first anniversary of ChatGPT’s launch, Altman announced his official return as CEO, along with a reshuffling of the board following the fallout. Most notably, the future of co-founder and chief scientist Ilya Sutskever remains in doubt.
Sutskever was behind the board’s effort to oust Altman and has lost his seat on the board, which is now being led by former Salesforce CEO Bret Taylor.
“I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him,” Altman said in a statement. “While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.”
Turmoil aside, there is no doubt that ChatGPT has changed the AI game in the past year. While AI has been around for decades, ChatGPT’s user experience is what propelled this profound impact: in just 12 months, generative AI has become widely used and accessible and is already transforming workplaces.
The launch of ChatGPT one year ago also triggered a technology arms race, with tech giants rushing to invest in chatbots of their own or buy into OpenAI. Microsoft’s $10 billion investment in OpenAI led the charge.
“If anything, the next 12 months of the AI industry will move even faster than the last 12,” The Verge’s David Pierce wrote.
Recently, Straight Arrow News interviewed global AI expert Aleksandra Przegalińska about AI’s development before and after ChatGPT. A philosopher of artificial intelligence, Przegalińska dives into the public’s fear of AI and how media is driving the narrative.
That interview is in the video at the top of this article. Below are time stamps to forward to a particular topic of interest from the in-depth conversation.
0:00-2:22 Introduction
2:23-5:00 My Unconventional Path To AI Research
5:01-9:42 How The Terminator, Media Drive Our AI Fears
9:43-13:01 Sam Altman, AI Developers Spreading Fear
13:02-14:00 Elon Musk’s Big Regret?
14:01-18:55 How ChatGPT Changed Everything
18:56-25:01 Do Politicians Know Enough About AI To Regulate?
25:02-31:48 The Dangers Of The Uncanny Valley, Deepfakes
31:49-39:27 Will AI Cause Massive Unemployment?
39:28-43:49 Answering Most-Searched Questions About AI