Steam gets ahead of new CA law, informs users they don’t own their games
Steam is informing its users that they don’t fully own the games they purchase, as a new California law on digital content ownership looms. The law, taking effect next year, aims to bring transparency to digital marketplaces by clarifying that users are buying licenses, not permanent ownership of games on such platforms.
Steam, the largest digital gaming distribution platform, has begun notifying users that their digital purchases are licenses to access content, not the content itself. This move comes ahead of a California law requiring companies to clearly communicate the nature of digital purchases after several incidents where users lost access to content.
Under the law, terms like “buy” or “purchase” can only be used if consumers are granted permanent access to the product. Otherwise, companies must clarify that the purchase is for a license that can be revoked.
The law follows high-profile cases involving Ubisoft and Sony, where users lost access to purchased games and TV shows due to server shutdowns or content removal from platforms. Critics argue that these cases raise questions about the ethics of pirating digital content, given that users don’t actually own the material they pay for.
Steam’s compliance with the new law signals a broader industry shift in how digital content is sold and marketed, sparking debate about consumer rights and potential pushback.
CHP seizes $1.7 million in fentanyl, arrests three in Central Valley drug busts
Two drug busts in California’s Central Valley this week highlighted criminals’ persistent efforts to distribute fentanyl, with significant seizures along Interstate 5.
California Highway Patrol officers made two critical traffic stops, resulting in the seizure of 120,000 fentanyl-laced pills and 11 pounds of fentanyl with a combined value of $1.7 million.
On Oct. 3 in Fresno County, a CHP K-9 unit alerted officers to narcotics hidden in a cooler. Inside the cooler, authorities discovered packages of raw beef used to conceal 11 pounds of fentanyl, worth about $500,000. The suspect, a resident of Washington state, was arrested and booked on felony charges.
A day later in Merced County, another CHP K-9 officer discovered 120,000 fentanyl-laced pills and two handguns during a routine traffic stop in Los Banos. Two more suspects, also from Washington state, were arrested on multiple felony charges.
These operations are part of California’s broader effort to combat the fentanyl crisis. Gov. Gavin Newsom, D, has ramped up enforcement and prevention measures, including expanding the National Guard’s Counterdrug Task Force. Since January 2024, the task force has seized over 5,000 pounds of fentanyl powder and nearly 10 million pills.
VP nominee Walz supports national popular vote over Electoral College
Minnesota governor and Democratic vice presidential nominee Tim Walz recently expressed support for abolishing the Electoral College in favor of a national popular vote. Speaking at a fundraiser hosted by California Gov. Gavin Newsom, Walz emphasized that while many share this sentiment, the likelihood of seeing such a change in the current political landscape remains slim.
“I think all of us know the Electoral College needs to go,” Walz said. “We need a national popular vote, but that’s not the world we live in.”
The Electoral College, established by the framers of the Constitution as a compromise between having Congress choose the president and electing the president by a direct popular vote, has produced controversial outcomes in recent elections.
In both the 2000 and 2016 elections, the Electoral College awarded the presidency to candidates who lost the popular vote, raising questions about the fairness and modern relevance of the system.
Critics argue that the Electoral College allows presidential candidates to focus their campaigns on a few key battleground states, neglecting large parts of the country where outcomes are considered predictable.
Meanwhile, defenders of the system say it ensures that smaller states, which could be overlooked in a national popular vote, have a voice in presidential elections.
A significant movement has gained traction in recent years to work around the Electoral College without changing the Constitution. The National Popular Vote Interstate Compact is an agreement among a growing number of states to allocate their electoral votes to the candidate who wins the national popular vote, regardless of the state’s own outcome.
Currently, 17 states and the District of Columbia have signed on to the pact, but it will only go into effect once enough states have joined to represent at least 270 electoral votes, the number needed to win the presidency.
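The compact’s trigger works like a simple threshold check. As an illustrative sketch only (the member names and vote counts below are hypothetical placeholders, not the actual roster):

```python
# Illustrative sketch of the National Popular Vote Interstate Compact's
# trigger: it takes effect only once member states' combined electoral
# votes reach the 270 needed to win. Figures below are hypothetical.

ELECTORAL_VOTES_TO_WIN = 270

def compact_in_effect(member_votes: dict[str, int]) -> bool:
    """True once members control enough electoral votes to decide the race."""
    return sum(member_votes.values()) >= ELECTORAL_VOTES_TO_WIN

members = {"State A": 54, "State B": 28, "State C": 19}  # hypothetical roster
print(compact_in_effect(members))  # False: 101 combined votes falls short of 270
```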
Public support for moving away from the Electoral College is strong. A Pew Research poll found that 63% of Americans favor a shift to the popular vote, though support varies significantly along party lines. Democrats are overwhelmingly in favor of the change, while Republicans are more divided on the issue, with some arguing that the Electoral College serves to balance the interests of smaller and rural states.
Since 1888, only two U.S. presidents have won the White House without securing the popular vote: George W. Bush in 2000 and Donald Trump in 2016.
Despite the increasing push for reform, political analysts caution that significant changes to the Electoral College are unlikely to happen in the near future. For now, the Electoral College remains a deeply entrenched part of U.S. elections.
Expert: Where California’s vetoed AI safety bill went wrong
The proposed California law would have required safety testing of large AI systems. It would have also given the state’s attorney general power to sue companies over serious harm caused by their tech, and it would have required a sort of “kill switch” that would turn off AI models in case of emergency.
“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said in a statement explaining his opposition. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”
“It’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses,” said Patrick Hall, assistant professor of decision sciences at George Washington University.
The European Union’s AI Act, by contrast, takes a tiered approach. Minimal-risk systems like OpenAI’s ChatGPT would only need to adhere to transparency provisions and EU copyright laws, while systems deemed an unacceptable risk, like AI models that try to predict whether a person might commit a crime, will be banned outright as of February 2025.
The following transcript has been edited for length and clarity.
Simone Del Rosario: Patrick, what was it in this bill that the governor of California sent back and how would it have changed the AI landscape in the state?
Patrick Hall: I think that there are a lot of good things on the table for this California bill, in particular, mandatory testing before systems were released; the ability for the government to take enforcement actions when harms do occur related to AI systems; the notion of a kill switch or the ability to turn a system off quickly; whistleblower protections. There were good things there.
I think that the issue was that the focus of the law was on so-called frontier models. And these are sort of the largest AI models developed by the largest AI companies. It’s a very narrow scope. And then also it really only focused on a sort of small aspect of the performance of AI systems that has come to be known, sort of confusingly, as AI safety.
AI safety really concentrates on things like preventing systems from being used to make bioweapons, preventing catastrophic risk, and I think that was where the bill went wrong.
AI can be a dangerous technology, but I think that it’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses. So I think that’s where the bill went wrong: its focus on catastrophic risk.
Simone Del Rosario: Do you agree with the tech companies that said this bill would have stifled innovation because of the requirements they would face before developing these systems, or is that just an excuse?
Patrick Hall: My opinion there is that it is an excuse, but it would certainly have cut into their revenues on these AI systems, which are probably already under a great deal of stress. I try to explain to people that these generative AI systems require industrial-scale investments in computation, tens [to] hundreds of millions of dollars or more. So they’ve already spent a lot of money on these systems. Whenever you have a sort of regulatory burden, that, of course, increases the amount of money that you have to spend. But since we’re talking about the biggest, richest companies in the world, I do think it’s a little bit of an excuse.
Simone Del Rosario: I am curious: had this bill passed, or if California decides to move forward with different but similar legislation regulating AI when the rest of the country hasn’t, could this change how tech companies operate in the state of California?
Patrick Hall: Certainly you could see tech companies leave the state of California. I’m not sure how realistic that is, though. What tends to happen is almost a different scenario: most of the larger firms take a regulation from a large state – California, New York, Illinois, Texas – and apply the obligations needed to meet it across the entire United States.
I’d say that’s actually the more likely outcome, and perhaps another reason some of the tech firms did not like this bill: they knew it would affect not only their behavior and their revenues in California, but their behavior and revenues throughout the country.
Simone Del Rosario: Let’s extrapolate that out even more because the EU has passed AI regulation, the AI Act, over there. These are multinational companies that have to adhere to rules in the EU. So how does that affect business in America? And how is the proposed regulation in California different from what we see in the EU?
Patrick Hall: One thing that I would like to emphasize is that EU citizens and citizens of other countries with strong data privacy laws or AI regulations really have a different experience online than Americans, and have many more protections from predatory behaviors by tech companies than we as Americans do.
What it boils down to is tech companies are able to extract a lot more data and sort of conduct a lot more experiments on Americans than they are able to on EU citizens and citizens of other countries in the world that have strong data privacy or AI regulations.
I think it’s a fully different online experience in Europe these days than it is in the U.S. The EU AI Act is a fairly different kind of law. It’s a much broader law and it’s a law that doesn’t focus only on so-called frontier models or only on large models. It doesn’t focus only on safety. It focuses on all types of uses of AI, and it has several different risk tiers, where models in different risk tiers or systems in different risk tiers have different compliance burdens. So it’s a much more holistic law.
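Hall’s description of those tiers can be made concrete with a loose sketch; this is a simplification for illustration, not the Act’s legal text, and the obligation summaries are paraphrases rather than statutory language:

```python
# Simplified sketch of the EU AI Act's risk tiers (illustration only;
# the Act's actual obligations are far more detailed than this mapping).
RISK_TIERS = {
    "unacceptable": "banned outright (e.g., crime-prediction profiling)",
    "high": "testing, documentation and conformity requirements",
    "limited": "transparency duties, such as disclosing AI-generated content",
    "minimal": "little or no obligation beyond existing law",
}

def compliance_burden(tier: str) -> str:
    """Look up the rough obligation attached to a risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(compliance_burden("high"))  # testing, documentation and conformity requirements
```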
Simone Del Rosario: Do we need to have an AI act of our own for a federal response to this?
Patrick Hall: It’s a very good question. I think the answer is yes, eventually. AI in 2024 is very data-driven, so it’s very hard to have good AI regulation without good data privacy regulation. The EU is quite far ahead of us in that they have a strong, overarching data privacy regulation, the GDPR, and after they passed that, they were able to pass an AI Act.
Now it doesn’t have to be done in that order. I’m not saying that the Europeans have done everything right. I’m not saying that they won’t stifle innovation. Certainly, they will to a certain degree, but we have a lot of catching up to do as well. We need to start thinking about data privacy and broader regulation of AI, certainly, and those two may have to be done together. It’s just hard to do AI regulation without data privacy regulation because 2024 AI is so data-driven.
We as voters need to make it clear to our representatives that these types of regulations are important, and we need to make clear the harms we’re experiencing, anything from privacy violations to inconveniences to more serious negative outcomes.
These algorithms are becoming a bigger and bigger part of our lives, and I do think it’s time to regulate them. I’d also make it clear that we have good models for regulating algorithms on the books in consumer finance, employment decision-making and medical devices, and any of these would be a better model to start from than the sort of, quote-unquote, AI safety direction.
The Babylon Bee sues California over satire and ‘deepfake’ law
The Babylon Bee has filed a lawsuit against the state of California, challenging new laws signed by Gov. Gavin Newsom, D, that regulate satire and parody through “deepfake” restrictions. The lawsuit claims the laws infringe on free speech by requiring social media platforms to monitor and report deceptive content, which could include political satire.
In July 2024, Gov. Newsom tweeted that a parody video of Vice President Kamala Harris should be illegal, leading to legislation prohibiting such content. The new laws focus on AI-generated disinformation but also mandate labeling or removal of satire that isn’t clearly marked.
Babylon Bee CEO Seth Dillon argued that forcing disclaimers ruins the humor of satire and could result in penalties if not followed.
The lawsuit, supported by Alliance Defending Freedom, asserts that satire is protected under the First Amendment, which guarantees the right to political speech. The suit adds to a growing list of conservative media outlets alleging censorship.
Last year, The Daily Wire and The Federalist sued the Biden administration over similar concerns. This particular legal showdown could set new precedents for political satire in the digital age.
Newsom vetoes controversial California AI safety bill
Gov. Gavin Newsom vetoed a landmark California bill that would have established the nation’s first safety regulations for artificial intelligence. The proposed legislation sought to require AI companies to test their systems, publicly disclose safety measures and provide whistleblower protections.
Proponents of the bill argued that it was a necessary step to address the risks AI poses to infrastructure and public safety, with potential threats ranging from manipulating electric grids to creating chemical weapons.
However, Newsom opposed the bill, stating its strict requirements could hinder innovation by imposing regulations even on low-risk AI systems.
California is home to 32 of the world’s top 50 AI companies.
Newsom expressed concern that the legislation could drive developers out of the state. Instead, he announced a partnership with AI industry leaders to create more flexible safety guidelines.
While the veto is seen as a win for tech companies, experts warn that it leaves rapidly advancing AI systems unregulated.
Supporters of the bill, including Elon Musk’s X and AI firm Anthropic, argued that it could have introduced much-needed transparency in an industry that remains largely unregulated.
Critics of the bill feared the regulations might discourage investment and development, particularly in open-source software. Despite the veto, similar efforts are already inspiring lawmakers in other states to explore AI safety measures.
California continues to lead AI development, with the state already working to combat election deepfakes, protect Hollywood workers from unauthorized likeness use and prevent AI-driven discrimination in hiring practices.
Meanwhile, the Biden administration has proposed an AI Bill of Rights, but federal legislation regulating the rapidly growing industry has yet to become law.
Gov. Newsom, with Demi Lovato, signs bills to protect young actors, creators
California Gov. Gavin Newsom, D, just signed two bills into law meant to protect the earnings of children and teenagers involved in creating online content. The New York Times reported that some young influencers can make between $10,000 and $20,000 per post.
One of the legislation’s biggest supporters is singer Demi Lovato. She recently went on “The Tonight Show Starring Jimmy Fallon” to call on Gov. Newsom to sign the bills.
“In order to make things different for future generations, there have to be protections put in place for minors of the digital age,” Lovato said.
“So we’re talking about families who are profiting off of social media, you know, minors need to be compensated for that. And there’s actually a bill on Governor Gavin Newsom’s desk right now. He has until September 30 to sign it. And it advocates for the compensation of minors working on social media. Let’s go. Let’s get that signed and moving. Let’s go. Let’s go,” she continued.
And Newsom was watching. He discussed the issue with Lovato on the latest episode of his podcast, “Politickin’,” with cohosts Marshawn Lynch and Doug Hendrickson.
“She was attacking me, Doug, on Fallon,” Newsom joked on the podcast. “She’s saying, ‘This Newsom guy, we’re gonna track him down.’ And I think Fallon also said, ‘Newsom’s watching’ or something. I’m like, Jesus, this is next level pressure.”
But it seemed to get his attention. A video posted to the Democratic governor’s YouTube channel on Thursday, Sept. 26, showed him signing the legislation alongside Lovato.
One of the bills requires parents and guardians who feature their children in at least 30% of their online videos to set aside a percentage of earnings in trust accounts. Straight Arrow News reported on a similar law in Illinois that took effect in July.
The other bill expands on a law passed decades ago that protects child actors. Lovato, who rose to fame as a child actor on the Disney Channel, released a documentary on Hulu this month titled “Child Star,” which took a close look at what it’s like growing up in front of the cameras, the good and the bad.
In their video, Newsom and Lovato explain how a child actor from a century ago who starred in Charlie Chaplin films is helping the social media creators of today.
“People don’t know who Jackie Coogan was,” Newsom said. “We wouldn’t be here. We’re building off Coogan’s Law. An actor back in the 1920s.”
The second bill expands the Coogan Law to include young creators who make content themselves for platforms like YouTube and TikTok. Under the law, 15% of a young performer’s earnings must be set aside in a trust until they turn 18.
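As a rough illustration of that set-aside, using the per-post earnings range reported earlier (a hypothetical creator; an actual calculation would follow the statute’s definitions):

```python
# Illustrative only: applying the 15% trust set-aside to the per-post
# earnings range cited above. A real calculation would follow the statute.
TRUST_RATE = 0.15

def trust_set_aside(earnings: float) -> float:
    """Portion of a young creator's earnings placed in trust."""
    return earnings * TRUST_RATE

for per_post in (10_000, 20_000):  # reported per-post earnings range
    print(f"${per_post:,} post -> ${trust_set_aside(per_post):,.0f} to trust")
```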
Lovato hopes the influence she lent to these new laws will lead to similar legislation across the country.
California Gov. Gavin Newsom vetoes bill for Black land claims
California Gov. Gavin Newsom vetoed a bill that would have created a process for Black families to file claims for land taken through discriminatory use of eminent domain. The eminent domain bill was inspired by a 2022 case where Los Angeles area officials returned beachfront property to a Black couple a century after its unjust seizure.
Democratic Sen. Steven Bradford, the bill’s author, viewed the proposal as a crucial step toward reparations and correcting historical wrongs.
The veto dealt a significant blow to a key part of the California Legislative Black Caucus’s reparations package aimed at addressing decades of racial disparities.
The Department of Finance opposed the bill, citing potential annual costs ranging from hundreds of thousands to millions of dollars.
The Newsom administration instead proposed allocating $6 million to California State University to fund a study on reparations, signaling a potential shift from direct action to further research.
Gov. Newsom signs California ban on all plastic grocery bags by 2026
All plastic shopping bags at California grocery stores will be banned by Jan. 1, 2026. Gov. Gavin Newsom signed a law Sunday, Sept. 22, that will eliminate all plastic bags at grocery checkouts.
In 2014, the California Legislature passed a statewide ban on single-use plastic bags. However, shoppers were still able to purchase “thicker” plastic bags they could reuse. Advocacy groups said the law “needed a re-do.”
“Unsurprisingly, people were just not really reusing these,” Jenn Engstrom, the CalPIRG state director, said during an interview with CBS. “So we actually did a survey where we stood outside grocery stores and counted how many people walked into the store with one of these bags to re-use them and we found only 2% of the people we counted actually brought one of these bags back.”
Environmental groups support the ban, while critics express concerns about consumer costs and excessive regulations.
State data shows per capita plastic bag trash increased from eight pounds in 2004 to 11 pounds in 2021, highlighting the growing problem. Despite previous efforts, California experienced a 47% increase in grocery and merchandise bag waste from 2016 to 2022, rising from just over 157,385 tons to 231,072 tons annually.
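As a quick arithmetic check, the tonnage figures cited above do imply the stated 47% increase:

```python
# Sanity check of the reported rise in bag waste, using the state figures above.
tons_2016, tons_2022 = 157_385, 231_072
pct_increase = (tons_2022 - tons_2016) / tons_2016 * 100
print(f"{pct_increase:.0f}% increase")  # prints "47% increase"
```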
Coastal cleanup efforts have collected more than 300,000 plastic grocery bags in the last three decades.
Oceana, a nonprofit ocean conservation organization, emphasizes that plastic is deadly to ocean wildlife and threatens marine ecosystems.
Starting in 2026, California shoppers must bring reusable bags or purchase paper bags at checkout, typically costing between 10 and 25 cents each.
Twelve other states have implemented similar restrictions, indicating a growing national trend towards plastic bag bans.
Creator of Kamala Harris parody video sues California over ‘deepfake’ ban
A conservative commentator who used artificial intelligence to create a parody video of Vice President Kamala Harris is suing California, arguing that recent laws banning AI-generated political content violate his constitutional rights. Christopher Kohls, known online as “Mr. Reagan,” filed the lawsuit after Gov. Gavin Newsom signed legislation aimed at curbing the spread of digitally altered deepfakes in political campaigns.
The lawsuit claims the laws infringe on Kohls’ First Amendment and 14th Amendment rights, asserting that political satire, whether created by AI or traditional methods, is a protected form of free speech.
In July, Kohls posted a video that mimicked Harris’ voice using AI, portraying her as “the ultimate diversity hire” in a mock campaign ad. The video quickly went viral after it was shared by X owner Elon Musk without being labeled as parody.
Newsom, who criticized the video, vowed to act swiftly to ban such AI-altered content, citing concerns over the potential for misinformation to influence elections.
On Tuesday, Sept. 17, he signed laws targeting fraudulent campaign materials, including those generated with artificial intelligence.
Kohls’ lawsuit challenges these laws, claiming that the state is attempting to make political satire illegal and restrict his ability to use AI in his content. He argues that the laws could suppress free expression in political discourse, especially as AI becomes more commonly used in media.
Legal experts say the case could set a precedent for how AI-generated content is regulated in future elections and whether existing free speech protections extend to digital parodies.
Newsom’s office has not yet commented on the lawsuit.