Universal, TikTok strike new deal to end feud that kept Taylor Swift off the platform
Artists from Universal Music Group (UMG), including Taylor Swift, Drake, Adele, Bad Bunny and Billie Eilish, are set to return to TikTok following a new licensing agreement that resolves a three-month dispute. Announced Wednesday, May 1, the deal addresses past concerns over artist compensation, the use of AI and user safety on TikTok.
Universal Music Group and TikTok have reached a new licensing agreement, ending their months-long dispute and bringing the label’s catalog back to the short-form video platform.
The agreement specifically tackles concerns over generative AI by ensuring that future AI development within the music industry will protect artists’ and songwriters’ creative rights and earnings.
TikTok CEO Shou Chew emphasized the platform’s commitment to drive value and promote UMG’s talent, including removing unauthorized AI-generated content and enhancing attribution for artists and songwriters. The deal also aims to create a safer online community by preventing fake merchandise and ticket sales scams.
Additionally, the agreement introduces new monetization avenues and global promotional campaigns for UMG artists, aiming to leverage TikTok’s vast user base for greater artist visibility and engagement.
Our core mission is simple: to help our artists & songwriters attain their greatest creative and commercial potential, which is why we must call time out on TikTok.
Universal Music Group
However, the future of TikTok in the U.S. remains uncertain. Recent legislation requires TikTok’s parent company, ByteDance, to sell to a U.S. owner within a year or shut down, posing potential challenges to such agreements.
TikTok maintains it has not shared and will never share U.S. user data with the Chinese government.
It’s ChatGPT’s birthday. Here’s how it changed the AI game in 1 short year.
The final weeks of ChatGPT’s first year were mired in drama. The face of the technology, OpenAI CEO Sam Altman, was unexpectedly fired by his board and subsequently rehired after hundreds of OpenAI employees threatened to join him at Microsoft.
One day before the first anniversary of ChatGPT’s launch, Altman announced his official return as CEO, along with a reshuffling of leadership following the fallout. Most notably, the future of co-founder and chief scientist Ilya Sutskever is still in doubt.
Sutskever was behind the board’s effort to oust Altman and has lost his seat on the board, which is now being led by former Salesforce CEO Bret Taylor.
“I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him,” Altman said in a statement. “While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.”
Turmoil aside, there is no doubt that ChatGPT has changed the AI game in the past year. While AI has been around for decades, the user experience of ChatGPT is what propelled this profound impact. Generative AI is now widely used and accessible, and in just 12 months it has already begun transforming workplaces.
The launch of ChatGPT one year ago also triggered a technology arms race, with tech giants rushing to invest in their own chatbots or buy into OpenAI’s. Microsoft’s $10 billion investment in OpenAI led the charge.
“If anything, the next 12 months of the AI industry will move even faster than the last 12,” The Verge’s David Pierce wrote.
Recently, Straight Arrow News interviewed global AI expert Aleksandra Przegalińska about AI’s development before and after ChatGPT. A philosopher of artificial intelligence, Przegalińska dives into the public’s fear of AI and how media is driving the narrative.
That interview is in the video at the top of this article. Below are time stamps for skipping ahead to a particular topic of interest in the in-depth conversation.
0:00-2:22 Introduction
2:23-5:00 My Unconventional Path To AI Research
5:01-9:42 How The Terminator, Media Drive Our AI Fears
9:43-13:01 Sam Altman, AI Developers Spreading Fear
13:02-14:00 Elon Musk’s Big Regret?
14:01-18:55 How ChatGPT Changed Everything
18:56-25:01 Do Politicians Know Enough About AI To Regulate?
25:02-31:48 The Dangers Of The Uncanny Valley, Deepfakes
31:49-39:27 Will AI Cause Massive Unemployment?
39:28-43:49 Answering Most-Searched Questions About AI
These Sports Illustrated writers never existed. Their faces are AI-generated.
Sports Illustrated has removed articles after Futurism reported that the outlet published them under fake author names with AI-generated headshots. The revelation has sparked concerns about the growing use of AI in journalism.
On Monday, Nov. 27, Futurism reported that the headshots of these nonexistent writers were available for purchase on a website that sells AI-generated content. According to the report, a source involved in the creation of the authors confirmed that some of the articles attributed to the authors were also AI-generated.
The Arena Group has been Sports Illustrated’s publisher since 2019. The company addressed the allegations in a statement to media outlets.
The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce.
The Arena Group
“Today, an article was published alleging that Sports Illustrated published AI-generated articles,” the statement said. “According to our initial investigation, this is not accurate. The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce.”
The Arena Group statement also mentioned that AdVon Commerce writers use pen names or pseudonyms on some articles.
Sports Illustrated is not the first outlet accused of experimenting with AI and not disclosing it. In October, USA Today’s product reviews site, Reviewed, faced accusations of publishing AI-generated articles.
Despite AI detection programs reportedly indicating otherwise, Gannett said that the articles were written by freelancers, according to The New York Times. USA Today currently has ethical guidelines that mandate disclosure when AI is employed in its content creation process.
Generative AI threatens 2024 elections; false Israel-Hamas images spread
The tech world is preparing for what some say has already begun disrupting democratic processes: Artificial intelligence. More specifically, the focus is on generative AI, a type of AI that creates fake, but convincingly realistic images, audio and text.
At the Reuters NEXT conference in New York during the week of Nov. 5, Gary Marcus, an AI entrepreneur and professor emeritus of psychology and neural science at New York University, said the peril AI poses to democracy is the technology’s most substantial risk.
“There are a lot of elections around the world in 2024, and the chance that none of them will be swung by deepfakes and things like that is almost zero,” Marcus said.
Politicians have been particularly vulnerable to these threats. Meta has taken preemptive measures by deciding to prohibit advertisers from utilizing its generative AI for political ads on Facebook and Instagram.
Starting next year, the use of third-party AI software for political, electoral, or social ads will require disclosure. Failure to comply may lead to ad rejection, and repeated violations could incur penalties.
While deepfake detection has historically been imperfect, DeepMedia claims its product detects deepfakes with 99% accuracy.
“The thing that makes our deepfake detection highly accurate, really fast and easy to use, is the fact that we both do generation and detection, these are kind of two sides to the same coin,” COO and co-founder Emma Brown said.
Brown cautioned against focusing solely on entirely fabricated content, noting instances where only a brief segment of a video is manipulated. She emphasized the difficulty in detecting such alterations, even for highly trained analysts, making it a critical concern.
“One thing that we’ve found is, you know, there are certain situations where only three seconds of a video are faked, and it might be a 20-minute video, and it might change the meaning of something,” Brown said. “But it’s only three seconds.”
Beyond the domestic effects, deepfakes are further complicating international issues.
“One of the things that we’re doing is we’re working directly with platforms to make sure that it’s integrated for all users,” Brown said. “And we’ve actually recently come out with a Twitter bot in response to Israel, Hamas.”
Recent revelations about Adobe selling AI-generated images depicting scenes of war, including explosions and destroyed homes in Gaza, further underscore the challenges. Adobe used a label to indicate the images were generated with AI.
Experts, including Brown, anticipate that the prevalence of deepfakes will only increase, flooding social media platforms with more manipulated video and audio content.
ChatGPT launched an AI revolution. Here’s where we stand nearly 1 year on.
Artificial intelligence hit the mainstream like a firestorm following the release of OpenAI’s ChatGPT. Technology companies scrambled to join the AI arms race, led by Microsoft’s $10 billion investment in OpenAI. At the same time, Capitol Hill sprang into action, holding hearing after hearing over safety and regulation.
The overnight sensation of generative AI is not likely to burn out as quickly as it came on. The endless possibilities are expected to transform technology, the workforce and society at large. At this pivotal juncture, humans will shape where artificial intelligence goes from here, but many fear the direction it will take.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness.
Przegalińska is a senior research associate at Harvard University analyzing AI, robots and the future of work. She has a doctorate in the philosophy of artificial intelligence from the University of Warsaw and is an associate professor at Kozminski University.
The AI Revolution
Interest in artificial intelligence exploded when ChatGPT first hit the masses in November 2022. While AI has technically been around for decades, the sheer accessibility of directly interacting with a chatbot led to a surge in chatter, as evidenced by Google search trend data.
But it wasn’t just talk. Companies were quick to put money on the table. Nothing comes close to Microsoft’s $10 billion OpenAI investment, but tech companies, health care firms and venture capitalists were quick to ink their own deals in the first quarter of 2023. Microsoft’s move also triggered an AI search-engine race, pushing Google to release Bard, its experimental AI-powered search tool.
The Fear Factor
As humans reckon with the future of artificial intelligence capabilities, Aleksandra Przegalińska, who holds a doctorate in the philosophy of AI, says the most prevalent emotion is fear.
It is mostly a story that is infused with fear, with a sense of threat; where artificial intelligence can reach a level where it figures out that it’s also as smart as we are, perhaps even smarter, and then becomes our enemy. And I think it’s in many ways a story about our history.
Aleksandra Przegalińska, AI expert
Przegalińska said many factors play into this fear, from movies like “The Terminator” to fear spread by AI developers themselves.
This past spring, AI leaders and public figures attached their names to the following statement. Key names that signed on include OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis and Bill Gates.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Center for AI Safety
“Sam Altman is obviously telling the Congress that we should all be scared but then again, he’s incubating GPT-5 as we speak,” Przegalińska said. “This to me seems a bit strange. Either you say, ‘Okay, there is a chance that this technology will be misused and this is the way I would like to address these concerns,’ or you’re saying, ‘Well, it’s ultimately the worst thing that can happen to humanity and I just simply stop building it at all.’”
I think maybe he has some regrets being left with Twitter instead of surfing this big AI wave.
Aleksandra Przegalińska on Elon Musk advocating for an AI ‘pause,’ citing risks to society. Musk was an early investor in OpenAI.
Perhaps the biggest fear of AI is the possibility that it could replace so many livelihoods. In March, investment bank Goldman Sachs predicted that AI could automate the equivalent of 300 million full-time jobs between the U.S. and Europe.
Przegalińska, whose research at Harvard University focuses on AI and the future of work, says developers should focus on how humans can collaborate with AI to increase productivity, rather than on replacing humans altogether.
Many things can go wrong if you decide to choose that pathway of full automation.
Aleksandra Przegalińska, AI expert
“But our jobs will change and some jobs will probably disappear because of artificial intelligence,” Przegalińska said. “And I do think that politicians have to look at that as well.”
In May 2023, AI was responsible for 3,900 job cuts in the U.S., according to data from Challenger, Gray & Christmas, Inc.
When it comes to regulating AI, the U.S. is not the one setting the global groundwork. This summer, the European Union passed a draft law known as the AI Act, legislation that was years in the making. But it’s just a start.
“I do regret a bit that this regulation happened this late,” Przegalińska said. “Many people from the AI field have been calling for regulation before ChatGPT and way before ChatGPT. We knew already that there would be some problems because some of these systems are just not explainable. They’re like black boxes; they’re very difficult to understand and yet we use them.”
Meanwhile, lawmakers on Capitol Hill have held several hearings about risks posed by artificial intelligence and ways to regulate its use. However, American efforts are considered to be in the early stages, and lawmakers have been criticized for not understanding the technology they aim to regulate, as during earlier Big Tech hearings.
“There was a bit of a mismatch in terms of digital competencies,” Przegalińska said.
I do hope that this time around, the politicians will come prepared, that they will be better prepared for these types of discussions.
Aleksandra Przegalińska, AI expert
How should AI be regulated to combat deepfakes and bad actors? Click here for more.
The Uncanny Valley
How easy is it to tell what is real and what is artificial? AI today has some serious quirks, like generating eight or nine fingers on one hand. But as technology advances, it’ll get more and more difficult to separate fact from fiction.
I have my own deepfake, and it’s so good that for me, it’s even sometimes hard to figure out whether it’s her speaking or myself. Really, that’s so uncanny.
Aleksandra Przegalińska, AI expert
In real life and in the movies, roboticists have pursued building robots that look and act human, coming close to crossing into the uncanny valley.
“The uncanny valley is this concept that tells us that if something resembles a human being, but not fully, then we are scared of it,” Przegalińska said. “So our probably very deeply ingrained biological response to something that looks like a human and is not a human and we know that, is to just have this eerie sensation that this is not something we should be interacting with.”
What are the psychological effects of crossing into the uncanny valley? Click here to watch.
Full interview time stamps:
0:00-2:22 Introduction
2:23-5:00 My Unconventional Path To AI Research
5:01-9:42 How The Terminator, Media Drive Our AI Fears
9:43-13:01 Sam Altman, AI Developers Spreading Fear
13:02-14:00 Elon Musk’s Big Regret?
14:01-18:55 How ChatGPT Changed Everything
18:56-25:01 Do Politicians Know Enough About AI To Regulate?
25:02-31:48 The Dangers Of The Uncanny Valley, Deepfakes
31:49-39:27 Will AI Cause Massive Unemployment?
39:28-43:49 Answering Most-Searched Questions About AI
Why we fear AI, from a PhD in philosophy of artificial intelligence
Conversations around artificial intelligence are often filled with fear and threat. Much of it can be traced back to movies, news stories, and even comments by those developing the technology, according to a Harvard senior research associate with a Ph.D. in the philosophy of artificial intelligence.
“Instead of focusing on things that are to be solved and some challenges ahead, we are just clearly falling into that Terminator narrative,” Aleksandra Przegalińska said.
Przegalińska said experts in the field are among those most afraid of the technology. Many have signed on to statements warning of the risks associated with AI but continue developing it.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Simone Del Rosario: What are some of the emotions that we struggle with when we’re reckoning with the AI development going on?
Aleksandra Przegalińska: I think the most prevalent emotion is probably fear. When you think about it, the way the story of artificial intelligence is being told to us by pop culture, by media, by movies that we like, like ‘The Terminator,’ it is mostly really a story that is infused with fear, with a sense of threat, where artificial intelligence can reach a level where it figures out that it’s also as smart as we are, or perhaps even smarter, and then becomes our enemy.
And I think it’s in many ways a story about our history, how we’ve struggled, and how there were so many conflicts and revolutions as we were moving forward as a civilization. And I do think that we put many of these fears into AI because this is something we know.
On the other hand, we are absolutely intrigued, right? It’s an intriguing field. We don’t have any other technology that is so close to us that can also communicate with us, have a perception of reality, see something, hear something, respond, make inferences, reason. So I think in that way, we are challenged by it, but we are also very interested and intrigued by it and how it’s going to evolve in the future is a very intriguing question here.
Brent Jabbour: I have an interesting question about that, too, because you talk about the fear and the concerns about Terminator and Skynet. Do you think us, as a society, we do a disservice because every time something interesting in AI comes up, we immediately go to the bad rather than the possible good?
Aleksandra Przegalińska: Well, yes, I absolutely agree with that. So I’m definitely on the, I hope, rational side here. So very often, I would just say, ‘Hey, let’s not panic. It’s just technology. AI is a statistical model. It’s very good at what it does, and it can be very helpful to us, but nonetheless, it’s just a tool.’ But there are many other experts also, you know, people who are so prominent in the field who are clearly afraid of this technology. The current discourse around artificial intelligence, including generative AI, something we will probably talk about, is absolutely full of panic, which is unnecessary and perhaps it is a bit of a disservice because instead of focusing on things that are to be solved and some challenges ahead, we are just clearly falling into that Terminator narrative immediately, right. And that does not help us in rational thinking and planning, strategizing around this technology. So that I think is a problem.
Is it alive? How AI’s uncanny valley could threaten human interaction
The uncanny valley as a concept has been around for decades. But as artificial intelligence develops, technology is several steps closer to tricking human brains and manipulating emotions.
The term uncanny valley is used to describe the emotional response from humans when encountering robots that appear human-like. AI expert and philosopher Aleksandra Przegalińska said humans have a common biological response to this interaction: an eerie sensation.
We do see avatars that look almost exactly like humans, where that immediate response of your body is just acceptance… But then there’s suddenly a glitch.
Aleksandra Przegalińska, AI senior research associate, Harvard University
“In the era of deepfakes and also in the context of the fact that we are mostly interacting with the digital world, not necessarily with physical robots, this uncanny valley idea is very, very problematic,” Przegalińska said.
In the video above, she details how encounters with human lookalikes could make people afraid of actual human interaction.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Simone Del Rosario: I was hoping that you could explain for me this concept of the uncanny valley. I’ve heard you talk on it before and I just thought it was a really fascinating look at where people should be designing AI versus where they should be steering away from.
Aleksandra Przegalińska: This is a concept that my team and I have been researching for the past couple of years. It was mainly focused on building robots and how not to build them.
The uncanny valley is this concept that tells us that if something resembles a human being but not fully, then we are scared of it. So our probably very deeply ingrained biological response to something that looks like a human and is not a human and we know that, is to just have this eerie sensation that this is not something we should be interacting with.
I’m not sure if you’re familiar with a robot called Sophia. It’s very, very popular on social media and it gives you that sensation or effect of the uncanny valley — just sort of very confusing to figure out whether you’re really talking to something that is alive or not. Is it healthy or is it sick? What’s going on with it? Why is the mimicry so weird? Why are the eyes rolling so slowly?
So it does resemble a human, but then again, it’s not a human. And that is interesting because now in the era of deepfakes and also in the context of the fact that we are mostly interacting with the digital world, not necessarily with physical robots, this uncanny valley idea is very, very problematic.
We do see avatars that look almost exactly like humans, where that immediate response of your body is just acceptance. You’re seeing something that looks like a human and it talks and it’s all good. But then there’s suddenly a glitch and that glitch is that moment when you realize that this may not be a human.
Then who knows? Maybe in the future, when there are more deepfakes, we will become very cautious and afraid of interactions with others because it will be very hard to classify who it is that we’re dealing with.
The race to regulate AI hits snag; politicians don’t understand the tech
Should government have a role in regulating artificial intelligence? When asked in a closed-door meeting with tech executives, Senate Majority Leader Chuck Schumer said, “every single person raised their hands, even though they had diverse views.”
The overnight sensation of ChatGPT put a timer on government oversight, as politicians scrambled to convene hearings on Capitol Hill about the need for regulation.
I just hope that this time around, we will do a better job than we did with social media.
Aleksandra Przegalińska, AI senior research associate, Harvard University
The U.S. is behind the ball on regulating AI when compared to the European Union, which passed draft legislation known as the AI Act this summer. The AI Act was first proposed by the European Commission in 2021, over a year before OpenAI released ChatGPT.
Countries and commissions face many challenges when it comes to regulating the fast-moving technology. To start, government regulation has never been able to keep up with technological advances, and politicians have regularly displayed an inability to understand the field. So why are some of tech’s biggest executives pushing for regulation?
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Simone Del Rosario: All of this conversation has really been sparked over who’s going to regulate AI and this urgency behind that effort to regulate AI. Who do you think should be regulating something like this, though? You have to admit that politicians aren’t really the most well-versed in the most groundbreaking technology.
Aleksandra Przegalińska: That is correct. And we’ve seen that with social media and the hearings of Mark Zuckerberg in the Senate. There was a bit of a mismatch in terms of digital competencies.
But ultimately, I think it has to be a collective effort of many different stakeholders because this technology is not only about technology. It’s not about IT people and how they’re going to use it, but it’s a technology that is very, very broad. It’s a general-purpose technology, you could even say.
It’s the type of technology that will penetrate so many different professions, tasks, accountants, health care professionals, different people who are working in various organizations, business, consultancy. Wherever you look, you will see AI. So in that way, I think it has to be a collective effort.
I do regret a bit that this regulation happens this late, because actually, many people from the AI field have been calling for regulation before ChatGPT, way before ChatGPT. And we knew already that there will be some problems because some of these systems are just not explainable. They’re like black boxes. They are very difficult to understand and yet we use them.
We want them to make decisions about important things like giving someone a loan in a bank or not, or declining. So we really need to understand what these systems are doing. And that has been a problem way before ChatGPT.
But now I am sort of glad that there’s at least a debate. And I do hope that this time around, the politicians will come prepared and that they will be better prepared for these types of discussions. They do have experts. They can talk to many people.
I observed what’s been going on at the White House. There was a meeting between Kamala Harris and many representatives of those companies that are building generative tools, generative AI.
There has been a hearing at the Senate where one of the senators said that Sam Altman should tell everyone how to regulate AI. And I don’t think it’s necessarily the best way to go. We need at least a couple of rounds of different consultations. Many companies have to be involved, but also NGOs, civil society, researchers who are not working in private companies but also at universities.
There are many people with good ideas so it has to be a dialogue. And I just hope that this time around, we will do a better job than we did with social media.
AI is coming for 300 million jobs. Is the future work optional?
Will generative artificial intelligence enhance the way professionals work or replace them altogether? While it is still in its early stages, generative AI is expanding automation into a much wider set of the workforce.
Goldman Sachs predicted that AI could automate the equivalent of 300 million full-time jobs between the U.S. and Europe.
“Some sort of luxury version of a utopia where we don’t work, or we choose to work whenever we feel like it, and then we rely on a very generous, universal basic income; well, that’s probably not something that’s going to happen,” said Aleksandra Przegalińska, an AI expert researching the future of work at Harvard University. “So we should push for a vision or scenario where humans are working and they are enhanced by artificial intelligence.”
By 2030, 30% of hours worked in the U.S. today could be automated, according to a recent report by McKinsey Global Institute. While AI could enhance productivity among STEM, creative, and business and legal professionals, McKinsey projects it will eliminate jobs in office support, customer service and food service.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Aleksandra Przegalińska: You touched upon a very important part of decision-making; that we have to find or strike a balance where we are still the main or key decision-makers.
For instance, I am now involved in a project that we are doing at my university in Warsaw, Kozminski University, with Harvard University. And in that project, we are looking at collaborative AI, so the type of artificial intelligence that is designed for collaboration with humans. So it does not do the job instead of you, but instead works together with you.
And that’s a very different approach that I think can be very useful in many different professions. We are now testing our tool, which is also generative, on salespeople and marketers. We’ve seen an increase in productivity, but also an increase in job satisfaction.
And this is something that we would like, right? Getting rid of some routines and really focusing in our work on something that is interesting and fun for us. So I do hope that this is the trajectory that we will choose.
And I would really look very carefully at the speed and pace of automation, finding the proper use cases for it. Not everything can be done by generative AI and also not everything should be done by generative AI. So I do think that we have a bit of road mapping to do ahead of us.
Simone Del Rosario: Do you think there’s a risk of someone taking the technology a little bit too far just because the technology can go there? And then by then, it’s too late and you can’t really figure out why it’s a better case to have humans in that role over AI.
Aleksandra Przegalińska: Well, I do think that many things can go wrong if you decide to choose that pathway of full automation. Many things can go wrong because this is just a technology. It does not have any experiences, any internal states, no affections, no emotions. It’s just designed by humans and it’s prompted by humans. So you have to know how to use it in order to use it well. If you just rely on it, usually the results are not so spectacular. So in that way, I would say that those who choose full automation will probably not be very happy with their choice.