
Elon Musk interviewed by Chris Anderson at TED2017 - The Future You, April 24-28, 2017, Vancouver, BC, Canada. Photo: Bret Hartman / TED on Flickr
Elon Musk Drops Lawsuit Against OpenAI Before Hearing
- Written by Andrea Miliani, Former Tech News Expert
Elon Musk dropped his lawsuit against OpenAI and two of its co-founders, Greg Brockman and current OpenAI CEO Sam Altman, on Tuesday. According to CNBC, a court filing obtained by the publication states that the case, filed in California state court in February, was dismissed without prejudice, meaning Musk can refile it in the future.
Musk, himself an OpenAI co-founder who stepped down from its board in 2018, accused the company of breaching its original mission by turning itself into a for-profit entity. According to Musk, OpenAI was initially created “for the benefit of humanity.”
He withdrew the case just a day before a judge was scheduled to hear OpenAI’s request to dismiss it on Wednesday. The reason for the withdrawal isn’t clear.
According to Bloomberg reporter Rachel Metz, OpenAI’s request to have the case dismissed doesn’t appear to have influenced Musk’s decision to withdraw it. “The timing of this makes it sound like it felt like a good time based on whatever discussions they were having,” she said.
According to The Verge, there were inconsistencies in Musk’s case from the beginning. “Musk is straightforwardly alleging that OpenAI breached a contract that does not exist,” wrote the publication’s Editor-in-Chief Nilay Patel. “The complaint makes reference to a ‘Founding Agreement,’ but no such Founding Agreement is attached as an exhibit.”
OpenAI also denied the existence of such an agreement.
The dismissal also comes a day after Musk publicly criticized OpenAI over its new partnership with Apple and threatened to ban Apple devices from his companies.
“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation,” wrote Musk on his social media platform X. “And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage,” he later added.
Reuters reports that OpenAI argued in a court filing that the lawsuit lacked any substantial basis and was merely an attempt to hamper OpenAI and help Musk advance his own AI ventures.

Opinion: AI Art Is Taking Over Human Art, But We Might Be Too Distracted To Notice
- Written by Andrea Miliani, Former Tech News Expert
I have a confession to make: I’ve been having a lot of fun creating AI images in my spare time, and even for the articles I write—like the one featured at the top of this piece. I’ve been showing my less AI-aware friends the power of these new, free tools over the past few weeks and it can be downright entertaining.
Even as a writer who also deals with the artists-against-AI dilemma, I sometimes get distracted from the ethical debate and overlook the, frankly, aggressive ways some tech companies train their AI tools: using people’s copyrighted creations without permission and without providing easy ways to opt out.
But when I learned about the hundreds of thousands of artists migrating from Instagram to the new Cara app after Meta announced it would be using public posts to train its new AI models, I woke up from my AI hypnosis. I went back to questioning whether there are enough ethical (and legal!) regulations to protect human creators and their original work.
I find it sad that so many artists feel the need to delete their amazing Instagram portfolios and leave their supporters and followers behind. But I also admire the way photographer and Cara app creator Jingna Zhang has taken a stand to support the artists that big tech companies like Meta don’t seem to appreciate or respect.
Last year, a judge ruled in favor of human art and denied computer scientist Stephen Thaler the right to receive copyright protection for work he created using AI. “A work of art created by artificial intelligence without any human input cannot be copyrighted under U.S. law,” states Reuters’ report.
Getty Images filed a lawsuit against Stability AI for using its content without consent to train its generative AI. The New York Times and other newspapers have recently done the same against OpenAI and Microsoft.
So there are signs that courts and lawmakers are on the side of protecting human creation, and that content creators and providers are standing up for their work.
The recent massive reaction from Instagram creatives raises awareness and allows us to take a step back and fully process what these companies are doing: displacing human creation by taking advantage of creators’ hard work without proper compensation or acknowledgment.
It’s a stark reminder that while AI offers incredible possibilities, appropriate steps need to be taken to ensure its development respects the rights and contributions of human creators. Yet for many regular netizens, protecting ourselves this way doesn’t seem to be a top priority, or even a concern.
AI Art Just Keeps Getting Better
The debate has been widely discussed on social media and platforms like Reddit. “I hate it so much, every time I see an awesome artwork all I can think is ‘Is this real or AI?’, it sucks so bad,” said one user.
“If you like the art it does not matter at all if it was made by a human or not,” answered another.
I remember the instant I realized I had been fooled by an AI-generated image. I felt so stupid. It was last year at a fake news exhibition by the Telefonica Foundation in Madrid. While learning about the impact of AI on the news, I saw that famous and viral image of the Pope wearing a Balenciaga jacket. I remembered I had seen it a few days before on social media, assumed it was real, and carried on with my business without a second thought.
It used to be so easy, and even funny, to spot an AI-generated image in 2022 or ace tests like NBC’s on human versus robot-made images. The hints were right there: seven-fingered hands, a missing foot, weird perspectives, misplaced elements, or too-perfect traits. But it’s getting harder to see the difference, and new studies confirm this.
While some artists still believe that AI art is easy to differentiate and criticize, the truth is that a lot of people, even experts in the art industry, are no longer able to tell the difference.
In April of last year, photographer Boris Eldagsen turned down first prize at the Sony World Photography Awards, saying he applied “as a cheeky monkey” to see if the judges could spot that his photo was actually AI-generated.
This indicates not only that generative AI has created works of art that have earned prestigious awards meant for human artists, but also that it can create unique visual styles of its own.
A Dim Light At The End Of The Tunnel
I’ve eased my guilt about creating AI images by learning that certain companies are more transparent about their AI training procedures and even pay creators to use their work. Canva, the one I’ve been using, is investing $200 million in content and AI royalties for users and creators who consent to having their content used to train its generative AI.
I also find comfort in knowing that celebrities like Scarlett Johansson are publicly expressing their concerns and asking big tech giants for clarification about the recreation of their unique talents, setting an example for others.
This week, I’ve also found comfort in reading stories like Jingna Zhang’s, and I’ve been moved by the people all over the world who are supporting these much-needed initiatives meant to protect human creativity, for artists, writers, actors, and just in general, really.
It actually feels good to snap out of the fun, at least every now and then, to reflect on the current policy updates and AI training measures in the creative sphere, support those who are raising their voices, and make decisions about what we believe will be best for our future.