FVAI Vol #22



The AI Is Strong With You
So here’s the deal: Fortnite teamed up with Star Wars for a big in-game event, and it’s stirred up some serious drama. Why? Because they used AI to recreate Darth Vader’s voice—James Earl Jones’ legendary voice—without running it by the actors’ union, SAG-AFTRA. And now there’s a formal complaint.
Now, here’s where it gets really sticky.
Jones had actually agreed to preserve his voice through AI, working with Lucasfilm to ensure Vader could live on.
His family was on board too. But this Fortnite rollout seems to have skipped some crucial conversations—especially with the union. That’s where the ethical rubber meets the road.
And then it got worse. Players found ways to manipulate the AI-Vader into saying stuff he absolutely shouldn’t—profanity, racial slurs, the whole nine yards. Through voice chat or code tricks, people twisted this powerful tech into something… a lot less noble.
It’s a perfect storm: cutting-edge tech meets old-school labor rights, with a dash of internet chaos. Even when voice preservation is done with the best intentions, the execution—in an uncontrolled, interactive space—can backfire fast.
This isn’t just about one game. It’s a red flag for the entire industry. If we don’t slow down and think through how we use AI-generated voices—especially iconic ones—we risk eroding trust, value, and even the dignity of the voices we’re trying to honor.
The promise is huge—but if we don’t get the ethics right, everybody loses.


Imagine this: a paralyzed person, unable to speak for years, suddenly able to talk again—in their own voice.
That’s no longer science fiction. Researchers at UC Berkeley and UCSF have built a stunning AI-powered system that gives people with severe paralysis—like ALS patients—the ability to speak through a brain-computer interface (BCI). And we’re not talking about robotic, generic voices. We’re talking about their voice. Their tone. Their identity.
Here’s how it works: high-tech sensors (some placed on the brain, others on the face) pick up signals from the brain’s speech centers. Then, AI kicks in—decoding those signals in real time and transforming them into spoken words. Previously, there was a clunky 8-second delay. Now? Just 80 milliseconds. Basically instant.
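If you want a feel for that pipeline, here's a toy sketch in Python. To be clear: every class and function name below is invented for illustration; this is not the UC Berkeley/UCSF system, just the shape of a streaming decode loop, assuming roughly 80-millisecond chunks.

```python
import time
import random

CHUNK_SECONDS = 0.08  # ~80 ms windows: decode while the person "speaks", not afterward


class FakeSensor:
    """Stand-in for the brain/face sensors (illustration only)."""
    def __init__(self, n_chunks=5):
        self.remaining = n_chunks

    def is_active(self):
        return self.remaining > 0

    def read(self, seconds):
        self.remaining -= 1
        time.sleep(seconds)                           # pretend to record for ~80 ms
        return [random.random() for _ in range(64)]   # fake neural features


class FakeDecoder:
    """Stand-in for the AI model that maps neural signals to attempted words."""
    VOCAB = ["hello", "I", "am", "speaking", "again"]

    def decode(self, signals):
        return self.VOCAB.pop(0) if self.VOCAB else ""


class FakeSynthesizer:
    """Stand-in for a voice model built from the speaker's pre-injury recordings."""
    def speak(self, text, voice):
        return f"<{voice} audio: '{text}'>"


def stream_speech(sensor, decoder, synthesizer):
    """Toy streaming loop: neural signals in, spoken words out, chunk by chunk."""
    while sensor.is_active():
        signals = sensor.read(seconds=CHUNK_SECONDS)   # 1. grab a short window of brain activity
        word = decoder.decode(signals)                 # 2. decode it on the spot
        audio = synthesizer.speak(word, voice="their_own_voice")  # 3. render it in their voice
        print(audio)                                   # 4. play it back immediately


stream_speech(FakeSensor(), FakeDecoder(), FakeSynthesizer())
```

The real system runs a trained neural decoder instead of these stand-ins, but the basic shape of the loop (read a short window, decode it, speak it, repeat) is what kills the old 8-second wait.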
But the real magic? This system doesn’t just make speech possible. It brings personality back. Using recordings from before the injury, it recreates the speaker’s natural voice—adding warmth, expression, even emotional nuance.
It’s a game-changing blend of neuroscience and AI. And while there’s more work ahead—like adding tone, pitch, and emotional expressiveness—we’re moving closer to a world where lost voices are restored. Not just communication, but connection.
This isn’t just about tech. It’s about being human.

AI just pushed weather forecasting beyond what we thought possible.
DeepMind's GenCast extends reliable predictions from 10 to 15 days.
Think about that for a second.
We'd been hitting a wall in weather forecasting for decades. The chaos of atmospheric systems made longer-range predictions impossible.
Until now.
GenCast doesn't just predict weather. It calculates probabilities.
Traditional models: a single yes/no forecast.
GenCast: probability-based predictions that adapt.
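To make that difference concrete, here's a tiny invented example. This is not GenCast's code or data (GenCast builds its ensembles with a generative AI model); the numbers are made up purely to show the kind of question a probabilistic forecast can answer and a single forecast can't.

```python
import random

random.seed(42)

# A deterministic model hands you one answer for day 12:
single_forecast_mm = 3.1
print(f"Deterministic: {single_forecast_mm} mm of rain expected on day 12")

# An ensemble model samples many plausible futures instead (values invented here):
ensemble_mm = [max(0.0, random.gauss(3.1, 2.5)) for _ in range(50)]

# Now you can ask questions a single number can't answer:
prob_heavy_rain = sum(x > 5.0 for x in ensemble_mm) / len(ensemble_mm)
print(f"Probabilistic: {prob_heavy_rain:.0%} chance of more than 5 mm of rain on day 12")
```

That shift, from one answer to a distribution of answers, is what lets planners ask "how likely is the bad outcome?" two weeks out.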
The real power? It needs LESS computing power while delivering FASTER results.
Built on 40 years of verified meteorological data. Not internet noise. Pure scientific truth.
Applications are massive:
• Better hurricane tracking
• Wind power prediction
• Climate adaptation planning
• Disaster preparedness
This isn't just a weather tool. It's a template for merging AI with scientific principles.
The future of forecasting just arrived.