The French Philosopher

Decide your impact on the world

Stephanie Lehuger

Join The French Philosopher and consider me your philosophy BFF! 🤗 If you’re wondering about the meaning of life, your impact on the world, or who you truly are, you’re in the right place. Picture us chatting over a latte, exploring life’s big questions with wisdom from ancient and modern philosophers. I’m a Brooklyn-based French philosopher, speaker, and author, and as an expert in AI ethics for the European Commission, I also dive into ethics and critical thinking around AI and tech.

58. Does AI make better decisions than humans?

Imagine a machine deciding who gets life-saving surgery in a split-second, armed with endless data and razor-sharp logic. No hesitation, no bias, no emotional baggage. Sounds like a dream... or does it? What do you think: does AI make better decisions than humans?

Well, it’s true that there are no existential crises or coffee breaks for our robot friends.

They’re brilliant at optimizing outcomes by crunching numbers, without getting tired, distracted, or irrational. Some chatbots even give good moral advice (better than some philosophers, one could say? 😅). Have a look if you’re curious: petersinger.ai. But here’s the kicker: machines don’t actually “understand” morality. Why is that? Because they don’t feel empathy or anguish when making tough calls. They don’t lose sleep over the weight of their decisions. They don’t consider the messy, lived experiences of the people affected by them.

Take existentialists like Simone de Beauvoir (yes, we’re name-dropping).

They’d argue that morality is rooted in freedom and authenticity: every decision we make defines who we are and carries the weight of our responsibility to others. Machines? They don’t have freedom; they’re programmed. They don’t have authenticity; they’re mimicking patterns. They’re not moral agents; they’re tools.

But here’s where things get spicy.

AI can actually push us to think deeper about our own ethical frameworks. By exposing our biases and presenting alternative perspectives, it can sharpen our reasoning and force us to confront uncomfortable truths. For instance, Amazon’s AI recruiting tool was a fiasco a decade ago, but it made everyone realize how deep recruiting biases run, and that awareness was a real win: it showed us exactly what we had to fight against.

So maybe the question isn’t whether AI is “better” at morality but whether it challenges us to be better moral thinkers ourselves?

Should we trust AI with big decisions?

Maybe as collaborators, not captains of the ship. Machines might help us see more clearly, but the messy beauty of morality, its empathy, its anguish, its humanity, is something only we can bring to the table. Or at least that’s my take… what’s yours?

57. What Are Animals Saying About Us? Ask AI

Using AI to translate animal communication isn’t just a tech challenge. It’s a philosophical one. AI is making serious moves in decoding animal sounds (listen to this podcast episode from The Economist), but here’s the kicker: even if we crack their “language,” would it make sense to us? I’m really not sure we can ever truly understand what animals are saying when their entire experience of the world is so different from ours! How can we figure out what it’s like to be a bat “seeing” with sound? Or a bird feeling Earth’s magnetic field? Or dolphins living in a 3D underwater soundscape? And don’t get me started on how we struggle to swat a fly because it sees us in slow motion. 🤦‍♀️

If AI manages to translate what animals say, it might force us to rethink language, meaning, and our place in nature. And what would that say about how we treat them? Imagine if we discovered they’re saying profound things like, “Hey, don’t overfish my home”?

I'm curious... if you could chat with a dolphin, what’s the first thing you’d ask? Better yet, what do you think they’d roast us for? 😅

56. Schrödinger’s Cat Just Got An Upgrade!

Word on the street is Microsoft’s latest quantum breakthrough (see Nature’s article link below) might finally let us crack open the box and see what’s really going on. But here’s the kicker: quantum computing isn’t just about faster tech or breaking encryption. It’s a philosophical mic drop. What if reality isn’t just yes or no? What if it’s yes AND no… or maybe even something else entirely?

See, quantum computers don’t follow the same rules as our everyday classical computers. They thrive in the chaos, living in that weird, paradoxical space where things can be two things at once. It’s like the universe is hinting that we’ve been thinking way too small all along. Human-level thinking will probably always be too small to understand it all. That doesn’t stop us from craving more anyway!
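
For the curious, physicists have a compact way of writing that “yes AND no” state. Here’s a minimal sketch in standard Dirac notation (nothing Microsoft-specific here, just the textbook picture of a qubit):

```latex
% A qubit holds 0 and 1 at once, weighted by amplitudes alpha and beta:
\[ \lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
   \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1 \]
% Measuring collapses the state: you read 0 with probability |alpha|^2
% and 1 with probability |beta|^2. That's the box-opening moment.
```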

While engineers are out here solving problems we didn’t even think had solutions, philosophers had better buckle up for a world where zero and one can coexist. Where truth isn’t fixed but fluid? Where the impossible suddenly feels like it’s just around the corner? Strap in: this isn’t just science anymore. It’s a whole new way of seeing reality!

How will our world react to a perspective this hard to grasp? When we see human beings kill each other for failing to see the world the same way, I’m not overly optimistic about humankind’s capacity to fully apprehend quantum physics. But maybe it’s fine not to understand how quantum computing works if we can benefit from it. Or is it?

55. Where Does My Freedom End and Yours Begin?

Freedom sounds simple—do what you want, right?

But John Stuart Mill had a different take (he’s a 19th-century philosopher who spent a lot of time thinking about this, so pretty legit). He believed that liberty comes with one big condition: you’re free to do whatever you like, as long as you don’t harm others.

Sounds fair enough, doesn’t it?

But when you really think about it, this idea of “don’t harm others” gets complicated fast. For Mill, freedom wasn’t just about doing your own thing—it was about understanding how your actions affect the people around you. Liberty, he thought, isn’t something we keep to ourselves; it’s something we share.

Now, let’s bring this into today’s world

Think about all the big issues on the global stage—peace talks, climate change policies, trade negotiations. These are all about the same question Mill asked: where does my freedom end and yours begin? Can one country pursue its own goals without stepping on another’s toes?

Take peace talks as an example

One nation might feel justified in defending its borders or expanding its influence, while another sees those actions as threats to its sovereignty or safety. Mill would argue that true freedom doesn’t mean ignoring these tensions; it means recognizing how actions ripple outward and finding ways to address those ripples responsibly. His “harm principle” isn’t just a moral idea, it’s a practical guide for resolving conflicts and building trust.

And then there are climate agreements

One country might say, “We need more factories to grow our economy,” while another says, “Your growth is destroying our environment.” Again, Mill would remind us that freedom isn’t just about personal or national gain—it’s about understanding how interconnected we all are and making choices that respect those connections.

And what about compromise?

Mill believed that freedom works best when it’s built on conversation. The best solutions don’t come from one side winning and the other losing—they come from honest dialogue where both sides figure out how to move forward together. It’s not easy, but it’s how progress happens.

Are we living up to Mill’s vision of freedom today?

Are we using our liberties to build bridges or just digging deeper trenches? Every negotiation—whether it’s between nations or neighbors—is a chance to show whether we can balance our rights with our responsibilities to each other.

Mill would remind us that freedom isn’t just about doing whatever we want—it’s about finding ways to live together without harming each other. That’s where real liberty begins.

What do you think? I’d love to hear your thoughts on how Mill’s ideas apply today.

54. Ethical AI’s Dirty Secret

Every “trustworthy” AI system quietly betrays at least one sacred principle. Ethical AI forces brutal trade-offs: Prioritizing any one aspect among fairness, accuracy, and transparency compromises the others. It's a messy game of Jenga: pull one block (like fairness), and accuracy wobbles; stabilize transparency, and performance tumbles. But why can’t you be fair, accurate, AND transparent? And is there a solution?

The Trilemma in Action

Imagine you’re trying to create an ethical hiring algorithm. Prioritize diversity and you might ghost the best candidates. Obsess over qualifications and historical biases sneak in like uninvited guests.

Same with chatbots. Force explanations and they’ll robot-splain every comma. Let them “think” freely? You’ll get confident lies about Elvis running a B&B on a Mars colony.
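
To see how a single dial forces that choice, here’s a toy sketch in Python. Everything in it (the candidates, the scores, the diversity bonus, the fairness_weight knob) is invented for illustration, not how any real hiring system works:

```python
# Toy illustration of the fairness/accuracy trade-off in a hiring scorer.
# All candidates, scores, and the diversity bonus are hypothetical.

candidates = [
    {"name": "A", "qualification": 0.92, "group": "majority"},
    {"name": "B", "qualification": 0.88, "group": "minority"},
    {"name": "C", "qualification": 0.81, "group": "minority"},
]

def score(candidate, fairness_weight):
    # fairness_weight = 0.0: rank purely on qualification scores, so any
    # historical bias baked into those scores sneaks straight in.
    # fairness_weight = 1.0: boost the underrepresented group, at the cost
    # of sometimes "ghosting" the single most qualified candidate.
    bonus = 0.15 if candidate["group"] == "minority" else 0.0
    return candidate["qualification"] + fairness_weight * bonus

for w in (0.0, 1.0):
    ranked = sorted(candidates, key=lambda c: score(c, w), reverse=True)
    print(f"fairness_weight={w}: ranking = {[c['name'] for c in ranked]}")
```

Run it and the top pick flips from A (most qualified) to B (underrepresented group) as the weight moves. That’s the Jenga tower in two dozen lines: no setting maximizes both at once.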

Why Regulators Won’t Save Us

Should we set up laws that dictate universal error thresholds or fairness metrics? Regulators mostly steer clear of rigid, one-size-fits-all rules. Smart move: they acknowledge AI’s messy reality, where a 3% error margin might be catastrophic for autonomous surgery bots but trivial for movie recommendation engines.

The Path Forward?

Some companies now use “ethical debt” trackers, logging trade-offs as rigorously as technical debt. They document their compromises openly, like a chef publishing rejected recipe variations alongside their final dish.
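
As far as I know there’s no standard format for those trackers, so here’s a minimal sketch of what one logged compromise could look like; every field name below is my invention:

```python
# Hypothetical sketch of an "ethical debt" log, modeled on technical debt.
from dataclasses import dataclass
from datetime import date

@dataclass
class EthicalDebtEntry:
    decision: str              # the trade-off that was made
    principle_sacrificed: str  # e.g. "transparency"
    principle_served: str      # e.g. "accuracy"
    rationale: str             # why the team accepted the compromise
    revisit_by: date           # compromises should be revisable, not permanent

ethical_debt_log = [
    EthicalDebtEntry(
        decision="Ship the opaque ranking model",
        principle_sacrificed="transparency",
        principle_served="accuracy",
        rationale="The interpretable model scored six points worse in testing",
        revisit_by=date(2026, 1, 1),
    ),
]
```

The point isn’t the exact fields; it’s that the compromise gets written down where anyone can audit it, like the chef’s rejected recipes.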

Truth is: the real AI dilemma is that no AI system maximizes fairness, accuracy, and transparency simultaneously. So, what could we imagine? Letting users pick their poison with trade-off menus: “Click here for maximum fairness (slower, dumber AI)” or “Turbo mode (minor discrimination included)”? Or how about launching bias bounties: pay hackers to hunt unfairness and turn ethics into an extreme sport? Obviously, it’s complicated.

The Bullet-Proof System

Sorry, there’s no bullet-proof system since value conflicts will always demand context-specific sacrifices. After all, ethics isn’t about avoiding hard choices, it’s about admitting we’re all balancing on a tightrope—and inviting everyone to see the safety net we’ve woven below.

Should We Hold Machines to Higher Standards Than Humans?

Trustworthy AI isn’t achieved through perfect systems, but through processes that make our compromises legible, contestable, and revisable. After all, humans aren’t fair, accurate, and transparent either.