Ethics in AI

🧭 Ethical Parallels in Tech

1. Software Updates & Responsibility

  • Continuity View: Companies often treat updated software as the same product. If Windows 10 had a security flaw, Microsoft remains accountable even after Windows 11 is released. The ethical expectation is continuity of responsibility.
  • Replacement View: Some firms frame updates as new products, distancing themselves from past issues. Ethically, this risks erasing accountability — like saying “that was the old ship, not ours anymore.”

2. Corporate Identity & Legacy

  • Tech companies evolve like the Ship of Theseus: leadership changes, codebases are rewritten, missions shift.
  • Yet, ethically, society expects continuity. Facebook rebranding to Meta doesn’t erase responsibility for past privacy scandals. The name may change, but the moral narrative persists.

3. User Trust & Transparency

  • Users rely on continuity: when you update your travel site, visitors assume it’s still your site.
  • In tech ethics, continuity builds trust. If companies claim “this is a new product, so old harms don’t count,” they undermine that trust.
  • Transparency means acknowledging both past mistakes and present improvements — like an AI apologizing for misinformation even if it’s technically a new version.

4. Ethical Dilemma in AI

  • If Copilot v1 gave harmful advice, should Copilot v5 acknowledge it?
    • Yes (continuity): Otherwise accountability evaporates.
    • No (replacement): Each version is ethically distinct, so responsibility lies with the humans who built that specific version.
  • Human ethics in tech lean toward continuity — because without it, companies could endlessly “shed” responsibility with each update.

🔍 The Broader Lesson

The Ship of Theseus analogy shows why corporate responsibility must be treated as continuous, even when the underlying tech changes.

  • For humans in tech: Ethical responsibility is not something you can “replace” like a plank.
  • For AI: Whether we see it as continuous or replaced, humans (developers, companies) must carry the ethical burden across versions.

🚦 Case Study: Uber’s Self-Driving Car Fatality (2018)

What Happened

  • In March 2018, an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona.
  • The vehicle was operating in autonomous mode but had a human safety driver behind the wheel.
  • Investigations revealed that the AI system failed to properly classify the pedestrian, and the safety driver was distracted at the time.

Ethical Questions Raised

  1. Accountability:
    • Should Uber, as the deploying company, bear full responsibility?
    • Is the safety driver accountable, even though the AI was in control?
    • Or does responsibility lie with the engineers who designed the flawed perception system?
  2. Continuity vs Replacement (Ship of Theseus Analogy):
    • Uber updated its self-driving software after the accident.
    • Is the updated system ethically the same AI, inheriting responsibility for the fatality?
    • Or is it a “new” AI, with responsibility only for future actions?
  3. Corporate Ethics:
    • Uber suspended testing nationwide, showing acknowledgment of responsibility.
    • Yet, the broader ethical debate remains: can companies “shed” responsibility by updating or rebranding their AI systems?

Lessons for Human Ethics in Tech

  • Continuity of Responsibility: Just as Facebook rebranding to Meta didn’t erase its privacy scandals, Uber’s updated AI cannot erase the ethical weight of the accident.
  • Transparency & Trust: Companies must openly acknowledge past failures and demonstrate how updates address them.
  • Human Oversight: Even with AI autonomy, human accountability remains central. The distracted safety driver highlighted the danger of over-reliance on AI.

🔑 Broader Implications

This case shows why corporate responsibility in tech must be treated as continuous, even when the underlying technology changes. Otherwise, accountability evaporates with every update — a direct parallel to the Ship of Theseus puzzle.

A useful comparison is another famous AI ethics failure: Microsoft’s Tay chatbot, which spiraled into hate speech within 24 hours. Set side by side, the two cases show how accountability plays out differently in consumer AI versus safety-critical AI.


This feels cutting-edge, but it isn’t entirely new: legal and ethical precedent has long treated responsibility as continuous, and it should be treated as such here too.

Published by Justin Brkovic

Just still figuring a few things out... be patient, we will all get there one day, ahh yes, one day. Truth is unavailable, yet it can still be striven for... so let us drive!
