The Trolley Problem and Artificial Intelligence
The Trolley Problem in the Age of Decision-Making Software
From philosophy to software design
The so-called "trolley problem" is a philosophical thought experiment designed to explore the boundaries of morality in human decision-making. The classic version is as simple as it is brutal: a runaway trolley is heading toward five people tied to the tracks. An observer can divert it, saving the five but causing the death of a single person on another track. What is the right choice? For decades, this remained a theoretical exercise for ethicists and moral philosophers. Today, however, it has found practical application in fields where software makes autonomous decisions—such as automotive systems, healthcare, and algorithmic finance.
Software as a moral agent
When a software system is required to make decisions that affect human lives, the discussion goes beyond performant code or scalable architectures. The software becomes a moral agent. Think of autonomous driving: a vehicle must decide in milliseconds whether to brake suddenly, endangering passengers, or swerve and potentially harm a pedestrian. There is no neutral solution. Every decision inherently carries a set of values, a hierarchy of priorities, an embedded ethical worldview.
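To make this concrete, consider a deliberately simplified sketch. Every name, weight, and risk estimate below is hypothetical and not taken from any real system; the point is only that whichever numbers we pick, the "best" action is best relative to those numbers, and picking them is an ethical act.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    passenger_risk: float   # estimated probability of harm to passengers
    pedestrian_risk: float  # estimated probability of harm to pedestrians

# These weights ARE the ethical stance, expressed as numbers.
PASSENGER_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0

def cost(o: Outcome) -> float:
    return PASSENGER_WEIGHT * o.passenger_risk + PEDESTRIAN_WEIGHT * o.pedestrian_risk

def choose(options: list[Outcome]) -> Outcome:
    # "Best" is only best relative to the weights chosen above.
    return min(options, key=cost)

print(choose([
    Outcome("brake hard", passenger_risk=0.30, pedestrian_risk=0.10),
    Outcome("swerve", passenger_risk=0.05, pedestrian_risk=0.40),
]).action)

With equal weights the sketch brakes; nudge either weight and the chosen action flips. There is no setting of the weights that expresses no preference at all.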
Predictive models, imperfect data, irreversible choices
Further complicating things is the very nature of artificial intelligence: decision models are trained on historical data, which come with biases, imbalances, and gaps. Despite the increasing sophistication of algorithms, these systems remain statistical tools. They optimize an objective function, but do not understand moral context. If a dataset systematically disadvantages a group of people, or if the cost function ignores ethical dimensions, the result may be technically correct yet socially distorted. And yet, the software will make a decision—often in a timeframe that precludes human intervention.
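A toy example makes this tangible. In the sketch below, where the data, the groups, and the threshold rule are entirely invented, a decision threshold chosen solely to maximize accuracy on skewed historical data ends up approving two groups at very different rates, even though both repay at the same rate.

import random

random.seed(0)

# Synthetic "historical" records: (group, score, repaid_loan).
# Group B is under-represented and has systematically lower scores,
# mimicking a biased or incomplete dataset.
data = (
    [("A", random.gauss(0.6, 0.15), True) for _ in range(800)]
    + [("A", random.gauss(0.4, 0.15), False) for _ in range(200)]
    + [("B", random.gauss(0.5, 0.15), True) for _ in range(80)]
    + [("B", random.gauss(0.3, 0.15), False) for _ in range(20)]
)

def accuracy(threshold: float) -> float:
    correct = sum((score >= threshold) == repaid for _, score, repaid in data)
    return correct / len(data)

def approval_rate(threshold: float, group: str) -> float:
    rows = [(s, r) for g, s, r in data if g == group]
    return sum(s >= threshold for s, _ in rows) / len(rows)

# Pick the threshold that maximizes overall accuracy: the only thing
# this objective "understands".
best = max((t / 100 for t in range(100)), key=accuracy)

print(f"best threshold by accuracy: {best:.2f}")
print(f"approval rate, group A: {approval_rate(best, 'A'):.0%}")
print(f"approval rate, group B: {approval_rate(best, 'B'):.0%}")

The objective function is satisfied, the metric looks good, and the disparity never appears anywhere the optimizer can see it. Making it visible means adding an explicit fairness term or constraint, which is itself a value judgment.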
Distributed responsibility and the need for transparency
One of the most critical issues is the distribution of responsibility. When an autonomous system makes a tragic decision, who is accountable? The development team that wrote the algorithm? The company that designed the system? The end user who activated it? Without a clear chain of responsibility and mechanisms for tracing decisions, we risk creating a society where consequences have no owner. That’s why the design of autonomous systems must include audit logic, decision logging, and reproducibility. Only then can responsibility be assigned and decisions genuinely audited.
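In practice this can start with something as modest as a structured, append-only decision log. The sketch below shows one possible shape; the field names and storage format are assumptions, not a standard. Each record captures the exact model version, the inputs the model actually saw, and the action taken, so a decision can later be reproduced and attributed.

import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float     # when the decision was made
    model_version: str   # the exact model artifact that produced it
    inputs: dict         # the features the model actually saw
    decision: str        # the action taken
    score: float         # the internal score behind the action
    operator: str        # who or what activated the system

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> str:
    # Append-only JSON lines; the hash lets auditors verify that an
    # entry has not been altered after the fact.
    line = json.dumps(asdict(record), sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps({"record": json.loads(line), "sha256": digest}) + "\n")
    return digest

rec = DecisionRecord(
    timestamp=time.time(),
    model_version="risk-model-1.4.2",
    inputs={"speed_kmh": 48, "obstacle": "pedestrian", "distance_m": 12},
    decision="emergency_brake",
    score=0.91,
    operator="vehicle-7F3A",
)
print("logged decision", log_decision(rec))

A log like this does not answer the question of who is responsible, but it makes the question answerable: it ties each outcome to a specific model, a specific input, and a specific operator.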
Ethics by design: a new standard of quality
In the daily practice of software developers building decision-making systems, ethical considerations must become an integral part of the development process—just like security, scalability, or maintainability. Increasingly, we speak of "Ethics by Design": an approach that includes modeling moral choices during analysis, simulating dilemma scenarios during testing, and validating ethical implications during technical reviews. In a world where software affects rights, freedoms, and safety, good design doesn’t just mean “it works”—it also means “it’s right that it works this way.”
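One concrete form this can take is treating dilemma scenarios as regression tests. The sketch below assumes a hypothetical decide() policy and invented expectations; the point is that the ethical requirements agreed during analysis are written down as executable checks and run on every build, exactly like any other acceptance criterion.

def decide(scenario: dict) -> str:
    # Placeholder for the real decision policy under test.
    if scenario["pedestrians_at_risk"] > 0 and scenario["speed_kmh"] < 60:
        return "emergency_brake"
    return "stay_in_lane"

DILEMMA_SCENARIOS = [
    # (scenario, behaviour agreed on during ethical analysis)
    ({"speed_kmh": 45, "pedestrians_at_risk": 1, "passengers": 2}, "emergency_brake"),
    ({"speed_kmh": 30, "pedestrians_at_risk": 3, "passengers": 1}, "emergency_brake"),
    ({"speed_kmh": 50, "pedestrians_at_risk": 0, "passengers": 4}, "stay_in_lane"),
]

def test_dilemma_scenarios():
    for scenario, required in DILEMMA_SCENARIOS:
        assert decide(scenario) == required, f"ethical requirement violated: {scenario}"

test_dilemma_scenarios()
print("all dilemma scenarios pass")

A failing scenario then blocks the release in the same way a failing unit test would, which is precisely what it means to treat ethics as a quality attribute rather than an afterthought.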
A task for us developers
For software houses, technical teams, and individual developers, this shift is both a challenge and an opportunity. It’s not just about complying with regulations or mitigating legal risk. It’s about taking on the cultural and technical responsibility to build a future where software is not only smart, but also fair. Our algorithms are not separate from the real world—they shape it, influence it, define it. It’s time we treat them as moral instruments, and act accordingly.
