Alexander Trevino
CyberSecurity Ethics
Dr. Wittkower
Warfare isn’t just fought on land, at sea, or in the air anymore. The front lines now include the invisible networks that keep our world running. The cyber operations described in the Industrial Cyber article pull us into that reality. These weren’t bombs or tanks. They were lines of code designed to disrupt, disable, and destabilize. The targets were systems that power everyday life: things that feed cities, move supplies, and keep hospitals running. In theory, that might sound like a smart strategy against an enemy. In practice, it gets complicated fast. Boylan points out that in cyberwarfare it’s hard to know exactly who’s responsible, and even harder to know if the damage was truly proportional to the goal. Taddeo argues that Just War Theory alone can’t handle the messiness of this domain and that we need to think about the “infosphere” as something that also deserves protection. Utilitarianism asks one simple but hard question: when you add it all up, did this action create more good than harm? Looking at these operations through that lens, the answer seems pretty clear to me. The harm to civilians, to stability, and to trust outweighs any military gain. That’s why I think these cyber actions couldn’t be part of a just war.
Boylan’s discussion of cyberwarfare hits you with a reality check right away: the first major problem is accountability. In conventional warfare, there is usually no mystery about who pulled the trigger or launched the missile. You can trace responsibility back to a commander, a unit, or even a political decision maker. In cyberwarfare, that chain is murky at best. Attackers can route their actions through multiple countries, fake digital identities, or use other actors as unwilling intermediaries. If you cannot pin down who was behind the attack, you cannot hold them responsible. That means they face no meaningful deterrent against causing harm.
From a Utilitarian point of view, this is dangerous because the absence of accountability makes reckless harm more likely. People or states that know they cannot be caught may be willing to take greater risks, which in turn increases the potential suffering of large populations. The “harm” side of the moral balance sheet begins filling up even before you look at the specific impacts of an operation. Without accountability, you also lose the chance to learn from mistakes or set limits on future conduct, which means the same harms are likely to be repeated or even escalated.
The second major issue Boylan identifies is proportionality. In theory, the principle of proportionality says that the harm caused by a military action should not be greater than the military advantage gained. In the cyber domain, this is incredibly hard to measure because of how intertwined military and civilian systems are. The communications network an enemy uses for command and control may run on the same infrastructure as civilian emergency services. The power grid that supplies electricity to military installations is also keeping the lights on in hospitals, water treatment plants, and homes. Disrupting these systems to gain an edge on the battlefield may mean cutting off vital services for civilians who have nothing to do with the conflict.
This is where Utilitarian reasoning becomes especially useful, because it demands a full accounting of both sides of the equation. On one side, you might have a real tactical gain: slowing down enemy coordination, interrupting supply chains, or preventing the launch of an attack. On the other side, you have the potential for patients dying in unpowered hospital wards, for food supplies spoiling in dark warehouses, for cities plunged into panic and disorder. The suffering on the civilian side is concrete, widespread, and often long lasting. The military gain, while potentially important, is often short term and limited.
Boylan’s perspective aligns neatly with a Utilitarian conclusion here: if the costs to civilian life and well being are predictable, significant, and far reaching, then those costs cannot be justified by a modest or uncertain military benefit. A justifiable cyber operation, by contrast, would have to be designed with surgical precision. It would need to strike only systems that are exclusively military, or at least confine the effects of the attack to military targets. Without that kind of targeting and restraint, the operation is far more likely to create a net negative outcome, which fails the Utilitarian test.
The final point Boylan’s view brings to the table is that cyber operations, once launched, are hard to fully control. Even if the intended effect is limited, malware can spread beyond its target, exploit unknown vulnerabilities, and cause unintended collateral damage. Utilitarianism is deeply concerned with consequences, intended or not, so the risk of unpredictable escalation is another strike against the ethical defensibility of these actions. If you cannot reasonably predict and limit the harm your attack will cause, you are acting irresponsibly in Utilitarian terms.
Taddeo shifts the discussion by focusing not just on physical damage or immediate disruptions, but on something she calls the “infosphere”. This is the environment of information, communication systems, and data flows that makes modern life possible. Her point is that the infosphere is as much a part of our social infrastructure as roads, bridges, and power plants, and it is just as deserving of protection. When you damage the infosphere, you are not just interfering with an enemy’s ability to coordinate military action; you are potentially undermining the functioning of entire societies.
From a Utilitarian standpoint, that’s a serious problem because harm to the infosphere can spread like ripples in a pond, touching far more people than the initial target. If a cyberattack corrupts a hospital’s medical records, it could cause delays or errors in treatment for thousands of patients. If it shuts down banking systems, millions could lose access to their money, triggering financial panic. If it floods communication channels with misinformation, it could erode public trust and make it harder for people to know what is true or how to respond to an emergency.
These harms are not contained neatly within a conflict zone. Because the digital world is so interconnected, the damage can spill over into neutral countries, harm allies, and sometimes even backfire on the state that launched the attack. From a Utilitarian perspective, this interconnectedness multiplies the moral responsibility of anyone planning a cyber operation. You cannot just look at your intended target and ignore the ripple effects; those ripple effects are part of the ethical equation.
When weighed against possible military benefits, the harm from degrading the infosphere often proves overwhelming. Even if a cyberattack delays an enemy offensive or disrupts their logistics, the widespread loss of trust, order, and stability may leave a society weaker for years. This is especially true when attacks create what Taddeo calls “informational entropy”, where systems become so disorganized that they stop functioning effectively. In Utilitarian terms, informational entropy represents a huge amount of net harm because it undermines everything from emergency response to economic stability.
A cyber operation that meets Utilitarian standards in this context would need to be highly targeted, limited in scope, and fully reversible. That means if you disrupt something, you also have a plan to restore it quickly once your objective is achieved. You would also avoid targeting systems that civilians depend on, even if they have some military value. This kind of restraint protects the greater good by keeping harm to a minimum and ensuring that the operation produces more benefit than suffering.
The operations described in the Industrial Cyber article, however, appear to lack these safeguards. They caused disruptions that were neither contained nor easily reversible, and they risked long term damage to the very systems civilians depend on for daily life. In a Utilitarian analysis, that tips the scale toward an unjust action, even if the broader war could be considered just. The lasting harm to civilian trust, safety, and stability simply outweighs the temporary advantages gained on the battlefield.
When you take Boylan’s warnings about accountability and proportionality and combine them with Taddeo’s emphasis on the fragility of the infosphere, the Utilitarian verdict is pretty clear. These cyber operations cause more harm than good. The people who might defend them would probably argue that they were necessary to achieve a larger strategic goal. But Utilitarianism doesn’t let you ignore the costs just because the goal sounds important. It asks you to put everything on the table: every injury, every disruption, every bit of fear and instability. And when you do that here, the scales don’t balance in favor of the action.
If cyberwarfare is going to fit into the rules of a just war, it needs to be designed with much stricter safeguards. It should hit only what truly needs to be hit, shield civilians from as much harm as possible, and have a clear path to repair the damage it causes. Without that kind of precision and planning, cyber operations risk becoming blunt instruments that hurt far more people than they help. And from a Utilitarian standpoint, that’s never going to add up to justice.