Demystifying Catastrophic Modeling for Cyber


Catastrophe models and their uses remain the subjects of civilized discourse in most parts of the (re)insurance world, but not in cyber, where I hang my hat. Here, it continues to be a battlefield where every dollar of additional capacity is monitored by an executive gendarmerie of property catastrophe underwriters turned reinsurance CEOs.

Alas, it is unsurprising that the waters are so muddied by cyber-cat-to-nat-cat comparisons: a desire to force the square peg of cyber into the round hole of NatCat. I’ll concede that the modeling of nascent lines like cyber is entitled to some degree of poetic license; if the theories and ideas are at first somewhat rough or exaggerated, no matter: polish and perspective can come in due course. Such has been the case for cyber, and it is probably why the comparisons have lingered past their due dates.

The roughness can be found almost entirely within the NatCat analogy. Two truths form the basis of the problem. The foundational truth is that natural catastrophe insurance products constitute the most mature market for the trade of catastrophic risk. The subsequent truth is that the mechanisms for trading it, namely reinsurance, insurance-linked securities (ILS), and the modeling methodologies themselves, have been constructed around natural catastrophe risk.

In my job as Chief Actuary at Cowbell, I spend most of my time polishing the cyber view of risk and decoupling it from natural catastrophe methodologies and perspectives.

Poking holes in the natural catastrophe parallel

To understand a cyber event is to understand the cyber ecosystem. Simply put, to model a cyber event is to model the system’s boundary conditions of the moment. This is because the most significant characteristic of the cyber ecosystem is its malleability and overall flux; the only constant is change, and past behavior is not necessarily predictive of the future. This is a core distinction from NatCat, where past behavior can be somewhat predictive of future events.

That is not to say that cyber is completely random; in many ways it is more modellable (and more predictable) than NatCat. The interconnections of the system are digitally housed and readily mineable, and the events that proliferate through them are therefore modellable. With the correct approach, cyber models are in many ways more predictive than prevailing NatCat ones, particularly in the case of earthquake and wildfire, which, despite the number of historical events to cling to, remain largely unpredictable and difficult to model and underwrite.
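To make that idea concrete, here is a minimal, hypothetical sketch of what "mining the interconnections" and propagating an event through them can look like. The dependency graph, service names, and impact probability below are invented for illustration only; a production model would derive them from scan data, vendor disclosures, and claims experience.

```python
import random

# Hypothetical, simplified dependency graph: each firm maps to the shared
# services (cloud, mail, managed service providers) it relies on. In practice
# this graph would be mined from external scans, DNS records, and disclosures.
dependencies = {
    "firm_a": {"cloud_1", "mail_1"},
    "firm_b": {"cloud_1", "msp_1"},
    "firm_c": {"cloud_2", "msp_1"},
    "firm_d": {"cloud_2", "mail_1"},
}

def simulate_event(seed_service, prob_impact=0.6, rng=random):
    """Return the set of firms impacted when a single shared service fails."""
    return {
        firm for firm, deps in dependencies.items()
        if seed_service in deps and rng.random() < prob_impact
    }

def average_footprint(n_sims=10_000, seed=42):
    """Monte Carlo estimate of how many firms a random service outage touches."""
    rng = random.Random(seed)
    services = sorted({s for deps in dependencies.values() for s in deps})
    counts = [len(simulate_event(rng.choice(services), rng=rng)) for _ in range(n_sims)]
    return sum(counts) / n_sims

if __name__ == "__main__":
    print(f"Average firms impacted per simulated outage: {average_footprint():.2f}")
```

The point of the sketch is not the numbers but the structure: because the dependency graph is observable today, the footprint of an event can be estimated today, rather than inferred from decades of loss history.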

Challenges modeling catastrophic cyber risk

Short of a cyber catastrophe, stakeholders continue to see cyber models as hypothetical or unvalidated. That reasoning is flawed on a few counts. Firstly, events with catastrophic potential do occur, and somewhat frequently, but because they are remedied quickly and losses are contained, the long-term impacts are limited. Secondly, the sequence of events that led to NatCat model acceptance cannot be directly translated to cyber.

The historical script plays out as follows: a catastrophic event depletes market capital, testing the validity of the models and the underwriting shops. Subsequent renewals shepherd in a flotilla of Bermuda newcos armed with the now-validated model outputs and a highly attractive rating environment. Such was the case following the 1992 storm season and Hurricane Andrew.

This is one of the reasons why the 1993 underwriting year was such a watershed moment in catastrophe reinsurance underwriting. The model validation that followed reached farther than the hurricane models alone. A single validation of hurricane modeling proliferated into the rest of the NatCat modeling space. Even today, despite the limited accuracy of wildfire and earthquake models, their market segments benefit from posturing as bedfellows to the hurricane modeling acumen of the 90s. They also benefit from the flawed logic that the mere existence of a catastrophe-scale event inevitably improves model accuracy, and that the lack of one erodes it.

The same rationale unreasonably punishes cyber models due to the lack of such an event. If the requirement for model validation is the existence of one or more market-rattling events, the cyber peril will always be disadvantaged, no matter the model accuracy.

Critical need to build catastrophic cyber risk models now

Despite the challenges of building models for catastrophic cyber events, it is critical to start now to avoid potentially devastating consequences. If we do not, the market will underwrite to a very narrow scope of the aggregate cyber risk. At worst, this leaves a significant portion of cyber risk uninsured and policyholders highly vulnerable; at best, insurers may not have an accurate view of their exposure as the peril evolves.

The net effect of both scenarios is that financial hardships risk being pushed back onto the policyholder, thereby eroding the value of the insurance product and overall market.

The path forward

Understanding the aggregate ecosystem of cyber is paramount to underwriting individual risks because of the peril’s interconnected nature. Underwriting a single risk is effectively underwriting the security of its interactions with the entire ecosystem of cyber risks.

To the extent that we can understand, model, and manage the systemic risk, the peril will always be insurable. This is where cyber modeling is invaluable to the overall market, for single risks and portfolios alike. It is worth noting that the primary cyber model vendors have begun to converge as they reach maturity. This is notable because the models differ significantly in their approaches. To the extent that they have inched closer, the understanding of cyber tail risk has improved and continues to improve, independent of NatCat approaches and NatCat-type history.
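As a rough illustration of the kind of tail metric on which such vendor comparisons rest, the sketch below simulates annual cyber loss years under two hypothetical parameterisations and reads off the 1-in-200 aggregate annual loss. The frequencies and severities are invented for illustration and do not reflect any vendor’s actual assumptions.

```python
import numpy as np

def simulate_annual_losses(freq, sev_mu, sev_sigma, years=50_000, seed=0):
    """Simulate annual aggregate losses: Poisson event counts, lognormal severities."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq, size=years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])

def return_period_loss(annual_losses, return_period=200):
    """Annual aggregate loss exceeded with probability 1/return_period (an AEP-style metric)."""
    return float(np.quantile(annual_losses, 1.0 - 1.0 / return_period))

# Two hypothetical 'vendor' parameterisations of the same notional portfolio.
vendor_a = simulate_annual_losses(freq=0.8, sev_mu=15.0, sev_sigma=1.6, seed=1)
vendor_b = simulate_annual_losses(freq=1.1, sev_mu=14.7, sev_sigma=1.5, seed=2)

for name, losses in [("vendor_a", vendor_a), ("vendor_b", vendor_b)]:
    print(f"{name}: 1-in-200 annual loss ~ {return_period_loss(losses):,.0f}")
```

The closer such tail outputs from genuinely different approaches sit to one another, the more confidence the market can take in a shared view of cyber tail risk.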

Dan Palardy is Chief Actuary at Cowbell, a leading provider of cyber insurance for small and medium-sized enterprises. As Chief Actuary, he heads the Actuarial and Catastrophe Modeling team. Dan has 13 years of experience spanning actuarial, underwriting, and various other insurance analytics roles. Throughout his career, he has held actuarial roles for several leading insurers and reinsurers in the United States, Bermuda, and Canada. He was a finalist for the Program Manager Rising Star of the Year 2023 and is a frequent industry speaker and commentator in the areas of Cyber Insurance and Cyber Actuarial Models. He is an Associate of the Casualty Actuarial Society.