A Q&A with Michael Isakov, Director of Quantitative Research

October 23, 2023
Arbol Q&A Series featuring Michael Isakov

Welcome to another installment of Arbol's Expert Insights series!

This conversation touches on the work of Michael Isakov, Director of Quantitative Research. Michael graduated summa cum laude from Harvard University with a BA in math and an MA in statistics. Now at Arbol, Michael applies his formidable skill set to improving our pricing methodology, developing predictive models, and steering our product development pipeline. His projects, like the development of our proprietary hail model, have helped Arbol stay ahead of the curve in the rapidly evolving world of climate solutions.

Beyond his academic and professional pursuits, Michael is also an accomplished chess player – holding the title of national master since high school – and an avid Brazilian Jiu-Jitsu practitioner.

Q: One of the most impactful projects that you’ve worked on since joining Arbol has been improving the company’s AI underwriting, in particular developing a scalable portfolio pricer that takes a view on the joint distribution of hundreds of contracts. Can you briefly explain portfolio pricing theory and how you adapted it to the needs of insurance contracts, then tell us how Arbol’s pricing and risk-management algorithms bring value to clients and capacity providers?
Portfolio management is the science of selecting and pricing contracts based on your existing exposure to various risks. When Arbol was just starting out, it was reasonable to price each deal on a “standalone” basis, handling any large concentration of risk with hand-crafted heuristics. Now that we write hundreds of millions of dollars of risk across four continents, it’s critical to develop automated algorithms that make sure the entire portfolio isn’t becoming lopsided in a particular region, peril, or large-scale weather pattern like El Niño.

To do this, I helped design and implement a simulation algorithm that gives us a view of the joint distribution across all of our risks, meaning that it accurately captures dependence such as correlations between payouts and the probability of simultaneous extreme losses. Each new deal is then assessed based on how it influences our combined risk, and given a discount or surcharge based on how “diversifying” it is. This allows us to more confidently deliver a well-diversified portfolio, at scale, to our investors.
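
To make the idea concrete, here is a minimal sketch of how a simulation-based diversification charge can be computed. The Gaussian-copula scenarios, the tail metric, and the loading formula below are simplified assumptions chosen for exposition; they are not Arbol’s proprietary pricer.

```python
"""Illustrative sketch of a simulation-based diversification surcharge.

This is NOT Arbol's actual pricer; the copula choice, tail metric, and
loading formula are simplified assumptions for exposition.
"""
import numpy as np

rng = np.random.default_rng(0)

def simulate_payouts(n_scenarios, exposures, corr):
    """Draw joint payout scenarios with a Gaussian copula across contracts.

    exposures: (n_contracts,) limits; corr: (n_contracts, n_contracts).
    Each contract pays its limit when its latent factor exceeds a threshold,
    a crude stand-in for a real per-contract loss model.
    """
    chol = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_scenarios, len(exposures))) @ chol.T
    triggered = z > 1.0           # ~16% standalone trigger probability
    return triggered * exposures  # scenario-by-contract payout matrix

def tail_var(losses, q=0.99):
    """Tail Value-at-Risk: mean loss beyond the q-th quantile."""
    var = np.quantile(losses, q)
    return losses[losses >= var].mean()

def diversification_loading(existing, candidate, risk_charge=0.10):
    """Price loading based on the candidate's marginal tail contribution."""
    base = tail_var(existing.sum(axis=1))
    combined = tail_var(existing.sum(axis=1) + candidate)
    marginal = combined - base        # capital the deal consumes in the book
    standalone = tail_var(candidate)  # capital it would need on its own
    # Diversifying deals (marginal << standalone) earn a discount.
    return risk_charge * (marginal - standalone)

# Toy book: three existing contracts plus one highly correlated candidate.
corr = np.array([[1.0, 0.6, 0.1, 0.7],
                 [0.6, 1.0, 0.1, 0.6],
                 [0.1, 0.1, 1.0, 0.0],
                 [0.7, 0.6, 0.0, 1.0]])
payouts = simulate_payouts(100_000, np.array([10.0, 10.0, 10.0, 5.0]), corr)
print(diversification_loading(payouts[:, :3], payouts[:, 3]))
```

The key point is that the candidate contract’s loading depends on its marginal contribution to the portfolio tail, not just on its standalone risk.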

Q: You led the development of Arbol’s new hail product, which uses on-the-ground sensors and radar to estimate the frequency and distribution of hail. What makes this an exciting product and who can benefit from it?
The hail project was really interesting, because this peril is somewhat notorious in the insurance industry for being hard to analyze properly. The main reason for this is the lack of reliable data: hail is a highly local event and, to date, the main official dataset is a compilation of on-the-ground observations by people who are in many cases not trained meteorologists. This leads to all sorts of data quality issues, from under-estimation of hail in less populated areas to incorrectly reported hail sizes. Traditional insurers have dealt with this problem by increasing deductibles for hail or withdrawing from the space completely, leaving a big insurance gap for Arbol to fill if we could figure out how to price the risk.

My approach was to combine different data sources (observations, sensors, and remote sensing data like radar) into a single best frequency estimate. The different datasets all have their own issues, so I had to develop multiple layers of correction to make this feasible. For large portfolios of hail contracts, it’s also important to understand whether certain areas are likely to be hit at the same time, so I built a simulation model based on the geometry of thunderstorms to capture the area of each hail event.
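
As a rough illustration of both ideas, the sketch below blends frequency estimates from several sources using simple bias corrections and inverse-variance weights, and checks which sites fall inside a single elliptical hail swath. The numbers, weights, and geometry are placeholders, not the production model.

```python
"""Illustrative blending of hail-frequency estimates and a toy swath check.

Bias factors, variances, and the elliptical footprint are placeholder
assumptions; the real model uses far more elaborate, spatially varying
corrections.
"""
import numpy as np

# Annual exceedance frequency (events/year above a size threshold) for one
# grid cell, estimated independently from each data source.
estimates = {
    "spotter_reports": {"freq": 0.8, "bias": 1.4, "variance": 0.30},  # under-reports rural hail
    "ground_sensors":  {"freq": 1.1, "bias": 1.0, "variance": 0.10},  # sparse but reliable
    "radar_proxy":     {"freq": 1.5, "bias": 0.8, "variance": 0.20},  # over-triggers on small hail
}

def blended_frequency(estimates):
    """Inverse-variance weighted mean of bias-corrected frequencies."""
    corrected = np.array([e["freq"] * e["bias"] for e in estimates.values()])
    weights = np.array([1.0 / e["variance"] for e in estimates.values()])
    return float(np.average(corrected, weights=weights))

def swath_hits(event_center, length_km, width_km, bearing_deg, sites):
    """Which sites fall inside an elliptical hail swath (flat-earth approx)."""
    theta = np.deg2rad(bearing_deg)
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    rel = (sites - event_center) @ rot.T  # rotate into swath-aligned coordinates
    return (rel[:, 0] / (length_km / 2)) ** 2 + (rel[:, 1] / (width_km / 2)) ** 2 <= 1.0

print(f"blended frequency: {blended_frequency(estimates):.2f} events/year")

sites = np.array([[0.0, 0.0], [5.0, 1.0], [30.0, 0.0]])  # km offsets from a reference point
print(swath_hits(np.array([2.0, 0.0]), length_km=20, width_km=4, bearing_deg=30, sites=sites))
```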

After careful evaluation using more recent high-quality data, we found that our approach outperformed several leading models in the industry, particularly in the estimation of rarer, large hail events. With a robust pricing methodology in hand, we have been working with several major clients in the auto and energy sectors and are also in discussions with capacity providers to launch a formal product line. The most exciting part of this is the prospect of bringing value both to clients who are currently taking on unwanted hail risk, and investors who seek a largely uncorrelated source of returns.

Q: Are there emerging technologies that you think will affect the parametric insurance industry?
The area I’m most excited about long-term is the availability of higher-quality data, which will lead to the development of more precise parametric triggers. Remote sensing data such as radar and satellite imagery has the potential to revolutionize this space and blur the line between parametric and indemnity-based insurance. For example, commercially available synthetic aperture radar (SAR) can produce satellite images with resolution finer than 10 meters. Imagine taking an image of a corn field and being able to assess exactly what fraction of the crop was damaged by a tornado. A parametric policy paying out based on this would incur almost no basis risk, while remaining fully automated and objective. Arbol aims to remain at the forefront of these developments: our data partner dClimate has invested heavily in remote sensing data and analytics tools, including its purchase of OasisHub and partnership with CYCLOPS.
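
As a toy example of what such a trigger could look like, the payout function below scales linearly between an attachment and an exhaustion point on a SAR-derived damage fraction. The structure and numbers are purely illustrative, not the terms of any real Arbol contract.

```python
def damage_fraction_payout(damaged_fraction, limit, attachment=0.05, exhaustion=0.60):
    """Toy parametric payout: zero below the attachment damage fraction,
    linear up to the exhaustion point, then capped at the limit.
    (Illustrative structure only.)"""
    scaled = (damaged_fraction - attachment) / (exhaustion - attachment)
    return limit * min(max(scaled, 0.0), 1.0)

# 30% of the field damaged pays roughly 45% of the limit under these terms.
print(damage_fraction_payout(0.30, limit=1_000_000))
```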

Better data will also require bigger, more complex models, doing everything from parsing satellite images to understanding the fine-scale properties of hyperlocal phenomena like wind. So an improved product will go hand-in-hand with challenging data problems, which is what makes this space so exciting for quant researchers like me!

Q: I know you’re interested in the use of AI for automation. Could you explain what this entails and what it means for Arbol?
Our thesis at Arbol is to introduce automation at every layer of the firm, freeing people to focus their time on the tasks where they genuinely add value. We’ve built a sophisticated set of pricing bots, automated diagnostics, and structuring tools to allow the pricing, insurance, and sales teams to operate as efficiently as possible.

Structuring and product development are two areas where my team has helped to greatly improve efficiency. For example, we found that an arduous component of our hurricane deals was structuring the parametric policy to match historical losses across a portfolio of tens of thousands of locations. To address this matter, I designed a Bayesian optimization algorithm that automatically suggests a range of good policies, which has saved the insurance teams dozens of hours of trial-and-error.
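
As a sketch of the general approach (not the production algorithm), one could use an off-the-shelf Bayesian optimizer such as scikit-optimize’s gp_minimize to tune a policy’s trigger and slope so that parametric payouts track a client’s historical losses. The loss history, objective, and two-parameter structure below are hypothetical.

```python
"""Sketch of Bayesian optimization for fitting parametric terms to history.

Uses scikit-optimize's gp_minimize as a stand-in; the data and the
two-parameter payout structure are illustrative assumptions.
"""
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(1)

# Hypothetical history: peak wind speed (m/s) at a site and the client's
# actual loss (as a fraction of insured value) in each past storm.
peak_wind = rng.uniform(20, 70, size=40)
actual_loss = np.clip((peak_wind - 33) / 40, 0, 1) + rng.normal(0, 0.05, 40)

def payout(wind, trigger, slope):
    """Simple wind-triggered payout ratio: zero below trigger, linear above, capped at 1."""
    return np.clip((wind - trigger) * slope, 0.0, 1.0)

def mismatch(params):
    """Squared error between parametric payouts and historical losses."""
    trigger, slope = params
    return float(np.mean((payout(peak_wind, trigger, slope) - actual_loss) ** 2))

result = gp_minimize(
    mismatch,
    [Real(25.0, 50.0, name="trigger"), Real(0.005, 0.1, name="slope")],
    n_calls=40,
    random_state=0,
)
print("suggested terms:", result.x, "mean sq. mismatch:", round(result.fun, 4))
```

In practice the search runs over many locations and policy parameters at once, which is exactly where an automated optimizer saves the trial-and-error hours mentioned above.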

On the product development side, my team has been building tools to understand where to focus our marketing efforts, by finding patterns in hundreds of combinations of weather variables and commodities. In both of these examples, we not only supercharged efficiency internally, but also improved the client experience – from reducing turn-around times on complicated deals, to developing more tailored products for specific regions.
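
In spirit, the pattern-finding step can be as simple as ranking weather-variable and commodity pairs by the strength of their historical relationship. The sketch below uses random placeholder series purely to show the scan; the variable names and data are hypothetical.

```python
"""Toy scan for weather-variable / commodity relationships.

The series are random placeholders; the point is the ranking loop,
not any real signal.
"""
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
years = pd.RangeIndex(2000, 2023)

weather = pd.DataFrame(
    {v: rng.normal(size=len(years)) for v in
     ["cumulative_rainfall", "growing_degree_days", "max_dry_spell"]},
    index=years,
)
commodities = pd.DataFrame(
    {c: rng.normal(size=len(years)) for c in ["corn_yield", "soy_yield", "nat_gas_demand"]},
    index=years,
)

# Correlate every weather index with every commodity series over the period.
pairs = [
    (w, c, weather[w].corr(commodities[c]))
    for w in weather.columns
    for c in commodities.columns
]

# Rank pairs by absolute correlation to shortlist candidate products.
for w, c, r in sorted(pairs, key=lambda p: -abs(p[2]))[:3]:
    print(f"{w:>22} vs {c:<15} corr = {r:+.2f}")
```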

Q: In your experience, how has the modeling of catastrophic events like tropical cyclones changed in recent years?
The insurance industry is re-thinking tropical storm coverage after catastrophic losses from Hurricane Ian and other storms. Just as in non-extreme weather modeling, there have been breakthroughs in ML for predicting hurricane tracks. My team is currently developing a comprehensive risk assessment model that takes into account the frequency and shape of hurricanes to more accurately understand this risk.

Q: Could you speak to the pros and cons of using physical models and deep-learning models in weather prediction and insurance pricing?
Global climate models have been the traditional workhorse of weather forecasting, solving an enormous system of differential equations on a global grid. Improving these systems requires scaling up the computation to a finer resolution, and these models already run on some of the world’s biggest supercomputers. Deep learning has the potential to yield faster models that improve with the amount of data we have, rather than just with computational resources. In the past year, there have been a number of breakthroughs in using DL for short-term forecasting, with considerable success.

That said, in practice a physics-based model offers quite a few advantages. For example, its underlying physics should hardly change even as the climate does, which potentially makes it more future-proof. Further, extrapolating extremes beyond observations seen in the data is a big pain-point for many deep learning models, but this is exactly the regime we care about in insurance!

At Arbol we’ve done a lot of work to develop AI models that improve on these fronts, synthesizing key ideas from published research and modifying them based on extensive testing. For example, we combine forecasts from physical models with more statistical approaches in our pricing, yielding a “best of both worlds” view of our risk.
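
One simple way to combine a physics-based forecast with a statistical one is to weight each by its historical skill. The sketch below uses inverse mean-squared-error weights on toy hindcast data; it is only meant to illustrate the “best of both worlds” idea, not Arbol’s actual blending.

```python
"""Minimal sketch of blending a physics-based forecast with a statistical one.

The inverse-error weighting and the toy hindcast data are illustrative
assumptions, not a description of Arbol's production models.
"""
import numpy as np

rng = np.random.default_rng(3)

# Hindcast period: what each approach predicted vs. what actually happened.
observed = rng.normal(25.0, 3.0, size=200)               # e.g., seasonal mean temperature
physical = observed + rng.normal(0.5, 2.0, size=200)      # dynamical model: small bias, lower noise
statistical = observed + rng.normal(0.0, 2.8, size=200)   # statistical model: unbiased, noisier

def inverse_mse_weights(*forecast_errors):
    """Weight each forecast by the inverse of its hindcast mean squared error."""
    inv = np.array([1.0 / np.mean(e ** 2) for e in forecast_errors])
    return inv / inv.sum()

w_phys, w_stat = inverse_mse_weights(physical - observed, statistical - observed)

# Blend new forecasts, removing the physical model's estimated bias first.
new_physical, new_statistical = 27.1, 25.4
bias = np.mean(physical - observed)
blended = w_phys * (new_physical - bias) + w_stat * new_statistical
print(f"weights: physical={w_phys:.2f}, statistical={w_stat:.2f}; blended forecast={blended:.1f}")
```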

To stay in the loop with all of the exciting initiatives underway at Arbol, we invite you to follow us on LinkedIn and X.
