Apr 14, 2022
Mohamed El Mahallawy
Co-founder & CTO
A core value prop of Shepherd is our ability to serve our brokers with speed and efficiency. Our technology needs to help underwriters ingest submissions, price them quickly, and turn around quotes for brokers. To accomplish this, we made an early decision as a company to build our own policy admin system, integrated with Shepherd’s proprietary pricing models. We wanted to share a bit more about how we build and version our models.
In the insurance industry, it’s fairly common for carriers to have risk models built in Excel workbooks by the actuarial team. This is the tool of choice, and frankly, Excel is a very powerful tool. Taking a complicated pricing workbook and turning it into code is a unique engineering challenge, and each company has its own way of approaching it. At Shepherd, we create engineering-friendly Excel models that our code-generation scripts can digest to produce the files used within our model.
The common structure for our models:
Here’s an example sheet we created for the purposes of this blog post:
We found this strategy to be especially useful for our team: formulas should (hopefully) stay consistent, while constants can change or be updated. For example, we may decide we want to change a variable that affects how much commission we charge. This is easy for anyone (eng or non-eng) to change within Excel; we then re-run our script against the Excel file to regenerate the code. Overall, this has given our insurance team the ability to make changes easily and quickly, then have them reflected in the model with a quick turnaround from engineering. The additional benefit is the time savings on engineering resources, whereby the team can continue to focus on new initiatives rather than constantly updating existing pricing models.
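To make the constants-versus-formulas split concrete, here is a minimal sketch of what the generated code could look like. All names and numbers are hypothetical, not Shepherd's actual model: the constants would be regenerated from the workbook on each change, while the formula code mirrors the workbook's calculations and stays stable.

```typescript
// Hypothetical output of the Excel-to-code generation step.
// The constants below would be regenerated whenever the insurance
// team edits the workbook; the formula stays stable across runs.

// --- generated constants (sourced from workbook cells) ---
const BROKER_COMMISSION_RATE = 0.15; // editable in Excel by non-engineers
const BASE_RATE_PER_1000 = 2.4;      // base rate per $1,000 of exposure

// --- formula mirroring the workbook's calculation ---
function grossPremium(exposure: number): number {
  const netPremium = (exposure / 1000) * BASE_RATE_PER_1000;
  // Gross up the net premium so commission can be paid out of it.
  return netPremium / (1 - BROKER_COMMISSION_RATE);
}
```

With this shape, changing the commission rate is a one-cell edit in the workbook plus a re-run of the generator; no formula code has to change.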
Building the logic to match formulas within a workbook file is the most time-consuming and error-prone element of this process. Engineers need to follow the various calculations done within the Excel model and write code to match outputs.
We’ve expended a large effort to combat errors by doubling down on a strong testing culture for our models and double-checking our work with the insurance team. We’ve also created testing helpers to connect to the Google Sheet model, fill in data, and capture the output of the model. After that’s done, we then run those same inputs against our code models to make sure it matches. Overall, if we could export the functions from Excel directly into code, we would – but this is for later exploration and another blog post!
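As a rough sketch of that comparison step, the captured sheet outputs can be treated as fixtures and replayed against the code model. The fixture shape, input names, and the stand-in model below are all hypothetical; in practice the fixtures would come from our Google Sheets testing helpers.

```typescript
// Hypothetical fixture: inputs we pushed into the Google Sheet model,
// paired with the outputs we captured back from it.
interface CapturedCase {
  inputs: { tiv: number; rate: number };
  sheetOutput: number;
}

const fixtures: CapturedCase[] = [
  { inputs: { tiv: 500_000, rate: 0.002 }, sheetOutput: 1000 },
  { inputs: { tiv: 2_000_000, rate: 0.0015 }, sheetOutput: 3000 },
];

// Stand-in for the generated code model under test.
function codeModel({ tiv, rate }: { tiv: number; rate: number }): number {
  return tiv * rate;
}

// Compare within a small tolerance -- Excel and IEEE-754 floating point
// can disagree in the last few decimal places.
function matchesSheet(c: CapturedCase, epsilon = 1e-6): boolean {
  return Math.abs(codeModel(c.inputs) - c.sheetOutput) < epsilon;
}
```

Running every captured case through `matchesSheet` in CI gives a regression check that the code model and the workbook still agree.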
Prior to building our models, we decided to spend two weeks exploring how other insurtechs have solved this same challenge. Here’s a shortlist of options we considered and why we decided against them.
If your models are stored and edited in Google Sheets, why not run the models there? On the surface, this isn’t a bad idea, but it would quickly run into a few problems:
This is another option we considered that is similar to the prior one but would solve some latency and collision issues. It’d also mean we could store the different model versions alongside our code, allowing us to run the appropriate one. Unfortunately, our calculations are pretty complex, and we haven’t found a library that makes it easy to evaluate Excel formulas programmatically. Most libraries are great at reading and writing Excel files, but not at running calculations. We could have explored some Microsoft APIs to solve this, but ultimately, testing would have been challenging. We may explore this option in the future to speed up the process of building more models.
One of our fellow insurtech peers does this. Every change they make to their models results in a new server deploy, which their backend API calls for model calculations. This could be a fairly simple server that does just one thing: receive an API request, run the calculations, and return the results. We did consider this option deeply but felt that having a TypeScript library imported by our TypeScript backend would maximize type safety and ultimately reduce type errors. And it works! We’ve had many instances where we’ve renamed variables in our library and can quickly identify every affected call site when a new version is installed and imported into our backend. This would have been difficult to do with an API.
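A small sketch of why the library approach catches renames at compile time. The type and field names here are invented for illustration; in reality the input type would be imported from the versioned model package rather than defined inline.

```typescript
// Hypothetical input type that a versioned model library would export.
interface ModelInputs {
  totalInsuredValue: number;
  constructionType: "frame" | "masonry" | "steel";
}

function priceSubmission(inputs: ModelInputs): number {
  // If a new library version renamed `totalInsuredValue`, this property
  // access would fail to compile -- unlike a JSON payload posted to a
  // remote calculation API, which would only fail at runtime.
  const base = inputs.totalInsuredValue * 0.001;
  const factor = inputs.constructionType === "frame" ? 1.25 : 1.0;
  return base * factor;
}
```

The compiler, rather than an integration test, becomes the first line of defense against drift between the model library and the backend.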
Our Head of Underwriting, Costas Hadjipateras, wrote a great post about underwriting as a blend of art and science. Our ability to leverage engineering to iterate on the science is our unique edge as a modern insurtech. As a result, our platform empowers our underwriting team to do their best work (the art). Building our policy admin system and our risk models in-house affords us the opportunity to dig deep into our data calculations and make adjustments to our models at record speed.