CIA.Models
Reading: Use of Models Educational Note
Author: Canadian Institute of Actuaries (CIA)
BA Quick-Summary: Models
Contents
- 1 Pop Quiz
- 2 Study Tips
- 3 BattleTable
- 4 In Plain English!
- 5 BattleCodes
- 6 POP QUIZ ANSWERS
Pop Quiz
Compare & contrast scenario-testing & sensitivity-testing. (This is from OSFI.Stress)
Study Tips
This paper on models isn't too hard and it's quite nicely written. The most important thing you have to know is the definition of a model. (A model is a practical representation of relationships among entities using Financial / Economic / Mathematical / Statistical concepts.) You also have to know the elements of a model, and be able to explain model risk. Of the four new papers on the syllabus for Fall 2018, this seems to me to be the most important.
Unfortunately there are a lot of bullet point lists, which is a favorite type of CAS exam question. That means there's more than average to memorize. (Get started early!) But aside from the basic definitions and concepts, there are actually some fun problems that can be created with this material. See mini BattleQuiz #6!
Estimated study time: 1-2 days (not including subsequent review time)
BattleTable
- this reading was new for 2018.Fall
Questions held out from Fall 2019 exam: #30. (Skip these now to have a fresh exam to practice on later. For links to these questions, see Exam Summaries.)
| reference | part (a) | part (b) | part (c) | part (d) |
|---|---|---|---|---|
| E (2019.Spring #26) | define terms: model, model risk | list considerations: severity/likelihood (failure) | | |
In Plain English!
Section 1: Intro & Background
Intro
Computer modeling and artificial intelligence are the future. Models have come a long way since the time of Sir Isaac Newton. The table below isn't part of the paper, but I think it helps to provide some historical context.
| Era | Example | Type of model | Solved using... |
|---|---|---|---|
| early physical models | Newton's law of gravitation | algebraic equation | simple algebra |
| 20th century models | economics & finance | nonlinear differential equations | numerical analysis on a computer |
| 21st century models | hurricane forecasting | computer simulation | 1000s of runs of a computer program |
The 21st century developments are possible only because of the exponential increase in computing power over the last several decades. It's a fascinating area of research, and this paper is a well-written introduction.
Digression
A while back, I spent at least a year writing a policy and claims simulation program that I called SimPolicy. It simulates individual policyholders and their claims. As the simulation runs, it dumps the output into a premium and claims database, which is then fed into reserving software. I could then see exactly how different reserving methods performed for different scenarios. SimPolicy also helped me develop intuition on how changes in simulation parameters affect the actuarial triangles. It was super-fun to play with! And it showed me first-hand how useful modeling and simulations could be.
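If you've never built anything like this, here's a minimal sketch of what a toy SimPolicy-style simulator might look like. This is my illustration, not the paper's: the distributions, parameter values, and payment pattern are all made up.

```python
# Toy SimPolicy-style simulator (illustration only - all parameters are made up)
import numpy as np

rng = np.random.default_rng(seed=2018)

def simulate_accident_year(n_policies=1000, freq=0.05, sev_mu=8.0, sev_sigma=1.2):
    """One accident year: Poisson claim counts per policy, lognormal severities."""
    n_claims = rng.poisson(freq, size=n_policies).sum()
    severities = rng.lognormal(mean=sev_mu, sigma=sev_sigma, size=n_claims)
    return severities.sum()

# Spread each year's simulated ultimate over development years -> a paid triangle
payout_pattern = [0.5, 0.3, 0.2]  # assumed payment pattern
for ay in range(3):
    ultimate = simulate_accident_year()
    observed = [round(ultimate * p) for p in payout_pattern[: 3 - ay]]
    print(f"AY {ay + 1}: {observed}")
```

Feed output like this into a reserving method and you can watch how the estimates respond to the simulation parameters, which is exactly the intuition-building I described above.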
Anyway, the point is that when I'm reading a new paper I like to relate it to something that I already know. The material is then more meaningful, and if you do the same, you'll retain it better.
Can you think of your own examples of models - ones that already exist or maybe ones you'd like to build yourself? Be creative! Have fun! Try SimFlood.
Definitions
Let's get the basic definitions out of the way. The definition of a model may seem obvious, but it may very well be asked. Remember (2016.Fall #26a) from CIA.CSOP where they asked for the definition of claims liability and premium liability. You had to give the exact answer from the Statement of Principles for credit.
Anyway, our awesome friend Alice the Actuary made up a memory trick for remembering the definition of model: FEMS (as in feminine), which stands for
- Financial
- Economic
- Mathematical
- Statistical
model definition: a practical representation of relationships among entities using FEMS concepts
The next important fact is the elements of a model. And since we live in a society of equality, I made up the memory trick SIR, which stands for
- Specification
- Implementation
- Run.
model elements: all models require 3 elements, namely SIR (model Specification, model Implementation, model Run)
So far, so good. But now we have to define the items in SIR. (I know this intro stuff is a little dull so if you want a chuckle, take a look at the hypothetical examples in Section 6 of the source paper. I don't know whether it was Pierre or Bob, but someone had fun with the names of the actuaries in these examples!)
model specification: a description of the parts of a model and their interactions (includes data, assumptions, methods, entities, events)
model implementation: the systems that perform the calculations (computer programs, spreadsheets, ...)
model run: the inputs/outputs of the implementation
The last of these introductory definitions is:
model risk: the risk that the user will draw inappropriate conclusions due to shortcomings of the model or its use
There is also a short discussion on the difference between a model and a calculation. For example:
- calculating a least-squares regression line is a calculation (possibly part of a model)
- using GLMs (Generalized Linear Models) to segment business would be a model
The main distinction is the documentation required for a model (how it was chosen, how it's used).
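To make that distinction concrete, here's the first bullet in code. Fitting a least-squares line is a few lines of arithmetic, i.e., a calculation; it only becomes part of a model once you document why you chose it and how it's used. (The data here is made up.)

```python
# A least-squares regression line is just a calculation (made-up data)
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)  # fit y = slope*x + intercept
print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
```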
mini BattleQuiz 1
Model Risk
Model risk always exists because models are a simplification of reality. (And you should know the definition of 'model risk' backwards and forwards!) Models leave things out - they have to - but a good model captures the important things.
Model risk is measured by the severity of failure and the likelihood of failure. (Interestingly, the actuary has limited control over the severity, but quite a bit of control over the likelihood of model failure. Alice always makes sure to choose robust models to minimize the likelihood of failure!)
And Alice ALWAYS takes the time to assess the potential severity and likelihood of model failure. To do this, she keeps in mind various considerations. These considerations are listed in the mini BattleQuiz below.
mini BattleQuiz 2
Section 2: Choice of Model
Despite the title of this section, the authors don't say very much about how to actually choose a model for a particular purpose. Instead, what they talk about is how to evaluate or validate a model under consideration. In other words, they explain how to "kick the tires" to make sure the model isn't a dud. They've broken the discussion into 4 cases, as discussed below.
New (or substantially changed) Model
This type of model requires the most thorough validation before being used. You should memorize the 4 steps: review specification, validate implementation, deal with limitations, and keep documentation. I couldn't think of any fun memory tricks myself, but beniddo came up with one (shout-out to beniddo!). Note that SLID reverses the order of the 2nd and 3rd steps:
- beniddo created a new model to impress a girl and SLID the docs into her email:
- Specification (review Specification)
- Limitations (deal with Limitations)
- Implementation (validate Implementation)
- Documentation (keep Documentation)
You should also be able to explain each of these steps. I didn't want to clutter up the wiki article, however, so the details are in the mini BattleQuiz.
mini BattleQuiz 3
Other Models
The type of model validation depends on the type of model. (That's pretty obvious!) The 4 types of models discussed are:
- new model (discussed above)
- existing model used in a new way
- model approved by others
- model outside actuary's expertise
There are no deep concepts here. For example, to validate an existing model used in a new way, you should do two things:
- check that the initial model was properly validated
- review limitations in the new application that may not have been relevant in the initial application
The validation process for the last two types is given in the mini BattleQuiz. The new model is the one that requires the most work by the actuary.
Sensitivity Testing
The Pop Quiz at the top of the article reminded you of the differences between scenario-testing and sensitivity-testing. That question has been asked more than once on past exams and I took that as a signal to pay extra attention to this section in the modeling paper. I picked out two questions I think have a reasonable possibility of showing up on the exam sooner or later.
Question: what is the purpose of sensitivity testing with respect to a model
- to validate the model
- to understand the relationship between inputs/outputs
- to develop a sense of comfort with the model
Question: how can model assumptions be tested in the context of sensitivity-testing (see the sketch after this list)
- test assumptions outside the expected range
- test assumptions singly and then in combination
- test assumptions with a nonlinear relationship between inputs/outputs
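Here's a toy sketch of testing assumptions singly and then in combination. The rating formula and the shock sizes are hypothetical inventions of mine; the point is the testing pattern, not the numbers.

```python
# Toy sensitivity test: shock assumptions singly, then in combination
def indicated_premium(base_losses=100.0, trend=0.03, expense_ratio=0.30):
    """Hypothetical rating formula: trended losses grossed up for expenses."""
    return base_losses * (1 + trend) / (1 - expense_ratio)

baseline = indicated_premium()
shocks = {"trend +2 pts": {"trend": 0.05},
          "expense +5 pts": {"expense_ratio": 0.35}}

for name, kwargs in shocks.items():                           # singly
    print(f"{name}: {indicated_premium(**kwargs) - baseline:+.2f}")

combined = indicated_premium(trend=0.05, expense_ratio=0.35)  # in combination
print(f"both shocks: {combined - baseline:+.2f}")
```

Notice the combined shock moves the answer by more than the sum of the single shocks. That interaction is exactly why you test assumptions in combination, not just one at a time.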
mini BattleQuiz 4
Section 3: Minor Changes to a Model
This is a short section, and I think you can ignore it. All it says is that if you make a minor change to a model, you can use a more streamlined validation process. It's a matter of judgment. If you make more than just a minor change, however, it might be wise to revalidate it fairly thoroughly.
Section 4: Use of Models
This section starts with the common sense advice that you should reuse models wherever possible. For example, if you have a model that calculates trends for pricing work, you can probably use the same model for trends in valuation work. Segmentation for valuation work is usually broader, but the model should be able to accommodate that.
Anyway, we discussed above how to validate models under consideration. This is done before the model is used. When it's time to actually use the model, there are a bunch of other validations you have to do. This section isn't much more than a series of bullet point lists. What I've done for you here is pick out things that I think are likely to be asked on the exam.
Question: what types of validation should be performed when using a model
- data (should be "RelSuff" - Reliable & Sufficient)
- assumptions (some assumptions are not global but vary with model run)
- results (should be reasonable relative to input)
Now, I'm not sure if you'd be asked for more detail than I've included in the parentheses, but it might be wise to know specifically what's meant by the terms reliable & sufficient.
Question: explain the terms reliable & sufficient with respect to data validation within a model (a toy check follows the list)
- reliable:
- data reconciles to other audited sources (Ex: balance sheet)
- data is reasonable with respect to prior period data (Ex: data from prior quarter shouldn't be vastly different)
- sufficient:
- data fits model specification
- data is available in a consistent format
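As a tiny illustration of the second "reliable" bullet, a validation script might flag large swings versus the prior period for investigation. The threshold and figures here are made up.

```python
# Toy prior-period reasonableness check (threshold and figures are made up)
prior_quarter_losses = 5_200_000.0
current_quarter_losses = 5_450_000.0

change = current_quarter_losses / prior_quarter_losses - 1
if abs(change) > 0.25:  # flag swings larger than 25% for investigation
    print(f"WARNING: losses moved {change:+.0%} vs prior quarter - investigate")
else:
    print(f"losses moved {change:+.0%} vs prior quarter - looks reasonable")
```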
Since I'm a bit OCD, I'm always terrified I'm going to skip something that then appears on the exam! I'm sorry to dump all these lists on you, but there's one more from this section that the little voice inside my head keeps telling me to include. (I'm not crazy! Everyone has voices in their head, right? RIGHT??!! Please say yes!!)
Question: what are two checks that can be used to validate the results of a model (a toy version of the first check is sketched below)
- inputs/outputs should be consistent (Ex: if input data includes premium, then the premium totals in the output data should match)
- is the output reasonable in both magnitude and direction (Ex: if you increase loss trends by a little bit, do estimated prices also increase by just a little bit)
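The first check is easy to automate. Here's a sketch; the field names, figures, and tolerance are hypothetical.

```python
# Toy input/output consistency check: premium totals should reconcile
input_premium_by_policy = [1200.0, 950.0, 1100.0]  # from the input data
output_premium_total = 3250.0                      # reported by the model run

difference = abs(sum(input_premium_by_policy) - output_premium_total)
if difference > 0.01:  # allow only rounding-level differences
    raise ValueError(f"premium does not reconcile: off by {difference:.2f}")
print("premium reconciles")
```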
This section ends with a couple of pages on stochastic models. Most models I've seen are stochastic, but you can also get some really good information with deterministic runs of complicated models. In any case, the discussion here feels too detailed and seems like something you can skip. (I hope the CAS doesn't buy BattleActs and read that, because then they'll ask a 5 point question on stochastic models!)
mini BattleQuiz 5
Section 5: Reporting
I don't see anything worthy of an exam question here. It just says the actuary doesn't need to report anything specific about the model, just the results of the analysis. The final user doesn't give a crap whether the actuary used a model or just pulled the results out of their ass. The only exception might be if the model has a serious limitation that impacts the reliability of the results.
Section 6: Hypothetical Examples
This section is probably the most interesting because it looks at actual examples of models. Up until now, the discussion has been mostly abstract - defining basic terms and concepts. There are 6 examples, but the first two are life and pension so I skipped them. In my opinion, examples 6.3 and 6.5 were the best because they explicitly mentioned the model's risk-rating. You should read the introductory paragraph in the source paper for each along with the first bullet point.
The reason for focusing on the first bullet point for those examples is that this is where they evaluate the model risk. (The remaining bullet points get into the details of the analysis, which don't seem as important.) Let's look at example 6.3 - P&C Valuation Using the Chain Ladder Method.
| task | background | model risk considerations | model risk-rating |
|---|---|---|---|
| auto claim liability valuation (chain ladder) | model developed by company several years ago; no recent modifications | results are significant to financials; current model still applicable | severity: high; likelihood: low/medium; OVERALL: medium to high |
The key idea is assessing the model risk based on considerations that we discussed in an earlier section. I didn't list the considerations in the wiki article; I made you go to mini BattleQuiz #2 to find out what they were, and I hope you did! But I'll be nice here and list them for you. (Remember, we're using a two-dimensional approach: severity of failure & likelihood of failure)
SEVERITY of failure: FIF (TED C. is not good at using models, SUMtimes it can be severe. See this forum post for an explanation and also another memory trick by yifanwang!)
- Financial significance
- Importance of model
- Frequency of use (high frequency of use means higher risk)
LIKELIHOOD of failure:
- complexity of model (bells and whistles are nice but too many complicated features increase model risk)
- expertise of users (if the user is a total dumb-ass then the likelihood of failure is epic!)
- docs (crappy docs increase risk)
- testing (if your assistant is posting to Instagram when they're supposed to be testing the model, then failure is near certain!)
So, for the example above, there might be two things that play into the risk-rating of medium to high. The first is that the results are significant to the company's financial statements, so the potential severity of a failure is high. The second is that the model hasn't been changed, and if we assume the original validation process was sound, that doesn't add much to the likelihood of failure. The likelihood is therefore low/medium. (It's judgmental.)
You can make a similar table for the other examples, and I think this would be a good exam question. But first...close your books!
Pop Quiz! does the actuary have more control over the severity of model failure OR the likelihood of model failure
- Answer: more control over likelihood of failure: (see mini BattleQuiz #2)
- choose a more reliable model (within the actuary's control)
- test the model more thoroughly before using (also within the actuary's control)
Here is a summary for example 6.5 - Forecasting Capital Requirements Using a Spreadsheet Model:
| task | background | model risk considerations | model risk-rating |
|---|---|---|---|
| forecast capital requirements | developed a new spreadsheet model | capital requirements are significant; model used frequently (quarterly); data inputs are complex | severity: high; likelihood: medium/high; OVERALL: moderately high |
Appendix 1: Risk-Rating Schemes
All you need to know from this section is that there are two approaches to risk-rating a model:
- uni-dimensional approach: rating from 1-20 (20 is high) based on financial significance, complexity, expertise of users, docs
- two-dimensional approach: assessed separately for severity & likelihood of failure - final rating is a balance of these (a toy version is sketched below)
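Just to make the two-dimensional approach concrete, here's a toy version. The scale and the combination rule are my own invention; in practice the final balancing is judgmental.

```python
# Toy two-dimensional risk-rating: balance severity & likelihood of failure
LEVELS = ["low", "medium", "high"]

def overall_rating(severity: str, likelihood: str) -> str:
    """Average the two positions on the scale (a crude stand-in for judgment)."""
    midpoint = (LEVELS.index(severity) + LEVELS.index(likelihood)) / 2
    return LEVELS[round(midpoint)]

# Example 6.3: severity high, likelihood low/medium
print(overall_rating("high", "low"))     # medium
print(overall_rating("high", "medium"))  # high
```

The paper's example 6.3 lands between these two outputs, at medium to high, which is the kind of balancing a formula can't fully capture.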
mini BattleQuiz 6
BattleCodes
Memorize:
- definitions: model, model elements, model specification, model implementation, model run, model risk
- validating the 4 different types of models before use
- sensitivity testing
- validating models during use: DAR (Data, Assumptions, Results)
Conceptual:
- Given some background information about a model, you should be able to estimate its risk-rating. You could use either a uni-dimensional or two-dimensional approach. That means you have to memorize some of the considerations in estimating the potential severity of failure and the potential likelihood of failure.
Calculational:
- none
Full BattleQuiz
POP QUIZ ANSWERS
SCENARIO-TESTING:
- significant changes to risk factors
- observe future state including ripple effects & management actions over a longer time horizon
- more complex & comprehensive
SENSITIVITY-TESTING:
- incremental changes to risk factors
- shock is more immediate & time horizon shorter
- simpler - fewer resources required