This is quite a broad question, so I'll try to answer as best I can given the context.
Basically, the considerations for input data are the following:
Sufficiency:
-Do the data meet the requirements of the model specification?
-If the model will be used repeatedly, are the data in a consistent format every time?
Reliability:
-Reconciliation to other sources (preferably audited): e.g., does an asset file reconcile to the balance sheet, or do benefit/premium totals reconcile to other company records?
-Summarize and compare input data to prior periods, if applicable.
-Check and investigate outliers (e.g., age 115, zero benefit, zero premium).
-How are missing data handled: are they filled in through assumptions, or flagged as errors?
-Review data assumptions periodically to ensure appropriateness.
-Confirm the size of the data file is consistent with prior periods.
It is almost a common-sense/reasonability check. If you are building/testing a model but the data are considerably different period to period, is that good? (No, it is not.) If you are building/testing a model but you only have 5 data points, does that seem okay? (No.) If you are building/testing a model to estimate frequency, but half of the data is missing claim counts, is it still appropriate? (No.)
Considerations like these are what is meant by considerations of input data; see the sketch below for what a few of them might look like in practice.
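For concreteness, here is a minimal Python sketch of some of these checks. The column names (age, premium, benefit), the ledger total, and the 1%/10% tolerances are all hypothetical assumptions for illustration, not part of any standard:

```python
import pandas as pd

def check_input_data(df: pd.DataFrame, prior_row_count: int,
                     ledger_premium_total: float) -> list[str]:
    """Return a list of reasonability issues found in the input data."""
    issues = []

    # Reconciliation: does the premium total tie to other company records?
    premium_total = df["premium"].sum()
    if abs(premium_total - ledger_premium_total) > 0.01 * ledger_premium_total:
        issues.append(f"Premium total {premium_total:,.0f} does not "
                      f"reconcile to ledger total {ledger_premium_total:,.0f}")

    # Outliers: implausible ages, zero-benefit or zero-premium records.
    if (df["age"] > 110).any():
        issues.append("Records with age > 110 -- investigate")
    if ((df["benefit"] == 0) | (df["premium"] == 0)).any():
        issues.append("Zero-benefit or zero-premium records -- investigate")

    # Missing data: flag explicitly rather than silently defaulting.
    missing = df[["age", "premium", "benefit"]].isna().sum()
    if missing.any():
        issues.append(f"Missing values by field: {missing[missing > 0].to_dict()}")

    # File size: is the row count consistent with the prior period?
    if abs(len(df) - prior_row_count) > 0.10 * prior_row_count:
        issues.append(f"Row count {len(df)} differs from prior period "
                      f"{prior_row_count} by more than 10%")

    return issues
```

None of these checks prove the data are right; they just surface the obvious failures so someone can investigate.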
Hope this helps, thanks!
How about these 3 questions?

1. What considerations apply when updating a model's risk rating?
2. How do you assess a model? (Can I use the considerations in assessing the severity/likelihood of model failure?)
3. How do you validate a model? Are they asking how to validate when using a model?
Find answers below! Just wanted to ask, where are you pulling the screenshots/comments from? Thanks!
1. Risk-rating a model is to “assess how risky a model is so that the amount of work done to choose, validate, and document a model may be appropriate to the circumstances”. A model is assessed separately for severity and likelihood of failure, and the risk rating is determined by balancing the two aspects. So updating a model's risk rating is the same exercise: you review those same factors to see whether severity or likelihood has changed.
2. Yup, that's exactly it! There are the uni-dimensional and multi-dimensional approaches, and from there the severity/likelihood framework lets you assess further (how likely an issue/failure is, how material the impact would be, etc.).
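As a toy illustration of the multi-dimensional idea (severity crossed with likelihood), here is a sketch; the 3-point scales and the cutoffs are my own assumptions, not a prescribed framework:

```python
# Illustrative 3-point scale for both dimensions.
SCALE = {"low": 1, "medium": 2, "high": 3}

def risk_rating(severity: str, likelihood: str) -> str:
    """Balance severity and likelihood of model failure into one rating."""
    score = SCALE[severity] * SCALE[likelihood]
    if score >= 6:
        return "high"
    return "medium" if score >= 3 else "low"

# Updating a rating (question 1) is the same exercise: re-assess the
# factors and re-rate with the new inputs.
print(risk_rating("high", "medium"))  # high
print(risk_rating("low", "medium"))   # low
```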
3. Yup, it's asking how you validate to see if a model is "good". I think the wiki/battlecards cover it pretty well (you also need to consider whether it is a new or an existing model), but it comes down to things like validation of data inputs, validation of assumptions, validation of results, etc. These validation steps also tie in with risk rating!
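A rough sketch of those three validation buckets, under assumed inputs: the 5% tolerance and the idea of a benchmark (a prior model, back-testing against actuals, or a simplified recalculation) are illustrative choices of mine:

```python
def validate_model(inputs_ok: bool, assumptions_ok: bool,
                   model_output: float, benchmark: float,
                   tolerance: float = 0.05) -> dict[str, bool]:
    """Summarize validation of data inputs, assumptions, and results."""
    return {
        "data_inputs": inputs_ok,       # e.g., the reconciliation/outlier checks above
        "assumptions": assumptions_ok,  # e.g., reviewed against experience studies
        "results": abs(model_output - benchmark) <= tolerance * abs(benchmark),
    }

print(validate_model(True, True, model_output=102.0, benchmark=100.0))
# {'data_inputs': True, 'assumptions': True, 'results': True}
```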
It is from the exam summaries. Is it possible to have a comment/answer for each summary from BA?
https://www.casact.org/exams-admissions/exam-results/post-exam-summaries
Ah, got it. Good point on that; we will see what we can do and let you know. Thanks for bringing this up!