ch11 doubts

  1. Step 3a of territorial ratemaking in the Wiki article (Spatial smoothing methods) - Why would it not be appropriate under adjacency-based smoothing to supplement the data of zip code A, say, with zip code B if they are separated by a high-speed rail corridor? I understand it for natural boundaries: e.g., if A and B have a river between them, then theft claims in A are likely to be quite different than in B, and it may not be appropriate to smooth A's data using B's. Can you provide a similar simple and intuitive example for artificial boundaries too?
  2. Step 4a of territorial ratemaking in the Wiki article - While clustering individual units into territories, it is important to balance homogeneity, credibility, and statistical significance of differences in loss experience. But if we specifically use the QUANTILE method of clustering (creating territories with either an equal number of geographical units or an equal number of exposures in each territory), then aren't we constraining our main requirement of balancing the above objectives by carefully selecting and clustering the units to form territories? The SIMILARITY method of clustering makes a lot more sense to me here.
  3. Fall 2018 (Q10) part b. (Sample Answer 2) --> How does spatial smoothing help to allocate residual risk between basic geographical units selected at step #3 of establishing territorial boundaries (after we quantify systematic risk at step #2)? What are systematic and residual risk, and how do they come into the picture here?
  4. Under Increased Limits ratemaking calculations (of your own version of the text example), for LAS(200), say, why is the calculation LAS(100) + Prob(X>100)*LAS(100,200) and not LAS(100) + Prob(100<X<200)*LAS(100,200)? In words, why don't we multiply the layer LAS by the layer probability only?
  5. Fall 2019 (Q13) --> I followed the exact same procedure you followed in the Wiki article to calculate the ILF. However, that approach somehow leads to double counting the probability terms. It's a common error that the examiners also pointed out in their report. But I am still not able to understand how the probability is getting double counted. I have sent you my workings in Excel via message.
  6. Spring 2016 (Q11) --> First, in part b., one of the sample answers says that the following relationship holds: basic limits trend < ground-up trend < excess loss trend. Please explain. In part c., I did not understand Sample Answer 1 and why it is acceptable. Also, why were two such different answers okay?

Thanks.

Comments

  • edited April 2021

    Question 1: Step 3a of territorial ratemaking in the Wiki article (Spatial smoothing methods) - Why would it not be appropriate under adjacency-based smoothing to supplement the data of zip code A, say, with zip code B if they are separated by a high-speed rail corridor? I understand it for natural boundaries: e.g., if A and B have a river between them, then theft claims in A are likely to be quite different than in B, and it may not be appropriate to smooth A's data using B's. Can you provide a similar simple and intuitive example for artificial boundaries too?

    • A river and train tracks, especially high-speed rail, would probably have similar effects. Both can only be crossed via bridges, and this separation often means things like land use, income, and demographics can be quite different on one side of the boundary versus the other.
    • In general, the effect of a boundary would have to be considered on a case-by-case basis. For example, a normal type of street probably wouldn't have any effect, but a highway probably would.
  • Question 2: Step 4a of territorial ratemaking in the Wiki article - While clustering individual units into territories, it is important to balance homogeneity, credibility, and statistical significance of differences in loss experience. But if we specifically use the QUANTILE method of clustering (creating territories with either an equal number of geographical units or an equal number of exposures in each territory), then aren't we constraining our main requirement of balancing the above objectives by carefully selecting and clustering the units to form territories? The SIMILARITY method of clustering makes a lot more sense to me here.

    • Yes, the quantile method does impose constraints that might make it more difficult to balance homogeneity, credibility, and statistical significance. There might, however, be data sets where the quantile method makes sense. Dividing the data into five equally sized classes is better defined and requires less judgment than clustering. You might only want to do the extra work of the clustering method if the simpler quantile method didn't seem appropriate for the specific situation. The sketch below contrasts the two approaches.
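
    A minimal sketch of the contrast, using made-up loss-cost estimates per geographic unit (the data and the simple 1-D k-means loop are illustrative assumptions, not anything prescribed by the text):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    loss_cost = rng.lognormal(mean=5, sigma=0.4, size=200)  # hypothetical estimate per unit

    # Quantile method: five territories with an equal number of units each.
    # Well-defined and low-judgment, but the breaks are forced by the quantiles
    # rather than by natural gaps in the loss costs.
    edges = np.quantile(loss_cost, [0.2, 0.4, 0.6, 0.8])
    quantile_territory = np.digitize(loss_cost, edges)

    # Similarity method (here a basic 1-D k-means): breaks land where the loss
    # costs naturally separate, which favors homogeneity within territories.
    centers = np.quantile(loss_cost, [0.1, 0.3, 0.5, 0.7, 0.9])  # initial guesses
    for _ in range(25):
        cluster = np.argmin(np.abs(loss_cost[:, None] - centers[None, :]), axis=1)
        centers = np.array([loss_cost[cluster == k].mean() if (cluster == k).any()
                            else centers[k] for k in range(5)])

    print(np.bincount(quantile_territory))  # always equal-sized groups of 40
    print(np.bincount(cluster))             # group sizes driven by the data
    ```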
  • Question 3: Fall 2018 (Q10) part b. (Sample Answer 2) --> How does spatial smoothing help to allocate residual risk between basic geographical units selected at step #3 of establishing territorial boundaries (after we quantify systematic risk at step #2)? What are systematic and residual risk, and how do they come into the picture here?

    • Systematic risk is risk that is inherent in the given situation. Residual risk is risk that remains after systematic risk has been removed.
    • Another way of thinking about it is that systematic variance is variance that can be explained by the data. Residual variance is variance that cannot be explained.
    • These 2 types of risk are always present, but ideally systematic risk is high and residual risk is low. That means your data/model provides a good explanation of the underlying situation. (If residual risk is very high, that means your data/model is essentially random and you would not be able to make statistically significant predictions.)
    • In this situation, spatial smoothing of the residuals is used to improve the predictive power of a multivariate analysis. According to the text:
      • In this case, the actuary applies smoothing techniques to the geographic residuals to see if there are any patterns in the residuals
      • In other words, the actuary tries to detect any systematic geographic patterns that are not explained by the geographical factors in the multivariate model. Any pattern in the residuals (i.e., the residuals are all positive or negative in a certain region) indicates the existence of geographic residual variation. Once identified, the spatially smoothed residuals can be used to adjust the geographic estimators to improve the overall predictive power of the model.

    Note that you would not have to perform this type of calculation on the exam. Any question you get on this topic should be a short-answer question where you have to demonstrate understanding of the general procedure and related considerations.
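
    Although, as noted, you wouldn't compute this on the exam, a tiny distance-based smoothing sketch on made-up residuals may help the intuition (the unit centroids, the bandwidth h, and the planted regional pattern are all illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 10, size=(100, 2))  # hypothetical centroids of 100 geographic units
    resid = rng.normal(0, 1, size=100)      # residuals from a multivariate model
    resid[xy[:, 0] < 3] += 0.8              # planted pattern: one region runs positive

    # Distance-based smoothing: average each unit's residual with its neighbors,
    # weighted by exp(-distance / h). Random noise tends to cancel, while a real
    # geographic pattern survives the averaging.
    h = 1.5
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    weights = np.exp(-dist / h)
    smoothed = weights @ resid / weights.sum(axis=1)

    # Smoothed residuals run clearly higher in the affected region than elsewhere,
    # flagging geographic residual variation the model did not explain.
    print(smoothed[xy[:, 0] < 3].mean(), smoothed[xy[:, 0] >= 3].mean())
    ```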

  • Question 4: Under Increased Limits ratemaking calculations (of your own version of the text example), for LAS(200), say, why is the calculation LAS(100) + Prob(X>100)*LAS(100,200) and not LAS(100) + Prob(100<X<200)*LAS(100,200)? In words, why don't we multiply the layer LAS by the layer probability only?

    • If you only considered the probability of claims being between 100K and 200K, you would be missing claims greater than 200K, and your answer for LAS(200) would be too low. Claims greater than 200K still contribute to LAS(200). It's just that they are capped at 200K.
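
    A minimal numeric check with five made-up claim sizes (hypothetical numbers, not the text example):

    ```python
    import numpy as np

    claims = np.array([30, 80, 150, 250, 400])  # hypothetical ground-up claims (000s)

    def las(c, limit):
        """Limited average severity: every claim capped at `limit`."""
        return np.minimum(c, limit).mean()

    # Layer severity 100-200 is averaged over ONLY the claims entering the layer.
    layer = claims[claims > 100]
    las_100_200 = (np.minimum(layer, 200) - 100).mean()

    p_excess = (claims > 100).mean()                    # P(X > 100): correct weight
    p_layer = ((claims > 100) & (claims < 200)).mean()  # layer probability: too small

    print(las(claims, 200))                           # 132.0 (direct computation)
    print(las(claims, 100) + p_excess * las_100_200)  # 132.0 (matches)
    print(las(claims, 100) + p_layer * las_100_200)   # ~98.67 (too low)
    ```

    The 250 and 400 claims each still add a full 100 of loss in the 100-200 layer, which is exactly why the weight on the layer severity is P(X > 100) rather than P(100 < X < 200).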


  • edited April 2021

    Question 5: Fall 2019 (Q13) --> I followed the exact same procedure you followed in the Wiki article to calculate the ILF. However, that approach somehow leads to double counting the probability terms. It's a common error that the examiners also pointed out in their report. But I am still not able to understand how the probability is getting double counted. I have sent you my workings in Excel via message.

    • I have inserted an Excel spreadsheet into the wiki that shows how to solve this problem using the method given in the wiki. It's at the end of the section Example C Censored Claims from Werner Chapter 11: https://www.battleacts5.ca/wiki5/Werner11.SpecialClass#Example_C:_Censored_Claims
    • There are 2 reasons the examiner's report might be a little hard to follow:
    • Reason 1: They took a shortcut by calculating the severity in each layer and incorporating the probability all in 1 step.
    • Reason 2: They used slightly different notation than I used in the wiki. For example, when they wrote LAS($50xs50), this looks like it should be the same thing as LAS(50-100) in my solution.
    • To explain a little further: LAS(50-100) in my solution uses only claims > 50. I then incorporate P(X>50) as a separate step to get LAS(100) as follows:
    • ==> LAS(100) = LAS(50) + P(X>50)*LAS(50-100)
    • It's the same reasoning for LAS(100-200) and LAS(200). Once you wrap your head around that, I believe the solution should make more sense.
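
    To see the double counting concretely, here is a sketch with four made-up claims (not the Fall 2019 data). The wiki's layer severity is conditional (averaged over claims > 50 only), while the examiners' one-step shortcut averages layer losses over all claims, which already builds P(X > 50) into the average:

    ```python
    import numpy as np

    claims = np.array([20, 60, 120, 300])  # hypothetical claim sizes (000s)
    las50 = np.minimum(claims, 50).mean()  # LAS(50) = 42.5

    # Wiki two-step approach: conditional layer severity, probability kept separate.
    in_layer = claims[claims > 50]
    las_50_100 = (np.minimum(in_layer, 100) - 50).mean()  # averaged over claims > 50
    p_gt_50 = (claims > 50).mean()

    # One-step shortcut: layer losses averaged over ALL claims, so the probability
    # of reaching the layer is already inside the average.
    layer_per_claim = np.clip(claims - 50, 0, 50).mean()

    print(las50 + p_gt_50 * las_50_100)       # 70.0   correct LAS(100)
    print(las50 + layer_per_claim)            # 70.0   also correct
    print(las50 + p_gt_50 * layer_per_claim)  # 63.125 WRONG: P(X>50) counted twice
    ```

    Mixing the two (applying P(X>50) to the shortcut's unconditional layer average) is the double counting the examiners flagged.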


  • edited April 2021

    Question 6: Spring 2016 (Q11) --> First, in part b., one of the sample answers says that the following relationship holds: basic limits trend < ground-up trend < excess loss trend. Please explain. In part c., I did not understand Sample Answer 1 and why it is acceptable. Also, why were two such different answers okay?

    • part (b): This relationship (the leveraged effect of limits on severity trend) is explained in chapter 6 here: https://www.battleacts5.ca/wiki5/Werner06.LossLAE#Leveraged_Effect_of_Limits_on_Severity_Trend. A small numeric illustration follows at the end of this list.
    • part (c): (This part of the question is actually from chapter 12.)
    • ==> Sample answer 1 bases the complement on losses capped at 100,000 while sample answer 2 bases the complement on losses capped at 250,000. Both methods are acceptable and in general there are many different ways of coming up with a complement of credibility. In other words, there is no single correct answer.
    • ==> You said you didn't understand sample answer 1, but I'm not sure exactly where you're having trouble. The 30,000,000 of losses capped at 100,000 is calculated from the given table of losses (amounts in thousands): (14,000 + 3,000 + 3,000) + (50+40+10)*100. The part with the ILFs is an application of the formula for Lower Limits Analysis, which you can find here: https://www.battleacts5.ca/wiki5/Werner12.Credibility#Lower_Limits_Analysis
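
    For part (b), a quick sketch with four made-up claims and a 10% ground-up severity trend shows the leveraged effect directly (all numbers are assumptions for illustration):

    ```python
    import numpy as np

    claims = np.array([40, 90, 160, 500])  # hypothetical ground-up claims (000s)
    trend = 1.10                           # 10% ground-up severity trend
    limit = 100                            # basic limit = excess attachment point

    def basic(c):  return np.minimum(c, limit).mean()      # basic-limits severity
    def excess(c): return np.maximum(c - limit, 0).mean()  # excess-layer severity

    for name, sev in [("basic limits", basic), ("ground-up", np.mean), ("excess", excess)]:
        print(name, round(sev(claims * trend) / sev(claims) - 1, 4))
    # basic limits 0.0394 < ground-up 0.1 < excess 0.1435
    ```

    Intuitively, the cap swallows any growth above the basic limit, so capped losses trend slower than ground-up, while the fixed attachment point makes excess losses grow faster than ground-up.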
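
    And for part (c), the capped-loss arithmetic with the table amounts read in thousands (my reading of how the sample answer groups the figures, so treat the grouping as an assumption):

    ```python
    # Losses already below the 100 (thousand) cap, plus the claims above the cap
    # contributing the full cap of 100 (thousand) each:
    capped_thousands = (14_000 + 3_000 + 3_000) + (50 + 40 + 10) * 100
    print(capped_thousands * 1_000)  # 30,000,000 of losses capped at 100,000
    ```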