ch11 doubts
- Step 3a of territorial ratemaking in the Wiki article (Spatial smoothing methods) - Why would it not be appropriate under adjacency-based smoothing to supplement the data of zip code A, say, with that of zip code B if they are separated by a high-speed rail corridor? I understand it for natural boundaries; for example, if A and B have a river between them, then theft claims in A are likely to be quite different from those in B, and it may not be appropriate to smooth A's data using B's. Can you provide a similarly simple and intuitive example for artificial boundaries?
- Step 4a of territorial ratemaking in the Wiki article - While clustering individual units into territories, it is important to balance homogeneity, credibility, and the statistical significance of differences in loss experience. But if we specifically use the QUANTILE method of clustering (creating territories with either an equal number of geographical units or an equal number of exposures in each territory), then aren't we constraining our main requirement of balancing the above objectives through careful selection and clustering of the units into territories? The SIMILARITY method of clustering makes much more sense to me here.
- Fall 2018 (Q10) part b. (Sample Answer 2) --> How does spatial smoothing help allocate residual risk between the basic geographical units selected at step #3 of establishing territorial boundaries (after we quantify systematic risk at step #2)? What are systematic and residual risk, and how do they come into the picture here?
- Under the Increased Limits ratemaking calculations (of your own version of the text example), for LAS(200) say, why is the calculation LAS(100) + Prob(X > 100) * LAS(100, 200) and not LAS(100) + Prob(100 < X < 200) * LAS(100, 200)? In words, why don't we multiply the layer LAS by the layer probability only?
- Fall 2019 (Q13) --> I followed the exact same procedure you used in the Wiki article to calculate the ILF. However, that approach somehow leads to double counting the probability terms. It's a common error that the examiners also pointed out in their report, but I am still not able to understand how the probability is getting double counted. I have sent you my workings in Excel via message.
- Spring 2016 (Q11) --> First, in part b., one of the sample answers says that the following relationship holds: basic limits trend < ground-up trend < excess loss trend. Please explain. In part c., I did not understand Sample Answer 1 and why it is acceptable. Also, why were two such different answers both okay?
Thanks.
Comments
Question 1: Step 3a of territorial ratemaking in the Wiki article (Spatial smoothing methods) - Why would it not be appropriate under adjacency-based smoothing to supplement the data of zip code A, say, with that of zip code B if they are separated by a high-speed rail corridor? I understand it for natural boundaries; for example, if A and B have a river between them, then theft claims in A are likely to be quite different from those in B, and it may not be appropriate to smooth A's data using B's. Can you provide a similarly simple and intuitive example for artificial boundaries?
Question 2: Step 4a of territorial ratemaking in the Wiki article - While clustering individual units into territories, it is important to balance homogeneity, credibility, and the statistical significance of differences in loss experience. But if we specifically use the QUANTILE method of clustering (creating territories with either an equal number of geographical units or an equal number of exposures in each territory), then aren't we constraining our main requirement of balancing the above objectives through careful selection and clustering of the units into territories? The SIMILARITY method of clustering makes much more sense to me here.
Question 3: Fall 2018 (Q10) part b. (Sample Answer 2) --> How does spatial smoothing help allocate residual risk between the basic geographical units selected at step #3 of establishing territorial boundaries (after we quantify systematic risk at step #2)? What are systematic and residual risk, and how do they come into the picture here?
Note that you would not have to perform this type of calculation on the exam. Any question you get on this topic should be a short-answer question where you have to demonstrate understanding of the general procedure and related considerations.
Question 4: Under the Increased Limits ratemaking calculations (of your own version of the text example), for LAS(200) say, why is the calculation LAS(100) + Prob(X > 100) * LAS(100, 200) and not LAS(100) + Prob(100 < X < 200) * LAS(100, 200)? In words, why don't we multiply the layer LAS by the layer probability only?
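A tiny numeric sketch may make the multiplier concrete (the losses below are my own toy numbers, not the text example). Every loss that pierces 100 contributes something to the (100, 200] layer, and losses above 200 contribute the layer's full width of 100, so the weight on the conditional layer severity LAS(100, 200) = E[min(X, 200) - 100 | X > 100] must be Prob(X > 100); Prob(100 < X < 200) would ignore the full-width contributions of the largest losses.

```python
# Toy illustration (hypothetical numbers) of why
# LAS(200) = LAS(100) + P(X > 100) * LAS(100, 200).

losses = [50, 150, 250, 400]   # hypothetical ground-up losses
n = len(losses)

las_100 = sum(min(x, 100) for x in losses) / n     # 87.5
las_200 = sum(min(x, 200) for x in losses) / n     # 150.0, computed directly

piercing = [x for x in losses if x > 100]          # losses that enter the layer
p_pierce = len(piercing) / n                       # P(X > 100) = 0.75
layer_sev = sum(min(x, 200) - 100 for x in piercing) / len(piercing)  # ~83.33

print(las_200)                            # 150.0
print(las_100 + p_pierce * layer_sev)     # 150.0 -- matches the direct value

# The "layer probability" version undercounts: the 250 and 400 losses each
# contribute the full layer width of 100, but P(100 < X < 200) excludes them.
p_inside = sum(1 for x in losses if 100 < x < 200) / n   # 0.25
print(las_100 + p_inside * layer_sev)     # ~108.3 -- too low
```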
Question 5: Fall 2019 (Q13) --> I followed the exact same procedure you used in the Wiki article to calculate the ILF. However, that approach somehow leads to double counting the probability terms. It's a common error that the examiners also pointed out in their report, but I am still not able to understand how the probability is getting double counted. I have sent you my workings in Excel via message.
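Without seeing the Excel workings, here is a generic sketch of the usual double-counting trap (toy numbers again, not the Fall 2019 data, and the error mechanism shown is an assumption about where workings typically go wrong). The layer term can enter LAS in one of two equivalent ways: as an unconditional average over all claims, or as a conditional average over piercing claims times the piercing probability. Applying the probability to a layer average that is already spread over all claims counts that probability twice.

```python
# Sketch of the double-counting trap (hypothetical numbers).

losses = [50, 150, 250, 400]
n = len(losses)

las_100 = sum(min(x, 100) for x in losses) / n                  # 87.5

layer_amounts = [min(x, 200) - 100 for x in losses if x > 100]  # [50, 100, 100]

# Route 1: unconditional layer average (divide by ALL claims) -- no P needed,
# because the division by n already reflects how likely the layer is hit.
uncond_layer = sum(layer_amounts) / n                           # 62.5
print(las_100 + uncond_layer)                                   # 150.0

# Route 2: conditional layer average (divide by piercing claims) times P.
cond_layer = sum(layer_amounts) / len(layer_amounts)            # ~83.33
p = len(layer_amounts) / n                                      # 0.75
print(las_100 + p * cond_layer)                                 # 150.0

# The error: mixing the two, i.e. multiplying the unconditional average by P.
# The probability is now in the term twice, once implicitly and once explicitly.
print(las_100 + p * uncond_layer)                               # ~134.4, too low
```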
Question 6: Spring 2016 (Q11) --> First, in part b., one of the sample answers says that the following relationship holds: basic limits trend < ground-up trend < excess loss trend. Please explain. In part c., I did not understand Sample Answer 1 and why it is acceptable. Also, why were two such different answers both okay?
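For part b., a two-loss sketch (hypothetical numbers: a 10% uniform severity trend and a basic limit of 100) shows the leverage the sample answer relies on: losses capped at the basic limit absorb only part of the trend, while the excess layer absorbs all of the dollar growth over a smaller base.

```python
# Leverage of a uniform severity trend across basic, ground-up, and
# excess layers (hypothetical losses of 80 and 150, 10% trend, limit 100).

trend = 1.10
limit = 100
losses = [80, 150]

def total(xs, f):
    return sum(f(x) for x in xs)

ground_before = total(losses, lambda x: x)                          # 230
ground_after  = total([x * trend for x in losses], lambda x: x)     # 253

basic_before = total(losses, lambda x: min(x, limit))               # 180
basic_after  = total([x * trend for x in losses], lambda x: min(x, limit))  # 188

excess_before = total(losses, lambda x: max(x - limit, 0))          # 50
excess_after  = total([x * trend for x in losses], lambda x: max(x - limit, 0))  # 65

print(basic_after / basic_before - 1)    # ~0.044: the 150 loss is already capped
print(ground_after / ground_before - 1)  # 0.100: the full trend, by construction
print(excess_after / excess_before - 1)  # 0.300: all growth above 100, small base
```

The capped loss contributes no trend at all once it exceeds the limit, so basic limits trend sits below the ground-up trend, and the entire dollar growth of large losses lands in the excess layer, which is why the excess trend is leveraged above it.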