# Creating a Custom Case Study

This page describes how users can create their own custom case studies using the new case form and produce the same set of tables and graphs shown in the book.

A Case has two units, where a unit can be a line, region, operating unit, or other division. The first unit should be the more volatile. Aggregate reinsurance is applied to the first unit, and results are presented on a gross and net basis. The units can be specified using a parametric distribution, a discrete distribution, or a flexible frequency/severity specification.

All Custom Case studies are stored on the site and become available to all users.

A Case is specified by the following information.

• Case study id: A single word or under_scored label for the Case. Must uniquely identify the case.
• Case study name: A brief description of the case.
• Unit names: a single word with no spaces; use underscores if needed. Ideally, the names are in alphabetical order. For example, A_Property and B_Liability are good names.
• Unit stochastic models. These are described below.
• Aggregate reinsurance limit and attachment point for the first unit. Quantities greater than 1 are interpreted as ¤ amounts; quantities less than or equal to 1 are interpreted as percentiles, and the ¤ limit and attachment are computed from the corresponding percentiles of the unit's stochastic model. In the percentile case the limit is interpreted as the exhaustion probability of the cover.
• The capital standard, entered as a probability level. Solvency II operates at a 0.995 (one in 200 years) level. In the US, rating agencies consider companies at 0.99 (100 years), 0.996 (250 years), 0.999 (1000 years), or higher. Corporate bond default rates impose even tighter capital standards.
• The average cost of capital ι. All pricing methods are calibrated to produce a return on capital of ι at the selected capital standard level. This forces all methods onto a common basis. Different cases can be compared via the implied distortion parameters.
• A series of check boxes to activate options:
• Discrete: check to trigger discrete distribution exhibits. The default assumes continuous distributions. Affects Exhibits XXX.
• Monochrome: check to produce black and white graphs, matching the book format. The default is color.
• ROE fix (default): corrects the calculation with the CCoC fixed return distortion. Uncheck to reproduce the (incorrect) book exhibits exactly. This only impacts Exhibit V in Chapter 15.
• Calibrate blend (default): calibrate the blend to the same premium as the other methods. Uncheck to reproduce the book exhibits, where the blend distortion produces a lower premium.
• Blend extend (default): try to use the extend method to calibrate the blend. If it fails, falls back to the uncalibrated blend, matching the book. Uncheck to use the CCoC calibration method.

## Specifying Parametric Distributions

Parametric distributions are specified as sev DISTNAME MEAN cv CV, where DISTNAME is the distribution name chosen from the list below, MEAN is the expected loss, and CV is the loss coefficient of variation.

Available distributions:

• lognorm: lognormal
• gamma: gamma
• invgamma: inverse gamma

All continuous, one-parameter distributions in scipy.stats are available by name. See below for details on using a Pareto, normal, exponential, or beta distribution.

Example. Entering sev lognorm 10 cv 0.2 produces a lognormal distribution with a mean of 10 and a CV of 0.2.

When executed, a sev specification is converted into full aggregate program form.
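The mean/CV specification maps to the usual lognormal (μ, σ) parameters. As an illustrative sketch (not the library's internal code), the conversion and its round trip can be checked in plain Python:

```python
import math

# Sketch: convert a lognormal (mean, cv) spec, e.g. "sev lognorm 10 cv 0.2",
# into (mu, sigma) parameters, then invert to confirm the round trip.
def lognorm_params(mean, cv):
    sigma2 = math.log(1 + cv**2)        # sigma^2 = ln(1 + cv^2)
    mu = math.log(mean) - sigma2 / 2    # mu = ln(mean) - sigma^2 / 2
    return mu, math.sqrt(sigma2)

mu, sigma = lognorm_params(10, 0.2)
mean_back = math.exp(mu + sigma**2 / 2)        # recovers 10
cv_back = math.sqrt(math.exp(sigma**2) - 1)    # recovers 0.2
```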

## Specifying Discrete Distributions

A discrete distribution is entered as two equal-length row vectors. The first gives the outcomes and the second the probabilities. For example


    0   9    10
    0.5 0.3   0.2


specifies an aggregate with a 0.5 chance of taking the value 0, 0.3 chance of 9, and 0.2 of 10. The layout (horizontal whitespace) is irrelevant.
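The moments of such a discrete specification are easy to verify by hand; here is a quick stdlib check of the example above:

```python
import math

# Moments of the discrete example: outcomes [0, 9, 10], probs [0.5, 0.3, 0.2]
xs = [0, 9, 10]
ps = [0.5, 0.3, 0.2]
assert abs(sum(ps) - 1) < 1e-12                 # probabilities must sum to one
mean = sum(x * p for x, p in zip(xs, ps))       # 4.7
ex2 = sum(x * x * p for x, p in zip(xs, ps))    # second moment
cv = math.sqrt(ex2 - mean**2) / mean            # about 1.003
```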

When executed, a discrete specification is converted into full aggregate program form.

## Specifying Aggregate Distributions

Aggregate distributions are specified using the aggregate language as

    agg [label] [exposure] [limit]? sev [severity] [frequency] 

The words agg and sev are keywords (like if/then/else), [label], [exposure], [limit]? etc. are user inputs, and the limit clause is optional.

For example

    agg Auto 10 claims sev lognorm 10 cv 1.3 poisson 

creates an aggregate with label Auto, an expected claim count of 10, severity sampled from an unlimited lognormal distribution with mean 10 and CV 1.3, and a Poisson frequency distribution. The layer is unlimited because the limit clause is missing. The label must begin with a letter and contain only letters and numbers. It cannot be a language keyword, e.g. agg, port, poisson, fixed.

Exposure can be specified in three ways.


    123 claim[s]
    1000 loss
    1000 premium at 0.7 [lr]?

The first gives the expected claim count; the s on claims is optional. The second gives the expected loss with claim counts derived from average severity. The third gives premium and a loss ratio with counts again derived from severity. The final lr is optional and just used for clarity.
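All three forms reduce to an expected claim count. A hypothetical helper (the name and signature are illustrative, not the library's API) makes the derivation order explicit:

```python
def expected_claims(mean_severity, count=None, loss=None, premium=None, lr=None):
    """Derive the expected claim count from whichever exposure form is given."""
    if count is not None:
        return count                    # claim count given directly
    if loss is None:
        loss = premium * lr             # premium form: derive expected loss first
    return loss / mean_severity        # counts derived from average severity

# with average severity 10:
# 123 claims -> 123; 1000 loss -> 100 claims; 1000 premium at 0.7 lr -> 70 claims
```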

Limits are entered as layer xs attachment or layer x attachment.

Here are four illustrative examples. The line must start with agg (no tabs or spaces first) but afterwards spacing within the spec is ignored and can be used to enhance readability. The newline is needed.


    agg Example1   10  claims  30 xs 0 sev lognorm 10 cv 3.0 fixed
    agg Example2   10  claims 100 xs 0 sev 100 * expon + 10 poisson
    agg Example3 1000  loss    90 x 10 sev gamma 10 cv 6.0 mixed gamma 0.3
    agg Example4 1000  premium at 0.7 lr inf x 50 sev invgamma 20 cv 5.0 binomial 0.4

• Example1: 10 claims from the 30 x 0 layer of a lognormal severity with (unlimited) mean 10 and CV 3.0, using a fixed claim count distribution (i.e. always exactly 10 claims).
• Example2: 10 claims from the 100 x 0 layer of an exponential severity with (unlimited) mean 100 shifted right by 10, using a Poisson claim count distribution. The exponential has no shape parameters; it is just scaled. The mean refers to the unshifted distribution.
• Example3: 1000 expected loss from the 90 x 10 layer of a gamma severity with (unlimited) mean 10 and CV 6.0, using a gamma-mixed Poisson claim count distribution. The mixing distribution has a CV of 0.3. The claim count is derived from the limited severity.
• Example4: 700 of expected loss (1000 premium times 70 percent loss ratio) from an unlimited excess 50 layer of an inverse gamma distribution with mean 20 and CV 5.0, using a binomial distribution with p = 0.4. The n parameter for the binomial is derived from the required claim count.
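The scale-and-shift convention in Example2 can be sanity checked by simulation. The sketch below (plain Python, assuming only the stated mean-100 exponential shifted right by 10) recovers a mean near 110:

```python
import random

# "100 * expon + 10": exponential with mean 100, shifted right by 10,
# so the overall mean should be 100 + 10 = 110.
random.seed(1)                          # deterministic for reproducibility
sample = [100 * random.expovariate(1.0) + 10 for _ in range(200_000)]
approx_mean = sum(sample) / len(sample)  # close to 110
```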

The inverse Gaussian (ig), delaporte, Sichel and other distributions are available as mixing distributions.

The programs page provides a list of different ways to specify an aggregate distribution using the language.

The Aggregate Manual provides more details.

# Chat and Gloss for Examples of aggregate programs

The aggregate language is a powerful and flexible way to specify a frequency and severity aggregate loss distribution. This page contains a list of examples.

## Specifying Losses

Total losses can be specified via

• Stated expected loss and severity (claim count derived)
• Premium and loss ratio and severity (expected loss and claim count derived)
• Claim count times severity (expected loss derived)

In each case, the expected severity is computed automatically.

Fixed frequency and severity:

    agg Example

Fixed frequency, lognormal severity specified with mean and CV:

    agg Example

## Limit and Attachment Examples

Fixed frequency and severity:

    agg Example

Fixed frequency, lognormal severity specified with mean and CV:

    agg Example

## Esoteric Examples


    agg Example1   10  claims  30 xs 0 sev lognorm 10 cv 3.0 fixed
    agg Example2   10  claims 100 xs 0 sev 100 * expon + 10 poisson
    agg Example3 1000  loss    90 x 10 sev gamma 10 cv 6.0 mixed gamma 0.3
    agg Example4 1000  premium at 0.7 lr inf x 50 sev invgamma 20 cv 5.0 binomial 0.4

    agg WindB  0.8 claim sev dhistogram xps [11, 23, 101] [.5, .4, .1] bernoulli

which specifies a discrete severity with a 0.5 chance of a loss of 11, 0.4 of 23, and 0.1 of 101. The Bernoulli frequency is a binomial distribution with parameters n = 1, p = 0.8. (With a Poisson frequency the aggregate density would be more complex, reflecting the chance of multiple cat losses.) Note the loss amounts have been chosen as prime numbers to differentiate the kinds of loss.

Exposures can be specified in three ways.

    exposures :: 123 claim[s]
               | 1000 loss
               | 1000 premium at 0.7 [lr]?

The first defines an expected claim count of 123. The second the expected loss of 1000, where the claim count is derived from the average severity. The third specifies premium of 1000 and a loss ratio of 70%. Again, claim counts are derived from severity. The final lr is for clarity and is optional. The pipe (|) notation indicates alternative choices.

Limits are entered as layer xs attachment, with xs optionally abbreviated to x. The examples below show severity limited to 30, limited to 100, an excess layer 90 x 10, and an unlimited layer inf x 50.

    agg Auto 10 claims  30 xs 0 sev lognorm 10 cv 1.3 poisson
    agg Auto 10 claims 100 xs 0 sev lognorm 10 cv 1.3 poisson
    agg Auto 10 claims  90 x 10 sev lognorm 10 cv 1.3 poisson
    agg Auto 10 claims inf x 50 sev lognorm 10 cv 1.3 poisson

The severity distribution is specified by name. Any scipy.stats continuous distribution with one shape parameter can be used, including the gamma, lognormal, Pareto, Weibull, etc. The exponential and normal variables, with no shape parameters, and the beta, with two shape parameters, are also available. Most distributions can be entered via mean and CV, or specified by their shape parameters and then scaled and shifted using the standard scipy.stats scale and loc notation. Finally, dhistogram and chistogram can be used to create discrete (point mass) and continuous (ogive) empirical distributions. Here are some examples.

| Code | Distribution | Meaning |
|------|--------------|---------|
| sev lognorm 10 cv 3 | lognormal | mean 10, cv 3 |
| sev 10 * lognorm 1.75 | lognormal | 10X, X lognormal(μ = 0, σ = 1.75) |
| sev 10 * lognorm 1.75 + 20 | lognormal | 10X + 20 |
| sev 10 * lognorm 1 cv 3 + 50 | lognormal | 10Y + 50, Y lognormal mean 1, cv 3 |
| sev 100 * pareto 1.3 - 100 | Pareto | survival function (100/(100+x))^1.3 |
| sev 50 * normal + 100 | normal | mean 100, std dev 50 |
| sev 5 * expon | exponential | mean 5 |
| sev 5 * uniform + 1 | uniform | uniform between 1 and 6 |
| sev 50 * beta 2 3 | beta | 50Z, Z beta with parameters 2, 3 |
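A couple of rows of the severity table above can be checked arithmetically. Scaling by a and shifting by b maps the mean to a·mean + b but leaves the standard deviation at a·sd, so a shift changes the CV. A stdlib sketch (illustrative only):

```python
import math

def scale_shift(mean, cv, scale=1.0, loc=0.0):
    """Mean and CV of scale*X + loc, given the mean and CV of X (scale > 0)."""
    m = scale * mean + loc
    sd = scale * mean * cv              # shifting does not change the sd
    return m, sd / m

# "sev 10 * lognorm 1 cv 3 + 50": mean 10*1 + 50 = 60, cv 30/60 = 0.5
m, c = scale_shift(1, 3, scale=10, loc=50)

# "sev 10 * lognorm 1.75": X lognormal(mu=0, sigma=1.75), mean = 10*exp(sigma^2/2)
m2 = 10 * math.exp(1.75**2 / 2)         # about 46.24
```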

The frequency is specified as follows. The expected claim count is n.

| Code | Meaning |
|------|---------|
| fixed | fixed n claims, degenerate distribution |
| poisson | Poisson with mean n |
| bernoulli | Bernoulli with p = n |
| binomial 0.3 | binomial with p = 0.3; note the mean is not n in this case |
| mixed ID 0.3 [0.1]? | mixed Poisson; the first parameter is always the mixing CV, the second varies with the type |

The mixing distribution can be gamma for a negative binomial, inverse Gaussian (ig), Delaporte, Sichel etc.
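For gamma mixing, the resulting claim count is negative binomial. A short sketch of the moment arithmetic: the mixing distribution has mean 1 and CV c, so the mixed Poisson variance is n + (cn)², from which the implied negative binomial (r, p) parameters follow.

```python
# Mixed Poisson moments: frequency mean n, mixing distribution mean 1, cv c.
def mixed_poisson_var(n, c):
    return n + (c * n) ** 2             # Poisson part + mixing part

n, c = 10, 0.65                         # e.g. 10 claims with "mixed gamma 0.65"
var = mixed_poisson_var(n, c)           # 10 + 6.5^2 = 52.25
p = n / var                             # implied negative binomial p
r = n * p / (1 - p)                     # implied negative binomial r (shape)
# consistency: nb mean r(1-p)/p equals n and nb variance n/p equals var
```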

#### Limit Profiles

The exposure variables can be vectors to express a limit profile. All exp_[en|prem|loss|count] related elements are broadcast against one another. For example

    [100 200 400 100] premium at 0.65 lr [1000 2000 5000 10000] xs 1000

expresses a limit profile with 100 of premium at 1000 x 1000; 200 at 2000 x 1000; 400 at 5000 x 1000; and 100 at 10000 x 1000. In this case all the loss ratios are the same, but they could vary, as could the attachments.
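The broadcasting can be mimicked in plain Python by pairing the vectors elementwise, with scalars (the loss ratio and attachment here) applying to every component. A hypothetical illustration:

```python
# Limit profile components implied by
# "[100 200 400 100] premium at 0.65 lr [1000 2000 5000 10000] xs 1000"
premium = [100, 200, 400, 100]
limit = [1000, 2000, 5000, 10000]
lr, attach = 0.65, 1000                 # scalars broadcast to every component

# each row: (premium, expected loss, limit, attachment)
rows = [(p, p * lr, l, attach) for p, l in zip(premium, limit)]
```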

#### Mixtures

The severity variables can be vectors to express a mixed severity. All sev_ elements are broadcast against one another. For example

    sev lognorm 1000 cv [0.75 1.0 1.25 1.5 2] wts [0.4, 0.2, 0.1, 0.1, 0.1]

expresses a mixture of five lognormals, each with mean 1000 and the indicated CVs, with weights 0.4, 0.2, 0.1, 0.1, 0.1. Equal weights can be expressed as wts=5, i.e. the number of components.

#### Limit Profiles and Mixtures

Limit profiles and mixtures can be combined. Each mixed severity is applied to each limit profile component. For example

    ag = uw('agg multiExp [10 20 30] claims [100 200 75] xs [0 50 75] '
            'sev lognorm 100 cv [1 2] wts [.6 .4] mixed gamma 0.4')

creates an aggregate with six severity subcomponents.

| # | Limit | Attachment | Claims |
|---|-------|------------|--------|
| 0 | 100 | 0 | 6 |
| 1 | 100 | 0 | 4 |
| 2 | 200 | 50 | 12 |
| 3 | 200 | 50 | 8 |
| 4 | 75 | 75 | 18 |
| 5 | 75 | 75 | 12 |
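The six subcomponents are the product of the three limit components with the two severity weights, claims split by weight. A stdlib sketch reproducing the table (the variable names are illustrative):

```python
from itertools import product

claims = [10, 20, 30]                      # exposure vector
layers = [(100, 0), (200, 50), (75, 75)]   # (limit, attachment) components
wts = [0.6, 0.4]                           # severity mixture weights

# each row: (limit, attachment, claims); claims split 6/4, 12/8, 18/12
rows = [(lim, att, n * w)
        for (n, (lim, att)), w in product(list(zip(claims, layers)), wts)]
```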

#### Circumventing Products

It is sometimes desirable to enter two or more lines, each with a different severity, but with a shared mixing variable. For example, modeling the current accident year and a run-off reserve, where the current year is gamma with mean 100 and cv 1 and the reserves are a larger lognormal with mean 150 and cv 0.5, requires

    agg MixedPremReserve [100 200] claims \
        sev [gamma lognorm] [100 150] cv [1 0.5] \
        mixed gamma 0.4

so that the result is not the four-way exposure / severity product but just a two-way combination. The two cases are distinguished by looking at the total severity weights. If the weights sum to one, the result is an exposure / severity product. If the weights are missing or sum to the number of severity components (i.e. are all equal to 1), the result is a row by row combination.
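The weight-sum rule can be sketched as a small predicate (a hypothetical helper, not the library's API):

```python
def combination_mode(wts):
    """Distinguish exposure/severity product from row-by-row pairing (sketch)."""
    total = sum(wts)
    if abs(total - 1) < 1e-9:
        return "product"                # weights sum to 1
    if abs(total - len(wts)) < 1e-9:
        return "row-by-row"             # weights missing or all equal to 1
    return "ambiguous"

# e.g. wts [.6 .4] -> "product"; wts [1 1] -> "row-by-row"
```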

#### Determining Expected Claim Count

Variables are used in the following order to determine overall expected losses.

• If the claim count is given, it is used directly.
• If loss is given, the claim count is derived from the average severity.
• If prem[ium] and [at] 0.7 lr are given, the expected loss is derived first and then the claim count from severity. In this case the loss ratio is also computed.

Note that the claim count is conditional on a loss to the layer, although the severity can have a mass at zero. X is the ground-up severity, so X ∣ X > attachment is used to generate the n claims.

### Unconditional Severity

The severity distribution is conditional on a loss to the layer. For an excess layer y xs a, the severity has distribution X ∣ X > a, where X is the specified severity. For a ground-up layer there is no adjustment.

The default behavior can be overridden by adding ! after the severity distribution. For example

    agg Conditional   1 claim 10 x 10 sev lognorm 10 cv 1   fixed
    agg Unconditional 1 claim 10 x 10 sev lognorm 10 cv 1 ! fixed

produces conditional and unconditional samples from an excess layer of a lognormal. The latter includes an approximately 0.66 chance of a claim of zero, corresponding to X ≤ 10 below the attachment.
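The 0.66 figure can be checked with the lognormal CDF. A stdlib sketch using math.erf (scipy.stats.lognorm gives the same answer):

```python
import math

def lognorm_cdf(x, mean, cv):
    """CDF of a lognormal parameterized by mean and cv."""
    sigma2 = math.log(1 + cv**2)
    mu = math.log(mean) - sigma2 / 2
    z = (math.log(x) - mu) / math.sqrt(sigma2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF

p_below = lognorm_cdf(10, mean=10, cv=1)   # P(X <= 10), about 0.661
```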

### Example Aggregate Programs

Below are a series of programs illustrating the different ways exposure, frequency, and severity can be broadcast together, the different types of severity, and the different types of frequency.

Throughout the programs a backslash is a newline continuation.

        # use to create sev and aggs so can illustrate use of sev. and agg. below
# simple severity curve, lognormal mean 10 cv 0.3
sev sev1 lognorm 10 cv .3
# severity curve, 5 * lognormal(sigma=0.3) + 10, so mean 10 + 5*exp(0.3^2/2)
sev sev2 5 * lognorm 0.3 + 10
# standard Pareto alpha = 1.5 and scale = 10; supported on x > 0
sev par1 10 * pareto 1.5 - 10

# aggregate with fixed count, same as the severity
agg Agg0 1 claim sev lognorm 10 cv .09 fixed

agg Agg1  1 claim sev {10*np.exp(-.3**2/2)} * lognorm .3      fixed note{sigma=.3 mean=10}
agg Agg2  1 claim sev {10*np.exp(-.3**2/2)} * lognorm .3 + 5  fixed note{shifted right by 5}
agg Agg3  1 claim sev 10 * lognorm 0.5 cv .3                  fixed note{mean 0.5 scaled by 10 and cv 0.3}
agg Agg4  1 claim sev 10 * lognorm 1 cv .5 + 5                fixed note{shifted right by 5}
agg Agg5  1 claim sev 10 * gamma .3
agg Agg6  1 claim sev 10 * gamma 1 cv .3 + 5                  fixed note{mean 10 x 1, cv 0.3 shifted right by 5}
agg Agg7  1 claim sev 2 * pareto 1.6 - 2                      fixed note{pareto alpha=1.6 lambda=2}
agg Agg8  1 claim sev 2 * uniform 5 + 2.5                     fixed note{uniform 2.5 to 12.5}
agg Agg9  1 claim 10 x  2 sev lognorm 20 cv 1.5               fixed note{10 x 2 layer, 1 claim}
agg Agg10 10 loss 10 xs 2 sev lognorm 20 cv 1.5               fixed note{10 x 2 layer, total loss 10, derives frequency}
agg Agg11 14 prem at .7    10 x 1 sev lognorm 20 cv 1.5       fixed note{14 prem at .7 lr derive frequency}
agg Agg11 14 prem at .7 lr 10 x 1 sev lognorm 20 cv 1.5       fixed note{14 prem at .7 lr derive frequency, lr is optional}
agg Agg12: 14 prem at .7 lr (10 x 1) sev (lognorm 20 cv 1.5)  fixed note{trailing semi and other punct ignored};
agg Agg13: 1 claim sev 50 * beta 3 2 + 10 fixed note{scaled and shifted beta, two parameter distribution}
agg Agg14: 1 claim sev 100 * expon + 10   fixed note{exponential single parameter, needs scale, optional shift}
agg Agg15: 1 claim sev 10 * norm + 50     fixed note{normal is single parameter too, needs scale, optional shift}

# any scipy.stats distribution taking one shape parameter can be used; only continuous variables supported on R+ make sense
agg Agg16: 1 claim sev 1 * invgamma 4.07 fixed note{inverse gamma distribution}

# mixtures
agg MixedLine1: 1 claim 25 xs 0 sev lognorm 10 cv [0.2, 0.4, 0.6, 0.8, 1.0] wts=5 fixed note{equally weighted mixture of 5 lognormals different cvs}
agg MixedLine2: 1 claim 25 xs 0 sev lognorm [10, 15, 20, 25, 50] cv [0.2, 0.4, 0.6, 0.8, 1.0] wts=5 fixed \
note{equal weighted mixture of 5 lognormals different cvs and means}

agg MixedLine3: 1 claim 25 xs 0 sev lognorm 10 cv [0.2, 0.4, 0.6, 0.8, 1.0] wt [.2, .3, .3, .15, .05] fixed \
note{weights scaled to equal 1 if input}

# limit profile
agg LimitProfile1: 1 claim [1, 5, 10, 20] xs 0 sev lognorm 10 cv 1.2 wt [.50, .20, .20, .1]   fixed
agg LimitProfile2: 5 claim            20  xs 0 sev lognorm 10 cv 1.2 wt [.50, .20, .20, .1]   fixed
agg LimitProfile3: [10 10 10 10] claims [inf 10 inf 10] xs [0 0 5 5] sev lognorm 10 cv 1.25   fixed

# limits and distribution blend
agg Blend1 50  claims [5 10 15] x 0         sev lognorm 12 cv [1, 1.5, 3]          fixed \
note{options all broadcast against one another, 50 claims of each}

agg Blend2 50  claims [5 10 15] x 0         sev lognorm 12 cv [1, 1.5, 3] wt=3     fixed \
note{options all broadcast against one another, 50 claims of each}

agg Blend5cv1  50 claims  5 x 0 sev lognorm 12 cv 1 fixed
agg Blend10cv1 50 claims 10 x 0 sev lognorm 12 cv 1 fixed
agg Blend15cv1 50 claims 15 x 0 sev lognorm 12 cv 1 fixed

agg Blend5cv15  50 claims  5 x 0 sev lognorm 12 cv 1.5 fixed
agg Blend10cv15 50 claims 10 x 0 sev lognorm 12 cv 1.5 fixed
agg Blend15cv15 50 claims 15 x 0 sev lognorm 12 cv 1.5 fixed

# semi colon can be used for newline and backslash works
agg Blend5cv3  50 claims  5 x 0 sev lognorm 12 cv 3 fixed; agg Blend10cv3 50 claims 10 x 0 sev lognorm 12 cv 3 fixed
agg Blend15cv3 50 claims 15 x 0 sev \
lognorm 12 cv 3 fixed

agg LimitProfile4: [10 30 15 5] claims [inf 10 inf 10] xs [0 0 5 5] sev lognorm 10 cv [1.0, 1.25, 1.5] wts=3  fixed \
note{input counts directly}

# the logo
agg logo 1 claim {np.linspace(10, 250, 20)} xs 0 sev lognorm 100 cv 1 fixed

# empirical distributions
agg dHist1 1 claim sev dhistogram xps [1, 10, 40] [.5, .3, .2] fixed     note{discrete histogram}

agg cHist1 1 claim sev chistogram xps [1, 10, 40] [.5, .3, .2] fixed     note{continuous histogram, guessed right hand endpoint}

agg cHist2 1 claim sev chistogram xps [1 10 40 45] [.5 .3 .2]  fixed     note{continuous histogram, explicit right hand endpoint, don't need commas}

agg BodoffWind  1 claim sev dhistogram xps [0,  99] [0.80, 0.20] fixed   note{examples from Bodoff's paper}

agg BodoffQuake 1 claim sev dhistogram xps [0, 100] [0.95, 0.05] fixed

# set up fixed sev for future use
sev One dhistogram xps [1] [1]   note{a certain loss of 1}

# frequency options
agg FreqFixed      10 claims sev sev.One fixed
agg FreqPoisson    10 claims sev sev.One poisson                 note{Poisson frequency}
agg FreqBernoulli  .8 claims sev sev.One bernoulli               note{Bernoulli, p equals the expected claim count}
agg FreqBinomial   10 claims sev sev.One binomial 0.5
agg FreqPascal     10 claims sev sev.One pascal .8 3

# mixed freqs
agg FreqNegBin     10 claims sev sev.One (mixed gamma 0.65)   note{gamma mixed Poisson = negative binomial}
agg FreqDelaporte  10 claims sev sev.One mixed delaporte .65 .25
agg FreqIG         10 claims sev sev.One mixed ig  .65
agg FreqSichel     10 claims sev sev.One mixed delaporte .65 -0.25
agg FreqSichel.gamma  10 claims sev sev.One mixed sichel.gamma .65 .25
agg FreqSichel.ig     10 claims sev sev.One mixed sichel.ig  .65 .25
agg FreqBeta       10 claims sev sev.One mixed beta .5  4  note{second param is max mix}

# portfolio examples
port Complex_Portfolio_Mixed
agg LineA  50  claims           sev lognorm 12 cv [2, 3, 4] wt [.3 .5 .2] mixed gamma 0.4
agg LineB  24  claims 10 x 5    sev lognorm 12 cv [{', '.join([str(i) for i in np.linspace(2,5, 20)])}] \
wt=20 mixed gamma 0.35
agg LineC 124  claims 120 x 5   sev lognorm 16 cv 3.4                     mixed gamma 0.45

port Complex_Portfolio
agg Line3  50  claims [5 10 15] x 0         sev lognorm 12 cv [1, 2, 3]        mixed gamma 0.25
agg Line9  24  claims [5 10 15] x 5         sev lognorm 12 cv [1, 2, 3] wt=3   mixed gamma 0.25

port Portfolio_2
agg CA 500 prem at .5 lr 15 x 12  sev gamma 12 cv [2 3 4] wt [.3 .5 .2] mixed gamma 0.4
agg FL 1.7 claims 100 x 5         sev 10000 * pareto 1.3 - 10000        poisson
agg IL 1e-8 * agg.CMP
agg OH agg.CMP * 1e-8
agg NY 500 prem at .5 lr 15 x 12  sev [20 30 40 10] * gamma [9 10 11 12] cv [1 2 3 4] wt =4 mixed gamma 0.4

# miscellaneous () and : are ignored so they can be used to improve readability
port MyFirstPortfolio
agg A1: 50  claims          sev gamma 12 cv .30 (mixed gamma 0.014)
agg A2: 50  claims 30 xs 10 sev gamma 12 cv .30 (mixed gamma 0.014)
agg A3: 50  claims          sev gamma 12 cv 1.30 (mixed gamma 0.014)
agg A4: 50  claims 30 xs 20 sev gamma 12 cv 1.30 (mixed gamma 0.14)
agg B 15 claims 15 xs 15 sev lognorm 12 cv 1.5 + 2 mixed gamma 4.8
agg Cat 1.7 claims 25 xs 5  sev 25 * pareto 1.3 - 25 poisson
agg ppa: 1e-8 * agg.PPAL

port distortionTest
agg mix    50 claims              [50, 100, 150, 200] xs 0  sev lognorm 12 cv [1,2,3,4]    poisson
agg low    500 premium at 0.5     5 xs 5                    sev gamma 12 cv .30            mixed gamma 0.2
agg med    500 premium at 0.5 lr  15 xs 10                  sev gamma 12 cv .30            mixed gamma 0.4
agg xsa    50  claims             30 xs 10                  sev gamma 12 cv .30            mixed gamma 1.2
agg hcmp   1e-8 * agg.CMP
agg ihmp   agg.PPAL * 1e-8

### Bodoff’s Examples

We now show the definitions above reproduce Bodoff's "Thought experiment 1". He considers a situation with two losses, wind W and earthquake Q, where W and Q are independent, W takes the value 99 with probability 20% and otherwise zero, and Q takes the value 100 with probability 5% and otherwise zero. Total losses are Y = W + Q. There are four possibilities, as shown in the table below.

Bodoff Thought Experiment 1:

| Event | Probability |
|-------|-------------|
| No loss | 0.76 |
| W = 99 | 0.19 |
| Q = 100 | 0.04 |
| W = 99, Q = 100 | 0.01 |
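The four probabilities follow directly from independence; a quick stdlib check:

```python
from itertools import product

wind = {0: 0.80, 99: 0.20}              # W: 99 with prob 0.20, else 0
quake = {0: 0.95, 100: 0.05}            # Q: 100 with prob 0.05, else 0

# joint distribution of (W, Q) under independence
joint = {(w, q): pw * pq
         for (w, pw), (q, pq) in product(wind.items(), quake.items())}
# no loss 0.76, W only 0.19, Q only 0.04, both 0.01
```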

#### Bodoff’s Examples in Aggregate

Here are the Aggregate programs for the examples Bodoff considers.

    port BODOFF1 note{Bodoff Thought Experiment No. 1}
    agg wind  1 claim sev dhistogram xps [0,  99] [0.80, 0.20] fixed
    agg quake 1 claim sev dhistogram xps [0, 100] [0.95, 0.05] fixed

    port BODOFF2 note{Bodoff Thought Experiment No. 2}
    agg wind  1 claim sev dhistogram xps [0,  50] [0.80, 0.20] fixed
    agg quake 1 claim sev dhistogram xps [0, 100] [0.95, 0.05] fixed

    port BODOFF3 note{Bodoff Thought Experiment No. 3}
    agg wind  1 claim sev dhistogram xps [0,   5] [0.80, 0.20] fixed
    agg quake 1 claim sev dhistogram xps [0, 100] [0.95, 0.05] fixed

    port BODOFF4 note{Bodoff Thought Experiment No. 4 (check!)}
    agg a 0.25 claims sev   4 * expon poisson
    agg b 0.05 claims sev  20 * expon poisson
    agg c 0.05 claims sev 100 * expon poisson