Collapsed importance sampling
In the case of full particles for importance sampling, we generated particles from a proposal distribution and then, to compensate for the difference between the proposal and the target, associated a weight with each particle. Similarly, in the case of collapsed particles, we will generate particles only for the sampled subset of variables, X_p, getting the following dataset:
![](https://static.packt-cdn.com/products/9781784394684/graphics/B04016_04_457.jpg)
Here, each sample x_p[m] is generated from the proposal distribution Q. Now, using this set of particles, we want to find the expectation of a function f(X_p, X_d) relative to the posterior distribution P(X_p, X_d | e):
![](https://static.packt-cdn.com/products/9781784394684/graphics/B04016_04_461.jpg)
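To make the estimator concrete, here is a minimal sketch in plain Python on a hypothetical three-node chain A → B → C with binary variables (this toy network and its CPD values are assumptions for illustration, not the book's model): A plays the role of the sampled set X_p, B is the collapsed set X_d and is summed out analytically, and the query is P(B = 1 | C = 1).

```python
import random

# Hypothetical toy network A -> B -> C, all variables binary.
p_a = {0: 0.6, 1: 0.4}                        # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},           # P(B | A)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},           # P(C | B)
               1: {0: 0.4, 1: 0.6}}

def collapsed_importance_estimate(n_particles=20000, seed=0):
    """Estimate P(B=1 | C=1) with A as the particle set X_p and
    B collapsed: for each sampled a, B is summed out analytically."""
    rng = random.Random(seed)
    q = {0: 0.5, 1: 0.5}                      # proposal Q(A): uniform
    num = den = 0.0
    for _ in range(n_particles):
        a = 0 if rng.random() < q[0] else 1   # sample a particle from Q
        # Unnormalized P(a, C=1): sum B out of the joint.
        p_ac = p_a[a] * sum(p_b_given_a[a][b] * p_c_given_b[b][1]
                            for b in (0, 1))
        w = p_ac / q[a]                       # importance weight
        # Collapsed expectation E[1{B=1} | a, C=1] = P(B=1 | a, C=1).
        p_b1 = (p_b_given_a[a][1] * p_c_given_b[1][1]
                / sum(p_b_given_a[a][b] * p_c_given_b[b][1]
                      for b in (0, 1)))
        num += w * p_b1
        den += w
    return num / den
```

With these CPDs the exact answer is P(B=1, C=1) / P(C=1) = 0.30 / 0.35, and the estimate converges to it; because B is handled exactly, only the sampling of A contributes variance.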
![](https://static.packt-cdn.com/products/9781784394684/graphics/B04016_04_22.jpg)
Fig 4.22: The late-for-school model
Let's take an example using the late-for-school model, as shown in Fig 4.22. Let's consider that we have some evidence e on the network, and that we partition the variables into a sampled set X_p and a collapsed set X_d. So, we will generate particles over the variables in X_p. Also, each such particle x_p[m] is associated with the conditional distribution P(X_d | x_p[m], e). Now, assuming some query over a variable Y in X_d (say P(Y = y | e)), our indicator function will be 1{Y = y}, and its expectation for a single particle is simply P(Y = y | x_p[m], e). We will now evaluate this for each particle:
![](https://static.packt-cdn.com/products/9781784394684/graphics/B04016_04_469.jpg)
After this, we will compute the weighted average of these probabilities, weighting each particle by its importance weight w[m] = P̃(x_p[m], e) / Q(x_p[m]) and normalizing by the sum of the weights, to get our estimate of the query...
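This final averaging step can be sketched as follows; the particle weights and per-particle expectations below are hypothetical numbers standing in for w[m] and P(Y = y | x_p[m], e):

```python
def weighted_average(weights, expectations):
    """Normalized importance-sampling estimate:
    sum_m w[m] * E_m  /  sum_m w[m]."""
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, expectations)) / total

# Three hypothetical particles: weights and collapsed expectations.
estimate = weighted_average([0.3, 0.4, 0.3], [0.72, 0.96, 0.72])
```

Normalizing by the sum of the weights keeps the estimate valid even when the weights are computed from an unnormalized distribution P̃.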