# Progress (5/10): Busy


Metropolis-Hastings: I still can't really say I understand it. Of the things I've learned, there probably isn't much I could properly explain to someone else, and I think you can only claim to have understood something once you've explained it to another person. I'm scribbling things down on this blog, but that doesn't mean I properly understand them.

This Lidar is really cheap… It might be just right for trying out SLAM.

* Coursera

#### Metropolis-Hastings

Sometimes Gibbs samples are too correlated.
###### Apply rejection sampling to Markov Chains

For $k = 1, 2, \dots$:

$$\text{Sample } x' \text{ from a "wrong" distribution } Q(x^k \rightarrow x')$$

In the step above, $Q$ produces uncorrelated samples quickly, but it does not have to have anything to do with the desired distribution.

$$\text{Accept the proposal } x' \text{ with probability } A(x^k \rightarrow x')$$

$$\text{Otherwise stay at } x^k: \quad x^{k+1} = x^k$$

$$T(x\rightarrow x')=Q(x\rightarrow x')A(x\rightarrow x')\quad \text{for all } x \neq x'$$

$$T(x\rightarrow x)=Q(x\rightarrow x)A(x\rightarrow x)+\sum_{x'\neq x}Q(x\rightarrow x')\bigl(1-A(x\rightarrow x')\bigr)$$

How to choose $A$: we want $\pi$ to be the stationary distribution of the chain,

$$\pi (x')=\sum_x\pi(x)T(x\rightarrow x')$$

###### Detailed Balance Equation

This gives a sufficient condition for $\pi$ to be the stationary distribution:

$$\pi(x)T(x\rightarrow x')=\pi(x')T(x'\rightarrow x)\quad\text{for all } x, x'$$

It says that if we start from the distribution $\pi$ and make one step, the probability of starting at $x$ and moving to $x'$ is the same as that of the reverse move. When this equation holds, $\pi(x)$ is a stationary distribution: summing both sides over $x$ gives exactly $\pi(x')=\sum_x\pi(x)T(x\rightarrow x')$.
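As a sanity check, detailed balance can be verified numerically on a tiny discrete chain. This is a sketch of my own (not from the lecture), assuming NumPy; it builds the transition kernel $T$ from a uniform proposal $Q$ and the min-ratio acceptance rule written out below, using a hypothetical unnormalized target `pi_hat`:

```python
import numpy as np

# pi_hat is a hypothetical unnormalized target; Q is a uniform proposal.
pi_hat = np.array([1.0, 2.0, 3.0])
n = len(pi_hat)
Q = np.full((n, n), 1.0 / n)

# Acceptance: A(x -> x') = min(1, pi_hat(x')Q(x'->x) / (pi_hat(x)Q(x->x')))
A = np.minimum(1.0, (pi_hat[None, :] * Q.T) / (pi_hat[:, None] * Q))

# Transition kernel: T(x -> x') = Q(x -> x')A(x -> x') off the diagonal,
# and the leftover "stay at x" probability on the diagonal.
T = Q * A
np.fill_diagonal(T, 0.0)
np.fill_diagonal(T, 1.0 - T.sum(axis=1))

pi = pi_hat / pi_hat.sum()
balance = pi[:, None] * T                 # pi(x) T(x -> x')
assert np.allclose(balance, balance.T)    # detailed balance holds
assert np.allclose(pi @ T, pi)            # hence pi is stationary
print("detailed balance holds")
```

Note that only the unnormalized `pi_hat` enters the acceptance ratio; the normalized `pi` is computed here purely to state the check.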

###### How to choose the critic A

$$A(x\rightarrow x')=\min\left(1, \frac{\hat\pi(x')Q(x'\rightarrow x)}{\hat\pi(x)Q(x\rightarrow x')}\right)$$

In this equation, we don't have to know the exact distribution $\pi$, because only the ratio $\hat\pi(x')/\hat\pi(x)$ appears: the normalization constant, which is usually difficult to compute, cancels out.
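To make this concrete, here is a minimal random-walk Metropolis-Hastings sketch of my own (assuming NumPy; the target and step size are arbitrary choices). It only ever evaluates the *unnormalized* log density, illustrating that the normalization constant is never needed; with a symmetric Gaussian proposal, the $Q$ terms in the ratio also cancel:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi_hat(x):
    # Hypothetical unnormalized target: a standard Gaussian with its
    # normalization constant deliberately dropped.
    return -0.5 * x**2

def metropolis_hastings(x0, n_samples, step=1.0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal Q.
    Because Q is symmetric, Q(x'->x)/Q(x->x') cancels in the acceptance ratio."""
    x = x0
    samples = np.empty(n_samples)
    for k in range(n_samples):
        x_new = x + step * rng.standard_normal()       # sample x' from Q(x^k -> x')
        log_ratio = log_pi_hat(x_new) - log_pi_hat(x)  # only pi_hat(x')/pi_hat(x^k) is needed
        if np.log(rng.random()) < log_ratio:           # accept with probability min(1, ratio)
            x = x_new
        samples[k] = x                                 # otherwise stay: x^{k+1} = x^k
    return samples

samples = metropolis_hastings(x0=0.0, n_samples=50_000)
print(samples.mean(), samples.std())  # roughly 0 and 1 for the standard Gaussian
```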

We could also accept with a smaller probability, say 1/4 or 1/3 instead of 1. This would still work, but it is less efficient: $$A(x\rightarrow x')=\rho$$

This $\rho$ is the probability that we accept the move; if we decrease it, we necessarily reject more moves. The $\min$ formula above is the maximal choice we can make while preserving detailed balance. Be careful: this applies when the ratio is smaller than 1; when it is greater than 1, $A(x\rightarrow x')$ should simply be 1.

###### Choice of Q

Opposing forces in the choice of $Q$:

* $Q$ should be spread out, to improve mixing and reduce correlation between samples
* But for a spread-out $Q$ the acceptance probability is often low
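This trade-off is easy to see empirically. A small sketch of my own (assuming NumPy; the Gaussian target and the step sizes are arbitrary choices) measures the acceptance rate of random-walk Metropolis-Hastings as the proposal spread grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def acceptance_rate(step, n=20_000):
    """Fraction of accepted random-walk MH proposals on an unnormalized
    standard Gaussian target, for a given proposal spread `step`."""
    x, accepted = 0.0, 0
    for _ in range(n):
        x_new = x + step * rng.standard_normal()
        if np.log(rng.random()) < 0.5 * (x**2 - x_new**2):  # log pi_hat ratio
            x, accepted = x_new, accepted + 1
    return accepted / n

for step in (0.1, 1.0, 10.0):
    print(f"step={step:5}: acceptance rate = {acceptance_rate(step):.2f}")
```

A tiny step accepts nearly everything but produces highly correlated samples; a very wide step decorrelates proposals but rejects most of them, so the chain stays put.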

##### Summary

Rejection sampling applied to Markov Chains

Pros:

* You can choose among a family of Markov chains
* Works for unnormalized densities
* Easy to implement

Cons:

* Samples are still correlated
* You have to choose among a family of Markov chains