# Markov Chain Monte Carlo (MCMC) Sampling

## 1. Sampling from Random Variables

### Example

```matlab
%% Explore the Normal distribution N( mu , sigma )
mu = 100;    % the mean
sigma = 15;  % the standard deviation
xmin = 70;   % minimum x value for pdf and cdf plot
xmax = 130;  % maximum x value for pdf and cdf plot
n = 100;     % number of points on pdf and cdf plot
k = 10000;   % number of random draws for histogram

% create a set of values ranging from xmin to xmax
x = linspace( xmin , xmax , n );
p = normpdf( x , mu , sigma ); % calculate the pdf
c = normcdf( x , mu , sigma ); % calculate the cdf

figure( 1 ); clf; % create a new figure and clear the contents

subplot( 1,3,1 );
plot( x , p , 'k-' );
xlabel( 'x' ); ylabel( 'pdf' );
title( 'Probability Density Function' );

subplot( 1,3,2 );
plot( x , c , 'k-' );
xlabel( 'x' ); ylabel( 'cdf' );
title( 'Cumulative Density Function' );

% draw k random numbers from a N( mu , sigma ) distribution
y = normrnd( mu , sigma , k , 1 );

subplot( 1,3,3 );
hist( y , 20 );
xlabel( 'x' ); ylabel( 'frequency' );
title( 'Histogram of random values' );
```


Listing 1.1: Matlab code to visualize Normal distribution.

## 1.2 Sampling from Non-Standard Distributions

### 1.2.1 Inverse Transform Sampling with Discrete Variables

```matlab
% Simulate the distribution observed in the
% human random digit generation task

% probabilities for each digit
theta = [0.000; ... % digit 0
         0.100; ... % digit 1
         0.090; ... % digit 2
         0.095; ... % digit 3
         0.200; ... % digit 4
         0.175; ... % digit 5
         0.190; ... % digit 6
         0.050; ... % digit 7
         0.100; ... % digit 8
         0.000 ];   % digit 9

% fix the random number generator
seed = 1; rand( 'state' , seed );

% let's say we draw K random values
K = 10000;
digitset = 0:9;
Y = randsample( digitset , K , true , theta );

% create a new figure
figure( 1 ); clf;

% Show the histogram of the simulated draws
counts = hist( Y , digitset );
bar( digitset , counts , 'k' );
xlim( [ -0.5 9.5 ] );
xlabel( 'Digit' );
ylabel( 'Frequency' );
title( 'Distribution of simulated draws of human digit generator' );
```


Listing 1.2: Matlab code to simulate sampling of random digits.

Figure 1.2.1: Illustration of the inverse transform procedure for generating discrete random variables. Note that we plot the cumulative probabilities for each outcome. If we sample a uniform random number U = 0.8, this yields the random value X = 6.

### 1.2.2 Inverse Transform Sampling with Continuous Variables

To sample from a continuous distribution with cumulative distribution function $F$:

1. Draw $U \sim Uniform\left( {0,1} \right)$
2. Set $X = {F^{ - 1}}\left( U \right)$
3. Repeat

For example, for the exponential distribution with rate $\lambda$, the cdf is $F\left( x \right) = 1 - {e^{ - \lambda x}}$, so the procedure becomes:

1. Draw $U \sim Uniform\left( {0,1} \right)$
2. Set $X = - \log \left( {1 - U} \right)/\lambda$
3. Repeat
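The exponential inverse-transform recipe above can be sketched in Python; the rate $\lambda = 2$ and the sample count are arbitrary choices for illustration, not from the tutorial:

```python
import math
import random

def exponential_inverse_transform(lam, n, seed=0):
    """Draw n samples from Exponential(lam) by inverse transform:
    U ~ Uniform(0,1), then X = F^{-1}(U) = -log(1 - U) / lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = exponential_inverse_transform(lam=2.0, n=100_000)
mean = sum(samples) / len(samples)   # should be close to 1/lam = 0.5
```

Since the exponential mean is $1/\lambda$, the sample mean is a quick sanity check on the transform.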

### 1.2.3 Rejection Sampling

1. Choose a proposal distribution $q\left( \theta \right)$ that is easy to sample from.

2. Find a constant $c$ such that $cq\left( \theta \right) \ge p\left( \theta \right)$ for all $\theta$.

3. Sample a proposal $\theta$ from the proposal distribution $q\left( \theta \right)$.

4. Sample a number $u$ uniformly from the interval $\left[ {0,cq\left( \theta \right)} \right]$.

5. Reject the proposal if $u > p\left( \theta \right)$; otherwise accept it.

6. Repeat steps 3-5 until the desired number of samples is reached; each accepted sample is a draw from $p\left( \theta \right)$.
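The six steps above can be sketched in Python. The target Beta(2,2) density, the Uniform(0,1) proposal, and the bound $c = 1.5$ below are my own hypothetical choices for illustration:

```python
import random

def beta22_pdf(theta):
    """Target density p(theta) = 6*theta*(1-theta) on [0,1] (Beta(2,2))."""
    return 6.0 * theta * (1.0 - theta)

def rejection_sample(n, seed=0):
    """Rejection sampling with proposal q = Uniform(0,1) and c = 1.5,
    so that c*q(theta) = 1.5 >= p(theta) everywhere (max of p is 1.5)."""
    rng = random.Random(seed)
    c = 1.5
    accepted = []
    while len(accepted) < n:
        theta = rng.random()            # step 3: draw proposal from q
        u = rng.uniform(0.0, c * 1.0)   # step 4: u ~ Uniform(0, c*q(theta))
        if u <= beta22_pdf(theta):      # step 5: reject only if u > p(theta)
            accepted.append(theta)
    return accepted

samples = rejection_sample(100_000)
mean = sum(samples) / len(samples)   # Beta(2,2) has mean 0.5
```

Note that the expected acceptance rate is $1/c$, so a tighter bound $c$ wastes fewer proposals.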

## 2. Markov Chain Monte Carlo (MCMC)

### 2.1 Monte Carlo Integration

For a continuous random variable $x$ with density $p\left( x \right)$, the expectation of a function $g\left( x \right)$ is

$E\left[ {g\left( x \right)} \right] = \int {g\left( x \right)p\left( x \right)dx}$

For a discrete random variable the integral becomes a sum:

$E\left[ {g\left( x \right)} \right] = \sum\limits_x {g\left( x \right)p\left( x \right)}$

Monte Carlo integration approximates such expectations by drawing $n$ samples ${x^{\left( t \right)}} \sim p\left( x \right)$ and averaging:

$E\left[ {g\left( x \right)} \right] \approx {1 \over n}\sum\limits_{t = 1}^n {g\left( {{x^{\left( t \right)}}} \right)}$
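In code, the approximation is just an average of $g$ over draws from $p$. In the Python sketch below, $g(x) = x^2$ and a standard normal $p$ are arbitrary illustrative choices, so the true value is $E[x^2] = \mathrm{Var}(x) = 1$:

```python
import random

def mc_expectation(g, draw, n, seed=0):
    """Monte Carlo integration: E[g(x)] ~= (1/n) * sum_t g(x^(t)),
    where draw(rng) returns one sample x^(t) ~ p(x)."""
    rng = random.Random(seed)
    return sum(g(draw(rng)) for _ in range(n)) / n

# E[x^2] under N(0,1) equals the variance, i.e. 1
estimate = mc_expectation(lambda x: x * x,
                          lambda rng: rng.gauss(0.0, 1.0),
                          n=200_000)
```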

### 2.2 Markov Chains

A Markov chain is a stochastic process in which we move from one state to the next through a sequential procedure. We start the chain at some state ${x^{\left( 1 \right)}}$ and use a transition function $p\left( {{x^{\left( t \right)}}|{x^{\left( {t - 1} \right)}}} \right)$ to determine the next state ${x^{\left( 2 \right)}}$. We then keep applying the same transition function, each time conditioning on the most recent state, to produce a sequence of states:

${x^{\left( 1 \right)}} \to {x^{\left( 2 \right)}} \to ... \to {x^{\left( t \right)}} \to ...$

The procedure for generating a chain of $T$ states is:

1. Set $t = 1$

2. Generate an initial value $u$, and set ${x^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Sample a new value $u$ from the transition function $p\left( {{x^{\left( t \right)}}|{x^{\left( {t - 1} \right)}}} \right)$

   Set ${x^{\left( t \right)}} = u$

4. Until $t = T$
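For a discrete state space the transition function is just a matrix of probabilities, and the procedure can be sketched directly. The two-state matrix below is a hypothetical example of mine; its stationary distribution works out to $(5/6, 1/6)$, and the long-run fraction of time the chain spends in each state approaches it regardless of the starting state:

```python
import random

def run_chain(P, x0, T, seed=0):
    """Simulate x^(1) -> x^(2) -> ... -> x^(T), where
    P[i][j] = p( x^(t) = j | x^(t-1) = i )."""
    rng = random.Random(seed)
    x = x0
    states = [x]
    for _ in range(T - 1):
        r, cum = rng.random(), 0.0
        for j, pj in enumerate(P[x]):   # draw next state from row x of P
            cum += pj
            if r < cum:
                x = j
                break
        states.append(x)
    return states

P = [[0.9, 0.1],
     [0.5, 0.5]]                        # hypothetical transition function
states = run_chain(P, x0=1, T=200_000)
frac0 = states.count(0) / len(states)   # long-run fraction in state 0
```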

### 2.3 Putting It Together: Markov Chain Monte Carlo

The goal of MCMC is to design a Markov chain whose stationary distribution is exactly the distribution we want to sample from, the so-called target distribution. In other words, sampling states from the Markov chain should be equivalent to sampling from the target distribution. The idea is to set up the transition function in such a way that, no matter what the initial state of the chain is, it eventually converges to the target distribution. There are a number of ways to achieve this with relatively simple procedures; here we discuss only Metropolis, Metropolis-Hastings, and Gibbs sampling.

### 2.4 Metropolis Sampling

Metropolis sampling is the simplest MCMC method and a special case of the Metropolis-Hastings sampler discussed in the next section. Suppose our goal is to sample from a target density $p\left( \theta \right)$ with $-\infty < \theta < \infty$. The Metropolis sampler creates a Markov chain that produces a sequence of values:

${\theta ^{\left( 1 \right)}} \to {\theta ^{\left( 2 \right)}} \to ... \to {\theta ^{\left( t \right)}} \to ...$

At each step, a candidate ${\theta ^*}$ is drawn from a proposal distribution $q\left( {\theta |{\theta ^{\left( {t - 1} \right)}}} \right)$ and accepted with probability

$\alpha = \min \left( {1,{{p\left( {{\theta ^*}} \right)} \over {p\left( {{\theta ^{\left( {t - 1} \right)}}} \right)}}} \right)$

The full procedure is:

1. Set $t = 1$

2. Generate an initial value $u$, and set ${\theta ^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Generate a proposal ${\theta ^*}$ from $q\left( {\theta |{\theta ^{\left( {t - 1} \right)}}} \right)$

   Evaluate the acceptance probability $\alpha = \min \left( {1,{{p\left( {{\theta ^*}} \right)} \over {p\left( {{\theta ^{\left( {t - 1} \right)}}} \right)}}} \right)$

   Generate $u$ from $Uniform\left( {0,1} \right)$; if $u \le \alpha$, accept the proposal and set ${\theta ^{\left( t \right)}} = {\theta ^*}$, otherwise set ${\theta ^{\left( t \right)}} = {\theta ^{\left( {t - 1} \right)}}$

4. Until $t = T$

A key requirement of the Metropolis sampler is that the proposal distribution must be symmetric: $q\left( {\theta = {\theta ^{\left( t \right)}}|{\theta ^{\left( {t - 1} \right)}}} \right) = q\left( {\theta = {\theta ^{\left( {t - 1} \right)}}|{\theta ^{\left( t \right)}}} \right)$. That is, the probability of proposing the new state from the old state must equal the probability of proposing the old state from the new state; this is what makes the chain satisfy detailed balance (see the LDA数学八卦 notes for a particularly clear explanation of detailed balance). Distributions that yield symmetric proposals include the Normal, Cauchy, Student-t, and uniform distributions. If the proposal distribution is not symmetric, the Metropolis-Hastings sampler, discussed in the next section, should be used instead.
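Detailed balance can be verified numerically for a small discrete chain. In the Python check below (the unnormalized weights are arbitrary values of mine), a Metropolis kernel with a symmetric uniform proposal satisfies $p_i T(i \to j) = p_j T(j \to i)$ for every pair of distinct states:

```python
# Unnormalized target probabilities for a 5-state chain (arbitrary values)
weights = [1.0, 4.0, 2.0, 0.5, 2.5]
n = len(weights)
q = 1.0 / n   # symmetric proposal: pick any state uniformly, q(j|i) = q(i|j)

def kernel(i, j):
    """Metropolis transition probability for i != j:
    propose j with probability q, accept with probability min(1, p_j / p_i)."""
    return q * min(1.0, weights[j] / weights[i])

max_violation = max(
    abs(weights[i] * kernel(i, j) - weights[j] * kernel(j, i))
    for i in range(n) for j in range(n) if i != j
)
# max_violation is 0 up to floating point: detailed balance holds
```

Note the normalizing constant of `weights` never appears, which previews the point made below about unnormalized densities.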

The main advantage of the Metropolis sampler is that the acceptance probability involves only a ratio of two density values, so any term in $p\left( \theta \right)$ that does not depend on $\theta$ cancels. We therefore do not need to know the normalizing constant of the density, and the sampler can draw from unnormalized distributions. This is important because unnormalized distributions arise frequently in Bayesian models.

### Example

Suppose we want to use the Metropolis sampler to draw from the Cauchy distribution, whose density is

$f\left( \theta \right) = {1 \over {\pi \left( {1 + {\theta ^2}} \right)}}$

Since the sampler only needs the density up to a constant, we can work with

$f\left( \theta \right) \propto {1 \over {\left( {1 + {\theta ^2}} \right)}}$

which gives the acceptance probability

$\alpha = \min \left( {1,{{1 + {{\left[ {{\theta ^{\left( {t - 1} \right)}}} \right]}^2}} \over {1 + {{\left[ {{\theta ^*}} \right]}^2}}}} \right)$

```matlab
function y = cauchy( theta )
%% Returns the unnormalized density of the Cauchy distribution
y = 1 ./ (1 + theta.^2);
```


Listing 2.1: Matlab function to evaluate the unnormalized Cauchy.

```matlab
%% Chapter 2. Use Metropolis procedure to sample from Cauchy density

%% Initialize the Metropolis sampler
T = 500; % Set the maximum number of iterations
sigma = 1; % Set standard deviation of normal proposal density
thetamin = -30; thetamax = 30; % define a range for starting values
theta = zeros( 1 , T ); % Init storage space for our samples
seed = 1; rand( 'state' , seed ); randn( 'state' , seed ); % set the random seed
theta(1) = unifrnd( thetamin , thetamax ); % Generate start value

%% Start sampling
t = 1;
while t < T % Iterate until we have T samples
    t = t + 1;
    % Propose a new value for theta using a normal proposal density
    theta_star = normrnd( theta(t-1) , sigma );
    % Calculate the acceptance ratio
    alpha = min( [ 1 cauchy( theta_star ) / cauchy( theta(t-1) ) ] );
    % Draw a uniform deviate from [ 0 1 ]
    u = rand;
    % Do we accept this proposal?
    if u < alpha
        theta(t) = theta_star; % If so, proposal becomes new state
    else
        theta(t) = theta(t-1); % If not, copy old state
    end
end

%% Display histogram of our samples
figure( 1 ); clf;
subplot( 3,1,1 );
nbins = 200;
thetabins = linspace( thetamin , thetamax , nbins );
counts = hist( theta , thetabins );
bar( thetabins , counts/sum(counts) , 'k' );
xlim( [ thetamin thetamax ] );
xlabel( '\theta' ); ylabel( 'p(\theta)' );

%% Overlay the theoretical density
y = cauchy( thetabins );
hold on;
plot( thetabins , y/sum(y) , 'r--' , 'LineWidth' , 3 );
set( gca , 'YTick' , [] );

%% Display history of our samples
subplot( 3,1,2:3 );
stairs( theta , 1:T , 'k-' );
ylabel( 't' ); xlabel( '\theta' );
set( gca , 'YDir' , 'reverse' );
xlim( [ thetamin thetamax ] );
```


Listing 2.2: Matlab code to implement Metropolis sampler for Example 1

### 2.5 Metropolis-Hastings Sampling

The Metropolis-Hastings (MH) sampler is a generalization of the Metropolis sampler in which the proposal distribution may be either symmetric or asymmetric. The MH sampler works exactly like the Metropolis sampler, but computes the acceptance probability with the proposal density folded in:

$\alpha = \min \left( {1,{{p\left( {{\theta ^*}} \right)q\left( {{\theta ^{\left( {t - 1} \right)}}|{\theta ^*}} \right)} \over {p\left( {{\theta ^{\left( {t - 1} \right)}}} \right)q\left( {{\theta ^*}|{\theta ^{\left( {t - 1} \right)}}} \right)}}} \right)$

The extra ratio ${{q\left( {{\theta ^{\left( {t - 1} \right)}}|{\theta ^*}} \right)} \over {q\left( {{\theta ^*}|{\theta ^{\left( {t - 1} \right)}}} \right)}}$ corrects for any asymmetry in the proposal distribution; when the proposal is symmetric it equals 1, and the rule reduces to the Metropolis sampler.

The procedure for the MH sampler is:

1. Set $t = 1$

2. Generate an initial value $u$, and set ${\theta ^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Generate a proposal ${\theta ^*}$ from $q\left( {\theta |{\theta ^{\left( {t - 1} \right)}}} \right)$

   Evaluate the acceptance probability $\alpha = \min \left( {1,{{p\left( {{\theta ^*}} \right)q\left( {{\theta ^{\left( {t - 1} \right)}}|{\theta ^*}} \right)} \over {p\left( {{\theta ^{\left( {t - 1} \right)}}} \right)q\left( {{\theta ^*}|{\theta ^{\left( {t - 1} \right)}}} \right)}}} \right)$

   Generate $u$ from $Uniform\left( {0,1} \right)$; if $u \le \alpha$, set ${\theta ^{\left( t \right)}} = {\theta ^*}$, otherwise set ${\theta ^{\left( t \right)}} = {\theta ^{\left( {t - 1} \right)}}$

4. Until $t = T$
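As a sketch of the MH procedure with a genuinely asymmetric proposal, the Python example below uses a lognormal random walk, ${\theta ^*} = {\theta ^{\left( {t - 1} \right)}}\exp \left( {\sigma Z} \right)$; for this proposal the correction ratio $q\left( {{\theta ^{\left( {t - 1} \right)}}|{\theta ^*}} \right)/q\left( {{\theta ^*}|{\theta ^{\left( {t - 1} \right)}}} \right)$ simplifies to ${\theta ^*}/{\theta ^{\left( {t - 1} \right)}}$. The Exponential(1) target and $\sigma = 0.8$ are arbitrary illustrative choices of mine, not from the tutorial:

```python
import math
import random

def mh_lognormal_walk(log_p, T, theta0=1.0, sigma=0.8, seed=0):
    """Metropolis-Hastings with asymmetric lognormal random-walk proposals.
    log_p is the log of the (unnormalized) target density on theta > 0."""
    rng = random.Random(seed)
    theta = theta0
    chain = [theta]
    for _ in range(T - 1):
        prop = theta * math.exp(sigma * rng.gauss(0.0, 1.0))  # theta* > 0
        # log acceptance ratio, including the proposal correction log(prop/theta)
        log_alpha = log_p(prop) - log_p(theta) + math.log(prop / theta)
        if math.log(rng.random()) < min(0.0, log_alpha):
            theta = prop            # accept the proposal
        chain.append(theta)         # on rejection, the old state is copied
    return chain

# Target: Exponential(1), p(theta) = exp(-theta), which has mean 1
chain = mh_lognormal_walk(lambda t: -t, T=200_000)
mean = sum(chain) / len(chain)
```

Working with log densities avoids overflow; omitting the `log(prop / theta)` correction here would bias the sampler, which is exactly the error the MH ratio guards against.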

### 2.6 Metropolis-Hastings for Multivariate Distributions

#### 2.6.1 Blockwise Updating

In blockwise updating, all components of the multivariate parameter $\theta = \left( {{\theta _1}, \ldots ,{\theta _N}} \right)$ are updated simultaneously with a single $N$-dimensional proposal:

1. Set $t = 1$

2. Generate an initial value $u = \left( {{u_1}, \ldots ,{u_N}} \right)$, and set ${\theta ^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Generate a proposal ${\theta ^*}$ from $q\left( {\theta |{\theta ^{\left( {t - 1} \right)}}} \right)$

   Evaluate $\alpha = \min \left( {1,{{p\left( {{\theta ^*}} \right)q\left( {{\theta ^{\left( {t - 1} \right)}}|{\theta ^*}} \right)} \over {p\left( {{\theta ^{\left( {t - 1} \right)}}} \right)q\left( {{\theta ^*}|{\theta ^{\left( {t - 1} \right)}}} \right)}}} \right)$

   Generate $u$ from $Uniform\left( {0,1} \right)$; if $u \le \alpha$, set ${\theta ^{\left( t \right)}} = {\theta ^*}$, otherwise set ${\theta ^{\left( t \right)}} = {\theta ^{\left( {t - 1} \right)}}$

4. Until $t = T$

### Example

Suppose we want to sample from the bivariate exponential distribution

$p\left( {{\theta _1},{\theta _2}} \right) = \exp \left( { - \left( {{\lambda _1} + \lambda } \right){\theta _1} - \left( {{\lambda _2} + \lambda } \right){\theta _2} - \lambda \max \left( {{\theta _1},{\theta _2}} \right)} \right)$

To illustrate the blockwise MH sampler, we use a uniform proposal distribution, drawing both $\theta _1^*$ and $\theta _2^*$ from $Uniform\left( {0,8} \right)$; that is, proposals are sampled uniformly from a square box. Note that with this particular proposal distribution we are not conditioning the proposal on the previous state of the sampler; this is known as an independence sampler. It is a fairly naive proposal distribution, but it keeps the implementation simple, because ${{q\left( {{\theta ^{\left( {t - 1} \right)}}|{\theta ^*}} \right)} \over {q\left( {{\theta ^*}|{\theta ^{\left( {t - 1} \right)}}} \right)}} = 1$ and this term drops out of the acceptance probability. A MATLAB implementation of this sampler is shown in Listing 2.4; the left panel of the resulting figure shows the approximate distribution based on 5000 samples.

```matlab
function y = bivexp( theta1 , theta2 )
%% Returns the density of a bivariate exponential function
lambda1 = 0.5; % Set up some constants
lambda2 = 0.1;
lambda = 0.01;
% elementwise max(theta1,theta2) matches the density definition
y = exp( -(lambda1+lambda)*theta1 - (lambda2+lambda)*theta2 ...
    - lambda*max( theta1 , theta2 ) );
```


Listing 2.3: Matlab code to implement bivariate density for Example 1

```matlab
%% Chapter 2. Metropolis procedure to sample from Bivariate Exponential
% Blockwise updating. Use a uniform proposal distribution

%% Initialize the Metropolis sampler
T = 5000; % Set the maximum number of iterations
thetamin = [ 0 0 ]; % define minimum for theta1 and theta2
thetamax = [ 8 8 ]; % define maximum for theta1 and theta2
seed = 1; rand( 'state' , seed ); randn( 'state' , seed ); % set the random seed
theta = zeros( 2 , T ); % Init storage space for our samples
theta(1,1) = unifrnd( thetamin(1) , thetamax(1) ); % Start value for theta1
theta(2,1) = unifrnd( thetamin(2) , thetamax(2) ); % Start value for theta2

%% Start sampling
t = 1;
while t < T % Iterate until we have T samples
    t = t + 1;
    % Propose a new value for theta
    theta_star = unifrnd( thetamin , thetamax );
    pratio = bivexp( theta_star(1) , theta_star(2) ) / ...
             bivexp( theta(1,t-1) , theta(2,t-1) );
    alpha = min( [ 1 pratio ] ); % Calculate the acceptance ratio
    u = rand; % Draw a uniform deviate from [ 0 1 ]
    if u < alpha % Do we accept this proposal?
        theta(:,t) = theta_star; % proposal becomes new value for theta
    else
        theta(:,t) = theta(:,t-1); % copy old value of theta
    end
end

%% Display histogram of our samples
figure( 1 ); clf;
subplot( 1,2,1 );
nbins = 10;
thetabins1 = linspace( thetamin(1) , thetamax(1) , nbins );
thetabins2 = linspace( thetamin(2) , thetamax(2) , nbins );
hist3( theta' , 'Edges' , {thetabins1 thetabins2} );
xlabel( '\theta_1' ); ylabel( '\theta_2' ); zlabel( 'counts' );
az = 61; el = 30;
view( az , el );

%% Plot the theoretical density
subplot( 1,2,2 );
nbins = 20;
thetabins1 = linspace( thetamin(1) , thetamax(1) , nbins );
thetabins2 = linspace( thetamin(2) , thetamax(2) , nbins );
[ theta1grid , theta2grid ] = meshgrid( thetabins1 , thetabins2 );
ygrid = bivexp( theta1grid , theta2grid );
mesh( theta1grid , theta2grid , ygrid );
xlabel( '\theta_1' ); ylabel( '\theta_2' );
zlabel( 'f(\theta_1,\theta_2)' );
view( az , el );
```


Listing 2.4: Blockwise Metropolis-Hastings sampler for bivariate exponential distribution

### Example

Suppose we want to sample orderings of five US presidents under the Mallows model, where the correct ordering is

$\omega = \left( {1,2,3,4,5} \right) = \left( {\text{Washington}, \text{Adams}, \text{Jefferson}, \text{Madison}, \text{Monroe}} \right)$

The Mallows model proposes that a remembered ordering should resemble the true ordering, so that orderings close to the true one are more probable than dissimilar ones. Specifically, under the Mallows model, the probability that a person remembers an ordering $\theta$ is

$p\left( {\theta |\omega ,\lambda } \right) \propto \exp \left( { - d\left( {\theta ,\omega } \right)\lambda } \right) \qquad (2.12)$

where $d\left( {\theta ,\omega } \right)$ is the Kendall tau distance between the remembered ordering $\theta$ and the true ordering $\omega$, and $\lambda$ is a scaling parameter.

A MATLAB implementation of the Kendall tau distance function is shown in Listing 2.5, and the Metropolis sampler in Listing 2.6; it simply prints the state every 10 iterations out of a total of 500. Some sample output of the program appears below the listing.

```matlab
function tau = kendalltau( order1 , order2 )
%% Returns the Kendall tau distance between two orderings
% Note: this is not the most efficient implementation
[ dummy , ranking1 ] = sort( order1(:)' , 2 , 'ascend' );
[ dummy , ranking2 ] = sort( order2(:)' , 2 , 'ascend' );
N = length( ranking1 );
[ ii , jj ] = meshgrid( 1:N , 1:N );
ok = find( jj(:) > ii(:) );
ii = ii( ok );
jj = jj( ok );
nok = length( ok );
sign1 = ranking1( jj ) > ranking1( ii );
sign2 = ranking2( jj ) > ranking2( ii );
tau = sum( sign1 ~= sign2 );
```


Listing 2.5: Function to evaluate Kendall tau distance

```matlab
%% Chapter 2. Metropolis sampler for Mallows model
% samples orderings from a distribution over orderings

%% Initialize model parameters
lambda = 0.1; % scaling parameter
labels = { 'Washington' , 'Adams' , 'Jefferson' , 'Madison' , 'Monroe' };
omega = [ 1 2 3 4 5 ]; % correct ordering
L = length( omega ); % number of items in ordering

%% Initialize the Metropolis sampler
T = 500; % Set the maximum number of iterations
seed = 1; rand( 'state' , seed ); randn( 'state' , seed ); % set the random seed
theta = zeros( L , T ); % Init storage space for our samples
theta(:,1) = randperm( L ); % Random ordering to start with

%% Start sampling
t = 1;
while t < T % Iterate until we have T samples
    t = t + 1;

    % Our proposal is the last ordering but with two items switched
    lasttheta = theta(:,t-1); % Get the last theta
    % Propose two items to switch
    whswap = randperm( L ); whswap = whswap(1:2);
    theta_star = lasttheta;
    theta_star( whswap(1) ) = lasttheta( whswap(2) );
    theta_star( whswap(2) ) = lasttheta( whswap(1) );

    % calculate Kendall tau distances
    dist1 = kendalltau( theta_star , omega );
    dist2 = kendalltau( lasttheta , omega );

    % Calculate the acceptance ratio
    pratio = exp( -dist1*lambda ) / exp( -dist2*lambda );
    alpha = min( [ 1 pratio ] );
    u = rand; % Draw a uniform deviate from [ 0 1 ]
    if u < alpha % Do we accept this proposal?
        theta(:,t) = theta_star; % proposal becomes new theta
    else
        theta(:,t) = lasttheta; % copy old theta
    end
    % Occasionally print the current state
    if mod( t , 10 ) == 0
        fprintf( 't=%3d\t' , t );
        for j = 1:L
            fprintf( '%15s' , labels{ theta(j,t) } );
        end
        fprintf( '\n' );
    end
end
```


Listing 2.6: Implementation of Metropolis-Hastings sampler for Mallows model

```
t=400    Jefferson    Madison    Adams    Monroe    Washington
```


#### 2.6.2 Componentwise Updating

A drawback of blockwise updating is that it can be hard to find a good multivariate proposal distribution, and blockwise proposals are often rejected. An alternative is to update one component of $\theta$ at a time. For a bivariate parameter $\theta = \left( {{\theta _1},{\theta _2}} \right)$ with a symmetric proposal on each component, the procedure is:

1. Set $t = 1$

2. Generate an initial value $u = \left( {{u_1},{u_2}} \right)$, and set ${\theta ^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Generate a proposal $\theta _1^*$ from $q\left( {{\theta _1}|\theta _1^{\left( {t - 1} \right)}} \right)$ and accept it with probability $\min \left( {1,{{p\left( {\theta _1^*,\theta _2^{\left( {t - 1} \right)}} \right)} \over {p\left( {\theta _1^{\left( {t - 1} \right)},\theta _2^{\left( {t - 1} \right)}} \right)}}} \right)$; set $\theta _1^{\left( t \right)} = \theta _1^*$ if accepted, otherwise $\theta _1^{\left( t \right)} = \theta _1^{\left( {t - 1} \right)}$

   Generate a proposal $\theta _2^*$ from $q\left( {{\theta _2}|\theta _2^{\left( {t - 1} \right)}} \right)$ and accept it with probability $\min \left( {1,{{p\left( {\theta _1^{\left( t \right)},\theta _2^*} \right)} \over {p\left( {\theta _1^{\left( t \right)},\theta _2^{\left( {t - 1} \right)}} \right)}}} \right)$; set $\theta _2^{\left( t \right)}$ accordingly

4. Until $t = T$

### Example

We apply the componentwise sampler to a bivariate normal distribution with means $\left( {0,0} \right)$ and correlation $0.3$, using a normal proposal distribution for each component:

```matlab
%% Chapter 2. Metropolis procedure to sample from Bivariate Normal
% Component-wise updating. Use a normal proposal distribution

%% Parameters of the Bivariate normal
mu = [ 0 0 ];
sigma = [ 1 0.3; ...
          0.3 1 ];

%% Initialize the Metropolis sampler
T = 5000; % Set the maximum number of iterations
propsigma = 1; % standard dev. of proposal distribution
thetamin = [ -3 -3 ]; % define minimum for theta1 and theta2
thetamax = [ 3 3 ]; % define maximum for theta1 and theta2
seed = 1; rand( 'state' , seed ); randn( 'state' , seed ); % set the random seed
state = zeros( 2 , T ); % Init storage space for the state of the sampler
theta1 = unifrnd( thetamin(1) , thetamax(1) ); % Start value for theta1
theta2 = unifrnd( thetamin(2) , thetamax(2) ); % Start value for theta2
t = 1; % initialize iteration at 1
state(1,t) = theta1; % save the current state
state(2,t) = theta2;

%% Start sampling
while t < T % Iterate until we have T samples
    t = t + 1;

    %% Propose a new value for theta1
    new_theta1 = normrnd( theta1 , propsigma );
    pratio = mvnpdf( [ new_theta1 theta2 ] , mu , sigma ) / ...
             mvnpdf( [ theta1 theta2 ] , mu , sigma );
    alpha = min( [ 1 pratio ] ); % Calculate the acceptance ratio
    u = rand; % Draw a uniform deviate from [ 0 1 ]
    if u < alpha % Do we accept this proposal?
        theta1 = new_theta1; % proposal becomes new value for theta1
    end

    %% Propose a new value for theta2
    new_theta2 = normrnd( theta2 , propsigma );
    pratio = mvnpdf( [ theta1 new_theta2 ] , mu , sigma ) / ...
             mvnpdf( [ theta1 theta2 ] , mu , sigma );
    alpha = min( [ 1 pratio ] ); % Calculate the acceptance ratio
    u = rand; % Draw a uniform deviate from [ 0 1 ]
    if u < alpha % Do we accept this proposal?
        theta2 = new_theta2; % proposal becomes new value for theta2
    end

    %% Save state
    state(1,t) = theta1;
    state(2,t) = theta2;
end

%% Display histogram of our samples
figure( 1 ); clf;
subplot( 1,2,1 );
nbins = 12;
thetabins1 = linspace( thetamin(1) , thetamax(1) , nbins );
thetabins2 = linspace( thetamin(2) , thetamax(2) , nbins );
hist3( state' , 'Edges' , {thetabins1 thetabins2} );
xlabel( '\theta_1' ); ylabel( '\theta_2' ); zlabel( 'counts' );
az = 61; el = 30; view( az , el );

%% Plot the theoretical density
subplot( 1,2,2 );
nbins = 50;
thetabins1 = linspace( thetamin(1) , thetamax(1) , nbins );
thetabins2 = linspace( thetamin(2) , thetamax(2) , nbins );
[ theta1grid , theta2grid ] = meshgrid( thetabins1 , thetabins2 );
zgrid = mvnpdf( [ theta1grid(:) theta2grid(:) ] , mu , sigma );
zgrid = reshape( zgrid , nbins , nbins );
surf( theta1grid , theta2grid , zgrid );
xlabel( '\theta_1' ); ylabel( '\theta_2' );
zlabel( 'pdf(\theta_1,\theta_2)' );
view( az , el );
xlim( [ thetamin(1) thetamax(1) ] ); ylim( [ thetamin(2) thetamax(2) ] );
```


Listing 2.7: Implementation of componentwise Metropolis sampler for bivariate normal

### 2.7 Gibbs Sampling

A major drawback of the Metropolis-Hastings and rejection samplers is that it can be difficult to tune the proposal distribution (how do we choose a good one?). In addition, rejected samples are wasted: they contribute nothing to the approximation. The Gibbs sampler differs in that every sample it draws is accepted, which improves computational efficiency. A further advantage is that the researcher does not have to specify a proposal distribution at all, taking some of the guesswork out of the MCMC procedure.

For a bivariate parameter $\theta = \left( {{\theta _1},{\theta _2}} \right)$, the Gibbs sampler requires the full conditional distributions $f\left( {{\theta _1}|{\theta _2}} \right)$ and $f\left( {{\theta _2}|{\theta _1}} \right)$, and proceeds as follows:

1. Set $t = 1$

2. Generate an initial value $u = \left( {{u_1},{u_2}} \right)$, and set ${\theta ^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Sample $\theta _1^{\left( t \right)}$ from the conditional distribution $f\left( {{\theta _1}|{\theta _2} = \theta _2^{\left( {t - 1} \right)}} \right)$

   Sample $\theta _2^{\left( t \right)}$ from the conditional distribution $f\left( {{\theta _2}|{\theta _1} = \theta _1^{\left( t \right)}} \right)$

4. Until $t = T$

### Example

For a bivariate normal distribution with means $\left( {{\mu _1},{\mu _2}} \right)$, unit variances, and correlation $\rho$, the full conditionals are themselves normal distributions:

$P({\theta _1}|{\theta _2}) \sim Norm\left( {{\mu _1} + \rho \left( {{\theta _2} - {\mu _2}} \right),\sqrt {1 - {\rho ^2}} } \right)$

$P({\theta _2}|{\theta _1}) \sim Norm\left( {{\mu _2} + \rho \left( {{\theta _1} - {\mu _1}} \right),\sqrt {1 - {\rho ^2}} } \right)$

(the second argument is the standard deviation). The Gibbs sampler is then:

1. Set $t = 1$

2. Generate an initial value $u = \left( {{u_1},{u_2}} \right)$, and set ${\theta ^{\left( t \right)}} = u$

3. Repeat:

   $t = t + 1$

   Sample $\theta _1^{\left( t \right)}$ from $Norm\left( {{\mu _1} + \rho \left( {\theta _2^{\left( {t - 1} \right)} - {\mu _2}} \right),\sqrt {1 - {\rho ^2}} } \right)$

   Sample $\theta _2^{\left( t \right)}$ from $Norm\left( {{\mu _2} + \rho \left( {\theta _1^{\left( t \right)} - {\mu _1}} \right),\sqrt {1 - {\rho ^2}} } \right)$

4. Until $t = T$
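The bivariate-normal Gibbs sampler above translates directly into Python. Here ${\mu _1} = {\mu _2} = 0$ and $\rho = 0.7$ are arbitrary illustrative choices; the recovered sample correlation should match $\rho$:

```python
import math
import random

def gibbs_bivariate_normal(rho, T, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    alternating draws from the two full conditionals:
      theta1 | theta2 ~ Norm(rho * theta2, sqrt(1 - rho^2))
      theta2 | theta1 ~ Norm(rho * theta1, sqrt(1 - rho^2))"""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    t1, t2 = 0.0, 0.0                 # arbitrary starting state
    draws = []
    for _ in range(T):
        t1 = rng.gauss(rho * t2, sd)  # sample theta1 given current theta2
        t2 = rng.gauss(rho * t1, sd)  # sample theta2 given new theta1
        draws.append((t1, t2))
    return draws

draws = gibbs_bivariate_normal(rho=0.7, T=200_000)
n = len(draws)
m1 = sum(a for a, _ in draws) / n
m2 = sum(b for _, b in draws) / n
cov = sum((a - m1) * (b - m2) for a, b in draws) / n
v1 = sum((a - m1) ** 2 for a, _ in draws) / n
v2 = sum((b - m2) ** 2 for _, b in draws) / n
corr = cov / math.sqrt(v1 * v2)      # should be close to rho = 0.7
```

Note that every draw is kept; there is no accept/reject step, which is the efficiency advantage described above.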

http://pan.baidu.com/s/1qXIWzJu

MCMC Matlab tutorial (electronic version): http://pan.baidu.com/s/1i48vyr7

## Reference

Mark Steyvers. Computational Statistics with Matlab. 2011.
