Optimal interpolation builds on the successive correction method, using the least-squares principle to solve for the weight matrix and obtain an optimal estimate. Let $x_i^b$, $x_i^a$, and $x_i^t$ denote the background, analysis, and true values at the $i$-th analysis grid point, respectively. Then
$$x_i^a = x_i^b + \sum_{k=1}^{K_i} W_{ki}\,(x_k^{obs} - x_k^b) = x_i^b + \boldsymbol{w}_i^{T}(\boldsymbol{x}^{obs} - \boldsymbol{x}^{b})$$
To minimize the analysis error variance, we solve the following minimization problem:
$$\min_{\boldsymbol{w}_i}\ \overline{(x_i^a - x_i^t)(x_i^a - x_i^t)}$$
Assuming the background and observation errors are both unbiased, and that the observation errors are uncorrelated with the background errors, setting the first derivative to zero to minimize the analysis error variance yields:
$$\boldsymbol{w}_i = (\boldsymbol{B} + \boldsymbol{O})^{-1}\,\boldsymbol{b}_i$$
where the matrices $B$ and $O$ and the vector $b_i$ are given by:
$$B = \begin{pmatrix} \overline{(x_1^b - x_1^t)(x_1^b - x_1^t)} & \overline{(x_1^b - x_1^t)(x_2^b - x_2^t)} & \cdots & \overline{(x_1^b - x_1^t)(x_{K_i}^b - x_{K_i}^t)} \\ \overline{(x_2^b - x_2^t)(x_1^b - x_1^t)} & \overline{(x_2^b - x_2^t)(x_2^b - x_2^t)} & \cdots & \overline{(x_2^b - x_2^t)(x_{K_i}^b - x_{K_i}^t)} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{(x_{K_i}^b - x_{K_i}^t)(x_1^b - x_1^t)} & \overline{(x_{K_i}^b - x_{K_i}^t)(x_2^b - x_2^t)} & \cdots & \overline{(x_{K_i}^b - x_{K_i}^t)(x_{K_i}^b - x_{K_i}^t)} \end{pmatrix}$$
$$O = \begin{pmatrix} \overline{(x_1^{obs} - x_1^t)(x_1^{obs} - x_1^t)} & \overline{(x_1^{obs} - x_1^t)(x_2^{obs} - x_2^t)} & \cdots & \overline{(x_1^{obs} - x_1^t)(x_{K_i}^{obs} - x_{K_i}^t)} \\ \overline{(x_2^{obs} - x_2^t)(x_1^{obs} - x_1^t)} & \overline{(x_2^{obs} - x_2^t)(x_2^{obs} - x_2^t)} & \cdots & \overline{(x_2^{obs} - x_2^t)(x_{K_i}^{obs} - x_{K_i}^t)} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{(x_{K_i}^{obs} - x_{K_i}^t)(x_1^{obs} - x_1^t)} & \overline{(x_{K_i}^{obs} - x_{K_i}^t)(x_2^{obs} - x_2^t)} & \cdots & \overline{(x_{K_i}^{obs} - x_{K_i}^t)(x_{K_i}^{obs} - x_{K_i}^t)} \end{pmatrix}$$
$$b_i = \begin{pmatrix} \overline{(x_1^b - x_1^t)(x_i^b - x_i^t)} \\ \overline{(x_2^b - x_2^t)(x_i^b - x_i^t)} \\ \vdots \\ \overline{(x_{K_i}^b - x_{K_i}^t)(x_i^b - x_i^t)} \end{pmatrix}$$
Here $B$ is the background error covariance matrix and $O$ is the observation error covariance matrix. Both are defined in terms of the true state, which is never known, so these covariances cannot be computed exactly; in practice they must be estimated or modeled statistically. The main difficulty of optimal interpolation therefore lies in specifying the covariance matrices.
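To make the mechanics concrete, here is a minimal sketch of single-point optimal interpolation in Python. The Gaussian correlation model, the length scale `L`, and all numeric values are illustrative assumptions standing in for the statistically estimated covariances discussed above, not part of the derivation itself:

```python
import numpy as np

def gaussian_corr(d, L=2.0):
    """Background error correlation as a function of distance d (length scale L).
    This correlation model is an illustrative assumption."""
    return np.exp(-d**2 / (2 * L**2))

# Positions of K_i = 3 observations and of the analysis grid point i
obs_pos = np.array([0.0, 1.0, 3.0])
grid_pos = 1.5

sigma_b = 1.0    # background error standard deviation (assumed homogeneous)
sigma_o = 0.5    # observation error standard deviation (assumed uncorrelated)

# B: background error covariance between observation locations
dists = np.abs(obs_pos[:, None] - obs_pos[None, :])
B = sigma_b**2 * gaussian_corr(dists)

# O: observation error covariance (uncorrelated errors -> diagonal)
O = sigma_o**2 * np.eye(len(obs_pos))

# b_i: background error covariance between obs locations and the grid point
b_i = sigma_b**2 * gaussian_corr(np.abs(obs_pos - grid_pos))

# w_i = (B + O)^{-1} b_i
w_i = np.linalg.solve(B + O, b_i)

# Analysis: x_i^a = x_i^b + w_i^T (x^obs - x^b)
x_b_at_obs = np.array([10.2, 10.8, 11.5])   # background at the obs points
x_obs = np.array([10.0, 11.0, 12.0])        # observed values
x_b_grid = 10.9                              # background at the grid point

x_a_grid = x_b_grid + w_i @ (x_obs - x_b_at_obs)
print(f"weights: {w_i}, analysis: {x_a_grid:.3f}")
```

Note that the weights are obtained with `np.linalg.solve` rather than by forming $(B+O)^{-1}$ explicitly, which is both cheaper and numerically safer.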
From the derivation above, solving for the optimal interpolation weights requires the matrix $(B+O)$ to be invertible. In practical applications this matrix is often very large, making the inversion computationally expensive; in that case the three-dimensional variational (3D-Var) method can be used instead. Variational methods construct a cost function that measures the discrepancy between the analysis and the true state, recasting the data assimilation problem as an optimization problem. Below we illustrate the principle of 3D-Var with a single-point example.
Let the model simulation (background) be $x_b$ with standard deviation $\sigma_b$, let the observation be $x_{obs}$ with standard deviation $\sigma_{obs}$, and let $y$ be the analysis value to be determined.
By Bayes' theorem: $P(y \mid x_{obs})\,P(x_{obs}) = P(x_{obs} \mid x_b)\,P(x_b)$
Since $x_{obs}$ is given, $P(x_{obs}) = \text{const}$.
Therefore: $P(y \mid x_{obs}) \propto P(x_{obs} \mid x_b)\,P(x_b)$
3D-Var assumes that both the model (background) error and the observation error follow normal distributions.
Then
$$P(x_{obs} \mid x_b) = \frac{1}{\sqrt{2\pi}\,\sigma_{obs}} \exp\!\left[-\frac{(x - x_{obs})^2}{2\sigma_{obs}^2}\right]$$
$$P(x_b) = \frac{1}{\sqrt{2\pi}\,\sigma_b} \exp\!\left[-\frac{(x - x_b)^2}{2\sigma_b^2}\right]$$
So
$$P(y \mid x_{obs}) = \frac{1}{2\pi\,\sigma_b\,\sigma_{obs}} \exp\!\left[-\frac{(x - x_b)^2}{2\sigma_b^2} - \frac{(x - x_{obs})^2}{2\sigma_{obs}^2}\right]$$
This gives the probability distribution of the analysis value; solving for the optimal analysis amounts to maximizing this probability.
Taking the negative logarithm of this probability gives the cost function
$$J(x) = -\ln[P(y \mid x_{obs})] = \ln(2\pi\,\sigma_b\,\sigma_{obs}) + \frac{(x - x_b)^2}{2\sigma_b^2} + \frac{(x - x_{obs})^2}{2\sigma_{obs}^2}$$
Setting $\mathrm{d}J(x)/\mathrm{d}x = 0$ gives
$$y = \arg\min_x J(x) = \frac{\sigma_b^2\,x_{obs} + \sigma_{obs}^2\,x_b}{\sigma_b^2 + \sigma_{obs}^2} \tag{1.1}$$
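As a quick sanity check with illustrative numbers, take $x_b = 10$, $\sigma_b = 2$, $x_{obs} = 12$, $\sigma_{obs} = 1$:
$$y = \frac{2^2 \cdot 12 + 1^2 \cdot 10}{2^2 + 1^2} = \frac{58}{5} = 11.6$$
The analysis lies between the background and the observation, closer to the observation because it has the smaller error variance.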
For this problem the cost function is convex, so taking the derivative suffices. In real applications, however, the cost function is more complicated and has no analytical minimizer, so iterative methods are usually needed, such as quasi-Newton methods, the conjugate gradient method, Newton's method, and steepest descent.
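As a minimal sketch of the iterative route, the snippet below minimizes the single-point cost function $J(x)$ with SciPy's conjugate gradient implementation and compares the result against the closed-form solution (1.1), using the same illustrative values as above:

```python
import numpy as np
from scipy.optimize import minimize

x_b, sigma_b = 10.0, 2.0      # background and its standard deviation
x_obs, sigma_obs = 12.0, 1.0  # observation and its standard deviation

def J(x):
    """Quadratic 3D-Var cost function (the constant log term is dropped,
    since it does not affect the minimizer)."""
    x = np.asarray(x).item()  # minimize passes a 1-element array
    return (x - x_b)**2 / (2 * sigma_b**2) + (x - x_obs)**2 / (2 * sigma_obs**2)

# Iterative minimization by the conjugate gradient method
res = minimize(J, x0=np.array([x_b]), method='CG')

# Closed-form analysis from equation (1.1)
y_exact = (sigma_b**2 * x_obs + sigma_obs**2 * x_b) / (sigma_b**2 + sigma_obs**2)

print(f"iterative: {res.x[0]:.4f}, closed-form: {y_exact:.4f}")  # both = 11.6
```

For this convex quadratic the two answers coincide; the iterative machinery only pays off once the cost function loses its closed-form minimizer.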