
Saturday, March 12, 2016

Bayesian Technique

Here we will explore the relationship between maximizing the likelihood p.d.f. $p(\mathbf{t}|\mathbf{w})$, maximizing the posterior p.d.f. $p(\mathbf{w}|\mathbf{t})$, minimizing the sum-of-squares error function $E_D(\mathbf{w})$, and the regularization technique.

When we maximise the posterior probability density function w.r.t. the parameter vector using the Bayesian technique, we need both the likelihood function and the prior. (The denominator in Bayes' theorem is just a normalization constant, so it does not affect the maximization.)

$$p(\mathbf{w}|\mathbf{t}) \propto p(\mathbf{t}|\mathbf{w})\,p(\mathbf{w})$$

The model

We have a set of inputs $\mathbf{x} = [x_1, \dots, x_n]^T$ with corresponding target values $\mathbf{t} = [t_1, \dots, t_n]^T$.

We assume that there exists some deterministic function $y$ such that we can model the relationship between the two as $y(x_i, \mathbf{w})$ plus additive Gaussian noise,

$$t_i = y(x_i, \mathbf{w}) + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \beta^{-1})$$

β is the precision (inverse variance) of the additive univariate Gaussian noise.

We define $y$ as a linear combination of basis functions,
$$y(x_i, \mathbf{w}) = w_1\phi_1(x_i) + w_2\phi_2(x_i) + \dots + w_p\phi_p(x_i) = \mathbf{w}^T\boldsymbol{\phi}(x_i)$$
We define the parameter vector as $\mathbf{w}_{p\times 1} = [w_1, w_2, \dots, w_p]^T$ and the basis vector as $\boldsymbol{\phi}(x_i) = [\phi_1(x_i), \phi_2(x_i), \dots, \phi_p(x_i)]^T$.

This parameter vector is central: the posterior p.d.f. is the updated probability of $\mathbf{w}$ given some training data, and it is obtained by combining the prior p.d.f. over $\mathbf{w}$ with the likelihood p.d.f. of observing that training data given $\mathbf{w}$.

We usually choose $\phi_1(x) = 1$ because we need a bias term in the model (to control the extent of the shift in $y$ itself - check this answer out).

For the data set as a whole we can write the set of model outputs as a vector $\mathbf{y}_{n\times 1}$,

$$\mathbf{y}(\mathbf{x}, \mathbf{w}) = \Phi\mathbf{w}$$

Here the basis (design) matrix $\Phi_{n\times p}$ is a function of $\mathbf{x}$ and is defined with its $i$-th row being $[\phi_1(x_i), \phi_2(x_i), \dots, \phi_p(x_i)]$, for $n$ such rows.
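As a minimal sketch of how such a design matrix can be built (the choice of polynomial basis functions $\phi_j(x) = x^{\,j-1}$ here is just an assumption for illustration; the post does not fix a particular basis):

```python
import numpy as np

def design_matrix(x, p):
    """Build the n x p basis matrix Phi with polynomial basis functions.

    Row i is [phi_1(x_i), ..., phi_p(x_i)] = [1, x_i, x_i^2, ..., x_i^(p-1)],
    so the first column is the bias term phi_1(x) = 1.
    """
    x = np.asarray(x, dtype=float)
    return np.vander(x, N=p, increasing=True)  # column j holds x**j

# Example: 5 inputs, 3 basis functions (bias, linear, quadratic)
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
Phi = design_matrix(x, p=3)
print(Phi.shape)  # (5, 3)
```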

Likelihood function

We assume that the data points $(x_i, t_i)$ are drawn independently, so the likelihood of the whole data set is the product of the individual data points' p.d.f.s, each of which is Gaussian.

$$p(\mathbf{t}|\mathbf{x}, \mathbf{w}, \beta) = \prod_{i=1}^{n} \mathcal{N}\!\left(t_i \,\middle|\, \mathbf{w}^T\boldsymbol{\phi}(x_i),\ \beta^{-1}\right)$$

Note that the $i$-th data point's p.d.f. is centered around $\mathbf{w}^T\boldsymbol{\phi}(x_i)$ as the mean.

Does the product of $n$ univariate Gaussians form a multivariate distribution in $\{t_i\}$? (It does: since the $t_i$ are independent, the product is a multivariate Gaussian in $\mathbf{t}$ with diagonal covariance $\beta^{-1}\mathbf{I}$.) I ask because we choose a Gaussian prior, so the likelihood should also be Gaussian in form, right?

Prior

We choose the corresponding conjugate prior, as we have a likelihood function which is the exponential of a quadratic function of w.

For this to make sense, treat the likelihood function - the product of all those Gaussians - as Gaussian in form as well: viewed as a function of $\mathbf{w}$, it is the exponential of a quadratic in $\mathbf{w}$.

Thus the prior p.d.f. is a normal distribution, $\mathcal{N}(\mathbf{m}_0, \mathbf{S}_0)$.

Posterior

The posterior p.d.f. is $\mathcal{N}(\mathbf{m}_N, \mathbf{S}_N)$ (as we chose a conjugate prior).

After solving for $\mathbf{m}_N$ and $\mathbf{S}_N$ we get,

(The complete derivation is available in Bishop - (2.116)) - coming soon

$$\mathbf{m}_N = \mathbf{S}_N\left(\mathbf{S}_0^{-1}\mathbf{m}_0 + \beta\Phi^T\mathbf{t}\right)$$
$$\mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta\Phi^T\Phi$$
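A quick sketch of where these come from (essentially the completing-the-square argument): add the log prior and log likelihood and collect the terms quadratic and linear in $\mathbf{w}$,

$$\ln p(\mathbf{w}|\mathbf{t}) = -\tfrac{1}{2}(\mathbf{w}-\mathbf{m}_0)^T\mathbf{S}_0^{-1}(\mathbf{w}-\mathbf{m}_0) - \tfrac{\beta}{2}(\mathbf{t}-\Phi\mathbf{w})^T(\mathbf{t}-\Phi\mathbf{w}) + \text{const}$$
$$= -\tfrac{1}{2}\mathbf{w}^T\left(\mathbf{S}_0^{-1} + \beta\Phi^T\Phi\right)\mathbf{w} + \mathbf{w}^T\left(\mathbf{S}_0^{-1}\mathbf{m}_0 + \beta\Phi^T\mathbf{t}\right) + \text{const}$$

Matching this against the Gaussian form $-\tfrac{1}{2}(\mathbf{w}-\mathbf{m}_N)^T\mathbf{S}_N^{-1}(\mathbf{w}-\mathbf{m}_N)$ gives $\mathbf{S}_N^{-1}$ from the quadratic term and $\mathbf{m}_N$ from the linear term.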

The sizes are,
The mean vectors $\mathbf{m}_N$ and $\mathbf{m}_0$ are both $p\times 1$, and they can be thought of as the optimal parameter vector and the pseudo-observations respectively.
The covariance matrices $\mathbf{S}_N$ and $\mathbf{S}_0$ are both $p\times p$.

We shall consider a particular form of Gaussian prior in order to simplify the treatment. Specifically, we assume a zero-mean isotropic Gaussian governed by a single precision parameter $\alpha$,
$$p(\mathbf{w}|\alpha) = \mathcal{N}(\mathbf{w}\,|\,\mathbf{0}, \alpha^{-1}\mathbf{I})$$
So we basically take $\mathbf{m}_0 = \mathbf{0}$ and $\mathbf{S}_0 = \alpha^{-1}\mathbf{I}_{p\times p}$.

Thus if we use this prior, the mean vector and covariance matrix of the posterior p.d.f. simplify to,

$$\mathbf{m}_N = \beta\,\mathbf{S}_N\Phi^T\mathbf{t}$$
$$\mathbf{S}_N^{-1} = \alpha\mathbf{I} + \beta\Phi^T\Phi$$
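As a minimal numerical sketch of these two formulas (the toy data and the values of $\alpha$ and $\beta$ below are made up purely for illustration):

```python
import numpy as np

def posterior(Phi, t, alpha, beta):
    """Posterior N(m_N, S_N) for Bayesian linear regression with a
    zero-mean isotropic Gaussian prior N(0, alpha^-1 I)."""
    p = Phi.shape[1]
    S_N_inv = alpha * np.eye(p) + beta * Phi.T @ Phi   # S_N^{-1} = alpha I + beta Phi^T Phi
    S_N = np.linalg.inv(S_N_inv)
    m_N = beta * S_N @ Phi.T @ t                       # m_N = beta S_N Phi^T t
    return m_N, S_N

# toy data: noisy samples of a line, fitted with [1, x] basis functions
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
t = 0.5 + 2.0 * x + rng.normal(scale=0.1, size=x.size)
Phi = np.column_stack([np.ones_like(x), x])

m_N, S_N = posterior(Phi, t, alpha=1e-2, beta=100.0)
print(m_N)   # posterior mean weights, close to [0.5, 2.0]
```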

Now if we take the log of the posterior p.d.f. $\mathcal{N}(\mathbf{m}_N, \mathbf{S}_N)$ in order to maximize it with respect to $\mathbf{w}$, we find that this is equivalent to the minimization of the sum-of-squares error function with the addition of a quadratic regularization term, corresponding to $\lambda = \alpha/\beta$.

$$\ln p(\mathbf{w}|\mathbf{t}) = -\frac{\beta}{2}\sum_{i=1}^{N}\left(t_i - \mathbf{w}^T\boldsymbol{\phi}(x_i)\right)^2 - \frac{\alpha}{2}\mathbf{w}^T\mathbf{w} + \text{const} = -\beta E_D(\mathbf{w}) - \frac{\alpha}{2}\mathbf{w}^T\mathbf{w} + \text{const}$$

Thus we conclude that while maximizing the likelihood function is equivalent to minimizing the sum-of-squares error function, maximizing the posterior p.d.f. is equivalent to minimizing the regularized sum-of-squares error, i.e. the regularization technique.

The regularization technique is used to control the over-fitting phenomenon by adding a penalty term to the error function in order to discourage the coefficients from reaching large values.

This penalty term arises naturally when we maximize the posterior p.d.f. w.r.t. $\mathbf{w}$.
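To make the equivalence concrete, here is a small check, on the same kind of toy data as above, that the posterior mean coincides with the ridge-regression (regularized least-squares) solution $(\Phi^T\Phi + \lambda\mathbf{I})^{-1}\Phi^T\mathbf{t}$ when $\lambda = \alpha/\beta$:

```python
import numpy as np

alpha, beta = 1e-2, 100.0
lam = alpha / beta                       # lambda = alpha / beta

# toy data, as in the previous sketch
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
t = 0.5 + 2.0 * x + rng.normal(scale=0.1, size=x.size)
Phi = np.column_stack([np.ones_like(x), x])

# MAP solution / posterior mean: m_N = beta * S_N * Phi^T t
S_N = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

# regularized least-squares (ridge) solution
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ t)

print(np.allclose(m_N, w_ridge))         # True: MAP == regularized least squares
```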

Here, minimizing the sum-of-squares error function $E_D(\mathbf{w})$ is also the same as maximizing the likelihood p.d.f. Taking the log of $p(\mathbf{t}|\mathbf{w}, \beta)$ we get,

$$\ln p(\mathbf{t}|\mathbf{w}, \beta) = \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) - \beta E_D(\mathbf{w})$$

Thus maximizing the likelihood is equivalent to minimizing $E_D(\mathbf{w})$ (the remaining terms are constants w.r.t. $\mathbf{w}$).
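And a quick numerical check of this claim on the same toy data (here `scipy.optimize.minimize` is used only as a generic optimizer, not as part of the derivation): the $\mathbf{w}$ that maximizes the likelihood is just the ordinary least-squares fit.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
t = 0.5 + 2.0 * x + rng.normal(scale=0.1, size=x.size)
Phi = np.column_stack([np.ones_like(x), x])
beta = 100.0

def neg_log_likelihood(w):
    # -ln p(t | w, beta), dropping the constant terms N/2 ln(beta) - N/2 ln(2*pi)
    return 0.5 * beta * np.sum((t - Phi @ w) ** 2)

w_mle = minimize(neg_log_likelihood, x0=np.zeros(2)).x
w_lsq, *_ = np.linalg.lstsq(Phi, t, rcond=None)
print(np.allclose(w_mle, w_lsq, atol=1e-4))   # True: ML fit == least-squares fit
```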

Important Resources


  • Less Wrong is a site I came across while researching the author (Eliezer Yudkowsky) of one of my all-time favorite books, hpmor. I honestly believe in a lot of the ideas put forward in his book Rationality: From AI to Zombies.

  • Stack Exchange is a collection of extremely useful Q&A sites. Even though the approach to software development should not be totally dependent on Stack Overflow (as detailed here), no one will argue that these Q&A sites are really invaluable for solving certain classes of problems which would otherwise take a lot of time.

  • The various “kiss” sites provide me with a lot of opportunity to relax and also turn out to be a big distraction; as of now they consist of 3 sites,

If you want to keep some anime series permanently stored with you, AnimeOut does a brilliant job of encoding the anime series. We can get the best quality for storage there.

Check out my anime list at MAL.

Please note that piracy is not advisable, in the sense that the more people pirate, the less money the creators make and the lower-quality entertainment we receive, as they have to cut costs and target larger mainstream audiences.

I find Korean dramas and xianxia novels give the same sort of pleasure: I feel I enjoyed them a lot, but I didn't really gain anything new (insights) from them, unlike great anime and manga, which actually change the way you think. Korean dramas literally make me hate myself when I finish watching them, so I really can't compare, actually speaking.

Some books which make me ask myself - why am I blogging when such wonderful resources are available to learn?!
* Linear Algebra and Its Applications, 4th Edition, by Gilbert Strang

* Modern Control Engineering, 5th Edition, by Katsuhiko Ogata

Music which is unforgettable,
* Forever and Always - Taylor Swift

Novels which you can't put down,
* The Count of Monte Cristo

I’ll add more as I think of/find them.

Written with StackEdit.

Wednesday, March 2, 2016

PI/PID control via State Space Formulation

Example I

Consider the first example,
$$G(s) = \frac{k}{2s+1} = \frac{b}{s+a}$$

We shall try to control this system with a PI control.

The PI control law (the controller output) is,
$$u(t) = K_I\int e\,dt + K_C\,e(t) = [K_I, K_C]\left[\textstyle\int e\,dt,\ e(t)\right]^T$$

Let $x_1 = \int e\,dt$; this means that $\dot{x}_1 = x_2 = e(t)$.

$$u(t) = [K_I, K_C]\left[\textstyle\int e\,dt,\ e(t)\right]^T = [K_I, K_C]\,[x_1, x_2]^T$$

As per the State Space formulation,

$$\dot{X} = AX + BU$$

Note,
$$\dot{x}_2 = \dot{e}$$
$$e = r - y = -\frac{bu}{s+a} \quad (\text{taking } r = 0)$$
$$\dot{e} + ae = -bu$$
$$\dot{x}_2 = -a\,x_2 - b\,u$$
and from the definitions given above,
$$\dot{x}_1 = x_2$$

Thus we can form both the $A$ and $B$ matrices.
$$A = \begin{bmatrix} 0 & 1 \\ 0 & -a \end{bmatrix}$$

$$B = \begin{bmatrix} 0 \\ -b \end{bmatrix}$$

Also, applying the PI law as state feedback, $U = kX$ with $k = [K_I, K_C]$,

$$\dot{X} = AX + BU$$
$$sX = AX + B(kX)$$
$$(sI - A - Bk)X = 0$$

Thus the eigenvalues of the closed-loop system will be the solutions of the equation,
$$|sI - A - Bk| = 0$$

The desired closed-loop transfer function is,
$$D(s) = \frac{1}{\lambda s + 1}$$
But as we have a second-order system, we restate the desired closed-loop T.F. for comparison purposes,
$$D(s) = \frac{(zs+1)}{(\lambda s+1)(zs+1)}$$

Thus, to find the gain vector $k$, we can now compare the desired and actual closed-loop characteristic (pole) equations.
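A small numerical sketch of this comparison step (the plant values $a$, $b$ and the constants $\lambda$, $z$ below are invented for illustration): equate the coefficients of $|sI - A - Bk| = s^2 + (a + bK_C)s + bK_I$ with those of the monic form of $(\lambda s+1)(zs+1)$, then check the closed-loop eigenvalues.

```python
import numpy as np

# hypothetical plant G(s) = b/(s+a) and desired closed-loop constants
a, b = 0.5, 1.0          # plant parameters (illustrative values)
lam, z = 2.0, 0.4        # desired poles at -1/lam and -1/z

# match s^2 + (a + b*Kc)s + b*Ki  with  s^2 + ((lam+z)/(lam*z))s + 1/(lam*z)
Kc = ((lam + z) / (lam * z) - a) / b
Ki = 1.0 / (b * lam * z)

# verify: closed-loop eigenvalues of A + B*k should sit at -1/lam and -1/z
A = np.array([[0.0, 1.0],
              [0.0, -a]])
B = np.array([[0.0],
              [-b]])
k = np.array([[Ki, Kc]])
eig = np.linalg.eigvals(A + B @ k)
print(sorted(eig.real))          # approx [-1/z, -1/lam] = [-2.5, -0.5]
```

The extra constant $z$ is only there because a PI loop around a first-order plant has two closed-loop poles; any reasonable choice of the second desired pole works the same way.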

Example II

Given the system T.F. $G(s) = \frac{1}{s}$,
we need to find a proportional control for this system.

$$u(t) = K_C\,e(t)$$

We define $x_1 = e(t)$.
From the diagram we can see,
$$e = r - y$$
$$e = -\frac{u}{s} \quad (\text{taking } r = 0)$$
$$\dot{e} = -u$$

which means $\dot{x}_1 = -u$.

Thus,
$$A_{1\times 1} = 0$$
$$B_{1\times 1} = -1$$

The desired closed-loop T.F. is,
$$G_{CL}(s) = \frac{1}{\lambda s + 1}$$

Similarly to the first example, find $k$ by comparing the roots of
$$|sI - A - Bk| = 0$$
with those of $\lambda s + 1 = 0$, which gives $K_C = 1/\lambda$.
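The same machinery reduced to the scalar case, with an illustrative $\lambda$:

```python
lam = 2.0                 # desired closed-loop time constant (illustrative)
A, B = 0.0, -1.0          # integrator plant written in error coordinates
Kc = 1.0 / lam            # from matching s + Kc with s + 1/lam

# closed-loop pole of the scalar system A + B*Kc
print(A + B * Kc)         # -1/lam = -0.5
```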

Example III

This time consider a second-order system,

$$G(s) = \frac{k}{(z_1 s+1)(z_2 s+1)}$$

While the desired closed-loop system is,

$$G_{CL}(s) = \frac{1}{\lambda s + 1}$$

We restate the specifications because the system is a second-order system.

$$G_{CL}(s) = \frac{(z_1 s+1)(z_2 s+1)}{(z_1 s+1)(z_2 s+1)(\lambda s+1)}$$

We will now control this system using PID,

$$u = K_I\int e\,dt + K_C\,e(t) + K_D\frac{de}{dt}$$
$$u = [K_I, K_C, K_D]\left[\textstyle\int e\,dt,\ e(t),\ \frac{de}{dt}\right]^T$$

Let's define the following terms,
$$x_1 = \int e\,dt; \qquad \dot{x}_1 = x_2 = e(t); \qquad \dot{x}_2 = x_3 = \frac{de}{dt}$$

So we can rewrite $u$ as,
$$u = [K_I, K_C, K_D]\,[x_1, x_2, x_3]^T$$

As $e = r - y = -\dfrac{k\,u}{(z_1 s+1)(z_2 s+1)}$ (taking $r = 0$),
differentiating twice,
$$\ddot{e} = f(x_1, x_2, x_3) + g(u) = -\frac{1}{z_1 z_2}x_2 - \frac{z_1+z_2}{z_1 z_2}x_3 - \frac{k}{z_1 z_2}u$$

From these we can find $A_{3\times 3}$ and $B_{3\times 1}$.

Using these we can find $k$ by comparing the roots of
$$|sI - A - Bk| = 0$$
with those of $(\lambda s+1)(z_1 s+1)(z_2 s+1) = 0$.
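A numerical sketch for this third example (the plant constants $k$, $z_1$, $z_2$ and the desired $\lambda$ are invented for illustration): build $A$ and $B$ from the $\ddot{e}$ expression above, take the gains from coefficient matching, and verify the closed-loop eigenvalues.

```python
import numpy as np

# hypothetical plant G(s) = k/((z1 s + 1)(z2 s + 1)) and desired time constant lam
k, z1, z2 = 2.0, 1.0, 3.0      # illustrative plant constants
lam = 0.5                      # desired closed-loop time constant

# state x = [int(e), e, de/dt]; dynamics taken from the e-double-dot expression above
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0 / (z1 * z2), -(z1 + z2) / (z1 * z2)]])
B = np.array([[0.0],
              [0.0],
              [-k / (z1 * z2)]])

# gains from matching |sI - A - Bk| with (s + 1/lam)(s + 1/z1)(s + 1/z2):
# Ki = 1/(k lam), Kc = (z1+z2)/(k lam), Kd = z1 z2/(k lam)
Ki = 1.0 / (k * lam)
Kc = (z1 + z2) / (k * lam)
Kd = (z1 * z2) / (k * lam)
gains = np.array([[Ki, Kc, Kd]])

eig = np.linalg.eigvals(A + B @ gains)
print(sorted(eig.real))        # approx [-1/lam, -1/z1, -1/z2] = [-2.0, -1.0, -0.333]
```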

Inference

  • So we have used PI/PID control via the State Space formulation for the above system specifications.
  • IMC-based tuning or synthesis may also be used for these problems.
  • The tuning procedure is straightforward and relatively simple; the solution for all of these systems takes the same form.
  • We prefer PI/PID control via the State Space formulation when the desired closed-loop system specifications are performance-based.