Saturday, March 12, 2016

Bayesian Technique

Here we will explore the relationship between maximizing the likelihood p.d.f. $p(\mathbf{t}\mid\mathbf{w})$, maximizing the posterior p.d.f. $p(\mathbf{w}\mid\mathbf{t})$, minimization of the sum-of-squares error function $E_D(\mathbf{w})$, and the regularization technique.

When we maximise the posterior probability density function w.r.t. the parameter vector $\mathbf{w}$ using the “Bayesian technique”, we need both the likelihood function and the prior, since $p(\mathbf{w}\mid\mathbf{t}) \propto p(\mathbf{t}\mid\mathbf{w})\,p(\mathbf{w})$. (The denominator in Bayes’ theorem is just a normalization constant, so that doesn’t really matter.)

The model

We have a set of inputs $\mathbf{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$, with corresponding target values $\mathbf{t} = (t_1, \dots, t_N)^T$.

We assume that there exists some deterministic function $y(\mathbf{x}, \mathbf{w})$ such that we can model the relationship between these two as the sum of $y(\mathbf{x}, \mathbf{w})$ and additive Gaussian noise,

$$t = y(\mathbf{x}, \mathbf{w}) + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \beta^{-1})$$

$\beta$ is the precision (inverse variance) of the additive univariate Gaussian noise.

We define $y(\mathbf{x}, \mathbf{w})$ as a linear combination of $M$ basis functions,

$$y(\mathbf{x}, \mathbf{w}) = \sum_{j=0}^{M-1} w_j \phi_j(\mathbf{x}) = \mathbf{w}^T \boldsymbol{\phi}(\mathbf{x})$$

We define the parameter vector as $\mathbf{w} = (w_0, \dots, w_{M-1})^T$ and the basis vector as $\boldsymbol{\phi} = (\phi_0, \dots, \phi_{M-1})^T$.

This parameter vector is very important: the posterior p.d.f. $p(\mathbf{w}\mid\mathbf{t})$ is the updated probability of $\mathbf{w}$ given some training data, and it is found from the prior $p(\mathbf{w})$ together with the likelihood $p(\mathbf{t}\mid\mathbf{w})$, i.e. the p.d.f. of getting that training data given $\mathbf{w}$.

We usually choose $\phi_0(\mathbf{x}) = 1$ because we need a bias term in the model (to control the extent of the shift in $y$ itself - check this answer out).

For the data set as a whole we can write the set of model outputs as a vector,

$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{w}$$

Here the basis (design) matrix $\boldsymbol{\Phi}$ is a function of $\mathbf{X}$, is of size $N \times M$, and is defined with its $n$-th row being $\boldsymbol{\phi}(\mathbf{x}_n)^T = \left(\phi_0(\mathbf{x}_n), \dots, \phi_{M-1}(\mathbf{x}_n)\right)$ for $n = 1, \dots, N$ such rows.
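
As a small illustration (a minimal sketch of my own, assuming a simple polynomial basis $\phi_j(x) = x^j$, so that $\phi_0(x) = 1$ supplies the bias column), the design matrix can be built like this:

```python
# Minimal sketch: build the N x M design matrix Phi for a polynomial basis
# phi_j(x) = x**j, j = 0, ..., M-1 (phi_0(x) = 1 supplies the bias column).
import numpy as np

def design_matrix(x, M):
    """x: (N,) array of scalar inputs; returns Phi with Phi[n, j] = phi_j(x_n)."""
    return np.vander(x, N=M, increasing=True)   # columns are x**0, x**1, ..., x**(M-1)

x = np.linspace(0, 1, 5)       # toy inputs
Phi = design_matrix(x, M=3)    # shape (5, 3)
print(Phi)
```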

Likelihood function

Since we assume that these data points are drawn independently from the distribution, we multiply the individual data points’ p.d.f.s, which are Gaussian:

$$p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}\!\left(t_n \mid \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n),\, \beta^{-1}\right)$$

Note that the $n$-th data point’s p.d.f. is centered around $\mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n)$ as the mean.

Does the product of univariate Gaussians form a multivariate distribution in $\{t_n\}$? I ask because we choose a Gaussian prior, so the likelihood should also be Gaussian, right?
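
A quick check on the question above (my own note, using only the independence assumption): the product of the $N$ independent univariate Gaussians is exactly a multivariate Gaussian in $\mathbf{t} = (t_1, \dots, t_N)^T$ with a diagonal covariance,

$$\prod_{n=1}^{N}\mathcal{N}\!\left(t_n \mid \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n),\, \beta^{-1}\right) = \mathcal{N}\!\left(\mathbf{t} \mid \boldsymbol{\Phi}\mathbf{w},\, \beta^{-1}\mathbf{I}\right),$$

and, viewed as a function of $\mathbf{w}$, this is the exponential of a quadratic in $\mathbf{w}$, which is exactly why a Gaussian prior is conjugate here.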

Prior

We choose the corresponding conjugate prior, as we have a likelihood function which is the exponential of a quadratic function of $\mathbf{w}$.

No clue why, but for now, for this to make sense, let’s say that the likelihood function is also Gaussian - the product of all those Gaussians.

Thus the prior p.d.f. is a normal distribution,

$$p(\mathbf{w}) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}_0, \mathbf{S}_0)$$

Posterior

The posterior p.d.f. is also a Gaussian (as we choose a conjugate prior),

$$p(\mathbf{w} \mid \mathbf{t}) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}_N, \mathbf{S}_N)$$

After solving for $\mathbf{m}_N$ and $\mathbf{S}_N$ we get,

$$\mathbf{m}_N = \mathbf{S}_N\left(\mathbf{S}_0^{-1}\mathbf{m}_0 + \beta\,\boldsymbol{\Phi}^T\mathbf{t}\right), \qquad \mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta\,\boldsymbol{\Phi}^T\boldsymbol{\Phi}$$

(The complete derivation is available in Bishop - (2.116)) - coming soon


The sizes are,
The mean vectors, $\mathbf{m}_0$ and $\mathbf{m}_N$, are both $M \times 1$; they can be thought of as pseudo observations (from the prior) and the optimal parameter vector (the MAP estimate) respectively.
The covariance matrices, $\mathbf{S}_0$ and $\mathbf{S}_N$, are both $M \times M$.
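
Until the full derivation is written up, here is the standard completing-the-square step in one line (my own summary, not a substitute for Bishop’s derivation): multiply the prior and the likelihood, and compare the terms in the exponent that are quadratic and linear in $\mathbf{w}$ with those of $\mathcal{N}(\mathbf{w} \mid \mathbf{m}_N, \mathbf{S}_N)$,

$$-\frac{\beta}{2}(\mathbf{t} - \boldsymbol{\Phi}\mathbf{w})^T(\mathbf{t} - \boldsymbol{\Phi}\mathbf{w}) - \frac{1}{2}(\mathbf{w} - \mathbf{m}_0)^T\mathbf{S}_0^{-1}(\mathbf{w} - \mathbf{m}_0) = -\frac{1}{2}\mathbf{w}^T\left(\mathbf{S}_0^{-1} + \beta\boldsymbol{\Phi}^T\boldsymbol{\Phi}\right)\mathbf{w} + \mathbf{w}^T\left(\mathbf{S}_0^{-1}\mathbf{m}_0 + \beta\boldsymbol{\Phi}^T\mathbf{t}\right) + \text{const}.$$

The quadratic term gives $\mathbf{S}_N^{-1}$ and the linear term gives $\mathbf{S}_N^{-1}\mathbf{m}_N$, which is where the expressions above come from.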

We shall consider a particular form of Gaussian prior in order to simplify the treatment. Specifically, we assume a zero-mean isotropic Gaussian governed by a single precision parameter $\alpha$,

$$p(\mathbf{w} \mid \alpha) = \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \alpha^{-1}\mathbf{I})$$

So we basically take $\mathbf{m}_0 = \mathbf{0}$ and $\mathbf{S}_0 = \alpha^{-1}\mathbf{I}$.

Thus if we use this prior we can simplify the mean vector and the covariance matrix of the posterior p.d.f. to,

$$\mathbf{m}_N = \beta\,\mathbf{S}_N\boldsymbol{\Phi}^T\mathbf{t}, \qquad \mathbf{S}_N^{-1} = \alpha\mathbf{I} + \beta\,\boldsymbol{\Phi}^T\boldsymbol{\Phi}$$

Now if we take the log of the posterior p.d.f.,

$$\ln p(\mathbf{w} \mid \mathbf{t}) = -\frac{\beta}{2}\sum_{n=1}^{N}\left\{t_n - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n)\right\}^2 - \frac{\alpha}{2}\mathbf{w}^T\mathbf{w} + \text{const},$$

in order to maximize it with respect to $\mathbf{w}$, we find that what we obtain is equivalent to the minimization of the sum-of-squares error function with the addition of a quadratic regularization term, corresponding to $\lambda = \alpha/\beta$.
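
As a sanity check (my own sketch on synthetic data; the values of $\alpha$, $\beta$ and the toy targets below are assumptions, not from the text), the posterior mean $\mathbf{m}_N$ computed from the formulas above coincides with the regularized least-squares (ridge) solution with $\lambda = \alpha/\beta$:

```python
# Sketch: posterior mean/covariance for Bayesian linear regression with the
# isotropic prior p(w | alpha) = N(w | 0, alpha^{-1} I), and a check that
# m_N equals the regularized least-squares solution with lambda = alpha/beta.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, M = 2.0, 25.0, 3                             # assumed precisions and model size
x = np.linspace(0, 1, 20)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)    # synthetic targets

Phi = np.vander(x, N=M, increasing=True)                  # N x M design matrix

S_N_inv = alpha * np.eye(M) + beta * Phi.T @ Phi          # posterior precision
S_N = np.linalg.inv(S_N_inv)                              # posterior covariance
m_N = beta * S_N @ Phi.T @ t                              # posterior mean (MAP estimate)

# Ridge / regularized least squares with lambda = alpha/beta gives the same vector
lam = alpha / beta
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ t)
print(np.allclose(m_N, w_ridge))                          # True
```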

Thus we conclude that while maximising the likelihood function is equivalent to the minimization of the sum-of-squares error function, maximising the posterior p.d.f. is equivalent to the regularization technique.

The regularization technique is used to control the over-fitting phenomenon by adding a penalty term to the error function in order to discourage the coefficients from reaching large values.

This penalty term arises naturally when we maximize the posterior p.d.f. w.r.t. $\mathbf{w}$.

Here the minimization of the sum-of-squares error function is also the same as maximization of the likelihood p.d.f. Taking the log of the likelihood we get,

$$\ln p(\mathbf{t} \mid \mathbf{w}, \beta) = \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) - \frac{\beta}{2}\sum_{n=1}^{N}\left\{t_n - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n)\right\}^2$$

thus maximizing the likelihood is equal to maximizing $-\frac{\beta}{2}\sum_{n=1}^{N}\left\{t_n - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n)\right\}^2$, i.e. minimizing the sum-of-squares error (the rest are all constants w.r.t. $\mathbf{w}$).
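
For completeness, setting the gradient of this log-likelihood to zero gives the familiar closed-form (normal equations) solution, a standard result (Bishop eq. 3.15):

$$\mathbf{w}_{\text{ML}} = \left(\boldsymbol{\Phi}^T\boldsymbol{\Phi}\right)^{-1}\boldsymbol{\Phi}^T\mathbf{t}$$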

Important Resources


  • Less Wrong is a site I came across while researching the author (Eliezer Yudkowsky) of one of my all-time favorite books, hpmor. I honestly believe in a lot of the ideas put forward in his book Rationality: From AI to Zombies.

  • Stack Exchange is a collection of extremely useful Q&A sites. Even though the approach to software development should not be totally dependent on Stack Overflow (as detailed here), no one will argue that these Q&A sites are really invaluable for solving certain classes of problems which would otherwise take a lot of time.

  • The various “kiss” sites provide me with a lot of opportunity to relax and also turn out to be a big distraction; as of now the list consists of 3 sites,

If you want to keep some anime series permanently stored with you, AnimeOut does a brilliant job of encoding them; you can get the best quality for storage there.

Check out my anime list at MAL.

Please note that piracy is not advisable: the more that people pirate, the less money the creators make, and the lower-quality entertainment we receive, as they have to cut costs and target larger mainstream audiences.

I find Korean dramas and xianxia novels give the same sort of pleasure. It’s like I feel I enjoyed it a lot, but I didn’t really gain anything new (insights) from it, unlike great anime and manga, which actually change the way you think. Korean dramas literally make me hate myself when I finish watching them; I really can’t compare, actually speaking.

Some books which make me ask myself - why am I blogging when such wonderful resources are available to learn?!
* Linear Algebra and Its Applications, 4th Edition, by Gilbert Strang

* Modern Control Engineering, 5th Edition, by Katsuhiko Ogata

Music which is unforgettable,
* Forever and Always - Taylor Swift

Novels which you can’t put down,
* The Count of Monte Cristo

I’ll add more as I think of/find them.

Written with StackEdit.

Wednesday, March 2, 2016

PI/PID control via State Space Formulation

Example I

Consider the first example of

We shall try to control this system with a PI control.

Let , this means that

As per the State Space formulation,

Note,




and from the definition given above,

Thus we can form both A and B matrices.

Also ,



Thus the eigenvalues for the closed-loop system will be the solutions to the characteristic equation $\det(s\mathbf{I} - \mathbf{A}_{cl}) = 0$, where $\mathbf{A}_{cl}$ is the closed-loop state matrix (with state feedback $u = -\mathbf{K}\mathbf{x}$, $\mathbf{A}_{cl} = \mathbf{A} - \mathbf{B}\mathbf{K}$).

Desired closed loop Transfer Function is,

But as we have a second order system, we restate the desired closed loop T.F for comparison purposes,

Thus, to find the $k$ values, we can now compare the desired and actual closed-loop pole (characteristic) equations.
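
Since the original example’s plant isn’t reproduced here, the following sketch just illustrates the coefficient-matching step on an assumed first-order plant $G(s) = K/(\tau s + 1)$ with made-up values $K = 2$, $\tau = 5$, a PI controller $C(s) = K_p + K_i/s$, and a desired second-order characteristic polynomial $s^2 + 2\zeta\omega_n s + \omega_n^2$ (the state-space route yields the same characteristic polynomial, so the gains come out the same):

```python
# Sketch of the coefficient-matching idea for PI design, assuming a hypothetical
# first-order plant G(s) = K/(tau*s + 1) and desired poles given by (zeta, wn).
import sympy as sp

s, Kp, Ki = sp.symbols('s K_p K_i')
K, tau = 2, 5              # assumed plant parameters (not from the original example)
zeta, wn = 0.7, 1.0        # assumed desired closed-loop specifications

G = K / (tau*s + 1)        # plant transfer function
C = Kp + Ki/s              # PI controller

# Closed-loop characteristic polynomial = numerator of 1 + C(s)G(s), made monic
char_poly = sp.Poly(sp.together(1 + C*G).as_numer_denom()[0], s).monic()

# Desired characteristic polynomial s^2 + 2*zeta*wn*s + wn^2
desired = sp.Poly(s**2 + 2*zeta*wn*s + wn**2, s)

# Match coefficients of equal powers of s and solve for the controller gains
eqs = [sp.Eq(a, b) for a, b in zip(char_poly.all_coeffs(), desired.all_coeffs())]
gains = sp.solve(eqs, [Kp, Ki])
print(gains)               # e.g. {K_p: 3.0, K_i: 2.5} for the assumed numbers
```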

Example II

Given the system T.F.,
we need to find a proportional controller for this system.

We define
From the diagram we can see,


which means

Thus,

The desired closed loop T.F is,

As in the first example, we find the gain by solving,

Example III

This time consider a 2nd order system,

While the desired closed loop system is,

We restate the specifications because the system is a second order system.

We will now control this system using PID,


Let’s define the following terms,

So we can rewrite as,

As
double differentiating,

From these we can find and

Using these we can find by solving,

Inference

  • So we have used PI/PID control via State Space formulation for the above system specifications.
  • IMC-based tuning or synthesis may also be used for these problems.
  • The tuning procedure is straightforward and relatively simple; the solution for all systems takes the same form.
  • We prefer PI/PID control via State Space formulation when the desired closed-loop system specifications are performance-based.