
Sunday, January 31, 2016

Polynomial curve fitting

As given in the book, we are trying to estimate the relationship between an independent variable $x$ and its dependent variable $t$ (both scalars) by modelling the relationship between them as an $M$th-order polynomial. The model output $\hat{t}$ can then be written as a function of $x$ and $\mathbf{w}$.

Note that we can store the $M+1$ coefficients that uniquely define this "model" as a vector $\mathbf{w}$,

$$\mathbf{w} = [w_0, w_1, \dots, w_M]^T$$


And by constructing a column vector (which is a function of $x$),

$$\mathbf{k}(x) = [1, x, x^2, \dots, x^M]^T$$

we can rewrite the sum in $y(x, \mathbf{w})$ more concisely as the dot product between the two vectors, as shown below:

$$\hat{t} = y(x, \mathbf{w}) = \sum_{i=0}^{M} w_i x^i = \mathbf{k}(x) \cdot \mathbf{w}$$
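
For concreteness, here is a minimal NumPy sketch of this dot-product form. The helper names `k` and `y` simply mirror the notation above (they are not from any library), and the coefficient values are made up.

```python
import numpy as np

def k(x, M):
    """Feature vector k(x) = [1, x, x^2, ..., x^M] for a scalar x."""
    return np.array([x ** j for j in range(M + 1)])

def y(x, w):
    """Model output y(x, w) = k(x) . w."""
    return float(np.dot(k(x, len(w) - 1), w))

# Example: a cubic with coefficients [w0, w1, w2, w3]
w = np.array([1.0, -2.0, 0.5, 0.1])
print(y(2.0, w))   # 1 - 4 + 2 + 0.8 = -0.2
```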

So far we have been talking about a single $(x, t)$ combination. What if we are given a training set consisting of $N$ observations of the system we are trying to model, i.e. we get each $x_i$ and its corresponding $t_i$?

Where,
$$\mathbf{x} = [x_1, x_2, \dots, x_N]^T$$
$$\mathbf{t} = [t_1, t_2, \dots, t_N]^T$$

We can now choose an error function which measures the misfit between the polynomial function and the training set. The error function corresponds to (one half of) the sum of the squares of the displacements of each data point $(x_i, t_i)$ from the model estimate $(x_i, \hat{t}_i)$.

$$E(\mathbf{w}) = \frac{1}{2}\sum_{i=1}^{N}\left(\hat{t}_i - t_i\right)^2 = \frac{1}{2}\sum_{i=1}^{N}\left(\mathbf{k}(x_i) \cdot \mathbf{w} - t_i\right)^2$$
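
A small, self-contained sketch of this error function in NumPy; the data points and the function name `error` are made up for illustration, and the model is the same $\mathbf{k}(x) \cdot \mathbf{w}$ as above.

```python
import numpy as np

def error(w, xs, ts):
    """E(w) = 0.5 * sum_i (k(x_i).w - t_i)^2, with k(x) = [1, x, ..., x^M]."""
    M = len(w) - 1
    residuals = [np.dot([x ** j for j in range(M + 1)], w) - t
                 for x, t in zip(xs, ts)]
    return 0.5 * np.sum(np.square(residuals))

# Tiny example: three points against the straight line w = [0, 1] (i.e. y = x)
print(error(np.array([0.0, 1.0]), [0.0, 1.0, 2.0], [0.1, 0.9, 2.2]))
# 0.5 * (0.01 + 0.01 + 0.04) = 0.03
```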

Now we see how squaring the error, besides removing the distinction between positive and negative errors, also helps when we minimize with respect to $\mathbf{w}$: the derivative of $E$ is linear in $\mathbf{w}$, which assures us that the minimization of $E(\mathbf{w})$ has a unique solution.

This is one of the reasons we prefer to square the difference rather than take its absolute value (as in the mean absolute error), in which case the derivative would have discontinuities at every point where an individual error changes sign.

So, in order to find the best-fit polynomial (defined by $\mathbf{w}$) for the given training set, we differentiate the error function with respect to $\mathbf{w}$ and set the resulting linear equations in $\mathbf{w}$ to 0.

$$\left.\frac{\partial E(\mathbf{w})}{\partial \mathbf{w}}\right|_{\mathbf{w}=\mathbf{w}^*} = \begin{bmatrix} \dfrac{\partial E}{\partial w_0} \\ \vdots \\ \dfrac{\partial E}{\partial w_{M-1}} \\ \dfrac{\partial E}{\partial w_M} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}$$

Now the partial derivative of the error function with respect to $w_j$ is,

$$\frac{\partial E}{\partial w_j} = \sum_{i=1}^{N}\left(y(x_i, \mathbf{w}) - t_i\right)\, x_i^{\,j} = 0$$

The above equation holds for every $j$ from $0$ to $M$, since $\partial y(x_i, \mathbf{w}) / \partial w_j = x_i^{\,j}$; in particular, for $j = 0$ the factor $x_i^{\,0}$ is simply 1.

Note that when we partially differentiate with respect to $w_j$, all the other $w$'s are treated as constants. Let us now represent the above set of equations as a system of linear equations; we can then easily see how the Moore–Penrose pseudoinverse is the solution to this minimization problem.

Now we should note that we have the training set available to us, so $\mathbf{x}$ and $\mathbf{t}$ are known while $\mathbf{w}$ is our unknown.
So we construct a matrix $A$ of size $N \times (M+1)$ as follows,

$$A = \begin{bmatrix} 1 & x_1 & x_1^2 & x_1^3 & \dots & x_1^M \\ 1 & x_2 & x_2^2 & x_2^3 & \dots & x_2^M \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_N & x_N^2 & x_N^3 & \dots & x_N^M \end{bmatrix}$$

and we know $\mathbf{w}$ is an $(M+1) \times 1$ vector. Our minimization of the error function is then equivalent to finding a $\mathbf{w}$ for the equation below that gives a 'best fit' (least-squares) solution.

$$\left(A\mathbf{w} - \mathbf{t}\right) = [0]_{N \times 1}$$

Elaborating on why they are equivalent: the error function is a sum of squares, so its smallest conceivable value is reached when all the individual errors are 0 (in practice we settle for the $\mathbf{w}$ that makes them collectively as small as possible). The $i$-th entry of the column vector $A\mathbf{w}$ is the model's output for the input $x_i$, so $(A\mathbf{w} - \mathbf{t})$ consists of the $N$ error values, one for each $(x_i, t_i)$ pair in the training set.
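
Here is a minimal sketch of this setup in NumPy, with a made-up toy training set. `np.vander` with `increasing=True` builds exactly this $N \times (M+1)$ matrix, and the residual vector $A\mathbf{w} - \mathbf{t}$ recovers the sum-of-squares error from before.

```python
import numpy as np

# Toy training set (made-up numbers, just for illustration)
xs = np.array([0.0, 0.5, 1.0, 1.5])
ts = np.array([0.1, 0.4, 0.9, 1.7])
M = 2

# N x (M+1) design matrix: row i is [1, x_i, x_i^2, ..., x_i^M]
A = np.vander(xs, N=M + 1, increasing=True)

w = np.zeros(M + 1)                     # any candidate w
residuals = A @ w - ts                  # the vector (Aw - t), one entry per training pair
print(0.5 * np.sum(residuals ** 2))     # exactly the sum-of-squares error E(w)
```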

I will write a post explaining how the Moore–Penrose pseudoinverse works, but for now there are many sources for that.
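
In the meantime, here is a compact sketch of the algebra, under the assumption that $A^T A$ is invertible: setting the gradient of the sum-of-squares error to zero gives the normal equations, which rearrange directly into the formula below.

$$
\nabla_{\mathbf{w}}\,\tfrac{1}{2}\lVert A\mathbf{w} - \mathbf{t}\rVert^2
= A^T\left(A\mathbf{w} - \mathbf{t}\right) = \mathbf{0}
\;\;\Rightarrow\;\;
A^T A\,\mathbf{w} = A^T\mathbf{t}
\;\;\Rightarrow\;\;
\mathbf{w}^* = (A^T A)^{-1} A^T \mathbf{t}
$$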

Therefore the least-squares solution $\mathbf{w}^*$ is,

$$\mathbf{w}^* = (A^T A)^{-1} A^T \mathbf{t}$$
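
As a rough illustration (reusing the toy `A` and `ts` from the earlier sketch), the formula above can be evaluated directly via the normal equations, or equivalently via NumPy's pseudoinverse and least-squares routines, which compute the same solution more robustly when $A^T A$ is ill-conditioned.

```python
import numpy as np

xs = np.array([0.0, 0.5, 1.0, 1.5])
ts = np.array([0.1, 0.4, 0.9, 1.7])
A = np.vander(xs, N=3, increasing=True)    # M = 2

# Normal-equations form of w* = (A^T A)^{-1} A^T t ...
w_normal = np.linalg.solve(A.T @ A, A.T @ ts)
# ... and the same least-squares solution via the Moore-Penrose pseudoinverse
# and via np.linalg.lstsq.
w_pinv  = np.linalg.pinv(A) @ ts
w_lstsq = np.linalg.lstsq(A, ts, rcond=None)[0]

print(w_normal)
print(np.allclose(w_normal, w_pinv), np.allclose(w_normal, w_lstsq))
```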

Written with StackEdit.
