The Curve Fitting


Biomedical Engineering I – ECE 620

Instructor: Xiyi Hang

Python Project #01:

General Curve Fitting and

Regularized Curve Fitting

Felipe Soria

CSUN ID: 108422780

10/15/2015

OBJECTIVES

To analyze both general curve fitting and regularized curve fitting, using Python and its libraries, such as matplotlib and numpy, to implement algorithms that simulate this machine-learning process.

 

INTRODUCTION

Polynomial Curve Fitting

Linear regression: a linear regression equation estimates the conditional expected value of a variable y, given the values of some other variable x.

Suppose that we want to predict the behavior of a curve, which may represent, for example, a model of investments or population growth. We can do that by observing the real values of a target variable t, which is the response to our input x.

Training process: we have a training set of N values of x, forming a vector with N entries, along with a vector t of N observations of the system's responses to x. So we have:

$\mathbf{x} \equiv (x_1, \dots, x_N)^T$

$\mathbf{t} \equiv (t_1, \dots, t_N)^T$

Suppose that the pure signal is a sinusoid, denoted by sin(2πx). Since the measured signal is not perfect in practice, the noise added to the pure signal should be taken into account in the training process; it can be simulated with a Gaussian noise function.

Gaussian noise is statistical noise having a probability density function equal to that of the normal distribution, which is also known as the Gaussian distribution. In other words, the values that the noise can take on are Gaussian-distributed.

The probability density function $p(x)$ of a Gaussian random variable $x$ is given by:

$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where $\mu$ represents the mean value and $\sigma$ the standard deviation.

Now the target values are calculated as the sinusoid plus Gaussian noise:

$t(x) = \sin(2\pi x) + P_G(x)$

where $P_G(x)$ denotes the Gaussian noise.
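As a concrete illustration, the sketch below shows how such a training set might be generated with numpy. The sample size N = 10, the noise standard deviation of 0.3, and the random seed are assumptions made for the example, not values taken from the project specification.

```python
import numpy as np

# Illustrative values: N, the noise level, and the seed are assumptions.
N = 10
rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, N)                      # input vector x = (x_1, ..., x_N)^T
noise = rng.normal(loc=0.0, scale=0.3, size=N)    # Gaussian noise P_G(x), mean 0, std 0.3
t = np.sin(2.0 * np.pi * x) + noise               # targets t = sin(2*pi*x) + P_G(x)
```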

What we want is to exploit the training set so that we can predict new values of t for new inputs x. For that, we fit a curve using a polynomial function described by:

$$y(x, \mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \dots + w_M x^M = \sum_{j=0}^{M} w_j x^j$$

where M is the order of the polynomial and $x^j$ denotes x raised to the power of j. The polynomial coefficients $w_0, \dots, w_M$ are collectively denoted by the vector w. The values of the coefficients will be determined by fitting the polynomial to the training data, through minimization of the error function E(w):

$$E(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N}\{\,y(x_n, \mathbf{w}) - t_n\,\}^2$$

This function measures the misfit between the function y(x, w), for any given value of w, and the training set data points.

The error function can be minimized using its gradient vector and Hessian matrix. After the mathematical manipulations and simplifications, the resulting equation is:

$$\mathbf{X}^T\mathbf{X}\,\mathbf{w}^* = \mathbf{X}^T\mathbf{t}$$

Hence, $\mathbf{w}^* = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{t}$,

where $\mathbf{X}$ is the $N \times (M+1)$ design matrix whose element $(n, j)$ is $x_n^{\,j}$.
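A minimal sketch of how this solution could be computed with numpy is shown below; the helper names fit_polynomial and predict are illustrative, and the design matrix is built with np.vander. For high orders X^T X becomes ill-conditioned, so np.linalg.lstsq(X, t) is a numerically safer alternative.

```python
import numpy as np

def fit_polynomial(x, t, M):
    """Fit an M-th order polynomial by solving the normal equations X^T X w = X^T t."""
    X = np.vander(x, M + 1, increasing=True)   # design matrix: column j holds x**j
    # Solving the linear system avoids forming the explicit inverse.
    return np.linalg.solve(X.T @ X, X.T @ t)

def predict(x, w):
    """Evaluate y(x, w) = sum_j w_j * x**j."""
    X = np.vander(x, len(w), increasing=True)
    return X @ w
```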

Therefore, we can choose the best values of w, denoted w*, which give the fitted polynomial y(x, w*). We still have to vary the order of the polynomial and repeat the fit for each order, so that we can choose the combination of w and M that yields the smallest error. Varying M from 0 to 9, computing the best w for each model, and plotting, we obtain the following graphs for M = 0, M = 1, M = 3 and M = 9, for example:

[Figures: fitted polynomials y(x, w*) for M = 0, 1, 3 and 9, plotted with the training points and the underlying sin(2πx) curve.]

We can see that, for this example, when M = 0 and M = 1 the polynomial gives a poor representation of the pure sinusoidal function, while M = 9 passes exactly through the target points but represents the sinusoidal function poorly; this is called over-fitting. When M = 3 we observe a better fit to the sinusoidal function, even though in this case the polynomial does not pass exactly through the target points.
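The comparison described above can be reproduced with matplotlib. This sketch reuses the x and t arrays and the hypothetical fit_polynomial and predict helpers from the earlier snippets.

```python
import numpy as np
import matplotlib.pyplot as plt

x_plot = np.linspace(0.0, 1.0, 200)               # dense grid for smooth curves

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, M in zip(axes.ravel(), (0, 1, 3, 9)):
    w_star = fit_polynomial(x, t, M)
    ax.plot(x_plot, np.sin(2.0 * np.pi * x_plot), "g-", label="sin(2πx)")
    ax.plot(x_plot, predict(x_plot, w_star), "r-", label="y(x, w*)")
    ax.plot(x, t, "bo", label="training data")
    ax.set_title(f"M = {M}")
    ax.set_ylim(-1.5, 1.5)
    ax.legend(fontsize=7)
plt.tight_layout()
plt.show()
```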

The following table shows the coefficients w* for polynomials of different orders:

[Table: coefficients w* for polynomials of different orders.]

Now we can find out which model best fits the original sinusoidal curve. For that it is convenient to use the root-mean-square (RMS) error, defined by:

$$E_{RMS} = \sqrt{\frac{2E(\mathbf{w}^*)}{N}}$$

Calculating the RMS error for each model with order M ranging from 0 to 9, we can generate a graph for both training and testing RMS errors versus M:

[Figure: training and test RMS errors versus the polynomial order M.]

The graph above shows the typical behavior of the RMS errors for each M. We can see that between third and eighth order the polynomial fits tend to be better models, while in the other cases they tend either to be poor representations or to exhibit over-fitting.
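A sketch of how the training and test RMS errors in such a graph could be computed is shown below; the test set is simply drawn from the same model, and its size of 100 points is an assumption for the example. It reuses fit_polynomial and predict from the earlier snippets.

```python
import numpy as np

def rms_error(x, t, w):
    """Root-mean-square error E_RMS = sqrt(2 * E(w*) / N)."""
    residuals = predict(x, w) - t
    E = 0.5 * np.sum(residuals ** 2)              # sum-of-squares error E(w*)
    return np.sqrt(2.0 * E / len(x))

# Test data drawn from the same model (size and noise level are assumptions).
rng_test = np.random.default_rng(1)
x_test = np.linspace(0.0, 1.0, 100)
t_test = np.sin(2.0 * np.pi * x_test) + rng_test.normal(0.0, 0.3, size=100)

for M in range(10):
    w_star = fit_polynomial(x, t, M)
    print(M, rms_error(x, t, w_star), rms_error(x_test, t_test, w_star))
```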

We can improve the model by increasing the size of the data set. If we keep M = 9 and take 15 or 100 observations instead of 10, the over-fitting problem is reduced:

[Figure: M = 9 fits obtained with data sets of 15 and 100 observations.]

Another way to improve the model is to introduce a regularization factor λ into the training process, so that the error function is now described by:

$$\tilde{E}(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N}\{\,y(x_n, \mathbf{w}) - t_n\,\}^2 + \frac{\lambda}{2}\|\mathbf{w}\|^2$$

where $\|\mathbf{w}\|^2 \equiv \mathbf{w}^T\mathbf{w} = w_0^2 + w_1^2 + \dots + w_M^2$, and the coefficient λ governs the relative importance of the regularization term compared with the sum-of-squares error term.

As in the general case, minimizing this error function leads to a simple equation:

$$(\mathbf{X}^T\mathbf{X} + \lambda\mathbf{I})\,\mathbf{w}^* = \mathbf{X}^T\mathbf{t}$$

Hence, $\mathbf{w}^* = (\mathbf{X}^T\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}^T\mathbf{t}$,

where $\mathbf{I}$ is the identity matrix. Then, by solving this linear system, we can generate the coefficients w* as ln(λ) ranges from −∞ to 0:

[Table: coefficients w* for different values of ln(λ).]

Then, we are able to calculate the RMS error and plot it versus ln(λ):

[Figure: RMS error versus ln(λ).]

Note that the best fits occur for ln(λ) values between −35 and −20, so the regularization factor should be chosen in that range in this case.
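Below is a minimal sketch of the regularized fit, reusing the assumed x, t, and rms_error helpers from the earlier snippets; the grid of ln(λ) values swept here is illustrative rather than the exact grid used in the report.

```python
import numpy as np

def fit_polynomial_regularized(x, t, M, lam):
    """Solve (X^T X + lambda * I) w = X^T t for the regularized coefficients."""
    X = np.vander(x, M + 1, increasing=True)
    A = X.T @ X + lam * np.eye(M + 1)
    return np.linalg.solve(A, X.T @ t)

# Sweep ln(lambda) over an illustrative grid and report the training RMS error for M = 9.
for ln_lam in np.arange(-40.0, 1.0, 5.0):
    w_star = fit_polynomial_regularized(x, t, 9, np.exp(ln_lam))
    print(f"ln(lambda) = {ln_lam:5.1f}   E_RMS = {rms_error(x, t, w_star):.4f}")
```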

STRATEGIES

1. A function for the training process of the general curve fitting with an Mth-order polynomial:

...
