Introduction to Linear Regression: Polynomial Regression

October 26, 2014

Polynomial Regression

This form of regression is used to fit more complex functions. It is a general concept not restricted to linear regression; it is also commonly used in classification algorithms such as Logistic Regression and Neural Networks. I hope to talk about it in detail in a future post. For now, take a look at the following scatter plot.

It's obvious that the data set doesn't fit a straight line as before. Even if we fit the data to a straight line (plot no. 02), the error would be very high, and ultimately predictions made using the model would be inaccurate.
The solution is to use polynomial features.

Basically, what we do is create new features from the base feature variables. So for one particular set of base feature variables, there can be many polynomial models. I'll talk about selecting the best model in a future post. For now, let's see how a polynomial model is built (a code sketch of building the feature columns follows the models below).

Base model:
\(\ h(x) = \theta_0+ \theta_1x_1 + \theta_2x_2\)

Few Polynomial models created using base feature variables:
\(\ h(x) = \theta_0+ \theta_1x_1 + \theta_2x_2 + \theta_3x_1x_2 \)
\(\ h(x) = \theta_0+ \theta_1x_1 + \theta_2x_1^2 + \theta_3x_2^3 \)
\(\ h(x) = \theta_0+ \theta_1x_1 + \theta_2x_2 + \theta_3x_1x_2 + \theta_4x_1^2 + \theta_5x_2^3 \)
\(\ h(x) = \theta_0+ \theta_1x_1 + \theta_2x_2 + \theta_3{\sqrt{x_1}} \)
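
To make this concrete, here is a minimal NumPy sketch of building the design matrix for the third model above. The sample values are made up purely for illustration; only the column construction matters.

```python
import numpy as np

# Hypothetical base features: five samples of x1 and x2 (made-up values).
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.5, 3.5, 2.5, 4.0])

# Design matrix for h(x) = θ0 + θ1·x1 + θ2·x2 + θ3·x1x2 + θ4·x1² + θ5·x2³.
# Each new column is simply a function of the base features.
X = np.column_stack([
    np.ones_like(x1),  # intercept term (θ0)
    x1,                # θ1
    x2,                # θ2
    x1 * x2,           # θ3: interaction term
    x1 ** 2,           # θ4
    x2 ** 3,           # θ5
])
print(X.shape)  # (5, 6): one column per parameter θ
```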

Now we can find a more suitable model that fits the data set.
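
As a quick illustration of why a higher-degree model can fit better, here is a sketch comparing a straight line against a degree-2 polynomial on a made-up one-dimensional data set. The numbers are invented; only the comparison of squared errors matters.

```python
import numpy as np

# Made-up 1-D data set that a straight line fits poorly.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 0.3, 0.2, 1.0, 2.9, 6.2])

# Fit a straight line and a degree-2 polynomial, then compare
# the sum of squared errors of each fit.
for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    sse = np.sum((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {degree}: SSE = {sse:.3f}")
```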

Up to now we have talked about the 3 forms of linear regression. What we need next is a way of solving for the optimal values of \(\ \theta \) and finding the corresponding hypothesis function \(\ h(x) \). Simply put, we need a way to figure out the best possible curve to fit the data. To do this there are two possible approaches: a numerical method which involves a Cost Function, and an analytical method which uses the Normal Equation. A small preview of the analytical route is sketched below.
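
Both approaches are covered in upcoming posts, but as a preview, the normal equation gives \(\ \theta = (X^TX)^{-1}X^Ty \), which can be solved in one line. This sketch reuses the made-up data from above; np.linalg.lstsq is used instead of an explicit matrix inverse for numerical stability.

```python
import numpy as np

# Made-up data: the same 1-D example as above, with a quadratic model.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 0.3, 0.2, 1.0, 2.9, 6.2])
X = np.column_stack([np.ones_like(x), x, x ** 2])  # h(x) = θ0 + θ1x + θ2x²

# Normal equation θ = (XᵀX)⁻¹Xᵀy, solved via least squares for stability.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # optimal θ0, θ1, θ2
```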

NEXT: Linear Regression Cost Function

