{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "remove_input" ] }, "outputs": [], "source": [ "path_data = '../../../../data/'\n", "\n", "import numpy as np\n", "import pandas as pd\n", "\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('fivethirtyeight')\n", "\n", "import functools\n", "\n", "import warnings\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "remove_input" ] }, "outputs": [], "source": [ "def standard_units(any_numbers):\n", " \"Convert any array of numbers to standard units.\"\n", " return (any_numbers - np.mean(any_numbers))/np.std(any_numbers) \n", "\n", "def correlation(t, x, y):\n", " return np.mean(standard_units(t[x])*standard_units(t[y]))\n", "\n", "def slope(table, x, y):\n", " r = correlation(table, x, y)\n", " return r * np.std(table[y])/np.std(table[x])\n", "\n", "def intercept(table, x, y):\n", " a = slope(table, x, y)\n", " return np.mean(table[y]) - a * np.mean(table[x])\n", "\n", "def fit(table, x, y):\n", " \"\"\"Return the height of the regression line at each x value.\"\"\"\n", " a = slope(table, x, y)\n", " b = intercept(table, x, y)\n", " return a * table[x] + b" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Method of Least Squares ###\n", "We have retraced the steps that Galton and Pearson took to develop the equation of the regression line that runs through a football shaped scatter plot. But not all scatter plots are football shaped, not even linear ones. Does every scatter plot have a \"best\" line that goes through it? If so, can we still use the formulas for the slope and intercept developed in the previous section, or do we need new ones?\n", "\n", "To address these questions, we need a reasonable definition of \"best\". Recall that the purpose of the line is to *predict* or *estimate* values of $y$, given values of $x$. Estimates typically aren't perfect. Each one is off the true value by an *error*. A reasonable criterion for a line to be the \"best\" is for it to have the smallest possible overall error among all straight lines.\n", "\n", "In this section we will make this criterion precise and see if we can identify the best straight line under the criterion." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our first example is a dataset that has one row for every chapter of the novel \"Little Women.\" The goal is to estimate the number of characters (that is, letters, spaces punctuation marks, and so on) based on the number of periods. Recall that we attempted to do this in the very first lecture of this course." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "little_women = pd.read_csv(path_data + 'little_women.csv')\n", "\n", "periods = little_women['Periods']\n", "\n", "little_women.pop('Periods')\n", "\n", "little_women.insert(0, 'Periods', periods)\n", "\n", "little_women.head(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(6,6))\n", "\n", "ax.scatter(little_women['Periods'], \n", " little_women['Characters'], \n", " color='darkblue')\n", "\n", "x_label = 'Periods'\n", "\n", "y_label = 'Characters'\n", "\n", "y_vals = ax.get_yticks()\n", "\n", "plt.ylabel(y_label)\n", "\n", "plt.xlabel(x_label)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To explore the data, we will need to use the functions `correlation`, `slope`, `intercept`, and `fit` defined in the previous section." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "correlation(little_women, 'Periods', 'Characters')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The scatter plot is remarkably close to linear, and the correlation is more than 0.92." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Error in Estimation ###\n", "\n", "The graph below shows the scatter plot and line that we developed in the previous section. We don't yet know if that's the best among all lines. We first have to say precisely what \"best\" means." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_with_predictions = little_women.copy()\n", "\n", "lw_with_predictions['Linear Prediction'] = fit(little_women, 'Periods', 'Characters')\n", "\n", "fig, ax = plt.subplots(figsize=(8,5))\n", "\n", "ax.scatter(lw_with_predictions['Periods'], \n", " lw_with_predictions['Characters'], \n", " label='Characters', \n", " color='darkblue')\n", "\n", "ax.scatter(lw_with_predictions['Periods'], \n", " lw_with_predictions['Linear Prediction'], \n", " label='Linear Prediction', \n", " color='gold')\n", "\n", "x_label = 'Periods'\n", "\n", "y_label = ''\n", "\n", "y_vals = ax.get_yticks()\n", "\n", "plt.ylabel(y_label)\n", "\n", "plt.xlabel(x_label)\n", "\n", "ax.legend(bbox_to_anchor=(1.04,1), loc=\"upper left\", frameon=False)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Corresponding to each point on the scatter plot, there is an error of prediction calculated as the actual value minus the predicted value. It is the vertical distance between the point and the line, with a negative sign if the point is below the line." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual = lw_with_predictions['Characters']\n", "predicted = lw_with_predictions['Linear Prediction']\n", "errors = actual - predicted" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_with_predictions['Error'] = errors\n", "lw_with_predictions.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use `slope` and `intercept` to calculate the slope and intercept of the fitted line. The graph below shows the line (in light blue). The errors corresponding to four of the points are shown in red. There is nothing special about those four points. They were just chosen for clarity of the display. 
"The function `lw_errors` takes a slope and an intercept (in that order) as its arguments and draws the figure." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_reg_slope = slope(little_women, 'Periods', 'Characters')\n", "\n", "lw_reg_intercept = intercept(little_women, 'Periods', 'Characters')\n", "\n", "print('lw_reg_slope =', lw_reg_slope)\n", "print('lw_reg_intercept =', lw_reg_intercept)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "remove_input" ] }, "outputs": [], "source": [ "# Four points whose errors will be drawn in red\n", "sample = [[131, 14431], [231, 20558], [392, 40935], [157, 23524]]\n", "\n", "def lw_errors(slope, intercept):\n", "    \"\"\"Draw the Little Women scatter plot, the line with the given\n", "    slope and intercept, and the errors at the four sample points.\"\"\"\n", "    fig, ax = plt.subplots(figsize=(6,6))\n", "\n", "    ax.scatter(little_women['Periods'], \n", "               little_women['Characters'], \n", "               color='darkblue')\n", "\n", "    plt.xlabel('Periods')\n", "    plt.ylabel('Characters')\n", "\n", "    # Draw the line across the range of the data\n", "    xlims = np.array([50, 450])\n", "    plt.plot(xlims, (slope * xlims) + intercept, lw=2)\n", "\n", "    # Draw each error as a vertical red segment from point to line\n", "    for x, y in sample:\n", "        plt.plot([x, x], [y, slope * x + intercept], color='r', lw=2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Slope of Regression Line:    ', np.round(lw_reg_slope), 'characters per period')\n", "print('Intercept of Regression Line:', np.round(lw_reg_intercept), 'characters')\n", "lw_errors(lw_reg_slope, lw_reg_intercept)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Had we used a different line to create our estimates, the errors would have been different. The first graph below shows how big the errors would be if we were to use one particular bad line for estimation. The second graph shows the large errors obtained by using a line that is downright silly." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_errors(50, 10000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_errors(-100, 50000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Root Mean Squared Error ###\n", "\n", "What we need now is one overall measure of the rough size of the errors. You will recognize the approach to creating this measure: it's exactly the way we developed the SD.\n", "\n", "If you use any arbitrary line to calculate your estimates, then some of your errors are likely to be positive and others negative. To avoid cancellation when measuring the rough size of the errors, we will take the mean of the squared errors rather than the mean of the errors themselves.\n", "\n", "The mean squared error of estimation is a measure of roughly how big the squared errors are, but as we have noted earlier, its units are hard to interpret. Taking the square root yields the root mean squared error (rmse), which is in the same units as the variable being predicted and is therefore much easier to understand." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Minimizing the Root Mean Squared Error ###\n", "\n", "Our observations so far can be summarized as follows.\n", "\n", "- To get estimates of $y$ based on $x$, you can use any line you want.\n", "- Every line has a root mean squared error of estimation.\n", "- \"Better\" lines have smaller errors.\n", "\n", "Is there a \"best\" line? That is, is there a line that minimizes the root mean squared error among all lines?" ] }
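, { "cell_type": "markdown", "metadata": {}, "source": [ "Before tackling that question in full generality, here is a quick warm-up: the root mean squared error of the regression line itself, computed straight from the definition using the `errors` array from earlier. A function that does this for any line follows below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# rmse of the regression line, straight from the definition:\n", "# square the errors, take the mean, then take the square root\n", "np.sqrt(np.mean(errors ** 2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [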
\n", "\n", "To answer this question, we will start by defining a function `lw_rmse` to compute the root mean squared error of any line through the Little Women scatter diagram. The function takes the slope and the intercept (in that order) as its arguments." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def lw_rmse(slope, intercept):\n", " lw_errors(slope, intercept)\n", " x = little_women['Periods']\n", " y = little_women['Characters']\n", " fitted = slope * x + intercept\n", " mse = np.mean((y - fitted) ** 2)\n", " print(\"Root mean squared error:\", mse ** 0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_rmse(50, 10000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_rmse(-100, 50000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Bad lines have big values of rmse, as expected. But the rmse is much smaller if we choose a slope and intercept close to those of the regression line." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_rmse(90, 4000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the root mean squared error corresponding to the regression line. By a remarkable fact of mathematics, no other line can beat this one. \n", "\n", "- **The regression line is the unique straight line that minimizes the mean squared error of estimation among all straight lines.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_rmse(lw_reg_slope, lw_reg_intercept)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The proof of this statement requires abstract mathematics that is beyond the scope of this course. On the other hand, we do have a powerful tool – Python – that performs large numerical computations with ease. So we can use Python to confirm that the regression line minimizes the mean squared error." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Numerical Optimization ###\n", "First note that a line that minimizes the root mean squared error is also a line that minimizes the squared error. The square root makes no difference to the minimization. So we will save ourselves a step of computation and just minimize the mean squared error (mse).\n", "\n", "We are trying to predict the number of characters ($y$) based on the number of periods ($x$) in chapters of Little Women. If we use the line \n", "$$\n", "\\mbox{prediction} ~=~ ax + b\n", "$$\n", "it will have an mse that depends on the slope $a$ and the intercept $b$. The function `lw_mse` takes the slope and intercept as its arguments and returns the corresponding mse." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def lw_mse(any_slope, any_intercept):\n", " x = little_women['Periods']\n", " y = little_women['Characters']\n", " fitted = any_slope*x + any_intercept\n", " return np.mean((y - fitted) ** 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check that `lw_mse` gets the right answer for the root mean squared error of the regression line. Remember that `lw_mse` returns the mean squared error, so we have to take the square root to get the rmse." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_mse(lw_reg_slope, lw_reg_intercept)**0.5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's the same as the value we got by using `lw_rmse` earlier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_rmse(lw_reg_slope, lw_reg_intercept)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can confirm that `lw_mse` returns the correct value for other slopes and intercepts too. For example, here is the rmse of the extremely bad line that we tried earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_mse(-100, 50000)**0.5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here is the rmse for a line that is close to the regression line." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lw_mse(90, 4000)**0.5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we experiment with different values, we can find a low-error slope and intercept through trial and error, but that would take a while. Fortunately, there is a Python function that does all the trial and error for us.\n", "\n", "The `minimize` function (defined below) can be used to find the arguments of a function for which the function returns its minimum value. Python uses a similar trial-and-error approach, following the changes that lead to incrementally lower output values. \n", "\n", "The argument of `minimize` is a function that itself takes numerical arguments and returns a numerical value. For example, the function `lw_mse` takes a numerical slope and intercept as its arguments and returns the corresponding mse. \n", "\n", "The call `minimize(lw_mse)` returns an array consisting of the slope and the intercept that minimize the mse. These minimizing values are excellent approximations arrived at by intelligent trial-and-error, not exact values based on formulas." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy import optimize\n", "\n", "def minimize(f, start=None, smooth=False, log=None, array=False, **vargs):\n", " \"\"\"Minimize a function f of one or more arguments.\n", " Args:\n", " f: A function that takes numbers and returns a number\n", " start: A starting value or list of starting values\n", " smooth: Whether to assume that f is smooth and use first-order info\n", " log: Logging function called on the result of optimization (e.g. 
print)\n", " vargs: Other named arguments passed to scipy.optimize.minimize\n", " Returns either:\n", " (a) the minimizing argument of a one-argument function\n", " (b) an array of minimizing arguments of a multi-argument function\n", " \"\"\"\n", " if start is None:\n", " assert not array, \"Please pass starting values explicitly when array=True\"\n", " arg_count = f.__code__.co_argcount\n", " assert arg_count > 0, \"Please pass starting values explicitly for variadic functions\"\n", " start = [0] * arg_count\n", " if not hasattr(start, '__len__'):\n", " start = [start]\n", "\n", " if array:\n", " objective = f\n", " else:\n", " @functools.wraps(f)\n", " def objective(args):\n", " return f(*args)\n", "\n", " if not smooth and 'method' not in vargs:\n", " vargs['method'] = 'Powell'\n", " result = optimize.minimize(objective, start, **vargs)\n", " if log is not None:\n", " log(result)\n", " if len(start) == 1:\n", " return result.x.item(0)\n", " else:\n", " return result.x" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "best = minimize(lw_mse)\n", "best" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These values are the same as the values we calculated earlier by using the `slope` and `intercept` functions. We see small deviations due to the inexact nature of `minimize`, but the values are essentially the same." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"slope from formula: \", lw_reg_slope)\n", "print(\"slope from minimize: \", best.item(0))\n", "print(\"intercept from formula: \", lw_reg_intercept)\n", "print(\"intercept from minimize: \", best.item(1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Least Squares Line ###\n", "\n", "Therefore, we have found not only that the regression line minimizes mean squared error, but also that minimizing mean squared error gives us the regression line. The regression line is the only line that minimizes mean squared error.\n", "\n", "That is why the regression line is sometimes called the \"least squares line.\"" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 1 }