{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "remove_input" ] }, "outputs": [], "source": [ "path_data = '../../../../data/'\n", "\n", "import numpy as np\n", "import pandas as pd\n", "\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('fivethirtyeight')\n", "\n", "import functools\n", "\n", "import warnings\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy import optimize\n", "\n", "def minimize(f, start=None, smooth=False, log=None, array=False, **vargs):\n", " \"\"\"Minimize a function f of one or more arguments.\n", " Args:\n", " f: A function that takes numbers and returns a number\n", " start: A starting value or list of starting values\n", " smooth: Whether to assume that f is smooth and use first-order info\n", " log: Logging function called on the result of optimization (e.g. print)\n", " vargs: Other named arguments passed to scipy.optimize.minimize\n", " Returns either:\n", " (a) the minimizing argument of a one-argument function\n", " (b) an array of minimizing arguments of a multi-argument function\n", " \"\"\"\n", " if start is None:\n", " assert not array, \"Please pass starting values explicitly when array=True\"\n", " arg_count = f.__code__.co_argcount\n", " assert arg_count > 0, \"Please pass starting values explicitly for variadic functions\"\n", " start = [0] * arg_count\n", " if not hasattr(start, '__len__'):\n", " start = [start]\n", "\n", " if array:\n", " objective = f\n", " else:\n", " @functools.wraps(f)\n", " def objective(args):\n", " return f(*args)\n", "\n", " if not smooth and 'method' not in vargs:\n", " vargs['method'] = 'Powell'\n", " result = optimize.minimize(objective, start, **vargs)\n", " if log is not None:\n", " log(result)\n", " if len(start) == 1:\n", " return result.x.item(0)\n", " else:\n", " return result.x" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "remove_input" ] }, "outputs": [], "source": [ "\n", "def standard_units(any_numbers):\n", " \"Convert any array of numbers to standard units.\"\n", " return (any_numbers - np.mean(any_numbers))/np.std(any_numbers) \n", "\n", "def correlation(t, x, y):\n", " return np.mean(standard_units(t[x])*standard_units(t[y]))\n", "\n", "def slope(table, x, y):\n", " r = correlation(table, x, y)\n", " return r * np.std(table[y]/np.std(table[x]))\n", "\n", "def intercept(table, x, y):\n", " a = slope(table, x, y)\n", " return np.mean(table[y]) - a * np.mean(table[x])\n", "\n", "def fit(table, x, y):\n", " \"\"\"Return the height of the regression line at each x value.\"\"\"\n", " a = slope(table, x, y)\n", " b = intercept(table, x, y)\n", " return a * table[x] + b" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Least Squares Regression ###\n", "In an earlier section, we developed formulas for the slope and intercept of the regression line through a *football shaped* scatter diagram. It turns out that the slope and intercept of the least squares line have the same formulas as those we developed, *regardless of the shape of the scatter plot*.\n", "\n", "We saw this in the example about Little Women, but let's confirm it in an example where the scatter plot clearly isn't football shaped. For the data, we are once again indebted to the rich [data archive of Prof. Larry Winner](http://www.stat.ufl.edu/~winner/datasets.html) of the University of Florida. 
A [2013 study](http://digitalcommons.wku.edu/ijes/vol6/iss2/10/) in the International Journal of Exercise Science studied collegiate shot put athletes and examined the relation between strength and shot put distance. The population consists of 28 female collegiate athletes. Strength was measured by the largest weight (in kilograms) that the athlete lifted in the \"1RM power clean\" in the pre-season. The distance (in meters) was the athlete's personal best." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "shotput = pd.read_csv(path_data + 'shotput.csv')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "shotput" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "shotput.plot.scatter('Weight Lifted', 'Shot Put Distance');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's not a football shaped scatter plot. In fact, it seems to have a slight non-linear component. But if we insist on using a straight line to make our predictions, there is still one best straight line among all straight lines.\n", "\n", "Our formulas for the slope and intercept of the regression line, derived for football shaped scatter plots, give the following values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slope(shotput, 'Weight Lifted', 'Shot Put Distance')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "intercept(shotput, 'Weight Lifted', 'Shot Put Distance')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Does it still make sense to use these formulas even though the scatter plot isn't football shaped? We can answer this by finding the slope and intercept of the line that minimizes the mse.\n", "\n", "We will define the function `shotput_linear_mse` to take an arbitrary slope and intercept as arguments and return the corresponding mse. Then `minimize` applied to `shotput_linear_mse` will return the best slope and intercept." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def shotput_linear_mse(any_slope, any_intercept):\n", " x = shotput['Weight Lifted']\n", " y = shotput['Shot Put Distance']\n", " fitted = any_slope*x + any_intercept\n", " return np.mean((y - fitted) ** 2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "minimize(shotput_linear_mse)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These values are the same as those we got by using our formulas. To summarize:\n", "\n", "**No matter what the shape of the scatter plot, there is a unique line that minimizes the mean squared error of estimation.
It is called the regression line, and its slope and intercept are given by**\n", "\n", "$$\n", "\mathbf{\mbox{slope of the regression line}} ~=~ r \cdot\n", "\frac{\mbox{SD of }y}{\mbox{SD of }x}\n", "$$\n", "\n", "$$\n", "\mathbf{\mbox{intercept of the regression line}} ~=~\n", "\mbox{average of }y ~-~ \mbox{slope} \cdot \mbox{average of }x\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fitted = fit(shotput, 'Weight Lifted', 'Shot Put Distance')\n", "\n", "shotput['Best Straight Line'] = fitted\n", "\n", "fig, ax = plt.subplots(figsize=(7,6))\n", "\n", "ax.scatter(shotput['Weight Lifted'], \n", " shotput['Shot Put Distance'], \n", " label='Shot Put Distance', \n", " color='darkblue')\n", "\n", "ax.scatter(shotput['Weight Lifted'], \n", " shotput['Best Straight Line'], \n", " label='Best Straight Line', \n", " color='gold')\n", "\n", "x_label = 'Weight Lifted'\n", "\n", "y_label = ''\n", "\n", "plt.ylabel(y_label)\n", "\n", "ax.legend(bbox_to_anchor=(1.04,1), loc=\"upper left\", frameon=False)\n", "\n", "plt.xlabel(x_label)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Nonlinear Regression ###\n", "The graph above reinforces our earlier observation that the scatter plot is a bit curved. So it is better to fit a curve than a straight line. The [study](http://digitalcommons.wku.edu/ijes/vol6/iss2/10/) postulated a quadratic relation between the weight lifted and the shot put distance. So let's use quadratic functions as our predictors and see if we can find the best one. \n", "\n", "We have to find the best quadratic function among all quadratic functions, instead of the best straight line among all straight lines. The method of least squares allows us to do this.\n", "\n", "The mathematics of this minimization is complicated, and the best quadratic function is not easy to see just by examining the scatter plot. But numerical minimization is just as easy as it was with linear predictors! We can get the best quadratic predictor by once again using `minimize`. Let's see how this works." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall that a quadratic function has the form\n", "\n", "$$\n", "f(x) ~=~ ax^2 + bx + c\n", "$$\n", "for constants $a$, $b$, and $c$.\n", "\n", "To find the best quadratic function to predict distance based on weight lifted, using the criterion of least squares, we will first write a function that takes the three constants as its arguments, calculates the fitted values by using the quadratic function above, and then returns the mean squared error. \n", "\n", "The function is called `shotput_quadratic_mse`. Notice that the definition is analogous to that of `lw_mse`, except that the fitted values are based on a quadratic function instead of a linear one." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def shotput_quadratic_mse(a, b, c):\n", " x = shotput['Weight Lifted']\n", " y = shotput['Shot Put Distance']\n", " fitted = a*(x**2) + b*x + c\n", " return np.mean((y - fitted) ** 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now use `minimize` just as before to find the constants that minimize the mean squared error." ] },
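{ "cell_type": "markdown", "metadata": {}, "source": [ "Since `shotput_quadratic_mse` takes three separate arguments, `minimize` will search over all three constants at once; as in the definition of `minimize` above, the search starts at 0 for each constant when no starting values are given. As a quick illustration of what the search has to improve on, the next cell evaluates the mean squared error at that starting point." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The mean squared error at minimize's default starting point, a = 0, b = 0, c = 0.\n", "# The minimizing constants found below should give a much smaller value.\n", "shotput_quadratic_mse(0, 0, 0)" ] },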
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "best = minimize(shotput_quadratic_mse)\n", "best" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our prediction of the shot put distance for an athlete who lifts $x$ kilograms is about\n", "$$\n", "-0.00104x^2 ~+~ 0.2827x - 1.5318\n", "$$\n", "meters. For example, if the athlete can lift 100 kilograms, the predicted distance is 16.33 meters. On the scatter plot, that's near the center of a vertical strip around 100 kilograms." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(-0.00104)*(100**2) + 0.2827*100 - 1.5318" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are the predictions for all the values of `Weight Lifted`. You can see that they go through the center of the scatter plot, to a rough approximation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = shotput.iloc[:,0]\n", "shotput_fit = best.item(0)*(x**2) + best.item(1)*x + best.item(2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "shotput['Best Quadratic Curve'] = shotput_fit\n", "\n", "fig, ax = plt.subplots(figsize=(7,6))\n", "\n", "ax.scatter(shotput['Weight Lifted'], \n", " shotput['Shot Put Distance'], \n", " label='Color=darkblue', \n", " color='darkblue')\n", "\n", "ax.scatter(shotput['Weight Lifted'], \n", " shotput['Best Quadratic Curve'], \n", " label='Color=gold', \n", " color='gold')\n", "\n", "x_label = 'Weight Lifted'\n", "\n", "y_label = ''\n", "\n", "y_vals = ax.get_yticks()\n", "\n", "plt.ylabel(y_label)\n", "\n", "ax.legend(bbox_to_anchor=(1.04,1), loc=\"upper left\", frameon=False)\n", "\n", "plt.xlabel(x_label)\n", "\n", "plt.show()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 1 }