{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# SciPy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "SciPy is a Python library for scientific computing. It contains modules for linear algebra, root finding, optimization, and more. This tutorial covers basic uses of SciPy for:\n",
    "\n",
    "* Linear Algebra\n",
    "* Root finding and Optimization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Linear Algebra"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The scipy.linalg module contains several useful linear algebra operations. The example below shows how you can use SciPy to invert a matrix."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ainv =  [[-2.   1. ]\n",
      " [ 1.5 -0.5]]\n",
      "[[  1.00000000e+00   0.00000000e+00]\n",
      " [  8.88178420e-16   1.00000000e+00]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import scipy.linalg as sp\n",
    "a = np.array([[1,2],[3,4]])\n",
    "ainv = sp.inv(a)\n",
    "print(\"ainv = \", ainv)\n",
    "print(a.dot(ainv))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "If you want to find the determinant, use the det() function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-2.0\n"
     ]
    }
   ],
   "source": [
    "print(sp.det(a))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "If you want to solve a system of linear equations, use the solve() function.\n",
    "\n",
    "$$\n",
    "    x + 2y = 10 \\\\\n",
    "    3x + 4y = 20 \n",
    "$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0.],\n",
       "       [ 5.]])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = np.array([[10,20]])\n",
    "sp.solve(a, b.T)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Root finding and Optimization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "SciPy has implemented several root finding and optimization algorithms. This [website](https://docs.scipy.org/doc/scipy/reference/optimize.html) lists all the optimization and root finding methods implemented in SciPy, along with their syntax."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "For example, suppose you want to find the root of $x^3 - 10$ using bisection."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.154434690031394\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import scipy.optimize as sp\n",
    "g = lambda x: (x**3) - 10\n",
    "x = sp.bisect(g, 1.5, 3.0)\n",
    "print(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "You can also loosen the convergence criterion by passing the xtol argument."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.15625\n"
     ]
    }
   ],
   "source": [
    "# Changing convergence criterion\n",
    "x = sp.bisect(g, 1.5, 3.0, xtol = 0.1)\n",
    "print(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "SciPy also contains other root finding techniques such as Ridder's method, Brent's method, and Newton's method."
   ]
  },
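  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch (added here for illustration, not part of the original examples), the same root of $x^3 - 10$ can be found with Brent's method via brentq() and with Newton's method via newton(); the function and bracket mirror the bisection example above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch (not in the original tutorial): other scipy.optimize root finders\n",
    "import scipy.optimize as sp\n",
    "g = lambda x: (x**3) - 10\n",
    "# Brent's method requires a sign-changing bracket, like bisection\n",
    "print(sp.brentq(g, 1.5, 3.0))\n",
    "# Newton's method needs only an initial guess (secant variant when no derivative is given)\n",
    "print(sp.newton(g, 2.0))"
   ]
  },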
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
     "SciPy also has several optimization routines. Say you want to minimize $f(x) = (x-1)^2 + e^x$ using the golden section method. This can be done with the minimize_scalar() function, given a bracketing triple."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.3149230505107306\n"
     ]
    }
   ],
   "source": [
    "import scipy.optimize as sp\n",
    "import numpy as np\n",
    "g = lambda x: (x-1)**2 + np.exp(x) \n",
    "res = sp.minimize_scalar(g, bracket=(-1,1,2), method='Golden')\n",
    "print(res.x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "According to the help pages, \"bracket defines the bracketing interval and can either have three items (a, b, c) so that a < b < c and fun(b) < fun(a), fun(c)\". So pick the bracket interval carefully. It is also relatively easy to code some of the popular optimization algorithms in Python. For example, the golden section method can be coded as shown below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 0 0.764 1.236 2.0\n",
      "1 0 0.472152 0.763848 1.236\n",
      "2 0 0.29178993599999997 0.472058064 0.763848\n",
      "3 0 0.18032618044800003 0.29173188355199997 0.472058064\n",
      "4 0.18032618044800003 0.29176775996486404 0.360616484483136 0.472058064\n",
      "5 0.18032618044800003 0.24919707658942197 0.29174558834171405 0.360616484483136\n",
      "6 0.24919707658942197 0.29175929040482074 0.3180542706677372 0.360616484483136\n",
      "7 0.29175929040482074 0.31806273854273714 0.3343130363452196 0.360616484483136\n",
      "8 0.29175929040482074 0.3080148213540531 0.3180575053959872 0.3343130363452196\n",
      "9 0.3080148213540531 0.3180607394806787 0.324267118218594 0.3343130363452196\n",
      "10 0.3080148213540531 0.3142231987563077 0.3180587408163394 0.324267118218594\n",
      "11 0.3080148213540531 0.3118515985886465 0.31422196358174603 0.3180587408163394\n",
      "12 0.3118515985886465 0.31422272691962516 0.3156876124853607 0.3180587408163394\n",
      "13 0.3118515985886465 0.31331695589719133 0.31422225517681585 0.3156876124853607\n",
      "14 0.31331695589719133 0.314222546713872 0.31478202166868 0.3156876124853607\n",
      "15 0.314222546713872 0.3147822018385807 0.315127957360652 0.3156876124853607\n",
      "16 0.314222546713872 0.31456841358094195 0.3147820904935821 0.315127957360652\n",
      "17 0.31456841358094195 0.3147821593047912 0.3149142116368028 0.315127957360652\n",
      "18 0.3147821593047912 0.31491425416213004 0.3149958625033132 0.315127957360652\n",
      "19 0.3147821593047912 0.3148637939266266 0.31491422788147777 0.3149958625033132\n",
      "20 0.3148637939266266 0.3149142441229209 0.3149454123070189 0.3149958625033132\n",
      "Final Solution is:  0.31490460813542864\n",
      "Iteration counter:  21\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "g = lambda x: (x-1)**2 + np.exp(x) \n",
    "def goldenopt(f,x_l, x_u, tol, max_iter):\n",
    "    iter_ctr = 0\n",
    "    while (x_u-x_l)>tol and iter_ctr<max_iter:\n",
     "        # 0.618 approximates 1/phi, the inverse of the golden ratio\n",
     "        x_r1 = x_u - 0.618*(x_u - x_l)\n",
     "        x_r2 = x_l + 0.618*(x_u - x_l)\n",
    "        print(iter_ctr, x_l, x_r1, x_r2, x_u)\n",
    "        if f(x_r1) <= f(x_r2):\n",
    "            x_u = x_r2\n",
    "            x_r2 = x_r1\n",
    "            x_r1 = x_u - 0.618*(x_u - x_l)\n",
    "        else:\n",
    "            x_l = x_r1\n",
    "            x_r1 = x_r2\n",
    "            x_r2  = x_l + 0.618*(x_u - x_l)\n",
    "        iter_ctr = iter_ctr + 1\n",
    "    return (x_r1 + x_r2)/2.0, iter_ctr\n",
    "    \n",
    "x_l_init = 0\n",
    "x_u_init = 2.0\n",
    "tolerance = 0.0001\n",
    "max_iter = 100\n",
    "x_f, iter_ctr = goldenopt(g, x_l_init, x_u_init, tolerance, max_iter)\n",
    "print(\"Final Solution is: \", x_f)\n",
    "print(\"Iteration counter: \", iter_ctr)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Similarly, bisection can be coded as shown below. Note that it is applied to the derivative $g'(x)$, since a root of the derivative is a candidate minimizer. There are plenty of good examples of other optimization codes on the web."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 0 2.0 1.0\n",
      "1 0 1.0 0.5\n",
      "2 0 0.5 0.25\n",
      "3 0.25 0.5 0.375\n",
      "4 0.25 0.375 0.3125\n",
      "5 0.3125 0.375 0.34375\n",
      "6 0.3125 0.34375 0.328125\n",
      "7 0.3125 0.328125 0.3203125\n",
      "8 0.3125 0.3203125 0.31640625\n",
      "9 0.3125 0.31640625 0.314453125\n",
      "10 0.314453125 0.31640625 0.3154296875\n",
      "11 0.314453125 0.3154296875 0.31494140625\n",
      "12 0.314453125 0.31494140625 0.314697265625\n",
      "13 0.314697265625 0.31494140625 0.3148193359375\n",
      "14 0.3148193359375 0.31494140625 0.31488037109375\n",
      "Final Solution is:  0.31488037109375\n",
      "Iteration counter:  15\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "g = lambda x: (x-1)**2 + np.exp(x) \n",
    "g1 = lambda x: 2*(x-1) + np.exp(x)\n",
    "\n",
    "def bisectopt(f,x_l, x_u, tol, max_iter):\n",
    "    iter_ctr = 0\n",
    "    while (x_u-x_l)>tol and iter_ctr<max_iter:\n",
    "        x_r = (x_l + x_u)/2.0\n",
    "        print(iter_ctr, x_l, x_u, x_r)\n",
    "        if f(x_r) == 0:\n",
    "            return x_r, iter_ctr\n",
    "        elif f(x_r) > 0:\n",
    "            x_u = x_r\n",
    "        else:\n",
    "            x_l = x_r\n",
    "        iter_ctr = iter_ctr + 1\n",
    "    return x_r,iter_ctr\n",
    "    \n",
    "x_l_init = 0\n",
    "x_u_init = 2.0\n",
    "tolerance = 0.0001\n",
    "max_iter = 100\n",
    "x_f, iter_ctr = bisectopt(g1, x_l_init, x_u_init, tolerance, max_iter)\n",
    "print(\"Final Solution is: \", x_f)\n",
    "print(\"Iteration counter: \", iter_ctr)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Some of the gradient-based methods, such as quasi-Newton methods or Newton conjugate gradient, are also available in Python through the minimize() function. For example, minimizing $f(x,y) = (x-1)^4 + 5(y-2)^4 + xy$ with the BFGS quasi-Newton method is shown below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 0.23771415  1.7717857 ]\n"
     ]
    }
   ],
   "source": [
    "import scipy.optimize as sp\n",
    "import numpy as np\n",
    "\n",
    "def g(x):\n",
    "    return (x[0]-1)**4 + 5*(x[1]-2)**4 + x[0]*x[1] \n",
    "\n",
    "x_init = np.array([4.0,4.0])\n",
    "res = sp.minimize(g, x_init, method='BFGS')\n",
    "print(res.x)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "If you want to use conjugate gradient instead, just change the method parameter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 0.23771511  1.77178729]\n"
     ]
    }
   ],
   "source": [
    "res = sp.minimize(g, x_init, method='CG')\n",
    "print(res.x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Writing your own code for some of these optimization algorithms is not that difficult and should be possible with some practice. For example, the following code implements gradient descent for minimizing $f(x) = x^2 - 10x + 25$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1 13.0 64.0\n",
      "2 11.4 40.96000000000001\n",
      "3 10.120000000000001 26.214399999999998\n",
      "4 9.096 16.777215999999996\n",
      "5 8.2768 10.737418239999997\n",
      "6 7.62144 6.871947673599998\n",
      "7 7.0971519999999995 4.398046511103992\n",
      "8 6.6777216 2.814749767106562\n",
      "9 6.34217728 1.8014398509481921\n",
      "10 6.073741824 1.1529215046068515\n",
      "11 5.8589934592 0.7378697629483781\n",
      "12 5.687194767359999 0.472236648286966\n",
      "13 5.549755813888 0.3022314549036622\n",
      "14 5.4398046511104 0.19342813113834012\n",
      "15 5.35184372088832 0.12379400392854123\n",
      "16 5.281474976710657 0.07922816251426923\n",
      "17 5.225179981368525 0.05070602400912705\n",
      "18 5.18014398509482 0.032451855365845717\n",
      "19 5.144115188075856 0.020769187434137137\n",
      "20 5.115292150460685 0.013292279957848763\n",
      "21 5.092233720368548 0.008507059173023634\n",
      "22 5.073786976294839 0.0054445178707318576\n",
      "23 5.059029581035871 0.003484491437266257\n",
      "24 5.047223664828697 0.0022300745198542415\n",
      "25 5.037778931862958 0.001427247692703304\n",
      "Final Solution is:  5.037778931862958\n",
      "Iteration counter:  25\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "g = lambda x: x**2 - 10*x + 25\n",
    "g1 = lambda x: 2*x - 10\n",
    "\n",
    "def grad_desc(f, f1, x_init, step_size, tol, max_iter):\n",
    "    x_new = x_init\n",
    "    x_old = 0\n",
    "    iter_ctr = 0\n",
    "    while abs(x_new - x_old) > tol and iter_ctr < max_iter:\n",
    "        x_old = x_new\n",
    "        gradient = f1(x_old) \n",
    "        move = gradient * step_size\n",
    "        x_new = x_old - move\n",
    "        iter_ctr = iter_ctr + 1\n",
    "        print(iter_ctr, x_new, f(x_new))\n",
    "    return x_new, iter_ctr\n",
    "\n",
    "x_init = 15.0\n",
    "step_size = 0.1\n",
    "tolerance = 0.01\n",
    "max_iter = 100\n",
    "\n",
    "x_f, iter_ctr = grad_desc(g, g1, x_init, step_size, tolerance, max_iter)\n",
    "\n",
    "print(\"Final Solution is: \", x_f)\n",
    "print(\"Iteration counter: \", iter_ctr)"
   ]
  },
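  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, here is a sketch (added for illustration, not in the original) of a pure Newton minimization step, $x_{new} = x - f'(x)/f''(x)$. Because $f(x) = x^2 - 10x + 25$ is quadratic, a single Newton step lands exactly on the minimizer $x = 5$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch for comparison (not in the original): one exact Newton step on the quadratic\n",
    "g = lambda x: x**2 - 10*x + 25\n",
    "g1 = lambda x: 2*x - 10   # first derivative\n",
    "g2 = lambda x: 2.0        # second derivative (constant for a quadratic)\n",
    "\n",
    "x = 15.0\n",
    "x = x - g1(x)/g2(x)  # one Newton step\n",
    "print(x, g(x))"
   ]
  },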
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The following implements multi-dimensional gradient descent, with a backtracking line search to choose the step size, for minimizing $f(x,y) = (x-1)^4 + 5(y-2)^4 + xy$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Final Solution is:  [ 0.23622452  1.774477  ]\n",
      "Functional Evaluation is:  0.772409696452\n",
      "Iteration counter:  5\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "def g(x):\n",
    "    return (x[0]-1)**4 + 5*(x[1]-2)**4 + x[0]*x[1] \n",
    "\n",
    "def g1(x):\n",
    "    grad = np.array([0.0,0.0])\n",
    "    grad[0] = 4*(x[0]-1)**3 + x[1]\n",
    "    grad[1] = 20*(x[1]-2)**3 + x[0]\n",
    "    return grad\n",
    "\n",
    "def back_track_ls(x):\n",
    "    eta = 1.0\n",
    "    beta = 0.1\n",
    "    gamma = 0.1\n",
    "    while g(x - eta*g1(x)) > g(x) - gamma*eta*np.dot(g1(x),g1(x)):\n",
    "        eta = beta*eta\n",
    "    return eta\n",
    "\n",
    "def grad_desc_2D(f, f1, x_init, step_size, tol, max_iter):\n",
    "    x_new = np.array([0.0,0.0])\n",
    "    x_old = np.array([0.0,0.0])\n",
    "    gradient = np.array([10.0,10.0])\n",
    "    move = np.array([0.0,0.0])\n",
    "    x_new = x_init\n",
    "    iter_ctr = 0\n",
    "    while np.linalg.norm(x_new - x_old) > tol and iter_ctr < max_iter:\n",
    "        x_old = x_new\n",
    "        gradient = f1(x_old) \n",
    "        step_size = back_track_ls(x_old)\n",
    "        move = gradient * step_size\n",
    "        x_new = x_old - move\n",
    "        iter_ctr = iter_ctr + 1\n",
    "    return x_new, iter_ctr\n",
    "\n",
    "x_init = np.array([4.0,4.0])\n",
    "x_f = np.array([0.0,0.0])\n",
    "step_size = 0.0025\n",
    "tolerance = 0.01\n",
    "max_iter = 100\n",
    "\n",
    "x_f, iter_ctr = grad_desc_2D(g, g1, x_init, step_size, tolerance, max_iter)\n",
    "\n",
    "print(\"Final Solution is: \", x_f)\n",
    "print(\"Functional Evaluation is: \", g(x_f))\n",
    "print(\"Iteration counter: \", iter_ctr)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The following implements multi-dimensional Newton's method for the same minimization problem."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Final Solution is:  [ 0.23771618  1.77178105]\n",
      "Functional Evaluation is:  0.772394721448\n",
      "Iteration counter:  17\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "def g(x):\n",
    "    return (x[0]-1)**4 + 5*(x[1]-2)**4 + x[0]*x[1] \n",
    "\n",
    "def g1(x):\n",
    "    grad = np.array([0.0,0.0])\n",
    "    grad[0] = 4*(x[0]-1)**3 + x[1]\n",
    "    grad[1] = 20*(x[1]-2)**3 + x[0]\n",
    "    return grad\n",
    "\n",
    "def hessian(x):\n",
    "    h  =  np.array([[0.0,0.0],[0.0,0.0]])\n",
    "    h[0,0] = 12*(x[0] - 1)**2\n",
    "    h[0,1] = 1\n",
    "    h[1,0] = 1\n",
    "    h[1,1] = 60*(x[1] - 2)**2\n",
    "    return h\n",
    "\n",
    "def newt_2D(f, f1, x_init, step_size, tol, max_iter):\n",
    "    x_new = np.array([0.0,0.0])\n",
    "    x_old = np.array([0.0,0.0])\n",
    "    gradient = np.array([10.0,10.0])\n",
    "    move = np.array([0.0,0.0])\n",
    "    x_new = x_init\n",
    "    iter_ctr = 0\n",
    "    while np.linalg.norm(x_new - x_old) > tol and iter_ctr < max_iter:\n",
    "        x_old = x_new\n",
    "        gradient = f1(x_old) \n",
    "        move = step_size*np.dot(np.linalg.inv(hessian(x_old)),gradient)\n",
    "        x_new = x_old - move\n",
    "        iter_ctr = iter_ctr + 1\n",
    "    return x_new, iter_ctr\n",
    "\n",
    "x_init = np.array([4.0,4.0])\n",
    "x_f = np.array([0.0,0.0])\n",
    "step_size = 1.00\n",
    "tolerance = 0.01\n",
    "max_iter = 100\n",
    "\n",
    "x_f, iter_ctr = newt_2D(g, g1, x_init, step_size, tolerance, max_iter)\n",
    "\n",
    "print(\"Final Solution is: \", x_f)\n",
    "print(\"Functional Evaluation is: \", g(x_f))\n",
    "print(\"Iteration counter: \", iter_ctr)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Linear programs can be solved using the linprog() function. Sample code for solving the following LP is shown below:\n",
    "    \n",
    "$$\n",
    "\\text{max}\\,\\, 90x + 120y \\\\\n",
    " 30x + 40y \\leq 2000\\\\\n",
    "    50x + 20y \\leq 2500 \\\\\n",
    "    60x + 90y \\leq 4000 \\\\\n",
    "    x,y \\geq 0\n",
    "$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Status : Optimal\n",
      "Optimal value : -5772.72727273\n",
      "X: [ 43.93939394  15.15151515]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import scipy.optimize as sp\n",
     "Aub = np.array([[30, 40], [50, 20], [60, 90]])\n",
     "bub = np.array([2000, 2500, 4000])\n",
     "c = np.array([-90, -120])  # negated for minimization form\n",
     "res = sp.linprog(c, A_ub=Aub, b_ub=bub)\n",
     "if res.status == 0:\n",
     "    print(\"Status : Optimal\")\n",
     "    print(\"Optimal value :\", res.fun)\n",
     "    print(\"X:\", res.x)\n",
     "else:\n",
     "    print(\"Optimal solution not found\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The objective function should be in minimization form. Equality constraints can be modeled by providing the A_eq and b_eq parameters to linprog. By default, all variables are assumed non-negative. Variables with specific bounds can be modeled using the bounds parameter, as in the following problem.\n",
    "\n",
    "$$\n",
    "\\text{max}\\,\\, 10x + 26y \\\\\n",
    " x + y \\leq 9\\\\\n",
    "    x \\geq 9.5 \\\\\n",
    "    0 \\leq y \\leq 5 \n",
    "$$\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2\n"
     ]
    }
   ],
   "source": [
    "Aub = np.array([[1, 1]])\n",
    "bub = np.array([9])\n",
    "c = np.array([10, 26])\n",
    "b = [(9.5, None), (0, 5)]\n",
    "res = sp.linprog(c, A_ub=Aub, b_ub=bub, bounds = b)\n",
    "print(res.status)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A status of 2 indicates that the problem is infeasible."
   ]
  },
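  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference (summarized from the SciPy docs rather than the original text), res.status takes the values 0 (optimal), 1 (iteration limit reached), 2 (infeasible), and 3 (unbounded), and res.message carries a human-readable description. A quick sketch of inspecting both fields on the infeasible problem above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: re-solve the infeasible problem and inspect the diagnostics\n",
    "import numpy as np\n",
    "import scipy.optimize as sp\n",
    "res = sp.linprog(np.array([10, 26]), A_ub=np.array([[1, 1]]), b_ub=np.array([9]),\n",
    "                 bounds=[(9.5, None), (0, 5)])\n",
    "print(res.status)\n",
    "print(res.message)"
   ]
  },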
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The minimize function can also be used to solve constrained nonlinear optimization problems. Let us look at solving the following constrained optimization problem.\n",
    "\n",
    "$$\n",
    " \\text{min} \\,\\,  (x_1 - 1)^2 + (x_2 -2.5)^2 \\\\\n",
    "\t\t\t     x_1 - 2x_2 + 2  \\geq 0 \\\\\n",
    "\t\t        -x_1 - 2x_2 + 6 \\geq 0 \\\\\n",
    "\t\t        -x_1 + 2x_2 + 2 \\geq 0 \\\\\n",
    "\t\t         x_1, x_2  \\geq 0\n",
    "$$\n",
    "\n",
    "This is example 16.4 from Jorge Nocedal, Stephen Wright. Numerical Optimization, 1999. This is also the same example solved in the scipy docs available [here](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize).\n",
    "\n",
    "Note that for constrained nonlinear optimization problems:\n",
    "\n",
     "* Constraints are provided as dictionaries\n",
    "* Equality constraints must evaluate to 0\n",
    "* Inequality constraints must be nonnegative\n",
    "* COBYLA and SLSQP are the two algorithms available"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 1.4  1.7]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import scipy.optimize as sp\n",
    "\n",
    "objfun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2\n",
    "con1 = lambda x: x[0] - 2 * x[1] + 2\n",
    "con2 = lambda x: -x[0] - 2 * x[1] + 6\n",
    "con3 = lambda x: -x[0] + 2 * x[1] + 2\n",
    "cons = ({'type': 'ineq', 'fun': con1}, {'type': 'ineq', 'fun': con2}, {'type': 'ineq', 'fun': con3})\n",
    "x_init = (2,0)\n",
    "res = sp.minimize(objfun, x_init, method='SLSQP', constraints=cons)\n",
    "print(res.x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In general, for linear and integer programs I recommend specialized solvers such as CPLEX and Gurobi, which both have Python interfaces. For other open-source solvers that interface with Python, see [here](http://coral.ie.lehigh.edu/~ted/teaching/coin-or/)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
