{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Assignment 4 - Understanding and Predicting Property Maintenance Fines\n", "\n", "This assignment is based on a data challenge from the Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)). \n", "\n", "The Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences ([MSSISS](https://sites.lsa.umich.edu/mssiss/)) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. [Blight violations](http://www.detroitmi.gov/How-Do-I/Report/Blight-Complaint-FAQs) are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?\n", "\n", "The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.\n", "\n", "All data for this assignment has been provided to us through the [Detroit Open Data Portal](https://data.detroitmi.gov/). **Only the data already included in your Coursera directory can be used for training the model for this assignment.** Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection. We recommend taking a look at the following related datasets:\n", "\n", "* [Building Permits](https://data.detroitmi.gov/Property-Parcels/Building-Permits/xw2a-a7tf)\n", "* [Trades Permits](https://data.detroitmi.gov/Property-Parcels/Trades-Permits/635b-dsgv)\n", "* [Improve Detroit: Submitted Issues](https://data.detroitmi.gov/Government/Improve-Detroit-Submitted-Issues/fwz3-w3yn)\n", "* [DPD: Citizen Complaints](https://data.detroitmi.gov/Public-Safety/DPD-Citizen-Complaints-2016/kahe-efs3)\n", "* [Parcel Map](https://data.detroitmi.gov/Property-Parcels/Parcel-Map/fxkw-udwf)\n", "\n", "___\n", "\n", "We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing data, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.\n", "\n", "Note: All tickets where the violators were found not responsible are not considered during evaluation. 
"\n", "<br>\n", "\n",
"**File descriptions** (Use only this data for training your model!)\n", "\n",
"    readonly/train.csv - the training set (all tickets issued 2004-2011)\n",
"    readonly/test.csv - the test set (all tickets issued 2012-2016)\n",
"    readonly/addresses.csv & readonly/latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates. \n",
"     Note: misspelled addresses may be incorrectly geolocated.\n",
"\n", "<br>\n", "\n",
"**Data fields**\n", "\n",
"train.csv & test.csv\n", "\n",
"    ticket_id - unique identifier for tickets\n",
"    agency_name - Agency that issued the ticket\n",
"    inspector_name - Name of inspector that issued the ticket\n",
"    violator_name - Name of the person/organization that the ticket was issued to\n",
"    violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred\n",
"    mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator\n",
"    ticket_issued_date - Date and time the ticket was issued\n",
"    hearing_date - Date and time the violator's hearing was scheduled\n",
"    violation_code, violation_description - Type of violation\n",
"    disposition - Judgment and judgment type\n",
"    fine_amount - Violation fine amount, excluding fees\n",
"    admin_fee - $20 fee assigned to responsible judgments\n",
"    state_fee - $10 fee assigned to responsible judgments\n",
"    late_fee - 10% fee assigned to responsible judgments\n",
"    discount_amount - discount applied, if any\n",
"    clean_up_cost - DPW clean-up or graffiti removal cost\n",
"    judgment_amount - Sum of all fines and fees\n",
"    grafitti_status - Flag for graffiti violations\n",
"    \n",
"train.csv only\n", "\n",
"    payment_amount - Amount paid, if any\n",
"    payment_date - Date payment was made, if it was received\n",
"    payment_status - Current payment status as of Feb 1 2017\n",
"    balance_due - Fines and fees still owed\n",
"    collection_status - Flag for payments in collections\n",
"    compliance [target variable for prediction] \n",
"     Null = Not responsible\n",
"     0 = Responsible, non-compliant\n",
"     1 = Responsible, compliant\n",
"    compliance_detail - More information on why each ticket was marked compliant or non-compliant\n",
"\n", "\n",
"___\n", "\n",
"## Evaluation\n", "\n",
"Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.\n", "\n",
"The evaluation metric for this assignment is the Area Under the ROC Curve (AUC). \n", "\n",
"Your grade will be based on the AUC score computed for your classifier. A model with an AUROC of 0.7 passes this assignment; an AUROC over 0.75 will receive full points.\n",
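"\n",
"As a rough illustration of checking against this metric before submitting (not part of the assignment), a model can be scored on a held-out slice of the training data. Here `X` and `y` are placeholders for a numeric feature matrix and the 0/1 compliance labels, and the gradient-boosting classifier is only an example:\n", "\n",
"    from sklearn.model_selection import train_test_split\n",
"    from sklearn.metrics import roc_auc_score\n",
"    from sklearn.ensemble import GradientBoostingClassifier\n",
"\n",
"    # X, y: placeholder feature matrix and 0/1 compliance labels built from train.csv\n",
"    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)\n",
"    clf = GradientBoostingClassifier().fit(X_tr, y_tr)\n",
"    print(roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))\n",
"\n",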
"___\n", "\n",
"For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using `readonly/train.csv`. Using this model, return a series of length 61001 whose values are the probability that each corresponding ticket from `readonly/test.csv` will be paid, and whose index is the ticket_id.\n", "\n",
"Example:\n", "\n",
"    ticket_id\n",
"    284932    0.531842\n",
"    285362    0.401958\n",
"    285361    0.105928\n",
"    285338    0.018572\n",
"              ...\n",
"    376499    0.208567\n",
"    376500    0.818759\n",
"    369851    0.018528\n",
"    Name: compliance, dtype: float32\n",
"    \n",
"### Hints\n", "\n",
"* Make sure your code is working before submitting it to the autograder.\n", "\n",
"* Print out your result to see whether there is anything weird (e.g., all probabilities are the same).\n", "\n",
"* Generally the total runtime should be less than 10 mins. You should NOT use Neural Network related classifiers (e.g., MLPClassifier) in this question. \n", "\n",
"* Try to avoid global variables. If you have other functions besides blight_model, you should move those functions inside the scope of blight_model.\n", "\n",
"* Refer to the pinned threads in Week 4's discussion forum if there is something you cannot figure out." ] },
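{ "cell_type": "markdown", "metadata": {}, "source": [ "Before the solution below, a minimal sketch of the expected return shape (placeholders, not the graded answer): given an array `probs` holding one predicted probability per row of `readonly/test.csv`, and assuming `test` holds the test data, the required series could be assembled roughly like this:\n", "\n", "    import pandas as pd\n", "\n", "    # probs is a placeholder: one probability per test row, in the same order as test.csv\n", "    answer = pd.Series(probs, index=test['ticket_id'], name='compliance').astype('float32')\n" ] },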
{ "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"def blight_model():\n",
"    # Load the data files\n",
"    train = pd.read_csv('train.csv', encoding='ISO-8859-1', low_memory=False, parse_dates=['ticket_issued_date', 'hearing_date', 'payment_date'])\n",
"    test = pd.read_csv('test.csv', encoding='ISO-8859-1', low_memory=False, parse_dates=['ticket_issued_date', 'hearing_date'])\n",
"    address = pd.read_csv('addresses.csv', encoding='ISO-8859-1', low_memory=False)\n",
"    coord = pd.read_csv('latlons.csv', encoding='ISO-8859-1', low_memory=False)\n",
"\n",
"    # Drop columns and rows whose entries are all empty\n",
"    train.dropna(how='all', axis=1, inplace=True)\n",
"    train.dropna(how='all', axis=0, inplace=True)\n",
"\n",
"    # Drop constant columns (a single unique value); they carry no information about the target\n",
"    independent = []\n",
"    for i in range(len(train.columns)):\n",
"        if len(train[train.columns[i]].unique()) == 1:\n",
"            independent.append(train.columns[i])\n",
"\n",
"    # print('{} column was removed'.format(independent))\n",
"    train.drop(independent, axis=1, inplace=True)\n",
"    test.drop(independent, axis=1, inplace=True)\n",
"\n",
"    # Columns with fewer than 250 unique values are converted to categorical dtype to reduce memory usage\n",
"    for i in range(len(train.columns)):\n",
"        if len(train[train.columns[i]].unique()) < 250:\n",
"            train[train.columns[i]] = train[train.columns[i]].astype('category')\n",
"\n",
"    # Missing-value counts per column (count() on the boolean frame equals the number of rows)\n",
"    total_null = train.isnull().sum().sort_values(ascending=False)\n",
"    per = train.isnull().count().sort_values(ascending=False)\n",
"\n",
"    # Drop columns where more than 50% of the values are missing\n",
"    high_missing_data = pd.concat([total_null, total_null/per], keys=['Total_nulls', 'percentage_nulls'], axis=1)\n",
"    high_missing_values = high_missing_data[high_missing_data['percentage_nulls'] > 0.5].index\n",
"    train.drop(high_missing_values, axis=1, inplace=True)\n",
"\n",
"    # Join addresses and lat/lon coordinates onto the train and test data, keyed by ticket_id\n",
"    address = address.merge(coord, how='inner', left_on='address', right_on='address')\n",
"    train = train.merge(address, how='left', left_on='ticket_id', right_on='ticket_id')\\\n",
"                 .set_index('ticket_id')\n",
"    test = test.merge(address, how='left', left_on='ticket_id', right_on='ticket_id')\\\n",
"                .set_index('ticket_id')\n",
"\n",
"    # Drop the name/address features that lat and lon effectively replace\n",
"    latlon_replaced = ['violator_name',\n",
"                       'violation_street_number', 'violation_street_name',\n",
"                       'mailing_address_str_number', 'mailing_address_str_name',\n",
"                       'state', 'zip_code', 'country', 'address', 'city']\n",
"    train.drop(latlon_replaced, axis=1, inplace=True)\n",
"\n",
"    # Reduce the features further by summing the payable amounts into a single total\n",
"    train['total_amt_pay'] = train[['fine_amount','admin_fee','state_fee','late_fee']].sum(axis=1).subtract(train['discount_amount'].astype(np.float64))\n",
"    test['total_amt_pay'] = test[['fine_amount','admin_fee','state_fee','late_fee']].sum(axis=1).subtract(test['discount_amount'].astype(np.float64))\n",
"    drop_payments = ['fine_amount','admin_fee','state_fee','late_fee','discount_amount']\n",
"    train.drop(drop_payments, axis=1, inplace=True)\n",
"\n",
"    # Drop training rows with missing lat/lon/total_amt_pay; test rows cannot be dropped, so fill missing lat/lon with the mean instead\n",
"    train.dropna(subset=['lat','lon','total_amt_pay'], inplace=True)\n",
"    test['lat'].fillna(test.lat.mean(), inplace=True)\n",
"    test['lon'].fillna(test.lon.mean(), inplace=True)\n",
"\n",
"    # Time gap (in days) between the ticket issue date and the hearing date\n",
"    train['time_delta'] = (train['hearing_date'] - train['ticket_issued_date']).dt.days\n",
"    test['time_delta'] = (test['hearing_date'] - test['ticket_issued_date']).dt.days\n",
"    drop_timedelta = ['hearing_date','ticket_issued_date']\n",
"    train.drop(drop_timedelta, axis=1, inplace=True)\n",
"    test.drop(drop_timedelta, axis=1, inplace=True)\n",
"\n",
"    # Fill missing time_delta values with 73 days (the mode of the training data)\n",
"    train['time_delta'].fillna(73, inplace=True)\n",
"    test['time_delta'].fillna(73, inplace=True)\n",
"\n",
"    # Drop less informative features and one-hot encode the string categories 'disposition' and 'agency_name'\n",
"    further_drop = ['inspector_name', 'violation_code', 'violation_description',\n",
"                    'payment_amount', 'balance_due', 'payment_status',\n",
"                    'compliance_detail']\n",
"\n",
"    train.drop(further_drop, axis=1, inplace=True)\n",
"    string_features = ['disposition','agency_name']\n",
"    train = pd.get_dummies(train, columns=string_features, drop_first=True)\n",
"    test = pd.get_dummies(test, columns=string_features, drop_first=True)\n",
"\n",
"    # Keep only rows with a non-NaN compliance label for training\n",
"    train = train[((train['compliance'] == 0) | (train['compliance'] == 1))]\n",
"\n",
"    # Trim the training data to the columns that are also available in the test data\n",
"    y = train['compliance']\n",
"    X = train.drop('compliance', axis=1)\n",
"\n",
"    train_feature_set = set(X)\n",
"    for feature in set(X):\n",
"        if feature not in test:\n",
"            train_feature_set.remove(feature)\n",
"    train_features = list(train_feature_set)\n",
"\n",
"    X_train = X[train_features]\n",
"    test = test[train_features]\n",
"\n",
"    # Import the estimators and helpers used for training\n",
"    from sklearn.preprocessing import MinMaxScaler, StandardScaler\n",
"    from sklearn.metrics import roc_auc_score\n",
"    from sklearn.ensemble import RandomForestClassifier\n",
"    import time\n",
"\n",
"    from sklearn.ensemble import RandomForestRegressor\n",
"    # Optional feature scaling (left disabled; tree-based models do not require it)\n",
"    # X_train = MinMaxScaler().fit_transform(X_train)\n",
"    # test = MinMaxScaler().fit_transform(test)\n",
"    # A regression forest fit on the 0/1 target yields scores in [0, 1] that serve as probability estimates\n",
"    RF_clf = RandomForestRegressor(max_depth=6).fit(X_train, y)\n",
"    y_pred = RF_clf.predict(test)\n",
"\n",
"    test['compliance'] = y_pred\n",
"    return test.compliance\n" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.11096674526927389" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blight_model().mean()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ],
"metadata": { "coursera": { "course_slug": "python-machine-learning", "graded_item_id": "nNS8l", "launcher_item_id": "yWWk7", "part_id": "w8BSS" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.2" } }, "nbformat": 4, "nbformat_minor": 2 }