Commit 0cccecb

add solutions

1 parent de15377 commit 0cccecb

File tree

2 files changed: +500 −0 lines changed

@@ -0,0 +1,293 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a027a99e-1ac6-4336-b87a-f0d5d79e22e2",
   "metadata": {},
   "source": [
    "# Python Machine Learning: Regression Solutions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8546cbf5-1c72-40c5-be75-234d1c3c9f3b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "772d7956-975b-4489-8336-40dc93e3f528",
   "metadata": {},
   "outputs": [],
   "source": [
    "data = pd.read_csv('../data/auto-mpg.csv')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59c90903-4781-4a14-a73f-5322e7003705",
   "metadata": {},
   "source": [
    "---\n",
    "### Challenge 1: More EDA\n",
    "\n",
    "Create the following plots, or examine the following distributions, while exploring your data:\n",
    "\n",
    "1. A histogram of the displacement.\n",
    "2. A histogram of the horsepower.\n",
    "3. A histogram of the weight.\n",
    "4. A histogram of the acceleration.\n",
    "5. What are the unique model years, and their counts?\n",
    "6. What are the unique origin values, and their counts?\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "859bccf7-82fa-4095-a6ff-523ef9eb7759",
   "metadata": {},
   "outputs": [],
   "source": [
    "ax = data['displacement'].hist(grid=False, bins=np.linspace(75, 450, 15))\n",
    "ax.set_xlabel('Displacement')\n",
    "ax.set_ylabel('Frequency')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "631de034-f513-4199-9e76-e2a1388d0475",
   "metadata": {},
   "outputs": [],
   "source": [
    "ax = data['horsepower'].hist(grid=False, bins=np.linspace(45, 230, 15))\n",
    "ax.set_xlabel('Horsepower')\n",
    "ax.set_ylabel('Frequency')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b5c0f99-584f-4d52-ad12-051eeb238067",
   "metadata": {},
   "outputs": [],
   "source": [
    "ax = data['weight'].hist(grid=False)\n",
    "ax.set_xlabel('Weight')\n",
    "ax.set_ylabel('Frequency')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "95c88602-8d09-4b1c-ab93-d7a0329cee4f",
   "metadata": {},
   "outputs": [],
   "source": [
    "ax = data['acceleration'].hist(grid=False)\n",
    "ax.set_xlabel('Acceleration')\n",
    "ax.set_ylabel('Frequency')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e40bdacb-9b47-491a-995c-961430fcb4b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "data['model year'].value_counts().sort_index()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d56a338a-1929-4c19-a7bc-3beeb7045335",
   "metadata": {},
   "outputs": [],
   "source": [
    "data['origin'].value_counts().sort_index()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c391bc78-fb9c-441c-8c04-e6708645c157",
   "metadata": {},
   "source": [
    "---\n",
    "### Challenge 2: Mean Absolute Error\n",
    "\n",
    "Another commonly used metric in regression is the **Mean Absolute Error (MAE)**. As the name suggests, it is calculated by taking the mean of the absolute errors. Calculate the mean absolute error on the training and test data with your trained model. We've imported the MAE for you below.\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b6f6d56-5967-468c-bcd2-0ceb8819e630",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Remove the response variable and car name\n",
    "X = data.drop(columns=['car name', 'mpg'])\n",
    "# Assign response variable to its own variable\n",
    "y = data['mpg'].astype(np.float64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "edc3dbcb-9610-4342-96a3-5a4b7d400a15",
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=23)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6eb59e4-1597-468e-b18d-ef5ecc519caf",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = LinearRegression()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0994de85-ae86-43aa-9fe1-0ded209edbc9",
   "metadata": {},
   "outputs": [],
   "source": [
    "model.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c54289e-f6d0-4892-84bb-8728d8591402",
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train_pred = model.predict(X_train)\n",
    "y_test_pred = model.predict(X_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a7e56aa-35d8-4066-9fe1-29de73c359c3",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import mean_absolute_error\n",
    "print(mean_absolute_error(y_train, y_train_pred))\n",
    "print(mean_absolute_error(y_test, y_test_pred))"
   ]
  },
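  {
   "cell_type": "code",
   "execution_count": null,
   "id": "mae-manual-check",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch added to the original solutions: MAE computed directly from\n",
    "# its definition (mean of absolute errors), as a sanity check against\n",
    "# sklearn's mean_absolute_error. The cell id above is made up, not\n",
    "# from the original notebook.\n",
    "print(np.mean(np.abs(y_train - y_train_pred)))\n",
    "print(np.mean(np.abs(y_test - y_test_pred)))"
   ]
  },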
  {
   "cell_type": "markdown",
   "id": "c4205dbf-87e5-4bbc-97e2-f80c3bde8530",
   "metadata": {},
   "source": [
    "---\n",
    "### Challenge 3: Feature Engineering\n",
    "\n",
    "You might notice that the `origin` variable has only three values. So, it's really a categorical variable, where each sample has one of three origins. So far, though, we've treated it as a continuous variable.\n",
    "\n",
    "How can we properly treat this variable as categorical? This is a question of preprocessing and **feature engineering**.\n",
    "\n",
    "What we can do is replace the `origin` feature with two binary variables. The first tells us whether origin is equal to 2. The second tells us whether origin is equal to 3. If both are false, that means origin is equal to 1.\n",
    "\n",
    "By fitting a linear regression with these two binary features rather than treating `origin` as continuous, we can get a better sense of how the origin impacts the MPG.\n",
    "\n",
    "Create two new binary features corresponding to origin, and then recreate the training and test data. Then, fit a linear model to the new data. What do you find about the performance and new coefficients?\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "651f4a11-aa7f-45d5-84de-d3c6f8b551bd",
   "metadata": {},
   "outputs": [],
   "source": [
    "data['origin_2'] = (data['origin'] == 2).astype('int')\n",
    "data['origin_3'] = (data['origin'] == 3).astype('int')"
   ]
  },
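  {
   "cell_type": "code",
   "execution_count": null,
   "id": "get-dummies-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch added to the original solutions: the same dummy encoding via\n",
    "# pandas. With drop_first=True, get_dummies drops the origin_1 column,\n",
    "# matching the manual encoding above. The cell id is made up, not from\n",
    "# the original notebook.\n",
    "pd.get_dummies(data['origin'], prefix='origin', drop_first=True).head()"
   ]
  },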
241+
{
242+
"cell_type": "code",
243+
"execution_count": null,
244+
"id": "0ba5b282-fb1f-4550-a2e6-ce156ae4bb51",
245+
"metadata": {},
246+
"outputs": [],
247+
"source": [
248+
"# Remove the response variable and car name\n",
249+
"X = data.drop(columns=['car name', 'mpg', 'origin'])\n",
250+
"# Assign response variable to its own variable\n",
251+
"y = data['mpg'].astype(np.float64)"
252+
]
253+
},
254+
{
255+
"cell_type": "code",
256+
"execution_count": null,
257+
"id": "b633c0f1-de8a-46ad-a573-7b37b50089a9",
258+
"metadata": {},
259+
"outputs": [],
260+
"source": [
261+
"# Split\n",
262+
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=23)\n",
263+
"# Fit model\n",
264+
"model = LinearRegression()\n",
265+
"model.fit(X_train, y_train)\n",
266+
"# Evaluate model\n",
267+
"print(model.score(X_test, y_test))\n",
268+
"print(model.coef_)"
269+
]
270+
}
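,
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "coef-by-feature",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch added to the original solutions: pair each coefficient with\n",
    "# its feature name, which makes the origin_2 and origin_3 effects\n",
    "# easier to read off. The cell id is made up, not from the original\n",
    "# notebook.\n",
    "pd.Series(model.coef_, index=X_train.columns)"
   ]
  }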
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "nlp",
   "language": "python",
   "name": "nlp"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}

0 commit comments