{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# JUPYTER NOTEBOOK TIPS\n",
"\n",
"Each rectangular box is called a cell. \n",
"* ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code, if it contains Markdown, it returns rendered text.\n",
"* alt+ENTER evaluates the current cell and adds a new cell below it.\n",
"* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two \"d\"s in a row) when the frame is blue."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Supervised Learning Model Skeleton\n",
"\n",
"We'll use this skeleton for implementing different supervised learning algorithms. Please complete \"preprocess\" and \"partition\" methods below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You might need to preprocess your dataset depending on which dataset you are using. This step is for reading the dataset and for extracting features and labels. The \"preprocess\" function should return an $n \\times d$ features array, and an $n \\times 1$ labels array, where $n$ is the number of examples and $d$ is the number of features in the dataset. In cases where there is a big difference between the scales of feautures, we want to normalize the features to have values in the same range [0,1]. If that is the case with your dataset, output normalized features to get better prediction results."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def preprocess(file_path):\n",
" '''\n",
" file_path: where to read the dataset from\n",
" returns nxd features, nx1 labels\n",
" '''\n",
" # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter, \n",
" # e.g. for comma-separated files use delimiter=',' argument.\n",
" \n",
" raise NotImplementedError\n",
"\n",
" \n",
" return features, labels"
]
},
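{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, below is a minimal sketch of one way \"preprocess\" could be written. It assumes a hypothetical comma-separated file whose last column holds the label and uses min-max normalization; both are assumptions, so adapt them to your dataset. It is named preprocess_example so it does not replace the skeleton above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def preprocess_example(file_path):\n",
"    '''\n",
"    Example sketch (not the required solution): assumes a comma-separated\n",
"    file whose last column is the label.\n",
"    '''\n",
"    data = np.genfromtxt(file_path, delimiter=',')\n",
"    features = data[:, :-1]              # n x d feature array\n",
"    labels = data[:, -1].reshape(-1, 1)  # n x 1 labels array\n",
"    # min-max normalize each feature to [0, 1] (assumes no constant columns)\n",
"    mins = features.min(axis=0)\n",
"    maxs = features.max(axis=0)\n",
"    features = (features - mins) / (maxs - mins)\n",
"    return features, labels"
]
},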
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, you'll need to split your dataset into training and validation and test sets. The \"split\" function should take as input the size of the whole dataset and randomly sample a proportion $p$ of the dataset as test partition and a proportion of $v$ as validation partition. The remaining will be used as training data. For example, to keep 30% of the examples as test and %10 as validation, set $p=0.3$ and $v=0.1$. You should choose these values according to the size of the data available to you. The \"split\" function should return indices of the training, validation and test sets. These will be used to index into the whole training set."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"def partition(size, p, v):\n",
" '''\n",
" size: number of examples in the whole dataset\n",
" p: proportion kept for test\n",
" v: proportion kept for validation\n",
" '''\n",
" \n",
" # np.random.choice might come in handy. Do not sample with replacement!\n",
" # Be sure to not use the same indices in test and validation sets!\n",
" \n",
" # use the first np.ceil(size*p) for test, \n",
" # the following np.ceil(size*v) for validation set.\n",
" \n",
" raise NotImplementedError\n",
" \n",
" # return two 1d arrays: one keeping validation set indices, the other keeping test set indices \n",
" return val_indices, test_indices"
]
},
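{
"cell_type": "markdown",
"metadata": {},
"source": [
"Likewise, one possible \"partition\" sketch using np.random.choice without replacement is shown below; it is named partition_example so it does not overwrite your own implementation, and it follows the hint of using the first np.ceil(size*p) shuffled indices for test and the next np.ceil(size*v) for validation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def partition_example(size, p, v):\n",
"    '''\n",
"    Example sketch: shuffle all indices once (without replacement),\n",
"    then slice off the test and validation portions.\n",
"    '''\n",
"    shuffled = np.random.choice(size, size, replace=False)\n",
"    n_test = int(np.ceil(size * p))\n",
"    n_val = int(np.ceil(size * v))\n",
"    test_indices = shuffled[:n_test]\n",
"    val_indices = shuffled[n_test:n_test + n_val]\n",
"    return val_indices, test_indices"
]
},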
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"class Model:\n",
" # set the preprocessing function, partition_function\n",
" # use kwargs to pass arguments to preprocessor_f and partition_f\n",
" # kwargs is a dictionary and should contain p, v and file_path\n",
" # e.g. {'p': 0.3, 'v': 0.1, 'file_path': some_path}\n",
" \n",
" def __init__(self, preprocessor_f, partition_f, **kwargs):\n",
" \n",
" self.features, self.labels = preprocessor_f(kwargs['file_path'])\n",
" self.size = len(self.labels) # number of examples in dataset \n",
" self.feat_dim = self.features.shape[1] # number of features\n",
" self.val_indices, self.test_indices = partition_f(self.size, kwargs['p'], kwargs['v'])\n",
" self.val_size = len(self.val_indices)\n",
" self.test_size = len(self.test_indices)\n",
" \n",
" self.train_indices = np.delete(np.arange(self.size), np.append(self.test_indices, self.val_indices), 0)\n",
" self.train_size = len(self.train_indices)\n",
" \n",
" def fit(self):\n",
" raise NotImplementedError\n",
" \n",
" def predict(self, testpoint):\n",
" raise NotImplementedError"
]
},
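{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once \"preprocess\" and \"partition\" are implemented, a model could be constructed as sketched below; the file path data.csv and the proportions are placeholder values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical usage once preprocess and partition are implemented;\n",
"# 'data.csv', p=0.3 and v=0.1 are placeholder values.\n",
"# model = Model(preprocess, partition, p=0.3, v=0.1, file_path='data.csv')\n",
"# print(model.train_size, model.val_size, model.test_size)"
]
},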
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## General supervised learning related functions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Implement the \"conf_matrix\" function that takes as input an array of true labels ($true$) and an array of predicted labels ($pred$). It should output a numpy.ndarray."
]
},
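{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a reference point, one possible sketch is shown below, assuming binary labels with 1 marking the positive class; it is named conf_matrix_example so it does not replace the implementation you write in the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def conf_matrix_example(true, pred, positive=1):\n",
"    '''\n",
"    Example sketch: assumes binary labels; positive marks the positive class.\n",
"    '''\n",
"    true = np.asarray(true).ravel()\n",
"    pred = np.asarray(pred).ravel()\n",
"    tp = np.sum((pred == positive) & (true == positive))\n",
"    tn = np.sum((pred != positive) & (true != positive))\n",
"    fp = np.sum((pred == positive) & (true != positive))\n",
"    fn = np.sum((pred != positive) & (true == positive))\n",
"    return np.array([tp, tn, fp, fn])"
]
},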
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def conf_matrix(true, pred):\n",
" '''\n",
" true: nx1 array of true labels for test set\n",
" pred: nx1 array of predicted labels for test set\n",
" '''\n",
" raise NotImplementedError\n",
" \n",
" tp = tn = fp = fn = 0\n",
" # calculate true positives (tp), true negatives(tn)\n",
" # false positives (fp) and false negatives (fn)\n",
" \n",
" # returns the confusion matrix as numpy.ndarray\n",
" return np.array([tp,tn, fp, fn])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}