" 1D array with integer entries containing indices\n",
" size: int\n",
" number of examples in the dataset that you want to split into k\n",
" k: int \n",
" Number of desired splits in data.(Assume test set is already separated.)\n",
" Returns:\n",
...
...
%% Cell type:markdown id: tags:
# JUPYTER NOTEBOOK TIPS
Each rectangular box is called a cell.
* ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code, and if it contains Markdown, it renders the text.
* alt+ENTER evaluates the current cell and adds a new cell below it.
* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two "d"s in a row) when the frame is blue.
%% Cell type:markdown id: tags:
# Supervised Learning Model Skeleton
We'll use this skeleton for implementing different supervised learning algorithms.
%% Cell type:code id: tags:
``` python
class Model:
    # Base class for the supervised learning algorithms below;
    # subclasses are expected to override fit and predict.
    def fit(self):
        raise NotImplementedError

    def predict(self, test_points):
        raise NotImplementedError
```
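%% Cell type:markdown id: tags:
To see how the skeleton is meant to be used, here is a minimal sketch of a toy subclass. The `MeanPredictor` name and its behavior (always predicting the mean training label) are illustrative assumptions, not part of the assignment.
%% Cell type:code id: tags:
``` python
import numpy as np

class MeanPredictor(Model):
    # Toy model: fit stores the mean of the training labels,
    # and predict returns that mean for every test point.
    def fit(self, labels):
        self.mean = np.mean(labels)

    def predict(self, test_points):
        return np.full(len(test_points), self.mean)
```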
%% Cell type:code id: tags:
``` python
def preprocess(data_f, feature_names_f):
    '''
    data_f: where to read the dataset from
    feature_names_f: where to read the feature names from
    Returns:
        features: ndarray
            n x d array containing `float` feature values
        labels: ndarray
            1D array containing `float` labels
    '''
    # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter,
    # e.g. for comma-separated files use the delimiter=',' argument.
    raise NotImplementedError
    return features, labels
```
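%% Cell type:markdown id: tags:
As a hedged illustration of how `np.genfromtxt` could be used here (not the required implementation), the sketch below assumes a single comma-separated file with the label stored in the last column; the `preprocess_sketch` name and that layout are assumptions, not part of the assignment.
%% Cell type:code id: tags:
``` python
import numpy as np

def preprocess_sketch(data_f):
    # Assumption: comma-separated file, label in the last column.
    raw = np.genfromtxt(data_f, delimiter=',', dtype=float)
    features = raw[:, :-1]   # n x d feature matrix
    labels = raw[:, -1]      # 1D label array
    return features, labels
```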
%% Cell type:markdown id: tags:
In cases where data is not abundantly available, we resort to estimating the error as the average of the errors on different splits of the dataset. Each fold of the data is used for testing and for training in turn; assuming we split our data into 3 folds, we'd
* train our model on fold-1 + fold-2 and test on fold-3
* train our model on fold-1 + fold-3 and test on fold-2
* train our model on fold-2 + fold-3 and test on fold-1.
We'd use the average of the errors obtained in the three runs as our error estimate.
Implement the function "kfold" below.
%% Cell type:code id: tags:
``` python
# TODO: Programming Assignment 2
def kfold(indices, k):
    '''
    Args:
        indices: ndarray
            1D array with integer entries containing the indices of the examples to split
        k: int
            Number of desired splits in the data. (Assume the test set is already separated.)
    Returns:
        fold_dict: dict
            A dictionary with integer keys corresponding to folds. Values are (train_indices, val_indices).
            val_indices: ndarray
                1/k of the indices, randomly chosen and separated out as the validation partition.
            train_indices: ndarray
                The remaining (1 - 1/k) of the indices, used for training.
            e.g. fold_dict = {0: (train_0_indices, val_0_indices),
                              1: (train_1_indices, val_1_indices),
                              2: (train_2_indices, val_2_indices)} for k = 3
    '''
    return fold_dict
```
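%% Cell type:markdown id: tags:
As a rough sketch of the fold-splitting idea (not necessarily the required implementation), the cell below shuffles the indices and uses `np.array_split`; the `kfold_sketch` name, the `seed` argument, and the choice of numpy helpers are assumptions, not part of the assignment.
%% Cell type:code id: tags:
``` python
import numpy as np

def kfold_sketch(indices, k, seed=0):
    # Shuffle a copy of the indices so that the folds are random.
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(indices)
    # Split into k (roughly) equal folds; fold i is validation, the rest is training.
    folds = np.array_split(shuffled, k)
    fold_dict = {}
    for i in range(k):
        val_indices = folds[i]
        train_indices = np.concatenate([folds[j] for j in range(k) if j != i])
        fold_dict[i] = (train_indices, val_indices)
    return fold_dict

# e.g. kfold_sketch(np.arange(9), 3) returns a dict with keys 0, 1, 2,
# each mapping to a (train_indices, val_indices) pair of disjoint index arrays.
```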
%% Cell type:markdown id: tags:
Implement "mse" and regularization functions. They will be used in the fit method of linear regression.
%% Cell type:code id: tags:
``` python
#TODO: Programming Assignment 2
def mse(y_pred, y_true):
    '''
    Args:
        y_pred: ndarray
            1D array containing data with `float` type. Values predicted by our method.
        y_true: ndarray
            1D array containing data with `float` type. True y values.
    Returns:
        cost: float
            Mean squared error between y_pred and y_true.
    '''
    raise NotImplementedError
    return cost
```
%% Cell type:code id: tags:
``` python
#TODO: Programming Assignment 2
def regularization(weights, method):
    '''
    Args:
        weights: ndarray
            1D array with `float` entries
        method: str
    Returns:
        value: float
            A single value. Regularization term that will be used in the cost function in fit.