
    $k$-Nearest Neighbor

    We'll implement the $k$-Nearest Neighbor ($k$-NN) algorithm for this assignment. We recommend using the Madelon dataset, although it is not mandatory. If you choose to use a different dataset, it should meet the following criteria:

    • dependent variable should be binary (suited for binary classification)
    • number of features (attributes) should be at least 50
    • number of examples (instances) should be between 1,000 and 5,000

    A skeleton of a general supervised learning model is provided in "model.ipynb". Please look through it and complete the "preprocess" and "partition" methods.

    Assignment Goals:

    In this assignment, we will:

    • learn to split a dataset into training/validation/test partitions
    • use the validation dataset to find a good value for $k$
    • having found the "best" $k$, obtain final performance measures:
      • accuracy, generalization error and ROC curve

    GRADING

    You will be graded on parts that are marked with #TODO comments. Read the comments in the code to make sure you don't miss any.

    Mandatory for 478 & 878:

    Task                        478   878
    1  Implement distance        10    10
    2  Implement k-NN methods    25    20
    3  Model evaluation          25    20
    4  Learning curve            20    20
    6  ROC curve analysis        20    20

    Mandatory for 878, bonus for 478:

    Task                        478   878
    5  Optimizing $k$            10    10

    Points are broken down further in the Rubric sections below. The first score is for 478 students, the second for 878 students. There are a total of 100 points in this assignment, plus an extra 10 bonus points for 478 students.

    You can use numpy for array operations and matplotlib for plotting in this assignment. Please do not add other libraries.

    import numpy as np
    import matplotlib.pyplot as plt

    The following code makes the Model class and relevant functions from model.ipynb available.

    %run 'model.ipynb'

    TASK 1: Implement distance function

    Choice of distance metric plays an important role in the performance of $k$-NN. Let's start by implementing the "distance" function in model.ipynb. It should take two data points and the name of the metric, and return a scalar value.

    Rubric:

    • Euclidean +5, +5
    • Hamming +5, +5
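
    For orientation, here is a minimal sketch of what such a function could look like, assuming the inputs are 1-D numpy arrays; treat it as a starting point, not the required implementation.

    def distance(x, y, metric):
        '''
        Return the distance between 1-D arrays x and y under the given
        metric ('Euclidean' or 'Hamming').
        '''
        if metric == 'Euclidean':
            return np.sqrt(np.sum((x - y) ** 2))
        elif metric == 'Hamming':
            # number of positions at which the entries differ
            return np.sum(x != y)
        else:
            raise ValueError('Unknown metric: {}'.format(metric))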

    Test distance

    x = np.array(range(100))
    y = np.array(range(100, 200))
    dist_euclidean = distance(x, y, 'Euclidean')
    dist_hamming = distance(x, y, 'Hamming')

    TASK 2: Implement $k$-NN Class Methods

    We can start implementing our $k$-NN classifier. The kNN class inherits from the Model class. You'll need to implement the "fit" and "predict" methods. Use the "distance" function you defined above. "fit" takes $k$ as an argument. "predict" takes as input an array of test example indices and outputs, for each example, the ratio of positive labels among its $k$ nearest neighbors; the predicted class is obtained later by thresholding this ratio.

    Rubric:

    • correct implementation of fit method +5, +5
    • correct implementation of predict method +20, +15
    class kNN(Model):
        '''
        Inherits Model class. Implements the k-NN algorithm for classification.
        '''

        def fit(self, k, distance_f, **kwargs):
            '''
            Fit the model. This is pretty straightforward for k-NN.
            '''
            # TODO
            # set self.k, self.distance_f, self.distance_metric
            raise NotImplementedError

            return

        def predict(self, test_indices):
            '''
            Return the ratio of positive labels among the k nearest
            neighbors of each test example.
            '''
            pred = []
            # TODO

            # For each test point, use your distance function,
            #   distance_f(..., distance_metric),
            # to find the labels of its k nearest neighbors.

            # Find the ratio of positive labels and append it to pred
            # with pred.append(ratio).
            raise NotImplementedError

            return np.array(pred)
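
    For orientation, one possible shape of a completed "predict" is sketched below. The attribute names self.features, self.labels, and self.training_indices are assumptions about the Model skeleton in model.ipynb; adapt them to the names your skeleton actually uses.

    # Hypothetical sketch of a completed predict; self.features,
    # self.labels and self.training_indices are assumed attribute names.
    def predict_sketch(self, test_indices):
        pred = []
        train_idx = np.asarray(self.training_indices)
        for i in test_indices:
            # distance from test point i to every training point
            dists = np.array([self.distance_f(self.features[i],
                                              self.features[j],
                                              self.distance_metric)
                              for j in train_idx])
            # indices of the k nearest training points
            nearest = train_idx[np.argsort(dists)[:self.k]]
            # ratio of positive labels among the k neighbors
            pred.append(np.mean(self.labels[nearest] == 1))
        return np.array(pred)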
        

    TASK 3: Build and Evaluate the Model

    Rubric:

    • Reasonable accuracy values +10, +5
    • Reasonable confidence intervals on the error estimate +10, +10
    • Reasonable confusion matrix +5, +5

    Remember, you need to provide values for the t and v parameters of the "partition" function, and for feature_file and label_file of the "preprocess" function.

    # populate the keyword arguments dictionary kwargs
    kwargs = {'t': 0.3, 'v': 0.1, 'feature_file': ..., 'label_file': ...}
    # initialize the model
    my_model = kNN(preprocessor_f=preprocess, partition_f=partition, **kwargs)

    Assign a value to $k$ and fit the kNN model.

    kwargs_f = {'metric': 'Euclidean'}
    my_model.fit(k = 10, distance_f=distance, **kwargs_f)

    Evaluate your model on the test data and report your accuracy. Also, calculate and report the confidence interval on the generalization error estimate.

    final_labels = my_model.predict(my_model.test_indices)
    
    # For now, we will consider a data point to be predicted positive if more
    # than 0.5 of its k nearest neighbors are positive.
    threshold = 0.5
    
    # TODO
    # Calculate and report accuracy and generalization error with confidence interval here. Show your work in this cell.
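
    One way to fill in this cell is sketched below; it assumes binary labels in {0, 1} and uses the normal approximation for a 95% confidence interval on the error estimate.

    # Hypothetical sketch: accuracy and a 95% confidence interval on the
    # generalization error (assumes labels are in {0, 1}).
    true_labels = my_model.labels[my_model.test_indices]
    predicted = (final_labels > threshold).astype(int)

    n = len(true_labels)
    accuracy = np.mean(predicted == true_labels)
    error = 1 - accuracy

    # normal-approximation interval for a binomial proportion
    margin = 1.96 * np.sqrt(error * (1 - error) / n)
    print('accuracy: {:.3f}'.format(accuracy))
    print('generalization error: {:.3f} +/- {:.3f}'.format(error, margin))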

    Computing the confusion matrix for $k = 10$

    Now that we have the true labels and the predicted ones from our model, we can build a confusion matrix and see how accurate our model is. Implement the "conf_matrix" function (in model.ipynb) that takes as input an array of true labels ($true$) and an array of predicted labels ($pred$). It should output a numpy.ndarray. You do not need to change the value of the threshold parameter yet.
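
    A minimal sketch of one possible "conf_matrix" is below, assuming labels in {0, 1}; pick a row/column convention and state it in your implementation.

    # Hypothetical sketch of conf_matrix; rows are actual classes,
    # columns are predicted classes (0 first, then 1).
    def conf_matrix_sketch(true, pred, threshold=0.5):
        predicted = (pred > threshold).astype(int)
        tn = np.sum((true == 0) & (predicted == 0))
        fp = np.sum((true == 0) & (predicted == 1))
        fn = np.sum((true == 1) & (predicted == 0))
        tp = np.sum((true == 1) & (predicted == 1))
        return np.array([[tn, fp],
                         [fn, tp]])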

    # TODO
    conf_matrix(my_model.labels[my_model.test_indices], final_labels, threshold = 0.5)

    TASK 4: Plotting a learning curve

    A learning curve shows how error changes as the training set size increases. For more information, see learning curves. We'll plot the error values for training and validation data while varying the size of the training set. Report a training set size that achieves a good balance between bias and variance.

    Rubric:

    • Correct training error calculation for different training set sizes +8, +8
    • Correct validation error calculation for different training set sizes +8, +8
    • Reasonable learning curve +4, +4
    # try sizes 50, 100, 150, 200, ..., up to the largest multiple of 50 <= train_size
    training_sizes = np.arange(50, my_model.train_size + 1, 50)
    
    # TODO
    
    # Calculate error for each entry in training_sizes
    # for training and validation sets and populate
    # error_train and error_val arrays. Each entry in these arrays
    # should correspond to each entry in training_sizes.
    
    plt.plot(training_sizes, error_train, 'r', label = 'training_error')
    plt.plot(training_sizes, error_val, 'g', label = 'validation_error')
    plt.legend()
    plt.show()
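
    One way to populate error_train and error_val is sketched below. It assumes the model exposes training_indices and val_indices and that predict works on arbitrary index arrays; these names are assumptions about your skeleton, and temporarily shrinking training_indices is just one way to restrict the training set.

    # Hypothetical sketch; my_model.training_indices and
    # my_model.val_indices are assumed attribute names.
    error_train, error_val = [], []
    full_train = my_model.training_indices
    for n in training_sizes:
        my_model.training_indices = full_train[:n]  # train on the first n points
        for idx, errs in ((my_model.training_indices, error_train),
                          (my_model.val_indices, error_val)):
            predicted = (my_model.predict(idx) > 0.5).astype(int)
            errs.append(np.mean(predicted != my_model.labels[idx]))
    my_model.training_indices = full_train          # restore the full set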

    TASK 5: Determining $k$

    Rubric:

    • Increased accuracy with the new $k$ +5, +5
    • Improved confusion matrix +5, +5

    We can use the validation set to come up with a $k$ value that results in better performance in terms of accuracy.

    Below, calculate accuracies for different values of $k$ using the validation set. Report a good $k$ value and use it in the analyses that follow this section. Report the confusion matrix for the new value of $k$.

    # TODO
    
    # Change values of k. 
    # Calculate accuracies for the validation set.
    # Report a good k value.
    # Calculate the confusion matrix for new k.
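
    A possible loop for this cell is sketched below; my_model.val_indices is an assumed attribute name, and kwargs_f is reused from the fit cell above.

    # Hypothetical sketch; my_model.val_indices is an assumed attribute name.
    val_idx = my_model.val_indices
    val_true = my_model.labels[val_idx]
    for k in (1, 3, 5, 7, 9, 11, 15, 21):
        my_model.fit(k=k, distance_f=distance, **kwargs_f)
        predicted = (my_model.predict(val_idx) > 0.5).astype(int)
        acc = np.mean(predicted == val_true)
        print('k = {:2d}  validation accuracy = {:.3f}'.format(k, acc))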

    TASK 6: ROC curve analysis

    • Correct implementation +20, +20

    ROC curve and confusion matrix for the final model

    ROC curves are a good way to visualize sensitivity vs. 1-specificity for varying cutoff points. Now implement, in "model.ipynb", a "ROC" function that predicts the labels of the test set examples using different threshold values in "predict", and plot the ROC curve. "ROC" takes a list of threshold values to try and returns two arrays: one where each entry is the sensitivity at a given threshold, and one where each entry is the corresponding 1-specificity.
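
    A sketch of one possible "ROC" is below. It assumes predict returns the positive-neighbor ratio for each example and that labels are in {0, 1}; the signature mirrors the call in the next cell but is otherwise an assumption.

    # Hypothetical sketch of ROC; model.labels and model.predict are
    # assumed to behave as in the cells above.
    def ROC_sketch(model, indices, thresholds):
        true = model.labels[indices]
        ratios = model.predict(indices)
        sens, one_minus_spec = [], []
        for t in thresholds:
            predicted = (ratios > t).astype(int)
            tp = np.sum((true == 1) & (predicted == 1))
            fn = np.sum((true == 1) & (predicted == 0))
            fp = np.sum((true == 0) & (predicted == 1))
            tn = np.sum((true == 0) & (predicted == 0))
            sens.append(tp / (tp + fn))
            one_minus_spec.append(fp / (fp + tn))
        return np.array(sens), np.array(one_minus_spec)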

    We can finally create the confusion matrix and plot the ROC curve for our optimal $k$-NN classifier. Use the $k$ value you found above if you completed TASK 5; otherwise use $k = 10$. We'll plot the ROC curve for threshold values between 0.1 and 1.0.

    # ROC curve
    roc_sens, roc_spec_ = ROC(my_model, my_model.test_indices, np.arange(0.1, 1.0, 0.1))
    # plot 1-specificity on the x-axis, sensitivity on the y-axis
    plt.plot(roc_spec_, roc_sens)
    plt.show()