%% Cell type:markdown id: tags:
# JUPYTER NOTEBOOK TIPS
Each rectangular box is called a cell.
* Ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code; if it contains Markdown, it renders the text.
* Alt+ENTER evaluates the current cell and adds a new cell below it.
* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two "d"s in a row) when the frame is blue.
%% Cell type:markdown id: tags:
# GRADING
You will be graded on parts that are marked with **\#TODO** comments. Read the comments in the code to make sure you don't miss any.
### Mandatory for 478 & 878:
| Tasks | 478 | 878 |
|----------------------------|-----|-----|
| Implement `preprocess` | 10 | 5 |
| Implement `partition` | 10 | 5 |
| Putting the model together | 5 | 5 |
### Mandatory for 878, bonus for 478
| Tasks | 478 | 878 |
|---------------------------------------|-----|-----|
| Modify `preprocess` for normalization | 5 | 10 |
Points are broken down further below in Rubric sections. The **first** score is for 478, the **second** is for 878 students. There are a total of 25 points in this assignment and an extra 5 bonus points for 478 students.
%% Cell type:markdown id: tags:
# Supervised Learning Model Skeleton
We'll use this skeleton for implementing different supervised learning algorithms. For this first assignment, we'll read and partition the ["madelon" dataset](http://archive.ics.uci.edu/ml/datasets/madelon). Features and labels for the first two examples are listed below. Please complete "preprocess" and "partition" methods.
%% Cell type:markdown id: tags:
The 500 features in the "madelon" dataset have integer values:
%% Cell type:code id: tags:
``` python
! echo '../data/madelon.data'; head -n 2 ../data/madelon.data | nl -s '-) '
```
%% Output
../data/madelon.data
1-) 485 477 537 479 452 471 491 476 475 473 455 500 456 507 478 491 447 422 480 482 515 482 464 484 477 496 509 491 459 482 483 505 508 458 509 517 479 487 473 472 474 531 485 508 517 489 507 515 440 465 550 532 450 483 460 469 507 485 479 458 516 480 460 479 648 480 561 481 474 474 544 484 490 451 494 480 486 459 521 500 466 457 494 492 488 497 477 461 473 464 476 471 481 507 474 500 481 536 464 501 479 480 483 462 470 181 510 470 431 482 496 481 469 539 491 482 481 476 533 495 474 485 479 495 465 541 493 488 452 481 491 501 477 479 503 529 540 504 482 463 477 530 508 488 488 474 479 506 478 511 501 474 483 575 478 482 461 480 543 415 527 477 487 486 511 474 477 482 476 516 466 492 561 479 472 457 497 475 452 491 477 454 461 472 481 490 526 490 459 478 461 516 511 544 519 487 485 475 477 476 478 470 493 581 484 476 521 474 492 459 487 504 464 485 478 465 603 475 481 491 555 424 528 511 384 525 459 478 477 539 479 508 471 517 482 518 473 478 506 476 507 434 466 480 547 518 516 476 492 454 463 497 477 531 472 495 532 496 492 480 480 479 517 470 470 500 468 477 486 553 490 499 450 469 466 479 476 401 491 551 477 517 492 475 537 516 472 451 484 471 469 523 496 482 458 487 477 457 458 493 458 517 478 482 474 517 482 488 490 485 440 455 464 531 483 467 494 488 414 491 494 497 501 476 481 485 478 476 491 492 523 492 476 464 496 473 658 507 628 484 468 448 502 618 438 486 496 535 452 497 490 485 504 477 481 473 517 476 479 483 482 458 464 466 473 482 497 479 497 495 489 483 500 490 479 471 468 496 419 513 475 471 514 479 480 486 480 477 494 454 480 539 477 441 482 461 484 510 475 485 480 474 474 442 477 502 402 478 504 476 484 475 488 486 524 506 480 451 512 498 478 485 495 476 496 485 496 485 486 482 505 528 496 533 504 512 474 646 526 485 541 487 568 492 467 479 483 479 546 476 457 463 517 471 482 630 481 494 440 509 507 512 496 488 462 498 480 511 500 437 537 470 515 476 467 401 485 499 495 490 508 463 487 531 515 476 482 463 467 479 477 481 477 485 511 485 481 479 475 496
2-) 483 458 460 487 587 475 526 479 485 469 434 483 465 503 472 478 469 518 495 491 478 530 462 494 549 469 516 487 475 486 478 514 542 406 469 452 483 498 480 476 474 504 478 493 472 461 521 521 499 458 466 519 487 485 489 485 551 516 435 487 525 481 529 486 488 513 415 463 481 481 491 504 496 433 475 416 481 482 493 536 483 416 553 460 554 447 477 499 470 527 476 480 507 522 474 485 478 479 468 397 482 469 477 476 553 431 489 447 535 487 488 557 485 515 484 497 479 494 436 470 477 468 480 587 503 429 496 502 473 485 522 484 481 486 519 455 442 499 470 483 508 510 481 494 483 473 481 510 480 447 538 497 475 404 479 519 486 492 520 519 500 482 486 487 533 487 476 480 475 459 470 522 489 477 447 519 484 472 458 510 529 539 456 478 490 509 481 524 530 478 495 507 459 467 494 470 480 491 476 503 485 475 508 488 495 477 507 482 447 482 483 455 485 474 478 579 540 484 508 480 492 517 490 547 510 465 495 477 475 497 477 442 489 507 466 504 493 471 478 467 530 551 476 470 575 477 510 486 473 504 451 450 477 506 480 506 575 502 486 489 485 479 488 524 465 516 443 503 517 498 482 467 454 407 484 479 475 498 514 492 477 435 491 475 503 480 506 512 482 477 504 527 454 483 458 473 484 542 469 459 462 503 477 492 469 467 475 483 491 464 466 475 477 502 483 506 474 494 469 524 483 434 488 463 495 483 468 481 493 489 538 469 477 480 460 495 469 469 528 544 497 497 462 478 494 481 493 461 482 483 471 422 493 511 471 497 523 476 462 453 471 502 475 536 481 389 491 464 500 553 467 497 489 486 490 540 487 488 526 477 480 462 523 483 488 475 485 479 492 452 479 441 475 442 476 475 484 500 570 482 481 428 477 456 477 546 502 477 516 467 512 469 498 501 503 539 493 505 543 556 486 483 514 476 457 507 475 448 479 481 486 500 489 442 509 479 500 517 489 488 494 496 463 460 472 478 457 487 420 463 484 474 459 311 479 582 480 495 538 487 537 488 485 483 500 487 476 526 449 363 466 478 465 479 482 549 470 506 481 494 492 448 492 447 598 507 478 483 492 485 463 478 487 338 513 486 483 492 510 517
%% Cell type:markdown id: tags:
Labels are either positive (1) or negative (-1):
%% Cell type:code id: tags:
``` python
! echo '../data/madelon.labels'; head -n 2 ../data/madelon.labels | nl -s '-) '
```
%% Output
../data/madelon.labels
1-) -1
2-) -1
%% Cell type:markdown id: tags:
## TASK 1: Implement `preprocess`
%% Cell type:markdown id: tags:
This step is for reading the dataset and for extracting features and labels. The "preprocess" function should return an $n \times d$ "features" array, and an $n \times 1$ "labels" array, where $n$ is the number of examples and $d$ is the number of features in the dataset. In cases where there is a big difference between the scales of features, we want to normalize the features to have values in the same range [0,1]. Since this is not the case with this dataset, we will not do normalization.
%% Cell type:code id: tags:
``` python
def preprocess(feature_file, label_file):
    '''
    Args:
        feature_file: str
            file containing features
        label_file: str
            file containing labels
    Returns:
        features: ndarray
            nxd features
        labels: ndarray
            nx1 labels
    '''
    # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter,
    # e.g. for comma-separated files use delimiter=',' argument.
    # TODO
    raise NotImplementedError
    return features, labels
```
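%% Cell type:markdown id: tags:
Below is a minimal sketch of one possible `preprocess`, assuming whitespace-delimited feature and label files like the "madelon" files shown above. The helper name `preprocess_sketch` is illustrative, not part of the assignment.
%% Cell type:code id: tags:
``` python
import numpy as np

def preprocess_sketch(feature_file, label_file):
    # np.genfromtxt handles whitespace-delimited files by default;
    # pass delimiter=',' for comma-separated files
    features = np.genfromtxt(feature_file)             # nxd array
    labels = np.genfromtxt(label_file).reshape(-1, 1)  # nx1 array
    return features, labels
```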
%% Cell type:markdown id: tags:
### Rubric:
* Correct features size +5, +2.5
* Correct labels size +5, +2.5
%% Cell type:markdown id: tags:
### Test `preprocess`
%% Cell type:code id: tags:
``` python
features, labels = preprocess(feature_file = ..., label_file = ...)
# TODO: Output the dimension of both features and labels.
```
%% Cell type:markdown id: tags:
## TASK 2: Implement `partition`
%% Cell type:markdown id: tags:
Next, you'll need to split your dataset into training, validation and test sets. The "partition" function should take as input the size of the whole dataset and randomly sample a proportion $t$ of the dataset as test partition and a proportion $v$ as validation partition. The remaining examples will be used as training data. For example, to keep 30% of the examples as test and 10% as validation, set $t=0.3$ and $v=0.1$. You should choose these values according to the size of the data available to you. The "partition" function should return the indices of the test and validation sets. These will be used to index into the whole dataset.
%% Cell type:code id: tags:
``` python
def partition(size, t, v = 0):
    '''
    Args:
        size: int
            number of examples in the whole dataset
        t: float
            proportion kept for test
        v: float
            proportion kept for validation
    Returns:
        test_indices: ndarray
            1D array containing test set indices
        val_indices: ndarray
            1D array containing validation set indices
    '''
    # np.random.permutation might come in handy. Do not sample with replacement!
    # Be sure not to use the same indices in test and validation sets!
    # use the first np.ceil(size*t) for test,
    # the following np.ceil(size*v) for validation set.
    # TODO
    raise NotImplementedError
    return test_indices, val_indices
```
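%% Cell type:markdown id: tags:
A minimal sketch of one way to implement `partition` with `np.random.permutation`, following the comments above: the first `ceil(size*t)` shuffled indices become the test set and the next `ceil(size*v)` the validation set. `partition_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def partition_sketch(size, t, v=0):
    permuted = np.random.permutation(size)          # shuffled indices, no replacement
    t_size = int(np.ceil(size * t))                 # number of test examples
    v_size = int(np.ceil(size * v))                 # number of validation examples
    test_indices = permuted[:t_size]
    val_indices = permuted[t_size:t_size + v_size]  # disjoint from test indices
    return test_indices, val_indices
```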
%% Cell type:markdown id: tags:
### Rubric:
* Correct length of test indices +5, +2.5
* Correct length of validation indices +5, +2.5
%% Cell type:markdown id: tags:
### Test `partition`
%% Cell type:code id: tags:
``` python
# TODO
# Pass the correct size argument (number of examples in the whole dataset)
test_indices, val_indices = partition(size=..., t = 0.3, v = 0.1)
# Output the size of both features and labels.
```
%% Cell type:markdown id: tags:
## TASK 3: Putting things together
%% Cell type:markdown id: tags:
The model definition is given below. We'll extend this class for different supervised classification algorithms. Specifically, we'll implement "fit" and "predict" methods for these algorithms. For this assignment, you are not asked to implement these methods. Run the cells below and make sure each piece of code fits together and works as expected.
%% Cell type:code id: tags:
``` python
class Model:
    # preprocessor_f and partition_f expect functions
    # use kwargs to pass arguments to preprocessor_f and partition_f
    # kwargs is a dictionary and should contain t, v, feature_file, label_file
    # e.g. {'t': 0.3, 'v': 0.1, 'feature_file': 'some_file_name', 'label_file': 'some_file_name'}
    def __init__(self, preprocessor_f, partition_f, **kwargs):
        self.features, self.labels = preprocessor_f(kwargs['feature_file'], kwargs['label_file'])
        self.size = len(self.labels) # number of examples in dataset
        self.feat_dim = self.features.shape[1] # number of features
        # partition returns test indices first, then validation indices
        self.test_indices, self.val_indices = partition_f(self.size, kwargs['t'], kwargs['v'])
        self.val_size = len(self.val_indices)
        self.test_size = len(self.test_indices)
        self.train_indices = np.delete(np.arange(self.size), np.append(self.test_indices, self.val_indices), 0)
        self.train_size = len(self.train_indices)

    def fit(self):
        raise NotImplementedError

    def predict(self, indices):
        raise NotImplementedError
```
%% Cell type:markdown id: tags:
### Rubric:
* Correct training size +5, +5
%% Cell type:markdown id: tags:
### Test `Model`
%% Cell type:markdown id: tags:
We will use a keyword arguments dictionary that conveniently passes arguments to functions that are themselves passed as arguments during object initialization. Please do not change these calls in this and the following assignments.
%% Cell type:code id: tags:
``` python
# TODO
# pass the correct arguments to preprocessor_f and partition_f
kwargs = {'t': 0.3, 'v': 0.1, 'feature_file': ..., 'label_file': ...}
my_model = Model(preprocessor_f=..., partition_f=..., **kwargs)
# Output size of the training partition
```
%% Cell type:markdown id: tags:
## TASK 4: Normalization
%% Cell type:markdown id: tags:
Modify the `preprocess` function so that the output features take values in the range [0, 1]. Initialize a new model with this function and check the values of the features.
%% Cell type:markdown id: tags:
### Rubric:
* Correct range for feature values +5, +10
%% Cell type:code id: tags:
``` python
# TODO
# args is a placeholder for the parameters of the function
# Args and Returns are as in "preprocess"
def normalized_preprocess(args=...):
    raise NotImplementedError
```
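%% Cell type:markdown id: tags:
A minimal sketch of the min-max scaling this task asks for, assuming each feature (column) is scaled independently into [0, 1]; `minmax_normalize` is an illustrative helper name.
%% Cell type:code id: tags:
``` python
import numpy as np

def minmax_normalize(features):
    # scale each column to [0, 1]; assumes no constant columns (col_max > col_min)
    col_min = features.min(axis=0)
    col_max = features.max(axis=0)
    return (features - col_min) / (col_max - col_min)
```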
%% Cell type:code id: tags:
``` python
# TODO
kwargs = {'t': 0.3, 'v': 0.1, 'feature_file': ..., 'label_file': ...}
my_model = Model(preprocessor_f=..., partition_f=..., **kwargs)
# Check that the range of each feature in the training set is in range [0, 1]
```
%% Cell type:markdown id: tags:
# JUPYTER NOTEBOOK TIPS
Each rectangular box is called a cell.
* Ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code; if it contains Markdown, it renders the text.
* Alt+ENTER evaluates the current cell and adds a new cell below it.
* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two **d**'s in a row) when the frame is blue.
%% Cell type:markdown id: tags:
# GRADING
You will be graded on parts that are marked with **TODO** comments. Read the comments in the code to make sure you don't miss any.
### Mandatory for 478 & 878:
|   | Tasks                      | 478 | 878 |
|---|----------------------------|-----|-----|
| 1 | Implement `preprocess`     | 10  | 5   |
| 2 | Implement `partition`      | 10  | 5   |
| 3 | Putting the model together | 5   | 5   |
### Mandatory for 878, bonus for 478
|   | Tasks                     | 478 | 878 |
|---|---------------------------|-----|-----|
| 4 | Implement `normalization` | 5   | 10  |
Points are broken down further below in Rubric sections. The **first** score is for 478, the **second** is for 878 students. There are a total of 25 points in this assignment and an extra 5 bonus points for 478 students.
%% Cell type:markdown id: tags:
# YOUR GRADE
### Group members: *Fill here*
| | Tasks | Points|
|---|----------------------------|-----|
| 1 | Implement `preprocess` | |
| 2 | Implement `partition` | |
| 3 | Putting the model together | |
| 4 | Implement `normalization`  |     |
%% Cell type:markdown id: tags:
# Supervised Learning Model Skeleton
We'll use this skeleton for implementing different supervised learning algorithms. For this first assignment, we'll read and partition the [**madelon** dataset](http://archive.ics.uci.edu/ml/datasets/madelon). Features and labels for the first two examples are listed below. Please complete the **preprocess** and **partition** functions.
%% Cell type:markdown id: tags:
The 500 features in the **madelon** dataset have integer values:
%% Cell type:code id: tags:
``` python
! echo '../data/madelon.data'; head -n 2 ../data/madelon.data | nl -s '-) '
```
%% Output
../data/madelon.data
1-) 485 477 537 479 452 471 491 476 475 473 455 500 456 507 478 491 447 422 480 482 515 482 464 484 477 496 509 491 459 482 483 505 508 458 509 517 479 487 473 472 474 531 485 508 517 489 507 515 440 465 550 532 450 483 460 469 507 485 479 458 516 480 460 479 648 480 561 481 474 474 544 484 490 451 494 480 486 459 521 500 466 457 494 492 488 497 477 461 473 464 476 471 481 507 474 500 481 536 464 501 479 480 483 462 470 181 510 470 431 482 496 481 469 539 491 482 481 476 533 495 474 485 479 495 465 541 493 488 452 481 491 501 477 479 503 529 540 504 482 463 477 530 508 488 488 474 479 506 478 511 501 474 483 575 478 482 461 480 543 415 527 477 487 486 511 474 477 482 476 516 466 492 561 479 472 457 497 475 452 491 477 454 461 472 481 490 526 490 459 478 461 516 511 544 519 487 485 475 477 476 478 470 493 581 484 476 521 474 492 459 487 504 464 485 478 465 603 475 481 491 555 424 528 511 384 525 459 478 477 539 479 508 471 517 482 518 473 478 506 476 507 434 466 480 547 518 516 476 492 454 463 497 477 531 472 495 532 496 492 480 480 479 517 470 470 500 468 477 486 553 490 499 450 469 466 479 476 401 491 551 477 517 492 475 537 516 472 451 484 471 469 523 496 482 458 487 477 457 458 493 458 517 478 482 474 517 482 488 490 485 440 455 464 531 483 467 494 488 414 491 494 497 501 476 481 485 478 476 491 492 523 492 476 464 496 473 658 507 628 484 468 448 502 618 438 486 496 535 452 497 490 485 504 477 481 473 517 476 479 483 482 458 464 466 473 482 497 479 497 495 489 483 500 490 479 471 468 496 419 513 475 471 514 479 480 486 480 477 494 454 480 539 477 441 482 461 484 510 475 485 480 474 474 442 477 502 402 478 504 476 484 475 488 486 524 506 480 451 512 498 478 485 495 476 496 485 496 485 486 482 505 528 496 533 504 512 474 646 526 485 541 487 568 492 467 479 483 479 546 476 457 463 517 471 482 630 481 494 440 509 507 512 496 488 462 498 480 511 500 437 537 470 515 476 467 401 485 499 495 490 508 463 487 531 515 476 482 463 467 479 477 481 477 485 511 485 481 479 475 496
2-) 483 458 460 487 587 475 526 479 485 469 434 483 465 503 472 478 469 518 495 491 478 530 462 494 549 469 516 487 475 486 478 514 542 406 469 452 483 498 480 476 474 504 478 493 472 461 521 521 499 458 466 519 487 485 489 485 551 516 435 487 525 481 529 486 488 513 415 463 481 481 491 504 496 433 475 416 481 482 493 536 483 416 553 460 554 447 477 499 470 527 476 480 507 522 474 485 478 479 468 397 482 469 477 476 553 431 489 447 535 487 488 557 485 515 484 497 479 494 436 470 477 468 480 587 503 429 496 502 473 485 522 484 481 486 519 455 442 499 470 483 508 510 481 494 483 473 481 510 480 447 538 497 475 404 479 519 486 492 520 519 500 482 486 487 533 487 476 480 475 459 470 522 489 477 447 519 484 472 458 510 529 539 456 478 490 509 481 524 530 478 495 507 459 467 494 470 480 491 476 503 485 475 508 488 495 477 507 482 447 482 483 455 485 474 478 579 540 484 508 480 492 517 490 547 510 465 495 477 475 497 477 442 489 507 466 504 493 471 478 467 530 551 476 470 575 477 510 486 473 504 451 450 477 506 480 506 575 502 486 489 485 479 488 524 465 516 443 503 517 498 482 467 454 407 484 479 475 498 514 492 477 435 491 475 503 480 506 512 482 477 504 527 454 483 458 473 484 542 469 459 462 503 477 492 469 467 475 483 491 464 466 475 477 502 483 506 474 494 469 524 483 434 488 463 495 483 468 481 493 489 538 469 477 480 460 495 469 469 528 544 497 497 462 478 494 481 493 461 482 483 471 422 493 511 471 497 523 476 462 453 471 502 475 536 481 389 491 464 500 553 467 497 489 486 490 540 487 488 526 477 480 462 523 483 488 475 485 479 492 452 479 441 475 442 476 475 484 500 570 482 481 428 477 456 477 546 502 477 516 467 512 469 498 501 503 539 493 505 543 556 486 483 514 476 457 507 475 448 479 481 486 500 489 442 509 479 500 517 489 488 494 496 463 460 472 478 457 487 420 463 484 474 459 311 479 582 480 495 538 487 537 488 485 483 500 487 476 526 449 363 466 478 465 479 482 549 470 506 481 494 492 448 492 447 598 507 478 483 492 485 463 478 487 338 513 486 483 492 510 517
%% Cell type:markdown id: tags:
Labels are either positive (1) or negative (-1):
%% Cell type:code id: tags:
``` python
! echo '../data/madelon.labels'; head -n 2 ../data/madelon.labels | nl -s '-) '
```
%% Output
../data/madelon.labels
1-) -1
2-) -1
%% Cell type:markdown id: tags:
## TASK 1: Implement `preprocess`
%% Cell type:markdown id: tags:
This step is for reading the dataset and for extracting features and labels. The **preprocess** function should return an *n x d* **features** array, and an *n x 1* **labels** array, where *n* is the number of examples and *d* is the number of features in the dataset. In cases where there is a big difference between the scales of features, we want to normalize the features to have values in the same range [0,1]. Since this is not the case with this dataset, we will not do normalization.
%% Cell type:code id: tags:
``` python
def preprocess(feature_file, label_file):
    '''
    Args:
        feature_file: str
            file containing features
        label_file: str
            file containing labels
    Returns:
        features: ndarray
            nxd features
        labels: ndarray
            nx1 labels
    '''
    # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter,
    # e.g. for comma-separated files use delimiter=',' argument.
    # TODO
    raise NotImplementedError
    return features, labels
```
%% Cell type:markdown id: tags:
### Rubric:
* Correct features size +5, +2.5
* Correct labels size +5, +2.5
%% Cell type:markdown id: tags:
### Test `preprocess`
%% Cell type:code id: tags:
``` python
features, labels = preprocess(feature_file = ..., label_file = ...)
# TODO: Output the dimension of both features and labels.
```
%% Cell type:markdown id: tags:
## TASK 2: Implement `partition`
%% Cell type:markdown id: tags:
Next, you'll need to split your dataset into training, validation and test sets. The **partition** function should take as input the size of the whole dataset and randomly sample a proportion *t* of the dataset indices for the test partition and a proportion *v* for the validation partition. The remaining indices will be used for the training data. For example, to keep 30% of the examples as test and 10% as validation, set *t* = 0.3 and *v* = 0.1. You should choose these values according to the size of the data available to you. The **partition** function should return the indices of the training, validation and test sets. These will be used to index into the whole dataset.
%% Cell type:code id: tags:
``` python
def partition(size, t, v = 0):
    '''
    Args:
        size: int
            number of examples in the whole dataset
        t: float
            proportion kept for test
        v: float
            proportion kept for validation
    Returns:
        test_indices: ndarray
            1D array containing test set indices
        val_indices: ndarray
            1D array containing validation set indices
        train_indices: ndarray
            1D array containing train set indices
    '''
    # np.random.permutation might come in handy. Do not sample with replacement!
    # Be sure not to use the same indices in test and validation sets!
    # use the first np.ceil(size*t) for test,
    # the following np.ceil(size*v) for validation set.
    # TODO
    raise NotImplementedError
    return test_indices, val_indices, train_indices
```
%% Cell type:markdown id: tags:
### Rubric:
* Correct length of test indices +5, +2.5
* Correct length of validation indices +5, +2.5
%% Cell type:markdown id: tags:
### Test `partition`
%% Cell type:code id: tags:
``` python
# TODO
# Pass the correct size argument (number of examples in the whole dataset)
test_indices, val_indices, train_indices = partition(size=..., t = 0.3, v = 0.1)
# Output the length of the test and validation indices.
```
%% Cell type:markdown id: tags:
## TASK 3: Putting things together
%% Cell type:markdown id: tags:
The model definition is given below. We'll extend this class for different supervised classification algorithms. Specifically, we'll implement **fit** and **predict** methods for these algorithms. For this assignment, you are not asked to implement these methods. Run the cells below and make sure each piece of code fits together and works as expected.
%% Cell type:code id: tags:
``` python
class Model:
    def fit(self, training_features, training_labels):
        print('There are {} data points in training partition with {} features.'.format(
            training_features.shape[0], training_features.shape[1]))
        return

    def predict(self, test_points):
        return
```
%% Cell type:markdown id: tags:
### Rubric:
* Correct training size +5, +5
%% Cell type:markdown id: tags:
### Test `Model`
%% Cell type:markdown id: tags:
Initialize the model and call the fit method with the training features and labels.
%% Cell type:code id: tags:
``` python
# TODO
# initialize model
my_model = Model()
# obtain features and labels from files
# partition the data set
# pass the training features and labels to the fit method
```
%% Cell type:markdown id: tags:
## TASK 4: Normalization
%% Cell type:markdown id: tags:
Implement a `normalization` function such that the output features take values in the range [0, 1]. Check that the values of the features are in [0, 1].
%% Cell type:markdown id: tags:
### Rubric:
* Correct range for feature values +5, +10
%% Cell type:markdown id: tags:
### Test Normalization
%% Cell type:code id: tags:
``` python
# TODO
def normalization(raw_features):
    '''
    Args:
        raw_features: ndarray
            nxd array containing unnormalized features
    Returns:
        features: ndarray
            nxd array containing normalized features
    '''
    raise NotImplementedError
    return features
```
%% Cell type:code id: tags:
``` python
# TODO
features = normalization(features)
# Check that the range of each feature in the training set is in range [0, 1]
```
%% Cell type:markdown id: tags:
# k-Nearest Neighbor
%% Cell type:markdown id: tags:
You can use numpy for array operations and matplotlib for plotting in this assignment. Please do not add other libraries.
%% Cell type:code id: tags:
``` python
import numpy as np
import matplotlib.pyplot as plt
```
%% Cell type:markdown id: tags:
The following code makes the Model class and relevant functions available from model.ipynb.
%% Cell type:code id: tags:
``` python
%run 'model-Solution.ipynb'
```
%% Cell type:markdown id: tags:
The choice of distance metric plays an important role in the performance of kNN. Let's start by implementing the "distance" function below. It should take two data points and the name of the metric, and return a scalar value.
%% Cell type:code id: tags:
``` python
def distance(x, y, metric):
    '''
    x: a 1xd array
    y: a 1xd array
    metric: Euclidean, Hamming, etc.
    '''
    if metric == 'Euclidean':
        dist = np.sqrt(np.sum(np.square(x - y)))
    return dist # scalar distance btw x and y
```
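%% Cell type:markdown id: tags:
A quick sanity check of `distance` on a 3-4-5 right triangle (illustrative values; assumes the cell above has been run):
%% Cell type:code id: tags:
``` python
import numpy as np

x = np.array([0.0, 0.0])
y = np.array([3.0, 4.0])
print(distance(x, y, 'Euclidean'))  # expect 5.0
```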
%% Cell type:markdown id: tags:
We can implement our kNN classifier now. The kNN class inherits from the Model class. Implement the "fit" and "predict" methods, using the "distance" function you defined above. "fit" takes $k$ as an argument. "predict" takes as input the indices of the test points and outputs, for each one, the proportion of positive labels among its $k$ nearest neighbors.
%% Cell type:code id: tags:
``` python
class kNN(Model):
    def fit(self, k, distance_f, **kwargs):
        self.k = k
        self.distance_f = distance_f
        self.distance_metric = kwargs['metric']
        return

    # vary the threshold value for ROC analysis
    def predict(self, test_indices):
        chosen_labels = []
        for test_point in self.features[test_indices]:
            distances = []
            labels = []
            for index in self.training_indices:
                dist = self.distance_f(self.features[index], test_point, self.distance_metric)
                distances.append(dist)
                labels.append(self.labels[index])
            # take the k smallest distances, i.e. the k nearest neighbors
            a_order = np.argsort(distances)
            tmp_labels = list(np.array(labels)[a_order[:self.k]])
            b = tmp_labels.count(1)
            chosen_labels.append(b / self.k)
        # return, for each test point, the proportion of its k nearest
        # neighbors that carry the positive label (thresholded later)
        return np.array(chosen_labels)
```
%% Cell type:markdown id: tags:
It's time to build and evaluate our model now. Remember you need to provide values for the $p$ and $v$ parameters of the "partition" function and for $file\_path$ in the "preprocess" function.
%% Cell type:code id: tags:
``` python
# populate the keyword arguments dictionary kwargs
kwargs = {'p': 0.3, 'v': 0.1, 'seed': 123, 'file_path': 'madelon_train'}
# initialize the model
my_model = kNN(preprocessor_f=preprocess, partition_f=partition, **kwargs)
```
%% Cell type:markdown id: tags:
Assign a value to $k$ and fit the kNN model. You do not need to change the value of the $threshold$ parameter yet.
%% Cell type:code id: tags:
``` python
kwargs_f = {'metric': 'Euclidean'}
my_model.fit(k = 10, distance_f=distance, **kwargs_f)
```
%% Cell type:markdown id: tags:
Evaluate your model on the test data and report your accuracy. Also, calculate and report the confidence interval on the generalization error estimate.
%% Cell type:code id: tags:
``` python
final_labels = my_model.predict(my_model.test_indices)
```
%% Cell type:markdown id: tags:
Now that we have the true labels and the predicted ones from our model, we can build a confusion matrix and see how accurate our model is. Implement the "conf_matrix" function that takes as input an array of true labels ($true$) and an array of predicted labels ($pred$). It should output a numpy.ndarray.
%% Cell type:code id: tags:
``` python
# You should see array([ 196, 106, 193, 105]) with seed 123
conf_matrix(my_model.labels[my_model.test_indices], final_labels, threshold= 0.5)
```
%% Output
array([196, 106, 193, 105])
%% Cell type:markdown id: tags:
ROC curves are a good way to visualize sensitivity vs. 1-specificity for varying cutoff points. Now, implement a "ROC" function that predicts the labels of the test set examples using different $threshold$ values and plot the ROC curve. "ROC" takes a list containing different $threshold$ parameter values to try and returns a (sensitivity, 1-specificity) pair for each $threshold$ value.
%% Cell type:code id: tags:
``` python
def ROC(true, pred, value_list):
    '''
    true: nx1 array of true labels for test set
    pred: nx1 array of predicted labels for test set
    value_list: array containing different threshold values
    Calculate sensitivity and 1-specificity for each point in value_list
    Return two nx1 arrays: sens (for sensitivities) and spec_ (for 1-specificities)
    '''
    return sens, spec_
```
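%% Cell type:markdown id: tags:
A minimal sketch of one way to fill in `ROC`, assuming `pred` holds the positive-class proportions returned by kNN's `predict`, `conf_matrix` returns `[tp, tn, fp, fn]` as in this notebook, and both classes occur in `true`. `ROC_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def ROC_sketch(true, pred, value_list):
    sens, spec_ = [], []
    for threshold in value_list:
        tp, tn, fp, fn = conf_matrix(true, pred, threshold=threshold)
        sens.append(tp / (tp + fn))   # sensitivity = TP / (TP + FN)
        spec_.append(fp / (fp + tn))  # 1 - specificity = FP / (FP + TN)
    return np.array(sens), np.array(spec_)
```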
%% Cell type:markdown id: tags:
We can finally create the confusion matrix and plot the ROC curve for our kNN classifier.
%% Cell type:code id: tags:
``` python
# confusion matrix
conf_matrix(true_classes, predicted_classes)
```
%% Cell type:code id: tags:
``` python
# ROC curve: 1-specificity on the x-axis, sensitivity on the y-axis
roc_sens, roc_spec_ = ROC(true_classes, predicted_classes, np.arange(0.1, 1.0, 0.1))
plt.plot(roc_spec_, roc_sens)
plt.show()
```
%% Cell type:markdown id: tags:
# JUPYTER NOTEBOOK TIPS
Each rectangular box is called a cell.
* Ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code; if it contains Markdown, it renders the text.
* Alt+ENTER evaluates the current cell and adds a new cell below it.
* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two "d"s in a row) when the frame is blue.
%% Cell type:markdown id: tags:
# Supervised Learning Model Skeleton
We'll use this skeleton for implementing different supervised learning algorithms.
%% Cell type:code id: tags:
``` python
class Model:
    def fit(self):
        raise NotImplementedError

    def predict(self, test_points):
        raise NotImplementedError
```
%% Cell type:code id: tags:
``` python
def preprocess(feature_file, label_file):
    '''
    Args:
        feature_file: str
            file containing features
        label_file: str
            file containing labels
    Returns:
        features: ndarray
            nxd features
        labels: ndarray
            nx1 labels
    '''
    # read in features and labels
    return features, labels
```
%% Cell type:code id: tags:
``` python
def partition(size, t, v = 0):
    '''
    Args:
        size: int
            number of examples in the whole dataset
        t: float
            proportion kept for test
        v: float
            proportion kept for validation
    Returns:
        test_indices: ndarray
            1D array containing test set indices
        val_indices: ndarray
            1D array containing validation set indices
        train_indices: ndarray
            1D array containing training set indices
    '''
    # number of test and validation examples
    return test_indices, val_indices, train_indices
```
%% Cell type:markdown id: tags:
## TASK 1: Implement `distance` function
%% Cell type:markdown id: tags:
"distance" function will be used in calculating cost of *k*-NN. It should take two data points and the name of the metric and return a scalar value.
%% Cell type:code id: tags:
``` python
#TODO: Programming Assignment 1
def distance(x, y, metric):
    '''
    Args:
        x: ndarray
            1D array containing coordinates for a point
        y: ndarray
            1D array containing coordinates for a point
        metric: str
            Euclidean, Manhattan
    Returns:
        dist: float
    '''
    if metric == 'Euclidean':
        raise NotImplementedError
    elif metric == 'Manhattan':
        raise NotImplementedError
    else:
        raise ValueError('{} is not a valid metric.'.format(metric))
    return dist # scalar distance btw x and y
```
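%% Cell type:markdown id: tags:
A minimal sketch of the two metrics, matching the Euclidean solution shown earlier in this document; the Manhattan branch is the standard sum of absolute differences. `distance_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def distance_sketch(x, y, metric):
    if metric == 'Euclidean':
        dist = np.sqrt(np.sum(np.square(x - y)))  # L2 distance
    elif metric == 'Manhattan':
        dist = np.sum(np.abs(x - y))              # L1 distance
    else:
        raise ValueError('{} is not a valid metric.'.format(metric))
    return dist
```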
%% Cell type:markdown id: tags:
## General supervised learning performance related functions
%% Cell type:markdown id: tags:
Implement the "conf_matrix" function that takes as input an array of true labels (*true*) and an array of predicted labels (*pred*). It should output a numpy.ndarray.
%% Cell type:code id: tags:
``` python
# TODO: Programming Assignment 1
def conf_matrix(true, pred, n_classes):
    '''
    Args:
        true: ndarray
            nx1 array of true labels for test set
        pred: ndarray
            nx1 array of predicted labels for test set
        n_classes: int
    Returns:
        result: ndarray
            n_classes x n_classes confusion matrix
    '''
    raise NotImplementedError
    # returns the confusion matrix as numpy.ndarray
    return result
```
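%% Cell type:markdown id: tags:
A minimal sketch of an `n_classes` x `n_classes` confusion matrix, assuming integer-coded labels 0..n_classes-1 and the convention rows = true class, columns = predicted class (the assignment does not fix an orientation). `conf_matrix_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def conf_matrix_sketch(true, pred, n_classes):
    result = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true, pred):
        result[int(t), int(p)] += 1  # count each (true, predicted) pair
    return result
```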
%% Cell type:markdown id: tags:
ROC curves are a good way to visualize sensitivity vs. 1-specificity for varying cutoff points. "ROC" takes a list containing different *threshold* parameter values to try and returns two arrays: one where each entry is the sensitivity at a given threshold and the other where entries are 1-specificities.
%% Cell type:code id: tags:
``` python
# TODO: Programming Assignment 1
def ROC(true_labels, preds, value_list):
    '''
    Args:
        true_labels: ndarray
            1D array containing true labels
        preds: ndarray
            1D array containing thresholded values (e.g. proportion of neighbors in kNN)
        value_list: ndarray
            1D array containing different threshold values
    Returns:
        sens: ndarray
            1D array containing sensitivities
        spec_: ndarray
            1D array containing 1-specificities
    '''
    # calculate sensitivity, 1-specificity
    # return two arrays
    raise NotImplementedError
    return sens, spec_
```
%% Cell type:markdown id: tags:
# Linear Regression & Naive Bayes
We'll implement the linear regression & Naive Bayes algorithms for this assignment. Please modify "preprocess" in this notebook and "partition" in "model.ipynb" to suit your datasets for this assignment. In the linear regression part of this assignment, we have a small dataset available to us. We won't have examples to spare for a validation set; instead, we'll use cross-validation to tune hyperparameters. In our Naive Bayes implementation, we will not use a validation set or cross-validation.
### Assignment Goals:
In this assignment, we will:
* implement linear regression
* use gradient descent for optimization
* use residuals to decide if we need a polynomial model
* change our model to quadratic/cubic regression and use cross-validation to find the "best" polynomial degree
* implement regularization techniques
* $l_1$/$l_2$ regularization
* use cross-validation to find a good regularization parameter $\lambda$
* implement Naive Bayes
* address sparse data problem with **pseudocounts** (**$m$-estimate**)
%% Cell type:markdown id: tags:
You can use numpy for array operations and matplotlib for plotting in this assignment. Please do not add other libraries.
%% Cell type:code id: tags:
``` python
import numpy as np
import matplotlib.pyplot as plt
```
%% Cell type:markdown id: tags:
The following code makes the Model class and relevant functions available from "model.ipynb".
%% Cell type:code id: tags:
``` python
%run 'model.ipynb'
```
%% Cell type:markdown id: tags:
We'll implement the "preprocess" function and "kfold" function for $k$-fold cross-validation in "model.ipynb". 5 and 10 are commonly used values for $k$. You can use either one of them.
%% Cell type:code id: tags:
``` python
def preprocess(file_path):
    '''
    file_path: where to read the dataset from
    Returns:
        features: ndarray
            nxd array containing `float` feature values
        labels: ndarray
            1D array containing `float` label
    '''
    # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter,
    # e.g. for comma-separated files use delimiter=',' argument.
    raise NotImplementedError
    return features, labels
```
%% Cell type:markdown id: tags:
We'll need to use mean squared error (mse) for linear regression. Next, implement the "mse" function, which takes predicted and true y values and returns the mse between them.
%% Cell type:code id: tags:
``` python
def mse(y_pred, y_true):
    '''
    Args:
        y_pred: ndarray
            1D array containing data with `float` type. Values predicted by our method
        y_true: ndarray
            1D array containing data with `float` type. True y values
    Returns:
        cost: float
            A single value. Mean squared error between y_pred and y_true.
    '''
    raise NotImplementedError
    return cost
```
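%% Cell type:markdown id: tags:
A minimal sketch; mean squared error is just the average of the squared residuals. `mse_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def mse_sketch(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)  # average squared residual
```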
%% Cell type:markdown id: tags:
We can define our linear_regression model class now. Implement the "fit" and "predict" methods. Keep the default values for now; later we'll change $polynomial\_degree$. If your "kfold" implementation works as it should, each call to "fit" and "predict" will move on to the next fold.
%% Cell type:code id: tags:
``` python
class linear_regression(Model):
    def __init__(self, preprocessor_f, partition_f, **kwargs):
        super().__init__(preprocessor_f, partition_f, **kwargs)
        # k_fold arrives through kwargs (see the initialization cell below)
        if kwargs.get('k_fold', False):
            self.data_dict = kfold(self.train_indices, k = kwargs['k'])
        # counter for train fold
        self.i = 0
        # counter for test fold
        self.j = 0

    # You can disregard polynomial_degree and regularizer in your first pass
    def fit(self, learning_rate = 0.001, epochs = 1000, regularizer=None, polynomial_degree=1, **kwargs):
        train_features = self.train_features[self.data_dict[self.i]]
        train_labels = self.train_labels[self.data_dict[self.i]]
        # initialize theta_curr randomly
        # for each epoch
        # compute model predictions for training examples
        y_hat = None
        if regularizer is None:
            # use mse function to find the cost
            cost = None
            # calculate gradients wrt theta
            grad_theta = None
            # update theta
            theta_curr = None
            raise NotImplementedError
        else:
            # take regularization into account
            raise NotImplementedError
        # update the model parameters to be used in predict method
        self.theta = theta_curr
        # increment counter for next fold
        self.i += 1

    def predict(self, indices):
        # obtain test features for current fold
        test_features = self.train_features[self.data_dict[self.j]]
        raise NotImplementedError
        # increment counter for next fold
        self.j += 1
        return y_hat
```
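%% Cell type:markdown id: tags:
A minimal sketch of the gradient-descent core that "fit" outlines, assuming a plain least-squares model $y_{hat} = X\theta$ with the mse cost; the names are illustrative, and the class's own attributes and fold indexing should replace them in a real implementation.
%% Cell type:code id: tags:
``` python
import numpy as np

def gd_sketch(X, y, learning_rate=0.001, epochs=1000):
    n, d = X.shape
    theta_curr = 0.01 * np.random.randn(d)      # small random initialization
    for _ in range(epochs):
        y_hat = X @ theta_curr                  # predictions for training examples
        grad_theta = 2 / n * X.T @ (y_hat - y)  # gradient of mse wrt theta
        theta_curr = theta_curr - learning_rate * grad_theta  # descent step
    return theta_curr
```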
%% Cell type:code id: tags:
``` python
# populate the keyword arguments dictionary kwargs
# p: proportion for test data
# k: parameter for k-fold crossvalidation
kwargs = {'p': 0.3, 'v': 0.1, 'file_path': 'madelon', 'k': 1}
# initialize the model
my_model = linear_regression(preprocessor_f=preprocess, partition_f=partition, k_fold=True, **kwargs)
```
%% Cell type:code id: tags:
``` python
# use fit_kwargs to pass arguments to regularization function
# fit_kwargs is empty for now since we are not applying
# regularization yet
fit_kwargs = {}
my_model.fit(**fit_kwargs)
```
%% Cell type:markdown id: tags:
Residuals are the differences between the predicted value $y_{hat}$ and the true value $y$ for each example. Predict $y_{hat}$ for the validation set.
%% Cell type:code id: tags:
``` python
y_hat_val = my_model.predict(my_model.features[my_model.val_indices])
residuals = my_model.labels[my_model.val_indices] - y_hat_val
plt.plot(residuals)
plt.show()
```
%% Cell type:markdown id: tags:
If the data is better suited for quadratic/cubic regression, regions of positive and negative residuals will alternate in the plot. Regardless, modify "fit" and "predict" in the class definition to raise the feature values to $polynomial\_degree$. You can directly make the modification in the above definition; do not repeat the class here. Use the validation set to find the degree of polynomial that results in the lowest _mse_.
%% Cell type:code id: tags:
``` python
kwargs = {'p': 0.3, 'file_path': 'madelon', 'k': 5}
# initialize the model
my_model = linear_regression(preprocessor_f=preprocess, partition_f=partition, k_fold=True, **kwargs)
fit_kwargs = {}
# calculate mse for each of the linear, quadratic and cubic models
# and append to mses_for_models
mses_for_models = []
for i in range(1, 4):
    k_fold_mse = 0
    for k in range(5):
        my_model.fit(polynomial_degree = i, **fit_kwargs)
        pred = my_model.predict(my_model.features[my_model.val_indices], fold = k)
        k_fold_mse += mse(pred, my_model.labels[my_model.val_indices])
    # average over the 5 folds
    mses_for_models.append(k_fold_mse / 5)
```
%% Cell type:markdown id: tags:
Define "regularization" function which implements $l_1$ and $l_2$ regularization. You'll use this function in "fit" method of "linear_regression" class.
%% Cell type:code id: tags:
``` python
def regularization(weights, method):
    '''
    Args:
        weights: ndarray
            1D array with `float` entries
        method: str
    Returns:
        value: float
            A single value. Regularization term that will be used in cost function in fit.
    '''
    if method == "l1":
        value = None
        raise NotImplementedError
    elif method == "l2":
        value = None
        raise NotImplementedError
    return value
```
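%% Cell type:markdown id: tags:
A minimal sketch of the two penalty terms: $l_1$ is the sum of absolute weights and $l_2$ the sum of squared weights. `regularization_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def regularization_sketch(weights, method):
    if method == "l1":
        value = np.sum(np.abs(weights))  # l1 penalty: sum of |w_i|
    elif method == "l2":
        value = np.sum(weights ** 2)     # l2 penalty: sum of w_i^2
    else:
        raise ValueError('{} is not a valid method.'.format(method))
    return value
```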
%% Cell type:markdown id: tags:
Using cross-validation and the value of $polynomial\_degree$ you found above, try different values of $\lambda$ to find a good value that results in low _mse_. Report the best values you found for the hyperparameters and the resulting _mse_.
%% Cell type:markdown id: tags:
## Naive Bayes Spam Classifier
This part is independent of the above part. We will use the Enron spam/ham dataset. You will need to decompress the provided "enron.tar.gz" folder. The two subfolders contain spam and ham emails.
The features for the Naive Bayes algorithm will be word counts. The number of features will be equal to the number of unique words seen in the whole dataset. The "preprocess" function will be more involved this time. You'll need to remove punctuation marks (you may find string.punctuation useful), tokenize the text into words (remember to lowercase everything) and count the words.
%% Cell type:code id: tags:
``` python
def preprocess_bayes(folder_path):
    '''
    Args:
        folder_path: str
            Where to read the dataset from.
    Returns:
        features: ndarray
            nxd array with n emails, d words. features_ij is the count of word_j in email_i
        labels: ndarray
            1D array of labels (1: spam, 0: ham)
    '''
    # remove punctuation marks
    # tokenize, lowercase
    # count number of words in each email
    raise NotImplementedError
    return features, labels
```
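%% Cell type:markdown id: tags:
A minimal sketch of the per-email cleanup the comments describe, using string.punctuation and a Counter; assembling the full n x d count matrix over a shared vocabulary is left to the implementation. `count_words_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import string
from collections import Counter

def count_words_sketch(email_text):
    # strip punctuation, lowercase, then split into words
    table = str.maketrans('', '', string.punctuation)
    words = email_text.translate(table).lower().split()
    return Counter(words)  # word -> count for this email
```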
%% Cell type:markdown id: tags:
Implement the "fit" and "predict" methods for Naive Bayes. Use $m$-estimate to address missing attribute values (also called **Laplace smoothing** when $m$ = 1). In general, $m$ values should be small. We'll use $m$ = 1.
%% Cell type:code id: tags:
``` python
class naive_bayes(Model):
    def __init__(self, preprocessor_f, partition_f, **kwargs):
        super().__init__(preprocessor_f, partition_f, **kwargs)

    def fit(self, m, **kwargs):
        self.ham_word_counts = np.zeros(self.feat_dim)
        self.spam_word_counts = np.zeros(self.feat_dim)
        # find class prior probabilities
        self.ham_prior = None
        self.spam_prior = None
        # find the number of words (counting repeats) summed across all emails in a class
        n = None
        # find the number of each word summed across all emails in a class
        # populate self.ham_word_counts and self.spam_word_counts
        # find the likelihood of word_i in each class
        # 1D ndarray
        self.ham_likelihood = None
        self.spam_likelihood = None

    def predict(self, indices):
        '''
        Returns:
            preds: ndarray
                1D binary array containing predicted labels
        '''
        raise NotImplementedError
        return preds
```
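%% Cell type:markdown id: tags:
A minimal sketch of the smoothed likelihoods, assuming `word_counts` sums each word over all emails of one class and the vocabulary size is `d`; with $m$ = 1 this reduces to Laplace smoothing. `likelihood_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def likelihood_sketch(word_counts, m=1):
    # word_counts: 1D array, count of each word across all emails of a class
    n = word_counts.sum()                   # total words in the class (with repeats)
    d = len(word_counts)                    # vocabulary size
    return (word_counts + m) / (n + m * d)  # smoothed estimate of P(word | class)
```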
%% Cell type:markdown id: tags:
We can fit our model and see how accurately it predicts spam emails now. We won't use a validation set or cross-validation this time.
%% Cell type:code id: tags:
``` python
# populate the keyword arguments dictionary kwargs
# p: proportion for test data
kwargs = {'p': 0.3, 'file_path': 'enron'}
# initialize the model
my_model = naive_bayes(preprocessor_f=preprocess_bayes, partition_f=partition, **kwargs)
```
%% Cell type:markdown id: tags:
We can use the "conf_matrix" function we defined before to see how error is distributed.
%% Cell type:code id: tags:
``` python
preds = my_model.predict(my_model.test_indices)
tp, tn, fp, fn = conf_matrix(true = my_model.labels[my_model.test_indices], pred = preds)
```
%% Cell type:markdown id: tags:
# JUPYTER NOTEBOOK TIPS
Each rectangular box is called a cell.
* ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code; if it contains Markdown, it renders the text.
* alt+ENTER evaluates the current cell and adds a new cell below it.
* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two "d"s in a row) when the frame is blue.
%% Cell type:markdown id: tags:
# Supervised Learning Model Skeleton
We'll use this skeleton for implementing different supervised learning algorithms.
%% Cell type:code id: tags:
``` python
class Model:
    def fit(self):
        raise NotImplementedError

    def predict(self, test_points):
        raise NotImplementedError
```
%% Cell type:code id: tags:
``` python
def preprocess(data_f, feature_names_f):
    '''
    data_f: where to read the dataset from
    feature_names_f: where to read the feature names from
    Returns:
        features: ndarray
            nxd array containing `float` feature values
        feature_names: ndarray
            1D array containing feature names
        target: ndarray
            1D array containing `float` target values
    '''
    # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter,
    # e.g. for comma-separated files use delimiter=',' argument.
    data = np.genfromtxt(data_f)
    features = data[:,:-1]
    target = data[:,-1]
    feature_names = np.genfromtxt(feature_names_f, dtype='unicode')
    return features, feature_names, target
```
%% Cell type:markdown id: tags:
In cases where data is not abundantly available, we resort to getting an error estimate from the average error on different splits of the dataset. In this case, every fold of data is used for testing and for training in turns, i.e. assuming we split our data into 3 folds, we'd
* train our model on fold-1+fold-2 and test on fold-3
* train our model on fold-1+fold-3 and test on fold-2
* train our model on fold-2+fold-3 and test on fold-1.
We'd use the average of the error we obtained in three runs as our error estimate.
Implement function "kfold" below.
%% Cell type:code id: tags:
``` python
# TODO: Programming Assignment 2
def kfold(size, k):
    '''
    Args:
        size: int
            number of examples in the dataset that you want to split into k
        k: int
            Number of desired splits in data. (Assume test set is already separated.)
    Returns:
        fold_dict: dict
            A dictionary with integer keys corresponding to folds. Values are
            (train_indices, val_indices) pairs:
                val_indices: ndarray
                    1/k of the indices, randomly chosen as the validation partition.
                train_indices: ndarray
                    Remaining 1-(1/k) of the indices.
            e.g. fold_dict = {0: (train_0_indices, val_0_indices),
                              1: (train_1_indices, val_1_indices),
                              2: (train_2_indices, val_2_indices)} for k = 3
    '''
    return fold_dict
```
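%% Cell type:markdown id: tags:
A minimal sketch of `kfold`, assuming indices 0..size-1 are shuffled once and each fold takes a different 1/k slice as validation (k >= 2); `kfold_sketch` is an illustrative name.
%% Cell type:code id: tags:
``` python
import numpy as np

def kfold_sketch(size, k):
    permuted = np.random.permutation(size)  # shuffle all indices once
    folds = np.array_split(permuted, k)     # k roughly equal parts
    fold_dict = {}
    for i in range(k):
        val_indices = folds[i]                                     # 1/k as validation
        train_indices = np.concatenate(folds[:i] + folds[i + 1:])  # the rest as training
        fold_dict[i] = (train_indices, val_indices)
    return fold_dict
```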
%% Cell type:code id: tags:
``` python
class Model:
    # set the preprocessing function, partition_function
    # use kwargs to pass arguments to preprocessor_f and partition_f
    # kwargs is a dictionary and should contain p, v and file_path
    # e.g. {'p': 0.3, 'v': 0.1, 'file_path': some_path}
    def __init__(self, preprocessor_f, partition_f, **kwargs):
        self.features, self.labels = preprocessor_f(kwargs['file_path'])
        self.size = len(self.labels) # number of examples in dataset
        self.feat_dim = self.features.shape[1] # number of features
        self.val_indices, self.test_indices = partition_f(self.size, kwargs['p'], kwargs['v'])
        self.val_size = len(self.val_indices)
        self.test_size = len(self.test_indices)
        self.train_indices = np.delete(np.arange(self.size), np.append(self.test_indices, self.val_indices), 0)
        self.train_size = len(self.train_indices)

    def fit(self):
        raise NotImplementedError

    def predict(self, indices):
        raise NotImplementedError
```
%% Cell type:markdown id: tags:
## General supervised learning related functions
### (To be implemented later when it is indicated in other notebooks)
%% Cell type:markdown id: tags:
Implement the "conf_matrix" function that takes as input an array of true labels ($true$) and an array of predicted labels ($pred$). It should output a numpy.ndarray.
Implement "mse" and regularization functions. They will be used in the fit method of linear regression.
%% Cell type:code id: tags:
``` python
# TODO: Programming Assignment 2
def mse(y_pred, y_true):
    '''
    Args:
        y_pred: ndarray
            1D array containing data with `float` type. Values predicted by our method
        y_true: ndarray
            1D array containing data with `float` type. True y values
    Returns:
        cost: float
            A single value. Mean squared error between y_pred and y_true.
    '''
    raise NotImplementedError
    return cost
```
%% Cell type:code id: tags:
``` python
# TODO: Programming Assignment 2
def regularization(weights, method):
    '''
    Args:
        weights: ndarray
            1D array with `float` entries
        method: str
    Returns:
        value: float
            A single value. Regularization term that will be used in cost function in fit.
    '''
    if method == "l1":
        value = None
    elif method == "l2":
        value = None
    raise NotImplementedError
    return value
```
%% Cell type:markdown id: tags:
# JUPYTER NOTEBOOK TIPS
Each rectangular box is called a cell.
* ctrl+ENTER evaluates the current cell; if it contains Python code, it runs the code; if it contains Markdown, it renders the text.
* alt+ENTER evaluates the current cell and adds a new cell below it.
* If you click to the left of a cell, you'll notice the frame changes color to blue. You can erase a cell by hitting 'dd' (that's two "d"s in a row) when the frame is blue.
%% Cell type:markdown id: tags:
# Supervised Learning Model Skeleton
We'll use this skeleton for implementing different supervised learning algorithms.
%% Cell type:code id: tags:
``` python
class Model:
    def fit(self):
        raise NotImplementedError

    def predict(self, test_points):
        raise NotImplementedError
```
%% Cell type:markdown id: tags:
## General supervised learning performance related functions
%% Cell type:markdown id: tags:
"conf_matrix" function that takes as input an array of true labels (*true*) and an array of predicted labels (*pred*).
%% Cell type:code id: tags:
``` python
def conf_matrix(true, pred):
    '''
    Args:
        true: ndarray
            nx1 array of true labels for test set
        pred: ndarray
            nx1 array of predicted labels for test set
    Returns:
        ndarray
    '''
    tp = tn = fp = fn = 0
    # calculate true positives (tp), true negatives (tn),
    # false positives (fp) and false negatives (fn)
    size = len(true)
    for i in range(size):
        if true[i] == 1:
            if pred[i] == 1:
                tp += 1
            else:
                fn += 1
        else:
            if pred[i] == 0:
                tn += 1
            else:
                fp += 1
    # returns the confusion matrix as numpy.ndarray
    return np.array([[tp, fp], [fn, tn]])
```
CRIM
ZN
INDUS
CHAS
NOX
RM
AGE
DIS
RAD
TAX
PTRATIO
B
LSTAT
%% Cell type:markdown id: tags:
You might need to preprocess your dataset depending on which dataset you are using. This step is for reading the dataset and for extracting features and labels. The "preprocess" function should return an $n \times d$ features array, and an $n \times 1$ labels array, where $n$ is the number of examples and $d$ is the number of features in the dataset.
%% Cell type:code id: tags:
``` python
def preprocess(file_path):
    '''
    file_path: where to read the dataset from
    returns nxd features, nx1 labels
    '''
    # You might find np.genfromtxt useful for reading in the file. Be careful with the file delimiter,
    # e.g. for comma-separated files use delimiter=',' argument.
    feature_path = file_path + '.data'
    label_path = file_path + '.labels'
    features = np.genfromtxt(feature_path)
    labels = np.genfromtxt(label_path)
    return features, labels
```
%% Cell type:markdown id: tags:
Next, you'll need to split your dataset into training, validation and test sets. The "partition" function should take as input the size of the whole dataset and randomly sample a proportion $p$ of the dataset as test partition and a proportion $v$ as validation partition. The remaining examples will be used as training data. For example, to keep 30% of the examples as test and 10% as validation, set $p=0.3$ and $v=0.1$. You should choose these values according to the size of the data available to you. The "partition" function should return the indices of the validation and test sets. These will be used to index into the whole dataset.
%% Cell type:code id: tags:
``` python
def partition(size, p, v, seed):
    '''
    size: number of examples in the whole dataset
    p: proportion kept for test
    v: proportion kept for validation
    '''
    # np.random.choice might come in handy. Do not sample with replacement!
    # Be sure to not use the same indices in test and validation sets!
    data_list = np.arange(size)
    p_size = int(np.ceil(size * p))
    v_size = int(np.ceil(size * v))
    np.random.seed(seed)
    permuted = np.random.permutation(data_list)
    test_indices = permuted[:p_size]
    # the next v_size indices, starting right after the test block,
    # form the validation set (disjoint from test)
    val_indices = permuted[p_size:p_size + v_size]
    # return two 1d arrays: one keeping validation set indices, the other keeping test set indices
    return val_indices, test_indices
```
%% Cell type:code id: tags:
``` python
class Model:
    # set the preprocessing function, partition_function
    # use kwargs to pass arguments to preprocessor_f and partition_f
    # kwargs is a dictionary and should contain p, v and file_path
    # e.g. {'p': 0.3, 'v': 0.1, 'file_path': some_path}
    def __init__(self, preprocessor_f, partition_f, distance_f=None, **kwargs):
        self.features, self.labels = preprocessor_f(kwargs['file_path'])
        self.size = len(self.labels)
        self.val_indices, self.test_indices = partition_f(self.size, kwargs['p'], kwargs['v'], kwargs['seed'])
        self.training_indices = np.delete(np.arange(self.size), np.append(self.test_indices, self.val_indices), 0)

    def fit(self):
        raise NotImplementedError

    def predict(self):
        raise NotImplementedError
```
%% Cell type:code id: tags:
``` python
def conf_matrix(true_l, pred, threshold):
    tp = tn = fp = fn = 0
    for i in range(len(true_l)):
        tmp = -1
        if pred[i] > threshold:
            tmp = 1
        if tmp == true_l[i]:
            if true_l[i] == 1:
                tp += 1
            else:
                tn += 1
        else:
            if true_l[i] == 1:
                fn += 1
            else:
                fp += 1
    # returns the confusion matrix as numpy.ndarray
    return np.array([tp, tn, fp, fn])
```