From a1483fa9dd0b320da146da2cdccdfb30403bf00e Mon Sep 17 00:00:00 2001
From: Zeynep Hakguder <zhakguder@cse.unl.edu>
Date: Thu, 31 May 2018 12:52:12 -0500
Subject: [PATCH] model for pa1

---
 model.ipynb => ProgrammingAssignment_1/model.ipynb | 14 --------------
 .../model_solution.ipynb                           | 14 --------------
 2 files changed, 28 deletions(-)
 rename model.ipynb => ProgrammingAssignment_1/model.ipynb (89%)
 rename model_solution.ipynb => ProgrammingAssignment_1/model_solution.ipynb (89%)

diff --git a/model.ipynb b/ProgrammingAssignment_1/model.ipynb
similarity index 89%
rename from model.ipynb
rename to ProgrammingAssignment_1/model.ipynb
index 735ca8c..11f5f31 100644
--- a/model.ipynb
+++ b/ProgrammingAssignment_1/model.ipynb
@@ -21,13 +21,6 @@
     "We'll use this skeleton for implementing different supervised learning algorithms. Please complete \"preprocess\" and \"partition\" methods below."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "This step reads the dataset and extracts features and labels. The \"preprocess\" function should return an *n x d* \"features\" array and an *n x 1* \"labels\" array, where *n* is the number of examples and *d* is the number of features in the dataset. When there is a large difference between the scales of features, we want to normalize the features so that their values fall in the same range [0,1]. Since this is not the case with this dataset, we will not normalize."
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 14,
@@ -59,13 +52,6 @@
     "    return features, labels"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Next, you'll need to split your dataset into training, validation and test sets. The \"partition\" function should take as input the size of the whole dataset and randomly sample a proportion *t* of the dataset as the test partition and a proportion *v* as the validation partition. The remainder will be used as training data. For example, to keep 30% of the examples as test and 10% as validation, set *t* = 0.3 and *v* = 0.1. You should choose these values according to the size of the data available to you. The \"partition\" function should return the indices of the training, validation and test sets. These will be used to index into the whole dataset."
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 1,
diff --git a/model_solution.ipynb b/ProgrammingAssignment_1/model_solution.ipynb
similarity index 89%
rename from model_solution.ipynb
rename to ProgrammingAssignment_1/model_solution.ipynb
index eae4fa4..8e1f78b 100644
--- a/model_solution.ipynb
+++ b/ProgrammingAssignment_1/model_solution.ipynb
@@ -21,13 +21,6 @@
     "We'll use this skeleton for implementing different supervised learning algorithms. Please complete \"preprocess\" and \"partition\" methods below."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "This step reads the dataset and extracts features and labels. The \"preprocess\" function should return an *n x d* \"features\" array and an *n x 1* \"labels\" array, where *n* is the number of examples and *d* is the number of features in the dataset. When there is a large difference between the scales of features, we want to normalize the features so that their values fall in the same range [0,1]. Since this is not the case with this dataset, we will not normalize."
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 14,
@@ -58,13 +51,6 @@
     "    return features, labels"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Next, you'll need to split your dataset into training, validation and test sets. The \"partition\" function should take as input the size of the whole dataset and randomly sample a proportion *t* of the dataset as the test partition and a proportion *v* as the validation partition. The remainder will be used as training data. For example, to keep 30% of the examples as test and 10% as validation, set *t* = 0.3 and *v* = 0.1. You should choose these values according to the size of the data available to you. The \"partition\" function should return the indices of the training, validation and test sets. These will be used to index into the whole dataset."
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 1,
-- 
GitLab
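
The "preprocess" step described in the deleted markdown cells (return an *n x d* "features" array and an *n x 1* "labels" array) might be sketched as follows. This is a minimal illustration, not the notebooks' actual implementation: the file format (comma-separated, label in the last column) and the function signature are assumptions.

```python
import numpy as np

def preprocess(path):
    # Hypothetical sketch of the "preprocess" step: the comma-separated
    # layout with the label in the last column is an assumption.
    data = np.loadtxt(path, delimiter=",")
    features = data[:, :-1]            # (n, d) feature matrix
    labels = data[:, -1:].astype(int)  # (n, 1) label column, kept 2-D
    return features, labels
```

If feature scales differed greatly, min-max normalization (`(x - x.min(0)) / (x.max(0) - x.min(0))`) could be applied to `features` here; the removed cell notes this is unnecessary for this dataset.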
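
Likewise, the "partition" step those cells describe (randomly sample a proportion *t* as test and *v* as validation, return index sets) could look like the sketch below; the signature and the `seed` parameter are assumptions for illustration.

```python
import numpy as np

def partition(size, t, v, seed=None):
    # Hypothetical sketch of the "partition" step: shuffle all indices,
    # carve off a proportion t for test and v for validation; the rest
    # is training data. Returns index arrays into the whole dataset.
    rng = np.random.default_rng(seed)
    indices = rng.permutation(size)
    n_test = int(size * t)
    n_val = int(size * v)
    test = indices[:n_test]
    val = indices[n_test:n_test + n_val]
    train = indices[n_test + n_val:]
    return train, val, test

# e.g. keep 30% of the examples as test and 10% as validation
train_idx, val_idx, test_idx = partition(100, t=0.3, v=0.1, seed=0)
```

Returning indices rather than copies of the data lets the same arrays be indexed lazily for each split, which matches the removed cell's note that the indices "will be used to index into the whole dataset".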