From d18f6597af1ce9606f1a854cf916778b9207523f Mon Sep 17 00:00:00 2001
From: Zeynep Hakguder <zhakguder@cse.unl.edu>
Date: Mon, 4 Jun 2018 14:12:19 +0000
Subject: [PATCH] Update GettingFamiliar.ipynb

---
 ProgrammingAssignment_0/GettingFamiliar.ipynb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ProgrammingAssignment_0/GettingFamiliar.ipynb b/ProgrammingAssignment_0/GettingFamiliar.ipynb
index 999b7b7..04f5b47 100644
--- a/ProgrammingAssignment_0/GettingFamiliar.ipynb
+++ b/ProgrammingAssignment_0/GettingFamiliar.ipynb
@@ -110,7 +110,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "This step is for reading the dataset and for extracting features and labels. The \"preprocess\" function should return an $n \\times d$ \"features\" array, and an $n \\times 1$ \"labels\" array, where $n$ is the number of examples and $d$ is the number of features in the dataset. In cases where there is a big difference between the scales of features, we want to normalize the features to have values in the same range [0,1]. Since this is not the case with this dataset, we will not do normalization."
+    "This step is for reading the dataset and for extracting features and labels. The \"preprocess\" function should return an *n x d* \"features\" array, and an *n x 1* \"labels\" array, where *n* is the number of examples and *d* is the number of features in the dataset. In cases where there is a big difference between the scales of features, we want to normalize the features to have values in the same range [0,1]. Since this is not the case with this dataset, we will not do normalization."
    ]
   },
   {
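
A minimal sketch of the "preprocess" function this cell describes, assuming a
comma-separated file with the label in the last column (the file format and
reading approach are assumptions, not something the notebook specifies):

    import numpy as np

    def preprocess(file_path):
        """Read the dataset and return (features, labels) arrays."""
        data = np.genfromtxt(file_path, delimiter=',')  # assumed CSV layout
        features = data[:, :-1]              # n x d array of features
        labels = data[:, -1].reshape(-1, 1)  # n x 1 array of labels
        return features, labels
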
@@ -180,7 +180,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Next, you'll need to split your dataset into training, validation and test sets. The \"partition\" function should take as input the size of the whole dataset and randomly sample a proportion $t$ of the dataset indices for test partition and a proportion of $v$ for validation partition. The remaining will be used as indices for training data. For example, to keep 30% of the examples as test and %10 as validation, set $t=0.3$ and $v=0.1$. You should choose these values according to the size of the data available to you. The \"split\" function should return indices of the training, validation and test sets. These will be used to index into the whole training set."
+    "Next, you'll need to split your dataset into training, validation and test sets. The \"partition\" function should take as input the size of the whole dataset and randomly sample a proportion *t* of the dataset indices for test partition and a proportion of *v* for validation partition. The remaining will be used as indices for training data. For example, to keep 30% of the examples as test and %10 as validation, set *t* = 0.3 and *v* = 0.1. You should choose these values according to the size of the data available to you. The \"split\" function should return indices of the training, validation and test sets. These will be used to index into the whole training set."
    ]
   },
   {
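
A minimal sketch of the "partition" function described above; the exact
signature and the (train, validation, test) return order are assumptions:

    import numpy as np

    def partition(size, t, v):
        """Randomly split the indices of range(size) into train/validation/test."""
        indices = np.random.permutation(size)  # shuffle all example indices
        n_test = int(size * t)                 # e.g. t=0.3 -> 30% for test
        n_val = int(size * v)                  # e.g. v=0.1 -> 10% for validation
        test_indices = indices[:n_test]
        val_indices = indices[n_test:n_test + n_val]
        train_indices = indices[n_test + n_val:]
        return train_indices, val_indices, test_indices

For example, partition(1000, t=0.3, v=0.1) leaves 600 indices for training.
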
-- 
GitLab