diff --git a/ProgrammingAssignment_0/GettingFamiliar.ipynb b/ProgrammingAssignment_0/GettingFamiliar.ipynb
index 999b7b79fa5b7b35e685a628e75a68b83de22b1c..04f5b47575a70be27c40d4043545b6bcbca8483a 100644
--- a/ProgrammingAssignment_0/GettingFamiliar.ipynb
+++ b/ProgrammingAssignment_0/GettingFamiliar.ipynb
@@ -110,7 +110,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "This step is for reading the dataset and for extracting features and labels. The \"preprocess\" function should return an $n \\times d$ \"features\" array, and an $n \\times 1$ \"labels\" array, where $n$ is the number of examples and $d$ is the number of features in the dataset. In cases where there is a big difference between the scales of features, we want to normalize the features to have values in the same range [0,1]. Since this is not the case with this dataset, we will not do normalization."
+    "This step is for reading the dataset and for extracting features and labels. The \"preprocess\" function should return an *n x d* \"features\" array, and an *n x 1* \"labels\" array, where *n* is the number of examples and *d* is the number of features in the dataset. In cases where there is a big difference between the scales of features, we want to normalize the features to have values in the same range [0,1]. Since this is not the case with this dataset, we will not do normalization."
    ]
   },
   {
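As a point of reference, here is a minimal sketch of the "preprocess" step described in the cell above. It assumes a comma-separated file whose last column holds the label; the file layout, path, and function signature are illustrative assumptions, not taken from the assignment.

```python
import numpy as np

def preprocess(file_path):
    """Read a dataset and return (features, labels) arrays.

    Assumes a comma-separated file whose last column is the label;
    adjust the parsing to match the actual dataset format.
    """
    data = np.genfromtxt(file_path, delimiter=',')
    features = data[:, :-1]              # n x d feature array
    labels = data[:, -1].reshape(-1, 1)  # n x 1 label array
    return features, labels
```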
@@ -180,7 +180,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Next, you'll need to split your dataset into training, validation and test sets. The \"partition\" function should take as input the size of the whole dataset and randomly sample a proportion $t$ of the dataset indices for test partition and a proportion of $v$ for validation partition. The remaining will be used as indices for training data. For example, to keep 30% of the examples as test and %10 as validation, set $t=0.3$ and $v=0.1$. You should choose these values according to the size of the data available to you. The \"split\" function should return indices of the training, validation and test sets. These will be used to index into the whole training set."
+    "Next, you'll need to split your dataset into training, validation and test sets. The \"partition\" function should take as input the size of the whole dataset and randomly sample a proportion *t* of the dataset indices for test partition and a proportion of *v* for validation partition. The remaining will be used as indices for training data. For example, to keep 30% of the examples as test and %10 as validation, set *t* = 0.3 and *v* = 0.1. You should choose these values according to the size of the data available to you. The \"split\" function should return indices of the training, validation and test sets. These will be used to index into the whole training set."
    ]
   },
   {
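A hedged sketch of the "partition" step described in the cell above: shuffle the dataset indices, carve off proportions *t* and *v* for the test and validation sets, and keep the rest for training. The signature and return order are assumptions based on the cell's wording.

```python
import numpy as np

def partition(size, t, v):
    """Randomly split range(size) into train/validation/test index sets."""
    indices = np.random.permutation(size)  # shuffled indices 0..size-1
    n_test = int(size * t)                 # e.g. t=0.3 -> 30% test
    n_val = int(size * v)                  # e.g. v=0.1 -> 10% validation
    test_indices = indices[:n_test]
    val_indices = indices[n_test:n_test + n_val]
    train_indices = indices[n_test + n_val:]
    return train_indices, val_indices, test_indices
```

For example, `train_idx, val_idx, test_idx = partition(features.shape[0], t=0.3, v=0.1)` would reproduce the 30%/10% split mentioned in the text; the returned index arrays are then used to slice the "features" and "labels" arrays.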