Commit c5c28ee8 authored by aknecht2's avatar aknecht2

Large documentation update. Added several new example scripts. Updated...

Large documentation update.  Added several new example scripts.  Updated image processing function documentation.
parent 71203716
@@ -5,19 +5,19 @@
Arkansas Workshop July 2015
============================
Hello Arkansas State! This workshop covers basic command line instructions and local processing of an image from LemnaTec. Either look at the corresponding part of the installation page for your computer (`Mac OS X <installation.html#mac-osx>`_, `Windows <installation.html#windows>`_) or head to the next section.
Command Line Crash Course
-------------------------
If you are unfamiliar with the command line, or have never opened the Terminal or Command Prompt app before, this section is for you. There are several useful features available while using Image Harvest through a Python interpreter, which requires a small amount of command line knowledge -- namely the ability to navigate through directories. On all operating systems, files are structured hierarchically in directories. A directory almost always has a parent (represented by ".."), and it may contain files or more sub-directories. The only directory that doesn't have a parent is the root directory, which is "/" for Mac OS X and Linux, and "C:\\" for Windows. Since all directories are children of the root directory, you can write the path to any file all the way from the root. These are called absolute paths. "/Users/username/Downloads/file.txt" and "C:\\Users\\username\\Downloads\\file.txt" are both examples of absolute paths. The counterpart to an absolute path is a relative path, which is a path to a file written from your current location. If you are currently located in your Downloads folder, the relative path "file.txt" refers to the same file as the absolute path "/Users/username/Downloads/file.txt". These concepts apply to all operating systems; however, each operating system uses different commands for directory navigation.
Mac OS X
#########
First and foremost, the terminal app is located at /Applications/Utilities/Terminal.app. There are only 3 important commands:
* cd -- Which stands for "change directory". This command allows you to move your current location to a new directory as specified by an argument. It can be either an absolute or a relative path.
* ls -- Which stands for "list". This command lists all the files and folders in the current directory.
* pwd -- Which stands for "print working directory", and prints your current directory's absolute path.
Examples:
@@ -49,7 +49,7 @@ The Command Prompt app can be opened by typing cmd into the windows search bar.
Examples:
.. code-block:: bash
# Move to your home folder
cd C:\\Users\\username
# Move to your downloads folder from your home folder.
@@ -89,17 +89,17 @@ or it will be difficult to find. Execute the following lines from the python interpreter
# Import the Image Harvest library
import ih.imgproc
# Load your downloaded image!
plant = ih.imgproc.Image("rgbsv1.png")
plant = ih.imgproc.Image("rgbsv1.png")
# Show your plant!
plant.show()
# Wait for user input, then destroy the window
plant.wait()
You should see a window named "window 1" pop up with a smaller version of your image in it!
By calling the :py:meth:`~ih.imgproc.Image.wait` method, the window is displayed until you press any key, and control is taken away from the python interpreter until you do. Every processing script begins by loading the library and loading an image, and all Image Harvest functions are performed on a loaded image. Using the python interpreter provides some powerful tools to adjust your analysis on the fly and test out function parameters through the use of "states".
.. code-block:: python
# Save the current state under the name "base"
plant.save("base")
# Convert the plant to grayscale
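plant.convertColor("bgr", "gray")
plant.show("Grayscale")
# A minimal sketch of using saved "states": jump back to the saved color image at any time
# (these calls mirror ones shown later in this document)
plant.restore("base")
plant.show("Restored base")
plant.wait()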
@@ -183,7 +183,7 @@ You should see your final displayed image! The :py:meth:`~ih.imgproc.Image.cont
Python Script
-------------
:download:`Download Script <../../examples/workshops/arkansas_july_2015/firstimage.py>`
:download:`Download Script <../../examples/workshops/arkansas_july_2015/firstimage.txt>`
:download:`Download Image <../images/sample/rgbsv1.png>` (same as above)
@@ -202,7 +202,7 @@ This script performs exactly the process we worked through only as a standalone
plant.convertColor("bgr", "gray")
plant.save("gray")
plant.show("Grayscale")
# Threshold the image incorrectly AND correctly
plant.threshold(255)
plant.show("Threshold 255")
plant.restore("gray")
@@ -231,7 +231,7 @@ Now that we have everything installed, we'll work through a pipeline for an image
:download:`Team 1 Rice 0_0_2 <../../examples/workshops/arkansas_july_2015/0_0_2.png>`
:download:`The Pipeline <../../examples/workshops/arkansas_july_2015/pipeline.py>`
:download:`The Pipeline <../../examples/workshops/arkansas_july_2015/pipeline.txt>`
The pipeline:
@@ -292,7 +292,7 @@ Loading
plant.save("base")
plant.show("base")
This part should be fairly familiar -- we load the Image Harvest library with import ih.imgproc, we load our image "0_0_2.png", and we save & display it under the name "base".
Blur
#####
@@ -302,7 +302,7 @@ Blur
# blur
plant.blur((5, 5))
plant.save("blur")
plant.show("Blur")
plant.show("Blur")
Here we call the :py:meth:`~ih.imgproc.Image.blur` function -- which does exactly that. Blur can take a few arguments, but the most important and only required argument is the first, which specifies the kernel size -- a larger kernel means a blurrier output image. The kernel size is represented as two numbers, (width, height), and both the width and height must be odd and positive. Try blurring with a kernel size of (15, 15) instead of (5, 5).
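If you want to see the difference directly, a minimal sketch using the saved states from above could look like this (the window name is just an illustrative label):
.. code-block:: python
# Re-blur the original saved image with a larger kernel and compare
plant.restore("base")
plant.blur((15, 15))
plant.show("Blur 15x15")
plant.wait()
# Return to the (5, 5) result before continuing the pipeline
plant.restore("blur")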
@@ -318,7 +318,7 @@ Meanshift
plant.save("shift")
plant.show("Meanshift")
Here we call the :py:meth:`~ih.imgproc.Image.meanshift` function, which performs mean shift segmentation of the image. Mean shift segmentation groups regions in an image that are geographically close AND close in color. The first two arguments define the geographic distance, and the third argument defines the color variation allowed.
Why meanshift? Meanshift groups the image into regions, which sets up the next steps. In this case, we have some very easily defined regions that are close both geographically and in color. The plant itself is completely connected and a similar shade of green. The interior of the pot is contained within a single circle in the image and is made of similarly colored rocks. Both of these regions become much more uniform, and thus much easier to process, as a result of this step.
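For reference, a hedged sketch of what the call itself might look like (the numeric values are illustrative placeholders, not taken from this pipeline; the argument meanings follow the description above):
.. code-block:: python
# First two arguments: geographic distance; third argument: allowed color variation
plant.meanshift(4, 4, 40)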
@@ -326,7 +326,7 @@ Color Filter
############
.. code-block:: python
# colorFilter
plant.colorFilter("(((g > r) and (g > b)) and (((r + g) + b) < 400))")
plant.colorFilter("(((r max g) max b) - ((r min g) min b)) > 12)")
@@ -362,7 +362,7 @@ Recoloring
plant.convertColor("gray", "bgr")
plant.bitwise_and("base")
plant.write("final.png")
plant.show("Final")
plant.show("Final")
What's really happening here is that we are creating a mask of our current image, and then overlaying it on our base image to extract the original color. Blurring and segmenting our image changed its original color, and we want to restore that. The first 3 lines create the mask. We first convert the image to grayscale with :py:meth:`~ih.imgproc.Image.convertColor`, and then :py:meth:`~ih.imgproc.Image.threshold` with a value of 0. By thresholding the image with a value of 0, we keep every pixel from our grayscale image that has an intensity of at least 1. This means we keep every non-black pixel from our grayscale image. We then convert the image back to "bgr" from "gray"; however, this does NOT restore color to the image. Our binary image only has a single channel (intensity), but our color image has 3 channels (b, g, r). We can't overlay our mask while it only has a single channel, so we convert it to "bgr" to give it 3 channels. We then use :py:meth:`ih.imgproc.Image.bitwise_and` to overlay our mask on our base image.
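Putting that sequence together, a minimal sketch of the masking steps described here looks like this (the threshold value of 0 comes directly from the text above):
.. code-block:: python
# Build a binary mask, give it 3 channels, then overlay it on the saved base image
plant.convertColor("bgr", "gray")
plant.threshold(0)
plant.convertColor("gray", "bgr")
plant.bitwise_and("base")
plant.write("final.png")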
Camera Processing Example #1
=============================
This section details a shorter script that shows how to process
an image taken manually with a camera.
Python Script
-----------------------
:download:`Download Script <../../examples/scripts/camera1/camera1.txt>`
:download:`Download Image <../../examples/scripts/camera1/IMG_6050.JPG>`
.. code-block:: python
import ih.imgproc
plant = ih.imgproc.Image("/path/to/your/image")
plant.crop([0, "y - 210", 0, "x"])
plant.write("crop.png")
plant.colorFilter("(g > 200)")
plant.write("thresh.png")
plant.contourChop(plant.image, 300)
plant.write("chop.png")
plant.contourCut(plant.image, resize = True)
plant.write("final.png")
This script has only four processing steps but produces a very good result. Let's
look at each block.
.. code-block:: python
import ih.imgproc
plant = ih.imgproc.Image("/path/to/your/image")
These two lines set up your image. The first line imports the ih image processing
module, and the second loads our image. Here's what our base image looks like:
.. image:: ../../examples/scripts/camera1/IMG_6050_small.jpeg
:align: center
|
.. code-block:: python
plant.crop([0, "y - 210", 0, "x"])
plant.write("crop.png")
First, we :py:meth:`~ih.imgproc.Image.crop` the image to simply remove the pot. All ROIs are of the form
[ystart, yend, xstart, xend], so in this case we are only removing the bottom
210 pixels from the image. We then write the image:
.. image:: ../../examples/scripts/camera1/IMG_6050_crop.png
:align: center
|
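To make the crop's string arithmetic concrete, here is a small illustration (the image dimensions are made-up values, not taken from this example):
.. code-block:: python
# For a hypothetical 4000 x 3000 (width x height) image, "x" evaluates to 4000 and "y" to 3000,
# so this ROI is equivalent to [0, 2790, 0, 4000] -- the bottom 210 rows are removed
plant.crop([0, "y - 210", 0, "x"])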
.. code-block:: python
plant.colorFilter("(g > 200)")
plant.write("thresh.png")
Next, we apply a :py:meth:`~ih.imgproc.Image.colorFilter` to the image, with the
logic "g > 200". This means that we only keep pixels in the image whose green
channel is greater than 200. We then write the image:
.. image:: ../../examples/scripts/camera1/IMG_6050_thresh.png
:align: center
|
.. code-block:: python
plant.contourChop(plant.image, 300)
plant.write("chop.png")
We then use the :py:meth:`~ih.imgproc.Image.contourChop` function to remove
small contours from the image. This takes advantage of the fact that the
plant is completely connected, and the noise around it isn't. We then write
the image:
.. image:: ../../examples/scripts/camera1/IMG_6050_chop.png
:align: center
|
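The second argument to :py:meth:`~ih.imgproc.Image.contourChop` is the size cutoff below which contours are removed (it corresponds to the --basemin option in the command line version below). A hedged sketch of tightening it:
.. code-block:: python
# A larger cutoff also removes bigger specks of noise -- the value here is illustrative
plant.contourChop(plant.image, 1000)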
.. code-block:: python
plant.contourCut(plant.image, resize = True)
plant.write("final.png")
Finally, we use the :py:meth:`~ih.imgproc.Image.contourCut` function to
crop the image to just the plant, and write our final result:
.. image:: ../../examples/scripts/camera1/IMG_6050_final.png
:align: center
Command Line Script
-------------------
:download:`Download Script <../../examples/scripts/camera1/camera1.sh>`
:download:`Download Image <../../examples/scripts/camera1/IMG_6050.JPG>` (The image is identical to the one above)
.. code-block:: bash
#!/bin/bash
ih-crop --input /path/to/your/image --output crop.png --ystart 0 --yend "y - 210" --xstart 0 --xend "x"
ih-color-filter --input crop.png --output thresh.png --logic "(g > 200)"
ih-contour-chop --input thresh.png --output chop.png --binary thresh.png --basemin 300
ih-contour-cut --input chop.png --binary chop.png --output final.png --resize
This bash script performs the exact same workflow as the one above. Although the differences are not major, it is important to note
how library access and command line access differ. Command line access loads and writes an image at each step,
whereas library access loads once and only writes when you tell it to. Additionally, the script names are close to those
of the corresponding methods: they are all lower case, with words separated by dashes, and every command is prefixed with ih-.
The most notable difference is that a restore method is unnecessary with command-line input; you simply reuse a file you
have already written. Additionally, there is no initial setup -- you simply begin processing. The arguments passed to the
command line scripts match the method arguments above.
Camera Processing Example #2
=============================
This section details a shorter script that shows how to process
an image taken manually with a camera.
Python Script
-----------------------
:download:`Download Script <../../examples/scripts/camera2/camera2.txt>`
:download:`Download Image <../../examples/scripts/camera2/IMG_0370.JPG>`
.. code-block:: python
import ih.imgproc
plant = ih.imgproc.Image("/path/to/your/image")
plant.crop([280, "y - 175", 1000, 3200])
plant.write("crop.png")
plant.colorFilter("((g - b) > 0)")
plant.write("thresh.png")
plant.contourChop(plant.image, 1000)
plant.write("chop.png")
plant.contourCut(plant.image, resize = True)
plant.write("final.png")
This script has only four processing steps but produces a very good result. Let's
look at each block.
.. code-block:: python
import ih.imgproc
plant = ih.imgproc.Image("/path/to/your/image")
These two lines set up your image. The first line imports the ih image processing
module, and the second loads our image. Here's what our base image looks like:
.. image:: ../../examples/scripts/camera2/IMG_0370_small.jpeg
:align: center
|
.. code-block:: python
plant.crop([280, "y - 175", 1000, 3200])
plant.write("crop.png")
First, we :py:meth:`~ih.imgproc.Image.crop` the image to remove some background
noise. All ROIs are of the form [ystart, yend, xstart, xend], so in this case
we are removing a small amount from the top and bottom of the image, and a lot
from each side. We then write the image:
.. image:: ../../examples/scripts/camera2/crop.png
:align: center
|
.. code-block:: python
plant.colorFilter("((g - b) > 0)")
plant.write("thresh.png")
Next, we apply a :py:meth:`~ih.imgproc.Image.colorFilter` to the image, with the
logic "g - b > 0". This means that we only keep pixels in the image whose green
channel is greater than the value of the blue channel. We then write the image:
.. image:: ../../examples/scripts/camera2/thresh.png
:align: center
|
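As a quick sanity check with made-up channel values (in BGR order, not taken from this image):
.. code-block:: python
# A pixel with channels [90, 140, 60] has g - b = 140 - 90 = 50 > 0, so it is kept;
# a pixel with channels [150, 120, 60] has g - b = -30, so it is written as black
b, g, r = 90, 140, 60
keep = (g - b) > 0   # True for this pixel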
.. code-block:: python
plant.contourChop(plant.image, 1000)
plant.write("chop.png")
We then use the :py:meth:`~ih.imgproc.Image.contourChop` function to remove
small contours from the image. This takes advantage of the fact that the
plant is completely connected, and the noise around it isn't. We then write
the image:
.. image:: ../../examples/scripts/camera2/chop.png
:align: center
|
.. code-block:: python
plant.contourCut(plant.image, resize = True)
plant.write("final.png")
Finally, we use the :py:meth:`~ih.imgproc.Image.contourCut` function to
crop the image to just the plant, and write our final result:
.. image:: ../../examples/scripts/camera2/final.png
:align: center
Command Line Script
-------------------
:download:`Download Script <../../examples/scripts/camera2/camera2.sh>`
:download:`Download Image <../../examples/scripts/camera2/IMG_0370.JPG>` (The image is identical to the one above)
.. code-block:: bash
#!/bin/bash
ih-crop --input /path/to/your/image --output crop.png --ystart 280 --yend "y - 175" --xstart 1000 --xend 3200
ih-color-filter --input crop.png --output thresh.png --logic "((g - b) > 0)"
ih-contour-chop --input thresh.png --output chop.png --binary thresh.png --basemin 1000
ih-contour-cut --input chop.png --binary chop.png --output final.png --resize
This bash script performs the exact same workflow as the one above. Although the differences are not major, it is important to note
how library access and command line access differ. Command line access loads and writes an image at each step,
whereas library access loads once and only writes when you tell it to. Additionally, the script names are close to those
of the corresponding methods: they are all lower case, with words separated by dashes, and every command is prefixed with ih-.
The most notable difference is that a restore method is unnecessary with command-line input; you simply reuse a file you
have already written. Additionally, there is no initial setup -- you simply begin processing. The arguments passed to the
command line scripts match the method arguments above.
@@ -8,7 +8,7 @@ Color Filter In Depth #1
The :py:meth:`~ih.imgproc.Image.colorFilter` function is a powerful and versatile function that removes pixels
from an image based on arbitrary input logic. This example
focuses solely on the uses and syntax of the :py:meth:`~ih.imgproc.Image.colorFilter` function. This function takes
2 arguments. The first is the actual logic to solve, and this first argument will be the main focus of all
color filter examples. The second argument is a region of interest -- namely, where you want
to apply your specified logic. If left blank, the logic is applied to the entire image. Let's talk
about what we can do with logic. As stated under the function-specific documentation, the supported
@@ -17,7 +17,7 @@ as any number.
Excluding parentheses, these can be broken down into 2 categories: operators and values. The operators
are all the symbols before the parentheses: '+', '-', '*', '/', '>', '>=', '==', '<', '<=', 'and', 'or', 'max', and 'min'.
The values are all the symbols after the parentheses: 'r', 'g', 'b', and, again, any number.
Parentheses are used to make sure your logic is well formed and easy to parse. Well-formed logic means that each operator acts
on only 2 values, and each piece of logic is surrounded by parentheses. Thus ((r + g) + b) is well formed, but (r + g + b) is not. The logic argument for this function
**must be well formed**. This is important to remember as we continue forward. Since we have fewer values than
operators, let's talk about values first.
@@ -27,22 +27,22 @@ Values are simply numbers. Numbers that are input as raw text are interpreted literally
of saying that if a given number, say '15', occurs in your logic string, it is translated to the number 15. However,
the other values hold special meaning relating to the colors of the image in question. 'r' is the value of the
red channel, 'g' is the value of the green channel, and 'b' is the value of the blue channel. Suppose we are considering a particular pixel
of an image with the following channel values in BGR order: [20, 30, 40]. For this pixel, 'r'
equates to 40, 'g' equates to 30, and 'b' equates to 20. We apply operators
to these values to remove pixels.
By applying operators, we look to convert our values into a true or false output. The arithmetic operators
('+', '-', '*', '/') do exactly what you'd expect to numbers. Thus, by themselves they do not return a true or false value.
The comparison operators ('>', '>=', '==', '<', '<='), however, return true or false depending on the result of the comparison.
'max' and 'min' are used to select the maximum or minimum, respectively, of two values. Thus (b max g) selects the
maximum value between the blue and green channels, and ((b min g) min r) selects the minimum of all three channels.
Finally, 'and' and 'or' can be used to join together multiple pieces of logic. That's a lot of text to absorb -- let's look at some examples.
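Before the full script, here is a quick worked check using the example pixel from above (a plain Python sketch for illustration only -- it evaluates the logic by hand rather than through the library):
.. code-block:: python
# Example pixel in BGR order: [20, 30, 40]
b, g, r = 20, 30, 40
keep = ((r + g) + b) > 60        # 90 > 60 -> True: this pixel would be kept
keep2 = (r - g) > 50             # 10 > 50 -> False: this pixel would be written as black
channel_max = max(max(b, g), r)  # ((b max g) max r) evaluates to 40 for this pixel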
Python Script
-------------
:download:`Download Script <../../examples/scripts/color1/color1.py>`
:download:`Download Script <../../examples/scripts/color1/color1.txt>`
:download:`Download Image <../images/sample/rgbtv1.png>`
@@ -53,7 +53,7 @@ Python Script
plant = ih.imgproc.Image("/path/to/your/image")
plant.save("base")
logicList = [
"(((r + g) + b) > 350)",
"(r >= 250)",
@@ -63,14 +63,14 @@ Python Script
"((r - g) > 50)",
"(((g - r) > 15) and (b < g))"
]
for logic in logicList:
plant.restore("base")
plant.show("base")
plant.colorFilter(logic)
plant.show(logic)
plant.wait()
This script applies different logic to the same image, while displaying both the base image
and the filtered image at the same time. Let's talk about each individual piece of logic. First is "(((r + g) + b) > 350)". This
will sum up the value of each channel (the intensity of the pixel) and compare this sum to 350. If the value is greater
@@ -78,238 +78,238 @@ than 350, it will keep the pixel, otherwise it writes it as black. Here's the result:
.. image:: ../images/sample/rgbtv1_small.png
:align: left
.. image:: ../../examples/scripts/color1/logic0.png
:align: right
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Next is "(r >= 250)". This simply keeps any pixel who has a red value greater than or equal
to 250. Color channels range from 0 to 255, so 250 is very large. Here's the result:
.. image:: ../images/sample/rgbtv1_small.png
:align: left
.. image:: ../../examples/scripts/color1/logic1.png
:align: right
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Next is "((b > 150) and (b < 220))". This will keep any pixel that has a blue value
that is greater than 150 and less than 220. Here's the result:
.. image:: ../images/sample/rgbtv1_small.png
:align: left
.. image:: ../../examples/scripts/color1/logic2.png
:align: right
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Next is ""((((b max g) max r) - ((b min g) min r)) > 20)",". This will keep any pixel
whose maximum channel value is at least 20 greater than its minimum channel value. Here's the result:
.. image:: ../images/sample/rgbtv1_small.png
:align: left
.. image:: ../../examples/scripts/color1/logic3.png
:align: right
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Next is "((((r + g) + b) > 700) or (((r + g) + b) < 100))". This will keep any pixel whose
intensity is larger than 700 OR smaller than 100. Here's the result:
.. image:: ../images/sample/rgbtv1_small.png
:align: left
.. image:: ../../examples/scripts/color1/logic4.png
:align: right
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Next is "((r - g) > 50)". This will keep any pixel that has a red channel value
that is at least 50 greater than its green channel value. Here's the result:
.. image:: ../images/sample/rgbtv1_small.png
:align: left
.. image:: ../../examples/scripts/color1/logic5.png
:align: right
|