Commit 3101d39c authored by aknecht2

Documentation update for imgproc functions, and workflow examples.

parent 79cad3a1
......@@ -79,7 +79,7 @@ Here are the first few lines of the crawl.json file:
...
The data set located in /stash/project/@RicePhenomics/public/tinySet is publicly available,
so you shouldn't need to adjust this file at all! To setup a project directoy and load images run:
so you shouldn't need to adjust this file at all! To set up a project directory and load images, run:
.. code-block:: bash
......@@ -96,6 +96,8 @@ setup.
Configuration
-------------
The configuration for OSG submissions is very different, and you must additionally
generate an ih distribution and ssh keys to transfer your files.
Here is the adjusted configuration file:
.. code-block:: javascript
......@@ -111,6 +113,7 @@ Here is the adjusted configuration file:
"universe": "vanilla",
"requirements": "OSGVO_OS_STRING == \"RHEL 6\" && HAS_FILE_usr_lib64_libstdc___so_6 && CVMFS_oasis_opensciencegrid_org_REVISION >= 3590",
"+WantsStashCache": "True",
"+ProjectName": "YOUR_OSG_PROJECT_NAME"
}
},
"osg": {
......@@ -123,18 +126,24 @@ Here is the adjusted configuration file:
"images": 2
},
"notify": {
"email": "avi@kurtknecht.com",
"email": "YOUR_EMAIL",
"pegasus_home": "/usr/share/pegasus/"
}
}
Most of the information is the same as the configuration file from the previous
workflow example, but there are a few differences. First, the condor configuration
workflow example, but there are a few differences. First, there is a version definition,
which should match the version of your ih tarball (currently 1.0). Next, the condor configuration
should have the "requirements" and "+WantsStashCache" definitions, and they should
match as above. The "+ProjectName" definition is if you have a group on the OSG.
Next, there is a version definition, which should match the version of your ih
tar. To create a tarball distribution, navigate to your ih install, and run:
match as above. The "+ProjectName" definition is only needed if you have a group on the OSG,
and it should match your group name. Finally, there is an "osg"-specific key, which
requires a path to an ih tarball distribution as well as a path to your
ssh private key.
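
Based on those definitions, the "osg" section of the configuration will look roughly
like the sketch below. The key names follow the "tarball" and "ssh" definitions
described later in this section; the paths are placeholders for the full paths on
your own system.

.. code-block:: javascript

"osg": {
    "tarball": "/home/USERNAME/ih-1.0.tar.gz",
    "ssh": "/home/USERNAME/.ssh/WORKFLOW_KEY"
}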
Creating the ih Distribution
=============================
To create a tarball distribution, navigate to your ih install, and run:
.. code-block:: bash
......@@ -160,6 +169,8 @@ full path to this file into the "tarball" definition.
# re-tar our folder
tar -zcvf ih-1.0.tar.gz ih-1.0
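
If you want to sanity-check the archive before filling in the "tarball" definition,
you can list its contents (an optional convenience step, not something ih requires):

.. code-block:: bash

# confirm the ih-1.0 folder was packaged as expected
tar -tzf ih-1.0.tar.gz | head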
Creating ssh Keys
==================
Next, for data staging and transferring files to work successfully, you need to create an ssh key pair
for your workflow. Run the following:
......@@ -170,15 +181,16 @@ for your workflow. Run the following:
This will generate an ssh key pair. You will be prompted for a name and a
password. When prompted for the password, hit enter to leave the password blank.
After this finishes, there should be two files created in your ~/.ssh/ folder,
a private key file and public key file with the name that you gave. Append the
contents of the public key file to your authorized_keys files (create it if
necessary).
a private key file and a public key file with the name that you gave. The public
key is simply the file that ends with ".pub". Append the contents of the public
key file to your ~/.ssh/authorized_keys file. If there is no ~/.ssh/authorized_keys
file, create it and append the contents of your public key to it.
.. code-block:: bash
cat ~/.ssh/KEY_NAME.pub >> ~/.ssh/authorized_keys
Make sure you provide the full path to the private key for the "ssh" definition.
Make sure you provide the full path to the PRIVATE key for the "ssh" definition.
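
For example, if you named your key pair workflow_key (a placeholder name), the "ssh"
definition would point at the private half of the pair, not the ".pub" file:

.. code-block:: javascript

"ssh": "/home/USERNAME/.ssh/workflow_key"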
Now you are ready to generate and submit your workflow! Make sure you cd
to your top level folder (awesome_project) and then run:
......
......@@ -4,7 +4,7 @@
{
"name": "pot_filter_1",
"executable": "ih-color-filter",
"inputs": ["base", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/fluosv_pot1.json"],
"inputs": ["base", "/work/walia/common/workflows/0184/input/fluosv_pot1.json"],
"outputs": ["pot1"],
"arguments": {
"--logic": "(((((((r - g) < 30) and (((r + g) + b) < 110)) or ((((r + g) + b) > 110) and ((r - g) < 50))) or (((r - g) < 25) and ((g - r) < 25))) or (g > 60)) not 1)"
......@@ -13,7 +13,7 @@
{
"name": "pot_filter_2",
"executable": "ih-color-filter",
"inputs": ["pot1", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/fluosv_pot2.json"],
"inputs": ["pot1", "/work/walia/common/workflows/0184/input/fluosv_pot2.json"],
"outputs": ["pot2"],
"arguments": {
"--logic": "(((r + g) + b) > 120)"
......@@ -33,7 +33,7 @@
{
"name": "crop",
"executable": "ih-crop",
"inputs": ["filter", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/fluosv_edge.json"],
"inputs": ["filter", "/work/walia/common/workflows/0184/input/fluosv_edge.json"],
"outputs": ["edged"],
"arguments": {},
"depends": ["main_filter"]
......@@ -44,7 +44,6 @@
"inputs": ["edged", "edged"],
"outputs": ["final"],
"arguments": {
"--resize": "",
"--basemin": 75
},
"depends": ["crop"]
......@@ -96,7 +95,7 @@
{
"name": "crop",
"executable": "ih-crop",
"inputs": ["morphed", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/rgbtv_edge.json"],
"inputs": ["morphed", "/work/walia/common/workflows/0184/input/rgbtv_edge.json"],
"outputs": ["edged"],
"arguments": {},
"depends": ["closing"]
......@@ -115,8 +114,7 @@
"inputs": ["recolor", "recolor"],
"outputs": ["final"],
"arguments": {
"--basemin": 200,
"--resize": ""
"--basemin": 200
},
"depends": ["reconstitute"]
}
......@@ -239,7 +237,7 @@
{
"name": "gfilter1",
"executable": "ih-color-filter",
"inputs": ["box_filtered", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/rgbsv_gray1.json"],
"inputs": ["box_filtered", "/work/walia/common/workflows/0184/input/rgbsv_gray1.json"],
"outputs": ["gray_filtered1"],
"arguments": {
"--logic": "((((b max g) max r) - ((b min g) min r)) > 50)"
......@@ -249,7 +247,7 @@
{
"name": "gfilter2",
"executable": "ih-color-filter",
"inputs": ["gray_filtered1", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/rgbsv_gray2.json"],
"inputs": ["gray_filtered1", "/work/walia/common/workflows/0184/input/rgbsv_gray2.json"],
"outputs": ["gray_filtered2"],
"arguments": {
"--logic": "((((b max g) max r) - ((b min g) min r)) > 100)"
......@@ -259,7 +257,7 @@
{
"name": "crop",
"executable": "ih-crop",
"inputs": ["gray_filtered2", "/Volumes/BLOO/Work/hcc/plantcv/new/workflow1/input/rgbsv_edge.json"],
"inputs": ["gray_filtered2", "/work/walia/common/workflows/0184/input/rgbsv_edge.json"],
"outputs": ["edged"],
"arguments": {},
"depends": ["gfilter2"]
......@@ -278,8 +276,7 @@
"inputs": ["recolor2", "recolor2"],
"outputs": ["final"],
"arguments": {
"--basemin": 50,
"--resize": ""
"--basemin": 50
},
"depends": ["reconstitute2"]
}
......@@ -298,7 +295,7 @@
"rgbsv": {
"inputs": ["final"],
"arguments": {
"--dimensions": "",
"--dimfromroi": "pot_roi",
"--pixels": "",
"--moments": ""
},
......@@ -316,7 +313,7 @@
"fluosv": {
"inputs": ["final"],
"arguments": {
"--dimensions": "",
"--dimfromroi": "/work/walia/common/workflows/0184/input/fluosv_pot1.json",
"--pixels": "",
"--moments": ""
},
......
{
{
"version": "1.0",
"installdir": "/home/aknecht/stash/walia/ih/ih/build/scripts-2.7/",
"profile": {
......@@ -9,7 +9,7 @@
"universe": "vanilla",
"requirements": "OSGVO_OS_STRING == \"RHEL 6\" &amp;&amp; HAS_FILE_usr_lib64_libstdc___so_6 &amp;&amp; CVMFS_oasis_opensciencegrid_org_REVISION >= 3590",
"+WantsStashCache": "True",
"+ProjectName": "RicePhenomics"
"+ProjectName": "RicePhenomics"
}
},
"osg": {
......
......@@ -337,6 +337,9 @@ class Image(object):
return np.array(merged, dtype=np.int32)
def drawContours(self):
"""
A helper function that detects all contours in the image and draws them onto the image.
"""
if self._isColor():
binary = cv2.inRange(self.image.copy(), np.array([1, 1, 1], np.uint8), np.array([255, 255, 255], np.uint8))
else:
......@@ -463,6 +466,17 @@ class Image(object):
return
def addWeighted(self, image, weight1, weight2):
"""
:param image: The image to add.
:type image: str or np.ndarray
:param weight1: The weight to apply to the current image.
:type weight1: float
:param weight2: The weight to apply to the additional image.
:type weight2: float
This function adds an additional image to the current image based on the provided
weights. Both positive and negative weights can be used.
"""
self.image = cv2.addWeighted(self.image, weight1, self._loadResource(image)[1], weight2, 0)
return
......@@ -543,7 +557,7 @@ class Image(object):
def rotateColor(self, color):
"""
:param color: Color shift to perform
:param color: Color shift to perform. Should be [b, g, r].
:type color: list
Shifts the entire color of the image based on the values in
......@@ -583,8 +597,8 @@ class Image(object):
When creating your label file, make sure to use helpful names. Calling each set of colors "label1", "label2", etc.
provides no meaningful information. The remove list is the list of matched labels to remove from the final image.
The names to remove should match the names in your label file exactly. For example, let's say you have the labels
"plant", "pot", "track", and "background" defined, and you only want to keep pixels that match the "plant" label.
Your remove list should be specified as ["pot", "track", "background"].
"plant", "pot", "track", and "background" defined, and you only want to keep pixels that match the "plant" label.
Your remove list should be specified as ["pot", "track", "background"].
"""
if (os.path.isfile(labels)):
with open(labels, "r") as rh:
......@@ -864,6 +878,17 @@ class Image(object):
return
def contourChop(self, binary, basemin = 100):
"""
:param binary: The binary image to find contours of.
:type binary: str or np.ndarray
:param basemin: The minimum area a contour must have to be considered part of the foreground.
:type basemin: int
This function works very similarly to the :py:meth:`~ih.imgproc.Image.contourCut`
function, except that it does not crop the image; instead, it removes
all contours whose area falls below the threshold.
"""
bname, binary = self._loadResource(binary)
if self._isColor(binary):
binary = cv2.cvtColor(binary, cv2.COLOR_BGR2GRAY)
......@@ -874,6 +899,12 @@ class Image(object):
return
def getBounds(self):
"""
:return: The bounding box of the image.
:rtype: list
This function finds the bounding box of all contours in the image, and
returns a list of the form [miny, maxy, minx, maxx].
"""
binary = self.image.copy()
if self._isColor(binary):
binary = cv2.cvtColor(binary, cv2.COLOR_BGR2GRAY)
......@@ -898,7 +929,7 @@ class Image(object):
"""
:param binary: The binary image to find contours of.
:type binary: str or np.ndarray
:param basemin: The minimum area a contour must have to be consider part of the foreground.
:param basemin: The minimum area a contour must have to be considered part of the foreground.
:type basemin: int
:param padding: Padding added to all sides of the final roi.
:type padding: int
......@@ -990,7 +1021,7 @@ class Image(object):
def bitwise_not(self):
"""
Inverts the image. If the given image has multiple channels (i.e. is a color image) each channel is processed independently.
Inverts the image. If the given image has multiple channels (i.e. is a color image) each channel is processed independently.
"""
self.image = cv2.bitwise_not(self.image)
return
......@@ -1119,6 +1150,19 @@ class Image(object):
return moments
def extractDimsFromROI(self, roi):
"""
:param roi: The roi to calculate height from.
:type roi: list or roi file
:return: A list corresponding to the calculated height and width of the image.
:rtype: list
Returns a list with the following form: [height, width]. This function differs
from :py:meth:`~ih.imgproc.Image.extractDimensions` in the way that height
is calculated. Rather than calculating the total height of the image,
the height is calculated from the top of the given ROI.
"""
pot = self._loadROI(roi)
plant = self.getBounds()
height = plant[0] - pot[0]
......@@ -1257,7 +1301,8 @@ class Image(object):
def extractColorChannels(self):
"""
This function is similar to the
This function extracts the total number of pixels of each color value
for each channel.
"""
b, g, r = cv2.split(self.image)
bdata, gdata, rdata = [], [], []
......