Tabletop dataset
The tabletop dataset is used in a publication currently under review. The database contains 300 pictures, of which 30 are used for training. We also include a video that we used to demonstrate the tracking abilities of our system. More information will be made available once the publication process is finished.
Download: zip
Accessing the database from the Lomp, Faubel, Schöner (2016, in review) paper
Below are instructions for reproducing the experiments reported in the paper. If you encounter any problems with these instructions, please feel free to contact oliver lomp(at)ini rub de (replace the spaces with dots).
Step 1: Get cedar
Method 1: use a virtual machine (recommended)
- Go to the cedar website and download the cedar VM (found on the 'downloads' page).
- Set up the virtual machine (VM) by following the instructions provided there.
- Start the virtual machine.
- Open a terminal and type
cd && cd cedar.release
- Update the cedar version in the virtual machine to an appropriate version (see "Choosing a cedar version" below).
- Compile this version by typing
cd build
and then
make cedar
- You may now test cedar by typing
../bin/cedar
This should open its graphical user interface.
- Continue with Step 2: Get the necessary plugins. (A consolidated sketch of the above commands follows.)
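For convenience, here is the whole Method 1 command sequence as a single terminal session. This is a sketch: it assumes the VM's default layout with cedar checked out in ~/cedar.release and uses the pinned revision from "Choosing a cedar version" below.
cd && cd cedar.release
hg pull https://bitbucket.org/cedar/development
hg update -r a29302ef7cf2
cd build
make cedar
../bin/cedar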
Method 2: install cedar on your own machine
This should work on Linux, Mac OS and Windows. However, it is untested.
Go to the cedar website. Follow the instructions on installing cedar found in the documentation; use mercurial, and before compiling cedar (i.e., before running cmake/make), follow the instructions under "Choosing a cedar version" below. When you are done installing cedar, continue with 'Step 2: Get the necessary plugins'.
Choosing a cedar version:
These instructions were tested with revision a29302ef7cf2 of cedar. At the time of writing, this revision is only available from cedar's development repository (note that the development version is not necessarily stable, but it has features that are needed for the object recognition architecture). Thus, first run the following command in your cedar copy:
hg pull https://bitbucket.org/cedar/development
Now, you can update to the specific revision of cedar with
hg update -r a29302ef7cf2
Alternatively, you can update to the latest revision of cedar with
hg update
but this is not guaranteed to work.
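To verify which revision your copy is on, you can ask mercurial for the current changeset id (hg id is a standard mercurial command):
hg id
If you updated to the pinned revision, the output should begin with a29302ef7cf2.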
Step 2: Get the necessary plugins
The object recognition architecture requires some functionality that is not part of cedar's core. This functionality is provided by two plugins.
To begin, open a terminal. If you are using a virtual machine, type cd to go to the home directory. If you are working on your own machine, cd to a folder into which you want to check out the plugins.
1. Get the common cedar plugin
To do this, first type
hg clone https://bitbucket.org/cedar/plugins plugin
cd plugin
Note that the 'plugin' argument for the hg clone command is important; if the plugin code is not in a folder called 'plugin', compiling the evaluation scripts (see below) will not work. As with cedar, the architecture was tested with a specific revision of the plugin, which you may obtain by typing
hg update -r ddddd3ae65be
cp project.conf.example project.conf
Skip the -r ddd... part of the hg update command if you wish to try the newest version of the plugin. Edit the project.conf to reflect the location of your cedar version: on the VM, fill in the corresponding line
set(CEDAR_HOME ~/cedar.release)
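If you prefer to make this change from the terminal instead of an editor, a one-liner like the following should work (a sketch; it assumes the default VM layout and that project.conf already contains a set(CEDAR_HOME ...) line):
sed -i 's|^set(CEDAR_HOME.*|set(CEDAR_HOME ~/cedar.release)|' project.conf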
Now, build the plugin with the following commands:
mkdir build
cd build
cmake -D BUILDSET=shared_buildsets/LompFaubelSchoener2016.cmake ..
make
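If the build succeeds, the plugin library (the same file you will load in Step 3 below) should now be in the build folder; on Linux, you can check with
ls libLompFaubelSchoener2016.so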
2. Get the evaluation scripts
This works analogously to the steps described for the common cedar plugin. On the VM, type cd; on your own machine, cd into the directory into which you want to check out the plugins.
Now type
hg clone https://bitbucket.org/OliverLomp/ObjectRecognitionEvaluationScripts
cd ObjectRecognitionEvaluationScripts
hg update -r e15224d7b6e3
cp project.conf.example project.conf
Edit the project.conf as above. In addition, edit the following lines so that the scripts can find your cedar plugin:
set(CEDAR_PLUGIN_HOME "~/") # folder in which you checked out the plugins
set(CEDAR_PLUGIN_NAME LompFaubelSchoener2016)
Finally, compile the plugin with
mkdir build
cd build
cmake ..
make
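As with the common plugin, the compiled library should appear in the build folder once make finishes. Assuming it follows the same lib<name>.so naming pattern (an assumption; the exact filename is not stated here), you can check with
ls lib*.so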
3. Load the plugins in cedar
cd to your cedar folder. Type ./bin/cedar to start the cedar GUI. In the menu, select Tools > Manage plugins. Click "add" and navigate to plugin/build/libLompFaubelSchoener2016.so (or .dll if you are on Windows, .dylib on Mac OS). Click "open". Repeat the process for the object recognition evaluation scripts plugin. You are now ready to load the architecture.
You may want to tick the "startup" checkbox so that you do not have to load the plugins every time you start cedar.
Step 3: Use the architecture
First, download the architecture from this link and extract the zip file (make sure to keep both files contained in the ZIP at the same location!). In cedar, choose File > Load and navigate to the architecture (if, upon loading, you are prompted about missing plugins, continue anyway).
Now you will need to load an image. First, download and extract the tabletop database (see above). Then, press ctrl+f in cedar, and type "picture" into the search box. Hit enter. This should highlight the input step and select it, so that its properties are displayed in the panel to the right. In the properties, next to the property "filename", click the button labeled ">" and browse to one of the images from the database.
Make sure the architecture is seeing the appropriate region of the image. To do so, first select the "slice" step in the "Image ROI selection" box (located at the top left of the architecture). If you want to select the region in the center of the image, change the "anchor type" parameter to "center". To check the result, you can right-click the slice step and choose "plot all". The image on the right shows the selected region. (The automatic training and testing scripts take care of these settings, but you need to adjust them if you use the architecture manually.)
To start the architecture, hit the big play button in the toolbar. It should now be up and running, and no errors should appear in the status bar at the bottom. Instructions on how to use the architecture are given below. As a first step, you should familiarize yourself with how cedar is used. Then train the architecture so that it can actually recognize some objects. Finally, you can manually load images and see whether they are recognized, or evaluate the performance by running a script.
Inspecting the architecture and finding steps
Architectures in cedar are made of steps, the little elements you see in the large lower-left panel. If you left-click one of them, its properties are shown on the right-hand side. At the top of that panel, a gray bar shows the class of the step, e.g., cedar.dyn.NeuralField for a neural field. Below it, you can see all the parameter values chosen for the step.
Lines between the elements are data connections; they always originate from the right side of a step and terminate on the left side of a step.
You can also inspect the current state of a step by right-clicking it and selecting "plot all" (or "field plot" for dynamic neural fields).
It can be hard to find steps in an unfamiliar architecture. When a specific step is referenced in the instructions below, you can find it by pressing ctrl+f and then typing the name.
Train the architecture
This can be done automatically. Load and start the architecture. In cedar's menu, select Scripts > Manage. In the dialog that opens, click on the "Train Architecture" script. Under properties, click the ">" button next to the "root directory" entry and navigate to the 'images' folder in the tabletop database. This will let the script know where the images are located.
To start training, press the play button next to the script.
When it is finished, you may want to save the trained weights; to do so, click "File" > "Save serializable data...". You can later load these weights with "File" > "Load serializable data...".
Oscillations
If the system on which you run the architecture is very slow (e.g., a VM), you may need to adjust the simulation speed. First, to see whether oscillations are happening, look at one of the layer-one fields ("shift layer 1" is the one most likely to become unstable). Right-click it and select "field plot". If you notice the activation oscillating, lower the simulation speed by adjusting the slider in cedar's toolbar (right under the application menu).
Inspecting the system
An easy way to see what the architecture is doing is the architecture widget. To open it, select Windows > Architecture widgets > architecture in cedar's top-level menu. The widget that opens shows three columns. The left column shows the input to the recognition system; below that, the input masked by the top-down prediction, followed by the current label ranking, with the top entry being the recognized label. The other two columns show the first and second layers of the pose and identity estimates.
To see what the system recognized, plot the step "Label String" (right-click, plot all).
The label nodes can be inspected by looking at the "label layer 1" and "label layer 2" steps. Note that these are also implemented with neural fields; however, this is just for convenience: they are set up so that each sampling point behaves like an individual node, which is also reflected in the plot of the "field" activity.
The orientation estimate can be inspected by looking at the "rotation layer 1" and "rotation layer 2" steps. Note that the plots don't show proper labels on the x-axis.
Shift is represented by the "shift layer 1" and "shift layer 2" steps.
The masked input can be seen by plotting the "masked input" step.
The learned weights can be inspected from the "color memory", "Y edge memory", "Cr edge memory", "Cb edge memory" and "shape memory" steps.
The system can be reset by pressing ctrl+b and then briefly enabling the "reset" boost (check the checkbox next to it).
Evaluating the system
Like training, this can be done automatically. In cedar's menu, click "Scripts" > "Manage...". Select the "Test Performance" script. Click the ">" button next to "root directory" and browse to the images folder of the tabletop database. To start evaluation, click the play button next to the script.
This will create a log file in the folder from which you started cedar. The log file contains records of the activation of the fields and label nodes at the end of each trial. To calculate statistics, copy the log file to the python_scripts/results folder in the ObjectRecognitionEvaluationScripts plugin folder you checked out earlier. In a terminal, run
../performance_statistics.py [your result file]
This prints some statistics to the terminal and writes several statistics files into the results folder.
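For example, assuming the plugins were checked out in your home directory, cedar was started from ~/cedar.release, and your log file is named results.log (the file name is hypothetical; adjust it to your setup):
cd ~/ObjectRecognitionEvaluationScripts/python_scripts/results
cp ~/cedar.release/results.log .
../performance_statistics.py results.log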