PiVR has been developed by David Tadres and Matthieu Louis (Louis Lab).

10. PiVR software documentation

INCOMPLETE, WORK IN PROGRESS. You can find the source code here: https://gitlab.com/LouisLab/pivr

10.1. PiVR GUI source code

This page contains the classes used to construct the graphical user interface (GUI).

class start_GUI.PiVR(*args, **kwargs)[source]

This class initializes the GUI the user will see. There are several different frames (e.g. “Tracking” vs “Virtual Arena”) that are all created differently.

To do this, the “PiVR” class calls a number of other classes. The following three “helper” classes are important:

  1. “CommonVariables” contains variables that are shared between frames,

  2. “SubFrames” helps with the creation of the different frames, and finally

  3. “CommonFunction” contains functions that are called in different frames.

The actual frames (e.g. “TrackingFrame”) are then created by “constructor” classes which call different components of the three classes described above.

The “helper” classes are necessary as they can save variables and functions between different frames (similar to global variables). For example, if the user selects a particular folder to save all the experimental data, the “CommonVariables” class retains this folder when the user then switches from, for example, the “Tracking” frame to the “Virtual Arena” frame.

access_subframes(page_name)[source]

The function above returns the instance of the currently active (=in foreground) window

call_start_experiment_function(page_name)[source]

The function above will be called by the button that says ‘start experiment’. It will look in the currently active frame for a function called ‘start_experiment_function’.

show_frame(page_name)[source]

The function above is called when the user clicks on a different frame. It takes the selected frame and raises it to the top.

In addition, it saves the current page name, which is needed as a reference to the currently active frame when calling functions that apply to all frames, such as starting an experiment.
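
The frame switching follows the standard tkinter pattern of stacking all frames in the same grid cell and raising the selected one. A minimal, hypothetical sketch of that pattern (class and frame names are illustrative, not the actual PiVR code):

    import tkinter as tk

    class ExampleApp(tk.Tk):
        """Minimal multi-frame GUI: all frames share one grid cell and
        show_frame() raises the selected one to the foreground."""
        def __init__(self):
            super().__init__()
            container = tk.Frame(self)
            container.pack(fill="both", expand=True)
            container.grid_rowconfigure(0, weight=1)
            container.grid_columnconfigure(0, weight=1)

            self.frames = {}
            self.active_page = None
            for name in ("Tracking", "VirtualArena"):
                frame = tk.Frame(container)
                tk.Label(frame, text=name).pack()
                frame.grid(row=0, column=0, sticky="nsew")
                self.frames[name] = frame

        def show_frame(self, page_name):
            # remember the active page so other callbacks (e.g. a
            # 'start experiment' button) know which frame to address
            self.active_page = page_name
            self.frames[page_name].tkraise()

    if __name__ == "__main__":
        app = ExampleApp()
        app.show_frame("Tracking")
        app.mainloop()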

class start_GUI.DynamicVirtualRealityFrame(parent, controller, camera_class=None)[source]

The constructor class above is used to create the “Dynamic VR Arena” frame.

start_experiment_function()[source]

Each constructor class used to start an experiment has a function with this name; this is the one in the “DynamicVirtualRealityFrame”.

It checks for:

  1. the correct camera resolution,

  2. whether the user has specified the pixels per mm, and

  3. whether a copy of the dynamic virtual reality fits into RAM.

If one of these tests fails, the user gets an error message and the experiment will not start.

If the experiment does start, control_file.ControlTracking is called, which then handles the detection and tracking of the animal.

class start_GUI.TrackingFrame(parent, controller, camera_class=None)[source]

The constructor class used to create the “Tracking” frame is different from the “Dynamic Virtual Reality” frame.

start_experiment_function()[source]

Each constructor class used to start an experiment has a function with this name; this is the one in the “TrackingFrame”.

It checks for:

  1. the correct camera resolution,

  2. whether the user has specified the pixels per mm.

If one of these tests fails, the user gets an error message and the experiment will not start.

If the experiment does start, control_file.ControlTracking is called, which then handles the detection and tracking of the animal.

10.2. PiVR Tracking source code

10.2.1. Detection

class pre_experiment.FindAnimal(boxsize, signal, debug_mode, stringency_size=0.01, stringency_centroid=0.01, cam=None, resolution=[640, 480], recording_framerate=2, display_framerate=2, model_organism=None, offline_analysis=False, pixel_per_mm=None, organisms_and_heuristics=None, post_hoc_tracking=False, animal_detection_mode='Mode 1', simulated_online_analysis=False, datetime='not defined')[source]

Before the algorithm can start tracking it first needs to identify the animal and create a background image that can be used for the rest of the experiment. Three “Animal Detection Modes” are available. See here for a high-level description of the advantages and limitations of each Mode.

Mode 1:

If the background is not evenly illuminated, or if the animal moves fast and often goes to the edge, Mode 1 is a safe and easy choice.

  1. Identify the region of the picture where the animal is located by detecting movement. For this find_roi_mode_one_and_three() is called.

  2. Reconstruct the background image from the mean image computed while the animal was being identified. For this define_animal_mode_one() is called.

Mode 2:

Mode 2 can be used if the animal can be added to the arena without changing anything in the field of view of the camera while doing so.

  1. Take a picture before the animal is placed and a second picture after the animal is placed. This approach was previously used in the SOS tracker (Gomez-Marin et al., 2012) and is called animal detection Mode 2. It only works if the only object that differs between the two images is the animal that one wants to track. Slight changes between trials, such as placing a lid on the arena, can result in detectable changes in the image, which often breaks this approach! For this find_roi_mode_two() is called.

  2. Computationally, identifying the animal in two images, one with and one without the animal, is very simple: just subtract the two images, and whatever stands out must be the object the user wants to track (see the sketch below). For this define_animal_mode_two() is called.
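
A minimal sketch of this subtraction-based detection, assuming 8-bit greyscale frames and a mean + sigma*std threshold on the difference image (function name and threshold rule are illustrative, not the exact PiVR implementation):

    import numpy as np

    def detect_animal_by_subtraction(before, after, sigma=3.0):
        """Return a binary mask of pixels that changed between the two
        frames by more than mean + sigma * std of the difference image."""
        diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
        threshold = diff.mean() + sigma * diff.std()
        return diff > threshold

    # usage with two synthetic 8-bit greyscale frames
    before = np.full((480, 640), 120, dtype=np.uint8)
    after = before.copy()
    after[200:220, 300:330] = 40          # a dark "animal" appears
    mask = detect_animal_by_subtraction(before, after)
    print(mask.sum(), "pixels flagged as the animal")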

Mode 3:

This method is a bit more complicated than Mode 1 and Mode 2. It attempts to combine the ease of use of Mode 1 with the perfectly “clean” background image provided by Mode 2.

This method only works well if several conditions are met. We only used this method with the slow fruit fly larva. See here for a detailed high-level description.

  1. Identify the region of the picture where the animal is located by detecting movement. This is in fact identical to Mode 1, as the same function is called: find_roi_mode_one_and_three()

  2. Then the animal must be defined using a binary image. This is a critical step and necessitates that the animal clearly stands out compared to the background. The function is: define_animal_mode_three().

  3. To reconstruct the background (for the experiment) the animal must leave the original position. The relevant function: animal_left_mode_three()

  4. Then the background is reconstructed: the region that contained the animal at the start is cut out of the background image and replaced with the same region taken after the animal has moved away from its initial position, resulting in a clean background image for trajectory display. The function doing this is background_reconstruction_mode_three()

  5. For the tracking algorithm to start it needs to know where the animal went while all of the above was going on. animal_after_box_mode_three()

Finally, this class also holds an offline animal detection function: find_roi_post_hoc(). This is used when running Todo-Link Post-Hoc Single Animal Tracking, for example for debugging or to define a new model organism the user wants to track in the future.

animal_after_box_mode_three()[source]

After making sure that the animal left the original position, we have to find it again.

animal_left_mode_three()[source]

This function compares the first image the algorithm has taken with the current image. It always subtracts the two binary images (thresholded using the mean + 3*STD). The idea is that as soon as the animal has left the original position, only the original animal remains in the subtracted image. In other words, the closer this subtracted image is to the first binary image, the more of the animal has already left the initial bounding box.

background_reconstruction_mode_three()[source]

After the animal left the original position, take another picture. Use the bounding box coordinates defined for the first animal to cut out that area of the new image. Then paste it into the original position where the animal was.

This leads to an almost perfect background image.

cancel_animal_detection_func()[source]

When the user presses the cancel button, this function sets a Boolean value to ‘True’, which cancels the detection of the animal at the next valid step.

define_animal_mode_one()[source]

This function is called when the user uses Animal Detection Mode #1

This function does not do local thresholding of the first frame. Instead it just reconstructs the background image from the mean image it has constructed while identifying the animal. This will almost always leave part of the animal in the background image. Usually this is not a problem as the whole animal is larger than just a part of it.

define_animal_mode_three()[source]

Using the information about which region to look in, the larva is identified using local thresholding (which could not be done before on the whole image).

This only works if the animal is somewhere where the background illumination is relatively even and the animal stands out clearly relative to its immediate background!

define_animal_mode_two()[source]

With the information about where to look, the animal is identified using local thresholding (which could not be done before on the whole image).

This is only saved if the animal is somewhere where the background illumination is relatively even and the animal stands out clearly relative to its immediate background!

define_animal_post_hoc()[source]

With the information where to look, the animal can be identified using local thresholding.

error_message_pre_exp_func(error_stack)[source]

Whenever something goes wrong during animal detection this function is called. It writes the traceback of the error into a file called _ERROR_animal_detection.txt in the experimental folder.

find_roi_mode_one_and_three()[source]

Identification of the original region of interest (ROI):

This function identifies a region in the image that contains pixels that change over time. The assumption is that the only moving object in the field of view is the animal the user is interested in.

To achieve this, the camera provides a stream of images. The function keeps a running mean of the images taken so far and, starting from the second frame, subtracts the newest frame from that mean. In the resulting difference image, anything that moves clearly stands out from the background. A region of interest is then drawn around those pixels to be used later on (see the sketch below).
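
A minimal sketch of such movement-based ROI detection, assuming a short stack of greyscale frames and a mean + sigma*std threshold on the difference image (names and the padding value are illustrative, not the actual PiVR code):

    import numpy as np

    def find_moving_roi(frames, sigma=3.0, pad=20):
        """Locate a bounding box around pixels that change over a short
        image sequence (shape: time, y, x) by comparing each new frame
        to the running mean of the preceding frames."""
        frames = frames.astype(np.float32)
        running_mean = frames[0].copy()
        changed = np.zeros(frames.shape[1:], dtype=bool)
        for i in range(1, len(frames)):
            diff = np.abs(frames[i] - running_mean)
            changed |= diff > diff.mean() + sigma * diff.std()
            # update the running mean with the newest frame
            running_mean = (running_mean * i + frames[i]) / (i + 1)
        ys, xs = np.nonzero(changed)
        if ys.size == 0:
            return None  # nothing moved yet
        return (max(ys.min() - pad, 0), ys.max() + pad,
                max(xs.min() - pad, 0), xs.max() + pad)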

find_roi_mode_two()[source]

Sometimes the user cannot use the automatic animal detection Mode 3 because the background is not completely homogeneous. If the user still needs a clean background image without any trace of the animal, this Mode can be used. It is similar to the approach used in Gomez-Marin et al., 2011.

  1. Take an image before placing the animal

  2. Place the animal

  3. Take another picture and subtract from the first

find_roi_post_hoc()[source]

Identifies animal in post-hoc analysis.

Normally used when the user defines a new animal. The workflow consists of the user taking a video and then running the TODO-Link Post-Hoc Single Animal Analysis. This function identifies the animal before the actual tracking starts.

It first reads all the images (the user should specify which file format the images are in) and zips them up so that the folder is easier to copy around. It also creates a numpy array with all the images for this script to use.

It then takes the mean of all the images to create the background image.

It then smooths the background image using a Gaussian filter with sigma 1.

It then loops over as many images as necessary (a sketch of the blob filtering follows this list):

  1. Subtract the mean (background) image from the current image.

  2. Calculate the threshold by defining everything below or above (depending on TODO link signal) 2*std from the mean as signal.

  3. Use the regionprops function to measure the properties of the labelled image regions.

  4. Depending on the number of labelled regions, different rules apply:

    1. If there is more than one region, cycle through them and test whether they fulfil the minimal requirements to count as potential animals: Filled Area Min and Max, Eccentricity Min and Max, and major over minor axis Min and Max.

      1. If one is found, that is the animal.

      2. Else, go to the next image.

    2. If there is only one region, that is the animal; break out of the loop.

    3. If there is no blob, go to the next image.
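
A rough sketch of this per-frame filtering with skimage.measure.regionprops; the threshold rule and the parameter ranges here are placeholder assumptions (the real ranges come from the organism settings file):

    import numpy as np
    from skimage.measure import label, regionprops

    def find_animal_like_blob(image, background, sigma=2.0,
                              filled_area=(50, 2000),
                              eccentricity=(0.0, 1.0),
                              axis_ratio=(1.0, 10.0)):
        """Threshold a background-subtracted frame and return the first
        blob whose shape descriptors fall inside the given ranges."""
        # a dark animal on a bright background stands out in background - image
        diff = background.astype(np.float32) - image.astype(np.float32)
        binary = diff > diff.mean() + sigma * diff.std()
        for blob in regionprops(label(binary)):
            ratio = (blob.major_axis_length /
                     max(blob.minor_axis_length, 1e-6))
            if (filled_area[0] <= blob.filled_area <= filled_area[1]
                    and eccentricity[0] <= blob.eccentricity <= eccentricity[1]
                    and axis_ratio[0] <= ratio <= axis_ratio[1]):
                return blob
        return None  # no animal-like blob in this frame; try the next one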

10.2.2. Tracking

class control_file.ControlTracking(boxsize=20, signal=None, cam=None, base_path=None, genotype=None, recording_framerate=30, resolution=[640, 480], recordingtime=20, pixel_per_mm=None, model_organism='Not in list', display_framerate=None, vr_arena=None, pwm_object=None, placed_animal=None, vr_arena_name=None, offline_analysis=False, time_dependent_stim_file=None, time_dependent_stim_file_name=None, vr_arena_multidimensional=False, high_power_led_bool=False, minimal_speed_for_moving=0.25, observation_resize_variable=1, organisms_and_heuristics=None, post_hoc_tracking=False, debug_mode='OFF', animal_detection_mode='Mode 1', output_channel_one=[], output_channel_two=[], output_channel_three=[], output_channel_four=[], simulated_online_analysis=False, overlay_bool=False, controller=None, background_channel=[], background_2_channel=[], background_dutycycle=0, background_2_dutycycle=0, vr_update_rate=1, pwm_range=100, adjust_intensity=100, vr_stim_location='NA', version_info=None, save_centroids_npy=False, save_heads_npy=False, save_tails_npy=False, save_midpoints_npy=False, save_bbox_npy=False, save_stim_npy=False, save_thresh_npy=False, save_skeleton_npy=False, undistort_dst=None, undistort_mtx=None, newcameramtx=None)[source]

Whenever the tracking algorithm is called, this class first runs the detection algorithm, prepares the virtual arena if necessary, and then calls the tracking algorithm.

adjust_arena()[source]

This function translates and rotates the virtual reality if necessary. It also adjusts the desired stimulus intensity.

For both translation and rotation the scipy.ndimage.affine_transform function is used: https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.ndimage.affine_transform.html

For the translation, the following transformation matrix is used, with

\({Y\zeta}\) and \({X\zeta}\) being the difference between the animal position and the desired animal position:

\[\begin{split}\begin{bmatrix} Y' \\ X' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & Y \zeta \\ 0 & 1 & X \zeta \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} Y \\ X \\ 1 \end{bmatrix}\end{split}\]

To translate and rotate the arena, the following is done:

  1. Take the position of the animal in the real world and the position of the animal in the virtual reality. Translate the arena by the difference, effectively using the placed animal coordinates as the origin around which the arena is rotated.

  2. Then translate the arena to the origin of the array at [0,0]

  3. Rotate the arena by the difference in real movement angle and the desired angle

  4. Finally, translate the arena back to the desired position, defined by both the real position of the animal and the desired position.

This is implemented by the following linear transformation, where:

\({X \zeta}\) and \({Y \zeta}\) are the differences between the animal position and the desired animal position, \({X \eta}\) and \({Y \eta}\) are the desired animal position, and \({\theta}\) is the difference between the real movement angle and the desired angle.

\[\begin{split}\begin{bmatrix} Y' \\ X' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & Y \zeta \\ 0 & 1 & X \zeta \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & Y \eta \\ 0 & 1 & X \eta \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & -Y \eta \\ 0 & 1 & -X \eta \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} Y \\ X \\ 1 \end{bmatrix}\end{split}\]
high_power_LED_arena_inversion_func()[source]

When the high powered PiVR version is used, the software has to handle the unfortunate fact that the LED controller of the high powered PiVR version is completely ON when the GPIO is OFF and vice versa. This of course is the opposite of what happens in the normal version.

Internally, the software must therefore invert the arena if that’s the case. This function takes care of this.

The end user does not need to know this. From their perspective they are able to use the same input arena they would use for the standard version while getting the expected result.
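
Conceptually, the inversion amounts to mirroring the requested intensity within the PWM range; a tiny, hypothetical sketch (the names and the pwm_range default are illustrative):

    import numpy as np

    def invert_for_high_power_led(arena, pwm_range=100):
        """Flip stimulus values so that a requested intensity of X becomes
        pwm_range - X, compensating for the inverted LED driver logic."""
        return pwm_range - np.asarray(arena)

    print(invert_for_high_power_led(np.array([0, 25, 100])))  # -> [100 75 0]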

high_power_time_dependent_stim_func()[source]

Analogous function to high_power_LED_arena_inversion_func().

When the high powered PiVR version is used, the software has to handle the unfortunate fact that the LED controller of the high powered PiVR version is completely ON when the GPIO is OFF and vice versa. This of course is the opposite of what happens in the normal version.

Internally, the software must therefore invert the stimulus if that’s the case. This function takes care of this.

The end user does not need to know this. From their perspective they are able to use the same input arena they would use for the standard version while getting the expected result.

show_dynamic_vr_arena_update_error()[source]

This function warns the user that an incompatible frame rate / dynamic arena update frequency has been chosen.

For example, if the frame rate is 30 frames per second and the update rate is 10 Hz, the arena will be updated every 3rd frame (30/10 = 3). This is of course possible. If the frame rate is 30 frames per second and the update rate is set to 20 Hz, the arena would have to be updated every 1.5th frame (30/20 = 1.5). This is not possible. What happens instead is that the arena is updated after one frame for one update and after two frames for the next, which averages to 1.5 but is not regular.

As this can easily lead to bad data being produced without the user knowing (no explicit error will be thrown), this function informs the user of the mistake so that they can either change the frame rate to 40 frames per second to keep the 20 Hz update rate or change the update rate.

start_experiment()[source]

This function is called at the end of the initialization of the control_file.ControlTracking class.

It creates the folder where all the experimental data will be saved, using the current timestamp.

It then saves the “experiment_settings.json” file, which contains a lot of important information about the current experiment.

Then it starts the detection algorithm in pre_experiment.FindAnimal.

If the animal has been detected, the arena will be translated and rotated if requested using the adjust_arena() function.

Then the tracking algorithm is called: fast_tracking.FastTrackingControl

class fast_tracking.FastTrackingControl(genotype='Unknown', recording_framerate=2, display_framerate=None, resolution=None, recordingtime=None, initial_data=None, boxsize=20, signal=None, frames_to_define_orientation=5, debug_mode=None, debug_mode_resize=1, repair_ht_swaps=True, cam=None, dir=None, pixel_per_mm=None, model_organism='Not in List', vr_arena=None, pwm_object=None, time_dependent_file=None, high_power_led_bool=False, offline_analysis=False, minimal_speed_for_moving=0.5, organisms_and_heuristics=None, post_hoc_tracking=False, datetime=None, output_channel_one=[], output_channel_two=[], output_channel_three=[], output_channel_four=[], simulated_online_analysis=False, overlay_bool=False, controller=None, time_delay_due_to_animal_detection=0, vr_update_rate=1, pwm_range=100, video_filename='test.yuv', pts_filename='pts_test.txt', pi_time_filename='system_time_test.txt', vr_stim_location='NA', save_centroids_npy=False, save_heads_npy=False, save_tails_npy=False, save_midpoints_npy=False, save_bbox_npy=False, save_stim_npy=False, save_thresh_npy=False, save_skeleton_npy=False, undistort_dst=None, undistort_mtx=None, newcameramtx=None)[source]

This class controls the tracking algorithm.

It was necessary to create a second class as the ‘record_video’ function of picamera requires its own class to deliver the images.

This class could be cleaned up further by removing a few extra variables; however, the current implementation performs well.

after_tracking()[source]

When live tracking is done, the GPIOs must be turned off.

Then the data that was just collected is saved by calling ‘Save’ in tracking_help_classes.

error_message_func(error_stack)[source]

This function is called if the recording can not continue until the end as defined by frame rate * recording_length.

It will write the error into a file called “DATE-ERROR.txt” and will place it in the experiment folder along with the other files of the given trial.

offline_analysis_func()[source]

This function is called when the user selects either the “Tools->Analysis->Single Animal tracking” or the “Debug->Simulate Online Tracking” option. It calls the same animal tracking function as the live version; the only difference is the way the images are provided.

While in the live version the images are streamed from the camera, in the simulated online version the images are provided as a numpy array.

on_closing()[source]

Function to use when the user clicks on the X to close the window.

This should never be called in a live experiment as there is simply no option to click to close a window.

It asks whether the user wants to quit the experiment and saves the experiment so far.

run_experiment()[source]

This function is called during live tracking on the PiVR.

Essentially, it starts recording a video but provides a custom output. See here.

The video records frames in the YUV format. See here for an explanation of that particular format.

YUV was chosen as it encodes a greyscale version of the image (the Y’ component) at full resolution (e.g. 307’200 bytes for a 640x480 image), while the U and V components, which essentially encode the color of the image, only have a quarter of the resolution (e.g. 76’800 bytes each for a 640x480 image). As the color is discarded anyway, this allows a more efficient use of the Raspberry Pi’s buffer compared to using, for example, RGB.

class fast_tracking.FastTrackingVidAlg(genotype='Unknown', recording_framerate=2, display_framerate=None, resolution=None, recordingtime=None, initial_data=None, boxsize=20, signal=None, frames_to_define_orientation=5, debug_mode=None, debug_mode_resize=1, repair_ht_swaps=True, cam=None, dir=None, pixel_per_mm=None, model_organism='Not in List', vr_arena=None, pwm_object=None, time_dependent_file=None, high_power_led_bool=False, offline_analysis=False, minimal_speed_for_moving=0.5, organisms_and_heuristics=None, post_hoc_tracking=False, datetime=None, output_channel_one=[], output_channel_two=[], output_channel_three=[], output_channel_four=[], simulated_online_analysis=False, overlay_bool=False, controller=None, time_delay_due_to_animal_detection=0, vr_update_rate=1, pwm_range=40000, video_filename='test.yuv', real_time=None, i_tracking=None, total_frame_number=10, search_boxes=None, image_raw=None, image_thresh=None, image_skel=None, local_threshold=None, bounding_boxes=None, centroids=None, midpoints=None, length_skeleton=None, tails=None, heads=None, endpoints=None, ht_swap=None, stimulation=None, heuristic_parameters=None, time_remaining_label=None, child_canvas_top_left=None, child_canvas_top_middle=None, child_canvas_top_right=None, child=None, loop_time_measurement=None, canvas_width=None, canvas_height=None, below_detected=None, pause_debug_var=None, vr_stim_location='NA', save_centroids_npy=False, save_heads_npy=False, save_tails_npy=False, save_midpoints_npy=False, save_bbox_npy=False, save_stim_npy=False, save_thresh_npy=False, save_skeleton_npy=False, undistort_dst=None, undistort_mtx=None, newcameramtx=None)[source]

This class takes either a camera object (so far only from the RPi camera) or images in a 3D numpy array (y, x and time). When run on the RPi it is assumed that a live experiment is running. The camera frame rate will be set to the frame rate the user wants (if the user asks for a higher frame rate than the camera can deliver, the program will throw an error directly in the GUI). The camera will then deliver each image into an in-memory stream. The images are then formatted to be 2D with the right resolution.

(For future improvement: To increase speed one could only take the bytes that are actually needed (we do have the search_box)).

animal_tracking()[source]

Main function in single animal tracking. After detection of the animal in pre_experiment, this function is called on each frame to:

  1. Identify the animal,

  2. Define where to look for the animal in the next frame

  3. Define head, tail, centroid and midpoint

  4. If requested, present a stimulus by changing the dutycycle on the requested GPIO

Below is the list in a bit more detail (a sketch follows the list):

  1. Ensure that the search box is not outside the image.

  2. Subtract the current search box image from the background search box.

  3. Calculate the threshold to binarize the subtracted image.

  4. Use the regionprops function of the scikit-image library to find blobs http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops

  5. Select the largest blob as the animal

  6. Define the NEXT Search Box

  7. Save the current bounding box, centroid position and the raw image.

  8. Skeletonize the binary image and find the endpoints.

  9. By comparing the endpoint positions to the previous tail position, assign the closer endpoint as the tail.

  10. If virtual reality experiment: use the head position to define the position in virtual space and update the stimulus on Channel 1 accordingly by changing the dutycycle of the GPIO.

  11. If time dependent stimulus: update the dutycycle for all the defined channels.
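
A condensed sketch of one such tracking step using scikit-image, with the threshold rule, the endpoint detection and the coordinate handling simplified for illustration (all names are placeholders, not the PiVR functions):

    import numpy as np
    from skimage.measure import label, regionprops
    from skimage.morphology import skeletonize

    def track_one_frame(frame, background, search_box, prev_tail, sigma=3.0):
        """One tracking step inside a search box: segment the animal,
        pick the largest blob, skeletonize it and assign head/tail."""
        y0, y1, x0, x1 = search_box
        diff = (background[y0:y1, x0:x1].astype(np.float32)
                - frame[y0:y1, x0:x1].astype(np.float32))
        binary = diff > diff.mean() + sigma * diff.std()

        blobs = regionprops(label(binary))
        if not blobs:
            return None
        animal = max(blobs, key=lambda b: b.filled_area)

        # skeletonize only the pixels belonging to the largest blob
        skeleton = skeletonize(animal.image)
        # endpoints are skeleton pixels with exactly one skeleton neighbour
        ys, xs = np.nonzero(skeleton)
        endpoints = [(y, x) for y, x in zip(ys, xs)
                     if skeleton[max(y-1, 0):y+2, max(x-1, 0):x+2].sum() == 2]
        if len(endpoints) < 2:
            return animal.centroid, None, None

        # coordinates here are relative to the blob's bounding box for brevity;
        # the endpoint closest to the previous tail keeps the 'tail' label
        def dist(p, q):
            return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        tail = min(endpoints, key=lambda p: dist(p, prev_tail))
        head = max(endpoints, key=lambda p: dist(p, prev_tail))
        return animal.centroid, head, tail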

close()[source]

Unsure if needed. Test if can do without

error_message_func(error_stack)[source]

Lets the user know that something went wrong.

flush()[source]

Unsure if needed. Test if can do without

update_debug()[source]

This will only work in post-hoc analysis, NOT on the Raspberry Pi. In principle much more information could be displayed, specifically: 1) filled area, 2) eccentricity, 3) major over minor axis. This might be good for visualization, but these parameters are saved anyway if the user wants them.

update_pwm_dutycycle_time_dependent(previous_channel_value, output_channel_list, output_channel_name, stim_index)[source]

A convenience function for the time dependent stimulation. Takes the list with the GPIOs for a given channel and, in a for loop, updates the GPIOs according to that channel. In the first iteration of the loop it just sets the PWM dutycycle to whatever dutycycle is specified. As this function is called as ‘previous_channel_x_value = update_pwm_dutycycle...’, it then updates previous_channel_x_value for the next iteration.

Parameters:

  previous_channel_value: as the GPIO dutycycle should only be updated when the value changes, this holds the previous value

  output_channel_list: list of GPIOs for a given channel, e.g. GPIO 17 would be [[17, 1250]] (1250 is the frequency, not used here)

  output_channel_name: the channel as a string, e.g. ‘Channel 1’

write(buf)[source]

This function is called by the custom output of the picamera video recorder. It (1) prepares the image for the tracking algorithm and (2) calls the tracking function: animal_tracking().

Image preparation

  1. Receive the buffer object prepared by the GPU, which contains the YUV image, and put it into a numpy array in uint8 number space.

  2. Shorten the array to the Y values. For example, for 640x480px images the array is shortened to 307’200 bytes (from 460’800 bytes).

  3. The image, which so far has just been a 1D stream of uint8 values, is then reorganized into a 2D image.

  4. Save the (GPU -> real time) timestamp of the current frame.

  5. Call the animal_tracking() function.
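
A minimal sketch of this Y-plane extraction, assuming an unpadded 640x480 YUV420 buffer (picamera may pad other resolutions to multiples of 32x16, which this sketch ignores; names are illustrative):

    import numpy as np

    def yuv_buffer_to_grey(buf, resolution=(640, 480)):
        """Keep only the full-resolution Y (luma) plane of a YUV420 buffer
        and reshape it into a 2D greyscale image."""
        width, height = resolution
        frame = np.frombuffer(buf, dtype=np.uint8)
        luma = frame[:width * height]          # drop the U and V planes
        return luma.reshape(height, width)

    # usage with a fake 640x480 YUV420 buffer (Y + U/4 + V/4 bytes)
    fake_buf = bytes(640 * 480 + 2 * (640 * 480 // 4))
    image = yuv_buffer_to_grey(fake_buf)
    print(image.shape)  # (480, 640)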

10.2.3. Detection and Tracking Helpers

class tracking_help_classes.FindROI(regionproperties, boxsize, size_factor, image)[source]

This class is used to define the region of interest from a given regionproperties class.

It also makes sure that the box is never outside of the frame.

class tracking_help_classes.MeanThresh(image, signal, sigma, roi=None, invert=False)[source]

This class takes an image and calculates the mean intensity and standard deviation to derive a threshold which can be used to segment the image.

If no roi is given, the whole image is taken into account.

roi must be a roi class object.

calculate_threshold()[source]

The threshold is calculated by taking the mean of the pixel intensities in the ROI and, depending on the animal signal, subtracting (white animal) or adding (dark animal) sigma (provided when the class is called) times the standard deviation of the pixel intensities in the ROI.
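
A minimal sketch of such a mean-and-standard-deviation threshold, with the sign convention for white versus dark animals written out (function and argument names are illustrative):

    import numpy as np

    def mean_threshold(image, sigma, signal="dark", roi=None):
        """Threshold from the mean and standard deviation of the pixel
        intensities, optionally restricted to a region of interest
        (roi given as a tuple of slices)."""
        region = image if roi is None else image[roi]
        mean, std = region.mean(), region.std()
        # dark animal: threshold above the mean; white animal: below it
        return mean + sigma * std if signal == "dark" else mean - sigma * std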

class tracking_help_classes.CallImageROI(image, roi, boxsize=None, sliced_input_image=None)[source]

This class consolidates the different calls to provide the ROI of the animal in a single class.

Different frames of references are being used in the detection and tracking algorithm: The absolute pixel coordinates and the search_box.

In different parts of the code different frames of references are used to call the image ROI:

  1. If only the image and the roi are given, the roi coordinates are given in the absolute frame of reference ( the 640x480 pixels of the image).

  2. If the boxsize parameter is given, the input image is not the full image. Instead, it is only the image of the search box, defined by the boxsize parameter! This is for example called in pre_experiment.FindAnimal.animal_after_box_mode_three()

  3. If the sliced_input_image parameter is given, the input image is not the full image. Instead, it is only the image of the search box. In fast_tracking.ReallyFastTracking.animal_tracking() the algorithm only looks for the animal in a region defined by search_boxes. When providing the sliced_input_image this is taken into account.

In order to keep the code as tidy as possible, this class helps with calling the ROI using an roi object.

call_image()[source]

Depending on the input (full image, only search box) the ROI of the image is extracted.

class tracking_help_classes.CallBoundingBox(image, bounding_box)[source]

This class is heavily used in the fast_tracking.ReallyFastTracking.animal_tracking() function!

It takes the full image and search_boxes coordinates and returns only the search_box (or ROI) of the image.

class tracking_help_classes.DescribeLargestObject(regioproperties, roi, boxsize=None, animal_like=False, filled_area_min=None, filled_area_max=None, eccentricity_min=None, eccentricity_max=None, major_over_minor_axis_min=None, major_over_minor_axis_max=None)[source]

This class takes a skimage regionprops object ( https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops)

These regionprops objects contain a list of labelled image regions.

If “animal_like” is False, the largest labelled image region (defined by filled_area) is defined as the animal.

If “animal_like” is True, each labelled image region is checked against the following parameters taken from “available_organisms.json”:

  1. A certain range of filled area

  2. A certain ratio of the long_axis over the short_axis

  3. A certain range of eccentricity

This class analyzes a binary image, defines the largest object and saves its bounding box, its major and minor axis and its centroid coordinates.

animal_like_object()[source]

This function tests each labelled image region for “animal likeness”.

The largest of these labelled image regions is defined as the animal

largest_object()[source]

This function is just defining the largest labelled image region as the animal.

class tracking_help_classes.DrawBoundingBox(image, roi, value)[source]

Used only during debug mode when user can see the tracking algorithm in action.

Indicates the ROI (search box) where the algorithm has detected the animal.

draw_box()[source]

Draws the bounding box directly into the numpy array

class tracking_help_classes.Save(heads, tails, centroids, image_skel, image_raw, image_thresh, background, local_threshold, real_time, pixel_per_mm, bounding_boxes, midpoints, stimulation=None, arena=None, heuristic_data=None, datetime=None, time_delay_due_to_animal_detection=0, loop_time=None, recording_time=None, framerate=None, time_dep_stim_file=None, high_power_led_bool=False, pwm_range=100, save_centroids_npy=False, save_heads_npy=False, save_tails_npy=False, save_midpoints_npy=False, save_bbox_npy=False, save_stim_npy=False, save_thresh_npy=False, save_skeleton_npy=False, undistort_dst=None, undistort_mtx=None, newcameramtx=None)[source]

Used to save experimental data after the experiment has concluded - both when the experiment finishes as expected and when an error occurs and the experiment crashes abruptly.

10.2.4. DefineOutputChannels

class output_channels.DefineOutputChannels(path, controller)[source]

Let user define which output channel (GPIO18, GPIO17 etc…) corresponds to which Channel (1,2 etc..)

The Raspberry Pi has a number of addressable GPIOs. PiVR currently uses 4 of them: GPIO18, GPIO17, GPIO27 and GPIO13. The software has a total of 6 output channels: Background, Background 2, Channel 1, Channel 2, Channel 3 and Channel 4. The user therefore has to decide which GPIO# is addressed by which channel.

Warning

Only GPIO18 and GPIO13 are capable of hardware PWM. The other GPIOs are limited to a maximum frequency of 40’000Hz

Warning

The transistor on the PCB has a finite rise and fall time. In theory the transistor can be turned on and off every microsecond (1e-6 seconds), which translates to 1 million Hz (1e6). However, this does not allow the use of PWM to control light intensity at such frequencies. For example, if a dutycycle of 10% is chosen, it leads to the transistor being on for only 10% of 1 us, which results in unspecified behavior. We usually use 40’000Hz even on the high speed GPIOs.

Background and Background 2 are intended to be used as constant light sources during a recording. Typically one of the two will be used to control illumination for the camera to record in optimal light conditions. As PiVR normally uses infrared light to illuminate the scene, many animals won’t be able to see at this wavelength. If the experimenter wants to use light of a wavelength that the animal can see (or white light) while using infrared illumination for the camera, the other background channel can be used.

Channels 1, 2, 3 and 4 are addressable during a recording. Channel 1 will always be used for Virtual Arenas. The other channels are only useful for time dependent stimulation. In principle each GPIO can have its own Channel. This is only useful if illumination (via background, see above) is optimal without fine grained control.

cancel()[source]

Function is called when user presses the ‘cancel’ button. Destroys the window without saving anything.

confirm()[source]

Function is called when user presses the ‘confirm’ button. Collects the channels and frequencies and associates it with the proper variable.

Specifically, it creates one list per channel. Each GPIO in that channel is a nested list. For example, if the user assigns GPIO27 and GPIO17 to Channel 1 (with frequency 1250), the channel_one variable will be a nested list of the following form: [[27, 1250], [17, 1250]]

Variables are modified in the instance of the original GUI

gpio13_high_speed()[source]

Function is called when the user toggles the High Speed checkbutton for GPIO13. This updates the window for the user to either manually enter a frequency (if High Speed PWM is on) or use the list of available frequencies (if not). In principle identical to the gpio18_high_speed function.

gpio18_high_speed()[source]

Function is called when the user toggles the High Speed checkbutton for GPIO18. This updates the window for the user to either manually enter a frequency (if High Speed PWM is on) or use the list of available frequencies (if not).

10.2.5. Error Messages

tracking_help_classes.show_vr_arena_update_error(recording_framerate, vr_update_rate)[source]

This function warns the user that an incompatible frame rate/dynamic arena update frequency has been chosen.

For example, if the frame rate is 30 frames per second and the update rate is 10 Hz, the arena will be updated every 3rd frame (30/10 = 3). This is of course possible.

If the frame rate is 30 frames per second and the update rate is set to 20 Hz, the arena would have to be updated every 1.5th frame (30/20 = 1.5). This is not possible. What happens instead is that the arena is updated after one frame for one update and after two frames for the next, which averages to 1.5 but is not regular.

As this can easily lead to bad data being produced without the user knowing (no explicit error will be thrown), this function informs the user of the mistake so that they can either change the frame rate to 40 frames per second to keep the 20 Hz update rate or change the update rate.
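
The underlying check boils down to whether the frame rate is an integer multiple of the update rate; a minimal sketch (names are illustrative):

    def check_update_rate(recording_framerate, vr_update_rate):
        """Return True if the arena can be updated every N-th frame exactly."""
        frames_per_update = recording_framerate / vr_update_rate
        return frames_per_update == int(frames_per_update)

    print(check_update_rate(30, 10))  # True:  every 3rd frame
    print(check_update_rate(30, 20))  # False: would need every 1.5th frame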

10.3. Virtual Arena drawing

class VR_drawing_board.VRArena(resolution=None, path_of_program=None)[source]

Users need to be able to “draw” virtual reality arenas, as many users will not be able to just pull up Matlab or Python to draw a virtual arena as a 2D matrix and save it as csv.

This class is intended to let users draw virtual realities painlessly. It opens a blank image with the x/y size of the camera resolution and lets the user ‘draw’ a VR arena with the mouse.

In general it has two options: the user can draw Gaussian circles while being able to define sigma.

The user also has the option to draw step functions without a gradient at the edge. For both of these geometric objects the user can define the intensity. It is also possible to define a general minimal stimulation.

See the subclasses for detailed information. It then saves the arena (probably in a subfolder) to be used again. It should also be saved with any experiment that is conducted with it.

addition_of_intensity_func()[source]

A callback function for the button ‘additive on/off’. The user can toggle between adding intensities and not.

delete_animal_func()[source]

If the user wants to get rid of the animal after drawing one, they can press this button. It simply assigns the string ‘NA’ to all the animal position variables, which tells the program that no animal has been selected.

dont_draw_func()[source]

If the user first wants to draw and then e.g. place an animal or zoom into a part of the figure, they need to first call this function by pressing the corresponding button. It will disconnect the event handler.

draw_gaussian_func()[source]

This function calls the GaussianSlope class - just here to save lines in the program. It also changes the last_drawing_call variable to make the user experience more intuitive.

draw_linear_func()[source]

This function calls the Step class - just to save some lines in the code. Also changes the last_drawing_call variable to make the user experience more intuitive.

gridlines_func()[source]

When the user presses the Gridlines button, this function turns them on or off.

invert_func()[source]

This function is bound to the invert_button. It changes the text on the invert_button, changes the color of both the background and the text of the button as a visual aid, toggles the boolean invert variable between True and False, and updates the drawing class call.

modify_exsiting_func()[source]

This function first opens a file dialog in the directory where this module normally saves the arenas. It then reads the file and directly draws it on the canvas. It also updates the name with the name of the selected file.

overwrite_func()[source]

This function is bound to the overwrite_button. It changes the text on the overwrite button, changes the boolean variable ‘overwrite’ to True or False, and updates the drawing class call.

place_animal_func()[source]

After calling this function by pressing the appropriate button, the user is able to draw in the canvas where the animal shall be located. If mouse is pressed and released immediately (a click), a circle will be drawn. If not, an arrow will be drawn. The (inverted) arrowhead indicates the position of the animal at the beginning of the experiment while the other side indicates the angle the animal was last seen.

place_animal_precise_func()[source]

This function calls the PlaceAnimal class which will either draw a circle (if only x and y are given) or an arrow (if x, y and theta are given). Before it does so it tries to set any existing arrows invisible. It also changes a button color and changes the boolean switch ‘animal_draw_selected’.

precise_gaussian_draw_func()[source]

In order to precisely draw a Gaussian gradient, the user has the option of defining the x/y coordinate and then pressing a button. That button calls this function, which passes the precise x/y coordinates (along with all the other arguments that are used when drawing by mouse) to the GaussianSlope class.

precise_rectangle_draw_func()[source]

In order to precisely draw a step rectangle, the user has the option of defining the x/y coordinate of the center and then pressing a button. That button calls this function, which passes the precise x/y coordinates (along with all the other arguments that are used when drawing by mouse) to the Step class.

quit_func()[source]

In order to quit this window and go back to the main GUI, the user needs to press the ‘quit’ button and this function will be called.

save_arena()[source]

This function is called when the user clicks the ‘save’ button. The function should work both on Linux based systems and Windows. It checks whether the file already exists and, if so, asks the user whether it should be overwritten. Otherwise it just saves, without any confirmation. The arenas are always saved with the same resolution, as these are not interchangeable within a trial.

stop_animal_drawing_func()[source]

After placing an animal with the mouse the user might want to draw more or just zoom into a part of the figure. Pressing the appropriate button will call this function which disconnects the event handler.

update_animal_position(x, y, angle)[source]

After drawing either a circle or an arrow, the PlaceAnimal class calls this function to let the main class know the x/y and theta of the latest animal.

Parameters:

  x: x coordinate of the animal

  y: y coordinate of the animal

  angle: the angle (calculated by the arctan2 function) that describes where the animal was before the start of the experiment

update_drawing()[source]

If it is not clear which geometric form the user is working on, this function is called (e.g. when changing the ‘overwrite’ button or the ‘invert’ button).

update_values()[source]

This function runs as a loop in the background after the VR drawing board has been constructed. It listens to changes made by the user in the Entry fields and calls the appropriate functions.

arena

Calls the plotting library and plots the empty arena into a figure.

class VR_drawing_board.GaussianSlope(ax, arena, plot_of_arena, size, sigma, max_intensity, overwrite, invert, addition_of_intensity, mouse_drawing, x_coordinate=None, y_coordinate=None)[source]

This class is bound to the canvas.

There are two ways this class can behave:

  1. When the user clicks somewhere on the canvas, the x and y coordinates are collected using the GaussianSlope.on_press() function. This function then calls the GaussianSlope.draw_gradient() function, in which the Gaussian gradient with the entered size is created. Then the size of the Gaussian gradient arena is matched to the size of the image, which is given by the resolution, and plotted. This makes for an interactive experience for the user, who can ‘point and click’ on the area where a Gaussian gradient should be created.

  2. If the “Draw Gaussian Circle at defined coordinates” button is pressed, the x and y coordinates are collected from the “Coordinates” entry boxes. Then GaussianSlope.draw_gradient() is called and the gradient is created identically to the mouse click version.

By varying the size of the Gaussian gradient it is also possible to have more than one gradient. If only one gradient is to be created, the user should just leave the original setting in place (2000), as this is much larger than the resolution used to record the behavior. By varying sigma the user can choose the steepness of the gradient.

disconnect()[source]

Disconnect the mouse button clicks from the canvas

draw_gradient()[source]

This function collects the size and sigma of the gradient and draws it at the coordinates where the user pressed on the canvas.

gkern(kernlen=20, std=3)[source]

Returns a 2D Gaussian kernel array. Taken from: https://stackoverflow.com/questions/29731726/how-to-calculate-a-gaussian-kernel-matrix-efficiently-in-numpy
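
An equivalent kernel can be built with plain numpy as the outer product of two 1D Gaussian windows; a sketch under the assumption that the kernel is normalized to a peak of 1 (the linked recipe may normalize differently):

    import numpy as np

    def gkern(kernlen=20, std=3):
        """Return a 2D Gaussian kernel as the outer product of two 1D
        Gaussian windows, normalized to a peak value of 1."""
        ax = np.arange(kernlen) - (kernlen - 1) / 2.0
        gauss_1d = np.exp(-0.5 * (ax / std) ** 2)
        kernel = np.outer(gauss_1d, gauss_1d)
        return kernel / kernel.max()

    # e.g. a circular gradient with peak intensity 100 for a virtual arena
    arena = 100 * gkern(kernlen=201, std=30)
    print(arena.shape, arena.max())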

class VR_drawing_board.Step(ax, arena, plot_of_arena, size_x, size_y, intensity, overwrite, invert, addition_of_intensity, mouse_drawing, x_coordinate=None, y_coordinate=None)[source]

This class is bound to the canvas. This class is called to draw precise Rectangles using coordinates and when drawing Rectangles with the mouse.

Behavior in “precise” mode:

Collect the x and y coordinate from the entry field

Behavior in “mouse” mode:

When the user clicks somewhere on the canvas, the x and y coordinates are collected by Step.on_press().

The rest of the behavior is identical, as the function Step.draw_rectangle() is called.

This class does essentially the same as the GaussianSlope class with the difference of not calling the gkern function. Source code is quite explicit and heavily annotated.

disconnect()[source]

This function is called directly from the master and disconnects the event handler

draw_rectangle()[source]

This function is called either after the user pressed the mouse button on the canvas or if the x/y coordinates have been entered manually.

on_press(event)[source]

This function just collects the x/y coordinate of where the user pressed the mouse button.

class VR_drawing_board.PlaceAnimal(ax, plot_of_arena, master, start_x=None, start_y=None, theta=None, precise=False)[source]

This class is bound to the image that has been plotted to show the arena. There are three ways this class can behave:

  1. If no x/y and theta values are passed: When the user clicks somewhere first the x and y coordinates are collected. After the user has released the mouse button, the coordinates for the press and release are compared. If they are identical, the animal will not have a directionality which is displayed as a circle. If they are not identical, the point of release is seen as the point where the animal will be when the experiment starts. The point where the user pressed is the direction where the animal is coming from.

  2. If x/y but not theta are provided: The class will just draw a circle (no directionality of the animal is assumed).

  3. If x/y and theta are provided, an arrow is drawn with the given coordinates as the place where the animal will be when the experiment starts and theta as the direction where the animal was before.

disconnect()[source]

Disconnect all mouse buttons from canvas. Called directly from the master

draw_arrow()[source]

This function is called either after the mouse button is released and x/y at press and release are not identical or if the x/y and theta are provided. First it will try to remove the arrow or circle that is already present, then it’ll draw a new arrow.

draw_point()[source]

This function is called either after the mouse button is released and x/y at press and release are identical or if the x/y coordinates (but not theta) are provided. First it will try to remove the arrow or circle that is already present, then it’ll draw a new circle.

10.4. PiVR Analysis source code

class analysis_scripts.AnalysisDistanceToSource(path, multiple_files, string, size_of_window, controller)[source]

For our lab, a typical experiment would be the presentation of an odor source to an animal. By analyzing the behavior, for example the attraction of the animal towards the source, we can learn a lot about the underlying biology that manifests itself in that behavior.

To easily enable the analysis of such an experiment, the user has the option to automatically analyze these experiments. This class is at the heart of the analysis.

As each experiment (across trials) can have the source at a different position in the image, the user is first presented with the background image. The user then selects the source, upon which the distance to the source is calculated for each timepoint of the experiment.

The output is a csv file with the distance to source for each analyzed experiment and a plot indicating the median and the individual trajectories.

Note

Up to v.1.5.0 (27th of March 2021) the centroid position was median filtered with window size 3. This was removed in v.1.5.1. Users should implement their own filters.
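
The core computation is a per-frame Euclidean distance to the selected source, converted to millimetres; a minimal sketch (function and argument names are illustrative, not the PiVR code):

    import numpy as np

    def distance_to_source_mm(centroids, source_xy, pixel_per_mm):
        """Euclidean distance (in mm) from each centroid to the source.

        centroids: array of shape (n_frames, 2) with x/y in pixels
        source_xy: (x, y) of the user-selected source in pixels
        """
        centroids = np.asarray(centroids, dtype=float)
        deltas = centroids - np.asarray(source_xy, dtype=float)
        return np.hypot(deltas[:, 0], deltas[:, 1]) / pixel_per_mm

    # usage: three frames, source at (100, 100), 5 px per mm
    print(distance_to_source_mm([[100, 100], [130, 140], [200, 100]],
                                (100, 100), 5.0))  # -> [0. 10. 20.]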

class analysis_scripts.AnalysisVRDistanceToSource(path, multiple_files, string, controller)[source]

After running a virtual reality experiment with a single point source, we are often interested in the distance to this source. For example, when expressing the optogenetic tool Chrimson in the olfactory system of fruit fly larvae, they will ascend a virtual odor gradient similar to a real odor source.

To easily enable the analysis of such an experiment, the user has the option to automatically analyze these experiments. This class is at the heart of the analysis.

The user just has to select the folder containing the experiments. This class will automatically detect the maximum intensity point in virtual space and calculate the distance to that point for the duration of the experiment.

The output is a csv file with the distance to the single point of maximum virtual stimulus for each analyzed experiment and a plot indicating the median and the individual trajectories.

class multi_animal_tracking.MultiAnimalTracking(data_path, colormap, recording_framerate, organisms_and_heuristics)[source]

The Multi-Animal Tracker allows the identification and tracking of several animals in a video or image series.

This tracker depends on user input, specifically:

  1. The user should identify the region in the frame where the animals are to be expected. This helps reduce mis-identification of structures outside that area as animals.

  2. The user should optimize the detection by using the “Threshold (STD from Mean)” slider. During background subtraction, the current image is subtracted from the mean image. The threshold defined with this slider sets how many standard deviations (e.g. 5 x standard deviation) from the mean pixel intensity of the subtracted image a pixel must deviate to count as signal. In other words, the more clearly your animals stand out (large contrast), the higher the threshold can be set.

  3. The “Minimum filled area” slider gives the user a handle on the animal size: after background subtraction and applying the threshold (see above), the algorithm goes through all the “blobs”. To determine whether a given blob counts as an animal, it compares the number of pixels to this Minimum filled area. A blob will only count as an animal if it contains at least as many pixels as defined here.

  4. The “Maximum filled area” slider gives the user a handle on the animal size by defining the maximum area (in pixels) the animal has (see above).

  5. The “Major over Minor Axis” slider lets the user select for “elongated” objects. The major and minor axis are properties of the “blob”. For animals that are often round (such as fruit fly larvae) it is best to keep this parameter at zero. For animals that are rigid, such as adult fruit flies, it can be useful to set this slider to a number higher than one.

  6. The “Max Speed Animal [mm/s]” is used during tracking to define realistic travelled distances between two frames. To calculate this, the script takes the pixel/mm and the frame rate as recorded in “experiment_settings.json” into account.

    For example, if you have a fruit fly larva that moves no faster than 2 mm/s and you have recorded a video at 5 frames per second at a distance (camera to animals) translating to 5 pixel/mm at your chosen resolution, a blob cannot move more than (2 mm/s * 5 pixel/mm) / 5 frames per second = 2 pixels per frame.

    Warning

    This feature can lead to unexpected results. If your trajectories look unexpected, try relaxing this parameter (=put a large number, e.g. 200)

  7. The “Select Rectangular ROI” is an important feature: it allows the selection of a rectangular area using the mouse in the main window. When looking for animals, only the region inside this rectangle is taken into consideration.

  8. The main window displays the current frame defined by pulling the slider next to “Start Playing”. This can be used to optimize the “Image parameters” described above. To just watch the video you can of course also press the “Start Playing” button.

The multi-animal tracking algorithm critically depends on optimal image parameters, which means that for optimal results each frame should contain the expected number of animals. For example, if you are running an experiment with 5 animals, the goal is to adjust the image parameters such that each frame yields 5 animals. See here on how to best achieve this.

To help the user find frames where the number of animals is incorrect, the button “Auto-detect blobs” can be very useful. It detects, in each frame, the number of “blobs” that fit the image parameters, irrespective of distance travelled. See MultiAnimalTracking.detect_blobs() for details on what that function does exactly.

Once the user presses the “Track Animals” button, the MultiAnimalTracking.ask_user_correct_animal_classification() function is called. This function uses the current frame and applies the user defined image parameters to determine the number of animals used in the experiment. It then shows a popup indicating the blobs identified as animals and asks the user if this is correct.

If the user decides to go ahead with tracking, the actual tracking algorithm starts. The principle of this multi-animal tracker is the following:

  1. User has defined the number of expected animals by choosing a frame (i.e. Frame # 50) where the correct number of animals can be identified.

  2. A numpy array with the correct space for storing X and Y coordinates for all these animals for each frame is pre-allocated

  3. In the user defined frame (i.e. Frame # 50), the position of each animal is identified.

  4. The centroid position for each animal is stored in the pre-allocated array. The order runs from the identified animal top left to bottom right, e.g. the animal that is top left in the image in Frame #50 will be in position #1 in the numpy array.

  5. As the user defined frame does not have to be the first frame, the tracking algorithm can run “backwards”, i.e. identifying animals in frame 50, 49, 48… and once it reaches zero it will run forward, in our example 51, 52 …

  6. In the next frame (which can also be the previous frame as the analysis can run backwards), the blobs that can be animals are again identified using animal parameters. In our example where the starting frame was 50, the “next” frame to be analyzed is 49.

  7. The centroids in frame 49 are assigned to the animals identified in the previous frame by calculating the distance of each centroid to each of the previously identified centroids. Centroids with the smallest distance are assumed to belong to the same animal (see the sketch after this list).

  8. In many multi-animal experiments, animals can touch each other, which makes it impossible for the algorithm to distinguish them. For a frame where 2 (or more) animals touch each other, only one centroid can be assigned to the touching animals.

  9. Once the animals no longer touch, they can be re-identified as single animals. To assign them to their previous trajectory, the distance to the last known position of the animal that was lost is used. However, for the time that the animal is missing, no assumptions are made and the data is simply missing.
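
A simplified sketch of the nearest-centroid assignment described in step 7, using scipy's cdist; the greedy matching and the max_jump_px cut-off are illustrative simplifications, not the exact PiVR logic:

    import numpy as np
    from scipy.spatial.distance import cdist

    def assign_centroids(previous, current, max_jump_px):
        """Match each previously known animal to the nearest centroid in
        the current frame; animals without a centroid within max_jump_px
        stay unassigned (NaN), e.g. when two animals touch."""
        assigned = np.full_like(np.asarray(previous, dtype=float), np.nan)
        if len(current) == 0:
            return assigned
        distances = cdist(previous, current)        # (n_animals, n_blobs)
        for i, row in enumerate(distances):
            j = int(np.argmin(row))
            if row[j] <= max_jump_px:
                assigned[i] = current[j]
        return assigned

    previous = [[10.0, 10.0], [100.0, 100.0]]
    current = [[101.0, 102.0], [12.0, 11.0]]
    print(assign_centroids(previous, current, max_jump_px=5))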

ask_user_correct_animal_classification()[source]

This function is called after the user presses “Track Animals”.

  1. Creates a popup window to show the current frame

  2. Subtracts the current image from the background image

  3. Thresholds (binarizes) the subtracted image with the user defined Threshold.

  4. Identifies all blobs in the current image by calling the “regionprops” function.

  5. For each identified blob, determine whether it counts as an animal according to the user defined image parameters.

  6. If yes, draw a box around that blob.

  7. Display the resulting image and ask the user if the identified and numbered blobs are indeed animals and if the tracking algorithm should start.

Important

The number of animals identified here is used as the ‘ground truth’ of how many animals are present during the experiment.

detect_blobs()[source]

This function is intended to be used “pre-tracking”: if the user thinks the image parameters are OK and presses “Detect blobs”, this function is called. It checks the number of blobs fitting the image parameters for each frame. This makes it obvious where the image parameters are producing incorrect results.

The function does the following:

  1. Subtract all images from the background image.

  2. Threshold (binarize) the subtracted image using the user defined Threshold Image parameter.

  3. Loop through the subtracted frames and call the “regionprops” function on each frame.

  4. Loop through each of the blobs and determine whether they count as animals, i.e. by comparing their filled area to the user defined minimum and maximum filled area.

  5. If they count as animals, count how many there are per frame.

  6. Plot the blobs identified as animals in the plot on the right side of the main window.

draw_rectangle()[source]

When the user presses the “Select rectangle” Button, this function is called.

It connects the mouse button press and release events.

Call MultiAnimalTracking.on_press() and MultiAnimalTracking.on_release()

interpolate()[source]

During tracking it can happen that animals are not identified in every frame.

This function allows interpolation of the trajectories.

Warning

This is an experimental feature. It can produce very wrong results

For each identified animal there is a “last frame” where it was identified and a “new frame” where it is identified again. This function assumes that the animal moved with a constant speed in a linear fashion and simply does a linear interpolation between these coordinates (see the sketch below).

Important

An important assumption is that the initial assignment was relatively correct. Small errors can lead to huge effects when using the interpolation function.
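
A minimal sketch of such a constant-speed, straight-line interpolation over NaN gaps, using numpy.interp (names are illustrative, not the PiVR implementation):

    import numpy as np

    def interpolate_gaps(trajectory):
        """Linearly fill NaN gaps in a (n_frames, 2) x/y trajectory,
        assuming constant speed on a straight line between known points."""
        trajectory = np.asarray(trajectory, dtype=float).copy()
        frames = np.arange(len(trajectory))
        for axis in range(trajectory.shape[1]):
            known = ~np.isnan(trajectory[:, axis])
            trajectory[~known, axis] = np.interp(
                frames[~known], frames[known], trajectory[known, axis])
        return trajectory

    track = [[0.0, 0.0], [np.nan, np.nan], [np.nan, np.nan], [3.0, 6.0]]
    print(interpolate_gaps(track))
    # frames 1 and 2 become (1, 2) and (2, 4)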

manually_jump_to_frame_func()[source]

Function is called when user presses the “Jump to frame” button.

on_press(event)[source]

Saves x and y position when user presses mouse button on main window

on_release(event)[source]

Saves x and y position when user releases mouse button on main window.

Also takes care of updating the main window with the new ROI

play_func()[source]

Function is called when user presses the “Start playing” button.

quit_func()[source]

In order to quit this window and go back to the main GUI, the user needs to press the ‘quit’ button and this function will be called.

tracking_start()[source]

This function organizes the tracking of the animals.

It pre-allocates the numpy array for the centroid positions after identifying the correct number of animals in the current frame.

The actual tracking function, tracking_loop(), is defined locally in this function and is called in the correct order from here.

If the details in the documentation of this class are not sufficient, please have a look at the heavily annotated source code of the tracking_loop() function (line 1228).

update_overview_func()[source]

Function is called when the user presses the “Update Overview” button. It just changes the bool used in update_visualization().

update_visualization(scale_input=None)[source]

Updates the embedded matplotlib plots by setting the data to the current image_number

10.5. PiVR Image Data Handling source code

class image_data_handling.PackingImages(controller, path, multiplefolders, folders, zip, delete, npy, mat, color_mode)[source]

After running an experiment with the full frame recording option, it is often problematic to move the folder around.

The reason is that for the OS it is usually harder (i.e. slower) to move thousands of small files around compared to a single file with the same size.

This class collects images and essentially creates a single file from them.

class image_data_handling.ConvertH264(path, multiplefolders, folders, save_npy, save_mat, emboss_stimulus, color_mode, output_video_format, codec)[source]

When recording a video using PiVR there seems to be a problem with the encoder: some of the metadata is not correctly stored; most importantly, the frame rate is usually given as ‘inf’.

This class enables the user to convert the recorded h264 video to another video format. This happens by completely decoding the video