Raw top-view movies were processed with custom MATLAB code. For RGB movies, the 80th percentile of a manually or automatically selected set of frames was used as the background, which was subtracted from all frames. Then, for both RGB and thermal movies, a threshold was manually selected in a GUI, and the resulting binary mask underwent a series of simple operations: morphological closing, removal of small objects, another morphological closing and, finally, morphological opening. Parameter values were set to accommodate the different recording conditions. For each frame, the mouse's contour and center of gravity were obtained from the resulting mask and saved. A calibration (ratio of movie pixels to centimeters of the real object) was also obtained from a manually drawn segment and the corresponding length of the object in cm, for later normalization.

In the following analysis steps, speed and mouse position were derived from the center-of-gravity coordinates, which were slightly smoothed with a median filter. In addition, to capture general activity even in the absence of locomotion, a motion measure was computed as the percentage of pixel change in the mouse mask from one frame to the next (nonoverlapping pixels/total pixel count).

Several body parts (snout, ears, front and hind paws, and tail) were also tracked with the Python-based DeepLabCut52,53. Briefly, a ResNet-152 network was iteratively trained and refined on ~1,350 frames to perform as well as possible across all our recording conditions and, in particular, to yield accurate tail tracking (cf. thermal data extraction). To avoid sacrificing accuracy, only coordinates with a score ≥0.99 were included, and no interpolation was applied.

A semi-automated, threshold-based GUI was used to annotate the following behaviors: rearing, grooming, stretch-attend posture, head dips, immobility, fast locomotion and so-called area-bound (none of the other defined behaviors; in particular, no immobility and no locomotion). Briefly, for each behavior, a global score was obtained from relevant position information and body-part angles/distances/speeds, and thresholded with its respective hard-coded threshold. The behavioral bouts resulting from this initial detection were displayed in a GUI together with the original movie and the scores. The thresholds could then be dragged manually, updating the detected-event plots, to obtain the best possible detection. Events were then checked and adjusted manually when needed within the same GUI, and occasional periods of obstruction (for example, a cable between the camera and the mouse) were marked for later exclusion.

In the specific case of the LDB, because the RGB camera could not capture the mouse's activity in the dark side, thermal movies were used for behavioral detection. To this end, thermal movies were re-exported with a black-and-white colormap, after inverting the intensities and adjusting the contrast, so that the resulting frames resembled their RGB counterparts. A DeepLabCut network was derived from our main network and refined on those new movies (tail points were discarded because of their changing appearance in thermal movies). The tracked body parts were then used as described above to detect behaviors.

Mouse tracking was performed blind to the conditions of the experiments.
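For illustration, the mask-extraction steps (percentile background, manual threshold, closing/small-object removal/closing/opening, then contour and center of gravity) could look roughly as in the following Python/scikit-image sketch. The original pipeline was custom MATLAB code; all function names and parameter values here are hypothetical.

```python
import numpy as np
from skimage import measure, morphology

def percentile_background(frames, q=80):
    """Background image: the 80th percentile across a stack of
    manually or automatically selected frames (n, height, width)."""
    return np.percentile(frames, q, axis=0)

def extract_mouse_mask(frame, background, threshold=25,
                       min_object_px=200, disk_radius=3):
    """Background-subtract one grayscale frame and clean the binary mask
    with the closing / small-object removal / closing / opening chain.
    `threshold`, `min_object_px` and `disk_radius` are hypothetical values
    standing in for the GUI-selected, condition-specific parameters."""
    diff = np.abs(frame.astype(float) - background)
    mask = diff > threshold
    footprint = morphology.disk(disk_radius)
    mask = morphology.binary_closing(mask, footprint)
    mask = morphology.remove_small_objects(mask, min_size=min_object_px)
    mask = morphology.binary_closing(mask, footprint)
    mask = morphology.binary_opening(mask, footprint)
    return mask

def centroid_and_contour(mask):
    """Center of gravity and contour of the largest connected component."""
    labels = measure.label(mask)
    if labels.max() == 0:
        return None, None
    largest = max(measure.regionprops(labels), key=lambda p: p.area)
    contour = measure.find_contours(labels == largest.label, 0.5)[0]
    return largest.centroid, contour  # centroid is (row, col)
```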
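The calibration and the speed computation from median-filtered center-of-gravity coordinates might be sketched as follows; the kernel size is an assumption standing in for the "slight" smoothing described above.

```python
import numpy as np
from scipy.signal import medfilt

def px_per_cm(segment_length_px, object_length_cm):
    """Calibration ratio (movie pixels per real-world cm) from a manually
    drawn segment and the known object length."""
    return segment_length_px / object_length_cm

def speed_cm_per_s(centroids, calibration, fps, kernel_size=5):
    """Per-frame speed from slightly median-filtered center-of-gravity
    coordinates; `centroids` is an (n_frames, 2) pixel array."""
    x = medfilt(centroids[:, 0], kernel_size)
    y = medfilt(centroids[:, 1], kernel_size)
    return np.hypot(np.diff(x), np.diff(y)) / calibration * fps
```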
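One plausible reading of the motion measure (nonoverlapping pixels/total pixel count) is sketched below; note that the denominator, taken here as the union of the two masks, is an assumption, as the text does not specify whether "total pixel count" refers to the masks or the whole frame.

```python
import numpy as np

def motion_percent(mask_prev, mask_curr):
    """Percentage of pixel change between consecutive mouse masks:
    pixels present in exactly one mask, divided by the union of the
    two masks (assumed denominator)."""
    nonoverlap = np.logical_xor(mask_prev, mask_curr).sum()
    total = np.logical_or(mask_prev, mask_curr).sum()
    return 100.0 * nonoverlap / total if total else 0.0
```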
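The score-based filtering of the DeepLabCut output (score ≥0.99, no interpolation) might look like the sketch below, which assumes DeepLabCut's standard single-animal .h5 layout; the helper name is hypothetical.

```python
import numpy as np
import pandas as pd

def load_filtered_dlc(h5_path, min_score=0.99):
    """Load a standard DeepLabCut .h5 output and keep only coordinates
    with likelihood >= min_score; rejected samples become NaN and are
    deliberately left uninterpolated."""
    df = pd.read_hdf(h5_path)  # columns: (scorer, bodypart, coord)
    scorer = df.columns.get_level_values(0)[0]
    tracks = {}
    for bodypart in df[scorer].columns.get_level_values(0).unique():
        xy = df[scorer][bodypart][["x", "y"]].to_numpy(dtype=float)
        low = df[scorer][bodypart]["likelihood"].to_numpy() < min_score
        xy[low] = np.nan  # no interpolation applied
        tracks[bodypart] = xy
    return tracks
```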
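The initial, pre-GUI bout detection (per-frame global score against a threshold) reduces to finding supra-threshold runs, as in the sketch below; the minimum bout length is an added assumption, not part of the described method.

```python
import numpy as np

def detect_bouts(score, threshold, min_frames=3):
    """Initial detection for one behavior: threshold the per-frame global
    score and return (start, stop) frame indices of supra-threshold runs.
    `threshold` is what the user later drags in the GUI to refine events;
    `min_frames` is a hypothetical bout-length filter."""
    above = (np.asarray(score) > threshold).astype(int)
    edges = np.diff(above, prepend=0, append=0)
    starts = np.flatnonzero(edges == 1)
    stops = np.flatnonzero(edges == -1)  # exclusive stop index
    return [(s, e) for s, e in zip(starts, stops) if e - s >= min_frames]
```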
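Finally, the re-export of thermal movies so that frames resemble their RGB counterparts (contrast adjustment, intensity inversion, black-and-white colormap) could be sketched per frame as follows; the percentile contrast limits are a hypothetical choice.

```python
import numpy as np

def thermal_to_pseudo_rgb(frame, lo=None, hi=None):
    """Make a thermal frame resemble an RGB frame for DeepLabCut:
    stretch the contrast between lo and hi, invert the intensities
    and export as an 8-bit black-and-white image."""
    f = frame.astype(float)
    lo = np.percentile(f, 1) if lo is None else lo   # assumed limits
    hi = np.percentile(f, 99) if hi is None else hi
    f = np.clip((f - lo) / (hi - lo), 0.0, 1.0)
    return ((1.0 - f) * 255).astype(np.uint8)        # inverted, B/W
```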