The largest database of trusted experimental protocols

79 protocols using Premiere Pro

1

Acoustic Analysis of Duck Vocalizations

Once the ducks were recorded in the sound chamber, we analyzed their vocalizations using Adobe Premiere Pro, Adobe Audition, and Praat. Adobe Premiere Pro, a video-editing program, allowed us to stitch together the video and audio recordings from the anechoic chamber. We then used Adobe Audition, a digital audio workstation with a waveform editing view, to isolate each vocalization. We also used Praat (Boersma 2001), a phonetics software package, in conjunction with Adobe Audition, as it gave us a different view of each vocalization so we could correctly characterize and name each call (see Lucas et al. 2015). Each clipped vocalization was named according to a pre-determined naming system based on the following criteria: number of pulses (the number of waves visible in the waveform), amplitude (how loud each sound is, measured in decibels [dB]), frequency (the rate of vibration of the sound traveling through the air, measured in hertz [Hz]), and the shape of any frequency modulation (variation of the frequency within a portion of the vocalization).
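A minimal sketch of how such a labeling scheme might compose call names from the four criteria (the cutoffs and label vocabulary here are hypothetical; the protocol does not specify them):

```python
def name_call(n_pulses, amplitude_db, peak_hz, fm_shape):
    """Compose a call label from the four criteria: pulse count,
    amplitude (dB), frequency (Hz), and frequency-modulation shape.
    The thresholds below are hypothetical, for illustration only."""
    loudness = "loud" if amplitude_db >= 70 else "soft"
    band = "high" if peak_hz >= 2000 else "low"
    return f"{n_pulses}p-{loudness}-{band}-{fm_shape}"

print(name_call(3, 72.5, 2500, "downsweep"))  # → 3p-loud-high-downsweep
```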
2

Digital Camera Videography Protocols

Movie 1 was captured using a Sigma fp digital camera (Sigma Co.). Video footage was edited and annotated using Premiere Pro (Adobe). Movies 2 and 3 were captured with the built-in video capture function of the SAMPL software using the recording parameters described in Table 2. Movie 3 was edited using Adobe Premiere Pro (Adobe) to combine it with time-series data.
3

Video Capture and Editing Protocol

Video S1 was captured using a Sigma fp digital camera (Sigma Co.). Video footage was edited and annotated using Premiere Pro (Adobe). Videos S2 and S3 were captured with the built-in video capture function of the SAMPL software using the recording parameters described in Table S2. Video S3 was edited using Adobe Premiere Pro (Adobe) to combine it with time-series data.
5

Keratinocyte Imaging and Analysis

Images were processed and analyzed using FIJI and Imaris version 9.8.2 (Bitplane, Zurich, Switzerland) as indicated. Supplemental movies were generated in FIJI and edited using Adobe Premiere Pro (Adobe). In Adobe Premiere Pro, individual keratinocytes were pseudocolored using the Color Effects module with manual tracking.
6

Auditory and Visual Onset Editing for Speech Perception

To edit the auditory track, we located the /b/ or /g/ onsets visually and auditorily using Adobe Premiere Pro and Soundbooth (Adobe Systems Inc., San Jose, CA) and loudspeakers. We applied a perceptual criterion to operationally define a non-intact onset: we excised the waveform in 1 ms steps from the identified auditory onset to the point in the adjacent vowel at which at least 4 of 5 trained listeners (AO mode) heard the vowel as the onset. Splice points were always at zero-axis crossings. Using this perceptual criterion, we excised on average 52 ms (/b/) and 50 ms (/g/) from the word onsets and 63 ms (/b/) and 72 ms (/g/) from the nonword onsets. Performance by young untrained adults on the words (N=10) and nonwords (N=10) did not differ from the results presented herein for the 10–14-yr-old group. The visual track of the words and nonwords was also edited to form AV (dynamic face) versus AO (static face) modes of presentation.
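The excision step described above, removing audio from the onset while splicing only at zero-axis crossings, can be sketched as follows. The sample rate and function names are assumptions for illustration, not part of the published method:

```python
import math

SR = 44100  # assumed sample rate (Hz)

def next_zero_crossing(x, start):
    """Return the first index >= start where the signal crosses zero."""
    for i in range(start, len(x) - 1):
        if x[i] == 0 or (x[i] > 0) != (x[i + 1] > 0):
            return i
    return len(x) - 1

def excise(x, onset_ms):
    """Remove roughly onset_ms of audio from the start, snapping the
    splice point to a zero crossing so no click is introduced."""
    target = int(SR * onset_ms / 1000)
    cut = next_zero_crossing(x, target)
    return x[cut:]

# 100 ms of a 220 Hz tone as a stand-in signal
tone = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR // 10)]
out = excise(tone, 52)  # excise ~52 ms, as for the /b/ word onsets
```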
7

Ethogram-based Behavioral Scoring Protocol

Ethograms were obtained by frame-by-frame scoring of behaviors in multiangle synchronized video recordings using Adobe Premiere Pro (v.2020; Adobe)66. Frames in which a resident mouse (implanted with electrodes or optic fibers) was consuming food pellets were scored as feeding. Social contact was defined as sniffing or following an intruder mouse. During stimulation of the LH–LPO circuit, the latter behavior evolved into prolonged chasing, defined as uninterrupted pursuit of an intruder for longer than 2 s. New-object exploration was defined as sniffing, gnawing, touching, or climbing a new object.
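The bout-scoring logic above can be sketched in code: contiguous runs of identical frame scores are collapsed into bouts, and an uninterrupted "following" bout longer than 2 s is relabeled as chasing. The 30 fps frame rate and label names are assumptions for illustration:

```python
from itertools import groupby

FPS = 30  # assumed video frame rate

def score_bouts(frame_labels):
    """Collapse per-frame behavior scores into (behavior, duration_s)
    bouts, relabeling uninterrupted 'following' > 2 s as 'chasing'."""
    bouts = []
    for label, run in groupby(frame_labels):
        dur = len(list(run)) / FPS
        if label == "following" and dur > 2.0:
            label = "chasing"
        bouts.append((label, dur))
    return bouts

frames = ["feeding"] * 30 + ["following"] * 90 + ["other"] * 15
print(score_bouts(frames))  # → [('feeding', 1.0), ('chasing', 3.0), ('other', 0.5)]
```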
8

Video Analysis of Sports Injuries

We used video-editing software (Adobe Premiere Pro, version 14.1.0, build 116; Adobe Inc., San Jose, California, USA) to extract the relevant injury sequences from the full match video in MP4 format. We created a single video for each injury, including a few seconds before and after the injury to assess the environmental and match conditions, injury situation, and biomechanical variables. Each sequence included the video of the injury at normal speed and all available slow-motion replays (50% speed) from different camera angles (a single replay for 4 injuries, two for 4, three for 5, and six for one). One additional injury was filmed in a wide shot with no replay, so we created a zoomed slow-motion (50% speed) sequence using the same video-editing software. The videos were viewed in QuickTime Player (version 7.7.4; Apple, Cupertino, California, USA), which allows easy frame-by-frame navigation. The different camera views were mounted successively in the single video sequence for each injury, and the analysts then chose the camera view(s) that allowed them to perform the best analysis.
9

Auditory and Visual Onset Manipulation

We edited the auditory track of the /B/ and /G/ experimental items by locating the onsets visually and auditorily with Adobe Premiere Pro and Soundbooth (Adobe Systems Inc., San Jose, CA) and loudspeakers. We excised the waveforms in 1 ms steps from the identified auditory onsets to the point in the later waveform at which at least 4 of 5 trained adult listeners heard the vowel, not the consonant, as the onset in the auditory mode. Splice points were always at zero-axis crossings. Using this perceptual criterion, we excised on average from the /B/ and /G/ onsets, respectively, 51 ms and 63 ms for the CV syllables and 63 ms and 72 ms for the nonwords. The visual track of the utterances was also edited to form audiovisual (dynamic face) versus auditory (static face) modes of presentation (see also 20, 47). The video track was routed to a high-resolution computer monitor, and the auditory track was routed through a speech audiometer to a loudspeaker.
10

Kannada Speech Rate Modulation Protocol


Lists from the Kannada sentence bank12 were used in this part of the study too. The sentences were recorded in an acoustically treated recording room by adult female and male speakers who were native speakers of Kannada. Each speaker was first asked to speak naturally at a normal rate. Another recording was made with the speakers asked to match the rate of sentences electronically time-compressed by 35% and 40%. The video, recorded with a Nikon D500 camera (Nikon Corp., Minato City, Tokyo, Japan), and the audio, recorded in Adobe Audition, were synchronized using Adobe Premiere Pro (Adobe Inc.). The synchronized audio-video recordings were sliced into individual sentences and saved separately. The recorded speech was verified to have the same rate as the compressed speech. The prepared sentence materials included both male and female speakers (mixed randomly) and were presented either unmodified (0% compression) or time-compressed (35% and 45% compression).
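The relationship between the compression percentages and the target durations the speakers had to match can be sketched as follows, assuming "x% compression" means removing x% of the original duration (one common interpretation; the protocol does not define it explicitly):

```python
def compressed_duration(natural_s, compression):
    """Target sentence duration after time compression.
    Assumes 'x% compression' removes x% of the original duration."""
    return natural_s * (1.0 - compression)

for c in (0.0, 0.35, 0.40):
    print(f"{c:.0%} compression of a 3.0 s sentence -> "
          f"{compressed_duration(3.0, c):.2f} s")
```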

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information needed to design robust protocols and minimize the risk of failure.

We believe the most crucial aspect is granting scientists access to a wide range of reliable sources and useful new tools that surpass human capabilities.

However, we trust scientists to decide how to construct their own protocols from this information, as they are the experts in their field.
