
Soundbooth

Developed by Adobe

Soundbooth is a software application developed by Adobe for audio editing and recording. It provides basic tools for trimming, mixing, and enhancing audio files.


Lab products found in related protocols

5 protocols using Soundbooth

1. Auditory and Visual Onset Editing for Speech Perception

To edit the auditory track, we located the /b/ or /g/ onsets visually and auditorily with Adobe Premiere Pro and Soundbooth (Adobe Systems Inc., San Jose, CA) and loudspeakers. We applied a perceptual criterion to operationally define a non-intact onset. We excised the waveform in 1 ms steps from the identified auditory onset to the point in the adjacent vowel for which at least 4 of 5 trained listeners (AO mode) heard the vowel as the onset. Splice points were always at zero axis crossings. Using this perceptual criterion, we excised on average 52 ms (/b/) and 50 ms (/g/) from the word onsets and 63 ms (/b/) and 72 ms (/g/) from the nonword onsets. Performance by young untrained adults for words (N=10) and nonwords (N=10) did not differ from the results presented herein for the 10–14-yr-old group. The visual track of the words and nonwords was also edited to form AV (dynamic face) vs AO (static face) modes of presentation.
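The excision procedure above can be sketched in code. The authors performed this editing interactively in Premiere Pro and Soundbooth, so the function names, the use of NumPy, and the example onset index below are illustrative assumptions, not their actual tooling:

```python
import numpy as np

def nearest_zero_crossing(signal, index):
    """Return the sample index of the zero-axis crossing closest to `index`."""
    # Sign changes between consecutive samples mark zero crossings.
    signs = np.sign(signal)
    crossings = np.where(np.diff(signs) != 0)[0]
    if crossings.size == 0:
        return index  # no crossing found; fall back to the requested index
    return int(crossings[np.argmin(np.abs(crossings - index))])

def excise_onset(signal, sample_rate, onset_sample, excise_ms):
    """Remove `excise_ms` milliseconds after `onset_sample`, splicing at zero crossings."""
    step = int(sample_rate / 1000)             # samples per 1 ms step
    cut_end = onset_sample + excise_ms * step  # nominal end of the excised span
    start = nearest_zero_crossing(signal, onset_sample)
    end = nearest_zero_crossing(signal, cut_end)
    return np.concatenate([signal[:start], signal[end:]])

# Hypothetical example: excise 52 ms (the mean /b/ word-onset cut reported above)
# from a synthetic 44.1 kHz waveform, with the onset assumed at sample 1000.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
edited = excise_onset(tone, sr, 1000, 52)
```

Snapping both splice points to zero-axis crossings, as the protocol specifies, avoids the audible click that a cut at a non-zero sample value would introduce.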
2. Auditory and Visual Onset Manipulation

We edited the auditory track of the /B/ and /G/ experimental items by locating the onsets visually and auditorily with Adobe Premiere Pro and Soundbooth (Adobe Systems Inc., San Jose, CA) and loudspeakers. We excised the waveforms in 1 ms steps from the identified auditory onsets to the point in the later waveforms for which at least 4 of 5 trained adult listeners heard the vowel—not the consonant—as the onset in the auditory mode. Splice points were always at zero axis crossings. Using this perceptual criterion, we excised on average from the /B/ and /G/ onsets respectively 51 ms and 63 ms for the CV syllables and 63 ms and 72 ms for the nonwords. The visual track of the utterances was also edited to form audiovisual (dynamic face) vs auditory (static face) modes of presentation (see also refs. 20 and 47). The video track was routed to a high-resolution computer monitor and the auditory track was routed through a speech audiometer to a loudspeaker.
3. Real-World Sound Stimuli Protocol

We selected a total of 60 sound stimuli representing real-world objects (door bell, barking dog, footsteps, etc.). Some of these stimuli have been used previously [8], [11]; others were found through the Internet. The maximum duration of the sounds was 500 ms, so that they would 'fit' into individual up or down states. Our goal was to have the middle of each sound clip coincide with the SO peak or trough. To accommodate different sound durations, sound files for sleep presentation were edited to 500 ms in length by zero-padding them symmetrically to the required length. Thus, the effective stimulus was always centered in the middle of the 500 ms sound clip. We also employed a white noise stimulus, which was constantly looped during playback. Zero-padding was not performed for stimuli presented during wakefulness testing. Sound levels of all sounds, including the white noise, were equalized using the 'account for perceived loudness' option in Adobe Soundbooth. All sounds were stereo waveform files with a sample rate of 44.1 kHz. Sound levels during sleep were set at an unobtrusive volume, ranging from 35 to 45 dBA for individual sounds, soft enough not to awaken the participant but still perceptible. During wake testing, sound volume ranged between 60 and 80 dBA.
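The symmetric zero-padding and level equalization can be sketched as follows. Note that Soundbooth's 'account for perceived loudness' option applies a perceptual loudness model, whereas this sketch equalizes plain RMS level as a crude stand-in; all function names and parameter values here are assumptions for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100
TARGET_SAMPLES = SAMPLE_RATE * 500 // 1000  # 500 ms window = 22050 samples

def pad_to_center(signal, target=TARGET_SAMPLES):
    """Zero-pad `signal` symmetrically so the stimulus sits in the middle of the 500 ms clip."""
    if len(signal) > target:
        raise ValueError("stimulus longer than the 500 ms target window")
    pad = target - len(signal)
    left = pad // 2
    right = pad - left  # one extra sample on the right when pad is odd
    return np.pad(signal, (left, right))

def equalize_rms(signal, target_rms=0.1):
    """Scale `signal` to a common RMS level (a simple proxy for perceived loudness)."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / rms) if rms > 0 else signal

# Hypothetical example: a 300 ms noise burst padded to 500 ms and level-equalized.
burst = np.random.default_rng(0).normal(size=SAMPLE_RATE * 300 // 1000)
clip = equalize_rms(pad_to_center(burst))
```

Because the padding is symmetric, the midpoint of the effective stimulus coincides with the midpoint of the 500 ms clip, which is what lets the presentation software align it with the SO peak or trough.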
4. Auditory and Visual Cues in Phonological Processing

We edited the auditory track of the phonologically-related distracters by locating the /b/ or /g/ onsets visually and auditorily with Adobe Premiere Pro and Soundbooth (Adobe Systems Inc., San Jose, CA) and loudspeakers. We applied a perceptual criterion to operationally define a non-intact onset. We excised the waveform in 1 ms steps from the identified auditory onset to the point in the later waveforms for which at least 4 of 5 trained listeners heard the vowel—not the consonant—as the onset in the auditory mode. Splice points were always at zero axis crossings. Using this perceptual criterion, we excised on average 52 ms (/b/) and 50 ms (/g/) from the word onsets and 63 ms (/b/) and 72 ms (/g/) from the nonword onsets (see Jerger et al. 2016 for details).

All stimuli were presented as QuickTime movie files, and we next formed AV (dynamic face) and auditory (static face) presentations. In our experimental design, we compare results for the auditory vs. AV non-intact stimuli. Any coarticulatory cues in the auditory input are held constant in the two modes; thus any influence on picture naming due to coarticulatory cues should be controlled, allowing us to evaluate whether the addition of visual speech influences performance.
5. Auditory and Visual Stimulus Preparation for Perception Studies

We edited the auditory track of the CV syllables and the nonwords by locating the /b/ or /g/ onsets visually and auditorily with Adobe Premiere Pro and Soundbooth (Adobe Systems Inc., San Jose, CA) and loudspeakers. We excised the waveforms in 1 ms steps from the identified auditory onsets to the point in the waveforms for which at least 4 of 5 trained adult listeners heard the vowel—not the consonant—as the onset in the auditory mode. Splice points were always at zero axis crossings. Using this perceptual criterion, we excised (on average) from the /b/ and /g/ onsets respectively 51 ms and 63 ms for the CV syllables and 63 ms and 72 ms for the nonwords. The visual track of the utterances was also edited to form AV (dynamic face) vs. auditory (static face) modes of presentation.

