The largest database of trusted experimental protocols

Audition 3

Manufactured by Adobe
Sourced in United States

Audition 3.0 is a digital audio editing software application developed by Adobe. It provides tools for recording, mixing, and editing audio files. Audition 3.0 supports a wide range of audio formats and offers features for noise reduction, sound effects, and audio restoration.


39 protocols using Audition 3

1. Auditory Modification of the TNT Paradigm

We developed a modified version of Anderson et al.'s (2004) visual TNT paradigm. In this version, all word stimuli were presented in the auditory domain; only the instruction cues were presented visually (see Figure 1).
Auditory stimuli consisted of 48 pairs of English nouns developed for this experiment, paralleling the procedures used in previous TNT studies. All word pairs were chosen to have a weak semantic relationship. The second word of each pair fit into a semantic category, such that it could be probed with a category and stem-completion task in the independent-probe subsequent memory test. Unlike previous studies, we chose to use non-exemplar words: that is, instead of using the most common example of a category (e.g., “water” for the category “beverage”), we used a less common item from that category (e.g., “wine”). Of the 48 word pairs, six were designated for practice, and the remaining 42 were split evenly into three groups of 14 for the Think, No-Think, and Baseline conditions. Eighteen repetitions of each Think and No-Think cue word were used in the TNT phase of the experiment.
Auditory word stimuli were recorded using a Zoom H2 microphone and edited using Adobe Audition 3.0, where words were cut and their volume normalized.
2. Emotional Word Stimulus Validation Protocol

Stimuli consisted of nine numerical digits and 270 singular words. The digits “1” to “9” were visually presented in Arial font (size 18). The words were selected from the Affective Norms for English Words corpus (ANEW; Bradley & Lang, 1999) based on normative ratings of emotional valence: 90 words were selected from the top end of the emotional valence scores (emotionally positive), 90 from the bottom end (emotionally negative), and 90 from the mid-point (emotionally neutral). As intended, the three word categories differed significantly in emotional valence scores, with positive words scoring higher than neutral words, and neutral words scoring higher than negative words, F(2, 240) = 1910.95, p < 0.001, ηp² = 0.94. Negative and positive cue words did not differ in ANEW arousal ratings, F(1, 160) = 0.04, p = 0.85. All words were recorded by a female native British English speaker and edited using Adobe Audition 3.0 (all sound files were 1000 ms in duration).
3. Metronome Beats: Sensory Perception Study

The auditory stimuli were metronome beats (square-wave clicks of 1 ms duration) generated with Adobe Audition 3.0 and presented binaurally via earphones at the following nine frequencies: 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, and 5 beats/s. The corresponding inter-beat intervals were 1000, 667, 500, 400, 333, 286, 250, 222, and 200 ms, respectively. These beat frequencies were selected on the basis of our pretesting data, which showed that most participants could not perform the experimental task when the beat rate fell outside this range (see section “Introduction” for further explanation).
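The rate-to-interval conversion above is simple reciprocal arithmetic, and click trains of this kind are easy to synthesize digitally. A minimal sketch (the 44.1 kHz sampling rate and unit click amplitude are assumptions; the protocol does not report them):

```python
import numpy as np

FS = 44100  # sampling rate in Hz (an assumption; not stated in the protocol)

def click_train(rate_hz, n_clicks=10, click_ms=1.0, fs=FS):
    """Train of 1 ms square-wave clicks at a given beat rate (beats/s)."""
    ibi = 1.0 / rate_hz                      # inter-beat interval, seconds
    click_len = int(fs * click_ms / 1000)    # samples per click
    train = np.zeros(int(fs * ibi * n_clicks))
    for k in range(n_clicks):
        start = int(k * ibi * fs)
        train[start:start + click_len] = 1.0  # unit-amplitude square click
    return train

# Inter-beat intervals in ms for the nine beat rates used in the protocol
rates = [1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5]
ibis_ms = [round(1000 / r) for r in rates]
```

Rounding each reciprocal to the nearest millisecond reproduces the interval values reported above.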
4. Automated Analysis of Conditioned Fear Response

We collected data measured at PND 29 and PND 56 and used the Any-maze behavioral tracking software (Stoelting Co., Wood Dale, IL, USA) to detect the freezing response in the foot-shock chamber. Every test session was recorded with a camera mounted on top of the chamber. The video files were transferred to the Any-maze program, which automatically analyzed total freezing time (TFT) and total freezing episodes (TFE). Freezing was defined as a total absence of body or head movement, except that associated with breathing [48]. A single “freezing episode” was defined as continuous freezing behavior lasting longer than 2 seconds [49]. From TFT and TFE, the freezing time per episode (FTpE) was calculated. Additionally, to measure fear and anxiety related to the conditioned fear response, acoustic analysis of ultrasonic vocalizations (USVs) was used: we used the Pettersson D-230 bat detector, which transforms high-frequency sounds (22 kHz) into the audible range [50]. The audio files were opened in Adobe Audition 3.0, and all noise except the USVs was filtered out. The total USV time during the 3-minute tone of the conditioned fear response test was measured by an experienced user.
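The derived FTpE measure is a simple ratio of the two Any-maze outputs. As a sketch (the function and variable names are illustrative, not Any-maze's own):

```python
def freezing_time_per_episode(total_freezing_time_s, total_freezing_episodes):
    """FTpE: mean duration of a single freezing episode, in seconds,
    computed from total freezing time (TFT) and the total number of
    freezing episodes (TFE)."""
    if total_freezing_episodes == 0:
        return 0.0  # no freezing observed
    return total_freezing_time_s / total_freezing_episodes

# e.g. 90 s of freezing spread over 15 episodes -> 6 s per episode
```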
5. Staggered Matrix Sentence Presentation

Three pairs of matrix sentences, recorded from a single male Australian English talker, were presented from the three loudspeaker locations (Figure 1). Matrix sentences were syntactically fixed, comprising name, verb, number, adjective, and noun elements. Sentences were constructed on each trial by randomly sampling each element without replacement from a list of 10 possible words. All words within a trial occurred only once, with the exception of the target name, which occurred twice.
Words were 500 ms in duration, with the exception of nouns, which were time-stretched to 600 ms using Adobe Audition 3.0. This manipulation was applied to reproduce the natural prosodic lengthening of speech at phrase boundaries (Wightman et al., 1992). A 350 ms silent gap was introduced between sentence pairs to replicate the average conversational turn-taking interval of English speech (Stivers et al., 2009). In addition, sentences were staggered with a 50 ms offset to (i) reduce the energetic masking encountered with synchronized concurrent talkers and (ii) enhance grouping by staggering onsets. Offset combinations were randomized on each trial and balanced across all locations. Stimuli were generated using Matlab (MathWorks) and played through an RME Fireface UCX soundcard at a 48 kHz sampling rate. All sentences were presented at 65 dB SPL.
6. Infant Cry Stimuli and Controls

Cry stimuli (C) were obtained from two infants, aged 3 and 5 months. Stimuli were purchased from an online audio database (www.audionetwork.com) and edited to 10 s clips using freely available software (www.audacity.com). Two types of control stimuli were synthesized for each cry using Praat 5.1 and Adobe Audition 3.0. For the first control, referred to as Con, an emotionally neutral baby vocalization was created to match the duration, intensity, spectral content, and amplitude envelope of the cry stimulus. The second control, TCon, was a pure tone that preserved the mean fundamental frequency and amplitude envelope of the cry. Participants listened to the audio stimuli through Pro Ears Ultra 28 MRI-compatible headphones and were told before the task: ‘You will now hear a series of sounds. You do not have to do anything except listen to them’. The six sound files (C1, C2, Con1, Con2, TCon1, and TCon2) were presented in pseudorandom order over four blocks such that each sound was presented four times, with an inter-trial interval of 6 s between stimuli (Fig. S2). Each of the four blocks lasted 126 s, and the total duration of the task was 8 min 24 s.
7. Audio Filtering for Lexical Identification

Uncompressed audio files were extracted from the videos and low-pass filtered at 0.5 kHz using a Butterworth filter (24 dB/octave roll-off, 100 Hz transition bandwidth) in Adobe Audition 3.0. Following filtering, the files were amplitude normalized. This level of filtering reduces lexical identification to near zero while retaining energy at the fundamental frequency (the acoustic correlate of perceived pitch) and the first formant (the speech frequency band that contributes to phonetic judgments), as well as amplitude and speech-rhythm dynamics [45, 46].
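A 24 dB/octave Butterworth roll-off corresponds to a 4th-order filter. The following is a sketch of an approximately equivalent filter in SciPy, not a reproduction of Audition's processing: Audition's 100 Hz transition-bandwidth setting has no direct SciPy analogue, and the sampling rate is left to the caller.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_500hz(signal, fs):
    """4th-order (~24 dB/octave) Butterworth low-pass at 0.5 kHz,
    applied forward and backward for zero phase distortion."""
    sos = butter(4, 500, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)
```

Note that `sosfiltfilt` runs the filter twice, doubling the effective roll-off; for a single 24 dB/octave pass, use `scipy.signal.sosfilt` instead.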
8. Ultrasonic Vocalizations in Mouse Courtship

The sound source recorded in previous studies was used (Asaba et al., 2015; Nomoto et al., 2020). One male BALB/c mouse (purchased from CLEA Japan, Inc., Tokyo, Japan) was used for recording USVs. To promote vocalizations, the male mouse was paired in a sound-attenuated chamber with a female C57BL/6 mouse that had been ovariectomized and devocalized; the female's sexual receptivity was hormonally induced. A microphone capable of recording USVs (CM16/CMPA, Avisoft Bioacoustics, Glienicke/Nordbahn, Germany) was placed through the wire mesh of the cage. The USVs were digitally converted, filtered with a 20–145.8 kHz band-pass filter, and recorded at a sampling rate of 300 kHz (UltraSoundGate 116H and RECORDER USGH, Avisoft Bioacoustics, Glienicke/Nordbahn, Germany). A 20 s fragment was extracted from the original sound file using Audition 3.0 (Adobe, CA, USA), and ambient noise was digitally reduced. A silent part of the sound file was used as background noise.
9. Lexical Tones and Pure Tones Perception Study

Lexical tones and pure tones were used as stimuli. Lexical-tone stimuli were obtained, with slight modifications, from those used in our previous study (Luo et al., 2006), in which the Mandarin consonant-vowel (CV) syllables /bai1/ and /bai4/ were employed, originally pronounced by an adult male Mandarin speaker (Sinica Corpus, Institute of Linguistics, Chinese Academy of Social Sciences, Beijing, China). Pure tones were generated with Audition 3.0 (Adobe Systems Inc., Mountain View, CA, USA). Each lexical tone was normalized to 350 ms in duration, each pure tone was 200 ms, and both included a 5-ms linear rise and fall time. The lexical-tone contrast consisted of /bai1/ frequently presented as the standard stimulus and /bai4/ infrequently presented as the deviant stimulus within the auditory stream. The pure-tone contrast consisted of pure tones frequently presented at 550 Hz as the standard stimulus and infrequently presented at 350 Hz as the deviant stimulus.
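The standard and deviant pure tones described above can be synthesized directly. A minimal sketch (the 44.1 kHz sampling rate is an assumption; the protocol does not report one):

```python
import numpy as np

def pure_tone(freq_hz, dur_ms, fs=44100, ramp_ms=5):
    """Sine tone with linear rise and fall ramps, as described above."""
    n = int(fs * dur_ms / 1000)
    t = np.arange(n) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    r = int(fs * ramp_ms / 1000)
    env = np.ones(n)
    env[:r] = np.linspace(0.0, 1.0, r)   # 5 ms linear rise
    env[-r:] = np.linspace(1.0, 0.0, r)  # 5 ms linear fall
    return tone * env

standard = pure_tone(550, 200)  # frequent 550 Hz standard
deviant = pure_tone(350, 200)   # infrequent 350 Hz deviant
```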
10. Standardized Sound Normalization Protocol

The sounds were taken from our laboratory's databank. All sounds had an equal duration (800 ms). Sounds were peak-normalized in Adobe Audition 3.0 with a 2 ms fade-in, and sound intensity was measured with a sonometer (Roline RO-1350).
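The same two-step processing can be sketched in Python. The 0.99 target peak and 44.1 kHz sampling rate are assumptions; the protocol reports neither:

```python
import numpy as np

def normalize_with_fade_in(signal, fs, fade_ms=2.0, peak=0.99):
    """Peak-normalize a sound, then apply a short linear fade-in,
    mirroring the Audition processing described above."""
    out = np.asarray(signal, dtype=float).copy()
    m = np.max(np.abs(out))
    if m > 0:
        out *= peak / m                   # peak normalization
    n = int(fs * fade_ms / 1000)          # fade-in length in samples
    out[:n] *= np.linspace(0.0, 1.0, n)   # linear fade-in
    return out
```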

About PubCompare

Our mission is to give scientists the largest repository of trustworthy protocols, together with intelligent analytical tools, so that they have the information they need to design robust protocols that minimize the risk of failure.

We believe the most important thing is to give scientists access to a wide range of reliable sources, along with new tools that go beyond what manual searching can achieve.

At the same time, we trust scientists to decide how to build their own protocols from this information, since they are the experts in their field.

Ready to get started?

Sign up for free.
Registration takes 20 seconds.
Available from any computer
No download required

Sign up now

Revolutionizing how scientists
search and build protocols!