PLOS ONE: search results for subject "Bioacoustics" (sorted by date, newest first). All PLOS articles are Open Access.
Feed: https://journals.plos.org/plosone/search/feed/atom?sortOrder=DATE_NEWEST_FIRST&unformattedQuery=subject:%22Bioacoustics%22&sort=Date,+newest+first&filterJournals=PLoSONE (retrieved 2024-03-19)

Neural correlates of bilateral proprioception and adaptation with training
by Sebastian Rueda Parra, Joel C. Perry, Eric T. Wolbrecht, Disha Gupta
DOI: 10.1371/journal.pone.0299873 (published 2024-03-15)

Bilateral proprioception includes the ability to sense the position and motion of one hand relative to the other, without looking. This sensory ability allows us to perform daily activities seamlessly, and its impairment is observed in various neurological disorders such as cerebral palsy and stroke. It can undergo experience-dependent plasticity, as seen in trained piano players. If its neural correlates were better understood, it would provide a useful assay and target for neurorehabilitation for people with impaired proprioception. We designed a non-invasive electroencephalography-based paradigm to assess the neural features relevant to proprioception, especially focusing on bilateral proprioception, i.e., assessing the limb distance from the body with the other limb. We compared it with a movement-only task, with and without the visibility of the target hand. Additionally, we explored proprioceptive accuracy during the tasks. We tested eleven Controls and nine Skilled musicians to assess whether sensorimotor event-related spectral perturbations in μ (8–12 Hz) and low-β (12–18 Hz) rhythms differ in people with musical instrument training, which intrinsically involves a bilateral proprioceptive component, or when new sensory modalities are added to the task. The Skilled group showed significantly reduced μ and low-β suppression in bilateral tasks compared to movement-only, a significant difference relative to Controls. This may be explained by reduced top-down control due to intensive training; despite this, proprioceptive errors were not smaller for this group. Target visibility significantly reduced proprioceptive error in Controls, while no change was observed in the Skilled group. During visual tasks, Controls exhibited significant μ and low-β power reversals, with significant differences relative to proprioceptive-only tasks compared to the Skilled group, possibly due to reduced uncertainty and top-down control. These results provide support for sensorimotor μ and low-β suppression as potential neuromarkers for assessing proprioceptive ability. The identification of these features is significant as they could be used to quantify altered proprioceptive neural processing in skill and movement disorders. This in turn can be useful as an assay in pre- and post-intervention sensorimotor research.
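
The μ and low-β suppression reported above is an event-related desynchronization (ERD) style measure. As a rough Python sketch of how band-limited suppression can be computed from EEG epochs (not the authors' pipeline; the sampling rate, epoch lengths, and baseline definition are assumptions):

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(x, fs, lo, hi):
    """Mean power of signal x in the [lo, hi] Hz band (Welch PSD)."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def suppression(task_epoch, baseline_epoch, fs=FS, band=(8, 12)):
    """Event-related suppression in percent: negative values mean the
    band power dropped during the task relative to baseline (ERD)."""
    p_task = band_power(task_epoch, fs, *band)
    p_base = band_power(baseline_epoch, fs, *band)
    return 100.0 * (p_task - p_base) / p_base

# Toy example with synthetic single-channel epochs (2 s each)
rng = np.random.default_rng(0)
baseline = rng.standard_normal(2 * FS)
task = rng.standard_normal(2 * FS) * 0.8  # lower power during the task window

print("mu suppression (%):", round(suppression(task, baseline, band=(8, 12)), 1))
print("low-beta suppression (%):", round(suppression(task, baseline, band=(12, 18)), 1))
```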

Spatial distribution and movement of Atlantic tarpon (<i>Megalops atlanticus</i>) in the northern Gulf of Mexico
by Shane A. Stephens, Michael A. Dance, Michelle Zapp Sluis, Richard J. Kline, Matthew K. Streich, Gregory W. Stunz, Aaron J. Adams, R. J. David Wells, Jay R. Rooker
DOI: 10.1371/journal.pone.0298394 (published 2024-03-07)

Atlantic tarpon (<i>Megalops atlanticus</i>) are capable of long-distance migrations (hundreds of kilometers) but also exhibit resident behaviors in estuarine and coastal habitats. The aim of this study was to characterize the spatial distribution of juvenile tarpon and identify migration pathways of adult tarpon in the northern Gulf of Mexico. Spatial distribution of juvenile tarpon was investigated using gillnet data collected by Texas Parks and Wildlife Department (TPWD) over the past four decades. Generalized additive models (GAMs) indicated that salinity and water temperature played a significant role in tarpon presence, with tarpon occurrences peaking in the fall and increasing over the past four decades in this region. Adult tarpon caught off Texas (n = 40) and Louisiana (n = 4) were tagged with acoustic transmitters to characterize spatial and temporal trends in their movements and migrations. Of the 44 acoustic transmitters deployed, 18 individuals were detected (n = 16 west of the Mississippi River Delta and n = 2 east of the Mississippi River Delta). Tarpon tagged west of the Mississippi River Delta off Texas migrated south in the fall and winter into areas of south Texas and potentially into Mexico, while individuals tagged east of the delta migrated into Florida during the same time period, suggesting the presence of two unique migratory contingents or subpopulations in this region. An improved understanding of the habitat requirements and migratory patterns of tarpon inhabiting the Gulf of Mexico is critically needed by resource managers to assess the vulnerability of each contingent to fishing pressure, and this information will guide multi-state and multi-national conservation efforts to rebuild and sustain tarpon populations.
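
The presence/absence modelling described above relies on generalized additive models. Below is a minimal Python sketch of that kind of fit using the pyGAM library (the authors' GAMs were not necessarily built with this tool; the covariates, smooth terms, and the synthetic data frame are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from pygam import LogisticGAM, s, f

# Hypothetical gillnet records: presence/absence of juvenile tarpon per net set
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "salinity": rng.uniform(0, 40, 500),       # psu
    "temperature": rng.uniform(10, 35, 500),   # degrees C
    "month": rng.integers(1, 13, 500),         # month of the set
    "present": rng.binomial(1, 0.1, 500),      # 1 = tarpon caught in the set
})

# Smooth terms for the continuous covariates, a factor term for month
X = df[["salinity", "temperature", "month"]].to_numpy()
y = df["present"].to_numpy()
gam = LogisticGAM(s(0) + s(1) + f(2)).fit(X, y)

gam.summary()  # approximate significance of each smooth/factor term
```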

The relationship between alpha power and heart rate variability commonly seen in various mental states
by Tomoya Kawashima, Honoka Shiratori, Kaoru Amano
DOI: 10.1371/journal.pone.0298961 (published 2024-03-01)

The extensive exploration of the correlation between electroencephalogram (EEG) and heart rate variability (HRV) has yielded inconsistent outcomes, largely attributable to variations in the tasks employed in the studies. The direct relationship between EEG and HRV is further complicated by alpha power, which is susceptible to influences such as mental fatigue and sleepiness. This research endeavors to examine the brain-heart interplay typically observed during periods of music listening and rest. In an effort to mitigate the indirect effects of mental states on alpha power, subjective fatigue and sleepiness were measured during rest, while emotional valence and arousal were evaluated during music listening. Partial correlation analyses unveiled positive associations between occipital alpha2 power (10–12 Hz) and nHF, an indicator of parasympathetic activity, under both music and rest conditions. These findings underscore brain-heart interactions that persist even after the effects of other variables have been accounted for.
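
A partial correlation that controls for subjective state, as described above, can be computed along the following lines (a minimal sketch; the column names and covariates are assumptions, and pingouin is just one convenient choice rather than the authors' tooling):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical per-participant values from the rest condition
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "alpha2": rng.normal(size=40),      # occipital 10-12 Hz power
    "nHF": rng.normal(size=40),         # normalized high-frequency HRV
    "sleepiness": rng.normal(size=40),  # subjective rating
    "fatigue": rng.normal(size=40),     # subjective rating
})

# Correlation between alpha2 power and nHF with sleepiness/fatigue partialled out
res = pg.partial_corr(data=df, x="alpha2", y="nHF",
                      covar=["sleepiness", "fatigue"], method="pearson")
print(res[["n", "r", "p-val"]])
```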

Recognition of bird species with birdsong records using machine learning methods
by Yi Tang, Chenshu Liu, Xiang Yuan
DOI: 10.1371/journal.pone.0297988 (published 2024-02-23)

The recognition of bird species through the analysis of their vocalizations is a crucial aspect of wildlife conservation and biodiversity monitoring. In this study, the acoustic features of <i>Certhia americana</i>, <i>Certhia brachydactyla</i>, and <i>Certhia familiaris</i> were calculated, including the Acoustic Complexity Index (ACI), Acoustic Diversity Index (ADI), Acoustic Evenness Index (AEI), Bioacoustic Index (BI), Median of the Amplitude Envelope (MA), and Normalized Difference Soundscape Index (NDSI). Three machine learning models, Random Forest (RF), Support Vector Machine (SVM), and Extreme Gradient Boosting (XGBoost), were constructed. The results showed that the XGBoost model had the best performance among the three models, with the highest accuracy (0.8365) and the highest AUC (0.8871). This suggests that XGBoost is an effective tool for bird species recognition based on acoustic indices. The study provides a new approach to bird species recognition that utilizes sound data and acoustic characteristics.
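
As a rough illustration of the classification step, the sketch below fits an XGBoost model on a matrix of acoustic indices and reports accuracy and AUC (not the authors' exact pipeline; the feature matrix is assumed to be precomputed and the hyperparameters are arbitrary):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from xgboost import XGBClassifier

# X: one row per recording with columns [ACI, ADI, AEI, BI, MA, NDSI]
# y: integer species label (0, 1, 2 for the three Certhia species)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="mlogloss")
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("macro AUC (one-vs-rest):", roc_auc_score(y_te, proba, multi_class="ovr"))
```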

Audiovisualization of real-time neuroimaging data
by David N. Thibodeaux, Mohammed A. Shaik, Sharon H. Kim, Venkatakaushik Voleti, Hanzhi T. Zhao, Sam E. Benezra, Chinwendu J. Nwokeabia, Elizabeth M. C. Hillman
DOI: 10.1371/journal.pone.0297435 (published 2024-02-21)

Advancements in brain imaging techniques have significantly expanded the size and complexity of real-time neuroimaging and behavioral data. However, identifying patterns, trends and synchronies within these datasets presents a significant computational challenge. Here, we demonstrate an approach that can translate time-varying neuroimaging data into unique audiovisualizations consisting of audible representations of dynamic data merged with simplified, color-coded movies of spatial components and behavioral recordings. Multiple variables can be encoded as different musical instruments, letting the observer differentiate and track multiple dynamic parameters in parallel. This representation enables intuitive assimilation of these datasets for behavioral correlates and spatiotemporal features such as patterns, rhythms and motifs that could be difficult to detect through conventional data interrogation methods. These audiovisual representations provide a novel perception of the organization and patterns of real-time activity in the brain, and offer an intuitive and compelling method for complex data visualization for a wider range of applications.
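
To make the instrument-per-variable idea concrete, here is a minimal sonification sketch that maps two hypothetical time series onto two MIDI instruments (this is not the authors' toolchain; the pitch mapping, instrument choices, and time step are illustrative assumptions):

```python
import numpy as np
import pretty_midi

def series_to_notes(values, program, step=0.25, low=48, high=84):
    """Map a 1-D signal to a melodic line: each sample becomes one note
    whose pitch scales linearly with the sample's value."""
    inst = pretty_midi.Instrument(program=program)
    v = (values - values.min()) / (values.max() - values.min() + 1e-12)
    for i, x in enumerate(v):
        pitch = int(low + x * (high - low))
        inst.notes.append(pretty_midi.Note(velocity=90, pitch=pitch,
                                           start=i * step, end=(i + 1) * step))
    return inst

# Two hypothetical dynamic variables (e.g., two spatial components over time)
t = np.linspace(0, 8 * np.pi, 64)
signal_a, signal_b = np.sin(t), np.cos(0.5 * t)

pm = pretty_midi.PrettyMIDI()
pm.instruments.append(series_to_notes(
    signal_a, pretty_midi.instrument_name_to_program("Acoustic Grand Piano")))
pm.instruments.append(series_to_notes(
    signal_b, pretty_midi.instrument_name_to_program("Violin")))
pm.write("audiovisualization_sketch.mid")  # one instrument per variable
```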

Rhythmic properties of <i>Sciaena umbra</i> calls across space and time in the Mediterranean Sea
by Marta Picciulin, Marta Bolgan, Lara S. Burchardt
DOI: 10.1371/journal.pone.0295589 (published 2024-02-21)

In animals, the rhythmical properties of calls are known to be shaped by physical constraints and the necessity of conveying information. As a consequence, investigating rhythmical properties in relation to different environmental conditions can help to shed light on the relationship between environment and species behavior from an evolutionary perspective. <i>Sciaena umbra</i> (fam. Sciaenidae) male fish emit reproductive calls characterized by a simple isochronous, i.e., metronome-like rhythm (the so-called R-pattern). Here, <i>S. umbra</i> R-pattern rhythm properties were assessed and compared between four different sites located along the Mediterranean basin (Mallorca, Venice, Trieste, Crete); furthermore, for one location, two datasets collected 10 years apart were available. Recording sites differed in habitat types, vessel density and acoustic richness; despite this, <i>S. umbra</i> R-calls were isochronous across all locations. A degree of variability was found only when considering the beat frequency, which was temporally stable, but spatially variable, with the beat frequency being faster in one of the sites (Venice). Statistically, the beat frequency was found to be dependent on the season (i.e., month of recording) and potentially influenced by the presence of soniferous competitors and human-generated underwater noise. Overall, the general consistency in the measured rhythmical properties (isochrony and beat frequency) suggests their nature as a fitness-related trait in the context of the <i>S. umbra</i> reproductive behavior and calls for further evaluation as a communicative cue.
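
A common way to quantify beat frequency and isochrony is from the inter-onset intervals (IOIs) of detected calls. The sketch below assumes call onset times have already been extracted and is a generic IOI analysis, not necessarily the rhythm-analysis method used in the paper:

```python
import numpy as np

def rhythm_stats(onsets_s):
    """Beat frequency and a simple isochrony measure from call onset times (seconds)."""
    onsets = np.sort(np.asarray(onsets_s, dtype=float))
    iois = np.diff(onsets)                 # inter-onset intervals
    beat_hz = 1.0 / np.median(iois)        # beats per second
    cv = iois.std(ddof=1) / iois.mean()    # coefficient of variation: ~0 means isochronous
    return beat_hz, cv

# Hypothetical R-pattern: pulses roughly every 0.55 s with small timing jitter
rng = np.random.default_rng(3)
onsets = np.cumsum(0.55 + rng.normal(0, 0.02, size=30))

beat_hz, cv = rhythm_stats(onsets)
print(f"beat frequency: {beat_hz:.2f} Hz, IOI coefficient of variation: {cv:.3f}")
```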

Preventing fear return in humans: Music-based intervention during reactivation-extinction paradigm
by Ankita Verma, Sharmili Mitra, Abdulrahman Khamaj, Vivek Kant, Manish Kumar Asthana
DOI: 10.1371/journal.pone.0293880 (published 2024-02-21)

In several research studies, the reactivation-extinction paradigm did not effectively prevent the return of fear if administered without any intervention technique. Therefore, in this study, the authors hypothesized that playing music (high valence, low arousal) during the reconsolidation window may be a viable intervention technique for eliminating fear-related responses. A three-day auditory differential fear conditioning paradigm was used to establish fear conditioning. Participants were randomly assigned to three groups of twenty participants each: one control group, standard extinction (SE), and two experimental groups, reactivation extinction (RE) and music reactivation extinction (MRE). Day 1 included the habituation and fear acquisition phases; on Day 2 (after 24 hours), the intervention was conducted, and re-extinction took place on Day 3. Skin conductance responses were used as the primary outcome measure. Results indicated that the MRE group was more effective in reducing fear response than the RE and SE groups in the re-extinction phase. Furthermore, there was no significant difference observed between the SE and RE groups. This is the first study known to demonstrate the effectiveness of a music intervention in preventing the return of fear in healthy individuals. Therefore, it might also be employed as a non-pharmacological intervention strategy for military veterans, for emotion regulation, and for those diagnosed with post-traumatic stress disorder or suffering from specific phobias.

Rats that learn to vocalize for food reward emit longer and louder appetitive calls and fewer short aversive calls
by Agnieszka D. Wardak, Krzysztof H. Olszyński, Rafał Polowy, Jan Matysiak, Robert K. Filipkowski
DOI: 10.1371/journal.pone.0297174 (published 2024-02-09)

Rats are social animals that use ultrasonic vocalizations (USV) in their intraspecific communication. Several types of USV have been previously described, e.g., appetitive 50-kHz USV and aversive short 22-kHz USV. It is not fully understood which aspects of the USV repertoire play important functions during rat ultrasonic exchange. Here, we investigated features of USV emitted by rats trained in operant conditioning, a form of associative learning between behavior and its consequences, to reinforce the production/emission of 50-kHz USV. Twenty percent of the trained rats learned to vocalize to receive a reward according to an arbitrarily set criterion, i.e., reaching the maximum number of proper responses by the end of each of the last three USV-training sessions, as well as according to a set of measurements independent of the criterion (e.g., shortening of training sessions). Over the training days, these rats also exhibited an increasing percentage of rewarded 50-kHz calls, longer and louder 50-kHz calls, and a decreasing number of short 22-kHz calls. As a result, the potentially learning rats, when compared to non-learning rats, displayed shorter training sessions and a different USV structure, i.e., higher call rates, more rewarded 50-kHz calls, longer and louder 50-kHz calls and fewer short 22-kHz calls. Finally, we reviewed the current literature on the different lengths of 50-kHz calls in different behavioral contexts and the potential function of short 22-kHz calls, and speculate that USV may not easily become an operant response due to their primary biological role, i.e., communication of emotional state between conspecifics.

Validation and applicability of the Music Ear Test on a large Chinese sample
by Xiaoyu Wang, Xiubo Ren, Shidan Wang, Dan Yang, Shilin Liu, Meihui Li, Mingyi Yang, Yintong Liu, Qiujian Xu
DOI: 10.1371/journal.pone.0297073 (published 2024-02-07)

In the context of extensive disciplinary integration, researchers worldwide have increasingly focused on musical ability. However, despite the wide range of available music ability tests, there remains a dearth of validated tests applicable to China. The Music Ear Test (MET) is a validated scale that has been reported to be potentially suitable for cross-cultural distribution in a Chinese sample. However, no formal translation and cross-cultural reliability/validity tests have been conducted for the Chinese population in any of the studies using the Music Ear Test. This study aims to assess the factor structure, convergence, predictiveness, and validity of the Chinese version of the MET, based on a large sample of Chinese participants (n≥1235). Furthermore, we seek to determine whether variables such as music training level, response pattern, and demographic data such as gender and age have intervening effects on the results. In doing so, we aim to provide clear indications of musical aptitude and expertise by validating an existing instrument, the Music Ear Test, and provide a valid method for further understanding the musical abilities of the Chinese sample.

Heterogeneous fusion of biometric and deep physiological features for accurate porcine cough recognition
by Buyu Wang, Jingwei Qi, Xiaoping An, Yuan Wang
DOI: 10.1371/journal.pone.0297655 (published 2024-02-01)

Accurate identification of porcine cough plays a vital role in comprehensive respiratory health monitoring and diagnosis of pigs. It serves as a fundamental prerequisite for stress-free animal health management, reducing pig mortality rates, and improving the economic efficiency of the farming industry. Creating a representative multi-source signal signature for porcine cough is a crucial step toward automating its identification. To this end, a feature fusion method that combines the biological features extracted from the acoustic source segment with the deep physiological features derived from thermal source images is proposed in this paper. First, acoustic features from various domains are extracted from the sound source signals. To determine the most effective combination of sound source features, an SVM-based recursive feature elimination cross-validation algorithm (SVM-RFECV) is employed. Second, a shallow convolutional neural network (named ThermographicNet) is constructed to extract deep physiological features from the thermal source images. Finally, the two heterogeneous features are integrated at an early stage and input into a support vector machine (SVM) for porcine cough recognition. Through rigorous experimentation, the performance of the proposed fusion approach is evaluated, achieving an impressive accuracy of 98.79% in recognizing porcine cough. These results further underscore the effectiveness of combining acoustic source features with heterogeneous deep thermal source features, thereby establishing a robust feature representation for porcine cough recognition.
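
A rough scikit-learn sketch of the feature-selection and early-fusion recipe described above (it mirrors the general steps only; the actual ThermographicNet features, kernels, and hyperparameters are assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 400
acoustic = rng.normal(size=(n, 40))   # hand-crafted acoustic features per sound event
thermal = rng.normal(size=(n, 128))   # deep features from a CNN over thermal images (assumed precomputed)
y = rng.integers(0, 2, size=n)        # 1 = cough, 0 = other sound

# Step 1: SVM-based recursive feature elimination with cross-validation (RFECV)
selector = RFECV(SVC(kernel="linear"), step=1, cv=StratifiedKFold(5))
acoustic_sel = selector.fit_transform(acoustic, y)

# Step 2: early fusion by concatenating the two heterogeneous feature blocks
fused = np.hstack([acoustic_sel, thermal])

# Step 3: SVM on the fused representation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, fused, y, cv=5).mean())
```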

TenseMusic: An automatic prediction model for musical tension
by Alice Vivien Barchet, Johanna M. Rimmele, Claire Pelofi
DOI: 10.1371/journal.pone.0296385 (published 2024-01-19)

The perception of tension and release dynamics constitutes one of the essential aspects of music listening. However, modeling musical tension to predict listeners' perception has been a challenge for researchers. Seminal work demonstrated that tension is reported consistently by listeners and can be accurately predicted from a discrete set of musical features, combining them into a weighted sum of slopes reflecting their combined dynamics over time. However, previous modeling approaches lack an automatic pipeline for feature extraction that would make them widely accessible to researchers in the field. Here, we present TenseMusic: an open-source automatic predictive tension model that operates with musical audio as the only input. Using state-of-the-art music information retrieval (MIR) methods, it automatically extracts a set of six features (i.e., loudness, pitch height, tonal tension, roughness, tempo, and onset frequency) to use as predictors for musical tension. The algorithm was optimized using Lasso regression to best predict behavioral tension ratings collected on 38 Western classical musical pieces. Its performance was then tested by assessing the correlation between the predicted tension and unseen continuous behavioral tension ratings, yielding large mean correlations between ratings and predictions of approximately r = .60 across all pieces. We hope that providing the research community with this well-validated open-source tool for predicting musical tension will motivate further work in music cognition and contribute to elucidating the neural and cognitive correlates of tension dynamics for various musical genres and cultures.
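
A minimal illustration of the extract-features-then-Lasso recipe (the actual TenseMusic features and implementation differ; the proxies below, such as RMS energy for loudness and spectral centroid for pitch height, and the fabricated ratings are assumptions made for the sketch):

```python
import numpy as np
import librosa
from sklearn.linear_model import LassoCV

def crude_tension_features(path, frame_s=1.0):
    """Per-second proxies for a few tension-related features of an audio file."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    hop = int(frame_s * sr)
    rms = librosa.feature.rms(y=y, frame_length=hop, hop_length=hop)[0]          # ~loudness
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]  # ~pitch height
    onset = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)             # ~onset frequency
    n = min(len(rms), len(centroid), len(onset))
    return np.column_stack([rms[:n], centroid[:n], onset[:n]])

# X: per-second features stacked across pieces; ratings: continuous tension ratings
# (fabricated here so the sketch runs without audio files)
X = np.random.default_rng(0).normal(size=(600, 3))
ratings = X @ np.array([0.8, 0.3, 0.5]) + np.random.default_rng(1).normal(0, 0.5, 600)

model = LassoCV(cv=5).fit(X, ratings)
print("feature weights:", model.coef_)
print("in-sample correlation:", np.corrcoef(model.predict(X), ratings)[0, 1])
```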

Soundscapes of morality: Linking music preferences and moral values through lyrics and audio
by Vjosa Preniqi, Kyriaki Kalimeri, Charalampos Saitis
DOI: 10.1371/journal.pone.0294402 (published 2023-11-29)

Music is a fundamental element in every culture, serving as a universal means of expressing our emotions, feelings, and beliefs. This work investigates the link between our moral values and musical choices through lyrics and audio analyses. We align the psychometric scores of 1,480 participants to acoustic and lyric features obtained from the top 5 songs of their preferred music artists, identified from Facebook Page Likes. We employ a variety of lyric text processing techniques, including lexicon-based approaches and BERT-based embeddings, to identify each song’s narrative, moral valence, attitude, and emotions. In addition, we extract both low- and high-level audio features to comprehend the encoded information in participants’ musical choices and improve the moral inferences. We propose a Machine Learning approach and assess the predictive power of lyrical and acoustic features separately and in a multimodal framework for predicting moral values. Results indicate that lyrics and audio features from the artists people like inform us about their morality. Though the most predictive features vary per moral value, the models that utilised a combination of lyrics and audio characteristics were the most successful in predicting moral values, outperforming the models that only used basic features such as user demographics, the popularity of the artists, and the number of likes per user. Audio features boosted the accuracy in the prediction of empathy and equality compared to textual features, while the opposite happened for hierarchy and tradition, where higher prediction scores were driven by lyrical features. This demonstrates the importance of both lyrics and audio features in capturing moral values. The insights gained from our study have a broad range of potential uses, including customising the music experience to meet individual needs, music rehabilitation, or even crafting effective communication campaigns.
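
As an illustration of the multimodal idea, the sketch below embeds lyric snippets with a sentence-level encoder, concatenates them with audio features, and fits a classifier. It is a generic stand-in, not the authors' model; the encoder name, feature sets, and labels are assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical data: one lyric snippet per user plus low-level audio features
lyrics = ["we rise together, hand in hand", "obey the old ways, honor and duty",
          "freedom on the open road", "give what you have to the ones in need"]
audio_feats = np.random.default_rng(0).normal(size=(4, 8))  # e.g., energy, valence, tempo...
y = np.array([1, 0, 1, 0])  # high vs. low score on one moral value (illustrative)

# Text embeddings (any sentence-level encoder would do here)
encoder = SentenceTransformer("all-MiniLM-L6-v2")
text_emb = encoder.encode(lyrics)

# Early fusion: concatenate text embeddings with audio features
X = np.hstack([text_emb, audio_feats])

clf = GradientBoostingClassifier().fit(X, y)
print("in-sample predictions:", clf.predict(X))
```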

Validation of the F-POD—A fully automated cetacean monitoring system
by Julia Ivanchikova, Nicholas Tregenza
DOI: 10.1371/journal.pone.0293402 (published 2023-11-17)

The F-POD, an echolocation-click logging device, is commonly used for passive acoustic monitoring of cetaceans. This paper presents the first assessment of the error rate of fully automated analysis by this system, a description of the F-POD hardware, and a description of the KERNO-F v1.0 classifier, which identifies click trains. Since 2020, twenty F-POD loggers have been used in the BlackCeTrends project by research teams from Bulgaria, Georgia, Romania, Türkiye, and Ukraine with the aim of investigating trends of relative abundance in populations of cetaceans of the Black Sea. The acoustic data from this project analysed here comprise 9 billion raw data clicks in total, of which 297 million were classified by KERNO-F as Narrow Band High Frequency (NBHF) clicks (harbour porpoise clicks) and 91 million as dolphin clicks. Such data volumes require a reliable automated system of analysis, which we describe. A total of 16,805 Detection Positive Minutes (DPM) were individually inspected and assessed by a visual check of click train characteristics in each DPM. To assess the overall error rate in each species group we investigated 2,000 DPM classified as having NBHF clicks and 2,000 DPM classified as having dolphin clicks. The fraction of NBHF DPM containing misclassified NBHF trains was less than 0.1%, and for dolphins the corresponding error rate was 0.97%. For both species groups (harbour porpoises and dolphins), these error rates are acceptable for further study of cetaceans in the Black Sea using the automated classification without further editing of the data. The main sources of errors were 0.17% of boat sonar DPMs misclassified as harbour porpoises, and 0.14% of harbour porpoise DPMs misclassified as dolphins. The potential to estimate the rate at which these sources generate errors makes possible a new predictive approach to overall error estimation.
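
The headline error rates above come from manual inspection of samples of DPM. A quick sketch of how such a rate and its uncertainty can be computed is given below (the counts are hypothetical stand-ins, and the Wilson interval is an assumption rather than the authors' stated method):

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical counts for illustration: 2,000 inspected DPM, 19 containing misclassified trains
n_inspected = 2000
n_errors = 19

rate = n_errors / n_inspected
low, high = proportion_confint(n_errors, n_inspected, alpha=0.05, method="wilson")
print(f"error rate: {100 * rate:.2f}% (95% CI {100 * low:.2f}% to {100 * high:.2f}%)")
```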

Songbird mesostriatal dopamine pathways are spatially segregated before the onset of vocal learning
by Malavika Ramarao, Caleb Jones, Jesse H. Goldberg, Andrea Roeser
DOI: 10.1371/journal.pone.0285652 (published 2023-11-16)

Diverse dopamine (DA) pathways send distinct reinforcement signals to different striatal regions. In adult songbirds, a DA pathway from the ventral tegmental area (VTA) to Area X, the striatal nucleus of the song system, carries singing-related performance error signals important for learning. Meanwhile, a parallel DA pathway to a medial striatal area (MST) arises from a distinct group of neighboring DA neurons that lack connectivity to song circuits and do not encode song error. To test if the structural and functional segregation of these two pathways depends on singing experience, we carried out anatomical studies early in development before the onset of song learning. We find that distinct VTA neurons project to either Area X or MST in juvenile birds before the onset of substantial vocal practice. Quantitative comparisons of early juvenile (30–35 days post hatch, dph), late juvenile (60–65 dph), and adult (>90 dph) brains revealed an outsized expansion of Area X-projecting neurons relative to MST-projecting neurons in VTA over development. These results show that a mesostriatal DA system dedicated to social communication can exist and be spatially segregated before the onset of vocal practice and associated sensorimotor experience.

Music listening evokes story-like visual imagery with both idiosyncratic and shared content
by Sarah Hashim, Lauren Stewart, Mats B. Küssner, Diana Omigie
DOI: 10.1371/journal.pone.0293412 (published 2023-10-26)

There is growing evidence that music can induce a wide range of visual imagery. To date, however, there have been few thorough investigations into the specific content of music-induced visual imagery, and whether listeners exhibit consistency within themselves and with one another regarding their visual imagery content. We recruited an online sample (N = 353) who listened to three orchestral film music excerpts representing happy, tender, and fearful emotions. For each excerpt, listeners rated how much visual imagery they were experiencing and how vivid it was, their liking of and felt emotional intensity in response to the excerpt, and, finally, described the content of any visual imagery they may have been experiencing. Further, they completed items assessing a number of individual differences including musical training and general visual imagery ability. Of the initial sample, 254 respondents completed the survey again three weeks later. A thematic analysis of the content descriptions revealed three higher-order themes of prominent visual imagery experiences: <i>Storytelling</i> (imagined locations, characters, actions, etc.), <i>Associations</i> (emotional experiences, abstract thoughts, and memories), and <i>References</i> (origins of the visual imagery, e.g., film and TV). Although listeners demonstrated relatively low visual imagery consistency with each other, levels were higher when considering visual imagery content within individuals across timepoints. Our findings corroborate past literature regarding music’s capacity to encourage narrative engagement. They extend it, however, by (a) showing that such engagement is highly visual and contains other types of imagery to a lesser extent, (b) indicating the idiosyncratic tendencies of listeners’ imagery consistency, and (c) revealing key factors influencing consistency levels (e.g., vividness of visual imagery and emotional intensity ratings in response to music). Further implications are discussed in relation to visual imagery’s purported involvement in music-induced emotions and aesthetic appeal.