Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that can decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bidirectional long short-term memory (biLSTM) deep learning network, first to extract neural information from the EEG related to music listening and then to decode and reconstruct the individual pieces of music a participant was listening to. We further validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to reconstruct music, via our fMRI-informed EEG source analysis approach, with a mean rank accuracy of 71.8% (n = 18, p < 0.05). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% (n = 19, p < 0.05). This demonstrates that our decoding model can use fMRI-informed source analysis to aid EEG-based decoding and reconstruction of acoustic information from brain activity, and it takes a step towards building EEG-based neural decoders for other complex information domains such as other acoustic, visual, or semantic information.

Measurement(s): brain activity. Technology type(s): stereotactic electroencephalography. Sample characteristics: organism, Homo sapiens; environment, epilepsy monitoring center; location, the Netherlands.

Successful communication and cooperation among different members of society depends, in part, on a consistent understanding of the physical and social world. What drives this alignment in perspectives? We present evidence from two neuroimaging studies using functional magnetic resonance imaging (fMRI; N = 66 with 2,145 dyadic comparisons) and electroencephalography (EEG; N = 225 with 25,200 dyadic comparisons) showing that: (1) the extent to which people's neural responses are synchronized when viewing naturalistic stimuli is related to their personality profiles, and (2) this effect is stronger than that of similarity in gender, ethnicity, and political affiliation. The localization of the fMRI results, in combination with the additional eye-tracking analyses, suggests that the relationship between personality similarity and neural synchrony likely reflects alignment in the interpretation of stimuli and not alignment in overt visual attention. Together, the findings suggest that similarity in psychological dispositions aligns people's reality via shared interpretations of the external world.
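The mean rank accuracy reported in the music-decoding abstract above can be made concrete with a short sketch. The paper's exact scoring procedure and feature space are not specified here, so this is a minimal, hypothetical implementation of the conventional rank-accuracy metric, assuming per-trial feature vectors for reconstructions and true stimuli:

```python
import numpy as np

def mean_rank_accuracy(reconstructions, targets):
    """Normalised rank of the true stimulus for each reconstruction.

    reconstructions : (n_trials, n_features) decoded feature vectors
    targets         : (n_trials, n_features) true stimulus features
    Returns a score in [0, 1]; 0.5 is chance, 1.0 is perfect.
    """
    n = len(targets)
    ranks = []
    for i, rec in enumerate(reconstructions):
        # Correlate this reconstruction with every candidate stimulus.
        corrs = np.array([np.corrcoef(rec, t)[0, 1] for t in targets])
        # Count the competitors that the true stimulus out-scores.
        ranks.append(np.sum(corrs[i] > corrs[np.arange(n) != i]) / (n - 1))
    return float(np.mean(ranks))

# Toy usage: random reconstructions should score near chance (~0.5).
rng = np.random.default_rng(0)
recs = rng.normal(size=(18, 100))
tgts = rng.normal(size=(18, 100))
print(mean_rank_accuracy(recs, tgts))
```

Under this scoring, chance performance is 0.5, so both reported accuracies (71.8% and 59.2%) sit above chance, consistent with the stated p < 0.05.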

A new musical test can help to detect cognitive decline in old age.

Sounds presented during sleep that were associated with previously learned stimuli reactivated memories and improved memory storage.

EEG and skin conductance studies reveal that verbal insults elicit strong P2 effects in brain waves, indicating heightened sensitivity in the brain to negative words. Verbal insults trigger a cascade of consecutive and overlapping processing effects, and different parts of the cascade may be differently affected by repetition, resulting in a consistently strong emotional response over time.

An RCT shows that a few weeks of music lessons enhance audio-visual temporal processing

Music involves different senses and is emotional in nature, and musicians show enhanced detection of audio-visual temporal discrepancies and enhanced emotion recognition compared to non-musicians. However, whether musical training produces these abilities or whether they are innate to musicians remains unclear. Thirty-one adult participants were randomly assigned to a music-training, music-listening, or control group; all completed a one-hour session per week for 11 weeks. The music-training group received piano training, the music-listening group listened to the same music, and the control group did their homework. Measures of audio-visual temporal discrepancy detection, facial expression recognition, autistic traits, depression, anxiety, stress, and mood were collected and compared from the beginning to the end of training. ANOVA results revealed that only the music-training group showed a significant improvement in the detection of audio-visual temporal discrepancies relative to the other groups, for both stimulus types (flash-beep and face-voice). However, music training did not improve emotion recognition from facial expressions compared to the control group, although it did reduce levels of depression, stress, and anxiety relative to baseline. This RCT provides the first evidence of a causal effect of music training on improved audio-visual perception that goes beyond the music domain.

Measurement(s): brain activity, brain structure. Technology type(s): magnetoencephalography, magnetic resonance imaging. Factor type(s): audiovisual movie. Sample characteristic: organism, Homo sapiens.

Corporal punishment increases the risk of developing anxiety and depression in adolescents, researchers report. Additionally, corporal punishment alters brain activity and impacts brain development.

New brain-machine interface (BMI) technology allows people who are immobile to control a wheelchair with their thoughts. After training, the BMI allows users to traverse natural, cluttered environments.
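As a rough illustration of the analysis described in the RCT above, a group-by-time mixed ANOVA is the natural test for a training effect that is specific to one group. The sketch below is only an assumption about the analysis; the paper's software and variable names are not given, so the data frame, column names, and values here are hypothetical, and it uses the pingouin library:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per time point.
# Outcome "tbw" stands in for an audio-visual temporal-discrepancy score.
df = pd.DataFrame({
    "subject": [f"s{i}" for i in range(1, 7) for _ in range(2)],
    "group": [g for g in ["training", "listening", "control"] for _ in range(4)],
    "time": ["pre", "post"] * 6,
    "tbw": [210, 150, 205, 160, 208, 202, 215, 210, 212, 209, 207, 206],
})

# Group x time mixed ANOVA: a training-specific effect appears as the interaction.
aov = pg.mixed_anova(data=df, dv="tbw", within="time",
                     subject="subject", between="group")
print(aov.round(3))
```

A significant group × time interaction, rather than a main effect alone, is what would support the claim that only the training group improved.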

Mindful meditation can produce a healthy altered state of consciousness in the treatment of those with addiction problems.

Selflessness can help individuals feel more confident and less hostile when faced with stress, researchers report.

A new theory proposes an underlying relationship between nap transitions in young children, brain development, and memory formation.

Cognitive science is the field of study that strives to develop a complete understanding of the human mind/brain and how it gives rise to human intelligence. Here's what you should know about cognitive science and its top applications in the real world.

The Centanni Lab at TCU has received a $10,000 grant from the GRAMMY Museum for a study exploring connections between musical instruction and brain improvements that might help with dyslexia. “We’ve found in the last few years that there may be some overlap here—a really interesting way of using our knowledge about musical training to learn some more about what’s happening in the brain in individuals that have dyslexia,” Dr. Tracy Centanni told Dallas Innovates.

People with greater self-control have calmer minds, which in themselves generate fewer distractions from stimuli.

New research from Australia suggests that our brains are adroit at recognizing sophisticated deepfakes, even when we consciously believe the images we are seeing are real. The finding further implies the possibility of using people's neural responses to deepfake faces, rather than their stated responses, to detect deepfakes.

Children on the autism spectrum may not always process bodily movements correctly, especially if they are distracted by something else.

Trauma During Childhood Triples the Risk of Suffering a Serious Mental Disorder in Adulthood

Childhood trauma significantly increases the risk of being diagnosed with a mental health disorder later in life. For children who experienced emotional abuse, the most prevalent disorder reported was anxiety. Trauma also increased the risks for psychosis, OCD, and bipolar disorder. Notably, those who experienced trauma during childhood were 15 times more likely to be diagnosed with borderline personality disorder later in life.

Auditory white noise (WN) is widely used in neuroscience to mask unwanted environmental noise and cues, e.g. TMS clicks. However, to date there is no research on the influence of WN on corticospinal excitability and potentially associated sensorimotor integration itself. Here we tested the hypothesis that WN induces M1 excitability changes and improves sensorimotor performance. M1 excitability (spTMS, SICI, ICF, I/O curve) and sensorimotor reaction-time performance were quantified before, during, and after WN stimulation in a set of experiments performed in a cohort of 61 healthy subjects. WN enhanced M1 corticospinal excitability, not just during exposure but also during silent periods intermingled with WN, and for up to several minutes after the end of exposure. Two independent behavioural experiments showed that WN improved multimodal sensorimotor performance. The enduring excitability modulation, combined with the effects on behaviour, suggests that WN might induce neural plasticity. WN is thus a relevant modulator of corticospinal function; its neurobiological effects should not be neglected and could in fact be exploited in research applications.

VA Office of R&D, FY2022 funded project listing.

A unifying turbulent dynamics framework using both model-free and model-based measures of whole-brain information provides insights into brain states.

Recent advances in network science allow us to quantify inter-network information exchange and to model the interaction within and between task-defined states of large-scale networks. Here, we modeled the inter- and intra-network interactions related to multisensory statistical learning. To this end, we implemented a multifeatured statistical learning paradigm and measured evoked magnetoencephalographic responses to estimate task-defined states of functional connectivity based on cortical phase interaction. Each network state represented the whole-brain network processing modality-specific (auditory, visual, and audiovisual) statistical-learning irregularities embedded within a multisensory stimulation stream. How domain-specific expertise reorganizes the interaction between the networks was investigated by comparing musicians and non-musicians. Between the modality-specific network states, the estimated connectivity characterized a supramodal mechanism supporting the identification of statistical irregularities, one that is compartmentalized and applied to the identification of unimodal irregularities embedded within multisensory stimuli. Expertise-related reorganization was expressed as an increase in intra-network and a decrease in inter-network connectivity, indicating increased compartmentalization.

Scientists have taken a step forward in their ability to decode what a person is saying just by looking at their brainwaves when they speak.
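The statistical-learning study above estimates connectivity from cortical phase interaction, but the abstract does not name the exact metric. As one plausible illustration, the sketch below computes the phase-locking value (PLV), a standard phase-interaction measure, over hypothetical trial data:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two sets of narrow-band trials.

    x, y : arrays of shape (n_trials, n_samples), band-pass filtered
    Returns PLV per time sample, in [0, 1]; 1 = perfectly consistent phase lag.
    """
    phase_x = np.angle(hilbert(x, axis=-1))
    phase_y = np.angle(hilbert(y, axis=-1))
    # Length of the mean unit phase-difference vector across trials.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

# Toy usage: two noisy 10 Hz signals with a fixed phase lag.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=(40, t.size))
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.normal(size=(40, t.size))
print(phase_locking_value(x, y).mean())  # well above chance for these signals
```

In practice the signals would first be band-pass filtered around the frequency of interest, with PLV computed for each sensor or source pair.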

USC Brain and Music Lab.

Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis. Here, the authors show the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is driven by the temporal correlation of multisensory signals.

A thermodynamics-inspired framework enables researchers to quantify the balance between intrinsic and extrinsic dynamics in brain signals, providing further insight into how the brain behaves during sleep and under anesthesia.

Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive-listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) the attended speech, or performed visual or speech-motor control tasks in which they did not attend to speech and their responses were unrelated to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior–posterior distinction between motor and perceptual processing of speech is present already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than on any specific anatomical landmark, as suggested by some previous studies.
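The correlation-detector finding at the top of this block lends itself to a small worked example. The sketch below is not the authors' model (which embeds temporal correlation in a Bayesian causal-inference framework); it is a hypothetical helper showing how lagged correlation between an auditory and a visual stream can signal a common cause:

```python
import numpy as np

def lagged_correlation(aud, vis, max_lag):
    """Normalised correlation of two 1-D signals over a range of lags.

    Toy stand-in for a multisensory correlation detector: a shared cause
    produces a high peak near zero lag; independent sources do not.
    """
    aud = (aud - aud.mean()) / aud.std()
    vis = (vis - vis.mean()) / vis.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.array([
        np.mean(aud[max(0, -l):len(aud) - max(0, l)] *
                vis[max(0, l):len(vis) - max(0, -l)])
        for l in lags
    ])
    return lags, corr

# Toy usage: both streams derive from one latent cause, visual lagged 3 samples.
rng = np.random.default_rng(2)
common = rng.normal(size=500)
aud = common + 0.3 * rng.normal(size=500)
vis = np.roll(common, 3) + 0.3 * rng.normal(size=500)
lags, corr = lagged_correlation(aud, vis, max_lag=10)
print(lags[np.argmax(corr)])  # peak near lag 3 -> evidence for a common cause
```

A sharp correlation peak at a small lag is evidence for integration (one cause); a flat profile favors segregation (independent sources).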