Shivani Sharma — August 26, 2021

This article was published as a part of the Data Science Blogathon

The intersection of medicine and data science has always been relevant; perhaps the most obvious example is the implementation of neural networks in deep learning. As data science and machine learning advance, so will medicine, but the opposite is also true.

Nanotechnology, stem cells, optogenetics, metabolomics, gene editing, and brain-computer interfaces are just some of the areas that benefit from the mutually beneficial relationship between medicine and data science; both fields must adapt to advances in the other or risk being left behind. According to a neurosurgeon and TEDx speaker, once MNE is interfaced with TensorFlow, scikit-learn, or another machine learning library, anyone can immerse themselves in brain-computer interfaces.

The brain intrigues almost everyone who considers it; it is both ambiguous in its intricacy and concrete in action. In addition, the complexity of the brain requires ingenuity and creativity when studying it.

MNE-Python is an open-source package for working with neurophysiological data, one of the few tools that let you view, manipulate, and analyze recordings from different acquisition methods such as EEG, ECoG, and MEG.

This library is very useful for visualizing the many steps used in various brain-computer interface systems and for better understanding this evolving technology. Let’s look at the capabilities of MNE and work with datasets to test those capabilities.

 

Implementing MNE in Python

1. Import of modules

Experienced readers are likely familiar with JupyterLab and Anaconda, which are a good fit for working with MNE. To make things easier, you can download Anaconda and then install MNE, which lets you run JupyterLab on localhost. In JupyterLab you can break your code into blocks that execute independently of each other, which is much more convenient than a typical IDE.

However, once the program is ready, you can transfer it to your IDE of choice (make sure to install MNE in a virtual environment if you are using one) and run it there. When I ported the program, I ran into several problems; most were solved by installing and importing a few extra modules.

import matplotlib
import matplotlib.pyplot as plt
import PyQt5
import pathlib
import mne
import mne_bids
from mne.datasets import sample
import picard
from surfer import Brain
import sobol_seq
import os.path
import os.path as op
from os import path
from mne.forward import read_forward_solution
from mne.minimum_norm import (make_inverse_operator, apply_inverse,
                              write_inverse_operator)
import nibabel
matplotlib.use('Qt5Agg')

The seventh line, from mne.datasets import sample, fetches the fundamental dataset used to study MNE. This dataset already contains data, so anyone can get started with MNE right away. The tasks given to the people who contributed their data were simple, but MNE has many other built-in datasets.

The Brain Imaging Data Structure (BIDS) is the best-suited way to organize neurophysiological data. In particular, it defines the following:

  • File formats.

  • How to name files.

  • Where to put files in the directory.

  • What additional metadata should be stored.

In code, it looks like this:

out_path = pathlib.Path('out_data/sample_BIDS')
bids_path = mne_bids.BIDSPath(subject='01',
                              session='01',
                              task='audiovisual',
                              run='01',
                              root=out_path)
mne_bids.write_raw_bids(raw, bids_path=bids_path, events_data=events, event_id=event_id, overwrite=True)

The out_data folder already exists; a sample_BIDS directory is created inside it with the required subject information. The raw, events, and event_id variables are defined in the next section.
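The file-naming part of the convention is easy to see in miniature. The sketch below assumes nothing from MNE: bids_basename() is a hypothetical helper written only to show how BIDS composes a file name from entities such as subject, session, task, and run; mne_bids.BIDSPath does this (and much more) for you.

```python
# Minimal sketch of BIDS file naming. bids_basename() is a hypothetical
# helper for illustration only -- use mne_bids.BIDSPath in real code.
def bids_basename(subject, session, task, run, suffix, extension):
    entities = [
        f"sub-{subject}",
        f"ses-{session}",
        f"task-{task}",
        f"run-{run}",
    ]
    return "_".join(entities) + f"_{suffix}.{extension}"

name = bids_basename(subject="01", session="01", task="audiovisual",
                     run="01", suffix="meg", extension="fif")
print(name)  # sub-01_ses-01_task-audiovisual_run-01_meg.fif
```

Because the names are fully determined by the entities, any BIDS-aware tool can locate the files without extra configuration.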

 

2. Import of raw data

After we have imported our sample data, we can import the initial information that will be needed for further analysis.

"""Creating data path"""
sample_dir = sample.data_path()
sample_dir = pathlib.Path(sample_dir)
"""Importing raw data"""
raw_path = sample_dir / 'MEG' / 'sample' / 'sample_audvis_raw.fif'
raw = mne.io.read_raw(raw_path)
print(raw.info['bads'])
"""Viewing raw data"""
raw.plot()
raw.plot_sensors(ch_type='eeg')
raw.plot_sensors(ch_type='eeg', kind='3d')
"""Finding events/event ids"""
events = mne.find_events(raw)
event_id = {
    'Auditory/Left': 1,
    'Auditory/Right': 2,
    'Visual/Left': 3,
    'Visual/Right': 4,
    'Smiley': 5,
    'Button': 32
}
total_auditory_events = len(events[events[:,2] == 1]) + len(events[events[:,2] == 2])
total_visual_events = len(events[events[:,2] == 3]) + len(events[events[:,2] == 4])
print(total_auditory_events, total_visual_events)

After creating the data path and loading the data, we can plot it in various ways.

Plot of raw data from the MEG sensors

Plot of raw data from the EEG and EOG sensors and stimulus channels

The generated plot is interactive, which makes life much easier. For example, the EEG 053 electrode looks like an outlier that I don’t want in my data; clicking that channel marks it as bad. If you want to zoom in on the data, just use the function key together with the arrow keys (at least on a Mac).

I can also plot the electrodes on a head model. There is a slight difference between the images produced by JupyterLab and PyCharm. PyCharm was used to create the image below, where some of the sensors appear slightly misaligned.

It’s okay if the sensors are not directly on the head – they may sit on the ears, neck, or elsewhere, but this must be taken into account when plotting the graphs.

EEG electrode plot. The red electrode is the channel previously marked as bad (EEG 053)

You can also create a 3D model.

3D model of the EEG electrodes

The bottom part of the above code was devoted to finding different events based on the stimulus channels. The events detected by MNE are numbered 1, 2, 3, 4, 5, and 32. These numbers correspond to specific stimuli – the correct stimuli were found and labeled thanks to the MNE documentation. I used this information to create an annotated plot of the raw data from the EEG sensors.

Plot of unfiltered EEG data with tagged stimuli

3. Filtering raw data and plotting power spectral density

Before moving on, let’s filter the data to make it look much cleaner and easier to analyze.

"""Applying filters and plotting PSDs"""
raw_eeg = raw.copy().pick_types(meg=False, eog=False, eeg=True)
raw_eeg.plot(events=events, event_id=event_id, title="Unfiltered EEG")
raw_eeg.load_data()
raw_eeg_filtered = raw_eeg.copy().filter(l_freq=0.1, h_freq=40)
raw_eeg_filtered.plot(events=events, event_id=event_id, title="Filtered EEG")
fig, ax = plt.subplots(2)
raw_eeg.plot_psd(ax=ax[0], show=False)
raw_eeg_filtered.plot_psd(ax=ax[1], show=False)
ax[0].set_title('PSD before filtering')
ax[1].set_title('PSD after filtering')
ax[1].set_xlabel('Frequency (Hz)')
fig.set_tight_layout(True)

I applied a band-pass filter with a high-pass cutoff of 0.1 Hz and a low-pass cutoff of 40 Hz: frequencies between 0.1 Hz and 40 Hz are retained, and the rest are removed. The cutoffs vary from experiment to experiment, but these values are good defaults for this dataset.

Plot of filtered EEG data with labeled stimuli
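To see what the 0.1–40 Hz band-pass actually does, here is a minimal sketch on a synthetic signal using SciPy. Note the assumption: MNE applies a FIR filter by default, while this sketch uses a Butterworth IIR filter in second-order sections purely for brevity.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 600.0  # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "EEG": a 10 Hz rhythm we want to keep plus an 80 Hz
# component that the 40 Hz low-pass edge should remove
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)

# 0.1-40 Hz band-pass (Butterworth in second-order sections for stability)
sos = butter(4, [0.1, 40.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, signal)

# Compare spectral amplitude at 10 Hz and 80 Hz after filtering
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
amp_10 = spectrum[np.argmin(np.abs(freqs - 10))]
amp_80 = spectrum[np.argmin(np.abs(freqs - 80))]
print(amp_80 / amp_10)  # the 80 Hz component is strongly attenuated
```

In MNE the equivalent operation is the raw_eeg.copy().filter(l_freq=0.1, h_freq=40) call above.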

After obtaining unfiltered and filtered data, I can plot the power spectral density (hereinafter PSD) of the signal received from the different electrodes. The PSD describes the signal power as a function of frequency.

PSD plots obtained from the EEG electrodes before and after filtering. The bottom plot is much cleaner and easier to analyze than the top one.
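The PSD itself is not MNE-specific. A sketch of what plot_psd computes can be written with SciPy's Welch estimator; the signal and sampling rate below are made up for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 600.0  # sampling rate in Hz
rng = np.random.default_rng(0)
t = np.arange(0, 10.0, 1 / fs)

# Synthetic channel: a 10 Hz oscillation buried in white noise
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Welch's method: average periodograms over overlapping segments
freqs, psd = welch(x, fs=fs, nperseg=1024)
peak_freq = freqs[np.argmax(psd)]
print(peak_freq)  # close to 10 Hz
```

The peak of the estimated PSD lands at the oscillation frequency even though the oscillation is invisible in the raw trace.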

 

4. Epochs and evoked information

As mentioned in the previous article, epochs are simply segments of the data cut around events of interest. Evoked objects are the averaged signals of several EEG or MEG epochs; this strategy is common when assessing stimulus-evoked activity.
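Before handing the job to mne.Epochs, the cut-and-average idea can be sketched in plain NumPy. Everything here is synthetic and hypothetical – one channel, made-up events – but the logic is the same:

```python
import numpy as np

fs = 600  # sampling rate in Hz
rng = np.random.default_rng(42)

# One long synthetic channel: noise plus a fixed 200 ms response
# added at every "event" sample
n_samples = 60 * fs
data = rng.standard_normal(n_samples) * 0.5
response = np.hanning(int(0.2 * fs))
events = np.arange(2 * fs, n_samples - fs, 2 * fs)  # an event every 2 s
for ev in events:
    data[ev:ev + response.size] += response

# Epoching: cut a window from tmin to tmax around every event
tmin, tmax = -0.25, 0.8
win = np.arange(int(tmin * fs), int(tmax * fs))
epochs = np.stack([data[ev + win] for ev in events])

# Averaging the epochs yields the "evoked" response: the noise
# cancels out while the stimulus-locked response survives
evoked = epochs.mean(axis=0)
print(epochs.shape)  # (n_events, n_times)
```

MNE's Epochs and Evoked classes do exactly this, plus baseline correction, channel bookkeeping, and rejection of bad segments.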

tmin = -0.250
tmax = 0.8
baseline = (-0.2, 0)
epochs = mne.Epochs(raw_eeg_filtered,
                    events=events,
                    event_id=event_id,
                    tmin=tmin,
                    tmax=tmax,
                    baseline=baseline,
                    preload=True)
print("nEpochs Info:")
print(epochs)
epochs.save(pathlib.Path('out_data') / 'epochs_epo.fif',
            overwrite=True)
epochs['Visual'].plot()
"""Evoked Data Information"""
evoked_auditory = epochs['Auditory'].average()
evoked_visual = epochs['Visual'].average()
evoked_visual.plot(spatial_colors=True)
evoked_visual.plot_topomap(ch_type='eeg')
mne.viz.plot_compare_evokeds([evoked_auditory, evoked_visual], picks='eeg')
mne.write_evokeds(fname=pathlib.Path('out_data') / 'evokeds_ave.fif',
                  evoked=[evoked_auditory, evoked_visual])

Epoched, filtered EEG data for visual stimuli

Although I continue to use EEG data, this is not necessary. You can also use the original raw data with MEG information and divide the data into epochs. They can be further cleaned up using Independent Component Analysis (ICA). To use raw data, I use raw instead of raw_eeg_filtered.

The original raw data segmented into epochs

I can also plot the evoked data for each acquisition method.

Evoked responses for the various acquisition methods after epoching and averaging

A topographic map can also be created from the evoked data to view the voltages at various locations on the scalp from the beginning to the end of an epoch.

Topographic map of the voltage at the various EEG electrodes on the scalp

I can also compare the evoked information from epochs tied to different stimuli.

Comparison of auditory and visual evoked responses

 

5. Application of independent component analysis (ICA)

ICA is a method used to clean up epochs and is not part of the basic filtering methods applied to the original data. With the help of the ICA, certain artifacts can be found and eliminated, including EOG and ECG components captured by the electrodes.
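ICA itself needs a full solver, but the underlying idea – estimate how much of an artifact waveform leaks into each channel, then subtract it – can be shown with a simpler technique: regression against a reference channel. To be clear, this is classic EOG regression, not ICA, and all the data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_times = 5000

# Reference artifact channel (e.g. an EOG electrode catching blinks)
artifact = np.zeros(n_times)
artifact[::500] = 1.0
artifact = np.convolve(artifact, np.hanning(100), mode="same")

# Two "EEG" channels = brain signal + scaled copies of the artifact
brain = rng.standard_normal((2, n_times)) * 0.1
weights_true = np.array([0.8, -0.5])
eeg = brain + weights_true[:, None] * artifact

# Least-squares estimate of how much artifact leaks into each channel,
# then subtract it out (regression-based artifact removal)
w = eeg @ artifact / (artifact @ artifact)
cleaned = eeg - w[:, None] * artifact

print(np.round(w, 2))  # close to [0.8, -0.5]
```

ICA goes further: it recovers the artifact components from the data itself, without needing a clean reference channel, which is why it is the standard tool here.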

if not path.exists('out_data/epochs_cleaned.fif'):
    tmin = epochs.tmin
    tmax = epochs.tmax
    baseline = (None, 0)
    epochs_ica = mne.Epochs(raw,
                            events=events,
                            event_id=event_id,
                            tmin=tmin,
                            tmax=tmax,
                            baseline=baseline,
                            preload=True)
    print("\nEpochs ICA Info:")
    print(epochs_ica)
    n_components = 0.8  # Should normally be higher, like 0.999!!
    method = 'picard'
    max_iter = 100
    fit_params = dict(fastica_it=5)
    random_state = 42
    ica = mne.preprocessing.ICA(n_components=n_components,
                                method=method,
                                max_iter=max_iter,
                                fit_params=fit_params,
                                random_state=random_state)
    ica.fit(epochs_ica)
    ica.plot_components(inst=epochs)
    ecg_epochs = mne.preprocessing.create_ecg_epochs(raw, reject=None,
                                                     baseline=(None, -0.2),
                                                     tmin=-0.5, tmax=0.5)
    ecg_evoked = ecg_epochs.average()
    ecg_inds, ecg_scores = ica.find_bads_ecg(
        ecg_epochs, method='ctps')
    eog_epochs = mne.preprocessing.create_eog_epochs(raw, reject=None,
                                                     baseline=(None, -0.2),
                                                     tmin=-0.5, tmax=0.5)
    eog_evoked = eog_epochs.average()
    eog_inds, eog_scores = ica.find_bads_eog(
        eog_epochs)
    components_to_exclude = ecg_inds + eog_inds
    ica.exclude = components_to_exclude
    ica.plot_scores(ecg_scores)
    ica.plot_sources(ecg_evoked)
    ica.plot_overlay(ecg_evoked)
    epochs_cleaned = ica.apply(epochs.copy())
    epochs_cleaned.plot(title="After ICA")
    epochs.plot(title="Before ICA")
    epochs_cleaned.save(pathlib.Path('out_data') / 'epochs_cleaned.fif',
                overwrite=True)
    epochs_visual = epochs_cleaned["Visual"]

The if statement is only there so that this part of the code does not run every time in the IDE. It is unnecessary in JupyterLab, but it comes in handy once the cleaned epochs have been saved: execution time is greatly reduced.
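The same guard generalizes into a small compute-or-load helper. This is a plain-Python sketch of the pattern (pickle-based, with a hypothetical file name), not something MNE provides – for MNE objects you would keep using epochs.save() and mne.read_epochs():

```python
import pickle
from pathlib import Path

def cached_result(cache_file, compute):
    """Return a previously saved result, or compute and save it once."""
    cache_file = Path(cache_file)
    if cache_file.exists():
        with cache_file.open("rb") as f:
            return pickle.load(f)
    result = compute()
    cache_file.parent.mkdir(parents=True, exist_ok=True)
    with cache_file.open("wb") as f:
        pickle.dump(result, f)
    return result

# Hypothetical usage: the lambda stands in for the expensive ICA step
value = cached_result("out_data/demo_cache.pkl", lambda: sum(range(10)))
print(value)
```

On the first run the expensive step executes and the result is written to disk; every later run loads the file instead.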

Several different ICA components

After applying ICA, we can view the epochs before and after.

Several epochs before ICA

Several epochs after ICA

We can also view the ICA component scores. These help determine which components are most likely to be associated with different artifacts.

ICA component scores

One of the components that showed a high likelihood of being associated with an artifact was component 001, which can be analyzed.

Image ICA001Image ICA001

We can also view the epochs before and after cleaning in a different way, that is, view the signal for each acquisition method before and after applying ICA.

Signals from the various acquisition methods before and after applying ICA

6. Time-frequency analysis

Time-frequency analysis can also be done via MNE, although I don’t use this feature very often. A particularly interesting part of this step for me is looking at the presence of different brain waves over different parts of the scalp. Based on the frequencies found, various types of brain waves can be identified.

Three different types of brain waves after applying ICA
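What time-frequency analysis buys you can be sketched without MNE: a spectrogram shows how the spectral content evolves over time. The signal below is synthetic – two seconds of a 10 Hz "alpha"-like rhythm followed by two seconds at 20 Hz:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 256.0  # sampling rate in Hz
t = np.arange(0, 4.0, 1 / fs)

# 10 Hz for the first two seconds, 20 Hz for the last two
x = np.where(t < 2.0,
             np.sin(2 * np.pi * 10 * t),
             np.sin(2 * np.pi * 20 * t))

# One-second windows with 50% overlap -> 1 Hz frequency resolution
freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

# Dominant frequency in each time window tracks the alpha -> beta switch
dominant = freqs[np.argmax(Sxx, axis=0)]
print(dominant)  # ~10 Hz early, ~20 Hz late
```

MNE's time-frequency functions perform the analogous computation per channel (typically with Morlet wavelets rather than windowed FFTs), which is what makes the per-region brain-wave plots possible.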

 

7. Visualization of data from MRI scans and EEG sensors

One of the most interesting things about MNE is that the signal source can be tracked down by following a few steps. These steps include the following:

  • Boundary element model (BEM) of the head.

  • Coregistration (transformation).

  • Computation of the forward solution.

  • Computation of the inverse operator.

Image from MNE at various stages of source assessmentImage from MNE at various stages of source assessment

The code below illustrates some of the steps required by MNE to run the source localization protocol. Again, the sample data contains everything you need to complete these steps, provided you navigate to the correct directory. Many of the variables I’ve used can be replaced. For example, the final call plots the pial surface, but an image like the one at the beginning of this article can be generated by plotting the inflated surface instead.

data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
subjects_dir = data_path + '/subjects'
subject = 'sample'
subjects_dir = pathlib.Path(mne.datasets.sample.data_path()) / 'subjects'
mne.viz.plot_bem(subject='sample', subjects_dir=subjects_dir,
                 orientation='coronal')
# The transformation file obtained by coregistration
trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
epochs_fname = pathlib.Path('out_data') / 'epochs_cleaned.fif'
info = mne.io.read_info(epochs_fname)
fig = mne.viz.plot_alignment(info=info,
                             trans=trans,
                             subject='sample',
                             dig=True,
                             subjects_dir=subjects_dir,
                             verbose=True)
subject = 'sample'
src = mne.setup_source_space(subject=subject,
                             spacing='oct4',  # During an actual analysis you can use oct6!
                             subjects_dir=subjects_dir,
                             add_dist=False)  
mne.viz.plot_alignment(info=info, trans=trans, subject=subject,
                       src=src, subjects_dir=subjects_dir, dig=True,
                       surfaces=['head-dense', 'white'], coord_frame='meg')
conductivity = (0.3,)  
model = mne.make_bem_model(subject=subject, ico=4,
                           conductivity=conductivity,
                           subjects_dir=subjects_dir)
bem_sol = mne.make_bem_solution(model)
mne.bem.write_bem_solution('sample_bem.fif', bem_sol, overwrite=True)
fwd = mne.make_forward_solution(info,
                                trans=trans,
                                src=src,
                                bem=bem_sol,
                                meg=True, # include MEG channels
                                eeg=False, # exclude EEG channels
                                mindist=5.0, # ignore sources <= 5mm from inner skull
                                n_jobs=1) # number of jobs to run in parallel
mne.write_forward_solution('sample_fwd.fif', fwd, overwrite=True)
noise_cov = mne.compute_covariance(epochs, tmax=0.,
                                   method=['shrunk', 'empirical'],
                                   rank='info')
mne.viz.plot_cov(noise_cov, info=info)
epochs.average().plot_white(noise_cov)
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
                                         loose=0.2, depth=0.8)
method = "dSPM"
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(epochs['Visual/Right'].average(),
                    inverse_operator,
                    lambda2,
                    method=method,
                    pick_ori=None)
brain = stc.plot(surface='pial',
                 hemi='both',
                 subjects_dir=subjects_dir,
                 time_viewer=True)

MRI scans of the subject

Model of the subject with EEG sensors and a MEG helmet

Image of the generated source localization using the sample data

Localization of the right-sided visual source yielded an image showing activity in the left occipital lobe along with some temporal activity. There is also some subcortical activity near the thalamus (which relays sensory information).
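Conceptually, the inverse step above solves an underdetermined problem: there are many more candidate sources than sensors. Here is a toy sketch of the minimum-norm estimate that make_inverse_operator/apply_inverse compute, with made-up dimensions and a random gain matrix, so this is illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 60, 200  # hypothetical dimensions

# Toy forward (gain) matrix: sensor readings = G @ source activity
G = rng.standard_normal((n_sensors, n_sources))

# Ground truth: a single active source
x_true = np.zeros(n_sources)
x_true[17] = 1.0
y = G @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Minimum-norm estimate with Tikhonov regularization; as in the
# article, lambda2 = 1 / snr**2 sets the regularization strength
snr = 3.0
lambda2 = 1.0 / snr ** 2
gram = G @ G.T
lam = lambda2 * np.trace(gram) / n_sensors  # scale lambda2 to the data
x_hat = G.T @ np.linalg.solve(gram + lam * np.eye(n_sensors), y)

# The estimate typically peaks at or near the truly active source
print(int(np.argmax(np.abs(x_hat))))
```

MNE's real inverse operator additionally whitens the data with the noise covariance and applies orientation and depth weighting (the loose=0.2 and depth=0.8 arguments above).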

Conclusion

MNE-Python is an incredible tool, but it is only one entry point into the world of brain-computer interfaces. This article is not a tutorial; it provides an overview of MNE’s capabilities and some of the protocols that can easily be explored with a small amount of data. Brain research is also just one data science application out of a myriad of possible applications.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
