Review for "Modeling brain dynamics in brain tumor patients using The Virtual Brain"

Completed on 25 Apr 2018 by Shanna Kulik and Tianne Numan.



Significance

With great interest we read the recent bioRxiv contribution by Aerts and colleagues (2018) and would like to offer some of our thoughts and suggestions on this piece of work. In this simulation study, Aerts and colleagues aimed to bridge the gap between pre-surgical planning for brain tumor resection and post-surgical functional outcome in terms of cognition. The Virtual Brain (TVB) was used to simulate large-scale brain dynamics based on the structural connectome of individual patients. A global scaling factor was optimized per patient by comparing the simulated functional connectivity from the TVB model with that patient's empirical functional connectome based on fMRI. Aerts and colleagues showed that individualized TVB models significantly improved the accuracy with which simulated functional connectivity reflected the empirical data. Moreover, the individualized model parameters correlated with cognition.
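
To make the fitting procedure described above concrete, the sketch below shows the generic logic of such a parameter sweep: simulate functional connectivity for a range of global scaling factors G and keep the value whose simulated FC correlates best with the empirical FC. This is only a minimal illustration; the linear-network stand-in for the full TVB simulation and the random example matrices are our own assumptions, not the authors' pipeline.

```python
# Minimal sketch of a global-coupling sweep: not the authors' pipeline, just the generic logic.
import numpy as np
from scipy.stats import pearsonr

def simulated_fc(sc, g):
    """Toy stand-in for a TVB run: correlation matrix of x = g*C*x + white noise."""
    n = sc.shape[0]
    c = sc / np.abs(np.linalg.eigvals(sc)).max()   # scale SC so the system stays stable
    ai = np.linalg.inv(np.eye(n) - g * c)
    cov = ai @ ai.T                                # covariance of x = (I - gC)^-1 * noise
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                    # normalise to correlations

def fit_global_coupling(sc, emp_fc, g_values):
    """Return the G that maximises the simulated-vs-empirical FC correlation."""
    mask = np.triu(np.ones_like(emp_fc, dtype=bool), k=1)   # upper triangle only
    scores = [pearsonr(simulated_fc(sc, g)[mask], emp_fc[mask])[0] for g in g_values]
    best = int(np.argmax(scores))
    return g_values[best], scores[best]

# toy example with random placeholder matrices
rng = np.random.default_rng(0)
sc = np.abs(rng.normal(size=(68, 68))); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0)
emp_fc = np.corrcoef(rng.normal(size=(68, 180)))
g_best, r_best = fit_global_coupling(sc, emp_fc, np.linspace(0.01, 0.95, 20))
print(f"best G = {g_best:.2f}, FC fit r = {r_best:.2f}")
```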

Gaining insight into the mechanisms by which tumor(-related) processes influence network topology and cognition is essential for understanding the disease and its symptomatology. Predicting the cognitive outcome of a resection in particular, one of the future directions of this work, is highly relevant to brain tumor patients from a clinical point of view: a better understanding of how post-surgical cognitive complaints arise will improve decision making in treatment strategy in this patient group. The relevance of this work can therefore hardly be overstated.


Comments to author

Author responses are interleaved below each comment.

Our thoughts primarily relate to (the details of) the methodology used and some of the results. We were surprised to see that the individually tuned model parameters in combination with the individual structural connectivity matrices did not result in better predictions of the individual functional connectivity patterns than individually tuned model parameters combined with the control average structural connectivity matrix. Although this finding is in line with previous work by Jirsa and colleagues (2017), it would be interesting to hear the authors' own (speculative) explanations for this result after working with the model.

This is an excellent point, and a matter of much debate. Studies are ongoing to determine under which circumstances individual SC yields better predictions of individual FC than average SC. So far, no published study has supported this claim based on a methodologically clean approach; in fact, the experience is that both individual and average SC lead to similarly good predictions of individual FC.
A recent study by Zimmermann et al. (2018) focused on this exact issue and reported that the subject-specificity of SC-FC correspondence is limited due to the relatively small variability between subjects in SC compared to the larger variability in FC. This limited variability could be due to the quality of current individual SC matrices. It is known that DWI tractography underestimates the number of short-distance interhemispheric streamlines in favor of longer tracts. Although in the current study we have used a relatively new multi-shell DWI sequence with b-values of up to 2800 s/mm² and a state-of-the-art processing pipeline, future studies could investigate whether prediction accuracies of individual SC matrices improve with data of even better quality (7T or higher field strengths, longer acquisition times, multi-band sequences, etc.). Another contributing factor to the low variability in individual SC matrices might be the relatively coarse parcellation schemes currently applied in computational modeling studies. With increasing computational power, future studies can test finer and more elaborate parcellations, for example based on multimodal data such as in Glasser et al. (2016), to investigate whether tractography can capture individual traits that are relevant for the prediction of individual FC. We have added this to the limitations section:
“Second, simulated and empirical functional connectivity were only moderately related after optimization of the model parameters. Moreover, using individual structural connectomes did not yield better predictions of individual functional connectivity patterns compared to using a control average structural connectivity matrix. This is, however, a limitation of all current computational modeling studies and a matter of much debate. A recent study by Zimmermann and colleagues (Zimmermann, Griffiths, Schirner, Ritter, & McIntosh, 2018) focused on this exact issue and reported that the subject-specificity of SC-FC correspondence is limited due to the relatively small variability between subjects in SC compared to the larger variability in FC. This limited variability could be due to the quality of current individual SC matrices. Although great advances have been made in diffusion weighted imaging acquisitions and tractography algorithms, it is known that DWI tractography underestimates the number of short-distance streamlines in favor of long fiber tracts (Jeurissen et al., 2017). Although in the current study we have used a relatively new multi-shell DWI sequence with b-values of up to 2800 s/mm² and a state-of-the-art processing pipeline, future studies could investigate whether prediction accuracies of individual SC matrices improve when using data of even better quality (for example 7T MRI scanners, multiband sequences, and/or longer acquisition times). Another contributing factor to the low variability in individual SC matrices might be the relatively coarse parcellation schemes currently applied in computational modeling studies. With increasing computational power, future studies can test finer and more elaborate parcellations, for example based on multimodal data such as in Glasser et al. (2016), to investigate whether tractography can capture individual traits that are relevant for the prediction of individual FC.”
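
As an illustration of the individual-versus-average SC comparison discussed above, the sketch below predicts each subject's empirical FC once from their own SC and once from a leave-one-out group-average SC, and then compares the two sets of fits. The direct SC-FC correlation used as the "prediction" is a deliberately simple stand-in for a fitted TVB model, and the random matrices are placeholders rather than the study's data.

```python
# Sketch of the individual-vs-average SC comparison; data and "prediction" are placeholders.
import numpy as np
from scipy.stats import pearsonr, wilcoxon

def fc_fit(sc, fc):
    """Similarity between a structural matrix and an empirical FC matrix (off-diagonal only)."""
    mask = np.triu(np.ones_like(fc, dtype=bool), k=1)
    return pearsonr(sc[mask], fc[mask])[0]

rng = np.random.default_rng(1)
n_subj, n_roi = 25, 68
scs = [np.abs(rng.normal(size=(n_roi, n_roi))) for _ in range(n_subj)]
scs = [(m + m.T) / 2 for m in scs]                                  # symmetric placeholder SCs
fcs = [np.corrcoef(rng.normal(size=(n_roi, 180))) for _ in range(n_subj)]

own_fit, avg_fit = [], []
for i in range(n_subj):
    avg_sc = np.mean([scs[j] for j in range(n_subj) if j != i], axis=0)  # leave-one-out average
    own_fit.append(fc_fit(scs[i], fcs[i]))
    avg_fit.append(fc_fit(avg_sc, fcs[i]))

stat, p = wilcoxon(own_fit, avg_fit)   # paired test: does own SC beat the average SC?
print(f"own SC: r = {np.mean(own_fit):.3f}, average SC: r = {np.mean(avg_fit):.3f}, p = {p:.3f}")
```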

Furthermore, it is not completely clear why an average firing rate of ~3 Hz was applied. This choice was also not evident to us from the paper by Deco and colleagues (2014), who mention that in a large-scale model of interconnected brain areas a range of 2.63-3.55 Hz should be applied. We were therefore wondering why only one firing rate was applied instead of a range of values, particularly as it is known that brain tumors may impact neurotransmitter levels around the tumor and possibly neuronal firing rates.

~3 Hz is the average firing rate of each population over the entire simulation time. In other words, it is the attractor value around which there is an ongoing fluctuation throughout the simulation. This fluctuation correlates with the mean-field potential/LFP/synaptic activity; otherwise the predicted BOLD fMRI time series would be a flat line, and every correlation in the resulting FC matrix would be r = 1 (because a flat line perfectly correlates with another flat line). In the Schirner et al. (2018) eLife paper we showed, for example, that an increase in alpha power decreases firing rates and vice versa, but averaged over the entire time period the rates stay the same. 3 Hz is, so to speak, the "intrinsic frequency" of an isolated neural mass model, which Deco et al. (2013) derived from a spiking network. When these mass models are coupled, as we did in the brain model, the firing rates increase due to the input from the global network. To bring the firing rates back to a physiologically plausible average of 3 Hz, the parameter J_i is increased until the population has this average firing rate. If the average firing rate of brain regions near tumors is systematically larger than 3 Hz, then this should also be accounted for in the model, through the J_i parameter. If we did not do this tuning, there would be regions with an average firing rate of 200 Hz and others with an average firing rate of 1 Hz. Arguably, this is very unrealistic, so it makes sense to do this tuning. We have tried to clarify this in the manuscript, in the section describing the computational model:
“3 Hz was chosen as the attractor value, as it is the “intrinsic frequency” of an isolated neural mass model according to derivations from a spiking network (Deco et al., 2014). When multiple mass models are coupled, as in a virtual brain model, the firing rates increase due to the input from the global network. In order to bring the firing rates back to a physiologically plausible average rate of 3 Hz, the feedback inhibition control parameter J_i is increased until the population has this average firing rate. Thus, 3 Hz is the attractor value for each population, around which there is an ongoing fluctuation during the entire simulation time.”
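
The tuning loop described in this response can be summarized as follows: for each region, raise the inhibitory weight J_i when the long-run average excitatory firing rate exceeds the 3 Hz attractor value and lower it when the rate falls below it. The sketch below illustrates only this logic; the toy rate function stands in for the actual mean-field equations of the Deco et al. model, and the parameters are placeholders of our own choosing.

```python
# Sketch of feedback inhibition control (FIC) tuning; the rate function is a toy stand-in.
import numpy as np

TARGET_RATE = 3.0  # Hz, attractor value of an isolated neural mass

def mean_rate(region, sc, g, j, rates):
    """Toy rate: grows with global network input, shrinks with local inhibition J_i."""
    net_input = 0.3 + g * sc[region] @ rates - j[region]
    return 10.0 / (1.0 + np.exp(-net_input))      # saturating, non-negative firing rate

def tune_fic(sc, g, n_iter=2000, step=0.005):
    n = sc.shape[0]
    j = np.ones(n)                                 # inhibitory weights J_i, one per region
    rates = np.full(n, TARGET_RATE)
    for _ in range(n_iter):
        rates = np.array([mean_rate(i, sc, g, j, rates) for i in range(n)])
        j += step * (rates - TARGET_RATE)          # raise J_i where the region fires too fast
    return j, rates

rng = np.random.default_rng(2)
sc = np.abs(rng.normal(size=(68, 68))); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0)
sc /= sc.sum(axis=1, keepdims=True)                # row-normalise so network input stays bounded
j, rates = tune_fic(sc, g=0.3)
print(f"firing rates after tuning: {rates.mean():.2f} ± {rates.std():.2f} Hz")
```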

Finally, the relationship of the model parameters with cognitive functioning suggests a major step forward in explaining cognitive symptoms in brain tumor patients. We would like to suggest that it would also be interesting to relate the available cognitive measures to the already calculated empirical functional network measures, to assess how much variance in cognitive functioning can be explained by the model versus by ‘simply’ using the empirical data. Prospectively, of course, it would be very interesting to relate the pre-surgical model parameters to post-surgical cognitive status once longitudinal measurements are available, as the authors imply. We are therefore looking forward to seeing the prediction accuracy of the TVB model using longitudinal cognitive data!

Thank you for this suggestion. We agree that it would be very interesting to see how much of the cognitive performance scores can be explained by the input (structural graph theory metrics) and how much is added by the TVB model parameters. However, the associations that were found between the model parameters and cognitive performance were rather counterintuitive and influenced by a few outlying values, as pointed out in the discussion of the manuscript. We would therefore like to first try to replicate these associations in a larger sample. As a next step, we can then investigate more complex models with both modeling parameters and structural network topology measures as predictors of cognitive performance, to disentangle their relative contributions.
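
The comparison of relative contributions mentioned here could, for instance, take the form of nested regression models: cognition regressed on structural graph metrics alone versus on graph metrics plus the fitted model parameters. The sketch below shows only that generic logic; the variable names and random data are placeholders, not values from the study.

```python
# Sketch of a nested-regression comparison; all data below are random placeholders.
import numpy as np

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept column added."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 25                                            # number of patients (placeholder)
graph_metrics = rng.normal(size=(n, 2))           # e.g. global efficiency, modularity (hypothetical)
model_params = rng.normal(size=(n, 1))            # e.g. optimised global scaling factor per patient
cognition = rng.normal(size=n)                    # e.g. a composite cognitive score (hypothetical)

r2_graph = r_squared(graph_metrics, cognition)
r2_full = r_squared(np.column_stack([graph_metrics, model_params]), cognition)
print(f"graph metrics alone: R^2 = {r2_graph:.2f}; adding model parameters: R^2 = {r2_full:.2f}")
```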