In this blog, Verena Haines, a postdoctoral research fellow at the University of Oxford, shares takeaways from both conversations during the MQ Data Science meeting in September.
In this talk, Greg Farber highlighted two problems holding back scientific progress:
First, he highlighted the need for good theoretical frameworks to drive future research. This point resonated with my own experience: a lot of research (particularly exploratory research) is published without a basic understanding of the mechanisms of change. Going forward, appropriate theories that take the full complexity of mental health into account (ideally informed by both current evidence and the perspectives of key stakeholders) must be at the heart of future research. This is also in line with the updated Medical Research Council guidance on the development and evaluation of complex interventions (see http://dx.doi.org/10.1136/bmj.n2061).
Second, he emphasized that a critical problem for data science is the heterogeneity in how data have been collected: usually several different instruments are used to measure the same underlying construct. Because data harmonization efforts can be challenging and require tremendous expertise, he highlighted the need for all new studies to use a single, agreed-upon set of outcome measures. In this way, data could easily be combined across studies, reducing the required sample size and ensuring reproducibility of results. If researchers still wish to evaluate new measures, they could add them to a study alongside the agreed-upon measures to explore their psychometric properties.
In the discussion, questions were raised about the appropriateness of using this set of agreed measures in different cultural contexts. While it was suggested that any noise related to a particular cultural context would likely cancel out given a large enough sample size, I would like to challenge this idea. Although I believe that a standard set of measures is necessary going forward, I believe that a fixed list of measures does not fully reflect the importance of the context in which mental health problems occur.
For example, in some cultural or clinical contexts (e.g., patients with a disability), endorsement of a given item may be due to causes other than underlying depression. If we do not account for this by using a more personalized approach, we may lose important information that could be essential to informing the provision of appropriate future interventions.
Another challenge is the emergence of ecological momentary assessments, which allow us to capture a person’s experiences in the moment their thoughts, feelings, or behaviors occur. I believe this technology has great potential to advance future research; however, I wonder how these approaches (which rely primarily on individual items) can fit into an agreed-upon set of outcome measures. I think the talk offered a good starting point for defining the problem and suggesting possible solutions, but I feel that more work is needed to translate the approach into the diversity of settings in which researchers find themselves.
I particularly appreciated the session led by Professor Ann John. As a suicide researcher working primarily with data to better understand risk and resilience to suicide in young people, I found this session really enriching. It made me realize once again why this work is so important, and why it is so important not to forget that behind each of the numbers in a data set there is a tragic story that affects not only the individual but their entire family and the wider community.
Listening to the earlier conversations with research funders, I realized even more why it is so important to involve people with lived experience at every stage of the research process, so that we ask the questions that really matter to patients and their wider community.
Moving forward, I feel we should give more people with lived experience a chance to speak at conferences and engage in the conversation on the topics that affect them most. Only then will we be able to really move forward and produce research that has a meaningful impact on the lives of those affected.
I was impressed by the efforts made in the MindKind project to involve young people at every stage and give them a voice in the research process.
It is great to know that young people are generally happy to share their data for research if it is in the public interest. This matches my own experience working with young people and their parents.
Since this study included adolescents over 16 years of age, I wonder whether these results would also translate to younger people (under 16 years old). For adolescents under 16, parents would be required to give informed consent for their children. I wonder whether parental opinions align with these findings, or whether additional challenges may arise when including participants under 16. This group is of particular interest to early intervention and prevention initiatives, because we know that many mental health disorders first appear before the age of 16.
In this talk, I was particularly struck by the reversal of trends when comparing short-term versus long-term risk pathways.
Longitudinal research is often limited to relatively short time frames (e.g., a maximum of 1-2 years). This made it clear to me that studies capturing longer periods are necessary to understand risk pathways in detail. This is critical in the context of Covid and its impact on young people's mental health, as we know that some processes take longer to unfold and therefore cannot be captured if we consider only short time frames.