Research lemonade: RRR on the short-term benefits of emotional reappraisal interventions in an online context

When the pandemic hit, all in-person research was shut down at Dominican (of course). That left a real challenge: figuring out how our psychology majors could continue to engage in authentic and interesting research.

One solution I (Bob) worked on during the summer of 2020 was to assemble a collection of studies that would be a) socially relevant, and b) feasible to replicate and extend fully online (https://osf.io/xnuap/). This worked out really well for our research methods sequence.

I also worked with colleague TJ Krafnick on another approach: getting DU involved in some RRR projects (Registered Replication Reports). Specifically, TJ and I applied to take part in a massive RRR organized by the Psychological Science Accelerator (https://psysciacc.org/). What was especially exciting about this RRR was that it featured a trio of experiments, each designed to test online interventions to help modify emotional/behavioral responses to the COVID-19 pandemic (https://psysciacc.org/studies/psacr-1-2-3/). TJ and I obtained local IRB approval, and then we worked with our research methods students to collect data at DU. Students in my fall 2020 research methods course then analyzed the data from our DU site and wrote it up for their semester-long term projects. It was a really good experience for the class; we turned lemons into lemonade.

Now the Psychological Science Accelerator has assembled the data from all the team sites and published the manuscript for the first project (Wang et al., 2021). TJ and I are proud to be co-authors on a very long list of talented collaborators (reading through the Google Docs of draft proposals and manuscripts was incredible; at times, the manuscripts were probably more comment than actual text!).

So what was the actual study, and what did it find? Participants (N > 23,000!) were randomly assigned to receive either a brief training in an emotion regulation strategy (reappraisal or reconstrual) or a control condition. Participants were then asked to rate their positive and negative emotions in response to a series of genuinely heartbreaking images related to the COVID-19 pandemic. There were clear and consistent effects of the interventions on self-reported emotions: participants who received the training reported more positive emotions (d = 0.59!) and fewer negative emotions (d = -0.39) in response to the photos. This was true across essentially all study sites, regardless of language or culture. That’s pretty amazing! On the other hand, the intervention was tested only over the short term, and the dependent variable relied entirely on self-reported emotional responses, which might not be very reliable and which could be susceptible to demand effects. Still, an encouraging win for emotional reappraisal strategies.

  1. Wang, K., Goldenberg, A., Dorison, C., Miller, J., Uusberg, A., Lerner, J., … Moshontz, H. (2021). A multi-country test of brief reappraisal interventions on emotions during the COVID-19 pandemic. Nature Human Behaviour, 5(8), 1089–1110. doi: 10.1038/s41562-021-01173-x

What is forgetting? The Slug Lab provides some new insights!

Today, the Slug Lab can share an exciting new paper, with contributions from Tania Rosiles, Melissa Nguyen, Monica Duron, Annette Garcia, George Garcia, Hannah Gordon, and Lorena Juarez (Rosiles et al., 2020).

Where to even start?

  • Contributions from 7 student co-authors! It’s been such a long haul; we’re proud of each of you for sticking with it and for all your contributions to this paper.
  • This paper is a registered report: We first proposed the idea and the methods, even writing a complete analysis script. This proposal was then sent out for peer review (at a stage when we could still act on any issues the reviewers turned up!) and, after some back and forth, received an ‘in principle’ acceptance. Then we completed the work and the analysis and submitted the paper for one more round of review, focused solely on the interpretation of the data. This approach to publication lets peer reviewers have a more meaningful impact on the project, and it also helps combat publication bias. People tend to think of this model as being for replication research, but in our case we used a registered report because we wanted to establish a fair and valid test between two competing theories and to ensure that the approach and analysis were pre-specified.
  • This paper is exciting! We were able to test two very different theories of forgetting:
    • decay theory, which says that memories are forgotten because they physically degrade
    • retrieval failure, which says that memories don’t degrade at all, but simply become more difficult to retrieve due to interference

We found clear support for the retrieval failure theory of forgetting, something I (Bob) was completely not expecting.

So, what was the study actually about?

Even memories stored via wiring changes in the brain can be forgotten. In fact, the majority of long-term memories are probably forgotten. What does this really mean? Is the information gone, or just inaccessible?

One clue comes from savings memory: the fact that you can very quickly re-learn seemingly forgotten information. Savings is sometimes taken to mean that the original memory trace persists, but it could also be that the trace has decayed and its remnants merely prime re-learning.

We noticed a testable prediction:

  • If forgetting is decay, savings re-encodes the memory and must involve the transcriptional and wiring changes used to store new information.
  • If forgetting is inaccessibility, savings shouldn’t involve transcriptional/wiring changes.

To test this prediction, we tracked transcriptional changes associated with memory storage as a memory was first formed, then forgotten, then re-activated. We did this in the sea slug Aplysia californica, as a registered report (with pre-registered design and analyses).

The memory was for a painful shock—this is expressed as an increase in reflexes (day 1, red line way above baseline). Sensitization is forgotten in about a week (day 7, reflexes back to normal), but then a weak shock produces savings (day 8, reflexes jump back up).

What’s happening in the nervous system? Our key figure shows expression of ~100 transcripts that are sharply up- or down-regulated when the memory is new. At forgetting, these are deactivated (all lines dive towards 0). At savings? No re-activation! (lines stay near 0).

Our results show that savings re-activates a forgotten memory without invoking *any* of the transcriptional changes associated with memory formation. This strongly suggests the memory is not rebuilt, but just re-activated—the information must have been there all along?!

Lots of caveats (see paper), but the results seem compelling (though surprising) to us. Importantly, we used an archival data set to show that we would have observed re-activation of transcription had it occurred; the transcriptional changes with savings are clearly negligible.

  1. Rosiles, T., Nguyen, M., Duron, M., Garcia, A., Garcia, G., Gordon, H., … Calin-Jageman, R. J. (2020). Registered Report: Transcriptional Analysis of Savings Memory Suggests Forgetting is Due to Retrieval Failure. eNeuro. doi: 10.1523/eneuro.0313-19.2020

What psychology instructors should know about Open Science and the New Statistics

Beth Morling and I (Bob) have a new commentary out in Teaching of Psychology that provides an overview of the Open Science and New Statistics movements and gives some advice about how psychology instructors can bring these new developments into the traditional psychology curriculum (Morling & Calin-Jageman, 2020).

Beth is a superstar on many fronts, but she is perhaps best known for her incredible Research Methods in Psychology textbook (https://wwnorton.com/books/9780393536263). Just being asked to work on this commentary was a thrill. Working together, I learned a lot from her, especially from her approach to writing, which kept us on task and productive.

The article is open-access, so check it out. Here’s my favorite paragraph:

Introductory coursework is the ideal time to foster estimation thinking. Teachers can use the prompt, “How much?” to help students consider the magnitudes of effects and to seek context. Using the prompt, “How wrong?” can encourage students to embrace uncertainty and to introduce the key idea of sampling variation. Finally, prompting students with, “What else is known?” helps them see science as a cumulative and integrative process rather than as a series of “one-and-done” demonstrations. These three questions instill a nuanced view of science, where any one study is tenuous, and yet the cumulative evidence from a body of research can be compelling. This is a sophisticated epistemic viewpoint that avoids both excessive confidence and undue cynicism.

Morling & Calin-Jageman, 2020, p. 174
  1. Morling, B., & Calin-Jageman, R. J. (2020). What Psychology Teachers Should Know About Open Science and the New Statistics. Teaching of Psychology, 47(2), 169–179. doi: 10.1177/0098628320901372

Updated word search and mirror-tracing tasks for Qualtrics

I finally had some spare time to document and post the mirror tracing and word-search tasks I developed for some replication work my students and I completed (Cusack, Vezenkova, Gottschalk, & Calin-Jageman, 2015).

Each task is (I think) pretty nifty, and I’ve had lots of emails about them over the past couple of years. I’ve finally posted both code bases to GitHub along with working demos in Qualtrics and some rudimentary instructions. The code itself is not pretty–I was learning JavaScript and wrote most of it during a conference I was attending in Amsterdam. Still, it works, and I’m sure it could come in handy.

The mirror-tracing task is just like it sounds: participants trace an image with their mouse or trackpad, but the mouse movements are mirrored, making it hard to stay on the line. You can vary task difficulty by changing line thickness. Performance shows the expected weak negative correlation with age. The script can even post the traced images back to your server, which is handy for making figures that show representative tracings from different groups.
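
Purely as illustration, here’s a minimal sketch of that mirroring trick, assuming an HTML canvas; this is my sketch, not the actual repo code, and the “traceCanvas” id is hypothetical:

```javascript
// Hypothetical sketch of the core mirroring idea (not the repo's code):
// track the mouse over a canvas, but flip the horizontal coordinate
// around the canvas midline before drawing.
const canvas = document.getElementById('traceCanvas'); // hypothetical id
const ctx = canvas.getContext('2d');
let prev = null;

canvas.addEventListener('mousemove', (e) => {
  const rect = canvas.getBoundingClientRect();
  const x = canvas.width - (e.clientX - rect.left); // mirrored horizontally
  const y = e.clientY - rect.top;                   // vertical left as-is
  if (prev !== null) {
    ctx.beginPath();
    ctx.lineWidth = 4; // difficulty: pen width relative to the target line
    ctx.moveTo(prev.x, prev.y);
    ctx.lineTo(x, y);
    ctx.stroke();
  }
  prev = { x, y };
});
```

The actual task also posts the finished tracing back to a server, which this sketch omits.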

The word-search task is also like it sounds. You can use pre-defined grids, or the script can generate a grid for you. I’ve used it to try priming for power (control vs. power-related words hidden in the grid) and to look at frustration (by having a grid that *doesn’t* have all the target letters…mean, I know).
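
For a flavor of how grid generation can work (again, a sketch written for illustration, not the actual script), you can place each target word and pad the remaining cells with random letters; leaving a word out entirely is how you’d build the frustration version:

```javascript
// Illustrative grid generator (not the repo's code): place each target
// word left-to-right in a random row, then fill empty cells with random
// letters. A real task would also place words vertically/diagonally.
function makeGrid(size, words) {
  const grid = Array.from({ length: size }, () => Array(size).fill(null));
  for (const word of words) {
    for (let attempt = 0; attempt < 1000; attempt++) {
      const row = Math.floor(Math.random() * size);
      const col = Math.floor(Math.random() * (size - word.length + 1));
      const slot = grid[row].slice(col, col + word.length);
      // only place the word where cells are empty or already match
      if (slot.every((c, i) => c === null || c === word[i])) {
        for (let i = 0; i < word.length; i++) grid[row][col + i] = word[i];
        break;
      }
    }
  }
  const letters = 'abcdefghijklmnopqrstuvwxyz';
  return grid.map(row =>
    row.map(c => c ?? letters[Math.floor(Math.random() * letters.length)])
  );
}

// e.g., a 10x10 grid hiding power-related primes:
console.log(makeGrid(10, ['power', 'control', 'lead']));
```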

  1. Cusack, M., Vezenkova, N., Gottschalk, C., & Calin-Jageman, R. J. (2015). Direct and Conceptual Replications of Burgmer & Englich (2012): Power May Have Little to No Effect on Motor Performance. PLOS ONE, 10(10), e0140806. doi: 10.1371/journal.pone.0140806

Kids, Neurons, and Robots

At the end of February I (Dr. Bob) visited a local elementary school as part of the Oak Park Educational Foundation’s Science Alliance Program.

I was matched up with Sue Tressalt’s third-grade class at Irving Elementary. For an activity, I brought along the neuroscience program’s collection of Finch Robots, a set of laptops, and the Cartoon Network simulator I have been developing (Calin-Jageman, 2017, 2018). I introduced the kids to the basic rules of neural communication, and they explored Cartoon Network, learning how to make brains that get the Finch Robots to do what they wanted (avoid light, sing when touched, and so on). It was a great class, and a ton of fun.

I’m proud of Cartoon Network, and the fact that it can make exploring brain circuitry fun. It’s simple enough that the kids were able to dive right in (with some help), yet complex enough that really interesting behaviors and dynamics can be modelled.
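
Cartoon Network itself is graphical, but purely as an illustration of the rules the kids were exploring, here is a toy version of the update step (my sketch, not the app’s code): each neuron sums its weighted inputs and fires when it crosses a threshold, with positive weights exciting and negative weights inhibiting.

```javascript
// Toy sketch of threshold-neuron updates (illustrative, not Cartoon
// Network's actual code). weights[j][i] is the connection from neuron
// j to neuron i: positive = excitatory, negative = inhibitory.
function step(state, weights, inputs) {
  return state.map((_, i) => {
    let drive = inputs[i];
    state.forEach((fired, j) => { if (fired) drive += weights[j][i]; });
    return drive >= 1; // fire when total drive reaches threshold
  });
}

// Neuron 0 excites itself (a recurrent loop) and excites motor neuron 1.
const weights = [[1, 1], [0, 0]];
let state = [false, false];
state = step(state, weights, [1, 0]); // a brief "light" input fires neuron 0
state = step(state, weights, [0, 0]); // the loop keeps both firing
console.log(state);                   // [true, true]
```

Once started, a loop like this keeps firing until something inhibitory shuts it down, which is exactly why both excitation and inhibition matter.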

As a kid, my most formative experience in science was learning Logo, the programming language developed by Seymour Papert and colleagues at MIT. Logo was fun to use, and it made me need/want key programming concepts. I clearly remember sitting in the classroom writing a program to draw my name and being frustrated at having to re-write the commands for the B at the end of my name when I had already typed them out for the B at the beginning. The teacher came by and introduced me to functions. I remember being so happy about the idea of a “to b” function; I immediately grasped that I could write a function for each letter once and then have the turtle write anything I wanted in no time at all.

Years later I read Mindstorms, and it remains, to my mind, one of the most important books on pedagogy, teaching, and technology. Papert applied Piaget’s model of children as scientists (he had trained with Piaget). He believed that if you can make a microworld that is fun to explore, children will naturally need, discover, and understand the deep concepts embedded in that world. That’s what I was experiencing back in 2nd grade: I desperately needed functions, and so the idea stuck with me in a way it never would have in an artificial “hello world” type of programming exercise. Having been a “Logo kid,” it was amazing to read Mindstorms and recognize Papert’s intentionality behind the experiences I had learning Logo.

Anyway, bringing Cartoon Network to an elementary school for a day gave me a great feeling of carrying on a tiny piece of Papert’s legacy. The insights kids develop in just an hour of playing with neural networks are amazing: the idea of a recurrent loop made immediate sense to them, and it also set up the idea that both excitation and inhibition are important. And, as with Logo, the kids were excited to explore, to know that their experience was not dependent on getting the ‘right’ answer but on trying, observing, and trying again.

The day was fun, and even better, I received a whole stack of thank-you cards this week. Reading through them has kept a smile on my face all week. Here’s a sample.

This kid has some great ideas for the future of AI

“I never knew neurons were a thing at all”–the joy of discovery
“Your job seems awesome and you are the best at it”—please put this kid on my next grant review panel.
  1. Calin-Jageman, R. (2017). Cartoon Network: A tool for open-ended exploration of neural circuits. Journal of Undergraduate Neuroscience Education (JUNE), 16(1), A41–A45. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/29371840
  2. Calin-Jageman, R. (2018). Cartoon Network Update: New Features for Exploring of Neural Circuits. Journal of Undergraduate Neuroscience Education (JUNE), 16(3), A195–A196. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/30254530

Workshop at the Society for Neuroscience Meeting

This year was a big year for our lab at the Society for Neuroscience conference.  Leticia Perez, who has been in the lab for the past two summers, gave an amazing talk on our work on forgetting.  In addition, I (Bob) helped organize a Professional Development Workshop on doing better neuroscience.

It was a huge honor to get to lead this workshop. I gave a presentation on sample-size planning (which is sooo vital to doing good science; a back-of-the-envelope version is sketched below). David Mellor of the Open Science Framework spoke about pre-registration. And Richard Ball, who co-directs Project TIER, spoke about reproducible data analysis. Like the good Open Scientists we are, we used the Open Science Framework to post all our slides and resources: https://osf.io/5awp4/. SFN also made a video, which should be posted soon.
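
For the curious, sample-size planning can start as a quick calculation like this one (my illustration using the standard normal approximation, not a slide from the workshop):

```javascript
// Rough power-analysis sketch: approximate n per group for a two-sample
// t-test via the normal approximation n = 2 * ((z_alpha + z_beta) / d)^2,
// where d is the expected standardized effect size (Cohen's d).
function nPerGroup(d, zAlpha = 1.96, zBeta = 0.84) { // alpha = .05 two-tailed, power = .80
  return Math.ceil(2 * ((zAlpha + zBeta) / d) ** 2);
}

console.log(nPerGroup(0.5)); // 63 per group for a medium effect
console.log(nPerGroup(0.2)); // 392 per group for a small effect
```

(For precision planning, the same arithmetic runs off a target CI width instead of a target power.)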

SFN staff told us it was the best-attended workshop of the meeting. Hooray! Hope all our attendees will go forth to spread the good word about these small tweaks that can have such a big impact on scientific quality.

Here’s what it looked like from my perspective:

The New Statistics for Neuroscience Education

This summer I (Bob) was asked to write a series of perspective pieces on statistical issues for the Journal of Undergraduate Neuroscience Education (JUNE).

My first effort has just been published–it is a call for neuroscience education to shift away from p values, and an explanation of the basic principles of the New Statistics with an example drawn from neuroscience.

It turns out that the paper was published just before the annual meeting of the Society for Neuroscience, which I am currently attending.  It’s been very gratifying to see the paper is already sparking some discussion.

Here’s the key figure from the paper, contrasting the NHST approach with the New Statistics approach using data from a paper in Nature Neuroscience.
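
The figure itself isn’t reproduced here, but the contrast is easy to sketch: instead of reporting only whether p crosses .05, the estimation approach reports the effect size with its confidence interval. A minimal illustration with made-up numbers (not the data from the paper):

```javascript
// Estimation in a nutshell (made-up numbers, not the paper's data):
// report the difference between groups with a 95% confidence interval
// rather than only a reject/retain decision about the null.
function meanDiffCI(a, b) {
  const mean = xs => xs.reduce((s, x) => s + x, 0) / xs.length;
  const variance = xs => {
    const m = mean(xs);
    return xs.reduce((s, x) => s + (x - m) ** 2, 0) / (xs.length - 1);
  };
  const diff = mean(a) - mean(b);
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  const crit = 1.96; // large-sample normal value; small samples need t instead
  return { diff, lower: diff - crit * se, upper: diff + crit * se };
}

console.log(meanDiffCI([5.1, 4.8, 6.0, 5.5], [4.2, 4.9, 4.4, 5.0]));
```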

Getting Started with the New Statistics – A talk at Indiana University

This fall I (Bob) was invited to give a talk at Indiana University as part of a series on good science and statistical practice organized by the university’s Social Science Research Commons (which is like a core facility for getting advice on statistics and experimental design…what a cool thing for a university to have!).

I really enjoyed my visit (thanks Emily, Cami, and Patricia): good conversation with fascinating people in a beautiful setting. The series has a video archive, so my talk is now posted online as a video and as a PowerPoint. Here’s the link; take a look if you want to know more about how to get started using Open Science practices and the New Statistics: https://media.dlib.indiana.edu/media_objects/gt54kp23k

Maintaining Memories, Changing Transcription

Under the right circumstances, a memory can last a lifetime.  Yet at the molecular level the brain is constantly in flux: the typical protein has a half-life of only a few hours to days; for mRNA a half-life of 2 days is considered extraordinarily long.   If the important biological molecules in the brain are constantly undergoing decay and renewal, how can memories persist?

The Slug Lab has a bit of new light to shed on this issue today.  We’ve just published the next in our series of studies elucidating the transcriptional changes that accompany long-term memory for sensitization in Aplysia.  In a previous paper, we looked at transcription 1 hour after a memory was induced, a point at which the nervous system is first encoding the memory.  We found that there is rapid up-regulation of about 80 transcripts, many of which function as transcription factors (Herdegen, Holmes, Cyriac, Calin-Jageman, & Calin-Jageman, 2014).

For the latest paper (Conte et al., 2017), we examined changes 1 day after training, a point when the memory is being maintained (and will last for another 5 days or so). What we found is pretty amazing. The transcriptional response during maintenance is very complex, involving up-regulation of more than 700 transcripts and down-regulation of just under 400. Given that there are currently 21,000 gene models in the draft of the Aplysia genome, this means more than 5% of all genes are affected (probably more, given the likelihood of some false negatives and the fact that our microarray doesn’t cover the entire Aplysia genome). That’s a lot of upheaval… what exactly is changing? It was daunting to make sense of such a long list of transcripts, but we noticed some very clear patterns. First, there is regulation influencing growth: an overall up-regulation of transcripts related to producing, packaging, and transporting proteins, and a down-regulation of transcripts related to catabolism. Second, we observed lots of changes that could be related to meta-plasticity. Specifically, we observed down-regulation of isoforms of PKA, of some serotonin receptors, and of a phosphodiesterase. All of these changes might be expected to limit the ability to induce further sensitization, which would be consistent with the BCM rule: once synapses are facilitated, raise the threshold for further facilitation (Bienenstock, Cooper, & Munro, 1982).

One of the very intriguing findings to come out of this study is that the transcriptional changes occurring during encoding are very distinct from those occurring during maintenance. We found only about 20 transcripts regulated at both time points. We think those transcripts might be especially important, as they could play a key regulatory/organizing role that spans from induction through maintenance. One of these transcripts encodes a peptide transmitter called FMRFamide. This is an inhibitory transmitter, which raises the possibility that as the memory is encoded, inhibitory processes are simultaneously working to limit or even erode the expression of the memory (a form of active forgetting).

There are lots of exciting pathways for us to explore from this intriguing data set.  We feel confident heading down these paths because a) we used a reasonable sample size for the microarray, and b) we found incredibly strong convergent validity in an independent set of samples using qPCR.

This is a big day for the Slug Lab, and a wonderful moment of celebration for the many students who helped bring this project to fruition: Catherine Conte (applying to PT schools), Samantha Herdegen (in pharmacy school), Saman Kamal (in medical school), Jency Patel (about to graduate), Ushma Patel (about to graduate), Leticia Perez (about to graduate), and Marissa Rivota (just graduated).  We’re so proud of these students and so fortunate to work with such a talented and fun group.

Bienenstock, E., Cooper, L., & Munro, P. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 2(1), 32–48.
Conte, C., Herdegen, S., Kamal, S., Patel, J., Patel, U., Perez, L., … Calin-Jageman, I. E. (2017). Transcriptional correlates of memory maintenance following long-term sensitization of Aplysia californica. Learning & Memory, 24, 502–515. doi: 10.1101/lm.045450.117
Herdegen, S., Holmes, G., Cyriac, A., Calin-Jageman, I. E., & Calin-Jageman, R. J. (2014). Characterization of the rapid transcriptional response to long-term sensitization training in Aplysia californica. Neurobiology of Learning and Memory, 116, 27–35. doi: 10.1016/j.nlm.2014.07.009

Red, Romance, and Replication – Cross-posted at thenewstatistics.com

I (Bob) have a new replication paper out today, a collaboration with DU student Elle Lehmann (Lehmann & Calin-Jageman, 2017).  The OSF page for the paper with all the materials and data is here: https://osf.io/j3fyq/ (Calin-Jageman & Lehmann, 2015).

The paper replicates a set of previous findings showing that the color red dramatically increases romantic attraction, both for women rating men (A. J. Elliot et al., 2010) and for men rating women (A. Elliot & Niesta, 2008). Elle and I conducted two replications: one in person with a standard psychology participant pool, the other online with MTurk participants. In each case we planned for an informative sample, used the original materials, pre-registered our design and analysis plan, and applied extensive exclusion criteria to ensure suitable participants (e.g., testing for color-blindness). In both cases, we are sad to report, there was little-to-no effect of red on perceived attractiveness or desired sexual behavior.

Example of the types of stimuli used in red-romance studies (not the actual stimuli we used, though)

There were a few weaknesses: 1) for the in-person study we didn’t obtain nearly enough men to make a good test of the hypothesis, 2) for the online study we couldn’t control the exact parameters for the color red.  Still, we found no strong evidence that incidental red influences perceived attractiveness.

Beyond the (disappointing) replication results, there are some really interesting developments to this story:

  • Our replication work drew the attention of science journalist Dalmeet Singh, who wrote a cool article for Slate summarizing the field and our contribution. Dalmeet has made covering negative results a part of his beat–how great is that!
  • There have been some questions about these studies almost from the start. Greg Francis highlighted the fact that the original study of women rating men (Elliot et al., 2010) is just too good to be true: every study was statistically significant despite very low power, something that ought not to happen regularly (Francis, 2013).
  • Although there have been some studies showing red effects (though often in subgroups or only with some DVs), there is a growing number of studies reporting little-to-no effect of red manipulations on attraction (Hesslinger, Goldbach, & Carbon, 2015; Peperkoorn, Roberts, & Pollet, 2016; Seibt, 2015; Lynn, Giebelhausen, Garcia, Li, & Patumanon, 2013; Kirsch, 2015), plus a whole raft of student-led precise replications that were part of the CREP project (Grahe et al., 2012): https://osf.io/ictud/
  • To help make sense of the data, Elle and I embarked on a meta-analysis. It has turned out to be a very big project. We hope we’re nearly ready for submission.
  • Andrew Elliot, the original investigator, was extremely helpful in assisting with this replication. Then, as the meta-analysis progressed, he became even more involved and has now joined the project as a co-author. The project isn’t complete yet, but I’ve really enjoyed working with him, and I’m proud that this will (hopefully) become an example of how collegial and productive replication work can be in building better, more cumulative science.

References

Calin-Jageman, R., & Lehmann, G. (2015). Romantic Red – Registered Replications of effect of Red on Attractiveness (Elliot & Niesta, 2008; Elliot et al. 2010). Open Science Framework. https://doi.org/10.17605/osf.io/j3fyq
Elliot, A. J., Niesta Kayser, D., Greitemeyer, T., Lichtenfeld, S., Gramzow, R. H., Maier, M. A., & Liu, H. (2010). Red, rank, and romance in women viewing men. Journal of Experimental Psychology: General, 139(3), 399–417. https://doi.org/10.1037/a0019689
Elliot, A., & Niesta, D. (2008). Romantic red: red enhances men’s attraction to women. Journal of Personality and Social Psychology, 95(5), 1150–1164.
Francis, G. (2013). Publication bias in “Red, rank, and romance in women viewing men,” by Elliot et al. (2010). Journal of Experimental Psychology: General, 142(1), 292–296.
Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the Undiscovered Resource of Student Research Projects. Perspectives on Psychological Science, 7(6), 605–607. https://doi.org/10.1177/1745691612459057
Hesslinger, V. M., Goldbach, L., & Carbon, C.-C. (2015). Men in red: A reexamination of the red-attractiveness effect. Psychonomic Bulletin & Review, 22(4), 1142–1148. https://doi.org/10.3758/s13423-015-0866-8
Kirsch, F. (2015). Wahrgenommene Attraktivität und sexuelle Orientierung [Perceived attractiveness and sexual orientation]. Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-658-08405-9
Lehmann, G. K., & Calin-Jageman, R. J. (2017). Is Red Really Romantic? Social Psychology, 48(3), 174–183. https://doi.org/10.1027/1864-9335/a000296
Lynn, M., Giebelhausen, M., Garcia, S., Li, Y., & Patumanon, I. (2013). Clothing Color and Tipping. Journal of Hospitality & Tourism Research, 40(4), 516–524. https://doi.org/10.1177/1096348013504001
Peperkoorn, L. S., Roberts, S. C., & Pollet, T. V. (2016). Revisiting the Red Effect on Attractiveness and Sexual Receptivity. Evolutionary Psychology, 14(4), 1474704916673841. https://doi.org/10.1177/1474704916673841
Seibt, T. (2015). Romantic Red Effect in the Attractiveness Perception. In Proceedings of The 3rd Human and Social Sciences at the Common Conference. Publishing Society. https://doi.org/10.18638/hassacc.2015.3.1.186