The SlugLab was in full force at the 2023 meeting of the Chicago Society for Neuroscience.
Zayra, Jackie, and Jash presented a poster reporting the very-long-term sensitization project we worked on this past summer.
Theresa snuck in some science before escaping for her softball team’s spring break trip.
We got to catch up with Cristian, a SlugLab alum now working as a lab technician at Rush.
And, nearly all of C-J’s neurobio class attended, soaking up some fantastic neuroscience.
A big highlight was the address by Carl Hart: “Exaggerating Harmful Drug Effects on the Brain Is Killing Americans” — it was a heartfelt, heartbreaking, and fascinating talk. Bravo to cSFN for highlighting Dr. Hart’s work and perspective.
When the pandemic hit, all in-person research was shut down at Dominican (of course). This left us with a real challenge: figuring out how our psychology majors could continue to engage in authentic and interesting research.
One solution I (Bob) worked on during the summer of 2020 was to assemble a collection of studies that would be a) socially relevant, and b) feasible to replicate and extend fully online (https://osf.io/xnuap/). This worked out really well for our research methods sequence.
I also worked with my colleague TJ Krafnick on another approach: getting DU involved in some RRR projects (Registered Replication Reports). Specifically, TJ and I applied to take part in a massive RRR organized by the Psychological Science Accelerator (https://psysciacc.org/). What was especially exciting about this RRR was that it featured a trio of experiments, each designed to test online interventions to help modify emotional/behavioral responses to the Covid-19 pandemic (https://psysciacc.org/studies/psacr-1-2-3/). TJ and I obtained local IRB approval, and then we worked with our research methods students to collect data at DU. Students in my fall 2020 research methods course then analyzed the data from our DU site and wrote it up for their semester-long term projects. It was a really good experience for the class; we turned lemons into lemonade.
Now the Psychological Science Accelerator has assembled the data from all the team sites and published the manuscript for the first project (Wang et al., 2021). TJ and I are proud to be co-authors in a very long list of talented collaborators (reading through the Google Docs of draft proposals and manuscripts was incredible–at times, the manuscripts were probably more comment than actual text!).
So what was the actual study, and what did it find? Participants (N > 23,000!) were randomly assigned to receive either a brief training in an emotion-regulation strategy (reappraisal or reconstrual) or a control condition. Participants were then asked to rate their positive and negative emotions in response to a series of genuinely heartbreaking images related to the Covid-19 pandemic. There were clear and consistent effects of the interventions on self-reported emotions: participants who received the training reported more positive emotions (d = -0.59!) and fewer negative emotions (d = -0.39) in response to the photos. This was true across essentially all study sites, regardless of language or culture. That’s pretty amazing! On the other hand, the intervention was short-term, and the dependent variable relied entirely on self-reported emotional responses, which might not be very reliable and which could be susceptible to demand effects. Still, an encouraging win for emotional reappraisal strategies.
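For readers new to effect sizes: Cohen’s d is just the difference between group means expressed in standard-deviation units. Here’s a minimal sketch with made-up ratings (NOT the actual PSA data; the function name is ours):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# made-up positive-emotion ratings (1-7 scale), purely for illustration
reappraisal = [5.1, 4.2, 5.6, 4.4, 5.0, 4.6]
control = [4.5, 4.0, 5.2, 4.1, 4.8, 4.3]
print(round(cohens_d(reappraisal, control), 2))
```

With ~23,000 participants, even a d around 0.4 is estimated very precisely, which is part of what makes the multi-site result so convincing.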
Wang, K., Goldenberg, A., Dorison, C., Miller, J., Uusberg, A., Lerner, J., … Moshontz, H. (2021). A multi-country test of brief reappraisal interventions on emotions during the COVID-19 pandemic. Nature Human Behaviour, 5(8), 1089–1110. doi: 10.1038/s41562-021-01173-x
Today, the SlugLab can share an exciting new paper, with contributions from Tania Rosiles, Melissa Nguyen, Monica Duron, Annette Garcia, George Garcia, Hannah Gordon, and Lorena Juarez (Rosiles et al., 2020).
Where to even start?
Contributions from 7 student co-authors! It’s been such a long haul; we’re proud of each of you for sticking with it and for all your contributions to this paper.
This paper is a registered report: we first proposed the idea and the methods, even writing a complete analysis script. This was then sent out for peer review (you know, when you can still do something if the reviewers turn up an issue or problem to consider!), and after some back and forth we received an ‘in principle’ acceptance. Then we completed the work and the analysis and submitted it for one more round of review focused solely on the interpretation of the data. This approach to publication lets peer reviewers have a more meaningful impact on the project, and it also helps combat publication bias. People tend to think of this model for replication research, but in our case we used a registered report because we wanted to establish a fair and valid test between two competing theories and to ensure that the approach and analysis were pre-specified.
This paper is exciting! We were able to test two very different theories of forgetting:
decay theory, which says that memories are forgotten because they physically degrade
retrieval failure, which says that memories don’t degrade at all, but simply become more difficult to retrieve due to interference
We found clear support for the retrieval failure theory of forgetting, something I (Bob) was completely not expecting.
So, what was the study actually about?
Even memories stored via wiring changes in the brain can be forgotten. In fact, the majority of long-term memories are probably forgotten. What does this really mean? Is the information gone, or just inaccessible?
One clue comes from savings memory: the fact that you can very quickly re-learn seemingly-forgotten information. Savings memory is sometimes taken to mean the original memory trace persists, but it could also be that the memory has decayed and its remnants prime re-learning.
We noticed a testable prediction:
If forgetting is decay, savings re-encodes the memory and must involve the transcriptional and wiring changes used to store new information.
If forgetting is inaccessibility, savings shouldn’t involve transcriptional/wiring changes.
To test this prediction, we tracked transcriptional changes associated with memory storage as a memory was first formed, then forgotten, then re-activated. We did this in the sea slug Aplysia californica as a registered report (with pre-registered design and analyses).
The memory was for a painful shock—this is expressed as an increase in reflexes (day 1, red line way above baseline). Sensitization is forgotten in about a week (day 7, reflexes back to normal), but then a weak shock produces savings (day 8, reflexes jump back up).
What’s happening in the nervous system? Our key figure shows expression of ~100 transcripts that are sharply up- or down-regulated when the memory is new. At forgetting, these are deactivated (all lines dive towards 0). At savings? No re-activation! (lines stay near 0).
Our results show that savings re-activates a forgotten memory without invoking *any* of the transcriptional changes associated with memory formation. This strongly suggests the memory is not rebuilt, but just re-activated—the information must have been there all along?!
Lots of caveats (see paper), but the results seem compelling (though surprising) to us. In particular, we used an archival data set to show we would have observed re-activation of transcription had it occurred. Transcriptional changes with savings are clearly negligible.
Rosiles, T., Nguyen, M., Duron, M., Garcia, A., Garcia, G., Gordon, H., … Calin-Jageman, R. J. (2020). Registered Report: Transcriptional Analysis of Savings Memory Suggests Forgetting is Due to Retrieval Failure. eNeuro. doi: 10.1523/eneuro.0313-19.2020
Beth Morling and I (Bob) have a new commentary out in Teaching of Psychology that provides an overview of the Open Science and New Statistics movements and gives some advice about how psychology instructors can bring these new developments into the traditional psychology curriculum (Morling & Calin-Jageman, 2020).
Beth is a superstar on many fronts, but she is perhaps best known for her incredible Research Methods in Psychology textbook (https://wwnorton.com/books/9780393536263). Just being asked to work on this commentary was a thrill. Then, working together, I learned a lot from her, especially from her approach to writing, which kept us on task and productive.
The article is open-access, so check it out. Here’s my favorite paragraph:
Introductory coursework is the ideal time to foster estimation thinking. Teachers can use the prompt, “How much?” to help students consider the magnitudes of effects and to seek context. Using the prompt, “How wrong?” can encourage students to embrace uncertainty and to introduce the key idea of sampling variation. Finally, prompting students with, “What else is known?” helps them see science as a cumulative and integrative process rather than as a series of “one-and-done” demonstrations. These three questions instill a nuanced view of science, where any one study is tenuous, and yet the cumulative evidence from a body of research can be compelling. This is a sophisticated epistemic viewpoint that avoids both excessive confidence and undue cynicism.
Morling & Calin-Jageman, 2020, p. 174
Morling, B., & Calin-Jageman, R. J. (2020). What Psychology Teachers Should Know About Open Science and the New Statistics. Teaching of Psychology. doi: 10.1177/0098628320901372
I finally had some spare time to document and post the mirror tracing and word-search tasks I developed for some replication work my students and I completed (Cusack, Vezenkova, Gottschalk, & Calin-Jageman, 2015).
The mirror-tracing task is just like it sounds–participants trace an image with their mouse or trackpad, but the mouse movements are mirrored, making it hard to stay in the line. You can vary task difficulty by changing the line thickness. There is an expected weak negative correlation with age. The script can even post the traced images back to your server, which is handy for making figures that show how groups differ using representative data.
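The actual task ran in the browser, but the core logic is simple. Here is a toy sketch in Python (purely illustrative; the function names are ours, not the script’s) of the two key ingredients: mirroring the cursor, and scoring whether a traced point stayed within the line:

```python
def mirror_point(x, y, width):
    """Flip the horizontal coordinate around the canvas midline, so a
    rightward mouse move appears as a leftward cursor move."""
    return width - 1 - x, y

def dist_to_segment(px, py, ax, ay, bx, by):
    """Distance from point P to segment A-B; a tracing script could use
    this to score whether the cursor stayed within the line's thickness."""
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # parametric position of the closest point on the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

# on a 400-px-wide canvas, a cursor near the left edge renders near the right edge
print(mirror_point(10, 50, 400))
```

Varying the `thickness` threshold used with `dist_to_segment` is one way the difficulty manipulation could be implemented.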
The word-search task is also just like it sounds. You can use pre-defined grids, or the script can generate a grid for you. I’ve used it to try priming for power (control vs. power-related words hidden in the grid) and to look at frustration (by having a grid that *doesn’t* contain all the target words…mean, I know).
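For the curious, here’s a toy sketch of how grid generation might work (our own illustration, not the actual task script): place each word horizontally on its own random row, then fill the remaining cells with random letters.

```python
import random
import string

def make_grid(words, size=10, seed=None):
    """Toy word-search generator: put each word horizontally on its own
    random row, then fill the empty cells with random letters."""
    rng = random.Random(seed)
    grid = [[None] * size for _ in range(size)]
    for word, row in zip(words, rng.sample(range(size), len(words))):
        start = rng.randrange(size - len(word) + 1)
        for i, letter in enumerate(word.upper()):
            grid[row][start + i] = letter
    # fill remaining cells with random uppercase distractor letters
    return [[cell if cell else rng.choice(string.ascii_uppercase) for cell in row]
            for row in grid]

grid = make_grid(["power", "control", "lead"], seed=1)
for row in grid:
    print(" ".join(row))
```

A real generator would also place words vertically, diagonally, and reversed, but the fill-with-distractors idea is the same.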
Cusack, M., Vezenkova, N., Gottschalk, C., & Calin-Jageman, R. J. (2015). Direct and Conceptual Replications of Burgmer & Englich (2012): Power May Have Little to No Effect on Motor Performance. PLOS ONE, e0140806. doi: 10.1371/journal.pone.0140806
At the end of February I (Dr. Bob) visited a local elementary school as part of the Oak Park Educational Foundation’s Science Alliance Program.
I was matched up with Sue Tressalt’s Third Grade Class at Irving Elementary. For an activity, I brought along the neuroscience program’s collection of Finch Robots, a set of laptops, and the Cartoon Network simulator I have been developing (Calin-Jageman, 2017, 2018). I introduced kids to the basic rules of neural communication, and they explored Cartoon Network, learning how to make brains to get the Finch Robots to do what they wanted (e.g. avoid light, sing when touched, etc.). It was a great class, and a ton of fun.
I’m proud of Cartoon Network, and the fact that it can make exploring brain circuitry fun. It’s simple enough that the kids were able to dive right in (with some help), yet complex enough that really interesting behaviors and dynamics can be modelled.
As a kid, my most formative experience in science was learning Logo, the programming language developed by Seymour Papert and colleagues at MIT. Logo was fun to use, and it made me need/want key programming concepts. I clearly remember sitting in the classroom writing a program to draw my name and being frustrated at having to re-type the commands to make a B at the end of my name when I had already typed them out for the B at the beginning. The teacher came by and introduced me to functions, and I remember being so happy about the idea of a “to b” function. I immediately grasped that I could write a function for every letter once and then have the turtle type anything I wanted in no time at all.
Years later I read Mindstorms, and it remains, to my mind, one of the most important books on pedagogy, teaching, and technology. Papert applied Piaget’s model of children as scientists (he had trained with Piaget). He believed that if you can make a microworld that is fun to explore, children will naturally need, discover, and understand the deep concepts embedded in that world. That’s what I was experiencing back in 2nd grade–I desperately needed functions, and so the idea stuck with me in a way it never would have in an artificial “hello world” type of programming exercise. Having been a “Logo kid,” it was amazing to read Mindstorms and recognize Papert’s intentionality behind the experiences I had learning Logo.
Anyways, bringing Cartoon Network to an elementary school for a day gave me a great feeling of carrying on a tiny piece of Papert’s legacy. The insights kids develop in just an hour of playing with neural networks are amazing–the idea of a recurrent loop made immediate sense to them, and that also sets up the idea that both excitation and inhibition are important. And, like in Logo, the kids were excited to explore–to know that their experience was not dependent on getting the ‘right’ answer but on trying, observing, and trying again.
The day was fun, and even better, I received a whole stack of thank-you cards this week. Reading through them has kept a smile on my face all week. Here’s a sample.
Calin-Jageman, R. (2017). Cartoon Network: A tool for open-ended exploration of neural circuits. Journal of Undergraduate Neuroscience Education (JUNE), 16(1), A41–A45. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/29371840
Calin-Jageman, R. (2018). Cartoon Network Update: New Features for Exploring of Neural Circuits. Journal of Undergraduate Neuroscience Education (JUNE), 16(3), A195–A196. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/30254530
It was a whirlwind 2018. Irina and I are just now catching our breath and finding some time to update the lab website.
One awesome piece of news we forgot to publicize is that our latest paper came out in the August issue of Neurobiology of Learning and Memory (Patel et al., 2018). This paper continues our work of tracking the molecular fragments of a memory as it is forgotten. Specifically, we tracked 11 genes we suspected of being regulated *after* forgetting (Perez, Patel, Rivota, Calin-Jageman, & Calin-Jageman, 2017). Things didn’t work out quite as well as we had expected: of our 11 candidate genes, 4 didn’t show much regulation, meaning that our previous results probably overestimated their importance (curse you, sampling error!). On the other hand, we replicated the results with the other genes and found that some of them are actually regulated for up to 2 weeks after the memory is induced, long after it seems forgotten.
Here are two key figures. The first is the memory curve for sensitization in our Aplysia–it shows that after memory induction there is strong sensitization recall, which decays back to baseline within a week. Even though the memory seems gone, giving a reminder 2 weeks after learning rekindles a weak re-expression of the memory. That’s a classic “savings” effect.
The next figure traces the time-course of memory-induced gene expression (levels of mRNA) for 6 specific genes, measured in the pleural ganglia, which contain neurons known to be important for storing sensitization memory. You can see that each of these transcripts is up- or down-regulated within 24 hours of learning, and that in each case this regulation lasts at least a week and sometimes out to 2 weeks. So, just as the behavioral expression of the memory fades but isn’t really completely gone, some of the transcriptional events that accompany learning also seem to persist for quite some time.
Why would this occur? Perhaps these transcripts are part of savings…maybe they set the stage for re-expressing the memory? Or maybe they are actually part of forgetting, working to remove the memory? Or maybe both? For example, one of the transcripts encodes an inhibitory transmitter named FMRFamide. It is strongly up-regulated by learning, which would normally work against the expression of sensitization memory. So perhaps this helps suppress the memory (forgetting), but in a way that can be easily overcome with sufficient excitation (savings)…that’s an exciting maybe, and it’s the thing we’ll be working to test this summer.
As usual, we’re so proud that this paper was made possible through exceptional hard work from some outstanding DU student researchers: Ushma Patel, Leticia Perez, Steven Farrell, Derek Steck, Athira Jacob, Tania Rosiles, and Melissa Nguyen. Go slug squad!
Patel, U., Perez, L., Farrell, S., Steck, D., Jacob, A., Rosiles, T., … Calin-Jageman, I. E. (2018). Transcriptional changes before and after forgetting of a long-term sensitization memory in Aplysia californica. Neurobiology of Learning and Memory, 155, 474–485. doi:10.1016/j.nlm.2018.09.007
Perez, L., Patel, U., Rivota, M., Calin-Jageman, I. E., & Calin-Jageman, R. J. (2017). Savings memory is accompanied by transcriptional changes that persist beyond the decay of recall. Learning & Memory, 25(1), 45–48. doi:10.1101/lm.046250.117
The SlugLab has a new preprint out, currently under review at Neurobiology of Learning and Memory. It shows that both transcription and savings can persist for as long as 2 weeks after the induction of long-term sensitization, well beyond the decay of recall. Interestingly, all of the long-lasting transcriptional changes start within 1 day of training. Lots of student co-authors on this one; it was a *lot* of work. Looking forward to the reviews.
Most long-term memories are ‘forgotten’–meaning that it becomes harder and harder to recall the memory. Psychologists have long known, though, that forgetting is complex, and that fragments of a memory can remain. For example, even after a memory seems forgotten it can be easier to re-learn the same material, something called ‘savings memory’. That suggests that there is at least some fragment of a memory that persists in the brain even after it seems forgotten…but what?
Today our lab has published a paper shedding a bit of light on this long-standing mystery (Perez, Patel, Rivota, Calin-Jageman, & Calin-Jageman, 2017). We tracked a sensitization memory in our beloved sea slugs. As expected, the memory faded–within a week, animals showed no recall of the prior sensitization. Even more exciting, we found fragments of the memory at the molecular level–a small set of genes remained strongly regulated by the original training even though recall had fully decayed.
Why? Do these persistent transcriptional changes help keep a remnant of the memory going? Or are they actually doing the work of fully erasing the memory? Or do they serve some other function entirely (or no function at all)? These are some of the exciting questions we now get to investigate. But for now, we have this fascinating foothold into exploring what, exactly, forgetting is all about in the brain.
As usual, we are enormously proud of the undergraduate students who helped make this research possible: Leticia Perez, Ushma Patel, and Marissa Rivota. Ushma, who wants to do science illustration, is making an incredible piece of artwork representing these findings. A draft is shown above. She submitted it for the cover of the journal, but sadly the journal selected a different image (boo!). Still, a very exciting and proud day for the SlugLab!
Perez, L., Patel, U., Rivota, M., Calin-Jageman, I. E., & Calin-Jageman, R. J. (2017). Savings memory is accompanied by transcriptional changes that persist beyond the decay of recall. Learning & Memory, 25(1), 45–48. doi: 10.1101/lm.046250.117
This year was a big year for our lab at the Society for Neuroscience conference. Leticia Perez, who has been in the lab for the past two summers, gave an amazing talk on our work on forgetting. In addition, I (Bob) helped organize a Professional Development Workshop on doing better neuroscience.
It was a huge honor to get to lead this workshop. I gave a presentation on sample-size planning (which is sooo vital to doing good science). David Mellor at the Open Science Framework spoke about pre-registration. And Richard Ball, who co-directs Project TIER, spoke about reproducible data analysis. Like the good Open Scientists we are, we used the Open Science Framework to post all our slides and resources: https://osf.io/5awp4/. SFN also made a video, which should be posted soon.
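To give a flavor of what sample-size planning involves: a classic normal-approximation rule of thumb for a two-group comparison is n per group ≈ 2·((z₁₋α/₂ + z_power)/d)². Here’s a minimal stdlib-Python sketch (our own illustration, not the workshop materials):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample comparison,
    via the normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist()
    needed = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2
    return math.ceil(needed)

# a 'medium' effect (d = 0.5) needs dozens per group for 80% power;
# a 'small' effect (d = 0.2) needs hundreds per group
print(n_per_group(0.5), n_per_group(0.2))
```

The take-home is the inverse-square relationship: halving the expected effect size quadruples the sample you need, which is exactly why planning before data collection matters so much.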
SFN staff told us it was the best-attended workshop of the meeting. Hooray! We hope all our attendees will go forth to spread the good word about these small tweaks that can have such a big impact on scientific quality.