Two brilliant old-school hippocampologists make today's NY Times; the link is here. The short version is that they were able to mimic CA1 output to rescue a memory in rats. I'll have to take a look at the original paper to critique it further, but it's important if it holds up.
Of course, it's behind a paywall for those without subscriptions, but if you do have one, I think it's an important article for those of us interested in the intersection of human cognition and computation. Click above to get the abstract.
The basic cognitive problem addressed is how humans rapidly learn and abstract given the noisy, relatively sparse inputs of our sensory systems. It's a big problem for the Decade of the Mind crowd and for those interested in the whole notion of reverse-engineering our brains.
At the same time, this is not the article for explaining how things are done at the neural level. But it does perhaps lay out some clues as to what we might be looking for.
Hat tip to one of my former students and of course, because I began my career in neuroscience studying invertebrate brains:
I asked permission to sit in on the Journal Club that’s held regularly by our neuroscience doctoral students today. It was a real treat. They are a bright bunch.
A key point of discussion concerned the sort of operational definitions that are fairly common in behavioral neuroscience (e.g., habit learning, spatial learning). These definitions are extremely important in the design of experiments and in the interpretation of results, and they evolve over time: model-based and model-free were the two relevant contexts for today's paper from Redish's lab at the University of Minnesota.
Central to these neuroscience approaches is the holy grail of dissociating the different types of learning that occur across regions of the brain. The problem, of course, is that many brain regions participate in any one kind of learning. Further, any experimental design, no matter how excellent (in my opinion, the late David Olton was among the very best), is likely to have any single learning experience confounded by multiple types of learning.
Here is Nathan Schneider's essay In Defense of The Memory Theater. It is at once as dystopian and as optimistic about the future of computing, the Net, and books as any essay I've read recently. His uncle, a former biologist at NIH, plays a central role, inventing a text-based knowledge system that itself becomes alive as a "memory theater."
Coincidentally, Dame Frances A. Yates's The Art of Memory, referenced in Schneider's piece, is next on my own reading list.
The story, of course, is on Todd Sacktor and PKMzeta, a form of my own favorite molecule, protein kinase C.
Human memories work a lot like Hebb imagined in his book, The Organization of Behavior (1949)…
From today’s NY Times. Money quote:
The recordings, taken from the brains of epilepsy patients being prepared for surgery, demonstrate that these spontaneous memories reside in some of the same neurons that fired most furiously when the recalled event had been experienced.
Actually, the piece from today's NY Times suggests that, from an evolutionary standpoint, being smarter isn't necessarily a winning strategy for animals. That idea may be worth bringing up in my talk at the Third Decade of the Mind symposium; I'm flying out to Des Moines this afternoon.