For those in the arts, few moments are more blissful than those spent “in the zone,” those times when the words or images or notes flow unimpeded, the artist functioning as more conduit than creator.
Viewed in this light, artist Melissa McCracken’s chromesthesia—or sound-to-color synesthesia—is a gift. Since birth, this rare neurological phenomenon has caused her to see colors while listening to music, an experience she likens to visualizing one’s memories.
Trained as a psychologist, she has made a name for herself as an abstract painter by transferring her colorful neurological associations onto canvas.
McCracken told Broadly that chromesthetes’ color associations vary from individual to individual, though her own experience of a particular song only wavers when she is focusing on a particular element, such as a bass line she’s never paid attention to before.
While her portfolio suggests a woman of catholic musical tastes, colorwise, she does tend to favor certain genres and instruments:
Expressive music such as funk is a lot more colorful, with all the different instruments, melodies, and rhythms creating a highly saturated effect. Guitars are generally golden and angled, and piano is more marbled and jerky because of the chords. I rarely paint acoustic music because it’s often just one person playing guitar and singing, and I never paint country songs because they’re boring muted browns.
Her favorite kind of music, jazz, almost always presents itself to her in shades of gold and blue, leading one to wonder if perhaps the Utah Jazz’s uniform redesign has a synesthetic element.
If you’ve seen Bong Joon-ho’s film Okja, about a gargantuan, agribusiness-engineered mutant pig and her young Korean girl sidekick, you may have some very specific ideas about CRISPR, the science used to edit and manipulate genes. In fact, the madcap fictional adventure’s world may not be too far off, though the science seems to be moving in the other direction. Just recently, Chinese scientists reported the creation of 12 pigs with 24 percent less body fat than the ordinary variety. It may not be front-page news yet, but the achievement is “a big issue for the pig industry,” says the lead researcher.
There’s much more to CRISPR than bioengineering lean bacon. But what is it and how does it work? I couldn’t begin to tell you. Let biologist Neville Sanjana explain. In the Wired video above, he undertakes the ultimate challenge for science communicators—explaining the most cutting-edge science to five different people: a 7‑year-old, 14-year-old, college student, grad student, and—to really put him on the spot—a CRISPR expert. CRISPR is “a new area of biomedical science that enables gene editing,” Sanjana begins in his short intro for viewers, “and it’s helping us understand the basis of many genetic diseases like autism and cancer.”
That’s all well and good, but does he have anything to say about the pig business? Watch and find out, beginning with the adorable 7‑year-old Teigen River, who may or may not have been primed with perfect responses. Play it for your own kids and let us know how well the explanation works. Sanjana runs quickly through his other students to arrive, halfway through the video, at Dr. Matthew Canver, CRISPR expert.
From there on out you may wish to refer to other quick references, such as the Harvard and MIT Broad Institute’s short guide and video intro above from molecular biologist Feng Zhang, who explains that CRISPR, or “Clustered Regularly Interspaced Short Palindromic Repeats,” is actually the name of DNA sequences in bacteria. The gene editing technology itself is called CRISPR-Cas9. Just so you know how the sausage is made.
Enough of pig puns. Let’s talk about brains, with neuroscientist Dr. Bobby Kasthuri of the Argonne National Laboratory. He faces a similar challenge above—this time explaining high-concept science to a 5‑year-old, 13-year-old, college student, grad student, and a “Connectome entrepreneur.” A what? The term comes from the NIH’s Human Connectome Project, which set out to “provide an unparalleled compilation of neural data” and “achieve never before realized conclusions about the living human brain.” This brain-mapping science has many objectives, one of which, in the 5‑year-old version, is “to know where every cell in your brain is, and how it can talk to every other cell.”
To this astonishing explanation you may reply like Daniel Dodson, 5‑year-old, with a stunned “Oh.” And then you may think of Philip K. Dick, or Black Mirror’s “San Junipero” episode. Especially after hearing from “Connectome Entrepreneur” Russell Hanson, founder and CEO of a company called Brain Backups, or after listening to Sebastian Seung—“leader in the field of connectomics”—give his TED talk, “I am my connectome.” Want another short, but grown-up-focused, explanation of the totally science-fiction but also completely real Connectome? See Kasthuri’s 2‑minute animated video above from Boston University.
Growing up in America, I heard nearly every behavior, no matter how unpleasant, justified with the same phrase: “It’s a free country.” In her recent book Notes on a Foreign Country, the Istanbul-based American reporter Suzy Hansen remembers singing “God Bless the USA” on the school bus during the first Iraq war: “And I’m proud to be an American / Where at least I know I’m free.” That “at least,” she adds, is funny: “We were free – at the very least we were that. Everyone else was a chump, because they didn’t even have that obvious thing. Whatever it meant, it was the thing that we had, and no one else did. It was our God-given gift, our superpower.”
But how many of us can explain what freedom is? These videos from BBC Radio 4 and the Open University’s animated History of Ideas series approach that question from four different angles. “Freedom is good, but security is better,” says narrator Harry Shearer, summing up the view of seventeenth-century philosopher Thomas Hobbes, who imagined life without government, laws, or society as “solitary, poor, nasty, brutish, and short.” The solution, he proposed, came in the form of a social contract “to put a strong leader, a sovereign or perhaps a government, over them to keep the peace” — an escape from “the war of all against all.”
But that escape comes hand in hand with the unpalatable prospect of living under “a frighteningly powerful state.” The nineteenth-century philosopher John Stuart Mill, who wrote a great deal about the state’s proper limitations, based his concept of freedom on something called the “harm principle,” which holds that “the state, my neighbors, and everyone else should let me get on with my life, as long as I don’t harm anyone in the process.” As “the seedbed of genius” and “the basis of enduring happiness for ordinary people,” this individual freedom needs protection, especially when it comes to speech: “Merely causing offense, he thinks, is no grounds for intervention, because, in his view, that is not a harm.”
That proposition is debated more heatedly now, in the 21st century, than Mill could probably have imagined. But then as now, and as in any time of human history, we live in more or less the same world, “a world festering with moral evil, a world of wars, torture, rape, murder, and other acts of meaningless violence,” not to mention “natural evil” like disease, famine, floods, and earthquakes. This gives rise to perhaps the oldest problem in the philosophical book, the problem of evil: “How could a good god allow anyone to do such horrific things?” Some have taken the fact that the wars, murders, floods, and earthquakes continue as evidence that no such god exists.
But had that god created “human beings that always did the right thing, never harmed anyone else, never went astray,” we’d all have ended up “automata, preprogrammed robots.” Better, in this view, “to have free will with the genuine risk that some people will end up evil than to live in a world without choice.” Even so, the mere mention of free will, a concept no more easily defined than that of freedom itself, opens up a whole other can of worms, especially in light of research like neuroscientist Benjamin Libet’s.
Libet, who “wired up subjects to an EEG machine, measuring brain activity via electrodes on our scalps,” found that brain activity initiating a movement actually happened before the subjects thought they’d decided to make that movement. Does that disprove free will? Does evil disprove the existence of a good god? Does offense cause the same kind of harm as physical violence? Should we give up more security for freedom, or more freedom for security? These questions remain unanswered, and quite possibly unanswerable, but that doesn’t make considering the very nature of freedom any less necessary as human societies — those in “free countries” and otherwise — find their way forward.
Of course, he produced dozens of novels, plays, and short stories before taking his leave. Perhaps his caffeine habit had a little something to do with that?
Pharmacist Hanan Qasim’s TED-Ed primer on how caffeine keeps us awake top-loads the positive effects of the world’s most commonly used psychoactive substance. Global consumption is equivalent to the weight of 14 Eiffel Towers, measured in drops of coffee, soda, chocolate, energy drinks, decaf…and that’s just humans. Insects get theirs from nectar, though with them, a little goes a very long, potentially deadly way.
Caffeine’s structural resemblance to the neurotransmitter adenosine is what gives it that special oomph. Adenosine causes sleepiness by plugging into neural receptors in the brain, causing them to fire more sluggishly. Caffeine takes advantage of their similar molecular structures to slip into these receptors, effectively stealing adenosine’s parking space.
With a bioavailability of 99%, this interloper arrives ready to party.
On the plus side, caffeine is both a mental and physical pick-me-up.
In appropriate doses, it can keep your mind from wandering during a late-night study session.
It lifts the body’s metabolic rate and boosts performance during exercise—an effect that’s easily counteracted by getting the bulk of your caffeine from chocolate or sweetened soda, or by dumping another Eiffel Tower’s worth of sugar into your coffee.
There’s even some evidence that moderate consumption may reduce the likelihood of such diseases as Parkinson’s, Alzheimer’s, and cancer.
What to do when that caffeine effect starts wearing off?
Gulp down more!
As with many drugs, prolonged usage diminishes the sought-after effects, causing its devotees (or addicts, if you like) to seek out higher doses, negative side effects be damned. Nervous jitters, incontinence, birth defects, raised heart rate and blood pressure… it’s a compelling case for sticking with water.
Animator Draško Ivezić (a 3‑latte-a-day man, according to his studio’s website) does a hilarious job of personifying both caffeine and the humans in its thrall, particularly an egg-shaped new father.
Go to TED-Ed to learn more, or test your grasp of caffeine with a quiz.
If you’ve been accused of living in “a world of your own,” get ready for some validation. As cognitive scientist Anil Seth argues in “Your Brain Hallucinates Your Conscious Reality,” the TED Talk above, everyone lives in a world of their own — at least if by “everyone” you mean “every brain,” by “world” you mean “entire reality,” and by “of their own” you mean “that it has created for itself.” With all the signals it receives from our senses and all the prior experiences it has organized into expectations, each of our brains constructs a coherent image of reality — a “multisensory, panoramic 3D, fully immersive inner movie” — for us to perceive.
“Perception has to be a process of ‘informed guesswork,’ ” say the TED Blog’s accompanying notes, “in which sensory signals are combined with prior expectations about the way the world is, to form the brain’s best guess of the causes of these signals.”
Seth uses optical illusions and classic experiments to underscore the point that “we don’t just passively perceive the world; we actively generate it. The world we experience comes as much from the inside-out as the outside-in,” in a process hardly different from that which we casually call hallucination. Indeed, in a way, we’re always hallucinating. “It’s just that when we agree about our hallucinations, that’s what we call ‘reality.’” And as for what, exactly, constitutes the “we,” our brains do a good deal of work to construct that too.
Seventeen minutes only allows Seth to go so far down the rabbit hole of the neuroscience of consciousness, but he’ll galvanize the curiosity of anyone with even a mild interest in this mind-bending subject. He leaves us with a few implications of his and others’ research to consider: first, “just as we can misperceive the world, we can misperceive ourselves”; second, “what it means to be me cannot be reduced to — or uploaded to — a software program running on an advanced robot, however sophisticated”; third, “our individual inner universe is just one way of being conscious, and even human consciousness generally is a tiny region in a vast space of possible consciousnesses.” As we’ve learned, in a sense, from every TED Talk, no matter how busy a brain may be constructing both reality and the self, it can always come up with a few big takeaways for the audience.
Everyone used to read Samuel Johnson. Now it seems hardly anyone does. That’s a shame. Johnson understood the human mind, its sadly amusing frailties and its double-blind alleys. He understood the nature of that mysterious act we casually refer to as “creativity.” It is not the kind of thing one lucks into or masters after a seminar or lecture series. It requires discipline and a mind free of distraction. “My dear friend,” said Johnson in 1783, according to his biographer James Boswell, “clear your mind of cant.”
There’s no missing apostrophe in his advice. Inspiring as it may sound, Johnson did not mean to say “you can do it!” He meant “cant,” an old word for cheap deception, bias, hypocrisy, insincere expression. “It is a mode of talking in Society,” he conceded, “but don’t think foolishly.” Johnson’s injunction resonated through a couple centuries, became garbled into a banal affirmation, and was lost in a graveyard of image macros. Let us endeavor to retrieve it, and ruminate on its wisdom.
We may even do so with our favorite modern brief in hand, the scientific study. There are many we could turn to. For example, notes Derek Beres, in a 2014 book neuroscientist Daniel Levitin brought his research to bear in arguing that “information overload keeps us mired in noise.… This saps us of not only willpower (of which we have a limited store) but creativity as well.” “We sure think we’re accomplishing a lot,” Levitin told Susan Page on The Diane Rehm Show in 2015, “but that’s an illusion… as a neuroscientist, I can tell you one thing the brain is very good at is self-delusion.”
Johnson’s age had its own version of information overload, as did that of another curmudgeonly voice from the past, T.S. Eliot, who wondered, “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” The question leaves Eliot’s readers asking whether what we take for knowledge or information really deserves those names. Maybe they’re just as often forms of needless busyness, distraction, and overthinking. Stanford researcher Emma Seppälä suggests as much in her work on “the science of happiness.” At Quartz, she writes,
We need to find ways to give our brains a break.… At work, we’re intensely analyzing problems, organizing data, writing—all activities that require focus. During downtime, we immerse ourselves in our phones while standing in line at the store or lose ourselves in Netflix after hours.
Seppälä exhorts us to relax and let go of the constant need for stimulation, to take long walks without the phone, get out of our comfort zones, make time for fun and games, and generally build in time for leisure. How does this work? Let’s look at some additional research. Bar-Ilan University’s Moshe Bar and Shira Baror undertook a study to measure the effects of distraction, or what they call “mental load,” the “stray thoughts” and “obsessive ruminations” that clutter the mind with information and loose ends. Our “capacity for original and creative thinking,” Bar writes at The New York Times, “is markedly stymied” by a busy mind. “The cluttered mind,” writes Jessica Stillman, “is a creativity killer.”
In a paper published in Psychological Science, Bar and Baror describe how “conditions of high load” foster unoriginal thinking. Participants in their experiment were asked to remember strings of arbitrary numbers, then to play word association games. “Participants with seven digits to recall resorted to the most statistically common responses (e.g., white/black),” writes Bar, “whereas participants with two digits gave less typical, more varied pairings (e.g. white/cloud).” Our brains have limited resources. When constrained and overwhelmed with thoughts, they pursue well-trod paths of least resistance, trying to efficiently bring order to chaos.
“Imagination,” on the other hand, wrote Dr. Johnson elsewhere, “a licentious and vagrant faculty, unsusceptible of limitations and impatient of restraint, has always endeavored to baffle the logician, to perplex the confines of distinction, and burst the enclosures of regularity.” Bar describes the contrast between the imaginative mind and the information processing mind as “a tension in our brains between exploration and exploitation.” Gorging on information makes our brains “exploit” what we already know, or think we know, “leaning on our expectation, trusting the comfort of a predictable environment.” When our minds are “unloaded,” on the other hand, which can occur during a hike or a long, relaxing shower, we can shed fixed patterns of thinking, and explore creative insights that might otherwise get buried or discarded.
As Drake Baer succinctly puts it at New York Magazine’s Science of Us, “When you have nothing to think about, you can do your best thinking.” Getting to that state in a climate of perpetual, unsleeping distraction, opinion, and alarm requires another kind of discipline: the discipline to unplug, wander off, and clear your mind.
Composer and percussionist Dame Evelyn Glennie, above, feels music profoundly. For her, there is no question that listening should be a whole body experience:
Hearing is basically a specialized form of touch. Sound is simply vibrating air which the ear picks up and converts to electrical signals, which are then interpreted by the brain. The sense of hearing is not the only sense that can do this, touch can do this too. If you are standing by the road and a large truck goes by, do you hear or feel the vibration? The answer is both. With very low frequency vibration the ear starts becoming inefficient and the rest of the body’s sense of touch starts to take over. For some reason we tend to make a distinction between hearing a sound and feeling a vibration, in reality they are the same thing. It is interesting to note that in the Italian language this distinction does not exist. The verb ‘sentire’ means to hear and the same verb in the reflexive form ‘sentirsi’ means to feel.
It’s a philosophy born of necessity—her hearing began to deteriorate when she was 8, and by the age of 12, she was profoundly deaf. Music lessons at that time included touching the wall of the practice room to feel the vibrations as her teacher played.
While she acknowledges that her disability is a publicity hook, it’s not her preferred lede, a conundrum she explores in her “Hearing Essay.” Rather than be celebrated as a deaf musician, she’d like to be known as the musician who is teaching the world to listen.
In her TED Talk, How To Truly Listen, she differentiates between the ability to translate notations on a musical score and the subtler, more soulful skill of interpretation. This involves connecting to the instrument with every part of her physical being. Others may listen with ears alone. Dame Evelyn encourages everyone to listen with fingers, arms, stomach, heart, cheekbones… a phenomenon many teenagers experience organically, no matter what their earbuds are plugging.
And while the vibrations may be subtler, her philosophy could cause us to listen more attentively to both our loved ones and our adversaries, by staying attuned to visual and emotional pitches, as well as slight variations in volume and tone.
Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. She’ll be appearing onstage in New York City this June as one of the clowns in Paul David Young’s Faust 3. Follow her @AyunHalliday.
Photo courtesy of the Laboratory of Neuro Imaging at UCLA.
Sometimes—as in the case of neuroscience—scientists and researchers seem to be saying several contradictory things at once. Yes, opposing claims can both be true, given different contexts and levels of description. But which is it, neuroscientists? Do we have “neuroplasticity”—the ability to change our brains, and therefore our behavior? Or are we “hard-wired” to be a certain way by innate structures?
The debate long predates the field of neuroscience. It figured prominently in the work, for example, of John Locke and other early modern theorists of cognition—which is why Locke is best known as the theorist of tabula rasa. In “Some Thoughts Concerning Education,” Locke mostly denies that we are able to change much at all in adulthood.
Personality, he reasoned, is determined not by biology, but in the “cradle” by “little, almost insensible impressions on our tender infancies.” Such imprints “have very important and lasting consequences.” Sorry, parents. Not only did your kid get wait-listed for that elite preschool, but their future will also be determined by millions of sights and sounds that happened around them before they could walk.
It’s an extreme, and unscientific, contention, fascinating as it may be from a cultural standpoint. Now we have psychedelic-looking brain scans popping up in our news feeds all the time, promising to reveal the true origins of consciousness and personality. But the conclusions drawn from such research are tentative and often highly contested.
So what does science say about the eternally mysterious act of artistic creation? The abilities of artists have long seemed to us godlike, drawn from supernatural sources, or channeled from other dimensions. Many neuroscientists, you may not be surprised to hear, believe that such abilities reside in the brain. Moreover, some think that artists’ brains are superior to those of mediocre ability.
Or at least that artists’ brains have more gray and white matter than “right-brained” thinkers in the areas of “visual perception, spatial navigation and fine motor skills.” So writes Katherine Brooks in a Huffington Post summary of “Drawing on the right side of the brain: A voxel-based morphometry analysis of observational drawing.” The 2014 study, published in NeuroImage, involved a very small sampling of graduate students, 21 of whom were artists, 23 of whom were not. All 44 students were asked to complete drawing tasks, which were then scored and compared to images of their brains taken by a method called “voxel-based morphometry.”
“The people who are better at drawing really seem to have more developed structures in regions of the brain that control for fine motor performance and what we call procedural memory,” the study’s lead author, Rebecca Chamberlain of Belgium’s KU Leuven University, told the BBC. (Hear her segment on BBC Radio 4’s Inside Science here.) Does this mean, as Artnet News claims in their quick take, that “artists’ brains are more fully developed?”
It’s a juicy headline, but the findings of this limited study, while “intriguing,” are “far from conclusive.” Nonetheless, it marks an important first step. “No studies” thus far, Chamberlain says, “have assessed the structural differences associated with representational skills in visual arts.” Would a dozen such studies resolve questions about causality–nature or nurture? As usual, the truth probably lies somewhere in-between.
At Smithsonian, Randy Rieland quotes several critics of the neuroscience of art, which has previously focused on what happens in the brain when we look at a Van Gogh or read Jane Austen. The problem with such studies, writes Philip Ball at Nature, is that they can lead to “creating criteria of right or wrong, either in the art itself or in individual reactions to it.” But such criteria may already be predetermined by culturally-conditioned responses to art.
The science is fascinating and may lead to numerous discoveries. It does not, as the Creators Project writes hyperbolically, suggest that “artists actually are different creatures from everyone else on the planet.” As University of California philosophy professor Alva Noë states succinctly, one problem with making sweeping generalizations about brains that view or create art is that “there can be nothing like a settled, once-and-for-all account of what art is.”
The emerging fields of “neuroaesthetics” and “neurohumanities” may muddy the distinction between the quantitative and the qualitative, and may not really answer questions about where art comes from and what it does to us. But then again, given enough time, they just might.