Each of us has a normal state of mind, as well as our own way of reaching a different state of mind. As the School of Life video above reminds us, such habits go back quite deep into recorded history, to the eras when, then as now, “Hindu sages, Christian monks and Buddhist ascetics” spoke of “reaching moments of ‘higher consciousness’ – through meditation or chanting, fasting or pilgrimages.” In recent years, the practice of meditation has spread even, and perhaps especially, among those of us who don’t subscribe to Buddhism, or indeed to any religion at all. Periodic fasting has come to be seen as a necessity in certain circles of wealthy first-worlders, as has “dopamine fasting” among those who feel their minds compromised by the distractions of high technology and social media. (And one needs only glance at that social media to see how seriously some of us are taking our pilgrimages.)
Still, on top of our mountain, deep into our sitting-and-breathing sessions, or even after having consumed our mind-altering substance of choice, we do feel, if only for a moment, that something has changed within us. We understand things we don’t even consider understanding in our normal state of mind, “where what we are principally concerned with is ourselves, our survival and our own success, narrowly defined.”
When we occupy this “lower consciousness,” we “strike back when we’re hit, blame others, quell any stray questions that lack immediate relevance, fail to free-associate and stick closely to a flattering image of who we are and where we are heading.” But when we enter a state of “higher consciousness,” however we define it, “the mind moves beyond its particular self-interests and cravings. We start to think of other people in a more imaginative way.”
When we rise from lower to higher consciousness, we find it much harder to think of our fellow human beings as enemies. “Rather than criticize and attack, we are free to imagine that their behavior is driven by pressures derived from their own more primitive minds, which they are generally in no position to tell us about.” The more time we spend in our higher consciousness, the more we “develop the ability to explain others’ actions by their distress, rather than simply in terms of how it affects us. We perceive that the appropriate response to humanity is not fear, cynicism or aggression, but always — when we can manage it — love.” When our consciousness reaches the proper altitude, “the world reveals itself as quite different: a place of suffering and misguided effort, full of people striving to be heard and lashing out against others, but also a place of tenderness and longing, beauty and touching vulnerability. The fitting response is universal sympathy and kindness.”
This may all come across as a bit new-age, sounding “maddeningly vague, wishy washy, touchy-feely – and, for want of a better word, annoying.” But the concept of higher consciousness is variously interpreted not just across cultural and religious traditions but in scientific research as well, where we find a sharp distinction drawn between the neocortex, “the seat of imagination, empathy and impartial judgement,” and the “reptilian mind” below. This suggests that we’d benefit from understanding states of higher consciousness as fully as we can, as well as trying to “make the most of them when they arise, and harvest their insights for the time when we require them most” — that is to say, the rest of our ordinary lives, especially their most stressful, trying moments. The instinctive, unimaginative defensiveness of the lower consciousness does have strengths of its own, but we can’t take advantage of them unless we learn to put it in its place.
Related Content:
Meditation for Beginners: Buddhist Monks & Teachers Explain the Basics
How Meditation Can Change Your Brain: The Neuroscience of Buddhist Practice
The Neuronal Basis of Consciousness Course: A Free Online Course from Caltech
Medieval Monks Complained About Constant Distractions: Learn How They Worked to Overcome Them
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
Read More...
Jazz improvisation has become a hot topic in neuroscience lately, and little wonder. “Musical improvisation is one of the most complex forms of creative behavior,” write the authors of a study published in April in Brain Connectivity. Research on the brains of improvisers offers “a realistic task paradigm for the investigation of real-time creativity”—an even hotter topic in neuroscience.
Researchers study jazz players for the same reason they take MRI scans of the brains of freestyle rappers—both involve creating spontaneous works “where revision is not possible,” and where only a few formal rules govern the activity, whether rhyme and meter or chord structure and harmony. Those who master the basics can leap into endlessly complex feats of improvisatory bravado at any moment.
It’s a power most of us only dream of possessing—though it’s also the case that many a researcher of jazz improvisation happens to be a musician, including study author Martin Norgaard, a trained jazz violinist who “began studying the effects of musical improvisation… while earning his Ph.D. from the University of Texas at Austin,” notes Jennifer Rainey Marquez at Georgia State University Research Magazine.
Norgaard interviewed both students and professional musicians, and he analyzed the solos of Charlie Parker to find patterns related to specific kinds of brain activity. In this recent study, Norgaard, now at Georgia State University, worked with Mukesh Dhamala, associate professor of physics and astronomy, using an fMRI to measure the brain activity of “advanced jazz musicians” who sang both standards and improvisations while being scanned.
The researchers’ findings are consistent with similar studies, like those of Johns Hopkins surgeon Charles Limb, who also considers jazz a key to understanding creativity. While improvising, musicians show decreased activity in the prefrontal cortex, the area of the brain responsible for planning and overthinking, which gets in the way of what psychologists call a state of “flow.” Improvising might engage “a smaller, more focused brain network,” says Norgaard, “while other parts of the brain go quiet.”
Training and practice in improvisation may also have longer-term effects. A study contrasting the brain activity of jazz and classical players found that the former were much quicker and more adaptable in their thinking, qualities the researchers attributed to changes in the brain wrought by years of improvising. Norgaard and his team are much more circumspect in their conclusions, but they do suggest a causal link.
In a study of 155 8th graders enrolled in a jazz for kids program, Norgaard found that the half who were given training in improvisation showed “significant improvement in cognitive flexibility.” Research like this not only validates the intuitions of jazz musicians themselves; it also helps define specific questions about the cognitive benefits of playing music, which are generally evident in study after study.
“For nearly three decades,” Norgaard says, “scientists have explored the idea that learning to play an instrument is linked to academic achievement.” But there are “many types of music learning.” It’s certainly not as simple as studying Bach to work on accuracy or Coltrane for flexibility, but different kinds of music create different structures in the brain. We might next wonder about the mathematical properties of these structures, or how they interact with modern theories of physics. Rest assured, there are jazz-playing scientists out there working on the question.
Related Content:
This is Your Brain on Jazz Improvisation: The Neuroscience of Creativity
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness.
Read More...
I suspect many fewer people these days are assigned John Hersey’s Hiroshima, a book most everyone in my cohort read at some stage in their education. And certainly, far fewer people are subjected to the kind of alarmist (if understandably so) propaganda films that dramatized the grisly details of fallout and nuclear winter. Even the recent HBO miniseries Chernobyl, with its grotesque depiction of radiation poisoning, prompted a wave of tourism to the site, drawing Instagram-generation gawkers born too late to have heard the terrifying news firsthand.
Yet, the threat of a nuclear disaster and its attendant horrors has hardly gone away. The UN General Assembly issued a statement this year warning of the highest potential for a devastating incident since the Cuban Missile Crisis. We are entering a new era of nuclear proliferation, with many countries who have no love for each other joining the race. “As the risk of nuclear confrontation grows,” writes Simon Tisdall at The Guardian, “the cold war system of treaties that helped prevent Armageddon is being dismantled, largely at Trump’s behest.” Calls for a No-First-Use policy in the U.S. have grown more urgent.
Living memory of the period in which two global superpowers almost destroyed each other, and took everyone else with them, has not deterred the architects of today’s geopolitics. But remembering that history should nonetheless be required of us all. In the Business Insider video above, you can get a sense of the scope of nuclear testing that escalated throughout the Cold War, in an animated timeline showing every single explosion in Japan and the various testing sites in Russia, New Mexico, Australia, and the Pacific Islands from 1945 into the 1990s, when they finally drop off. As the decades progress, more countries amass arsenals and conduct their own testing.
Despite the expert warnings, something certainly has changed since the fall of the Soviet Union. Over a forty-year period, the U.S. and the U.S.S.R. each trained to annihilate the other, and nuclear war came to mean an extinction-level event. That may not be the case in a fragmented, multipolar world with many smaller countries vying for regional supremacy. But a nuclear event, intentional or accidental, could still be catastrophic on the order of thousands or millions of deaths. The animation shows us how we got here, through decades of normalizing the stockpiling and testing of the ultimate weapons of mass destruction.
Related Content:
53 Years of Nuclear Testing in 14 Minutes: A Time Lapse Film by Japanese Artist Isao Hashimoto
Protect and Survive: 1970s British Instructional Films on How to Live Through a Nuclear Attack
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Read More...
The origin of dramatic storytelling in cinema is often traced to a single movie, D.W. Griffith’s The Birth of a Nation. It also happens to be a film that celebrates the racist violence of the Ku Klux Klan, based on a novel, The Clansman, that does the same. The film’s technical achievements and its racism became integral to Hollywood thereafter. Only relatively recently have black filmmakers begun entering the mainstream with very different kinds of stories, winning major awards and making record profits.
This would have been unthinkable in the 1920s, a period of intense racial violence when black WWI veterans came home to find their country armed against them. “When the soldiers returned,” writes Megan Pugh for the San Francisco Silent Film Festival, “Jim Crow still reigned supreme and lynch mobs continued to terrorize the South.” Hollywood placated white audiences by only ever featuring black characters in subservient, stereotypical roles, or casting white actors in blackface.
Against these oppressive representations, black filmmakers like Oscar Micheaux and George and Noble Johnson “used cinema to confront American racism,” responding to Griffith with films like Micheaux’s Within Our Gates and the Johnsons’ uplifting The Realization of a Negro’s Ambition. There were also several white filmmakers who made so-called “race movies,” but most of their films avoided any explicit political commentary.
These include the films of Richard Norman, who between 1920 and 1928 made seven feature-length silent movies with all-black casts, “geared toward black audiences.” He made romances, comedies, and adventure films, casting black actors in serious, “dignified” roles. “Instead of tackling discrimination head-on in his films,” writes Pugh, “Norman created a kind of world where whites—and consequently racism—didn’t even exist.”
Though we may see this as a cynical commercial decision, and its own kind of appeasement to segregation, the approach also enabled Norman to tell powerful, alternate-universe stories that a more realist bent would not allow. 1926’s The Flying Ace, for example, Norman’s only surviving film, is about a black fighter pilot returning home to “resume his civilian career as a railroad detective—without removing his Army Air Service uniform, a constant reminder of his patriotism and valor.”
Norman tells the moving story of Captain Billy Stokes (see Part 1 at the top), “a model for the ideals of racial uplift,” despite the fact that “African-Americans were not allowed to serve as pilots in the United States Armed Forces until 1940.” One might say that rewriting recent history as wish-fulfillment has been a function of cinema since… well, at least since The Birth of a Nation, if not further back to The Great Train Robbery.
Norman takes this impulse and dramatizes the life of an impossibly heroic black WWI serviceman, at a time when such men faced widespread abuse and discrimination in reality. While he insisted that he only made genre films, and avoided what he called the “propaganda nature” of Micheaux’s films, it’s hard not to read The Flying Ace as a political statement of its own, and not only for its oblique topical commentary.
The film centers on positive, complex black characters at a time when studios made quite a bit of money doing exactly the opposite. Norman gave black audiences heroes of their own to root for. In The Flying Ace, Captain Stokes not only returns from flying dangerous missions for his country, but he then goes on to capture a band of thieves who stole his employer’s payroll. The character “never would have made it onscreen in a Hollywood movie of the time.”
Norman established his studio in Jacksonville, Florida, at the time considered “the Winter Film Capital of the World.” Many major studios decamped there from New York until WWI, when they moved west to L.A. Norman, who grew up in Middleburg, Florida, made a fortune inventing soft drinks before turning to movies. He returned to his home state to find little competition left in Jacksonville in the 1920s.
His studio would become “one of the three leading producers of race films in America,” alongside the Micheaux Film Corporation and the Johnsons’ Lincoln Motion Picture Company. In 2016, Norman Studios was designated a National Historic Landmark. The filmmaker’s son, Richard Norman Jr., became a pilot, inspired by The Flying Ace, and has plans to turn the building into a museum celebrating Jacksonville’s, and Norman’s, cinema legacy.
Related Content:
Watch the Pioneering Films of Oscar Micheaux, America’s First Great African-American Filmmaker
101 Free Silent Films: The Great Classics
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Read More...
With his first three features Reservoir Dogs, Pulp Fiction, and Jackie Brown, Quentin Tarantino claimed 1990s Los Angeles as his own. Then he struck boldly out into not just new geographical and cultural territories, but other time periods. With his first full-on period piece, 2009’s Inglourious Basterds, he showed audiences just how he intended to use history: twisting it for his own cinematic purposes, of course, but only making his departures after steeping himself in accounts of the time in which he envisioned his story taking place. This naturally involves plenty of reading, and Tarantino recently provided HistoryNet with a few titles that helped him properly situate Inglourious Basterds in the Europe of the Second World War.
Tarantino calls Ian Ousby’s Occupation: The Ordeal of France 1940–1944 “a very good overview that answered all of my questions about life in Nazi-occupied France.” Ulysses Lee’s The Employment of Negro Troops is “the most profound thing I’ve ever read on both the war and racist America of the 1940s, commissioned by the U.S. Army to examine the effectiveness of their employment of black soldiers.” And for Tarantino, who doesn’t just make films but lives and breathes them, understanding Nazi Germany means understanding its cinema, beginning with Eric Rentschler’s Ministry of Illusion: Nazi Cinema and Its Afterlife, “a wonderful critical reexamination of German cinema under Joseph Goebbels” that “goes far beyond the demonizing approach employed by most writers on this subject,” including even excerpts from Goebbels’ diaries.
Rentschler also “dares to make a fair appraisal of Nazi filmmaker Veit Harlan,” who made antisemitic blockbusters as one of Goebbels’ leading propaganda directors. But the work of no Nazi filmmaker had as much of an impact as that of Leni Riefenstahl, two books about whom Tarantino puts on his World War II reading list: Glenn B. Infield’s Leni Riefenstahl: The Fallen Film Goddess, the first he ever read about her, as well as Riefenstahl’s eponymous memoir, which he calls “mesmerizing. Though you can’t believe half of it. That still leaves half to ponder. Her descriptions of normal friendly conversations with Hitler are amazing and ring of truth” — and that praise comes from a filmmaker who made his own name with good dialogue.
In a recent DGA Quarterly conversation with Martin Scorsese, Tarantino revealed that he’s also at work on a book of his own about that era: “I’ve got this character who had been in World War II and he saw a lot of bloodshed there. Now he’s back home, and it’s like the ’50s, and he doesn’t respond to movies anymore. He finds them juvenile after everything that he’s been through. As far as he’s concerned, Hollywood movies are movies. And so then, all of a sudden, he starts hearing about these foreign movies by Kurosawa and Fellini,” thinking “maybe they might have something more than this phony Hollywood stuff.” He soon finds himself drawn inexorably in: “Some of them he likes and some of them he doesn’t like and some of them he doesn’t understand, but he knows he’s seeing something.” This is hardly the kind of premise that leads straight to the kind of violent catharsis in which Tarantino specializes, but then, he’s pulled off more unlikely artistic feats in his time.
Related Content:
Quentin Tarantino Explains How to Write & Direct Movies
How Quentin Tarantino Steals from Other Movies: A Video Essay
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
Read More...
Who wants to live in the present? It’s such a limiting period, compared to the past.
Were Ebert alive today, would he still express himself thusly in a recorded interview? His remarks are specific to his cinematic passion, but still. As a smart Midwesterner, he would have realized that the corn has ears and the potatoes have eyes. Remarks can be taken out of context. (Witness the above.)
Recent history has shown that not everyone is keen to roll back the clock—women, people of color, and gender non-conforming individuals have been reclaiming their narratives in record numbers, airing secrets, exposing injustice, and articulating offenses that can no longer stand.
If powerful, older, white heterosexual men in the entertainment business are exercising verbal caution these days when speaking as a matter of public record, there’s some goodly cause for that.
It also makes the archival celebrity interviews excerpted for Quoted Studios’ animated series, Blank on Blank, feel very vibrant and uncensored, though be forewarned that your blood may boil a bit just reviewing the celebrity lineup—Michael Jackson, Woody Allen, Clint Eastwood holding forth on the Pussy Generation 10 years before the Pussyhat Project legitimized common usage of that charged word….
(In full disclosure, Blank on Blank is an oft-reported favorite here at Open Culture.)
Here’s rapper Tupac Shakur, a year and a half before he was killed in a drive-by shooting, casting himself as a tragic Shakespearean hero.
His musings on how differently the public would have viewed him had he been born white seem even more relevant today. Readers who are only passingly acquainted with his artistic output and legend may be surprised to hear him tracing his allegiance to “thug life” to the positive role he saw the Black Panthers playing in his single mother’s life when he was a child.
On the other hand, Shakur’s lavish and freely expressed self-pity at the way the press reported on his rape charge (for which he eventually served 9 months) does not sit at all well in 2019, nor did it in 1994.
Like the majority of Blank on Blank entries, the recording was not the interview’s final form, but rather a journalistic reference. Animator Patrick Smith may add a layer of visual editorial, but in terms of narration, every subject is telling their own undiluted truth.
It is interesting to keep in mind that this was one of the first interviews the Blank on Blank team tackled, in 2013.
Six years later, it’s hard to imagine they would risk choosing that portion of the interview to animate. Had Shakur lived, would he be cancelled?
Guess who was the star of the very first Blank on Blank to air on PBS back in 2013?
Broadcaster and television host Larry King. While King has steadfastly rebutted accusations of groping, we suspect that if the Blank on Blank team was just now getting around to this subject, they’d focus on a different part of his 2001 Esquire profile than the part where he regales interviewer Cal Fussman with tales of pre-cellphone “seduction.”
It’s only been six years since the series’ debut, but it’s a different world for sure.
If you’re among the easily triggered, living legend Meryl Streep’s thoughts on beauty, harvested in 2014 from a 2008 conversation with Entertainment Weekly’s Christine Spines, won’t offer total respite, but any indignation you feel will be in support of, not because of this celebrity subject.
It’s actually pretty rousing to hear her merrily exposing Hollywood players’ piggishness, several years before the Harvey Weinstein scandal broke.
For even more evidence of “a different world,” check out interviewer Howard Smith’s remark to Janis Joplin in her final interview-cum-Blank-on-Blank episode, four days before her 1970 death:
A lot of women have been saying that the whole field of rock music is nothing more than a big male chauvinist rip off and when I say, “Yeah, what about Janis Joplin? She made it,” they say, “Oh…her.” It seems to bother a lot of women’s lib people that you’re kind of so up front sexually.
Joplin, stung, unleashes a string of invectives against feminists and women, in general. One has to wonder if this reaction was Smith’s goal all along. Or maybe I’m just having flashbacks to middle school, when the popular girls would always send a delegate disguised as a concerned friend to tell you why you were being shunned, preferably in a highly public gladiatorial arena such as the lunchroom.
I presume that sort of stuff occurs primarily over social media these days.
Good on the Blank on Blank staff for picking up on the tenor of this interview and titling it “Janis Joplin on Rejection.”
You can binge watch a playlist of 82 Blank on Blank episodes, featuring many thoughts few express so openly anymore, here or right below.
When you’re done with that, you’ll find even more Blank on Blank entries on the creators’ website.
Related Content:
Alfred Hitchcock Meditates on Suspense & Dark Humor in a New Animated Video
Joni Mitchell Talks About Life as a Reluctant Star in a New Animated Interview
Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. Join her in NYC on Monday, December 9 when her monthly book-based variety show, Necromancers of the Public Domain celebrates Dennison’s Christmas Book (1921). Follow her @AyunHalliday.
Read More...
Two years ago, a scandalous “art heist” at the Neues Museum in Berlin—involving illegally made 3D scans of the bust of Nefertiti—turned out to be a different kind of crime. The two Egyptian artists who released the scans claimed they had made the images with a hidden “hacked Kinect Sensor,” reports Annalee Newitz at Ars Technica. But digital artist and designer Cosmo Wenman discovered these were scans made by the Neues Museum itself, which had been stolen by the artists or perhaps a museum employee.
The initial controversy stemmed from the fact that the museum strictly controls images of the artwork, and had refused to release any of their Nefertiti scans to the public. The practice, Wenman pointed out, is consistent across dozens of institutions around the world. “There are many influential museums, universities, and private collections that have extremely high-quality 3D data of important works, but they are not sharing that data with the public.” He lists many prominent examples in a recent Reason article; the long list includes the Venus de Milo, Rodin’s Thinker, and works by Donatello, Bernini, and Michelangelo.
Whatever their reasons, the aggressively proprietary attitude adopted by the Neues seems strange considering the controversial provenance of the Nefertiti bust. Germany has long claimed that it acquired the bust legally in 1912. But at the time, the British controlled Egypt, and Egyptians themselves had little say over the fate of their national treasures. Furthermore, the chain of custody seems to include at least a few documented instances of fraud. Egypt has been demanding that the artifact be repatriated “ever since it first went on display.”
This critical historical context notwithstanding, the bust is already “one of the most copied works of ancient Egyptian art,” and one of the most famous. “Museums should not be repositories of secret knowledge,” Wenman argued in his blog post. Prestigious cultural institutions “are in the best position to produce and publish 3D data of their works and provide authoritative context and commentary.”
Wenman waged a “3-year-long freedom of information effort” to liberate the scans. His request was initially met with “the gift shop defense”—the museum claimed releasing the images would threaten sales of Nefertiti merchandise. When the appeal to commerce failed to dissuade Wenman, the museum let him examine the scans “in a controlled setting”; they were essentially treating the images, he writes, “like a state secret.” Finally, they relented, allowing Wenman to publish the scans, without any institutional support.
He has done so, and urged others to share his Reason article on social media to get word out about the files, now available to download and use under a CC BY-NC-SA license. He has also taken his own liberties with the scans, colorizing and adding the blue 3D mapping lines himself to the image at the top, for example, drawn from his own interactive 3D model, which you can view and download here. These are examples of his vision for high-quality 3D scans of artworks, which can and should “be adapted, multiplied, and remixed.”
“The best place to celebrate great art,” says Wenman, “is in a vibrant, lively, and anarchic popular culture. The world’s back catalog of art should be set free to run wild in our visual and tactile landscape.” Organizations like Scan the World have been releasing unofficial 3D scans to the public for the past couple years, but these cannot guarantee the accuracy of models rendered by the institutions themselves.
Whether the actual bust of Nefertiti should be returned to Egypt is a somewhat more complicated question, since the 3,000-year-old artifact may be too fragile to move and too culturally important to risk damaging in transit. But whether or not its virtual representations should be given to everyone who wants them seems more straightforward.
The images already belong to the public, in a sense, Wenman suggests. Withholding them for the sake of protecting sales seems like a violation of the spirit in which most cultural institutions were founded. Download the Nefertiti scans at Thingiverse, see Wenman’s own 3D models at Sketchfab, and read all of his correspondence with the museum throughout the freedom of information process here. Next, he writes, he’s lobbying for the release of official 3D Rodin scans. Watch this space.
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Read More...
Among typography enthusiasts, all non-contrarians love Helvetica. Some, like filmmaker Gary Hustwit, have even made a documentary about it, featuring designers such as New York subway map creator Massimo Vignelli. Created by Swiss graphic designer Max Miedinger with Haas Type Foundry president Eduard Hoffmann and first introduced in 1957, Helvetica still stands as a visual definition of not just modernism but modernity itself. That owes in part to its clean, unambiguous lines, and also to its use of space: as all the aforementioned typography enthusiasts will have noticed, Helvetica leaves little room between its letters, which imbues text written in the font with a certain solidity. No wonder it so often appears, more than half a century after its debut, on the signage of public institutions as well as on the promotion of products that live or die by the ostensible timelessness of their designs.
But as times change, so must even near-perfect fonts: hence Helvetica Now. “Four years ago, our German office [was] kicking around the idea of creating a new version of Helvetica,” Charles Nix, type director at Helvetica-rights-holder Monotype tells The Verge. “They had identified a short laundry list of things that would be better.” What shortcomings they found arose from the fact that the font had been designed for an analog age of optical printing, and “when we went digital, a lot of that nuance of optical sizing sort of washed away.” Ultimately, the project was less about updating Helvetica than restoring characters lost in its adaptation to digital, including “the straight-legged capital ‘R,’ single-story lowercase ‘a,’ lowercase ‘u’ without a trailing serif, a lowercase ‘t’ without a tailing stroke on the bottom right, a beardless ‘g,’ some rounded punctuation.”
The development of Helvetica Now also necessitated a close look at all the versions of Helvetica developed so far (the most notable major revision being Neue Helvetica, released in 1983) and an adaptation of their best characteristics for an age of screens. Few of those characteristics demanded more attention than the spacing — or, to use the typographical term, the kerning. But however astonishing a showcase it may be, Helvetica Now doesn’t drive home the importance of the art of kerning in as visceral a manner as another new typeface: Hellvetica, designed by New York creative directors Zack Roif and Matthew Woodward. Much painstaking labor has also gone into Hellvetica’s kerning, but not to make it as beautiful as possible: on the contrary, Roif and Woodward have taken Helvetica and kerned it for maximum ugliness.
The Verge’s Jon Porter describes Hellvetica as “a self-aware Comic Sans with kerning that’s somehow much much worse.” If that most hated Windows font hasn’t been enough to inflict psychological disturbance on the designers in your life, you can head to Hellvetica’s official site and “experience it in all its uneven, gappy glory.” Roif and Woodward have made Hellvetica free to use, something that certainly can’t be said of any genuine version of Helvetica. In fact, the sheer cost of licensing that most modern of all fonts has, in recent years, pushed even the formerly Helvetica-using likes of Apple, Google, and IBM to come up with their own typefaces instead — all of which, tellingly, resemble Helvetica. We can consider them all weapons in the life of a designer, which, as Vignelli put it, “is a life of fight. Fight against the ugliness.” Happy downloading…
Related Content:
The History of Typography Told in Five Animated Minutes
Designer Massimo Vignelli Revisits and Defends His Iconic 1972 New York City Subway Map
Van Gogh’s Ugliest Masterpiece: A Break Down of His Late, Great Painting, The Night Café (1888)
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
Read More...
I have come to the personal conclusion that while all artists are not chess players, all chess players are artists.
–Marcel Duchamp
“Over the roughly one and a half millennia of its existence, chess has been known as a tool of military strategy, a metaphor for human affairs, and a benchmark of genius,” points out the TED-Ed animated history of the game by Alex Gendler, above. The first records of chess date to the 7th century, but it may have originated even a century earlier, in India, where we find mention of the first game to have different moves for different pieces, and “a single king piece, whose fate determined the outcome.”
It was originally called “chaturanga,” a word that Yoga practitioners will recognize as the “four-limbed staff pose,” but which simply meant “four divisions” in this context. Once it spread to Persia, the game took on the name we still use: “chess,” derived from “shah,” the Persian word for king. It took root in the Arab world, and traveled the Silk Road to East and Southeast Asia, where it acquired different characteristics but used similar rules and strategies. The European form we play today became the standard, but it might have been a very different game had the Japanese version—which allowed players to put captured pieces into play—dominated.
Chess found ready acceptance everywhere it went because its underlying principles seemed to tap into common models of contest and conquest among political and military elites. Though written over a thousand years before “chaturanga” arrived in China—where the game was called xiangqi, or “elephant game”—Sun Tzu’s Art of War may as well have been discussing the critical importance of pawns in declaring, “When the officers are valiant and the troops ineffective the army is in distress.”
Chess also speaks to the hierarchies ancient civilizations sought to naturalize, and by 1000 AD, it had become a tool for teaching European noblemen the necessity of social classes performing their proper roles. This allegorical function gave to the pieces the roles we know today, with the piece called “the advisor” being replaced by the queen in the 15th century, “perhaps inspired by the recent surge of strong female leaders.”
Early Modern chess, freed from the confines of the court and played in coffeehouses, also became a favorite pastime for philosophers, writers, and artists. Treatises were written by the hundreds. Chess became a tool for summoning inspiration, and for performing theatrical, often public games for audiences—a trend that ebbed during the Cold War, when chessboards became proxy battlegrounds between world superpowers, and intense calculation ruled the day.
The arrival of IBM’s Deep Blue computer, which defeated reigning champion Garry Kasparov in 1997, signaled a new evolution for the game, a chess singularity, as it were, after which computers routinely defeated the best players. Does this mean, according to Marcel Duchamp’s observation, that chess-playing computers should be considered artists? Chess’s earliest adopters could never have conceived of such a question. But the game they passed down through the centuries may have anticipated all of the possible outcomes of human versus machine.
Related Content:
Garry Kasparov Now Teaching an Online Course on Chess
A Free 700-Page Chess Manual Explains 1,000 Chess Tactics in Plain English
Vladimir Nabokov’s Hand-Drawn Sketches of Mind-Bending Chess Problems
Chess Grandmaster Garry Kasparov Relives His Four Most Memorable Games
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Read More...
“Fame is a prison,” tweeted Lady Gaga, and many Twitter wars ensued. She was only echoing an old sentiment passed down through the entertainment ages, from Greta Garbo (“I detest crowds”) to Don Johnson. The emotional toll of celebrity is so well-known as to have become a standard, almost cliché, theme in storytelling, and no recent artist has exemplified the tortured, reluctant celebrity more prominently than Kurt Cobain.
Cobain may have wanted to be famous when Nirvana broke out of Washington State and signed with major label Geffen, but he did not want the kind of thing he got. At the end of 1993, when the band recorded their MTV Unplugged in New York special, he seemed positively suffocated by stardom. “We knew Cobain didn’t seem all that happy being a rock star,” recalls music journalist David Browne, who sat in the audience for that legendary performance, “and that Nirvana was essentially acquiescing to industry dictates by taping one of these shows.”
Cobain’s rare talent was to take his bitterness, despair, and rage and turn them back into deftly arranged melodic songs, stripped down in “one of the greatest live albums ever,” writes Andrew Wallace Chamings at The Atlantic. “An unforgettable document of raw tension and artistic genius. While intimacy was an intended part of the [Unplugged] concept… parts of the Nirvana set at Sony’s Hells Kitchen studio feel so personal it’s awkward.”
The performance reveals “a singer uncomfortable in his own skin, through addiction and depression” and the continued demands that he make nice for the crowds. The clipped interactions between Cobain and his bandmates, especially Dave Grohl, have become as much a part of the Nirvana Unplugged mythology as that frumpy green thrift-store cardigan (which recently sold at auction for $137,500).
Kurt’s disheveled crankiness may have been part of Nirvana’s act, but he also never seemed more authentically himself than in these performances, and it’s riveting, if painful, to see and hear. Five months later, he was dead, and Unplugged would become Nirvana’s first posthumous release in November 1994. In the quarter century since, “accounts have emerged,” writes Browne, that show exactly “what was taking place in the days leading up to that taping.”
“The rehearsals were tense,” Browne continues, “MTV brass weren’t thrilled when the promised guests turned out to be the Meat Puppets and not, say, anyone from Pearl Jam. Cobain was going through withdrawal that morning.” And yet every song came together in one take—making it one of only three Unplugged specials in which that had ever happened. “The entire performance made you feel as if Cobain would perhaps survive…. The quiet seemed to be his salvation, until it wasn’t.”
Marking the album’s 25th anniversary this month, Geffen has rereleased Unplugged in New York both digitally and as a 2 LP set, announcing the event with more behind-the-scenes glimpses in the rehearsal footage here, previously only available on DVD. At the top, see the band practice “Polly,” and see a frustrated Grohl, whom Cobain considered leaving out of the show entirely, smoke and joke behind the scowling singer.
Further up, see Cobain strain at the vocals in “Come as You Are,” while Grohl shows off his newfound restraint and the band makes the song sound as watery and wobbly as it does fully electrified. Above, Cobain and guitarist Pat Smear work out their dynamic on Bowie’s “The Man Who Sold the World,” while cellist Lori Goldston helps them create “the prettiest noise the band has ever made,” writes Chamings. Even 25 years on, “there is no way of listening to Unplugged in New York without invoking death; it’s in every note.” Somehow, this grim intensity made these performances the most vital of Nirvana’s career.
Related Content:
Animated Video: Kurt Cobain on Teenage Angst, Sexuality & Finding Salvation in Punk Music
How Kurt Cobain Confronted Violence Against Women in His “Darkest Song”: Nevermind‘s “Polly”
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Read More...