Tuesday, December 11, 2012

Human Evolution: A Geographer's Perspective

Not long ago, I received an email asking me to speak to a church group about the evolution of humans from a geographer's perspective. It was from a member of the Ethical Humanist Society in Skokie, Illinois; he'd seen my blog. I knew enough about the organization to agree quickly, but I asked for several months to go over my books and notes on hominids and humans and their migration out of Africa. At first I had in mind a sort of Jared Diamond approach; I wanted to reread his books (Guns, Germs, and Steel and The Third Chimpanzee -- I tried, but could never get through Collapse), Darwin's The Descent of Man, and an anthropology text called Darwin's Legacy. But the more weekends I tinkered with my thoughts, the more the scope of my talk expanded, and I chose in the end to weave together themes from many of my existing blogs, culminating in a new set of ideas which has intrigued me. This blog is a summary of the talk, which I gave to a full house on Dec. 2.

As I put down my thoughts in narrative here, I will embed links to my various essays and will mention specific books so anyone interested in going deeper can do so easily. As my date approached, whenever I timed myself honestly I could not come close to getting through my address in 45 minutes, so I invoked what most people call natural selection, evolution, or survival of the fittest; it is simply the process of elimination. I jettisoned whole chunks, distilled others, and then cut even more to allow myself some digression -- to switch it up a bit with something spontaneous if the mood struck. After all, innovation is an essential part of evolution too.

I must admit I was a bit outside my comfort zone, as this was the first time I was to speak on evolution in a formal setting. I worried, as is normal for me, that something might go terribly, crashingly wrong. I teach cartography and GIS, and while I've been designing a class on biogeography I haven't yet rolled it out. But I have plenty of raw enthusiasm, quite a few thoughtful perspectives, formal coursework, and my heavily annotated science books, which are listed to the right. This was quite a welcome opportunity too. By now I've exhausted my family members on the topic. To be fair, they are still receptive to my ruminations -- but only in small doses.

A little background about Ethical Humanists. A few years ago, when I took an online survey to determine which religion best matched my views, I hadn't known of ethical humanists, but it turned out I was one. I'm no authority, but Humanists seem a bit like Unitarians I've known and many liberal Quakers I grew up with -- keenly interested in knowledge, goodness, and community -- and pointedly light on the supernatural talk. Like the Unitarian church I briefly attended in Southern Illinois, these Ethical Humanists bring in speakers. I recently saw Dan Barker, author of Godless, address this very group; he is a former evangelical preacher and is now an articulate, lighthearted, and very effective spokesman for those who have shed their religious beliefs.

And as I was milling about before the talk I picked up a little card summarizing the mission statement of the parent organization, the American Humanists. Excerpts from their manifesto include:

"Humanism is a progressive philosophy of life that, without supernaturalism, affirms our ability and responsibility to lead ethical lives of personal fulfillment that aspire to the greater good of humanity."
 
"Knowledge of the world is derived by observation, experimentation, and rational analysis." …
 
"Humans are an integral part of nature, the result of unguided evolutionary change."  
I found that refreshing; clearly I could expect a kind and educated audience and the last bit was particularly helpful with my last-minute jitters. Here is their web site -- they have events all through the week, and for children, too.
 
My title was "Human Evolution: A Geographer's Perspective" and I started off with my standard clarification of the scope of geography -- a discipline which is oddly inconspicuous in America but has become much easier to explain with the advent of Geographic Information Systems. GIS overlays a variety of digital maps of the same location. One might be of winds, another of population, and a third of hospitals. And each one carries an inventory; imagine a spreadsheet wherein each row represents one little mark on the map: a weather station, census tract, or particular hospital. Respective columns contain wind speed and direction for various times of the day; everything from the U.S. census; and all the information about hospital facilities. Stack these maps, and GIS will let you count the number of young children 5 miles downwind from a chemical plant, and determine whether nearby pediatric wards could handle an explosion. The databases are relational, with location being the relational link. This is how geographers think -- location is our hook, and we can hook into all things for which location matters.
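For the curious, here is a minimal sketch of that downwind query in plain Python. The coordinates, the little tract table, and the wind direction are all made up, and a real GIS (ArcGIS, QGIS, PostGIS) would do this with a proper spatial join over relational tables rather than a loop, but the logic is the same: location is the key that ties the layers together.

    # Toy version of the downwind query -- hypothetical data, no real GIS package.
    from math import radians, sin, cos, asin, sqrt, atan2, degrees

    def miles_between(lat1, lon1, lat2, lon2):
        """Great-circle distance in miles (haversine)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3959 * 2 * asin(sqrt(a))

    def bearing(lat1, lon1, lat2, lon2):
        """Compass bearing from point 1 to point 2, in degrees."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        y = sin(lon2 - lon1) * cos(lat2)
        x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(lon2 - lon1)
        return (degrees(atan2(y, x)) + 360) % 360

    # One "map layer" per table; location is the relational link.
    plant = {"lat": 42.03, "lon": -87.75, "wind_toward_deg": 90}   # wind blowing east
    tracts = [  # each row: one census tract (invented numbers)
        {"id": "A", "lat": 42.03, "lon": -87.70, "kids_under_5": 320},
        {"id": "B", "lat": 42.10, "lon": -87.75, "kids_under_5": 210},
    ]

    downwind_kids = 0
    for t in tracts:
        d = miles_between(plant["lat"], plant["lon"], t["lat"], t["lon"])
        b = bearing(plant["lat"], plant["lon"], t["lat"], t["lon"])
        off_axis = min(abs(b - plant["wind_toward_deg"]), 360 - abs(b - plant["wind_toward_deg"]))
        if d <= 5 and off_axis <= 30:      # within 5 miles and roughly downwind
            downwind_kids += t["kids_under_5"]

    print(f"Children under 5 within 5 miles downwind: {downwind_kids}")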
 

I also quickly ran through some of the more common misunderstandings about evolution -- common, that is, among the 45% or so of Americans who believe in evolution at all (I know, WTF!). Evolution is not a drive toward perfection, humans didn't come from chimps, very few traits result from a single gene, biology actually does interact with environment, and so on. You only need three things for evolution to occur, according to Richard Dawkins in the seminal The Selfish Gene: 1. something must duplicate, 2. it must do it imperfectly, and 3. the differences must matter in the real world. That's it. Then it evolves. And I defined species, using the biological definition: groups which can't naturally produce fertile offspring with one another. Species, I'll add, are usually the result of geographical barriers separating populations, which are then groomed by selective pressures or drift apart randomly. The exception to this is called sympatric speciation, where different groups fall into different niches within the same locale. There is much about this last one which is still a mystery to me; I understand why they would go into different niches and why the populations drift apart, but why would the hybrid (like a mule), vigorous in all other ways, be sterile? I've read expert opinions, but I still don't know.
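Dawkins' three conditions are easy to see in a toy simulation. Everything below -- the four-letter "genomes," the target string standing in for the environment, the copying-error rate -- is invented purely for illustration, but the loop is the whole recipe: duplicate, duplicate imperfectly, and let the differences matter.

    # Toy replicators: copy, copy imperfectly, and let variation matter.
    import random
    random.seed(1)

    TARGET = "ACGTACGTACGT"          # stands in for "fitting the environment"
    ALPHABET = "ACGT"

    def fitness(genome):
        # 3. variation must matter: more matches, more offspring
        return sum(a == b for a, b in zip(genome, TARGET))

    def copy_with_errors(genome, error_rate=0.05):
        # 1. duplication, 2. done imperfectly
        return "".join(random.choice(ALPHABET) if random.random() < error_rate else base
                       for base in genome)

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
    for generation in range(60):
        # the fitter half leaves offspring; the rest do not
        population.sort(key=fitness, reverse=True)
        parents = population[:25]
        population = [copy_with_errors(random.choice(parents)) for _ in range(50)]

    best = max(population, key=fitness)
    print("best after 60 generations:", best, "fitness:", fitness(best))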


I showed the well-known map of human migration out of Africa about 50,000 years ago, shortly after the Toba super-volcano pressed human populations down into the thousands -- to near extinction -- for thousands of years. That genetic bottleneck is thought to explain why there is very little variation in the human genome today. There is said to be more variation in a single social group of chimps than there is in the entire human race. The routes humans took from there are shown to the left.

Then I rolled out the story of how I traced my own DNA, using National Geographic’s Genographic Project and 23andme. 
 
As I was preparing my talk I very much wanted to comment on 3.5 billion years of life, not just the last 200,000 years in which humans have existed. Here's how I did it: first I described ring species, using the example of the circumpolar Larus gulls, all the way around at 80 degrees north. Larus characteristics vary only slightly from place to place; neighboring populations satisfy the biological definition of a single species by producing fertile offspring with one another. However, where the ends of this ring meet, the gulls won't have anything to do with one another. The Herring Gull and the Lesser Black-backed Gull are therefore different species if you face east, but the same species if you're looking west! And here is where it becomes relevant to humans. There is a similar and utter lack of species breaks when you trace humans all the way back to the first spark of life; it's incremental variation, all the way down. Never would one generation be unable to produce fertile offspring with the next, and this is true not only going down but back up the trunk, branches, and twigs, to all living things today. You can travel from the woman to the ladybug on her shoulder if you crawl from one leaf (her), back through time, all the way to the junction, then back up another branch and twig to the bug. You'll never step from one species to another.

So, to understand humans it only makes sense to start 3.5 billion years ago when all life began.  And we know life only started once, because all genetic code from protozoa to great blue whale is written in the same language with the same four letters: nucleotides A, C, G, and T.
 
When I was practicing this next bit, which is really the only new thing I had to offer, I always ran long so I created a graphic to help me skip through it.  My thesis is that plants and animals were driven by four essential needs (gathering food, finding mates, evading predators, and protecting young), and in response evolved patently spatial strategies: improved perception and control of the environment, and improved navigation through it.

I've ruminated before about first life. From there forward, the strand which would become human usually opted for complexity, given the option. There is nothing shabby about simplicity -- bacteria took that route and ... how does that saying go: "He who laughs last, laughs best." We tooled up, while they stripped down. They hunker. We lumber. You might summarize that our ancestors were both imprudent and lucky. Good fortune has blissed us with brains and consciousness.

Digression: It is simply awesome to be self-reflective; it's spectacular, really. Dumbfounding. I must add that I also feel like a spectator; I don't presume to think I am independently charged. On the topic of free will I'm with Sam Harris -- we only feel that we have free will because we can't predict what we are going to decide next! But it makes sense that we feel so self-directed. As Robert Wright clearly explains in The Moral Animal, self-delusion has some clear evolutionary advantages.
Now we come to the meat of this story. Several brilliant authors have traced life from its origins to the modern day. Richard Fortey's Life, Matt Ridley's Genome, Nick Lane's Life Ascending, Bill Bryson's A Short History of Nearly Everything, and of course Richard Dawkins' The Ancestor's Tale ... they all take us on a long, long journey into the past.
 
I won't touch on every invention, but if you read the chart carefully I think you'll appreciate the spatial dimension of each juncture. It's commonly thought that we first duplicated in porous rock by the deep-sea vents' stable gradient of heat and pH. You might say we were born in bondage. When we finally developed our own cell wall we were free to blithely drift and duplicate for more than a billion years, without a care (so to speak), until the moment a mutation caused one bit to eat another. Nutritious! It was rewarded by survival, and procreation. So predation, around 2.4 billion years ago, placed a premium on movement for both parties; when one of the cellular creatures grew a tail -- actually a flittering hair, a "flagellum" -- it too was rewarded in offspring. Soon we (our ancestors) all had little wigglers. Two billion years later, when a flotation bladder accidentally became a rudimentary lung, we crawled onto land. But there, cold nights were particularly important -- there had never been such temperature fluctuations underwater. Warm-bloodedness emerged, as soon as it could, simply because of the survival advantage of being first up for breakfast. Our eyes had developed around this time, so we could scan our environment for predators and (conversely) for food. In the broad view this was nothing special. Eyes have evolved independently about 40 times. Judged against the other eyes, human eyes are just mediocre.

The amniotic egg, over 300 million years ago, allowed us to have young even where there were no ponds. This was a spatial explosion! Next came the secondary palate, and we could run with food in our mouths, still breathing. Larger meals, greater territory still.
 
There is a period in our history, 145-65 mya, when we reversed our general trend toward complexity and territorial expansion. There were dinosaurs about; they were better at eating than we were. So we hunkered down for some 80 million years, burrowing, developing timid habits, constricting in a spatial sense ... until that wonderful asteroid landed near the Yucatan Peninsula, causing rains of burning sulfur and effectively removing our oppressors. It was suddenly safe not only to emerge from underground, but to take to the trees, and we did, thereby improving our stereo vision and building up bony eye sockets. The process of elimination (by fatal falls) certainly helped with these new innovations.

We were never destined to greatness, remember.  It's not fate, but luck.  
 
Eventually Homo habilis (the handyman) started making tools: first, a stone knife, which was simply an improved, replaceable tooth and nail. This cleverness paid off in food, as man attacked beast, and no doubt in mates -- as man attacked man. Lots of animals make tools, but this was a particularly good one. Not as good as the spear, though, and I've recently read that the spear may have spurred a social revolution. Prior to the spear and bow, hanging back whenever a group tried to take down a large animal paid off in survival tokens, so everyone tended to be a little tentative. Then spears made cooperative hunting possible by limiting the personal risk. It built community; it's the prisoner's dilemma, pure and simple.
 
But then came the big leap -- language -- 100,000 years ago. We had been getting smarter, no doubt -- in the previous 2 million years our brains had tripled in size -- but with language we could suddenly tap into the brains of others. My 16-year-old son recently observed that language is a lot like telepathy, and it's true. When we developed written language we could access the thoughts of people far away (and those of some dead people too). Language was not the first wireless network, however; it was probably second to mirror neurons. It is commonly believed (among scientists) that mirror neurons allow one to actually experience the experience of others. I watch you grasping a coffee mug, and my brain feels the grasping -- my grasping neurons fire. Some have speculated that this is how fish school and birds flock, and why yawns are infectious. Mirror neurons may very well have been the dawn of empathy.

Then, just 600 years ago, the movable-type printing press brought an immense knowledge base to the literate commoner, and it allowed the literate commoner to share his intelligence too. Do you see how this leaps and stretches across space?

Each bar in the diagram represents a much shorter time frame, reflecting the acceleration of change. The bar on the right, spanning just 5,000 years, might as well have focused on innovations in transportation -- wheel through boat, horse, horseshoe, steamboat, bicycle, locomotive, car, airplane, helicopter, jet, and rocket ... but I’ve chosen instead to emphasize communication.

After the telegraph (1843), telephone (1876), radio (1901), television (1925), photocopier (1958), satellite (1963), and worldwide web (1989), with the advent of the computer, the internet, and the inexpensive mobile phone, we are witnessing a revolution in information as important as what happened with language 100,000 years ago -- but with 100,000 times the spatial bang. The two maps show internet connectivity (node to node, local connections ignored) and Facebook friends. But think of Google searches, Netflix, Wikipedia, Facebook, Skype, MOOCs, YouTube, and all manner of mobile apps. Isn't it interesting to speculate that while no other animal is even aware of it, Homo sapiens brains around the world are in the process of coalescing, in a very real sense, into one brain. The new brain still has a slippery grip, but it's a global grip. An interplanetary reach, even, as we have recently sent an extension of our own eyes to Mars.
 
I ended my talk with a little fun, speculating about ideas, which are virus-like replicators and therefore also subject to evolution. Much like genes, ideas duplicate imperfectly, and they are filtered by the external world. A good joke, bad news, and useful information are repeated, while other noise fizzles out. Like genes, concepts can also team up to duplicate more reliably.

For example, think of this. Combine empiricism, conjecture, hypothesis testing, evidence-based reasoning, statistics, peer review, transparency, and a few more concepts and you have the scientific method, which has spread. To carry it further (in my way of thinking), science often comes up against religions, which have at their core just three little ideas: 1. there is an invisible, mysterious entity, 2. it can and will hurt you if you don't follow its rules, and 3. the first rule is to believe in it. This circular core could not withstand scrutiny by itself, so it is protected by its own cell: faith, of the sort that means belief without evidence. Now comes the hat trick: this faith is presented as a virtue. And then distractions are attached, such as ritual, song, cathedrals, art, diet, dress, scripture, etc. Hence the mad, circular core concepts are sufficiently insulated that the whole package replicates, most easily in less educated minds and those of children -- imprinted, like a duckling, for a lifetime. And that's all you need, but if it becomes lucrative there will be those who promote and defend it for other reasons as well.
 
This idea of ideas having their way with us -- whether it’s science, religion, or neighborhood gossip -- well I find it ... er .. infectious.
 
Looking back on my hour, there were two big surprises. The first was that while I was arguing that human evolution can be seen as a series of advances in spatial awareness and control, I nearly stepped off the stage.  The humor in that was not lost on the crowd, I’m happy to report.  Secondly, when someone asked me if I’d written my thoughts down, and I said I’d probably do it in my blog, there was actually applause! This, recall, is the same blog my vigilant statistics tracker regularly reminds me that almost no one reads. That encouragement probably pushed me over the edge and I actually took the time to do it.  Everyone, but particularly if you are an Ethical Humanist, and if you've read this far ... bliss you.

Saturday, November 3, 2012

Three Books Covering It All

Now and then someone writes a hugely ambitious book intended for regular guys like me. This is what Bill Bryson did in 2003 with A Short History of Nearly Everything. To be honest I own a copy but I've never actually read it; I've listened to the audio recording a couple of times, mostly as I bike to work. It's Bryson himself speaking, with brightness and enthusiasm matching his prose. A few days ago I started listening a third time, and shucks, I'm hooked again. I won't comment on what that means about my memory (hey, we listen to songs we love over and over, right?), but there go the next six road hours. Bryson apologizes for the title of his book right away -- it's not really about everything that is known, but since he spends a good part of it on the origin of the universe, in a sense it is about everything. I'm a geographer after all, so my professional opinion is an endorsement. But as I found with a subsequent book, At Home, Bill Bryson is much better at writing chapters than he is at titling them. Some of these, "Muster Mark's Quarks" or "Goodbye to All That," are just too cute and opaque to do justice to the illuminating and serious content they contain. On the other hand, his sprinkles of jaunty humor do lighten what could otherwise be, for many of us, a heavy load of STEM disciplines: physics, astronomy, chemistry, geology, and biology.

And that pretty much is the aggregate sum of my complaints. Bryson takes the reader on -- if you can believe this -- a romp which starts at the "singularity," when matter has infinite density and zero volume (!?), and goes all the way to the advent of Homo sapiens. He uses a long series of pivotal discoveries and a colorful cast of characters (scientists, all) as stepping stones along paths which diverge, dead-end, criss-cross, and backtrack, but in the aggregate move science forward, and move the reader toward understanding. The book is nicely referenced with end notes, a bibliography, and an index. A word of warning: a whole lot happens before life even begins. "The Rise of Life" is Chapter 19, page 350. By the time we get to humankind, Bryson is pretty much winding down.

That was quite a lot to wrap my mind around, so it was a few years before I searched for a book Bryson had referred to fairly often: Richard Fortey's Life: An Unauthorized Biography (1998). I just could not find it, but I did come across a used copy of his Life: A Natural History of the First Four Billion Years of Life on Earth (1997). I'm guessing it's the same book? Seemed so, so I ordered it. It arrived poorly packaged; it had been rained on and the pages were swollen. But I discovered I could all but flatten it with a couple of weeks in a bench vise. Some of the photographs were nearly ruined, but I got to it eventually, and was not disappointed.

This is a nice companion to Bryson; in a way it's a similar run through the same terrain. Fortey's has an index and glossary but oddly no references or bibliography. Another big difference, and a welcome one to me, is that Fortey starts life out in Chapter 2; that means he's basically Earth-bound and thereby skips a whole lot Bryson had gone over. Almost everything, you could say. There is no talk of the singularity or the Big Bang, and no hypothetical trips toward the end of a curved universe. But it's life onward, and with more pages to do it in. Fortey's writing style is clear and engaging but less effervescent than Bryson's. He doesn't continually pause to delight over quirky details of the odd personalities and academic villains amongst history's most brilliant minds.

Though they start about 10 billion years apart, Bryson and Fortey both end their stories with the rise of Homo sapiens; neither ventures into the complexities of the modern world except to describe the scientists and discoveries which make the story possible.

And that is exactly where Richard Dawkins starts -- at the advent of Homo sapiens -- in his 2004 tome The Ancestor's Tale. But instead of moving forward from there he cleverly goes backward, from modern humans to the origin of life. It's brilliant. He covers the same time period as Fortey: 3.5 billion years. But where Fortey started at the seed and followed life up through time, Dawkins starts at a leaf -- modern man -- and travels downward from twig to larger twig to branch to larger branch, converging at last with all life forms. There are 39 junctions, or "rendezvous," along the way. Each juncture lets him comment on the diverging branches and their own twigs, but he always comes right back to the human ancestry to go one branch lower. It reminds me of National Geographic's Genographic Project, by which I traced my paternal lineage backward (with genetic markers). At each generation backward I expanded my contemporary circle of kin. My Y chromosome, used by the Genographic Project, passes only from male to male, so at each generation the female line is ignored, and each male ancestor going back contributed only half as much as the one before to what is now me. Go back 2,000 generations and it's really just an interesting academic exercise; I'm homeopathically diluted. However, and ironically, in tracing the evolutionary tree this doesn't happen. All of what humans were to become was embedded, long, long ago, in something like a small mouse. It wasn't a mouse, but its genetic offspring became a mouse, and a beaver, and a lot more, and us. It was a common ancestor.

As you might imagine, the journey backward is a familiar one early on. Those we encounter are recognizably like us. After navigating the Cro-Magnon, Neanderthal, Ergaster, Habilis, and so on, we come to rendezvous #1, about 6 million years ago, where we meet up with the ancestor of modern chimpanzees. At rendezvous #3 we merge with the gorillas' line, at #6 the New World monkeys', and at #9 the tree shrews'. You can see where this is going; farther back the company gets more general and less, we are sure to feel, like ourselves. Rendezvous #17 is with amphibians, #22 with lampreys and hagfish, #31 sponges, #36 plants, until the end, #39 ... eubacteria. There are replicating iota even farther back ... protobacteria, DNA, RNA ... but these are not yet "life" as we think of it.

As I was going through this I thought surely the farther back I went, the drier it would become and the more difficult it would be to stay interested. But if anyone has a knack for drawing out the true wonder of the natural world, it's Richard Dawkins, and I found the latter chapters excellent reading too.

Each of these books is truly an epic. Adding Bryson (574 pages of text), Fortey (315 pages), and Dawkins (614 pages), one has more than 1,500 pages, and that's enough to ruminate on for quite a while. If you're an extreme reader, add on the 192 pages of notes, references, and bibliography.

Yes, these are potentially intimidating books, to be sure.  But if you pick up a copy of one (maybe start with Dawkins) – don’t be surprised if you get sucked right in to emerge -- one way or another -- 3.5 billion years, or 14 billion, later.

Saturday, August 11, 2012

Reframing Organizations (reviews)

This may be one of the best known books on leadership, as far as I can tell; it's in its fourth edition now and often used as a text in management and leadership classes. There are various ways to look at organizations - the main point of the book is to explore some of them and show that it's often useful to shift frames to get a new perspective.   A scenario near the end in which a public insult is made to an incoming manager shows how the various frames suggest very different ways to reply.  It also shows how each one of them can be overdone.

Of the four frames taken up in this book - structural, human resources, political, and symbolic - three are excellent. The Structural approach uses a factory or machine as a metaphor; it's concerned with goals, order, precision, planning, consistency, reliability, specialization, and assessment. Human Resources views the organization more like a family; it's sensitive to personalities, communication, strengths and weaknesses, motivations, and employee well-being. The Political approach deals with coalitions, partnerships, team building, domains, and group conflict; its metaphor is a jungle.

I've read elsewhere of a couple of others: the Natural Systems Model apparently views the organization as an organic whole -- a sort of Gaia hypothesis of organizations, which I don't put much stock in. The Sociotechnical Model focuses on the humans and their tools. But most intriguing, to me, is the Cognitive Model, which recognizes that personal goals can be aligned with organizational objectives; it focuses on specialization and the flow of information.

Back to the book at hand. The Symbolic Frame is the fourth in the Bolman/Deal presentation, but it probably doesn't deserve the position. Its metaphor is theater, carnival, or temple, but symbols are actually more of a tool for the other perspectives than a viable frame of their own. For example, symbolism and symbolic gestures can be very influential in a family -- say, with a birthday cake; in a jungle -- a bright color might signal "poison"; or even in a factory, where a large red X by the chopping pit might save a few lives. But when symbols happen in a theater -- while it may seem real, it's really make-believe.

Regardless, the authors spend a chapter proposing symbols as a viable organizational framework. Symbols "project legitimacy"; they "engender support, faith, and hope." The purpose of meetings, from this view, is to provide an "expressive occasion to clear the air and promote collective bonding." A plan is just a badge of honor, a game, or an advertisement. A strategic plan is an especially high honor, no matter that it's often nothing more than "fanciful," they say. An organization "needs to foster faith and confidence" and therefore it undertakes evaluation (e.g. of employees) to "assure spectators that [it is] responsible, serious, and well managed." To top it off, get this: "Successful leadership is having followers who believe in the power of the leader."

This chapter made me ill.

Fortunately, the authors are probably wrong about symbolism. Sure, it can be powerful, emotive, and motivating; it can signal or commemorate important things and serve as a shorthand way to evoke complex concepts. Words are symbols. Facial expressions are symbols. So is the American flag, the Facebook logo, a song, birds chirping. We're swimming in symbols, and this is important because they mean something. But when they don't mean what they pretend to, they can be very dangerous, so the ability to detect fakes has been rewarded by natural selection. And yet we are still victims of illusion. We can be manipulated. But if an organization were to actually use symbols as if they were meaningful in themselves you'd have ... North Korea? It's hard to even think of a company that has tried this, because it would fail. Enron?

No, smoke and mirrors is not a legitimate organizational frame. Symbols for real things are useful, that's as far as it goes.

But honestly, this shortcoming doesn't detract from the insightful summary of the three other approaches, or from much of the discussion of symbols as tools in other chapters. It's content-rich, well organized, and well written, though sometimes with a few more vignettes than I would prefer. For someone new to organizational thought, this will be a great primer.
...

Then a friend recommended a more recent book, also by Bolman but with a different co-author (Joan Gallos, a professor of leadership at the University of Missouri); it's called Reframing Academic Leadership and it takes the same four-frame schema but applies it specifically in the context of higher education. The first few chapters did seem like a rehash of the first book: structural, political, and so on ... but then the examples became infused with something unique to academia: the rift that often develops between faculty and administration. "Faculty can see staff as unduly constrained and bureaucratic," they explain. "Staff often wonder why they have to track their hours and vacation days when faculty seem to come and go as they please."
Being an expert within a discipline or -- more often -- sub-discipline is not very amenable to hierarchical control. Disciplines themselves may become "silos," with little communication horizontally with other departments. But more important, it's not always easy for faculty members to see or appreciate the complex institutional machinery required to assemble groups of inquisitive youth in rooms at the appointed hour, year after year, in a fluid and unstable external environment. Meanwhile, academic administrators (unlike those running a factory or grocery store) cannot hope to understand what actually happens at the other end of the hierarchy. They do not have the disciplinary expertise. There's a built-in volatility which is difficult to control.

The previous book only went so far as to compare universities to hospitals.  That’s an interesting thought – doctors and patients there may be counterparts to faculty and students here.   

But they go much farther in this book. The reference group for an individual faculty member, for example, often does not include the administration or staff, colleagues in other divisions, or even colleagues in their own department. They may, instead, be aligned intellectually with like-minded specialists at other institutions -- institutions which, from a business perspective, may even be competitors. The culture, goals, outlook, perspective, motivation, associations, and knowledge base of Professor Jones may be worlds apart from those of Dean Johnson or Provost Peters. And although they may distrust one another, and may even fight, they must also pull in the same direction if the institution is to survive. Such is life in academia.

The audience for this book is probably small, not only because it is specific to academia, but because it will probably be of more value to administrators than to faculty. Faculty can often all but ignore the broad institutional view of the administration -- although they shouldn't, of course. Still, faculty will find the book interesting, and will recognize their own importance in the larger schema. The first law of higher education leadership, the authors write, is "If you lose the faculty, you lose." And they also discuss the "pervasive faculty scorn for bureaucracy, administrators, and hierarchy."

The same three "frameworks" as in the previous book are wound again through academia -- political, structural, human resources -- to great effect. It's all different in this unique environment. As before, an effort is made on behalf of symbolism too, which fell short, I thought, for the same reasons I expressed above. There is no denying the symbolic power of Arizona State University President Michael Crow's 2002 inaugural address (reprinted in part), in which he describes "a new gold standard" of higher education. The speech was moving, inspirational, and effective because it presented a vision; but it was a vision which involved people, politics, and structure. It wasn't that the symbols were so valuable in themselves, it was that they were an effective form of communication. If symbology is an entire frame of academic leadership, so is shouting. If there was anything symbolic that needed a little explaining, I thought, it may be the stark difference in dress code that often divides faculty and administration. If communication, interaction, and cooperation are so important between these groups -- if they are all on the same team -- why the broad-brushed underscoring of differences, with apparel? I understand the jeans and sneakers on faculty -- they're comfortable and that's also what students wear. What's with the suits and ties? Probably, administrators must communicate with politicians, businessmen, legislators, donors, and others who may appreciate the formality. But it does affect the faculty-administration dynamic, and it's an issue that may be worth taking up in the next edition.

Despite that shortcoming, and a couple of chapters at the end that refer vaguely to "feeding the soul" and the "sacred nature of academic leadership," the insights keep coming, chapter by chapter: transparency and secrecy, reward structures, recruiting and hiring, managing budgets and personnel, review, accountability, motivation, cross-disciplinary cooperation, communication, self-control, autonomy, conflict resolution, assessment, regulations and guidelines -- these are all addressed. Anyone involved in higher education will be thankful for this illuminating book.

Tuesday, May 29, 2012

The Omnivore's Dilemma (a review)

Probably, if you were going to read it you have already, but if you haven't ... you might want to pick it up. It was the #1 New York Times Bestseller in 2006 and it's still quite relevant today -- maybe more so, who knows. 

Normally we may think of evolution as a drive toward complexity, but bacteria have gone the other direction: they've evolved downward into simplicity and into very narrow niches. This is an excellent survival strategy - what will survive any catastrophe you can imagine? Somewhere, bacteria, probably.

But most animals have opted for complexity and flexibility instead. They are able to move about and adapt to new environments. But here's the rub: since they choose to be flexible, they have to be flexible.

There is a direct analogy in the gastronomic world, it turns out. Some creatures have taken the simple approach by consuming a limited range of things. They can afford to do this because they have evolved elaborate intestines with which to work food over thoroughly and in which to harbor bacteria that convert one sort of input into all the various nutrients their bodies need. These are the herbivores and carnivores, and genetic code alone, which we call instinct, is sufficient to get them fed. Omnivores, on the other hand, have taken the high road; their innards are leaner and less elaborate, so they must gather the right mix of inputs themselves. And in doing so, they must avoid the dangerous ones. This requires a lot of care, and thought, and therefore ... big brains. It's a tradeoff between a simple lifestyle with an elaborate belly and a complicated lifestyle with a lean interior. So the omnivore's dilemma is figuring out how to gather the right foods while not taking in the harmful kind. That in itself is a dilemma, but Pollan points out there are plenty of moral quandaries as well.

The book is as entertaining as The Botany of Desire (2001), in which he looked at the story of apples, potatoes, tulips, and marijuana from the plants' perspective. Here he takes on corn, grass, meat, and fungus, and once again we benefit from his careful research and introspection (the latter occasionally laid on a little thick, for my taste). He also does a great deal of field observation, visiting the food factories and farms, talking to many different kinds of people, gathering mushrooms, even slitting some chicken necks himself, and shooting a wild boar. He describes much of this so well I felt I had done it too.

His best field trips included a large sustainable farm in Virginia where production is high, costs are relatively low, waste is almost nil, and the animals are mostly content. What's most impressive is the cleverness with which it all works, and the owner explains that in detail. It's a stark contrast to some of the more corporate operations -- a standard corn-fed feedlot, a poultry farm, even an organic farm that turned out to be pretty much like the others. In these chapters the moral dilemmas come into the sharpest focus.

Food -- if you haven't noticed -- has become a new moral battleground, and when Pollan disparaged the new methods, and the lower quality of food they sometimes produce, at times I felt he didn't fully appreciate the countervailing moral implications of the much larger quantities turned out now. All that food is a good thing, too. When he marveled that corn production increased from 75 bu/acre in 1950 to 180 in 2006 (a 140% increase, and often to the detriment of small-time farmers), he didn't mention that world population increased by 172% in the same period. Sometimes I thought it was a little one-sided, because the older methods could not easily produce the food we need now. The hybrids that give us pumped-up ears of corn are themselves infertile. That's not a Monsanto conspiracy -- as he intimates -- it is a fact of nature. Vigorous hybrids, like mules, are often -- oddly -- infertile. In the end it appears he was sometimes just exploring some of the more extreme views of his interviewees, as his own conclusions seemed balanced and reasonable, in my opinion. As a reader I felt I had been treated fairly.

First, it's corn's dizzying ascendancy as a food source, with the field trip to a chemical plant that rips the kernel pulp apart, sending it out in a tangle of different spigots -- some headed for the gas tank, others to the various mixers of myriad foodstuffs, others to make non-edibles. There's a good discussion of the political and economic forces driving the corn industry too. In the second section, on "grass," he works on the sustainable Virginia farm, among other things. When it comes to meat, he compares the sustainable approach to that of a large organic poultry operation, a feedlot, a commercial slaughterhouse, and more. And all through the book he comments on underlying philosophical issues.

And the section on fungus (mushrooms) is interesting mostly from a botanical perspective; it could have been in The Botany of Desire. In the end he pulls the story together by describing his "perfect meal" made up of perfect ingredients, served to perfect guests. That just seemed unnecessary to me, but by that time I'd had enough good intellectual and philosophical material to chew on - enough food for thought, you might say - to forgive him a little retrospective self-indulgence.

Saturday, May 12, 2012

Thinking, Fast and Slow (a review)


I’ve been shaken by a recent book called Thinking, Fast and Slow (2011), by Daniel Kahneman.  I read the book mainly because Steven Pinker (one of my top-shelf authors) called Kahneman “the most important psychologist alive today.” His work, Pinker wrote, “has reshaped social psychology, cognitive science, the study of reason and of happiness, and behavioral economics …”  I figured anything Pinker endorses with superlatives can’t be bad.  And it wasn’t.  It’s actually amazing.

Kahneman, it turns out, is a Nobel laureate for developing prospect theory, and he has been actively working in the field for nearly 60 years. The theory, developed with his lifelong collaborator Amos Tversky, suggests that people evaluate risks in terms of losses and gains from a reference point, and not simply in terms of final outcomes, as utility theory would suggest.
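To make the contrast with utility theory concrete, here is a minimal sketch of the prospect-theory value function. The exponents and the loss-aversion coefficient below are the commonly cited estimates from Tversky and Kahneman's later work; treat them as illustrative rather than as this book's own numbers.

    # Prospect-theory value function: outcomes are coded as gains or losses
    # relative to a reference point, and losses loom larger than gains.
    # Parameters (0.88, 2.25) are the commonly cited Tversky/Kahneman estimates.
    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        if x >= 0:
            return x ** alpha                 # diminishing sensitivity to gains
        return -lam * ((-x) ** beta)          # losses hurt roughly 2.25x as much

    # A $100 gain feels smaller than a $100 loss feels bad:
    print(value(100))    # about 57.5
    print(value(-100))   # about -129.5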


The book is a summary of his and others' research on judgments and decisions.  He started his own work at 20 when he was asked to determine which prospective officers in the Israeli Army should be promoted. His PhD is from Berkeley and he’s now faculty emeritus at Princeton. He is collaborative, and ingenious, and focused on ways in which the human mind cuts corners and what can be done about it. At the end of each chapter he suggests how lessons from these often startling findings can be used in daily life, to think more clearly. So here is the main point.  In ways we can understand and predict, we make systematic errors in judgment throughout the day.  It’s probably because of natural selection that we’re good at making quick decisions and, although they are often good enough, they are also rife with shortcuts, exaggerations and mistakes.  Then they are capped off neatly with overconfidence.

He says the mind has two operating systems which he labels 1 and 2.  The first is the intuitive mind, the transparent bit that recognizes a pencil as a pencil, or a face as angry or sad.  It's the part that chooses between fight and flight, that judges the size of numbers, keeps track of time, recalls memories, and basically takes the first quick swing at everything.  We consider this “knowing.” System 2, what you might call “thinking,” includes estimation, calculation, and all manner of figuring out – much of which happens very fast and we are at least partially aware when it's at work.  Both systems cut corners, jump to conclusions, omit inconvenient information, exaggerate, and guess.  System 2’s biggest problem, according to Kahneman, is laziness.  It leaves System 1 in charge, and when called on it does the minimum that is required.  Generally this means piecing together a credible solution, then going back to rest.
At first this dichotomy seemed a bit forced to me, as I thought there must be a continuum instead.  But it turned out to be a fabulous way to parse out how, when, and why our thinking can go wrong. 
Things we encounter first hand seem more important and more common than they should, as do things we can easily recall.  When the chance of something (good or bad) is small we tend to either ignore or exaggerate the possibility of it happening.   We consider losses about twice as important as an equal gain.  Thoughts immediately preceding can dramatically influence a judgment.  In retrospect, assessing an event, we give far too little weight to blind luck.  Considering options separately often results in different conclusions than what you would arrive at with a broader framework.  We use stereotypes to draw inferences from the group to the individual, and the other way around.  And we’re bad at statistics.  Consider this:
A woman is seen reading the New York Times on the New York subway.  Which of the following is a better bet about her level of education?
    1. She has a PhD
    2. She does not have a college degree
If you chose the first, as I did, think again. There are a lot more non-graduates than PhDs riding the subway. We overlooked the base rate.
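To see how badly the base rate gets overlooked, here is a toy calculation. Every number in it is invented (the book supplies none), but the pattern holds for any plausible figures.

    # Hypothetical numbers, just to show how the base rate wins.
    riders = 1000
    phd_riders = 10            # say 1% of riders hold a PhD
    no_degree_riders = 500     # and half have no college degree

    p_times_given_phd = 0.40         # assume PhDs are far likelier to read the Times
    p_times_given_no_degree = 0.05

    times_readers_with_phd = phd_riders * p_times_given_phd               # 4
    times_readers_no_degree = no_degree_riders * p_times_given_no_degree  # 25

    print(times_readers_with_phd, "vs", times_readers_no_degree)
    # Even with a big likelihood advantage, most Times readers on the car
    # have no degree -- the base rate overwhelms the stereotype.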
Here’s a clever one:
Hearing that the highest incidences of kidney cancer in U.S. counties are found in rural, sparsely populated states, you might guess the reason: these places are poorer and less educated, with less access to health care, where people have high-fat diets, drink more, and smoke. That would certainly make some sense. It's a coherent, compelling, and completely fabricated story.
If you had heard instead that these same areas have the lowest rates of kidney cancer, you might come up with this one: rural living is healthier, with clean air, fresh locally grown food,  people get more exercise and suffer less stress than they would in urban areas. 

So which of these two statements about kidney cancer is true? Both are true; you see, there are fewer people in rural counties, and we are comparing cases per 1,000, and where the denominator is small, random variations in the numerator will cause large changes in the rate. Example: if a disease affects 20% of the greater population, in counties with just 2 people most will show a rate of 0%, some will show 50%, and a few 100%. None will reflect the actual average. This is a statistical phenomenon which even statisticians often overlook.
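A few lines of code make the point. The 20% incidence and the county sizes are invented; only the pattern matters.

    # Small counties produce extreme rates by chance alone.
    import random
    random.seed(0)

    def county_rates(population, n_counties=2000, p=0.20):
        rates = []
        for _ in range(n_counties):
            cases = sum(random.random() < p for _ in range(population))
            rates.append(cases / population)
        return min(rates), max(rates)

    for pop in (2, 20, 2000):
        lo, hi = county_rates(pop)
        print(f"county size {pop:5d}: observed rates range from {lo:.0%} to {hi:.0%}")
    # Tiny counties swing from 0% to 100%; large ones hug the true 20%.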
Some of the findings in the book are so bizarre I still have my doubts, although the research seems sound, has been replicated, the results are consistent, and the effects are strong. For example, when subjects were asked to estimate the percentage of African nations which are members of the UN, it mattered whether the respondents had just spun a 10 or a 65 on a (rigged) "random" wheel. Their estimates for UN membership averaged 25% and 45%, respectively! While people are holding a sequence of numbers in their head they have less self-control eating cake! If they grip a pencil between their teeth (forcing a smile), cartoons seem funnier to them than when they hold it between pursed lips (forcing a frown)! People pencil-frowning will concentrate more strongly on the task at hand. And those required to nod were more likely to agree with an editorial they listened to than those who were told to shake their heads. These results just seem absurd. I admit that my incredulity is not evidence that they're not true. I just don't see how such simplistic influences would not be exploited, and therefore removed, by evolutionary forces.

However, I found myself falling for many of the little test-yourself demonstrations in the book.  Consider this question:

Was Gandhi 130 years old when he died? You'll know he wasn't. But how old was he when he died? The typical response is a significantly larger number than that of subjects who were first asked if he had died at the age of 40. The first group tends to adjust downward from 130 until they hit ages they think are plausible; the others do the same upward from 40. Someone who figured Gandhi was between 65 and 85 might give answers nearly 20 years apart, depending only on the anchor they had been primed with.

Anchors are one of many subtle influences that bias our thinking. Here are some others:
The availability heuristic (page 129), confidence by coherence (209), the illusion of validity (211), the  illusion of skill (212), the illusion of intuition/ strength of algorithms (223) , the hindsight bias (202), the illusion of control (259), the halo effect (199), nonregression to the mean  (175), inside vs. outside view (248), the planning fallacy (250), stereotyping (168) , representativeness (149), the substitution effect (139), probability neglect (144), statistical and causal base rates (168), the conjunction fallacy (158), theory induced blindness (277), risk aversion (270),  loss aversion (284), the endowment effect (293), bad news bias (301), possibility effect (311), certainty effect (311), the expectation principle (312), denominator neglect (329), and overweighting (333).  There is narrow vs. broad framing (336), reference points (281), the sunk cost fallacy (345), and the evaluability hypothesis (360), and the effect of categories (356).  The focusing illusion (402) is a simple one, with reach: (“nothing in life is as important as you think it is when you are thinking about it”), as is WYSIATI (153: “what you see is all there is”).  
This may seem like a chore to work through, but it's not. The writing is very clear and engaging, and the 38 well-crafted chapters are each, remarkably, 10 pages long. The book has plenty of examples, end notes with references, concluding advice, full reprints of two seminal journal articles in two appendices, and lots of little demonstrations you can try on yourself. Another example of these: which of these two opportunities would you prefer?
  1. a gamble that offers a 10% chance to win $95 and 90% chance to lose $5
  2. or would you pay $5 for a 10% chance to win $100 and a 90% chance to win nothing?
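A quick check, spelling out the net outcomes (the arithmetic is mine, not the book's wording):
    Option 1: a 10% chance of ending up +$95 and a 90% chance of ending up -$5.
    Option 2: pay $5, then a 10% chance of +$100 (net +$95) and a 90% chance of +$0 (net -$5).
Same two outcomes, same probabilities; only the framing differs.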
Most people find the second one more attractive. The fact that they are identical is difficult to see because we are more averse to losing $5 than we are to paying $5. Here's another: you are considering surgery (with slightly better long-term results than radiation therapy), and you are told, about the surgery:
    1. The one-month survival rate is 90%
    2. There is a 10% mortality in the first month.
Doesn't the first one sound better? Now consider this:
  • Adam switches from a gas-guzzler of 12 mpg to a slightly less voracious guzzler: 14 mpg
  • Beth switches from a 30 mpg car to one that runs at 40 mpg
Who will save more gas by switching?  The answer is Adam, by a large margin.
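The arithmetic is easy to check. Assume each drives the same 10,000 miles in a year (the mileage figure is mine, not the book's):
    Adam: 10,000/12 ≈ 833 gallons, dropping to 10,000/14 ≈ 714 gallons -- about 119 gallons saved.
    Beth: 10,000/30 ≈ 333 gallons, dropping to 10,000/40 = 250 gallons -- about 83 gallons saved.
Gallons per mile, not miles per gallon, is the quantity that matters.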
One of the best sections of the book is the last. It describes two very distinct senses of self: one is the experiencing self and the other, the remembering self.  When someone asks “how are you?” it might mean "how's your mood right now?" but more likely "how have things been going?” … in other words ... “how happy are you with your memories?”  It’s the remembering self that most people care about, whether they're looking at their own lives (it’s why we take pictures on vacations) or others'.  When subjects were asked to imagine taking a dream vacation, after which all memory of it would be erased, most would not send themselves (or another amnesic) to hike mountains or trek through the jungle.  "These experiences are mostly painful in real time and gain value from the expectation that both the pain and the joy of reaching the goal will be memorable.”  “Odd as it may seem,” Kahneman concludes, “I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.”
Unfortunately it's a little worse still, because the remembering self doesn’t represent the experiencing self very accurately. Looking back, durations of experiences don't matter much.  Only two things do:  1. peaks (good or bad), and 2. how the event ended.  Kahneman explains how these findings can be put to practical use, and also how education, religion, illness, poverty, wealth, and other factors impact the two forms of satisfaction.
It’s a great book -- I believe it's a full life’s work -- and my criticisms are minor.  Maybe not enough distinction was made between one-time judgments and iterated ones.  In the Prisoner’s Dilemma, for example, once-off games lead to mutual defection but when done repeatedly with the same players (as in the real world) it leads instead to altruism.  Some of the results in a psychology lab may be quite different in the real world.  There are also a few surprising typos, like a mix-up on page 335, where “AD” and “BC” are switched, throwing off the explanation; and on page 337 two miscalculations in the formulas actually invalidate the results.
But if these are the low points, they are not very low. The peaks were many and very high -- and even though it took me almost a month to read, duration doesn't count for much -- I remember it very fondly and it will sit on my top shelf. Next to Pinker.
(Gandhi died at 78, BTW)

Sunday, April 22, 2012

On The Origin of Creativity

I'm intrigued by the subject matter, so having read several positive reviews and finding myself stuck in an airport, I paid list price for Jonah Lehrer's Imagine: How Creativity Works. I'd read his How We Decide a couple of years ago and enjoyed it. My anticipation, boosted by a recent NPR interview and one in The Economist, steadily came apart as I read the book itself.

Lehrer does not cite the scientific literature well - there is no list of sources in the back and many claims have no clear references at all. He also seems a little gullible (or sensational) in regard to some of the studies he does mention. One showed that red backgrounds increase test-takers' accuracy and attention to detail, while blue backgrounds double their creativity. Were it so easy. A neurologist can anticipate a puzzle solver's breakthrough 8 seconds in advance. And, he tells us, all the easy problems of the world have been solved, and the cultivation of athletes in the United States should be used as a model for cultivating creativity. Here's my favorite, from a footnote: "Urban areas and the human cortex rely on extremely similar structural patterns to maximize the flow of information and traffic through the system." (p 183) There was no reference.

But my main criticism is that the book relies almost exclusively on anecdote. He trots out case after case of well-known successes (masking tape, Bob Dylan, 3M, Pixar, Google, and so on), and some unknown ones (a surfer, a bartender who puts bacon grease and celery extract in drinks) -- always in retrospect -- and draws out what he presents as yet another insight into creativity. But many of these are contradictory. For example, does creativity come out of isolation (p 19) or from teamwork (p 120); from breaking convention (p 20) or submitting to its constraints (p 23)? Does it help to be in a positive mood (p 32), a depressed one (p 76), an angry state (p 161), or a relaxed one (p 50)? Do caffeine and other stimulants make the epiphanies less likely (p 33) or more likely (p 57)? Should stealing others' ideas be encouraged (p 247) or discouraged (p 244)? Does broadening one's set of skills and interests increase creativity (p 41), or should one concentrate on a single goal (p 95)? Does relaxation stimulate creativity (p 45) or does difficulty do it better (p 54)? Does creativity drive toward perfection (p 63) or is it a celebration of errors (p 87)? Does insight come in a flash (p 17) or is it revealed slowly, after great effort (p 56)? Must a good poem be "pulled out of us, like a splinter" (p 56), or is it best "vomited" (p 19)?

All of these, apparently.

The book boils down in the end to four vague conclusions which he calls "meta-ideas."
1. Education is necessary
2. Human mixing stimulates creativity
3. Creativity requires willingness to take risks
4. Society must manage the rewards of innovation

For me, the best revelation is on p 159: brainstorming sessions, in which "there are no bad ideas," do not often result in good ideas, because criticism is essential. This is the key to the growth of knowledge, good government, and much more -- a theme that is developed thoroughly in David Deutsch's The Beginning of Infinity. That's a much more stimulating and challenging read, which explains creativity (and much else) far better than this one does.


Deutsch doesn't focus on simple triggers for creativity -- blue color, happy mood, and so on -- he places creativity (he calls it conjecture) at the center of all knowledge. Empiricists, who believe all the laws and lessons in nature are there to be read, have it wrong, he says. Science starts with a creative guess, a hypothesis, a mental leap. Objective tests only assess the value of the speculations, to determine whether they have what Deutsch calls reach, which is the value of an idea in a different context. For example, bird wings have reach when the bird flies to a different continent and the wings work there as well. But they don't have reach high above the surface of the Earth, because the atmosphere there is too thin to support them.

When it comes to ideas, some have more reach than others too. Natural selection, for example, has incredible reach. It not only reaches throughout the biological world by way of genes; it also reaches throughout the world of ideas by way of concepts, or memes. Some concepts hold up in more varied situations than others do, and ideas replicate selectively.

But how did creativity itself evolve? Other animals can imitate or copy behaviors, but they don't transmit underlying meanings the way humans do. An ape may learn to use a rock to crack a nut, but it is essentially stringing together the simple movements the task requires. A human, on the other hand, would learn what it means to crack a nut: the underlying concept. Deutsch's example is a student who, having heard a college lecture, can explain its meaning without repeating a single sentence of the lecture itself. The meme has replicated, not the sounds.

Some think creativity was selectively advantageous because it made certain individuals more sexually attractive, or because of the practical benefits of innovation. But Deutsch considers this unlikely: first, because it would be very difficult to detect small differences in the creativity of a potential mate, especially if that creativity was not put to practical use. And it apparently wasn't put to practical use, because for an eon, while human creativity was evolving, there simply wasn't much change in behavior, culture, or technology.

Instead, oddly, creativity developed as a result of the rigid maintenance of a static society, he argues. The human "creativity trait" resulted from thousands of years of conformity. Here's a summary of his chapter called "The Evolution of Creativity":

On the face of it, creativity cannot have been useful during the evolution of humans, because knowledge was growing much too slowly for the more creative individuals to have had any selective advantage.  This is a puzzle.  A second puzzle is: how can complex memes even exist, given that brains have no mechanism to download them from other brains?  Complex memes do not mandate specific bodily actions, but rules.  We can see the actions, but not the rules, so how do we replicate them?  We replicate them by creativity.  That solves both problems, for replicating memes unchanged is the function for which creativity evolved.  And that is why our species exists.

So if Deutsch is right, the faithful replication of a complex idea requires creative thought, because with each iteration the original meaning must be recreated. Even if the most minute alterations of a message are discouraged, the very ability to divine meanings from external cues is an act of creativity. Throughout nature, females are generally more selective than males in choosing a mate, because of the relatively small number of children they can bear. Males with higher status are usually more successful at procreating, often with multiple partners. In a static society, status comes from faithful adherence to cultural norms, and the male who "got it" best -- the one who always seemed to know the right thing to do in every situation -- had a better chance of spreading his genes. He who could recreate the cultural attributes, norms, and mores from the subtle cues of human interaction -- he who was most creative -- was naturally selected. And so, within a breathtakingly static culture, and although it was not used for innovation, the uniquely human mental skill we call creativity came about.

But static societies have moments of relief. Deutsch points out that we are now in the longest-ever period of Enlightenment, which is simply a time when humans turn their creativity outward. They conjecture -- or, yes, Imagine -- new ideas, and they test those ideas for reach. This is when the only facet of creativity that Lehrer wrote about -- successful innovation -- can be found in abundance.

During the Enlightenment, salient ideas spread, and the others don't. In fact, it's the rejection of bad ideas, not the hitting upon good ones, that matters most; ideas themselves are subject to natural selection. Through repetition of this creative process, by trial and elimination, ideas with real reach sometimes emerge. Like masking tape, for example. Like Pixar movies. Like Nike's "Just Do It" slogan, as Lehrer points out. Like natural selection itself. Like Google.
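For the programmers in the audience, here is a minimal toy sketch of that trial-and-elimination loop -- my own illustration, not anything taken from Deutsch or Lehrer. The "ideas" are just random guesses, the "criticisms" are invented tests, and an idea counts as having reach only if it survives in every context.

    import random

    random.seed(1)

    def conjecture():
        # A blind creative guess: here, just a number between 0 and 10.
        return random.uniform(0, 10)

    def criticisms(idea):
        # Three made-up "contexts" the guess must survive; the thresholds
        # are arbitrary stand-ins for criticism in different situations.
        return [idea > 2, idea < 8, abs(idea - 5) < 2.5]

    surviving = []
    for _ in range(1000):             # repeated trial ...
        idea = conjecture()
        if all(criticisms(idea)):     # ... and elimination
            surviving.append(idea)

    print(len(surviving), "of 1000 guesses survived criticism in every context")

Run it and most guesses perish; the few survivors are the ones that held up everywhere, which is the whole point.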

But ... despite Lehrer's enthusiasm ... I'd put my money against the bacon-infused "old fashioned." A mixed drink with bacon grease, I reckon, may be good enough at the witching hour, but I doubt it has much reach. The fat must be trimmed. Bad ideas must perish. That, after all, is a big part of the creative process.


---------------------------------
Here's a sad story. I wrote the review of Imagine (above) and posted it on Amazon.com. Mine was the first critical review among many laudatory ones, so it went to the top. It was "liked" by more people than all of my thirty-odd other reviews combined. Every day another five or ten strangers went out of their way to compliment me. I was at 600 and climbing fast. I'd even been recommended by some readers, who went on to read my other reviews, for membership in Vine, Amazon's invitation-only club whose members get books in advance, for free. I had hopes.

Then someone who knows everything about Bob Dylan read the book and found something fishy: a quote they did not recognize. They inquired; Lehrer promised to provide the source, couldn't, and finally admitted making it up, and took a hard fall from grace. He resigned his staff position at The New Yorker, and Amazon pulled his book from its shelves, removed the link, scratched all my votes, and ended my glory.

Friday, April 13, 2012

It's true, I like Zombies

I recall, when I was 13, getting creeped out by a werewolf. Rather, it was the vague thought of a werewolf, stimulated by terrible reception on a black-and-white TV. It helped that I was alone in the farmhouse at night -- we had no close neighbors -- and the only artificial light was the small TV set. I was flipping the channel knob, and channel 13 was almost entirely static. Over the hiss I could just make out a faint howl, and through the dancing gray snow -- barely -- the outline of a wolf. That was it. I scared myself with almost nothing.

I was also impressed by the 1999 film The Blair Witch Project, particularly one scene in which the screen was entirely black and the sound was just a person clucking. Again, nearly nothing -- but everyone was on the edge of their seats.

On a shallow beach in Belize a few days ago, I was able to walk out into the ocean about 1,000 feet, and then I swam out for another five minutes. I may have been half a mile from shore; no one knew I was there, and so again, there was nothing. This time the thought of sharks really freaked me out -- just disappearing, like that! I went back to shore right about then.

So I like to think I have a fairly nuanced sense of fear, and maybe for that reason I dismissed zombies long, long ago as too stupid, too ugly, too slow and clumsy to be taken seriously. The base appeal of zombies, I thought, was that they were dead and gross, and that didn't interest me. I'm more afraid of a razor blade. Zombies were way too obvious.

Background, for the few who don't know: zombies are humans who have fallen victim to a strange disease which first kills them, then reanimates them in whatever state of decay they happen to be in. They don't seem to rot much further from that point on, but they lumber on stupidly, endlessly, always hungry for (living) human flesh -- particularly brains. They're generally easy to avoid, and a blow to the head will kill them for good. If they notice a living human they stumble toward it, to bite. The bite will kill, and anyone who dies this way becomes a zombie too. Simple as that.
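Those rules practically beg to be put in a spreadsheet. Here is a back-of-the-envelope sketch -- the three buckets and all of the rates are my own inventions, not anything from the books or shows -- that treats the rules above as a simple outbreak model: living humans, zombies, and zombies put down for good.

    def simulate(days=60, humans=10_000.0, zombies=10.0, destroyed=0.0,
                 bite_rate=0.00005,   # chance a human-zombie encounter ends in a bite, per day (assumed)
                 kill_rate=0.00002):  # chance an encounter ends with a blow to the head, per day (assumed)
        # Each day, bites turn humans into zombies, and head blows retire zombies for good.
        for day in range(days):
            bitten = bite_rate * humans * zombies
            put_down = kill_rate * humans * zombies
            humans -= bitten
            zombies += bitten - put_down
            destroyed += put_down
            if day % 10 == 0:
                print(f"day {day:2d}: humans {humans:9.0f}   zombies {zombies:9.0f}")
        return humans, zombies, destroyed

    simulate()

With these invented numbers the zombies overrun the survivors within a few weeks; make kill_rate larger than bite_rate and the outbreak fizzles instead. Which, come to think of it, is the plot of most of these stories.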

With the help of my sons, who turned me on to the graphic novel The Walking Dead, I've come to realize that zombies are actually a brilliant invention. The premise is apocalyptic: an epidemic that kills nearly everyone, with only a random few immune -- like smallpox, the plague, typhoid, or Ebola. Despite their obvious silliness, you'd be hard-pressed to find a monster requiring so little suspension of disbelief.

I have compared the zombie situation to what would happen if you took a random group of U.S. citizens and parachuted them into the Brazilian jungle. New predators, new challenges, total uncertainty; then tribes of survivors would form, and some of those tribes would become threats as well. Except that in the zombie story it's not an unfamiliar jungle -- it's the ruins of their own neighborhoods. And instead of crocodiles, the threat comes from familiar things: neighbors and loved ones. The better analogy would be a total failure of the electrical grid, which would cause a collapse of government, businesses, and services. Neighbors prey on neighbors while disease (the zombie part) spreads wildly. How interesting. How terrifying: modern people, thrown back in time.

Zombies reintroduce problems from the past: lack of water, food, shelter, and medicine. Survivors become nomads, and then coalesce into tribes, with governments, rules, and specialization of labor.

There's another take on zombies which is pretty interesting. Survivors are often confronted with loved ones who have turned. They still feel love for a wife, husband, or child, but nothing is left except the shell of the person. These bittersweet confrontations are reminiscent of a friendship or love affair gone bad: the person looks like the friend or lover, but they are simply no longer interested. The flame, on their side, has died. Ouch.

You see, just a little twist and the zombie story is a familiar one.
----------------
 
There's an ant, Camponotus leonardi, that is susceptible to a parasitic fungus, Ophiocordyceps unilateralis, which gets into its nervous system and causes the poor creature to climb up into the vegetation, clamp on, and die. A fungal stalk then sprouts from its head, and its spores infect other ants. Zombie ants. But an even better zombie is created by trematodes, or flukes. Here is a slightly condensed quote about the fluke, from a Scientific American blog:
The parasitic fluke Dicrocoelium dendriticum lives in the livers of sheep, but its intermediate host is an ant. A snail accidentally eats the fluke's eggs, and the parasite hatches and develops in its gonads. The fluke is excreted in the snail's slime, which is eaten by an ant. Once infected, the ant continues about its business by day, but as the sun goes down, the parasite takes over. Every night, the zombie ant will leave its colony behind, search for a blade of grass, climb to the top, bite down, and wait until sunrise. Night after night, the ant will dutifully wait atop its blade until it is accidentally eaten by a grazing sheep, thus completing its enslaver's life cycle.

Zombies still don't scare me as much as static, or clucking in the dark, or the thought of a shark far from shore, but when it comes to monsters, they simply take the cake. They require just a small stretch of the biological imagination, and it's social science from there.