Sunday, December 28, 2014

Mental Math

There's a type of pain a brain will suffer only from mathematics, and I had a splitting headache of that kind in high school once as I struggled with some concept I couldn't quite master.  I was good with numbers at that time, in the most advanced classes and really enjoying it -- but when I approached the teacher for help I was summarily turned away; I don't know why. Maybe the guy was having a bad day. But I needed help and wasn't getting it, so I finished that class as best I could and didn't take another, for a long long time. And that was too bad -- because although math does hurt, it's really really cool.

Recently I took a course called Mental Math out of Harvey Mudd, with Professor Arthur Benjamin through The Great Courses, an online business that is remarkably solid.  It was the tenth course I've taken through TGC and one of the best.  As it turns out, these skills are so useful I wish I had learned them long ago.  Benjamin starts most of his 12 lectures by demonstrating, to a live audience, some pretty impressive feats.  And then he breaks them down into digestible parts.

Some of the lessons were more or less common sense.  Like breaking down 19*32 into (20*32) - 32.  That's 608.  Or that multiplying a single digit by 9 gives a product whose first digit is one less than the original digit, and whose two digits add to 9.   9*8=72     9*3=27 and so on.
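Both tricks are easy to check by brute force.  Here's a quick Python sketch (the function names are mine, not the course's):

```python
def round_and_adjust(a, b):
    """Multiply by rounding a up to the nearest ten and subtracting the
    overshoot: 19*32 becomes (20*32) - (1*32)."""
    r = 10 * ((a + 9) // 10)        # 19 -> 20
    return r * b - (r - a) * b

def times_nine(d):
    """9 times a single digit d: (d-1), followed by the complement of (d-1) to 9."""
    return 10 * (d - 1) + (9 - (d - 1))

print(round_and_adjust(19, 32))     # 608
print(times_nine(8))                # 72
```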

Some lessons were approximate.  If you want to check the order of magnitude of a multiplication answer, add the number of digits in the two factors.  Your product will have that many digits or one fewer.  Which?  If the product of the two leading digits is 10 or more, it'll be the full sum of the digit counts.  If it's four or less, definitely the sum minus one.  (In between, it could go either way.)  Example: 6475*480.  Since 6*4 >= 10, the product has the full 7 digits.  If we multiply 1234 * 298, since 1*2 <= 4, the product will have 6 digits.  Check it, it's true.
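The rule drops straight into code; when the leading-digit product lands between 5 and 9 the count is ambiguous, so this sketch (my naming) returns both possibilities in that case:

```python
def digit_count_estimate(a, b):
    """Estimate the number of digits in a*b from the digit counts and the
    leading digits.  Returns an int when decisive, else a (low, high) pair."""
    total = len(str(a)) + len(str(b))
    lead = int(str(a)[0]) * int(str(b)[0])
    if lead >= 10:
        return total            # full digit count
    if lead <= 4:
        return total - 1        # definitely one fewer
    return (total - 1, total)   # 5..9: could be either

print(digit_count_estimate(6475, 480))   # 7
print(digit_count_estimate(1234, 298))   # 6
```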

Multiplying by 11 is freaky easy.  11*54=594 .. that 5 and 4 look familiar?  The 9 is the sum of them.  11*32=352.  11*18=198.  That is, 11*AB=A  A+B  B and it works for large numbers too!  11 * ABCDE is A   A+B   B+C   C+D   D+E  and E -- 11*2345= 25,795.  It only gets a little tricky when adjacent digits add to more than 9, like 11*789, in which case it's 7 7+8  8+9   9  or 7  15  17  9.   Carry the 1's right to left and you easily see 8,679.  That's pretty much it for 11zies.
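Here's the whole 11s trick as a sketch: write the digits, put the neighbor sums between them, then resolve carries right to left:

```python
def times_11(n):
    """n*11 the mental-math way: first digit, neighbor sums in between,
    last digit, then carry right to left."""
    d = [int(c) for c in str(n)]
    sums = [d[0]] + [d[i] + d[i + 1] for i in range(len(d) - 1)] + [d[-1]]
    out, carry = [], 0
    for s in reversed(sums):
        s += carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return int(''.join(str(x) for x in reversed(out)))

print(times_11(789))    # 8679
print(times_11(2345))   # 25795
```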

It turns out the way I learned multiplication is the most drawn-out way to do it.  Here's another way, the Criss Cross method: take a small one first: 21*47.  Right to left it's 1*7 = SEVEN.  (1*4)+(2*7)=18, that's EIGHT, carry a 1.  This is followed by 4*2 [plus the carried 1] is NINE, so the answer is 987.  Here's about as bad as it gets, with more carrying:  37*62.  7*2=14, FOUR carry the 1.  [ (7*6=42)+(3*2=6)]=48, plus the carried 1 =49 that's NINE carry the 4. Finally, 3*6=18 plus the carried 4= 22 ... so the answer is 2294.  In other words AB*CD  is  A*C, [(B*C)+(A*D)], then B*D, with a little bit of carrying.  For bigger numbers this works, too, but you'll probably need paper ... I'll show you ABCD*EFGH.  Start with the ones: D*H, then the next product digit moving left comes from (C*H)+(D*G).  Then BH+CG+DF.  Then AH+BG+CF+DE, then AG+BF+CE, then AF+BE, then AE.  Yes it's a chore but all you write down is the answer.
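The criss-cross pattern -- each answer digit is the sum of all digit products whose positions add up to the same place, plus any carry -- can be written once for numbers of any size:

```python
def criss_cross(a, b):
    """Criss-cross multiplication.  The k-th result digit (ones place first)
    is the carry plus the sum of x[i]*y[k-i] over valid digit positions."""
    x = [int(c) for c in reversed(str(a))]   # digits, ones place first
    y = [int(c) for c in reversed(str(b))]
    out, carry = [], 0
    for k in range(len(x) + len(y) - 1):
        s = carry + sum(x[i] * y[k - i]
                        for i in range(len(x)) if 0 <= k - i < len(y))
        out.append(s % 10)
        carry = s // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return int(''.join(map(str, reversed(out))))

print(criss_cross(21, 47))   # 987
print(criss_cross(37, 62))   # 2294
```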

Here's another way when the two digit numbers are anywhere near close together.  Let's start easy: 43*42.  Look for a nice tens number nearby (40).  43 is +3 from 40 and 42 is +2 away.  Take 43 and add the +2 (from the 42-40), OR take 42 and add the +3.  Either way you get 45.  Now 45*40 is easy to figure in your head ... as easy as 45*4, it's 1800.  Then just add the product of the two adjusters: 2*3.  So the answer is 1806.  The same thing works when one or both numbers are below the target tenzie.  Try 28*34.  It's just 30*32=960 ... plus (-2*4) ... 952.  See, I added 4 to 28 or subtracted 2 from 34 to get the 32 to multiply by 30.  If both numbers are below the target, you add their product of course, because a negative times a negative is a positive.  Example: 38*36 = (34*40)+(-2*-4) = 1360+8=1368.
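This one isn't approximate at all -- it's the identity (base+da)*(base+db) = (base+da+db)*base + da*db, which a short sketch makes plain:

```python
def close_together(a, b, base):
    """Multiply two numbers near a round base: slide one offset onto the
    other number, multiply by the base, then add the product of the offsets.
    Exact, since (base+da)*(base+db) = (base+da+db)*base + da*db."""
    da, db = a - base, b - base
    return (base + da + db) * base + da * db

print(close_together(43, 42, 40))   # 1806
print(close_together(28, 34, 30))   # 952
print(close_together(38, 36, 40))   # 1368
```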

If you're multiplying two two-digit numbers that start with the same digit, and the last digits add to 10, there's a super easy way.  67*63: take the 6 and multiply by (6+1), then concatenate the product of 7 and 3.  That's (6*7),(7*3) ... 4221.
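A sketch of the same-tens trick (my naming); the *100 handles the two-digit slot the ones product occupies:

```python
def same_tens_trick(a, b):
    """For two-digit numbers with the same tens digit whose ones digits sum
    to 10: tens*(tens+1), then append the product of the ones digits."""
    t, ua, ub = a // 10, a % 10, b % 10
    assert t == b // 10 and ua + ub == 10
    return 100 * t * (t + 1) + ua * ub

print(same_tens_trick(67, 63))   # 4221
```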

To multiply any two numbers between 10 and 20, you can do it this way.  Say 17*14.  Take 17+4=21, times 10 (210) then add 7*4 ... it's 238.  It works either way of course -- you could go 14*17, 14+7=21, times 10 and add the same 4*7.
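This is another exact identity, (a + b - 10)*10 + (a-10)*(b-10) = a*b, framed for the teens where the second term is a one-digit product:

```python
def teens_multiply(a, b):
    """a*b for numbers between 10 and 20: add b's ones digit to a, times
    ten, then add the product of the two ones digits."""
    return (a + b - 10) * 10 + (a - 10) * (b - 10)

print(teens_multiply(17, 14))   # 238
```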

When it comes to division, I learned the old long division, which is just a waste of time when the divisor is small because in that case short division on paper gives the same result and is so much quicker.  Why write all those numbers?  If you want to get the order of magnitude of your quotient correct, the length of the answer is the difference in length between the number and the divisor, or sometimes that plus 1 digit.  To determine whether to add the 1, just compare the two leading digits.  If the leading digit of the number you're dividing is smaller, it'll be just the difference in digit counts.  E.g., 6543/739: because the 6 is smaller than the 7, it will yield a 1-digit number (8.85...); but the result of 5439/284 will be two digits (19.15...), because 5 is larger than 2.
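The leading-digit comparison becomes exact if you compare the dividend's whole leading block against the divisor -- my refinement, which also covers the tie case the rule above leaves open:

```python
def quotient_digits(n, d):
    """Number of digits in n // d, for n >= d: the difference in digit
    counts, plus one more when the dividend's leading block (same length
    as the divisor) is at least the divisor."""
    diff = len(str(n)) - len(str(d))
    head = int(str(n)[:len(str(d))])
    return diff + (1 if head >= d else 0)

print(quotient_digits(6543, 739))   # 1
print(quotient_digits(5439, 284))   # 2
```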

If you want to know if something is divisible by 2 just look at the ones-place digit, everyone knows that.  6748 is, 7643 isn't.   But it'll be divisible by 3 if the sum of all the digits is a multiple of 3.  Try 3471, that adds to 15, so 3471 divides neatly by 3.  A number is divisible by 4 if the last two digits are divisible by 4.  By 5 if the last digit is 5 or 0, of course.  By 6 if it's divisible by both 2 and 3 (see above).  7 is the most complicated: Take off the last digit, double it, and subtract it from the rest.  E.g., 112?  2 doubled is 4 and 11-4=7.  Since that's a multiple of 7, 112 is too.  It's divisible by 8 if the last three digits are divisible by 8.  Nine is like three; sum the digits and if that is a multiple of 9, you're good.
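All the shortcuts for 2 through 9, collected into one sketch:

```python
def divisible_by(n, d):
    """The divisibility shortcuts above, for d from 2 through 9."""
    s = str(n)
    digit_sum = sum(int(c) for c in s)
    if d == 2: return int(s[-1]) % 2 == 0
    if d == 3: return digit_sum % 3 == 0
    if d == 4: return int(s[-2:]) % 4 == 0
    if d == 5: return s[-1] in '05'
    if d == 6: return divisible_by(n, 2) and divisible_by(n, 3)
    if d == 7:
        while n >= 70:                     # chop, double, subtract, repeat
            n = abs(n // 10 - 2 * (n % 10))
        return n % 7 == 0
    if d == 8: return int(s[-3:]) % 8 == 0
    if d == 9: return digit_sum % 9 == 0

print(divisible_by(3471, 3))   # True
print(divisible_by(112, 7))    # True
```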

Another way to do 7 is this.  By example, take 1234.  Add or subtract a multiple of 7 to get a 0 in the ones place: 1234-14=1220, then kill the 0: 122.  Do it again: 122+28=150, and kill the 0.  15 is not divisible by 7, so neither is 1234. This trick works for any divisor ending in 1, 3, 7, or 9.  Is 1472 divisible by 23?  Well, 1472-92=1380 ... 138+92 =230.  Kill the 0 and you get 23.  So yes it is.
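A sketch of the kill-the-zero trick (my naming).  It works because such a divisor is coprime to 10, so dropping a trailing zero preserves divisibility:

```python
def kill_the_zero(n, d):
    """Divisibility test for a divisor d ending in 1, 3, 7, or 9: add a
    multiple of d that zeroes the ones place, drop the zero, repeat."""
    assert d % 10 in (1, 3, 7, 9)
    while n > d:
        k = next(k for k in range(10) if (n + k * d) % 10 == 0)
        n = (n + k * d) // 10
    return n == d or n == 0

print(kill_the_zero(1472, 23))   # True
print(kill_the_zero(1234, 7))    # False
```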

Besides being fun, there's some immediately practical information in this course.  To calculate change, just take the complement that adds to 9 for every digit except the last one, which adds to 10.  So pay $10 for something that costs $1.32?  $8.68 change.  Costs $6.98?  It's $3.02.  Just add another $10 if you paid with a $20 bill.  I can now beat just about any checkout clerk.
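The digit-complement trick, sketched for any price under $10 (in cents):

```python
def change_from_ten(cost_cents):
    """Change from a $10 bill: complement each digit to 9, except the last
    digit, which complements to 10."""
    s = f"{cost_cents:03d}"                       # $1.32 -> '132'
    parts = [9 - int(c) for c in s[:-1]] + [10 - int(s[-1])]
    return sum(p * 10 ** (len(parts) - 1 - i) for i, p in enumerate(parts))

print(change_from_ten(132))   # 868  ($8.68)
print(change_from_ten(698))   # 302  ($3.02)
```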

Here's another really freaky cool trick.  Let's say you multiplied two very large numbers: 1,246*35,791=44,595,586 .  To check your work add 1, 2, 4, 6= 13, then add that 1 and 3, to get 4.   Then add the 3, 5 , 7, 9, 1 that's 25, and 2+5=7.  Then since you're multiplying, multiply the 7*4=28, the 2+8=10 and the 1+0 =1.  That's a lot of collapsing, but it's worth it.  Compare this to the sum of 4, 4, 5, 9, 5, 5, 8, 6= 46 and 4+6=10 and 1+0 =1.  If the numbers match the answer is most probably right!  If the numbers don't match, it's certainly wrong, like this: Does 27*43= 1151?  Well, 2+7=9 and 4+3=7; 9*7=63 and 6+3=9 ... but the digits of 1151 add to 8.  So it's WRONG, for sure.  This trick works for subtraction just the same.
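All that collapsing is just taking digital roots -- casting out nines -- and it fits in a few lines:

```python
def digital_root(n):
    """Collapse a number by summing its digits until one digit remains."""
    while n > 9:
        n = sum(int(c) for c in str(n))
    return n

def nines_check(a, b, claimed):
    """Casting out nines: a mismatch proves the product wrong; a match
    means it's very probably right."""
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(claimed)

print(nines_check(1246, 35791, 44595586))   # True
print(nines_check(27, 43, 1151))            # False
```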

Squaring two-digit numbers that end in 5 is easy because the product always ends in 25.  Take 85 squared.  8*9=72, so it's 7225.   X5 squared is [X*(X+1)] with 25 tacked on.   Voilà. Works for multi-digit numbers too.
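This one is exact algebra too: (10h+5)^2 = 100*h*(h+1) + 25, for a head h of any size:

```python
def square_ending_in_5(n):
    """Square a number ending in 5: head*(head+1) with 25 tacked on."""
    assert n % 10 == 5
    head = n // 10
    return 100 * head * (head + 1) + 25

print(square_ending_in_5(85))    # 7225
print(square_ending_in_5(115))   # 13225
```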

Vedic Division is really pretty extraordinary.  It works best when dividing by a 2-digit number that ends in 9 or another high number.  Say, 47869/49.  You change the 49 to 50 and just divide by 5, working left to right. 47/5 is 9, remainder 2. But because we fudged a bit to get 5, instead of dividing 28 (the carried 2 and the next digit, 8) by 5 you first add the 9 from the quotient to 28, so the next is 37/5=7R2, then (26+7=33)/5=6R3, and finally the carried 3 with the last digit 9, plus the 6, gives 45 ... so the answer is 976 R45.  If your divisor ends in 8 then double the previous digit in the quotient; if 7, triple it.  If it ends in 1, subtract the previous quotient digit; if in 2, subtract it twice.  If that's not clear, buy the lectures.  They're worth it, I assure you.
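Here's the ending-in-9 case as a sketch -- work left to right, folding each quotient digit into the next step; the other endings need the adjustments described above and aren't covered here:

```python
def vedic_divide(n, d):
    """Divide n by a divisor d ending in 9, Vedic style: divide by
    (d+1)//10 left to right, adding the previous quotient digit at each
    step.  Returns (quotient, remainder)."""
    assert d % 10 == 9
    m = (d + 1) // 10
    digits = [int(c) for c in str(n)]
    q = r = prev = 0
    for g in digits[:-1]:
        prev, r = divmod(r * 10 + g + prev, m)
        q = q * 10 + prev
    rem = r * 10 + digits[-1] + prev
    extra, rem = divmod(rem, d)      # tidy up if a step overflowed
    return q + extra, rem

print(vedic_divide(47869, 49))   # (976, 45)
```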

One of the most fun lessons was figuring out the day of the week for any date.  You have to memorize a few things, like add 1 for the 1900s, 3 for the 1800s, 5 for anything in the 1700s and 0 for the 1600s or 2000s.  And you have to memorize a number among 0-6 for each month.  Jan 6, Feb 2, Mar 2, Apr 5, May 0, Jun 3, Jul 5, Aug 1, Sep 4, Oct 6, Nov 2, Dec 4.  Pure memorization, though there are tricks you can use.  Then you do it this way.  Let's say Feb 12 1809 -- Charles Darwin's birthday.  3 for the 1800s plus 2 because there were two leap years by '09 (9/4, throw out the remainder), then add the 9 itself ... 14.  For Feb add 2, that's 16.  Then 12 for the day, we're at 28.  Divide by 7, the R is what we're looking for.  28/7=4R0.  You start at Sunday with 0, Monday as 1, through Saturday (6).  Darwin was born on a Sunday.  This gets easier when you start dropping any multiple of 7 at any time, and dropping any multiple of 28 years between 1901 and 2099.  Nov. 6 1975? It's 75/4=18 leap years and 75-56 (28*2) is 19.  19+18 is 37, drop the 7s, that leaves 2.  Add another 2 for Nov and 6 for the 6th, and 1 for the 1900s. That's 11.  Drop a 7 and it's 4, that's Thursday.  If you're early in the millennium, it's real easy.  What day is July 4 in 2015?  15 plus 3 leap years plus 5 for July plus 4 for the day is 27; 27/7=3R6, Saturday.

There are just a couple of twists.  On leap years subtract 1 from the months Jan and Feb, so they are 5 and 1 respectively.  And this astronomical correction I was not even aware of: any year ending in 00 does not leap, unless it is divisible by 400 -- so 2000 leapt but 1900 did not.
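The whole recipe -- century code, year, leap years, month code, day, mod 7, with the leap-year twist for January and February -- fits in a few lines.  This sketch assumes Gregorian dates from 1600 through 2099:

```python
def day_of_week(month, day, year):
    """0=Sunday ... 6=Saturday, using the codes from the lecture."""
    century = {16: 0, 17: 5, 18: 3, 19: 1, 20: 0}[year // 100]
    month_code = [6, 2, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4][month - 1]
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    y = year % 100
    code = century + y + y // 4 + month_code + day
    if leap and month <= 2:
        code -= 1                  # Jan and Feb shift back in leap years
    return code % 7

print(day_of_week(2, 12, 1809))   # 0 -- Darwin was born on a Sunday
print(day_of_week(11, 6, 1975))   # 4 -- a Thursday
print(day_of_week(7, 4, 2015))    # 6 -- a Saturday
```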

Many of these tricks, if you actually do them in your head, require holding numbers while you work on others.  That gets confusing.  The Major System helps, because it converts numbers to letters.  You basically read numerals like the alphabet instead.  1=T/D/Th, 2=N, 3=M, 4=R, 5=L, 6=G/sh/ch, 7=K/hard G, 8=F/V, 9=P/B, 0=S/Z   Words are much easier to remember than strings of digits, and when you have to keep both in your head, they are less likely to get confused.  I've been using the Major System for a while, so I was gratified when Benjamin recommended it.
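Here's a letter-level sketch of the decoding direction.  The real Major System is phonetic -- sounds, not letters -- so treat this mapping as a rough approximation; I've arbitrarily assigned 'g' to 7 and 'j' to 6:

```python
# Simplified letter -> digit table; vowels and unlisted letters carry no digit.
MAJOR = {'t': 1, 'd': 1, 'n': 2, 'm': 3, 'r': 4, 'l': 5,
         'j': 6, 'k': 7, 'g': 7, 'f': 8, 'v': 8,
         'p': 9, 'b': 9, 's': 0, 'z': 0}

def encode(word):
    """Read off the Major System digits hidden in a word."""
    return int(''.join(str(MAJOR[c]) for c in word.lower() if c in MAJOR))

print(encode("turtle"))   # 1415
print(encode("moon"))     # 32
```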

I've combed through much of my notes for this little summary -- but there's more, and Professor Benjamin will explain it much better.  He took me to a few places, near the end, which almost started up that old headache again.  But it was worth it.  I recommend the video version of the course, not the audio, because there is quite a lot of visualization.   If you buy it, wait for a sale, and you'll need a notebook.  Benjamin is earnest, enthusiastic, well paced, and clear. He gives excellent examples, explains why these things work, and demonstrates almost inhuman mastery of these skills, sometimes thinking aloud so you can see his process.  It's so much fun.  Here's a link:

Friday, July 4, 2014

Memes are the new Genes

After Darwin struck upon evolution he kept it pretty much to himself for nearly 20 years; he knew he was on to something big, but also that it would be vehemently opposed. If Wallace hadn’t been about to spill the beans with his similar insight, Charles may never have shared his depth of thought and impressive wealth of supporting evidence.  And although genetic evolution is today about as close to fact as you can get, in the United States just a little over half accept it as probably or certainly true. According to a Gallup poll two thirds think humans were probably or definitely created in their present form; some groups (e.g. Republicans) seem to be moving more firmly to that view.

Why would they do that?

Well maybe it’s just not pleasant to think of one’s self coming from the muck, as resulting entirely from a series of errors, as being infused with prehistoric impulses, or as a descendant of an ape, shrew, worm, and sponge and kin to everything alive -- or even not alive -- today.  It’s particularly hard to accept if it makes one question the happier more familiar explanations of human existence.  Evidence for evolution is easy to reject when there are still magical stories to retell.

Given the resistance to something so solidly shown, it’s not surprising to me that memetics has had a rough go too, even though it also has obvious merit and deep implications.  Memetics, you might say, is the new genetics.  It’s like we are in the mid 1800s again, resisting this germ of a huge idea.  

There have been more recent books on the topic, like Tim Tyler’s Memes: The Science of Cultural Evolution (2011) and Brodie's The Virus of the Mind (2011), but I think it will be hard to beat Susan Blackmore’s The Meme Machine (1999) for a solid primer on this mind-bending train of thought.

The word "meme" has even hit popular culture;  I just Binged “Internet Meme” and found 825,000 hits.  But an Internet meme is usually just a popular image or video -- relevant to memetics, but in a trivial way.  Memes are more than that: these are ideas that duplicate themselves, jumping from brain to brain.  When you retell a good joke you've heard, have you just used the joke, or has the joke used you?

But it goes deeper; let me try to intrigue you with something more.

Consider the basics of genetic evolution: individuals are different, and the more successful ones tend to pass on the contributing genes.  Creatures that are more clever than their cohorts are often better survivors, so cleverness is rewarded and brains become larger and smarter. Memetics suggests that at some point these brains become receptive to ideas that have nothing to do with the host's survival or procreation. Humans, then, are the result of two evolutionary forces: genes, which groom our bodies and brains, and memes, which infect our minds.  Because ideas can influence behavior, and behavior can affect genes, while at the same time genes can affect the ability to learn -- the two forces influence one another.  But they often pull in different directions.

Meme is a shortened version of 'mimeme', which means 'imitated thing' in ancient Greek, but Richard Dawkins in his 1976 The Selfish Gene wanted something shorter and a little more like 'gene'.  To be fair, Hamilton in '63 and Haldane in '55 contributed to the idea he made popular then.

Memes aren’t just any idea you might have, any emotion or feeling or creative impulse.  They are the thoughts that can transfer from person to person by imitation. Although they are not perfectly analogous to genes, they have some important things in common.  They replicate. They change.  And they matter.  That's enough for natural selection to take hold -- which means they move and change on their own. They are like viruses, which depend on living cells but move between them freely. Blackmore puts it this way:  memes are unleashed when brains become sophisticated enough to 1) transform an idea from one point of view to another, 2) decide what to imitate, and 3) produce matching behavior.

It's tempting to try to compare memes to genes directly, but this has been one stumbling block in their acceptance. Stephen Jay Gould called memetics a “meaningless metaphor” and others have been harsher still.  But memes aren't like genes.  Why would they have to be?

Blackmore uses the analogy of a recipe for a meal to make this point.  If it is written down and photocopied it is very much like genetic code – each person might make a little change, by taste or error, but with every generation the code is reset.  If it is passed on by observation or verbally, however, or jotted down on the back of an envelope, then it is not like genes because any error or alteration will persist.  In that sense it’s Lamarckian [Lamarck believed in heritability of acquired characteristics].  Either way, the recipe is a meme.

Is the whole recipe the meme or just each ingredient?  A similar question is central to genetic evolution (is it genes, organisms, species, or groups that are selected?).  But Blackmore answers “any or all of the above.”  The key to memes is imitation, which she carefully distinguishes from contagion, social learning, and imagination. She contrasts the prevailing view of evolutionary psychologists with her own, and at times she even disagrees (refreshingly) with Pinker, Dawkins, and Dennett -- all of whom are my main guys, by the way.  There are many references and citations.

Many who have supported memetics assume that memes mainly inform genetic change – in other words, that while ideas evolve their function is to affect genes.  So we had the idea to leave the forest for savanna, and genes were then groomed for bipedalism.   Memes may affect us, but through genetic selection.

But this can't be right.  Memes don’t always agree with genes. Sure, in a culture that is insular, a tribe that is small, or when families are close-knit, the flow of information and directives is mainly vertical, generation after generation.   That’s the way the genes flow too, so in those situations there is no conflict -- ideas that increase survival and procreation will propagate and thrive if the tribe does too.  On the other hand, for example, if an Amish household adopts the idea of celibacy, both that family line and the idea are not likely to fare well.

This may be a reason to distrust neighboring tribes, to create and reject the "other" -- they may have ideas which would be invasive to our own.

Because when information flows horizontally memes travel in a cross-current to genes.  Their interest (which is ultimately self-propagation) may be quite different. Though some ideas are aligned with genetic survival, some are irrelevant to it, and some fly in the face of genetic advantage -- and this is where it becomes most interesting and useful to understand.  Consider family planning, contraception, abortion, homosexuality ... genes would "object" vehemently to these notions, yet memetics would predict that they could thrive.

Happily, many of these ideas are testable.  Is internet connectivity correlated with use of contraception?  Is abortion a greater taboo in authoritarian societies? Is a rigid society more socially conservative? Are gay rights more common where the press is unfettered? At the time of writing the answers were not known.

If memes duplicate selectively, what makes one more likely to be repeated or imitated than another?  This has been carefully studied by advertisers and politicians.  Currency, novelty, alignment with preexisting ideas, repetition, danger, contrast, association, utility, sequence, context, timing.  All of these things affect how convincing or coercive a message will be.

One of the hard facts about genes is that they aren't always nice to their host.  It's replication of the genes, not the creature, that drives evolution, and sometimes interests clash between genes and their organism. There are lots of examples of how even humans are full of quirky errors -- the way the eye is wired, for example, leaving a blind spot.  Genes aren't aligned with groups, either: a 50:50 sex ratio is a good example of this, as I've explained in a previous post.  Likewise, memes aren’t always nice to genes and they aren’t always particularly nice to their humans. Just as a virus jumps from host to host, memes do the same, with similar disregard for the health of their host beyond the meme's ability to spread itself more widely.

Darrell Ray wrote The God Virus (2009), which makes a fairly close and very interesting analogy between the spread of religions (although he didn't use the term meme) and the spread of viruses.

At the heart of the argument Blackmore presents, is imitation. 
"Once imitation arose three new processes could begin.  First, memetic selection (that is the survival of some memes at the expense of others). Second, genetic selection for the ability to imitate the new memes (the best imitators of the best imitations have higher reproductive success).  Third, genetic selection for mating with the best imitators." (116)
There is a chapter on how memes may have been the root cause of the jump in brain size about 2.5 million years ago, about when toolmaking began.  Her speculation, to summarize, is that this was the dawn of true imitation and genes that assisted imitation were quickly groomed by sexual selection.  David Deutsch said something similar in The Beginning of Infinity.  Brain size, Blackmore speculated, may be like peacock feathers: peahens took a liking to large feathers and feathers got larger and larger and larger in a crazy feedback loop called runaway evolution.  "We need not take it for granted," she said, "that big brains, intelligence and all that goes with them are necessarily a good thing for the genes." (p 120)

Another leap, about 100,000 years ago -- language -- also may have been directed by memes. From the meme's perspective "a silent person is an idle copy machine waiting to be exploited." (p 84).  A scandal, horrifying news, useful information, anything that taps into sexual needs or increments social status are memes just pressing for expression.   Those individuals who could better express themselves would have the advantage and so language itself emerged, she argued -- in the service of imitation.

She takes on altruism (Chapters 12 and 13), cults (Chapter 14), religion (15), and the impact of the internet (16).  In the clinching chapter Blackmore speculates that self awareness itself (i.e., consciousness) may have been born of memes, like so: a sense of self creates a sense of ownership, including ownership of ideas, and so a proponent of those ideas -- giving a meme an advantage in grooming a sense of self. "The self," she wrote, "is a great protector of memes."

Could we be the battlefield for competing memes, and their soldiers as well?  Do we take ideas in, convince ourselves that they are ours, and then protect them as possessions so we actively promote them to others? But by this new way of thinking, ideas seem a bit like parasites.

There's little wonder, then (as I've noticed), that the science of memetics is hardly popular, and even the subject of derision.  I mean, half the adults in the United States don't accept genetic evolution -- are we really ready to consider that the news, the gossip, lessons from mother, our hobbies, knowledge, trades, warnings, friendly advice, careers, our avocations, preoccupations and all of the rest are not only not entirely under our control ... but might be taking us for a ride?

I don't think so. 

Despite the small foothold, evident in the graph to the right (the two scales are wildly different of course), memetics as a science is probably not going anywhere fast.  But Blackmore suggests there might be some personal advantage to considering it anyway.  If you ponder your own thoughts -- and she recommends meditating -- to trace their origins, it may help put things in a better, healthier perspective. Modern life may be stressful, she suggests, not because we want to take advantage of all the wonderful new opportunities and ideas, but because the ideas want to take advantage of us.

Certainly food for thought.
The Meme Machine, Susan Blackmore, 1999.
264 pp, references, index.

Tuesday, May 20, 2014

Conflict and Decision Making in Organizations

Encountering conflict is rarely fun and people go to great lengths to avoid it.  But with none, there would be no change.  In this essay I will begin with conflict at the very lowest level, Darwinian evolution, and work my way up to complex organizations using the same principles.

Evolution is a simple process in which strife plays a central role.  Richard Dawkins, in his seminal book The Selfish Gene (1976) wrote that all that is needed for evolution to take hold is three things: 

  1. Something matters (has contact with the outside world)
  2. It can replicate
  3. Sometimes variations occur
Look closely. Number 3 is conflict, pure and simple.  It’s a difference of opinion, just in the language of DNA.  Say all egg shells were perfect for a time, then the environment changes and the shells might be a little too thick or thin.  Neither is good for the chick.  Then one bird lays eggs with thicker walls; in essence she’s just disagreed about egg laying.  Which is the better idea?  Nature answers with differential survival rates and off it goes.  Conflict improves everything, all the time.   

We even have a lot of internal conflict, and a whole lot more than we recognize.  I particularly liked Daniel Kahneman's (the father of behavioral economics) treatment of System 1 (knowing) and System 2 (thinking) in Thinking, Fast and Slow (2013) to illustrate how imperfectly we actually go about our days.  We decide quickly, on incomplete information and with sketchy logic, and then we are overconfident in our decisions.  This is the solution evolution has given us because it works … well enough.  The human brain, he wrote, is “a machine designed for jumping to conclusions."
Meditate once; if you’re like me, you’re a mess inside. The lessons I draw from this are all hard ones: 
  • try not to make hasty decisions
  • nobody is perfect (we’re not even very good)
  • don’t be sure
  • practice forgiveness, all around
As if all the internal confusion is not enough, we encounter many more ideas, options, and challenging opinions  when we communicate with others.  All of these compete for traction too.  In one way it’s just more inner turmoil, just amplified.  In human history this jumped once with language, again with writing and the printing press, then with the radio and telephone, and now with the internet -- we are drowning in competing ideas.  Talk about conflict.

But there is another interesting aspect to the sea of information.  We share ideas and then we sometimes alter them.  Some of these ideas are more salient than others.  Some also join together, forming philosophies, ideologies, political or economic systems, paradigms.  Ideas are much like genes, groomed by selection, teaming up for more complex solutions. Ideas satisfy all of Dawkins’ requirements.  Ideas evolve, too.  

When you get multiple people together they all have different, and sometimes conflicting, goals.  There are limited resources, so decisions have to be made. But on the basis of what, by what measure?  It’s tempting sometimes to take the easiest choice.  Or what’s best for self.  But would it be better to pursue the greatest good for all?  Yet how inclusive is “all?” And should the choice be best just now, or in some longer time frame?  Should we go for best-average goodness, or is equity a better goal? Is goodness itself measured in happiness, meaningfulness, or some other unit?

All those are certainly important, and let’s even say that one or more goals can be agreed upon – a mission statement, if you will.  How best to reach those goals?

There will be disagreement and even outright struggle.  But it’s remarkable how many opportunities there are for cooperative, mutually beneficial, relationships; nature, even, is full of them.  It’s probably fair to say that the more complex something is, the more cooperation is required, and that many cooperative relationships are fragile.  They may break down when individuals can cheat the group for private gain -- that’s why we have rules, laws, cultural norms, and peer pressure – to stop mass defection. 

When an institution gets complicated enough it typically specializes and organizes in a layered way.  Let’s take a university for an example – lots of opinions, strong personalities, great complexity, and plenty of conflict.  There are certainly coalitions with competing perspectives, and also a whole lot of cooperation.  There are hierarchies. The basic one goes something like this: Students, faculty, program coordinator, chair, dean, provost, president.  Another one is student aide, office staff, supervisor, program director, vice president, president.  Want a new degree program?: Faculty, college, provost, president, board, perhaps state legislature.  Student grade complaint?: professor, chair, dean, grade appeals committee.  Faculty tenure?  Department personnel committee, chair, dean, provost, president.  There are lots and lots of vertical hierarchies.

In a complex institution important decisions are made in at least two ways: within groups, and between levels of the hierarchy.  

First, within groups. There are meetings, usually in committee because there are just too many people otherwise.  Meetings which are solely to transfer information are irrelevant for decision-making purposes.  Someone gives a presentation, distributes literature, people report what they’ve done ... these may be useful for other reasons, but not for deciding.   Nothing happens that couldn’t be done with a targeted announcement, or a web page.

Question and answer sessions, which may follow presentations, are different.  They reverse the flow of information and allow for two-way exchanges in which ideas may usefully clash.  And sometimes there are brainstorming sessions -- idea-gathering -- but as Jonah Lehrer pointed out in Imagine, brainstorming sessions where “there are no bad ideas” are not all that useful, because there actually are bad ideas.  You have to sort through the ideas.

The best committees have members who represent different  constituencies, will engage with issues coming before the committee, and are able and willing to contribute their perspectives and listen to others’.  Right there are five ways committees can fail.    

For running a committee it’s hard to beat Robert's Rules of Order, a beautifully fair process which ensures that all voices are heard, nothing is done in a rash way, everyone has equal say, and things move on at a fair pace.  More important decisions require a higher level of agreement, there is always an opportunity to reverse or improve a solution, and minority voices are fully heard.  It’s a wonderful, surprisingly simple, system which I learned primarily by reading The Complete Idiot’s Guide to Robert's Rules by Nancy Sylvester (2004). (It’s much more enjoyable than the original source.)

Most committees claim to follow Robert’s Rules of Order but from my experience very few actually do beyond the sequence “a motion=>a second=>a majority vote.”   But even that skips the essential step, discussion – it’s “motion=>second=>discussion=>vote” and in that discussion there may be secondary motions relating to the main motion, and the secondary ones have to get voted on first.   Sound complicated?  Just at first.  Using Sylvester’s ladder analogy, there is a set order of motions that may be made; you can skip steps going up, but can’t skip pending motions coming down.  Higher-numbered motions must always be voted on first.

  1. Main motion is made
  2. A motion may be made to basically kill the motion
  3. A motion may be made to amend the motion
  4. A motion may be made to amend the amendment
  5. A motion may be made to refer it to a committee
  6. A motion may be made to postpone to a certain time
  7. A motion may be made to limit or extend limits of the debate
… and so on.  There are 14 levels in a strict order.  And believe it or not, it does all make sense.
In 25 years serving on committees I’ve never heard a secondary motion, except those I’ve made, and when I do there is general confusion about whether that sort of thing is even allowed.

It’s bad enough when “Roberts” is used to ramrod a vote through a group, but it’s worse when the committee chair misunderstands his/her designated role as facilitator and acts as though the committee is his/her own.   Chairs should really read Robert’s, or have a parliamentarian (a Robert’s Rules expert) at hand, because when a chair lords it over a group a lot of good disagreement is lost, and decisions are therefore ill-informed.  Of course if members know the rules, this can’t happen.  But they generally don’t know them well enough to stop a rogue chair.

Another common failure is when no one moderates discussion, in which case the more assertive or emphatic members become the authoritarians, not only monopolizing the airwaves, but possibly intimidating junior or less vocal members with their forceful opinions.  Roberts describes how every member who wants to speak can have their turn, limits the length and number of times a person can address a single topic, lets new voices jump sequence, and attempts to alternate between opinions for and against an argument.  If it’s just a free-for-all, the group may appear to agree on something, when one or two, representing themselves, have basically done all the talking.

Another form of decision making is consensus, which, contrary to popular opinion, does not mean “apparent unanimity,” a general sense that views are shared.  Reaching consensus is a formal process.  The goal is unanimity, and the route is lengthy discussion with special attention given to dissenting opinions.  The group attempts to improve the decision so that as many perspectives as possible are satisfied.  If unanimity is not possible in the end, the dissenters may agree to “stand aside,” letting the will of the group carry -- that would still be consensus.  It’s a very careful, formal, respectful process, and like Robert’s it’s not well understood.  The only time I saw it carried out in decades, it was followed by a quick motion=>second=>vote on the question “do we have consensus?”  There is no voting with consensus.

The two methods do much the same thing.  In Roberts’ terms consensus would be an extended debate, 100% agreement is required -- a single “No” vote extends the process -- and people have the right to abstain.   Roberts is more efficient, consensus is more inclusive. 

Like it or not, decisions often come down from above; the big picture must be taken into account.  Faculty tell students to write a paper, then grade it in their office.  Deans tell departments whether they can hire.  The provost tells deans what their budgets are, and so on.  So regardless of how hard a committee may have worked to arrive at a decision, regardless of the process or the quality of the decision itself -- the moment it is passed to a higher level, it becomes advice.

Since I’ve shared my opinion on how Robert’s and consensus go wrong, I’ll mention now how the hierarchies sometimes seem to fail.

There are at least three common problems.  The first is when a level simply delegates decision making to the layer beneath it: “here, you decide and I’ll just go with your decision.”  If decisions from below are allowed to simply percolate upward, the bigger picture goes missing and things are likely to spin out of control.  Recently I heard a complaint about a reversed decision: [all these lower layers] agreed, how could [the next layer] possibly disagree with all that went before?  Well, if that’s the way it works, you only need the lowest level, right?  That would be students; let them decide how to run the University.  Whoa, they just banished tuition and fees, eliminated requirements, and gave themselves A’s.  No, it’s the difference in views between layers that is so essential.  Levels in a hierarchy are valuable because, and only if, they can disagree.

Second, reasoned decisions from below may be ignored by a higher level.  When this happens all the committee work is a waste of time and you just have an authoritarian system.  Then you’d better cross your fingers, because it’s actually very difficult to understand all the issues from 1,000 feet above ground level; you see more from up there, but much of it in quite low resolution.

The third problem is when directives from above skip a step going downward or are forced through with no opportunity for pushback.  In other words, micromanagement. Not allowing a layer to reflect -- even briefly -- on the ultimate decision being passed down is unwise because there might actually be good reasons to make some more adjustments.  As before, the layers serve a purpose, here as a quick feedback loop or early warning system. Ideally, decisions go up, step by step, and they come down step by step and at each step there is an opportunity for quality control, feedback, and improvement.  

Skipping levels or forcing down decisions (especially unpopular ones) is likely to damage morale too – as in “neither your reasoning nor your opinions nor you, matter” -- which may have an impact on cooperation, later.

Usually the layers are in place for a good reason, one would hope.  But fiefdoms tend to grow organically, if allowed to, spawning more layers, sometimes with layers of their own.  A director hires two associate directors, each of whom has assistants and staff.  It's a cancerous sort of growth, you might say.  This is when two important words come most into play: accountability and transparency.  Otherwise, you are likely to have indiscriminate growth, inefficiencies, and waste.  And when there are too many layers, things more often get delayed, lost, or perverted.  Too few layers and you have partial blindness.

Like the incestuous little fiefdoms, procedures can grow out of control with inattention, too.  They may be distorted, over time, by adding steps, often with good intentions: Are the right groups consulted, are records kept, are things done in sequence?  Are the steps inclusive, do they incorporate differing views?  Are peripheral interests informed?  Are there checkpoints for abuses or errors?  And sometimes, in all the effort to satisfy these concerns, important questions are lost: Is it efficient?  Does it even move quickly enough to work?  Are there so many steps that things get misplaced along the way?  Fixing procedures does require the 500-foot or 1,000-foot view.

I’ve found it helpful (and sometimes amusing) to diagram complex processes when I can finally figure them out.  Here’s a favorite ... As a chair I often have to hire adjuncts, and it’s twelve steps, with some gaps, before they can post their syllabus on the course management system. 

Things work quite well too, often, that’s for sure.  Here I’ve tried to explain why disagreement is so important to improvement, and a few ways in which, it seems to me, it could sometimes be put to better use.  When I’m too quick to judge or criticize, too harsh or pointed, or even unclear, clouded, hypocritical, naive, self-contradictory; if my reasoning isn't sound or I'm more confident about this than I should be; if I've overlooked something so important that it changes everything ... well, I wouldn't be surprised.  I am, after all, a machine jumping to conclusions.  Take my thoughts for what they’re worth.  Improve them, please, and I’ll be trying to do that, too.

Saturday, May 3, 2014

How Genes Affect Learning

Genetic influences on behavior and on learning are two things that interest me deeply, so I picked up G is for Genes (2014), by a behavioral geneticist from King's College London and a psychologist from the University of York, U.K.  The book addresses this question: how do genes affect learning?

This is not at all about eugenics or I wouldn't have read it.  The authors are interested in understanding exactly how genes affect learning so that we can narrow the learning gap more skillfully.  I’ll try to pull out some of the most surprising, interesting, and useful bits, and I’ll draw on some courses I’ve taken recently too – Philosophy of Mind by Patrick Grim of the State University of New York and Understanding the Mysteries of Human Behavior by Mark Leary of Duke.

Height is heritable of course.  If your parents are both tall you're likely to be tall too.  In the U.S., for example, for white males, it's about 80% that way; the rest is due either to the shared environment (like a particular family or school) or to a non-shared environment (experiences unique to the individual).  In the case of height, being raised in an affluent home is a shared experience and being stricken by a childhood illness is non-shared.  Both might affect your height.

But in some parts of the world height is largely environmental – genes account for no more than half.   Why?  Not because the genes matter less; it’s that the environment matters more – there’s more variation.  In Somalia some people don’t have enough to eat, and some have plenty.

The same thing is true about education, of course.  Look at these findings.  Eighty percent of reading differences in Australian children after kindergarten are genetic.  In Colorado it is just 66%, and in Scandinavia 30% of reading skill is genetic.  What's going on?  Australian children at the age of 5 go to school from 9 a.m. to 3 p.m.  Colorado requires just 3-4 hours a day, and in Sweden and Norway compulsory schooling begins at 7.  When some go to preschool and some don’t, when some kids have parents who read to them and some don’t -- the environment really matters.  By the age of 10, reading levels are 80% genetic in all three countries.

So is there a “reading gene?”  No; like most things, many, many genes interact to affect many, many things.  Genes are generalists, the authors say; the environment is a specialist.  But how do we know that 80 percent of reading ability is due to DNA?  The answer to these sorts of questions usually involves twins: identical twins raised apart, or adopted children in the same shared environment.  Fraternal twins are no more genetically related than sequential siblings, so when they turn out more similar, that tells us something about the shared environment.  Steven Pinker says we should throw out almost all studies on parenting unless they control for heritability – maybe Johnny is a pistol because he had angry parents who spanked him, but then again maybe anger just runs in the family.  You need lots of twins to sort that out.
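The twin logic can be turned into a back-of-envelope number.  A standard rough estimate from behavioral genetics (Falconer's formula, not something from this book) says heritability is about twice the difference between how strongly identical twins correlate on a trait and how strongly fraternal twins do.  A quick sketch, with made-up correlations:

```python
def falconer(r_mz, r_dz):
    """Rough variance decomposition from twin correlations (Falconer's formula):
    heritability  h^2 ~= 2 * (r_MZ - r_DZ)
    shared env    c^2 ~= r_MZ - h^2
    non-shared    e^2 ~= 1 - r_MZ   (includes measurement error)"""
    h2 = 2 * (r_mz - r_dz)
    c2 = r_mz - h2
    e2 = 1 - r_mz
    return h2, c2, e2

# Illustrative, made-up correlations: identical twins 0.85, fraternal 0.45
h2, c2, e2 = falconer(0.85, 0.45)
print(round(h2, 2), round(c2, 2), round(e2, 2))
# -> 0.8 0.05 0.15
```

If identical twins are much more alike than fraternal twins, genes get the credit; if fraternal twins are nearly as alike as identical ones, the shared environment does.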

I’ve read many times that much of personality is genetic -- approximately half, depending on the trait. Here is a typical breakdown for personality, based on twin studies of course:

Back to learning.  When it comes to IQ the percent heritable, as you might expect, changes with aging – probably because the environment has the most impact early on, then less throughout most of life.  

Whatever it is that is measured by the IQ test is largely carried in DNA.  The authors of G is for Genes explored many nuances about IQ but didn’t really describe the measure itself as thoroughly as I had hoped and expected, so let me draw primarily on Dr. Grim’s lecture 14: Intelligence and IQ:

The test was designed in the early 1900s by Alfred Binet, a Frenchman who had recently abandoned an attempt to correlate academic prowess with skull size (first, he couldn’t get a stable measure, and then he found no correlation).  His new test, made up of a battery of questions (he said the questions didn’t matter much, there just had to be many of them), was intended to assess a child’s mental age.  He was attempting to characterize lagging students as “slow” rather than “sick," and thereby keep them out of the asylum.  Soon someone divided the mental age by chronological age, hence the “quotient” began.  Much later, standard deviations above and below average were used, as they still are today.  But Binet had explicitly stated that his test was not a single measure of ability, just a predictor of achievement; it should not be used for ranking normal children; and what it measured was neither innate nor immutable.  On the contrary, the whole purpose was to help slower children do better.
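Both scoring schemes reduce to simple arithmetic.  Here is a sketch of the early "quotient" form and the modern deviation form (the ages and scores below are made up for illustration):

```python
def ratio_iq(mental_age, chronological_age):
    """The early 'quotient' form: mental age divided by chronological age,
    times 100.  A child performing exactly at age level scores 100."""
    return 100 * mental_age / chronological_age

def deviation_iq(raw_score, population_mean, population_sd):
    """The modern form: standard deviations from the population mean,
    rescaled so the mean is 100 and one SD is worth 15 points."""
    z = (raw_score - population_mean) / population_sd
    return 100 + 15 * z

# An 8-year-old performing at a 10-year-old's level:
print(ratio_iq(10, 8))             # -> 125.0
# A raw score 1.5 SDs above a (made-up) population mean of 100, SD 20:
print(deviation_iq(130, 100, 20))  # -> 122.5
```

The switch to deviation scoring is why the "quotient" in IQ is now a historical relic: nothing is divided anymore, a score just locates you on the bell curve.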

It was co-opted a few years later by the American H. H. Goddard, a eugenics enthusiast, and he used the measure in exactly the ways Binet had warned against.   Goddard thought the IQ was a measure of the whole person, he considered intelligence innate, and he thought it was immutable.  Many people still do.  He used these now-common terms for those at the lower end of the scale: idiot, imbecile, and moron.  And it was on the basis of his work that Justice Oliver Wendell Holmes Jr. upheld a Virginia law in 1927 forcing sterilization on those scoring poorly on this exam.  Holmes' infamous quote: “three generations of imbeciles are enough.”

Now the test is known to be sensitive to culture, reading skills, age, level of development, and practice – one can improve one's IQ score through study.  And many now argue that there are different sorts of intelligence, pointing as evidence to savants who may be amazing at one skill and abysmal at another, or to brain injuries which impair one ability specifically.  The following intelligences are often suggested: linguistic, logical/mathematical, spatial, musical, kinesthetic, interpersonal, intrapersonal, naturalist, and emotional.  Darwin is an example of naturalist intelligence, Einstein was logical/mathematical, Mozart was musical, and Buddha and Gandhi were intrapersonal.

G is for Genes would have benefited from a closer look at IQ, which Asbury and Plomin refer to simply as a “g score,” but they do point out some interesting and unexpected things about it.
“We asked thousands of children, parents, and teachers about class sizes, school buildings, resources like books and computers, chaos in classrooms, and a whole host of other oft-cited factors and yet, when we fed their ideas into genetically sensitive studies, these factors … accounted for almost none of the differences between our children in terms of their achievement.  … The environment within the school, it appeared, had no impact on children’s academic performance.”  (p 115-16)

Then what did? According to the authors, this startling finding was the reason they wrote the book.  After considerable effort they found that the answer might be in the interpersonal relationships students have in school -- the non-shared experiences such as interactions with peers and teachers.

They also found that while IQ or g is the greatest predictor of academic performance in later years, it’s less important for the very young, for whom shared environmental factors matter more (just like height in poor regions).  If you remove the effect of IQ altogether, achievement is still 40% heritable for other reasons.   One of these is confidence, they say.   It boosts academic achievement and, what’s more, it is at least as heritable as IQ itself -- self-confidence is, in large part, genetic.  Socioeconomic status also has a bearing on achievement, and surprisingly it is heritable too.  It breaks down this way: 50% of educational attainment is heritable, 40% of occupational status is, and 30% of income is related, somehow, to DNA.  These results are from twin studies, I assume, although the book was not as well referenced as I would like.

Gender differences were discussed as well.  Two thirds of math ability is genetically influenced in both genders, and boys and girls have similar average abilities in the hard sciences.  However, there are two reasons why males might dominate the sciences.  First, although the averages are similar, there is more variability among boys.  That is, girls tend to be clustered near average in scientific aptitude, while more boys are very good, and more are very bad.   The best boys are better than the best girls (and the worst worse than the worst).

Second, adults tend to study what they like, not what they’re good at; aptitude does not necessarily predict choice.  A recent study showed that children who were good at math and science but also had strong verbal skills were less likely to go into the STEM disciplines.  Women were more likely than men to be good at both.

So what comes from all this?  The authors devote the last part of the book to policy proposals, but honestly their plans are idealistic, with a lot of individual contact, personalized assessment, and customized programs of study.  They would be prohibitively expensive.  It brings to mind a bit of wisdom I once heard about innovation: it’s very easy to be innovative if you pretend money is not an issue.

The biggest thing I drew from the book was that when it comes to learning, genes really, really matter -- they matter about half -- and this has to do with innate intelligence(s), confidence, socioeconomic status, gender, and more.
In the other half, there is still a ton of wiggle room.  

Thursday, March 27, 2014

Getting Things Done ... works for me!

My job is complicated and hard and I’m surrounded by interesting people who often disagree; that's why I like it.  I stay late and wake up at night thinking of ways I might do it better.  I look forward to going to the office, usually.    But recently I have begun to feel things are too complicated; I was clawing through emails and obligations, trying to meet deadlines and still move things forward. Whatever was on top of the pile, at the top of my inbox, or knocking at the door got my attention.  Lots of little things, and some big things began to slip by me. It was bad.
A friend once recommended a book called Getting Things Done, by David Allen, for people in my situation.  He's an organizational guru, I guess, so I figured it might be good at least for tips and commiseration, so I bought a copy.   It's a fairly complicated system, in that it seems to encompass just about everything.  You begin by offloading all your duties, obligations, and aspirations; there's a place for everything.  But it's fairly straightforward too, and about halfway through the book I began to think hmmm...  There are hierarchical lists: projects, actions, tasks.  And there are places called "someday, maybe" and "waiting for," and areas to dump things you haven't yet had time to sort.  If you put everything down in the system, he claimed, you can then relax your biological mind.  That sounds nice.

Action items, according to Allen, should be things you can actually complete.  Not “the enrollment project” but “ask Bill for the data on x.”  These are concrete pieces that you can do in a sitting, and for each you can estimate the time it will take.  If that's less than two minutes, he says, don't write it down, just do it.  Everything can be assigned a priority, of course: high, medium, low.  That much I'm already used to, but each item can also have a context, which is the location or environment where it can be accomplished.  I can’t paint the bedroom when I’m at work, for example, but I can do my grading at work or at home.  And any item can be tapped to move it forward a bit -- sort of a "consider this next" if time allows.  There are fixed deadlines, of course, which go on the calendar ... and reminders for things you don't need to do now, but will need to do later.  You can mark the things you need to do but tend to avoid, like taxes or grading.  Please, system, show me the important things I have the hardest time thinking of.  All these nuances made sense, I thought.

When you put everything into the system (notebooks, filing cabinets, drawers, bulletin boards) you can clear your desktop, clear your inbox, and clear your mind, he says.  And then, if you're in the office, say, with 30 minutes before a meeting, you can ask it for things you can do there that are important, that will take less than 30 minutes, with priority items first.  Oh, and highlight the things you would most like to avoid.
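That query -- "what can I do here, now, in the time I have?" -- is really just a filter and a sort.  Here is a toy sketch of it; the field names and sample tasks are my own invention, not Allen's or any app's:

```python
# Toy sketch of a GTD-style "what can I do now?" query: filter action items
# by context and available time, then sort by priority, with avoided items
# surfaced first within the same priority.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    context: str            # where it can be done: "office", "home", ...
    minutes: int            # estimated time to complete
    priority: int           # 1 = high, 3 = low
    avoided: bool = False   # things you tend to put off

def what_can_i_do(actions, context, minutes_free):
    doable = [a for a in actions
              if a.context == context and a.minutes <= minutes_free]
    # sort by priority; within a priority, avoided items come first
    return sorted(doable, key=lambda a: (a.priority, not a.avoided))

actions = [
    Action("grade essays", "office", 25, 1, avoided=True),
    Action("ask Bill for enrollment data", "office", 5, 1),
    Action("paint the bedroom", "home", 120, 2),
    Action("skim meeting agenda", "office", 10, 2),
]

for a in what_can_i_do(actions, "office", 30):
    print(a.name)
# -> grade essays, ask Bill for enrollment data, skim meeting agenda
```

"Paint the bedroom" never shows up at the office, and the dreaded grading lands at the top of the list -- which is exactly the behavior described above.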

One of the worst things, I've found, is sending someone a task you need done before you can move forward.  It might just be a question that needs answering -- not exactly on my "to do" list, but not exactly off it either.  The GTD system recommends a “waiting for” bucket for warehousing things like this, and it recommends scheduling reminders for them so they don't fall through the cracks.  A calendar event should also be able to schedule its own reminder.  There's another place called a "someday, maybe" list where you keep track of your pipe dreams.  Fix the motorcycle.  Learn the piano.  Go to China ... not a real project or action item yet, but who knows.

But David Allen wrote this twelve years ago.  No way I'm going to get out notebooks and file folders.  I mean, I need it synchronized with my email and calendar, because that's where I feel like I spend most of my time.

So, I’m thinking to myself, someone should write an app for that.  Hmm...  I typed GTD into the Play Store and, as it turns out, someone has.  It's called IQtell, and it’s become my favorite application since Google Earth; it does everything I mentioned above and more.  It syncs across my iOS, Windows, and Android devices.  It plays nicely with email, calendar, and my go-to notepad, Evernote.  I can read through my emails and swipe them right into the GTD system, organized correctly, and archive the original, which is still hyperlinked to the action item.  It sends texts.  Pretty frigging amazing, I must say.

I set it up to show just the tools I use.  I make macros easily; then with just a click I'll file an item, schedule it, archive it, schedule a reminder -- whatever -- and move on.

I've become a little evangelical about IQTELL and GTD.

When Google Earth came out I asked my wife to remind me of it the next day because I was sure to think it was a dream. What that did with the world, IQtell has done for my mind. I can finally see it and it's not pretty, I assure you.  But it's working much better now.  

I'm sold on GTD and I'm sold on IQtell too.  I've invested quite a bit of time into it and have not been disappointed yet.  I've since learned that there are other programs out there: You'll see them compared HERE.

But I don't care. This one seems to do it all.  It's free, there is a lively forum, great training videos and quick email support.

Disorganizized?  Try it!  You may like it.

Sunday, March 2, 2014

How to Kill a Good Idea

If you ever find yourself in a position of some authority you will be able to push your own projects forward -- and I hope they are good ones because they are going to be a lot more interesting to you than anything else is. Other people will have ideas too and fortunately they will often be flawed. When that happens just sit them down and carefully explain where they went wrong.  Be as encouraging as you can.  

Sometimes their idea may be good; this is where real leadership comes in.  You must be quick and ready with a response.   You have to assess the proposal, the person you’re dealing with, the climate, the surroundings, the level of threat.  Think of it as a pathogen and you are the antibody; be decisive and quick.  This is a direct challenge for the resources you need to put toward yourself.

It would look bad just to fire them, and anyway you might need creative people around in a pinch. But that good idea has to go
... unless you can steal it, which is the best option for all the obvious reasons, if you can pull it off.  If the idea hasn’t been shared in a public forum, or if it wasn’t fully formed, or it came from someone with no status or power -- perfect.  They may even be flattered that their brainchild has been … “adopted" by someone who is actually in a position to raise it up.  Don't feel bad about calling it yours -- it’s an opportunity for them to practice gratitude and humility.  That’s very character building.

But what if you can’t take credit, how do you kill a good idea?  Here are some things to consider.

Ignore it.
You were just too busy to open their email or return their call (yes, you’re that important).  They may take their idea somewhere else, if you're lucky.  Then it won’t be your problem.  Added bonus: after your dominance has been well established you'll get fewer calls to ignore.

Let’s not, and say we did.
Don't say this outright, of course, but this one is fun because there's a lot of pretending involved.  You have to take on a little bit of the project, of course -- but very, very little.  There, it’s done.

But we already went over that!
Express surprise.  They’ll think they missed an important meeting, that they’re out of the loop.

That won’t work, but I can’t tell you why
Say it like you know something they don't, and that you're even protecting them from a political gaffe or pointless effort: "Oh nope!  Trust me.  That'll go nowhere."  Give someone nearby a knowing smile; that’ll surely throw them off track.

Great idea, why don't you do it?
Only use this one if they've come to you for help, not for permission, or it may backfire.  But if you're certain they don't have the resources to follow through it will end nicely all around.

It’s too late for that!
Goodness gracious, why didn't you suggest that when it would have been helpful?  Say it with incredulity and you’ll really drive it in, ouch!

I'll take it from here
Then just put it on the back burner indefinitely.

That breaks best practices
This is one of my favorites, but use it sparingly.  Not only do you sound cool, using the jargon, but it will look like there actually is a list of "best practices" -- and you know what they are.  It's like priesthood, almost.

There's a committee already working on that
It could actually be true -- who would know?  But imply that their work is duplicative and therefore a waste of everyone's time.

That’s already been decided
This is a quick straight-arm technique.  Sure, it’s rude, it’s blunt, but sometimes you don’t have to care.

Nice idea, but there are more important things to do
(Yes, that’s right -- your things.) 

There’s no budget for that
This may work if there is even a little money involved.  You’re also showing that you know the financials; they don’t.  And that’s that.

Someone would sue us if we did that
It's not illegal, we'd win for sure.  That's not the issue.  But lawsuits are expensive!  Surely they'll understand.

An attack diversion
Use this as a last resort and only when the good idea is really bad for you. Accuse them of being sexist or racist, or homophobic -- use any of those conversation stoppers and be vehement and intimidating so everyone around you is cowed, and will appear to agree.

Let's do a study!
Involve them in designing an elaborate survey to measure the viability of the plan.  If it comes back positive, misinterpret the results in the conclusion of a "final report."  No one will read the report, but the matter will be put to rest.

You need to talk to Joe
There is no Joe, but send them to someone peripheral and let them try to find their way back.

I have something else for you to do
Oh, listen, that's nice but I need you to do something for me.  (And you should think twice before promoting your own ideas again, upstart).

We've tried it and it didn't work
Been there, done that.  Particularly with newcomers, this will set them back.  It doesn’t matter if it’s true or not; they won’t likely question history.

We'll discuss it at the next meeting
Say this at every meeting.  Notice you’re not actually saying no, you’re willing to discuss it.  So they should be happy. 

It’s against policy
It’s just a bit risky if there is no such policy.  But then, who knows policy?  Anyway what is policy?  With a small p it could mean anything … even best practices which just means “a pretty good idea, says me.”

I can’t hear you.
Don’t say this, of course.  But pretend you don't understand their proposal. "Huh?"  "What?"  Misinterpret, mix up some important details. Pretend to be baffled; it could even be fun.

Say something crazy
We don't need [a stick in the sand -- (their idea)] ... we already have [a paper airplane].  It'll be so unexpected that you may get the stunned silence you're after. Then change the subject.

I’ll warn you that in a weak moment you might catch yourself actually listening, as if it might be a legitimate idea worth real consideration.  You might even find yourself wondering  “hmm… wouldn’t that be something, is it worth a try? Maybe I could actually help make that happen?” But then it’ll hit you: “Why didn’t I think of that?”  No good.  That’s the signal that it's time to pull the rip cord:  

I’ve got to go
Look at your watch and exclaim “Oh my god. I have another meeting!”  That’s right.  You just called it. With yourself. In the coffee lounge.

Another crisis averted.