Cathy Davidson
The main point of this book is that every individual has a limited perspective and so we need other people. Davidson calls this "collaboration by difference". Through reading her book I've certainly learned a lot of things I wouldn't otherwise have learned, including but not limited to things that are artifacts of her perspective.
Davidson explains that she was diagnosed with ADHD as an adult, and it's not hard to believe. The arguments sometimes venture far from the path you might expect them to traverse, and bookends seem to have been strapped on in editing to give a sense of coherence. Hers is a perspective different from mine, and I agree it can be valuable. It may also be what leads to broad generalizations, quick flips, and hyperbolic language that doesn't always seem to feel a need to connect closely with the truth, as with this reference to a study that Davidson does not like:
The headline-grabbing "multitaskers aren't even good at multitasking" is a nonsensical statement, when you break it down and try to think about it seriously.
If you try to think about it seriously, the study defined multitasking as the ability to focus on one task, then quickly stop, shift, and focus on a new task. They compared how well people did when they were forced to shift, and the people who naturally switch tasks a lot did worse (were slower) than people who naturally don't switch tasks a lot. This makes sense. People who tend to focus less will switch tasks more often, left to their own devices. There's no reason to expect them to focus well when you tell them to. They are less focused. It doesn't necessarily mean that less focus is always bad, and it doesn't mean that the result of the study is a nonsensical statement.
The result of that study also doesn't undermine Davidson's related claim, that switching tasks a lot might lead to better results because you end up exposed to more different ideas that you can work with flexibly. I find a lot to like in this:
We know that in dreams, as in virtual worlds and digital spaces, physical and even traditional linear narrative rules do not apply. It is possible that, during boundless wandering thinking, we open ourselves to possibilities for innovative solutions that, in more focused thinking, we might prematurely preclude as unrealistic. The Latin word for "inspiration" is inspirare, to inflame or breathe into. What if we thought of new digital ways of thinking not as multi-tasking but multi-inspiring, as potentially creative disruption of usual thought patterns. Look at the account of just about any enormous intellectual break-through and you'll find that some seemingly random connection, some associational side thought, some distraction preceded the revelation. Distraction, we may discover, is as central to innovation as, say, an apple falling on Newton's head.
I can definitely identify with the experience of being too locked in to one way of looking at something, and so missing solutions I might have seen if I stepped back, took a break with something else, talked to somebody about it, browsed the 'net for a bit, etc.
The poetic appropriateness is almost enough to convince me that Davidson was right to include the apocryphal reference to Newton, but that kind of looseness with fact grinds on me and makes it hard for me to take her seriously as a scholar in places. I think it's important to use language precisely. (Oh, may I not be held to that standard on my blog.)
For example: Davidson seems several times to be ending her book. One ending tells the story of her trip to Korea, when she visits 동대문시장 (Dongdaemun Market).
In a guidebook on the plane home, I read that the Dongdaemun was founded in 1905. Its name originally meant, Market for Learning.
It's not a big deal to the general reader, I guess, and maybe it's the fault of that guidebook, but this just isn't so. I could understand "Its original name meant", since it was originally called 배우개장 (Baeugaejang), but "Dongdaemun" has never meant "Market for Learning". It feels like she's sweeping too much under the rug, neglecting details that I want to know about. I happen to have some knowledge about this (and could double-check with Wikipedia), but what about all the areas where I don't know enough to fact-check Davidson? If you read this book and then go around saying, "Hey, did you know Dongdaemun used to mean 'market for learning'?" you will sound like a fool if someone knows better, and what's worse, you will be spreading misinformation if they don't.
So how am I supposed to interpret Davidson's other claims, about research?
These key factors for educational success - rigor, relevance, and relationships - have been dubbed the new three Rs, with student-teacher ratio being particularly important. Small class size has been proved to be one of the single most significant factors in kids' staying in and succeeding in school. Twenty seems to be the magic number.
The last book I read, The Good School, took almost exactly the opposite position on class size, saying that parents usually worry about it too much, and that the body of research indicates it has only minor effects, and then only for dramatic reductions of the kind you are not likely to see in practice. The reader is lucky that Davidson provides a footnote here - but it's to a 1999 summary of prior research, and it doesn't seem to fully support Davidson's dramatic claim.
There are other places where Davidson seems to be just talking nonsense:
If we establish a mean, deviation from the mean is almost inevitably a decline.
This is a one-line description of the strangest statistics I have ever heard proposed: deviations from a mean go in both directions by construction, so they can hardly be "almost inevitably" declines. I am glad I do not inhabit this particular world of voodoo statistics. It's pretty clear how Davidson feels about mathematicians:
Is the pure abstract thinking of the mathematician really superior, cognitively, to the associational searching and reading and analyzing and interpreting and synthesizing and then focusing and narrating that are the historian's gift and trade? One could also note that it is mathematicians and statisticians, not historians, who tend to make up the statistical measures by which we judge cognitive excellence, achievement, and decline. On the other hand, it was two historians who created H-Bot, that robot who can whup most of us on standardized tests.
Yes, one certainly could note that it tends to be statisticians who "make up" statistical measures. And there could even be something to the argument, if it were made with, say, evidence of how mathematicians make IQ tests that are biased in such and such a way. Such an argument would fit well in the section of the book about standardized tests, which is approximately where, I believe, we first heard about this interesting H-Bot:
In 2006, two distinguished historians, Daniel H. Cohen and the late Roy Rosenzweig, worked with a bright high school student to create H-Bot, an online robot installed with search algorithms capable of reading test questions and then browsing the Internet for answers. When H-Bot took a National Assessment of Educational Progress (NAEP) test designed for fourth graders, it scored 82 percent, well above the national average. Given advances in this technology, a 2010 equivalent of H-Bot would most likely receive a perfect score, and not just on the fourth-grade NAEP; it would also ace the SATs and maybe even the GREs, LSATs, and MCATs too.
First, it is worth noting that the most interesting thing I have heard of historians doing is in fact a work of computer science. Kudos to that high school student. (And now it's clear where I stand if a fight breaks out between the hard and soft sciences.) Second, Davidson never actually explains that H-Bot only answers history questions, and even then, only the multiple-choice ones from the NAEP (about two-thirds of the history questions, according to the original paper, a very interesting one from 2006 called No Computer Left Behind that I would like to look at in more depth in a later post). That is not really so bad - mathematicians are not offended that computers can answer math questions.

The original paper seems to argue that since knowledge can be recorded, it is not worth knowing. As a student I was similarly keen on history without dates, without memorization. But there is value in humans knowing these things without having to look them up. This recent article provides an illustration: "Only about a third of American adults can name all three branches of government, and a third can't name any. Fewer than a third of eighth graders could identify the historical purpose of the Declaration of Independence." These are not things that you should have to look up - but they are things that you can test quickly and easily with multiple choice. I agree that history classes should not be limited to "memorize this list of the three branches of government," but that doesn't make it off limits on the test.

Next, Davidson's claim that just half a decade after H-Bot, computers can easily ace advanced tests - tests that are computer-adaptive and, more to the point, include essays - is pure claptrap. If she is thinking just of their multiple-choice sections, it may be possible; Watson's easy Jeopardy win shows that the ability to answer questions is not uniquely human. (But even Watson isn't perfect.) What it doesn't mean is that now humans don't have to know anything.
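(For the curious: as I understand it, the core trick in No Computer Left Behind is conceptually simple - for each multiple-choice option, ask a search engine how often the question's distinctive words co-occur with that option, and guess the most common one. Here's a minimal sketch of that idea, my own toy reconstruction rather than the authors' code; search_hit_count is a hypothetical stand-in for whatever search API you would actually use.)

# A toy sketch of the H-Bot idea (as I understand it): for each answer option,
# count how often the question's distinctive words co-occur with that option
# on the web, and guess the option with the most hits.
# `search_hit_count` is a hypothetical stand-in for a real search engine API.

def search_hit_count(query: str) -> int:
    """Placeholder: number of web search hits for `query`.
    A real system would call out to a search engine here."""
    raise NotImplementedError

def guess_answer(question: str, options: list[str]) -> str:
    """Pick the option that co-occurs most often with the question's keywords."""
    # Crude keyword extraction: keep only the longer, more distinctive words.
    keywords = " ".join(word for word in question.split() if len(word) > 4)
    scores = {opt: search_hit_count(f"{keywords} {opt}") for opt in options}
    return max(scores, key=scores.get)

# Hypothetical NAEP-style usage:
# guess_answer("Which document declared the colonies independent from Britain?",
#              ["Declaration of Independence", "Articles of Confederation",
#               "Mayflower Compact", "Federalist Papers"])

It should be clear from the sketch why this works fine for recall-style multiple choice and not at all for an essay.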
Here's one more claim of Davidson's that I just don't think is based in reality:
Parents and students all know when a teacher isn't doing her job.
I think it's more complicated than that. Judging a teacher by students' standardized test scores isn't perfect either; it's an expedient, and while it's at least based on something objective, on its own it isn't enough. Neither is relying on parents to somehow "just know". I think parents are pretty far removed, really. There is actually some evidence that student perception of teacher quality correlates with the standardized test measures. I think most folks agree that we need more than one measure to evaluate teachers, and there is progress in this direction.
Before leaving testing: Davidson and her sources seem to really like essay tests, suggesting that we just need to spend a little more time grading to solve all the problems of education. I have never seen a fair way of grading essays, and I have never seen a standardized grading scale or rubric that gave a score range of more than about five points. I think essays are great. Sometimes I write my own. But I don't think they are a way to fairly compare students from different classrooms or schools or states or countries. Some of the history was interesting:
That letter grade reduces the age-old practice of evaluating students' thoughts in essays - what was once a qualitative, evaluative, and narrative practice - to a grade.
The first school to adopt a system of assigning letter grades was Mount Holyoke in 1897, and from there the practice was adopted in other colleges and universities as well as in secondary schools. A few years later, the American Meat Packers Association thought it was so convenient that they adopted the system for the quality or grades, as they called it, of meats.
Davidson seems to take such glee in implying that giving students letter grades is treating them like meat. If anything, it's just the reverse! (I can't deny it's a little funny either way.)
Here's an argument against standardized testing that I think has a little bit more substance:
We don't subject a new employee to a standardized test at the end of her first year to see if she has the skills the job requires. Why in the world have we come to believe that is the right way to test our children?
Of course, some companies (EPIC, for example) actually do have new employees take standardized tests after completing training - and I think that could be a really good idea, when their job depends on skills and knowledge they may not have learned in school. But there is something to be said for people being judged in real life based on the work that they do, rather than on their test-taking skills. I think Davidson is right that project work, in which students actually do/make something meaningful, should play a bigger part in education.
About projects: I love projects, but I don't think they can be everything, exactly because they allow and even encourage so much specialization:
That girl with the green hair would be enlisted to do the artwork. Rodney would be doing any math calculations. Someone might write and sing a theme song. Someone else would write a script. Budding performers would be tapped to narrate or even act out the parts. The computer kids would identify the software they needed and start putting this together.
There are skills that every student in a school should learn. I think it is wrong to endorse a system in which, for example, maybe two students learn to type and become the typists for the class. Everybody needs to know how to type. I think a wide range of skills are similar enough to typing in their fundamental nature that every student should be able to do them well and individually, not just in a team. I happen to think that basic programming should be one of these skills. I would have loved to see a reference to Rushkoff's Program or Be Programmed.
I do agree with Davidson that education should use technology much more effectively. But often she goes off in this direction:
If we want national standards, let's take the current funds we put into the end-of-grade testing and develop those badges and ePortfolios and adaptable challenge tests that will assist teachers in grading and assist students, too, in how they learn. Lots of online for-profit schools are developing these systems, and many of them work well. Surely they are more relevant to our children's future than the bubble tests.
Okay, here's my venting about badges: badges are dumb. They are not that motivating, and they're certainly not what an employer wants to see later. "Wow, you got the red badge? Get out of my office." I have a similar gripe about portfolios, e- or otherwise: if the point of doing something is to put it in a portfolio, it is a pointless thing. A portfolio is meant to collect work you've done; it is not an end in itself. If you are going out to do a shoot just to build your portfolio, you are not a photographer. First do photography, then look at what you've done and put your best work in your portfolio. Don't write "for your portfolio," don't paint "for your portfolio," don't design "for your portfolio." A portfolio is a record of projects done, not a project itself.
In places Davidson and I completely agree on what's going on but really disagree about what it means or what to do about it:
Recently a group of educators trained in new computational methods have been confirming Gould's assertion by microprocessing data from scores earned on end-of-grade exams in tandem with GIS (geographic information systems) data. They are finding clear correlations between test scores and the income of school districts, schools, neighborhoods, and even individual households. As we are collecting increasing amounts of data on individuals, we are also accumulating empirical evidence that the most statistically meaningful "standard" measured by end-of-grade tests is standard of living, as enjoyed by the family of the child taking the exam.
To me, this is a problem because it reflects socio-economic injustice (often falling along race lines, by the way) that is alive and well in our country. It means we have to work on education and so-called wrap-around social services to improve outcomes for those who need them most. To Davidson, it seems merely to imply that standardized tests are bad. Look: kids in the South Bronx are not going to suddenly go to college and get good jobs if you just stop having them take standardized tests. I think it is not in those students' best interest to suggest that the problem is the tests.
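(As an aside, the statistic behind "clear correlations between test scores and income" is nothing exotic - a plain correlation coefficient once the score and income data have been joined up by school or district. Here's a minimal sketch of just that last step; assembling the actual GIS and score data is the real work, and I'm not pretending to reproduce it.)

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between paired samples, e.g. per-school mean
    test scores (xs) and matching median household incomes (ys)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)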
So again, here's the part of Davidson's stuff that I agree with:
I'm not against testing. Not at all. If anything, research suggests there should be more challenges offered to students, with more variety, and they should be more casual, with less weight, and should offer more feedback to kids, helping them to see for themselves how well they are learning the material as they go along. This is called adaptive or progressive testing. Fortunately, we are close to having machine-generated, -readable, and -gradable forms of such tests, so if we want to do large-scale testing across school districts or states or even on a national level, we should soon have the means to do so, in human-assisted, machine-readable testing-learning programs with real-time assessment mechanisms that can adjust to the individual learning styles of the individual student.
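The "adaptive or progressive" idea in that quote is, stripped down, just a feedback loop: pick the next question based on how the student did on the last one. Here's a bare-bones sketch of that loop - the question bank and the ask() callback are my own stand-ins, not anything from the book or from any particular product.

def adaptive_quiz(bank, ask, start_level=3, num_questions=10):
    """Run a simple staircase-style adaptive quiz.

    bank: dict mapping difficulty level (int) -> list of questions at that level.
    ask:  callback taking a question and returning True if answered correctly.
    Returns a history of (level, correct) pairs.
    """
    level = start_level
    history = []
    for i in range(num_questions):
        question = bank[level][i % len(bank[level])]
        correct = ask(question)
        history.append((level, correct))
        # Step the difficulty toward the edge of the student's ability:
        # up after a correct answer, down after a miss, staying within the bank.
        if correct:
            level = min(level + 1, max(bank))
        else:
            level = max(level - 1, min(bank))
    return history

It's the same move as the flag-height recalibration in the basketball anecdote Davidson tells later, just automated.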
Davidson is very into games, and apparently had something to do with Quest to Learn (02M422), a public school here in New York that is built around games. They would do a lot better on the NYC progress report if they could get their ELA scores to go up. I think games can be fun, but relying on existing games made purely for entertainment to teach fundamental literacy skills is a bit of a stretch. In the quote below, Wilson isn't necessarily agreeing with me, but you could read it as suggesting that games can provide immersive enrichment, not necessarily replacing what you might call "the basics":
E. O. Wilson, the distinguished professor emeritus of biology at Harvard, thinks so too: "Games are the future of education. I envision visits to different ecosystems that the student could actually enter ... with an instructor. They could be a rain forest, a tundra, or a Jurassic forest."
While Nim may be a game that can be modeled mathematically, I don't think it's teaching anybody basic math. And so on. I don't think that the interactive computer systems to teach these educational building blocks yet exist. I would like to make them. I'm thinking about it.
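(Since I brought up Nim: the sense in which it can be "modeled mathematically" is very concrete. Bouton's theorem says the player to move in normal-play Nim loses, against best play, exactly when the XOR of the heap sizes is zero, which makes the winning strategy a few lines of code. A quick sketch, just to make the point:)

from functools import reduce
from operator import xor

def nim_sum(heaps):
    """XOR of all heap sizes; zero means the player to move loses (Bouton)."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) reaching a zero nim-sum, or None if
    the position is already zero (every move loses against best play)."""
    s = nim_sum(heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:  # the move must actually remove stones
            return i, target

# Example: winning_move([3, 4, 5]) == (0, 1), i.e. shrink the first heap to 1,
# leaving [1, 4, 5] whose XOR is zero.

None of which, of course, is the same as teaching a kid what addition is.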
Here are some more places where I really AGREE with Davidson:
Talking about a guy from Mozilla:
He's studying how we actually use our computers, because humans, as we know from the attention-blindness experiments, are notoriously poor at knowing how we actually do what we do.
It is darn hard to be really self-aware, really meta. Here's a quote from the head of Wikipedia:
There are so many problems to fix in the world. Why waste time having people all working on the same thing when they don't even know about it? I visit big corporations and I hear all the time about people spending a year or two on a project, and then they find out someone else is working on exactly the same thing. It makes no sense. It's a waste of valuable time. There are too many problems to solve.
I encounter this at work. I wish everybody would share more about what they're doing and collaborate when it makes sense. And it's not just me wishing that: several people have the same Dilbert cartoon up at their desks, about how Dilbert's company has two separate projects: both created to reduce redundancy.
And though categories are necessary and useful to sort through what would otherwise be chaos, we run into trouble when we start to forget that categories are arbitrary. They define what we want them to, not the other way round.
I think this reflects a really powerful understanding of the world, similar to the understanding that good scientists have that at best their "laws" are models that attempt to describe and predict the world we experience, but they are constructs of humanity.
And finally, here's a little passage that I sort of liked because it relates to how I want to make my educational technology project work. It's too knit in with the anecdote to pull it out cleanly, and I don't like how it almost compares a basketball player to a working dog, but I think there's something to the idea of adjusting goals to maximize effort and persistence. It's related to Mihaly Csikszentmihalyi's "Creativity: Flow and the Psychology of Discovery and Invention", which everybody in education loves to reference. This is how differentiation can work:
On some afternoons, a young star on our basketball team, a first-year student, would also be there working with Bob Bruzga and others to improve his jump by trying to snatch a small flag dangling above his head from a very long stick. As the student doggedly repeated the exercise over and over, a team of professionals analyzed his jump to make recommendations about his performance, but also recalibrated the height of the flag to his most recent successes or failures, putting it just out of reach one time, within the grasp of his fingertips the next, and then, as his back was turned, just a little too high to grasp. It is a method I've seen used to train competition-level working dogs, a marvelous psychological dance of reward and challenge, adjusted by the trainer to the trainee's desires, energy, success, or frustration, and all designed to enhance the trainee's ability to conceptualize achievement just beyond his grasp. It is also a time-honored method used by many great teachers. Set the bar too high and it is frustratingly counter-productive; set it too low and dulled expectations lead to underachievement. Dogged is the right word. That kid just would not give up. I would watch him and try again to inch my arm forward.
(Davidson was in rehab for an arm injury.)
This has gotten long, which is a testament to the richness and (possibly or) diversity of ideas in the book. I'll just add that of course Davidson is a big supporter of computer literacy, the internet, and so on, which makes it a little strange that her book doesn't seem to be available online. And it's a book: I don't really see any place in Davidson's vision of the future where people read whole books. And it's not just Davidson caught in this apparent contradiction: the result of a Mozilla conference on using the web in education had as its final product, that's right, you guessed it: a book. At least it's available as a PDF - but it blows my mind that it doesn't seem to also be available as linked HTML pages. Am I just missing it?
So there it is. Reflections on Now You See It. I will make no attempt to give the book a grade, number of stars, etc. :)