The new kids: Big dreams and brave journeys at a high school for immigrant teens
Brooke Hauser
I read this book about the International School @ Prospect Heights (17K524) for my work reading group book club thing. It's a book of stories. I haven't read any fiction in a while. I guess I read some Douglas Adams over the summer. Maybe what I mean is, I haven't read any mundane fiction in a while. Not that the stories of the students and teachers are mundane - they follow the ordinary rules of physics is all. They're good stories. They give you a human view of a lot of incredible lives. But it isn't a book of ideas, unless the idea is "let's have empathy" - and that isn't a bad idea.
It did have some interesting words!
umber
sable ponytail: seems to be just an ordinary ponytail
vitiligo: I had wondered about this and am glad to have a word for it now.
brackish (okay I do know what this means...)
atelier
scrim
Tibetan Spicoli: this one is interesting... it seems like the only place this phrase has ever been used is in this book. Is it a typo? Is it just overwhelmed by other uses? Seriously the only other reference I can find is some joker garbage on urban dictionary...
kitten heels
Les Brown: comical motivational speaker?
liminal space between
ebullient: cheerful and happy; I knew that...
skein (noodle): like a ball (skein) of yarn...
arrondissement: district of Paris
stupefaction (I guess I know this one too)
chafing-dish: those things they use to keep food hot at buffets
Batusi: THIS IS REALLY FUNNY
compas (re: bachata)
tulle (re: taffeta)
chignon
T-zone (cosmetics): apparently your forehead down to your nose makes a "T"?
Tuesday, December 27, 2011
Thoughts on Now You See It by Cathy Davidson
Now you see it: How the brain science of attention will transform the way we live, work, and learn
Cathy Davidson
The main point of this book is that every individual has a limited perspective and so we need other people. Davidson calls this "collaboration by difference". Through reading her book I've certainly learned a lot of things I wouldn't otherwise have learned, including but not limited to things that are artifacts of her perspective.
Davidson explains that she was diagnosed as ADHD as an adult, and it's not hard to believe. Sometimes the arguments venture far from the path you might expect them to traverse, and bookends sometimes seem to have been strapped on in editing to give a sense of coherence. Hers is a perspective different from mine, and I agree it can be valuable. It may also be what leads to broad generalizations, quick flips, and hyperbolic language that doesn't always seem to feel a need to connect closely with the truth, as with this reference to a study that Davidson does not like:
The headline-grabbing "multitaskers aren't even good at multitasking" is a nonsensical statement, when you break it down and try to think about it seriously.
If you do try to think about it seriously: the study defined multitasking as the ability to focus on one task, then quickly stop, shift, and focus on a new task. They compared how well people did when they were forced to shift, and the people who naturally switch tasks a lot did worse (were slower) than people who naturally don't switch tasks a lot. This makes sense. People who tend to focus less will switch tasks more often, left to their own devices. There's no reason to expect them to focus well when you tell them to. They are less focused. It doesn't necessarily mean that less focus is always bad, and it doesn't mean that the result of the study is a nonsensical statement.
The result of that study also doesn't undermine Davidson's related claim, that switching tasks a lot might lead to better results because you end up exposed to more different ideas that you can work with flexibly. I find a lot to like in this:
We know that in dreams, as in virtual worlds and digital spaces, physical and even traditional linear narrative rules do not apply. It is possible that, during boundless wandering thinking, we open ourselves to possibilities for innovative solutions that, in more focused thinking, we might prematurely preclude as unrealistic. The Latin word for "inspiration" is inspirare, to inflame or breathe into. What if we thought of new digital ways of thinking not as multi-tasking but multi-inspiring, as potentially creative disruption of usual thought patterns. Look at the account of just about any enormous intellectual break-through and you'll find that some seemingly random connection, some associational side thought, some distraction preceded the revelation. Distraction, we may discover, is as central to innovation as, say, an apple falling on Newton's head.
I can definitely identify with the experience of being too locked in to one way of looking at something, and so missing solutions I might have seen if I stepped back, took a break with something else, talked to somebody about it, browsed the 'net for a bit, etc.
The poetic appropriateness is almost enough to convince me that Davidson was right to include the apocryphal reference to Newton, but that kind of looseness with fact grinds on me and makes it hard for me to take her seriously as a scholar in places. I think it's important to use language precisely. (Oh, that I may not be held to that standard on my blog.)
For example: Davidson seems several times to be ending her book. One ending tells the story of her trip to Korea, when she visits 동대문시장.
In a guidebook on the plane home, I read that the Dongdaemun was founded in 1905. Its name originally meant, Market for Learning.
It's not a big deal to the general reader, I guess, and maybe it's the fault of that guidebook, but this just isn't so. I could understand "Its original name meant", since it was originally called 배우개장, but "Dongdaemun" has never meant "Market for Learning". It feels like she's sweeping too much under the rug, neglecting details that I want to know about. I happen to have some knowledge about this (and could double check with wiki), but what about all the areas where I don't know enough to fact check Davidson? If you read this book and then go around saying, "Hey, did you know Dongdaemun used to mean 'market for learning'?" you will sound like a fool, if someone knows better, and what's worse you will be spreading misinformation if they don't know better.
So how am I supposed to interpret Davidson's other claims, about research?
These key factors for educational success - rigor, relevance, and relationships - have been dubbed the new three Rs, with student-teacher ratio being particularly important. Small class size has been proved to be one of the single most significant factors in kids' staying in and succeeding in school. Twenty seems to be the magic number.
The very last book I read, The good school, took almost exactly the opposite position on class size, saying that parents are usually too worried about it, and that the body of research indicates it has only minor effects, and then only for dramatic reductions of the type you are not likely to see in practice. The reader is lucky that Davidson provides a footnote here - but it's to a 1999 summary of prior research, and it doesn't seem to fully support Davidson's dramatic claim.
There are other places where Davidson seems to be just talking nonsense:
If we establish a mean, deviation from the mean is almost inevitably a decline.
This is a one-line description of the strangest statistics I have ever heard proposed. I am glad I do not inhabit this particular world of voodoo statistics. It's pretty clear how Davidson feels about mathematicians:
Is the pure abstract thinking of the mathematician really superior, cognitively, to the associational searching and reading and analyzing and interpreting and synthesizing and then focusing and narrating that are the historian's gift and trade? One could also note that it is mathematicians and statisticians, not historians, who tend to make up the statistical measures by which we judge cognitive excellence, achievement, and decline. On the other hand, it was two historians who created H-Bot, that robot who can whup most of us on standardized tests.
Yes, one certainly could note that it tends to be statisticians who "make up" statistical measures. And there could even be something to the argument, if it were made with, say, evidence of how mathematicians make IQ tests that are biased in such and such a way. Such an argument would fit well in the section of the book about standardized tests, which is approximately where, I believe, we first heard about this interesting H-Bot:
In 2006, two distinguished historians, Daniel H. Cohen and the late Roy Rosenzweig, worked with a bright high school student to create H-Bot, an online robot installed with search algorithms capable of reading test questions and then browsing the Internet for answers. When H-Bot took a National Assessment of Educational Progress (NAEP) test designed for fourth graders, it scored 82 percent, well above the national average. Given advances in this technology, a 2010 equivalent of H-Bot would most likely receive a perfect score, and not just on the fourth-grade NAEP; it would also ace the SATs and maybe even the GREs, LSATs, and MCATs too.
First, it is worth noting that the most interesting thing I have heard about being done by historians is in fact a work of computer science. Kudos to that high school student. (And now it's clear where I stand if a fight breaks out between the hard and soft sciences.) Second, Davidson never actually explains that H-Bot only answers history questions, and even then, only the multiple-choice ones from NAEP (about two-thirds of the history questions, according to the original paper, a very interesting one from 2006 called No Computer Left Behind that I would like to look at in more depth in a later post). Now, that is not really so bad - mathematicians are not offended that computers can answer math questions. The original paper seems to argue that since knowledge can be recorded, it is not worth knowing. As a student I was similarly keen on history without dates, without memorization. But there is value in having humans know these things, without having to look them up. This recent article provides an illustration: "Only about a third of American adults can name all three branches of government, and a third can't name any. Fewer than a third of eighth graders could identify the historical purpose of the Declaration of Independence." These are not things that you should have to look up - but they are things that you can test quickly and easily with multiple choice. I agree that history classes should not be limited to "memorize this list of the three branches of government", but that doesn't make it off limits on the test. Next, Davidson's claim that just half a decade after H-Bot, computers can easily ace advanced tests - tests that are computer-adaptive and, more to the point, do include essays - is pure claptrap. If she is thinking only of their multiple-choice sections, it may be possible: Watson easily winning Jeopardy shows that the ability to answer questions is not uniquely human. (But even Watson isn't perfect.) What it doesn't show is that humans no longer have to know anything.
Here's one more claim of Davidson's that I just don't think is based in reality:
Parents and students all know when a teacher isn't doing her job.
I think it's more complicated than that. Judging a teacher by the standardized test scores of the students is an expedient: it is not perfect, but it is at least based on something objective. Relying on parents to somehow "just know" is not perfect either. I think parents are pretty far removed, really. There is actually some evidence that student perception of teacher quality correlates with the standardized test measures. I think most folks agree that we need to use more than one measure to evaluate teachers, and there is progress in this direction.
Before leaving the topic of testing: Davidson and her sources seem to really like essay tests, suggesting that we just need to spend a little more time grading to solve all the problems of education. I have never seen a fair way of grading essays, and I have never seen a standardized grading scale or rubric that gave a score range of more than about five points. I think essays are great. Sometimes I write my own. But I don't think they are a way to fairly compare students from different classrooms or schools or states or countries. Some of the history was interesting:
That letter grade reduces the age-old practice of evaluating students' thoughts in essays - what was once a qualitative, evaluative, and narrative practice - to a grade.
The first school to adopt a system of assigning letter grades was Mount Holyoke in 1897, and from there the practice was adopted in other colleges and universities as well as in secondary schools. A few years later, the American Meat Packers Association thought it was so convenient that they adopted the system for the quality or grades, as they called it, of meats.
Davidson seems to take such glee in trying to imply that giving students letter grades is treating them like meat. If anything, it's just the reverse! (I can't deny it's a little funny either way.)
Here's an argument against standardized testing that I think has a little bit more substance:
We don't subject a new employee to a standardized test at the end of her first year to see if she has the skills the job requires. Why in the world have we come to believe that is the right way to test our children?
Of course, some companies (EPIC, for example) actually do have new employees take standardized tests after completing training - and I think that could be a really good idea, when their job depends on skills and knowledge they may not have learned in school. But there is something to be said for people being judged in real life based on the work that they do, rather than on their test-taking skills. I think Davidson is right that project work, in which students actually do/make something meaningful, should play a bigger part in education.
But about projects: I love projects. But I don't think they can be everything, exactly because they allow and even encourage so much specialization:
That girl with the green hair would be enlisted to do the artwork. Rodney would be doing any math calculations. Someone might write and sing a theme song. Someone else would write a script. Budding performers would be tapped to narrate or even act out the parts. The computer kids would identify the software they needed and start putting this together.
There are skills that every student in a school should learn. I think it is wrong to endorse a system in which, for example, maybe two students learn to type and become the typists for the class. Everybody needs to know how to type. I think a wide range of skills are similar enough to typing in their fundamental nature that every student should be able to do them well and individually, not just in a team. I happen to think that basic programming should be one of these skills. I would have loved to see a reference to Rushkoff's Program or Be Programmed.
I do agree with Davidson that education should use technology much more effectively. But often she goes off in this direction:
If we want national standards, let's take the current funds we put into the end-of-grade testing and develop those badges and ePortfolios and adaptable challenge tests that will assist teachers in grading and assist students, too, in how they learn. Lots of online for-profit schools are developing these systems, and many of them work well. Surely they are more relevant to our children's future than the bubble tests.
Okay, here's my venting about badges: Badges are dumb. They are not that motivating, and they're certainly not what an employer wants to see later. "Wow, you got the red badge? Get out of my office." I have a similar gripe about portfolios, e- or otherwise: If the point of doing something is to put it in a portfolio, it is a pointless thing. A portfolio is meant to collect work you've done. It is not an end in itself. If you are a photographer going out to do a shoot just to build your portfolio, you are not a photographer. First do photography, then look at what you've done and put your best work in your portfolio. Etc. Don't write "for your portfolio", don't paint "for your portfolio", don't design "for your portfolio". A portfolio is a record of projects done, not a project itself.
In places Davidson and I completely agree on what's going on but really disagree about what it means or what to do about it:
Recently a group of educators trained in new computational methods have been confirming Gould's assertion by microprocessing data from scores earned on end-of-grade exams in tandem with GIS (geographic information systems) data. They are finding clear correlations between test scores and the income of school districts, schools, neighborhoods, and even individual households. As we are collecting increasing amounts of data on individuals, we are also accumulating empirical evidence that the most statistically meaningful "standard" measured by end-of-grade tests is standard of living, as enjoyed by the family of the child taking the exam.
To me, this is a problem because it reflects socio-economic injustice (often falling along race lines, by the way) that is alive and well in our country. It means we have to work to improve education and so-called wrap-around social services to improve educational outcomes for those that need them most. To Davidson, it seems to merely imply that standardized tests are bad. Look: kids in the south Bronx are not going to suddenly go to college and get good jobs if you just stop having them take standardized tests. I think it is not in those students' best interest to suggest that the problem is the tests.
So again, here's the part of Davidson's stuff that I agree with:
I'm not against testing. Not at all. If anything, research suggests there should be more challenges offered to students, with more variety, and they should be more casual, with less weight, and should offer more feedback to kids, helping them to see for themselves how well they are learning the material as they go along. This is called adaptive or progressive testing. Fortunately, we are close to having machine-generated, -readable, and -gradable forms of such tests, so if we want to do large-scale testing across school districts or states or even on a national level, we should soon have the means to do so, in human-assisted, machine-readable testing-learning programs with real-time assessment mechanisms that can adjust to the individual learning styles of the individual student.
Davidson is very into games, and apparently had something to do with 02M422 "Quest 2 Learn", a public school here in New York, which is built around games. (They would do a lot better on the NYC progress report if they could get their ELA scores to go up.) I think games can be fun, but I think relying on existing games made purely for entertainment to teach fundamental literacy skills is a bit of a stretch. In the quote below he's not necessarily agreeing with me, but you could read it as suggesting that games can provide immersive enrichment, not necessarily replacing what you might call "the basics":
E. O. Wilson, the distinguished professor emeritus of biology at Harvard, thinks so too: "Games are the future of education. I envision visits to different ecosystems that the student could actually enter ... with an instructor. They could be a rain forest, a tundra, or a Jurassic forest."
While Nim may be a game that can be modeled mathematically, I don't think it's teaching anybody basic math. And so on. I don't think that the interactive computer systems to teach these educational building blocks yet exist. I would like to make them. I'm thinking about it.
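As an aside on that Nim remark: the game really is fully solved mathematically, which is part of why it makes a better example of math-flavored play than of basic math instruction. Here is a minimal sketch of the classic analysis (mine, not anything from Davidson's book; the function names are just mine for illustration) - a position is lost for the player about to move exactly when the XOR ("nim-sum") of the pile sizes is zero:

from functools import reduce

def nim_sum(piles):
    # XOR of all pile sizes; zero means the player about to move loses
    # against perfect play.
    return reduce(lambda a, b: a ^ b, piles, 0)

def winning_move(piles):
    # Return (pile_index, new_size) for a winning move, or None if there isn't one.
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        if p ^ s < p:
            return (i, p ^ s)

print(winning_move([3, 4, 5]))  # (0, 1): shrink the 3-pile to 1, leaving nim-sum 0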
Here are some more places where I really AGREE with Davidson:
Talking about a guy from Mozilla:
He's studying how we actually use our computers, because humans, as we know from the attention-blindness experiments, are notoriously poor at knowing how we actually do what we do.
It is darn hard to be really self-aware, really meta. Here's a quote from the head of Wikipedia:
There are so many problems to fix in the world. Why waste time having people all working on the same thing when they don't even know about it? I visit big corporations and I hear all the time about people spending a year or two on a project, and then they find out someone else is working on exactly the same thing. It makes no sense. It's a waste of valuable time. There are too many problems to solve.
I encounter this at work. I wish everybody would share more about what they're doing and collaborate when it makes sense. And it's not just me wishing that: several people have the same Dilbert cartoon up at their desks, about how Dilbert's company has two separate projects: both created to reduce redundancy.
And though categories are necessary and useful to sort through what would otherwise be chaos, we run into trouble when we start to forget that categories are arbitrary. They define what we want them to, not the other way round.
I think this reflects a really powerful understanding of the world, similar to the understanding that good scientists have that at best their "laws" are models that attempt to describe and predict the world we experience, but they are constructs of humanity.
And finally, here's a little passage that I sort of liked because it relates to how I want to make my educational technology project work. It's knit in too much with the anecdote to pull it out cleanly, and I don't like how it almost compares a basketball player to a working dog, but I think there's something to the idea about adjusting goals to maximize effort and persistence. It's related to Mihaly Csikszentmihalyi's "Creativity: Flow and the psychology of discovery and invention", which everybody in education loves to reference. This is how differentiation can work:
On some afternoons, a young star on our basketball team, a first-year student, would also be there working with Bob Bruzga and others to improve his jump by trying to snatch a small flag dangling above his head from a very long stick. As the student doggedly repeated the exercise over and over, a team of professionals analyzed his jump to make recommendations about his performance, but also recalibrated the height of the flag to his most recent successes or failures, putting it just out of reach one time, within the grasp of his fingertips the next, and then, as his back was turned, just a little too high to grasp. It is a method I've seen used to train competition-level working dogs, a marvelous psychological dance of reward and challenge, adjusted by the trainer to the trainee's desires, energy, success, or frustration, and all designed to enhance the trainee's ability to conceptualize achievement just beyond his grasp. It is also a time-honored method used by many great teachers. Set the bar too high and it is frustratingly counter-productive; set it too low and dulled expectations lead to underachievement. Dogged is the right word. That kid just would not give up. I would watch him and try again to inch my arm forward.
(Davidson was in rehab for an arm injury.)
This has gotten long, which is a testament to the richness and (possibly or) diversity of ideas in the book. I'll just add that of course Davidson is a big supporter of computer literacy, the internet, and so on, which makes it a little strange that her book doesn't really seem to be available online. And it's a book. I don't really see any place in Davidson's vision of the future where people read whole books. And it's not just Davidson caught in this apparent contradiction: a Mozilla conference on using the web in education had as its final product - that's right, you guessed it - a book. At least it's available as a PDF, but it blows my mind that it doesn't seem to also be available as linked HTML pages. Am I just missing it?
So there it is. Reflections on Now you see it. I will make no attempt to give the book a grade, number of stars, etc. :)
Tuesday, December 20, 2011
(link) on the non-availability of excellent prepared curricula
This post from a Washington Post blog (why do they have everything that resonates with me?) is about one particular Teach for America teacher who experienced the curriculum vacuum, but I felt very much the same way when I was teaching in New York. "Hi teacher, it's your first year, make complete curricula from scratch for five classes," the system seems to say. I would love to see some better options.
New teacher decries lesson plan gap
Sunday, December 18, 2011
Thoughts on The Good School
I just finished reading The Good School by Peg Tyre. From the introduction, here's what the book sets out to do:
If you are interested in "looking under the hood," this book is for you. It will help bring you up to speed on some of the most crucial issues and controversies that are likely to affect your child's education. It will provide you with a SparkNotes version of the history of education to explain to you why things are the way they are. It will introduce you to the freshest thinking - and some of the most innovative ideas - about how to help our kids do better. But more than that, it will help you judge the value of these ideas by providing you with the most solid research available. In areas where research is not yet clear, you will meet people and hear about research that will be creating headlines - and perhaps school policy - in the years to come.
And it does it with some success. The audience is very clearly parents, and there is a good deal of pandering. Parents are the greatest! etc. And the beginning of the book is overfull of this kind of thing, including a chapter specifically for busy parents who couldn't possibly read a whole book, who just want to find a good preschool and go to sleep. So this chapter is sort of a drag, giving the recommendations without the evidence, and so on.
Chapter two on testing picks up a little bit, and there's a cogent explanation of the limitations and possible unintended consequences of standardized tests. The book might be worthwhile just for this explanation. Two interesting quotes. First, the "law" named after Donald T. Campbell:
"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitore."
And some words from John Tanner, who runs Test Sense:
He draws a parallel between encouraging or even allowing teachers to teach to the test, and encouraging people to study for the eye test before they go to the DMV. "What would happen," Tanner says, "is that we'd have a lot of people passing the test but not have a clue whether they can actually see well enough to drive."
Chapter three on class size is interesting, because while acknowledging that everybody wants small class size and also that all the research seems to show smaller classes are better, if by small margins, Tyre tries to conclude that class size is not such a big deal within the ranges you actually see in practice - and maybe she's right that 22 kids is not worlds better than 24, 34 not so much better than 36, and that teacher quality is more important, but I can't accept her apparent conclusion that smaller classes are not something to pursue. Never mind that nowhere is the fundamental issue of the reality of differentiation addressed: one teacher cannot do different things with different students at the same time. (Tyre does relate what I think may be an awfully common example of bad differentiation, which is simply giving a struggling, slower student less of the same stuff to do. Lowering expectations is better teaching?)
The book really picks up at chapter four, where Tyre has a strong case that there is consensus in the literature about how best to teach reading, and that too often this conclusion is ignored, to the detriment of students. To my amusement, there is a section headed "Teaching reading is rocket science," which echoes a section from a speech (given after the publication of the book) by Los Angeles superintendent John Deasy, "Teaching is rocket science". Everybody wants to talk about rocket science now. And the research-backed right way to teach reading is phonics - or, as they say in the UK, "systematic synthetic phonics" (because you synthesize, or blend, sounds together) - which I happen to know because I read the relevant section of the 2010 England Department for Education white paper "The importance of teaching", which lists among the goals in its executive summary that they must:
Ensure that there is support available to every school for the teaching of systematic synthetic phonics, as the best method for teaching reading.
So it seems that people are coming around to this, but I remember even in my MAT program what I learned about how people teach reading was that there were a number of competing philosophies, and none of them necessarily had the upper hand. Phonics has the upper hand. It has the only hand. You must teach children to sound out words.
There's a chapter on math too, which I generally agree with. Tyre emphasizes the importance of carefully planned curriculum that helps students progress through conceptual understandings of carefully arranged mathematics. There's praise for Singapore math and, more indirectly, Common Core.
Tyre has support for recess. Great. Also she spends time talking about how some teachers are better than others. It's amazing that this needs to be said, but apparently it really does. The last chapter is the tale (true, she says) of how some parents got involved and helped make their local school better. Tyre supports parent involvement and system transparency, and I can only hope that things work out in general as well as they do in her story.
There are plenty of things to disagree with or be frustrated by in this book, including some little ones that are just annoying or unfortunate. Using "an octagonal" instead of just saying an octagon. Leaving the "l" out of "public". These are the most superficial. I am more concerned when I can't find a relevant endnote for a numeric claim. But the book is at its best when it goes all the way to a conclusion on a topic, provides real evidence for that conclusion, and invites the reader to engage it. Together with the outlines of educational history, I think it is a worthwhile book that can definitely help parents get started with a more informed and powerful involvement in education.
Saturday, December 17, 2011
Selections from and thoughts on Thinking, Fast and Slow
I recently read Thinking, Fast and Slow by Daniel Kahneman. I haven't talked to people this much about a book I'm reading in quite some time, which is appropriate, in that Kahneman does a good job of applying his own psychological findings to the way his book is written, and to the audience it's written for:
Nisbett and Borgida summarize the results in a memorable sentence:
Subjects' unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.
So while he does include the general, he also works well with the specific. One conclusion of research is that people learn better from the specific, and:
This is a profoundly important conclusion. People who are taught surprising statistical facts about human behavior may be impressed to the point of telling their friends about what they have heard, but this does not mean that their understanding of the world has really changed. The test of learning psychology is whether your understanding of situations you encounter has changed, not whether you have learned a new fact. There is a deep gap between our thinking about statistics and our thinking about individual cases. Statistical results with a causal interpretation have a stronger effect on our thinking than noncausal information. But even compelling causal statistics will not change long-held beliefs or beliefs rooted in personal experience. On the other hand, surprising individual cases have a powerful impact and are a more effective tool for teaching psychology because the incongruity must be resolved and embedded in a causal story. That is why this book contains questions that are addressed personally to the reader. You are more likely to learn something by finding surprises in your own behavior than by hearing surprising facts about people in general.
And on his audience, Kahneman says:
Observers are less cognitively busy and more open to information than actors. That was my reason for writing a book that is oriented to critics and gossipers rather than to decision makers.
And while I'm nearly always a critic, I even became a bit of a gossip on the subject of this book, because of the fascinating collection of important findings that are all made immediately personal and applicable.
The basic thesis of the book is this: people have fundamentally two kinds of thinking going on in their heads. System one is fast, intuitive, and easy. It often makes the right decision for you, but it is vulnerable to a collection of systematic deficiencies. System two is slow, deliberate, and difficult. It can make good decisions if you give it time and effort, but it is limited.
One systematic problem with system one is that when you hear that seven people were killed by sharks last year, you are more scared than you should be (or not scared at all). Two reasons:
The focusing illusion:
Nothing in life is as important as you think it is when you are thinking about it.
And one of many biases of system one resulting in a failure to get statistical thinking right:
The bias has been given several names; following Paul Slovic I will call it denominator neglect. If your attention is drawn to the winning marbles, you do not assess the number of nonwinning marbles with the same care. Vivid imagery contributes to denominator neglect, at least as I experience it.
Here winning marbles are people getting munched by sharks, and nonwinning marbles are people not so munched. Shark munching is more vivid than marbles.
Kahneman talks a lot about problems people, even statisticians, have with statistics. Like this question:
For a period of 1 year a large hospital and a small hospital each recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?
- The larger hospital
- The smaller hospital
- About the same (that is, within 5% of each other)
The answer is that the smaller hospital will vary more and so have more such days. But people don't get this question right. That's often Kahneman's conclusion. People don't get this stuff right. Here he takes this kind of thinking and goes on to use it to support this claim:
The truth is that small schools are not better on average; they are simply more variable.
Well, this is a big area of debate, especially in NYC, but it is the case that people usually just ignore the fact that student variability matters more in a smaller group of students. I didn't really see enough evidence in the text to conclude that Kahneman settled this issue, but it did give me another thing to think about when I see claims like "a larger proportion of charter schools are in the bottom 10% of all schools". If charter schools are usually smaller than other schools, we should expect effects like that from chance alone.
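To make the size effect concrete, here is a quick simulation sketch (my own, not from the book), treating each birth as an independent 50/50 event and picking 15 versus 45 births a day purely for illustration. The smaller hospital logs "more than 60% boys" days far more often even though the underlying rate is identical, and the same logic applies to small schools showing up disproportionately at both ends of a ranking:

import random

def fraction_of_extreme_days(births_per_day, days=10000, threshold=0.6):
    # Fraction of days on which more than `threshold` of births are boys,
    # with every birth an independent 50/50 event.
    extreme = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > threshold:
            extreme += 1
    return extreme / days

random.seed(0)
print(fraction_of_extreme_days(15))  # small hospital: roughly 15% of days
print(fraction_of_extreme_days(45))  # large hospital: roughly 7% of days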
In many instances I immediately thought to myself, "if people just knew the math, they could work this out, and they wouldn't make these mistakes!" Part of Kahneman's point is that mistakes happen even when people do know the math, if they don't actually do it, instead relying on their "gut" (system one). But there was also one place where I wasn't sure I did know the relevant math:
Imagine an urn filled with balls, of which 2/3 are of one color and 1/3 of another. One individual has drawn 5 balls from the urn and found that 4 were red and 1 was white. Another individual has drawn 20 balls and found that 12 were red and 8 were white. Which of the two individuals should feel more confident that the urn contains 2/3 red balls and 1/3 white balls, rather than the opposite? What odds should each individual give?
In this problem, the correct posterior odds are 8 to 1 for the 4:1 sample and 16 to 1 for the 12:8 sample, assuming equal prior probabilities. However, most people feel that the first sample provides much stronger evidence for the hypothesis that the urn is predominantly red, because the proportion of red balls is larger in the first than in the second sample. Here again, intuitive judgements are dominated by the sample proportion and are essentially unaffected by the size of the sample, which plays a crucial role in the determination of the actual posterior odds. In addition, intuitive estimates of posterior odds are far less extreme than the correct values. The underestimation of the impact of evidence has been observed repeatedly in problems of this type. It has been labeled "conservatism."
I kind of guess this is some Bayesian thing, and maybe after a few minutes with Google I could work it out, but off the top of my head I don't know how to solve for those results. And it may be that some people without math backgrounds would have this experience for more examples in the text, and in life. I should probably try and work this out. But the other fun thing about the passage is how the last sentence could be read as a dry joke at conservatives' expense.
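Having now worked it out: it is indeed a Bayes calculation with two hypotheses ("mostly red": 2/3 red, versus "mostly white": 1/3 red) and equal priors. Each red draw multiplies the odds in favor of "mostly red" by (2/3)/(1/3) = 2, and each white draw divides them by 2, so the posterior odds come out to 2 raised to (red minus white). A tiny sketch of that calculation (mine, not Kahneman's):

def posterior_odds(red, white):
    # Odds in favor of the "2/3 red" urn over the "1/3 red" urn,
    # starting from even prior odds: likelihood ratio = 2 ** (red - white).
    return 2 ** (red - white)

print(posterior_odds(4, 1))   # 8  -> the quoted 8 to 1
print(posterior_odds(12, 8))  # 16 -> the quoted 16 to 1

So the 12:8 sample really is the stronger evidence, because in this setup only the difference between red and white draws matters, not the proportion.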
There is another interesting argument based on regression to the mean, relevant to teachers and anyone who considers punishment and reward. People are statistically likely to do better after a very bad performance, and to do worse after a very good one - whether you punish or reward at all:
I had stumbled onto a significant fact of the human condition: the feedback to which life exposes us is perverse. Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty.
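A tiny simulation (mine, not from the book) makes the perversity easy to see: model each performance as a stable skill level plus random noise, give no feedback at all, and trials that follow an unusually bad one still look like "improvement" while trials that follow an unusually good one look like "decline":

import random

random.seed(1)
skill = 70.0
trials = [skill + random.gauss(0, 10) for _ in range(100000)]

after_bad  = [trials[i + 1] for i in range(len(trials) - 1) if trials[i] < 50]
after_good = [trials[i + 1] for i in range(len(trials) - 1) if trials[i] > 90]

# Both averages come out near 70: regression to the mean, with no praise
# or punishment anywhere in sight.
print(sum(after_bad) / len(after_bad))
print(sum(after_good) / len(after_good))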
Another really interesting topic is that of experts. People tend to be too confident. Experts tend to be WAY too confident, even when results are essentially random. Kahneman offers convincing evidence that success in the financial markets, at least in picking investments, is essentially random. And yet everybody in the business thinks they're so damn GOOD at it.
...the illusions of validity and skill are supported by a powerful professional culture. We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers.
And the expert illusion shows up in social science fields too.
Each of these domains entails a significant degree of uncertainty and unpredictability. We describe them as "low-validity environments." In every case, the accuracy of experts was matched or exceeded by a simple algorithm.
That's right, a simple algorithm is better than an expert, mostly because experts tend to make over-confident, over-extreme predictions that turn out to be way off when you wait and check. And it doesn't even have to be a particularly GOOD algorithm. Kahneman mentions Robyn Dawes's 1979 article "The Robust Beauty of Improper Linear Models in Decision Making", which you can find online:
ABSTRACT: Proper linear models are those in which predictor variables are given weights in such a way that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in Paul Meehl's book on clinical versus statistical prediction and a plethora of research stimulated in part by that book all indicates that when a numerical criterion variable (e.g., graduate grade point average) is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method; for example, they may be obtained on the basis of intuition, derived from simulating a clinical judge's predictions, or set to be equal. This article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit (i.e., equal) weighting is quite robust for making such predictions. The article discusses, in some detail, the application of unit weights to decide what bullet the Denver Police Department should use. Finally, the article considers commonly raised technical, psychological, and ethical resistances to using linear models to make important social decisions and presents arguments that could weaken these resistances.
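To make "improper linear model" concrete, here is a minimal sketch of what I take unit weighting to mean (the applicants and numbers are invented): put each predictor on a common scale and add them up with equal weights - no criterion data, no fitted coefficients.

```python
def standardize(values):
    """Convert raw scores to z-scores so predictors share a common scale."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

# Hypothetical applicants: (GRE score, undergrad GPA, rated letter quality)
applicants = {
    "A": (680, 3.2, 4.0),
    "B": (720, 3.9, 3.0),
    "C": (600, 3.5, 4.5),
}

# Standardize each predictor column, then sum with equal (unit) weights.
names = list(applicants)
columns = list(zip(*applicants.values()))
z_columns = [standardize(list(col)) for col in columns]
unit_weighted = {name: sum(zs) for name, zs in zip(names, zip(*z_columns))}

for name, score in sorted(unit_weighted.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```

The ranking this produces is the whole model; Dawes's claim is that something this crude already tends to beat clinical intuition.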
A further problem related to experts is that if you do happen to be an intelligent expert, aware of your own fallibility, people won't trust you:
Experts who acknowledge the full extent of their ignorance may expect to be replaced by more confident competitors, who are better able to gain the trust of clients. An unbiased appreciation of uncertainty is a cornerstone of rationality - but it is not what people and organizations want. Extreme uncertainty is paralyzing under dangerous circumstances, and the admission that one is merely guessing is especially unacceptable when the stakes are high.
This is very interesting to me, because an expert who knows she is fallible and also knows people won't trust her if she says so can take the justifiable approach of feigning confidence in an effort to favorably influence a situation. The effect is that people who are trustworthy sound exactly like people who aren't. Fascinating.
It reminds me of the concerns around reporting confidence intervals or margins of error. If you are intelligent, you know what they are. But if you report them, people who don't understand will think you are less trustworthy. I would argue that, if possible, you should only tell intelligent, informed people about your margins of error and leave them off when talking to everyone else. This is condescending, of course, but it could be better than having the majority of people think they can discredit you because "he even admits he could be wrong!" And of course it's difficult to report differently to different audiences, especially when you're writing for something like a periodical with a broad readership.
And the last interesting thing in the book is about happiness. Kahneman looked into how good people's lives are. You can do this two ways: asking people how they feel about their lives overall, or looking at how they feel moment by moment through the day. Kahneman puts more weight on the latter, which I think is a pretty fair choice. He measures it with the "U-index", roughly the fraction of the day a person spends in an unpleasant emotional state.
The use of time is one of the areas of life over which people have some control. Few individuals can will themselves to have a sunnier disposition, but some may be able to arrange their lives to spend less of their day commuting, and more time doing things they enjoy with people they like. The feelings associated with different activities suggest that another way to improve experience is to switch from passive leisure, such as TV watching, to more active forms of leisure, including socializing and exercise. From the social perspective, improved transportation for the labor force, availability of child care for working women, and improved socializing opportunities for the elderly may be relatively efficient ways to reduce the U-index of society - even a reduction by 1% would be a significant achievement, amounting to millions of hours of avoided suffering.
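For concreteness, my understanding is that the U-index counts an episode as unpleasant when the strongest feeling reported for it is a negative one, and then takes the share of time spent in such episodes. A toy calculation over one invented day:

```python
# Each episode: (name, minutes, strongest positive rating, strongest negative rating)
# Ratings use an arbitrary 0-6 scale; all of this data is made up for illustration.
day = [
    ("commute",  45, 2, 4),
    ("work",    240, 3, 2),
    ("lunch",    60, 4, 1),
    ("meeting",  90, 1, 3),
    ("dinner",   75, 5, 1),
    ("TV",      120, 3, 2),
]

unpleasant_minutes = sum(mins for _, mins, pos, neg in day if neg > pos)
total_minutes = sum(mins for _, mins, _, _ in day)
u_index = unpleasant_minutes / total_minutes
print(f"U-index for the day: {u_index:.0%}")  # share of the day spent unhappy
```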
I was interested in his comments on religion:
Religious participation also has relatively greater favorable impact on both positive affect and stress reduction than on life evaluation. Surprisingly, however, religion provides no reduction of feelings of depression or worry.
He also had a chart that seemed to suggest getting married made you less happy in the long run, but then he argued that we really shouldn't interpret it that way. Good? Well, I'll finish with what I thought was probably the most feel-good moment of the whole darn book:
It is only a slight exaggeration to say that happiness is the experience of spending time with people you love and who love you.
Thursday, December 15, 2011
I'm thinking a lot about educational technology and continuing the experiments I began with a rather dry implementation at naldaramjui.com. This article (from The Washington Post's Answer Sheet blog, one of the most consistently good sources of interesting educational perspectives I know of) really puts into words a lot of my thoughts - especially this:
What is happening in the classroom that could not be duplicated by a computer?
If the answer is “nothing,” then there is a problem. In fact, I believe that if teachers can be replaced by computers, they should be. By that I mean if a teacher offers nothing that your child can’t get from a computer screen, then your child might as well be learning online. On the other hand, no screen will ever replace a creative, engaged, interactive, relevant, and inspiring teacher, especially one who takes advantage of the precious face-to-face experience of people learning together. Collective, communal, collaborative learning is key to many of the ways we all work now, often in collaborative and distributed ways. How is the school working to teach real, human, management, leadership, and collaborative skills in the unique environment of the classroom?
The article is by Cathy Davidson, and it makes me interested in possibly reading her book, Now you see it.