
elleng

(130,156 posts)
Tue May 24, 2016, 01:57 PM

Leaked Questions Rekindle Fierce Debate Over Common Core Tests.

'The questions, taken from a Common Core fourth-grade reading test, came to a Columbia professor in an email from an anonymous teacher, part of a blistering critique of the exam. The professor put the questions and the critique on her blog, and before she knew it, her posting ignited on the Internet, fueling a new round of anger about high-stakes standardized testing.

As fast as the company that manages the tests played whack-a-mole, trying to get the questions taken down, teachers, parents and education experts kept spreading them on blogs and Twitter — despite the fact that the questions are still being used right now in testing. Some argued that robust public discussion of test items and their shortcomings was the best way to ensure better tests.'>>>

http://www.nytimes.com/2016/05/25/us/leaked-questions-rekindle-fierce-debate-over-common-core-tests.html?


arcane1

(38,613 posts)
1. Good lord, they're arguing that copyright law forbids the questions from going public?
Tue May 24, 2016, 02:15 PM

Talk about "part of the problem"!

It seems the questions are almost deliberately made too difficult. Well, I suppose that would conveniently make the teachers look bad.

underpants

(182,279 posts)
2. It's the old pineapple with a trick up its sleeve trick
Tue May 24, 2016, 02:29 PM

My head hurts from reading that. We aren't in a Common Core state.

Ford_Prefect

(7,828 posts)
3. Shred them and use them for a community barbecue. Their over-hyped, well-paid devastation
Tue May 24, 2016, 03:43 PM

of meaningful learning and their disrespect for the complexities of American culture need to go down in flames.

Individual learning processes are complicated and operate on several levels. Common Core has been used to pummel independent thinking out of learning and to punish those who learn at different rates or in different ways, as well as those whose values differ from those of the Ivy-covered towers. It likewise punishes those teachers who see the path to wisdom and knowledge as an illuminated dialogue rather than rote methodology and sacred tests.

Peace Patriot

(24,010 posts)
4. Almost everything is privatized now, including the 'TRADE SECRET' voting machines.
Tue May 24, 2016, 03:47 PM

Did you know that we are forbidden, by law, to review the 'TRADE SECRET' code in all of the voting machines all across our land?

Just sayin'. Privatized testing of our children--that is, testing of our children FOR PROFIT--is no surprise.

Igel

(35,197 posts)
5. The blogger does something a bit odd.
Tue May 24, 2016, 09:52 PM

It's apparently the NY version of the PARCC test that's at issue.

And the blogger points out that the test is at odds with some of the requirements of the Common Core standards.

What's missing is the connection between the two assertions. Why? Because the NYS standards are not verbatim CC.

Take one issue: an essay question deals with "structural elements," but the CC standards only require that they be explained, presumably orally, not in writing. The horrors! The test is unfair! Oh, noes!

However, the NYS writing standards say,

Standard 2: Students will read, write, listen, and speak for literary response and expression.

• Write interpretive and responsive essays that

- describe literary elements such as plot, setting, and characters

- describe themes of literary texts

- compare and contrast elements of texts


Which is exactly what the PARCC question requires. In other words, the conflict isn't between the NYS standards to be taught in 4th grade and the test question; the conflict is between the national CC standards for 4th grade and a test question written for NYS students. Oh.

And so it continues. She complains that the test isn't aligned to standards it's not supposed to be aligned with. In following her argument, we're assuming she's picked the right standards for alignment. She's the expert, after all. And she's led us down the garden path. I personally don't think I look good with a big old iron ring in my nose.

The quibble over the lexile ranking of one reading passage is possibly important, possibly not. The difficulty with that part of the blog is a lack of knowledge. The book is 6th-8th grade or 9th grade, depending on how you gauge lexiles. But this is presumably a cold read, and there's no information given on the lexile ranking of the passage itself. The blogger seems to be saying that all portions of the book must be at the same lexile. This is not true, on its face. Is this the case with the shark text? Who knows? We certainly don't, but we're asked to condemn based on the ignorance we possess and the certainty that the teacher wouldn't be trying to mislead us. Or herself.

I've studied literature in a few languages. As I've learned languages and done "sample" readings along the way, I've read selections that were easy. Not simplified, just easy. Going from selections to entire works was traumatic. One Turgenev story pops out as especially memorable: I could read 20 pages and look up perhaps 10 unknown words ... after the first couple of pages. Those first couple of pages described in fantastic detail a forest glade at sunrise--types of plants and plant parts, trees and tree shapes, landscape textures, ground shapes, gradations of light and dark and varieties of green and grey and brown. There were dozens of fairly low-frequency words on each page I'd never seen. The first page or two determined the lexile. After that, the lexile ranking would have dropped down many grade levels. The point: The lexile of "the passage" is no higher than the lexile of the work as a whole, but can be far lower. It's a serious error to think that a part is necessarily equivalent to the whole.


As for the issue of fairness, I'd note there were questions on a Texas standardized test that a student had difficulty with. We're allowed to help by reading the questions out loud or with some mechanical aspects (correct page, etc.), but not with answers. In other words, some of the info in the test inevitably leaks out among teachers. However, it was obvious that some questions were difficult, some were confusing, and some were just miswritten. "Refer to diagram 3," when there were only two diagrams, labeled "1" and "2"; the question should have read "refer to diagram 1." Or the question says "refer to the dialog between Jake and Ben on lines 10-18 of the text," except that there was no Jake and no Ben in the reading passage. You might realize that Jake and Ben were in the next story, or that the question works if you read it as referring to "Mary and Sue" on lines 10-18. Or you might just guess randomly as your stress level spikes.

Either way, if you know the question and can figure out ahead of time what it must be asking--or know that it's a nonsense question and not to spend 5 minutes puzzling over it--you're at an advantage. Some students have an easier time than others, but all are measured by the same metric, inequitably. Test questions, good, bad, or indifferent, all wash out in the end when the passing-score cut-off is decided, based on an administration assumed blind for all students. When individual students get that advantage, it's called "cheating." When a teacher does this for some students, it's called "freedom of speech." I guess students who blurt out, "Hey, the answer to #10 is (b)" or "The answer to #3 is 'synecdoche'!" don't get freedom of speech.

I'd say that the tests Texas just gave weren't proofed well. On the other hand, I don't know if any of the goof-ball questions even counted toward a student's score. Why? Because every test has experimental questions. There may be 50 questions, but only 40 or 45 are scored. The others are there to gauge discrimination and difficulty; some resurface later and some are scrapped. Keep this in mind with the odd questions that the blogger decries. We assume they counted. We can't know that.


All standardized tests are, moreover, constructed with a similar kind of item composition. Some items are entry level: if you're in the bottom quartile for your age group, you should be able to answer them. Everybody gets a foot in the door, ranked someplace above zero. Where you, the average student, should land is in the upper half, where most of the questions are pitched and where the rankings are most sensitive. However, there are also much harder questions above those, a challenge to the 1%ers. (Most young teachers these days, I am given to understand, think that the average kid should readily achieve 100/100 because the test should just cover the essential points. Reach for the mean! Strive for mediocrity!)

Take a released science question from Texas, from a now-defunct test; Texas is vehemently non-CC. A car goes 1/3 of the way around a race track with a diameter of (let's say) 500 m. This takes 1 second. If the coefficient of friction between the tires and the track is 0.2, at what angle must the track be banked to avoid skidding? (It gave the car's mass as well. The exact numbers themselves are lost in the Lethe.)

To solve this you have to find the distance traveled, then the car's speed, then the centripetal force needed to keep the car on the track, and then the maximum force friction alone can supply. You'd find that the car would skid. So then you'd have to bank the track so that the horizontal component of the track's normal force makes up the required centripetal force. The standard formula chart doesn't give the geometry formula, and over half the students at the grade level the test was written for didn't have trig yet. Most of the teachers of regulars classes for that subject were in the dark about how to find the answer. Yet there it was.

The AP teachers' students, however, would mostly have been able to do this, as would some of the pre-AP students. It's not an entry-level question. It's not a "this is where the average student should be" question. It's a "let's see what you can do, baby Einsteins" question. Those who think that a bell curve should look like a sliding board were outraged. I was amused. At their outrage. And by the fact that that year, at the school I was observing, the science team had decided that centripetal force was omissible, because it wasn't one of the few required standards that made up 60% of the test but one of the dozens of "supporting" standards that made up only 40% of the test. In other words, that school's regulars students wouldn't even have seen the topic. Yet there it was, and it was a fair question.
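For the curious, the chain of calculations looks roughly like the sketch below. The original numbers are lost, so the values here are placeholders, and the banking step uses the simplest frictionless relation, tan(theta) = v^2/(r*g), which may or may not be the form the test expected; treat it as an illustration of the steps, not the actual test solution.

import math

# Placeholder values only -- the real test numbers are forgotten.
diameter = 500.0       # m, track diameter (the "let's say" figure above)
fraction_of_lap = 1/3  # the car covers 1/3 of the circumference
elapsed = 1.0          # s
mu = 0.2               # tire-on-track coefficient of friction
g = 9.8                # m/s^2
# The question also gave the car's mass, but it cancels out of both
# the friction check and the frictionless banking relation.

radius = diameter / 2
distance = fraction_of_lap * math.pi * diameter    # arc length traveled
speed = distance / elapsed
a_needed = speed**2 / radius                       # centripetal acceleration required
a_friction = mu * g                                # most that friction alone supplies on a flat track

print(f"speed = {speed:.1f} m/s, needed = {a_needed:.1f} m/s^2, friction gives {a_friction:.2f} m/s^2")
# Friction falls far short, so the car skids on a flat track.

# Bank the track so the horizontal component of the normal force supplies
# the centripetal force: N*sin(theta) = m*v^2/r and N*cos(theta) = m*g,
# hence tan(theta) = v^2/(r*g).
theta = math.degrees(math.atan(speed**2 / (radius * g)))
print(f"bank angle (frictionless model): {theta:.1f} degrees")

With these stand-in numbers the answer comes out absurdly steep, which mostly shows how much the forgotten figures matter; the point is the sequence of steps, not the digits.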

For every question like that there were two like "How long does it take a stone to fall 3 meters from rest with an acceleration of g?" and one "if a car goes at 60 mph for 2 hours, how far does it travel?" question. I remember none of those kinds of questions; the one that stands out is the one that would have given AP students a run for their money. (Along with one that dealt with Faraday's law where all the info and distractors were provided via a truly gnarly graphic. Again, the mediocre and easy questions are thoroughly Lethe-rinsed.)
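To make the contrast concrete, here are those two easy ones worked out, assuming g = 9.8 m/s^2 and the usual h = (1/2)*g*t^2 kinematics:

import math

g = 9.8  # m/s^2

# "How long does it take a stone to fall 3 meters from rest?"
# From h = (1/2)*g*t^2  ->  t = sqrt(2*h/g)
t = math.sqrt(2 * 3.0 / g)
print(f"fall time: {t:.2f} s")      # about 0.78 s

# "If a car goes 60 mph for 2 hours, how far does it travel?"
print(f"distance: {60 * 2} miles")  # 120 miles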

Lots of distrust going on, with a huge hankering to be outraged over what we pride ourselves on knowing but which isn't so. Bodes poorly for all kinds of future outcomes.