Why Learn History Book Club // Get your copy and join the conversation!

Drumroll please! We are thrilled to announce our summer OER Conference for Social Studies Book Club pick! This month you are invited to join us in reading Why Learn History (When It’s Already on Your Phone) by Sam Wineburg (IndieBound / Amazon), who happens to be one of our keynote speakers for the OER Conference for Social Studies. 

Since the 1990s, Sam Wineburg has been one of the leaders in research on historical thinking and the teaching and learning of history. He is also one of the founders and directors of the Stanford History Education Group (sheg.stanford.edu), one of the largest providers of free educational resources in the world. Wineburg believes it is essential to provide students with the critical thinking tools necessary to sort through the incredible amount of information thrown at them every day, and that doing so may require updates to traditional teaching practices.

Our community discussion about Why Learn History will kick off with our first book club questions on July 14 right here in this thread, located in the OER Conference for Social Studies Discussion Forum. We'll post a new question each Thursday for three weeks leading up to the conference, which takes place August 3-4. So grab a copy of the book, bookmark this thread so you can return on the 14th, and prepare for some rich discussions with other members of the community.

Our first week of conversations will cover Part One of the book. Let the reading begin!

Why Learn History // Week One Questions 

We are excited to start our book club conversation on Why Learn History (When It's Already on Your Phone) as we make our way to Sam Wineburg's August 4th keynote address at the OER Conference for Social Studies. Post your thoughts and answers to the questions below, or add your own question. Erik Christensen will be leading the discussion and will be checking in throughout the day to respond to the conversation.

In Chapter 1, "Crazy for History," Wineburg gives a critical analysis of American testing systems (and their effectiveness) and reaches the conclusion that "...no national test can allow students to show themselves to be historically literate." Further, Wineburg claims that multiple-choice tests "convey the dismal message that history is about collecting disconnected bits of knowledge..., where one test item has nothing to do with the next, and where if you can't answer a question in a few seconds, it's wise to move on to the next. [These] tests mock the very essence of problem solving."

  • How should we, as teachers of history in 2022, test our students?
  • How do you determine if your students are learning history?  
  • How do you, to paraphrase Wilfred McClay, make those hard choices about what gets thrown out of the story so that the essentials can survive?

Why Learn History // Week Two Questions 

Bloom's Taxonomy comes up in many professional development sessions and teacher-admin conversations, and there may even be a poster of the pyramid in your classroom. Wineburg suggests that in a history classroom, Bloom's Taxonomy should be inverted so that knowledge is at the top.

  • Do you agree with Wineburg's thesis that Bloom's Taxonomy doesn't work in a history classroom?
  • What would an inverted version of the pyramid look like in your practice?   
  • Is knowledge the result of critical thinking? Or is knowledge needed to think critically?  

Why Learn History // Week Three Questions 

In Chapter 7, "Why Google Can't Save Us," Wineburg dives deep into the internet's ability to confuse expert and novice historians alike (and everyone else!). He describes several case studies that highlight how difficult it can be to assess the information we come across online.

  • As more students are conducting research online, how are you managing the information they are exposed to?   
  • Do you teach digital literacy?   
  • How do you practice online reasoning or claim testing? How do you practice it with your students?   
  • What routines or activities have made the biggest impact on negotiating the power of the internet in your classroom?

Post your response to the questions in the comments below as we complete our final week of the Book Club. Be sure to join us for Sam Wineburg’s Keynote Address on August 4 at 1:00 PM PDT!  

Replies
  • Hi everybody. Figured I'd wade into this thread because reading everyone's posts and being familiar with Wineburg's book got me thinking. Anyway, I often find myself TL;DR-ing my posts. I'm sure this post will be no exception, so thank you up front if you are willing to get through it.

    The thing that got me going was the first question about "how to test." I'm mostly sharing this as a chance for a sounding board; if anyone wants to engage or comment, feel free. My answer to the question of "how to test" is that it entirely depends on your goals and your context, so there is no answer in a singular or monolithic sense. It is also like the Eric Clapton song "It's in the Way That You Use It," with the "it" being whatever assessment tool you are using.

    For example, to play devil's advocate: I know many like to throw MCQs under the bus as an assessment tool, and Wineburg's points about the problems of MCQs are legitimate; using them does come with a host of problems, the worst being over-reliance on them, particularly in high-stakes assessment. That being said, depending on my goals, I still find them useful and maybe even necessary:

    1) MCQs can be great for launching class discussions around the answers and the questions (especially if you are teaching students how to parse questions or interpret a stimulus). 

    2) They are quick when assessment needs to be quick, both in terms of giving feedback and in formatively assessing misunderstanding or confusion in real time.

    3) They can motivate (not all students, but some). I mean this in both the grade and the pedagogical sense. In the grade sense: by the time I see students as high school sophomores at my school, the culture of "grades" has been so thoroughly ingrained that to deny a grade's motivational power over students, even if it is an extrinsic motivation with all the attendant issues of turning learning into a transaction, seems mostly academic and not practical. Students are motivated by their MCQ "reading check" scores to do things they otherwise would not do, and I am willing to use that because in my context there is a real return that makes a difference. Admittedly this is more about the "check" part than the MCQ format, but the efficiency of MCQs is what allows me to do the checks often enough to encourage my students' habits and build accountability. In the pedagogical sense: I have a clicker system I use with MCQs for their reading checks, and these clicker checks are consistently one of their favorite routine activities (based on their feedback and despite my skepticism). As one student put it, "It's like a Kahoots with teeth, and I like that."

    4) MCQs have logistical advantages. Here is where they really shine.

    A) They leave a clear paper trail, and they provide data that is easier for content teams to collect and analyze, especially if you tether each question to a standard.

    B) They are relatively objective if standardized (i.e., the same questions used by multiple teachers across the same course), which is useful when trying to align multiple classes and teachers to a curriculum. In my experience, if you don't worry about such consistency and alignment, you get serious equity issues between classes (e.g., Little Joe benefits because he gets teacher X or has the class after lunch; Little John is penalized because he gets teacher Y or has the class first thing in the morning). This may be the case regardless of assessment instrument, but aligned MCQ questions give you a quick data point to track and verify such perceptions.

    C) They can be easily "corrected" or reassessed by students who wish to improve (#growthmindset) if a teacher is willing to provide such a system.

    D) Lastly, while it may be true that logistics shouldn't be the main driver of the "why" of assessment, the reality of limited resources, limited time, and teaching 150 students a day means that efficiency often wins out over other concerns. A reasonable level of doability given context has to be part of best practice. Otherwise, teachers don't do it, and even if they do, the time to grade means the feedback comes days, weeks, or even months after the fact and can get squishy in terms of reliability.

    All this is to say, "how to assess" is about goals and context. I think diversity of assessment is good (similar to Vince Furlong's outline). I think a student's mastery of a standard should be triangulated from multiple data points and "looks," not just one. And I would never advocate relying only on MCQ tests or quizzes, but it is okay to use MCQs when they make sense. Whatever assessment instrument is used should be understood in terms of what it can and cannot do and its tradeoffs. If goals and context say another instrument would work better, then use the other. This is the big advantage of Backward Design (#UBD): it forces you into thinking about the goal first and then about how you would really know if you achieved it. But at the same time, just because a project or activity seems more engaging or creative does not always make it a better way to assess. Projects can be huge time sinks. Activities can sprawl. Students can miss the point, fail to see the connection, or feel like it was a lot of wasted effort for too much show and too little substance or significance. This is not an argument that projects are bad; the point is to be judicious with all assessment options.

    Lastly, and probably most importantly: what happens after the assessment? How does the student use the assessment's evaluation and feedback to learn? What reflection do they do? What analysis? How well do the assessment, its evaluation, and the teacher's feedback communicate next steps to the student? (#Assessment for learning, not of learning.) In conversations about assessing students, it seems we often get caught up with the means of testing and don't spend enough time helping students understand how to use the results or put the results into a constructive framework from which to grow.

    Anyway, to me, education is about negotiating a ridiculous number of complex variables with flexibility and versatility. It is not a clear heuristic of "do this and not that." It's more like Go or chess; it's not much like checkers. As a case in point, when I teach a course with only 12-15 students, I almost never use MCQs; there is no need (similar to MaryLynn Bieron's context). When I teach courses with over 100 students and have to work on content teams with other teachers, then MCQs, while imperfect, are quite useful. As for the stimulus-based MCQs used by AP, I tell my students these questions are more like history puzzles. So much has to go right in order to answer them that nobody, not even the College Board, can use them as the only means of assessment (e.g., Did they interpret the question correctly? Did they read the stimulus accurately? Did they notice the relevant clues and ignore distractors? Did they remember the relevant historical background knowledge?). Hence the use of DBQs, LEQs, and SAQs in conjunction with the MCQs. Combined, these four instruments are not bad as far as complex assessment goes, even though they are not perfect. As a coach, I believe you should practice the way you play, so in my AP class there is not a whole lot of time for other types of assessment beyond these four. I sometimes wish I could do more projects or simulations in AP, but then I realize that doing so would come with tradeoffs in time and coverage that I am often unwilling to make. I also try to be respectful of my students' preciously limited time and attention spans. When they leave my room for the other seven classes in their schedule, or their sports, or band, or debate, or church, or friends, or family, it often amazes me that they do any of the things I ask them to do.

    Anyway, if you got this far, thanks for reading, and I appreciate the inspiration to articulate what now seems to be my assessment philosophy, not that I had ever put it into words like this before. I probably got a little preachy at times.  Sorry for that.  Wasn't my goal. It all just sort of came out. :-)

  • This is a very convincing riposte to the criticism leveled at MCQ assessments. Thank you for your thoughts; a more reflective post could not be written!

    Certainly in the "AP World" teachers have a job to do, but that's more evidence of what Adam refers to as the "systemic" issue. For many, the AP test (or the NY Regents, etc.) is the end-all, be-all of assessing historical knowledge.
