Welcome to the Active Learning in Political Science © blog. Our goal is to provide resources and ideas for using active learning techniques in the political science classroom and to promote general discussion about innovative teaching methods.
On the subject of how the higher education system puts non-affluent students at a disadvantage:
Brittany Bronson writes in today’s New York Times that for today’s college graduates, “the path to underemployment begins early, and those with certain levels of financial privilege will have an easier time avoiding it.” The complete op-ed is titled “Long Odds in the Game of Life.”
Here is a review of another practical guide for teaching:
Elizabeth F. Barkley, K. Patricia Cross, and Claire Howell Major, Collaborative Learning Techniques: A Handbook for College Faculty, Jossey-Bass, 2005.
Collaborative Learning Techniques is organized much like Classroom Assessment Techniques, and in fact the two books overlap somewhat in content.
Methods presented in the book that I had used before required adjustment and iteration before they met my expectations. From this perspective I think that the book is most useful as a starting point for experimentation.
For example, in the technique of “Test-Taking Teams,” students (1) study course content as a team to prepare for an exam, (2) take the exam separately for individual grades, (3) discuss the exam among themselves, and (4) take the same exam together for a group grade.
I see the technique’s general applicability, but to me the initial joint study session is problematic for two reasons. First, given students’ wildly conflicting schedules, a joint study session will have to be held in the classroom to avoid inconvenience, which eats up time that might be more productively used in other ways. Second, some students have much better study skills than others, and those students should not be required to devote time and energy to their lower-performing peers prior to an individually graded exam. A better option might be for students to (1) study individually before the initial exam, (2) discuss how they studied with their teammates after they know their exam scores and are more receptive to altering how they study, and (3) collectively take the team exam.
“Grading and Evaluating Collaborative Learning” was the most thought-provoking chapter for me. The authors state that:
“[s]ince achieving individual accountability while still promoting group interdependence is a primary condition for collaborative learning, it is most effective if grades reflect a combination of individual and group performance. One way to achieve this is to . . . ensure that individual effort and group effort are differentiated and reflected by a product that can be evaluated” (84).
I still haven’t quite figured out how to do this efficiently. Students often default to chopping up group tasks into discrete chunks. No real collaboration takes place and the final product can be disjointed and of uneven quality. Or there are free riders. Teammate evaluations help address this problem to some extent, but this assessment mechanism is summative rather than formative — it occurs at the end of the semester when it’s too late for a student to change his or her behavior.
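The simplest way to put the authors’ principle into practice is a weighted combination of the two scores. A minimal sketch, in which the 70/30 split is my own illustrative assumption rather than a recommendation from Barkley, Cross, and Major:

```python
def combined_grade(individual, group, weight=0.7):
    """Combine an individual score and a group score into one grade.

    The default 70/30 weighting is an assumption for illustration;
    the appropriate split depends on the course and the project.
    """
    return round(weight * individual + (1 - weight) * group, 1)

# A student who scores 90 individually on a team whose product earns 80:
print(combined_grade(90, 80))  # 87.0
```

Weighting the individual component more heavily preserves individual accountability while still giving students a stake in the group product.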
I finished up classes with my teaching cover yesterday: circumstances now mean that I’m likely to be marking their final exam too, so part of the time was given over to what they should expect.
As well as the usual stuff – how the paper is structured, what they have to answer, how questions do(n’t) relate to specific weeks in class – we also talked about more general technique, not least because this is a first year group.
The key point I asked them to focus on was what I consider to be the golden rule of all exams: answer the question.
It’s easy to forget how much this matters, both for students and assessors.
For students, exam situations are stressful, and considered planning goes out of the window: a student spots a familiar word in a question, latches on to it, and churns out everything they know about that word.
For assessors, that’s a big problem, because in effect it’s just the transmission-reproduction model of teaching. In such situations, it’s very hard indeed to evaluate the students’ ability to think and reflect on a question.
That’s why I repeatedly tell my students that I would not only prefer a shorter, less detailed answer that directly answered the question to a longer, more detailed one that didn’t, I would also mark the former higher.
It’s also why the exam questions I tend to produce don’t map simply on to specific classes. This means students have to review all of their notes and work from class, because they can’t be sure what’s going to appear, or how they are going to have to use it. The relative novelty of the questions themselves (in terms of their formulation) also encourages students to read them more closely.
In so doing, we might hope to foster an environment where reflective practice is encouraged.
I’ll tell you in a few weeks’ time whether that worked or not.
Terry Doyle and Todd Zakrajsek, The New Science of Learning: How to Learn in Harmony with Your Brain, Stylus Publishing, 2013.
The New Science of Learning is a very concise and easy-to-read advice guide for undergraduates that is based on the findings of cognitive science research. I’ll be using it this fall in my first-year seminar. My hope is that it will help students, many of whom are not that well prepared for college, improve their academic performance. Here is one of the book’s authors speaking at Quinnipiac University about how to teach more effectively.
I’ve created these writing assignments that correspond to the chapters of the book:
- Of the different practices that help people learn more effectively, which is the one that you currently use the least frequently? What would you need to change in your life so that you used it more frequently?
- Think about the last three nights. How well did you sleep on each of these nights? What changes would enable you to sleep better? How can you implement these changes?
- Think about the last three days. At what times were you physically active and for how long? How did your levels of mental alertness change during the day? Do you notice any pattern between physical activity and alertness?
- Do you write notes by hand in your college courses? Do you annotate text that is assigned in these courses? Why? Given the benefits of note-taking and of annotating reading assignments for learning, how well will you perform academically this semester?
- Describe an assignment in one of your courses this semester that reflects the pattern recognition principle of similarity/difference, proximity, figure-ground, or cause/effect. What is the assignment and how does it reflect the principle? What will be the effect on your understanding or memory of the material?
- Name an activity in which you use either distributed practice or elaboration. What is a specific change that you can make in your daily behavior to better incorporate either one into your college experience?
- What is your approach to failure? Do you embrace the possibility of it or try to avoid it at all costs? When you fail, what is your reaction? Based on your answers to these questions, do you have a growth mindset or a fixed mindset toward learning? Why?
- Do you engage in task shifting? When? What is a specific change that you can make in your daily behavior to reduce task shifting?
As I wrote in my post on the perils of small classes, this past semester I used Google Forms to create a digital ballot for presentation competitions. The ballot worked well — students could hide their votes from other students, and tabulating the results only took a few seconds on my part.
The success of the ballot led me to adopt Google Forms for end-of-semester teammate evaluations. This turned out to be a much simpler method of incorporating individual accountability into collaborative projects than the worksheets I had used previously in a first-year seminar and a capstone course. No time wasted in class while students complete evaluation forms, and no entering of numerical data into a spreadsheet. I create one form for the whole class, which I send out by embedding the link in one email. Google does the rest for me.
Some students failed to follow directions, but that happens regardless of whether an evaluation is on paper or electronic. As the forms themselves warned would happen, I simply deleted those responses from the results.
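For anyone who would rather download the responses and tabulate them offline, Google Forms can export its response sheet as a CSV, and averaging each teammate’s ratings takes only a few lines. A minimal sketch, where the column names are my own assumptions for illustration, not the actual headings on my forms:

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical CSV export of teammate evaluations; in practice this
# text would come from the downloaded response sheet.
SAMPLE = """\
Evaluator,Teammate,Rating
Alice,Bob,4
Carol,Bob,5
Bob,Alice,3
Carol,Alice,5
"""

def average_ratings(csv_text):
    """Average each teammate's ratings across all evaluators."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["Teammate"]] += float(row["Rating"])
        counts[row["Teammate"]] += 1
    return {name: totals[name] / counts[name] for name in totals}

print(average_ratings(SAMPLE))  # {'Bob': 4.5, 'Alice': 4.0}
```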
The BBC has reported on the Reddit button game that started on April 1. The game is simple: a timer counts down from 60 seconds. When someone presses the button, the clock resets. You can only press the button once, and you are assigned a color according to the amount of time that elapses before you press the button. If no one presses the button before the timer hits zero, the game ends. The number of game players is approaching one million people.
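The game’s rules are simple enough to simulate, which might itself make a useful classroom exercise. A minimal sketch, where the color thresholds are illustrative assumptions rather than the subreddit’s actual flair cutoffs:

```python
def flair_color(seconds_remaining):
    """Assign a color by how much time was left at the press.

    These cutoffs are assumptions for illustration, not Reddit's
    actual flair thresholds.
    """
    for cutoff, color in [(52, "purple"), (42, "blue"), (32, "green"),
                          (22, "yellow"), (12, "orange")]:
        if seconds_remaining >= cutoff:
            return color
    return "red"

def run_game(press_times, limit=60):
    """Simulate the countdown.

    Each entry in press_times is how many seconds a presser waits
    after the previous reset. The game ends the first time no one
    presses before the timer reaches zero.
    """
    colors = []
    for wait in press_times:
        if wait >= limit:          # timer hit zero: game over
            break
        colors.append(flair_color(limit - wait))
    return colors

print(run_game([5, 30, 55, 61, 10]))  # ['purple', 'yellow', 'red']
```

Note that the fourth presser waits past the limit, so the game ends and the fifth press never happens.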
As pointed out by Joshua Bleiberg and Darrell M. West of the Brookings Institution:
- The button pressers have no clear common interest.
- They are not organized.
- The process of watching and pressing the button is mundane.
- There is not a strong incentive to press the button.
Given that online communities have formed on the basis of the colors that game players have earned, the game could be a useful tool for exploring the mechanics of collective action, identity formation, or political communication.
A graphical representation of elapsed time between each press of the button, updated in real time, is available here.
Today’s example is the apparent failure of British psephologists/pollsters to predict the outcome of the general election last week. Pretty much everyone agreed that a hung Parliament beckoned, with attendant coalition negotiations.
Instead, the Conservatives pulled out a clear majority. Cue much pulling of hair and gnashing of teeth.
The academics and pollsters involved have already produced a great range of post-mortems, before anything official kicks off. Below are some of the best bits of these dissections.
As an exercise with students about how academic research progresses, this offers an excellent opportunity to explore the testing of models against real-world data and the way in which lessons are learnt and internalised.
Too often, we present ‘research’ as simply ‘correct’, so showing a bit of humility (as these people have done) is a helpful corrective.