The new address of this blog is activelearningps.com. Please update your email, Facebook, and other subscriptions.
I very rarely incorporate feature films into my courses — Dr. Strangelove has been one of the few exceptions — but anyone who is teaching political psychology should take a look at Inside Out from Pixar. The film, for which the psychologists Paul Ekman and Dacher Keltner served as consultants, is a visual representation of an 11-year-old girl’s mind. Emotions take center stage, especially in regard to memory formation and retrieval, but imagination, attention, and other processes also play into the film’s plot. More detail on the science incorporated into Inside Out is here.
Today is the first of a two-part series by guest contributor Tricia Stapleton, Director of the Society, Technology, and Policy Program at Worcester Polytechnic Institute.
As I’ve noted in previous posts (here and here), I’m in the process of tweaking my Intro to IR class. I’ve successfully used the International Relations in Action (IRiA) simulation, but I want more out of it in terms of student learning outcomes, in particular greater student engagement during the rounds and stronger connections between the sim and scholarly content. After the first round in Spring 2014, I changed the course assignments with the goal of more strongly linking theory to (sim) practice. The new assignments were reflection and research pieces for IRiA scenarios for my Fall 2014 class. In addition to completing background reports before the game started, each team was required to produce an editorial, a response letter to another team’s editorial, and a Twitter campaign during the course of the simulation. The work was posted to a course wiki, where all teams could access it.
I began thinking about creating a Twitter activity after reading Simon’s post on using Twitter to help build community. This blog also has several other posts on how Twitter can be helpful (here, here, and here). However, I was reluctant to have students actually post to Twitter, where I might not have control over content. Student interactions often become quite intense during the course of the sim, and even though I have a disclaimer on my syllabus regarding appropriate language and respect for others, I was concerned that students might post inappropriate content. Fortunately, I already had a developed course wiki that provided some functionality for hosting fake Twitter feeds. Although the setup wasn’t perfect, the wiki had the added benefit of housing all materials created for the scenarios on one website.
Teams completed one “Twitter campaign” during an assigned scenario in the simulation. I assigned each team’s scenario based on its possible points per round. The IRiA text establishes the number of points a team competes for in a scenario: 1, 3, 5, or 10 points. Teams working toward a 10-point objective usually have several tasks to complete, and are very busy in class. Teams on the lower end – fighting for 1-point or 3-point objectives – reported in Spring 2014 that they felt less engaged in those rounds. They simply have less to do to “win” their objective, and end up taking a backseat to other teams’ efforts. The Twitter campaign was a way to think about and participate in a round, even if a team didn’t have much to contribute or gain from negotiations in terms of points.

To make sure that students were engaging with the material, and not just posting a few offhand remarks, students were also required to turn in a “campaign strategy” at the start of the round. In this 3-page report, the team explained the strategy for its Twitter campaign in the context of its objectives and potential events in the scenario. The report included any predetermined content (fully-formed tweets, hashtag ideas, etc.). The team was also asked to consider potential weaknesses in its campaign and address how it might respond to any attempts to exploit these vulnerabilities.
Overall, the students performed well. Their reports showed a good grasp of how to use social media to promote their agenda, as well as explorations of how their campaigns might backfire. Student evaluations of the sim overwhelmingly indicated that the Twitter campaign was really interesting, and it made them think about the connections between media and politics. The IRiA text currently has no role for the media, so it does fill a conspicuous gap. However, teams didn’t engage much with each other on Twitter beyond their assigned scenario. In the future, I might offer extra credit for additional tweets, or figure out a way to designate one team member as the communications director.
One of the mutterings that has flitted about the HE sector here in the UK in recent months has been the idea of a Teaching Excellence Framework, an equivalent of the Research Excellence Framework (REF) that has either come, or is coming, to your local university.
As with REF, the idea would be to use a range of metrics to identify, measure, and encourage high-quality learning and teaching. That might include completion rates, ‘value added’, dissemination of good practice, and case studies of innovation and general excellence.
The idea is one that got a big boost last week, when the new government announced that it would allow British universities to increase their fees if their teaching was of high quality.
Now you might imagine that I’d be all for that, since I could already write an impact case study and my institution does very well in L&T metrics.
However, I find myself being deeply ambivalent about it all.
In part, that ambivalence is driven by my general concern about any one-size-fits-all approach, especially because – just as in research – there are very many ways to make a useful contribution and (conversely) just as many ways to not capture that. How do you compare a MOOC with lab work with a simulation?
But it also comes from a concern about the abstraction of L&T from its context.
When I run a module, I run it with the particular group of students in mind. Each class or session finds me (like it would most colleagues) trying to adapt the material to meet the needs of the students as we progress. Most mundanely, the confused face will prompt us to rework some section, to unpack it and help that confusion pass. Less mundanely, I might rework an entire module to accommodate the needs of a group (as indeed I will be doing this summer).
In REF, individual researchers submit four pieces of work for peer evaluation: this forms the bulk of the exercise, alongside some elements dealing with the research environment and its wider impact.
In a TEF, it’s inconceivable that we’d do the same: I submit my four best students? My four best modules? My four best sessions?
No, there’d be a bunch of metrics at programme or departmental or university level, describing some of those things I’ve mentioned above. Then there’d be some kind of case study approach.
This anonymises – or, rather, depersonalises – what we do. As a useful piece by John Canning points out, a TEF might start out with good intentions, but quickly institutions would move towards playing the game and optimising the metrics.
This already happens: institutions in the UK spend a lot of time and effort in trying to improve their league table rankings and that includes targeting poorly-performing indicators. That’s not a bad thing – my own university now has much better systems for supporting students who might fail modules, for instance – but the tendency is always to focus on the metric, rather than the students themselves.
I appreciate that in all of this, I’m just the sort of person who should be getting involved, as someone who cares about, and has a substantial job role in, L&T. So I’m going to put my ambivalence to good use: I’ll be part of the discussion about the TEF wherever I can, so that even if the result isn’t perfect, it will at least be less imperfect than it would otherwise be. And I’d say the same to colleagues: rather than just grumbling about what’s being done to you, try to shape it.
All that said, I’m about to have a couple of weeks of not thinking about work at all. Doubtless I’ll return with some moment of inspiration from my break, but I really wouldn’t hold your breath. See you on the other side.
I recently returned from the Online Learning Consortium’s conference on blended learning. A blended, or hybrid, course is one in which lecture content has moved online and less frequent classroom sessions focus on the higher-order tasks of application, evaluation, and synthesis.
Here is the advice that veterans of blended course design gave at the conference:
- Set student expectations in advance. Students who are new to blended courses frequently conclude that they are a bad combination of the online and face-to-face worlds. It’s up to instructors to frame the experience as one that provides greater access to and more effective interaction with faculty. Pitching the course as an experiment is probably the worst message to send.
- Online content and face-to-face exercises must correspond to, but not duplicate, each other. Students’ classroom participation in team- or project-based activities, for example, needs to align with the key concepts of the online content so that both sides of the course unfold in a coherently scheduled, mutually reinforcing manner. Frequent assessments that keep students from progressing through the content until they demonstrate proficiency are highly useful in this regard. If the online content replicates what happens in the classroom, or if the two are not integrated with each other, students will either stop engaging with the former or stop being physically present in the latter.
- Students need to understand that “online time” does not replace “homework time.” They will still need to devote significant effort outside of class to research, writing, or the completion of problem sets. This message can be highlighted as part of the orientation to using online content that students will need at the beginning of the semester.
- Conversely, instructors need to be careful not to overwhelm students with material in excess of what students would encounter in the course’s traditional version.
- Online video should be broken into 5-10 minute segments, each followed by a Goldilocks-style assessment exercise: neither too easy nor too difficult. This fosters students’ engagement with the content by giving them the feeling that they’re being fairly challenged. If the assessments are perceived as too difficult or as irrelevant busy work, students’ motivation to access the content will decrease.
- When producing video, don’t be afraid to be a real human. Students are not looking for a Taylor Swift-level of production value.
- Use replicable tools, methods, and content to drive down the financial and emotional costs of creating additional blended courses in the future.
This is the first in a multi-part series chronicling my experience collaborating with two undergraduate students on a new comparative politics simulation.
This fall, I’ll be collaborating with two undergraduates to design a simulation for my Introduction to Comparative Politics class. The students both hold leadership positions in a student organization that creates and runs 4-6 hour long simulations (Strategic Crisis Simulations). We met at last year’s APSA Teaching and Learning Conference (which just released its call for proposals for the 2016 conference – read more here!) and they reached out to me to explore ways to work together. I thought both the process of developing a simulation and the simulation itself would be of interest to ALPS readers, so I will blog about the experience through the fall semester, including details about the simulation, challenges we encounter, and an after-action report.
This collaboration will earn the students independent study credit and I will gain a simulation: win-win. I’ve struggled in the past to find a comparative politics simulation of the depth and breadth I’d like. I have used the Statecraft simulation for four years in my Introduction to International Relations class (more on Statecraft on ALPS here and here). I appreciate the depth that a long-term simulation provides, as well as the “buy-in” from the students fostered by the repeated interactions. I also like the ability to use one simulation to bring in many concepts, but have not found anything similar for an introduction to comparative politics class.
I’ve met with the students twice now, and we’ve started to lay the groundwork for the independent study and simulation. We’ve talked about my goals for the course and developed a basic outline for the simulation. Although we are in the very early stages, our idea is to assign students identities – broadly defined – within a fictional (but modeled on real) country that is undergoing a democratic transition. At various stages of the simulation, the students will create institutions, hold elections, and struggle with ethnic tensions. I know of other simulations that model one or another of these concepts; our goal is to build a simulation that draws them all into one. But as I told the students after we met a few weeks ago: “Tell me if I’m getting too crazy with these ideas and we need to simplify!”
I’m excited about exploring this opportunity to build relationships, use the expertise of others, and use simulation building as an educational experience as part of the independent study. I’ll ask my undergraduate collaborators to contribute some guest blog posts as well. Stay tuned!
The New York Times recently published an interactive illustration of confirmation bias: guess the rule obeyed by a sequence of three numbers. I won’t go into detail about the game other than to say that it’s wonderfully simple and I definitely fell into the mind trap. Some implications for politics and business are presented after players submit their answers, and this can provide a launch point for class discussion.
The puzzle nicely complements Zendo, a game in which players form hypotheses about arrangements of blocks, by demonstrating the cognitive biases that affect much of our decision making.
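For instructors who want to recreate the trap outside the browser (say, in a methods class with some programming), the puzzle’s logic is easy to sketch. Below is a minimal illustration, assuming the hidden rule is “each number is larger than the one before it” (the rule the NYT puzzle uses); players who guess a narrower rule, like “each number doubles,” and test only sequences that fit their guess never see it falsified:

```python
def rule(seq):
    """Hidden rule: each number must be larger than the one before it."""
    return all(a < b for a, b in zip(seq, seq[1:]))

# Confirmation bias in action: a player who guesses "each number doubles"
# and tests only sequences fitting that guess gets "yes" every time,
# so the (wrong) guess survives.
confirming_tests = [(2, 4, 8), (3, 6, 12), (5, 10, 20)]
print(all(rule(s) for s in confirming_tests))  # True

# Only potentially disconfirming tests reveal the broader rule:
print(rule((1, 2, 3)))   # True: fits the hidden rule but not the guess
print(rule((8, 4, 2)))   # False: a "no" that actually narrows the hypothesis space
```

The pedagogical point carries over directly: the winning strategy is to seek out tests that could return “no,” which is exactly what players (and, the Times argues, politicians and managers) tend not to do.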