Pedagogy : Edtech :: Chicken : Egg?

Today I was in an inter-district meeting via G+. We started with introductions where we were charged with sharing something innovative going on in our district. At my turn I shared that I was happy about the academic redesign process my district has gone through over the past 6 months, particularly because I like that we are considering pedagogical shifts before implementing devices with kids. My basic claim is that I would rather see teachers ready to handle student-centered, discovery-type classrooms, which leads to a specific purpose for implementing technology to help make that happen. I was surprised when some of the members of the meeting pushed back a bit on that notion. The basic argument (which I sincerely hope I’m not misrepresenting; this was a very amicable conversation) seemed to be that teachers need to know the technology to be able to teach differently using it.

My frustration with a ‘devices first’ approach stems, for example, from hearing stories of districts spending millions of dollars to ‘transform’ doing math from paper worksheets to PDF worksheets in Notability. It seems to me that we should train teachers in the (very difficult to master) craft of teaching through inquiry and student dialogue, at which point they would be ready to implement fantastic tools like Desmos or Geogebra to facilitate that learning.

I’m wondering what you think, internet. Am I off my rocker? Am I missing something? Or does pedagogy first resonate with you as well? I appreciate your thoughts.

Modeling the Mistake Game

I’ve been convinced for some time about the value of playing the mistake game, but I have been unable to get my students to successfully buy into it. Today we started graph stacks, where one of the three kinematic graphs (position vs. time, velocity vs. time, and acceleration vs. time) is given to students and they need to qualitatively sketch the other two. I wanted to use it as an opportunity to once again try the game.
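Underneath graph stacks is the fact that velocity is the slope of the position graph and acceleration is the slope of the velocity graph. As a minimal sketch (using hypothetical, evenly spaced samples, not actual class data), finite differences can generate a rough graph stack numerically:

```python
# Hypothetical position samples (meters) at equal 1 s time steps; not class data.
# Velocity is the slope of position vs. time, and acceleration is the slope of
# velocity vs. time, so repeated finite differences give a rough "graph stack."
def finite_difference(values, dt=1.0):
    """Approximate the slope between consecutive samples spaced dt apart."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

position = [0.0, 2.0, 4.0, 6.0, 8.0]        # constant-velocity motion
velocity = finite_difference(position)       # → [2.0, 2.0, 2.0, 2.0]
acceleration = finite_difference(velocity)   # → [0.0, 0.0, 0.0]
```

Constant positive slope on the position graph yields a flat, positive velocity graph and a zero acceleration graph, which is exactly the qualitative reasoning the game asks students to defend.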

First period I allowed students to work for a while on attempting the graphs, and then I assigned two groups the same problem. I asked one of the two groups to purposefully make a mistake and didn’t say anything to the other group.

While I liked the comparison, we ended up with 6 people in front of the room while all the conversation was focused on the three with the mistake. They rode it out and did a great job, but I still didn’t feel like kids knew what questions they should ask to back the group into a logical corner. So I decided to change it up 2nd period.

I again let students work for a while, and then I picked a problem and put it on the board with some mistakes embedded. The position vs. time graph is the original that was given, and the black on the v vs. t and a vs. t graphs is what I originally drew. The orange is the corrections we eventually made through questioning.

[photo of the whiteboarded graph stack]

I told students that I was going to model a presentation where we play the mistake game. I then gave the presentation: “The position is decreasing and positive, so the velocity is positive but decreasing as well.” Then I gave them a minute to talk to their partners about good questions to ask.

And they didn’t ask good questions.

But what happened was that I was able to stop the minute someone asked a great question. “What is the slope of the position graph at time zero?” We then had a conversation about how forcing someone into a logical corner doesn’t happen with one question; it happens with a series of questions. So once I know that the slope of the position graph is zero at time zero, that leads to the logical connection that the velocity has to be zero at time zero.

Still, I didn’t feel like it went that well. So I let them work for a while again, then I picked the next problem and modeled it again.

They forced me into a corner in less than 2 minutes.

They learned through the first modeling session that a good question is one about the specifics of the graph, not about what the person was thinking. Starting a question with “Why did you….” often doesn’t help. Starting instead with “What is the slope…” or “Are the velocities positive, negative, or zero…” does.

3rd period I repeated the process of modeling the mistake game with exactly the same results; it was painful the first time, and quick the second. I think I’ve got a keeper. Tons of students asked questions; they really seemed to be into it. Monday we’ll be trying the game with students presenting and I’ll update this post with the results.

UPDATE: Anecdotally, I felt like the day students presented their graph stacks with purposeful mistakes was one of the best whiteboarding experiences I’ve had so far. For each and every problem students were explicitly evaluating and analyzing every aspect of each graph, as opposed to correct graphs, where they seem to say ‘yep, looks right.’ The quiz results were impressive: on a 4-point scale, last year’s average was 2.36 whereas this year’s was 3.21 (p<0.0001). I’m in for the mistake game as a regular part of class from now on!
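For readers curious how a comparison like that could be run, here is a minimal sketch with made-up scores (the real class data isn’t shown here, so the lists below are purely hypothetical); a Welch’s t-test is one standard way to compare two independent class averages like these:

```python
# Hypothetical quiz scores on a 4-point scale, chosen so the means roughly
# echo the reported 2.36 vs. 3.21. These are NOT the actual class data.
import math
import statistics as stats

last_year = [2, 2, 3, 2, 3, 1, 3, 2, 3, 2, 2, 3]   # mean ≈ 2.33
this_year = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3, 4, 3]   # mean ≈ 3.25

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = stats.variance(a), stats.variance(b)   # sample variances (n-1)
    return (stats.mean(b) - stats.mean(a)) / math.sqrt(va / len(a) + vb / len(b))

print(round(welch_t(last_year, this_year), 2))  # → 3.53
```

With real class sizes the same statistic (plus its degrees of freedom) yields the p-value; a library such as SciPy’s `ttest_ind` with `equal_var=False` would do both steps at once.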

Reluctant Participants and Board Meetings

As I start my 3rd year of Modeling Instruction, I’m happy to be in a place where I can start tweaking rather than making sweeping changes to my courses. My primary goal this year is to give more help and attention to students who struggle, and one of the ways I plan to do this is to pointedly seek methods for engaging them more during class. My first plan of attack on this concerns board meetings.

If you are not familiar, a “board meeting” is loosely defined as having students form a large circle so that they can observe each group’s whiteboards. I typically use this method of whiteboarding to have groups compare data from the same lab in order to induce aspects of a particular model. 

Despite having 25 students in a class, I noticed last year that board meetings tended to be dominated by fewer than 5 people. I want to try to get all the students involved; I want them all contributing and wrestling with the data. This year I’m going to have board meetings start by giving students 1-2 minutes to simply look around and make at least one observation. I want them to do this silently, individually. I think that sometimes there are students (like me) who are comfortable word-vomiting immediately about what they see, which then overwhelms students who prefer to sit back, take in info, and process before speaking. I want to give that second group time to process. After this time period, I’m going to have them turn to share their observation with the person next to them. Again, I want every single student in the room interacting with the data. After that I think I’ll have them go around the circle to share with the whole group. I thought about letting groups volunteer or cold-calling on groups, but by going around the circle I can step out and simply record their thoughts with minimal guidance and intervention. As groups report in, I think I’ll stick with my observations/claims approach to help students organize the information reported out.

I think throughout the year I will slowly remove the scaffolds like turning to a partner or going around the circle in favor of more organic approaches, but I’m thinking I’d keep the 1-2 minutes of process time. I really want to help the processors engage before the vomiters get in their way.

I know this isn’t new in general (yeah, yeah, it’s basically ‘think pair share’), but I think applying the idea specifically to a board meeting has some merit. I’ll report back with how it goes. I’ll also hopefully be posting with other possibilities for getting *all* students engaging in the various aspects of a modeling classroom. 

UPDATE: I did this with all my classes and I believe it was very successful. In addition, we had finished collecting data in one class period but didn’t have time to whiteboard it, so I had them put it in their lab notebooks (sketch a graph, record the equation in words, write the slope and intercept with units and uncertainties), and then write a couple of sentences summarizing what the results meant. When they came back the next day, I had them take 2 minutes to discuss their paragraphs with each other. I like that this both helped them think about the data first and also incorporated some writing, which I hope to do more of. After discussing their summaries, I had them gather in a circle and do what I described above. I really believe that this process helped get more students directly involved in wrestling with the data than a standard board meeting alone would have.

I want to thank Patrick Briggs, who keynoted for our all-district kickoff yesterday, for explicitly pointing out that many students need time to think and prepare before they are willing/able to have an academic conversation.

Differentiating Professional Development

Today I came across the following tweet by Kate;

I was initially torn. On one hand, I’ve been in the audience for this, and it’s frustrating. On the other, for the last couple of years I’ve been the one in front, and that’s not easy either. I’ve given some lip service to trying to differentiate this type of required professional development but haven’t followed through. Additionally, the team I work with and I share a general goal of moving away from a model where teachers depend on us for technology training, focusing instead on improving pedagogical approaches; I want to help teachers learn the specific tech skills they need, when they need them, without sit-n-git PD.

So I posed a question;

There were two ideas that came out of the discussion that I am going to particularly focus on because I think they could work for me.

I like this idea because the list could even be split into ‘need to know,’ intermediate, and advanced sections so that folks who already have the basic competencies can expand their skills with that particular tool, and it could set a baseline for what we expect all teachers to know and be able to do (kinda like we do for students…) with that tool. I like that it very granularly differentiates for teachers. That said, I really like the possible collaborative nature of the second idea;

I like that here teachers could work together to learn whatever competency is expected. I think this is what I would try first, as I’m pretty big into collaborative learning and want to model that with teachers as well.

In either situation, I would like if this were the norm;

As the PD leader, I should be doing two things: providing learning experiences for my participants, and providing opportunities for them to share what they have learned with each other. (Side note: this is no different than what good teaching in a classroom looks like.) One reason I particularly like these methods of differentiating PD is that they make it more difficult for teachers to get sidetracked, as they can move on to learn things they don’t already know. (I’m the worst student; I try to multitask with twitter, mail, and more twitter, and I end up missing a lot. For that reason, as well as this study, I have been trying more often to close my laptop and take notes by hand. I’m confident that being allowed to move ahead and explore, with accountability, would keep me more focused.)

Do you have other ideas for differentiating PD? Thoughts about these methods? Let me know in the comments!

What Makes For Good Ed Tech? An ISTE 2014 Reflection

A couple weeks back I attended the ISTE 2014 convention, and I discovered something;

This wasn’t the first time I got worked up about edtech, but this time my frustration is directed towards the amount of money thrown around, particularly on products that consider neither pedagogy nor the extensive research available on how students learn. That got me thinking about how we can wade through the dump and find the treasure.

So I wanted to look more closely at some companies whose emphasis on students and learning I really value, to see if I could find some patterns.

Let’s start with Desmos. A quick click to their about page reveals this;

[screenshot of the Desmos about page]

It’s very clear, and easy to find, that their focus is on constructivist learning. Then if you dig a bit deeper, you’ll find that they’ve partnered with amazing teacher leaders Dan Meyer, Christopher Danielson, and Fawn Nguyen to make some great lessons, designed for learning, powered by Desmos. I also had the fortune to have an extended conversation with Eli Luberoff, CEO of Desmos, and was struck by how much their pedagogical ideals influence what they do. They want to create a place for students to experience math, not a place where math is done to them. It’s inspiring.

Another good example is Dreambox. Their front page boasts

[screenshots from the Dreambox front page]

My daughter uses Dreambox through her school, in a different district than where I teach. I was won over to Dreambox first by the exercises she was completing that place strong emphasis on conceptual development of place value and the meaning of mathematical operations, and then by a great conversation with Tim Hudson, a former math teacher who now designs curriculum for Dreambox. Tim confirmed that pedagogy and conceptual development of mathematics are at the forefront in the design of Dreambox activities.

Aleks

At first glance, Aleks (adaptive learning software) seems to be grounded in research.

[screenshot from the Aleks website]

I started digging a bit into Knowledge Space Theory, and found that KST is about assessing knowledge, not about how students actually gain that knowledge. The difference is important. While it’s good to know what students do and don’t understand, it’s more difficult, in my experience, to actually get them to learn things. Dreambox focuses on getting students to understand concepts through conceptual development, while Aleks focuses on, from what I have seen, drill-and-kill practice based on what the platform decides a student doesn’t know.

Khan Academy 

[screenshot from the Khan Academy website]

I admire that Sal Khan wants to change “education for the better by providing a free world-class education for anyone anywhere”. It’s an admirable goal, and one worth pursuing.

The problem is that KA repeatedly refuses to consider research on pedagogy and student learning (see Frank Noschese’s and Christopher Danielson’s posts for starters). The about page boasts about data and badges (read Bill Ferriter’s post about the problem with badges) rather than about deep thinking and conceptual development. I won’t rehash Frank and Christopher’s arguments, but seriously, go read those posts. It’s amazing what we do actually know about learning, and that Mr. Khan is dismissive of it all.

After my Twitter rant at ISTE about edtech nonsense, Kelly made an interesting observation;

Edtech as an industry seems bent largely on ‘personalization’ and ‘individualization’; there is, however, a significant research base on student learning through collaboration and dialogue. Edtech should aid in promoting methods that work, rather than move away from them. Some are. I’m hoping this post helps me and others make some strides as to how to find those edtech companies that really do have students, rather than dollars, at their core.

As for the edtech startups,  I can only hope they heed Frank’s edtech PR tips.

Finally, the most reliable method I have found for vetting edtech is to pay attention to what the right people are saying. Everybody in the MTBoS raves about Desmos. When I originally posted to Twitter asking about Dreambox I got rave reviews from folks I highly respect. KA, on the other hand, is not spoken highly of in those circles, and I don’t ever hear mention of Aleks. Chances are good, it seems, that if a number of twitter folks are raving about a product for its usefulness in student learning, it’ll be a good one. Find people who explicitly evaluate learning effectiveness, and listen to them.


The Current State of Educational Technology

I’m starting to feel like a technology curmudgeon.

I’ve been thinking for a while about how technology should be used in schools. Around 3 years ago I started pushing for more access to technology in my district, and I would like to think my motivation was righteous; I saw possibilities to enhance student engagement and learning but didn’t have the ability to do so because of filters/policies/lack of hardware. So I pushed. And pushed.

The first result was the ability to pilot Google Apps for Education with my kids. I had them do a research project where they investigated types of forces and used Google Docs to compile their research. It was neat, and pretty cutting edge for my district at the time (circa 2010). Did they learn how to use Google docs and how to collaborate? Sure. Did they learn any physics? I honestly doubt it.

Then from 2010-2012 I was able to acquire probeware that collects digital data for physics, and we used computers with a program called LoggerPro to analyze the data. One of the great things LoggerPro does is allow for video analysis, such that we can plot position, velocity, and acceleration vs. time for objects within the video frame. For quite some time the workflow was as follows: we would collect the data using cameras, walk to the computer lab, upload and analyze the data, then print the graphs so we could discuss them the next day.

Fast forward to 2012. I somehow was able to convince someone to give me 10 laptops to use in my room. The very first day they were ready I ran into an interesting problem with some data students had collected. Some said the data indicated a linear relationship, some said quadratic. We had 15 minutes left of class, and I made a snap decision; go re-collect the data, this time being very careful when doing the video analysis. We came back together, and sure enough every graph was linear.

This would not have been possible without the technology accessible to me at that very moment. But where would this activity fit on the SAMR model? I’m not sure that really matters. In this case, kids were certainly learning more physics, though not as much about how to use technology, since LoggerPro isn’t as scalable to life-outside-school as Google docs is. Does that make one better than the other? Depends on your objectives, I imagine. But the point I want to make is that I didn’t need extensive training to redefine my technology use in the classroom; instead I needed students to have immediate access to the technology so that they could use it in the moment for learning. My training on how students learn physics through experiences was far more valuable than learning the technology itself.
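As a side note on the linear-versus-quadratic question, one quick check for evenly spaced data is second differences: a linear relationship gives second differences near zero, while a quadratic one gives roughly constant, nonzero second differences. A minimal sketch with hypothetical numbers (not the students’ actual measurements):

```python
# Hypothetical samples at equal time steps; not the students' actual data.
# For equally spaced points, a linear relationship has ~zero second
# differences, while a quadratic one has roughly constant nonzero ones.
def second_differences(ys):
    """Differences of the differences of consecutive samples."""
    first = [b - a for a, b in zip(ys, ys[1:])]
    return [b - a for a, b in zip(first, first[1:])]

linear = [0.0, 1.0, 2.0, 3.0, 4.0]       # x = t
quadratic = [0.0, 1.0, 4.0, 9.0, 16.0]   # x = t**2

print(second_differences(linear))      # → [0.0, 0.0, 0.0]
print(second_differences(quadratic))   # → [2.0, 2.0, 2.0]
```

With noisy measurements the distinction is fuzzier, which is exactly why careful re-collection of the data settled the question in class.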

For the last two school years I have held a half-time position in my district as a technology integration specialist. This year in particular has been amazing, as we have been able to hire enough TOSAs to have a team of us who collaborate to help teachers integrate tech as well as to investigate and make decisions regarding the future of technology in our schools. I love that my boss significantly weighs our input in making decisions. My question right now is what direction these decisions should head in the area of technology integration and professional development.

A couple weeks ago I had a brief twitter conversation with a few others regarding how we (as tech trainers) help staff use technology effectively.

What I mean is that I think a focus on SAMR (or any other tech-focused PD model) loses the forest for the trees. The tech isn’t the focus; learning is.

Then at the TIES 2013 conference, by far my favorite session of day one was given by George Couros, who definitely didn’t mention any websites I can use in class tomorrow. Instead, he said this;

The biggest game changer in education is the way our teachers think.

He also showed us what success looks like. And told us to be more dog. And to jump.

But seriously. Is moving education forward really about using the flashy new game you learned? Or is it about using good pedagogy, then having tools at your disposal to be able to utilize that pedagogy? I think the answer is clearly the latter, but the majority of time, money, and effort seems focused on devices and software rather than on how students learn.

Time to change that.

Innovation and Disruption in Everyday Education

Two nights ago I came across a tweet from Huffington Post Education;

I then modified and retweeted it;

What followed was an overwhelming number of retweets, favorites, and follows (at least for me, with a measly 600 some followers). Additionally, if you click on the link, you will see that HuffPo has since changed the title of the article to These 11 Leaders are Running Education But Have Never Taught. Interesting.

The vast majority of the RTs and interactions shared my sentiment, but one caught my eye;

And a conversation ensued;

Challenge Accepted.

As I started thinking about who and what I was going to highlight here, the tweets kept rolling in. This one really got me thinking.

The excerpt that really struck me;

Of course, even in Disrupting Class, the predictions of the ed-tech end-times were already oriented towards changing the business practices, not necessarily the pedagogy or the learning. [Emphasis mine]

I think that the ‘disruption’ really needed in education is to simply utilize methods of instruction and systems that have been demonstrated to be effective through research. In the end I don’t think we need to revolutionize the entire system, as we have pockets and individuals to serve as wonderful models. The real problem is how to scale from individuals doing great things to a great system as a whole.

As I highlight some of these innovations by everyday teachers, let’s start with the greatest disruption in my teaching, Modeling Instruction. Modeling is a highly researched, highly effective method for teaching Physics. Modeling came out of a great disruption; physics teacher David Hestenes wrote a basic concept inventory for his physics classes thinking they would rock it. Instead, they bombed it. Years of research then gave birth to Modeling. Frank Noschese, a ‘normal’ physics teacher in New York State, gave a great TEDx talk demonstrating how students “Learn Science by Doing Science” using Modeling. In fact, Frank was recently lauded by a non-educator for his work with modeling. Kelly O’Shea is closing in on 200,000 views on her blog where she posts guides on how to implement MI, her modified MI materials, and other thoughts relating to physics education. She teaches at a private school in NYC. Both (and the many other modelers ‘disrupting’ traditional physics teaching) are ‘just’ teachers.

Standards Based Grading (SBG) is a movement in education more widespread than modeling instruction. The basis of SBG is to guide students towards mastery of topics rather than pushing them through an outdated factory model of learning.  Rick Wormeli and Robert Marzano are two academics leading the charge in SBG, though it has primarily succeeded as a grassroots movement of educators working in isolation. Frank and Kelly, mentioned above, are also teacher-leaders in this field. SBG has in fact even entered the higher-ed realm, with Andy Rundquist pioneering its use through non-standard assessments in his physics classes. In my district my wife was one of the first to implement SBG 5ish years ago as a result of her Masters thesis. Many others have followed suit, and, for certain in my case, the result is increased student learning.

Project Based Learning (PBL) is a movement where students learn by doing, with a flexible route to demonstrating learning in comparison to other methods of instruction. The most visible example of PBL I know of is Shawn Cornally’s BIG school, where he is attempting to scale PBL to make school more awesome, a worthy task. Project Lead the Way is an example being implemented in my district, a program where students learn engineering through PBL. Students interact regularly with engineers from Seagate, Toro, and other local firms, and produce plans and prototypes with their guidance. Two other teachers at my school pioneered the building of an Environmental Learning Center around “the idea that meaningful learning happens when students engage with the community around them, including the natural environment.”

Many teachers were Flipping the Classroom before Khan Academy popularized it, and many have similarly continued to innovate within the flipped structure. Ramsey Musallam in particular popularized a variation called Explore Flip Apply, which was developed because of research indicating that sparking students’ interest and thinking through inquiry before providing content delivery improves learning outcomes. A local colleague of mine, Andy Schwen, wrote a nice post describing his transition from a pure flip to the EFA model.

Twitter is utopia for individual educators uniting to improve learning, and perhaps the best example of this that I know of is a loose collection of math teachers known as the Math Twitter Blog-o-Sphere. They use the hashtag #MTBoS, interact regularly, and have fantastic conversations about student learning. What’s really amazing is that from this virtual community has sprouted a real one. Tweetups are a regular occurrence (I have participated in three), and for two years now they have organized a loose, edcamp-style workshop called Twitter Math Camp. Last year 100+ educators took part.

I’m fairly certain that I’ve missed numerous ‘disruptions’ and ‘innovations’ out there. So my challenge to you: fill the comments with examples. They can be specific instances (projects, lessons, whatever), or general cases. I am particularly interested in examples outside of the math and physics world in which I primarily live. Blow it up; my hope is that maybe someone important will notice and realize that educators are the voice that’s missing from the education reform table.