## What Makes For Good Ed Tech? An ISTE 2014 Reflection

A couple of weeks back I attended the ISTE 2014 convention, and I discovered something:

This wasn’t the first time I’d gotten worked up about edtech, but this time my frustration was directed at the amount of money being thrown around, particularly on products that consider neither pedagogy nor the extensive research available on how students learn. That got me thinking about how we can wade through the dump and find the treasure.

So I wanted to look more closely at some companies whose emphasis on students and learning I really value, to see if I could find some patterns.

Desmos is a great example. It’s very clear, and easy to find, that their focus is on constructivist learning. Then if you dig a bit deeper, you’ll find that they’ve partnered with amazing teacher leaders Dan Meyer, Christopher Danielson, and Fawn Nguyen to make some great lessons, designed for learning, powered by Desmos. I also had the fortune to have an extended conversation with Eli Luberoff, CEO of Desmos, and was struck by how much their pedagogical ideals influence what they do. They want to create a place for students to experience math, not a place where math is done to them. It’s inspiring.

Another good example is Dreambox. Their front page boasts

My daughter uses Dreambox through her school, in a different district than where I teach. I was won over to Dreambox first by the exercises she was completing that place strong emphasis on conceptual development of place value and the meaning of mathematical operations, and then by a great conversation with Tim Hudson, a former math teacher who now designs curriculum for Dreambox. Tim confirmed that pedagogy and conceptual development of mathematics are at the forefront in the design of Dreambox activities.

Aleks

At first glance, Aleks (adaptive learning software) seems to be grounded in research.

I started digging a bit into Knowledge Space Theory, and found that KST is about assessing knowledge, not about how students actually gain that knowledge. The difference is important. While it’s good to know what students do and don’t understand, it’s more difficult, in my experience, to actually get them to learn things. Dreambox focuses on getting students to understand concepts through conceptual development, while Aleks focuses on, from what I have seen, drill-and-kill practice based on what the platform decides a student doesn’t know.

I admire that Sal Khan wants to change “education for the better by providing a free world-class education for anyone anywhere”. It’s an admirable goal, and one worth pursuing.

The problem is that KA repeatedly refuses to consider research on pedagogy and student learning (see Frank Noschese’s and Christopher Danielson’s posts for starters). The about page boasts about data and badges (read Bill Ferriter’s post about the problem with badges) rather than about deep thinking and conceptual development. I won’t rehash Frank and Christopher’s arguments, but seriously, go read those posts. It’s amazing what we actually do know about learning, and how dismissive Mr. Khan is of it all.

Edtech as an industry seems bent largely on ‘personalization’ and ‘individualization’; there is, however, a significant research base on student learning through collaboration and dialogue. Edtech should aid in promoting methods that work, rather than move away from them. Some are. I’m hoping this post helps me and others make strides in finding those edtech companies that really do have students, rather than dollars, at their core.

As for the edtech startups, I can only hope they heed Frank’s edtech PR tips.

Finally, the most reliable method I have found for vetting edtech is to pay attention to what the right people are saying. Everybody in the MTBoS raves about Desmos. When I originally posted to Twitter asking about Dreambox, I got rave reviews from folks I highly respect. KA, on the other hand, is not spoken highly of in those circles, and I never hear mention of Aleks. Chances are good, it seems, that if a number of Twitter folks are raving about a product for its usefulness in student learning, it’ll be a good one. Find people who explicitly evaluate learning effectiveness, and listen to them.

## The Current State of Educational Technology

I’m starting to feel like a technology curmudgeon.

I’ve been thinking for a while about how technology should be used in schools. Around 3 years ago I started pushing for more access to technology in my district, and I would like to think my motivation was righteous; I saw possibilities to enhance student engagement and learning but didn’t have the ability to do so because of filters/policies/lack of hardware. So I pushed. And pushed.

The first result was the ability to pilot Google Apps for Education with my kids. I had them do a research project where they investigated types of forces and used Google Docs to compile their research. It was neat, and pretty cutting edge for my district at the time (circa 2010). Did they learn how to use Google Docs and how to collaborate? Sure. Did they learn any physics? I honestly doubt it.

Then from 2010 to 2012 I was able to acquire probeware that collects digital data for physics, which we analyzed on computers with a program called LoggerPro. One of the great things LoggerPro does is allow for video analysis, such that we can plot position, velocity, and acceleration vs. time for objects within the video frame. For quite some time the workflow was as follows: we would collect the data using cameras, walk to the computer lab, upload and analyze the data, then print the graphs so we could discuss them the next day.

Fast forward to 2012. I somehow was able to convince someone to give me 10 laptops to use in my room. The very first day they were ready, I ran into an interesting problem with some data students had collected. Some said the data indicated a linear relationship, some said quadratic. We had 15 minutes of class left, and I made a snap decision: go re-collect the data, this time being very careful when doing the video analysis. We came back together, and sure enough, every graph was linear.

This would not have been possible without the technology accessible to me at that very moment. But where would this activity fit on the SAMR model? I’m not sure that really matters. In this case, kids were certainly learning more physics, though not as much about how to use technology, since LoggerPro isn’t as transferable to life outside school as Google Docs is. Does that make one better than the other? It depends on your objectives, I imagine. But the point I want to make is that I didn’t need extensive training to redefine my technology use in the classroom; instead, I needed students to have immediate access to the technology so they could use it in the moment for learning. My training in how students learn physics through experiences was far more valuable than learning the technology itself.

For the last two school years I have held a half-time position in my district as a technology integration specialist. This year in particular has been amazing, as we have been able to hire enough TOSAs to have a team of us who collaborate to help teachers integrate tech as well as to investigate and make decisions regarding the future of technology in our schools. I love that my boss significantly weighs our input in making decisions. My question right now is what direction these decisions should head in the area of technology integration and professional development.

A couple weeks ago I had a brief twitter conversation with a few others regarding how we (as tech trainers) help staff use technology effectively.

What I mean is that I think a focus on SAMR (or any other tech-focused PD model) loses the forest for the trees. The tech isn’t the focus; learning is.

Then at the TIES 2013 conference, by far my favorite session of day one was given by George Couros, who definitely didn’t mention any websites I can use in class tomorrow. Instead, he said this:

The biggest game changer in education is the way our teachers think.

He also showed us what success looks like. And told us to be more dog. And to jump.

But seriously. Is moving education forward really about using the flashy new game you learned? Or is it about using good pedagogy, then having tools at your disposal to be able to utilize that pedagogy? I think the answer is clearly the latter, but the majority of time, money, and effort seems focused on devices and software rather than on how students learn.

Time to change that.

## Innovation and Disruption in Everyday Education

Two nights ago I came across a tweet from Huffington Post Education:

I then modified and retweeted it:

What followed was an overwhelming number of retweets, favorites, and follows (at least for me, with a measly 600-some followers). Additionally, if you click on the link, you will see that HuffPo has since changed the title of the article to These 11 Leaders Are Running Education But Have Never Taught. Interesting.

The vast majority of the RTs and interactions shared my sentiment, but one caught my eye:

And a conversation ensued:

Challenge Accepted.

As I started thinking about who and what I was going to highlight here, the tweets kept rolling in. This one really got me thinking.

The excerpt that really struck me:

Of course, even in Disrupting Class, the predictions of the ed-tech end-times were already oriented towards changing the business practices, not necessarily the pedagogy or the learning. [Emphasis mine]

I think that the ‘disruption’ really needed in education is to simply utilize methods of instruction and systems that have been demonstrated to be effective through research. In the end I don’t think we need to revolutionize the entire system, as we have pockets and individuals to serve as wonderful models. The real problem is how to scale from individuals doing great things to a great system as a whole.

As I highlight some of these innovations by everyday teachers, let’s start with the greatest disruption in my teaching, Modeling Instruction. Modeling is a highly researched, highly effective method for teaching physics. Modeling came out of a great disruption: physics teacher David Hestenes wrote a basic concept inventory for his physics classes thinking they would rock it. Instead, they bombed it. Years of research then gave birth to Modeling. Frank Noschese, a ‘normal’ physics teacher in New York State, gave a great TEDx talk demonstrating how students “Learn Science by Doing Science” using Modeling. In fact, Frank was recently lauded by a non-educator for his work with Modeling. Kelly O’Shea is closing in on 200,000 views on her blog, where she posts guides on how to implement MI, her modified MI materials, and other thoughts relating to physics education. She teaches at a private school in NYC. Both (and the many other modelers ‘disrupting’ traditional physics teaching) are ‘just’ teachers.

Standards Based Grading (SBG) is a movement in education more widespread than Modeling Instruction. The basis of SBG is to guide students toward mastery of topics rather than pushing them through an outdated factory model of learning. Rick Wormeli and Robert Marzano are two academics leading the charge in SBG, though it has primarily succeeded as a grassroots movement of educators working in isolation. Frank and Kelly, mentioned above, are also teacher-leaders in this field. SBG has in fact even entered the higher-ed realm, with Andy Rundquist pioneering its use through non-standard assessments in his physics classes. In my district my wife was one of the first to implement SBG, 5ish years ago, as a result of her Masters thesis. Many others have followed suit, and, certainly in my case, the result is increased student learning.

Project Based Learning (PBL) is a movement where students learn by doing, with a flexible route to demonstrating learning in comparison to other methods of instruction. The most visible example of PBL I know of is Shawn Cornally’s BIG school, where he is attempting to scale PBL to make school more awesome, a worthy task. Project Lead the Way is an example being implemented in my district, a program where students learn engineering through PBL. Students interact regularly with engineers from Seagate, Toro, and other local firms, and produce plans and prototypes with their guidance. Two other teachers at my school pioneered the building of an Environmental Learning Center around “the idea that meaningful learning happens when students engage with the community around them, including the natural environment.”

Many teachers were Flipping the Classroom before Khan Academy popularized it, and many have similarly continued to innovate within the flipped structure. Ramsey Musallam in particular popularized a variation called Explore Flip Apply, which was developed because of research indicating that sparking students’ interest and thinking through inquiry before providing content delivery improves learning outcomes. A local colleague of mine, Andy Schwen, wrote a nice post describing his transition from a pure flip to the EFA model.

Twitter is a utopia for individual educators uniting to improve learning, and perhaps the best example of this that I know of is a loose collection of math teachers known as the Math Twitter Blog-o-Sphere. They use the hashtag #MTBoS, interact regularly, and have fantastic conversations about student learning. What’s really amazing is that from this virtual community has sprouted a real one. Tweetups are a regular occurrence (I have participated in three), and for two years now they have organized a loose, edcamp-style workshop called Twitter Math Camp. Last year 100+ educators took part.

I’m fairly certain that I’ve missed numerous ‘disruptions’ and ‘innovations’ out there. So my challenge to you: fill the comments with examples. They can be specific instances (projects, lessons, whatever) or general cases. I am particularly interested in examples outside of the math and physics world in which I primarily live. Blow it up; my hope is that maybe someone important will notice and realize that educators are the voice that’s missing from the education reform table.

## When You Can’t Do Standards Based Grading

My wife first introduced me to Standards Based Grading (SBG) 5ish years ago, while writing her masters thesis on the topic. After 3 years of pushing I finally bought in, particularly because of what I perceive as a special harmony between SBG and the method of instruction I use in physics, Modeling Instruction. I helped implement SBG in our regular physics course and was happy with the results. However, the only class I teach this year is a concurrent enrollment U of MN course, which I wrote more extensively about here. The students are mostly highly motivated high school juniors and seniors. I love, LOVE teaching this class, but it has a glaring problem for SBG: it is articulated through the U of MN.

I thought some significant problems I was having could be addressed with SBG.

1. There are only four exams and a final all year for the U of MN aspect of the course. This is far too little assessment; neither my students nor I really knew where they stood before taking these high stakes exams. It made my grading load nicer, but it wasn’t best for kids.
2. I stopped grading homework last year (I still checked it, but for no credit), and I found that students simply didn’t do it. I still believe that it is practice and thus shouldn’t be part of a grade, but also that they really do need to practice to succeed.
3. Four exams per year means two per semester, so a significant part of a student’s HS semester grade was based on just two exams.
4. Students didn’t know what they had to be good at to succeed on the exams.

I set out to solve these problems using SBG.

Background: Before I continue there are some features of the course that are very relevant to making this work. First of all, it is important that I have a bit of flexibility with how grades are calculated. There is a 10% category, as set by the U, that was for homework, but I was told I could use it how I see fit. As I mentioned before, I decided last year that I was done assigning a grade to homework (but that’s a different post…), so I had 10% of the U grade that I could use for re-assessable quizzes. Furthermore, since the HS grade is split into two semester grades, whereas there is only one college grade for the whole year (it’s a one-semester U course taught in a year at the HS), I have even more flexibility with the HS grade. Thus I am able to carve out 25% for an ‘SBG’ category for their HS grade. It’s not perfect, but it’s what I have.

The grading scale, as set by the U, is fairly forgiving, with 15% increments instead of the standard 10%, such that the A cutoff is 85%, B is 70%, etc. This is key to using a non-standard scoring method (such as a 4-point scale) because a 2/4, at 50%, is still a D+. That said, one could always use a 4-point scale and map those scores to percents. Really the four points are meant to represent levels of mastery (exemplary, proficient, developing, basic), not percentages. In my case I can use an alternate scale and it still fits within my percentages, but some tweaking could certainly fix that in other cases.

The Quizzes are based on standards I have written for the course, which in turn are based on the skills I deemed necessary for students to succeed on the four U of MN exams. The quizzes are scored on a 2-1-0 scale, where 2 means they nailed it, 1 means they understand something but not everything, and 0 means they didn’t know where to begin. The first quiz generated an awesome amount of learning, as most students scored themselves a 1 and were very motivated to improve their learning, and thus that score. After a couple of quizzes and students getting very frustrated at multiple reassessments stuck at 1, I caved and started giving 1.5s (B/B+). I don’t mind that distinction, as I give a 1.5 when I can’t give a 2 (they didn’t nail it) but they have still shown proficiency (they demonstrated understanding but made some minor mistakes). I’m still seeing kids reassess to shoot for the 2.
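For concreteness, here’s a quick sketch of how those quiz scores land on the U’s forgiving scale. The A (85%) and B (70%) cutoffs are as stated; the C and D cutoffs below simply extend the same 15% pattern and are my assumption.

```python
# Sketch: mapping 2-1-0 quiz scores onto the U of MN's 15%-increment scale.
# The A (85%) and B (70%) cutoffs are stated by the U; C (55%) and D (40%)
# extend the same pattern and are my assumption.
CUTOFFS = [(85, "A"), (70, "B"), (55, "C"), (40, "D")]

def quiz_to_percent(score, max_score=2):
    """Convert a quiz score to a percentage of the max."""
    return 100 * score / max_score

def letter(percent):
    """Return the letter grade for a given percentage."""
    for cutoff, grade in CUTOFFS:
        if percent >= cutoff:
            return grade
    return "F"

for s in (2, 1.5, 1, 0):
    print(s, quiz_to_percent(s), letter(quiz_to_percent(s)))
# 2 -> 100% (A range), 1.5 -> 75% (B range), 1 -> 50% (D range), 0 -> F
```

So a 1 keeps a student passing while they reassess, and a 1.5 lands squarely in B territory, which matches the grades I actually report.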

Wait, did you say self-graded quizzes? OK, this is my favorite part of the course; I stole it from Frank. Students take a quiz. There’s a bunch of brightly colored keys in the back of the room along with red pens (side note: I don’t love red, but it’s what I have at the moment). When a student finishes the quiz, they walk to the back, check their work against the key, correct and annotate their quiz, score themselves a 0, 1, 1.5, or 2, and hand it in. I hover in the back both to keep them honest and to answer questions when needed. This serves two purposes:

1. Students get instant feedback on exactly what they did and did not do correctly, which is vastly more important than…
2. …the fact that it takes the correcting load off of me.

I almost hate mentioning #2, but the reality is that the normal student load in a public high school is something like 150 kids, so doing SBG with reassessments could get overwhelming. I want this to be something that is helpful for the students and for me. Quiz grading takes me very little time with this method, and we also don’t ‘waste’ class time going over the quiz as a group since they already corrected it themselves.

When I take a look at the quizzes, I am looking for a couple of things. First, I am looking to see if there are patterns in what was answered incorrectly so I can adjust instruction if necessary. Second, I am looking closely at the 2s to make sure they really demonstrated a complete understanding of the material. This process is MUCH faster than scrutinizing each quiz to see if they get an 8.5 or a 9.

Hold on. If you don’t give partial credit on a quiz, then don’t they all get D+’s? Kind of. At first they did, with only 2, 1, 0. I didn’t want to be haggling over points; I want students to fully understand each and every standard so they can nail those U of MN exams. Case study: on quiz 1 (Constant Velocity problem solving), most of my students got a 1. This is because the U (like the College Board for AP exams) strongly emphasizes algebraic problem solving, and students resist doing so. Last year I didn’t feel like my students had a good feel for algebraic problem solving until second semester; this year, they all took the bet and lost, and as a result they are reassessing. And they are reassessing well.

What I love

• Forced reassessments force learning
• The feeling that students are in control of their learning and their grade

What I don’t as much

• Part of me is ok with certain timelines for demonstrating learning (the exams), but another part believes the final deadline should really be the end of the course. In a more pure SBG system students could potentially figure out physics in the last month of school and then earn a grade that reflects that understanding, an A. In my system, if they didn’t figure out kinematics by Exam 1, then they probably won’t get the A in the course even if they reassess on the quizzes, due to the high weights of the non-reassessable exams. But I can’t change this anyway.

Conclusions

The learning gains I have seen over last year are so far exceeding even my optimistic expectations; below is a box plot comparing the last two years of the U of MN Exam 1 (more dynamic link here):

So far all indicators point to success of this new system over my old one. Do you see holes? Have suggestions to make it even better? Let me know in the comments.

Frank wrote a great post about The Spirit of SBG that I think complements this post in that it emphasizes that SBG is about increasing learning, not about a system itself. I’m using a framework for SBG as best I can to help increase learning, so I hope the spirit of SBG is being kept in that.

## CVPM Unit Summary

I only have one standard for CVPM, as I didn’t want to get bogged down with a super granular standard list.

CVPM.1: I can represent constant velocity problems graphically and algebraically and solve them using both numeric and algebraic methods.

I start day one of my essentially honors level, first year physics course with the Buggy Lab. (If you’re not familiar with the Buggy Lab, or even if you are, read Kelly’s post about it). This takes 2 full days, sometimes 2.5, with 45 minute periods.

From there I use Practice 1 stolen from Kelly, found in my CVPM Packet, which takes me about a day and a half (of 45 minute periods). Here’s a post about the board meeting to discuss the data.

Days 5-6 or so are the Cart Launch Lab. Here’s a picture of my notes while students discussed the data in a board meeting.

Next is Practice 2, also stolen from Kelly, though I add having students walk the graphs with motion detectors; about 1 day. (Update: Whiteboarding took the whole period, and I decided that was more worthwhile than actually walking the graphs with motion detectors; we’ll do more of that in CAPM.)

The last worksheet is Practice 3, which I wrote to develop more algebraic problem solving, because my class is actually a U of MN class taught at the HS level and the U emphasizes algebraic problem solving. 2 days. This worksheet went very well, and here are some notes about starting the whiteboarding process with it, as well as the ensuing conversation.

After Practice 3 I have two days of difficult problem-solving practice. The first is the standard lab practicum where students must cause two buggies of different speeds to crash head-on at a particular location. Here’s a post describing the practicum. The second is a difficult, context-rich problem that students work on in groups.
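For the curious, the practicum boils down to a constant velocity meeting problem. A quick sketch with made-up speeds and separation:

```python
# Sketch of the buggy practicum: two buggies start a distance d apart and
# drive toward each other at constant speeds v1 and v2.
# Positions: x1(t) = v1*t and x2(t) = d - v2*t; the crash happens when x1 = x2.
def crash_point(d, v1, v2):
    t = d / (v1 + v2)   # time until the buggies meet
    return v1 * t       # crash location, measured from buggy 1's start

# Made-up example: buggies 2.0 m apart moving at 0.30 m/s and 0.20 m/s.
print(crash_point(2.0, 0.30, 0.20))  # -> 1.2 (meters from buggy 1)
```

Students, of course, get there by measuring their buggies’ speeds and reasoning through the position graphs rather than plugging into a formula.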

All in all the unit takes me 13-14 days, including the quiz at the end and a day for the FCI pretest.

## Transitioning from Energy to Momentum

In my college level physics class we study energy right before momentum. I really like this, particularly because we can begin our study of momentum driven by the fact that a pattern emerges from the data that is not explainable by energy.

On the first day of my momentum unit I typically do a fun car crash activity to help students start thinking about how force and time are related in collisions. The next day we start building the momentum transfer model. (We’ll come back to the force-time relationship at the end of this paradigm series.) Last year, not having experience with Modeling Instruction, I just dove right in (chronicled starting with day 1 here). This year I wanted to utilize the discover, build, break cycle that Frank Noschese talked about in his TEDx talk. One of the tenets of modeling is that models are useful for certain cases and not for others. Thus I used an inelastic collision to springboard into momentum, based on the fact that an energy analysis is not particularly useful for this situation.

When students walked in I showed them a scenario where a moving cart (A) collides with a stationary cart (B) of equal mass. I asked them to use the Energy Transfer Model (ETM) to predict the final velocity of the carts. A typical analysis looks something like this:

Assuming there is no conversion of energy to thermal energy, the kinetic energy of the first cart should end up as combined kinetic energy for both carts after the collision:

$\frac{1}{2}m_Av_{Ai}^2=\frac{1}{2}m_Av_{Af}^2+\frac{1}{2}m_Bv_{Bf}^2$

Noting that for this case $m_A=m_B$  and  $v_{Af}=v_{Bf}$,  the whole thing simplifies to

$v_{Ai}^2=2v_f^2$

Solving for the final velocity of the two carts together in terms of the initial velocity of the first one,

$v_f=\frac{v_{Ai}}{\sqrt{2}}$

Once we got to here I simply said “Go test it,” and they got to work in the lab.

Before I go on I want to comment on the lack of thermal energy in the above derivation. Many of my students correctly tried to include E_therm in their analysis. This is great, but I pointed out that today was a lab day and thus we need to be able to measure things. Me: “Can we easily measure E_therm?” Student: “Ummmm…no.” “Right, so let’s ignore it and see if the data upholds that assumption.” They almost always (correctly) want to include E_therm in every energy analysis, but we have done a couple of situations in the lab where stored gravitational interaction energy transfers to kinetic energy for dynamics carts, and where assuming no changes in E_therm yielded good data. Thus students were primed for me to suggest that we could ignore E_therm. However, this is tempered by the fact that I do a demonstration showing that kinetic energy transfers to thermal energy in collisions (a couple of weeks prior) and that they are used to me guiding them toward ‘wrong’ answers. So I believe students went into lab cautiously optimistic that the lab evidence would support the derived equation.

It doesn’t.

It only takes students 5-10 minutes to realize that the final velocities are closer to half the initial velocity rather than the initial divided by the square root of two. Some of them try to justify the data (well, it seems kind of close to root two…), but after conferring with their classmates they give up and go with dividing by two. At that point I pulled them back up to the front of the room.
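For a sense of scale, here’s a quick numeric comparison of the two predictions, using a made-up initial velocity:

```python
import math

# Compare the energy-based prediction (v_i / sqrt(2)) with the observed
# pattern (v_i / 2), for a made-up initial velocity of cart A.
v_i = 0.60  # m/s (assumed)

v_energy   = v_i / math.sqrt(2)  # ~0.42 m/s, predicted by the ETM derivation
v_observed = v_i / 2             # 0.30 m/s, the pattern students actually find

print(v_energy, v_observed)
# The energy prediction is sqrt(2) ~ 1.41 times the observed value --
# roughly a 40% discrepancy, far outside typical lab uncertainty.
```

That gap is big enough that students can’t wave it away as measurement error, which is exactly the point of the activity.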

Me: So, did our equation work?
Students: Nope
M: But was there a pattern?
S: Yep. Final velocity is half the initial.
M: Wait, you mean that energy doesn’t predict the final velocity, but something else does?
S: Um…..

We had a quick discussion about how something must be going on that is different from energy. We also talked about how it makes sense that energy wouldn’t work; we expect some of the initial kinetic energy to convert to E_therm  after the collision.

From here I continued day 1 in pretty much the same way as last year. I found that after a 45 minute period students were just about ready to talk about a relationship, just slightly behind where day 1 ended before adding the energy piece. My students are much more used to the idea of paradigm labs this year and are getting pretty good at looking for meaning in lab data, so I am not surprised that this addition didn’t significantly change the day one timeframe. Tomorrow we start with presenting the student-derived relationships.

## An Empirical Start to the Energy Transfer Model (Part 2)

At the end of the first post in this series I lamented that starting energy empirically meant that I couldn’t include changes in thermal energy the way starting this modeling unit more traditionally does. I shouldn’t have worried. It turns out that emphasizing that the energy of a system changes through working, heating, or radiating helps students overall with energy conservation, even though thermal energy in particular isn’t addressed. But I’m getting ahead of myself.

Days 1-4 ish are outlined in the first post of this series. I’m now picking up at around Day 5.

Kinetic Energy

We started this unit by finding that the areas under the force vs. position graphs for two different springs, when made equal, yielded equal velocities when launching carts. I emphasized at this time (and over and over again as we went through the unit) that the area under a graph, if it has a physical meaning, means a change in something. In this case it’s a change in energy, though we hadn’t gotten that far yet. I just emphasized that it’s a change in something. So in the first activity the change in something predicted velocities. In the second it correlated with a change in height. At that point we coined the term gravitational interaction energy, and we looked at how the final gravitational interaction energy was the same as the initial plus the change in energy (as found from the area under the F vs. x graph). The third activity, starting now, looks at the correlation of that change with velocity. They now know that this has something to do with kinetic energy, since we had the energy = pain talk, but not exactly how.

There are many variations of this lab, most using springs. I found that if you attach a force detector to a cart (which we did for the area vs. change in height experiment previously), you can just pull the cart with a rope and get pretty good data for area vs. v^2 even though the force isn’t constant. Which I think is extra cool. Basic setup for this experiment is below. Note the horizontal track.

I learned one pretty neat trick when I performed the lab myself. For each trial, it doesn’t really matter where the end point is, as long as you find the area for some displacement and then record the final velocity that corresponds to the end point for that displacement (assuming you start from rest, which I did). So I had students graph force vs. position to find the area (change in energy) that we were interested in, and then plot velocity vs. position so that they could easily find the corresponding ending velocity. This way they can set the integral (area) section to be the same for each trial, then quickly use the examine function in LoggerPro to find the ending velocity at that same endpoint for each trial. Slick.
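If you want to convince yourself that the area trick works even when the force isn’t constant, here’s a quick simulation sketch (all numbers made up): accumulate the area under F vs. x while integrating the motion, then compare it to ½mv² at the endpoint.

```python
# Sketch: a cart pulled with a NON-constant force. The running area under
# F vs. x should match (1/2) m v^2 at the endpoint. All numbers are made up.
m = 0.5    # kg, cart mass
dt = 1e-4  # s, time step for a simple Euler-style integration
x = v = t = 0.0
area = 0.0  # running area under the F vs. x graph (i.e., the work done)

def force(t):
    return 2.0 + 1.5 * t  # N, an arbitrary time-varying pull

while x < 1.0:  # integrate out to a 1.0 m displacement
    F = force(t)
    a = F / m
    dx = v * dt + 0.5 * a * dt**2  # displacement this step
    area += F * dx                 # this step's slice of area
    v += a * dt
    x += dx
    t += dt

print(area, 0.5 * m * v**2)  # the two agree to within integration error
```

No matter how the pull varies, the area under F vs. x keeps pace with the kinetic energy, which is exactly what the students’ data shows.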

Plotting change in energy vs. v looks like this. Note that since I took this data I actually called the area work, since that is the means by which the energy is changing in this case. I did not instruct them to do that, however.

It actually looks fairly linear, especially to kids who are looking for things to be linear. However, the data was typically non-linear enough, and since we had linearized a quadratic when doing central force, most groups linearized by putting v^2 on the x axis.

When the data is linearized, it looks like this.

Certainly that looks more linear! Student data actually turned out well too. It’s always nice when that happens.
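As an aside, the linearization step is easy to sketch with synthetic data (mass, velocities, and noise level all made up here): fitting area against v² should return a slope near half the mass and an intercept near zero.

```python
import numpy as np

m = 0.5                                    # kg, assumed cart mass
v = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # m/s, assumed ending velocities
rng = np.random.default_rng(0)
area = 0.5 * m * v**2 + rng.normal(0, 0.002, v.size)  # J, noisy "measured" areas

# Linearize: fit area against v^2; the slope should land near m/2 = 0.25 kg
# and the intercept near zero.
slope, intercept = np.polyfit(v**2, area, 1)
print(slope, intercept)   # slope ~ 0.25, intercept ~ 0
```

This is the same conclusion the board meeting reaches by hand: slope equal to half the mass, intercept zero.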

The board meeting for this went amazingly fast. In the first class a student commented almost right away about the units of the slope. They started trying to figure out what the units should be, and I wrote on the board. With a little prodding we finally figured this out:

$\frac{\text{units of rise}}{\text{units of run}}=\frac{N\cdot m}{\frac{m^2}{s^2}}=\frac{kg\cdot\frac{m}{s^2}\cdot m}{\frac{m^2}{s^2}}=\frac{kg\cdot\frac{m^2}{s^2}}{\frac{m^2}{s^2}}=kg$

Whoa. All that simplifies to kg? Cool.
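The same units check can be done mechanically. Here is a small sketch that tracks SI base-unit exponents as (kg, m, s) tuples, so multiplying units adds exponents and dividing subtracts them:

```python
# Sketch: the board-meeting units check with SI base-unit exponents
# stored as (kg, m, s) tuples.
def mul(a, b): return tuple(p + q for p, q in zip(a, b))
def div(a, b): return tuple(p - q for p, q in zip(a, b))

KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)
newton = div(mul(KG, M), mul(S, S))  # kg·m/s^2 -> (1, 1, -2)
rise   = mul(newton, M)              # N·m      -> (1, 2, -2)
run    = div(mul(M, M), mul(S, S))   # m^2/s^2  -> (0, 2, -2)

print(div(rise, run))                # (1, 0, 0): the exponents of plain kg
```

Everything but the kg exponent cancels, which is the simplification the students worked out on the board.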

The classes did this in different orders, but essentially within 10 minutes they had figured out that the intercept was zero (both empirically from their data and logically by thinking through why it should be zero), that the slope was half the mass, and that the slope relating to the mass made sense because the units of the slope simplify to kg.

Thus

$\{\text{Area under F vs. x graph}\}=\frac{1}{2}mv^2$

From here we went on to be explicit about the names of everything. The area represented a change in energy. In the first case (pulling carts up ramps), it’s a change in gravitational interaction energy. In this case, it’s a change in kinetic energy.
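For completeness (this wasn’t shown in class; the students found the result empirically), the standard calculus derivation behind it takes one line:

$W=\int F\,dx=\int m\frac{dv}{dt}\,dx=\int_{0}^{v} m\,v'\,dv'=\frac{1}{2}mv^2$

where the lower limit is zero because the carts start from rest.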

This is more or less where day 5 ended. No, seriously: at this point they (keep in mind this is a college-level class taught at the high school, so essentially top 20% kids) took data, whiteboarded it, and figured out its meaning in a 45-minute class period.

Day 6 ish: Lab wrap up and transition to Energy Bar Charts

I started the day by teaching energy bar charts (LOLs). (Need a primer on energy bar charts? Kelly comes through again.) We then went through the labs, drawing the LOL for each one. This did two things: first, and most importantly, it emphasized that the area under the force vs. position graph gives a value that measures how the energy changed from the first snapshot to the second snapshot. Second, it was a way to show students how to draw LOLs.

After drawing the LOLs for our two experiments, we had a conversation about how energy changes. The Modeling Instruction teacher notes list three ways energy changes: working, heating, and radiating. (Side note: I strongly prefer starting energy from a First Law of Thermodynamics perspective (strict conservation of energy) rather than from a Work-KE theorem perspective. More on that in a later post on partial truths.) Students brought up convection and conduction, and I talked about how these are just two different ways for heat to transfer. We briefly talked about molecular interactions and KE transfer here, but I kept it quick. The point was to plant the seed that what we are doing generalizes beyond work performing the energy transfers in and out of the system, but that for now we are going to focus on work (rather than heating or radiating) as the mechanism to transfer energy.

This took an entire day, as I had them draw the LOLs first and then we had a conversation about them. After this day I assigned a worksheet on drawing LOLs and writing the qualitative energy conservation equations. This is a modified version of worksheet 3 in the standard modeling curriculum, modified by me, Kelly O’Shea, and Marc Schrober (in reverse order?).

I’m hoping to write more about the development process, but overall I found, very anecdotally, that starting energy this way helped students see conservation on a system basis, and they had no problems with the idea that energy can enter or leave a system through working, heating, or radiating. It took a while to differentiate between energy stored in the system as thermal energy versus energy leaving the system through work done by friction, air resistance, or normal force (a bouncing ball, or other examples), but that’s to be expected no matter how this is done. My regular physics students certainly had trouble with that distinction despite starting ETM ‘traditionally.’ Both classes saw this demonstration (video here) showing that kinetic energy certainly does, often, transfer to thermal energy. The difficulty generally is tracking that energy: is it stored as a change in E_therm in the system, or does it leave via work? It took a while to work through that (pun intended).

Concluding Thoughts

I’m going to leave you with this. When I first started learning about Modeling Instruction, I assumed it was all about the labs, such as those outlined so far in this series. I have since learned, however, that though the labs provide a foundation for the concepts being learned, working through those concepts by whiteboarding is just as important as the paradigm labs themselves. Whiteboarding is where students flesh out the differences between what they think and what science demonstrates as a better truth, and where they hopefully cement beliefs that align with science. Don’t underestimate the full framework of Modeling Instruction as a complete system for helping students through the process of learning like scientists.