Today we compared data.
I started class by asking students why the net force at the bottom of the arc was not zero. They realized quite quickly that the forces should not be balanced since the direction of motion is changing (thanks in large part to previous emphasis on velocity as a vector: when direction changes, velocity changes. Once again I credit Kelly O’Shea and her method for teaching BFPM for this). So then we drew a free body diagram and reasoned through which force should be bigger: tension up or mg down? They want to say down, but a quick discussion about what would happen if the net force were down dispels that idea. (Since the object is moving horizontally for the snapshot we have taken, a downward net force would accelerate the object downward; clearly this is not the case, so the tension must be larger than mg.)
Side note #1: I want to mention why I didn’t do the typical rubber stopper lab (here’s a video if you are not familiar with that lab; ignore the flying pig part). I have done it for years, and I found it at best to be an example of large systematic error. I actually had my honors classes write it up last year for that very purpose: how to write up a lab with significant errors. I have never been able to get quality data, especially with students. I find that they even have trouble getting a good pattern for F proportional to the square of v. Thus I wanted to try something new, and the pendulum method was something I had done previously and was also suggested by another modeler on Twitter (@BEPhysics). I found two main negatives in using this particular lab to build central force: 1) the data we take and the resulting analysis are only valid for the bottom of the circle, and 2) though it is true that F_net is greatest at the bottom of the circle, this lab falsely ‘proves’ that: the force vs. time data shows a maximum at the bottom, but that is due largely to the fact that the force detector is pointed vertically and thus only reads the vertical component of the tension. Even if this supports an overall correct vision, I don’t like the idea that kids would get the correct vision from an incorrect assumption.
Then, looking at the regressions as a whole, we re-confirmed that the data was quadratic; that is, the net force on a pendulum at the bottom of its swing is proportional to the square of the speed at the bottom. That’s cool all by itself.
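For anyone who wants to play with this outside of a graphing tool like Logger Pro, here’s a minimal sketch of that quadratic fit in Python with NumPy. The numbers are made up for illustration (m = 0.2 kg, r = 0.5 m, plus a little noise), not our class data:

```python
import numpy as np

# Illustrative (not real) pendulum data: mass, string length,
# speeds at the bottom, and F_net = (m/r) v^2 plus small noise.
m, r = 0.2, 0.5                                   # kg, m
v = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # m/s
noise = np.array([0.01, -0.02, 0.015, -0.01, 0.005])
F = (m / r) * v**2 + noise                        # N

# Fit F = A v^2 + B v + C (polyfit returns highest power first).
A, B, C = np.polyfit(v, F, 2)
print(f"A = {A:.3f}  (expect m/r = {m / r:.3f})")
print(f"B = {B:.3f}, C = {C:.3f}  (expect both ~0 for trustworthy data)")
```

With clean data the A term comes out near m/r and the B and C terms stay near zero, which is exactly the pattern the class is hunting for below.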
Next I had students look for other patterns (what happens as r and m change?). I had them stay in their clusters first (see Modeling Central Force: Day 1 for more on this). This meant that they had different masses but approximately the same radius, and thus could look for mass dependence.
Side note #2: I have a lot of questions/problems with this part of the Modeling process that I hope will be addressed when I take the workshop this summer. I am still wrestling with the best way to have students share their data with each other to start finding the patterns. I also think that I need to bite the bullet and go get some large whiteboards; I have whiteboard desks and lots of space on my main whiteboard, but neither of those are portable, and using them has proven to be more of a workaround than something better than actual whiteboards.
I learned a lot about the guidance part of modeling in this particular instance. I really should have been more systematic in defining the radius and mass for each group. In one class, two clusters should have had clearly different radii but differed by only 5 cm or so. Moreover, it would have been VERY helpful if each cluster had consistent masses, so that when they compared radius dependence they could do so at constant mass.
So here’s an example of what I was trying to do with the pairs/clusters concept, which was less successful than it could have been due to my overly ambitious directions.
This would have made the comparison process MUCH easier, I believe, as I could have had them get into ‘clusters’ to check for mass dependence and into ‘groups’ for radius dependence. This is something I am going to consider trying later if I have other labs with two main parameters such as this.
Most groups had some sort of mass-proportional-to-net-force kind of data, but it was not obvious due to flawed data. The data was VERY quadratic, but the actual parameters of the quadratic generally were not even close to what they should be. The A term (f(x)=Ax^2+Bx+C) should be m/r, but generally wasn’t. Some groups even had significant outliers: they had a larger mass than another group with the same radius, but the A terms were similar. Bad data was a big problem here; in fact, NONE of the groups found the ‘correct’ terms for the quadratic (I found this out in analyzing the data later; we didn’t go so far as to check in class). I didn’t have enough time to investigate reasons for this, though the fact that students and I can do the same lab with completely different results (meaning good vs. bad data) has been driving me nuts this year. Non-zero B values were a problem as well, though students were astute enough to realize that C had to be zero physically (zero speed has to yield balanced forces, so if F_net is to be zero when v=0, C must equal zero), and thus it became a sort of litmus test for good data. C not zero? Now we don’t trust the rest of your regression. This worked reasonably well. B should be zero as well, but that was only true in the best of cases.
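The students’ litmus-test reasoning can be written down explicitly. Here’s a hypothetical helper (the function name and tolerances are mine, not anything we used in class) that flags an untrustworthy regression:

```python
def check_fit(A, B, C, m, r, tol=0.05):
    """Flag problems with a quadratic fit F_net = A v^2 + B v + C
    for a pendulum of mass m (kg) on a string of radius r (m)."""
    problems = []
    if abs(C) > tol:
        problems.append("C should be ~0: zero speed means balanced forces")
    if abs(B) > tol:
        problems.append("B should be ~0: no physical linear-in-v term")
    if abs(A - m / r) > 0.2 * (m / r):   # arbitrary 20% cutoff
        problems.append(f"A = {A:.3f}, but m/r = {m / r:.3f}")
    return problems

print(check_fit(A=0.41, B=0.01, C=0.02, m=0.2, r=0.5))  # good data: []
print(check_fit(A=0.80, B=0.30, C=0.25, m=0.2, r=0.5))  # fails all three checks
```

The ordering mirrors how the class actually used it: a non-zero C kills trust in the fit first, then B, and only then do you bother comparing A to m/r.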
That said, in two out of three classes the data was convincing enough to show that F_net is proportional to mass and inversely proportional to radius. I used one of the other classes’ data sets to show the bad-data class what was up (their data was truly horrible; no noticeable patterns at all! Very frustrating, both for them and for me). Thus we could put the pieces together: F_net is proportional to mv^2/r.
I asked what this seemed similar to; hey, it’s kind of like F_net=ma! Yes, yes it is. Meaning that the v^2/r piece must be the acceleration of an object moving in a circle.
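Spelling out that comparison (my notation, same symbols as the fits above):

```latex
F_{\text{net}} = \frac{m}{r}\,v^2 = m\left(\frac{v^2}{r}\right)
\qquad \text{compare with} \qquad
F_{\text{net}} = m\,a
\quad\Longrightarrow\quad
a = \frac{v^2}{r}
```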
And this is where we ended for the day. (Note: It got much less frustrating for the wrap up tomorrow, which I will be posting asap)
Side note #3: I have been wondering to myself for quite some time about the rationale for linearizing. It seems as though it is something that made lots of sense 30 years ago, before we could graph any function in 2 seconds with a hand-held device. Now, however, why linearize? I don’t think I even learned how until teaching AP Physics, as in college we would simply deal with the actual functions, whatever they were. I would love some input on this. In this case, I linearized for one class and, as expected, the slope was essentially equal to the A in the quadratic while the intercept was essentially equal to the B value. So why bother?
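As a sanity check on the “why bother” question, here’s a quick sketch (illustrative noiseless numbers, not class data) showing that linearizing, i.e. refitting F against v^2 instead of v, recovers the same information as the quadratic fit:

```python
import numpy as np

m, r = 0.2, 0.5                     # kg, m (made-up values)
v = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
F = (m / r) * v**2                  # ideal F_net at the bottom of the swing

A, B, C = np.polyfit(v, F, 2)       # quadratic fit of F vs v
slope, intercept = np.polyfit(v**2, F, 1)   # 'linearized' fit of F vs v^2

# The slope of the linearized fit matches the A of the quadratic fit
# (both essentially m/r = 0.4 here), so no new information appears.
print(round(A, 6), round(slope, 6))
```

Which is exactly the point: with a tool that can fit the quadratic directly, the linearized version is redundant, though it arguably still has value as a visual check that the model is right (a straight line is easier to eyeball than “quadratic-ness”).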