Troy Patterson

Educator, Thinker, Consultant

Month: October 2014

Handwriting xml code

Well, it finally happened. The latest update to Mac OS X, Yosemite, broke one piece of software that I use every week – Podcast Maker. I’ve been using Podcast Maker for many years. It did one thing: turn basic text information into nicely formed code that I could then copy into TextWrangler in order to create the xml file for iTunes to recognize the latest podcast episode. Very handy. I believe that I paid $35 for it at some point. It was $35 well spent.

Now, I’m back to hand coding the xml file. Although it’s not my favorite activity, there is a certain challenge to it. Code is either right or it isn’t. If I do happen to make a mistake, the podcast feed just doesn’t work, so I get nearly immediate feedback on the process. Did I get everything right or not? I know as soon as I upload the file and hit refresh in iTunes. If the new episode shows up, I got it right. If not, well……
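To show the idea, here is a minimal Python sketch that builds one episode’s <item> element in the standard RSS 2.0 enclosure shape that iTunes reads. The titles, URLs, and dates are made-up placeholders, and this is only an illustration of the general feed structure, not Podcast Maker’s actual output or a complete feed:

```python
import xml.etree.ElementTree as ET

def make_episode_item(title, url, length_bytes, pub_date, description):
    """Build one podcast <item> element for an RSS 2.0 feed.

    All values below are hypothetical examples, not a real feed.
    """
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "description").text = description
    ET.SubElement(item, "pubDate").text = pub_date
    # The enclosure tells podcast clients where the audio file lives.
    ET.SubElement(item, "enclosure", {
        "url": url,
        "length": str(length_bytes),
        "type": "audio/mpeg",
    })
    ET.SubElement(item, "guid").text = url
    return item

item = make_episode_item(
    "Episode 42",
    "https://example.com/podcast/ep42.mp3",
    12345678,
    "Wed, 29 Oct 2014 12:00:00 GMT",
    "Show notes for episode 42.",
)
xml_text = ET.tostring(item, encoding="unicode")
```

One nice side effect of building the element programmatically is the same immediate feedback: if the XML is malformed, the parser refuses it on the spot.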

How often do we provide students with that same experience? Not waiting for the teacher to validate the work, but objective, right-or-wrong feedback? Not everything fits into that model, but immediate feedback is pretty powerful. This is one reason that I was so happy to find the rubric grading model in Moodle, and to find a teacher who uses it for feedback on oral presentations. Sure, students can get some immediate feedback during a presentation, but that feedback is probably too nuanced for them to truly understand.

One thing that I really like about Moodle is the power and flexibility to provide students with feedback. Feedback can occur instantaneously with known answers (like multiple choice or cloze) or can be provided by the teacher. Feedback is powerful. According to Marzano, providing feedback is one of the high-yield strategies. I can attest that when it comes to hand coding xml files, it sure is effective.

Moodle 2.8

Moodle 2.8 is expected to ship in mid-November. The release will focus on improving the Gradebook, one area where Moodle could use some consistency. The Gradebook has seen improvements before, but with 2.8 it should become a fully functioning feature. Let’s take a look at a few of the changes.

New Grader report

The new Grader report will use the whole window for presentation, which increases the amount of information available. This builds on the earlier improvement that keeps key columns (student names) always in view. The new Grader will also provide smooth scrolling in all directions (Yea!). Additionally, all platforms will be supported, including tablets and phones; since the world is really going mobile, this is a welcome focus. “Single view” mode will allow editing of any row or column on its own. A big Hallelujah on this one – it will be very handy for actually entering grades.

New Natural weighting aggregation method

This will allow grades to be combined simply, and it will provide a “clearer interface for using weights”. I’m a bit less excited about this one. Weighting of grades seems to be confusing for many teachers, so this is an area that could truly use improvement. Some teachers understand weights and use them well, but in my experience, far too many don’t really, truly understand how weighting works.
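To make the confusion concrete, here is a small illustrative calculation using my own made-up numbers, not Moodle’s actual gradebook code. With natural weighting, each item counts in proportion to its maximum points, so a 100-point test naturally outweighs a 10-point quiz; explicit category weights can instead make the small quiz count as much as the big test:

```python
# Illustrative arithmetic only -- not Moodle's gradebook implementation.

def natural_weighted_grade(items):
    """items: list of (points_earned, points_possible) pairs.
    Each item is weighted by its maximum points."""
    earned = sum(e for e, _ in items)
    possible = sum(p for _, p in items)
    return 100 * earned / possible

def category_weighted_grade(categories):
    """categories: list of (percent_score, weight) pairs; weights sum to 1."""
    return sum(score * weight for score, weight in categories)

items = [(9, 10), (70, 100)]                    # quiz: 90%, test: 70%
natural = natural_weighted_grade(items)         # 79/110 -> about 71.8

# Explicit 50/50 weights let the 10-point quiz count as much
# as the 100-point test, giving a noticeably different result:
weighted = category_weighted_grade([(90, 0.5), (70, 0.5)])  # 80.0
```

The same two scores produce roughly a 71.8 one way and an 80 the other, which is exactly the kind of gap that surprises teachers who haven’t thought through how their weights work.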

Improved Grader setup page

There will be a new design with simpler terminology and a clearer layout. Making things clear and easy to understand is always a good thing; too often, Moodle’s explanations and definitions are circular.

Improved Grade import/export

This falls into the nice-to-have category, but it’s not an earth-shaker for me. I’ve exported grades as .CSV files, and that is a pretty robust and useful format.

Other improvements

Several other improvements are on the way. The Forum module will have a reply-by-email feature. The Assignment module will have an option to add additional files. The Choice module will allow more than one choice to be made. The Database module will add fields that can be marked as required. The Quiz module will get additional completion options. (Completion is a powerful tool within Moodle that is frequently overlooked.) The Lesson module will allow for introductions.

My thoughts

Moodle keeps getting better and better. It is the most powerful of the LMSes that I’ve used, and its modularity is a real strength. However, it is not always the prettiest belle at the ball. Indeed, sometimes it can be downright ugly. One of the groups that I worked with was ready to get rid of Moodle because it looked so “dated”. The teachers were sure that the students wouldn’t use it because of the look. I quickly tweaked a few things (a theme change, reorganizing from the standard three columns to two, and some color changes) and the teachers were ecstatic. They now love the “look and feel” of the site.

That experience, along with working with many, many other teachers, leads me to believe that Moodle really should address the overall design philosophy of the program. Creating an exciting, easy-to-use experience could help propel Moodle even further. Yes, I am aware that there are many themes out there. However, the default look and feel is still a very powerful undercurrent for the program.

Overall, I’m excited about using Moodle and where it is going. It is a truly powerful tool that is on a great track for educators.

Mute the Messenger

I found this article, Mute the Messenger, through my RSS feed this week, and I found it fascinating. Essentially, it is the tale of standardized testing and what could potentially be the ugly reality of assessment. It is not the shortest of articles, but it is a great read.

Now, take the article with a bit of skepticism. Yes, many of the points may be simply circumstantial. Yes, there could be a lot of information that is missing. Still, it is a very powerful article.

Let’s take a look at a few of the quotes.

Testing advocates believed that more rigorous curricula and tests would boost student achievement—the “rising tide lifts all boats” theory. But that’s not how it worked out.

This is one of the most powerful quotations for me. There is a fundamental belief that making the curriculum and assessments more rigorous would “obviously” lead to more learning. Funny thing about learning, though: sometimes it is more complex than people want to think.

Texas Education Commissioner Robert Scott, long an advocate of using tests to hold schools accountable, broke from orthodoxy when he called the STAAR test a “perversion of its original intent.”

Yep. Some are starting to realize that just increasing testing isn’t the panacea that some want to think that it is.

Stroup sat down at the witness table and offered the scientific basis behind the widely held suspicion that what the tests measured was not what students have learned but how well students take tests.

…his testimony to the committee broke through the usual assumption that equated standardized testing with high standards. He reframed the debate over accountability by questioning whether the tests were the right tool for the job. The question wasn’t whether to test or not to test, but whether the tests measured what we thought they did.

This points out a profound function of testing that all too many take for granted. What does testing really measure? Yes, we end up with a number at the end of testing, but what does that number really mean? What do tests really measure? These are crucially important questions.

Stroup argued that the tests were working exactly as designed

Stroup had caught the government using a bathroom scale to measure a student’s height.

The scale wasn’t broken or badly made. The scale was working exactly as designed. It was just the wrong tool for the job. The tests, Stroup said, simply couldn’t measure how much students learned in school.

Here is the crux of the matter. Are we really using the right tools? Are we using assessments correctly? Are we sure that the assessments measure what we think they measure? I remember times as a principal when the number one question was “What is the topic of the writing section?” Once we knew the topic, we were pretty sure (and always right) about how the students would do on the assessment. Quite frankly, we knew that the topic was really, really important. We knew how well the students could write. Even more importantly, we knew that if the topic was something the students weren’t interested in, they would not do well on the assessment.

Well, one of the legislators called for Stroup and Pearson to have a debate. That debate would never happen.

…standardized tests have become the pre-eminent yardstick of classroom learning in America, and Pearson is selling the most yardsticks.

Pearson is heavily invested (literally) in assessment. Quite frankly, they are selling the yardsticks.

But, here’s one of the interesting things. Stroup was also teaching kids. He had developed a program that helped students learn math. He knew that the kids were being successful, but that success wasn’t showing up on the statewide standardized tests. He started looking at why.

Stroup knew from his experience teaching impoverished students in inner-city Boston, Mexico City and North Texas that students could improve their mastery of a subject by more than 15 percent in a school year, but the tests couldn’t measure that change. Stroup came to believe that the biggest portion of the test scores that hardly changed—that 72 percent—simply measured test-taking ability. For almost $100 million a year, Texas taxpayers were sold these tests as a gauge of whether schools are doing a good job. Lawmakers were using the wrong tool.

So, he did the research and found that what the tests really measure is how well students take the test. His research found that 72% of the test score was “insensitive to instruction”. Essentially, this means that teachers, schools and educators can’t change about 72% of the test results. Pearson cried foul, stating that he had made a mistake: according to Pearson, only 50% of the test is “insensitive to instruction”. That’s right. Pearson admitted that about half of a score used to determine how well teachers were teaching was unchangeable by the teacher. Honestly, teachers are being evaluated by these scores. Jobs, reputations, etc. – all determined by these tests. Yet here is Pearson admitting that 50% of that score is determined by the student’s ability to take a test. Nothing the teacher or school could do would affect this part of the score.

Stroup concluded that the tests were 72 percent “insensitive to instruction,” a graduate- school way of saying that the tests don’t measure what students learn in the classroom.

After correcting what Pearson interpreted as the mislabeled column, Way wrote, the tests were “only 50 percent” insensitive to instruction.

“teachers account for about 1% to 14% of the variability in test scores,” largely confirming Stroup’s apparently controversial conclusion.

If it’s true that the test measured primarily students’ ability to take a test, then, Stroup reasoned to the House Public Education Committee in June 2012, “it is rational game theory strategy to target the 72 percent.” That means more Pearson worksheets and fewer field trips, more multiple-choice literary analysis and fewer book reports, and weeks devoted to practice tests and less classroom time devoted to learning new things. In other words, logic explained exactly what was going on in Texas’ public schools.

Oh, and the legislator who had called for a debate between Dr. Stroup and Pearson – the debate that never happened? Well, he retired. He is now a lobbyist for Pearson.

