This semester I've taken over a large first-year subject, for which the previous subject convenor had arranged to use MyMathLab Global for weekly testing. The subject is based around Glyn James' "Modern Engineering Mathematics", a book which is OK for content and pretty awful for exercises. As users of the text, we have access to the publisher's (Pearson's) online testing service. For educators the idea is terrific: every week I simply pull from the exercise bank a set of 10 exercises corresponding to our current topic, and make them available for a week, during which time students can have unlimited goes at them. So in theory it's an easy way for students to get some easy marks, and it reduces the burden of marking weekly tests for the lecturer (me) and the subject tutors.
The subject I am teaching, "Engineering Mathematics 2", is a follow-on subject from – wait, you've guessed it! – "Engineering Mathematics 1".
What could be better? Well, for all the extraordinary ease of testing, I have begun to have doubts about the efficacy of the system.
- It's a commercial system, which means that students have to buy a "personal access code" to use it. A code comes with the book if they purchase it. However, a code lasts for only 12 months, and if students have bought the book (and its code) for Eng Maths 1 (as they all should), and if there's been a break in their studies for any reason, then their code may have expired by the time they come to Eng Maths 2. Then there are all the students who have got subject exemptions from Eng Maths 1 and never had a code in the first place. The local publisher's reps have been terrific and have provided me with lots of extra codes to hand out, but the onus is on me to get these dratted codes in the first place, and to ensure the students get them.
- The system is very picky about the form of an answer. A typical question asks the student to find the three partial derivatives of a given function, and the students get three entry boxes for their results. If the form of a student's solution differs from the "correct one", it is marked wrong. For example, we find that if a student leaves out the "1" of a "16" in error, the answer is just marked wrong. If the student decides to write the answer in an algebraically equivalent but differently arranged form, it is also marked wrong. MyMathLab Global doesn't seem to include a CAS to check whether two answers are equivalent (aside from some very simple arithmetic and operator commutativity). This has been a source of annoyance to my students, who may enter an answer which is mathematically correct, and yet have it marked wrong. A very spirited attack on MyMathLab Global can be seen here.
- The system checks only the final answer. We might well ask: what mathematical learning is actually being tested here? Surely we want our students to show that they have mastered a mathematical technique, process, or algorithm, and we would mark them on their working, not just on the final answer. At least, that is how I mark by hand. A question for which the student's working is fundamentally correct but the final result is wrong will still get most of the marks from me; I regard the final result as just one part of a question. So suppose the student sets out to answer a question, and makes a small slip somewhere along the way which affects every succeeding step, and hence the final result. I would give almost complete marks – there's been only one very small slip, and aside from that the logic and the working are exemplary. But MyMathLab Global can't do this. An answer is either correct or wrong.
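The hand-marking policy described above – penalise the slip once, then give credit for correct work that follows from it – is sometimes called "error carried forward", and it is perfectly mechanisable. Here is a toy sketch (my own illustration, with made-up steps, not how any real system marks): each step is checked against the student's *own* previous line, so a single slip costs a single mark.

```python
# "Error carried forward" marking: validate each step relative to the
# student's previous result, not the model answer, so one early slip
# does not wipe out every subsequent mark. All names are hypothetical.

def mark_working(student_steps, step_functions):
    """student_steps: the values the student wrote down, starting from the
    given initial value. step_functions[i] maps a previous value to the
    correct next value. Returns the number of marks earned."""
    marks = 0
    previous = student_steps[0]  # the starting value is given in the question
    for value, fn in zip(student_steps[1:], step_functions):
        if value == fn(previous):  # correct relative to *their* last line
            marks += 1
        previous = value  # carry the student's value forward, right or wrong
    return marks

# Toy question: start at 3, then "add 4", "multiply by 2", "subtract 5".
steps = [lambda x: x + 4, lambda x: x * 2, lambda x: x - 5]

perfect = [3, 7, 14, 9]    # correct working throughout
one_slip = [3, 8, 16, 11]  # slip at the first step, consistent afterwards

print(mark_working(perfect, steps))   # 3 marks out of 3
print(mark_working(one_slip, steps))  # 2 marks: only the slip is penalised
```

An all-or-nothing marker would give the second student 0; this scheme gives 2 out of 3, much closer to what a human marker would award.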
I have been hoping to check out MAA WeBWorK as a comparison, but the only server I have access to runs a Linux distribution (RHEL 5.9) which can't run it. From my understanding, WeBWorK is rather more nuanced about the mathematical equivalence of expressions, so it probably overcomes the second point above, and being open source it completely overcomes the first. I don't think it overcomes the third, though.
I will continue to use MyMathLab Global – its convenience is simply too great – in spite of my misgivings. But I think that online assessment in mathematics is still a very long way from testing the students' ability to do mathematics, as opposed to their ability to obtain an answer. This latter is not mathematics: it is applied arithmetic; and noble and useful as it may be, it is only a tiny part of what we, as mathematics educators, should be testing.