Rewriting Workshop for Moodle
Shortly after MoodleMoot DE 2009 in Bamberg, I started working on the Workshop activity module for Moodle 2.0. I specified the implementation plan and discussed it with the community. During the development, I have been using git branches and git cvsexportcommit a lot. Thanks to this work, I have learnt a lot about unit tests, design patterns and the Moodle 2.0 API. And, most importantly, I have been working with such great people around Moodle core!
Latest stable version 1.9.8+
There are four commits on the stable branch from the last development week. Tim Hunt fixed an incorrect capability check in the Quiz module (MDL-22410) and several regressions in clean_param() introduced when we were converting ereg to preg, because the ereg functions are deprecated in PHP 5.3 (MDL-19418).
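Such conversions are trickier than they look: PCRE patterns need explicit delimiters and the return values differ slightly, which is exactly where such regressions come from. A minimal sketch of the kind of change involved (the helper name is made up for illustration; this is not the actual clean_param() code):

```php
<?php
// Illustrative only: the kind of ereg-to-preg conversion done during
// the MDL-19418 cleanup. Note the '/' delimiters that PCRE requires.

function is_digits($value) {
    // Before (deprecated since PHP 5.3):
    //     return (bool) ereg('^[0-9]+$', $value);
    // After:
    return (bool) preg_match('/^[0-9]+$/', $value);
}
```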
Martin Dougiamas committed a bunch of patches that kill scripts which are expected to be included only, if they are called directly via a URL (MDL-22388). Early in every script’s lifetime, Moodle core defines the constant MOODLE_INTERNAL, which shall be used to make sure that the script was included and not called directly (constants can’t be defined via URL even with bloody register_globals on). This trick helps to protect libraries, form definitions and other scripts from potential security holes. My mom told me not to talk to strangers, so I personally prefer the silent one-line check
defined('MOODLE_INTERNAL') || die();
instead of explaining to the poor hacker that direct access to the script is forbidden (Mahara uses this shorter form and I just like it).
Gordon Bateson found a way to deal with an XHTML issue in Hotpot by using IFRAME instead of OBJECT for Internet Explorer, as HTML forms are not able to escape an OBJECT in this browser.
Future version Moodle 2.0 Preview 2
There are 141 commits on the future release branch from the last week. It is now tagged as Preview 2. Since Preview 1, there have been many improvements and bug fixes. Many thanks to all who help us with testing.
Quotes of the week
Watch your back!
Backup and restore support used to be my least favourite part of Moodle module coding. I was kind of scared (and bored) by all those direct fwrite() calls used to generate the XML representation of the module data. With the new backup/restore framework, written by Eloy Lafuente, this is not true any more! Eloy has prepared a really well designed system that deals with a lot of tasks automatically, yet in a flexible way. Following his tutorial on backup support for Moodle 2.0 activity modules, I was able to quickly hack together a working prototype of Workshop backups.
What I like the most so far is the way a module just defines a description of its database structure and lets the core actually fetch the data and convert them into an XML representation. Given the necessary amount of information, the new backup system is able to “automagically” handle all relations, references to other tables and embedded files (only those really needed by the module, of course!).
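The principle can be illustrated with a toy serializer (just a sketch of the idea, not the real backup API): the module supplies a nested description of its data, and a generic routine walks it to emit the XML. The real framework additionally resolves relations, annotates references and bundles embedded files.

```php
<?php
// Toy illustration: turn a nested array describing module data into XML.

function to_xml($name, array $data) {
    $xml = "<$name>";
    foreach ($data as $key => $value) {
        if (is_array($value)) {
            $xml .= to_xml($key, $value);   // recurse into nested structures
        } else {
            $xml .= "<$key>" . htmlspecialchars((string)$value) . "</$key>";
        }
    }
    return $xml . "</$name>";
}
```

For example, `to_xml('workshop', ['name' => 'Peer review', 'submission' => ['title' => 'My essay']])` yields `<workshop><name>Peer review</name><submission><title>My essay</title></submission></workshop>`.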
There are still parts to finish, both in Moodle core and in modules – most notably the ability to back up subplugin information and to restore 1.9 backups. But I can sleep well again, as backup and restore is not a nightmare for module developers any more.
After more than eight years, one of the characteristic symbols of Moodle, the default user icon (yes, that very well known smiley cake), was replaced by a bland avatar in the standard theme for Moodle 2.0. It is ironic that the smiley itself originally replaced the shadowhead icon “with something a bit more positive”. The story of the “f1.jpg” image and how it resisted various attempts to replace it is quite interesting. Anyway, I will miss it… So long!
But we had to celebrate and be glad, because this brother of yours was dead and is alive again; he was lost and is found.
– (Luke 15:31-32, NIV)
Yesterday, the new Workshop module was committed into Moodle CVS HEAD and became part of the standard Moodle 2.0 distribution again. It’s been a long way to get to this stage, but the real journey is just starting. There are still some features to tune up and some known bugs to fix. The module seems to work pretty well, though. I had some real testing data from workshop 1.9 courses available (many thanks to those who could provide them) and from my point of view, the new UI gives a clearer overview and more control over the activity. However, my opinion does not matter – our good Moodle users will have to say what works for them and what does not.
Most of the legacy workshop features remain. There are two significant differences. Firstly, the new workshop puts two grade items into the course gradebook – a grade for submission (that is, a grade that students get for their own work) and a grade for assessment (a grade that estimates the quality of their peer assessment). Legacy versions of the workshop automatically summed up these two grades. Nowadays, the Moodle gradebook allows more ways of aggregation. Teachers can, of course, set up their gradebooks to mimic the old behaviour.
The second significant break of backward compatibility is in how the assessments of example submissions are evaluated. Example submissions are provided by the teacher and students are expected to practise the assessment process on them. The teacher can, for example, provide an example of a really good piece of work and of a really poor one. In previous workshop versions, the grade for the assessment of example submissions was calculated immediately, so the student got feedback like “if you assessed this submission in this way, you would receive this grade for assessment”. In the new workshop, the evaluation of students’ assessments is a more dynamic process. The teacher can use different methods of grade calculation and change the input parameters of the calculation “on the fly”. Therefore, the workshop module itself is not able to calculate the grade for assessment without the teacher’s assistance. Instead, teachers just provide so-called reference assessments of example submissions (where they express the quality of the submission) and this benchmark is then displayed to students. So students can compare their assessment with the teacher’s, but no grade can be calculated.
I hope the other improvements – namely the possibility to control the submission allocation for peer assessment, manual switching of workshop phases, and an overall UI rework to make information intuitive and easy to understand – will compensate for the loss of these dropped features. And who knows – maybe in some future version, we will find a way to introduce them again in the new framework.
While the test site where you can play with the new workshop module is up and running, I am working on migration procedures to get current 1.9 workshop data into the new framework. I am following a similar approach to the one Petr used for the migration of the Resource module. I personally call it the Scavenger design pattern.
The basic idea of the migration is that all 1.9 tables are renamed with an _old suffix and new ones are created as if we were starting from scratch. Then the upgrade script goes through all 1.9 records, transforms the data into the new formats and inserts them into the new tables (marking old records as processed). Once finished, the new workshop core tables are filled with the old data. Later during the installation, the new workshop subplugins get created (allocators, grading strategies and grading evaluators). They find the old dead workshop tables and start picking data from them. At the end of the day, the new tables are populated with the old data.
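In code, the idea looks roughly like this (a self-contained sketch using plain arrays in place of database tables; all function and field names are illustrative, not the actual upgrade code):

```php
<?php
// "Scavenger" sketch: walk the renamed _old records, transform each one
// into the new format, and mark it as processed but keep it in place, so
// that subplugins installed later can still scavenge data from it.

function migrate_submissions(array &$oldtable) {
    $newtable = [];
    foreach ($oldtable as $id => $old) {
        if (!empty($old['processed'])) {
            continue;   // already migrated in an earlier run
        }
        $newtable[$id] = [
            'title'   => $old['title'],
            'content' => $old['description'],   // field renamed in 2.0
        ];
        $oldtable[$id]['processed'] = true;
    }
    return $newtable;
}
```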
Such a procedure requires that 1.9 workshop instances are at least in a sort-of-well-defined coma state. Therefore I started to fix the most critical functional and security bugs in workshop 1.9. So far, the most important one may be that the workshop in Moodle 1.9.7 will push grades into the gradebook as expected during the “Synchronise legacy grades” procedure. Big thanks to all the patient users who help me with testing both new features and fixes.
As a teacher, I know from my own experience that the best way to understand something is to try to explain it to somebody else. It really works. I often realize new important details, concepts or relationships while giving a lesson to the class.
So, when I wanted to sort out all my ideas about the Workshop random allocation method, I decided to document it so that teachers (and students, too!) know how the assignment is done and what results they can expect. I have described the method of randomly assigning submissions for peer review and published it in the Workshop forum at moodle.org. If you have a minute, please check the PDF attached to the post and leave a comment in the forum. Thanks in advance!
I am stuck on the random allocation subsystem for the new Workshop module. What looks like a trivial algorithm at the beginning turns into a complex computation once you take various aspects into account. I have literally spent hours and hours sketching squares, circles and arrows between them, trying to discover all the possible situations the workshop activity can get into.
In the most trivial case, everything is clear and easy. Say you have ten students in your course and you want each of them to review and assess three submissions. So you just randomly select three submissions for each student and you are done. If you want, you may tell the allocator to ensure self-assessment. It then simply adds yet another allocation for every student, setting the reviewee to the student themselves. Easy, eh?
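The trivial case can be sketched in a few lines (illustrative code only, not the actual allocator; a real allocator would also balance how many reviews each submission receives):

```php
<?php
// Sketch of the trivial random allocation: every student reviews
// $numperauthor randomly chosen peers, optionally plus themselves.

function allocate_random(array $students, $numperauthor, $selfassessment = false) {
    $allocations = [];   // reviewer => list of authors whose work they review
    foreach ($students as $reviewer) {
        $candidates = array_values(array_diff($students, [$reviewer]));
        shuffle($candidates);
        $allocations[$reviewer] = array_slice($candidates, 0, $numperauthor);
        if ($selfassessment) {
            // Yet another allocation: the student reviews their own work.
            $allocations[$reviewer][] = $reviewer;
        }
    }
    return $allocations;
}
```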
Now, groups and groupings would like to play the game. Still no big issue, though. From my point of view, the “Separate groups” mode is for those situations where you want or need students to be enrolled in a single course but you do not want the groups to interact in any way. Ideally, students from different separate groups should not even know about each other. On the other hand, teachers use the “Visible groups” mode when they want some sort of group work support.
So, if the workshop is in the “Separate groups” mode, the allocator just has to be careful to select the reviewers of a given submission from the same group the author is in. What if the author is in several separate groups? If I follow the precedent of the Forum behaviour, I can require that the author’s submission belongs to a single separate group (like a post in a forum). As every user is allowed to have at most one submission per workshop, this means they can’t submit into the other separate groups they are a member of. Can this be an issue? Probably not – typical students are not expected to be members of several separate groups. But software design is not about typical cases only. It must work in very rare situations, too.
If the workshop is in the “Visible groups” mode, the allocator shall select reviewers from all available groups. It shall try its best to keep the review allocations balanced. Ideally, if your paper is going to be reviewed by three peers and there are three visible groups in the workshop, the allocator should pick one reviewer from each group. Advanced teachers could use this feature to get a more objective assessment.
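The balancing idea can be sketched like this (an illustrative, deterministic version; the real allocator would also randomize the order within each group):

```php
<?php
// Sketch: interleave the visible groups round-robin, so that the picked
// reviewers are spread across the groups as evenly as possible.

function pick_reviewers(array $groups, $author, $numreviews) {
    $queue = [];
    $maxsize = max(array_map('count', $groups));
    for ($i = 0; $i < $maxsize; $i++) {
        foreach ($groups as $members) {
            $members = array_values($members);
            if (isset($members[$i]) && $members[$i] !== $author) {
                $queue[] = $members[$i];   // one candidate per group per round
            }
        }
    }
    return array_slice($queue, 0, $numreviews);
}
```

With three groups and three required reviews, each review then comes from a different group.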
So far, I have assumed that the reviewers are the same people as the authors – that is, “students” (peers) in your course. This has always been a must for the Workshop module. In Workshop 2.0, this is to change. There are two capabilities that control which role or roles can submit and/or assess submissions. By default, the legacy role Student is allowed both to submit and to assess others’ submissions. But you can locally override the permissions and – voilà – the workshop behaves as a universal peer-reviewing system, to be used as a conference management system, for example. So, some of your course participants (let us say Students) can be allowed to submit their work, and others (for example Non-editing teachers or Assistants) are responsible for the assessment. You can have a course with 1000+ students divided into small groups (randomly or according to their preference) and a cohort of reviewers, each in charge of a group of students. In this scenario, the random allocator is responsible for allocating submissions to reviewers in as balanced a way as possible.
Other settings like the activity “Default grouping” or “Available for group members only” make the situation even more complicated. So, what looked like a simple game with a die, now has a thick book full of complex rules, tables and strange diagrams…
I have a working prototype of a new feature in Workshop 2.0 – the manual allocator. As I have described in my recent blog post, the manual allocator is a Workshop subplugin that allows the teacher to allocate submissions to selected peers for review. The user interface may seem a bit unusual at the beginning. However, I think it presents all the relevant information in an understandable way once you realize how it works.
To demonstrate the new UI, I installed Wink (thanks for the tip, Helen!) and prepared a screencast of the user interface (my first screencast ever!). Now I am awaiting feedback from the community. If they like it, a similar layout could be used consistently in the whole module.
Modularity is one of the key features of Moodle. We have pluggable modules almost everywhere around a Moodle site: course activities, course formats, enrolment plugins, question types, authentication plugins, content filters, side blocks etc. Plugins may define their own database tables, permissions and language strings. Having independent modules implementing the given interface allows developers to relatively easily extend the standard distribution and customize it for their needs.
I have decided to follow the principle of modularity in the new Workshop module. The core of the module defines the required behaviour for the main features. Then, there are plugins that implement the functionality of the given component. Workshop consists of three main process components: submission, allocation and assessment.
The process of a student’s work submission is controlled by a submitter. Technically, a submitter is an instance of a class implementing the submission interface. Its main purpose is to render a submission form and save the submitted data. There will be a default submitter shipped with the workshop, offering submission of an online text with optional attachments.
The process of allocating submissions for review is controlled by an allocator. An allocator is a tool that assigns submissions to peers for review. So far, I plan to have two allocators available: the random allocator and the manual allocator. As the name suggests, the random allocator implements the behaviour of the pre-2.0 Workshop. The manual allocator is a new feature that allows teachers to assign submissions by hand. See the screenshot of the user interface (click to enlarge).
Other allocators can be added simply by writing a custom class that implements the allocation interface. So, I can imagine a plugin called Sorting Hat that checks the grades students have received so far and tends to assign a work to a reviewer at the same grade level, for example. Similarly, the method described in a paper by Joan Codina and Josep Fontana could be implemented as an allocator. The important point is that allocators can be combined. For example, a teacher can allocate several submissions manually and then let the rest be allocated randomly.
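The combination works naturally because every allocator implements the same interface. A self-contained sketch of the idea (the interface, class and method names here are made up for illustration, not the actual Workshop API):

```php
<?php
// Sketch of pluggable allocators: each one maps submission ids to
// reviewers; running them in sequence lets manual choices take
// precedence and a random pass fill in whatever is left.

interface allocator {
    /** Returns an array of submissionid => reviewer. */
    public function allocate(array $submissionids, array $reviewers);
}

class manual_allocator implements allocator {
    private $choices;
    public function __construct(array $choices) {
        $this->choices = $choices;   // the teacher's explicit picks
    }
    public function allocate(array $submissionids, array $reviewers) {
        return array_intersect_key($this->choices, array_flip($submissionids));
    }
}

class random_allocator implements allocator {
    public function allocate(array $submissionids, array $reviewers) {
        $result = [];
        foreach ($submissionids as $id) {
            $result[$id] = $reviewers[array_rand($reviewers)];
        }
        return $result;
    }
}

function combine(array $allocators, array $submissionids, array $reviewers) {
    $result = [];
    foreach ($allocators as $allocator) {
        // Each allocator only sees submissions nobody has allocated yet.
        $remaining = array_values(array_diff($submissionids, array_keys($result)));
        $result += $allocator->allocate($remaining, $reviewers);
    }
    return $result;
}
```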
Finally, the process of assessment is controlled by a so-called grading strategy. A grading strategy encapsulates the assessment form definition and the computation of grades. Four default plugins implementing this component will be available; they are described in the Workshop 2.0 specification.
So far, I have working drafts of the core code and some component plugins implemented. I have refactored them several times and I suppose I will do so again in some details. During the development, I am still learning the object-oriented style of thinking. Luckily, there are three new friends that help me with this task: git branch and rebase, unit testing, and design patterns.
As Steven noticed in his post, this would be a great feature for teachers and/or students to comment on forum posts, online submission texts or (what a surprise) a peer’s work in the new workshop module. Eventually this might become a standard part of Moodle core, extending the current Comments 2.0 proposal. I am definitely going to consider this in the future.
The Workshop module specification has been marked as “Waiting for the approval”. I have incorporated valuable comments, ideas and suggestions that came during the community feedback period. Many thanks to Penny Leach, Mary Parke, Rick Barnes, Dan McGuire, Tim Hunt, Daniel Brouse, Mark Pearson, Matt Gibson and Vernellia Randall for their replies in the request for comments thread.
According to the Gnome Time Tracker, I have spent 51 hours and 32 minutes working on the specification (this includes the time spent testing the current behaviour and studying the source code). I think I’ve got a clear vision of how 95% of the new module should work, how 85% of the UI should look, and how 75% of the internal API should be. So, I think it’s time to hack some code and chew bubblegum. And I’m all out of bubblegum.
Working on the Workshop 2.0 specification, I studied the current behaviour of the module. I needed to figure out how the calculation of the grade for assessment (also known as the grading grade) works. This part of the module has always been a mystery to me – and I was not alone. Before I learned how the calculation actually works, I proposed to rewrite it from scratch. Interesting. Is it just me, tending to push my own solutions instead of trying to understand someone else’s?
After some time studying the code, I realized the ideas behind it are pretty clever. Now I think there is no need to reinvent the wheel. If the calculation is documented well and the Workshop module gets advanced reporting features that help to understand and explain why a student received the grade, the current algorithm should be kept.
Some non-trivial issues and questions emerged, however. The calculation is based on some basic statistical estimations. To be able to measure the quality of assessments, it is assumed that for a given student’s submission there is only one theoretical, objective assessment. Something like “if Zeus assessed the submission, he would give it 67/100”. Is this philosophically right? Is there only one truth about the work out there? And also – the grading in Workshop is determined by the grading form designed by the teacher. So even if Zeus is absolutely objective, his assessment is paltry if he has to use a crappy grading form.
It is clear the Workshop has the potential to be a very mighty evaluation tool. In the hands of a reckless teacher, however, it becomes a dangerous weapon.