Today I gave a presentation at the (very inspiring) UWA eLearning Expo about recent Moodle developments and the ongoing roadmap. I thought you might like to be in on that too, so I’ve returned to my office and recorded my presentation.
A while ago I wrote a blog about learning analytics from different perspectives, giving examples of different analytics-based tools that could benefit different users. Since then I’ve had discussions with numerous people, many of whom have great ideas for analytics tools, but I’ve discovered there is a disconnect between the analytics people want and their understanding of where to find the data.
To get from question to answer there needs to be an understanding of where the data are located and how they can be brought together. My intention with this blog is to show you where to find data for analytics in Moodle.
Source 1: Database tables
The database tables are used by Moodle and its plugins for data storage. They can be queried for information about users and their involvement, as well as course and site information. I would estimate that more than half of the data needed for analytics are stored in these database tables.
The limitation of these data is that they are not historical: they represent the current state of the system. There are some historical data, for example Forum posts and Chat sessions, but for historical information you generally need logs or observers. One advantage of drawing from database tables rather than logs is that such data can be gathered in real time, all the time, which is not advisable for log data (more on that later).
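To make the idea concrete, here is a minimal sketch of the kind of real-time query this enables. The table and column names are illustrative stand-ins, loosely modelled on Moodle’s schema (e.g. its last-access table), not an exact copy of it; a real report plugin would use Moodle’s database API rather than raw SQL.

```python
import sqlite3

# Illustrative stand-in for a Moodle-style table; check your site's
# actual schema before attempting anything like this for real.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_lastaccess (userid INTEGER, courseid INTEGER, timeaccess INTEGER)"
)
conn.executemany(
    "INSERT INTO user_lastaccess VALUES (?, ?, ?)",
    [(1, 101, 1700000000), (1, 102, 1700003600), (2, 101, 1700007200)],
)

# How many of their courses has each user accessed? Because this reads
# the live tables, it could back a real-time report or block.
rows = conn.execute(
    "SELECT userid, COUNT(*) FROM user_lastaccess GROUP BY userid ORDER BY userid"
).fetchall()
print(rows)  # [(1, 2), (2, 1)]
```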
Here is a summary of the data in Moodle database tables, categorised by the perspectives relevant to analytics; one such category, for example, covers grades and achievements.
Examples of using database data
Here are some examples of how data in Moodle’s database tables could be used for learning analytics. It’s not a comprehensive list, but perhaps there are ideas here that could inspire some great analytics tools.
- Student involvement and achievement
  - Accesses to enrolled courses
  - Progress through course
  - Relative success or risk of failure
  - Opportunities for students to undertake activities or interact
- Teacher involvement
  - Regularity of access to courses
  - Timely interaction with students
  - Timely grading
  - Success of students in teacher’s courses
  - Potential to assist students at risk or commend success
- Course quality
  - Richness of content and activities
  - Use of assessment
  - Student-teacher ratios
Source 2: Logs, Events and Observers
Currently the logging of events in Moodle is undergoing change. Rather than referring to past implementations of logging, I’ll be more forward looking, referring to events and logging as used to some extent in Moodle 2.6 and used fully in Moodle 2.7. The new logs are richer and more focussed on educational activities.
From logs it is possible to extract information about events that have taken place. Here are some relevant aspects of events that are captured.
|Component||The part of Moodle (module, block, core) in which the event took place|
|Action||What took place, based on a pre-defined list of verbs|
|CRUD||Whether the action was to create, read, update or delete|
|Educational level||Whether the action was teaching, participating or other (eg. administering)|
|User IDs||Who was responsible for the action and who they might have been affecting (eg. a teacher grading a student)|
|Course and context||Where it happened|
|Timestamp||When it happened|
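These fields can be pictured as a simple record. The sketch below uses my own field names to mirror the table above; it is not Moodle’s actual event class, which is PHP.

```python
from dataclasses import dataclass

@dataclass
class LoggedEvent:
    """Sketch of a captured event record. Field names mirror the table
    above, not Moodle's real event classes."""
    component: str       # part of Moodle where it happened (module, block, core)
    action: str          # verb from the pre-defined list, e.g. 'graded'
    crud: str            # 'c', 'r', 'u' or 'd'
    edulevel: str        # 'teaching', 'participating' or 'other'
    userid: int          # who performed the action
    relateduserid: int   # who was affected, e.g. the student being graded
    courseid: int        # where it happened
    contextid: int       # the context in which it happened
    timecreated: int     # Unix timestamp of when it happened

event = LoggedEvent(
    component="mod_assign", action="graded", crud="u", edulevel="teaching",
    userid=5, relateduserid=42, courseid=101, contextid=900,
    timecreated=1700000000,
)
print(event.action, event.relateduserid)  # graded 42
```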
Here is a list of verbs (action words) currently used with events. This set may grow.
accepted, added, answered, assessed, attempted, awarded, backedup, called, commented, completed, created, deleted, duplicated, evaluated, failed, graded, imported, loggedin/loggedout, loggedinas, locked, moved, passed, previewed, reassessed, reevaluated, submitted, suspended, switched, viewed, registered, removed, restored, reset, revealed, unlocked, upgraded, updated
One of the problems with logs is that they grow very large. This makes efficient searching and processing of log information almost impossible, particularly on larger sites. With richer event information being captured, there are also events being recorded from more places in Moodle. There is the potential to direct log information to log stores outside of the Moodle database. The intention of this change is to allow searching and processing of logs without impacting the performance of the Moodle server itself. There is also the potential to export log data to files for filtering and analysis outside Moodle. So it is possible to get detailed log information, but this cannot be used in real-time, say for a block or a report that combines logs with other information.
One way to capture event information so that it can be used in real-time is with observers. As each action takes place an event is “triggered” within Moodle and observers can “observe” events based on certain criteria. The new logging system is an event observer that consumes all events that are triggered and stores them (to one or more log storage plugins). It’s possible to create new observers that can focus on a subset of events, store relevant information so that it can later be presented efficiently. If you were interested in, say, creating a report that focussed on enrolment actions, you could allow the report to observe enrolment events, store records in its own table and then present the results to users appropriately, any time it was needed. The report could even make use of messages to send out alerts when necessary.
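The trigger/observe pattern can be sketched in a few lines. This is a generic illustration of the idea, not Moodle’s actual events API; the hypothetical enrolment report keeps only the events it cares about in its own store.

```python
from collections import defaultdict

class Dispatcher:
    """Toy trigger/observe mechanism. Moodle's real events API is PHP;
    this only illustrates the pattern of observers consuming a subset
    of the events that are triggered."""
    def __init__(self):
        self._observers = defaultdict(list)

    def observe(self, component, callback):
        # Register interest in events from one component.
        self._observers[component].append(callback)

    def trigger(self, event):
        # Deliver the event to every observer of its component.
        for callback in self._observers[event["component"]]:
            callback(event)

# A report that focuses on enrolment actions observes only enrolment
# events and keeps its own store, so results can later be presented
# without trawling the full logs.
enrolment_store = []
dispatcher = Dispatcher()
dispatcher.observe("core_enrol", enrolment_store.append)

dispatcher.trigger({"component": "core_enrol", "action": "created", "userid": 7})
dispatcher.trigger({"component": "mod_forum", "action": "viewed", "userid": 7})

print(len(enrolment_store))  # 1 - only the enrolment event was kept
```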
Examples using events and log data
- Monitoring site activity and focal points
- Number of user accesses, which could be used to infer time online
- Repeated exposure to resources and activities within courses
- Students accessing teacher feedback on activities
- Student retention in courses (based on enrolments and unenrolments)
Source 3: Click tracking by external monitors
External click-tracking tools, such as Google Analytics, can reveal information about the users visiting your site, including:
- their environment (browser, OS, device),
- where in the world they are coming from and
- the paths they are following through your site.
This information is useful to administrators wanting to ensure their Moodle site is catering to users’ needs. To discover learning analytics from Google Analytics, it is possible to drill down into usage information. This will not yield the same sort of information as the Moodle database or logs, instead showing patterns of behaviour. This information could potentially be fed back into Moodle, as Google provides an API to query analytics data, which could be presented in a Moodle report or block.
Another relevant click-tracking tool is the Moodle Activity Viewer or MAV. This is a system in two parts: a server-side component that collects course activity usage statistics and a browser plugin that takes the page delivered from Moodle to your browser and overlays the page with colour to turn the course page into a heatmap. This shows teachers where the focus of activity in a course is taking place.
Could this understanding be built-in?
Unfortunately, at this stage, there are no simple generic mechanisms built into Moodle that allow you to freely gather and combine information without writing code. There are some exceptions attempting to allow generic report writing, but I don’t think these are simple enough for ordinary users yet. Currently, if you have specific questions that can’t be answered using standard Moodle reports, the best way to get the answers you want is by writing (or getting a developer to write) a plugin, such as a report or block. Hopefully this guide so far provides an understanding of what data are available and where to find them.
Is there a possibility to create the reports without coding them from scratch?
One potential future step would be to allow plugins (and Moodle itself) to be able to describe the data they store. With this meta information, it could be possible to use a generic tool to gather and combine specified information on-the-fly and tweak the criteria as needed. This would allow access to the rich data in the Moodle database (with appropriate security constraints, of course).
It could also be possible to create a generic observer that can be configured on-the-fly to consume events of interest and record them. The current logging and events system APIs allow such alternative logging. Providing a sense of what events could be observed would be the challenge here, but at least events are now somewhat “self-describing”, meaning meta information is recorded within the coded description of the event objects.
For administrators interested in the sort of user information that Google Analytics reveals, it is possible in Moodle to determine a user’s browser, OS and device type. Moodle already does this to allow page customisation for different combinations of these factors. It would not be a great step to capture and present this information in a report. Google could probably do this better, but perhaps you’re not too keen to allow Google to snoop on your students and their learning activities. Moodle logs could be used to infer the paths and behaviour of students, but this would be a very costly exercise, requiring a great deal of computing power, preferably away from the Moodle server.
What to do with this data?
The final challenge then is to go beyond data gathering and analysis to provide tools that can use this information to support teaching; tools that help students learn, teachers teach and administrators to cover their butts. Only then will we see the LMS take education beyond what could be achieved in the classroom.
The submission deadline for the Moodle Research Conference (MRC2014) is approaching fast. I imagine many people around the world are feverishly preparing their submissions. Unlike most conferences, the MRC draws together people with different experience from many fields who happen to be conducting research in and around Moodle. Being one of the co-chairs for this year’s MRC, I thought I’d put together a guide to help authors.
Links to past research
As a researcher, you are never working alone. Basing your research on work that has come before gives you a solid foundation and increases the credibility of your work. Reviewers are not only judging your paper, they are looking at your knowledge of the field. Citing appropriate past research demonstrates your understanding and places your work within your research area. References should be formatted according to the prescribed standard and should provide enough detail to allow a reviewer to find the cited work. Cited works should be primarily from peer-reviewed sources. Ideally, you should be able to demonstrate a need for your current work based on past research.
Research questions
After setting your paper within past research, you should define the aim of your research, and this is done with research questions. Such questions could be phrased as hypotheses, but this is not essential for an MRC paper. Your research questions can be used to define the structure of the remaining paper, including the conclusions at the end, where the answers to these questions should be presented.
Evidence
Without evidence, a paper is simply opinion. In order to answer your research questions, you need to gather and analyse evidence. The evidence should answer the research questions, proving or disproving something – either outcome is valuable to report. The evidence you present could come from one (or more) of many sources, such as experimental results, user data gathered in Moodle, surveys, case studies, etc. You should be able to show how the evidence you have gathered builds on the past research you have written about earlier in the paper. Even if your paper is focussed on the development of a new tool (such as a Moodle add-on), you should go beyond a simple description, showing evidence that the tool works in practice and can have benefits.
A few more tips
- Writing quality and flow
- MRC papers must be written in English. Poor writing distracts reviewers from the important research work you are reporting. If English is not your first language (or even if it is), get someone else to proofread your paper before you submit it. Also consider the flow of your paper: each paragraph should follow on from the last and each section should lead into the next. You are arguing the value of your work and your argument should seem logical.
- Follow the template and use its styles
- The MRC, like most conferences, provides a template to demonstrate the expected paper format. Rather than copying the styles shown, use the template as the starting point for your submitted paper. Use the styles in the template rather than manipulating text to look like the styles; doing so is easier and is something all word processor users should be able to do. It also ensures all papers in the final proceedings are consistent. If your paper appears different, reviewers will feel obliged to point this out, and that will detract from the review. Look through the Moodle Research Library for examples of accepted papers from past MRC conferences.
- Anonymise your work properly
- The MRC uses double-blind peer review, so authors don’t know who is reviewing their work and reviewers don’t know who has authored the paper they are reviewing. If the reviewer sees you’ve done a poor job anonymising your paper, that may impact their review. See the guide to submitting papers for things to check when anonymising your document.
- Present data visually
- A picture is worth a thousand words. Presenting data as a table or chart makes it easier for readers to understand. Screen captures are a great way to show tools in use. All tables and figures should be labelled and there should be a reference to these items within the text to include them at appropriate points in the flow of the document.
- MRC2014 site
- MRC2014 Call for Papers
- Moodle Research site
- Guide to submitting papers
- Moodle Research Library
I was asked by a teacher of software development if I could give an overview of how we use the Scrum Framework in a real-world, open source project, here at Moodle. Being a former development teacher myself, I could not refuse.
The video below outlines the Agile software development processes followed at Moodle HQ. If you’re a developer or someone training to be a developer, this will be relevant to you.
Forgive my ums and ahs. It’s been a while since I was in teacher-mode.
From the static Web to dynamic mobile browsing
In the beginning, when Learning Management Systems (LMSs) were young battlers, Moodle came about as a combatant that succeeded through its stubborn simplicity. Other LMSs attempted to gain an edge by overloading their interfaces with Java; Moodle, on the other hand, stuck to standard Web interfaces. The result was that Moodle was considered simpler and more user-friendly. If you knew how to use a Web browser, you could use Moodle; you didn’t have to have any additional browser plugins installed. Moodle’s usage grew rapidly, overtaking its competition, because people could understand it.
LMSs are also being used beyond the desktop. Now that we are finally seeing consistency among desktop browsers, developers are faced with a new challenge in the form of mobile devices. The standards set for the Web are still followed (although I think a mobile browser war is just getting started), but the physical interface to the browser is different on mobile devices. No longer can we rely on users with a mouse, keyboard and monitor; the Web has to work with touch interfaces also. We aren’t even afforded the luxury to assume a reasonable minimum screen size.
A new battleground
I have been involved in the bureaucratic effort to select a new LMS for a university. Battle was fought by lining up each LMS candidate side-by-side against a set of features. The LMS with the most checkmarks next to its name was the victor. Moodle won this battle many times because it was well featured. If the feature didn’t exist in the standard distribution, there were add-ons to supplement it, and if that wasn’t enough, you could always customise. The other thing Moodle had going for it was its underdog status, which I’ve talked about before.
About two years ago, at the 2011 Australian Moot, I sensed a new set of biases creeping into the public consciousness. No longer were people asking for more features; instead they wanted style and speed. Does this mean Moodle is feature-complete? Probably not, but at least most people seem satisfied with the current feature set and seem to have shifted priorities. A new battleground has been forming in my mind over the last couple of years.
So what is Moodle doing to arm itself for this new battleground? Here are some newish additions to Moodle’s arsenal.
People spend a lot of time in Moodle using the editor. The WYSIWYG editor has been around from very early in Moodle’s history, but now it is being simplified. We’re still using TinyMCE for now, but keep your eyes open in future for a brand new, home-grown editor alternative that will be slicker still.
Access to the world’s data
Repositories are sources of files. They could be files on your computer, files on the institution’s server, files from the Web or files from “the cloud”. This concept seemed to stump some people at first, but it is now starting to make sense. At the advent of Moodle 2.0, there were a few teething problems with repositories, but this part of Moodle has settled down into something smooth and reliable.
An interface that works on anything
Apparently students and teachers have new-fangled mobile devices now, and they want to access their Moodle sites on these devices. Responsive themes allow a single Web interface to react to different screen sizes. On a large screen, the view is not too different from the standard theme, with a few rounded edges. On a small screen, things are rearranged: menus collapse into icons, blocks shift to below content and pop-ups fill the screen. There are a number of other changes that the use of touch devices has prompted as well. Not only is the interface becoming more usable on different devices, it’s also becoming more accessible to users with disabilities.
Is it working?
Well, none of the things I’ve mentioned above appeared on the feature list a few years back, so are they needed now? There are a large number of registered sites still on 1.9 – why? Is it a case of “If it ain’t broke, don’t fix it”, or is the simplicity of older Moodle versions still more attractive to some users? Change, it seems, happens slowly in the world of education. Change can be dramatic for people.
When Mary Cooch conducted some training for existing Moodle users at Our Lady’s Catholic High School, the new interface was different enough that they did not recognise they were still using Moodle. One participant’s response was that the new system was “Unbelievably simpler than Moodle!” Others had similar comments, and even though it’s a small sample size, I think we can see that as evidence that Moodle is getting simpler.
The battleground of the future
The battle goes on.
The question now is where the battles of the future will be fought. Predicting the future is precarious, and I’m undoubtedly going to be proven wrong, but I have to speak at a conference next week, so I’d better come up with some ideas that sound slightly visionary.
Massive Open Online Courses (MOOCs) are a hot topic at the moment, with large courses being offered online to anyone willing to participate. Many are anticipating that MOOCs will have an impact on the future of higher education. Moodle has recently conducted what could be seen as an experimental step into the MOOC world. Check out learn.moodle.net.
Massive is big, but is there something bigger? Moodle and other LMSs have traditionally focussed on tertiary education and corporate training. There is a smattering of use in primary and secondary education, but it is limited to a relatively small number of classrooms. However, when you compare the student numbers and budgets of these sectors side-by-side, primary and secondary education dwarf the other sectors. So why are LMSs not being used widely in primary and secondary education? I believe the answer is that primary and secondary teachers are not well supported and have less time to attempt such ventures than their colleagues in higher levels of education. Where LMSs could start to become useful is through large-scale integration at state or federal levels. If an LMS is set up where the curriculum is defined, teachers would be freed of the laborious tasks of gathering resources, establishing assessment and conducting grading. Instead they would be free to focus on what they do best: teaching.
End of one-size-fits-all education
At almost any level of education, once the class grows beyond a handful of students, necessity prevents teachers from implementing individual learning plans. The burden of assessing students regularly enough, measuring their performance and adjusting the curriculum to suit them becomes nearly impossible. But that is where LMSs can help. At the moment providing an individual path through a curriculum that automatically adjusts for a student is possible, but it is cumbersome. Hopefully we can improve on that in the future.
How do you encourage developers to be more productive?
A few months ago, I was intrigued by a presentation by Dan Pink, an American public speaker. Here is a version of that presentation (and there are a few similar presentations around, including a TED talk).
In the presentation, Pink claims that extrinsic motivators, specifically financial incentives (bonuses, raises, promotions, stocks,…), can be counter-productive to the goal of encouraging workers in certain circumstances. In the presentation, Pink refers to studies at MIT, so I went searching for publications for these studies and found Ariely (2005) and Awasthi & Pratt (1990).
While people can be motivated by financial incentives, the studies found that financial incentives can reduce performance on tasks involving a cognitive component. Software development certainly involves cognitive tasks; in fact, programming is about as cerebral as you can get.
So if money doesn’t work, what does? Pink’s thesis is that employees will be more productive when they have a sense of:
- autonomy,
- mastery and
- purpose.
Pink refers to cases at Atlassian and Google, where employees are reported (in the media) to receive many perks. I’ve been to Google, and while I did enjoy the free food, the work environment was certainly not anarchic; in fact, it seemed quite ordinary on the inside. What Pink emphasises is that these companies offer a degree of autonomy to their workers, that employees have the potential to develop professional masteries for their current job and for future jobs, and that employees are able to see a sense of purpose in what they do day-to-day.
Developer Incentives at Moodle?
Some aspects suggested by Dan Pink were already in place at Moodle, but some have been added or enhanced in recent months. I will describe how we offer a sense of autonomy, mastery and purpose to members of the STABLE team at Moodle (the devs who work on the existing releases of Moodle).
Apart from being a relatively relaxed working environment, there are some specific differences that may set Moodle apart from other development offices.
- Devs choose, set-up and maintain their own development environments. Code meets at the repository, but how it gets there is up to the developer.
- Using the Scrum framework, devs choose issues they will resolve from a prioritised backlog of issues. This ensures that the highest priority work gets done, but devs have a sense of ownership over, and responsibility for, the issues they choose.
- After every two sprints (sprints are typically three weeks long), devs have a week to work on a project of their own choosing. The projects have to benefit the Moodle community, but this is open to interpretation by the developer. This means that one week out of every seven, the developer is completely autonomous.
Mastery is an area we could be working more on, but there are a few initiatives in place at Moodle.
- Devs can nominate external training courses and are supported to attend.
- Devs nominate areas of interest in Moodle and are allowed to specialise in those areas.
- Devs receive in-house productivity training. There are also irregular presentations on development-related topics tied to the current focus of work (for example, in-code documentation writing, Web services, etc.).
Purpose is something that Moodle has a lot of. Moodle allows many people to access education, some of whom would not be able to do so otherwise.
In saying that, it is easy to lose sight of that purpose when devs are focussed on lines of code while reading the grumbles of users on bug reports.
It is important to regularly remind developers that there is a community out there that really appreciates the work devs are doing. We have, in the past, dragged devs to a Moodle Moot, where there is a lot of back-patting. We are hoping to do that again this year.
If you are a member of the community and wish to express your gratitude, please do so. Send me an email or post a message on the Moodle forums. It will really help.
Do these incentives work?
From my perspective, I would have to say “yes” – encouraging a sense of autonomy, mastery and purpose does help developers, their progress, as well as the general working environment. It’s hard to quantify the effect of making these aspects more obvious to developers, but I have noted some improvements since we have.
- Our turn-over of staff is low. The devs seem content and passionate about their work, particularly when they have a chance to work on what they are interested in. This really helps avoid slacking off when it comes to doing “more of the same”; with sufficient variety, developers are quite happy to switch to unstructured work and then back to structured sprints again.
- General productivity is higher and being maintained. The number of issues passing through our process has increased, and that is a good sign.
- The STABLE team is producing some significant contributions to Moodle, and not always in the same way. We had a very colourful show-and-tell session last Friday with some very excited developers (including devs from outside the STABLE team). Here are some examples of what was put on show…
An optimised view for the Gradebook (Rajesh Taneja)
There are a number of issues relating to the usability of the Moodle Gradebook, which can become unwieldy. With some simple modifications, the Gradebook becomes a much more usable space.
- See MDL-25544 for details.
Previews for Database activity uploads (Adrian Greeve)
Currently, uploading data into a Database Activity provides little feedback or control. Adding in a preview, with field matching, allows easier uploading.
- See MDL-37503 for details.
A Moodle development kit (MDK) (Frédéric Massart)
The MDK automates many regular dev tasks including Git operations, adding information to issues on the Moodle Tracker and automation of site instantiation and population with dummy data.
This project has been quite a collaborative effort and is still growing.
What is technical debt?
Technical debt is accumulated future work that has amassed as the result of decisions made during the development of software (or more generally during the development of information systems). The term was coined in 1992 by Ward Cunningham, who realised that as soon as software is created there is an element of deferred work, which can be equated to “going into debt” (Cunningham, 1992).
Technical debt is often equated to financial debt, such as a loan. The value of a technical debt is measured not in dollars but in the cost of the time needed to rectify problems. As software is created, compromises are made between delivering a flawed but acceptable system now or delaying and delivering a superior system later. These compromises result in a backlog of work that needs to be considered before a future release.
Technical debt comes about when developers create less than ideal code. Fowler (2009) suggests that developers can do this deliberately or inadvertently. A developer can deliberately decide to use a quick-and-dirty solution now with the intention of replacing this solution with a better one later. “I’m not sure if that database query will scale, but I’ll write a TODO to fix that later.” Developers can choose to sacrifice non-essential software elements by failing to write documentation, avoiding creating reusable abstractions or failing to follow coding standards. Alternately a developer can inadvertently introduce problems into the code. This can happen through forgetfulness as deadline pressure builds or simply when the skill required to solve a particular problem exceeds a developer’s technical experience. “I’m not exactly sure how this code works, but I’m going to reuse it now as it seems to solve the problem.” This approach is sometimes referred to as Cargo Cult programming (McConnell, 2004).
Fowler also suggests that technical debt can be introduced through behaviour that is either reckless (“I don’t know how significant this is but I don’t have time for it now.”) or prudent (“The cost of introducing this now will be greater than the cost we will incur by delaying our release.”).
By crossing these deliberate-inadvertent and reckless-prudent dimensions as axes, four quadrants appear; these quadrants can be used to categorise the sources of technical debt.
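As a rough illustration, the two axes can be crossed in code to classify a piece of debt. The example attitudes paraphrase the quotes above and Fowler’s own quadrant examples; the mapping is a sketch, not a formal taxonomy.

```python
# Crossing the deliberate-inadvertent and reckless-prudent axes gives
# Fowler's four quadrants. The attitudes below are illustrative
# paraphrases of the quotes in the text and Fowler's examples.
QUADRANTS = {
    ("deliberate", "reckless"): "We don't have time for design.",
    ("deliberate", "prudent"): "We must ship now and deal with the consequences.",
    ("inadvertent", "reckless"): "I'll reuse this code; it seems to solve the problem.",
    ("inadvertent", "prudent"): "Now we know how we should have done it.",
}

def classify(deliberate: bool, prudent: bool) -> str:
    """Map a debt-incurring decision onto its quadrant's typical attitude."""
    key = ("deliberate" if deliberate else "inadvertent",
           "prudent" if prudent else "reckless")
    return QUADRANTS[key]

print(classify(deliberate=True, prudent=True))
# We must ship now and deal with the consequences.
```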
As no system is perfect, technical debt is something that cannot be avoided. It is something that needs to be managed rather than ignored.
Is technical debt bad?
Technical debt, like financial debt, is not all bad. Any debt is doomed if there are no means to repay it. “Few of us can afford to pay cash for a house and going into debt to buy one is not financially irresponsible, provided that we know how to pay it back” (Allman, 2012). Projects need to consider the level of technical debt they are capable of supporting and be aware of their technical debt at all times, ensuring that it does not exceed this level.
If a software project accumulates more technical debt than can be “repaid”, the quality of the software suffers. If poor quality reaches a level that is obvious to users, this can affect their decision to use that software in the future.
What is “open technical debt”?
Open technical debt is the technical debt accumulated by an open source project. To understand it you have to know that open source projects differ from commercial developments in terms of code ownership, project management and in the philosophy that motivates the project.
Open source software is freely given to a community of users, and that community is invited to provide feedback to guide the project’s future. In a system created by a commercial vendor, code ownership is simple; in an open source project, the community owns the software and benefits from the effort it invests.
Open source projects vary in scale from small projects, involving a small number of loosely organised volunteer developers, through to large-scale projects, that are bigger than many commercial software undertakings. The project I am involved in is Moodle, which involves hundreds of developers and has many thousands of registered sites with over 60 million users worldwide. The project employs 25 full-time employees and works with a large network of commercial Partner organisations who deliver services to the community and help support the project financially. Managing such a project is often difficult as there is no single product owner who you call on to make decisions and set priorities.
When technical debt accumulates in an open source project and impacts on the quality of the software, it is obvious to the community. But this is balanced by the community’s sense of responsibility to fix these problems, improve the quality of the software and pay off that technical debt.
In open source projects, strengths can also be weaknesses. The potential of a large community and large number of developers can lead to powerful software, but left unchecked it can also lead to technical debt. If that debt is not recorded and “paid off” it could lead to the downfall of the project.
Where does open technical debt come from?
It is important to be aware of where technical debt is coming from in a project. Using Fowler’s technical debt quadrants it is possible to categorise the sources of problems in open source code.
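Fowler’s quadrants cross two axes: whether the debt was taken on deliberately or inadvertently, and whether the decision was prudent or reckless. As a rough sketch (the class and field names here are my own illustration, not from any particular tool), the classification could be modelled like this:

```python
from dataclasses import dataclass
from enum import Enum

class Care(Enum):
    PRUDENT = "prudent"
    RECKLESS = "reckless"

class Intent(Enum):
    DELIBERATE = "deliberate"
    INADVERTENT = "inadvertent"

@dataclass
class DebtItem:
    """One recorded piece of technical debt, e.g. a tracker issue."""
    description: str
    care: Care
    intent: Intent

    def quadrant(self) -> str:
        # Combine the two axes into one of Fowler's four quadrants
        return f"{self.care.value}-{self.intent.value}"

# An improvement knowingly deferred at release time
item = DebtItem("defer forum UI rework until next release",
                Care.PRUDENT, Intent.DELIBERATE)
```

Tagging each recorded issue this way makes it possible to see, at a glance, which quadrant a project’s debt is accumulating in.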
You might think that most technical debt in an open source project is the result of reckless developers contributing code with inadvertent consequences for the project as a whole. In fact this is quite the opposite of the behaviour that an open source project elicits from developers. When someone contributes code, that code becomes part of the open codebase and is open to scrutiny by all. Metaphorically, their dirty washing is being aired for the entire world to see. If a problem is later found, it is easy to track it back to a change made by a specific developer. This tends to lead to well-conceived code, with a sense that reputations are on the line.
As the person responsible for triaging issues as they are reported for the Moodle project I know every freckle and bump in its complexion. On a regular basis I track the sorts of issues that are left in our bug tracker and a large chunk of these are unfulfilled improvement requests. When releases are finalised, decisions are made to “draw a line”, even though improvements could be made. So the technical debt of the Moodle project, as an example of an open source project, is predominantly in the prudent-deliberate quadrant with lots of ideas for making the software better being known but not acted upon.
Does this differ from a commercial project? Well, I can’t say for sure, but I suspect it does. Closed source software lacks the pressure that openness creates. And when priority setting falls to a single decision maker with commercial deadlines to meet, I think technical debt would shift more towards the reckless half of the quadrant. But then, I’m biased.
Avoiding and embracing open technical debt
While accepting that some technical debt is unavoidable, there are ways that it can be minimised.
Openness of flaws
Having an open bug tracking system allows anyone to see what bugs have been reported and what improvements have been suggested. This means that the extent of the technical debt of a project is on display to all. Being open in this way creates incentives for developers to avoid creating technical debt in the first place, and to reduce technical debt in the long-term. It also shows the community that work is being done in a way that follows defined priorities.
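One way to keep that debt visible is to measure it from the tracker itself. As a minimal sketch, assuming a JIRA-based tracker like Moodle’s (the project key and JQL query below are my own assumptions, not an official report):

```python
import json
from urllib.parse import urlencode

def improvement_count_url(base_url: str, project: str = "MDL") -> str:
    # JQL for unresolved improvement requests; maxResults=0 asks JIRA
    # for just the match count, not the issues themselves
    jql = (f"project = {project} AND issuetype = Improvement "
           "AND resolution = Unresolved")
    return f"{base_url}/rest/api/2/search?" + urlencode(
        {"jql": jql, "maxResults": 0})

def parse_total(response_body: str) -> int:
    # JIRA search responses report the number of matches in "total"
    return json.loads(response_body)["total"]

url = improvement_count_url("https://tracker.moodle.org")
# Fetching `url` (e.g. with urllib.request.urlopen) would return JSON
# whose "total" field is the current count of open improvement requests.
```

Charting that count over time gives a crude but honest picture of whether the project’s visible debt is growing or being paid down.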
Agile development
Following agile software development practices allows developers of an open source project to work together to fulfil the priorities of the project. As priorities shift (and they do when you are responding to a community), being agile means developers can respond quickly. In fact, I can’t conceive of an open source project being managed any other way.
Code review
Contributed code in an open source project is not automatically accepted. Before it is integrated into the codebase it usually has to satisfy experienced developers involved in the project. This is certainly the case at Moodle, where all code goes through at least three levels of review before it is integrated into rolling releases, and even more before major releases. When this is done politely, it not only ensures software quality but also reassures contributing developers and instils a sense of confidence.
Modularity
Once any software project grows beyond a trivial size, it needs to be modularised. In open source this is especially beneficial for two reasons.
- Modularity provides focus points for developers who want to contribute to a project without needing to understand the entire codebase.
- Modularity allows a project to designate code as official and unofficial. Official code is what is distributed to users as the core project code. Unofficial code can be plugins that individuals have written. Technical debt can then be measured against the official core code while keeping the potential “high-risk” debt of unofficial code “at arm’s length”. That’s not to say that developers sharing plugins should not be supported and recognised.
Willingness to deprecate
As a project develops, changes will occur over time. Often modules become neglected, particularly if no one from the developer community has an interest in maintaining that module. When this happens, the community has to recognise the state of the module and deprecate it. Deprecation is like writing off technical debt; while it comes with a loss of functionality it also notionally frees up resources to focus on other parts of the project.
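Deprecation can also be signalled in code well before a module is removed, giving users a release cycle to migrate. A minimal sketch in Python (the function names are hypothetical):

```python
import warnings

def legacy_report(data):
    """Deprecated entry point, kept for one more release cycle."""
    # Warn callers now so the module can be removed, and its
    # maintenance debt written off, in a later release
    warnings.warn(
        "legacy_report() is deprecated; use summary_report() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return summary_report(data)

def summary_report(data):
    # The supported replacement
    return sorted(data)
```

Callers keep working during the transition, but the warning makes the write-off visible rather than silent.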
References
Allman, E. (2012). Managing technical debt. Communications of the ACM, 55(5), 50–55.
Cunningham, W. (1992). The WyCash portfolio management system. In OOPSLA 1992. http://c2.com/doc/oopsla92.html
Fowler, M. (2009). Technical debt quadrant. Retrieved from http://martinfowler.com/bliki/TechnicalDebtQuadrant.html