COMMENT
A High Court decision has prevented the release of a comparison between the research quality of New Zealand and British universities until the validity of the comparative exercise has been determined at a full trial. This interim injunction was a commonsense outcome that served the best interests of the tertiary education sector and the wider economy.
The comparison was attempted by the Government's Tertiary Education Commission as part of the performance-based research fund exercise. It was a flawed comparison and it risked damaging the international reputation of a sector that earned $2.2 billion last year. If comparisons are to be made with other countries on the basis of research quality, they need to be made with the utmost care.
Auckland University, which with Victoria University took the legal action, is not afraid of being judged internationally. It asks only that the comparison be fair and valid.
The High Court found the Tertiary Education Commission had breached natural justice and legitimate expectations by not consulting the sector about its intention to attempt the comparison. It has instructed the commission to consult the tertiary sector to see if it can come up with an international comparison that meets a much more robust and accurate standard.
Just how difficult is it to make these comparisons? Readers might conclude from reports in the Herald that the results of the performance-based research fund exercise are easy to compare with those of the research assessment exercise in Britain. They are not. There are major differences that make it impossible to produce a reliable comparison.
It is true the two systems have many similarities. These include an emphasis on the quality of research being carried out (rather than simply the quantity of research being produced), the use of expert panels of peers to review the quality of the research, and the public reporting of this quality by institution and by department.
On the other hand, there are at least seven key aspects in which the two systems are significantly different and which make reliable comparison impossible.
First, under the local exercise the research of each individual academic is graded, while under the British model only the best research is submitted. (Think about trying to compare performance statistics of individual cricketers with the overall performance of a team.)
Secondly, under the performance-based research fund the research of all eligible academics on teaching or research contracts must be assessed, while in Britain departments submit the work of only selected eligible researchers.
Thirdly, the eligibility rules are different.
These three factors alone produce an apples v oranges scenario. For example, in Britain many individuals who are involved only in teaching or who are recent graduates can be excluded from assessment, without penalty. Thus, an academic department can be awarded the maximum score even when it has a moderate number of staff who are relatively young or inactive in research.
By contrast, under the New Zealand exercise a department can gain the maximum possible quality score only if all of its academic staff are assessed as being world-class researchers.
Fourthly, the basis of assessment is different: under the local exercise academics are individually graded on their total research output, certain nominated works, evidence of their "peer esteem" and other evidence of their contributions to the research environment in New Zealand.
In Britain only the nominated works (journal articles and so on) are considered for each nominated individual, then an overall assessment is made of the research environment within the department.
Fifthly, the scoring systems are different.
These last two factors accentuate the apples v oranges scenario. For example, many individuals in the British system whose research output counts towards the assessment of "attainable levels of international excellence" for their department would not score an A grade under the New Zealand exercise because of weaker evidence of their peer esteem and lesser contributions to the research environment.
This applies particularly to staff at the early stages of their careers who have not had sufficient time to accumulate evidence of peer esteem or make significant contributions to the research environment, and who could score as low as a C grade under the performance-based research fund.
Sixthly, the New Zealand exercise used only 12 assessment panels, with subject areas grouped together, while the British model uses well over 60 assessment panels, essentially one for each subject area.
Finally, in the British exercise the descriptions of different standards of research performance vary across panels, making meaningful comparisons difficult between widely different subject areas, while under the performance-based research fund a fixed set of standards was used for all panels.
There are very good reasons for these differences. The designers of the New Zealand model recognised that many of the features of the British exercise were not appropriate for the needs of our tertiary system and chose not to adopt them.
The resulting model was considered to be more transparent, but very much stricter, than the research assessment exercise.
Accordingly, the results from the British exercise cannot be reliably used for the purposes of comparison with the performance-based research fund. While there are some parallels between the two systems, the differences between them make it impossible to make reliable international comparisons of the sort that the Tertiary Education Commission has attempted.
Furthermore, publication of flawed comparisons carrying the approval of the Government could seriously harm the reputation of the degrees awarded by our universities, the employability of our graduates overseas, our ability to attract high-quality staff and students and take part in international exchanges, and the ability to promote New Zealand as a high-quality destination for international students.
Of equal concern is that the commission did not consult any of our 22 degree-granting institutions about its late decision to undertake and publish international comparisons in this form.
As the High Court's decision in favour of the universities rightly shows, this is much less than we should expect from a Government agency when a key element of this country's international reputation is at stake.
* Marston Conder is a professor at Auckland University. He chaired the Government-appointed working group that developed the model for the performance-based research fund in 2002.