How to better allocate research money and fix a flawed system

  • Written by Julia Lane, Institute fellow at American Institutes for Research

Taxpayers want to know that their money is well spent on research. Yet funding agencies persist in trying to explain research results in terms of papers and publications rather than in terms of people – even though it is through people that research ideas reach both science and the economy.

CSIRO has already had more than A$110 million cut in the latest budget, significantly curtailing its research capacity.

With almost 500 jobs lost in the last financial year and another 700 expected to go in 2014/15, it is clear that the pressure is on to allocate research dollars more efficiently than ever.

In doing so, we must also change our decision-making approach lest the process of research be distorted. We must move from document-counting exercises and 20th-century-style manual reporting requirements to a more sensible framework that measures activities that matter, exploits 21st-century technologies and is based on analysis by an expert community.

Flawed process

Answers to the question of what return taxpayers get on research investment have been patchy.

Serious academic work attributes much of the productivity growth of the 1990s to investments in information technology, which were driven at least partly by investments in basic research.

Using a broad brush, the Association of American Universities draws a dotted line between research grants and the invention of the internet and the world wide web, magnetic resonance imaging (MRI), MP3 players and the Global Positioning System (GPS).

The approach taken by Australia’s Excellence in Research for Australia (ERA) is to rely on cataloguing the production of scientific documents.

I would argue this approach, which is the one used by many funding agencies, is the wrong framework to use. It doesn’t reflect how research is done and it’s not understandable to the public.

Research is not a slot machine wherein funding generates results in nice, tidy slices at three- to five-year intervals.

Instead, research ideas are transmitted through human networks – the black box between research funding and results – in an often non-linear fashion, over long periods.

So the right framework begins with identifying the right unit of analysis – people – and examining how research funding builds public and private networks.

Thorough analysis increasingly points to the importance of intangible flows of knowledge, such as contacts at conferences, business networking and student flows from the bench to the workplace. As the American physicist J. Robert Oppenheimer pointed out: “The best way to send information is to wrap it up in a person.”

STAR METRICS

Some universities and agencies – recognising the need for accountability – are starting to build a people-centred data system, largely inspired by the STAR METRICS project.

The data show, for the first time, the building blocks of research at the project level: the people who do the work and the firms that supply the scientific equipment.

The approach avoids manual, burdensome reporting and uses existing data drawn from the human resource records and financial reports of universities. It provides detailed insights into the production of science, and the results are understandable to the public.
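To make the idea concrete, here is a minimal sketch in Python of linking records a university already holds. The pandas library, the column names and the award identifiers are all invented for illustration; this is not the STAR METRICS schema itself.

```python
import pandas as pd

# Hypothetical extracts from a university's existing systems:
# payroll records listing who was paid from which award, and
# procurement records listing purchases charged to each award.
payroll = pd.DataFrame({
    "award_id": ["G-101", "G-101", "G-202"],
    "person": ["postdoc_a", "grad_student_b", "postdoc_c"],
    "role": ["postdoc", "graduate student", "postdoc"],
})
purchases = pd.DataFrame({
    "award_id": ["G-101", "G-202"],
    "vendor": ["Lab Supplies Pty Ltd", "Imaging Systems Inc"],
    "amount": [12500.00, 48000.00],
})

# Joining on the award identifier yields a project-level view:
# the people doing the work alongside the firms supplying equipment.
project_view = payroll.merge(purchases, on="award_id")
print(project_view)
```

The design point is that no researcher fills out a form: the join simply reuses records the institution already keeps for payroll and purchasing.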

It’s people that matter

Most importantly, given that so much knowledge is transferred through and by people, the training of students and postdoctoral fellows ranks among the most important research products.

Information on these pathways can now be routinely captured and used to describe the initial effects of science funding. Work is also beginning, in tandem with statistical agencies, to capture these placements.

Funding agencies are also starting to use the technology available to them – rather than relying on people filling out forms – to determine what science is being done, and with what results. For every project that is funded, there is some written description.

Science agencies are applying machine learning to the words scientists themselves use to describe their science in order to identify research topics, just as Google uses the words in text documents to identify topics of interest for web search.
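To illustrate the general technique – not any agency’s actual system – the sketch below fits a simple topic model to a few invented project abstracts using scikit-learn’s latent Dirichlet allocation; the abstracts and the number of topics are assumptions made for demonstration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented project abstracts; a real system would ingest the written
# descriptions of every funded project held by the agency.
abstracts = [
    "magnetic resonance imaging of soft tissue contrast",
    "imaging biomarkers for early tumour detection",
    "deep learning models of language acquisition",
    "neural network methods for protein structure prediction",
]

# Turn the scientists' own words into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

# Fit a topic model that groups terms which tend to co-occur.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the top words in each inferred topic.
terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {topic_id}: {', '.join(top)}")
```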

Research products are increasingly both digital and accessible via the web, making it possible to obtain much relevant information through web scraping and automated inference. Such approaches are also being used to trace the transmission of scientific ideas.
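A minimal scraping sketch along these lines follows, assuming a hypothetical outputs page at example.org and that titles sit in h2 elements – both pure assumptions; a real pipeline would need robust parsing, rate limiting and permission checks.

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical URL for a page listing research outputs.
URL = "https://example.org/lab/publications"

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Assume titles are marked up as <h2> headings; this varies by site.
titles = [h2.get_text(strip=True) for h2 in soup.find_all("h2")]
print(titles)
```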

Much more can be done with such rich evidence if an expert community is fostered whose research can answer taxpayers’ questions about research, just as the Household, Income and Labour Dynamics in Australia (HILDA) survey built a community to answer questions about health and income.

No such community exists for research funding. Research agencies are mandated to identify and fund the best science. Universities have research and education missions, and minimal evaluation capacity. Neither has the capacity or credibility to describe results.

If we are to satisfactorily explain the results of Australian research, we should establish an independent Australian institution with the mission of conducting and catalysing path-breaking research.

That community should use both 21st-century computational and social science to document the relationships between investment in research and development, the people who do the research, and economic growth, and to describe those relationships to all stakeholders, including the taxpayer.

Only then can we hope to restore common sense to research funding, and move from document-counting into the 21st century.

 



Source

The Conversation