Can you name one great innovation funded by OpenAI? How about one funded by Google? There are plenty. But where are all the innovations from the National Science Foundation (NSF)?
It has the power to fund $100 million or more in research infrastructure costs. Why, then, are general-purpose, widely impactful breakthroughs like OpenAI's ChatGPT not emerging from its funding? Why does the NSF website not at least list some major, widely used products it has funded in the last decade?
Apparently, the NSF is not getting its priorities right. For instance, a project titled "The Development of Computational Thinking among Middle School Students Creating Computer Games" was awarded more than a million dollars ($1,092,908) in 2009, and the same Principal Investigator was awarded $701,767 in 2014 for a study titled "Can Pair Programming Reduce the Gender Gap in Computing? A Study of Middle School Students Learning to Program."
I do not think these projects made any measurable difference for middle school students, even nine years on.
In rapidly progressing areas like generative AI, that kind of timeframe can render the proposed ideas obsolete and give rise to an entirely new body of scientific knowledge. Some of the comments in the reviews I received reflect the ignorance of the reviewers. Apparently, there are no binding criteria for selecting reviewers, nor is there a mandate for recusal when a reviewer lacks the knowledge to judge the proposed ideas.
Although they perform the most important function of the NSF, namely adjudicating grant applications, the reviewers are neither held accountable nor paid enough for accountability to be demanded of them. It is widely rumored that your project will be funded by the NSF only if you are in the circles of the adjudicating reviewers and the program directors.
Apparent conflicts of interest are taken seriously, but latent biases, such as those of ethnicity, affiliation, or even area of research, are not. Equity, diversity, and inclusion are given priority, but are they being taken so far that merit is sacrificed at the altar of social justice? Science is advancing rapidly. How pragmatic, or even ethical, is it to take months to reject a researcher's ideas, often with flimsy review comments?
This question is not just for the NSF but for all publishers and funders who sit on applications and research papers for months. Can lawmakers step in to make a difference?
There may not be many researchers who become lawmakers, or many lawmakers with researchers in their circles, but researchers are an important community that lawmakers represent. In the long run, the economy is driven primarily by innovation, particularly in science and related fields.
There must be a broad referendum on the practicalities of current research adjudication processes, and legislative action must follow from it. To start, the law could make it mandatory for funding agencies (and, for that matter, even journals) to announce their review turnaround times.
Reviewing plays a critical role. It is well known that Google's PageRank was rejected from the SIGIR conference in 1998 and that Einstein's theory of relativity did not win the Nobel Prize in 1921. Yet the scientific community has not learned from such incidents.
Reviewing must be made a top priority, paid, and held accountable. Manual peer review must be supported and complemented by AI-based tooling. Compared with bias in AI models, human bias is complex: factors such as ego, patriotism, and nepotism compound the problem.
AI-based reviews offer a much faster turnaround, and the technology is mature enough to be worth trying. Significant projects with large funding must receive multiple rounds of oversight.