Earlier, I wrote about why I thought it wrong to force academic researchers in Computer Science to promise technology transfer and collaboration with industry. The crux of my argument was that industry and academia have different goals. I want to elaborate on that some more, especially from an academic perspective.
I think academic researchers are too hard on themselves when they judge the merit of their contributions by adoption among industrial practitioners, or whoever the perceived end-users are thought to be, or by the amount of money they have saved someone. Those are arguably the standards an industrial software firm would apply to judge the success of its product. Should academic researchers apply the same standards?
I think not.
As Kuhn famously pointed out, the researcher works in a community. The community judges the merit of a researcher's claims based on the paradigm the community follows. The paradigm is everything. The open problems come from the paradigm. The solutions ought to make sense in the paradigm. A solution that makes sense may open up new problems, which then become part of the paradigm. That is what Kuhn calls normal science (as opposed to a paradigm shift, which is not the point of this post). A contribution, then, amounts to solving a problem within the paradigm.
Clearly, the problems of a paradigm cannot be generic aspirations such as "to make a contribution to social welfare" or "to have an impact on industry practice" or "to foster end-user adoption". Those have nothing to do with any particular paradigm. The problems of a paradigm are always technical in nature. Why, then, are researchers tempted to judge their contributions by standards that do not apply to them?
You would be surprised how many times a presenter is asked by another researcher in the audience, "...but how much money did you save anyone by applying your techniques?" (I was asked this once, many years ago, and even though I was relatively green, the question felt wrong to me even then), and how many times a researcher touts how much money he has saved the world. Isn't this anti-intellectualism?
I suspect one reason is the excessive focus on churning out publications. Many researchers have become adept at churning them out. In many areas of Computer Science, I think this has the adverse effect of blurring the paradigm. Few people really know what others are doing, except superficially. (I am pretty sure there are some who don't even know what is in the paper that bears their name.) In this sense, some researchers have unknowingly stopped judging contributions by the criteria of the relevant paradigm. And when that happens, they start resorting to other, non-paradigm criteria: someone else's criteria.
I do not begrudge a researcher his success in commercializing his research or taking it to the masses. In fact, it is to be applauded, for it points to an additional skill-set that he has. But let us not make that the criterion for judging someone's research contributions.
I also have nothing against those who collaborate actively with industrial partners. To each his own. But that is no reason to apply non-paradigm criteria to those who don't.
At the beginning of his book "The Tacit Dimension", Polanyi mentions how struck he was that communism required scientists to turn to solving the problems of the current five-year plan. It makes me wonder whether he would have found questionable the constant harping about researchers delivering what industry wants.
Thursday, February 23, 2012
Wednesday, July 22, 2009
A Memo on Academic Research
These are tough times for academic research in Computer Science. Multi-institution research, technology transfer, and exploitation in "real-world" scenarios -- the term has come to mean nothing to me, and its usage even makes me cringe -- are increasingly prerequisites to funding. It is simply not enough to publish in peer-reviewed venues. An intellectual proposition is considered to be of little merit unless accompanied by deliverables -- I don't know the history of this term, but I suspect it originated in industry -- in the form of software, hardware, documentation, and demos.
What are my primary responsibilities to society as a researcher in Computer Science? To apply my mind to the challenges people face and to strive for semantic solutions, not ad hoc ones; to disseminate the results of my efforts; and to mentor students.
These responsibilities conflict with much of what currently passes in academia. For example, applying my intellect pretty much rules out multi-institution research, because of the bureaucratic commitments such an activity entails and the reduction of any intellectual proposition to the lowest common denominator. Coordinating two institutions fruitfully is hard enough; coordinating a consortium of 15, each of which brings multiple research groups, is an exercise in futility. Coordination should not be confused with dissemination. Disseminating is like spreading spores -- the spores might take root in fertile ground and germinate into something wonderful. By contrast, coordination is like a marriage of persons who speak completely different languages. Most cultures hold the marriage of minds as an ideal; why, then, have we in academia created conditions that encourage institutions to enter into a marriage of convenience?
Some of the things I mentioned above, such as cooperation and technology transfer, are not bad in and of themselves. What is bad is making them integral to research proposals. Research is risky: one might not achieve the results one had hoped for, and it typically takes many years for a line of research to mature. Such being the case, how can every researcher promise a transfer of technology to industry? I would view with suspicion anyone who promises that: either they are being dishonest, or there is little research in their proposal. It would be a thoroughly good idea to have separate funding for transfer of technology; that is precisely the role of incubators. (Some universities indeed have incubators set aside for this purpose.)
Clearly, we are accountable to those who fund our research, whether it be the taxpayer or a private entity. But how may we be judged? By peer review -- by more or less the same criteria that are used to judge a PhD dissertation. The number of quality publications, citations, technology transfers, PhD students graduated, software produced, and demos could all be factors in the evaluation. However, the biggest part of the judging will continue to be "did the research lead to significant new results and insights?" The judges could include some of the people who approved the proposal, with the others replaced by independent experts in the field -- rather like being called to do jury duty.
My intention is not to paint the software industry in a malign light. However, we must accept that industry and academia simply have different motivations, different objectives and timescales for achieving those, and different standards of judging achievement. It is best to leave them both to their own devices.