Guest Post: ResearchGate's RG Score and what it could mean to science
by Dr. Ijad Madisch
[Editor's note: The following is a guest post from Dr. Ijad Madisch, co-founder of the social research-oriented site ResearchGate. Earlier this year, ResearchGate rolled out a reputation metric called the "RG Score," a number meant to help users better evaluate resources. Education Dive invited Madisch to explain the history and thought process that went into devising that score, as well as how he sees academics and scientists making use of the new number.]
Reputation is a social game, a valuable asset that is passed on from the person who holds it to someone else. It depends on sources: the more diverse and credible they are, the higher the chances that others form an idea of who you are and believe it, too.
This is no different in the world of science, although researchers have quantified this emotional affair. An intricate system of records helps identify established experts. It’s all based on articles published in academic journals and how those publications have been perceived by peers. The traditional journal system is centuries old. The Journal Impact Factor (JIF) – one of the most widely used metrics for measuring scientific reputation – has been around since 1961. This method and others have been quite helpful in the past for figuring out who the real aces are, given that scientific fields are often fragmented into countless subdivisions, and not everyone knows somebody in a particular niche.
That said, the times are a-changin’. The web offers scientists innumerable opportunities to forgo academic journals and present their findings online. When Sir Tim Berners-Lee brought it to life in 1989, he was, as he himself describes it, “thinking about all the documentation systems out there as being possibly part of a larger imaginary documentation system.” Something is holding scientists back from becoming part of this system, though. A major reason is the way scientific reputation is measured today. The old methods just aren’t fit for the web. And apart from being relics of the last century, there are further reasons to part ways with old metrics and establish a new scientific track record.
For years scientists have complained about how their reputation is measured. Some methods simply don’t reflect or do justice to individual achievements. A good example is the JIF: the number of citations a journal receives in a given year to articles it published in the previous two years, divided by the number of articles published in those two years. The JIF is issued yearly by Thomson Reuters and is meant to serve as an indicator of a journal’s prestige. Instead it’s often used as a proxy for scientists’ reputations. Even Thomson Reuters cautions against using the method as a point of reference for individual accomplishments. The company writes on its website: “Again, the impact factor should be used with informed peer review. Citation frequencies for individual articles are quite varied.” Nevertheless, the JIF is widely regarded as an indicator of scientists’ credibility. For instance, most German medical faculties require a certain number of JIF-indexed papers from postdoc applicants.
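The JIF formula described above boils down to a simple ratio. Here is a minimal sketch with entirely made-up figures (real JIFs are computed by Thomson Reuters from its citation database):

```python
def journal_impact_factor(citations_this_year, articles_prev_two_years):
    """JIF for year Y = citations received in Y to articles published in
    years Y-1 and Y-2, divided by the number of articles published in
    those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical example: a journal published 200 articles in 2010-2011,
# and those articles were cited 500 times during 2012.
jif_2012 = journal_impact_factor(500, 200)
print(jif_2012)  # 2.5
```

Note that every article in the journal inherits the same number, which is exactly why the metric says little about any individual paper.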
In 1997, Per O. Seglen, a Norwegian professor of cancer proteomics, argued against the JIF in an editorial published in the British Medical Journal. The metric conceals the real distribution of citations among a journal’s articles, Seglen argued: the most cited half of a journal’s articles are cited roughly ten times as often as the least cited half. Another of Seglen’s criticisms is that the JIF doesn’t indicate the quality of research. The metric also depends on the scientific field in which a journal publishes, Seglen noted. Journals covering areas of basic research tend to have higher JIFs, because their articles have a higher turnover rate and are shorter lived, yet use many references per article. Other scientists have since echoed his complaints and have developed alternative metrics to address these problems. For instance, the H-Index ties reputation more closely to the individual researcher and is based on the citation counts of single papers. One problem remains, though: these methods only take into account what’s printed in journals, and it can take years until an article is cited and becomes a source of reputation.
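The H-Index mentioned above is easy to compute from a researcher’s per-paper citation counts. A minimal sketch, using invented numbers purely for illustration:

```python
def h_index(citation_counts):
    """A researcher has index h if h of their papers have been cited
    at least h times each (Hirsch's definition)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have
# at least four citations, but not five with at least five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Unlike the JIF, this number belongs to the researcher rather than the journal, but it still depends entirely on formal citations, so it inherits the time lag the paragraph above describes.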
What we need now are metrics that count everything a scientist does, no matter whether it’s published on paper or online. In other areas, online reputation systems already work as an effective quality-control mechanism. Just consider tremendously popular companies like the marketplace eBay, the odd-job bazaar TaskRabbit or the private home rental engine Airbnb. All of these services rely on trust between people who don’t know each other. Customers can tell from reviews or point systems whether the person they’re about to do business with has been reliable in the past. This track record allows informed decision making and helps service providers build their reputation. Those who’ve made it to the top can expect more jobs, rentals or higher sales. It’s a transparent system based on the opinions of many. Granted, it’s not without flaws either; reviews can be forged, and point systems can be gamed. It’s still far more reliable than trusting to luck and the Yellow Pages, though.
These remarkable examples show the way that science could go. If there were a recognized method of measuring scientific reputation online, researchers could finally embrace all the opportunities the web offers them. They’d have the freedom to publish what they want, and get credit for every part of their research, too. Just imagine: for a researcher who’s tried to fit a never-ending stream of data onto a few sheets of paper, the vastness of the web must seem like bliss. Finally there’s room to present anything and everything, ranging from raw data and negative results to full-fledged reports. Ideally, publishing science online isn’t only a researcher’s paradise, but also serves the general public. The whole research process could be made transparent, preventing misconduct and sugarcoating. If negative research results were made accessible, scientists could avoid repeating their peers’ mistakes and find answers to pressing questions much faster.
A whole array of new, alternative methods of measuring scientific reputation has been launched in the past few months, ResearchGate’s RG Score being one of them. Now researchers finally have the chance to evaluate their peers’ work, regardless of whether it’s a big discovery, raw data or negative results, and no matter in which medium it first appeared. Reputation is handed on from scientist to scientist, without delay. This is a step in the right direction. Now it’s up to researchers to lead the way, make a name for themselves and embrace the opportunities the web offers them to drive scientific progress in a modern and dynamic way.
About the author:
Dr. Ijad Madisch studied medicine and computer science in Hannover, Germany, and at Harvard University. He received a summa cum laude for his doctoral thesis on virology and was awarded the 2008 doctoral research award from Hannover Medical School, Germany. In 2005 he received the RSNA Young Investigator Prize for his work on ultra-high-resolution CT imaging of tissue-engineered bone growth. Together with two friends, Madisch launched ResearchGate, the professional network for scientists, in May 2008. After several years in Boston, where he worked as a radiology researcher at Massachusetts General Hospital, Ijad decided to devote himself to the network full-time and moved to Berlin. Four years later, more than 2.3 million scientists from around the world collaborate, publish and build a reputation for themselves on the platform.