How can the problems of scientific publishing be solved?

In the first part of this article, we looked at the problems of the scientific publishing system that make research and the dissemination of data difficult. I am very glad that the topic attracted attention and drew many thoughtful ideas and comments.



This time we will discuss what solutions have already been invented, implemented and developed. Finally, I will describe my vision of an optimal system for the exchange of scientific information.







Scientific information retrieval systems 



The fact that you can now find almost any scientific article online no longer surprises anyone. However, just a couple of decades ago, things were far from simple.






Now you can find any article by author, title, publication year or keywords from the abstract. All you have to do is open Scopus, PubMed, Google Scholar or another system.



However, there is undoubtedly room for improvement. Most systems search only the abstract, although searching the full text would give access to much more information. Here we run into the problem that the full text is often not publicly available (because of a paywall). Search by the methods used in a study, or by individual experiments, would also be useful.



Indexing by search engines is an important factor in the creation of new journals and preprint archives. Often a single search service dominates a particular field (in biology and medicine, for example, it is PubMed). Articles published on resources that such engines do not index are practically invisible to the scientific community.



Open access 



One of the most important problems of modern scientific journals is the restriction of access to articles by subscription (the paywall). Anyone who has ever worked with articles has come across it.






The problem of access to articles is generally recognized, and the scientific community is making great efforts to move all articles to open access.



The solution has been known for a long time: when submitting an article for publication, the authors pay a one-time fee, and the article is then freely distributed; anyone can download it. Publication costs are a topic for a separate discussion, but even at current prices, publishing all articles in open access would be cheaper than subscriptions for all universities.



The most interesting thing is that this is also beneficial for the journals: they no longer need to maintain a subscription system, they simply charge the authors.



PeerJ
As rg_software noted, PeerJ is an open access journal (publication costs about $1200), but it also offers a lifetime membership (about $400) that covers publications.


In addition, open access articles can reach far more readers (since access to them is not limited by anything), which makes it easier for them to accumulate citations. Again, this benefits everyone: the authors, and even the journals, because it raises the impact factor.



Presubmission



Many journals have their own standards for formatting text, figures and other parts of an article. This can cause a very annoying problem: if the editor of one journal rejects your article, you may have to completely redo the formatting for the next journal. Not everyone runs into this, but if you are very unlucky, you can waste six months on completely unproductive work and waiting, with no improvement to the research itself. The required changes are often significant and time-consuming, and an article can be rejected more than once.






To avoid completely rewriting an entire article, journals offer authors presubmission: the authors send only a short description of the article to the editor, who makes a preliminary decision. If the article does not suit the journal, it can be sent to another one without wasting time on preparing the full formatting. If the editor is interested in the work, the usual submission of the full text for review begins. Many publishers now provide this option.



Presubmission may seem like a minor improvement, but in today's environment, with a huge number of journals, it can significantly simplify authors' lives and save a lot of time.



Pre-registration of the study (preregistration)



A rather interesting pilot project was launched by the publisher PLOS: you can register your project with the journal at the very beginning of the work. Only the concept is registered; at that point there are no final results or finished text. Preregistration is an interesting opportunity to get early feedback from other academics and potential reviewers. This helps to optimize the work and speeds up the review once the article is submitted to the journal.



Another plus of preliminary registration is the publication of the results, regardless of whether it was possible to confirm the stated hypothesis or not. The fact is that it is almost impossible to publish negative results now. This leads to a bias in the perception of scientific facts: only hypotheses that have been confirmed are published, those that failed are published extremely rarely. Pre-registering can fix this problem. If you have registered your project from the beginning, the final result will be published regardless of whether it is positive or negative.



There are obvious disadvantages to preregistering projects. Some scholars believe that it would make it possible to "occupy" interesting topics and then explore them slowly. On the whole, the question of priority in such a system becomes very controversial. In many areas of science, it is not so much the initial idea that matters as its experimental verification: proposing a project is easy, implementing it is much harder. Preregistration may thus encourage groups to submit more projects than they can actually carry out.



But there is an even more obvious disadvantage. Journals that use preregistration are at a disadvantage compared to traditional ones: preregistration requires you to disclose the details of your project and to choose, from the very beginning, the journal in which you will publish. Other scholars can use your ideas and publish their research in a traditional journal. In other words, a preregistration system can work effectively only if all journals participate in it.



Reviewing Articles






Peer review is the most important part of the publishing process; without it, you cannot be sure that the information has been verified. However, it is a long and laborious process. Here are some ways it is being optimized.





Preprints



A preprint is a scientific text that has not yet passed peer review. Authors can post their research on dedicated sites such as arxiv.org and biorxiv.org.



Preprint services have become very popular lately. From the point of view of disseminating scientific information, they are in no way inferior to ordinary articles: anyone can download and read the manuscript. The main difference is that a preprint has not been verified, but if a specialist reads it, this is not too significant, since the reader acts as their own reviewer. And a preprint can be posted much earlier, speeding up the exchange of information.



Most often, a preprint is simultaneously submitted to a regular scientific journal, so after a while it becomes an ordinary peer-reviewed article. Preprints can thus help get around closed access, and also let the authors collect comments from colleagues while the article is still under review.









F1000Research



The journal F1000Research combines both possibilities: a manuscript is first published as a preprint, and after review it receives the status of a verified article. In my opinion, this is a promising direction, but so far very few journals use it.



The journal also allows publishing posters and slides, with DOIs assigned, which makes these materials easier to find and, when necessary, to cite.



The journal's position, stated on its home page, is very close to my own:

Publish all your findings including null results, data notes and more.

Engage with your reviewers openly and transparently.

Accelerate the impact of your research.


Interestingly, a similar model was chosen for JMIRx, journals associated with bioRxiv, medRxiv and PsyArXiv. Authors upload an article to the archive, and JMIRx editors select some articles and send them for review. Authors can also nominate their own article for review. The revisions proposed by the reviewers are uploaded to the archive as well.



This is how the idea is described in JMIRx:

Researchers could submit type-1 electronic papers [non peer-reviewed preprints] to preprint servers for discussion and peer-review, and journal editors and publishers would pick and bid for the best papers they want to see as 'type-2 papers' [version of record] in their journal.


In my opinion, this is a great example of a new approach to publishing results. I do not agree with all of their decisions (for example, reviewers are invited mainly based on the authors' suggestions), but I think the overall spirit of innovation is exactly right.



Peer review prior to submission to the journal



Recently, several journals have joined forces so that scientists submit an article not to a specific journal but to a shared review process. After the review, the most suitable journal for publication is selected. An editor representing the whole consortium also participates in this process.



This review format gives you confidence that the article will not be rejected by a particular journal in the process. This means that the authors will not waste time on re-submission, because the choice of a particular journal for publication occurs after the review.



The obvious development of such a scheme is the consolidation of more and more journals. However, competition between publishers can become an obstacle: within one publishing house it is not very difficult to find a suitable journal on a topic, and different publishers have journals with similar topics that are often close in impact factor.



Publishing reviews



Some journals (such as eLife and Nature) publish the peer reviews. I think this is right, because the review is an important part of the scientific process. If a reviewer suggested good experiments and noticed important inaccuracies, they contributed to the development of the study. On the other hand, reviewers' demands can sometimes be completely illogical, and then it is also useful to see the reviews in order to understand what was added and what the authors originally proposed. Publishing the reviews does not prevent the reviewers from remaining anonymous.



The question of whether reviews should be anonymous has no clear answer. In most cases, single-blind review is used: the authors' names are known, and the reviewers are anonymous. Nature has offered double-blind review, in which neither the authors nor the reviewers disclose their names (the authors' names are, of course, revealed after the review). In this variant, the task of anonymizing the article falls on the authors, and that is far from easy: the topic, object and methods of a study often make it possible to identify the laboratory precisely.







The already mentioned F1000Research, on the contrary, supports a completely open review: both authors and reviewers know each other's names. I don't have a definite opinion on which approach is better; each has its advantages. On one point, many participants in the discussion agree: an anonymous review is probably more critical.



Remuneration for reviewers



One of the obvious problems, in my opinion, is that the work of reviewers is not paid.





It is clear that journals are not at all interested in changing this system. Despite that, small steps forward are being made: some journals are at least discussing free publication for active reviewers. The steps are very small, but it seems the community is beginning to seriously reflect on the flaws of the existing model.



Impact factor as a measure of coolness



A very important problem of the modern scientific process as a whole is how to assess the success and effectiveness of scientific work. This is an eternal topic that can be discussed from all sorts of sides, but today it is important for us how scientific publications influence this.



The point is that articles are the main measure of a scientist's success. The vast majority of success metrics use one or another publication-related measure. Everyone with even a slight connection to science has heard of the h-index (Hirsch index), citation counts and the impact factor. The latter is used most often when reporting on grants and applying for new ones (that is, it determines how much money a scientist will have), which means the impact factor affects researchers' success most directly.



Impact factor is a serious business




The impact factor is the number of citations received in a year by all articles that came out in the previous two years, divided by the number of those articles. In other words, it is the average citation rate of articles in the journal. The main drawback follows directly from this: the impact factor is a characteristic of a journal, not of an individual article. To some extent these values are correlated: a bad article will not get into a prominent journal. The problem is that it is a very indirect estimate. We do not know which journals the authors submitted to, nor what guided the editor who accepted or rejected the article: it could be the quality or novelty of the article, a hype topic, or a well-known scientist among the authors. We do not know why an article ended up in a good journal; the impact factor is a cumulative measure that blends all of an article's strengths and weaknesses. Moreover, the final decision is made by a single person, the editor, and the assessment of a scientific article's quality depends on that decision. All this makes the impact factor a very opaque and hard-to-analyze measure of the quality of an individual article.
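The calculation itself is trivial; here is a minimal sketch (the numbers in the example are made up, and real providers apply extra rules about which items count as citable):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Impact factor for year Y: citations received in Y to articles
    published in Y-1 and Y-2, divided by the number of those articles."""
    if articles_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 300 citations in 2021 to 150 articles from 2019-2020
print(impact_factor(300, 150))  # 2.0
```

Note that a single heavily cited article can dominate this average, which is another reason it says little about any individual paper.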



The very question of the quality of scientific work is very complicated. What is more important, high quality of experiments or novelty? Or maybe the momentary popularity of the topic? But the impact factor hides all these parameters (and many others) in one figure, calculated for all items over two years. 



What if the Impact Factor is useless?




Many scientists oppose the use of impact factors, for example the DORA declaration and ASAPbio, which advocate their abolition. Nobel laureate Randy Schekman, one of the founders of the journal eLife, also calls for abandoning the metric. Interestingly, eLife initially did not want the journal indexed in this rating at all, but Thomson Reuters, which compiles the impact factor list, did not take its opinion into account.





Most important, however, are the evaluation principles used by large funders. If they decide to abandon the impact factor in favor of a different method of assessment, the status quo could change very quickly.



Reproducibility



This is the most important problem facing the entire scientific community today. There is no single solution: funders, journals, and scientists themselves must work together to improve the reliability of data. Scientific publications, however, play an important role in the process. Stricter peer review that validates research methods and data availability should be the starting point for improving reproducibility.





One way to organize the description of methods, materials and data is through standard forms. There is currently no single standard for describing methods, but some journals offer their own guidelines. For example, Cell Press uses the STAR Methods format and a key resources table: a list of criteria for describing methods and precisely specifying all materials used. These criteria are not ideal, but they are a big step forward. Cell also no longer allows methods to be relegated to the supplementary materials, which likewise helps standardize the description.



It is also worth noting the emergence of many resources devoted exclusively to sharing research protocols (for example, protocols.io). Dedicated journals publish very detailed procedures; JoVE (Journal of Visualized Experiments), for example, publishes not only textual descriptions but also videos showing the details of the process, which can be very useful for reproducing complex experiments.






When it comes to reproducibility, Retraction Watch certainly deserves a mention. It attacks the problem from the other side, looking for image manipulation, experimental irregularities and other falsifications in already published articles. Even the most attentive and responsible reviewers can miss a mistake or inaccuracy, and this is where the community helps by flagging suspicious articles.



Often, as a result of Retraction Watch's work, the journal retracts the falsified articles. It is worth noting that the modern system almost never takes the next step: nobody reviews the other articles checked by the same reviewers, for example. Even if an editor at one journal identifies a systematically unscrupulous reviewer, other journals will not find out about it.



Interaction of scientists



Last time, the possibility of commenting on scientific articles provoked a lively reaction. A special genre of publication used to be widespread: the "commentary on an article", a small note sent to the journal in which one scientist discussed another group's article. A good format for scholarly discussion, but rather slow.



Now I hardly ever come across such notes. Journal sites have comment sections, but almost no one uses them. In this respect I find Habr very inspiring, since here the comments serve as a valuable source of additional information that develops the ideas of an article. Comments in a scientific journal would clearly have to work differently, but the very possibility of publishing small experimental notes or discussions is in demand.





Here it must be borne in mind that most of the readers of the article are specialists in the same field. Many of them want to use the data or experimental approaches from the article. That is, their opinion can be valuable both for the authors and for other readers of the article. And scientists now use a variety of tools to facilitate such communication.







Surprisingly, Twitter remains one of the most popular platforms. Scientists not only share links to their articles but also hold quite large discussions there. It seems to me not very convenient (if only because of the character limit), but the platform has already become a kind of social network for scientists.



More specialized platforms are also evolving. Probably the best-known social network for scientists is ResearchGate, a fairly user-friendly site with wide functionality. You can upload articles and preprints, follow scientists you are interested in, create work-in-progress projects with not-yet-published experiments, and comment on articles. There is also a rating system based on publications, questions, answers and the number of followers.



Elsevier has its own social network for scholars, built around the reference manager Mendeley. Surprisingly, even after the purchase by Elsevier, the program remains free. It is a fairly convenient reference manager, though I have not used it as a social network.



How I see the journal of the future



In fact, the title of this section is a bit of a cheat: there is nothing particularly futuristic about the concept. I am not suggesting replacing scientists with robots that collect data automatically, or using blockchain to protect against falsification. Everything I propose already exists and is in use. I am only suggesting combining the pieces that work.



Main idea



The idea is to create a unified archive of information where authors can upload research on any topic. It combines the advantages of scientific journals, but works (almost) automatically!



The system would be based on a program such as JANE. If you are not familiar with such programs and have a short scientific text on a biomedical topic at hand, I recommend following the link and trying it. JANE searches for similar articles and, based on them, suggests suitable journals and lists authors working on the topic. The details are described in this article.



Who is Jane?
Have you recently written a paper, but you're not sure to which journal you should submit it? Or maybe you want to find relevant articles to cite in your paper? Or are you an editor, and do you need to find reviewers for a particular paper? Jane can help!



Just enter the title and/or abstract of the paper in the box, and click on 'Find journals', 'Find authors' or 'Find Articles'. Jane will then compare your document to millions of documents in PubMed to find the best matching journals, authors or articles.




It seems to me that such a program is ideal for the role of an automatic editor: it would be able to assign subject categories and keywords, as well as find reviewers.
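A JANE-like matcher can be approximated with plain bag-of-words similarity. The mini-corpus, names and query below are invented for illustration; a real system would search millions of PubMed abstracts with a proper ranking model:

```python
from collections import Counter
import math

def tokenize(text: str) -> Counter:
    """Naive bag-of-words; a real system would stem and drop stop words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented mini-corpus: one indexed abstract per potential reviewer
corpus = {
    "Dr. A": "protein folding dynamics in yeast cells",
    "Dr. B": "graph algorithms for network routing",
}

query = tokenize("misfolded protein aggregation in yeast")
best = max(corpus, key=lambda name: cosine(query, tokenize(corpus[name])))
print(best)  # Dr. A
```

The same similarity score can rank journals instead of authors, which is essentially what JANE's "Find journals" button does.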



The author uploads the manuscript to the server; the program finds suitable reviewers and emails them. Reviewers check the article, make a decision, mark the article's status on the site, and send their comments to the authors. Once the corrections are made, the reviewers approve publication and give the article a grade.



Thus, we get a peer-reviewed journal for the price of a preprint archive!



These are not all the features that I would like to offer, but this is the most basic idea. It is easy to add additional necessary functionality to it. Let's see how such a service might look in detail.



Details



Format notes



The service should be available to everyone; all documents should be publicly available.



I really like the concept of the F1000Research journal mentioned earlier. So, immediately after uploading, the text becomes available as a preprint, with an explicit mark that it has not yet been reviewed. After review, its status changes to a reviewed article.



Typically, an article consists of several sections, often representing different hypotheses, experiments, or parts of the work. I find it helpful to add keywords to such sections, and even to individual experiments, to make them easier to find. Each experiment should also credit the authors who performed and analyzed it.



Each experiment references the methods used, which are described with links to the complete protocols. Such protocols can be published separately on dedicated sites (for example, protocols.io).



All data must be uploaded to independent services, with links tying each experiment to specific result files.



For more control and reproducibility, one could add a requirement to open access to the project's electronic laboratory notebook (Benchling is one example). Keeping such a notebook requires little extra effort, and it could significantly reduce falsification and improve the description of experiments. However, electronic laboratory notebooks have not yet become a standard, so the requirement may be too strict for now.



Provide links to other articles of different types. That is, not only would citations of an article be counted, but some would be marked as passing references, some as the basis for the research, and some as citations pointing out contradictions. Different types of citation would contribute differently to the evaluation of the cited article. Passing mentions and key references could be distinguished automatically (by the number of times an article is cited within the text). An interesting approach to typed links is described in this article, and I discuss it in more detail below.
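As a sketch of how typed citations might be stored and weighted (the citation types come from the idea above, but the weight values and DOIs are purely hypothetical, not taken from any existing system):

```python
from dataclasses import dataclass
from enum import Enum

class CiteType(Enum):
    MENTION = "mention"              # passing reference
    BASIS = "basis"                  # work the study directly builds on
    CONTRADICTION = "contradiction"  # cited to point out a conflict

@dataclass
class Citation:
    target_doi: str   # hypothetical DOI of the cited article
    section: str      # which section of the cited article is referenced
    kind: CiteType

# Hypothetical weights: how much each citation type contributes
# to the cited article's score
WEIGHTS = {CiteType.MENTION: 0.5, CiteType.BASIS: 2.0, CiteType.CONTRADICTION: 1.0}

refs = [
    Citation("10.1000/xyz1", "Methods", CiteType.BASIS),
    Citation("10.1000/xyz2", "Results", CiteType.MENTION),
]
score = sum(WEIGHTS[c.kind] for c in refs)
print(score)  # 2.5
```

The `section` field is what makes section-level citation, discussed next, possible.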



Add citation of specific sections of articles (at least for internal links): when citing another article, you indicate the section where the relevant information is located. This would greatly simplify finding facts in an article and verifying sources.



Registration



Anyone can read articles and comments. But only registered users can upload articles, write comments and reviews.



To register, you must be the author of an article in a peer-reviewed journal or have a recommendation from a scientist with publications. This way, every new member is vouched for by established specialists.



Each participant is assigned a rating. At registration, the rating is determined from bibliometric indicators (number of articles, citations, h-index).
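Of the listed indicators, the h-index is the easiest to compute automatically. A standard implementation, for reference:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

How exactly such indicators are blended into a starting rating is a design decision the system would have to make explicit.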



Automatic editor and review



When authors upload their article to the server, the system analyzes the abstract and selects keywords and reviewers. When selecting reviewers, their co-authors and affiliations are taken into account to avoid conflicts of interest.



Reviewers are selected with different ratings, but not too low: the system tries to find one reviewer with a high rating and one with an average rating. The search is carried out not only among registered users but among all scientists with publications in the database.



Selected reviewers receive emails from the journal inviting them to review the manuscript. The letter contains the abstract, a link to the full text and a link to an anonymous channel for communicating with the authors. If a reviewer accepts, they read the article and send their review to the authors. If they decline, the system keeps looking.
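The selection loop with conflict-of-interest filtering could look roughly like this (the names, affiliations and the two filtering rules are illustrative assumptions, not a specification):

```python
def pick_reviewers(candidates, authors, author_affiliations, coauthor_map, n=2):
    """Select up to n reviewers by descending rating, skipping conflicts
    of interest: a shared affiliation or past co-authorship."""
    chosen = []
    for person, affiliation, rating in sorted(candidates, key=lambda c: -c[2]):
        if affiliation in author_affiliations:
            continue  # same institution as an author
        if any(person in coauthor_map.get(a, set()) for a in authors):
            continue  # has published with an author before
        chosen.append(person)
        if len(chosen) == n:
            break
    return chosen

# Invented candidates: (name, affiliation, rating)
candidates = [("R1", "Uni X", 90), ("R2", "Uni Y", 70), ("R3", "Uni Z", 80)]
authors = ["A1"]
picked = pick_reviewers(candidates, authors, {"Uni X"}, {"A1": {"R3"}})
print(picked)  # ['R2']
```

R1 is skipped for sharing an author's institution and R3 for past co-authorship, so only R2 survives the filter.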



The reviewer's job differs from the usual one only in that the manuscript is scored on several criteria. Otherwise, the reviewer describes the article's strengths and weaknesses and suggests additional experiments, as usual. Feedback is immediately available to the authors, who can correct the article and run additional experiments without waiting for all the reviewers to respond.



All registered users can write "custom" reviews and rate the article. This rating can be considered independently of the rating of the selected reviewers, or they can be combined taking into account the possible conflict of interest and rating.



Rating



All registered users have a rating, earned for articles (including those in other journals), for reviews, and for comments. Ideally, the rating should be maintained semi-automatically.



It is important that all user actions are logged and that each action can raise or lower the user's rating. For example, a reviewer receives points for each review; if a review turns out to be biased, or an obvious mistake was missed, the reviewer loses rating, while a particularly good review earns bonus points. I describe below how controversial cases in evaluating articles are handled.
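An append-only action log, from which the rating is always derived rather than edited in place, might be sketched like this (the action names and point values are invented for illustration):

```python
# Hypothetical point values for logged actions
POINTS = {
    "review_submitted": 5,
    "review_praised": 3,
    "missed_error": -4,
    "biased_review": -6,
}

class RatingLedger:
    """Append-only log of user actions; the rating is always recomputed
    from the log, so every change stays auditable."""
    def __init__(self):
        self.log = []

    def record(self, user, action):
        self.log.append((user, action))

    def rating(self, user):
        return sum(POINTS[action] for u, action in self.log if u == user)

ledger = RatingLedger()
ledger.record("reviewer1", "review_submitted")
ledger.record("reviewer1", "missed_error")
print(ledger.rating("reviewer1"))  # 1
```

Because nothing is ever overwritten, contested decisions can be re-examined later simply by replaying the log.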



Comments 



The review and the authors' responses to it become the first level of comments. At the same time, reviewers can remain anonymous or open their names at their own discretion.



All registered users can leave comments on any article. These can be โ€œuserโ€ reviews, free discussion threads, reproduction of experiments (both successful and unsuccessful), complaints about insufficiently detailed methods and unavailable data.



Such complaints can be semi-automatic: you click a button and describe the essence of the complaint. If the authors fix the problem themselves, their rating does not decrease. If they do not respond, an additional reviewer is assigned, and both the authors and the original reviewers lose rating.



Comments containing individual experiments receive their own DOI and can be checked by a reviewer. Authors can also post additional experiments to their own article as comments. This is often useful, since experiments that do not fit an article's narrative are usually left out of the manuscript. It would make it possible to publish small pieces of work based on an article that would never stretch to a full publication; this could be handy for student projects, for example.



Network



This type of service can take full advantage of networks with a large number of users. For example, you will be able to get recommendations for articles that are read by people with the same search history.
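The simplest version of such recommendations is overlap of reading histories; a toy sketch with invented users and DOIs:

```python
def jaccard(a, b):
    """Overlap of two reading histories."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented reading histories keyed by user
history = {
    "you":   {"doi:1", "doi:2", "doi:3"},
    "peer1": {"doi:2", "doi:3", "doi:4"},
    "peer2": {"doi:9"},
}

# Recommend what the most similar reader has read that you haven't
similar = max((u for u in history if u != "you"),
              key=lambda u: jaccard(history["you"], history[u]))
recs = history[similar] - history["you"]
print(similar, recs)  # peer1 {'doi:4'}
```

Real recommenders aggregate over many users instead of picking a single nearest neighbor, but the principle is the same.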



Or subscribe to a well-known scientist and, through their article ratings, get a digest of their opinion on published work. In a sense, each scientist becomes an editor, but one who selects articles after publication rather than before.



In addition, such a network of interactions allows one to find conflicts of interest.



Conflict resolution



It is clear that conflicts can arise in any system. This can be seen in the example of edit wars on Wikipedia, and on other resources. Algorithms can be used to identify and resolve conflicts, but in some cases a human decision is needed. Moderators can work on a permanent basis or be volunteers. In the second case, active users with a good rating may receive messages asking them to check a specific article or comment. That is, they can work as reviewers, but to address a specific issue.



The search for conflicts in many cases can be automated. Several typical potential integrity violations can be considered.



Opposing schools: there are cases when several groups hold different theories and try to downgrade the opposing point of view.



This situation can be detected from the graph of links: citations, ratings and affiliations. Opposing groups show up as isolated cliques in the graph, with opposite assessments of each other's articles. In this case, grades, comments and reviews from the other group can be marked with a special label: you cannot simply throw them away, since they may contain valuable information, but neither can they be counted as impartial.
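Detecting isolated groups is a standard graph problem: build a graph of positive links (co-citations, favorable reviews) and find its connected components; reviews that cross component boundaries can then be flagged. A minimal sketch with made-up lab names:

```python
from collections import defaultdict

def components(edges):
    """Connected components of an undirected graph, e.g. the graph of
    positive citation/review links between labs."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Two groups that cite within themselves but never each other
edges = [("lab1", "lab2"), ("lab2", "lab3"), ("lab4", "lab5")]
groups = components(edges)
print(len(groups))  # 2
```

A review of a lab1 paper written by lab4 would cross a component boundary and could be tagged for possible school bias.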



"Friends" - sometimes the opposite situation happens, colleagues or acquaintances overestimate each other. This can also be calculated from the graph of links and citations. Such assessments can also be tagged with a special tag.



Complaints - as I already wrote in the section about comments, you can file a complaint about an article if you find a serious flaw in it: insufficiently detailed descriptions of methods, missing code or data references, or something more serious such as image manipulation, falsification, or pseudo-scientific claims. A minor issue can be fixed by the author without involving a moderator. A serious violation is handled by one or more independent moderators, who make a decision; depending on that decision, the ratings of the authors, reviewers, and commenters change.



Promotion



Darksnake described a similar system in his comment on the first part. He also shared an idea for how to promote such a journal:

Exactly as a preprint archive. There are very few of them now. A formal journal can then be built on top of the preprint archive. Making your own journal from scratch is unrealistic, but making a journal on top of a ready-made base of publications is quite feasible.


It seems to me that such a promotion model also suits the system described here. Indeed, from the user's point of view it differs little from a preprint archive, and there is no additional cost for editorial staff in this case either.



A bit of fantasy



A good example is PubMed Central (PMC), a free full-text archive run by the National Institutes of Health (NIH): the results of NIH-funded research must be deposited in PMC, which makes them openly available.



Payment for the work of reviewers 



I have described a service that is free for both readers and authors. Of course, it still requires a certain amount of money for maintaining and developing the infrastructure and for other necessary expenses. However, for the price of a preprint service, we get a peer-reviewed journal.



In the future, remuneration for reviewers could be added, for example, 100-200 euros per review. Even then, the total cost for authors would be much lower than the average cost of an open-access publication. There are various options: reviewers could be paid according to the quality and rating of their reviews, and publication in the journal could be priced on a "pay what you can" scheme or any other, but in any case for far less money than open access costs now.



Problems



In discussions of this system, I have encountered several potential problems. Let's discuss some of them (I'm sure there will be more in the comments).



Too much workload for reviewers



There are concerns that too many articles will be submitted to reviewers without prior selection by the editor. It seems to me that this is a solvable problem.



First, it's easy to keep track of how many articles were submitted to each reviewer and not submit new ones if there are already too many of them.



Second, there is a fairly large reserve of scientists who usually do not receive articles for review: postdocs and graduate students. Often they are the ones who actually review articles for less prestigious journals, having received the assignment from their supervisor, who is formally listed as the reviewer. Many people are skeptical about peer review by early-career scientists, but it seems to me that many of them handle the task just as well as their senior colleagues.



Third, in my experience, not every reviewer reads an article in full before agreeing to review it. A potential reviewer can decide for themselves whether to take an article by reading the abstract and looking at the figures, and can always decline if the article does not interest them, if they do not feel competent enough in the required field, or if they lack the time.



Fourth, the rating system should help solve this problem. Scientists with a higher rating could get priority in the search for a reviewer, and the rating could be displayed even in anonymous communication. Conversely, users with a low rating, who have previously been caught submitting poor-quality work, could be limited in how often they submit articles (for example, no more than once every few months).
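The first point above, tracking reviewer load, is trivial to implement. Here is a minimal sketch under assumed parameters: each reviewer has a cap on open reviews, and the least-loaded eligible reviewer is asked first. The cap value and all names are hypothetical.

```python
# Minimal sketch of load-aware reviewer assignment. The cap and the
# tie-breaking rule are assumptions for illustration.

MAX_OPEN_REVIEWS = 3

def pick_reviewer(open_reviews, declined=()):
    """Choose the least-loaded reviewer under the cap, skipping decliners."""
    eligible = [
        (count, name)
        for name, count in open_reviews.items()
        if count < MAX_OPEN_REVIEWS and name not in declined
    ]
    if not eligible:
        return None  # queue the article until someone frees up
    return min(eligible)[1]  # fewest open reviews wins

open_reviews = {"postdoc_a": 1, "prof_b": 3, "student_c": 0}
print(pick_reviewer(open_reviews))
print(pick_reviewer(open_reviews, declined=("student_c",)))
```

Combined with the fourth point, the selection could also be weighted by rating instead of pure load, but the invariant stays the same: nobody is asked past their cap.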



Optimizing articles for "likes"



This means that scientists will adapt their articles to whatever rating system is introduced. If articles are rated with conditional "likes" from other scientists, then articles will be optimized to collect as many of these likes as possible. This is a perennial problem that arises in any rating system. In a comment on the previous article, jungeschwili mentioned the cobra effect and Goodhart's law, which describe the problem very well. Nevertheless, several ways can be proposed to compensate for this effect.



First, the rating is not meant to be the only measure of article quality. I only propose moving away from impact-factor-based assessment and adding an explicit assessment by reviewers.



Second, automatic detection of conflicts of interest filters out a noticeable share of "likes" from the author's "friends" or "enemies", which are likely to be less objective.



Third, it is best to have feedback: if a scientist rates articles in bad faith, the weight of their ratings should fall.
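The feedback idea can be made concrete with a simple weighting rule: a rater's weight drops as their scores diverge from the consensus on the same articles. The formula below (mean absolute deviation, linearly rescaled) is an illustrative assumption, not a fixed proposal; all data is invented.

```python
# Sketch of rating feedback: weight in [0, 1] that falls as a rater's
# scores diverge from consensus. Formula and scale are assumptions.

def rater_weight(own_scores, consensus_scores, scale=4.0):
    """Return 1.0 for full agreement with consensus, less otherwise."""
    shared = [a for a in own_scores if a in consensus_scores]
    if not shared:
        return 1.0  # no track record yet, full weight
    error = sum(
        abs(own_scores[a] - consensus_scores[a]) for a in shared
    ) / len(shared)
    return max(0.0, 1.0 - error / scale)

consensus = {"paper1": 4.0, "paper2": 2.0}
fair = {"paper1": 4.0, "paper2": 2.5}      # close to consensus
biased = {"paper1": 1.0, "paper2": 5.0}    # systematically off
print(rater_weight(fair, consensus))
print(rater_weight(biased, consensus))
```

The important design choice is the direction of the loop: the weight is recomputed continuously, so a rater who games the system pays for it with every future rating.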



We need to go deeper



Microarticles



The problem behind this section is one I did not analyze last time: how to credit all the authors of an article. In biology, the first author is considered the main one, the person who made the greatest contribution to the work. But there can be many gradations, and dividing the degree of participation among a large number of authors can be difficult. Some journals ask authors to state their contributions explicitly at the end of the article.



However, there is a more interesting solution. If an article is published not as a single whole but as a set of separate experiments, each experiment can have its own list of authors. Moreover, one could then cite not the whole article but only the parts of interest.
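A possible data model for such "microarticles" is easy to sketch: each experiment carries its own author list and its own citable identifier. The schema below is purely hypothetical; field names and identifier formats are invented for illustration.

```python
# Hypothetical data model for microarticles: per-experiment authorship
# and citable IDs, so credit and citations attach to parts of a paper.

from dataclasses import dataclass, field

@dataclass
class Experiment:
    exp_id: str       # citable identifier of this experiment
    title: str
    authors: list     # authors of this specific experiment

@dataclass
class Microarticle:
    article_id: str
    title: str
    experiments: list = field(default_factory=list)

    def all_authors(self):
        """Union of per-experiment author lists, order preserved."""
        seen = []
        for exp in self.experiments:
            for a in exp.authors:
                if a not in seen:
                    seen.append(a)
        return seen

paper = Microarticle("art42", "Protein A binding", [
    Experiment("art42.e1", "Pull-down assay", ["Ivanova", "Petrov"]),
    Experiment("art42.e2", "ChIP-seq analysis", ["Petrov", "Sidorova"]),
])
print(paper.all_authors())
```

With IDs like `art42.e2`, a citation can point at exactly the experiment it relies on, and contribution statements fall out of the structure for free.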



This concept has a number of other interesting advantages. Petr Lidsky described it, and even the process of transitioning to it, in great detail in his article. If you want to see one possible direction for the journals of the future, I highly recommend reading it.



Integration of the received information



One of the most important problems is not even how best to publish an individual scientific article, but how to organize scientific knowledge as a whole. After all, each article is just a tiny piece of a huge puzzle that we are trying to assemble without the picture on the box.



Review articles are currently one of the few ways to combine disparate studies; scientists write them on the basis of various experimental works. However, even the best reviews become outdated quickly, with new articles coming out every week.



In the comments (one, two) on the previous article, readers suggested using the wiki format so that the community can keep reviews up to date. In my opinion, this is quite an interesting idea; moreover, it is already used, in slightly different forms, in various databases. However, there is room for development here as well.



This is how my colleague, Zoya Chervontseva, describes her view on the problems of scientific publications:

It seems to me that the biggest problem with biological publications right now is the mess in the semantic part, not the organizational one. There are many individual statements (protein A does this and that under such and such conditions) that are not connected with each other in any way. Reviews try to fill this need, but they are not enough. The ideal publication system, IMHO, should explicitly write its new statements into some structure (a graph?) of previous knowledge.



A separate difficulty is that many statements are now probabilistic: it is not that protein A does something, but that protein A, perhaps, according to the results of our new super-sophisticated protocol, binds to this site in DNA with such and such a p-value - and so on for thousands of sites in the genome. So we end up not even with a unit of information, but with a probability density of information)). Plus, of course, batch effects and bugs in analysis programs - yes, but it seems that irreproducibility here is less of a problem than the fact that we are currently unable to properly integrate this kind of information at all.


Nowadays, various databases perform a similar function, but they are still far from ideal - not to mention creating a data structure that describes a large area of science at once. However, I am sure that this direction will continue to develop actively in the near future.
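The "graph of statements" idea from the quote above can be sketched as a store of edges with provenance and confidence, so that probabilistic claims (like binding sites with p-values) are integrated rather than lost. The schema below is a toy assumption for illustration; no real database works exactly like this.

```python
# Toy sketch of a knowledge graph of scientific statements: each claim
# is an edge with a source and a confidence value. Schema is assumed.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # (subject, relation) -> list of claimed findings
        self.edges = defaultdict(list)

    def assert_fact(self, subject, relation, obj, source, confidence):
        """Record a claim together with where it came from."""
        self.edges[(subject, relation)].append(
            {"object": obj, "source": source, "confidence": confidence}
        )

    def query(self, subject, relation, min_confidence=0.0):
        """All claimed objects for a relation above a confidence cutoff."""
        return [
            f for f in self.edges[(subject, relation)]
            if f["confidence"] >= min_confidence
        ]

kg = KnowledgeGraph()
kg.assert_fact("protein_A", "binds", "site_chr1_100", "doi:10.1/xyz", 0.99)
kg.assert_fact("protein_A", "binds", "site_chr2_500", "doi:10.1/abc", 0.40)
print(len(kg.query("protein_A", "binds", min_confidence=0.9)))
```

The point of keeping confidence and source on every edge is exactly the one raised in the quote: a weak claim is still information, as long as the structure remembers how weak it is and where it came from.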



The role of classic journals in the future



It is clear that classic journals will not disappear overnight - and they do not need to. They may well operate in parallel with the open-access system.



For example, they could commission eminent scientists to write reviews of different fields, do science journalism, or present research in an accessible form to non-specialists.



A little conclusion



These are the directions in the development of scientific publications that seem promising to me. I think that in the near future even such forecasts may turn out to be rather modest: there is huge potential for development in this area, and who knows which format of presenting information will prove most effective. I am only sure that we need to keep developing scientific publications and use the opportunities available.



More formats, good and different




And I would like to thank you for reading this article and invite you to a discussion in the comments. I am sure you have interesting ideas and I will be happy to discuss them.



Acknowledgments



Many thanks to Olga Zolotareva for discussions and ideas for this article. Thanks to Sofya Kamalyan for help in checking the text. Thanks to everyone who took part in the discussion on the first part of the article, especially: Peter Lidsky, Nadezhda Vorobyova, Omar Kantidze, Zoya Chervontseva, Alexei Savchik. For Habr users: rg_software, Jerf, CactusKnight, qvan, technic, darksnake, nnseva, damewigit and many others. And also to all colleagues with whom we discussed the problems of scientific publications.



The first part of the article.


