This is nice and all, but how is a "like button" going to help this situation in any way?
The root causes don't have to do with informal peer recognition, but with a public, measurable kind of recognition, expressed in publications and citations.
Citations actually already are an implicit voting system for quality work. (in theory...)
Encouraging journals to publish more "boring" research, and if need be, dedicated journals, can have an actual impact.
===
Edit: what would be more interesting imo is something similar but more extensive: a decentralized, public peer review system.
Citations actually already are an implicit voting system for quality work. (in theory...)
I'm not sure this is even true in theory. The decision to cite something looks to me to be more about relevance than quality. Citing a paper says, "they investigated a related question," rather than "their investigation was especially rigorous/thorough."
Encouraging journals to publish more "boring" research, and if need be, dedicated journals, can have an actual impact.
Only if tenure committees and funding agencies consider the "boring" research sufficiently important.
(For those reading along: the_duke is talking about the project I'm involved with, https://plaudit.pub .)
I think you're right about the root cause; that's why Plaudit's endorsement data is free and open data, publicly available through CrossRef. Citations indeed are a similar implicit voting system, but they have the "problem" that they only accumulate over a longer time scale: an article first has to undergo peer review, then get published, then get read, then get used, and then the people using it also need to get their results reviewed and published. We're talking several years here.
That's why evaluators often use the journal name/Impact Factor as a proxy for "expected number of citations". With Plaudit, research can start accumulating endorsements from the moment the preprint is published, so it could serve a similar role.
(And since Plaudit endorsements judge impact and robustness separately, articles and the journals that publish them can also get recognition for "boring" research - which is not possible using the Impact Factor.)
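Since the endorsement data mentioned above is deposited as open data with CrossRef, one could in principle query it per article. A minimal sketch of what such a lookup might look like, assuming a CrossRef Event Data-style endpoint; the exact endpoint path and parameter names are assumptions, not a verified API contract:

```python
from urllib.parse import urlencode

# Assumed base URL for CrossRef Event Data; verify against the live docs.
EVENT_DATA_BASE = "https://api.eventdata.crossref.org/v1/events"

def endorsement_query(doi: str, source: str = "plaudit") -> str:
    """Build a query URL for endorsement events attached to a given DOI.

    Only constructs the URL; actually fetching and paging through the
    response is left out of this sketch.
    """
    params = {
        "source": source,                       # which event source to filter on
        "obj-id": f"https://doi.org/{doi}",     # the endorsed article, as a DOI URL
    }
    return f"{EVENT_DATA_BASE}?{urlencode(params)}"
```

For example, `endorsement_query("10.1000/xyz123")` yields a URL filtered to endorsements of that one article, which a client could then fetch and count.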
===
Luckily, there are also plenty of initiatives for alternative public peer review systems. There are also tons of "blockchain for science" projects (caveat: most appear to consist of a single whitepaper), although it's not clear to me what science-specific problem the decentralisation solves.
If you'd like some pointers to some specific such projects, let me know.
My main point of criticism is that it is essentially a like button that requires almost no effort. I can see the end result being researchers just trading "likes" without much consideration.
It could at least require a public text that justifies your approval of the paper, with a minimum length (100+ characters). This would be something like a "peer review light".
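The "peer review light" rule above is trivial to express in code. A minimal sketch, assuming the 100-character threshold suggested here; the function and constant names are illustrative, not part of any real system:

```python
# Illustrative threshold from the suggestion above: endorsements must carry
# a public justification of at least 100 characters to count.
MIN_JUSTIFICATION_LENGTH = 100

def accept_endorsement(justification: str) -> bool:
    """Accept an endorsement only if its public justification is substantive.

    Leading/trailing whitespace is stripped first, so padding with spaces
    does not help reach the minimum length.
    """
    return len(justification.strip()) >= MIN_JUSTIFICATION_LENGTH
```

A bare "Nice paper!" would be rejected, while a short paragraph explaining *why* the paper holds up would pass; of course, a length check alone can't tell a genuine justification from filler text.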
Yes, this is the main potential problem. The hope is that this is combated somewhat by endorsements being completely transparent, traceable, and open data, in contrast to the opaque process of peer review - which has its own problems with nepotism.