# In addition to open access

Open access is mainly about escaping rent-seeking publishers through open-access repositories such as arXiv and HAL. While pre-print submission is becoming increasingly common (thank goodness!), a number of mechanisms still maintain the grip of publishers. Let's review some of them, and how we can redirect our workflow to disable them.

## Overlay journals

One issue for young tenure-track researchers is the pressure to publish in high-impact-factor journals, which are typically run by rent-seeking publishers for pre-Internet reasons. To avoid this, I believe we should move our quality works to overlay journals, also known as "reviewing entities" (such as Peer Community In), where peer review happens on pre-print repositories like arXiv or HAL. We can still put out pre-prints first, then iterate on them in the open, taking into account feedback from colleagues and reviewers. The record of these iterations can also become valuable information for newcomers eager to cross knowledge gaps.

## Post-prints

Since the beginning of my graduate studies I have fortunately been able to submit pre-prints of all my works, focusing the reviewing process on improvements by decoupling it from the ability to share my work. This habit even evolved into post-prints: I keep updating manuscripts after publication, because I noticed that a second wave of meaningful feedback starts flowing around 2–3 years later. Here is a quick model:

• The first wave is peer review: it filters out mistakes or issues that surface on reading. It is mandatory and requires a significant investment over a short time span.
• The second wave comes from peers trying to reproduce the work. It can question design choices and spin off exciting questions. You need to maintain an active online presence to benefit from it.

I have been very happy with the interactions that came from maintaining both post-prints and an active online presence. Ideas take time to spread: putting out a manuscript or a proper code distribution are only the first steps of a meaningful journey.

## Code distribution

As academics, we want our works to be reproduced not only by experts, who are already advanced in their various paths of knowledge, but also by newcomers eager to climb up their own paths. For them, published papers rife with field idiosyncrasies (which come naturally from compression to a fixed number of pages) are not an efficient tool. That's why we should distribute the source code of our works, not simply publish it. Here is a quick attempt at defining what distribution means:

• All dependencies are listed.
• The installation procedure is documented.
• After installing dependencies and compiling the code, anyone can run it.
• Experiments described in the paper can be reproduced, e.g. in simulation.
• Top notch: the code can be compiled using free and open-source software.
• Top notch: a one-line procedure lets anyone try it out.
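One lightweight way to support the first criteria is to ship a small smoke-test script with the distribution, so that anyone can verify the listed dependencies before attempting the experiments. The sketch below is hypothetical: the `REQUIRED` list stands in for a real project's dependency list, and the script name and structure are illustrative, not a prescription.

```python
# smoke_test.py (hypothetical): verify that all listed dependencies
# can be imported before anyone tries to reproduce the experiments.
import importlib

# In a real project, this would mirror the documented dependency list.
REQUIRED = ["json", "math"]

def check_dependencies(modules):
    """Return the subset of modules that fail to import."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    missing = check_dependencies(REQUIRED)
    assert not missing, f"missing dependencies: {missing}"
    print("all dependencies available; ready to run the experiments")
```

A script like this doubles as documentation: the dependency list lives in one place, and a newcomer gets a clear error message instead of a stack trace halfway through an experiment.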

Note the emphasis on "compilable": non-compilable source code is weaker and less likely to be used, for good reasons. By distributing usable code, we help ensure that knowledge gaps can be crossed by newcomers who take the time to work through them. Papers cannot be fully detailed on every point, but compilable source code has to be.
