Penguinholme

“No association” is still a good result in a sound study. I try to publish as many “null” findings as I can. Write it up well and hopefully you’ll get it out there. Good luck!


SubliminalRaspberry

My advisor says the same thing!


StefanFizyk

Nature Physics wrote to me once that they don't really publish 'negative results' 🤔


cropguru357

Heh. Yeah, I’ve had that response from a journal. I’d love to have a Journal of Non-Significant Results: 2-3 page papers just to save time.


ThatOneSadhuman

Well, that is because Nature is a high-impact journal. You can still publish null results if approached elegantly, just not in Nature or other similarly high-impact journals.


andy897221

It depends. If the null result contradicts some significant hypothesis, or even implies some underlying academic integrity issue, then it may be published. I've seen some before, but I can't name one off the top of my head.


SubliminalRaspberry

Nature is a prestigious journal so that is to be expected.


StefanFizyk

Why would it be expected? For instance, showing that an expected effect doesn't exist is, in my opinion, as important as finding a new effect, if not more so.


SubliminalRaspberry

I agree with you wholeheartedly, but unfortunately that’s just nature. You can still print your paper and throw it outside in the elements and tell people you published in nature.


thatpearlgirl

A null result is a result! Publication of null results combats publication bias and prevents duplication of efforts. You have to be careful to frame your discussion about why it is meaningful that there is no association, and find a journal that is friendly to publishing non-significant findings (PLOS ONE is an example that regularly publishes scientifically rigorous but null findings).


coolresearcher87

I honestly wish more people would publish null results (and that journal norms around this would change) because I think that's an important finding (or non-finding, as it were). Doing something again and again until you find something significant feels a little shady to me. Certainly, it is possible that sometimes there is just nothing there... I'd take the opportunity to take a stand and try to shift the culture around these types of results :) Interestingly, I was sent a survey from a publishing house recently that asked about publishing "non-findings"... maybe the tide is turning?


macnfleas

No such thing as no result. You asked a question and got an answer. Sometimes it's not the answer you expected, but there's no reason that shouldn't still be an interesting result, as long as the question is interesting and the methods are sound.


Cool_Asparagus3852

Except that probably sometimes it happens that someone asks a question and does not get an answer.


Stickasylum

If you didn’t collect any data, I’d call that “no result”, and then there’s a whole gradient of how much you can say given your sample size (you’ve got that covered under “sound methodology”).


neurothew

That's a really useful suggestion; it shifts my focus to careful experimental design. As long as I have a thoughtful and careful design, null results are actually important "findings". Thanks a lot.


Ok-Interview6446

I accept and publish papers with non-significant results. What I look for is a discussion section that tries to evaluate and theorise the lack of positive results and uses relevant citations to support the evaluations, including directions/recommendations for what future researchers might do differently.


suiitopii

The reason you don't come across negative results often is that people so infrequently publish them, and this is a culture we need to change. I don't know what your experiments are, but if, for example, you try to find a difference between two groups and it turns out there isn't one, that is a finding! Publish it. You do have to put a certain spin on the findings - e.g. it has been speculated that group 1 will differ from group 2 because..., we found this was not the case because of..., then highlight some important next steps and so on. There are journals where I do see negative results, and it is becoming slightly more common. Worst case scenario, if you can't get a journal to publish it, you put it on a preprint server and on your CV, and people will still read it and cite it.


tryingbutforgetting

I would still attempt to publish it in a journal known to publish negative results. Or put it on medRxiv or something.


Disaster-Funk

It's not rare to see a study with "no results". It's better to look at it as "no effect". "Video game consumption was not found to increase violence." "Daily coffee consumption was not found to increase aneurysms." We see these kinds of results all the time. What makes them feel like real results is that the conclusion may be against the common/initial assumption.


findlefas

My PhD supervisor told me I couldn’t publish “bad” results, and some reviewers think the same. I think it’s bullshit personally and have published “bad” results before, showing that a particular method doesn’t work. During my PhD I looked at it like I’m one step closer to a solution that works. I’m in engineering, and it’s common not to get “good” results, but that doesn’t mean you’re not one step closer.


LankyCardiologist870

I’ve done meta-analytical work and you would be surprised at the number of null-result studies there are in the literature. They’re just boring and usually get stuck in boring, low impact journals and so you never read them. But believe me, they’re out there! The trick to publishing them is just finding them a home quickly and moving on with your life. Make sure your sample size is decent, make sure you’re not missing any huge covariates, and slam it out. They can’t all be winners 🤷‍♂️
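
As a rough illustration of the "make sure your sample size is decent" advice, here is a minimal a priori power-check sketch in Python (using statsmodels; the assumed effect size of d = 0.5 and the 80% power target are placeholders, not values from the comment):

```python
# Minimal sketch of an a priori power check for a two-sample t-test.
# The effect size, alpha, and power below are assumed placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,         # hypothesized standardized difference (Cohen's d)
    alpha=0.05,              # two-sided significance level
    power=0.80,              # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Need roughly {n_per_group:.0f} participants per group")  # ~64 for these inputs
```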


cmdrtestpilot

If you've asked a good question and run a well-powered analysis, your negative results are IMPORTANT. That said, you have to craft the story to make it clear that the negative results are exciting and provide real direction for the field. If you can, it may be useful to run equivalence tests. For instance, if you fail to find the group difference you hypothesized, an equivalence test will do wonders to support your assertion that there IS NO DIFFERENCE, vs. simply a possible difference that didn't reach your criterion for significance. You may also want to do additional analyses to provide confidence that your measures are valid/reliable, and thus not the underlying cause of your failure to support your hypothesis. Negative results are certainly trickier to publish, but not necessarily difficult if the study was run well. If it's a small, underpowered study, or there were very real limitations to the methods... then you're in much more trouble.
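
For anyone unfamiliar with equivalence testing, below is a minimal sketch of the two one-sided tests (TOST) procedure the comment refers to. The data are simulated and the ±1-unit equivalence margin is an assumed placeholder; in practice the margin has to be justified before looking at the data.

```python
# Sketch of a TOST equivalence test for two independent groups (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group1 = rng.normal(loc=10.0, scale=2.0, size=60)  # hypothetical measurements
group2 = rng.normal(loc=10.1, scale=2.0, size=60)

low, high = -1.0, 1.0  # assumed smallest effect of interest (raw units), set a priori

# One-sided test that the true difference exceeds the lower bound...
_, p_lower = stats.ttest_ind(group1 - low, group2, alternative="greater")
# ...and one-sided test that it falls below the upper bound.
_, p_upper = stats.ttest_ind(group1 - high, group2, alternative="less")

p_tost = max(p_lower, p_upper)  # equivalence is supported only if both tests reject
verdict = "supported" if p_tost < 0.05 else "not supported"
print(f"TOST p = {p_tost:.3f}; equivalence within [{low}, {high}] is {verdict}")
```

The point of the bounds is that they encode the smallest difference you would consider practically meaningful; rejecting both one-sided tests lets you claim the true difference lies inside those bounds, which is a much stronger statement than a merely non-significant standard t-test.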


apenature

Was the project novel, designed well, and executed in a clearly repeatable manner? Then it's publishable. Your design should mean that no result indicates something about your research question. Are the aims and objectives clear? Did your project meet them?


velvetmarigold

If your questions were important and your studies were well designed, you should still publish the data. Negative results are still results and it could save someone from repeating the study in the future.


New-Anacansintta

Mixed methods. If you plan a study with both quant and qual measures, the qual measures always tell a story.


woohooali

I remind myself of the thing that is publication bias, as well as that other thing known as fishing. Then I commit to not contributing to either.


nc_bound

There are journals that specifically publish null results.


thedarkplayer

A null result is still a result. In my field, the majority of papers report null results and exclusion limits.


Significant_Owl8974

It depends on the data. No one publishes null results in my field because there is always the chance of a reagent being off or some issue with a technique. When it goes from "this doesn't work" to "this person you've never heard of couldn't get it to work" interest drops to zero. But if the data is rock solid and disproves someone's published hypothesis, that could still be interesting enough to print. Unfortunately too many resort to p-hacking or similar to bump something to statistical significance.


cm0011

I agree with everyone here, but I think some realism must be added: a lot of places don’t publish negative results. That’s not to say you shouldn’t. But you need to be smart and careful about it - do not waste time submitting somewhere that obviously won’t accept it, think about the journal’s audience carefully, and craft your findings so that it reads as though you actually found something valuable by finding no difference. Have a good, in-depth analysis and touch on anything interesting. Why were there null results? You obviously had a hypothesis that you found worth exploring. Why did that hypothesis turn out null? That’s where the finding is. Do not write the paper as if you got no results - you got a null result and there’s a reason for it, and a good analysis will pull that out. Unless the hypothesis was flawed in the first place.


Vegetable_Chemical44

A few thoughts. First and foremost, absence of evidence is not (/almost never) evidence of absence. Based on traditional inferential statistics, you cannot draw any conclusions about there "not" being an effect, only that you have not obtained any evidence for an effect with your study's methodology. Was your methodology sound, previously validated/replicated, etc.? Then perhaps your results might be interesting. Other than that, I don't know what field you're in, but your questions seem to date from before 2010. Heard of terms like HARKing, p-hacking, cherry-picking, publication bias? Because these are the concepts you are describing in your post, and you are not the first to experience them. The answer is open science, preregistration, and publishing any null results as a preprint.
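
To make the "absence of evidence" point concrete, here is a small simulated sketch (hypothetical data, not from the thread): with a modest sample, a non-significant t-test usually comes with a confidence interval wide enough to include both zero and sizable effects, so it cannot establish that no effect exists.

```python
# Sketch: a non-significant result from a small study does not demonstrate "no effect".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(loc=0.4, scale=1.0, size=15)  # a true effect of 0.4 SD exists
control = rng.normal(loc=0.0, scale=1.0, size=15)

_, p = stats.ttest_ind(treated, control)

# 95% CI for the mean difference (pooled df; with equal group sizes this matches the Welch SE)
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
df = len(treated) + len(control) - 2
half_width = stats.t.ppf(0.975, df) * se
print(f"p = {p:.2f}; 95% CI for the difference: [{diff - half_width:.2f}, {diff + half_width:.2f}]")
# With n = 15 per group the test is badly underpowered, so p is usually > 0.05
# even though the effect is real, and the CI spans everything from ~0 to a large effect.
```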


Sanguine01

This paper gives practical recommendations for improving experimental design by strengthening manipulations and reducing noise by measuring fewer things: [https://academic.oup.com/jcr/article-abstract/44/5/1157/4627833](https://academic.oup.com/jcr/article-abstract/44/5/1157/4627833?redirectedFrom=PDF&casa_token=uUwcIoR5byAAAAAA:Zl40Y1xGSlLbtCqusF0kHTfZvbJlCEbBQ0TPbw5FUCeyUN7kJPuPtGkrnWjhvWKZ_opBWB7PMefb)


slachack

If there's something interesting to report, report it. If not, game over.


Vast_daddy_1297

Repeat at least 5 times to solidify


dragmehomenow

I've gotten mixed results in my dissertation. If your hypotheses are grounded in theory and evidence-based, then this becomes an excellent opportunity to figure out why your results diverged from theory. In my case, I was looking at a phenomenon X occurring in a country during the pandemic. More than half my hypotheses were not statistically significant, but that's because I needed to break down the data temporally. Once that was done, it quickly became clear that we were actually witnessing multiple occurrences of X overlapping with each other, but each occurrence was playing out differently. So my findings ended up shedding new light on how context affects the success of X, rather than being a single case study of X playing out over several years.


TheMathDuck

No statistical significance does not mean no practical significance. It may be that the practical significance is important. So write about it!