Most research falls into one of five categories: case studies (a close look at one person with a specific illness, injury, or condition), prospective studies (where you pick something you are interested in, find a group of people, and follow them to see what develops), retrospective studies (where you take a cohort, or group, and look back to see what they have in common related to some factor you are interested in), meta-analyses (which look for trends across many studies), and controlled clinical trials. In most cases a study will find association, rather than causation.
For example, if I were able to get the weight data of everyone who joins a gym in, say, Maine, and then a year later weigh everyone again, and those who stayed in the gym weighed less than those who left, I could say that staying in a gym is associated with weight loss. I could not say it CAUSED the weight loss, though, because I have no idea what other factors could be involved. Maybe most of the people who left did so for health reasons, and those same health reasons caused the weight change.
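To see how a hidden factor can manufacture an association with no causation at all, here is a toy simulation (every number in it is invented for illustration). A hidden "motivation" factor drives both staying in the gym and dieting; the gym itself does nothing, yet the stayers still come out lighter:

```python
import random

random.seed(0)

# Hypothetical setup: a hidden "motivation" factor drives BOTH
# keeping the gym membership AND dieting. The gym itself has zero effect.
stayers, leavers = [], []
for _ in range(10_000):
    motivation = random.random()       # hidden confounder, 0..1
    stays = motivation > 0.5           # motivated people keep their membership
    # weight change comes only from motivation (dieting), not from the gym
    change = -10 * motivation + random.gauss(0, 1)
    (stayers if stays else leavers).append(change)

avg = lambda xs: sum(xs) / len(xs)
print(f"avg change, stayed: {avg(stayers):+.1f} lb")
print(f"avg change, left:   {avg(leavers):+.1f} lb")
# The stayers show far more weight loss even though the gym did nothing:
# association without causation.
```

A retrospective comparison of the two groups would happily "find" a gym effect here; only a design that controls for the confounder can tell the difference.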
In a clinical trial, the idea is to control for extraneous factors: make everything the same except for the one thing you are interested in. So if I take a group of men within five years of age of each other, in good health, with no weight-training experience, and so on, and spend two weeks doing squats with them three times a week, with a specific starting weight, a rule for progression, and the like, I could then measure gains in leg-muscle strength. With enough people in my group, or enough small studies adding up to enough people and repeatedly showing positive gains, I might be able to propose that, within the kind of group I am studying, squats cause that amount of gain in strength.
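The "enough people" point can be sketched with another toy simulation (the true gain, noise level, and study sizes are all invented): a single small trial gives a noisy estimate of the effect, while pooling many small trials homes in on the true value.

```python
import random
import statistics

random.seed(1)
TRUE_GAIN = 20  # assumed true average strength gain (lb), invented for illustration


def run_study(n):
    """Simulate one small trial: n subjects, each gain = true effect + individual noise."""
    return [TRUE_GAIN + random.gauss(0, 15) for _ in range(n)]


# One 8-person study: the estimate can easily be several pounds off.
small = statistics.mean(run_study(8))

# Thirty such studies pooled together: the estimate settles near the truth.
pooled = statistics.mean(g for _ in range(30) for g in run_study(8))

print(f"one small study:   {small:.1f} lb")
print(f"30 pooled studies: {pooled:.1f} lb")
```

This is, loosely, why repeated studies and meta-analyses matter: individual small trials scatter around the truth, and pooling them averages that scatter away.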
People sometimes treat scientific studies with skepticism because they can contradict each other. But there are reasons for that. First: the larger the number of people in a trial, the harder the study is to do and the more expensive it is to run. So it is best not to run out and buy whatever food one study links to some health benefit until there are enough studies, with enough people, agreeing on that benefit. Second: it is important to consider the source of the research. If someone wants to sell you something and is doing the research to prove its worth, that is a red flag. It might be fine and above board, but profit is a huge motivator, and if a lot of money rides on the outcome, ask yourself whether a big corporation might feel pressure to publish only those studies that help its own bottom line. Third: it is extremely hard to run a really tightly controlled study. For example, if the person being tested, the person doing the study, or the person recording the outcome knows which people in the study had a placebo and which had the real treatment, that can affect the outcome. Fourth: where is the research being published? Is the journal peer reviewed? Is the journal supported by people who would profit one way or another from a certain outcome?
I think it behooves all of us to be good readers of research. Look for studies with a control group that are double blind (where neither the participants nor the people collecting and analyzing the data know who is in which group), published in a peer-reviewed, well-respected journal. Look at the lead and second authors of the study, where the research was done, and who paid for it. (None of these things alone need make you throw out the results, but they are helpful context if the findings differ from other work in that area.) Look for meta-analyses that compare all the well-organized studies and separate the wheat from the chaff, so to speak.
In other words, be a critical reader of research: critical as in thinking clearly, not agreeing with the coolest-sounding ideas or the most persuasive voice.
I actually wrote this because I want to mention someone who is, in my mind, one of the great researchers of our time. And I want to mention him because I want to make a point about how yoga is taught, studied, and practiced, and more broadly what it means to be human. But that is for next time.