The Problem With Observational Science...
inlet13
Posts: 1,979
...is the modeling and, because of that modeling, there will always be bias. This is true whether it's observational science in medicine, the social sciences, climate change, etc.
http://online.wsj.com/article/SB10001424052702303916904577377841427001840.html
Analytical Trend Troubles Scientists
By GAUTAM NAIK
In 2010, two research teams separately analyzed data from the same U.K. patient database to see if widely prescribed osteoporosis drugs increased the risk of esophageal cancer. They came to surprisingly different conclusions.
One study, published in the Journal of the American Medical Association, found no increase in patients' cancer risk. The second study, which ran three weeks later in the British Medical Journal, found the risk for developing cancer to be low, but doubled. Which conclusion was correct?
It is hard to tell, and the answer may be inconclusive. The main reason: Each analysis applied a different methodology and neither was based on original, proprietary data. Instead, both were so-called observational studies, in which scientists often use fast computers, statistical software and large medical data sets to analyze information collected previously by others. From there, they look for correlations, such as whether a drug may trigger a worrisome side effect.
The Food and Drug Administration says it is reviewing the conflicting U.K. data on the class of osteoporosis treatments known as oral bisphosphonates. The outcome matters given that millions take the drugs world-wide. If a substantial cancer risk is proven, it will force doctors to reconsider how they prescribe such drugs.
Merck & Co. says its Fosamax—one of the most popular drugs in the class—has been prescribed 190 million times since first being approved in 1995. Michael Rosenblatt, chief medical officer at the company, says that clinical trial data and more recent reports based on patient use "do not suggest an association between [the drug] and esophageal cancer."
While the gold standard of medical research is the randomized controlled experimental study, scientists have recently rushed to pursue observational studies, which are much easier, cheaper and quicker to do. Costs for a typical controlled trial can stretch high into the millions; observational studies can be performed for tens of thousands of dollars.
In an observational study there is no human intervention. Researchers simply observe what is happening during the course of events, or they analyze previously gathered data and draw conclusions. In an experimental study, such as a drug trial, investigators prompt some sort of change—by giving a drug to half the participants, say—and then make inferences.
But observational studies, researchers say, are especially prone to methodological and statistical biases that can render the results unreliable. Their findings are much less replicable than those drawn from controlled research. Worse, few of the flawed findings are spotted—or corrected—in the published literature.
"You can troll the data, slicing and dicing it any way you want," says S. Stanley Young of the U.S. National Institute of Statistical Sciences. Consequently, "a great deal of irresponsible reporting of results is going on."
Despite such concerns among researchers, observational studies have never been more popular.
Nearly 80,000 observational studies were published in the period 1990-2000 across all scientific fields, according to an analysis performed for The Wall Street Journal by Thomson Reuters. In the following period, 2001-2011, the number of studies more than tripled to 263,557, based on a search of Thomson Reuters Web of Science, an index of 11,600 peer-reviewed journals world-wide. The analysis likely doesn't capture every observational study in the literature, but it does indicate a pattern of growth over time.
A vast array of claims made in medicine, public health and nutrition are based on observational studies, as are those about the environment, climate change and psychology.
The numbers are expected to increase as more databases become available and generate more studies. One massive undertaking, for example, is the National Children's Study. Conducted by the National Institutes of Health, it will collect data on thousands of American children, all the way from birth to age 21, and assess how genetic and environmental factors may influence health outcomes.
A hot area of medical research that highlights some of the problems with observational studies is the search for biomarkers. Biomarkers are naturally occurring molecules or genes associated with a disease or health condition. In the past two decades, more than 200,000 papers have been published on 10 cardiac biomarkers alone. The presence or absence of the biomarkers in a patient's blood, some theorized, could indicate a higher or lower risk for heart disease—the biggest killer in the Western world.
Yet these biomarkers "are either completely worthless or there are only very small effects" in predicting heart disease, says John Ioannidis of Stanford University, who extensively analyzed two decades' worth of biomarker research and published his findings in Circulation Research journal in March. Many of the studies, he found, were undermined by statistical biases, and many of the biomarkers showed very little predictive ability of heart disease.
His conclusion is widely upheld by other scientists: Just because two events are statistically associated in a study, it doesn't mean that one necessarily sets off the other. What is merely suggestive can be mistaken as causal.
That partly explains why observational studies in general can be replicated only 20% of the time, versus 80% for large, well-designed randomized controlled trials, says Dr. Ioannidis. Dr. Young, meanwhile, pegs the replication rate for observational data at an even lower 5% to 10%.
Whatever the figure, it suggests that a lot of unreliable findings are getting published. Those papers can trigger pointless follow-on research and affect real-world practice.
The problems aren't entirely new. In the late 1980s and early 1990s, a raft of observational studies consistently suggested that hormone-replacement therapy, or HRT, could protect postmenopausal women against heart disease. Tens of thousands of women were given the drugs on that basis.
It was a bad call. Many of the studies were eventually undermined because women who used the drugs were healthier than those who didn't, and thus had lower rates of heart disease anyway. Later controlled trials suggested that not only did HRT fail to protect against heart disease, but it might have increased the risk.
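The HRT episode is the classic confounding pattern: baseline health drove both who took the drug and who avoided heart disease, so the drug looked protective while doing nothing. A small simulation with made-up numbers (a sketch, not the studies' data) reproduces the illusion:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounder: general health, which drives BOTH drug use and outcome.
healthy = rng.random(n) < 0.5

# Healthier women are more likely to be on HRT...
takes_hrt = rng.random(n) < np.where(healthy, 0.6, 0.2)

# ...and independently less likely to develop heart disease.
# The drug itself has NO effect in this simulation.
heart_disease = rng.random(n) < np.where(healthy, 0.05, 0.15)

rate_hrt = heart_disease[takes_hrt].mean()
rate_no_hrt = heart_disease[~takes_hrt].mean()
print(f"heart disease: {rate_hrt:.3f} with HRT vs {rate_no_hrt:.3f} without")
# HRT users show a markedly lower rate even though the drug does nothing:
# the naive comparison is confounded by baseline health.
```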
Observational studies do have many valuable uses. They can offer early clues about what might be triggering a disease or health outcome. For example, it was observational data that flagged the increased risk of heart attacks posed by the arthritis drug Vioxx. And it was observational data that helped researchers establish the link between smoking and lung cancer.
Jan Vandenbroucke, a professor of clinical epidemiology at Leiden University in the Netherlands, plays down the drawbacks of observational studies, saying they tend to be overblown. He notes that even controlled trials can yield spurious or conflicting results.
"Science is about exploring the data…it has a duty to find new explanations," he says. "Randomized controlled trials aren't intended to find any explanations."
In the case of most observational studies, investigators plumb existing databases, looking for associations between different variables—thus generating an observation or a "discovery."
That technique can yield confusing results. Between 1995 and 2008, the FDA received reports of 23 people, most of them women, who were diagnosed with esophageal cancer after taking an oral bisphosphonate. Similar reports came in from Europe and Japan.
The use of bisphosphonates has soared in recent years. In the U.K., about 10% of women over the age of 70 take the drugs, so even a small increase in cancer risk would indicate many new cancer cases.
At Queen's University in Belfast, cancer epidemiologist Liam Murray and his colleagues decided to assess the tumor risk of bisphosphonates. They embarked on an observational study using a computerized database containing anonymized patient records for about six million people in the U.K.—one of the largest such databases anywhere.
At roughly the same time, a separate group, led by Jane Green of the University of Oxford, began a similar examination of the same U.K. database. The teams were unaware of each other's projects.
The Murray team found that esophageal or gastric cancer risk was 7% higher in those who took the drug versus those who didn't, leading to the conclusion that use of the drugs "was not significantly associated" with either cancer.
The Green paper in BMJ found the esophageal cancer risk was 30% higher for those on the drugs, and that the risk of esophageal cancer increased when the drugs were prescribed 10 or more times, or for longer than five years.
In other words: in the general U.S. and European population, one out of every 1,000 people aged 60 to 69 will develop the cancer. But for those who take Fosamax and other related drugs, the incidence rises to two in every 1,000.
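The arithmetic that links those relative figures to absolute cases is worth spelling out. A quick sketch using the article's round numbers (a baseline of one case per 1,000 people aged 60 to 69):

```python
# Convert the article's relative-risk figures into absolute cases per 1,000.
baseline_per_1000 = 1.0  # esophageal cancer, ages 60-69 (article's figure)

for label, relative_risk in [
    ("Murray (JAMA), +7%", 1.07),
    ("Green (BMJ), +30%", 1.30),
    ("Green, long-term use (~doubled)", 2.00),
]:
    cases = baseline_per_1000 * relative_risk
    excess = cases - baseline_per_1000
    print(f"{label}: {cases:.2f} per 1,000 ({excess:.2f} extra)")
# Even a "doubled" risk on a rare disease is one extra case per 1,000,
# which is why very large databases are needed to detect it at all.
```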
There could be several reasons why the studies arrived at different conclusions, including varying methodologies. The Murray study first identified users of the drugs, matched them to random people of the same sex and age in the population, and then tracked them until some developed cancer.
The Green team identified the cancer cases first and then assessed which drugs they had been given in the past.
"We were looking forward, they were looking backward," says Christopher Cardwell, a medical statistician and co-author of the Murray paper in JAMA.
The conflicting findings prompted Daniel Solomon, a rheumatologist at Brigham and Women's Hospital, to co-author a long opinion piece in the journal Nature Reviews in June. "Each of these methods introduces potential for different types of biases," says Dr. Solomon. "But what we can say is that both studies rule out a large increase in risk. We have learned at least that from the papers."
Dr. Young of the National Institute of Statistical Sciences takes a more skeptical view. He notes that because the Green study reports on three different variables at once, it introduces errors due to the classic problem of "multiple testing."
Dr. Green acknowledges that her team didn't adjust for multiple testing. She also notes that because information about the patients isn't consistent, "this database may not be the ideal place to look."
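For readers unfamiliar with the term: every extra hypothesis a study tests gives chance another opportunity to produce a false positive, so the per-test significance threshold has to be tightened. A minimal sketch of the standard Bonferroni adjustment, with illustrative p-values (not the Green study's):

```python
# Bonferroni adjustment: with m tests, multiply each p-value by m
# (capped at 1) to hold the family-wise false-positive rate at alpha.
alpha = 0.05
p_values = [0.040, 0.012, 0.300]  # illustrative only
m = len(p_values)

for p in p_values:
    adjusted = min(p * m, 1.0)
    verdict = "significant" if adjusted < alpha else "not significant"
    print(f"raw p={p:.3f} -> adjusted p={adjusted:.3f}: {verdict}")
# A result at p=0.04 survives as a single test but not as one of three.
```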
So is the conclusion in the Murray paper the correct one?
Not necessarily. The authors of that study acknowledge that their work has less statistical power than the Green paper, and that "poorly measured or unmeasured causes of bias may have masked an association" between the drugs and cancer.
There is another question. Each study followed patients for only five years or less. What if esophageal cancer develops over a longer period, say, 10 years? In that case, the design of both studies would be invalid.
"It's not that one paper is right and the other is wrong," says Dr. Young. "There is enough wrong with both papers that we can't be sure."
Comments
the basic science, which i have implored you to learn, is not based on models or algorithms ... it comes down to the greenhouse effect ... if CO2 and other gases do in fact warm the planet and our actions are increasing the amounts of these gases in the atmosphere - then there is little doubt that we are artificially warming the planet ...
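For reference, the quantitative core of that claim fits in one line: the widely used simplified expression for CO2 radiative forcing (Myhre et al., 1998) is ΔF = 5.35 ln(C/C0) W/m², and a climate-sensitivity parameter converts the forcing into an equilibrium temperature change. A back-of-envelope Python sketch (the 0.8 K per W/m² sensitivity used here is a commonly cited central estimate, not a settled constant):

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Equilibrium warming = forcing * sensitivity. ~0.8 K per W/m^2 is a
# commonly cited central estimate, and is itself model-dependent.
SENSITIVITY_K_PER_WM2 = 0.8

for c in (280, 400, 560):  # pre-industrial, ~current, doubled
    f = co2_forcing(c)
    print(f"CO2 {c} ppm: forcing {f:.2f} W/m^2, "
          f"eq. warming ~{f * SENSITIVITY_K_PER_WM2:.1f} K")
```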
You don't get it. Nor are you trying to. How do you prove any of what you said?
how do you prove anything in this world? ... do you believe smoking causes cancer? ... do you believe that eating mcdonalds everyday leads to heart disease? ... do you believe submerging yourself in frigid water will lead to hypothermia?
Read the article.
* "belief" (or bias) is part of the problem.
"With our thoughts we make the world"
i did read it ... are you now going to discount everything that has ever come from observational science? ...
http://www.realclimate.org/index.php?p=11479
The legend of the Titanic
Filed under:
Climate Science
Communicating Climate
— rasmus @ 3 May 2012
It’s 100 years since the Titanic sank in the North Atlantic, and it’s still remembered today. It was one of those landmark events that make a deep impression on people. It also fits a pattern of how we respond to different conditions, according to a recent book about the impact of environmental science on society (Gudmund Hernes, Hot Topic – Cold Comfort): major events are the stimulus, and the change of mind is the response.
Hernes suggests that one of those turning points that made us realize our true position in the universe came when we saw our own planet from space for the first time.
[Image: NASA "Earthrise" photograph]
He observes that
[t]he change in mindset has not so much been the result of meticulous information dissemination, scientific discourse and everyday reasoning as driven by occurrences that in a striking way has disclosed what was not previously realized or only obscurely seen.
Does he make a valid point? If the scientific information looks anything like the situation in a funny animation made by Alister Doyle (Dummiez: climate change and electric cars), then it is understandable.
Moreover, he is not the only person arguing that our minds are steered by big events – their importance was even acknowledged in the novel ‘State of Fear’.
A recent paper by Brulle et al (2012) also suggests that the provision of information has less impact than what opinion leaders (top politicians) say.
However, if the notion that information makes little impact is correct, one may wonder what the point would be in having a debate about climate change, and why certain organisations would put so much effort into denial, as described in books such as The Heat Is On, Climate Cover-Up, The Republican War on Science, Merchants of Doubt, and The Hockey Stick and the Climate Wars. Why, then, would there be such things as ‘the Heartland Institute’, ‘NIPCC’, climateaudit, WUWT, climatedepot, and FoS, if they had no effect? And indeed, the IPCC reports and the reports from the National Academy of Sciences? One could even ask whether the effort that we have put into RealClimate has been in vain.
Then again, could the analysis presented in Brulle et al. be misguided because the covariates used in their study did not provide a sufficiently good representation of important factors? Or could the results be contaminated by disinformation campaigns?
Their results and Hernes’ assertion may furthermore suggest that there are different rules for different groups of people: what works for scientists doesn’t work for lay people. It is clear from the IPCC and international scientific academies that climate scientists in general are persuaded by the accumulating evidence (Oreskes, 2004).
Hernes does, however, acknowledge that background knowledge is present and may play a role in interpreting events, which means that most of us no longer blame the gods for calamities (before the Enlightenment, there were witch hunts and sacrifices to the gods). This knowledge now provides a rational background, which sometimes seems to be taken for granted.
Maybe it should be no surprise that the situation is as described by Hernes and Brulle et al., because historically science communication hasn’t really been appreciated by the science community (according to ‘Don’t be such a scientist‘) and has not been enthusiastically embraced by the media. There is a barrier to information flow, and Somerville and Hassol (2011) observe that a rational voice of scientists is sorely needed.
The rationale of Hernes’ argument, however, is that swaying people involves not only rational and intellectual ideas but also an emotional dimension. A mindset influences a person’s identity and character, and is bundled together with their social network. Hence, people who change their views on the world may also distance themselves from some friends and connect with new people. A new standpoint will involve a change in their social connections in addition to a change in rational views. Events such as the Titanic, Earthrise, 9/11, and Hurricane Katrina influence many people through both rational thought and emotion, where people’s frame of mind shifts together with their friends’.
What do I think? Public opinion is changed not by big events as such, but by the public interpretation of those events. Whether a major event like Hurricane Katrina or the Moscow heat wave changes attitudes towards climate change is determined by people’s interpretation of that event, and whether they draw a connection to climate change – though not necessarily directly. I see this as a major reason why organisations such as Heartland are fighting their PR battle by claiming that such events are all natural and have nothing to do with emissions.
The similarity with the Titanic legend is the widespread misconception that the ship could not sink (hence its fame); likewise, organisations like Heartland make dismissive claims about any connection between big events and climate change. However, new and emerging science suggests that there may indeed be connections between global warming and heat waves, and between trends in mean precipitation and more extreme rainfall.
Well, one thing is factual: there's a difference between observational studies and randomly controlled experimental studies. This article is getting at that fact.
Not at all. Observational science has its uses.
Through an observational study, of course.
But seriously, the article makes some valid points in my estimation. All information and conclusions should be vetted and challenged.
"With our thoughts we make the world"