
The Science Void

Cultivators have been forced to make decisions based on community consensus, but legalization has opened the door to fact-based knowledge.

Illustrations by David Moore

Editor's Note: This article was first published in September 2019, when growers first started reporting "dudders" symptoms. It is now widely understood that "dudders" was caused by Hop Latent Viroid, or HpLVd. Read more from Cannabis Business Times about the viroid here.

There is no question that the internet has done wonders for humanity, the cannabis industry included. As the world’s personal computers connected to the web and the internet’s first websites were born, cannabis breeders, seed sellers and growers created forums to share their knowledge and collaborate.

To this day, online message boards remain a major source of information for both the hobby grower and the commercial cultivator. With little to no peer-reviewed academic research to back up claims, ideas are instead held to a democratic standard, where the theory with the most backers generally is elevated to “fact” status.

Relying on peers and long-time cultivation experts can sometimes prove valuable, especially given the lack of available cannabis research. In other instances, though, commonly held beliefs among cultivators are essentially myths: claims with no evidence behind them. For example, flushing is a common practice for some growers who believe that irrigating only with water in the final two weeks of flowering pulls nutrients from the plant’s buds, offering a cleaner smoke evidenced by white ash. Drs. Allison Justice and Markus Roggen dispelled that theory in a recent article in Cannabis Business Times.

Justice and Roggen found no harm in flushing a crop (and acknowledged that a benefit unrelated to pulling nutrients from buds may exist). But the same cannot be said about all cannabis cultivation-related information circulating online. The dearth of reliable data can be costly for cultivators and impact their reputations.

Due to the lack of scientific, peer-reviewed cannabis information, CBT spoke with experts to help cultivators separate fact from fiction and create substance instead of noise when publishing research notes.

Beware of ‘Bad Science’

“Bad science” abounds in the cannabis industry, from shoddy research papers to unbelievable claims made on trade show floors, according to industry experts. Roggen, founder of Complex Biotech Discovery Ventures, a private research laboratory focused on cannabis analytics and chemical process development, says he comes across it nearly every day.

He remembers an especially egregious claim by a company that makes CBD water. While attending a trade show, Roggen asked company representatives to explain how they were able to blend the CBD in the water, as CBD is not a water-soluble cannabinoid. According to Roggen, they said: “We pull the water molecules apart into their individual atoms, and then we reconstitute the water and infuse it with a new memory so that it can hold CBD.”

Roggen didn’t need much time to conclude their science was flawed. “That was easy to analyze because, from [understanding] basic principles of chemistry, you knew that was all bullshit,” Roggen tells CBT.

Not all unfounded claims are so easy to spot, though. Dr. Raymond Cloyd, a state extension entomologist with Kansas State University, is an experienced peer reviewer and has seen his fair share of misinformation in both academia and in the cannabis industry. “I do know today that science is being ignored. And there’s no doubt about it,” he tells CBT.

In the cannabis industry, at least, cultivators shouldn’t be raked over the coals for this, says Mojave Richmond, a consultant with BioAgronomics Group and a CBT columnist. The abundance of incomplete information and data is not a result of malicious intent, in most cases, but a consequence of the lack of resources from which this industry suffers. “Yes, the scientific method should be a thorough process, but unlike other crops who enjoy support from Big Ag, government and universities, cannabis companies who are much smaller with far less resources have had to conduct speculative science based upon what is often anecdotal evidence,” he says. “It’s like trying to adjust the microscope and fill the petri dish with one hand while sweeping a dirty floor with the other.”


Although the reflex to share solutions discovered in-house often comes from a desire to help a fellow grower, and those solutions can prove helpful, applying untested fixes can delay finding a real answer or, in worst-case scenarios, lead to entire crop losses.

Cloyd recently reviewed industry reports of a new symptomology that is perplexing cultivators. The group of symptoms, currently referred to as “Dudders,” “dudding” or simply “dud,” is a fairly recent addition to the cannabis industry’s lexicon. Cultivators and breeders are attributing a slew of symptoms to the condition, including stunted plants with little to no trichome production, dark green leaves, rubbery stems that snap off easily and an overall loss of vigor. Most of the available information about it is found in cannabis message boards and sporadically throughout social media (a few exceptions notwithstanding), but the discussions online about the symptomology, Cloyd says, hold “no scientific proof, or even close to it. … It’s just anecdotal. It’s information that is not backed up by any scientific, especially experimental, empirical research.”

Several researchers (including Cloyd and Roggen) note that sharing hypotheses about Dudders’ origins, or making assumptions about what is behind the alleged symptomology observed in a number of cannabis plants (often without data on how many plants are affected or which environmental variables could be at play, for example), may seem helpful. But sharing vague and unconfirmed findings can cause more harm than good.

When reviewing academic papers or general claims, Cloyd looks for certain signs to ensure that the information presented is scientifically sound. One major red flag is when companies or researchers don’t include the experiment’s design (i.e., how they tested what they tested). “When you write [an experiment plan], it’s designed to be replicated by somebody else and get similar results,” Cloyd explains, “but if you don’t have experimental design, you don’t know how to set it up [to replicate it].”

The number of replications an experiment goes through is the second most important aspect. Cloyd suggests replicating experiments five to 10 times before claiming a finding as fact. This is especially important when dealing with outdoor crops, as seasonal changes can affect findings and treatment efficacy, he adds. “If you were going to look at cannabis or hemp production outdoors, you can’t just do it in one year. You have to do it multiple years to take into account environmental variability.”

Finally, Cloyd scours research reports to see if the experimenters included both the P value and the N value. The P value measures the statistical significance of an experiment: a P value of less than 0.05 (meaning there is less than a 1-in-20 chance of seeing a result at least that extreme if the treatment actually had no effect) is considered statistically significant. The N value is simply the number of samples studied. “If you say this treatment resulted in 75 percent control, well how many units or individuals was that?” Cloyd asks. “Because if it’s two, it’s meaningless. But if it’s 50, then you have less variation.”
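As a rough illustration of what a reviewer like Cloyd looks for, the sketch below uses hypothetical yield numbers and a standard t-test (the data, group sizes and the use of scipy are assumptions for illustration, not anything from Cloyd’s work) to show how the N value and P value are reported together for a replicated treatment-versus-control comparison.

```python
# A minimal sketch with hypothetical data: reporting both the N value and the
# P value for a replicated treatment-versus-control comparison.
from scipy import stats

# Hypothetical dry-weight yields (grams per plant) from replicated trials.
control = [112, 108, 119, 115, 110, 117, 113, 109]
treatment = [124, 131, 118, 127, 129, 122, 126, 130]

t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"N (control)   = {len(control)}")
print(f"N (treatment) = {len(treatment)}")
print(f"P value       = {p_value:.4f}")
# A P value below 0.05 is reported as statistically significant; the same
# average difference measured on only two plants per group would mean little.
```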

He adds that research done “without a good scientific base [ensures that] the information becomes meaningless because you can’t replicate it and it varies from situation to situation.”

What Is ‘Good Science’?

“What makes good science is taking an empirical approach by formulating hypotheses,” Cloyd says. If a hypothesis posits that a virus causes dudders, “then my objectives are to determine if dudders is caused by a virus, under what conditions, what cannabis cultivars [are affected]. … I call it the funnel approach, where you start getting narrower and narrower” with the questioning.

When scientists develop a hypothesis, “it needs to be able to either be validated or falsified by experiments,” Roggen says. For example, last summer Roggen hypothesized that if you evaporate a cannabinoid-infused ethanol solution using a rotary evaporator, the terpenes would be lost in the evaporation process, the thought being that because the vapor pressures of terpenes and ethanol are similar, the evaporator would remove both. He developed an experiment plan that would confirm or falsify his hypothesis, and “I actually did the experiment and realized terpenes do not evaporate together with ethanol. I was wrong.”

The fact that Roggen’s hypothesis was proven wrong did not devalue the science that went into the discovery. Cloyd believes this is where a lot of cannabis professionals, and even professional scientists and researchers, stumble. “I think one of the most underappreciated journals, if it existed, would be the ‘Journal of Negative Responses’ because that’s just as good to know as getting a positive response.” In other words, good science is not predicated on the hypothesis being validated.

Good science also avoids issues with Type 1 and Type 2 errors. A Type 1 error, also known as a false positive, occurs “when we are observing a difference when in truth there is none (or more specifically, no statistically significant difference).” Type 2 errors, also called false negatives, occur “when we are failing to observe a difference when in truth there is one,” according to UC Berkeley.
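As a rough illustration (the simulation parameters below are assumptions for this sketch, not drawn from UC Berkeley’s material), repeatedly testing simulated plant groups shows both errors in action: groups drawn from the same population occasionally test “significant” anyway (Type 1), while a real difference sometimes goes undetected (Type 2).

```python
# A minimal sketch with simulated data (all parameters are assumptions):
# estimating how often Type 1 and Type 2 errors occur at a 0.05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, trials, n = 0.05, 5_000, 8  # significance threshold, repetitions, plants per group

# Type 1 (false positive): both groups come from the same population,
# yet chance alone sometimes pushes the P value below alpha.
type1 = sum(
    stats.ttest_ind(rng.normal(100, 10, n), rng.normal(100, 10, n)).pvalue < alpha
    for _ in range(trials)
)

# Type 2 (false negative): a real +5 g difference exists, but the test misses it.
type2 = sum(
    stats.ttest_ind(rng.normal(105, 10, n), rng.normal(100, 10, n)).pvalue >= alpha
    for _ in range(trials)
)

print(f"Type 1 (false positive) rate: {type1 / trials:.3f}")  # roughly alpha
print(f"Type 2 (false negative) rate: {type2 / trials:.3f}")  # shrinks as n grows
```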

Avoiding those errors comes back to establishing a proper experimental design that focuses on eliminating all of the possible options until only one possible answer remains and is confirmed, Cloyd says. “If you don’t do your methodology correctly, … your results are not giving you data to validate your inferences.”

Roggen adds that good science becomes great when it is done transparently. To avoid the common pitfall of changing a hypothesis in the middle of an experiment, experimenters should make their plans public before starting the research. “First, publish your research plan and conduct the research, and then you publish what you did. That way you can’t really step away from it,” Roggen explains. For example, if a researcher studying the effects of CBD on sleep notices a negative effect, the fact that the research plan is already public makes it less likely that the negative result gets buried or omitted.

Bring Value, Not Noise

Roggen maintains that valid science is less about finding the right answer than about asking the right questions and having proper experimental designs. That said, most cultivators won’t have the equipment or expertise in-house to conduct the experiments required to answer those questions.

What every cultivator can do, however, is collect as much data as possible about whatever question they are trying to answer, whether it be identifying a potential plant pathogen, finding an optimal cultivation environment or evaluating vendor claims. “When you’re scouting, make notes on everything,” Cloyd advises, adding that “you can always get rid of [irrelevant data], but you can’t obtain [missing data] afterwards.”
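As a loose illustration of Cloyd’s advice to note everything while scouting (the field names, file path and severity scale below are hypothetical, not something Cloyd prescribes), a simple structured log keeps observations in a form that can be analyzed later or handed to a diagnostic lab, rather than reconstructed from memory.

```python
# A minimal sketch (hypothetical field names and file path): a structured
# scouting log so today's observations can be analyzed, or shared with a
# diagnostic clinic, later.
import csv
import os
from datetime import date

FIELDS = ["date", "room", "plant_id", "cultivar", "symptom", "severity_1_to_5", "notes"]

def log_observation(path, **record):
    """Append one scouting observation, writing a header row if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(record)

log_observation(
    "scouting_log.csv",
    date=str(date.today()),
    room="Flower 2",
    plant_id="F2-117",
    cultivar="Example OG",
    symptom="stunted growth, brittle stems",
    severity_1_to_5=3,
    notes="no visible pests; sample sent to diagnostic clinic",
)
```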

When lab analysis is required in an experiment, Roggen and Cloyd suggest cannabis growers turn to public or private research labs. For example, if growers encounter symptoms that they don’t recognize in their crops, “the best way [to find the cause is to] send it to a diagnostic clinic to get it tested for a virus or disease,” Cloyd says. “Once [cultivators] have that information, then they could do their own in-house studies, setting up small experiments. That’s the starting point for [research]—taking some data. Although it’s subjective data, it’s better than nothing.”

Roggen concurs: “Yes, you can’t do a clinical study [that is] FDA-approved right now because that’s difficult as [cannabis remains federally] illegal, but that doesn’t mean the industry can’t do anything.” He points to universities in international markets like Canada, Germany, Israel and Spain as examples of publicly funded research centers working on cannabis science.

“And some companies are big enough themselves to handle research,” Roggen says. “Just be open about it. Publish the whole data set. Yes, there’s always the danger if you have [an] industry do their own research, you run into the same thing like tobacco and opioids. But in the absence of anything, even company-run research is still better than nothing. But you have to be willing to also publish bad results.”

This can seem like a lot of effort and due diligence for work that may or may not directly impact a cultivation company’s bottom line. That said, Cloyd points out, “if science were easy, everyone would do it.”

Brian MacIver is senior editor for Cannabis Business Times and Cannabis Dispensary magazines.
