This is a tough question!
First things first: whatever threshold you choose to determine statistical significance is arbitrary. Most people use a 5% p-value threshold, but that doesn't make it any more correct than any other. So, in some sense, you should think of statistical significance as a "spectrum" rather than a black-or-white matter.
Remember what a p-value is: the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis H0 to be true (i.e., that there is no trend).
If we get a "low" p-value, we reject H0 (there's statistically significant evidence that H0 could be false). If we get a "high" p-value, then the results are more likely to be a product of luck, rather than an actual trend. We don't say H0 is true, but rather, that further study should take place in order to reject it.
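To make this concrete, here is a minimal sketch (the coin-flip numbers are made up for illustration): under H0 := "the coin is fair," the p-value of observing 8 heads in 10 flips is the probability that chance alone does at least that well.

```python
from math import comb

def p_value_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that pure luck
    produces a result at least as extreme as the one observed."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Observed: 8 heads in 10 flips. Under H0 ("the coin is fair"),
# how often would chance alone do at least this well?
p = p_value_at_least(8, 10)
print(round(p, 4))  # 0.0547 -- not below 5%, so we don't reject H0
```

Note that 0.0547 sits just above the conventional 5% cutoff: a nice illustration of why significance is a spectrum, not a switch.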
WARNING: A p-value of 23% does not mean that there is a 23% chance of there not being any trend, but rather, that chance generates results like those 23% of the time, which sounds similar, but is a completely different thing. For example, suppose I claim something ridiculous, like "I can predict the results of dice rolls an hour before they take place," we run an experiment to check the null hypothesis H0 := "I cannot do such a thing," and we get a 0.5% p-value. You would still have good reason not to believe me, despite the statistical significance.
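A quick simulation of the dice example makes the point (the number of guessers and rolls here are assumptions for illustration): if many people with no predictive ability each guess a series of rolls, a handful of them will still reach p < 0.5% by luck alone.

```python
import random
from math import comb

def p_value_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

random.seed(1)
n_rolls, n_people = 60, 1000

# 1,000 people who genuinely cannot predict dice each guess 60 rolls.
# Count how many happen to score so well that their p-value dips below 0.5%.
lucky = 0
for _ in range(n_people):
    correct = sum(random.randint(1, 6) == random.randint(1, 6) for _ in range(n_rolls))
    if p_value_at_least(correct, n_rolls, 1 / 6) < 0.005:
        lucky += 1
print(lucky)
```

By construction we expect a few "significant" charlatans out of 1,000: significance alone doesn't make a ridiculous claim credible.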
So, with these ideas in mind, let's go back to your main question. Let's say we want to check whether increasing the dose of drug X has an effect on the likelihood that patients survive a certain disease. We perform an experiment, fit a logistic regression model (taking into account many other variables), and check for significance on the coefficient associated with the "dose" variable (calling that coefficient β, we'd test a null hypothesis H0: β=0, or maybe β≤0). In English, "the drug has no effect" or "the drug has either no effect or a negative one."
Now suppose the experiment yields a positive estimate of β, but the p-value for the test of β=0 stays at 0.79. Can we say there is a trend? Well, that would really diminish the meaning of "trend." If we accept that kind of thing, basically half of all experiments we run would show "trends," even when testing for the most ridiculous things.
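We can see why with a small simulation (the group sizes are made up): when the drug truly does nothing, about half of all experiments still show a positive observed effect.

```python
import random

random.seed(2)

def observed_effect(n=100, p=0.5):
    """Survivors in a high-dose group minus survivors in a low-dose group,
    where both groups share the same true survival rate p (the dose does nothing)."""
    high = sum(random.random() < p for _ in range(n))
    low = sum(random.random() < p for _ in range(n))
    return high - low

trials = 2000
positive = sum(observed_effect() > 0 for _ in range(trials))
print(positive / trials)  # roughly half of null experiments show a "positive trend"
```

Calling every positive estimate a "trend" would therefore flag about half of all pure-noise experiments.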
So, in conclusion, I think it is dishonest to claim that our drug makes any difference. What we should say, instead, is that our drug should not be put into production until further testing is done. Indeed, I'd say we should still be careful about the claims we make even when statistical significance is reached. Would you take that drug if chance had a 4% probability of generating those results? This is why research replication and peer review are critical.
I hope this too-wordy explanation helps you sort out your ideas. The summary is that you are absolutely right! We shouldn't fill our reports, whether for research, business, or anything else, with wild claims supported by little evidence. If you really think there is a trend, but you didn't reach statistical significance, then repeat the experiment with more data!