The P-value lets each reader judge whether the evidence is sufficient. An α of 0.05 became a common rule of thumb largely because of its inclusion in critical value tables. Since there is no practical difference between P-values of 0.049 and 0.050, it makes no sense to treat α = 0.05 as a universal cutoff for significance. Instead, choose an α that is appropriate for each situation.
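As a quick illustration of why a hard 0.05 cutoff is arbitrary, here is a sketch of a one-sided z test (known σ; the sample numbers are hypothetical, chosen so the two P-values straddle 0.05 by a hair):

```python
from statistics import NormalDist

def z_test_p_value(xbar, mu0, sigma, n):
    """One-sided P-value for H0: mu = mu0 vs Ha: mu > mu0 (sigma known)."""
    z = (xbar - mu0) / (sigma / n ** 0.5)
    return 1 - NormalDist().cdf(z)

# Two hypothetical samples: one just under, one just over alpha = 0.05.
p1 = z_test_p_value(xbar=10.330, mu0=10, sigma=2, n=100)  # about 0.049
p2 = z_test_p_value(xbar=10.328, mu0=10, sigma=2, n=100)  # about 0.051
print(round(p1, 3), round(p2, 3))
```

The two samples carry essentially identical evidence against H0, yet a rigid α = 0.05 rule would call one "significant" and the other not.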
Statistical tests do NOT tell us how large or how important an effect is! Thus, in analyzing data you should also report:
A confidence interval for the parameter, which actually estimates the size of the effect.
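A minimal sketch of a z confidence interval for a mean, which reports an estimated effect size rather than a yes/no verdict (the sample numbers are hypothetical; σ is assumed known):

```python
from statistics import NormalDist

def z_confidence_interval(xbar, sigma, n, level=0.95):
    """Two-sided z interval for a population mean with known sigma."""
    z_star = NormalDist().inv_cdf((1 + level) / 2)  # e.g. 1.96 for 95%
    margin = z_star * sigma / n ** 0.5
    return xbar - margin, xbar + margin

lo, hi = z_confidence_interval(xbar=10.33, sigma=2, n=100)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

The interval shows the plausible range for the parameter, so a reader can judge whether the effect is large enough to matter in practice.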
Lack of significance does not imply that H0 is true! Significance tests are not always valid: faulty data collection or outliers in the data can invalidate a test.
Recall: Tests of significance and confidence intervals are based on the laws of probability. Random samples or randomized experiments ensure that these laws apply.
Beware of multiple analyses. Many tests run at once will probably produce some significant results by chance alone! Once you have a hypothesis, design a study to search specifically for that effect. If the results of this new study are statistically significant, then you have real evidence.
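The multiple-testing hazard can be demonstrated by simulation: run many tests on pure noise, where H0 is true every time, and count how often a test rejects anyway. This sketch uses a fixed seed and illustrative numbers:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
alpha = 0.05
n_tests = 200
false_positives = 0

for _ in range(n_tests):
    # Each sample comes from H0 itself: standard normal, true mean 0.
    sample = [random.gauss(0, 1) for _ in range(30)]
    z = mean(sample) / (stdev(sample) / len(sample) ** 0.5)
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided P-value
    if p < alpha:
        false_positives += 1

print(false_positives)  # roughly alpha * n_tests rejections expected
```

Even though no effect exists, roughly α × 200 = 10 of the tests come out "significant" purely by chance, which is why a result found by sifting through many analyses needs confirmation in a fresh, targeted study.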