Sublime
An inspiration engine for ideas
However, instead of the traditional hypothesis-based approach to statistical analysis, in which the analyst or decision maker comes up with a hypothesis and then tests it for fit with the data, big data analysis is more likely to involve machine learning.
Thomas H. Davenport • Big Data at Work: Dispelling the Myths, Uncovering the Opportunities

nonparametric methods are better suited to small samples. There’s no point using nonparametric statistics if you’ve got a lot of data (for example n > 100).
Roman Zykov • Roman's Data Science: How to monetize your data
Parametric statistics is a branch of statistics that assumes sample data come from populations that are adequately modeled by probability distributions with a set of parameters.
Jim Frost • Hypothesis Testing: An Intuitive Guide for Making Data Driven Decisions
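The "set of parameters" idea in Frost's definition can be sketched with the most familiar parametric model, the normal distribution, which is fully described by a location and a scale parameter estimated from the sample. The sample values below are made up purely for illustration.

```python
# A parametric model such as the normal distribution is summarized by a
# small set of parameters; here we estimate them from (hypothetical) data.
from statistics import mean, stdev

sample = [72, 75, 78, 80, 81, 84, 88]  # illustrative values only

mu_hat = mean(sample)      # estimated location parameter (mean)
sigma_hat = stdev(sample)  # estimated scale parameter (sample standard deviation)
```

Once these two numbers are estimated, the fitted normal distribution stands in for the full data set in any downstream inference — that substitution is what makes the approach "parametric".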
If we add and subtract the margin of error to the sample mean of 80, we have a 95% confidence interval that ranges from 67.6 to 92.4, which, as expected, contains the population mean of 78 (see Chapter 3 for more detail on generating confidence intervals).
Jeff Sauro • Quantifying the User Experience: Practical Statistics for User Research
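The arithmetic in Sauro's example is simple enough to verify directly: the interval is the sample mean plus and minus the margin of error. The mean (80) and margin of error (12.4) are taken from the quote; how the margin itself is derived (a critical value times the standard error) depends on details the excerpt does not give.

```python
# Confidence-interval arithmetic from the quote: mean 80, margin of error
# 12.4, giving the interval (67.6, 92.4) that contains the population mean 78.
sample_mean = 80.0
margin_of_error = 12.4  # given in the text

lower = sample_mean - margin_of_error  # 67.6
upper = sample_mean + margin_of_error  # 92.4
```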
such as the Kolmogorov-Smirnov test, the meaning of which, I have to admit, I had forgotten. (It’s a way to determine whether a model correctly fits data.)
Seth Stephens-Davidowitz • Everybody Lies: The New York Times Bestseller
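The Kolmogorov-Smirnov statistic mentioned in the quote has a compact definition: the largest vertical gap between the empirical CDF of the data and the CDF of the proposed model. A minimal one-sample sketch, here checking data against a Uniform(0, 1) model (the sample values are invented for illustration):

```python
# One-sample Kolmogorov-Smirnov statistic: D = max |ECDF(x) - F(x)|.
# A large D suggests the model distribution F does not fit the data.
def ks_statistic(data, model_cdf):
    """Return the KS statistic D over the sorted sample."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = model_cdf(x)
        # The ECDF jumps at each data point, so compare F(x)
        # against both the lower and upper edges of the step.
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return d

uniform_cdf = lambda x: min(max(x, 0.0), 1.0)  # CDF of Uniform(0, 1)
sample = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]
d = ks_statistic(sample, uniform_cdf)  # D = 1/6 for this sample
```

In practice D is compared against a critical value (or converted to a p-value) to decide whether the model "correctly fits" the data, which is the use the quote describes.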

The null hypothesis is rejected if your data set is unlikely to have been produced by chance. The significance of the results is described by the confidence level defined for the test (set by the acceptable error rate, the "alpha" level).
Maura Ginty • Landing Page Optimization: The Definitive Guide to Testing and Tuning for Conversions
The fact is that, despite its mathematical base, statistics is as much an art as it is a science. A great many manipulations and even distortions are possible within the bounds of propriety.