Biomarkers are measurable parameters of the human body that serve as indicators of underlying biological or pathological processes. Despite spectacular technological advances that have allowed the scientific community to measure an ever-expanding list of body parameters with greater sensitivity and specificity than ever before, these advances have not translated into greater numbers of clinically useful biomarkers, including those related to the measurement of immunological function itself or the measurement of immunological function as a means to detect or quantitate the effects of other diseases. Biomarkers guide patient management in a multitude of settings, including screening, diagnosis, prognosis, treatment choice and treatment monitoring. They also serve as primary sources of the efficacy and safety data required by the Food and Drug Administration (FDA) for the approval of new therapies and medical devices. Despite the widespread and ever-growing need for new biomarkers and the deep investments made in their development by funding agencies and industry, the failure rate for biomarker development is extraordinarily high. Of the tens of thousands of putative new biomarkers reported in the peer-reviewed literature, only a handful are qualified for drug development or approved for clinical use by the FDA, and only about 100 have proven clinically useful and reliable enough to be used in routine medical practice [1]. Biospecimens are the starting materials for the vast majority of biomarker measurements, including those relevant to the immunome. Given the high degree of variability in the way that human biospecimens are collected, handled, stabilized, stored and transported, it is worth asking whether the biospecimens used for biomedical research and product development might be a significant source of the irreproducibility that is presently rife within this field of research [2].
In turn, could pre-analytical variation in human biospecimens be a major contributor to biomarker development failures? Investigators do not ask themselves this question often enough, but the “garbage in, garbage out” paradigm is as true for biomedical research as it is for data science. Poor or unknown quality of the biospecimens used for biomarker development is a double-edged sword. On the one hand, if the analytical test is itself in development, as is usually the case, variable results from successive iterations of the analysis platform cannot reasonably be attributed to variation in the technology if the test materials (biospecimens) are themselves highly variable and beset with artifactual bias. This makes improving the analytical validity (analytical performance) of the platform more challenging. On the other hand, if the analysis platform is valid and reliable, the clinical validity of the measured biomarker (how the measurement relates to the clinical outcome of interest) becomes difficult or impossible to determine if pre-analytical variation creates artifacts that vary in type and amount from one sample to the next, overriding or obscuring the correlation of the biomarker measurement with the clinical outcome.