InterVenn’s approach to using AI is targeted at signal processing, specifically the deconvolution of signal from noise in large volumes of mass spectrometry data. We apply deep learning on the backend of our data pipeline to process LC-MS output efficiently, with applications ranging from spectral identification and retention time prediction during biomarker discovery to peak selection and quantification in our targeted panels. These applications are possible because of the large amount of data generated by mass spectrometry experiments, and our use of open source formats makes the trained neural networks (each specific to a particular problem) highly reproducible. Once the data are wrangled into per-glycoform, per-patient expression values, however, we are working with much smaller datasets (hundreds of patients by roughly 1,000 biomarkers). This regime is not sufficiently powered to train deep learning models that will validate in independent data; thus, all of our classifiers to date employ traditional statistical models with cross-validation rather than deep learning. In other words, we do not feed our biomarkers into a “black box” that outputs a diagnosis on the other end.
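The small-n, wide-p regime described above (hundreds of patients by ~1,000 biomarkers) can be illustrated with a minimal sketch. The text does not specify which traditional statistical model is used, so this example assumes a regularized logistic regression with k-fold cross-validation on entirely synthetic data; the variable names and data are illustrative only, not InterVenn’s actual features or classifier:

```python
# Hypothetical sketch: cross-validated, L2-penalized logistic regression
# on small-n, wide-p data (hundreds of patients x ~1,000 biomarkers).
# All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_biomarkers = 300, 1000  # hundreds x ~1,000, as in the text

# Synthetic per-glycoform, per-patient expression matrix.
X = rng.normal(size=(n_patients, n_biomarkers))
# A handful of "informative" biomarkers drive a noisy synthetic label.
signal = X[:, :10].sum(axis=1)
y = (signal + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)

# Regularization (C=0.1) keeps the wide design matrix tractable;
# cross-validation estimates out-of-sample performance instead of
# trusting the in-sample fit.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold CV ROC AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The point of the sketch is the shape of the workflow, not the numbers: an interpretable linear model whose coefficients map directly back to individual biomarkers, validated by resampling rather than by the sheer data volume a deep network would require.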