Start Your Journey with the FAMS™ Software Suite

Building Smarter Machine Learning Workflows

Uncovering the bias/variance decomposition unlocks the insight needed to manage data drift and maintain models that matter.

Get Started

The Data/Model Process Innovation  

Revealing the bias/variance decomposition for an expected population has resulted in a futuristic modeling workflow - and we're just getting started.
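For readers new to the term, the classic squared-error decomposition splits a model's expected error into bias², variance, and irreducible noise. The sketch below is a generic, textbook-style empirical estimate in Python (assuming only NumPy); it is not the FAMS™ estimator, whose internals are not described here.

```python
# Generic illustration of the bias/variance decomposition for squared error.
# NOT the FAMS(TM) estimator -- just the textbook idea, estimated empirically.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

x_test = np.linspace(0, 1, 50)        # fixed evaluation grid
y_true = true_fn(x_test)              # noiseless targets

def fit_predict(degree, n_train=30):
    """Fit a polynomial to one noisy training sample, predict on the grid."""
    x = rng.uniform(0, 1, n_train)
    y = true_fn(x) + rng.normal(0, 0.3, n_train)
    return np.polyval(np.polyfit(x, y, degree), x_test)

def bias_variance(degree, n_rounds=200):
    """Refit many times: bias^2 is the systematic error of the mean
    prediction; variance is the spread of predictions across samples."""
    preds = np.stack([fit_predict(degree) for _ in range(n_rounds)])
    bias_sq = np.mean((preds.mean(axis=0) - y_true) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

for degree in (1, 4, 9):
    b, v = bias_variance(degree)
    print(f"degree {degree}: bias^2={b:.3f}  variance={v:.3f}")
```

Low-degree fits show high bias and low variance; high-degree fits reverse the trade-off. That tension is what the decomposition makes measurable.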

Human Innovation

This is real human research and engineering, supported by AI: marketing copy drafted with ChatGPT and code written with Claude. AI improved our productivity, but it did not touch our vision.

Transparency & Trust

Every research article published by Dr. Guggenheim is freely available to anyone, anywhere in the world.

All of them are written for working data scientists and machine learning engineers.

Get Started

Impact with Integrity

We aim to make a dramatic difference in the practice of machine learning. Whether it’s boosting efficiency, reducing variance, or improving data-driven advice, our work is guided by measurable outcomes.

Get Started

From Curiosity to Discovery - The Path to Measuring Divergence 

It started with a simple question - why do more than 70% of machine learning models fail in production?

Get Started

The Idea Sparked

After discovering the bias/variance secret behind feature scaling while a professor at the University of Illinois Gies College of Business, Dr. Dave Guggenheim left academia to pursue this research with intensity, because he knew that the bias/variance decomposition is to machine learning what the Higgs field is to quantum physics.

From Concept to Prototype

It took a total of five years to go from the first crude estimator to a complete workflow covering data stability, model selection, and hyperparameter tuning. As an added bonus, Management in a Box gives every engineer the power to become a business consultant. Each of these software applications advances far beyond current methods and practices.

Prototypes for all four software-as-a-service applications are complete, but services are not yet available in the cloud.

Official Launch

Please email for a demonstration of the prototypes and to be put on the mailing list for news and information.

Partnering for Impact

We are seeking new partners for funding or distribution.

How It Works

A complete workflow for detecting instabilities, discovering population details, choosing the one best model at the correct entry point, and tuning it to the ragged edge of performance for classification and regression.

01

Connect Your Data

Securely upload your data in a variety of formats (CSV, Excel, or JSON) - when the run is complete, all data is deleted and never kept for any reason.

Minimal, visual data preparation with air-tight control (i.e., no data leakage) comes next, preserving the purity of the bias/variance decomposition.
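As an illustration of what leakage-free preparation means in practice, here is a minimal NumPy sketch (generic, not StabilityLabML's pipeline): scaling statistics are computed on the training split only and reused on the hold-out split, so no test-set information leaks into training.

```python
# Generic leakage-free scaling sketch -- not StabilityLabML's pipeline.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(10.0, 3.0, size=(100, 2))   # toy feature matrix
train, test = X[:80], X[80:]

# Leakage-free: fit the scaling statistics on the training rows only...
mu, sigma = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mu) / sigma
# ...then reuse those same statistics on the hold-out rows.
test_scaled = (test - mu) / sigma

# The leaky version would compute mu/sigma on all of X, letting the
# hold-out rows influence the transform applied to the training rows.
```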

02

Choose Your Steps

The Shadow Population Estimator is automatically invoked prior to running STABILITYLAB™ or FAMS™.

STABILITYLAB™: Run ISGG, DFIS, and/or FWDD; because of the extensive calculations, all components are GPU-accelerated.

FAMS™: Full model selection or Turbo-coded only; the first model run at the median includes all algorithms, and the next two use only the top 6 from the median. Keep it good, but keep it fast as well.

HYPERTUNE™: Select from NGBoost, AdaBoost, CatBoost, GradientBoost, LightGBM, XGBoost, RandomForest or ExtraTrees
(bolded indicates GPU acceleration)

MANAGEMENT IN A BOX™: Separate from the model selection and tuning workflow, this module merges data and decision-making into a seamless package.

03

Automate & Analyze

Shadow Population Estimator: delivers the probability mass function with the three most important model entry points identified.

STABILITYLAB™: Generates a detailed report showing data and bias/variance instability across the population for stratified importance, shifting importance, and feature-level divergence.

FAMS™ (Future-Aware Model Selection): Creates an extensive collection of modeling information, and when combined with the probability mass function, the GenAI report provides complete justification for model selection with numerical backup.

HYPERTUNE™: Shows the recursive grid score, the Bayesian score, and any improvement, and lets you download the tuned model as a pickle file or extract its hyperparameter values.
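By way of a generic stand-in (scikit-learn's plain grid search, not HYPERTUNE™'s recursive grid or Bayesian stages), pickle export and hyperparameter extraction look like this:

```python
# Generic sketch: grid search + pickle export with scikit-learn.
# This is NOT HYPERTUNE(TM)'s search procedure -- just the surrounding idea.
import pickle
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,
)
grid.fit(X, y)

print("grid score:", round(grid.best_score_, 3))
print("best params:", grid.best_params_)   # extract hyperparameter values

# Download the tuned model as a pickle file
with open("tuned_model.pkl", "wb") as f:
    pickle.dump(grid.best_estimator_, f)
```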

MANAGEMENT IN A BOX™: Extensive GenAI reports identify the most important contributors to the problem, and detailed mitigation plans are provided. Planning is expanded through a series of dialogues, similar to working with a management consultant.

04

Optimize Over Time

With the correct entry point at the median of the expected bias/variance population, it is possible for HYPERTUNE™ alone to handle data drift by recomputing hyperparameter values. Fixing the model in production becomes a matter of updating those values instead of a complete redesign.
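One common way to decide when such a re-tune is due is a drift statistic such as the population stability index (PSI). The sketch below is a standard PSI formulation and an assumption on our part, not necessarily the divergence signal FAMS™ or STABILITYLAB™ computes.

```python
# Population stability index (PSI), a common drift metric. Shown for
# illustration -- not necessarily the signal the StabilityLabML suite uses.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a production sample."""
    # Inner bin edges at the baseline's quantiles -> equal-mass bins.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    def frac(sample):
        idx = np.searchsorted(edges, sample)       # bin index 0..bins-1
        counts = np.bincount(idx, minlength=bins)
        return np.clip(counts / len(sample), 1e-6, None)
    e, a = frac(expected), frac(actual)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 5000)   # training-time feature
stable = rng.normal(0.0, 1.0, 5000)     # production, no drift
drifted = rng.normal(0.5, 1.2, 5000)    # production, shifted and widened

print(psi(baseline, stable))    # near zero: no action needed
print(psi(baseline, drifted))   # much larger: time to recompute hyperparameters
```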

The Mind Behind the Machine  

Nine-time entrepreneur, two-time college professor, and full-time technologist with a focus on thinking differently.

Pioneers of Data Science and AI

Yoshua Bengio
Leo Breiman

Ian Goodfellow
Geoffrey Hinton
Daphne Koller
Yann LeCun
Fei-Fei Li

Andrew Ng
Robert Tibshirani

To paraphrase Newton, our research went further because we stand on the shoulders of giants. None of these experts is a member of StabilityLabML, but their work guided the development that led to this futuristic workflow, and we are eternally grateful.

Dave Guggenheim

Founder and Chief Technology Officer
StabilityLabML

PhD in Information Systems with 20+ years in business analytics and machine learning research.

Loki

Inspiration Officer
StabilityLabML

A rescue consisting of rottweiler, husky, terrier, speed dog, and love. And during some of our walks in the woods, fundamental discoveries in machine learning were made.