Human Innovation
This is real human research and engineering, supported by AI: ChatGPT for marketing and Claude for coding. AI improved our productivity, but it never touched our vision.
Uncovering the bias/variance decomposition unlocks the knowledge to manage data drift and maintain models that matter.

Revealing the bias/variance decomposition for an expected population has resulted in a futuristic modeling workflow - and we're just getting started.
It started with a simple question - why do more than 70% of machine learning models fail in production?
After discovering the bias/variance secret behind feature scaling while a professor at the University of Illinois Gies College of Business, Dr. Dave Guggenheim left academia to pursue this research with intensity, convinced that the bias/variance decomposition is to machine learning what the Higgs field is to quantum physics.
It took a total of five years to go from the first crude estimator to a complete workflow spanning data stability, model selection, and hyperparameter tuning. As an added bonus, Management in a Box gives every engineer the power to become a business consultant. Each of these software applications is far ahead of current methods and practices.
Prototypes for all four software-as-a-service applications are complete, but services are not yet available in the cloud.
Please email for a demonstration of the prototypes and to be put on the mailing list for news and information.
We are seeking new partners for funding or distribution.
A complete workflow for detecting instabilities, discovering population details, choosing the one best model at the correct entry point, and tuning it to the ragged edge of performance for classification and regression.
Securely upload your data in a variety of formats (CSV, Excel, or JSON) - when the run is complete, all data is deleted and never kept for any reason.
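The service's actual ingestion code is not public, but the supported upload formats can be sketched with a hypothetical `load_rows` helper using only the Python standard library (Excel parsing requires a third-party reader and is omitted here):

```python
import csv
import io
import json

def load_rows(payload: str, fmt: str) -> list:
    """Parse an uploaded text payload into a list of row dicts.

    Illustrative only: the format names mirror the upload options
    (CSV and JSON); the real service's loaders are not shown here.
    """
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    if fmt == "json":
        return json.loads(payload)
    raise ValueError(f"unsupported format: {fmt}")

rows = load_rows("x,y\n1,2\n3,4", "csv")
# rows == [{'x': '1', 'y': '2'}, {'x': '3', 'y': '4'}]
```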
Minimal, visual data preparation with air-tight controls (i.e., no data leakage) comes next, preserving the purity of the bias/variance decomposition.
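Leakage-free preparation means every preprocessing statistic is computed from the training split alone. A generic min-max scaling sketch (not the product's actual mechanism) shows the principle: the test point never influences the fitted scale.

```python
def fit_scaler(train):
    """Compute min/max from the training split only, then return a
    function that applies that fixed scaling to any data."""
    lo, hi = min(train), max(train)
    return lambda xs: [(x - lo) / (hi - lo) for x in xs]

train = [2.0, 4.0, 6.0]
test = [8.0]                  # unseen data; must not affect the fit
scale = fit_scaler(train)
scaled_train = scale(train)   # [0.0, 0.5, 1.0]
scaled_test = scale(test)     # [1.5] -- may fall outside [0, 1]
```

Allowing the test point into the fit would silently shrink the training values, which is exactly the leakage this stage guards against.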
The Shadow Population Estimator is automatically invoked before running STABILITYLAB™ or FAMS™.
STABILITYLAB™: Run ISGG, DFIS, and/or FWDD; because of the extensive calculations, all components are GPU-accelerated.
FAMS™: Full model selection or Turbo-coded only; the first model run at the median includes all algorithms, and the next two use only the top six from the median. Keep it good, but keep it fast as well.
HYPERTUNE™: Select from NGBoost, AdaBoost, CatBoost, GradientBoost, LightGBM, XGBoost, RandomForest or ExtraTrees
(bold indicates GPU acceleration)
MANAGEMENT IN A BOX™: Separate from the model selection and tuning workflow, this module merges data and decision-making into a seamless package.
Shadow Population Estimator: delivers the probability mass function with the three most important model entry points identified.
STABILITYLAB™: Generates a detailed report showing data and bias/variance instability across the population for stratified importance, shifting importance, and feature-level divergence.
FAMS™ (Future-Aware Model Selection): Creates an extensive collection of modeling information, and when combined with the probability mass function, the GenAI report provides complete justification for model selection with numerical backup.
HYPERTUNE™: Shows the recursive grid score, the Bayesian score, and the improvement (if any), and lets you download the model as a pickle file or extract the hyperparameter values.
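The product's search internals are not public, but the idea behind a recursive grid score can be illustrated on a toy one-dimensional objective: evaluate a coarse grid, then re-grid around the best point and repeat. All names here are hypothetical.

```python
def recursive_grid_search(score, lo, hi, points=5, rounds=3):
    """Evaluate `points` grid values of `score` on [lo, hi], then
    zoom the interval in around the best value and repeat.

    Toy sketch of recursive grid refinement, not the actual
    HYPERTUNE implementation.
    """
    best = None
    for _ in range(rounds):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = max(grid, key=score)
        lo, hi = best - step, best + step   # zoom in around the best
    return best

# Maximize a smooth toy objective with its peak at x = 3.
best_x = recursive_grid_search(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

Each round shrinks the search interval, so a handful of rounds gives fine resolution with far fewer evaluations than one dense grid.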
MANAGEMENT IN A BOX™: Extensive GenAI reports identify the most important contributors to the problem, and detailed mitigation plans are provided. Planning is expanded through a series of dialogues, similar to working with a management consultant.
With the correct entry point at the median of the expected bias/variance population, it is possible for HYPERTUNE™ alone to handle data drift by recomputing hyperparameter values. Fixing the model in production becomes a matter of applying the new values instead of a complete redesign.
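A deliberately tiny stand-in can show the shape of this idea: keep the model form fixed and recompute only its tuned value when the data shifts. Here the "model" is a single threshold classifier and the "hyperparameter" is its cut-point; the real workflow tunes gradient-boosting models, so every name below is illustrative.

```python
def tune_threshold(xs, labels):
    """Pick the cut-point that best separates the two classes."""
    def accuracy(t):
        return sum((x > t) == y for x, y in zip(xs, labels)) / len(xs)
    return max(sorted(set(xs)), key=accuracy)

# Original data: classes cleanly separated around 5.
xs_old = [1, 2, 3, 7, 8, 9]
ys     = [0, 0, 0, 1, 1, 1]
t_old = tune_threshold(xs_old, ys)        # 3

# After drift the feature shifts upward; the model *form* is kept
# and only the tuned value is recomputed on the drifted data.
xs_new = [x + 4 for x in xs_old]
t_new = tune_threshold(xs_new, ys)        # 7
```

Swapping in the recomputed value restores performance without touching the model's architecture, which is the claimed advantage of re-tuning over redesign.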
Nine-time entrepreneur, two-time college professor, and full-time technologist with a focus on thinking differently.

Yoshua Bengio
Leo Breiman
Ian Goodfellow
Geoffrey Hinton
Daphne Koller
Yann LeCun
Fei-Fei Li
Andrew Ng
Robert Tibshirani
To paraphrase Newton, our research went further because we stand on the shoulders of giants. None of these experts is a member of StabilityLabML, but their work guided the development that led to this futuristic workflow, and we are eternally grateful.

Founder and Chief Technology Officer
StabilityLabML
PhD in Information Systems with 20+ years in business analytics and machine learning research.

Inspiration Officer
StabilityLabML
A rescue who is part rottweiler, husky, terrier, speed dog, and all love. And during some of our walks in the woods, fundamental discoveries in machine learning were made.