Foundation models trained on human actions.

Unbox AI builds foundation models that learn from what people do, not what they say. Our Large Behavioral Models (LBMs) are trained on chronological sequences of human actions—purchases, transactions, interactions—across billions of users. This research blog documents our work advancing this new paradigm.
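To make the paradigm concrete, here is a minimal sketch of how a behavioral sequence might be framed as a modeling problem, in direct analogy with next-token prediction in language models. The event schema, vocabulary, and `next_action_targets` helper are illustrative assumptions for this post, not Unbox AI's actual data format or training pipeline.

```python
from dataclasses import dataclass

# Hypothetical event schema: a user's history is a chronological
# sequence of discrete actions, analogous to tokens in a sentence.
@dataclass
class Action:
    timestamp: int   # Unix seconds; chronological order is what matters
    event_type: str  # e.g. "purchase", "refund", "page_view"

# Illustrative action vocabulary, mirroring a tokenizer's vocab.
VOCAB = {"purchase": 0, "refund": 1, "page_view": 2, "add_to_cart": 3}

def encode(history: list[Action]) -> list[int]:
    """Map a chronological action history to integer token ids."""
    ordered = sorted(history, key=lambda a: a.timestamp)
    return [VOCAB[a.event_type] for a in ordered]

def next_action_targets(token_ids: list[int]) -> list[tuple[list[int], int]]:
    """Frame the sequence as next-action prediction: each prefix of a
    user's history is an input and the action that follows it is the
    label, exactly as in autoregressive language modeling."""
    return [(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))]

history = [
    Action(1710000000, "page_view"),
    Action(1710000060, "add_to_cart"),
    Action(1710000300, "purchase"),
]
print(next_action_targets(encode(history)))
# [([2], 3), ([2, 3], 0)]
```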

Theoretical Foundations

Our research has roots in topology, the mathematical study of shape and structure, which provides rigorous tools for unsupervised and self-supervised learning. Topological methods generalize naturally to discrete structures such as graphs, and following that thread led us to graph learning and to the insight that Transformers can be viewed as Graph Neural Networks operating on sequential (line-graph) structures. This unifying view lets us extend language-modeling concepts beyond text to new domains such as behavioral sequences.
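To illustrate the Transformer-as-GNN view, here is a sketch of a single attention head written as message passing over an explicit graph: each token is a node, the adjacency matrix says who may attend to whom, and a causal Transformer over a sequence is the special case where each token receives messages from itself and all earlier tokens. The function name, toy dimensions, and masking scheme below are our assumptions for exposition, not code from any Unbox AI model.

```python
import numpy as np

def attention_as_message_passing(X, adj, Wq, Wk, Wv):
    """One attention head as GNN-style message passing.

    X   : (n, d) node features, one row per token.
    adj : (n, n) boolean adjacency; adj[i, j] = True means
          node i may receive a message from node j.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # pairwise affinities
    scores = np.where(adj, scores, -np.inf)  # mask non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                       # aggregate neighbor messages

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# A causal Transformer over a sequence is the special case where the
# graph contains only self-loops and backward edges.
causal_adj = np.tril(np.ones((n, n), dtype=bool))
out = attention_as_message_passing(X, causal_adj, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Swapping `causal_adj` for an arbitrary adjacency matrix turns the same computation into attention over a general graph; the text sequence is just one graph among many, which is what lets language-modeling machinery carry over to behavioral data.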