Unbox AI builds foundation models that learn from what people do, not what they say. Our Large Behavioral Models (LBMs) are trained on chronological sequences of human actions—purchases, transactions, interactions—across billions of users. This research blog documents our work advancing this new paradigm.
The first behavioral foundation model for visual art and aesthetics. Trained on 215 billion human interactions across major art and design platforms.
Language modeling techniques applied to detailed workforce behavioral data, creating BehaviorGPT-v2: a foundation model of employee behavior.
Language modeling applied to rich grocery consumption data, creating BehaviorGPT-v1: a foundation model that treats each user's history as a language and predicts future events.
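To make the "history as a language" framing concrete, here is a minimal sketch in plain Python. The event names, toy history, and vocabulary are invented for illustration; BehaviorGPT's actual tokenization is richer than this. The point is only the shape of the objective: a chronological event stream becomes a token sequence, and the model learns to predict each next event from the prefix before it.

```python
# Illustrative sketch only: the event names and vocabulary below are
# invented for this example, not BehaviorGPT's actual tokenization.

# One user's chronological grocery event stream.
history = ["buy:milk", "buy:bread", "visit:store", "buy:milk", "buy:eggs"]

# Build a vocabulary over event types, just as a language model
# builds one over words or subwords.
vocab = {event: i for i, event in enumerate(sorted(set(history)))}
tokens = [vocab[event] for event in history]

# The language-modeling objective, shifted by one position:
# given the prefix ending at inputs[t], predict targets[t].
inputs, targets = tokens[:-1], tokens[1:]

for ctx, nxt in zip(inputs, targets):
    print(f"after token {ctx}, predict token {nxt}")
```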
Our research has roots in topology—the mathematical study of shape and structure—which provides rigorous tools for unsupervised and self-supervised learning. Because topological methods generalize naturally to discrete structures like graphs, this perspective led us to graph learning and the insight that Transformers can be viewed as Graph Neural Networks operating on sequential (line-graph) structures. This unifying view enables us to extend language-modeling concepts beyond text to new domains, such as behavioral sequences.
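As a simplified illustration of that view, the sketch below writes single-head self-attention as message passing over an explicit edge set, in NumPy. The random projection weights and the causal edge set are toy constructions, not our model's; the causal edges, where each position receives messages from itself and its predecessors, are one concrete reading of the sequential structure mentioned above. Swapping in any other edge set turns the same loop into an attention-based GNN layer on an arbitrary graph.

```python
# Illustrative sketch, not our training code: attention as message passing.
import numpy as np

def attention_as_message_passing(x, edges):
    """x: (n, d) node features; edges: dict mapping node -> neighbor list."""
    n, d = x.shape
    rng = np.random.default_rng(0)
    # Toy query/key/value projections (random for the example).
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = np.zeros_like(x)
    for i in range(n):
        nbrs = edges[i]
        scores = q[i] @ k[nbrs].T / np.sqrt(d)  # one score per incoming edge
        w = np.exp(scores - scores.max())
        w /= w.sum()                            # softmax over the neighborhood
        out[i] = w @ v[nbrs]                    # aggregate neighbor messages
    return out

# Causal edge set: position i attends to positions 0..i, i.e. the
# sequential graph of a decoder-style language model.
n, d = 5, 8
x = np.random.default_rng(1).standard_normal((n, d))
causal_edges = {i: list(range(i + 1)) for i in range(n)}
print(attention_as_message_passing(x, causal_edges).shape)  # (5, 8)
```

Nothing in the loop depends on the tokens forming a text sequence, which is exactly why the same machinery transfers from language to behavioral event streams.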