This event has ended
Still interested?
Let us know and we’ll reach out.
👉 Book a demo here
Join Michal Kosinski, from Stanford Graduate School of Business and Stanford HAI, for a thought-provoking keynote on theory of mind, cognitive bias, and the emerging properties of large language models — and what they may mean for decision-making, forecasting, and automation.
🗓️ June 4, 2026
⏰ 14:00 – 17:00 CEST
📍 Trekanten, Oslo

Espen Skorstad | CEO & Founder, Fairsight
Emergent Properties in Large Language Models:
How models trained on language begin to exhibit human-like reasoning patterns – and biases
Michal Kosinski | Associate Professor, Stanford University
Fireside chat
Espen x Michal
Meet the speakers at Fairsight Future and hear their unique insights


Long before today’s debate about AI, privacy, and prediction became mainstream, Michal Kosinski was already producing research that challenged how we think about human judgment, digital behavior, and what data can reveal about us. Today, as Associate Professor at Stanford Graduate School of Business and faculty affiliate at Stanford HAI, he is turning that same lens on large language models.
Kosinski is not known for safe questions. He is known for asking the ones that matter early. His work spans psychology, psychometrics, computational social science, and artificial intelligence. It has led to more than 80 peer-reviewed papers, placed him among the world’s most-cited researchers, and made him a voice that reaches far beyond academia — including coverage and appearances across outlets such as The New York Times, BBC, CNN, and The Economist.
In this keynote, Kosinski explores one of the most important developments in AI today: the fact that language models are beginning to display patterns that resemble human reasoning, theory of mind, and cognitive bias. His published work has shown that recent LLMs can solve classic false-belief tasks used in psychology, while related research shows that larger models can also develop human-like intuitive errors.
For HR leaders, executives, and investors, the question is not whether these systems are impressive. It is whether we understand them well enough to use them wisely. This session will examine what these emerging capabilities may mean for decision-making, forecasting, automation, and the limits we still cannot ignore.
Michal Kosinski is an Associate Professor of Organizational Behavior at Stanford Graduate School of Business. His work bridges AI and psychology: he uses AI to study human behavior, and psychological theory and methods to understand and evaluate AI systems. He earned his Ph.D. in psychology from the University of Cambridge, where he pioneered methods for inferring psychological traits from digital footprints.
Michal has published over 80 peer-reviewed articles in leading scientific journals, including PNAS, Nature Computational Science, JPSP, and American Psychologist. He is a co-author of the textbook Modern Psychometrics and has contributed multiple chapters to major reference works, including the seminal Handbook of Social Psychology. His research has been cited more than 27,000 times (h-index: 62), placing him among the top 1% of highly cited researchers worldwide.
His work has been recognized with honors, including the SPSP Diener Award in Personality Psychology (2025)—a flagship mid-career award in the field—the ARP Early Career Award (2025), SPSP Distinguished Fellowship (2024), the EAPP Early Achievement Award (2023), and an APS Rising Star Award (2016).
Beyond academia, Michal regularly advises corporations and government bodies, including the U.S. Federal Trade Commission, the U.S. Equal Employment Opportunity Commission, the European Parliament, and the U.S. Department of Justice. He was behind the first press article exposing the privacy risks exploited by Cambridge Analytica. His research informed major privacy and technology policy debates, contributed to the record $5 billion fine imposed on Facebook, and inspired a cover story in The Economist, a Broadway theater production, multiple TED talks, and a video game. His work has been featured in thousands of press articles, books, podcasts, and documentaries worldwide.
A clearer understanding of what researchers mean by theory of mind, reasoning, and cognitive bias in large language models.
Insight into what these emerging properties may reveal about intelligence — both artificial and human.
A practical view of where AI can support decision-making, and where leaders should stay cautious.
A sharper understanding of the limits, blind spots, and failure modes that still matter in real-world use.
Better language for evaluating AI beyond hype, especially in hiring, forecasting, and automation contexts.
