Those of you who listened to NPR recently may remember a piece about the study of algorithms. Those of you in Boston may have seen an article in The Boston Globe. The expert being interviewed (or written about) was none other than our own Berkeley Dietvorst ’03. After St. Anne’s, he attended East High School, where he graduated as valedictorian of his class. He is presently enrolled in a PhD program at The Wharton School of the University of Pennsylvania, studying judgment and decision-making. Berkeley says:
All of the teachers at St. Anne’s made my education better and pointed me in the right direction. In particular, Jeff Bird, Joe Figlino, Don Gifford, and John Dicker taught me many of the basic skills I needed to conduct scientific research.
When I was in high school, I thought that I wanted a job in finance, so I narrowed my college search to universities with good undergraduate business schools. I visited a variety of campuses, but when I walked through Penn’s campus, something just felt right. It didn’t hurt that Penn also had an excellent business school. As an undergraduate, I chose to major in finance and decision processes. I tried a finance internship after my junior year and decided that it wasn’t for me; I felt drawn toward a career that would allow me to work independently and choose what I wanted to work on every day. I decided to earn a PhD studying judgment and decision-making because of my interest in psychology, economics, and decision sciences.
In my current research, I wanted to understand why people prefer not to use algorithms for making predictions even though an abundance of research has shown that algorithms forecast more accurately than humans. My coauthors (Joseph Simmons and Cade Massey) and I hypothesized that people would lose confidence in algorithms after seeing them err and, therefore, be less likely to use them. We found that people did indeed lose confidence in an algorithm after seeing it err and were more likely to rely on a human forecaster instead, even when they had seen the algorithm outperform the human by a wide margin. Currently, we are studying how to get people to use algorithms for forecasting even after they have learned that those algorithms are imperfect.