About


I am a postdoctoral fellow at the Schwartz Reisman Institute for Technology and Society. My research focuses on moral theory and the philosophy of action, with a particular interest in their intersection and in applying this work to issues in the ethics of AI and machine learning. I finished my PhD in 2017 under the supervision of Sergio Tenenbaum, with Philip Clark and Andrew Sepielli as readers.


Photo credit Jenna Muirhead

In my dissertation, I offer a new defence of the guise of the good view, which holds that all practical mental states, such as desire and intention, involve the agent taking the content of that state to be good. I show how the theory can be developed to avoid common objections, and I argue for it by appeal to its theoretical fruitfulness, demonstrating how it can solve problems in normative ethics, metaethics, philosophy of action, and the theory of practical reasoning. You can find more details about my project in my dissertation abstract here.

I also have a new project on the place of reasoning and rationality in our evolutionary history and in our scientific understanding of ourselves. The first paper in this project, co-written with Julia Smith, argues that the evolutionary function of reasoning is collective deliberation, rather than individual decision making (as is generally assumed) or primarily a means of facilitating social interaction and cooperation (as Hugo Mercier and Dan Sperber have proposed). A pre-print of that paper is available under the research tab above. I am now working on extending this idea to practical as well as theoretical reasoning.

In addition, I have begun a new project in AI ethics that looks at the role of transparency and justification in creating fair AI. There has been concern over the “black-box” nature of advanced machine learning systems, and a push for transparency. But technical explanations of the inner workings of artificial neural networks are unlikely to be meaningful or useful to those harmed by decisions that have been delegated to AI algorithms. I draw on work on reasons for action from the philosophy of action and practical reasoning to examine what we should look for when justifying and explaining AI decisions.

You can contact me at benjamin.wald@utoronto.ca