AI, decisions, and the reasons to believe: an ethics-through-epistemology approach
DOI:
https://doi.org/10.18716/ojs/phai/2025.2276

Keywords:
psychology of responsible AI, responsible decision making, reasons to believe, ethics of AI, epistemology of AI

Abstract
The paper puts forward a notion of artificial intelligence (AI) as a cognition technology and centres on the role of AI systems as a sort of epistemic plug for the decision-making process of human moral agents. In this sense, the paper argues for an ethics-through-epistemology approach to AI. The question of responsibility is approached from the perspective of the decision maker, with attention to her motivational setup when deliberating about doing what is morally right. I start by arguing that understanding AI as a cognition technology forces us to re-conceptualize moral responsibility for AI as responsibility in the context of cognition. I then unwrap this claim and discuss three major elements in re-focusing the philosophical discussion of responsibility for AI: shifting focus (a) from actions to decisions, (b) from the question of imputability to the problem of harm mitigation and prevention, and (c) from ontological to epistemic conditions. On this basis, I argue that a responsible stance towards decision-making with AI presupposes an obligation to evaluate reasons for actions. I then analyse these in terms of reasons to believe, in connection with the epistemic authority of AI as a cognition technology.
License
Copyright (c) 2025 Dina Babushkina

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


