Thank you for the piece. There is a concern, it seems to me, that your argument relies too heavily on a rather controversial position within epistemology, namely, that philosophy can generate "truth" or at least true beliefs (much less that doing so is its primary goal). But if Quine (inter alia) is to be believed, the domain of a priori truth is illusory; or at least, we should be wary of claims to have arrived at a priori truth absent a defense of the position that such truths can be found at all. There are two alternatives: 1) the "cash value" of philosophy is the inculcation of true beliefs in some minimalist sense (this construal seems most apt in the case of ethics/pol. phil), or 2) the central virtue of philosophy is, as I believe, its intrinsic intellectual value to its practitioners.

You develop a position along the lines of 1) when you reference the "social welfare" benefits of high-quality philosophy. But this claim is dubious. Twenty-first-century analytic philosophy is notoriously abstruse and rarefied, as both proponents and detractors have belabored and bemoaned. Who is to benefit from the production of AI-augmented "cutting-edge" philosophy save those few savants already fluent in the terms of the relevant debate? The value must therefore accrue to practitioners of philosophy. But if they are incentivized to offload cognitive (and constitutively philosophical) tasks to AI, then it seems that, at least to a certain degree, the intrinsic value of the philosophical practice is degraded. If baseball players all wore vision-enhancing goggles at bat, it would still be baseball they played, but one constitutively valuable skill (hand-eye coordination) would be rendered obsolete. The game would lose some of its intrinsic worth.

In the case of philosophy, moreover, we should note that practitioners already function as the sole producers, as well as the primary (perhaps sole) consumers, of research. But should their ability to understand that research erode, whether because the cultivation of their skills is subordinated to the noble pursuit of pure knowledge or because the rigor of AI comes to exceed their diminished (or at least inferior) cognitive faculties, then the already comically parochial audience of philosophical research will sustain a further diminution. At the least, I think your claim that "The primary reason for subsidizing research should be the pursuit of truth, understanding, and contributions to social welfare" requires a further defense of why these reasons are preeminent, and a more detailed articulation of how our embrace of AI can promote them.
I don’t think it’s controversial (or should be) that one of the central aims of philosophy is the pursuit of some kind of epistemic good, whether that’s truth, understanding, clarity, or wisdom. It has other aims and values. But that doesn’t mean all the institutions in which philosophy is practiced ought to be governed by these other aims and values, especially when such institutions are publicly funded. So the question is whether journals and universities are primarily meant to support the intrinsic value of philosophy as a practice, which I agree is important.
My claim is that even if it were one of their functions, it shouldn’t be the only one. We’re not covering ourselves in glory by admitting that the value of philosophy doesn’t lie in its pursuit of any of the above epistemic goods. Heck, I think even literature and the arts pursue such goals in their own way. Claiming that journals should not be seen primarily as clubs where ‘practitioners’ can engage in their favorite hobby, at the expense of those of us who are trying to understand the world (and for some, change it), should not be a controversial thesis. And if the claim that philosophy pursues some truths or other epistemic goods is controversial, then we should reassess the entire history of philosophy, Nietzsche included. Philosophy as a *pure* practice is great; there are plenty of places to do it. I’m not sure professional journals are one of them, especially in the current economy, when subscriptions, research, etc. are largely publicly funded.
PS: I never claimed anything about a priori truths.
Interesting! Help me understand: how can a claim 'be too cynical to be true'? :) I'm not trying to nitpick, but it seems to me that cynicism is simply orthogonal to truth (though not to reputation management -- get out of my head, Dan Williams!).
I think that's right. I was being a bit tongue-in-cheek, but I also think that sometimes cynical interpretations may be too simple.
Solid breakdown of the guild mentality. The public choice framing really crystallizes something I've seen playing out in my own dept over the last two years when these debates come up. Essentially, there's a genuine fear that if we admit AI can enhance certain parts of the research process, we're admitting that parts of what we do are just cognitive overhead rather than intrinsic to the philosophical work itself. I dunno if journals will shift until junior scholars start pushing back more vocally tho.
This fear -- it seems to be assuming that the bits 'intrinsic to the philosophical work itself' would not be enhanced through AI. Am I reading that correctly? Why on earth should we make that assumption?
"Cognitive overhead": I like that.