Statistical learning theory as a framework for the philosophy of induction

Gilbert Harman and Sanjeev Kulkarni

In Prasanta S. Bandyopadhyay and Malcolm R. Forster (eds.) Philosophy of Statistics, Amsterdam, 2011, pp. 833–847

Abstract

Statistical Learning Theory (SLT) provides a formal mathematical framework for the philosophy of induction, offering a distinct alternative to paradigms such as learning in the limit or Bayesianism. By assuming an unknown objective probability distribution, SLT evaluates inductive methods through empirical risk minimization and the Vapnik-Chervonenkis (VC) dimension, which measures the richness of a hypothesis class. This approach determines the conditions under which Probably Approximately Correct (PAC) learning is achievable, establishing that a class of rules is learnable if and only if it has finite VC dimension. Applying SLT to epistemology provides a rigorous foundation for reliability theories of justification and offers a refined interpretation of Karl Popper’s falsifiability. Specifically, VC dimension serves as a more accurate measure of a theory’s empirical content than Popper’s original measures or simple parameter counts. Furthermore, SLT challenges the traditional role of simplicity in inductive inference, demonstrating that predictive success depends on balancing fit to the data against VC dimension rather than on descriptive brevity. The framework also illuminates the efficacy of transductive inference, in which inferring directly from labeled data to specific new cases frequently outperforms first inducing a general rule. These results suggest that the formal constraints of machine learning provide essential insights into the nature of reliable reasoning and the structure of scientific hypotheses. – AI-generated abstract.
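The learnability claim above can be made concrete with the standard VC generalization bound from statistical learning theory; what follows is a textbook formulation, not a quotation from the chapter. For a hypothesis class $H$ of VC dimension $d$, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, every $h \in H$ satisfies

$$ R(h) \;\le\; \hat{R}_n(h) \;+\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}, $$

where $R(h)$ is the true risk and $\hat{R}_n(h)$ the empirical risk. The deviation term shrinks with growing $n$ only when $d$ is finite, which is why finite VC dimension is exactly the condition for empirical risk minimization to be a PAC learner, and why this term, rather than descriptive brevity, plays the role traditionally assigned to simplicity.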
