10 found
  1. Reliable Reasoning: Induction and Statistical Learning Theory. Gilbert Harman & Sanjeev Kulkarni - 2007 - Bradford.
    In _Reliable Reasoning_, Gilbert Harman and Sanjeev Kulkarni -- a philosopher and an engineer -- argue that philosophy and cognitive science can benefit from statistical learning theory, the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors -- a central topic in SLT. After discussing philosophical attempts to evade the (...)
  2. Probabilistic coherence and proper scoring rules. Joel Predd, Robert Seiringer, Elliott Lieb, Daniel Osherson, H. Vincent Poor & Sanjeev Kulkarni - 2009 - IEEE Transactions on Information Theory 55 (10):4786-4792.
    We provide a self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifies the connection of coherence and proper scoring rules to Bregman divergence.
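    The theorem's content can be illustrated with a toy sketch (my own example, not taken from the paper), using the Brier score as the proper scoring rule: an incoherent forecast over an event and its complement is strictly dominated, on every possible outcome, by a coherent rival obtained by a minimal Euclidean adjustment.

    ```python
    # Toy illustration (not from the paper): an incoherent forecast over
    # {A, not-A} is dominated under the Brier score by a coherent rival.

    def brier(forecast, outcome):
        """Brier penalty: squared distance between forecast and outcome indicator."""
        return sum((f - o) ** 2 for f, o in zip(forecast, outcome))

    p = (0.7, 0.5)  # incoherent: P(A) + P(not-A) = 1.2
    q = (0.6, 0.4)  # coherent rival: minimal Euclidean adjustment of p

    for outcome in [(1, 0), (0, 1)]:  # A occurs / A fails
        assert brier(q, outcome) < brier(p, outcome)
    print("q dominates p under the Brier score")
    ```

    The rival q incurs a strictly smaller penalty whether A occurs or not, which is exactly the non-domination property the theorem ties to coherence.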
  3. The Problem of Induction. Gilbert Harman & Sanjeev R. Kulkarni - 2006 - Philosophy and Phenomenological Research 72 (3):559-575.
    The problem of induction is sometimes motivated via a comparison between rules of induction and rules of deduction. Valid deductive rules are necessarily truth preserving, while inductive rules are not.
  4. Aggregating Large Sets of Probabilistic Forecasts by Weighted Coherent Adjustment. Guanchun Wang, Sanjeev R. Kulkarni & Daniel N. Osherson - unknown
    Stochastic forecasts in complex environments can benefit from combining the estimates of large groups of forecasters (“judges”). But aggregating multiple opinions faces several challenges. First, human judges are notoriously incoherent when their forecasts involve logically complex events. Second, individual judges may have specialized knowledge, so different judges may produce forecasts for different events. Third, the credibility of individual judges might vary, and one would like to pay greater attention to more trustworthy forecasts. These considerations limit the value of simple aggregation (...)
  5. Wishful Thinking and Social Influence in the 2008 U.S. Presidential Election. Michael K. Miller, Guanchun Wang, Sanjeev R. Kulkarni & Daniel N. Osherson - unknown
    This paper analyzes individual probabilistic predictions of state outcomes in the 2008 U.S. presidential election. Employing an original survey of more than 19,000 respondents, ours is the first study of electoral forecasting to involve multiple subnational predictions and to incorporate the influence of respondents’ home states. We relate a range of demographic, political, and cognitive variables to individual accuracy and predictions, as well as to how accuracy improved over time. We find strong support for wishful thinking bias in expectations, as (...)
  6. Statistical learning theory as a framework for the philosophy of induction. Gilbert Harman & Sanjeev Kulkarni - manuscript
    Statistical Learning Theory (e.g., Hastie et al., 2001; Vapnik, 1998, 2000, 2006) is the basic theory behind contemporary machine learning and data-mining. We suggest that the theory provides an excellent framework for philosophical thinking about inductive inference.
     
  7. Précis of Reliable Reasoning: Induction and Statistical Learning Theory. Gilbert Harman & Sanjeev Kulkarni - 2009 - Abstracta 5 (S3):5-9.
     
  8. Response to Shaffer, Thagard, Strevens and Hanson. Gilbert Harman & Sanjeev Kulkarni - 2009 - Abstracta 5 (S3):47-56.
    Like Glenn Shafer, we are nostalgic for the time when “philosophers, mathematicians, and scientists interested in probability, induction, and scientific methodology talked with each other more than they do now” [p. 10]. Shafer goes on to mention other relevant contemporary communities. He himself has been at the interface of many of these communities while at the same time making major contributions to them, and this very symposium represents something of that desired discussion. We begin with a couple of general points (...)
     
  9. Statistical Learning Theory: A Tutorial. Sanjeev R. Kulkarni & Gilbert Harman - 2011 - Wiley Interdisciplinary Reviews: Computational Statistics 3 (6):543-556.
    In this article, we provide a tutorial overview of some aspects of statistical learning theory, which also goes by other names such as statistical pattern recognition, nonparametric classification and estimation, and supervised learning. We focus on the problem of two-class pattern classification for various reasons. This problem is rich enough to capture many of the interesting aspects that are present in the cases of more than two classes and in the problem of estimation, and many of the results can be (...)
     
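    As a pointer to the kind of problem the tutorial treats, here is a minimal two-class nearest-neighbor classifier (a generic illustration of supervised pattern classification, not code from the article; the toy data points are invented):

    ```python
    import math

    # Labeled training data: (feature vector, class label in {0, 1}).
    # These points are invented for illustration.
    train = [((0.0, 0.0), 0), ((0.2, 0.1), 0),
             ((1.0, 1.0), 1), ((0.9, 1.2), 1)]

    def nn_classify(x):
        """1-nearest-neighbor rule: predict the label of the closest training point."""
        _, label = min(train, key=lambda pair: math.dist(pair[0], x))
        return label

    print(nn_classify((0.1, 0.0)))  # near the class-0 cluster
    print(nn_classify((1.1, 0.9)))  # near the class-1 cluster
    ```

    Rules of this kind, and the statistical guarantees on their expected error, are among the topics the tutorial surveys.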
  10. Improving Aggregated Forecasts of Probability. Guanchun Wang, Sanjeev Kulkarni & Daniel N. Osherson - unknown
    The Coherent Approximation Principle (CAP) is a method for aggregating forecasts of probability from a group of judges by enforcing coherence with minimal adjustment. This paper explores two methods to further improve the forecasting accuracy within the CAP framework and proposes practical algorithms that implement them. These methods allow flexibility to add fixed constraints to the coherentization process and compensate for the psychological bias present in probability estimates from human judges. The algorithms were tested on a data set of nearly (...)
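    For a flavor of what “enforcing coherence with minimal adjustment” can mean in the simplest case (my own illustration, not the paper's algorithm): forecasts over a partition of mutually exclusive, exhaustive events are coherent exactly when they are nonnegative and sum to 1, and the minimal least-squares adjustment is the Euclidean projection onto the probability simplex.

    ```python
    def project_to_simplex(v):
        """Euclidean projection of a vector onto the probability simplex
        (nonnegative entries summing to 1), via the standard sort-based method."""
        u = sorted(v, reverse=True)
        cumsum, theta = 0.0, 0.0
        for i, ui in enumerate(u, start=1):
            cumsum += ui
            t = (cumsum - 1.0) / i
            if ui - t > 0:
                theta = t
        return [max(x - theta, 0.0) for x in v]

    # Incoherent forecasts for three mutually exclusive, exhaustive events
    # (invented numbers): they sum to 1.4.
    raw = [0.6, 0.5, 0.3]
    coherent = project_to_simplex(raw)
    print(coherent)  # nonnegative, sums to 1, minimally adjusted in least squares
    ```

    The methods in the paper operate on much richer event structures and weight judges differently, but this projection captures the basic coherentization step.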