<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Reflective Equilibrium · Pablo Stafforini</title><link>https://stafforini.com/tags/reflective-equilibrium/</link><description/><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Fri, 25 Jan 2008 00:00:00 +0000</lastBuildDate><atom:link href="https://stafforini.com/tags/reflective-equilibrium/index.xml" rel="self" type="application/rss+xml"/><item><title>intuition</title><link>https://stafforini.com/quotes/hanson-intuition/</link><pubDate>Fri, 25 Jan 2008 00:00:00 +0000</pubDate><guid>https://stafforini.com/quotes/hanson-intuition/</guid><description><![CDATA[<blockquote><p>When large regions of one’s data are suspect and for that reason given less credence, even complex curves will tend to look simpler as they are interpolated across such suspect regions. In general, the more error one expects in one’s intuitions (one’s data, in the curve-fitting context), the more one prefers simpler moral principles (one’s curves) that are less context-dependent. This might, but need not, tip the balance of reflective equilibrium so much that we adopt very simple and general moral principles, such as utilitarianism. This might not be appealing, but if we really distrust some broad set of our moral intuitions, this may be the best that we can do.</p></blockquote>
]]></description></item></channel></rss>