@mCoombes314: It's easier to understand conceptually, certainly.
The impossibility of currently implementing it is a slight mark against it, however :)
Of course, that's not a huge issue, since computing an upper bound and then removing illegal positions is also impossible to implement right now.
It's just that at least there you get the upper bound as reward for your work :)
Another interesting empirical estimation method would be to have a legal-move generator play out random games continuously, recording all the unique positions it encounters.
At first, nearly every position would be unique. Over time, though, the rate at which new positions appeared would slow. After letting the process run for a decent amount of time, you could plot the count of unique positions against time and compute the limit of the best-fitting function(s), which would probably give a decent estimate.
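Here's a toy sketch of that idea in Python, with integers standing in for chess positions and uniform sampling standing in for random games (both are simplifying assumptions; real playouts sample positions very non-uniformly, and `TRUE_TOTAL` is just a made-up state-space size we then try to recover from the slowing discovery rate):

```python
import math
import random

# Pretend state space: integers 0..TRUE_TOTAL-1 stand in for positions.
# TRUE_TOTAL is an illustrative assumption, not a chess figure.
TRUE_TOTAL = 10_000
random.seed(42)

seen = set()
history = []  # (samples drawn, unique positions seen so far)
for t in range(1, 20_001):
    seen.add(random.randrange(TRUE_TOTAL))
    if t % 2_000 == 0:
        history.append((t, len(seen)))

# Under uniform sampling the expected curve is unique(t) = N*(1 - exp(-t/N)),
# which is increasing in N, so we can invert it by bisection to estimate N.
def estimate_total(t, unique):
    lo, hi = float(unique), 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * (1 - math.exp(-t / mid)) > unique:
            hi = mid
        else:
            lo = mid
    return lo

t, unique = history[-1]
estimate = estimate_total(t, unique)
print(f"unique after {t:,} samples: {unique:,}; estimated total: {estimate:,.0f}")
```

With this toy setup the estimate lands close to the true 10,000; for chess you'd fit the observed curve instead, since the uniform-sampling model wouldn't hold.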
It would also require substantial storage, though nothing like the cost of actually computing all of the 8-32 piece tablebases.
The practical concern is that you might not see much slowing before you outpaced current storage limits. To make collisions vanishingly unlikely, you'd need about 160 bits per position, which "only" gives you about 50 billion positions per TB, a much, much smaller number than any currently computed upper bound. And once a gargantuan amount of data had been stored, the check for whether a position had already been reached would become very expensive.
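Just to sanity-check those numbers, a quick back-of-envelope calculation (the `10**46` figure below is a rough stand-in for the published upper bounds, not a precise value):

```python
# 160-bit keys: by the birthday bound, collisions among k random 160-bit
# hashes become likely only around k ~ 2**80, far beyond any feasible run.
bits_per_position = 160
bytes_per_position = bits_per_position // 8           # 20 bytes
positions_per_tb = 10**12 // bytes_per_position       # 50 billion per TB
print(f"{positions_per_tb:,} positions per TB")

# Upper bounds on the number of positions are often quoted around 10**44
# to 10**47; using 10**46 as a stand-in, the storage needed is hopeless:
upper_bound = 10**46                                  # illustrative figure
tb_needed = upper_bound * bytes_per_position / 10**12
print(f"~{tb_needed:.1e} TB to store every position")
```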
The upshot is that chess is pretty complicated :)