+ d) / (P.N))
That is, the fogged distance equals the distance from the
point to the eye, times the ratio of the point-to-plane
distance to the sum of the point-to-plane and eye-to-plane
distances.
The distance from the point P to the eye, ||P||, is approximated
as the z depth, and texgen'd into S. The function (P.N)/d is
particular to this plane, and texgen'd into T. The texture
lookup on s,t returns a value which is a fog amount for the
distance as computed above. (S is rescaled to the maximum
possible view distance; T is rescaled to allow the ratio to
range from 0..16 or so.)
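The factored formula can be checked numerically. In this sketch
the fog half-space is {x : dot(x, n) >= c} with unit normal n
pointing into the fog; eye, p, n, c, and in_fog_distance are
illustrative names and encodings, not the original code:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def in_fog_distance(eye, p, n, c):
    """Length of the part of the segment eye->p inside the fog
    half-space {x : dot(x, n) >= c}, assuming eye is outside and
    p is inside.  By similar triangles this is the text's
    ||P|| * (P.N) / ((P.N) + d) under this plane encoding."""
    depth = dot(p, n) - c        # point's distance below the fog plane
    height = c - dot(eye, n)     # eye's distance above the fog plane
    seg = math.dist(eye, p)      # true eye-to-point distance
    return seg * depth / (depth + height)

# Cross-check against an explicit ray/plane intersection:
eye = (0.0, 5.0, 0.0)
p   = (3.0, -4.0, 2.0)
n   = (0.0, -1.0, 0.0)           # fog fills the half-space y <= 1
c   = -1.0                       # plane y = 1 in this encoding
t = (c - dot(eye, n)) / (dot(p, n) - dot(eye, n))  # crossing param
q = tuple(e + t * (pp - e) for e, pp in zip(eye, p))
assert abs(in_fog_distance(eye, p, n, c) - math.dist(q, p)) < 1e-9
```

The assertion compares the factored form against the distance
from the explicit ray/plane crossing point to p; the two agree
by similar triangles.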
Because the substitution of Z for ||P|| is independent of the
chosen plane, this approximation doesn't prevent boundaries
from matching up.
Moreover, even though the texture can only approximate the
actual function (since it's discretely sampled and interpolated),
there's no problem with the boundaries being computed slightly
differently for different faces; since, regardless of the face,
the true distance is identical at the boundaries, and the
eye distance is identical at the boundaries, it follows that
(1 + d/(P.N)) must be identical at the boundaries (even though
d and (P.N) are different for the different faces), from which it
follows that T = (P.N)/d must be identical at the boundaries.
Thus the same texture coordinates are generated at the boundaries,
to the limits of the precision of the texgen computation.
(This was verified in the test application by viewing S and T
independently, instead of the final fog computation.)
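The boundary argument can itself be checked numerically: for a
point whose view ray crosses the shared edge of two faces,
either face's plane yields the same (P.N)/d. Here t_coord,
plane_a, plane_b, and the plane encoding {x : dot(x, n) = c}
are illustrative, not the original texgen setup:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def t_coord(p, eye, n, c):
    """The T-like quantity depth/height (the text's (P.N)/d) for a
    fog plane {x : dot(x, n) = c}, unit normal n into the fog."""
    depth = dot(p, n) - c        # point below the plane
    height = c - dot(eye, n)     # eye above the plane
    return depth / height

eye = (0.0, 5.0, 0.0)
# Two fog-boundary faces whose planes share the edge {x = 0, y = 1}:
s = math.sqrt(2.0)
plane_a = ((0.0, -1.0, 0.0), -1.0)                # plane y = 1
plane_b = ((-1.0 / s, -1.0 / s, 0.0), -1.0 / s)   # plane x + y = 1
# A point whose view ray crosses the shared edge at q = (0, 1, 2):
p = (0.0, -1.0, 3.0)
ta = t_coord(p, eye, *plane_a)
tb = t_coord(p, eye, *plane_b)
assert abs(ta - tb) < 1e-9       # same T from either face's plane
```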
This means a small texture can be used, since the approximations
caused by reducing the texture cannot show up as visual
discontinuities.
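A small lookup table along these lines might be built as
follows; the texture size, the rescale constants s_max and
t_max, and the exponential fog model are illustrative guesses,
not values from the test application:

```python
import math

def build_fog_texture(size=32, s_max=1000.0, t_max=16.0, density=0.004):
    """Tiny 2D lookup: texel (j, i) holds the fog amount for
    s = eye distance / s_max (the S axis) and
    t = (P.N)/d / t_max   (the T axis), using the factored
    distance  fogdist = s*s_max / (1 + 1/((P.N)/d))."""
    tex = []
    for j in range(size):                    # T axis: (P.N)/d, rescaled
        t = (j + 0.5) / size * t_max
        row = []
        for i in range(size):                # S axis: eye distance, rescaled
            dist = (i + 0.5) / size * s_max
            fogdist = dist / (1.0 + 1.0 / t)
            row.append(1.0 - math.exp(-density * fogdist))
        tex.append(row)
    return tex
```

Since the per-texel function is smooth, a coarse table plus
bilinear filtering stays close to the true fog amount, which is
why shrinking the texture only softens the curve rather than
introducing seams.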
I didn't intentionally design this property into the way
I factored the distance math--I was just looking for some
function that could be texgen'd--but it's worth remembering
for similar future situations. (Of course, most of the time
we work with continuous vertex data, not faceted vertex
data, so it's never an issue.)