Physically-based ray tracers need to solve complex equations that describe the flow of light within a given scene. These equations are typically solved with a Monte Carlo algorithm, which carefully constructs a randomised process whose expected outcome is the image we seek.
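As a minimal sketch of the idea (not the full rendering equation): a Monte Carlo algorithm can estimate an integral by averaging the integrand at random sample points. The function, sample counts, and names here are illustrative, not taken from any particular renderer.

```python
import random

def monte_carlo_estimate(f, n_samples, seed=0):
    """Estimate the integral of f over [0, 1] by averaging f at
    uniformly random sample points. Each sample's expected value
    equals the true integral, so the estimator is unbiased; its
    error is pure statistical variance, shrinking as samples grow."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f(rng.random())
    return total / n_samples

# Example: integrate x^2 over [0, 1]; the exact answer is 1/3.
estimate = monte_carlo_estimate(lambda x: x * x, 100_000)
```

A renderer does the same thing in a much higher-dimensional space: each pixel is an integral over all light paths, and each random path contributes one sample to the average.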

An unbiased Monte Carlo algorithm is one for which the only source of error is statistical variance (which appears in the image as noise).

This means that if you were to render 100 images with an unbiased Monte Carlo algorithm and average them together, the result would be more accurate than any single image. The same is not true for biased algorithms such as Photon Mapping: because each photon has a finite area of effect, illumination details smaller than that radius cannot be captured. No matter how many images from the same Photon Mapping process you average together, those details will not be recovered unless you shrink the photon radius towards zero (at which point you are left with an inefficient path tracing algorithm). Similarly, if the renderer enforces a low maximum number of light "bounces", no amount of further rendering will approach the correct result, which must take all possible light paths into account.
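The contrast between the two behaviours can be demonstrated with a toy experiment. Here the "biased" estimator blurs each sample over a fixed radius, loosely analogous to a photon map's finite area of effect; the constants, names, and blur model are illustrative assumptions, not how any real photon mapper works.

```python
import random

TRUE_VALUE = 1.0 / 3.0   # exact integral of x^2 over [0, 1]
BLUR_RADIUS = 0.2        # stands in for a photon-gathering radius

def unbiased_run(n, rng):
    """One unbiased Monte Carlo estimate of the integral of x^2."""
    return sum(rng.random() ** 2 for _ in range(n)) / n

def biased_run(n, rng):
    """A biased estimate: each sample point is perturbed within
    BLUR_RADIUS before evaluation, a crude stand-in for the finite
    area of effect of a photon map."""
    total = 0.0
    for _ in range(n):
        x = rng.random()
        d = rng.uniform(-BLUR_RADIUS, BLUR_RADIUS)
        total += (x + d) ** 2
    return total / n

rng = random.Random(1)
runs = 200
avg_unbiased = sum(unbiased_run(1000, rng) for _ in range(runs)) / runs
avg_biased = sum(biased_run(1000, rng) for _ in range(runs)) / runs
# Averaging many runs drives the unbiased error towards zero, while
# the biased average settles near TRUE_VALUE + BLUR_RADIUS**2 / 3:
# a systematic offset that no amount of averaging removes.
```

Shrinking `BLUR_RADIUS` towards zero removes the systematic offset, mirroring the observation above that a zero photon radius recovers a (slow) unbiased algorithm.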

The practical benefit of using an unbiased algorithm is that there are no abstract rendering parameters to tune: the process always converges towards the most accurate result possible.

Computer processing power becomes cheaper all the time, and the unbiased rendering paradigm becomes more powerful with it: it is preferable to simply let the computer converge to the best possible result directly than to have the user spend time searching for a reasonable compromise.