Suppose we have my usual hierarchy of types, Animal, Giraffe, etc, with the obvious type relationships. An IEqualityComparer<T> is contravariant in its type parameter; if we have a device which can compare two Animals for equality then it can compare two Giraffes for equality as well. So why does this code fail to compile?

IEqualityComparer<Animal> animalComparer = whatever;
IEnumerable<Giraffe> giraffes = whatever;
IEnumerable<Giraffe> distinct = giraffes.Distinct(animalComparer);
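The contravariance claim is easy to check directly. Here is a minimal sketch, assuming a hypothetical Animal/Giraffe hierarchy and name-based comparer (none of these types are from the original text): the assignment of an Animal comparer to a Giraffe comparer compiles because IEqualityComparer<in T> is contravariant, and supplying the type argument to Distinct explicitly sidesteps inference entirely, so the call compiles as well.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical minimal hierarchy matching the setup described above.
class Animal { public string Name = ""; }
class Giraffe : Animal { }

// A device that can compare any two Animals for equality (by name).
class NameComparer : IEqualityComparer<Animal>
{
    public bool Equals(Animal x, Animal y) => x.Name == y.Name;
    public int GetHashCode(Animal obj) => obj.Name.GetHashCode();
}

public static class Demo
{
    public static int CountDistinct()
    {
        IEqualityComparer<Animal> animalComparer = new NameComparer();

        // Contravariance in action: because IEqualityComparer<in T> is
        // contravariant, an Animal comparer is usable wherever a Giraffe
        // comparer is expected. This assignment compiles.
        IEqualityComparer<Giraffe> giraffeComparer = animalComparer;

        IEnumerable<Giraffe> giraffes = new[]
        {
            new Giraffe { Name = "Gerry" },
            new Giraffe { Name = "Gerry" },
            new Giraffe { Name = "Geoffrey" },
        };

        // Supplying the type argument explicitly avoids type inference;
        // the comparer argument then converts contravariantly and the
        // call compiles, yielding the two distinct giraffes.
        return giraffes.Distinct<Giraffe>(animalComparer).Count();
    }
}
```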

This illustrates a subtle and slightly unfortunate design choice in the method type inference algorithm, which of course was designed long before covariance and contravariance were added to the language.

Let's take a look at the signature of the Distinct extension method that was used above:

public static IEnumerable<T> Distinct<T>(
    this IEnumerable<T> source,
    IEqualityComparer<T> comparer)

Type inference computes a set of bounds on what T can be based on an analysis of each argument and its corresponding formal parameter type. For the first argument, since IEnumerable<T> is a covariant interface, the type inference algorithm computes that T must be Giraffe or any larger type. The specification expresses this notion by saying that Giraffe is a "lower bound" -- whatever is finally chosen has to be a type that is not smaller than Giraffe.
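The covariant conversion that justifies this lower bound can be seen by fixing T to a larger type by hand. A sketch, again assuming a hypothetical Animal/Giraffe hierarchy (not from the original text): with T fixed explicitly to Animal, the sequence of giraffes is accepted for the first parameter precisely because IEnumerable<out T> is covariant.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical hierarchy for illustration.
class Animal { }
class Giraffe : Animal { }

public static class LowerBoundDemo
{
    public static int Run()
    {
        IEnumerable<Giraffe> giraffes = new[] { new Giraffe(), new Giraffe() };
        IEqualityComparer<Animal> animalComparer = EqualityComparer<Animal>.Default;

        // Choosing T = Animal, a type larger than Giraffe, satisfies the
        // lower bound: IEnumerable<out T> is covariant, so the argument of
        // type IEnumerable<Giraffe> converts to IEnumerable<Animal>.
        IEnumerable<Animal> distinct = giraffes.Distinct<Animal>(animalComparer);

        // The default comparer uses reference equality, so both distinct
        // giraffe objects survive.
        return distinct.Count();
    }
}
```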

Read more: Fabulous Adventures In Coding
