In information theory, as defined by Claude E. Shannon, information is surprise. Surprise comes from diversity, not from uniformity. In terms of data, a variable that can take only a single value, which is actually a constant, carries no surprise and therefore no information.
Whatever case you draw randomly from the dataset, you know the value of that variable in advance, and you are never surprised. To carry any information at all, a variable must be at least dichotomous, meaning it must have a pool of at least two distinct values. Now, imagine that you draw a case randomly from the dataset, but you know the overall distribution of that variable. If one state is more frequent, appearing in 80% of cases, you would of course expect that state, and you would be surprised only 20% of the time. With a 50%-50% distribution, no matter which state you expect, you would be surprised half of the time.
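The idea above can be sketched numerically. Shannon's measure of the expected surprise of a distribution is its entropy, H = -Σ p·log₂(p), in bits; the function and example distributions below are illustrative, not taken from the text:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the expected surprise -log2(p) over a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A constant (single possible value) carries no surprise, hence no information.
print(entropy([1.0]))        # 0 bits

# The skewed 80%/20% distribution from the text: some surprise, but not much.
print(entropy([0.8, 0.2]))   # about 0.72 bits

# A 50%-50% distribution: maximum surprise for a two-state variable.
print(entropy([0.5, 0.5]))   # 1 bit
```

Note how entropy grows as the distribution becomes more even: the less predictable the draw, the more information each observed case carries.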