In this post, I show how long a chain of refinements of preorders can be:

$$\mathord{\sqsubseteq_0} \;\supsetneq\; \mathord{\sqsubseteq_1} \;\supsetneq\; \mathord{\sqsubseteq_2} \;\supsetneq\; \cdots \;\supsetneq\; \mathord{\sqsubseteq_k}$$

Here, each $\sqsubseteq_i$ is a preorder on some fixed set $X$. The number $k$ (the length of the longest chain possible) is what I am after.

This problem came up when I wanted to analyse the query complexity of the NL* algorithm by Bollig et al. myself. NL* is an automata learning algorithm for nondeterministic automata. Simplifying a lot: this algorithm has an “outer loop” that refines a certain preorder. The number of iterations of this loop is bounded by how often a preorder can be refined. So how long can a chain of preorders be?

## Preorders and refinement

We start with a set $X$. A preorder is a relation $\mathord{\sqsubseteq} \subseteq X \times X$ such that:

• The relation is reflexive: $x \sqsubseteq x$ for all $x \in X$, and
• The relation is transitive: $x \sqsubseteq y$ and $y \sqsubseteq z$ implies $x \sqsubseteq z$ for all $x, y, z \in X$.
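To make the two axioms concrete, here is a small checker (a Python sketch of mine; the function name is not from the post):

```python
def is_preorder(rel, S):
    """Check that rel (a set of pairs over S) is reflexive and transitive."""
    reflexive = all((x, x) in rel for x in S)
    transitive = all(
        (x, z) in rel
        for (x, y) in rel
        for (p, z) in rel
        if p == y
    )
    return reflexive and transitive

print(is_preorder({(0, 0), (1, 1), (0, 1)}, {0, 1}))  # True
```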

Given two such relations $\sqsubseteq_1$ and $\sqsubseteq_2$, we say that $\sqsubseteq_1$ is (strictly) finer than $\sqsubseteq_2$ if $\mathord{\sqsubseteq_1} \subsetneq \mathord{\sqsubseteq_2}$. In words: the relation $\sqsubseteq_1$ relates fewer elements than $\sqsubseteq_2$.

The coarsest preorder is the relation where everything is related, i.e., $\mathord{\sqsubseteq} = X \times X$. On the other extreme, we have the finest relation $\{(x, x) \mid x \in X\}$, in which nothing is related (except each element with itself). I will use this diagonal set later on, so I introduce the notation $\Delta_X = \{(x, x) \mid x \in X\}$.
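In code, the two extremes are easy to write down (again a sketch of my own, not from the post):

```python
def coarsest(S):
    """The coarsest preorder on S: everything is related."""
    return {(a, b) for a in S for b in S}

def finest(S):
    """The finest preorder on S: the diagonal Δ_S."""
    return {(a, a) for a in S}

print(finest({0, 1}))  # {(0, 0), (1, 1)}
```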

The NL* algorithm starts with the coarsest relation, assuming that everything (states of an automaton) is equal. Only when observations show that the elements must be different does the algorithm refine that relation. Say $X$ has $n$ elements. Clearly the number of times it can be refined is bounded by $n^2$, as the coarsest relation has only $n^2$ elements. A slightly better bound is $n^2 - n$, because we know the last relation still has to be reflexive. Can we compute exactly how often we can refine the relation, also taking transitivity into account?

## Examples

Let’s compute some examples by hand.

• $|X| = 1$: In this case there is only one element, and only one preorder. So the length of the longest chain is 1.

• $|X| = 2$: In this case, say $X = \{a, b\}$, there are four preorders. And we can chain three of them as follows:

$$\Delta_X \;\subsetneq\; \Delta_X \cup \{(a, b)\} \;\subsetneq\; X \times X$$

There is another chain of length 3 where we choose to include $(b, a)$ instead of $(a, b)$. The length of the longest chain is 3.

• $|X| = 3$: This is a bit more involved. I will add a figure showing a chain of length 6 later.

I wanted to find a pattern in these numbers, so I wrote a brute-force program to enumerate all possibilities. Doing so, I got the sequence 1, 3, 6, 10 and 15 for $|X| = 1, \dots, 5$. Do you recognise these numbers?
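The brute-force program itself is not shown in the post; a minimal Python version along the following lines reproduces the sequence (all names here are my own). It enumerates every reflexive relation, keeps the transitive ones, and then computes the longest chain by dynamic programming over strict inclusion:

```python
from itertools import combinations, product

def preorders(n):
    """All preorders on {0, ..., n-1}, as frozensets of pairs."""
    diag = {(x, x) for x in range(n)}
    off_diag = [p for p in product(range(n), repeat=2) if p[0] != p[1]]
    found = []
    for r in range(len(off_diag) + 1):
        for extra in combinations(off_diag, r):
            rel = frozenset(diag | set(extra))
            transitive = all(
                (x, z) in rel
                for (x, y) in rel
                for (p, z) in rel
                if p == y
            )
            if transitive:
                found.append(rel)
    return found

def longest_chain(n):
    """Length of the longest chain of preorders ordered by strict inclusion."""
    best = {}
    # sorting by size guarantees subsets are processed before supersets
    for rel in sorted(preorders(n), key=len):
        best[rel] = 1 + max((best[q] for q in best if q < rel), default=0)
    return max(best.values())

print([longest_chain(n) for n in range(1, 5)])  # [1, 3, 6, 10]
```

This is exponential in $n^2$, so it only works for very small sets, but that is enough to spot the pattern.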

## The triangular numbers

I didn’t, but luckily the OEIS does! These numbers are exactly the triangular numbers (A000217). But why‽

This puzzled me, so I asked around. Together with Todd Schmid, I found an inductive proof of why the triangular numbers show up here. Below, I will show the main construction.

## Proof

Let $f(X)$ denote the length of the longest preorder chain on the set $X$. I will show that

$$f(X \sqcup Y) \;=\; f(X) + f(Y) + |X \times Y|.$$

Here, $\sqcup$ is the disjoint union of sets, $\times$ the cartesian product of sets and $|\cdot|$ the cardinality of a set. As a special case, we may take $Y$ to be a singleton and retrieve an inductive proof that $f(X) = \tfrac{1}{2}\,|X|\,(|X| + 1)$.
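Taking $Y$ to be a singleton turns the formula into the recurrence $f(n+1) = f(n) + n + 1$, which indeed generates the triangular numbers. A quick sanity check (my own sketch):

```python
def f(n):
    """Longest preorder chain length on n elements, via the recurrence
    obtained by splitting off a singleton: f(n) = f(n-1) + (n-1) + 1."""
    return 1 if n == 1 else f(n - 1) + (n - 1) + 1

# matches the triangular numbers n(n+1)/2
print([f(n) for n in range(1, 8)])  # [1, 3, 6, 10, 15, 21, 28]
```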

For the main construction to show the above formula, I find it easier to think about going from the finest relation to the coarsest. We have our sets $X$ and $Y$, and we imagine that elements of $X$ will be below elements of $Y$ during the whole construction; this is without loss of generality. We start with the finest relation $\Delta_{X \sqcup Y}$ and add the pairs $(x, y) \in X \times Y$ one-by-one (in any order). This results in a chain of length $|X \times Y| + 1$:

$$\Delta_{X \sqcup Y} \;\subsetneq\; \Delta_{X \sqcup Y} \cup \{(x_1, y_1)\} \;\subsetneq\; \cdots \;\subsetneq\; \Delta_{X \sqcup Y} \cup (X \times Y)$$

Note that we don’t have to care about transitivity here: each $x \in X$ is below some elements of $Y$, but nothing is below any $x \in X$ and nothing is above any $y \in Y$, so no new pairs are forced.

We continue this chain by using the longest chain for $X$. Again, we don’t have to take action for transitivity: we might introduce a relation $x_1 \sqsubseteq x_2$ and already have $x_2 \sqsubseteq y$, but every $x \in X$ is already related to every $y \in Y$ at this point in the chain, so $x_1 \sqsubseteq y$ holds. After the chain for $X$, we glue on the chain for $Y$ (again no special care for transitivity is needed).

So far we have constructed three chains, but we haven’t arrived at the coarsest relation yet. To do so, we may add any pair $(y, x) \in Y \times X$, and this forces us (by transitivity) to include all pairs in $Y \times X$ at once: every $y' \in Y$ is already related to $y$, and $x$ is already related to every $x' \in X$. And so, we can make exactly one more step in this chain.

Carefully glueing each chain together (the end-points are equal and shouldn’t be counted twice), we obtain the length

$$\big(|X \times Y| + 1\big) + \big(f(X) - 1\big) + \big(f(Y) - 1\big) + 1 \;=\; f(X) + f(Y) + |X \times Y|,$$

as required.
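The construction can be carried out mechanically. Below is a sketch of mine in Python that builds such a chain recursively, taking $Y$ to be a singleton at each step (so the glued-on chain for $Y$ is trivial):

```python
from itertools import product

def build_chain(S):
    """A chain of preorders on S of length |S|(|S|+1)/2, from finest to
    coarsest, following the construction above with Y a singleton."""
    S = list(S)
    if len(S) == 1:
        return [frozenset({(S[0], S[0])})]
    *X, y = S  # split S = X ⊔ {y}
    chain = [frozenset((s, s) for s in S)]  # the finest relation Δ_S
    for x in X:  # step 1: add the pairs (x, y) one by one
        chain.append(chain[-1] | {(x, y)})
    base = chain[-1]
    for rel in build_chain(X)[1:]:  # step 2: glue on the longest chain for X
        chain.append(base | rel)
    # (the chain for the singleton {y} is trivial, so there is nothing to glue)
    chain.append(frozenset(product(S, repeat=2)))  # step 3: one final step
    return chain

print(len(build_chain(range(4))))  # 10, the 4th triangular number
```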

Note that the above chain cannot be made any longer. In the first part, we add single pairs, which cannot be done any better. Then, inductively, we assume the longest chains for $X$ and $Y$. And at the end, we are forced to the coarsest relation by adding any pair of $Y \times X$.

This does not yet mean that this is the longest chain. (It is a local optimum, not necessarily a global one.) Some more thinking is required to convince yourself that there is no longer chain. To be honest, I don’t know a good argument for that.

We now know the exact number of refinements possible (using only the axioms of a preorder):

$$f(X) \;=\; \frac{|X|\,(|X| + 1)}{2}.$$

Unfortunately this precise bound is still quadratic, just as our lazy bound of $|X|^2$ was. (Although the bound is roughly half of the lazy bound.)

This is different if we go to equivalence relations. For equivalence relations, the longest chain of refinements is linear! This is because each equivalence relation corresponds to a partition. And so a chain of refinements corresponds to a chain of surjections

$$X \twoheadrightarrow Q_1 \twoheadrightarrow Q_2 \twoheadrightarrow \cdots \twoheadrightarrow Q_k.$$

Each surjection at best reduces the number of elements by 1. So starting with $n$ elements, we can only reduce the size $n - 1$ times, resulting in a chain of length $n$.
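This merging process is easy to simulate directly on partitions (a small sketch of my own, not from the post): each step merges two blocks, so the chain has exactly $n$ partitions.

```python
def partition_chain(n):
    """A maximal chain of partitions of {0, ..., n-1}: merge two blocks
    per step, going from the discrete partition to the trivial one."""
    blocks = [{i} for i in range(n)]
    chain = [[set(b) for b in blocks]]
    while len(blocks) > 1:
        blocks = [blocks[0] | blocks[1]] + blocks[2:]  # merge the first two blocks
        chain.append([set(b) for b in blocks])
    return chain

print(len(partition_chain(5)))  # 5, linear in n
```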

## For nominal sets

I wanted to know the precise bound in the context of automata learning of nondeterministic nominal automata. Our current bound (based on the lazy $n^2$ argument) was too high for my taste. Unfortunately, the exact bound is still high (asymptotically still quadratic). The formula

$$f(X \sqcup Y) \;=\; f(X) + f(Y) + |X \times Y|$$

also holds for nominal sets, I believe. Here you should read $|X \times Y|$ as the number of orbits of the product $X \times Y$, which can be quite big.

As a base case, we have the single-orbit sets $\mathbb{A}^{(k)}$ (the set of $k$-tuples of distinct atoms), for which I have a conjectured formula for $f$. As an example, the longest chain on $\mathbb{A} = \mathbb{A}^{(1)}$ has length 2, given by $\Delta_{\mathbb{A}} \subsetneq \mathbb{A} \times \mathbb{A}$. There is no longer chain, since $\mathbb{A} \times \mathbb{A}$ only has two orbits.

## Conclusion

There is not much more to say. I really like relating bounds of algorithms to chains of mathematical objects. In this case I considered chains of preorders to obtain a bound for a learning algorithm. Doing so, I tightened a bound from $n^2$ to the triangular numbers $\tfrac{1}{2}\,n\,(n + 1)$.

Again, thanks to Todd and his partner for helping me with the proof. I posted this problem in a Slack channel of a research group and, as I understand it, they had fun solving it during a dinner in London! Thanks!