The upside-down polynomial transformation

Looking at the properties of universally quantified types, I found the following transformation: take a polynomial functor F(a) and define the new type

    u = ∀a. F(a) → a

(The arrow → means the function type, or the exponential object in the category.) The type u can be computed explicitly if we know the polynomial

    F(a) = c_0 + c_1 a + c_2 a² + ...

which, in the categorical language, is written as

    F(a) = c_0 + c_1 × (1 → a) + c_2 × (2 → a) + ...

Now, the type u = ∀a. F(a) → a can be computed as

    u = (c_0 → 0) × (c_1 → 1) × (c_2 → 2) × ...

by using the following four properties:

    ∀a. (z → a) → a = z,
    ∀a. (z × F(a)) → a = ∀a. z → (F(a) → a) = z → (∀a. F(a) → a),
    ∀a. ((P(a) + Q(a)) → a) = ∀a. (P(a) → a) × (Q(a) → a),
    ∀a. G(a) × H(a) = (∀a. G(a)) × (∀b. H(b)).

(The first property should follow from the Yoneda lemma somehow; I'll have to look this up.)

In the algebraic notation, the result is written as the product of powers,

    u = 0^{c_0} 1^{c_1} 2^{c_2} ...

(So if c_0 is nonzero, the whole thing is equal to 0; the coefficient c_1 is ignored in any case; and the nontrivial information starts from c_2 onwards.)

In other words, each term of the polynomial is turned upside-down, and all summations are converted into multiplications. This vaguely resembles the duality relation in category theory, where products are dual to coproducts (i.e. sums) and all arrows are reversed.

What a curious transformation: it turns the polynomial "upside-down" and exposes information about its coefficients in a weird way. I've never seen anything like this before. I don't know how to express this transformation algebraically.

What if the functor F is not polynomial (and not covariant)? The resulting type `u` could be a recursively defined type in some cases, so things certainly become more complicated in general. For example, let F(a) = (1 + a) → a. The type u = ∀a. F(a) → a becomes

    u = ∀a. ((1 + a) → a) → a = ∀a. ((1 → a) × (a → a)) → a

How can we implement a value of type `u`? We are given two functions, say e : 1 → a and f : a → a, and we are supposed to produce a value of type `a`. We could call e(1); but we could also apply f to e(1); and we could apply f many times to e(1). So, the possible values of type `u` are parameterized by a nonnegative integer, according to how many times we apply f to e(1). This type is usually denoted Nat and can be defined by the recursive equation Nat = 1 + Nat. If we use an explicit recursion constructor such as the Y-combinator, we could write

    Nat = Y (a → 1 + a)

Note the similarity between the formula a → 1 + a and the formula F(a) = (1 + a) → a: these two formulas are categorically dual to each other. We obtained Nat as the upside-down transformation of F. Therefore, the upside-down transformation has again reversed the categorical arrows and given us the recursive type Nat.
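To make this concrete, here is a minimal sketch in Haskell (using the RankNTypes extension; the names `U`, `fromInt`, and `toInt` are invented for this illustration, and the sum 1 + a is represented as `Maybe a`). Up to the usual caveats about nontermination in Haskell, the values of this non-recursive type correspond to the nonnegative integers:

    {-# LANGUAGE RankNTypes #-}

    -- u = ∀a. ((1 + a) → a) → a, with the sum 1 + a represented as Maybe a.
    type U = forall a. (Maybe a -> a) -> a

    -- Build a value of U by applying the given function n times to its base
    -- case (the Nothing branch); assumes n >= 0.
    fromInt :: Int -> U
    fromInt 0 = \g -> g Nothing
    fromInt n = \g -> g (Just (fromInt (n - 1) g))

    -- Instantiating a = Int recovers the number of applications.
    toInt :: U -> Int
    toInt u = u step
      where
        step :: Maybe Int -> Int
        step Nothing  = 0
        step (Just k) = k + 1

For example, `toInt (fromInt 3)` evaluates to 3: the value `fromInt 3` is literally "apply the given function three times to the base case", which is how a natural number can be stored in a type whose definition never mentions recursion.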
Another similar example is F(a) = (z + a × a) → a. The dual formula is a → (z + a × a), and the recursive type defined by that formula is a binary tree with leaves of type z. Let us check that the same type comes out of the upside-down transformation of F(a):

    u = ∀a. ((z + a × a) → a) → a = ∀a. ((z → a) × (a × a → a)) → a

A value of type `u` is a function with two arguments, `e : z → a` and `f : a × a → a`, returning a value of type `a`. The only way of implementing a value of type `u` is to have a value of type `z` and use `e` on that, or to apply `f` to two previously computed values of type `a`. All possible values of type `u` are therefore some `e(z)` for various given values of type `z`, combined together via `f`. The resulting expression may look like

    f(f(e(z1), f(e(z2), e(z3))), e(z4))

The set of all these expressions is the set of all binary trees with leaves of type `z`.

There is a curious effect here: a recursive type, such as a binary tree or Nat, can be defined using a non-recursive equation (the formulas that define `u` are never recursive). In some cases, the non-recursive equation has no solutions (the type cannot be implemented). In those cases, we may conclude that the corresponding recursive equation is ill-defined, i.e. does not actually define a type. Examples of such equations are

    p1 = z → p1,  or, written via the Y-combinator,  p1 = Y (a → (z → a))

and

    p2 = p2 × p2,  or  p2 = Y (a → a × a).

The corresponding "upside-down" formulas are

    u1 = ∀a. ((z → a) → a) → a
    u2 = ∀a. ((a × a) → a) → a

Neither of these types can be implemented. For instance, to implement a value of type `u2`, we must produce a value of type `a` given a function f : (a × a) → a. The only way of getting an `a` would be to call `f` on an argument of type `a × a`, but we don't have any values of type `a` yet. Similarly, we cannot implement any values of type `u1`.

The "upside-down" encoding thus provides an easy criterion for the "implementability" of recursive type definitions. I am actually not sure what this "implementability" means. It is no problem to write code that defines some values of the types p1 and p2 in a programming language. However, both of them seem quite useless for any practical application: values of these types cannot be used for any reasonable computation, because they always enter an infinite loop of one sort or another.

I have always found it quite difficult to reason about such recursive types. "Domain theory" seems to be a very obscure branch of type theory, with no good books on it and no clear examples written with a practitioner in mind. So perhaps the "upside-down" encoding is a better way of handling these types: if the "upside-down" encoding `u` has no implementations (which is relatively easy to check), the recursive equation is meaningless.
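Here is a similar Haskell sketch for the binary-tree example, written in the curried form of the encoding (the names `TreeU`, `Tree`, `leaf`, `branch`, and `toTree` are again invented for this illustration); the final comment notes the situation for `u2`:

    {-# LANGUAGE RankNTypes #-}

    -- u = ∀a. (z → a) → (a × a → a) → a, the curried binary-tree encoding.
    type TreeU z = forall a. (z -> a) -> (a -> a -> a) -> a

    -- The ordinary recursive binary tree with leaves of type z, for comparison.
    data Tree z = Leaf z | Branch (Tree z) (Tree z)
      deriving Show

    -- The only ways to produce a TreeU: wrap a leaf value with e, or combine
    -- two previously built values with f.
    leaf :: z -> TreeU z
    leaf z = \e _f -> e z

    branch :: TreeU z -> TreeU z -> TreeU z
    branch l r = \e f -> f (l e f) (r e f)

    -- Instantiating a = Tree z recovers the recursive type.
    toTree :: TreeU z -> Tree z
    toTree t = t Leaf Branch

    -- By contrast, u2 = ∀a. ((a × a) → a) → a has no total implementation:
    --   type U2 = forall a. (a -> a -> a) -> a
    -- in Haskell one can only write terms such as \f -> let x = f x x in x,
    -- which rely on general recursion (or undefined) and never produce a
    -- useful result, matching the remark about p2 above.

For instance, `toTree (branch (leaf 1) (branch (leaf 2) (leaf 3)))` gives `Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3))`, so the non-recursive type `TreeU z` carries the same information as the recursive `Tree z`.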