Thursday, May 3, 2012

The Essence of Dynamic Typing

Dynamic vs static typing discussions have probably existed since Bertrand Russell tried to formalize set theory and somebody questioned his preferences. One would presume that, like most other discussions in math and science, it would be mostly resolved by now due to the sheer pressure of evidence and well-thought-out arguments from both sides. Sadly, like most debates in programming, this one is filled with misconceptions and both sides are famous for speaking past each other. I'll try to find the essence of dynamic typing, the one feature that is supposed to add expressiveness to dynamic typing (not to dynamic languages in general; this won't cover Lisp-like macros, eval, Smalltalk become:, reflection, code loading, etc.) and see what it fundamentally means.

Dynamic typing offers us the possibility of writing programs that are partially defined and may fail on certain inputs. Let's see an example (using Javascript syntax):

    function example(b, x) {
        if (b) {
            return length(x);
        } else {
            return x * 2;
        }
    }


The function above is well defined if b is true and x is applicable to length(), or if b is false and x is applicable to *2. The programs example(true, "abcdef") and example(false, 3) both give the same result and should not fail at runtime, while example(false, "abcdef"), example(true, 3), example(3, false), etc., have undefined behavior. The essence of dynamic typing is to move this undefined behavior to run time, instead of trying to disallow it at compile time, allowing more programs to be expressed.


It's also possible to express and try to run an infinite number of programs that only evaluate to undefined behavior, with no hope of useful work being computed. Most, if not all, in the static typing camp find this downside to be greater than the added expressiveness.

Going back to our example, I think it's safe to say that there's no value in programs exhibiting only undefined behavior (i.e. all paths lead to crashes) and there's value in finding out as early as possible whether a program goes in that direction. OTOH dynamically typed programs usually have many valuable paths even if undefined paths exist. Actually the whole dynamic typing argument can be boiled down to:
The value of executable paths is greater than the value of unexecutable paths. Undefined behavior in unexecutable paths cannot be observed.
In this point of view, trying to avoid the execution of paths that a program doesn't execute is a wasteful tautology: "avoid doing what you won't do" is meaningless advice.

As an aside, it's interesting to notice that such a language can have an erasure model of compilation and doesn't require runtime tags. Of course it will be unsafe if the unexecutable paths suddenly are executed, but that is another issue.

The example program can also be written in OO style, but the idea is the same.

Fundamentally dynamic typing relies on undefined behavior for some code paths, which, surprisingly, exists in almost all statically typed languages. In Haskell, for example, it's easy to add undefined behavior by using the aptly named undefined value. What dynamically typed languages offer is a single static type (i.e. any or object or stuff) for all expressions, so you can write down any kind of expression and they all type check. One could do the same in Haskell using a modified Prelude and not using data declarations, essentially having something like this:


    data Any = B Bool | N Int | C Any Any | Nil | F (Any -> Any)
    -- we support booleans, numbers, cons cells, nil and functions


    length (Nil      ) = N 0
    length (C car cdr) = case (length cdr) of
                             N n -> N (1 + n)
    -- let it crash on other paths


    example (B True)  x     = length x
    example (B False) (N n) = N (n * 2)


This has the same behavior and static guarantees. For this small example static typing won't get in the way of writing the same programs we would in a dynamically typed language. Of course this breaks down when we want to declare our own data types, because we need to extend Any to support structures. We can do it using other features of Haskell or extend the language, but this exercise is pointless: any Turing-complete language can encode another Turing-complete language, and any static type system can express a dynamic type system, because a dynamic type system is statically unityped.
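
Just to make "we can do it using other features of Haskell" concrete, here is one crude sketch of such an extension (my illustration, not from the post; the tag-plus-field-list representation is an arbitrary choice):

    -- A sketch: let the unityped encoding carry user-defined structures by
    -- adding a constructor holding a tag and a list of named fields.
    data Any = B Bool
             | N Int
             | C Any Any
             | Nil
             | F (Any -> Any)
             | Struct String [(String, Any)]
             -- e.g. Struct "Point" [("x", N 1), ("y", N 2)]

    -- Field access crashes on anything that isn't a structure with that
    -- field, mirroring the undefined behavior above.
    getField :: String -> Any -> Any
    getField name (Struct _ fields) = case lookup name fields of
                                          Just v -> v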


If Haskell had no undefined value (or any equivalent) we could still encode the undefined behavior like this, using U as the undefined value:

    data Any = B Bool | N Int | C Any Any | Nil | F (Any -> Any) | U


    length (Nil      ) = N 0
    length (C car cdr) = case (length cdr) of
                             N n -> N (1 + n)
                             U   -> U
                         -- we propagate the undefined case
    length _           = U
    -- all other cases return the undefined primitive

    example (B True)  x     = length x
    example (B False) (N n) = N (n * 2)
    example _         _     = U


This is less convenient, but it is a syntax issue, not one of semantics.

Dynamic and static are language styles that are very hard to reconcile. Even if statically typed languages can encode dynamically typed programs, you end up with two sub-languages that don't talk to each other: an encoded dynamically typed program can't use a function defined on the statically typed side, because the statically typed function has many properties expressed via the type system while the dynamically typed program has no static invariants whatsoever. Even if a statically typed language offers a decent encoding of dynamic typing, it would be no better than using a dynamically typed language directly. OTOH a dynamically typed language (without a phase before runtime execution) can't offer any of the static typing guarantees, and such an encoding is impossible by design. This doesn't mean static typing is a fundamentally better approach; it depends on how much those additional guarantees are valued over the cost of having to prove your program to the static type checker.

Fundamentally, dynamically typed languages rely on the existence of undefined behavior, either via failed tag matches, doesNotUnderstand: messages or plain crashes. This same feature exists in most statically typed languages, so it's possible to encode dynamic typing in them, albeit with less syntactic sugar, but it adds no value to them. Most dynamic languages reify this undefined behavior and allow it to be inspected at runtime, so a program can work around an undefined behavior, which adds a meta-programming feature to the language (e.g. using method_missing in Ruby), but this is orthogonal to the basic idea of allowing programs with undefined paths to be expressed because of the value found in the defined/executable paths.
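
To make the "reify and inspect" remark concrete in the U encoding above (my own illustration; real dynamic languages expose this through hooks like method_missing or doesNotUnderstand: rather than an explicit value):

    -- A sketch: once undefined behavior is an ordinary value (U), a program
    -- can observe it and recover, which is the bare-bones version of what
    -- method_missing-style hooks offer.
    orElse :: Any -> Any -> Any
    orElse U fallback = fallback   -- the undefined case is handled...
    orElse v _        = v          -- ...and defined results pass through

    -- e.g.  example (N 3) Nil `orElse` N 0  evaluates to N 0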

Theoretically it's possible to make all behavior defined: you just need to consider all cases in the primitives. For example, if statements/expressions require a boolean, but many dynamic languages have the concept of truthiness, so one can use anything in the test part of the if and it'll never fail at runtime with some sort of undefined behavior error. You can extend this to all primitives (e.g. arithmetic operators return their unit when presented with invalid values, attribute accessing returns Nil for undefined attributes) and end up with a program that won't crash but may do very unexpected things. This language is equivalent to using on error resume next in classic VB.
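
As an illustration (mine, not the author's), a couple of such total primitives over the Any encoding from before could look like this, with truthiness for conditionals and a "return the unit" rule for arithmetic:

    -- Total primitives: nothing here can crash, but nonsensical inputs
    -- silently turn into something defined.
    truthy :: Any -> Bool
    truthy (B b) = b
    truthy Nil   = False
    truthy (N 0) = False
    truthy _     = True        -- everything else counts as true

    plus :: Any -> Any -> Any
    plus (N a) (N b) = N (a + b)
    plus _     _     = N 0     -- invalid operands collapse to the unit of (+)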

There's research on gradual typing which tries to provide a single type system with static and dynamic parts and minimal overhead. The results from this research are very encouraging, as it would allow a program to have static guarantees over some areas and not over others, which can be very useful for some forms of programming.

Sunday, February 26, 2012

Using laziness to improve the performance of Haskell programs

To improve the performance of a program the only solution is to reduce the amount of work it does. In Haskell we have three options:

  1. Replace an algorithm and/or data structure by one with performance characteristics more suited to our problem. It'll work less because the algorithms work less.
  2. Add strictness to the program so it doesn't unnecessarily delay computations. It'll work less by not spending effort on thunk creation and such.
  3. Add laziness to the program so it doesn't do unnecessary computations. It'll work less by not doing some work that will never be used.

Option 1 is usually the only one available in strict programming languages. Option 2 is the usual candidate when Haskell people talk about improving performance. Option 3 is known, but as it seems counterintuitive when we're trying to optimize for performance we tend to forget about it. It's a great tool to use in some situations and it's always good to keep it in mind.

Suppose we have, inside our program, a function that checks some property of a data structure and depending on the result of this property we branch the flow. Let's say we use the following property:
prop xs = length xs > 2
If the list has at least three elements the property is True, otherwise False. The problem here is that we fold the entire list to find its length. Thanks to laziness we only go through the spine of the list, not the actual elements (which may involve expensive computations of their own), but it's a waste to do all this work just to check whether the list has at least three elements. The following program does the same check and works much less:
prop (_:(_:(_:_))) = True
prop _             = False
The pattern traverses at most three conses of the list. Testing both versions in GHCi demonstrates the performance improvement. It seems a fact of life that these transformations are necessary, but actually this is due to some unnecessary strictness outside our program.
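
One quick way to feel the difference (an illustrative check of mine, not from the post): the pattern-matching version answers even on an infinite list, while the length-based one never returns.

    -- Both versions of the property side by side (the names are mine):
    propLen :: [a] -> Bool
    propLen xs = length xs > 2

    propPat :: [a] -> Bool
    propPat (_:_:_:_) = True
    propPat _         = False

    -- In GHCi:  propPat (repeat 'a')  returns True immediately,
    --           propLen (repeat 'a')  never finishes.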

We only need to write down this new version because length returns an Int, which is a strict, unboxed value in GHC. So to check if length xs > 2 we need to compare two strict values which must be fully evaluated, so length must be fully evaluated.

Now that we've found the culprit, what can we do? As usual we are saved by theory and its paladin. Let's digress a bit and find a lazier number representation.

The length of a list is a natural number, not an integer. There's no list with negative length; it makes no sense. If we use the Peano axioms to express natural numbers we end up with the following datatype:
data N = Z   -- A natural N is either zero (Z)
       | S N -- or the successor of a natural (S N)
Naturals have a total order:
instance Ord N where
    Z   <= _   = True
    _   <= Z   = False
    S x <= S y = x <= y
This formulation of the naturals is as lazy as we need. The comparison will only go as far as it needs, until we reach a zero on either side.

Now we restate the property, but using genericLength instead of length and using our natural number formulation (assume the usual Eq and Num instances for N):
prop xs = genericLength xs > (2 :: N)
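
For reference, here is one way those assumed instances could look (a sketch; only the operations that genericLength and the comparison actually need are spelled out):

    instance Eq N where
        Z   == Z   = True
        S x == S y = x == y
        _   == _   = False

    instance Num N where
        fromInteger 0 = Z
        fromInteger n = S (fromInteger (n - 1))
        Z   + y = y
        S x + y = S (x + y)   -- lazy in y: the spine grows before y is forced
        -- (*), abs, signum and negate are omitted; genericLength never uses them
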
Now the strictness problem disappears. If we think about this a bit we'll realize our earlier performance improvement is isomorphic to this, because the spine of a list is isomorphic to the natural numbers.

Let that sink in for a while.

If we write prop using naturals, but in the second form, we have this:
prop xs = case (genericLength xs :: N) of
    (S (S (S _))) -> True
    _             -> False
The only difference between the patterns is that we're using S x instead of _:x. Remembering that _:x means we care only about the shape of the list, not its contents, it becomes clear that the empty list is equivalent to zero and a cons is equivalent to a successor.
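
Spelled out as code (my addition, for concreteness), the list-spine side of the isomorphism is just:

    -- Forget the elements, keep the shape: a cons becomes a successor and
    -- the empty list becomes zero.  This is genericLength at type N.
    spine :: [a] -> N
    spine []     = Z
    spine (_:xs) = S (spine xs)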

Our optimization was just stating in long form the isomorphism between the spine of a list and natural numbers.

Tuesday, December 20, 2011

Understanding higher-order code for free


Recently Rahul Goma Phulore asked for an intuition to understand the Haskell expression:
(∙ repeat) ∙ take
In Haskell both (f ∙) and (∙ g) are syntactic sugar for (λx → f ∙ x) and (λx → x ∙ g) respectively. These are called sections.

Interestingly, sections and function composition are each pretty easy to grasp on their own, but when combined they become food for sphinxes' enigmas.

Runar Oli gave a couple of excellent answers to that particular question, but I thought this is a good example to walk through some beauties from logic and higher-order types. We could try to look at this code and build some sort of intuition based on spacesuits or burritos, but instead we can follow the golden road of logic and exploit a well known fact about programs and types. The Curry-Howard isomorphism states that types are isomorphic to theorems and programs are isomorphic to proofs, so whenever we have a type and a program implementing that type, the program is a proof of whatever theorem the type represents. Usually a type is much more general than the particular program we're writing, so the type doesn't capture the entire semantics of our program. In a sense this is not a good situation, because our program depends on a stricter semantics than the types give, but I'll ignore this conundrum for now and instead focus on the types as theorems bit.

Once we start seeing types as theorems we can apply other results from logic to our problem. For example, if we know our theorem is related to another theorem (e.g. it implies something else) then we can exploit the results of the related theorem. In the excellent paper Theorems for Free Wadler shows that parametricity gives us many useful theorems, because we can't produce polymorphic values ex nihilo. Suppose you have the following type:
f ∷ a → a
Ignoring the implementation (don't even try to think about one), what does this say about f, what is this theorem? For starters, it's a valid, tautological implication that states "a implies a". It also states that there exists a program such that if you give it something that is a proof of a theorem, it'll return something that is a proof of the same theorem, for all possible theorems imaginable. Let that thought sink in for a while: for all possible theorems imaginable.

There is only one proof that is valid for all theorems: a proof that exploits a loophole in the given logic system (if such loopholes exist) and proves anything. In Haskell this loophole is ⊥ (i.e. bottom), the value that inhabits all types, in other words the proof for all theorems. ⊥ can manifest in many guises, but essentially it's due to non-termination: all programs that fail to terminate are isomorphic to bottom and can be used to prove any theorem. Any Turing-complete language suffers from this problem. We can ignore ⊥ even in Haskell because fast and loose reasoning is morally correct, so we'll pretend it doesn't exist for the rest of this post.

Ignoring ⊥, in all its forms, we can see that the only proof that's isomorphic to any given proof is the given proof itself. In other words, the only program that implements a → a is the identity:
λx → x
For more complex types we can have many possible proofs, which means the type is more lenient than our program, and if we depend on the program's semantics we may find some bugs later when we change the program and it still type checks but no longer does what we want. Let's take a look at the type of a function we're interested in:
repeat ∷ a → [a]
It states that for all proofs it'll return a list of equivalent proofs. A list is either empty or an element followed by a list:
data List a = Empty | Cons a (List a)
We can encode algebraic data types as explicit sums of products. Ignoring the labels, the above is equivalent to:
data List a = () + (a × List a)
() is the unit of our algebra. If we see List as a logical term, then the equivalent of unit is ⊤ (i.e. true; in this formulation we can view ⊥ as false, which only means our proof proves nothing). We can also write conjunction (∧) for × and disjunction (∨) for +:
data List a = ⊤ ∨ (a ∧ List a)
A list is either a trivial theorem or the conjunction of a theorem with the list theorem itself. A list is not a very interesting theorem, but the proofs represented by lists are more interesting. Going back to our example, repeat is a theorem that states "a theorem implies either ⊤ or itself", which is also not very interesting. Which kinds of proofs can satisfy this theorem (i.e. type check)? Below are some choices:
repeat x = []    -- ignores the given proof and returns the empty list
repeat x = [x]   -- returns a singleton list with the given proof
repeat x = [x,x] -- a list with the proof twice
repeat x = ...   -- and so on
Actually, as the list theorem is isomorphic to ⊤ or a conjunction of any number of copies of a theorem, we can clearly see that it is isomorphic to the pair of a theorem and a natural number, so all possible proofs of repeat are lists containing any natural number of copies of a proof. This is all neat, but the most important thing here is the free theorem stating that repeat will return either [] or a list of some number of copies of the given element. It can't produce the elements of the list ex nihilo; they must all be equal to the given element.
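
That free theorem can be written down directly (an illustrative property of mine, not from the post): mapping a function over whatever repeat builds is the same as applying the function before repeating.

    -- The free theorem for  repeat ∷ a → [a]  as a testable property,
    -- restricted to a finite prefix so it can actually be checked:
    prop_repeatFree :: Int -> Int -> Bool
    prop_repeatFree n x = take n (map f (repeat x)) == take n (repeat (f x))
      where f = (+ 1)   -- parametricity says this holds for any f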

Going further, let's see the type of (∙ repeat):
(∙ repeat) ∷ ([a] → b) → (a → b)
It takes a function and an a and returns a b. The function takes a [a] and returns a b. Now it's easier to see what it's doing. (∙ repeat) can't produce proofs of b ex nihilo; it must use the given function (remember we disallowed ⊥, so it can't just diverge). Here are some possible proofs that satisfy this theorem:
(∙ repeat) = λf → λx → f []
(∙ repeat) = λf → λx → f [x]
(∙ repeat) = λf → λx → f [x,x]
(∙ repeat) = ... -- and so on
The pattern is identical to the one we saw in the possible implementations of repeat, which is either surprising or unsurprising, depending on how it strikes you. The main difference is that repeat just returned the value (i.e. [] or [x] or ...) while (∙ repeat) takes a function and applies it to the value.

Let's see the type of take:
take ∷ Int → [a] → [a]
This theorem has a few possible proofs:
take = λn → λxs → []
take = λn → λxs → xs
take = λn → λxs → case xs of {[] → []; (x:_) → [x]}
take = ...
None of these implementations do what we expect from take, but all of them type check (i.e. prove the theorem). Now we're seeing how flimsy our types are, but it's mostly because we expect too much from general theorems. Either way, we have the same kind of free theorems here: the resulting list will either be [] or contain only elements from the original list. This free theorem comes from the parametricity of take; every parametric function has some related free theorem.
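
For the record, the free theorem that parametricity gives for take's type can be stated as a property (my phrasing): take commutes with map, precisely because it can only select elements from the list it is given.

    -- The free theorem for  take ∷ Int → [a] → [a]  as a testable property:
    prop_takeFree :: Int -> [Int] -> Bool
    prop_takeFree n xs = map f (take n xs) == take n (map f xs)
      where f = (* 2)   -- parametricity says this holds for any f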

Now what is the type of (∙ take)?
(∙ take) ∷ (([a] → [a]) → b) → Int → b
It looks similar to the type of (∙ repeat), doesn't it? Actually we can make it more similar; let's first contrast the types of repeat and the partially applied take:
repeat ∷  a  → [a]
take 1 ∷ [a] → [a]
Both have the same structure and their sections also look similar:
(∙ repeat) ∷ ([a] → b) → ( a  → b)
(∙ take 1) ∷ ([a] → b) → ([a] → b)
It seems that taking the right section of a function turns it inside out and "promotes" the types on both sides of the → to functions to b. You can see this as a CPS transform of the function, with the continuation as the first argument.
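
A tiny illustration of that continuation reading (my example, in ASCII Haskell): the right section hands the result of take 1 to whatever "rest of the computation" we pass in first.

    -- (∙ take 1): the first argument plays the role of the continuation
    -- that consumes the [a] produced by take 1.
    withTake1 :: ([a] -> b) -> ([a] -> b)
    withTake1 = (. take 1)

    -- withTake1 length     "hello"  ==  1
    -- withTake1 (map succ) "hello"  ==  "i"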

As with (∙ repeat), (∙ take 1) has more interesting free theorems, relating to how it can produce a b from [a]. In most higher order compositions the number of free theorems increases, because we (usually) have more parametricity and the universal quantification limits how we can compose those.

Dissecting the type of (∙ take) we see that it takes a higher-order function and an Int and returns a b. Let's see some possible proofs for this theorem:
(∙ take) = λf → λn → f id
(∙ take) = λf → λn → f (const [])
(∙ take) = λf → λn → f (λxs → case xs of {[] → []; (x:_) → [x]})
(∙ take) = ...
Like what happened with the possible (∙ repeat) proofs, we are mainly repeating the possible proofs for take and making them higher-order. We can fake some things, but not much: namely we can return [] (which is trivial due to being isomorphic to ⊤) or we can pick some elements from the list, but we can't produce elements ex nihilo and we must call f (i.e. our continuation).

We can read the type of (∙ take) as "given a function that uses mappings from lists to lists to produce something, and an Int, it'll return something". Whatever (∙ take) does is limited to looking at the Int and providing a function of type [a] → [a]. f can ignore the function provided by (∙ take) or use it, but it must provide a b.

Finally let's look at our original enigma. First we can use the right section of take to make everything prefix:
(∙ repeat) ∙ take = (∙ take) (∙ repeat)
Remembering the types of (∙ repeat) and (∙ take) we can use the basic application rule to find the resulting type (with some unification):
(∙ repeat) ∷ ( [a] →  b ) → (a → b)
(∙ take)   ∷ (([a] → [a]) →  c     ) → Int → c
As we apply (∙ take) to (∙ repeat) we unify b with [a] and c with (a → b), which is (a → [a]), and we end up with:
(∙ take) (∙ repeat) ∷ Int → (a → [a])
This final type has the same free theorems as take and repeat, namely it will either return [] or a list with some number of copies of the same element.
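
For concreteness (this unfolding is mine, not spelled out in the original discussion), expanding the sections shows which proof of that theorem the expression actually denotes:

    --   (∙ repeat) ∙ take
    -- = λn → (∙ repeat) (take n)
    -- = λn → λx → take n (repeat x)
    -- i.e. a list with n copies of x; in ASCII Haskell:
    f :: Int -> a -> [a]
    f = (. repeat) . take
    -- f 3 'x' == "xxx", the same result replicate 3 'x' gives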

Most of this exercise may seem pointless, but that's mostly due to the uninteresting list theorem and, consequently, to the uninteresting take theorem. With some dependent type encoding the length of the list we could have a much stronger (and less general) theorem for take, and even repeat would be a bit more interesting.

The relation of right sections to continuations is quite relevant to understanding these types and how these combinators work. A left section is the reverse: the result is the continuation of something, instead of expecting a continuation:
(repeat ∙)       ∷ (b → a         ) → (b → [a])
(uncurry take ∙) ∷ (b → (Int, [a])) → (b → [a])
Here I used uncurry to make the correspondence between the types clearer. The reasoning is similar and is left as an exercise for the reader.

Wednesday, November 23, 2011

Understanding Object Systems

There are some recurring discussions in programming that are quite boring, but from time to time you can find new thoughts lurking there. Using ideas from the Theory of Constraints we can model conflicts in a way that the apparent dichotomy evaporates and a better way presents itself.

The particular dilemma I was looking at was Smalltalk's image problem; actually not any of the problems described there, but the usual spiel about how images are against the Unix way and such. The essence of the argument boils down to "I want to manipulate Smalltalk code as source files like everything is done in the Unix way" (many times people just talk about how Vim or Emacs is much better than the image, but the problem goes a little deeper than that), which is an enormous flag indicating that the person probably doesn't understand how Smalltalk works ("Source code in files? How quaint." -- Kent Beck). OTOH the Unix way is also a cornerstone of software engineering, so this isn't a worse-is-better scenario but a true indication that something very important is missing.

A way to solve this dilemma occurred to me: just expose the image as a mount point (e.g. using FUSE), with methods as files inside classes (directories), with categories and protocols as directories containing links to the methods/classes, and with special links like super also provided. As happens with all good ideas, there should already be something on the internet, so I used my google-fu and found exactly what I wanted in a FUSE based Smalltalk description (which unfortunately didn't lead me to actual software to play with). It even described the kind of integration with the live image I was thinking about (e.g. changing a method file and having the change reflected in the image), which is really cool. It's interesting that this concept isn't used by all Smalltalkers around the globe to respond to these complaints, so either it is fundamentally flawed or there is something missing. Looking through the search results I also found many pointers to Smalltalk shells or shells in Smalltalk, which are the same underlying idea presented in a different way (the Stash idea is also very close to this).

In Unix we have (or should have; Plan 9 got this right, but I digress) everything as a file, so we can see processes as files and such. In Smalltalk everything is an object, so it should be just a matter of changing the UI to bridge the two concepts. Going further, we could have every object as an executable file and be able to send messages directly to it (e.g. #12ab76e9 at: 1 put: 'Hello World' would send the at:put: message to the object identified by the hash #12ab76e9). With a proper mount and some path extensions we could do almost everything in Unix, without having to reinvent the shell in Smalltalk.

This solves the source code in files problem, as you can mount your image and use vi, sed, git, etc., to manipulate the definitions. You can even manipulate the image from the shell, implementing a polymorphic mkdir that does the right thing whether it's a class or a category. It looks like a wonderful idea, doesn't it? Or is it too good to be true?

In software I believe there are two questions that every idea must answer before it becomes something useful:

  1. Does it work in theory? You need to really think things through; many times there are theoretical barriers that make a nice idea fail badly in practice.
  2. Is it feasible in practice? How much effort do you need to make it work? Measure this in yaks.

It seems to be workable in theory, so I fear it's unfeasible in practice; after all, Smalltalk images are built around the GUI, so it may require some serious yak shaving to untangle some bits (probably, like all software projects, it'll work almost right and then become almost impossible). But I think it's a worthwhile project and it would help greatly in explaining how object systems actually work.

Saturday, August 20, 2011

What is the meaning of source code?

Today I was thinking about literals in source code and what their meaning is. I ended up in a conversation with @msimoni and we were talking past each other, so I figured it was better to write it down somewhere.


Let's start with a language based on the untyped lambda calculus; the syntax is pretty simple:


    e = x | λx.e | e e


It's not the most pleasant syntax to program in (although it is many times better than the Turing machine). So we decide to add some sugar on top of it: data types, special forms, literals, etc. Hopefully we define these in terms of the calculus primitives and the language does not become more complex to reason about. Essentially, the meaning of the additional syntax is its translation to the calculus. Now we have two choices: either we add this sugar as special syntax or we bake some form of syntax extensibility into the compiler/interpreter/front-end. Before we go down this road I'll digress a bit and talk about source code.

Most languages use source code as a way to express programs and as an input to compilers/interpreters, but when we start viewing it as one representation of the program we need to define what a program is. Eventually we'll use some syntax (usually the same) to denote the program's meaning, but we need to think of the program as an object we can manipulate instead of text to be fed to the compiler. If we're going to store and render the program we don't need to use the same form for both needs. Let's talk about a concrete issue: how do we store and represent numbers?

If we encode naturals in our language using the Church encoding we can easily render 2 to the programmer and accept it as valid input, while we store the program using the complete encoding (i.e. λf.λx.f (f x)). We can even represent λx.2+x while internally keeping it as the expanded form (left as an exercise to the reader :). Of course this makes the storage less efficient than it could be, which leads us to the first observation: literals are just compression mechanisms. They work as compression both for the compiler/interpreter and for us (as it's simpler to read/write 2 than the equivalent encoding). If we just store the encoded number, though, we also need to keep something that reminds us to compress it back to 2 when rendering it; we could rely on pattern matching and equational reasoning, but we'd find out that many compression schemes work with the same encoding (e.g. the user could be meant to see either the number 2 or part of a data structure).
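
A small sketch of the compression idea (mine, in Haskell rather than the untyped calculus): the stored form is the Church encoding and rendering is just a decoder that recovers the literal we show to the programmer.

    -- A Church numeral applies a function n times; this is the "stored" form
    -- (kept monomorphic here to avoid language extensions).
    two :: (a -> a) -> a -> a
    two f x = f (f x)          -- λf.λx.f (f x)

    -- The renderer "decompresses" the encoding back into the literal 2.
    render :: ((Integer -> Integer) -> Integer -> Integer) -> Integer
    render n = n (+ 1) 0       -- render two == 2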


Viewing it in this light we can go back to literals with a new perspective: our language can have a limited set of literals with built-in encoders, or it can have a simpler literal form and an extensible encoding mechanism. Most languages go the first way, but then library programmers end up using strings plus some parsing, or EDSLs. If we go the second way we end up with Lisp-like macros for literals. Even if the language is going to have a fixed set of literal forms, it's better to have almost no intelligence in these than to add support for many specific concerns. For example, it's better to have plain string literals without any embedded escaping mechanism than to have escaping for special characters (that are only special because of the escaping concerns) and to support mechanisms to control terminals or do layout (e.g. \n and friends).