Monday, July 12, 2010

Amazing discussion in 1966!

At the end of Peter Landin's "The next 700 programming languages" there is a very interesting discussion between giants of software engineering (Strachey, Landin, Smith, Young, Abrahams, Naur, and Ingerman). A definite must-read...

Strachey: I should like to intervene now and try to initiate a slightly more general discussion on declarative or descriptive languages and to try to clear up some points about which there is considerable confusion. I have called the objects I am trying to discuss DLs because I don't quite know what they are. Here are some questions concerning DLs: (1) What are DLs? (2) What is their relationship to imperative languages? (3) Why do we need DLs? (4) How can we use them to program? (5) How can we implement them? (6) How can we do this efficiently? (7) Should we mix DLs with imperative languages?

It seems to me that what I mean by DLs is not exactly what other people mean. I mean, roughly, languages which do not contain assignment statements or jumps. This is, as a matter of fact, not a very clear distinction because you can always disguise the assignments and the jumps, for that matter, inside other statement forms which make them look different. The important characteristic of DLs is that it is possible to produce equivalence relations, particularly the rule for substitution which Peter Landin describes as (β) in his paper. That equivalence relation, which appears to be essential in almost every proof, does not hold if you allow assignment statements. The great advantage then of DLs is that they give you some hope of proving the equivalence of program transformations and of beginning to have a calculus for combining and manipulating them, which at the moment we haven't got.
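A present-day aside: the substitution rule Strachey refers to is what we now call referential transparency. A minimal Haskell sketch of the idea, with illustrative names:

    -- A name bound to a pure expression can be replaced by the expression
    -- itself without changing the meaning of the program.
    shared :: Int
    shared = let x = 2 + 3 in x * x        -- 25

    substituted :: Int
    substituted = (2 + 3) * (2 + 3)        -- also 25; the equivalence holds

    -- With assignment statements this syntactic substitution is unsound,
    -- which is exactly why Strachey says the equivalence fails there.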


I suggest that an answer to the second question is that DLs form a subset of all languages. They are an interesting subset, but one which is inconvenient to use unless you are used to it. We need them because at the moment we don't know how to construct proofs with languages which include imperatives and jumps.


How should we use them to program? I think this is a matter of learning a new programming technique. I am not convinced that all problems are amenable to programming in DLs, but I am not convinced that there are any which are not either; I preserve an open mind on this point. It is perfectly true that in the process of rewriting programs to avoid labels and jumps, you've gone half the way towards going into DLs. When you have also avoided assignment statements, you've gone the rest of the way. With many problems you can, in fact, go the whole way. LISP has no assignment statements and it is remarkable what you can do with pure LISP if you try. If you think of it in terms of the implementations that we know about, the result is generally intolerably inefficient--but then that is where we come to the later questions.
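What "going the whole way" can look like in Haskell: the imperative pattern of updating a running total by assignment becomes an accumulating recursion. A minimal sketch:

    -- Sum 1..n without assignment: the "updates" are new argument
    -- values passed to the next recursive call.
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)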

How do we implement them? There have not been many attempts to implement DLs efficiently, I think. Obviously, it can be done fairly straightforwardly by an interpretive method, but this is very slow. Methods which compile a runnable program run into a lot of very interesting problems. It can be done, because DLs are a subset of ordinary programming languages; any programming language which has sufficient capabilities can cope with them. There are problems, however: we need entities whose value is a function--not the application of a function but a function--and these involve some problems.

How to implement efficiently is another very interesting and difficult problem. It means, I think, recognizing certain subsets and transforming them from, say, recursions into loops. This can certainly be done even if they have been written in terms of recursions and not, as Peter Landin suggested, in terms of already transformed functions like iterate or while.
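Landin's "already transformed functions" survive in today's standard libraries. A sketch of the same sum written with Haskell's until, whose looping implementation is written once and reused:

    -- until :: (a -> Bool) -> (a -> a) -> a -> a, from the Prelude,
    -- plays the role of Landin's while/iterate.
    sumTo' :: Int -> Int
    sumTo' n = fst (until (\(_, i) -> i > n)
                          (\(acc, i) -> (acc + i, i + 1))
                          (0, 1))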

I think the last question, "Should DLs be mixed with imperative languages?", clearly has the answer that they should, because at the moment we don't know how to do everything in pure DLs. If you mix declarative and imperative features like this, you may get an apparently large programming language, but the important thing is that it should be simple and easy to define a function. Any language which by mere chance of the way it is written makes it extremely difficult to write compositions of functions and very easy to write sequences of commands will, of course, in an obvious psychological way, hinder people from using descriptive rather than imperative features. In the long run, I think the effect will delay our understanding of basic similarities, which underlie different sorts of programs and different ways of solving problems.


Smith: As I understand the declarative languages, there has to be a mixture of imperative and descriptive statements or no computation will take place. If I give you a set of simultaneous equations, you may say "yes?", meaning well, what am I supposed to do about it, or you may say "yes", meaning yes I understand, but you don't do anything until I say "now find the values of the variables." In fact, in a well-developed language there is not just one question that I can ask but a number of questions. So, in effect, the declarative statements are like data which you set aside to be used later after I give you the imperatives, of which I had a choice, which get the action.


Strachey: This is a major point of confusion. There are two ideas here and I think we should try to sort them out. If you give a quadratic equation to a machine and then say "print the value of x", this is not the sort of language that I call a DL. I regard it as an implicit language--that is, one where you give the machine the data and then hope that it will be smart enough to solve the problem for you. It is very different from a language such as LISP, where you define a function explicitly and have only one imperative, which says "evaluate this expression and print the result."


Abrahams: I've done a fair amount of programming in LISP, and there is one situation which I feel is symptomatic of the times when you really do want an imperative language. It is a situation that arises if you are planning to do programming in pure LISP and you find that your functions accumulate a large number of arguments. This often happens when you have a number of variables and you are actually going through a process and at each stage of the process you want to change the state of the world a little bit--say, to change one of these variables. So you have the choice of either trying to communicate them all, or trying to do some sort of essentially imperative action that changes one of them. If you try to list all of the transitions from state to state and incorporate them into one function, you'll find that this is not really a very natural kind of function because the natures of the transitions are too different.
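An aside: Abrahams is describing the pain that the State monad, a much later development, was designed to remove: the "state of the world" is threaded implicitly instead of every variable being passed by hand. A hedged sketch, with made-up field names, using Control.Monad.State from the mtl package:

    import Control.Monad.State

    data World = World { count :: Int, total :: Int }

    -- Each step changes "the state of the world a little bit"
    -- without listing every variable as an argument.
    step :: Int -> State World ()
    step x = modify (\w -> w { count = count w + 1, total = total w + x })

    runSteps :: World
    runSteps = execState (mapM_ step [1, 2, 3]) (World 0 0)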

Landin: I said in my talk that LISP had not gone quite all the way and I think that this difficulty is connected with going all the way. If we write a function definition where the right-hand side is a listing of expressions such as

F(x) = E1, E2, E3

then we can say that this function will produce a three-list as its result. If now we have another function G(x, y, z) = E, on some occasion we might have an expression such as G(a², b², c²) and we often feel that we should be able to write G(F(t)), and another example which should be allowed is

G(a > b → E1, E2, E3 else E4, E5, E6).

I am not quite sure, but I think you can get around your problem by treating every function as if it were in fact monadic and had a single argument which was the list structure you are trying to process.


Abrahams: This is a difficulty in other programming languages too; you cannot define a function of an indefinite number of arguments.


Naur: I still don't understand this distinction about an implicit language. Does it mean that whenever you have such a language there is a built-in feature for solving equations?


Abrahams: I think the point is whether you are concerned with the problem or are concerned with the method of solution of the problem.


Ingerman: I suggest that in the situation where you have specified everything that you want to know, though the exact sequence in which you evoke the various operations to cause the solution is left unspecified, then you have something which is effectively a descriptive language; if you have exactly the same pieces of information, surrounded with promises that you will do this and then this, then you have an imperative language. The significant point is that it is not all or nothing but there is a scale and while it is probably pretty simple to go all the way with imperatives, the chances of being able to get all the way descriptive is about zero, but there is a scale and we should recognize this scale.

Smith: I think that there is a confusion between implicit or explicit on the one hand and imperative or declarative on the other. These are two separate distinctions and can occur in all combinations. For instance, an analog computer handles implicit declaratives.


Young: I think it is fairly obvious that you've got to have the ability for sequencing imperatives in any sort of practical language. There are many, many cases in which only a certain sequence of operations will produce the logically correct results. So we cannot have a purely declarative language; we must have a general-purpose one. A possible definition of a declarative language is one in which I can make the statements (a), (b), (c) and (d) and indicate whether I mean these to be taken as a sequence or as a set; that is, must they be performed in a particular order or do I merely mean that so long as they are all performed, they may be performed in any sequence at any time and whenever convenient for efficiency.


Strachey: You can, in fact, impose an ordering on a language which doesn't have the sequencing of commands by nesting the functional applications.


Landin: The point is that when you compound functional expressions you are imposing a partial ordering, and when you decompose this into commands you are unnecessarily giving a lot of information about sequencing.


Strachey: One inconvenient thing about a purely imperative language is that you have to specify far too much sequencing. For example, if you wish to do a matrix multiplication, you have to do n³ multiplications. If you write an ordinary program to do this, you have to specify the exact sequence in which they are all to be done. Actually, it doesn't matter in what order you do the multiplications so long as you add them together in the right groups. Thus the ordinary sort of imperative language imposes much too much sequencing, which makes it very difficult to rearrange if you want to make things more efficient.
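Strachey's matrix example, as a minimal Haskell sketch: the definition states the grouping of the n³ multiplications into sums and nothing about their order, which is exactly the freedom he wants to preserve.

    import Data.List (transpose)

    -- Each result entry is a sum of products; no evaluation order for
    -- the multiplications is specified beyond the grouping into sums.
    matMul :: [[Double]] -> [[Double]] -> [[Double]]
    matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ]
                 | row <- a ]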

Thursday, May 27, 2010

Expression and Data

So functional programming is about expressions instead of statements. As such, it is interesting to note that expressions are also how you "express" your data model. There was an interesting example of that on the Haskell Cafe mailing list:

A question was asked: "Why Either = Left | Right instead of something like Result = Success | Failure?" This was the interesting part of the answers:

Either is a generic sum type. That is, "Either A B" only means "either you have an A, or you have a B". Use of Left to represent failure is merely a matter of convention. Similarly, the generic product type in Haskell is the 2-tuple--"(A, B)" only means "you have both an A and a B".
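To make the convention concrete, here is a hedged sketch (the Result synonym and function names below are illustrative, not standard):

    -- Either is the generic sum type; (,) is the generic product type.
    -- "Failure on the Left" is only a convention a synonym can document.
    type Result e a = Either e a

    parsePositive :: Int -> Result String Int
    parsePositive n
      | n > 0     = Right n               -- by convention: success
      | otherwise = Left "not positive"   -- by convention: failure

    both :: (Int, String)                 -- a product: an Int AND a String
    both = (42, "answer")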

Tuesday, May 11, 2010

Monadic Web design in future of Cloud Computing?

In all discussions around cloud computing, at least to me, it seems that there is too much attention to low-level detail and not enough on the bigger, more abstract picture.

I think this presentation describes an interesting alternative to the conventional thinking on what cloud computing will be, or at least should be.

Even though the presentation talks about search, there may be a bigger point in the approach to the problem.

In the presentation the speaker shows that an application can be broken down into various well-known notions. I would think map/reduce would be one such notion. But there are other notions, such as nondeterminism, backtracking, ..., that an application may be based on.

In the context of the cloud, looking at it abstractly, if the underlying computational notions are cloud-aware/capable, or at least well understood as cloud components, then an application written on top of them can run in a cloud just as easily as it can run on a single machine. The notions can be made to hide all the plumbing and make the application cloud-agnostic.

The big change from existing imperative programming is a re-evaluation of the application along a functional model, to separate the problem-domain logic from the notions. In the new model the application can be made to run in the cloud by changing only the underlying notions. This is exactly what happened when people redid their applications to run in the cloud as Map/Reduce. But Map/Reduce can't be the only notion that can be made into a cloud component. To find others, one needs to go back and study the notions of computation in more abstract terms and implement them as cloud components. The presentation shows examples of such an approach in the context of sophisticated search queries.
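As a small illustration of what a "notion" can mean, here is a hedged Haskell sketch: nondeterminism captured by the list monad. The application logic only says "pick one of these"; the notion underneath decides how the alternatives are explored, which is the part one could imagine replacing with a cloud-backed implementation:

    import Control.Monad (guard)

    -- Pythagorean triples up to n, written against the notion of
    -- nondeterministic choice; the list monad supplies the plumbing.
    triples :: Int -> [(Int, Int, Int)]
    triples n = do
      a <- [1 .. n]
      b <- [a .. n]
      c <- [b .. n]
      guard (a*a + b*b == c*c)
      return (a, b, c)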

Monday, April 12, 2010

Conceptual Tool

In Wikipedia, Domain Specific Language is defined as:

a programming language or specification language dedicated to a particular problem domain, a particular problem representation technique, and/or a particular solution technique


I have been looking for a better "frame", and I may have found one.

I was looking at the paper A domain-specific language for experimental game theory. And this Tutorial in game theory had a sentence that best captured the "DSL think" that I am looking for. On the 2nd slide it says:

"To analyze strategic behavior we need the conceptual tool of game theory..."


A DSL essentially allows you to apply conceptual tools to your domain. The tools in turn need the underlying data in a format that is also covered by your DSL.
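A toy Haskell sketch of the idea (all names here are made up for illustration): the DSL terms describe a game, and the conceptual tool is an analysis function over those terms:

    data Player = P1 | P2 deriving (Show, Eq)

    -- The DSL: a game is either a payoff or a decision point.
    data Game = Payoff (Int, Int)
              | Decision Player [(String, Game)]

    -- The conceptual tool: backward induction over the game tree,
    -- each player choosing the branch that maximizes their own payoff.
    solve :: Game -> (Int, Int)
    solve (Payoff p)       = p
    solve (Decision pl ms) = best [ solve g | (_, g) <- ms ]
      where
        best = foldr1 (\a b -> if pick a >= pick b then a else b)
        pick (x, y) = if pl == P1 then x else y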

Friday, January 22, 2010

Functional Programming Think...FP is a category!

In many of the articles/books on category theory, the subject is explained in terms of mathematical concepts such as rings, sets, and fields... I have been struggling to see how it all relates to functional programming. So this might be useful for others too...

In section 2.2 of these Category Theory lecture notes there is an interesting explanation of how functional programming forms a category.
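The picture, roughly (the usual "Hask" reading, glossing over bottom and seq): types are the objects, functions are the arrows, (.) is composition, and id is the identity arrow. The category laws are then familiar facts:

    f :: Int -> String
    f = show

    g :: String -> Int
    g = length

    inc :: Int -> Int
    inc = (+ 1)

    -- identity:      f . id == f  and  id . f == f
    -- associativity: (g . f) . inc == g . (f . inc)
    assocHolds :: Bool
    assocHolds = ((g . f) . inc) 3 == (g . (f . inc)) 3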

I was trying to see if Design Pattern concepts (especially compositional patterns) have been explained in terms of functors, monads, .... I ran into this post about category theory that is enlightening...

A metaphor for categories and mappings

This section in particular:

For a really simple, real world example, consider a lawnmower. When out of gas/petrol, it needs to be refueled. The category of empty lawnmowers has an arrow/mapping/morphism to the category of full lawnmowers. Here the focus is more on the _process_ of mapping one category into another, or mapping one collection of things into another collection.

It's been said that category theory places mappings/arrows on a fully-equal footing with objects/points/things.

Now back to the lawnmower example. A child walks out to watch his father mowing the lawn. The lawnmower runs out of gas. The child says "Daddy, I think the lawnmower needs a drink."

What the child is doing is using his own set of mappings, from thirsty to not thirsty, to metaphorize the process of fueling the lawnmower. There's a mapping between these mappings, a functor. (I happen to think this categorical way of thinking is immensely important for how we understand the world, how we might program artificial intelligences, and so on. Even more than "design patterns," I think we and other creatures chunk the world into pieces we can understand by finding the mappings and functors and so on which allow us to grok the world. We are, I think, category theory engines.)



Also this, from the comments:

A more useful way to think about category theory compared to set theory is to view it as a kind of inversion of reasoning. In set theory everything is built up, and we understand objects by seeing what they are made of -- picking apart their internals. We relate two objects by comparing what they are made of. Category theory turns that on its head and takes relationships between objects as primitive. From CT's point of view objects are opaque; we understand an object not by peeking inside at its internal structure, as we would if we were using set theoretic thinking, but rather by examining how the object relates to other objects.
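In Haskell terms this is the Functor mindset. A hedged sketch: the type's internals are irrelevant to users, and what matters is that the mappings relating it to other types preserve structure:

    -- Users need not peek inside Box; they understand it through the
    -- arrows (functions) that relate it to other types.
    data Box a = Box a

    instance Functor Box where
      fmap f (Box a) = Box (f a)   -- preserves id and composition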

Wednesday, January 20, 2010

Functional Programming Think

It seems that as I am learning more Haskell, I find more nuggets in papers that I had read before. For instance, John Hughes's Generalising Monads to Arrows paper has some very interesting insights into the functional programming mindset.


If this were an isolated case we might simply ignore it. But Swierstra and Duponcheel’s idea is clearly much more widely applicable: to optimise a combinator library, redefine the combinators to collect static properties of the computations they construct, and then use those static properties to optimise the dynamic computations. If we think of a library as defining a domain specific ‘language’, whose constructions are represented as combinators, then Swierstra and Duponcheel’s idea is to implement the language via a combination of a static analysis and an optimised dynamic semantics. We may clearly wish to do this very often indeed. But every time we do, the type of >>= will make it impossible to use a monadic interface!
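A hedged sketch of the construction Hughes is describing (type and field names are illustrative, loosely following Swierstra and Duponcheel's parsers): pair a static description with the dynamic computation, so the static parts compose without running anything. Monadic >>= forbids this because its second argument is an opaque function:

    -- The static half: properties known before any input is seen.
    data StaticParser s = SP { canBeEmpty   :: Bool
                             , firstSymbols :: [s] }

    -- A parser is static information plus a dynamic parsing function.
    data Parser s a = P (StaticParser s) ([s] -> Maybe (a, [s]))

    -- Sequencing combines the static halves without running anything;
    -- with >>= the second parser would be hidden inside a function.
    seqStatic :: StaticParser s -> StaticParser s -> StaticParser s
    seqStatic (SP e1 f1) (SP e2 f2) =
      SP (e1 && e2) (if e1 then f1 ++ f2 else f1)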



And also on Category Theory

3.1 On Category Theory
Before we do so, we make a short digression on the subject of category theory. The concept of a monad was developed by category theorists long before it eventually found an application in functional programming. Some might find it surprising that something so abstract as category theory should turn out to be useful for something so concrete as programming. After all, category theory is, in a sense, so abstract as to be rather unsatisfying: it is ‘all definitions and no theorems’, almost everything turns out to be a category if you look at it long enough, to say something is a category is actually to say very little about it. The same is true of most categorical concepts: they have very many possible instantiations, and so to say that something is, for example, a monad, is to say very little. This extreme generality is one reason why it is hard for the beginner to develop good intuitions about category theory, but it is hardly surprising: category theory was, after all, developed to be a ‘theory of everything’, a framework into which very many different mathematical structures would fit. But why should a theory so abstract be of any use for programming? The answer is simple: as computer scientists, we value abstraction! When we design the interface to a software component, we want it to reveal as little as possible about the implementation [think algebra vs calculus]. We want to be able to replace the implementation with many alternatives, many other ‘instances’ of the same ‘concept’. When we design a generic interface to many program libraries, it is even more important that the interface we choose have a wide variety of implementations. It is the very generality of the monad concept which we value so highly, it is because category theory is so abstract that its concepts are so useful for programming.
It is hardly surprising, then, that the generalisation of monads that we present below also has a close connection to category theory. But we stress that our purpose is very practical: it is not to ‘implement category theory’, it is to find a more general way to structure combinator libraries. It is simply our good fortune that mathematicians have already done much of the work for us.

Wednesday, January 13, 2010

Class types as "convenience functions"

I had a hard time understanding the common practice in Haskell of relating a MonadXXX class to an XXX type (e.g. MonadReader and Reader) until I ran into this line:

"The MonadWriter class provides a number of convenience functions for working with Writer monads." (from the Writer Monad tutorial)

Looking at the class interfaces as "convenience functions" for working with the underlying type is an interesting way of looking at it.
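For instance, a minimal sketch using the mtl package (the logStep function is made up for illustration):

    import Control.Monad.Writer

    -- tell is one of MonadWriter's convenience functions; Writer is
    -- the underlying type it works with.
    logStep :: Int -> Writer [String] Int
    logStep x = do
      tell ["got " ++ show x]
      return (x * 2)

    main :: IO ()
    main = print (runWriter (logStep 21))
    -- prints (42, ["got 21"])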

Sunday, January 10, 2010