Monday, December 29, 2008

Subtext and monads

Over the Christmas week holidays, I got to thinking about Subtext again. My conversation with its author had dropped off at the point where he said he was skeptical about getting Subtext to work with normal programming languages, since they aren't functional, and he didn't want to bridge that gap. So I set about thinking about what it would take to bridge it.

This led me to read up on a couple of functional programming languages, and of course to Haskell. The thing that caught my attention was monads, and the assertion that monads are Haskell's answer to imperative programming - how they let you keep things pure and functional, yet still provide the "do this first, then this other thing" style, side effects and all.

This was, of course, the opposite direction - functional code accommodating imperative style, rather than imperative code converted to functional - but I thought the insight might be helpful.

Two days of confusion and a screaming wife later, I realized there's a reason monads have so many tutorials - they're a pretty confusing concept to start with. I'm still on shaky ground with them, especially having not passed the rite of passage of writing one all by myself. The a-ha moment, though, came from a post by Oliver Steele describing a monad as a rule that defines how to get from one statement in a program to the next, and from a comment that even in lazy, pure functional languages, f(g(h x)) implies a sequence in which h is evaluated before g, and g before f.

IMO, what's needed is a tutorial that actually maps a monad's workings onto those of a von Neumann machine. Then, maybe, we imperative programmers will get it :)
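
To show the kind of mapping I mean, here's a minimal sketch in Java rather than Haskell (a hedge up front: Maybe, just, nothing and bind below are made-up names modelled on Haskell's Maybe monad, and the lambdas need a modern Java). It reads Oliver Steele's framing literally: the monad is the rule - bind - for getting from one statement to the next, and this particular rule is "stop at the first missing value":

    import java.util.function.Function;

    // A monad as "the rule for getting from one statement to the next".
    // Maybe is a made-up class, roughly Haskell's Maybe monad: the rule
    // here is to stop the program at the first missing value.
    class Maybe<A> {
        private final A value; // null encodes "nothing"

        private Maybe(A value) { this.value = value; }

        static <A> Maybe<A> just(A a) { return new Maybe<A>(a); }    // Haskell's 'return'
        static <A> Maybe<A> nothing() { return new Maybe<A>(null); }

        // bind: run the next "statement" only if this one produced a value.
        <B> Maybe<B> bind(Function<A, Maybe<B>> next) {
            return value == null ? Maybe.<B>nothing() : next.apply(value);
        }

        public String toString() { return value == null ? "Nothing" : "Just " + value; }

        public static void main(String[] args) {
            // Imperative reading: x = 6; y = x / 2; z = y + 1;
            Maybe<Integer> z = Maybe.just(6)
                                    .bind(x -> Maybe.just(x / 2))
                                    .bind(y -> Maybe.just(y + 1));
            System.out.println(z); // Just 4

            // The same "program" with a statement that fails midway:
            Maybe<Integer> w = Maybe.just(6)
                                    .bind(x -> Maybe.<Integer>nothing()) // say, a failed lookup
                                    .bind(y -> Maybe.just(y + 1));       // never runs
            System.out.println(w); // Nothing
        }
    }

The von Neumann mapping is that bind, not the semicolon, decides what "go to the next statement" means; swap in a different bind and the same chain of statements threads state around, or logs, or backtracks.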

Where does this leave Subtext and my original train of thought? I'm not sure. Would a conversion like this work?
Imperative code --> Monad equivalent --> Functional code

How would the first conversion even happen? Need to think about this.
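
One strawman for that first arrow, staying with Java: take an imperative fragment whose entire state is a single int, turn each statement into a function from the incoming state to a (result, new state) pair, and let bind thread the state through in statement order. (Again a hedge: State, Pair, get, put and pure are made-up names modelled on Haskell's State monad, not any real library.)

    import java.util.function.Function;

    // Each imperative statement becomes: old state -> (result, new state).
    // bind chains two such statements so the state flows through in order.
    class State<A> {
        static class Pair<R> {
            final R result;
            final int state;
            Pair(R result, int state) { this.result = result; this.state = state; }
        }

        final Function<Integer, Pair<A>> run;
        State(Function<Integer, Pair<A>> run) { this.run = run; }

        <B> State<B> bind(Function<A, State<B>> next) {
            return new State<B>(s0 -> {
                Pair<A> p = run.apply(s0);                      // run this statement...
                return next.apply(p.result).run.apply(p.state); // ...then the next, on the new state
            });
        }

        static State<Integer> get()      { return new State<Integer>(s -> new Pair<Integer>(s, s)); }
        static State<Void>    put(int s) { return new State<Void>(old -> new Pair<Void>(null, s)); }
        static <A> State<A>   pure(A a)  { return new State<A>(s -> new Pair<A>(a, s)); }

        public static void main(String[] args) {
            // Imperative original:  x = x + 1;  y = x * 2;
            State<Integer> program =
                State.get()
                     .bind(x -> State.put(x + 1))
                     .bind(v -> State.get())
                     .bind(x -> State.pure(x * 2));

            Pair<Integer> out = program.run.apply(5); // start with x = 5
            System.out.println(out.result);           // 12, i.e. (5 + 1) * 2
            System.out.println(out.state);            // 6, the final value of x
        }
    }

The mechanical part looks plausible for toy fragments like this; what I can't yet see is how it scales to real imperative code with many variables and a mutable heap.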

Tuesday, November 11, 2008

Idea: Automatic unit tests

Although the TDD folks have been pushing the concept of "test code first, actual code next", I've seen that actually doing that requires some maturity and a large amount of discipline. What struck me, though, is that all developers do test their code before putting it out; they just don't always write the unit tests. How do they test it? By running the code and looking at the output - be it in a debugger, via System.out.println() or logger output, or on a GUI display. Everything a JUnit test requires is there, except the assertion, which the developer does visually.

So how about creating a Record mode for the program run - probably as an Eclipse plugin? Once you click Record, it would capture the state of the run - every method call with its inputs and outputs. That by itself is a solved problem; debuggers do it all the time. When the program finishes running, the plugin presents the developer with the execution trace and asks him to pick the calls he'd like to convert into tests. For each chosen call, it lets him decide which preconditions go into setUp() (and which corresponding postconditions go into tearDown()), and lets him inspect the output to create assertions. Once these are chosen, a test is generated with those values automatically and added to that class's test suite.
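
To make that concrete, here's a hypothetical example of what the plugin might emit once the developer picks one recorded call from the trace - say total() on a Cart class (Cart and all its methods are made-up names, and the test is JUnit 3 style, which is where setUp()/tearDown() come from):

    import junit.framework.TestCase;

    // Made-up class under test, included only so the sketch is self-contained.
    class Cart {
        private double total;
        void add(String item, int qty, double price) { total += qty * price; }
        double total() { return total; }
    }

    // What the generated test might look like for one recorded call.
    public class CartTotalRecordedTest extends TestCase {
        private Cart cart;

        protected void setUp() throws Exception {
            // Preconditions captured from the recorded run
            cart = new Cart();
            cart.add("book", 2, 9.99);
            cart.add("pen", 1, 1.50);
        }

        public void testTotalMatchesRecordedRun() {
            // Assertion built from the output the developer used to eyeball:
            // 2 * 9.99 + 1 * 1.50 = 21.48
            assertEquals(21.48, cart.total(), 0.001);
        }

        protected void tearDown() throws Exception {
            cart = null; // postconditions/cleanup chosen in the wizard
        }
    }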

Idea: GTD version of Lightning

I've been using Thunderbird with Lightning, and it's a bit tough since I'm trying to use it GTD-style. The idea, then, is to tweak Lightning so its task features become more GTD-friendly:
  • First off, tasks should have sub-tasks.
  • Tasks must also be allowed true categories, i.e. one task can be in multiple categories; right now only one category is allowed.
  • There should be a task search feature.
  • Grouping by multiple attributes would be useful, e.g. by category (Next Action, Someday/Maybe) and then by context (@Home, @PC, etc. - which I use the location field for).
  • Link tasks to the messages or events that they have been converted into.
... and some more that I can't remember now.

Friday, October 03, 2008

Can I code in Thought?

"I think in pictures, not English"

Way back when I was on probation in the initial stages of my career, I had a colleague who was completely out of his depth with the training assignment given to us. The assignment itself wasn't too difficult - create a client-server library management system, the default assignment. At the risk of instantly dating myself as a dinosaur, I should add that this was in the then-state-of-the-art PowerBuilder IDE. His background, however, was mainframes, and he had no frame of reference to start from on GUIs. Being a smart human being, though, he had an idea that stayed with me:

"I wish there was a 'Give up' button", he said, "which would generate all the code that I'm supposed to create once I press it!"

My brain latched onto that moment - maybe due to the novelty of the idea, or the absurdity of actually implementing something like it. And then it linked that moment to another one, from my college days:
I had quite innocently asked a classmate what language she thought in, and she responded with what I've put at the top of this post:
"I think in pictures"

This caught me unawares and left me dumbfounded, as I'd prepared a whole brag in anticipation of the answer I expected. But my brain latched onto that moment too - probably due to the novelty of the idea, or its presumed absurdity from where I stood then.

Well, what if we did actually think in pictures?

The recent interest in polyglot programming took me back to these two moments, and set me thinking about the real-world machinery that actually lets us convert thought into running code - programming languages.

I was reminded of my own evolution along the programming language scale, starting from imperative/procedural languages (x86 assembly, GW-BASIC, Pascal), through scripting (DOS scripting - I know this might sound blasphemous, but when I was growing up, access to Unix was not easy :) and Linux was still "under development", much like access to the internet - so DOS it was), to object-oriented ones (Object Pascal, C/C++, Java). I call this an evolution for obvious reasons - each language was progressively more abstract, and presumably a better model of the way humans think when solving problems.

These were the languages I could actually run a program in. Then there were others whose workings I could only imagine, having no access to a compiler or runtime; my only contact with them was through used books I bought off the street - and these really intrigued me. Prolog (via logic programming texts from the 70s) was one, and so were Smalltalk (via old computer magazines) and, to some extent, Lisp and Forth (I don't recall the actual sources).

The best of them, however, was APL, of which I managed to get an old but surprisingly well-preserved textbook - and what an exposition of the language it was. It blew my mind that there existed a language that not only did not confine itself to ASCII, but used the normal arithmetic and set theory operators naturally, even extended the semantics of some vector calculus operators into an actual working programming language, and did it all in such a way as to put Perl to shame. If you don't believe me, go ahead and click that link above, and then come back.

More recently, I "discovered" Tcl/Tk, which seemed to pull off a similar Houdini act - but with language extension, and in such a way that the translation agent (compiler or interpreter) needs little knowledge of the extension. How Tcl implements this is intriguing in itself and specific to Tcl, but the feature has (by now) appeared in other languages such as Scala. (I might be wrong in crowning Tcl as the first language to do this, but it's definitely one of the first.)

So both these languages, APL and Tcl, let us mirror what we want to achieve in a manner that is less natural in other languages - nothing new here; that's what DSLs do. But beyond that, these two languages in my experience broke some of the tacit assumptions I had about how to write code - to which I could (now, in retrospect) add Lisp's eval() and macros.

All of this led me to think: what if we "supersized" these breaks from tradition? Would that be closer to the way we think, and therefore let us write better code?

...and a few days of day-dreaming on the CTA buses later...



Here are some of the tangents that train of thought led to; as the list grew, I grouped them:
  • Evolutionary changes:
    • What if we were able to extend ASCII at will?
    • Why are we still fighting the whitespace wars?
    • Do we really need escaping?
    • Syntactic sugar: why do we still call it that? It's there for a reason, isn't it?
    • How about programming languages "support" a language feature instead of just "enabling" it?
    • What if we used a codebase instead of text files in directories, à la VisualAge?


  • Revolutionary Changes:
    • What if we could code in something other than ASCII, or even text?
    • Related: what really requires us to code left-to-right, top-to-bottom, other than a legacy of print media?
    • How about non-modal or multi-modal languages? AKA Polyglot programming on steroids.




forNow = UNTIL_NEXT_POST; giveUpButton.click(forNow);

Intrigued? Read on in future posts for a drill-down on each question. Fair warning: there are no specific answers - just some more thoughts...