Monthly Archives: June 2013

First step into covariant world

It is interesting how particular programming-language concepts made their way into the mainstream — I was able to observe how functions went from the “uhm, do we really need them?” stage to being first-class citizens (“what, you cannot pass a function to a function?”).

The next thing on my watch list is covariance and contravariance — they were finally introduced in C#, but sadly, only partially.
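To show what “partially” means, here is a quick C# sketch of my understanding of the current state: C# 4 brought variance for generic interfaces and delegates, but an overriding method still cannot narrow its return type.

using System.Collections.Generic;

class Foo { public virtual Foo Get() { return new Foo(); } }

class Bar : Foo
{
    // What C# does NOT allow — an override has to repeat the base
    // return type exactly:
    //   public override Bar Get() { return new Bar(); }  // compile error
    public override Foo Get() { return new Bar(); }
}

class Demo
{
    static void Main()
    {
        // What C# 4 did add — variance on generic interfaces;
        // IEnumerable<out T> is covariant, so this compiles:
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;
    }
}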

Since I am currently working back and forth on improving support for functions in Skila, it was hard to miss the opportunity to finally make this happen:

struct Foo 
  def virtual get() Foo = new Foo();
end

struct Bar : Foo
  def override get() Bar = new Bar();
end

You don’t have to mark those methods in any special way — the compiler knows you are overriding the method, so you can return a derived type in a derived class. That’s it — done!

Adding this/self (a reference to the current instance) and THIS/SELF (by analogy, the current type) is just around the corner. In a way it will be a return to one of COOL’s features, yet with more flexibility.


Inheriting methods — almost there

The last two weeks were dominated by upgrading to an LALR(k) parser — I did very little from the Skila TODO list, but something anyway. I almost finished dealing with method inheritance — so far Skila supports the same model as C#. I have one bug to hunt and one optimization to make (skipping the creation of empty virtual tables). I don’t allow multiple class inheritance; in general I would like to add it, but not at great expense, so this feature is on hold.

I rewrote the generation of classes and methods in the backend — now classes are pure structures (fields only) and methods are functions with an extra this argument passed in. Of course I had to write a custom dispatcher for virtual methods — it was easier than I thought, and it is one less step when porting the backend to JS and R.
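The idea, sketched in C# rather than in the actual backend code (all the names below are mine, made up for illustration): a field-only record, free functions taking this explicitly, and a per-type table consulted by the dispatcher.

using System;

// A field-only "class": just data plus a type id used for dispatch.
struct Animal
{
    public int TypeId;   // index into the virtual tables
    public string Name;
}

static class Lowered
{
    // Each method body becomes a function with an explicit `this`;
    // the virtual table maps TypeId to the right implementation.
    static readonly Func<Animal, string>[] SpeakVtable =
    {
        self => self.Name + " says ...",    // TypeId 0 — base type
        self => self.Name + " says woof",   // TypeId 1 — derived type
    };

    // The custom dispatcher — what a virtual call is compiled into.
    public static string Speak(Animal self)
    {
        return SpeakVtable[self.TypeId](self);
    }

    static void Main()
    {
        var dog = new Animal { TypeId = 1, Name = "Rex" };
        Console.WriteLine(Speak(dog));   // prints: Rex says woof
    }
}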

It was a great opportunity to refresh my knowledge of C# — I didn’t know that an interface method declaration is implemented as a method with an empty body — and while doing so, my patience for the two C# books I own ran out. Each time I try to get a definitive answer, all I find is either a story from an author who feels the urge to patronize me, or eight pages of mumbling around a given concept. The Polish editions of C# in Depth by Jon Skeet and Pro C# 2010 and the .NET 4 Platform by Andrew Troelsen are for sale — sold.

The incoming features are standalone functions and finding a new angle for handling data — something between C++, C# and Go. I am thinking of automatic de-/referencing of objects (this would mean no overloading based on pointer depth), with struct defaulting to data and class defaulting to a non-null pointer to data — each default could be explicitly overridden though. We’ll see…
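That split already exists in C# (minus the non-null guarantee and minus the per-use override), so as a quick reminder of how the two defaults behave:

using System;

struct PointData { public int X; }   // "data" — copied on assignment
class PointRef   { public int X; }   // "pointer to data" — aliased

class Defaults
{
    static void Main()
    {
        var a = new PointData { X = 1 };
        var b = a;                  // a copy of the data
        b.X = 2;
        Console.WriteLine(a.X);     // 1 — the original is untouched

        var p = new PointRef { X = 1 };
        var q = p;                  // a copy of the reference only
        q.X = 2;
        Console.WriteLine(p.X);     // 2 — both names point at one object
    }
}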


From goal-driven execution to macros

I keep reading about how the goal-driven execution model works, and while I really like the natural feel of a condition such as:

if (a < b < c)

yesterday I found a piece of code which put me off. Consider printing out the just-read line in Icon:

write(read())

When read hits EOF it fails, and thus the entire chain of expressions fails — here it means write won’t be called. Let’s say you would like to capture the content of the line nevertheless, so you do a little rewrite:

line := read()
write(line)

OK, we have 2 lines, but the code basically does the same thing… Wait, it does not. It is not even close. Now when read hits EOF the chain of expressions ends at the assignment, so the assignment is omitted and line keeps its previous value. Since the second line is a completely unrelated expression, it is executed with the old value of line.

Sure, I am surprised, and maybe part of the surprise is that I am not used to the way Icon works. But I think the transparency of adding temporary variables to hold intermediate results is a pretty fundamental concept, and Icon breaks it — something I cannot like.

From that point there are two approaches — give up, or design an improved goal-driven model (it might be the same path though).

As for giving up, I went back to my old idea of returning a tuple — a pair of the function outcome and an error flag:

if ((success:@,value) = dict["hello"]).success then
  // we have direct access to value
  // success is dropped

The syntax leaves a lot to be desired, so how about adding a when construct which could handle exactly such cases:

when value = dict["hello"] then
  // we have direct access to value

This would be syntactic sugar only — a pure internal rewrite to the if presented above.

And then it struck me — forget Lisp homoiconicity, it’s uniformity, stupid! Lispers have to struggle with ugly syntax:

(if (= 4 (+ 2 2)) (...

instead of clean “syntax for the masses”:

if 4 = 2+2 then...

but Lisp syntax is uniform, and because of that Lispers don’t have to wait for anyone to bring a new construct to the language. They add their when whenever they like — it blends in perfectly.

On the other hand, in C-derivatives all you have is function-like expansion — you cannot add another for or while. Any attempt to bring Lisp-style macros to the C world would require allowing the user to alter the grammar of the language.

I didn’t solve anything with the goal-driven model — instead I added yet another puzzle for myself: what to sacrifice in the design to bring the power of macros to Skila?

If you find this topic interesting, here are more readings for you — The Nature of Lisp, Homoiconicity isn’t the point and Lisp: It’s Not About Macros, It’s About Read.


LALR(k>1) — mission accomplished

It took me too long — 10 days — but I finally did it. LALR(k) is working, and the only missing piece — that’s really ironic — is a parsing problem in Skila for which regular LALR(k) is better suited than forking LALR(1). Even at such a poor pace I am happy to have another tool in my toolbox. If you are going to implement an LALR(k) parser on your own, here is what I’ve learned.

FIRSTk sets

Make sure you are reading the math notation the right way — the algorithm given in “The Theory of Parsing, Translation, and Compiling (vol. 1)” by Alfred V. Aho and Jeffrey D. Ullman is rock solid once you understand it. Compared to my old, naive LALR(1) code, the Aho&Ullman implementation is much shorter and cleaner. Just pay attention to two issues:

  • the empty set (∅) is not the same as a set with the empty element ({ε}) — this makes a difference for the k-bounded concatenation of sets (⊕k): if the first set is empty, you will get an empty set as the result (see the sketch below),
  • initialize F0 for non-terminals with fewer than k terminals only if α is empty. If it is not, it has to be exactly k terminals, and if this condition cannot be fulfilled you don’t add anything for the given production (returning the empty string — ε — in this case would render your algorithm invalid).
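Here is a minimal sketch of that first pitfall (my own helper, not code from the book; a terminal sequence is modelled as a string, one character per terminal, with "" being ε):

using System;
using System.Collections.Generic;

static class SetOps
{
    // The k-bounded concatenation "⊕k": every pair of sequences is
    // concatenated and trimmed to at most k terminals.
    public static HashSet<string> ConcatK(ISet<string> left, ISet<string> right, int k)
    {
        var result = new HashSet<string>();
        // If either operand is the EMPTY SET there is nothing to pair
        // up, so the loop body never runs and the result stays empty,
        // unlike { ε }, which acts as the neutral element.
        foreach (var x in left)
            foreach (var y in right)
                result.Add((x + y).Substring(0, Math.Min(k, x.Length + y.Length)));
        return result;
    }

    static void Main()
    {
        var ab = new HashSet<string> { "ab" };
        Console.WriteLine(ConcatK(new HashSet<string>(), ab, 2).Count);       // 0 — empty set in, empty set out
        Console.WriteLine(ConcatK(new HashSet<string> { "" }, ab, 2).Count);  // 1 — { ε } passes "ab" through
    }
}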

FOLLOWk sets

Either I cannot read (again), or the definition of FOLLOW sets given in “The Theory of Parsing” leads to a very easy implementation which is not useful and — no wonder — not used anymore. In the better-known book “Compilers: Principles, Techniques, and Tools” by Alfred V. Aho et al. the definition is already changed, but no algorithm for building the sets is given. Let me try to describe my algorithm:

A — a non-terminal, U — a symbol (non-terminal or terminal), α, β — sequences of symbols, possibly empty, u — a sequence of terminals, possibly empty, ⊕k — the Cartesian product of two sets in which each pair of sequences is concatenated and trimmed to k symbols,
  1. for every symbol initialize its entry as an empty set,
  2. set the FOLLOW set of the start symbol to a single sequence of k pseudo-terminal EOFs,
  3. build the initial FOLLOW sets from the FIRST sets by iterating over all symbols and looking for them on the RHS of every production — say a production is defined as A := α U β, where U is the symbol we iterate over. Compute FIRSTk(β) — if it is the empty set, add to FOLLOWk[U] the non-terminal A alone; if it is not, add the set FIRSTk(β) ⊕k A (unlike reading math notation, I love writing it, because it makes me look smarter than I really am),
  4. at this point we have two kinds of entries — ones with terminals only, and ones with a single non-terminal at the end, of the form u A. The latter will be used as generators,
  5. iterate over all generators, replacing their entries with u ⊕k FOLLOWk[A],
  6. repeat (5) until no single change is made to the FOLLOWk sets,
  7. remove all generator entries.

Please note that unlike FIRSTk sets, every entry in the final FOLLOWk sets is exactly k terminals long.
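A tiny worked example of my own (k=2, with $ standing for the pseudo-terminal EOF):

S := A a
A := b

step 2: FOLLOW2[S] = { $$ }
step 3: A occurs in S := A a with β = a; FIRST2(a) = { a }, which is not empty,
        so we add a ⊕2 S — a generator entry: FOLLOW2[A] = { aS }
step 5: the generator aS becomes a ⊕2 FOLLOW2[S] = a ⊕2 { $$ } = { a$ }
step 7: no generators remain — FOLLOW2[A] = { a$ }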

Lookaheads and DFA

Computing lookaheads is basically the same algorithm as step (3) of computing the FOLLOWk sets, only now we iterate not over all symbols searching for them in productions, but over productions, looking for the split points between stack and input. After all the computation it is necessary to either remove dead NFA states from the DFA or mark them somehow, because otherwise you will get a lot of conflicts for your grammar. A dead NFA state is a state with a non-terminal following the split point.

Final words

After finding all the bugs you will notice the hardest part is the UX of the parser — how to report the DFA in a meaningful way, how to define precedence rules efficiently and correctly. Those issues are ahead of me, but stay in touch — once I deal with them I will surely share my experience with you. And one last bit of advice — if you are not sure whether you should go with LALR(1) or LALR(k), I suggest the latter. It will force you to use a proper algorithm (Aho&Ullman), not — with all respect — something half-baked by yourself (which will work, yet with a high chance of being inefficient). It will force you to clean up your code related to the DFA as well. All LALR(k>1) parsers are in the same league; the only distinct parser is LALR(1) alone, so starting from it is very likely a waste of time. I love that feeling of peace of mind after a finished job. Now I can enjoy the upcoming weekend with Skila…


Nulls are OK

Judging by the hot disputes about null it should be some kind of problem, but I have to admit I don’t see any. A lot of people complain about getting a NullPointerException (and the like) as a bad thing — for me it is great. It is the mechanism that stops my program and tells me “listen, the flow of your program is incorrect, I prevented bad things from happening, sit down and fix it”. I am grateful for that, because I didn’t add a single extra line to my program and yet the runtime keeps me safe. null values are very useful once you start treating them as normal citizens among your data, the same way you treat zeros when doing computations.

a = b + c;

Are you sweating because “b” could be zero?

Some languages try to paper over the null “problem” by introducing a wrapper class for reference/pointer types (for example Scala with its Option ¹). However, what kind of progress is made by merely renaming null to None? I don’t see any ².

Since C# introduced extension methods I see things differently — to such an extent that I started writing extensions instead of regular methods just to allow null on the caller side (to this day I remember the pride of C++, which guarantees “this” cannot be null).

In many cases you have to deal with null manually, period, but in many others, especially when the job is related to collections, null comes naturally. Consider this task — given an HTML node, extract the first “<p>” node from its children, and then extract all “<span>” nodes from its children. If you use an XPath query, you have one sequence of selectors. How come in classic programming you have to add so many guards against null? Why not solve this without fear of null?

// emptyNode — some Null Object with an empty Children collection
var spans = ((node??emptyNode)
              .Children
              .Where(it => it.Name=="p")
              .FirstOrDefault()??emptyNode)
                .Children
                .Where(it => it.Name=="span");

The only problem, or rather annoyance, is C# — not null.

Kotlin addresses the issue directly but misses the point — instead of introducing a coalesce operator it allows the subsequent call/access to fail (Kotlin “returns” null):

var children = node?.Children;

If “node” is null, the variable “children” will be null as well (Fantom and Groovy also took this approach). I am not saying coalesce is superior in all cases, but adding a feature which moves you from the null domain into the very same null domain does not look like a solution to me.

Skila will use a selective Null Object pattern ³ with a short coalesce operator. For every type you can define a constant null object used for coalescing. And then…

var spans = node->Children
            .Where(it => it.Name=="p")
            .FirstOrDefault()->Children
            .Where(it => it.Name=="span");

“spans” won’t be null! The arrow operator (“->”) is a hidden coalesce call. Because Skila is statically type-checked, it verifies at compile time whether you can use the power of coalescing (it checks that the null object is defined). And it does the right thing — it moves you from the null domain into the not-null domain.

This is probably not a complete blueprint — practice will be the best judge — but even upon completion it won’t be a magic wand. If you work on types that are collections, adding coalesce is like adding grease to the gears; in other cases you should rely on the old “if”, and for the rest — NPE comes to the rescue.
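For the curious, the effect can be approximated in today’s C# with extension methods (all the names here, Node, Empty, CoalescedChildren, are mine, invented for illustration):

using System.Collections.Generic;
using System.Linq;

class Node
{
    // The "fat" null object — a real instance with an empty body, so
    // coalescing turns a null chain into an empty-collection chain.
    public static readonly Node Empty = new Node { Name = "", Children = new List<Node>() };

    public string Name { get; set; }
    public List<Node> Children { get; set; }
}

static class NodeExtensions
{
    // A poor man's "->": extension methods may be called on a null
    // receiver, so we can coalesce and dereference in one go.
    public static List<Node> CoalescedChildren(this Node node)
    {
        return (node ?? Node.Empty).Children;
    }
}

class Usage
{
    static IEnumerable<Node> Spans(Node node)
    {
        return node.CoalescedChildren()
                   .Where(it => it.Name == "p")
                   .FirstOrDefault()
                   .CoalescedChildren()
                   .Where(it => it.Name == "span");   // never null, possibly empty
    }
}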


¹ I say “yes” to Option, but not instead of null pointers — as an addition to them.

² Why Scala’s “Option” and Haskell’s “Maybe” types won’t save you from null (good read, but the example with Dictionary is weak).

³ There might be a little confusion about what the true “Null Object” pattern is, so let’s say Kotlin, Fantom and Groovy use a thin null object (no body, everything falls through to nulls), while Skila will use fat objects, with a real body defined by the user. This allows transforming null collections into empty ones.


LALR(k>1) — FIRST sets

Update: Thanks to help from Hendrik Jan, after 4 days of struggle I can see my mistake — problem solved.

Three days ago I started working on LALR(k>1) to save a weekend solely for Skila development — unfortunately I underestimated the problem, and now not only do I feel dizzy, I also don’t see the end of the work. All three days went into the algorithm for computing FIRSTk sets.

Surprisingly, I didn’t find many sources explaining how to compute FIRSTk sets for k>1. There is the old “The Theory of Parsing, Translation, and Compiling (vol. 1)” by Alfred V. Aho and Jeffrey D. Ullman (yes, that’s 1972), a page “Review of LR(k) and LALR(k) Parsing Theory” by Sean E. O’Connor, and a paper “An Algorithm for Constructing a Semi-LL(2) Grammar’s Parsing Table” by Yoshida Keiichi and Takeuchi Yoshiko.

I have a hard time deciphering the last one, and I cannot use the former two because the algorithm presented there (the same one in both) is, in my opinion, incorrect. If you don’t have a copy of Aho&Ullman, please refer to the mentioned page by Sean E. O’Connor — the notation is the same.

The algorithm uses helper Fi sets to iteratively compute FIRSTk sets. We have:

Fi(a) = { a }

for every terminal a and index i ≥ 0. For non-terminals the starting point is defined as:

F0(A) = { x | ∃ A := x α, |x| ≤ k }

x denotes a sequence (possibly empty) of terminals, α denotes a sequence of symbols (terminals or non-terminals) — also possibly empty.

Two things are worth noting — there has to be an actual production A := x α defined, a derived production is not enough; and second — x is a greedy sequence: if it can match k terminals, it does.

At each iteration we get from the set Fi(A) a no-smaller set Fi+1(A) — i.e. each element of Fi(A) is present in Fi+1(A). The claim is that when we reach the point where Fi(A) is equal to Fi+1(A), we stop, because Fi(A) is our desired FIRSTk(A) set.

And I don’t think this claim holds.

Since it says F0(A) ⊆ F1(A) ⊆ ... ⊆ Fi(A) = FIRSTk(A), to support my doubt it is enough to find an element of F0(A) that does not belong to FIRSTk(A). Consider these productions:

A := a B
B := b

For k=2 what is the FIRSTk(A) set?

FIRSTk(A) = { a b }

Note there is no other string — the FIRSTk(A) set contains just this one element. And what does F0(A) look like?

F0(A) = { a }

This element is not part of FIRSTk(A), and as a counterexample it shows the algorithm is incorrect.

The real question is — where did I make a mistake? Because when it comes to Mr. Nobody finding errors in well-established publications by well-known authors, let’s face it — Mr. Nobody is more often wrong than right. I will be grateful for your comments.

Solution: I misinterpreted the way the x’es are obtained for each A — they can be shorter than k if and only if α is empty.
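To make the fix concrete, here is a small sketch of the whole computation; my own code, not the book’s pseudocode. Terminals are lowercase characters, non-terminals uppercase, a sequence of symbols is a string, and ε is the empty string (every non-terminal is assumed to appear as an LHS):

using System;
using System.Collections.Generic;

class FirstKDemo
{
    // The k-bounded concatenation "⊕k".
    static HashSet<string> ConcatK(IEnumerable<string> a, IEnumerable<string> b, int k)
    {
        var r = new HashSet<string>();
        foreach (var x in a)
            foreach (var y in b)
                r.Add((x + y).Substring(0, Math.Min(k, x.Length + y.Length)));
        return r;
    }

    static Dictionary<char, HashSet<string>> FirstK(List<Tuple<char, string>> productions, int k)
    {
        var f = new Dictionary<char, HashSet<string>>();
        foreach (var p in productions)
            if (!f.ContainsKey(p.Item1)) f[p.Item1] = new HashSet<string>();

        // F0 — the part I got wrong at first: a terminal prefix shorter
        // than k goes in ONLY when it is the entire right-hand side.
        foreach (var p in productions)
        {
            int t = 0;
            while (t < p.Item2.Length && t < k && char.IsLower(p.Item2[t])) t++;
            if (t == k || t == p.Item2.Length)
                f[p.Item1].Add(p.Item2.Substring(0, t));
        }

        // Compute Fi+1 from Fi until nothing changes.
        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var p in productions)
            {
                var acc = new HashSet<string> { "" };   // ε — the neutral element
                foreach (char s in p.Item2)
                    acc = ConcatK(acc, char.IsLower(s)
                        ? new HashSet<string> { s.ToString() }
                        : f[s], k);
                int before = f[p.Item1].Count;
                f[p.Item1].UnionWith(acc);
                changed |= f[p.Item1].Count != before;
            }
        }
        return f;
    }

    static void Main()
    {
        // The counterexample grammar from this post: A := a B, B := b
        var g = new List<Tuple<char, string>>
        {
            Tuple.Create('A', "aB"),
            Tuple.Create('B', "b"),
        };
        var first = FirstK(g, 2);
        Console.WriteLine(string.Join(" ", first['A']));   // prints: ab — no spurious "a"
    }
}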


Constraints on parameter value

From now on I will use the term pointer both for non-null pointers and possibly-null pointers, just to avoid confusion with the mechanism I describe below.

You can declare a function parameter value with 3 types of constraints:

def foo(x MyClass)

is the shorter form of:

def foo(val x MyClass)

The argument is passed by value and you have a guarantee that “x” cannot be changed inside “foo”. If you need speed and you know you work in a single-threaded environment, you can use a shortcut (both forms shown):

def foo(ptr x MyClass)
def foo(ptr val x MyClass)

The argument is passed by pointer (fast), but since the compiler guarantees the data is not changed inside the function, the only worry left is what happens to the pointed-to data from the outside while the function runs — hence the single-threaded caveat.

The second type:

def foo(var x MyClass)

Execution is slow (the value is copied); you can change the value inside the function, but the alterations do not leak outside.

And third one:

def foo(ref x MyClass)

Execution is fast, and not only can you change the value inside — any change is also visible outside the function (the argument is passed by reference). However, because the caller is affected, you have to pass the argument consciously:

foo(ref x);

All the checking is done at compile time. However, one constraint is missing — declaring a function which takes a constant and forces the caller to keep it constant as well. Too academic?

Consider creating a dictionary (a collection that maps keys to values). For fast retrieval a dictionary usually orders the keys in some way — the exact algorithm is not important; all we know is that the keys are kept in some order, and the dictionary uses this fact to skip a good portion of them when doing a lookup. If you create a dictionary, pass in the initial data, and then change the keys on the caller side, the dictionary will not know about the change and won’t reorder the keys. Thus on the next retrieval you can expect anything, which translates to “undefined behaviour”.
One way to solve this problem would be cloning all the keys, but this is inefficient and kills any attempt to optimize the algorithm (once you embed cloning in the dictionary, you cannot get rid of it from the outside).
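The hazard is easy to reproduce in C# (with a hash-based dictionary rather than an ordered one, but the failure mode is exactly the same):

using System;
using System.Collections.Generic;

class Key
{
    public int Value;
    public override int GetHashCode() { return Value; }
    public override bool Equals(object o) { return o is Key && ((Key)o).Value == Value; }
}

class BrokenLookup
{
    static void Main()
    {
        var key = new Key { Value = 1 };
        var dict = new Dictionary<Key, string> { { key, "hello" } };

        key.Value = 42;   // the caller mutates the key AFTER insertion

        // The entry is still there, but the lookup goes to the wrong
        // bucket — the key is effectively lost:
        Console.WriteLine(dict.ContainsKey(key));   // False
        Console.WriteLine(dict.Count);              // 1
    }
}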

So far I see two ways to solve this — either add another constraint:

def foo(const x MyClass)

Meaning — I cannot change “x” inside, and I also require that “x” is constant outside. In other words, the value has to be constant in general, not just in the local scope. Since it is const both ways, it can be silently passed via pointer (fast); note that passing the data explicitly by pointer (“ptr val”) does not give a guarantee of true constness.

Or go the Ruby way and provide a method “freeze” that locks an object against writing. I prefer the first approach (enforced during compilation), but I am not sure whether it is doable — it looks suspiciously tempting.

I like the idea of setting constraints on both sides; however, even in this post I can already see the syntax getting too rich. Such constraints are much better when triggered automatically — for example, knowing for sure whether a call is done synchronously or not would allow the compiler to decide automatically how to pass the data efficiently, by pointer or by value.


Correctness — tested or proven?

A chemist, a physicist and a computer scientist were asked to prove the thesis that every odd number is a prime. The chemist started checking the numbers — 3 prime, 5 prime, 7, still prime. We can conclude we have proven the thesis. Next was the physicist — 3 prime, 5 too, 7 too, 9, oh, wait, not a prime, but let’s continue, 11 prime, and 13 is prime as well. OK, thesis proven, 9 was just an experimental error. The last one was the computer scientist, who decided to be smart and write a program to prove the thesis. Once he ran it, the program displayed the output: 3 is a prime number, 3 is a prime number, 3 is a prime number…

A lot has already been written about static and dynamic type-checking and why one is better than the other, so I’ll start from a different angle — why Skila has static type-checking, and actually more than that. Two things about me:

  • I am a lazy guy — that is why I use a computer, to make the computer do all the mundane work. It would be insane if the computer sipped a cold drink while I did all the hard work,
  • I am a demanding guy — I don’t want my computer to do just any job (like formatting the hard drive or sending spam), but to do the things I need, and moreover to do them correctly.

No wonder I write programs. Let’s think for a second about how I could write them — say, a function. Like this:

def something(x)

or like this:

IEnumerable<T> something(IEnumerable<T> x)

The first version does not tell me the things the second version does — that the parameter is a collection, and that the type of the elements in the result is exactly the same as in the parameter. Let’s fix it:

# this function takes a collection
# type of elements in result is exactly
# the same as in parameter
def something(x)

Since I started this post by mentioning work, let’s compare, just for fun, the amount of keystrokes (the comment below is wrapped to the width of the type annotations):

# this function takes a colle
ction # type of elements in
result is exactly # the same
as in parameter
IEnumerable<T>IEnumerable<T>

3.5 vs. 1. But the story does not end here. This was just a comment! You know how important a comment is to a computer, right? So we have to write a unit test. Since I am a lazy person I will skip this step here, but you get the picture already:

  • dynamic type-checking forces every user to write excessive comments and unit tests every single time,
  • static type-checking comes with a guarantee on types, allowing users to test the logic of the program, not the language.

The last part is kind of important — covering basic features, like agreement of types, in unit tests does not give the user any guarantee; see the joke — those guys were not proving anything, all they did was unit testing. It also means that in order to have some “degree of guarantee” you have to write those tests every freaking time you write a new function.

As far as I am concerned, I prefer to write this “test” just once in my lifetime — it is called static type checking.

But being correct is about more than types — I will throw into Skila whatever I can that removes the necessity of writing unit tests. Because writing once is better than writing many times, and knowing is better than believing. Example?

/// this function takes 3 to 5 strings
void something(params string[] text)
{
  if (text.Length<3 || text.Length>5)
    throw new ArgumentException("pass 3 to 5 strings");
}

Not bad, but! You will learn about the problem only at runtime. Also, if you change your mind, you have to update both the comment and the code (no single point of control).

The Skila way (not implemented yet):

def something(params[3..5] text String)

If the user passes the arguments in expanded form — something("hello", "world", "!") — the check will be done at compile time; if as an array, it will be done at run time, but the point is — peace of mind. Prove (once) everything that can be proven, and leave the tests for the rest.


Fragments — multi-syntax language?

As an avid reader of the old Joel on Software blog I am fully aware of the complaint that virtually any developer can make:

They [people who don’t Get Things Done] will say, “Spreadsheets are really just a special case of programming language,” and then go off for a week and write a thrilling, brilliant whitepaper about the theoretical computational linguistic attributes of a spreadsheet as a programming language. Smart, but not useful.

Quoted from: The Guerrilla Guide to Interviewing.

I would like to be useful, but after tinkering with the syntax for collections I realized that maybe it is the wrong path — one reason is that I have only a limited ASCII set, so it is hard to accommodate all the expressions I would like to have and make them look nice at the same time. The other is — if Skila is supposed to be a truly multi-purpose language, can one syntax fit all the needs? Maybe another approach is required — one language, multiple syntaxes (think of the analogy of the Java world — one bytecode, multiple languages and thus multiple syntaxes).

I am not the first one with this idea — take for example TeX with its environments: once you start math mode, all the symbols are treated differently than in the main block. In a programming language it might pay off, because when writing general code you would like to be more self-explanatory:

1/Math.Pow(x,2.0)

But if you have a function which does only heavy mathematical computation it could be more useful to make a switch:

#math
1/x^2
#end math

Who knows? It is an odd idea, but it could be the right vehicle if you would like to have the same semantics but a different syntax for a given domain — an inner DSL, you could say.


Heavy lifting with interfaces

Once I touched the subject of interfaces, it seems the days when changing the syntax a bit was enough to make some progress are gone — this weekend I finished overloading; next time I plan to tackle overriding.

I barely had time to make minor adjustments:

  • an interface name has to start with the capital letter “I” (it is a requirement, not a convention),
  • the root type in Skila is “IObject”,
  • there is multiple type inheritance (one class + multiple interfaces),
  • I reversed and shortened the Ruby syntax for mutators — “x!ChangeMe()” instead of “x.ChangeMe!()”,
  • and I made the “Void” result type optional when declaring or defining a function — so it is more like a Pascal procedure.

Once I am done with interfaces I have to redo almost the entire backend — so far I have relied on PHP classes, but I don’t think this will pay off. First of all, PHP has only virtual methods, so it is a dead end for me no matter how I look at it. The second reason is JavaScript — its object model is so different that sooner or later I would have to find a common denominator among the backend languages to save my time. I think I should be safe using just structures, standalone functions and a custom dispatcher.
