
Uncharted territory — symbol generics

I had a little problem with this feature — not only have I recently gotten a job, leaving me much less time for developing Skila, but I also had no luck finding a source explaining how generics are usually implemented in a compiler. Nevertheless, I started with the flavor which has bugged me since C++, namely symbol generics.

Consider a pair in C++: taking the regular approach, you won’t make anything more than a structure with “first” and “second” fields. The code works, of course, but the moment you pass such data around you have to comment a lot of code, because “first” carries absolutely zero information. Is it an address? A salary? A weight?

In C++ I was not completely lost — I wrote a “NamedTuple” macro which gave me the regular fields as above, plus reference fields with the names I passed. Thus I could pass a named tuple to any template function which expected a regular tuple (because I still had the “first”, “second”, and so on, fields). The downside was the usual one with macros in C++ – they are harder to maintain.

I am not against introducing macros to Skila, but I’ll wait until they become a necessity. And with symbol generics I can do more than I did in C++:

class Tuple<'ITEM1'>
  var _1 Int = 5;
  alias ITEM1 _1;
end

func main() Int
do
  var m = new Tuple<'Id'>();
  return m.Id;
end

First of all, you are dealing with the compiler, not a preprocessor. Second — you don’t use references but aliases, which take no space at all, so there is no memory penalty. And third — it works. Here you can see I instantiate the “Tuple” class with the symbol “Id”, and from that point on I don’t use the meaningless “_1” but “Id”. No need for an extra comment stating “_1 holds Id”.
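For readers more at home in mainstream languages, here is a rough Python analogue of the idea. This is my own sketch, not Skila code: the names `make_tuple1` and `item1` are invented, and a `property` stands in for Skila’s alias, since it delegates to the positional field without adding per-instance storage.

```python
# A class factory that builds a one-element tuple class whose positional
# field "_1" gets a caller-chosen alias (a property, so no extra storage).
def make_tuple1(item1: str):
    class Tuple1:
        def __init__(self, value):
            self._1 = value
    # The alias delegates to _1; it lives on the class, not the instance.
    setattr(Tuple1, item1, property(lambda self: self._1))
    return Tuple1

IdTuple = make_tuple1("Id")
m = IdTuple(5)
print(m.Id)   # 5 -- same storage as m._1, but with a meaningful name
```

Just like the Skila version, code that only knows about `_1` still works on `IdTuple` instances, while code that cares about meaning can say `m.Id`.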

One of my idées fixes… fixed!


NLT generator — macro expansions

As it moves towards the “grand finale”, the generator is becoming more mature — accidentally, I could say. I didn’t anticipate such a scenario until I met one — consider these lexer rules:

GRAMMAR "{" 
{ 
  $token = TokenEnum.LBRACE;
  lexer.PushState(StateEnum.CODE);
  str_buf = new StringBuilder();
};
CODE "{"
{
  str_buf.Append($text);
  lexer.PushState(StateEnum.CODE);
};

When the lexer sees a left brace in GRAMMAR state, it should switch to CODE mode and create a string buffer. When it sees the same brace in CODE mode, it should simply add it to the buffer and push the CODE state again — this helps unwind the states when right braces are met.
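The state-stack mechanics above can be sketched in a few lines of Python. This is my own toy scanner, not NLT’s implementation: every `{` pushes CODE, every `}` pops, and when the stack drops back to GRAMMAR the code body is complete.

```python
# Toy scanner: collect a brace-delimited code body using a state stack.
def scan(text):
    states = ["GRAMMAR"]
    buf = []
    body = None
    for ch in text:
        if states[-1] == "GRAMMAR":
            if ch == "{":
                states.append("CODE")   # enter CODE mode, start buffering
                buf = []
        else:  # CODE state
            if ch == "{":
                states.append("CODE")   # nested block: push CODE again
                buf.append(ch)
            elif ch == "}":
                states.pop()            # unwind one level
                if states[-1] == "GRAMMAR":
                    body = "".join(buf) # outermost brace closed the body
                else:
                    buf.append(ch)      # inner brace is part of the body
            else:
                buf.append(ch)
    return body

print(scan("rule { if (x) { y(); } }"))  # " if (x) { y(); } "
```

Note how the nested `{ y(); }` is swallowed into the single buffer, which is exactly the “all nested blocks are one code body” behavior discussed next.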

This works nicely because once in CODE mode we treat all nested code blocks as one code body. There is nothing wrong with that, but when we need to differentiate between nested blocks such an approach fails — I struggled a little when writing the rules for scanning macros in NLT. A macro is written as:

$(variable : expression : expression)

Each expression can be a macro as well, and of course regular parentheses can be used within an expression. There is no problem deciding whether the lexer should be in MACRO or CODE state when a closing parenthesis is found; the problem is whether the state should belong to this or that MACRO (the inner or the outer one).

So I came up with the idea of associating a nesting counter with the state — here, instead of pushing MACRO when a left parenthesis is found, I increase the counter, and I decrease it on a right parenthesis. So when the counter hits zero I know for sure it is the boundary of the innermost MACRO.
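A minimal sketch of the counter-per-state idea, again in plain Python with my own naming (`macro_spans` is not NLT’s API): each pushed MACRO entry carries its own parenthesis depth, so a `)` seen at depth zero closes *this* macro rather than a plain parenthesized group inside it.

```python
# Find the (start, end) spans of each "$( ... )" macro in a string,
# pairing a nesting counter with every pushed MACRO state.
def macro_spans(text):
    stack = []          # one [start, depth] entry per open macro
    spans = []
    i = 0
    while i < len(text):
        if text[i : i + 2] == "$(":
            stack.append([i, 0])       # push MACRO with counter = 0
            i += 2
            continue
        if stack:
            if text[i] == "(":
                stack[-1][1] += 1      # ordinary paren: bump the counter
            elif text[i] == ")":
                if stack[-1][1] == 0:  # counter at zero: the macro ends here
                    start, _ = stack.pop()
                    spans.append((start, i + 1))
                else:
                    stack[-1][1] -= 1
        i += 1
    return spans

# Plain parens in f(x) don't confuse it, and the inner macro closes first:
print(macro_spans("$(a : f(x) : $(b))"))  # [(13, 17), (0, 18)]
```

The same trick generalizes to any state whose boundaries are marked by a character that can also appear, balanced, inside the state’s body.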

Yes, you could do this yourself in your own code, but why do it every time when it can be done once in the framework?

The real change, though, is macro support. Previously I had to write:

formal_param -> attr:param_attr? colon:COLON? ...
{ 
  new FunctionParameter(
    currCoords(), 
    colon!=null ? NamedEnum.Yes : NamedEnum.No,
    attr!=null ? attr.Value : AttrEnum.Constant,
    ...

With macro expansion I can be more concise:

formal_param -> attr:param_attr? colon:COLON? ...
{ 
  new FunctionParameter(
    currCoords(), 
    $(colon : NamedEnum.Yes : NamedEnum.No),
    $(attr.Value : AttrEnum.Constant),
    ...

The generated code is shorter too, because the NLT generator extracts only the needed part of the macro — as a side effect, the productions are optimized as well.

A macro can be written in three ways:

$(variable)
$(variable : expression)
$(variable : expression1 : expression2)

In the first form the generator produces “true” if the variable is present and “false” otherwise. In the second, it emits the variable itself if it is present, or the given expression if not. And in the third form, it puts in the first or the second expression depending on whether the variable is present.
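The three forms can be summed up in a toy expander. This is my own sketch, not the NLT generator’s code: it takes whether the variable was matched in the production and returns the C#-like expression the generator would emit.

```python
# Expand $(var), $(var : expr1) or $(var : expr1 : expr2), given whether
# `var` was actually matched ("present") in the production.
def expand(var, present, expr1=None, expr2=None):
    if expr1 is None and expr2 is None:        # $(var)
        return "true" if present else "false"
    if expr2 is None:                          # $(var : expr)
        return var if present else expr1
    return expr1 if present else expr2         # $(var : expr1 : expr2)

print(expand("colon", present=True))                                   # true
print(expand("attr.Value", present=False, expr1="AttrEnum.Constant"))  # AttrEnum.Constant
print(expand("colon", True, "NamedEnum.Yes", "NamedEnum.No"))          # NamedEnum.Yes
```

Note that the expander picks exactly one branch at generation time, which mirrors why the generated code is shorter: the runtime `colon != null ? … : …` test disappears entirely.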

There is one shortcut for when the variable is a compound object — instead of writing:

$(var : var.property : expression2)

which is perfectly legal, just long, one can use the shorter syntax:

$(var.property : expression2)

Calling an object’s method is also OK, but calling a function is not:

$(foo(var) : expression2) // WRONG!

There is one optimization left — detecting whether the given variable is used at all in the code and, if it is not, not passing it. I will handle this and other speed-ups after making the NLT generator a true generator.


From goal-driven execution to macros

I keep reading about how the goal-driven execution model works, and while I really like the natural feel of a condition such as:

if (a < b < c)

yesterday I found a piece of code which put me off. Consider printing out the just-read line in Icon:

write(read())

When read hits the EOF it fails, and thus the entire chain of expressions fails — here it means write won’t be called. Let’s say you would like to capture the content of the line nevertheless, so you do a little rewrite:

line := read()
write(line)

OK, we have two lines, but the code basically does the same thing… Wait, it does not. It is not even close. Now when read hits the EOF, the chain of expressions ends at the assignment, so the assignment is omitted and line keeps its previous value. Since the second line is completely unrelated, it will be executed with the old value of line.
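To make the pitfall concrete, here is a small simulation of the described behavior in plain Python (my own sketch, with a `FAIL` sentinel standing in for Icon’s expression failure, not real Icon semantics in full):

```python
FAIL = object()   # stands for an Icon expression that fails

def read(lines):
    # Succeed with the next line, or fail at EOF.
    return lines.pop(0) if lines else FAIL

def write(out, value):
    if value is FAIL:          # failure propagates through the chain,
        return FAIL            # so write() effectively never runs
    out.append(value)
    return value

lines, out = ["last line"], []
line = "old value"

write(out, read(lines))        # one expression: writes "last line"

# The two-line rewrite: the assignment is its own expression, so at EOF
# it fails, `line` keeps its old value, and the unrelated write still runs.
result = read(lines)           # EOF -> FAIL
if result is not FAIL:
    line = result              # skipped, like Icon's aborted assignment
write(out, line)               # runs with the stale "old value"

print(out)   # ['last line', 'old value']
```

The stale `"old value"` reaching the output is exactly the surprise: splitting one expression into two changed what the program writes.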

Sure, I am surprised, and maybe part of the surprise is that I am not used to the way Icon works. But I think the transparency of adding temporary objects to keep intermediate results is a pretty fundamental concept, and Icon breaks it — something I cannot like.

From that point there are two approaches — give up, or design an improved goal-driven model (they might turn out to be the same path, though).

As for giving up, I went back to my old idea of returning a tuple — a pair consisting of the function outcome and an error indicator:

if ((success:@,value) = dict["hello"]).success then
  // we have direct access to value
  // success is dropped

The syntax leaves a lot to be desired, so how about adding a when construct which could handle exactly such cases:

when value = dict["hello"] then
  // we have direct access to value

This would be syntactic sugar only — a pure internal rewrite to the if presented above.
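The success/value pair and the proposed sugar have a close Python analogue; the sketch below is mine (`lookup` is an invented helper, not Skila), shown just to make the rewrite concrete:

```python
def lookup(d, key):
    """Return (success, value) instead of failing or raising."""
    return (key in d, d.get(key))

dictionary = {"hello": 42}

# The explicit form -- what the post's `if` spells out:
success, value = lookup(dictionary, "hello")
if success:
    print(value)               # direct access to value; success is dropped

# The `when value = ...` sugar would rewrite to exactly that pattern.
# Python's closest native spelling uses a sentinel and the walrus operator:
if (value := dictionary.get("hello")) is not None:
    print(value)
```

The appeal of `when` is that the success flag never gets a name at all, the same way the walrus version never mentions one.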

And then it struck me — forget about Lisp homoiconicity, it’s uniformity, stupid! Lispers have to struggle with ugly syntax:

(if (= 4 (+ 2 2)) (...

instead of the clean “syntax for the masses”:

if 4 = 2+2 then...

but Lisp syntax is uniform, and because of that Lispers don’t have to wait for anyone to bring a new construct to the language. They add their own when whenever they like — it blends in perfectly.

On the other hand, in C-derivatives all you have is function-like expansion; you cannot add another for or while. Any attempt to bring Lisp-style macros to the C world would require allowing the user to alter the grammar of the language.

I didn’t solve anything with the goal-driven model — instead I added yet another puzzle for myself: what to sacrifice in the design to bring the power of macros to Skila?

If you find this topic interesting, here are more readings for you — The Nature of Lisp, Homoiconicity isn’t the point, and Lisp: It’s Not About Macros, It’s About Read.
