I’ve revived an API project that I started a couple of years ago. The proposed API creates interfaces for all the important Alloy objects (compiler, solvers, visualizers, solutions, instances, relations, tuples, and atoms). It makes code that builds on the Alloy code base (visualizers, shells, and the new language server) a lot cleaner and easier to use.
The motivation for the API is to create a rigid boundary between the different parts of Alloy. Once we have such a boundary, we can start replacing the parts one by one. (Big bangs tend to fail spectacularly.)
The glaring hole is the AST. I really think we should use a more modern compiler-compiler than cup & flex. The most likely candidate is ANTLR. However, ANTLR has interesting support for ASTs, so I want to know more before locking anything in.
Hi
It’s great that you can work on the Alloy design and code. I agree regarding the parser, but I think there’s quite a bit of work involved. Off the top of my head, I can think of the following reasons:
AFAIK, Cup implements LR(1) parsing while ANTLR is a powerful variant of LL (to what extent can an LR grammar be transposed to this LL variant without too much reworking?).
As the Alloy grammar is neither LR nor LL, a “lexer hack” is currently used and I’m not sure the same approach can or should be used with ANTLR.
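For readers unfamiliar with the pattern, here is a minimal sketch of what a feedback-style “lexer hack” looks like in general. All names are hypothetical; Alloy’s actual lexer/parser coupling differs in detail:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the classic "lexer hack": the parser feeds semantic
// context back to the lexer so the same word can be tokenized differently
// depending on what has been declared so far. Hypothetical names, not
// Alloy's actual implementation.
class FeedbackLexer {
    enum Kind { IDENT, SIG_NAME }

    private final Set<String> declaredSigs = new HashSet<>();

    // Called by the parser when it reduces a signature declaration.
    void declareSig(String name) {
        declaredSigs.add(name);
    }

    // Token classification depends on parser state, which is exactly
    // what makes the grammar hard to express as pure LR or pure LL.
    Kind classify(String word) {
        return declaredSigs.contains(word) ? Kind.SIG_NAME : Kind.IDENT;
    }
}
```

The point is that the lexer is no longer a pure function of the input text, so porting to ANTLR means either reproducing this feedback channel (ANTLR supports semantic predicates) or restructuring the grammar to avoid it.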
The current grammar features many special cases for deprecated Alloy versions (old keywords or grammar rules that are no longer legal Alloy) or semi-official features (e.g. macros). Some decisions about the language, and possibly some cleanup, should be made before touching the parser itself, I think.
Parsing is also, unfortunately, intertwined with error handling: too many semantic errors are reported as syntax errors, and therefore often lack a user-friendly explanation (which would be possible with semantic context). So it would be interesting to gather the list of errors a user can be exposed to and rework them, with some remaining syntactic (this would have an impact on the parser and the AST too).
Finally, I think an interesting project would also be to greatly improve the meta-capabilities. I sometimes think a real macro system would be nice to allow language extensions (and sometimes I think it’s a very bad idea because of possible misuses): once again, this would probably have an impact on parsing.
What do you think?
PS: regarding the API, I haven’t had time to look at it in detail, but I have one question/remark: if I understand the model correctly, a Solution would be better called an Outcome (because “Solution” sounds like the result is SAT).
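To make the naming point concrete, here is a hypothetical sketch (none of these names are taken from the actual API): an Outcome covers both the SAT and the UNSAT case, whereas a Solution reads as if the solver always found an instance.

```java
import java.util.Optional;

// Hypothetical sketch of the naming remark, not the actual Alloy API.
interface Instance { /* atoms, relations, tuples ... */ }

interface Outcome {
    boolean satisfiable();

    // Present only when satisfiable() is true; an UNSAT outcome
    // simply has no instance rather than being a misnamed "solution".
    Optional<Instance> instance();
}
```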
I do this kind of work for a living and am unfortunately intimately aware of the pitfalls. However tempting rewrites or big-bang changes are, they invariably fail once a system has reached a certain complexity. And Alloy is above that threshold imho; it hides an amazing number of features.
The only way I’ve succeeded in these cases is to decompose the problem into smaller problems. Complexity seems to grow exponentially with the number of moving parts. Alloy, unfortunately, is a key example of almost anything touching anything, and to make it worse, it is chock-full of statics, which are global variables, and global variables are amazingly bad in software, especially for testing.
So before we even discuss changing anything in Alloy, we need to disentangle the different parts into a set of well-defined, loosely coupled, highly cohesive modules. For example, someone should be able to add a new solver type by adding a JAR. Currently that requires changes deep in the core. Same for visualizers, compilers, etc.
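The “add a solver by adding a JAR” idea maps naturally onto Java’s standard ServiceLoader SPI. A hedged sketch, with hypothetical names (SolverProvider, SolverRegistry are not Alloy’s actual API): a solver JAR would ship a META-INF/services entry, and the core would discover it at startup without any code changes. Explicit registration is also allowed here so the sketch is self-contained.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.ServiceLoader;
import java.util.Set;

// Hypothetical plugin interface a solver JAR would implement and
// advertise via META-INF/services. Not Alloy's actual API.
interface SolverProvider {
    String id();  // e.g. "minisat", "sat4j"
}

// Registry that discovers providers on the classpath. Adding a solver
// then means adding a JAR, not changing the core.
final class SolverRegistry {
    private final Map<String, SolverProvider> byId = new LinkedHashMap<>();

    SolverRegistry() {
        // Pick up any providers contributed by JARs on the classpath.
        for (SolverProvider p : ServiceLoader.load(SolverProvider.class))
            byId.put(p.id(), p);
    }

    void register(SolverProvider p) { byId.put(p.id(), p); }

    Optional<SolverProvider> find(String id) {
        return Optional.ofNullable(byId.get(id));
    }

    Set<String> ids() { return byId.keySet(); }
}
```

The design choice here is the interesting part: the core depends only on the interface, and each solver implementation depends on the core, never the other way around.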
I think I’ve got a decent start, and since yesterday I think I have a good model for the new traces as well. Inside Alloy it is a bit of a hack job, which surprised me, because the Pardinus/Kodkod code seems to have nice interfaces for representing traces. I think this is a clear example of where an API would’ve been really useful.
Where I really could help is modeling the AST.
Anyway.
I made the case last time that Alloy could be used as an example of how software could be built using well-defined specifications and solid software engineering principles. With my presence we have a nice tension between good software practices and a focus on theory. I am here to learn the formal aspects, but it would be great if we could find some theoreticians who want to really engage with the software engineering aspects. I find that the most interesting work always happens at the boundaries.
I am currently putting the stakes in the ground where the walls are going to be. Imho, this is the most important phase of every construction. Most parts are easy to change later, but walls are pretty immovable once the construction stands. Participation in this process is highly desired.
I realize this is an old conversation, but I’m very interested in your progress. I’m a complete baby in both formal methods and compiler design, but Alloy has been very helpful in my research related to timing semantics for system behavior modeling, which is what I do for a living (model-based systems engineering researcher).
I’m curious what language and approach you used for modeling the AST. I agree with you that the bounding is the toughest part. I do find that models (especially UML-based ones) of as-is systems can help a lot with qualifying and quantifying coupling and cohesion issues in architectures. Anyhoo, how’s the effort going?
Also, great job on the project so far! I really appreciate the work you all put into it and the active support on SO and here.
It is quite far along, but I’ve been sidetracked with other projects since the beginning of this year, and by too many health problems in the family. And maybe also a bit because it is so hard to get traction …
However, I did make significant progress with the sponsoring of Matthew Di Ferranti. You can look at it and try it out; it is fully working and waiting on feedback in the feature/api branch: