PlusCal in Alloy

I’ve been playing with Alloy (Electrum version) and added a ‘PlusCal’-like feature to Alloy. I finally got the basics to work and I am looking for some expert feedback. I know PlusCal is for us, the deplorable developer class, because it gives the impression of procedural code. However, that might actually convince some other developers to take a look. Many concurrent problems become a lot easier. However, as so often in this world, I find that my intuition is totally off, so some feedback about this would be highly appreciated.

The first example I’ve got is the algorithm for multiplying as an addition:

let N = { n : Int | n >= 0 }

algorithm mult[ a, b : N, var r, n : Int ] {
	r := 0
	n := b
	while (n>0) {
	  	r := r.plus[a]
	  	n := n.minus[1]
	}

	assert valid r = a.mul[b]
}

During compilation, I translate this to predicates and sigs. That is, the algorithm generates plain Alloy code behind the scenes.

The first is an enum specifying the labels. Since the example uses a ‘while’, a label is necessary and automatically generated. (You can also specify your own labels.)

enum multEnum {Done, _L100}

The next part is a sig that holds the state. This uses the Electrum model of vars.

one sig mult {
  var pc : multEnum,
  a, b : N,
  var r, n : Int,
}

The next part are the predicates that act as event methods:

pred mult.Done{
     this.pc = Done
     this.r' = this.r
     this.pc' = this.pc
     this.n' = this.n
}

As you can see, any variables not changed in a predicate are kept constant. This was quite tricky once I started to use other sigs with vars. This is discussed further below with multiple processes.

pred mult._L100{
     this.pc = _L100
     this.n > 0 implies {
         this.r' = this.r.plus[this.a]
         this.n' = this.n.minus[1]
         this.pc' = _L100
       } else {
         this.pc' = Done
         this.r' = this.r
         this.n' = this.n
     }
}

My reasoning became that if you have an if statement, you must ensure that the ifTrue and ifFalse parts assign the same variables, so they are automatically balanced. This works for the local variables, and I think I devised a scheme for the non-local relations. More about that later, but this is a shady area for me.
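To illustrate the balancing, here is a hand-written sketch (in the style of the generated code above, not actual tool output): a hypothetical one-sided if such as `if (n > 3) { r := 0 }` would get the missing assignment added as a frame condition in the implicit else branch:

```alloy
-- Sketch: the ifTrue branch assigns r, so the generated else branch
-- gets the frame condition r' = r added automatically
pred mult.example {
    this.n > 3 implies {
        this.r' = 0
    } else {
        this.r' = this.r
    }
}
```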

pred mult.mult{
     this.r = 0
     this.n = this.b
     this.pc = _L100
     always (
         this.Done
        or this._L100
     )
     eventually (
       this.pc = Done
     )
}

And then there is an overall predicate that runs the init section, executes the event predicates inside an always, and expects the program counter pc to end in Done.

So run { mult.mult } gives us

------State 0-------
┌─────────┬──────┬─┬─┬─┬─┐
│this/mult│pc    │a│b│r│n│
├─────────┼──────┼─┼─┼─┼─┤
│mult⁰    │_L100⁰│2│2│0│2│
└─────────┴──────┴─┴─┴─┴─┘
------State 1-------
┌─────────┬──────┬─┬─┬─┬─┐
│this/mult│pc    │a│b│r│n│
├─────────┼──────┼─┼─┼─┼─┤
│mult⁰    │_L100⁰│2│2│2│1│
└─────────┴──────┴─┴─┴─┴─┘
------State 2-------
┌─────────┬──────┬─┬─┬─┬─┐
│this/mult│pc    │a│b│r│n│
├─────────┼──────┼─┼─┼─┼─┤
│mult⁰    │_L100⁰│2│2│4│0│
└─────────┴──────┴─┴─┴─┴─┘
------State 3-------
┌─────────┬─────┬─┬─┬─┬─┐
│this/mult│pc   │a│b│r│n│
├─────────┼─────┼─┼─┼─┼─┤
│mult⁰    │Done⁰│2│2│4│0│
└─────────┴─────┴─┴─┴─┴─┘

There is an assert in the algorithm:

assert valid r = a.mul[b]

This automatically generates a predicate and an assert:

pred mult.valid_101{
     this.mult => eventually (
         this.pc = Done
         this.r = this.a.mul[this.b]
     )
}
assert valid { mult.valid_101 }

So we can check the model with:

check valid for 10 but 6 int

And this gives no counterexample, indicating that our algorithm always has the same result as the Alloy mul operator.

Processes

The cool part is, of course, the multi-process part. So I created the following example. It is my simplest concurrent example, where an increment gives a false result when run with multiple concurrent processes. In PlusCal, each label generates its own event predicate, so processes run ‘concurrently’.

some sig Process { var reg : Int }
    
algorithm incr[ var total : Int] : Process {
         total := 0   
    lda: p.reg := total
    inc: p.reg := p.reg.plus[1]
    sta: total := p.reg

    assert valid total = # Process
}

This algorithm will, for each process, run in parallel the sequence of lda (load register), inc (increment the register), and sta (store the register). Since the processes run in parallel, there are situations where they will overwrite each other’s values. So the assert must fail. It is the simplest concurrent problem I know.

I initially wanted to allow an arbitrary number of process-like things, but the process is really something special since it controls the program counter. I also could not think of a use case where you need more than one type of ‘process’. So I used the return-type syntax of a function to specify an arbitrary expression that is the domain of the program counter, aka pc. I.e. the pc is in expr -> labels. The end condition is then that eventually the pc must be pc = expr -> Done. (I believe this is much more powerful than the limited concept of PlusCal processes?) You can declare arbitrary sigs with the var keyword. This makes the code extremely readable.

enum incrEnum {sta, Done, lda, inc}

one sig incr {
  var pc : Process -> incrEnum,
  var p : Process,
  var total : Int,
}

As you can see, the pc is now a relation from the algorithm return type (which is not a real return type; it is just used to specify the domain of p) to the labels. An automatic variable p is introduced that is the ‘current’ process for an event predicate. This variable is not automatically kept constant and can thus vary over its domain in the different states.

This is the generated Alloy code:

sig Process {
  var reg : Int,
}
enum incrEnum {sta, Done, lda, inc}
one sig incr {
  var pc : Process -> incrEnum,
  var p : Process,
  var total : Int,
}
pred incr.sta{
     this.p -> sta in this.pc
     this.total' = this.p.reg
     this.pc' = this.pc ++ this.p -> Done
     reg' = reg
}
pred incr.Done{
     this.p -> Done in this.pc
     this.total' = this.total
     this.pc' = this.pc
     reg' = reg
}
pred incr.lda{
     this.p -> lda in this.pc
     this.p.reg' = this.total
     this.pc' = this.pc ++ this.p -> inc
     this.total' = this.total
     all x : (Process - this.p) | x.reg' = x.reg
}

Note the partial update. Since the lda event predicate updates p.reg, I need to keep all other processes constant. I take the algorithm’s process expression and then subtract the processes that were assigned. I think this is OK, but I could use your help to get some confidence in this idea. I also spoke to Daniel Jackson, and I understand this is a classic complicated problem.
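The pattern behind that last generated line can also be written as a generic helper. This is just a hand-written sketch (frameReg is a hypothetical name, not something the tool generates):

```alloy
-- Sketch: keep reg unchanged for every Process outside `touched`
pred frameReg[touched : set Process] {
    all x : Process - touched | x.reg' = x.reg
}
```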

From Daniel, I understand these are called frame conditions. I implemented the analysis of the code flow in the algorithm code. However, Daniel indicated that this would be a very useful feature to have on predicates in general. I think I can use the code analysis & generation also on the Alloy AST directly, so I will experiment with this. This would then look like a modifier on the pred keyword, e.g. pred auto or so. I think Electrum also has a variant with an act keyword that does something similar?

pred incr.inc{
     this.p -> inc in this.pc
     this.p.reg' = this.p.reg.plus[1]
     this.pc' = this.pc ++ this.p -> sta
     this.total' = this.total
     all x : (Process - this.p) | x.reg' = x.reg
}
pred incr.incr{
     this.total = 0
     this.pc = Process -> lda
     always (
         this.sta
        or this.Done
        or this.lda
        or this.inc
     )
     eventually (
       this.pc = Process -> Done
     )
}

So running it to find a case where we get the wrong answer:

check valid for 10 but exactly 2 Process

This gives a counter example:

------State 0-------
┌─────────┬─────────────┬────────┬─────┐
│this/incr│pc           │p       │total│
├─────────┼────────┬────┼────────┼─────┤
│incr⁰    │Process⁰│lda⁰│Process¹│0    │
│         ├────────┼────┼────────┼─────┤
│         │Process¹│lda⁰│        │     │
└─────────┴────────┴────┴────────┴─────┘	
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │2  │
├────────────┼───┤
│Process¹    │0  │
└────────────┴───┘
------State 1-------
┌─────────┬─────────────┬────────┬─────┐
│this/incr│pc           │p       │total│
├─────────┼────────┬────┼────────┼─────┤
│incr⁰    │Process⁰│lda⁰│Process¹│0    │
│         ├────────┼────┼────────┼─────┤
│         │Process¹│inc⁰│        │     │
└─────────┴────────┴────┴────────┴─────┘	
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │2  │
├────────────┼───┤
│Process¹    │0  │
└────────────┴───┘
------State 2-------
┌─────────┬─────────────┬────────┬─────┐
│this/incr│pc           │p       │total│
├─────────┼────────┬────┼────────┼─────┤
│incr⁰    │Process⁰│lda⁰│Process⁰│0    │
│         ├────────┼────┼────────┼─────┤
│         │Process¹│sta⁰│        │     │
└─────────┴────────┴────┴────────┴─────┘
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │2  │
├────────────┼───┤
│Process¹    │1  │
└────────────┴───┘
------State 3-------
┌─────────┬─────────────┬────────┬─────┐
│this/incr│pc           │p       │total│
├─────────┼────────┬────┼────────┼─────┤
│incr⁰    │Process⁰│inc⁰│Process⁰│0    │
│         ├────────┼────┼────────┼─────┤
│         │Process¹│sta⁰│        │     │
└─────────┴────────┴────┴────────┴─────┘	
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │0  │
├────────────┼───┤
│Process¹    │1  │
└────────────┴───┘
------State 4-------
┌─────────┬─────────────┬────────┬─────┐
│this/incr│pc           │p       │total│
├─────────┼────────┬────┼────────┼─────┤
│incr⁰    │Process⁰│sta⁰│Process⁰│0    │
│         ├────────┼────┼────────┼─────┤
│         │Process¹│sta⁰│        │     │
└─────────┴────────┴────┴────────┴─────┘
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │1  │
├────────────┼───┤
│Process¹    │1  │
└────────────┴───┘	
------State 5-------
┌─────────┬──────────────┬────────┬─────┐
│this/incr│pc            │p       │total│
├─────────┼────────┬─────┼────────┼─────┤
│incr⁰    │Process⁰│Done⁰│Process¹│1    │
│         ├────────┼─────┼────────┼─────┤
│         │Process¹│sta⁰ │        │     │
└─────────┴────────┴─────┴────────┴─────┘
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │1  │
├────────────┼───┤
│Process¹    │1  │
└────────────┴───┘
------State 6-------
┌─────────┬──────────────┬────────┬─────┐
│this/incr│pc            │p       │total│
├─────────┼────────┬─────┼────────┼─────┤
│incr⁰    │Process⁰│Done⁰│Process¹│1    │
│         ├────────┼─────┼────────┼─────┤
│         │Process¹│Done⁰│        │     │
└─────────┴────────┴─────┴────────┴─────┘	
┌────────────┬───┐
│this/Process│reg│
├────────────┼───┤
│Process⁰    │1  │
├────────────┼───┤
│Process¹    │1  │
└────────────┴───┘

The order is

                  p0  p1  total
   p1.lda          x   0  0
   p1.inc          x   1  0
   p0.lda          0   1  0
   p0.inc          1   1  0
   p0.sta          1   1  1
   p1.sta          1   1  1       

Perfectly demonstrating the problem of concurrent updates!

GCD

One of the first examples in PlusCal is Euclid

let N = { n : Int | n >= 1}
fun divisors[ a : N ] : set N {
    { r : N | a.rem[r] = 0 }
}

fun gcd[ a, b : N ] : lone Int {
    (divisors[a] & divisors[b]).max
}

algorithm euclid[ a, b : N, var u, v : Int ] {
    u := a
    v := b
    while( u != 0 ) {
        if ( u < v ) {
            u := v
            v := u
        }
        assign: u := u.minus[v]
    }
    assert valid gcd[a,b] = v
}
check valid 

This does not find any counterexamples.

Syntax

The syntax (a bit informal) is so far:

algorithm ::= 'algorithm' ID ('[' decls ']')? (':' expr)? compound
decls     ::= decl (',' decl)*
compound  ::= '{' statement* '}'
statement ::= while | if | assignment | label | compound | skip | assert
while     ::= 'while' '(' expr ')' compound
skip      ::= 'skip'
if        ::= 'if' '(' expr ')' compound ( 'else' compound )?
label     ::= ID ':'
assignment::= lvar ':=' expr
assert    ::= 'assert' ID expr
lvar      ::= ID | lvar '.' ID | lvar '[' lvar ']'
expr      ::= <alloy expression>

Status

I’ve got this implemented in the Electrum 2.0.1 codebase. I’ve been able to keep it really separate from the main codebase. Of course, I needed to change the parser, but that went OK. However, so far it is a prototype and not a lot of error handling is going on. If you’re interested, I can share the codebase on GitHub. As said, not a lot of error handling, but running it in the debugger gives you access to the generated code. I haven’t found the time yet to show the generated code in the GUI.

There is currently a limitation in that I generate the predicates and sigs before the module is resolved. This feels wrong and I am afraid it is going to bite me. It also forces me to do some resolving myself to find the proper scope, which is ugly, and it requires any sigs to be defined above the algorithm, which is also ugly. However, there is a chicken-and-egg problem, since the generated predicates, especially the overall predicate, need to be there to be resolved.

I did not implement ‘with’. I’ve not spent a lot of time on it and I am not sure we need it, but this might be my lack of understanding.

Preliminary Conclusion

Interestingly, I like Alloy mostly for its specification aspects. I see it potentially as a great tool to specify behaviour, not so much to verify it. However, the Time concept in Alloy always felt too complicated. I like the var keyword because it makes handling and reasoning about state so much easier. However, writing all those event predicates, and especially making sure that unused variables do not change between states, is a royal pain in the ass. A procedural approach solves many of those problems because the majority of the state is local and implicit. For me, it is also important that I can talk to my fellow developers in a language that has a much lower impedance mismatch than Alloy’s core language, while still having Alloy as the incredibly powerful expression language. And of course the solvers.

So I looked at PlusCal and found that it seems to map very well to Alloy. Actually, I think Alloy is far superior as the host language: PlusCal makes a very ad hoc impression, while this variation leverages a lot of the more powerful Alloy/Electrum features. I’ve not tested it, but it would not surprise me if we could execute the verification a lot faster than TLC.

As usual, I’ve spent way too much time on it. So I am at a decision point to burn more time on this or give up. I like the feature a lot but there have been many times my lack of intuition in this area has burnt me. So please let me know what you think.

This might be meant as a joke

but I predict that any “deplorable developer” who comes across this comment will most likely decide that Alloy is not for them, and will not take a look. Is your intent to actively repel potential new users of Alloy?

This afternoon I plan to show my vehicle locking model to some developers, for the purpose of encouraging them to try using Alloy in their, and thus my, work. I would prefer that they don’t find comments like the above when they start looking for online resources to help them understand Alloy.

Demonstrating contempt for your addressable market in a public forum is not the way to win friends and influence people. Doing so in private isn’t great either, because they will be able to tell, but doing so in public is very unhelpful to those of us who’d like to advocate for Alloy.

Irony is sometimes hard.

Yes. Which is why deploying irony in semi-anonymous, public, text-based fora is often a mistake. The joke (if any) is lost and all that’s left is offence.

Interesting initiative. I thought it odd that a tool built to encourage rigour had to be jammed into a long comment.

I wonder why you think Alloy (SAT solvers) can verify these problems faster than TLC? Although we can use solvers implemented in C, the number of variables and clauses of the SAT problem may explode with a more complicated algorithm. On the other hand, TLC is implemented in Java, but we can run it in a distributed way. Thanks!

I am only hoping :slight_smile: I am just a deplorable developer :slight_smile:

I remember the verification times in the PlusCal manual as being minutes? They seem to be way higher than what I see in Alloy for similar problems. However, you’re right that sometimes the solving times are inexplicably large. There are also services that can do the SAT solving in parallel, though.

That said, I am out of my league here. I do the Java development and am greatly impressed with what the scientists have made possible. The fact that this relational model in Alloy actually works is mind-boggling to me. I debugged Kodkod and it is pure magic.

Anyway, if this work here ever starts to work, we can do a competition! Would be fun! You’re a TLC person? I’d love your feedback on this. I tried to stay as close as possible to PlusCal while still allowing the relational model of Alloy to survive.

I use both Alloy and TLC in my research. I also love the Alloy way to specify system behavior, which I think is in some cases more expressive than TLA+. But these are not the preferred ways for developers in industry. So I’m also interested in whether Alloy can handle PlusCal better (e.g., run faster), or whether we can use some Alloy features to extend PlusCal.

I think the field for developers is still wide open. In my world, formal methods are a misspelt four-letter word. The AWS case is interesting but, as far as I can see, quite limited. I’d love to see more developers engaged. I think Alloy stands a good chance, since the Alloy syntax is a lot easier to grasp and, from my perspective, the relational model is crucial.

Thank you for writing this. This is a great idea for an article, as I always learn a lot from a well worked out example, particularly one which spans different areas.

I didn’t understand this part.

"I initially wanted to allow an arbitrary number of processes like things but the process is really something special since it controls the program counter. I also could not think of a use case where you need more than one type of ‘process’."

As an initial attempt at re-wording for my own comprehension, would you agree with:

I initially wanted to allow an arbitrary number of multiplication processes to occur in parallel, but the control state process which controls the program counter is a different type of process than the multiplication through addition processes, because the control state needs to be singular, and not run in parallel.

Your second sentence sounds to me, as I read it, as contradicted by the sentence prior.

Evelyn Mitchell

Yeah, that was my initial thought, too. I was going to bring it up until I saw you already did.

Evelyn,
I was struggling initially with how to model the process. Although it now looks so clear to me, I had to do some deep thinking and then steal from PlusCal. Basically, PlusCal is a sequence of statements that get executed in order. If you model a single process, then you keep the state in a variable that holds the current state; in this case, it is called pc. Since my mind was fuzzy, I hoped I could use the expressive quantifiers of Alloy to model the process concept. However, after many dead ends, and after looking at what PlusCal did, I realized that the pc needs to be a relation Process -> enumX, where enumX is an enum with all possible labels. This allows each process to traverse the predicates in its own unique way.

In PlusCal, you have a special process keyword, i.e. processes are special. However, I was thinking of the hotel example, where a Guest could be the process. (Which doesn’t work, but that is another story.) So after a lot of thinking, I came to the conclusion that the process really is something special in this problem: it is different from all other variables because it needs to be in a relation with the pc, unlike the other vars, which are just in the state sig. I.e. my code needed to be intricately aware of processes but could treat all other vars the same way.

I still have the vague suspicion that the idea of a process could be broadened but that is for another day. This seems to work nicely.

So I do not think your sentence really captures what I was trying to say?

Hi Peter,

I’m not sure if this is the best translation for the assertion: here you are requiring total correctness, that is, that the algorithm terminates and the assertion is valid at that point (a liveness property). I believe (not an expert…) that in PlusCal such an assert is translated as a safety property (partial correctness), that is, if the algorithm terminates (in general, if it reaches the assert) then the assertion is valid - this would be specified as always (this.pc = Done implies this.r = this.a.mul[this.b]).

However, with your current translation you are in fact restricting the behaviours to always be terminating, because, unlike in TLA, you are not specifying stuttering steps (steps where nothing changes). In Electrum they are not mandatory, but if you don’t allow them, then, since all behaviours must be infinite, only the terminating behaviours (which stutter only at label Done, as specified by pred mult.Done) will be considered in the analysis. For example, if your algorithm was wrongly specified, and the while did not terminate for a particular value of n, such a behaviour would not be considered in the analysis and you might not detect a failing assertion (for example one with an invariant inside the while). Notice that this would be the case even if you removed the eventually (this.pc = Done) from the specification - actually I think it is redundant, because the only infinite behaviours allowed by your spec are the ones that necessarily reach Done.

My suggestion would thus be to automatically generate stuttering steps and change the translation of assertions.
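For the mult example, this suggestion could look roughly as follows. This is a hand-written sketch using the names from the generated code earlier in the thread (not actual tool output), where the stuttering event would be added to the always-disjunction and the assert would replace the generated one:

```alloy
-- Sketch: a stuttering event that leaves all state unchanged,
-- allowed at any point in the trace
pred mult.Stutter {
    this.pc' = this.pc
    this.r'  = this.r
    this.n'  = this.n
}

-- Sketch: the assert as partial correctness (a safety property):
-- whenever Done is reached, the result must be correct
assert valid {
    always (mult.pc = Done implies mult.r = mult.a.mul[mult.b])
}
```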

Notice that with Electrum, unlike with TLA, you will not be able to check for deadlock states (states where the only option is to take an implicit stuttering step - implicit meaning not specified in the TLA next relation; the translation from PlusCal to TLA includes the stutter step when Done is reached in the next relation, so this is not considered a deadlock).

Best,
Alcino

Thanks Alcino! Highly appreciated.

First question: do you think it is a good idea to have PlusCal in Alloy? Assuming yes, is the syntax proposed here to your liking?

You hit something I wasn’t sure about how to do. This is why I made the assert a separate predicate. I also thought, maybe wrongly, that I could place the assert in different places. Each line is connected to a state, so I could use the ‘current’ (or next?) state in the assert. A bit like the C/Java assert. But this has not been worked out in any way.

After my initial train wrecks with Alloy, I tend to avoid stuttering steps because they so easily create useless models. Now that I am a bit more comfortable, it is clearer how to still get interesting models. I can of course easily add a stuttering step. Part of the problem is that I don’t fully ‘feel’ the stuttering steps. Is there anything I could read to get a better understanding? They always felt a bit hackish.

I am not sure I get this? Isn’t that a consequence of removing the check for eventually pc=Done? If the while loop does not terminate, eventually pc=Done can never be true, so the assertion will fail?

However, one of the problems I saw is that I cannot handle while (true) {}, and that is a common pattern. So I’ll experiment with this.

Can you elaborate on why this is impossible in Electrum? I take it as an axiom that TLA can do what Alloy can do? Am I wrong?

Thanks!

Hi,

In the kind of models I do, I never felt a strong need for such a language, but if others find it useful why not (the actions in many distributed protocols are quite simple to model in normal Electrum). I have not read in detail your proposal for processes (yet), but the syntax you propose for the algorithms looks nice.

I think in PlusCal you can put an assert wherever you want. I don’t see a problem with that.

Lamport wrote extensively about that. I think that any introductory article about TLA discusses stuttering steps.

I think my explanation was not very good, so I’ll try again :slight_smile: In Electrum all behaviours are infinite. Having stuttering steps only when the algorithm terminates might be a bit problematic, because you might exclude from the analysis partial runs that do not reach that point. For example, if for particular values of a and b the algorithm needs 15 steps (loop iterations) to terminate and your scope for the time domain is 10, all runs for those inputs will be excluded from the analysis. Let’s suppose you have an assert inside the while to check some loop invariant, and let’s suppose that for those particular inputs there was a problem and that assert would actually fail after 3 steps. You would not detect this problem because that run would not be considered. If you had stuttering steps, then it would be OK, because Electrum would be able to produce an infinite trace that is a counterexample (it would take the 3 steps and then stutter forever). What I said wrong is that if the loop does not terminate, no infinite runs would exist: in fact, if the values of the variables start repeating after some time, it could still be possible to have an infinite trace.

This is a good example: you might want to have such kind of algorithms and an assert inside the while.

I’m not saying it’s impossible - we have just not figured out a good way to do it. TLA has a different verification technique (explicit state enumeration) that makes it simpler to verify deadlocks. Notice that even the notion of deadlock could be a bit strange in Electrum, and we would need to come up with a good definition: we don’t have an explicit model of a state machine, namely we don’t have an explicit specification of transitions, so it might not be clear what deadlock means. It just happens that many Electrum specifications look like a specification of a state machine (always (action1 or action2 or ...), where action1, etc., are specified with formulas involving only relations or primed relations, without other temporal operators). In this case (which btw includes the result of the translation from your PlusCal-like language) we could use the same definition of deadlock as in TLA: a deadlock is a state where no actions are enabled, and only stuttering steps can occur. Even in this particular case, due to differences in the underlying verification technique, it is not obvious to me how to check deadlock absence in Electrum. For Electrum specifications that do not fit this pattern, I don’t know if it is clear what deadlock means. If anyone has an idea about how to proceed with this, we would be glad to discuss it.

Best,
Alcino

I think this would be quite valuable, wouldn’t it? I think it might be an incentive for people to move to Alloy if we can have at least the same features. PlusCal, and my work, will inevitably have to make these kinds of policy choices. The nice thing is, of course, that full Alloy remains available.

I do not know a lot about TLA+, but I find the GUI of Alloy infinitely superior; yet most of the world seems to be on TLA+. And a bit of competition might be good for us :slight_smile:

Granted. Detecting deadlocks is on our “nice-to-have” list but the Electrum logic is not the same as that of TLA+ so some thinking is in order to see whether we can devise an efficient and relevant deadlock detection procedure.

I guess you can think of stuttering steps from several points of views. Here are some random thoughts, by no means exhaustive.

From a theoretical point of view: infinite traces are certainly the simplest semantic interpretation structure for LTL because reasoning is easier than when you admit finite traces (several publications discuss the topic). Which is why we opted for them in Electrum.

Therefore let’s consider infinite traces only. Then how do you handle deadlocks (including correct termination, BTW) in this setting? For if a system deadlocks, then intuitively its execution is finite. But you don’t have finite traces! Fortunately, such an intuitively-finite trace can be represented as an infinite one with the last state repeated infinitely. To do that, you introduce a special action that leaves the state variables unchanged: a stuttering action. Now, a finite trace that could not exist in our (infinite) setting shows up as an infinite trace stuttering in its last state, and the analysis will consider it when checking a property.

Regarding the practical, intuitive reading of a stuttering action, I think it depends on what you’re modelling and how you’re doing it. Depending on the situation, you may think of it as:

  • a NOP instruction
  • a process doing something else (including being idle)

However, if we have a stuttering action in our model, then it may happen at any time (not necessarily at the last instant). This is, I guess, linked to what you call “useless models”. In practice, if you have a stuttering action, then you should surely also add fairness constraints (which will in particular enforce models where something happens, so to speak).
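For the mult example from the top of the thread, such a fairness constraint could be sketched as follows. This is a hand-written illustration of the standard weak-fairness shape (assuming a stuttering event has been added to the always-disjunction; not tool output): if the loop step stays enabled forever, it must be taken infinitely often, so the trace cannot stutter forever before reaching Done.

```alloy
-- Sketch: weak fairness for the loop step of mult.
-- If pc stays at _L100 forever, the _L100 event must
-- nevertheless occur infinitely often.
fact WeakFairness {
    (eventually always mult.pc = _L100)
        implies (always eventually mult._L100)
}
```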

Hope this helps.

Thanks David! Appreciated. I think I need to read more on LTL, you got a book recommendation?

Not really… Logic in Computer Science: Modelling and Reasoning about Systems (by Huth & Ryan) is probably the best I know of (there is even some Alloy in it, although it may be a bit outdated), but it’s probably too targeted towards MSc students.

Lamport’s book and articles on TLA+ are nice and his recent video lectures are very good in my opinion. Stephan Merz also has good lecture notes and articles on TLA+.

I’m also very fond of a course in Toulouse in which I teach, but slides are in French (the part on fairness is one of the best I’ve seen).
