but if I try to set an exact scope for A, it says ‘Sig A is variable thus scope cannot be exact.’ How come? Aren’t the possible values for A independent of the number of steps?

In Electrum, we can write:

always A’ = B

In LTL, atomic formulas typically refer to a single moment in time. Often, one has to use history variables to refer to the value of an element at a previous moment. So how is the above represented in NuSMV and in Alloy/Pardinus directly?

If A is var, a command with scope N on A means that, at each step, the valuation for A is a set between {} and { a1, ..., aN }. Specifying that N is exact would rather mean that the set is equal to { a1, ..., aN } in every state, hence that A is constant, which defeats the purpose of declaring it var.
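To make this concrete, here is a minimal Electrum sketch (the signature name and commands are invented for illustration, not taken from the discussion above):

```alloy
-- A mutable signature: its valuation may change from state to state.
var sig A {}

-- Scope "3 A" bounds A in *every* state: each state's valuation of A
-- is some subset of at most three atoms { A0, A1, A2 }.
run {} for 3 A

-- "exactly 3 A" would force A = { A0, A1, A2 } in every state, making A
-- effectively constant, hence the error for var sigs:
-- run {} for exactly 3 A   -- rejected: "Sig A is variable thus scope cannot be exact."
```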

There’s nothing fancy going on. In propositional LTL, the interpretation structure (a trace) is a sequence of Boolean valuations for propositions. Here, in first-order LTL, the interpretation yields, for any state, a valuation for signatures and fields as sets of tuples of atoms (and the valuation for A' is that for A in the next state).
Regarding the Pardinus encoding (so, for SAT-based bounded model checking), this is essentially the old approach of Daniel’s book: every var sig or field is implicitly extended with an extra column representing instants, with an encoding for lasso traces. See also Alcino’s paper.
For SMV, signatures and relations are represented by Boolean variables telling whether a given atom is in the valuation of A in the “current” state. Then, SMV features a next(.) operator to refer to the value of a variable at the next instant (if you know SMV well, notice that we do not use ASSIGN blocks but TRANS ones). See Fig. 8 of this paper (deprecated in some other aspects, though).
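As a hedged sketch of what such an encoding can look like (all variable names are invented for illustration), the Electrum constraint always A' = B, restricted to a single atom a1, might become:

```smv
MODULE main
VAR
  A_a1 : boolean;  -- "atom a1 is in A" in the current state
  B_a1 : boolean;  -- "atom a1 is in B" in the current state

-- The before-after constraint goes into a TRANS block (not ASSIGN):
-- on every transition, the next value of A agrees with the current value of B.
TRANS
  next(A_a1) <-> B_a1;
```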

Hope this helps. Tell me if you need more details.

PS: we have a paper on the way to explain all this in more detail.

Thanks for the answers. I’m interested in more details on the translation to SMV. I can see how the next(.) operator in SMV can be used to represent the next value of a variable in the definition of the model in SMV, but in Electrum, the use of x’ can be arbitrarily nested within LTL operators. Do you turn the conjunction of all the LTL facts into the definition of a transition system in SMV using the TRANS block? Is this related to the methods that translate LTL into Büchi automata?

One more question: was the decision to require a total next state relation in Electrum because this is required in SMV? Or was there a reason to have a total next relation for LTL analysis done within Alloy?

It’s a little more subtle than that, but that’s the idea. We recognize formulas that obviously follow a certain shape and map them to corresponding SMV sections:

present-time formulas → INIT

before-after predicates → TRANS

formulas of the shape G phi (phi in present time) → INVAR

the rest (remaining facts + run/check pred to assess) goes into LTLSPEC
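As an invented toy example of this allocation (the actual recognition performed by the compiler is more subtle, as noted above), each fact below would be routed to the indicated SMV section:

```alloy
var sig S {}

fact { some S }                                     -- present time        → INIT
fact { always (S' = S or some x: S | S' = S - x) }  -- before-after        → TRANS
fact { always lone S }                              -- G (present time)    → INVAR
check { eventually no S }                           -- everything else     → LTLSPEC
```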

Not especially: we map Alloy formulas to SMV formulas. The work of turning the latter into automata would be that of the model checker. In practice, we call by default the “k-liveness” algorithm of nuXmv, which is based on a logical encoding rather than on automata-based methods.

I’m not sure I understand the question… As you can see in the figure cited above, the prime operator is a convenience that reduces to a certain use of the X connective. We then rely on the standard semantics of propositional LTL. Does this answer your question?

“the rest (remaining facts + run/check pred to assess) goes into LTLSPEC”

Does this mean that you make the property to check an implication as in:
{conjunction of LTL Alloy facts} => each LTL Alloy check
?

If so, then I’m still puzzled about how you deal with x’ = y as an atomic formula in the middle of an arbitrarily nested LTL formula in the nuXmv translation. I don’t see how it works to just extract a before-after predicate from the middle of an LTL formula and make it part of TRANS in SMV. Can you explain a little more?

And also, why not use the implication approach for all LTL facts rather than extract certain LTL facts (such as the one turned into INVAR above) for special treatment? Is it more efficient?

Regarding the question about the total next-state relation: I think you answered it by saying that you rely on the standard semantics of propositional LTL, which requires a total next-state relation.

Many thanks for taking the time to answer my questions!

Yes, with the exceptions stated above (allocation to INIT, TRANS, INVAR sections).

First, the fact that a tuple belongs (or not) to a relation is represented by a Boolean variable (this is similar to what’s done in Emina’s PhD thesis).

Then, conceptually (in practice, to simplify computations, we leverage static information, such as lower bounds or the fact that x below may just be a quantified variable rather than an arbitrary relation), we translate the Alloy constraint x' in y into:

for every tuple t in the upper bound of x’ . [t \in x’] => [t \in y]
= for every tuple t in the upper bound of x . (X [t \in x]) => [t \in y]

where every subformula of the shape [t \in x] is a Boolean variable.
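For instance, with an invented toy bound where the upper bound of x (and hence of x') is { t1, t2 }, and writing x_t1 for the Boolean variable standing for [t1 \in x] (names illustrative), the constraint x' in y unfolds to two propositional LTL obligations:

```
(X x_t1) -> y_t1
(X x_t2) -> y_t2
```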

Not just any before-after predicate. But if your Alloy model follows a standard pattern, you’ll have a fact describing events, like fact { always some variables | event1 or ... or eventn }, where every event is a before-after predicate. So, if we keep the structure (as facts) of the Alloy model for as long as possible during compilation, this shape can often be detected and translated into a TRANS. Similarly, every top-level present-time formula in a fact yields an INIT.
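A sketch of that standard pattern (all names invented), where the whole event fact matches the shape that gets compiled into a TRANS section:

```alloy
var sig Pending {}

-- Each event is a before-after predicate.
pred send [p: Pending] { Pending' = Pending - p }
pred stutter          { Pending' = Pending }

-- Top-level event fact: always (event1 or ... or eventn).
fact events { always (stutter or some p: Pending | send[p]) }
```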

This is actually what we did in our very first compiler. But we found out that, with the nuXmv algorithm we use, it is far more efficient to put as much as possible into those dedicated sections.