I am still working on the model I described here. Essentially, I have a function f : A -> B list, and I want to compute List.flatten (List.map f list_of_as). Hopefully that OCaml pseudocode makes sense; the important part is the list/sequence map and flatten.
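To make the pseudocode concrete, here is the flatten-of-map written out as real OCaml (the `f` below is a hypothetical stand-in that duplicates each element; any `'a -> 'b list` works the same way):

```ocaml
(* A hypothetical f : A -> B list; here it maps each int to a two-element list *)
let f (x : int) : int list = [x; x]

(* flatten-of-map, exactly as in the pseudocode above *)
let result = List.flatten (List.map f [1; 2; 3])

(* In OCaml >= 4.10 the same thing is List.concat_map f [1; 2; 3] *)
let result' = List.concat_map f [1; 2; 3]
```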
I’ve found that my implementation of this has very poor performance when generating the CNF. Here is a scaled-down example of the problem:
module Test
sig A {}
sig B {}
sig As {
  s: seq A
}
sig Bs {
  s: seq B
}
// This maps from an A to a sequence of Bs
pred f (input: A, out: Bs) {
  out.s = 0 -> B
}
// This maps from a sequence of As to a sequence of Bs using f
pred f_seq (input: As, output: Bs) {
  input.s.isEmpty => output.s.isEmpty else
  (some convert_first: Bs,
        others: As,
        convert_others: Bs |
    // Convert the first A
    f [input.s.first, convert_first] and
    // Create a sequence for the other As
    others.s = input.s.rest and
    // Recurse on the other As
    f_seq [others, convert_others] and
    // Combine the two sequences
    output.s = convert_first.s.append [convert_others.s])
}
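For reference, the recursion scheme that f_seq encodes is the same one you would write in OCaml as a cons-based recursion (a sketch, with `f` again a hypothetical `'a -> 'b list`):

```ocaml
(* Same shape as the Alloy predicate: empty input gives empty output;
   otherwise convert the first element, recurse on the rest, and append. *)
let rec f_seq (f : 'a -> 'b list) (xs : 'a list) : 'b list =
  match xs with
  | [] -> []
  | first :: rest -> f first @ f_seq f rest
```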
pred test {
  some a: As, b: Bs | f_seq [a, b]
}
run test for 5 but 5 int, 5 seq
Recursion unroll level 0: <1 second
Recursion unroll level 1: 2 seconds
Recursion unroll level 2: 200 seconds
Recursion unroll level 3: Still running
I expected this to scale exponentially, but it exceeded my expectations (in a negative way). The other odd thing is that Java is not using much memory (about 1 GiB). If the problem were that unrolling was blowing up the formula that badly, I would have expected much higher memory use.
Does this seem like expected behavior? Is this a sign that my problem domain might not be that well suited for Alloy?
Thanks in advance.