I’ve been blown away by OpenAI in general, but the cherry on top of the cake is undoubtedly that it supports Alloy!
Just ask for a model of the Einstein puzzle or a DAG!
Didn’t think it would happen in my time …
Write me an alloy model that generates a non-symmetric relation
Sure, here is a simple example of an alloy model that generates a non-symmetric relation:
sig Person {
  name: String,
  age: Int
}
sig FriendOf {
  from: Person,
  to: Person
}
pred isOlderThan(p1: Person, p2: Person) {
p1.age > p2.age
}
pred isYoungerThan(p1: Person, p2: Person) {
p1.age < p2.age
}
pred isFriend(p1: Person, p2: Person) {
some f: FriendOf | f.from = p1 and f.to = p2
}
fact {
// A person is older than their friends
all p1: Person, p2: Person | isFriend(p1, p2) => isOlderThan(p1, p2)
// A person is not younger than their friends
no p1: Person, p2: Person | isFriend(p1, p2) and isYoungerThan(p1, p2)
// The relation is anti-symmetric
no p1: Person, p2: Person | isFriend(p1, p2) and isFriend(p2, p1) and p1 != p2
}
This model defines a Person with a name and age attributes, and a FriendOf relation that connects people. The isOlderThan and isYoungerThan predicates define the relation between the ages of two people. The isFriend predicate states that two people are friends if there is a FriendOf relation between them.
The model then has a few facts. These facts ensure that the isFriend relation is non-symmetric. For example, if person A is friends with person B, then person B cannot be friends with person A. This means that the isFriend relation is non-symmetric.
I hope this helps! Let me know if you have any other questions.
Are you confusing non-symmetric with anti-symmetric?
Yes, you are correct. I apologize for the confusion. A non-symmetric relation is a relation in which the order of the elements in the relation matters, such as the “is greater than” relation. An anti-symmetric relation is a relation in which two distinct elements cannot be related to each other in two different ways, such as the “is a proper subset of” relation.
Here is an example of an alloy model that generates an anti-symmetric relation:
Sadly, it repeats the same code again, but with corrected comments.
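For what it is worth, here is a minimal sketch of the distinction in Alloy (my own illustration, not ChatGPT’s corrected output; the Node sig and edge field are made up):

sig Node { edge: set Node }

// symmetric: the relation equals its transpose
pred symmetric { edge = ~edge }
// anti-symmetric: no two distinct nodes are related in both directions
pred antiSymmetric { no disj a, b: Node | a->b in edge and b->a in edge }
// non-symmetric: simply not symmetric
pred nonSymmetric { not symmetric }

// ask the Analyzer for an instance that is non-symmetric yet not anti-symmetric,
// which shows the two notions really are different
run { nonSymmetric and not antiSymmetric } for 4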
I am not claiming that ChatGPT is in any way perfect, but would you have predicted that an AI chat would be able to do this? It is not as if Alloy is JavaScript or Ruby! It is clear, if you play with it, that it just fundamentally does not get Alloy fields: it treats them as fields in an object-oriented language and is not really aware that each field is a relation with the owning sig. Which I think is exactly the problem I had, and likely most newcomers have, with Alloy.
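To make that concrete, here is a small sketch of the relational reading of fields (my own example; the Person sig, friends field, and fansOf function are made up, not taken from the thread):

sig Person { friends: set Person }

// 'friends' is not a per-object slot: it is a binary relation Person -> Person,
// so it can be joined and transposed like any other relation
fact { no p: Person | p in p.friends }           // nobody is their own friend
fun fansOf[p: Person]: set Person { friends.p }  // join on the right: everyone who lists p as a friend
run { some p: Person | some fansOf[p] } for 3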
The problem is that the robot sounds a lot more confident than its results warrant. It reminds me very much of some female journalists I know. However, if you look at the result and notice errors, it tends to correct them when asked. In this case, did you try to prod it further? The remarkable thing is that if you just keep yelling at it, it does come up with better answers.
Two years ago I tried to write a simple browser extension for my wife that translated a text box on an HTML page from French to Dutch. Google, as usual, drove me nuts then. I have now done it with the help of ChatGPT in about an hour. Often its first suggestions were not perfect, but they made it easy to look up the ideas. When I loaded the extension, Google whined that the manifest version would be deprecated in 2023, so I tried to prod ChatGPT to give me a newer manifest version. That did not work well: it gave me a newer-looking file but made some errors in some of the fields. Its excuse might be that it is trained on 2021 data.
I also had a fight with it about OSGi. But after some yelling it did correct its erroneous ways. Still, I was blown away by the level of its knowledge of a relatively obscure technology.
Clearly the problem in the hands of the inexperienced will be that they do not recognize the errors. But really, I never thought it would have come this far already. I am already using it in my daily work instead of Google searches.
Don’t get me wrong – I think it’s amazing. And indeed it seems that with the right scaffolding people are able to get ChatGPT to do a lot more.