@sam the way I am applying it here is that if these guardians possess ancient and powerful knowledge/magic, they might be intelligent enough to either improve themselves or make better versions of themselves, which could then improve themselves or make even better versions, and so on.
The topic of that paper was the intelligence explosion: if we ever manage to make an A.I. that is as smart as or smarter than a human, then logically that A.I. should be able to figure out how to make a better A.I., and that A.I. should be able to figure out how to make an even better one, and so forth. It is called the intelligence explosion because the improvements would compound at a crazy accelerating rate.
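To make the "accelerating rate" part concrete, here's a tiny toy model (my own sketch, not anything from the paper itself). The starting `capability` and the `improvement_factor` are made-up numbers; the only point is that each generation's gain depends on how capable the previous generation already was, so the gains keep growing instead of staying flat.

```python
# Toy illustration of recursive self-improvement (illustrative numbers only):
# each generation of A.I. designs its successor, and how much it can improve
# the design scales with how capable it already is, so the gains compound.

capability = 1.0          # the first human-level A.I. (arbitrary units)
improvement_factor = 0.5  # assumed: a smarter designer finds proportionally bigger gains

for generation in range(1, 11):
    gain = improvement_factor * capability  # gain grows with current capability
    capability += gain
    print(f"generation {generation}: capability ~ {capability:.1f} (gained {gain:.1f})")
```

Running it, the per-generation gain itself keeps getting bigger (0.5, 0.75, 1.1, ...), which is the "explosion" part, as opposed to a fixed bump per generation.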
And yes, the paper was also about how, when you make that first A.I., you need to be careful about a lot of things so that the system keeps 'caring' about you and your interests, rather than caring only about further optimization.