The authors are transparent about the framework’s current limitations. The primary challenge is catastrophic forgetting; as the model sequentially integrates new edits, its performance on earlier tasks degrades (Figure 6). While SEAL can perform multiple updates without a complete collapse, robustly preserving knowledge remains an open problem for this line of research.
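To make the failure mode concrete, here is a minimal toy sketch of catastrophic forgetting under sequential updates. This is not SEAL's actual pipeline; it's a tiny MLP trained on two synthetic tasks in sequence (the `make_task` helper and the tasks themselves are invented for illustration), showing how accuracy on the first task degrades once the model is updated on the second without any replay of the earlier data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(axis):
    """Toy binary task: the label depends on the sign of one input axis."""
    x = torch.randn(1024, 2)
    y = (x[:, axis] > 0).long()
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(axis=0)  # the "earlier edit"
xb, yb = make_task(axis=1)  # the "later edit"

train(model, xa, ya)
print(f"task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

# Sequential update on task B, with no replay of task A's data:
train(model, xb, yb)
print(f"task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")
print(f"task B accuracy after training on B: {accuracy(model, xb, yb):.2f}")
```

Task A accuracy drops sharply after the second phase, even though the network has enough capacity to represent both boundaries. That's the same dynamic, in miniature, that Figure 6 reports for SEAL's sequential edits.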
Relatedly, model collapse hasn't been fully solved either, but a recent paper suggested a method to delay it, which is actually pretty cool.
And in that sense, alchemy is alive and well: we still have people collecting dross and doing science on it, hoping to turn it into gold.