I don’t want the end executable to have to bundle these files and re-parse them each time it gets run.
No matter how you persist data, you will need to read it back somehow. The question is really just whether the new format is more efficient to read than the old one. Some formats, such as FlatBuffers and Cap'n Proto, are designed specifically so that loading is nearly free: fields are accessed in place rather than parsed into new objects.
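To illustrate the "efficient loading" idea, here is a minimal sketch in plain Python (this is not the FlatBuffers or Cap'n Proto API, just the same principle using `struct` and `mmap`): with a fixed binary layout, reading one field is just offset arithmetic, with no parse pass over the file.

```python
import mmap
import struct
import tempfile

# Hypothetical fixed layout: a uint32 count, then that many little-endian uint32s.
HEADER = struct.Struct("<I")
FIELD = struct.Struct("<I")

def write_blob(path, values):
    """Serialize a list of ints into the fixed layout."""
    with open(path, "wb") as f:
        f.write(HEADER.pack(len(values)))
        for v in values:
            f.write(FIELD.pack(v))

def read_field(path, index):
    """Read one field in place: memory-map the file and jump to its offset."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as buf:
            offset = HEADER.size + index * FIELD.size
            return FIELD.unpack_from(buf, offset)[0]

path = tempfile.mktemp()
write_blob(path, [10, 20, 30])
print(read_field(path, 2))  # -> 30
```

Real schema-based formats add versioning, nesting, and alignment rules on top, but the load path is the same: map the bytes, index into them.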
(Well technically you could persist the process image to disk, but this tends to be much larger than serialized data would be and has issues such as defeating ASLR. This is very rarely done.)
Lots of people are suggesting pickle, but it isn't particularly fast. That being said, with Python you can't expect much to start with.
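Even so, loading a pickle is usually cheaper than re-parsing the original files, so the common pattern is to cache the parsed structure and fall back to parsing only when the source changes. A minimal sketch (the function and file names are made up for illustration):

```python
import os
import pickle

def load_data(source_path, cache_path, parse):
    """Return parsed data, reusing a pickle cache when it is newer than the source."""
    if (os.path.exists(cache_path)
            and os.path.getmtime(cache_path) >= os.path.getmtime(source_path)):
        with open(cache_path, "rb") as f:
            return pickle.load(f)  # cache hit: skip the expensive parse
    data = parse(source_path)  # cache miss: parse, then persist for next run
    with open(cache_path, "wb") as f:
        pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
    return data
```

Note the usual caveat: only unpickle files you wrote yourself, since `pickle.load` can execute arbitrary code from untrusted input.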
% free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        15Gi        90Gi       523Mi        22Gi       110Gi
Swap:           63Gi          0B        63Gi
I’ll use it eventually. Just gotta let the disk cache warm up.