SQLite cache needs updating
I understand that stored data is transient, but I'm afraid that subsequent operations might crash devenv/servicehub permanently, requiring users to manually delete the DBs as a fix.

Note: I would not describe the data as "transient". The intent of the DB is (nearly all of the time) to let us compute data once and then store it for exceptionally long periods of time (days, weeks, months, etc.). For this specific use case, I believe that any WAL-related slowdown will be offset by not needing to open nonexistent files (the original reason for #22339 and this PR).
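To make the tradeoff above concrete, here is a minimal sketch using Python's built-in `sqlite3` module (the file and table names are hypothetical, not taken from the PR): switching the journal mode to WAL creates a `-wal` side file next to the database, replacing the rollback-journal scheme whose nonexistent journal file SQLite would otherwise probe for.

```python
import os
import sqlite3
import tempfile

# Hypothetical cache database; names are illustrative only.
path = os.path.join(tempfile.mkdtemp(), "cache.db")
conn = sqlite3.connect(path)

# Switch the journal mode; the pragma returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]

conn.execute("CREATE TABLE cache (k TEXT PRIMARY KEY, v BLOB)")
conn.execute("INSERT INTO cache VALUES ('key', x'00')")
conn.commit()

# While the connection is open, the write-ahead log exists on disk.
wal_existed = os.path.exists(path + "-wal")
conn.close()
print(mode, wal_existed)
```

WAL writes are appended to the side file and checkpointed back later, which is the source of the "WAL-related slowdown" mentioned above; the offsetting win is that readers and writers no longer repeatedly test for a journal file that usually does not exist.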
@sharwell I would also note that the original cause of the reported "slowness" was precisely that SQLite tested whether a file exists on each operation. (This is important because, according to data I saw from ESENT, 8 out of 10 DBs will be in a corrupted state; if the store can't recover, it has to rebuild from scratch, which is effectively the same as having no persisted storage at all.)

As far as I understand how SQLite works, the locking mode is set per connection, and exclusive mode never unlocks the database until that connection is closed. As the SQLite docs say: "When the locking-mode is set to EXCLUSIVE, the database connection never releases file-locks. The first time the database is read in EXCLUSIVE mode, a shared lock is obtained and held."

Working over a networked file system wouldn't work (the documentation explicitly states this), but the exact failure mode is not known for now; I'll take a look. My tests show that it works over SMB (I symlinked the vs15 directory to a remote file server over SSL VPN over the Internet, and strangely it's still usable). I believe the SQLite documentation's term "network file system" means "something like NFS" that doesn't support SHM and/or locking.
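The quoted locking behavior can be demonstrated with a short sketch, again using Python's `sqlite3` module with hypothetical file and table names: once a connection in EXCLUSIVE locking mode performs a write, it retains an exclusive file lock even after COMMIT, so a second connection cannot read the database until the first one closes.

```python
import sqlite3
import os
import tempfile

# Hypothetical cache database; names are illustrative only.
path = os.path.join(tempfile.mkdtemp(), "cache.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA locking_mode = EXCLUSIVE")
writer.execute("CREATE TABLE cache (k TEXT PRIMARY KEY, v BLOB)")
writer.execute("INSERT INTO cache VALUES ('key', x'00')")
writer.commit()  # the exclusive file lock is retained past the commit

# A second connection can be opened, but its first read hits the lock.
reader = sqlite3.connect(path, timeout=0.1)
try:
    reader.execute("SELECT v FROM cache").fetchall()
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True
reader.close()
print(blocked)

writer.close()  # only closing the connection releases the file locks
```

This is why a per-connection EXCLUSIVE mode is attractive for a single-owner cache (no lock acquire/release or journal probing per operation) but rules out concurrent access, including from a second process on the same machine.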
Until the data actually needs to be updated, we want to store what we have across sessions to prevent unnecessary recomputation of it.