If you're not running against the SQLite test suite, then you haven't written a viable SQLite replacement.
I'm not sure where they get their 90k CLOC figure, though; that seems like it might be an LLM-induced hallucination, given the rest of the project. The public-domain TCL test suite is ~27k CLOC, and the proprietary suite is 1055k CLOC.
The value of SQLite is how robust it is and that’s because of the rigorous test suite.
> and the proprietary suite is 1055k CLOC.
Why is the code size of the proprietary test suite even public though?
Any serious SQLite re-implementation should buy it and test against it.
It's much more likely the issue is one of cost, not of seriousness.
https://github.com/Dicklesworthstone/frankensqlite#current-i...
Although I will admit that even after reading it, I'm not exactly sure what the current implementation status is.
Reed-Solomon over GF(256) is more than adequate. Or just plain LDPC.
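For context, byte-oriented Reed-Solomon codes are built on arithmetic in the finite field GF(2^8). A minimal sketch of field multiplication, which is the core primitive, might look like the following; the reduction polynomial 0x11d (x^8 + x^4 + x^3 + x^2 + 1) is a common choice in RS libraries, but it is an assumption here, not anything FrankenSQLite is known to use:

```rust
// Sketch: multiplication in GF(2^8), the field underlying byte-wise
// Reed-Solomon codes. Addition in this field is plain XOR; multiplication
// is carry-less shift-and-add, reduced modulo an irreducible polynomial.
// 0x11d is an assumed, commonly used reduction polynomial.
fn gf256_mul(mut a: u8, mut b: u8) -> u8 {
    let mut product = 0u8;
    for _ in 0..8 {
        if b & 1 != 0 {
            product ^= a; // "add" (XOR) the current shifted copy of a
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1d; // reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        }
        b >>= 1;
    }
    product
}

fn main() {
    assert_eq!(gf256_mul(1, 0x57), 0x57); // 1 is the multiplicative identity
    assert_eq!(gf256_mul(2, 0x80), 0x1d); // overflow triggers reduction
    assert_eq!(gf256_mul(3, 7), 9);       // (x+1)(x^2+x+1) = x^3+1
    println!("ok");
}
```

An RS encoder then evaluates or divides polynomials whose coefficients are bytes using exactly this multiply, which is why GF(256) is a natural fit for page- and block-sized data.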
Utterly unmaintainable by any human, likely never to be completed or used, but now deposited into the atmosphere for future trained AI models and humans alike to stumble across and ingest, degrading the environment for everyone around it.
But nobody shows off static HTML sites on HN.
MIT, plus a condition designating OpenAI and Anthropic as restricted parties that are not permitted to use it, or else?
Impressive piece of work from the AIs here.
A better question is whether the implementation was touched by anything other than generative AI.
FrankenSQLite: a Rust reimplementation of SQLite with concurrent writers
https://frankensqlite.com/