| author | dvs1 | 2025-03-16 17:59:41 +1000 |
|---|---|---|
| committer | dvs1 | 2025-03-16 17:59:41 +1000 |
| commit | 107f93b621ac3829e7aae687ede72473b5dad071 | |
| tree | 8de7d29eb7ae080be80e23f550f258817dba1696 /TODO.md | |
| parent | Get unsorted to work again. | |
Speed things up by not downloading or converting things that didn't change.
Diffstat (limited to 'TODO.md')
| -rw-r--r-- | TODO.md | 9 |

1 file changed, 0 insertions(+), 9 deletions(-)
```diff
@@ -4,15 +4,6 @@ Make it perphekd!
 
 ## Do these
 
-Check the timestamps on the files, only update if source is newer than destination. Meh, it's already 600 times faster than the pandoc version.
-
-- One quirk to watch for is if a URL path changes, the docs that have that URL need to be redone.
-- pandoc is a lot slower though, so do this for sure when dealing with that.
-- When scraping the web sites, they tend to be dynamically generated with no useful timestamp on them.
-- The web site scrape happens locally anyway, I can compare source file timestamps.
-  + So check timestamps when "downloading" the original, and before running pandoc on the result. Think that's the most time consuming steps.
-  + Since this only stops updates of existing files, URLs changing are not a problem.
-
 Add atom feed for single page. Alas cgit only seems to have ATOM feed on the whole repo, not individual files.
 
 - However, once timestamps are sorted, I can use that code to generate (static?) RSS and ATOM feeds, and create page histories using diffs.
```
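The removed TODO items describe the technique this commit implements: skip the download and conversion steps when the source file is no newer than the destination. A minimal sketch of that mtime comparison in Python, assuming plain files on local disk; `needs_update`, `convert_if_stale`, and the pandoc invocation are illustrative, not taken from the repo:

```python
import os
import subprocess

def needs_update(src: str, dst: str) -> bool:
    """True when dst is missing or older than src, comparing mtimes."""
    if not os.path.exists(dst):
        return True
    return os.path.getmtime(src) > os.path.getmtime(dst)

def convert_if_stale(src: str, dst: str) -> None:
    # Hypothetical conversion step; the wiki's actual converter may differ.
    if needs_update(src, dst):
        subprocess.run(["pandoc", src, "-o", dst], check=True)
```

As the deleted notes themselves warn, this only avoids rewriting existing outputs: if a page's URL path changes, the documents that link to it still need regenerating, which an mtime check alone won't catch. And since scraped web pages tend to be dynamically generated with no useful timestamp, the comparison has to be made against the locally stored copy rather than the remote original.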
