The following is the GNU All-permissive License as recommended in https://www.gnu.org/licenses/license-recommendations.en.html
Copyright (C) 2024 Free Software Foundation sysadmin@fsf.org
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.
Contributions are welcome. See https://savannah.gnu.org/maintenance/fsf/.
MediaWiki
Exporting pages
Copy and paste the HTML table source of each listing generated by http://cluestick/wiki/Special:AllPages, except for the Category and File listings (put those in a separate file).
Use regexes to extract the page URLs from that text document and remove the /wiki/ prefix, as sketched below.
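A minimal sketch of that extraction, assuming the pasted table source was saved as allpages.txt (that filename is an assumption; page-list, the resulting list of page names, is used again further below):

grep -oE 'href="/wiki/[^"]+' allpages.txt | sed -e 's:^href="/wiki/::' > page-list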
Paste those into http://cluestick/wiki/Special:Export and download the full page history. Compress that download with xz to store as an archive (an example follows this paragraph). Then paste the URLs into the same form and download again, this time with no revision history; that no-history export is the format used for extracting page data.
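For example, assuming the full-history export was saved as cluestick-full-history.xml (the filename is an assumption; -k keeps the uncompressed original next to the .xz archive):

xz -k cluestick-full-history.xml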
Extracting page data from XML
apt install python3-xmltodict pandoc
Symlink the XML file (the one exported without revision history) to cluestick.xml.
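For example, assuming the no-history export was saved as cluestick-no-history.xml (the filename is an assumption):

ln -s cluestick-no-history.xml cluestick.xml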
Then put the following code into a script:
#! /usr/bin/python3
import os
import re
import xmltodict

os.system('mkdir -p ./old-pages ./old-pages-md')

with open('./cluestick.xml') as f:
    parsed = xmltodict.parse(f.read())
wanted_data = parsed['mediawiki']['page']

for page in wanted_data:
    # Replace characters that are unsafe or awkward in filenames with underscores.
    title = page['title']
    title = re.sub(r' ', '_', title)
    title = re.sub(r'/', '_', title)
    title = re.sub(r'\(', '_', title)
    title = re.sub(r'\)', '_', title)
    print(title)
    try:
        text = page['revision']['text']['#text']
    except (KeyError, TypeError):
        # Pages with no text content get a placeholder.
        text = "Blank Page on Cluestick"
    # fix last line on nico's user page
    if title == "User:Ncesar":
        text = text[:text.rfind('\n')] + "\n</pre>"
    with open('./old-pages/' + title + '.wiki', 'w') as f:
        f.write(text)
    os.system('pandoc -r mediawiki ./old-pages/' + title + '.wiki -t markdown -o ./old-pages-md/' + title + '.md')
Run the script. It will generate markdown pages, but links will be broken, including the special RT ticket number links and links to other wiki pages, given that once imported, page URLs will start with something like "/cluestick/".
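For reference, pandoc renders internal links in its "wikilink" form, so a link in the generated markdown looks something like this (the page name is illustrative):

[Backups](Backups "wikilink")

Once the pages live under /cluestick/, a relative target like that no longer resolves to the right page.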
Fixing URLs so they point to other pages under /cluestick
mkdir -p old-pages-html ; for x in old-pages-md/* ; do pandoc -r markdown "$x" -t html -o old-pages-html/"$(basename -s .md "$x")".html ; done
for x in old-pages-html/* ; do sed -i -e "s:</a>:\n</a>:g" "$x" ; done
sed -i -e "/wikilink/ s:href=\":href=\"https\://gluestick.office.fsf.org/cluestick/:" old-pages-html/*
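The effect on an illustrative link, once converted to HTML:

<a href="Backups" title="wikilink">Backups</a>

becomes:

<a href="https://gluestick.office.fsf.org/cluestick/Backups" title="wikilink">Backups</a>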
Update links that contain "/", "(", or ")" so those characters become "_". (There was one error with this command.)
grep -ri -E -e "[/()]" page-list > pages-with-special-chars
while read line ; do sed -i "/"$(echo "$line" | sed -e "s:/:\\\\/:")"/ s.$line.$(echo "$line" | sed -e "s:[/()]:_:g")." old-pages-html/* ; done < pages-with-special-chars
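A more readable, roughly equivalent form of that loop (a sketch, not a drop-in replacement; note that the "." delimiter in the s command misbehaves when a title itself contains a dot, which may be the source of the error mentioned above):

while read -r line ; do
    escaped="$(echo "$line" | sed -e 's:/:\\/:g')"
    fixed="$(echo "$line" | sed -e 's:[/()]:_:g')"
    sed -i "/$escaped/ s.$line.$fixed." old-pages-html/*
done < pages-with-special-chars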
Convert back to markdown:
mkdir -p old-pages-md-new ; for x in old-pages-html/* ; do pandoc -r html "$x" -t markdown -o old-pages-md-new/"$(basename -s .html "$x")".md ; done
Convert broken code lines, etc., to indented code:
#! /usr/bin/python3
import os
import re

os.system('mkdir -p ./old-pages-code-fixed')

for filename in os.listdir('./old-pages-md-new'):
    with open('./old-pages-md-new/' + filename, 'r') as f:
        text = ""
        for line in f.readlines():
            # A line wrapped entirely in backticks (optionally with a trailing
            # backslash) becomes a four-space-indented code line.
            if line.startswith('`') and (line.rfind('`') == len(line) - 2
                    or (line.rfind('`') == len(line) - 3 and line.rfind('\\') == len(line) - 2)):
                line = re.sub(r'^`', '', line)
                line = re.sub(r'`(\\)?$', '', line)
                line = "    " + line
            # Blank out lines that contain only a stray backslash.
            line = re.sub(r'^\\$', '', line)
            # Promote exactly-two-space indents to four spaces so markdown
            # treats them as code.
            if line.startswith('  ') and not line.startswith('   '):
                line = re.sub(r'^  ', '    ', line)
            text += line
    with open('./old-pages-code-fixed/' + filename, 'w') as g:
        g.write(text)
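For example, a line that pandoc emitted as (illustrative):

`apt install pandoc`\

comes out as an indented code line:

    apt install pandoc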
Add header, copy to gluestick repo:
for x in old-pages-code-fixed/* ; do cat header.mdwn "$x" > ~/src/wikis/gluestick/cluestick/"$(basename "$x")" ; done
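As a quick sanity check, the file count in the destination should match the number of converted pages (assuming the destination directory holds only these pages):

ls old-pages-code-fixed | wc -l
ls ~/src/wikis/gluestick/cluestick | wc -l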