Hi Michal!
First of all, thank you for creating and hosting this wonderfully intuitive wiki. I'm using it for strictly personal ends, namely organizing the information for my thesis. The wiki seems a great scrapbook for thoughts, and for cross-linking them in an intuitive way.
Since I will definitely depend on the content of this wiki, it would be good to be able to download that content in one way or another, to keep a local backup just in case of an emergency. After all, technology has the unfortunate inclination to cause headaches once in a while…
Keep up the good work!
Cheers,
Joep
Hi,
Since I have received several questions about "personal" backups - it will be done. Although all the service data is backed up every day to a remote location, no one can guarantee 100% availability, so personal backups (or snapshots) are indeed needed.
I think it would be great if a backup contained:
- source files for each page
- browsable HTML site
- file attachments
All this could be done by creating a flat-file ZIP archive of the source code and HTML, plus the files.
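For instance, the archive could be laid out something like this (just a sketch; the actual structure and names would be up to the implementation):

backup-YYYY-MM-DD.zip
    source/some-page.txt        (wiki source for each page)
    html/some-page.html         (rendered, browsable HTML)
    files/some-page/photo.jpg   (attachments, grouped by page)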
I have no idea how to dump the forums - it would require a lot of effort. But for the content pages it should work soon.
I do not think it would be possible to create a "binary" backup, since no other software could read it. This may become possible when Wikidot releases its source code (in the future).
Does anyone have any other/better/worse ideas on how to implement backups?
cheers - michal
PS. In the meantime you could use one of the (I suppose many) download spiders that can make a copy of an existing website on your local disk…
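For example, with wget (the site address below is just a placeholder, and this assumes the site is publicly readable):

wget --mirror --convert-links --page-requisites http://your-site.wikidot.com/

Here --mirror follows links recursively, --convert-links rewrites them so they work locally, and --page-requisites also fetches images and stylesheets.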
Michał Frąckowiak @ Wikidot Inc.
Visit my blog at michalf.me
Thank you for your quick and helpful reaction, I'll spider the site soon. The daily backup is very reassuring - I suggest putting that in the feature list (http://www.wikidot.com/tour:what under "Why Wikidot?") to increase the chances of world domination ;).
I'm not sure whether the change history of pages is stored in the source, but this could be something to consider as well (or it could be left to the users' responsibility to back up often enough).
The zip file you suggest would be awesome. What comes to mind for better/worse ideas are checkboxes to select what one wants to back up (HTML/source/files/history).
Cheers, Joep
I'd suggest an XML backup for the main content, plus an HTML website, plus the attached files (optionally, because these are often large and already on people's computers).
The advantage of XML for the content is that it can easily be processed by other tools. You can define the XML schema informally at first, and make it formal later. I assume you can extract the forum data hierarchically for a given site.
If you want an example of a database exported to XML like this, take a look at how Jira (a bug tracker from Atlassian) does its XML export.
Once the data has been collected into a directory, it can be zipped/gzipped and then sent to an FTP server or some other remote storage.
All this can happen automatically, once a day or so.
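As a rough sketch (assuming a Unix host; the paths and hostnames are made up, and I've used scp rather than plain FTP only because it scripts cleanly without interaction):

zip -r backup-$(date +%F).zip ./site-export/
scp backup-$(date +%F).zip user@backup-host:/backups/

A crontab entry such as

0 3 * * * /home/user/bin/wikidot-backup.sh

would then run the whole thing daily at 3 am.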
But doing backups is only meaningful if there is something we can do with the data.
I have implemented a simple backup option. It is not XML, since XML is not really human-readable; it is a ZIP archive of the extracted sources and file attachments. In the future, however, it would be nice to offer more options for what to back up and how.
The backup is available through Site Manager » Backup.
There is no "restore" option. This is a simple backup, as I said, but it should be useful for people who want to make sure their content is safe. Any extension in this area would be a nice feature. In the short term it would be nice to make a browsable backup too.
Michał Frąckowiak @ Wikidot Inc.
Visit my blog at michalf.me
Hi,
I have been using your backup and download facility and it seems to have been working.
Today, however, something seems to be wrong. I don't know if the problem is at my end, but I get a window opening with a load of raw HTML…
Any suggestions?
Thank you,
Douglas
And what is the content of this "raw HTML"? Is it the ZIP archive itself, just not recognized as a ZIP by the browser?
Michał Frąckowiak @ Wikidot Inc.
Visit my blog at michalf.me
Does this occur every time you create a backup?
Michał Frąckowiak @ Wikidot Inc.
Visit my blog at michalf.me
Hi,
Thanks a lot for this wonderful wiki.
I have an issue with backup. I am looking to save the pages I created here in HTML format. "Save page as" is not good, because it saves all the options at the top and left side of the page, and at the bottom as well. I need only the content in HTML format, with local references.
Perhaps backup could do that, but it only saves the source files. That is somewhat risky, because if one day Wikidot ceases to exist (which happens to a lot of websites), my sources are useless: I cannot turn them into HTML files for any other site. Therefore, putting very important content here remains questionable to me.
If it can be done in any way, please do so. Even a promise that it will be implemented in the future would be enough for me to go with Wikidot.
By the way, I found that PBwiki lets you back up in HTML format with references that always work. They just ask for money to make my files private.
Regards,
amr
There are tools to crawl and download a site's HTML, though they might not work well for a wiki. Anyway, the backup you can download could be set up on a different Wikidot server (it's open source, so other people can run one).
Hi again,
I'm having a go at using wget to back up the web pages. Does anyone have any idea how I can get it to log into a site, or do I have to open up the site's permissions first?
Regards,
Douglas
If you have a private site and want to download its content with wget, try this:
- log in with your browser
- you should have a cookie called WIKIDOT_SESSION_ID
- type "man wget" and find a section about cookies
- try to do the following:
wget --cookies=off --header "Cookie: WIKIDOT_SESSION_ID=<value>" ...........
substitute …….. with the rest of the parameters required for your download.
Share your experience if you make it work!
michal
Michał Frąckowiak @ Wikidot Inc.
Visit my blog at michalf.me
Hi there,
My first little bit of success with wget
wget --load-cookies <cookies file …./cookies.txt> <webpage>
e.g.
wget --load-cookies .mozilla/firefox/zu7ovldc.default/cookies.txt http://armtec.wikidot.com/development
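I haven't tried it yet, but presumably adding --mirror and --convert-links would fetch the whole site rather than a single page:

wget --load-cookies .mozilla/firefox/zu7ovldc.default/cookies.txt --mirror --convert-links http://armtec.wikidot.com/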
As advised, I have logged in from my web browser. I have also found a Mozilla plugin that may help, called 'allcookies', which generates a cookies file that can be used even when the browser is not running.
Douglas