This is an implementation of the Kiwix offline Wikipedia reader as a progressive web app that runs in modern web browsers. You can download ZIM files through the app for later use or open ones you already have; any archive in the OpenZIM format will work. When installed, it shows up like any other application on the desktop.
Two modes: jQuery (for older browsers) and ServiceWorker (for newer browsers; also supports archives with dynamic content)
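As a rough sketch of what ServiceWorker mode implies (not Kiwix's actual code; the worker script path is hypothetical), it boils down to feature-testing for the API and registering a worker:

```typescript
// Minimal sketch, assuming a hypothetical worker script at /service-worker.js.
// Browsers without the API would fall through to something like jQuery mode.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/service-worker.js")
    .then((reg) => console.log("ServiceWorker active, scope:", reg.scope))
    .catch((err) => console.error("registration failed:", err));
}
```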
Download .zim files for use with Kiwix, either directly or via BitTorrent.
A proof of concept inspired and enabled by "Hosting SQLite Databases on GitHub Pages" and the ensuing Hacker News post. The compiled single-page app supports autocomplete for titles, automatic redirects, and other MediaWiki datasets such as WikiQuote or the Chinese Wikipedia. It makes no external API calls except to fetch Wikipedia's images.
Seems ideal for making offline copies of Wikipedia (or, it's implied, other MediaWiki installs) available.
Search is disabled right now.
GitHub: https://github.com/segfall/static-wiki
SQLite copies of Wikipedia: https://www.kaggle.com/segfall/markdownlike-wikipedia-dumps-in-sqlite
Has instructions for turning an XML dump of Wikipedia into a SQLite database; unfortunately, it uses Node.js.
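The general shape of such a conversion (a sketch of the idea, not the repo's actual script; the sax and better-sqlite3 packages, the pages table schema, and the dump filename are my assumptions) looks something like this:

```typescript
import { createReadStream } from "node:fs";
import sax from "sax";
import Database from "better-sqlite3";

// Stream-parse the dump so the whole XML file never has to fit in memory.
const db = new Database("wiki.sqlite");
db.exec("CREATE TABLE IF NOT EXISTS pages (title TEXT, body TEXT)");
const insert = db.prepare("INSERT INTO pages (title, body) VALUES (?, ?)");

const parser = sax.createStream(true);
let tag = "", title = "", body = "";
parser.on("opentag", (node) => { tag = node.name; });
parser.on("text", (text) => {
  if (tag === "title") title += text;   // <title> inside <page>
  if (tag === "text") body += text;     // <text> inside <revision>
});
parser.on("closetag", (name) => {
  if (name === "page") {                // one row per completed <page>
    insert.run(title.trim(), body.trim());
    title = body = "";
  }
  tag = "";
});
createReadStream("enwiki-latest-pages-articles.xml").pipe(parser);
```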
ISO time format.
YYYY-MM-DDTHH:MM:SSZ
(4-digit year)-(2-digit month)-(2-digit day)T(2-digit hour):(2-digit minute):(2-digit second)Z
T - marks where the wall-clock time starts
Z - Zulu time, i.e., UTC
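In JavaScript/TypeScript, Date.prototype.toISOString() emits exactly this shape (always in UTC), except that it also includes milliseconds:

```typescript
const now = new Date();
console.log(now.toISOString());
// e.g. "2021-03-14T09:26:53.589Z" - note the extra ".589" milliseconds

// Strip the milliseconds to match YYYY-MM-DDTHH:MM:SSZ exactly:
console.log(now.toISOString().replace(/\.\d{3}Z$/, "Z"));
// e.g. "2021-03-14T09:26:53Z"
```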
A summary of how to set up a full Wikipedia.org mirror using several different approaches: Nginx as a caching proxy in front of wikipedia.org, Kiwix serving a downloaded backup, a full MediaWiki install loaded with a database dump, or a mirror of Wikipedia made with XOWA.
Might be a fun thing to write a version of.
A list of common algorithms in computer science, compiled at Wikipedia.
Kiwix is a utility that lets you archive web pages so they can be read offline, or copied and distributed when there is no connectivity. While it was designed for Wikipedia, it will work with just about any website you throw at it. Cross-platform: runs on Windows, Linux, and other OSes. There is even a portable Windows version that doesn't require installation.
Where and how to download copies of Wikipedia (or any of the Wikimedia projects, for that matter). You can use them to set up mirrors, keep local copies, archive them, give them to people, run your own instance, or use them for research...
XOWA does one thing and does it well: it lets you make a local backup of Wikipedia (probably any MediaWiki, really) to carry around with you or copy to removable media (like a USB key or DVD-ROMs) so you can give out copies and read it offline. Runs on Windows, Linux, and macOS. Portable - can be run from removable media. Even includes a search engine. Written in Java.
The Wikipedia page about how XMLHttpRequest works. This seems like a pretty straightforward description of the standard.
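For reference, the classic usage pattern is short (the URL here is a placeholder):

```typescript
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.org/api/data.json"); // placeholder URL
xhr.responseType = "json";
xhr.onload = () => {
  // readyState is DONE here; check the HTTP status before trusting the body
  if (xhr.status === 200) console.log(xhr.response);
};
xhr.onerror = () => console.error("network-level failure");
xhr.send();
```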
The more-or-less official file format for offline reading of Wikipedia. The full byte-by-byte description of ZIM archives can be found here. There is also a zimlib-git AUR package for Arch Linux.
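As I read the spec, every ZIM file starts with a fixed little-endian uint32 magic number (72173914, i.e. the bytes "ZIM" followed by 0x04), which makes for a quick sanity check; a sketch in Node-flavored TypeScript (the filename is a placeholder):

```typescript
import { closeSync, openSync, readSync } from "node:fs";

// Sketch: read the first four bytes and compare against the ZIM magic number.
function looksLikeZim(path: string): boolean {
  const fd = openSync(path, "r");
  const header = Buffer.alloc(4);
  readSync(fd, header, 0, 4, 0);
  closeSync(fd);
  return header.readUInt32LE(0) === 72173914; // 0x044D495A per the ZIM spec
}

console.log(looksLikeZim("wikipedia_en_all.zim")); // placeholder filename
```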
Wikipedia page for a super simple and tiny API for automatically discovering and publishing data in as easy-to-understand a format as possible, so that even the dumbest and simplest of applications can use it. Links to the spec.
Tiny web server written in Golang that indexes and serves ZIM files. Lets you put a copy of Wikipedia on a relatively small device and serve it in such a way that just about any device on the Net can browse it. Includes its own indexer for search.
The Wikipedia page describing all of the known more-or-less standard HTTP status codes. Useful for people developing web applications.
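In practice you mostly branch on the class of the code (2xx success, 3xx redirection, 4xx client error, 5xx server error); with fetch() that looks like the following sketch (the URL is a placeholder):

```typescript
const response = await fetch("https://example.org/resource");
if (response.ok) {
  // response.ok is true for any 2xx status
  console.log(await response.text());
} else if (response.status === 404) {
  console.warn("resource not found");
} else {
  console.error(`request failed: ${response.status} ${response.statusText}`);
}
```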