This Go application creates a mirror of the Anna's Archive torrent page, providing an up-to-date list of torrents and their statistics. It includes features for tracking seeder history, generating custom torrent lists, and visualizing seeder statistics.
The application scrapes the full torrent list from Anna's Archive every 24 hours and stores the torrent information in a SQLite database. Every 30 minutes, it updates the seeder, leecher, and completion statistics for each torrent. A web interface shows the torrent list, individual torrent statistics, and seeder history, and users can generate custom torrent lists based on size and type preferences.
webidx is a client-side search engine for static websites. It works by using a simple Perl script (webidx.pl) to generate an SQLite database containing an index of static HTML files. The SQLite database is then published alongside the static content.
The search functionality is implemented in webidx.js which uses sql.js to provide an interface to the SQLite file.
Seems like this should be pretty easy to plug into a Pelican workflow. I might want to write my own database generator in Python, though.
Maybe there's a way to enable vector searching in SQLite?
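As a starting point for that Python generator, here's a minimal sketch. It only indexes page titles, not full text the way webidx does, and the database name and schema are my own guesses, not webidx's actual format:

import sqlite3
from pathlib import Path
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the contents of the <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

db = sqlite3.connect("index.db")
db.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, title TEXT)")
for page in Path("output").rglob("*.html"):  # Pelican's default output directory
    parser = TitleParser()
    parser.feed(page.read_text(errors="ignore"))
    db.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)",
               (str(page), parser.title.strip()))
db.commit()
db.close()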
A plugin to take your published Pelican posts and put them into a SQLite database.
Once the plugin has been installed, you only need to run make html to create a SQLite database called pelican.db in the root of your Pelican site. There are partial instructions for using this to implement search for a site built with Pelican.
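The blurb doesn't document pelican.db's schema, so a search prototype has to start by looking at what's there. A sketch; the posts table and its columns in step 2 are assumptions, so adjust them to whatever step 1 prints:

import sqlite3

db = sqlite3.connect("pelican.db")
# Step 1: see what the plugin actually created.
print(db.execute("SELECT name FROM sqlite_master WHERE type = 'table'").fetchall())
# Step 2 (assuming a table of posts with title/content columns; names are guesses):
# for row in db.execute(
#     "SELECT title FROM posts WHERE content LIKE ?", ("%sqlite%",)
# ):
#     print(row[0])
db.close()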
More software for digging all of the information out of the Recall database on a Windows 11 machine. This one is written in native PowerShell.
This very simple tool extracts and displays data from the Recall feature in Windows 11, providing an easy way to access information about your PC's activity snapshots. To use it, you need one of the new Copilot+ PCs running on ARM. Windows Recall stores everything locally in an unencrypted SQLite database, and the screenshots are simply saved in a folder on your PC. Here's what to look for:
Filename: ukg.db
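Since it's just an unencrypted SQLite file, you don't strictly need the tool to take a first look. A few lines of Python will inventory it, opened read-only so the artifact isn't disturbed; I'm not assuming any particular table names:

import sqlite3

# Open the Recall database read-only so nothing gets modified.
db = sqlite3.connect("file:ukg.db?mode=ro", uri=True)
for (table,) in db.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    (count,) = db.execute(f'SELECT count(*) FROM "{table}"').fetchone()
    print(f"{table}: {count} rows")
db.close()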
In today's workplace, essential information is often scattered across the cloud in the form of links. We understand the frustration of endlessly searching through emails, messages, and websites just to find the right link. Links are notorious for being unwieldy, complex, and easily lost in the shuffle. Remembering and sharing them can be a challenge.
That's why we developed Slash, a solution that transforms these links into easily accessible, discoverable, and shareable shortcuts (e.g., s/shortcut). Say goodbye to link chaos and welcome the organizational ease of Slash into your daily online workflow.
Customizable short link generator with visibility restrictions (logged in or not? team or not?). Has browser extensions. Looks like it uses SQLite as its back end.
Take apart the Dockerfile to figure out how to build the webshit. At least the compilation process is straightforward:
user@host: CGO_ENABLED=0 go build -o slash ./bin/slash/main.go
Take control of your honks and join the federation. An ActivityPub server with minimal setup and support costs. Spend more time using the software and less time operating it.
No attention mining. No likes, no faves, no polls, no stars, no claps, no counts.
Purple color scheme. Custom emus. Memes too. Avatars automatically assigned by the NSA.
The button to submit a new honk says "it's gonna be honked".
The honk mission is to work well if it's what you want. This does not imply the goal is to be what you want.
Written in Go, uses SQLite. Can't say I'm too wild about the function and variable names, but it was designed to be silly.
Yamanote is a bookmarklet-based bookmarking web app. It's a web application, so you need to run it on a computer, or get a friend to run it for you. When you decide you want to bookmark a page on the web, you click a Yamanote bookmarklet in your browser's bookmarks bar (works great on desktop, and in Safari on iOS) to tell the Yamanote server about it. Any text you've selected will be added as a “comment” on the bookmark. This is fun because, as you read, you can select interesting snippets and keep clicking the bookmarklet to build a personalized list of excerpts. You can add further commentary in Yamanote, either by editing one of the excerpts made from the bookmarklet or by adding an entirely new comment with its own timestamp.

Also, the first time you bookmark a URL, your browser will snapshot the entire webpage and send it to the Yamanote server as an archive (in technical terms, it'll serialize the DOM). This is great for (1) paywalled content you had to log in to read, and (2) Twitter, which makes it hard for things like Pinboard to archive. The server will download any images, and optionally videos, in your bookmarked sites. You can browse Yamanote's snapshot of the URL (it might look weird, because custom JavaScript is blocked in the mirror, and lots of sites these days look weird with just HTML and CSS; shocking, I know).

Nobody except you can see your bookmarks, comments, or archives.
WarcDB is an SQLite-based file format that makes web crawl data easier to share and query. It is based on the standardized Web ARChive (WARC) format used by web archivers.
A Huginn agent for querying SQLite databases. Whatever it finds is emitted as events.
This gem provides two agents for Huginn that can read from and write to SQLite 3 databases.
For detailed instructions on their usage, please see the Markdown descriptions within the agents' source (which will also be displayed in your Huginn dashboard).
Note that this gem relies on the sqlite3 gem, which itself requires the SQLite 3 development headers. If you're running Huginn on a regular server, satisfying this requirement is usually as simple as installing your distribution's SQLite development package (e.g., libsqlite3-dev on Debian and Ubuntu).
WaveDB is SQLite with an HTTP interface.
It is a ~6MB (~2MB UPX-compressed) self-contained, zero-dependency executable that bundles SQLite 3.35.5 (2021-04-19) with JSON1, RTREE, FTS5, GEOPOLY, STAT4, and SOUNDEX.
If you are already a fan of SQLite, WaveDB acts as a thin HTTP-server wrapper that lets you access your SQLite databases over a network.
WaveDB can be used as a lightweight, cross-platform, installation-free companion SQL database for Wave apps. The h2o-wave package includes non-blocking async functions to access WaveDB.
Database files managed by WaveDB are 100% interoperable with SQLite, which means you can manage them with the sqlite3 CLI, backup/restore/transfer them as usual, or use Litestream for replication.
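Since the files are plain SQLite per that interoperability note, you can open a WaveDB-managed database directly with Python's sqlite3 module; the file name here is made up:

import sqlite3

# Open the same file WaveDB serves over HTTP ("wave_app.db" is a hypothetical name).
db = sqlite3.connect("wave_app.db")
print(db.execute("PRAGMA integrity_check").fetchone())  # prints ('ok',) for a healthy db
db.close()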
A proof-of-concept inspired and enabled by Hosting SQLite Databases on GitHub Pages and the ensuing Hacker News post. The compiled single-page app supports autocomplete for titles, automatic redirecting, and other MediaWiki datasets like WikiQuote or Chinese Wikipedia. It makes no external API calls except to fetch Wikipedia's images.
Seems ideal for making offline copies of Wikipedia (or, it's implied, other MediaWiki installs) available.
Search is disabled right now.
GitHub: https://github.com/segfall/static-wiki
SQLite copies of Wikipedia: https://www.kaggle.com/segfall/markdownlike-wikipedia-dumps-in-sqlite
Has instructions for turning an XML dump of Wikipedia into a SQLite database; unfortunately, it uses Node.js.
Dogsheep is a collection of tools for personal analytics using SQLite and Datasette.
Big internet companies know a lot about us. By exporting that data back out of them we can see what they know and maybe learn something interesting about ourselves.
minidb 2 makes it easy to store Python objects in a SQLite 3 database and work with them using concise syntax. Designed for embedded use (imported as a module), not as a stand-alone server. Supports SQL queries.
A self-hosted service that pings webhooks or other URLs on a user-defined schedule. Works a little like cron: it can run jobs every X minutes or hours.
Written in PHP, uses SQLite.
If you don't want to set it up yourself: https://hookless.co/
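Not this app's code (it's PHP), but the cron-ish core fits in a few lines of Python; the URL and interval here are placeholders:

import time
import urllib.request

# Hypothetical schedule: URL -> interval in seconds.
SCHEDULE = {"https://example.com/webhook": 300}
last_run = {url: 0.0 for url in SCHEDULE}

while True:
    now = time.time()
    for url, interval in SCHEDULE.items():
        if now - last_run[url] >= interval:
            try:
                urllib.request.urlopen(url, timeout=10)  # fire the ping
            except OSError as err:  # HTTPError/URLError both subclass OSError
                print(f"{url}: {err}")
            last_run[url] = now
    time.sleep(1)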
A CLI tool to convert CSV / Excel / HTML / JSON / Jupyter Notebook / LDJSON / LTSV / Markdown / SQLite / SSV / TSV / Google-Sheets to a SQLite database file. Can also pull data from supplied URLs.
Since Yahoo killed off their Where-On-Earth (WOEID) API service, it hasn't been possible to look up the WOEID map references that certain APIs require. Thankfully, some kind soul uploaded the dataset to the Internet Archive.
The contents are five TSV (tab-separated value) files, a Readme.txt file, and a license.txt file.
If you want to read them into a SQLite database (you'll want an explicit primary key on each table), do this. Because each temp_* table doesn't exist yet, .import creates it using the TSV's header row for column names; the INSERT ... SELECT then copies the rows into the real table, which has its own INTEGER PRIMARY KEY:
user@host: sqlite3 geoplanet.sqlite
sqlite> .mode tabs
sqlite> PRAGMA foreign_keys=off;
sqlite> CREATE TABLE adjacencies (id INTEGER PRIMARY KEY, Place_WOE_ID TEXT, Place_ISO TEXT, Neighbour_WOE_ID TEXT, Neighbour_ISO TEXT);
sqlite> .import geoplanet_adjacencies_7.10.0.tsv temp_adjacencies
sqlite> INSERT INTO adjacencies(Place_WOE_ID, Place_ISO, Neighbour_WOE_ID, Neighbour_ISO) SELECT Place_WOE_ID, Place_ISO, Neighbour_WOE_ID, Neighbour_ISO from temp_adjacencies;
sqlite> DROP TABLE temp_adjacencies;
sqlite> CREATE TABLE admins (id INTEGER PRIMARY KEY, WOE_ID TEXT, iso TEXT, State TEXT, County TEXT, Local_Admin TEXT, Country TEXT, Continent TEXT);
sqlite> .import geoplanet_admins_7.10.0.tsv temp_admins
sqlite> INSERT INTO admins(WOE_ID, iso, State, County, Local_Admin, Country, Continent) SELECT WOE_ID, iso, State, County, Local_Admin, Country, Continent from temp_admins;
sqlite> DROP TABLE temp_admins;
sqlite> CREATE TABLE aliases (id INTEGER PRIMARY KEY, WOE_ID TEXT, Name TEXT, Name_Type TEXT, Language TEXT);
sqlite> .import geoplanet_aliases_7.10.0.tsv temp_aliases
sqlite> INSERT INTO aliases(WOE_ID, Name, Name_Type, Language) SELECT WOE_ID, Name, Name_Type, Language from temp_aliases;
sqlite> DROP TABLE temp_aliases;
sqlite> CREATE TABLE changes (id INTEGER PRIMARY KEY, Woe_id TEXT, Rep_id TEXT, Data_Version TEXT);
sqlite> .import geoplanet_changes_7.10.0.tsv temp_changes
sqlite> INSERT INTO changes (Woe_id, Rep_id, Data_Version) SELECT Woe_id, Rep_id, Data_Version from temp_changes;
sqlite> DROP TABLE temp_changes;
sqlite> CREATE TABLE places (id INTEGER PRIMARY KEY, WOE_ID TEXT, ISO TEXT, Name TEXT, Language TEXT, PlaceType TEXT, Parent_ID TEXT);
sqlite> .import geoplanet_places_7.10.0.tsv temp_places
sqlite> INSERT INTO places (WOE_ID, ISO, Name, Language, PlaceType, Parent_ID) SELECT WOE_ID, ISO, Name, Language, PlaceType, Parent_ID from temp_places;
sqlite> DROP TABLE temp_places;
sqlite> PRAGMA foreign_keys=on;
sqlite> .quit
I would also recommend cleaning up after yourself; VACUUM reclaims the space left behind by the dropped temp tables:
user@host: sqlite3 geoplanet.sqlite
sqlite> VACUUM;
sqlite> .quit
The GeoPlanet database is licensed by Yahoo! GeoPlanet under Creative Commons Attribution 3.0 (US):
http://creativecommons.org/licenses/by/3.0/us/
How to load JSON into a SQLite database all in one go using Python.
SQLite doesn't have a dedicated JSON column type; JSON is stored as ordinary TEXT, and the JSON1 functions let you query inside it. There still needs to be a unique key for each entry, though.
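A minimal sketch of the one-go load, assuming a recent SQLite build with the JSON1 functions compiled in; the file and column names are mine:

import json
import sqlite3

db = sqlite3.connect("data.db")
db.execute("CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, body TEXT)")

# Assume records.json holds a JSON array of objects.
with open("records.json") as f:
    records = json.load(f)

# Each object is stored as serialized JSON in a plain TEXT column.
db.executemany(
    "INSERT INTO entries (body) VALUES (?)",
    [(json.dumps(record),) for record in records],
)
db.commit()

# The JSON1 functions can then pull fields back out of the TEXT column:
for (name,) in db.execute("SELECT json_extract(body, '$.name') FROM entries"):
    print(name)

db.close()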
Convert CSV files into a SQLite database. Designed for use with Datasette. Requires Python 3.