Reproducibly verify assumptions about your network: DNS, available hosts, open ports, TLS configuration; nmap, testssl, and dig/kdig in an Ansible-shaped trench coat.
Rysiek calls it a poor being's personal SHODAN.
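Not the tool itself, just a minimal stand-alone sketch of the kind of check such a playbook automates ("is this port open, and what does the TLS setup look like?"). The host and port are placeholders for illustration.

```python
# A hedged sketch of one check this kind of tool automates: open a TCP
# connection, complete a TLS handshake, and report basic facts about it.
import socket
import ssl

HOST, PORT = "example.com", 443  # placeholder target

def check_tls(host: str, port: int, timeout: float = 5.0) -> dict:
    """Connect, handshake, and return protocol, cipher, and cert expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "tls_version": tls.version(),
                "cipher": tls.cipher()[0],
                "not_after": cert.get("notAfter"),
            }

if __name__ == "__main__":
    print(check_tls(HOST, PORT))
```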
pup is a command line tool for processing HTML. It reads from stdin, prints to stdout, and allows the user to filter parts of the page using CSS selectors.
Inspired by jq, pup aims to be a fast and flexible way of exploring HTML from the terminal.
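pup is a shell tool, but it slots neatly into a Python pipeline via subprocess. The sketch below assumes the pup binary is on your PATH; the selector syntax (`a attr{href}`) is pup's own.

```python
# Pipe fetched HTML through pup to pull out every link href, the same way
# you would with: curl https://example.com | pup 'a attr{href}'
import subprocess
import urllib.request

html = urllib.request.urlopen("https://example.com").read()

result = subprocess.run(
    ["pup", "a attr{href}"],   # CSS selector plus pup's attr{} display function
    input=html,
    capture_output=True,
    check=True,
)
print(result.stdout.decode())
```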
Spektrum is a spectrum analyzer software for use with rtl-sdr.
The biggest advantage is that it can do sweeps across a large frequency span.
The user interface is written in Processing.
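Not Spektrum's code (that's rtl-sdr plus a Processing UI), just a numpy sketch of the sweep idea: tune to a series of center frequencies, take a power spectrum at each, and stitch the pieces into one wide span. The IQ samples here are synthetic noise standing in for SDR captures.

```python
import numpy as np

SAMPLE_RATE = 2.4e6                            # Hz per capture window (typical rtl-sdr rate)
CENTERS = np.arange(88e6, 108e6, SAMPLE_RATE)  # hypothetical FM-band sweep

def power_spectrum(iq: np.ndarray) -> np.ndarray:
    """Power in dB of one capture, DC bin in the middle."""
    spec = np.fft.fftshift(np.fft.fft(iq))
    return 20 * np.log10(np.abs(spec) + 1e-12)

sweep = []
for center in CENTERS:
    iq = np.random.randn(4096) + 1j * np.random.randn(4096)  # fake capture at this tuning
    sweep.append(power_spectrum(iq))

wide_span = np.concatenate(sweep)  # one long spectrum covering the whole sweep
print(wide_span.shape)
```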
A webapp for gathering data on stocks you might want to purchase. Builds a history of performance to analyze. Supports research code for arbitrary queries. Seems to require MongoDB for its back-end.
Docker webshit, but can be run outside of that context.
Please note that multiple researchers published and compiled this work. This is a list of their research in the 3G/4G/5G cellular security space, intended to consolidate the community's knowledge. Thank you. I plan on frequently updating this "Awesome Cellular Hacking" curated list with the most up-to-date exploits, blogs, research, and papers.
CygnusRFI is an easy-to-use open-source Radio Frequency Interference (RFI) analysis tool based on Python and GNU Radio Companion (GRC), and it is conveniently applicable to any ground station/radio telescope working with a GRC-supported software-defined radio (SDR). In addition to data acquisition, CygnusRFI also carries out automated analysis of the recorded data, producing a series of averaged spectra covering a wide range of frequencies of interest. CygnusRFI is built for ground station operators, radio astronomers, amateur radio operators and anyone who wishes to get an idea of how "radio-quiet" their environment is, using inexpensive instruments like SDRs.
The CLI tool is used to set up scanning runs. Data is graphed as output.
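Not CygnusRFI's own flowgraph (that lives in GNU Radio Companion), just a numpy sketch of its core analysis step: average many short power spectra so that steady RFI carriers stand out above the noise. The data below is synthetic.

```python
import numpy as np

N_FFT = 1024
N_AVG = 200

def averaged_spectrum(captures: np.ndarray) -> np.ndarray:
    """Mean power spectrum (dB) over a stack of IQ captures."""
    spectra = np.abs(np.fft.fftshift(np.fft.fft(captures, axis=1), axes=1)) ** 2
    return 10 * np.log10(spectra.mean(axis=0) + 1e-12)

# Fake captures: noise plus one persistent narrowband interferer.
rng = np.random.default_rng(0)
captures = rng.normal(size=(N_AVG, N_FFT)) + 1j * rng.normal(size=(N_AVG, N_FFT))
captures += 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(N_FFT))

print(averaged_spectrum(captures).max())
```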
Approximate Nearest Neighbors (Annoy) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. It has the ability to use static files as indexes. In particular, this means you can share an index across processes. Annoy also decouples creating indexes from loading them, so you can pass around indexes as files and map them into memory quickly. Every user/item can be represented as a vector in f-dimensional space. This library helps us search for similar users/items. We have many millions of tracks in a high-dimensional space, so memory usage is a prime concern.
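A minimal example using Annoy's documented Python API: build an index of random vectors, save it to disk, then mmap it back and query it.

```python
from annoy import AnnoyIndex
import random

f = 40  # dimensionality of each item vector

index = AnnoyIndex(f, "angular")
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])

index.build(10)            # 10 trees; more trees = better recall, bigger index
index.save("items.ann")    # static file that other processes can mmap

# In another process you would just load the file -- building and loading
# are decoupled, which is the point made above.
lookup = AnnoyIndex(f, "angular")
lookup.load("items.ann")                # mmaps the file, so loading is fast
print(lookup.get_nns_by_item(0, 10))    # the 10 nearest neighbours of item 0
```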
Devoted to media archaeology, that is, historical research into forgotten, obsolete, neglected or otherwise dead media technologies. Depending on our understanding of “media” — one of the questions we’ll discuss — these might include forms as diverse as typewriters, phonographs, Polaroid photography, prison tattoo codes and the Victorian language of floral bouquets, outmoded video game platforms, computing systems, and musical instruments, smoke signals, scent organs, shorthand notation, and rocket mail delivery. Our premise is that understanding these things can help us gain a better sense of the development, meaning and legacy of media technologies, now and in the future; our goal is to introduce students to the skills and resources necessary for producing rigorous research on such obsolete and obscure media. The course will include an exposure to scholarship in media archaeology; an intensive introduction to research methods; finding and exploring word, image, and sound archives; and the restoration of media artifacts to their deep social, cultural and personal context. The course stems from the premise that media archaeology is best undertaken, like any archaeological project, collaboratively: we will follow a hands-on research studio model commonly used in disciplines such as architecture or design.
Fully automated decryption tool using natural language processing & artificial intelligence, along with some common sense. Input encrypted text, (hopefully) get the decrypted text back. You don't know what the encryption is; you just know the text is possibly encrypted. Ciphey will figure it out for you. Ciphey can solve most things in 3 seconds or less.
Docs: https://docs.ciphey.online/en/latest
Ciphey can even be imported as a module in your own Python code!
It's basically the cryptographer's workbench I was going to write while I was in Pittsburgh.
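A hedged sketch of driving Ciphey from Python by shelling out to its CLI. It assumes `ciphey` is installed on your PATH and that the `-t`/`--text` flag from the project's README is still how you pass ciphertext in; check `ciphey --help` for your installed version. The module-level API exists too, but its exact import path isn't shown here.

```python
import subprocess

ciphertext = "aGVsbG8gd29ybGQ="  # base64 for "hello world", as a toy input

# -t is assumed from the README; adjust if your Ciphey version differs.
result = subprocess.run(
    ["ciphey", "-t", ciphertext],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```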
Chartbrew is an open-source web application that can connect directly to databases and APIs and use the data to create beautiful charts. It features a chart builder, editable dashboards, embeddable charts, a query and request editor, and team capabilities. Can pull data from MySQL, Postgres, MongoDB, and any API that returns JSON documents. Interactive graph and chart builder.
Written in node.js. Requires MySQL on the back-end.
If you use the hosted service (https://chartbrew.com/), there's a free tier.
Binary Viewer is a tool for binary file discovery using visualizations that may highlight patterns.
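Not Binary Viewer itself, just a small matplotlib sketch of the same idea: plot consecutive byte pairs of a file as a 2D histogram, and structure (text, padding, compressed vs. uncompressed regions) shows up as visible patterns. Point it at any file you like.

```python
import sys
import numpy as np
import matplotlib.pyplot as plt

path = sys.argv[1] if len(sys.argv) > 1 else __file__
data = np.fromfile(path, dtype=np.uint8)

x, y = data[:-1], data[1:]              # consecutive byte pairs
counts, _, _ = np.histogram2d(x, y, bins=256, range=[[0, 256], [0, 256]])

plt.imshow(np.log1p(counts), cmap="magma", origin="lower")
plt.xlabel("byte[i]")
plt.ylabel("byte[i+1]")
plt.title(path)
plt.show()
```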
DroiD64 is a graphical file manager for the contents of D64, D67, D71, D80, D81, D82, D88, T64 and LNX files. Examine your disk images in a fine-grained way to see what's in there.
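Not DroiD64 (that's a Java GUI), but a read-only Python sketch that lists the directory of a plain 35-track .d64 image, based on the commonly documented layout: the directory chain starts at track 18, sector 1, and each 256-byte sector holds eight 32-byte entries with the PETSCII filename at offset 5. Treat the offsets as an assumption against that documented format; the image path is hypothetical.

```python
SECTORS_PER_TRACK = [0] + [21] * 17 + [19] * 7 + [18] * 6 + [17] * 5

def sector_offset(track: int, sector: int) -> int:
    """Byte offset of a track/sector pair inside the image."""
    return (sum(SECTORS_PER_TRACK[1:track]) + sector) * 256

def list_d64(path: str):
    with open(path, "rb") as fh:
        image = fh.read()
    track, sector = 18, 1          # start of the directory chain
    while track:
        block = image[sector_offset(track, sector):][:256]
        for i in range(0, 256, 32):
            entry = block[i:i + 32]
            if entry[2] == 0:      # scratched / empty slot
                continue
            name = entry[5:21].rstrip(b"\xa0").decode("ascii", "replace")
            blocks = entry[30] + 256 * entry[31]
            print(f"{blocks:4d}  {name}")
        track, sector = block[0], block[1]   # follow the chain; track 0 ends it

list_d64("disk.d64")  # hypothetical image path
```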
A pure Python implementation of the Fast Fourier Transform (FFT) for CircuitPython. Ideal for use with the PyBadge, but should work on any CircuitPython-enabled platform. Requires an analog signal input of some kind.
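Not the library's code, just a textbook radix-2 Cooley-Tukey FFT in plain Python, to show the computation a "pure Python FFT" performs on a power-of-two block of samples (e.g. readings from an analog pin).

```python
import cmath
import math

def fft(samples):
    """Recursive radix-2 FFT; len(samples) must be a power of two."""
    n = len(samples)
    if n == 1:
        return list(samples)
    even = fft(samples[0::2])
    odd = fft(samples[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# 64-sample sine wave at 4 cycles per block: the energy lands in bin 4
# (and its mirror, bin 60).
signal = [math.sin(2 * math.pi * 4 * i / 64) for i in range(64)]
spectrum = fft(signal)
print(max(range(64), key=lambda k: abs(spectrum[k])))
```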
The Internet Weather Map™ (IWM) is a free service that maps latency on the Internet. As an Open Community project, it uses data from volunteers all over the world to feed latency measurements back into a central database. That data is then aggregated and displayed in table and map formats, letting you see how fast sections of the Internet are running. While it is practically impossible to map every segment of the entire Internet, the IWM traces tens of thousands of segments to give you an informed idea of its overall latency.
The Latency Map is the heart of the service: it displays delays on the Internet both on a map and in a table. The table lists the slowest segments, while the map normally only displays delays (latency over 300 ms). The data on this tab refreshes every 60 seconds, so there is no need to reload the page manually.
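The service aggregates latency reports from volunteers; this is a rough sketch of the kind of number a volunteer node could feed back, measured here as TCP connect time to a couple of hosts. The target list is made up for illustration and has nothing to do with IWM's actual probes.

```python
import socket
import time

TARGETS = [("example.com", 443), ("example.org", 443)]  # placeholder hosts

def connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time a single TCP connect as a crude latency estimate."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

for host, port in TARGETS:
    print(f"{host}: {connect_latency_ms(host, port):.1f} ms")
```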
The Tools section lets you get information about your domain name, including an MX record lookup and some additional diagnostics.
HoloViews is an open-source Python library designed to make data analysis and visualization seamless and simple. With HoloViews, you can usually express what you want to do in very few lines of code, letting you focus on what you are trying to explore and convey, not on the process of plotting. Designed primarily for Jupyter notebook-style data exploration, but a notebook doesn't seem to be a requirement.
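A few lines of HoloViews in the spirit described above: declare what the data is and let the library handle the plotting. This uses the Bokeh backend, and hv.save writes a standalone HTML file, so a notebook isn't required.

```python
import numpy as np
import holoviews as hv

hv.extension("bokeh")

xs = np.linspace(0, 10, 200)
curve = hv.Curve((xs, np.sin(xs)), "x", "sin(x)")
scatter = hv.Scatter((xs[::10], np.sin(xs[::10])), "x", "sin(x)")

# Overlay the two elements with * and export to a standalone HTML file.
hv.save(curve * scatter, "sine.html")
```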
A site where people study information about themselves - genomics, text, social networks - and share their techniques for doing so. I'm not entirely sure I'd feel safe uploading anything here, but at the very least some techniques could be learned from it.
A Python framework for doing graphical data analysis without needing to know JavaScript. Visualizations update in real time as you interact with them. You'll have to write some code to set it up, though it would appear that this is mostly to get your data into the application in the first place.
Web mining module for Python, with tools for scraping, natural language processing, machine learning, network analysis and visualization.
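The module isn't named above, so rather than guess at its API, here is a stdlib-plus-requests stand-in for the first step of that pipeline: fetch a page, strip the markup, and count terms as a crude starting point for the NLP and network-analysis stages such a toolkit layers on top.

```python
import re
from collections import Counter
from html.parser import HTMLParser

import requests

class TextExtractor(HTMLParser):
    """Collects the text nodes of a page, discarding the tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

html = requests.get("https://example.com", timeout=10).text
parser = TextExtractor()
parser.feed(html)

words = re.findall(r"[a-z']+", " ".join(parser.chunks).lower())
print(Counter(words).most_common(10))
```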
Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.
Comes with Python embedded to interact with the data.
Versions for multiple OSes are available. Might be worth grabbing the .appimage to save time.
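A hedged sketch of scripting Tulip from Python. The calls below (tlp.newGraph, addNode, addEdge, tlp.saveGraph) are how I recall the `tulip` bindings working; treat the exact names as an assumption and check the tlp module documentation shipped with your version.

```python
from tulip import tlp  # assumed import path for Tulip's Python bindings

# Build a tiny relational graph in code.
graph = tlp.newGraph()
alice = graph.addNode()
bob = graph.addNode()
graph.addEdge(alice, bob)

# Save so the desktop application can open, lay out, and explore it.
tlp.saveGraph(graph, "relations.tlp")
```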