The "Awesome GPTs (Agents) Repo" represents an initial effort to compile a comprehensive list of GPT agents focused on cybersecurity (offensive and defensive), created by the community. Please note, this repository is a community-driven project and may not list all existing GPT agents in cybersecurity. Contributions are welcome – feel free to add your own creations!
Disclaimer: Users should exercise caution and evaluate these agents before use. Please also note that some of these GPTs are still in an experimental phase.
EvaDB is a database system for developing AI apps. We aim to simplify the development and deployment of AI apps that operate on unstructured data (text documents, videos, PDFs, podcasts, etc.) and structured data (tables, vector indexes).
The high-level Python and SQL APIs allow beginners to use EvaDB in a few lines of code. Advanced users can write user-defined functions that wrap any AI model or Python library. EvaDB is fully implemented in Python and licensed under an Apache license.
Ideal for plugging into existing AI APIs; a minimal usage sketch follows.
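As a rough illustration of the high-level Python API, here is a minimal sketch. The table name, column schema, and CSV file are hypothetical, and the SQL dialect follows EvaDB's documented style but should be treated as an assumption about the version you have installed:

```python
import evadb

# Connect to an embedded EvaDB instance and grab a cursor (Python API).
cursor = evadb.connect().cursor()

# Create a table and load structured data from a CSV file into it.
# The table name, schema, and file are placeholders for this sketch.
cursor.query(
    "CREATE TABLE IF NOT EXISTS reviews (id INTEGER, review TEXT(1000));"
).df()
cursor.query("LOAD CSV 'reviews.csv' INTO reviews;").df()

# Run SQL over the table; .df() executes the query and returns
# the result as a pandas DataFrame.
result = cursor.query("SELECT id, review FROM reviews WHERE id < 10;").df()
print(result)
```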
Welcome to our list of AI agents. The list is structured in two parts: open-source projects, and closed-source projects and companies. It is compiled to the best of our knowledge, though it is certainly not comprehensive.
The simplest, fastest repository for training/finetuning medium-sized GPTs. It is a rewrite of minGPT, which I think became too complicated and which I am now hesitant to touch. Still under active development, currently working to reproduce GPT-2 on the OpenWebText dataset. The code itself is plain and readable by design: train.py is a ~300-line boilerplate training loop and model.py is a ~300-line GPT model definition, which can optionally load the GPT-2 weights from OpenAI (a sketch of this is shown below).
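To illustrate that last point, here is a hedged sketch of loading the pretrained GPT-2 weights through nanoGPT's model.py. `GPT.from_pretrained` and `generate` exist in the repository's model definition, but treat the exact signatures (and the Hugging Face transformers dependency behind the weight download) as assumptions about the version you check out:

```python
import torch
from model import GPT  # nanoGPT's ~300-line model definition

# Pull OpenAI's GPT-2 weights (the 124M-parameter 'gpt2' checkpoint) into
# the nanoGPT model class; larger checkpoints like 'gpt2-medium' should
# follow the same naming scheme.
model = GPT.from_pretrained('gpt2')
model.eval()

# Sample a short continuation from a single start-token index.
idx = torch.zeros((1, 1), dtype=torch.long)
out = model.generate(idx, max_new_tokens=20)
print(out[0].tolist())
```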
A PyTorch re-implementation of GPT, both training and inference. minGPT tries to be small, clean, interpretable and educational, as most of the currently available GPT model implementations can be a bit sprawling. GPT is not a complicated model and this implementation is appropriately about 300 lines of code. All that's going on is that a sequence of indices feeds into a Transformer, and a probability distribution over the next index in the sequence comes out; the sketch below makes this concrete. The majority of the complexity is just being clever with batching (both across examples and over sequence length) for efficiency. Includes some sample code for training a blank copy of the model.
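To make the indices-in, distribution-out picture concrete, here is a minimal sketch against minGPT's public classes; the config fields and the 'gpt-nano' model type mirror the repository's current API, but treat them as assumptions:

```python
import torch
from mingpt.model import GPT

# Configure a tiny model; vocab_size and block_size are toy values.
config = GPT.get_default_config()
config.model_type = 'gpt-nano'
config.vocab_size = 100   # size of the index vocabulary
config.block_size = 16    # maximum sequence length
model = GPT(config)

# A batch of token-index sequences goes in...
idx = torch.randint(0, config.vocab_size, (2, 16))
logits, _ = model(idx)  # loss is None when no targets are given

# ...and a probability distribution over the next index comes out.
probs = torch.softmax(logits[:, -1, :], dim=-1)
print(probs.shape)  # (2, 100): one distribution per input sequence
```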
An excellent step-by-step article on how to partition and format hard drives larger than 2 TB on Linux machines. It involves using a GPT partition table instead of a legacy DOS partition table, together with GNU Parted. Note that you'll need a fairly recent version of parted (I had to install v3.0.0 on my boxen); a rough sketch of the core steps is given below.
Newer versions of fdisk will do this for you.
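For reference, the parted-based steps from the article boil down to roughly the following. This sketch shells out to the parted and mkfs.ext4 command-line tools; the device path and the partition-name suffix ("1" for /dev/sdX, "p1" for NVMe devices) are placeholders you must adapt:

```python
import subprocess

DEVICE = "/dev/sdX"  # placeholder: replace with your actual >2TB disk

def run(cmd):
    """Echo and execute a command, aborting on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Write a GPT partition table (this destroys any existing DOS/MBR label).
run(["parted", "-s", DEVICE, "mklabel", "gpt"])

# Create one partition spanning the whole disk with optimal alignment.
run(["parted", "-s", "-a", "optimal", DEVICE,
     "mkpart", "primary", "ext4", "1MiB", "100%"])

# Format the new first partition (adjust the suffix for NVMe devices).
run(["mkfs.ext4", DEVICE + "1"])
```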