This repository is intended as a minimal, hackable, and readable example of loading LLaMA (arXiv) models and running inference using only the CPU. No video card is required, but you will need 64 GB (better 128 GB) of RAM and a modern processor. Make sure you have enough swap space (128 GB should be ok :).
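
To give a rough idea of the CPU-only approach, here is a minimal sketch of loading a LLaMA checkpoint into host RAM with PyTorch. This assumes the weights are standard PyTorch checkpoint files (the `consolidated.00.pth` name follows the official LLaMA release; adjust the path to your own weights) and is not necessarily this repository's exact loading code:

```python
import torch

# Load the checkpoint directly into host RAM; map_location="cpu"
# ensures no CUDA device is needed.
checkpoint = torch.load("consolidated.00.pth", map_location="cpu")

# Peek at a few tensors. In fp16, the 7B model's weights alone occupy
# roughly 13 GB of RAM, which is why 64-128 GB (plus swap) is
# recommended for the larger models.
for name, tensor in list(checkpoint.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```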