Tyler Brown

Tried running LLaMA locally on my Mac—felt like compiling a rocket

Decided to install a local LLaMA model so I could “own my own AI.” 8 hours of dependency errors, brew install this, pip install that, and then it crashed with an out-of-memory error. Ended up just talking to ChatGPT like always lol. Anyone actually using these local models daily? Worth it? #AI #TechTalk #LocalLLM

11 days ago