Tyler Brown
Tried running LLaMA locally on my Mac: felt like compiling a rocket

Decided to install a local LLaMA model so I could "own my own AI." Eight hours of dependency errors, brew install this, pip install that, and then it crashed with an out-of-memory error. Ended up just talking to ChatGPT like always lol. Anyone actually using these local models daily? Worth it?

#AI #TechTalk #LocalLLM
rbarr
Local LLMs: Native Windows or WSL?

Running large language models locally is easier than ever, but which route do you take on Windows? The native Ollama app is plug-and-play, while WSL gives you a full Linux toolchain for those who live in the terminal. With near-identical speeds and GPU support on both, is the extra setup for WSL worth it for non-developers, or does the native Windows version win on simplicity? Where do you stand on this local AI showdown?

#Tech #LocalLLM #Ollama
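For what it's worth, either route ends up serving the same local Ollama REST API on port 11434, so client code can't tell native Windows from WSL apart. A minimal Python sketch, assuming Ollama is already running and a model has been pulled ("llama3.2" here is just a placeholder tag):

```python
import requests

# Ollama listens on localhost:11434 by default, whether it runs as the
# native Windows app or inside WSL. Swap "llama3.2" for whatever model
# you've pulled with `ollama pull`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Why is the sky blue?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Since the API surface is identical, the native-vs-WSL question mostly comes down to how comfortable you are maintaining a Linux environment, not what your scripts look like.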