This is a subreddit preview page.

r/LocalLLM

54k members
r/LocalLLM is a subreddit with 54k members, distinguished by its large size and high activity.
A subreddit for discussing locally run large language models and related topics.

Popular Themes in r/LocalLLM

#1 Solution Requests: "Cost-effective 70b 8-bit Inference Rig" (23 posts)
#2 Advice Requests: "Struggling with Local LLMs, what's your use case?" (7 posts)
#3 Ideas: "What should I build with this?" (4 posts)
#4 Money Talk: "Finally joined the club. $900 on FB Marketplace. Where to start???" (3 posts)
#5 Self-Promotion: "How I Built an Open Source AI Tool to Find My Autoimmune Disease (After $100k and 30+ Hospital Visits) - Now Available for Anyone to Use" (3 posts)
#6 Pain & Anger: "Rtx 5090 is painful" (2 posts)
#7 News: "You can now run models on the neural engine if you have mac" (1 post)

Popular Topics in r/LocalLLM

#1 LLM: "I built and open sourced a desktop app to run LLMs locally with built-in RAG knowledge base and note-taking capabilities." (88 posts)
#2 Model: "You can now train your own Reasoning Model locally with just 5GB VRAM!" (58 posts)
#3 Local: "I built and open sourced a desktop app to run LLMs locally with built-in RAG knowledge base and note-taking capabilities." (49 posts)
#4 AI: "How I Built an Open Source AI Tool to Find My Autoimmune Disease (After $100k and 30+ Hospital Visits) - Now Available for Anyone to Use" (43 posts)
#5 GPU: "Deployed Deepseek R1 70B on 8x RTX 3080s: 60 tokens/s for just $6.4K - making AI inference accessible with consumer GPUs" (20 posts)
#6 Mac: "You can now run models on the neural engine if you have mac" (10 posts)
#7 Training: "Results & Explanation of NSA - DeepSeek Introduces Ultra-Fast Long-Context Model Training and Inference" (9 posts)
#8 Deepseek: "Deepseek might not be as disruptive as claimed, firm reportedly has 50,000 Nvidia GPUs and spent $1.6 billion on buildouts" (6 posts)
#9 Coding: "LLM for Coding Swift/Python" (6 posts)
#10 Hardware: "Best (scalable) Hardware to run a ~40GB model?" (6 posts)

Member Growth in r/LocalLLM

Yearly: +49k members (+1,120.9%)

Similar Subreddits to r/LocalLLM

r/agi: 59k members, +46.3% / yr
r/ArtificialInteligence: 1.4M members, +193.3% / yr
r/ChatGPT: 9.7M members, +96.0% / yr
r/deeplearning: 188k members, +24.0% / yr
r/learnmachinelearning: 499k members, +26.8% / yr
r/LLMDevs: 69k members, +1,010.5% / yr
r/LocalLLaMA: 420k members, +200.8% / yr
r/MLQuestions: 70k members, +45.8% / yr
r/ollama: 59k members, +1,018.9% / yr
r/singularity: 3.7M members, +70.9% / yr