IngeniousRocks (They/She)

Don’t DM me without permission please

  • 2 Posts
  • 158 Comments
Joined 11 months ago
Cake day: December 7th, 2024

  • Stop listening to respond and this won’t happen. Actively listen and you’ll always have something to add at the end, when it’s your turn to speak.

    Edit: Look y’all, I’ve been dealing with my ADHD unmedicated for over 20 years and I have to say sometimes you need to listen to the neurotypicals. Just because conversation skills like “active listening” don’t come naturally to you doesn’t mean you should just discount the advice as neurotypical nonsense. Use those beautiful powers of observation and pattern recognition intentionally: listen, parse, connect back to context, respond. Believe it or not, you don’t need to have a response ready immediately, and most folks appreciate a few seconds of silence, as it shows them you care enough to respond genuinely rather than just speaking.

  • If you haven’t yet, question what being a man means to you, and what being a good person means to you.

    You will, throughout your life, find those definitions challenged. How you respond to the first will help you develop a stronger sense of how you relate to your gender and how it affects the way you interact with yourself and the world. How you respond to the second determines your character, which is how the world will see you as a person, and, with sufficient introspection, how you will see yourself.

    Keep growing. Keep learning.


  • When/If you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inferencing. It’s what I use, and it gets the job done, but I often find context limits too small to be usable with larger models.

    If you wanna go team red, Vulkan should still work for inferencing, and you have access to options with significantly more VRAM, letting you run larger models more effectively. I’m not sure about speed, though; I haven’t personally used AMD’s GPUs since around 2015.
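If you’re sizing a card for this, a rough back-of-envelope sketch helps: quantized weights take roughly (parameter count × bits per weight ÷ 8) bytes, plus some headroom for the KV cache and activations. The flat overhead constant below is my own assumption for illustration; real KV-cache usage grows with context length and model architecture.

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: int,
                     overhead_gb: float = 1.5) -> float:
    """Very rough VRAM estimate for local inferencing.

    params_billions: model size, e.g. 7 for a 7B model
    bits_per_weight: quantization level, e.g. 4 for Q4, 8 for Q8
    overhead_gb: assumed flat headroom for KV cache/activations (a guess;
                 grows with context length in practice)
    """
    weight_gb = params_billions * bits_per_weight / 8  # GB for the weights alone
    return weight_gb + overhead_gb

# A 7B model at 4-bit: ~3.5 GB of weights + headroom, so it squeezes
# onto an 8 GB card like the 3070 -- but long contexts eat the rest fast.
print(estimate_vram_gb(7, 4))   # 5.0
print(estimate_vram_gb(13, 8))  # 14.5 -- out of reach for 8 GB cards
```

This is why the 3070’s 8 GB feels tight: the weights fit, but there’s little left over for context.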



  • If you’re planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?

    I use a 1.5B Qwen model (mega dumb), but with no context limit I can attach the documentation for the language I’m using and the files from the repo I’m working in (always a local repo in my case). I can usually explain what I’m doing, what I’m trying to accomplish, and what I’ve tried, and the LLM will generate snippets that at the very least point me in the right direction, but more often than not solve the problem (after minor tweaks, because the dumb model is not so good at coding).
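The “attach docs and repo files as context” step above can be sketched as a small prompt builder. This is a hypothetical helper, not any particular tool’s API: it just concatenates documentation and local repo files ahead of the question, which is the whole trick when your self-hosted model has an effectively unbounded context. You’d then POST the result to whatever local endpoint your server exposes (llama.cpp’s server and Ollama both speak an OpenAI-compatible API, for example).

```python
from pathlib import Path

def build_prompt(question: str, doc_paths: list[str], repo_paths: list[str]) -> str:
    """Concatenate docs + repo files into one prompt for a local model.

    Section headers are arbitrary markers; any consistent delimiter works,
    they just help the model keep the sources apart.
    """
    sections = []
    for p in doc_paths:
        sections.append(f"## Documentation: {p}\n{Path(p).read_text()}")
    for p in repo_paths:
        sections.append(f"## Repo file: {p}\n{Path(p).read_text()}")
    sections.append(f"## Question\n{question}")
    return "\n\n".join(sections)
```

With a big dumb model and a prompt like this, the context does most of the heavy lifting; the model only has to connect what it was handed.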