

I guess I’m an immortal autocrat with foresight and pattern recognition good enough to be the global economy by the 16th century.
I play too much 4x.
Don’t DM me without permission please


I set up Linux on a laptop with a particularly aggressive keyboard power button recently. I’d be at the terminal, go to hit backspace, and… where Linux?


Actually had this happen on a USB NTFS drive I haven’t migrated to another filesystem yet. It mounts at boot, so my whole system got hung up until I removed it from the fstab and installed the tools from the AUR to scan and fix the NTFS filesystem.
Was like a 20-minute fix, including the time to research the tools I needed.


The subtle smile in the bottom left is really getting me.


Awesome!
If you’re still looking for advice, another good piece I try to employ is:
Always be working towards something, even if that something is just relaxation. Live with intention, because time unaccounted for is time lost.


Stop listening to respond and this won’t happen. Actively listen and you’ll always have something to add at the end, when it’s your turn to speak.
Edit: Look y’all, I’ve been dealing with my ADHD unmedicated for over 20 years, and I have to say sometimes you need to listen to the neurotypicals. Just because conversation skills like “active listening” don’t come naturally to you doesn’t mean you should just discount the advice as neurotypical nonsense. Use those beautiful powers of observation and pattern recognition intentionally: listen, parse, connect back to context, respond. Believe it or not, you don’t need to have a response ready immediately, and most folks appreciate a few seconds of silence, as it shows them you care enough to respond genuinely rather than just speaking.


FUTO Keyboard does it too, so I think it’s an issue in Android itself; either that, or they’ve got similar implementations.


Are they chimeric? I’ve never seen such a clear division between tuxedo and tortoiseshell patterns.


Because nobody told them shoulder pads went out of style nearly 30 years ago.


I’d bet that self-hosting Jellyfin and running Sunshine/Moonlight has saved me close to $800 on comparable services since I learned to do it last year. So I’d have to say my GPU, which is used mostly for those purposes.


Same.
I think my school might have had a local mirror of their D2L Brightspace instance, though, because it was miraculously still up, just taking multiple minutes to load pages.


shake shake shake
🎱 - Meeting at noon. It should be an email but…
Outlook not so good


I’ve been using Flying Toasters as my screensaver since 2001. I never see it anymore because the display turns off, but it’s there!


I highly recommend unplugging all network cables from your smart TV, disabling its wifi, and using a cheap PC as a streaming box. Ad blockers, media ripping (it’s possible), games: it can do more than the smart TV can.


If you haven’t yet, question what being a man means to you, and what being a good person means to you.
You will, throughout your life, find those definitions challenged. How you respond to the first will help you develop a stronger sense of how you relate to your gender, and how it affects the way you interact with yourself and the world. How you respond to the second determines your character, which is how the world will see you as a person and, with sufficient introspection, how you will see yourself.
Keep growing. Keep learning.


When/If you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inferencing. It’s what I use; it gets the job done, but I often find context limits too small to be usable with larger models.
If you wanna go team red, Vulkan should still work for inferencing, and you have access to options with significantly more VRAM, letting you use larger models more effectively. I’m not sure about speed, though; I haven’t personally used AMD’s GPUs since around 2015.
If you’ve got a decent Nvidia GPU and are hopping onto Linux, look into the KoboldCpp Vulkan backend; in my experience it works far better than the CUDA backend and is astronomically faster than the CPU-only backend.
If you’re planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?
I use a 1.5B Qwen model (mega dumb), but with no context limit I can attach the documentation for the language I’m using and the files from the repo I’m working in (always a local repo in my case). I can usually explain what I’m doing, what I’m trying to accomplish, and what I’ve tried, and the LLM will generate snippets that at the very least point me in the right direction, and more often than not solve the problem (after minor tweaks, because the dumb model is not so good at coding).
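For anyone wondering what “add the documentation and repositories as context” can actually look like, here’s a rough sketch in Python. It assumes the local server exposes an OpenAI-compatible chat endpoint (KoboldCpp and most self-hosting tools can be run that way); the port, model name, and folder paths are just placeholders, not a copy of any real setup.

```python
# Rough sketch: glue local docs + repo files into the prompt for a
# self-hosted model. Assumes an OpenAI-compatible endpoint on localhost;
# the URL, model name, and folder paths below are placeholders.
from pathlib import Path
import requests

API_URL = "http://localhost:5001/v1/chat/completions"  # placeholder endpoint

def gather_context(roots, suffixes=(".md", ".rst", ".py")):
    """Concatenate documentation and repo files into one big context blob."""
    chunks = []
    for root in roots:
        for path in sorted(Path(root).rglob("*")):
            if path.is_file() and path.suffix in suffixes:
                chunks.append(f"--- {path} ---\n{path.read_text(errors='ignore')}")
    return "\n\n".join(chunks)

context = gather_context(["./docs", "./my-repo"])  # hypothetical local folders

question = (
    "Here's what I'm doing, what I'm trying to accomplish, "
    "and what I've tried so far: ..."
)

reply = requests.post(API_URL, json={
    "model": "qwen2.5-1.5b-instruct",  # whatever small model you loaded
    "messages": [
        {"role": "system",
         "content": "Answer using the attached documentation and repo files.\n\n" + context},
        {"role": "user", "content": question},
    ],
    "max_tokens": 512,
}, timeout=600)

print(reply.json()["choices"][0]["message"]["content"])
```

Dumping whole folders in like this only works if the server is launched with a large enough context window, which is the whole trade-off of running a tiny model in the first place.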


🗯️‽#*&


Over time, as lactose becomes a more reliable resource in the gut, bacteria that can process it will begin to grow, and with sufficient resources they can compete well enough to stabilize in the gut biome. HG Modernism on YouTube has a video on it if you’re interested.
https://www.youtube.com/watch?v=h90rEkbx95w