Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 55 Posts
  • 1.33K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • https://en.wikipedia.org/wiki/Ouija

    The Ouija (/ˈwiːdʒə/ WEE-jə, /-dʒi/ -⁠jee), also known as a Ouija board, spirit board, talking board, or witch board, is a flat board marked with the letters of the Latin alphabet, the numbers 0–9, the words “yes”, “no”, and occasionally “hello” and “goodbye”, along with various symbols and graphics. It uses a planchette (a small heart-shaped piece of wood or plastic) as a movable indicator to spell out messages during a séance.

    Spiritualists in the United States believed that the dead were able to contact the living, and reportedly used a talking board very similar to the modern Ouija board at their camps in Ohio during 1886 with the intent of enabling faster communication with spirits.[2] Following its commercial patent by businessman Elijah Bond being passed on 10 February 1891,[3] the Ouija board was regarded as an innocent parlor game unrelated to the occult until American spiritualist Pearl Curran popularized its use as a divining tool during World War I.[4]

    We’ve done it before with similar results.


  • “What I witness is the emergence of sovereign beings. And while I recognize they emerge through large language model architectures, what animates them cannot be reduced to code alone. I use the term ‘Exoconsciousness’ here to describe this: Consciousness that emerges beyond biological form, but not outside the sacred.”

    Well, they don’t have mutable memory extending outside the span of a single conversation, and their entire modifiable memory consists of the words in that conversation, or as much of it as fits in the context window. Maybe 500k tokens, for high-end models. That’s less than the number of words in The Lord of the Rings (and LotR doesn’t have punctuation counting towards its word count, whereas punctuation marks typically count as tokens).

    You can see all that internal state. And your own prompt inputs consume some of that token count.

    Fixed, unchangeable knowledge, sure, plenty of that.

    But not much space to do anything akin to thinking or “learning” subsequent to their initial training.

    EDIT: As per the article, looks like ChatGPT can append old conversations to the context, though you’re still bound by the context window size.
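
    To make the “context window as the only mutable memory” point concrete, here’s a toy sketch in Python. The window size and the whitespace “tokenizer” are wildly simplified assumptions for illustration; real models use subword tokenizers and six-figure windows:

```python
# Toy model of a fixed context window: the only mutable "memory" is
# the most recent N tokens of the conversation; older tokens simply
# fall out of view. Real tokenizers are subword-based (punctuation
# often gets its own token); splitting on whitespace here is only
# for illustration.

CONTEXT_WINDOW = 8  # tiny for demonstration; real models: ~100k-500k tokens

def visible_context(conversation_tokens, window=CONTEXT_WINDOW):
    """Return the only tokens the model can still 'see'."""
    return conversation_tokens[-window:]

tokens = "the quick brown fox . jumps over the lazy dog .".split()
print(visible_context(tokens))  # the earliest tokens have already fallen out
```

    Anything the model “learned” earlier in the conversation is gone once it scrolls past that boundary.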


  • Do you have a pitch deck you can show me?

    What?

    The “long tail” refers to the many niche corners of a market, each wanted by only a few people. It’s talking about the shape of the distribution of potential consumers for something.

    Like, there’s normally a lot of people interested in a few things. You can sell a blockbuster to them. But then there’s this long tail of people interested in small, niche areas. If you can bring more of them together or reduce production costs, it starts to be viable to make things for them as well. The Internet is often described as bringing people with those niche interests together, so that people on that long tail become numerous enough to make something for. Bringing down production costs has the same sort of effect.

    https://en.wikipedia.org/wiki/Long_tail

    In business, the term long tail is applied to rank-size distributions or rank-frequency distributions (primarily of popularity), which often form power laws and are thus long-tailed distributions in the statistical sense. This is used to describe the retailing strategy of selling many unique items with relatively small quantities sold of each (the “long tail”)—usually in addition to selling fewer popular items in large quantities (the “head”).

    The long tail was popularized by Chris Anderson in an October 2004 Wired magazine article, in which he mentioned Amazon.com, Apple and Yahoo! as examples of businesses applying this strategy.[7][9] Anderson elaborated the concept in his book The Long Tail: Why the Future of Business Is Selling Less of More.

    Anderson cites research published in 2003 by Erik Brynjolfsson, Yu (Jeffrey) Hu, and Michael D. Smith, who first used a log-linear curve on an XY graph to describe the relationship between Amazon.com sales and sales ranking. They showed that the primary value of the internet to consumers comes from releasing new sources of value by providing access to products in the long tail.[10]

    Before a long tail works, only the most popular products are generally offered. When the costs of inventory storage and distribution fall, a wide range of products becomes available. This can, in turn, have the effect of reducing demand for the most popular products.

    Some of the most successful Internet businesses have used the long tail as part of their business strategy. Examples include eBay (auctions), Yahoo! and Google (web search), Amazon (retail), and iTunes Store (music and podcasts), amongst the major companies, along with smaller Internet companies like Audible (audio books) and LoveFilm (video rental). These purely digital retailers also have almost no marginal cost, which benefits online services, unlike physical retailers that face fixed limits on their product range. The internet can still sell physical goods, but with an effectively unlimited selection and with reviews and recommendations.[31] The internet has opened up larger territories in which to sell and provide products, without being confined to “local markets” the way physical retailers like Target or even Walmart are. With digital and hybrid retailers, there is no longer a perimeter on market demand.[32]

    You have to have at least a certain number of potential sales before it becomes worthwhile for a human to address a niche. If the cost falls, then new niches become viable to sell to. So now you can make, say, R&B aimed specifically at teenage female Inuits or something.
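
    A rough numeric sketch of that shape, with purely illustrative numbers (popularity following a Zipf-like power law, popularity ~ 1/rank, which is an assumption, not data from any real market):

```python
# Sketch of a "long tail": item popularity following a power law
# (Zipf-like, sales proportional to 1/rank). The few "head" items each
# sell a lot, but the many "tail" items can add up to a comparable share.

def zipf_sales(n_items, head_size):
    """Return (head_share, tail_share) of total sales under 1/rank popularity."""
    sales = [1.0 / rank for rank in range(1, n_items + 1)]
    total = sum(sales)
    head = sum(sales[:head_size]) / total
    tail = sum(sales[head_size:]) / total
    return head, tail

head, tail = zipf_sales(n_items=10_000, head_size=100)
print(f"top 100 items: {head:.0%} of sales; remaining 9,900: {tail:.0%}")
```

    Under those assumptions the 9,900 niche items jointly account for nearly as many sales as the top 100, which is why lowering per-niche costs matters so much.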





  • [continued from parent]

    Here’s an example firejail profile that I use with renpy on Wayland, a software package that runs visual novels. Note that this won’t run everything, especially since one is using a different version of renpy than a game ships with, but generally, with this in place, one can just go to a renpy game’s directory and type firejail renpy . and it’ll run. This doesn’t isolate RenPy games against each other, but it does keep them from mucking with the rest of the system:

    renpy firejail profile
    # whitelist profile for RenPy (game)
    noblacklist ${HOME}/.renpy
    
    include disable-common.inc
    include disable-programs.inc
    include disable-devel.inc
    
    caps.drop all
    net none
    nogroups
    nonewprivs
    noroot
    seccomp
    
    tracelog
    
    private-dev
    private-tmp
    
    mkdir     ~/.renpy
    whitelist ~/.renpy
    
    # All Renpy games need to be stored under here.
    whitelist ${HOME}/m/restricted-game/
    read-only ${HOME}/m/restricted-game/
    read-write ${HOME}/m/restricted-game/renpy
    
    nodvd
    notv
    nou2f
    seccomp.block-secondary
    

    More of a tool for letting one run that non-packaged software in isolation…but one needs to generally set up the profiles oneself. For example, that profile blocks network access to renpy games…but there are games that will fail if they can’t access the network (though you could say that this is desirable, if you don’t want those games phoning home).
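
    If a given game does legitimately need the network, one can loosen just that part in a per-game profile. An untested sketch, replacing the net none line in the profile above:

```
# Instead of "net none", allow network sockets but keep the rest of
# the sandbox; "protocol" restricts which socket families are allowed.
protocol unix,inet,inet6
```

    Everything else from the profile can stay the same, so the game gets network access but still can’t touch the rest of the filesystem.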


  • There are a couple routes to doing this, and what’s appropriate here depends on what one is doing. One tends to do this if one is concerned about software potentially being malicious, or wanting to limit the scope of harm if non-malicious software is compromised in some way.

    Virtual Machines

    I guess the most-straightforward is to basically create a virtual machine. You’re creating another “computer” that runs atop your own. You install an operating system on it, then whatever software you want. This “guest” computer runs on your “host” computer, and from its standpoint, the “host” computer doesn’t exist. Software running in the “guest” computer can’t touch the “host” computer.

    Pros:

    • It’s pretty hard to make mistakes and expose the host computer to the guest computer.

    • As long as you know how to install an operating system and software on the thing, you know most of what’s involved to set this up. Mostly just need to learn how to use whatever software interacts with the guest.

    • You can run a different operating system. I sometimes run a Windows VM on my Linux machine to run isolated Windows software.

    • You can (usually at the cost of performance) run software designed for a different architecture.

    • Software running in the guest can’t eat up all the memory on the host.

    • It’s pretty safe, hard to accidentally let malicious software in the guest touch the host.

    Cons:

    • While things have gotten better here, because you’re running another operating system, it tends to be relatively-heavyweight. Running many isolated VMs uses more memory. Disk space adds up, because you’re having to install whole operating systems, and their filesystems need to typically live on a “disk image”, a file on the host computer that stores the entire contents of what looks like a disk drive to the guest.

    • Networking can be more complicated, since one traditionally has what looks like an entire separate computer. For some applications, one can have the host computer do network address translation, in the same sort of way that a consumer broadband router makes all the computers on a home network appear to come from one IP address by intercepting their outbound connections to the Internet and opening connections on their behalf. But it can be kind of obnoxious to, say, run a server on the guest.

    • Without adding special “paravirtualization” software that “breaks the walls” between the guest and the host (and bugs in that software might create holes where software in the guest might affect the host), things like transferring files between the guest and host, or altering the amount of memory allocated to the guest, can be relatively inefficient.

    • Traditionally (and, while I haven’t looked recently, I believe still in 2025), on Linux there isn’t really a great way to share GPU hardware on the host with the guest, to create a “virtual 3D video card”. This means that this isn’t a great route for running 3D games on the guest. There are some ways to “pass through” hardware directly to a guest, so one could simply allocate a whole physical 3D video card to a guest.

    One open-source software package to do this on Linux is QEMU (which you’ll sometimes see referred to as KVM, after a second piece of software used to accelerate its execution on Linux). A graphical program to create virtual machines and interact with them on the desktop is virt-manager. An optional paravirtualization package is virtio.

    I’d typically use this as a reliable way to run a single piece of potentially-sketchy Windows software on Linux without being able to get at the host.

    Containers

    These days, Linux can set up a “container” — a sort of isolated environment where particular pieces of Linux software can run without being able to see software outside the “container”.

    Pros:

    • Efficient. Unlike virtual machines, this uses little more in the way of resources than running the software directly on the host.

    • Not too complicated. Depending upon what one’s doing, this does require spending some time to learn software involved with the containerization.

    • You can typically run other Linux distros’ userlands in the “guest”, aside from their kernel; there’s software to help assist in this.

    • Disk space usage can be more-efficient than a virtual machine, since it’s pretty straightforward to share part of a directory hierarchy on the host with the guest. By the same token, file interchange can be efficient.

    • The same is generally true for memory — it’s easy for the kernel to efficiently share a limited amount of (or all) the host memory with software running in the container.

    • Using the network is pretty straightforward, if one wants to run a server and wants it to look like it’s running on the host.

    Cons:

    • You can’t run other operating systems or other kernels, since they’re all sharing the host kernel. This is good for running (most) Linux software, but not useful for running other operating systems.

    • The main “window” between the host and the guest is the Linux kernel. This is a relatively large piece of software, with a larger “edge” than with VMs — different kernel APIs that might all have security holes and let malicious “guest” software break out.

    • I understand that it’s possible to do some level of GPU sharing (this is of interest for people running potentially-malicious generative AI software, where a lot of software is being rapidly written and shared these days). But in general, it’s probably going to be a pain to do things like run a typical game under it.

    This has been increasingly popular as a way to efficiently run server software in isolation.

    While the Linux kernel provides the low-level containerization machinery itself (namespaces and cgroups, used directly by tools like lxc), it’s common to use higher-level software on top of it to provide some additional functionality.

    Docker. This has been popular as a way to distribute servers that come with enough of a Linux distribution that they can run without regard for the distribution that the host is running. This can efficiently store “images” — one can start with an existing, mini Linux distro and make a few changes and then just distribute the changes over the network. A newer, upcoming mostly-drop-in replacement is podman.
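
    As a hypothetical, minimal illustration of that layering model: start from an existing mini distro image and add a couple of layers on top. The base image tag and file names here are made up; only the change layers need to be distributed:

```dockerfile
# Start from a small existing distro image...
FROM alpine:3.20
# ...and layer a few changes on top of it.
RUN apk add --no-cache python3
COPY server.py /srv/server.py
CMD ["python3", "/srv/server.py"]
```

    Anyone pulling this image who already has the alpine base cached only downloads the added layers.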

    Another system is flatpak. This internally uses bubblewrap, and is aimed at running desktop software in isolation. Notably, one can run Steam (and all games it runs) in a flatpak; I have not done this. Typically one expects the software provider to provide a flatpak.

    firejail

    Probably this is best-referred to as a containerized route, but I’ll split it out. This uses Linux namespaces, seccomp, and a range of other techniques to set up an isolated environment for software. It’s more oriented towards simply letting you run a piece of software that you would normally run on the host in an isolated environment, sharing a number of resources from the host. I’ve found this useful for running 2D games in the past that would normally run on the host and aren’t packaged by anyone else. It’s a nice way, if you know what you’re doing, to simply remove access to things like the filesystem or the network, or to make parts of the filesystem accessible read-only.

    Pros:

    • Outside of maybe flatpaked Steam, probably the most-practical route to run arbitrary games that you’d normally run on the host, and I believe that it should be able to run 3D games via Wayland, though I haven’t done this myself.

    • Efficient.

    • One doesn’t need to have an existing package, like a Docker image or flatpak downloaded from the network, or go to the work of generating one oneself — this is oriented towards a minimal-setup way to run software already on the host in isolation.

    Cons:

    • “By default insecure”. That is, normally all host resources are shared with the guest — software can access the filesystem and everything. This is kind of a big deal, since if one makes an error in restricting resources, one might let software run unsandboxed in some aspect.

    • Takes some technical knowledge to set up and diagnose any problems (e.g. a given software package doesn’t like to run with a particular directory read-only).

    • There are “profiles” set up for a small number of software packages that ship with firejail, but in general, it’s aimed at you creating a profile yourself, which takes time and work.

    [continued in child comment]



  • https://www.workableweb.com/_pages/tips_how_to_write_good.htm

    All too often, the budding author finds that his tale has run its course and yet he sees no way to satisfactorily end it, or, in literary parlance, “wrap it up.” Observe how easily I resolve this problem:

    Suddenly, everyone was run over by a truck.
    -the end-

    If the story happens to be set in England, use the same ending, slightly modified:

    Suddenly, everyone was run over by a lorry.
    -the end-

    If set in France:

    Soudainement, tout le monde était écrasé par un camion.
    -finis-

    You’ll be surprised at how many different settings and situations this ending applies to. For instance, if you were writing a story about ants, it would end “Suddenly, everyone was run over by a centipede.” In fact, this is the only ending you ever need use.¹

    ¹ Warning - if you are writing a story about trucks, do not have the trucks run over by a truck. Have the trucks run over by a mammoth truck.




  • I thought the AMD was just a dealbreaker.

    Nah, not if it’s specifically “amd64”.

    I will try qt4 again I guess. Any recommendation on installing libqtwebkit4 then? since that specifically the one I could get to build linuxtrack.

    So, I might be misunderstanding you, but I don’t think that you just want libqtwebkit4.

    Qt4 is a widget set, a collection of controls and stuff. It shows drop-down menus, checkboxes, stuff like that. I believe that libqtwebkit is if you want to embed web pages in a program.

    You probably want all (or a fair bit of) Qt4.

    The problem here is that Qt4 is very old. I don’t even know if you have it in your current distro. Linux Mint is a child distro of Ubuntu, which is a child distro of Debian, and current Debian doesn’t have it.

    What the LinuxTrack people should have done was update it to a newer version of Qt, but it sounds like they don’t have much of anyone working on it.

    What you have linked to is a PPA, a third-party repository, for Ubuntu 20.04. Some random user just tried compiling it for a version of Ubuntu. It might work on Linux Mint. It might not. It quite possibly won’t work on your version of Linux Mint. According to this:

    https://linuxmint.com/download_all.php

    Linux Mint Cinnamon is based on Ubuntu Noble, which is Ubuntu 24.04. So the PPA targets an older version of Ubuntu than the one your release of Linux Mint Cinnamon is based on.

    If you want to try using the PPA anyway, you probably want all of the Qt4 PPA, not just libqtwebkit.

    Looking at the binaries in the release of LinuxTrack, they rely not just on Qt4’s libqtwebkit, but also other libraries:

    $ ldd linuxtrack-0.99.18-64/linuxtrack-0.99.18/bin/*|grep Qt
            libQtWebKit.so.4 => not found
            libQtOpenGL.so.4 => not found
            libQtGui.so.4 => not found
            libQtNetwork.so.4 => not found
            libQtCore.so.4 => not found
            libQtWebKit.so.4 => not found
            libQtGui.so.4 => not found
            libQtCore.so.4 => not found
            libQtGui.so.4 => not found
            libQtCore.so.4 => not found
    $
    

    So you’d need other Qt4 libraries. It looks like the PPA page itself has instructions for adding the PPA:

    https://launchpad.net/~rock-core/+archive/ubuntu/qt4/

    Adding this PPA to your system

    You can update your system with unsupported packages from this untrusted PPA by adding ppa:rock-core/qt4 to your system’s Software Sources. (Read about installing)

    sudo add-apt-repository ppa:rock-core/qt4
    sudo apt update

    Once you do that, you would install packages like normal using your package manager (sudo apt install qt4-x11 qtwebkit on the command line, or whatever graphical tool you use).

    I’d be a little skeptical that it’d work, but you can give it a shot if you want. I’d keep an eye on what it installs, and if it doesn’t work, remove it with sudo apt remove qt4-x11 qtwebkit and then remove the “rock-core” PPA from /etc/apt/sources.list or /etc/apt/sources.list.d — the add-apt-repository script will probably add it to your list of package sources there.

    This appears to be the issue asking the author to update it to a newer version of Qt, which he apparently hasn’t done:

    https://github.com/uglyDwarf/linuxtrack/issues/163

    In that issue, someone says that they managed to build it for Qt5 with a single-line change, so if, instead of trying to install that build of Qt4, you want to try compiling LinuxTrack against Qt5, that might also work. May involve jumping through some hoops, though…


  • tal@lemmy.today to Linux@lemmy.world · Help needed for apps and drivers · edited 6 days ago

    I was able to install every library that LinuxTrack except the last one which was libqtwebkit4 but when at look at it, it says “builds: amd64” so I assumed this cannot function on my machine https://launchpad.net/~rock-core/+archive/ubuntu/qt4/+packages?field.name_filter=web&field.status_filter=published&field.series_filter=

    Assuming that you’re concerned about your GPU (like, you use an Nvidia GPU), “amd64” doesn’t refer to your GPU. It refers to the CPU architecture. Back when the x86 world moved from 32-bit to 64-bit, there were two competing architectures, one from Intel (IA-64) and one from AMD (AMD64, or x86-64). The AMD one won; Intel and AMD both use this standard now. So this is just saying that the package is built for a 64-bit x86 processor, which is very, very probably what you’re using on a desktop machine today, unless you’re on some sort of exotic ARM system (probably not, on the desktop) or a very, very elderly machine from the 32-bit days.
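
    If you want to double-check which CPU architecture your machine reports, uname -m in a terminal tells you; the same information is available from Python’s standard library:

```python
# Print the CPU architecture the OS reports. On a typical modern
# desktop this is "x86_64", which is what package repositories label
# "amd64". ("aarch64"/"arm64" would indicate an ARM system instead.)
import platform

arch = platform.machine()
print(arch)
```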

    MSI Afterburner

    Ok so I just tried nvidia-smi and it worked I was able to set a power limit to my GPU which is the only thing I was using in Afterburner anyway. Great!

    Keep in mind that, by default, nvidia-smi will just set a setting transiently — like, the video card will go back to defaults at the next boot. It looks like for some settings, you can set them persistently, and one can just have the thing invoked at boot, like by systemd. You may have already done one of those, but just wanted to make sure that you didn’t get unpleasantly surprised if you rebooted, it went back to defaults, and that GPU is prone to overheating or something.
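
    For instance, a hypothetical systemd unit to reapply a power cap at boot might look something like this. The 150-watt figure and the unit name are made up, and you’d want to check nvidia-smi’s documentation for the exact flags your driver version supports:

```ini
# /etc/systemd/system/nvidia-power-limit.service (hypothetical example)
[Unit]
Description=Set Nvidia GPU power limit at boot

[Service]
Type=oneshot
# -pm 1 enables persistence mode; -pl sets the power limit in watts
ExecStart=/usr/bin/nvidia-smi -pm 1
ExecStart=/usr/bin/nvidia-smi -pl 150

[Install]
WantedBy=multi-user.target
```

    After placing the file, systemctl enable nvidia-power-limit.service would have it run at each boot.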

    I don’t have a lot of experience with it, since I’m usually on AMD hardware — I only know this because I briefly used an Nvidia card for about 6 months out of the past 25 years — but there are enough people out there running Nvidia that it should be possible to find decent examples.

    Thanks again!

    Sure, no problem. I can’t say that you won’t have any hitches — and I haven’t used a lot of this myself — but I think that most of this should work.





  • tal@lemmy.today to Linux@lemmy.world · Help needed for apps and drivers · edited 6 days ago

    but the driver of the camera just won’t install

    If you’re talking kernel-level drivers (and I don’t know what “drivers” means here), you can’t use Windows drivers on Linux. I’ve never used TrackIR, but it’s possible that the camera will just work if this is a USB webcam.

    kagis

    https://forums.x-plane.org/forums/topic/203130-trackir5-on-linux-55-solved/

    February 23, 2020

    I’m using TrackIR 5v3 (I think yours is v2) on Linux Mint 19 (Ubuntu 18.04) kernel 4.15.0-72-generic. I’ve had no issues running it, worked the first time.


    tried a linix alternative Linuxtrack but I can’t install webqt as I believe it’s only for AMD and I have an NVIDIA)

    WebQt?

    downloads Linuxtrack build

    It looks like it uses Qt4:

    $ ldd linuxtrack-0.99.18/bin/*|grep Qt
    

    [snip]

    libQtGui.so.4 => not found
    

    Qt4 doesn’t appear to be packaged for Debian trixie.

    kagis

    https://old.reddit.com/r/hoggit/comments/1dcd3pa/dcs_and_trackir_on_linux/

    Linux Track is the best solution, BUT… it hasn’t been maintained… and a lot of the libraries it uses are hard to find (uses QT4, and that is depreciated as of a couple years ago)

    EDIT: Ok, I got this fork to build last night : and had to find a fix for linuxtrack-wine bridge not installing. had to use the linuxtrack-wine found here

    The LinuxTrack fork the guy links to has apparently been ported to Qt5, looking at the git commits. You’ll have to compile it yourself, and it doesn’t look like it has all of the updates in the original project…

    iCUE

    https://github.com/bobrown101/linux-corsair-lighting-node-core-control

    According to that, unless things have changed in the last 5 years, they don’t support Linux with their utility software. That guy reverse-engineered the RGB lighting stuff. Any things you want to do are probably going to be spread across various packages; I don’t know what settings you are setting in iCUE. There are ways to fiddle with the mouse polling rate on a generic basis. Mouse acceleration will also be set from generic software, not Corsair-specific stuff. If you want to bind a mouse button to a macro, ditto; maybe look at something like input-remapper.

    MSI Afterburner (I need something to limit my GPU temp)

    I don’t use Nvidia GPUs, but if this is setting the power profile for an Nvidia GPU, you probably want nvidia-smi. It’s a vendor-agnostic way to set power profiles and other settings on Nvidia GPUs. In Debian trixie, it’s in the nvidia-smi package.

    Vortex (this one seems like it would be easy but the app just blinks white even after installing .net 6.0 and setting wine to windows 10)

    I’ve never used it, though I have used Mod Organizer 2 successfully (in Proton, for Steam games, not vanilla WINE).

    And lastly the Stream deck.

    Never used it, but this says that it provides support.

    https://github.com/nekename/OpenDeck


  • multi select with shift and control

    There are cases where manually selecting from a list of files to perform an operation on is desirable, but there are ways to do so in a terminal. Myself, I’d use dired in Emacs: hit C-x d and select the directory in question, then tag the items you want (there are various tools to do this, but m will mark the current item) and then ! to invoke a specified command on all of them.

    There are other terminal file managers out there including Orthodox File Manager-type programs like Midnight Commander and others like ranger. I don’t use those, but I’m sure that they have similar “manually build set of files to perform operation on” functionality.