Nix needs a typed language, a better CLI, better documentation, a distributed cache, to get rid of the monorepo / single source of control, to support different builders (i.e. not just bash), and a less toxic community.
For Nix to be used by normal, non-technical users, it must have a GUI. That is not optional.
Nix could be great for a lot of things, but it’s not the only solution and won’t be the last.
What did you actually do?
Maven and Gradle might be terrible, but C and C++ have fucking nothing in terms of dependency management. Even C# has something that few people use, but it has something. C and C++ are such a shit show to build. It’s so bad they had to invent languages just to build them, and those regularly fuck up (make, CMake, autotools, SCons, Meson, …).
Pull a C or C++ project on a new distro or environment and try to build it, and you dive into the abyss of undeclared dependencies. And good fucking luck with glibc and glib dependencies. If the dev doesn’t know which versions they were actually using, it’s up to you to find out. Fun for the entire family!
Can’t imagine there is any. You need to learn three scripts to read Japanese fluently, IINM: Katakana, Hiragana, and something else… Probably someone who speaks Japanese can say.
Are you maybe forgetting to quote the packages? `kitty nix-shell --packages "$packages" --run fish`
Ah, yep. Indeed. Thanks for pointing that out.
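For anyone hitting the same thing, here’s a minimal sketch of what the quoting changes (the `packages` value is made up for illustration):

```shell
# Hypothetical value; imagine it holds several package names.
packages="python3 jq"

# Unquoted: word splitting turns the value into separate arguments,
# so nix-shell --packages would see two package names.
set -- $packages
echo "unquoted: $# arguments"

# Quoted: the whole value is passed through as a single argument.
set -- "$packages"
echo "quoted: $# arguments"
```

Which behaviour you want depends on whether the variable holds one package name or a whole list.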
Nothing to do with being Chinese. Amazon is going to brick some of their devices that play music in cars, IIRC. There are Western companies that made pacemakers, closed-sourced their communication protocols, went bust, and now the pacemakers are in patients who have no way to service them.
Open-sourcing after deprecation should be written into law.
I don’t know a single devops person who uses it. Not a single person in the tech companies I’ve worked at had even heard of it. When I presented it to solve problems it could actually solve, one response was “but I watched a video that said it’s hard to learn” (one from DistroTube, I think) and another was “it doesn’t work on Mac, does it?”, and that was that.
Lol, are you unhappy somebody disagrees with you? Quite childish.
> It’s finally ready for mass adoption, IMO
No way. It’s still a specialist OS. There’s no way I’m putting this into the hands of a Linux newbie or even the average Linux user. Their config still doesn’t have a UI, the flakes vs non-flakes debate is still in full swing (flakes are still marked experimental), the docs are far, far, far from user friendly, writing a Nix package is still not easy, and so much more.
Nix for sure was (and probably is) ahead of its time, but the UX is amongst the worst I’ve experienced - and I’ve written `init` and `upstart` services and configured my network with `ifconfig` before NetworkManager was stable.
So the certs end up in these files:
Only the first one is mentioned on Stack Overflow as being used by Go on Debian.
Curl seems to have its default location compiled in by passing `--with-ca-bundle`, but after installing `curlFull` and running `curl-config --ca`, it doesn’t look like that was used and the “default” path is guessed.
Looking further in the `curl` derivation, there are these lines for darwin:
```nix
lib.optionals stdenv.isDarwin [
  # Disable default CA bundle, use NIX_SSL_CERT_FILE or fallback to nss-cacert from the default profile.
  # Without this curl might detect /etc/ssl/cert.pem at build time on macOS, causing curl to ignore NIX_SSL_CERT_FILE.
  "--without-ca-bundle"
  "--without-ca-path"
]
```
So, check the value of `NIX_SSL_CERT_FILE` outside `nix shell` and within. The path might have to be set there. I dunno how to do that automatically with `nix shell`, so it might have to be done manually.
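As a rough sketch of that manual fallback (the system path below is an assumption, a common Linux location, not something curl or Nix guarantees):

```shell
# Sketch: choose a CA bundle, preferring NIX_SSL_CERT_FILE when it
# points at a readable file; /etc/ssl/certs/ca-certificates.crt is a
# common Linux default and may differ on your system.
pick_ca_bundle() {
  if [ -n "${NIX_SSL_CERT_FILE:-}" ] && [ -r "$NIX_SSL_CERT_FILE" ]; then
    printf '%s\n' "$NIX_SSL_CERT_FILE"
  else
    printf '%s\n' /etc/ssl/certs/ca-certificates.crt
  fi
}

# e.g. export NIX_SSL_CERT_FILE="$(pick_ca_bundle)" before running curl
```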
Could you provide more information? Snippets of the relevant config (e.g. custom TLS cert config), what the flake looks like, whether the TLS certs are self-signed? What exactly is breaking? `curl https://localhost:8080/something`?
Have you compared the environment variables?
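One low-tech way to do that comparison (the `nix develop` line is illustrative; run the second dump from inside whichever shell is misbehaving):

```shell
# Dump the environment outside the nix shell...
env | sort > /tmp/outside.env

# ...and again from inside it, e.g.:
#   nix develop --command sh -c 'env | sort > /tmp/inside.env'
# (illustrative; here we just dump the same environment so the diff runs)
env | sort > /tmp/inside.env

# Any TLS-related difference (NIX_SSL_CERT_FILE, SSL_CERT_FILE, ...) shows up here.
diff /tmp/outside.env /tmp/inside.env && echo "no differences"
```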
Privacy dies in thunderous applause. Good job voting in the conservatives and Luddites, everybody (or not voting at all)! A+ participation in democracy.
> Re-installing an OS is easy
Hmmm, this is where our opinions diverge. It’s easy when things go right. UEFI and MBR changed that. And I’ve had a few Linux installations fail for obscure reasons (mostly hardware support).
Installers also say “back up your data”, but if you’re coming from Windows, what do you do when your stuff is on OneDrive? What if you know nothing about partitioning and the installer just wipes the entire disk even though you expected your D:\ with your backup to be kept?
Oh, and should you keep that Windows recovery partition? What’s on there? How do you access the data to check?
There are a bunch of things to consider when installing to prevent data loss, and IMO they aren’t as straightforward as they seem.
Doing a regular system update or upgrading from one LTS release to another is comparable to an oil check or changing a tire. Installing an OS, IMO, not so much.
There’s just a lot of stuff going on and everybody can make an argument for knowing something:
And so on. It’s all true, but you only have so many hours in a day, and everybody has a different life. You could live in the most affluent society and be dealing with stuff that has nothing to do with computers.
Also, who decides what’s “basic knowledge”? I know a lot about software; what I know about hardware is minimal. What’s minimal to me, though, might be advanced to another, and vice versa.
We should be trying to be more empathetic. Recommending an advanced Linux OS to a newbie isn’t empathetic. Expecting a user to know how to install an OS isn’t empathetic.
We still suffer from runtime errors that could’ve been caught at compile time.
Anti Commercial-AI license