  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Password managers...

    For a single password, it is indeed illogical to distribute it to others as a means of preventing it from being stolen and misused.

    That said, the concept of distributing authority amongst others is quite sound. Instead of each owner having the whole secret, they only have a portion of it, and a majority of owners need to agree in order to combine their parts and use the secret. Rather than passwords, it’s typically used for cryptographically signing off on something’s authenticity (eg software updates), where it’s known as threshold signatures:

    Imagine for a moment, instead of having 1 secret key, you have 7 secret keys, of which 4 are required to cooperate in the FROST protocol to produce a signature for a given message. You can replace these numbers with some integer t (instead of 4) out of n (instead of 7).

    This signature is valid for a single public key.

    If fewer than t participants are dishonest, the entire protocol is secure.
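    The t-of-n share-splitting idea above can be sketched with Shamir's secret sharing, which underlies such threshold schemes (FROST distributes signing shares rather than a raw key; this toy Python, with my own function names, is an illustration and not production crypto):

```python
# Toy t-of-n secret splitting (Shamir's scheme): any t shares reconstruct
# the secret; fewer than t reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime field; real systems use the curve's field

def split(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    # Random degree-(t-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    total = 0
    for xj, yj in shares:
        num = den = 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

shares = split(12345, t=4, n=7)
assert combine(shares[:4]) == 12345   # any 4 of the 7 shares suffice
assert combine(shares[3:7]) == 12345
```

    Threshold signing goes one step further: the shares are never recombined in one place; instead each holder signs with their share and the partial signatures are combined.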


  • Used for AI, I agree that a faraway, loud, energy-hungry data center comes with a huge host of negatives for the locals, to the point that I’m not sure why they keep getting building approval.

    But my point is that in an eventual post-bubble world where AI has its market correction, there will be at least some salvage value in a building that already has power and data connections. A loud, energy-hungry data center can be tamed into a quiet, energy-sipping one depending on what hardware it’s filled with. Remove the GPUs, add some plain servers, and that’s a run-of-the-mill data center, the likes of which have been neighbors to urbanites for decades.

    I suppose I’d rehash my opinion as such: building new data centers can be wasteful, but I think changing out the workload can do a lot to reduce the impacts (aka harm reduction), making it less like reopening a landfill, and more like rededicating a warehouse. If the building is already standing, there’s no point in tearing it down without cause. Worst case, it becomes climate-controlled paper document storage, which is the least impactful use-case I can imagine.



  • Absolutely, yes. I didn’t want to elongate my comment further, but one odd benefit of the Dot Com bubble collapsing was all of the dark fibre optic cable laid in the ground. Those would later be lit up, to provide additional bandwidth or private circuits, and some even became fibre to the home, since some municipalities ended up owning the fibre network.

    In a strange twist, the company that produced much of this fibre optic cable and nearly went under when the bubble popped – Corning – would later become instrumental in another boom, because its glass expertise meant it knew how to produce durable smartphone screens. Corning is the maker of Gorilla Glass.


  • I’m not going to come running to the defense of private equity (PE) firms, but compared to so-called AI companies, the PE firms are at least building tangible things that have an ostensible alternative use. A physical data center building – even one located far from the typical metropolitan areas that have better connectivity to the world’s fibre networks – will still be an asset with some utility when/if the AI bubble pops.

    In that scenario, the PE firm would certainly take a haircut on their investment, but they’d still get something because an already-built data center will sell for some non-zero price, with possible buyers being the conventional, non-AI companies that just happen to need some cheap rack space. Looking at the AI companies though, what assets do they have which carry some intrinsic value?

    It is often said that during the California Gold Rush, the richest people were not those who staked out the best gold mining sites, but those who sold pickaxes to the miners. At least until gold fever gave way to the sober realization that it was overhyped. So too would PE firms pivot to whatever comes next, selling their remaining interest in the prior hype cycle and moving on to the next.

    I’ve opined before that because no one knows when the bubble will burst, it is simultaneously financially dangerous to: 1) invest into that market segment, but also 2) to exit from that market segment. And so if a PE firm has already bet most of the farm, then they might just have to follow through with it and pray for the best.


  • I presume we’re talking about superconductors; I don’t know what a supra (?) conductor would be.

    There are two questions here: 1) how much superconducting material is required for today’s state-of-the-art quantum computers, and 2) how quantum computers would be commercialized. The first deals in material science and whether more-capable superconductors can be developed at scale, ideally working at room temperature and thus not requiring liquid helium. Even a plentiful superconductor that requires merely liquid nitrogen would be a big improvement.

    But the second question is probably the limiting factor, because although quantum computers are billed as the next iteration of computing, the fact of the matter is that “classical” computers will still be able to do most workloads faster than quantum computers, today and well into the future.

    The reality is that quantum computers excel at only a specific subset of computational tasks, ones which classically might require massive parallelism. For example, brute-forcing encryption keys is one such task, but even applying Grover’s algorithm optimally, the speed-up is a square-root factor. That is to say, if a cryptographic algorithm would need 2^128 operations to brute-force on a classical computer, then an optimal quantum computer would need only 2^64 quantum operations. If quantum computers achieve the equivalent performance of today’s classical computers, then 2^64 is achievable, and that cryptographic algorithm is broken.

    If. And it’s fairly easy to see how to avoid this problem: use “bigger” cryptographic algorithms. So what would quantum computers be commercialized for? Quite frankly, I have no idea: until quantum computers are commonly available and there is a workload which classical computers cannot reasonably handle, there won’t be a market for them.
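    The key-size arithmetic here is simple enough to check directly (a back-of-the-envelope sketch, nothing more):

```python
import math

# Grover-style search gives a square-root speedup over classical brute force:
# a keyspace of 2**k falls in roughly 2**(k/2) quantum operations.
classical_ops = 2**128
quantum_ops = math.isqrt(classical_ops)  # exact here, since 128 is even

assert quantum_ops == 2**64
# The defence is simply a bigger key: doubling it restores the old margin.
assert math.isqrt(2**256) == 2**128
```

    Which is why symmetric ciphers just move from 128-bit to 256-bit keys, rather than being abandoned.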

    If I had to guess, I imagine that graph theorists will like quantum computers, because graph problems can blow up in complexity really fast on classical machines but are more tame on quantum computers. The only commercial applications arising from that, though, would be social media (eg Facebook hires a lot of graph theorists) and surveillance (finding correlations in masses of data). Uh, those are not wide markets, although they would have deep pockets to pay for experimental quantum computers.

    So uh, not much that would benefit the average person.





  • As a practical matter, relative directions are already hard enough, where I might say that Colorado is east of California, and California is west of Colorado.

    To use +/- East would mean there’s now just a single symbol of difference between relative directions: California being -East of Colorado, and Colorado being +East of California.

    Also, let’s not forget that the conventional meridian used for Earth navigation is centered on Greenwich in the UK, a holdover from the colonial era when Europe was put front-and-center on the map and everything else was “free real estate”. Perhaps if the New World didn’t exist, we would have a right-ascension-based system where Greenwich is still 0-deg East and Asia is almost 160-deg East. Why would colonialists center their maps on anywhere but themselves?




  • Restaurants (including franchises of chains) are indeed a major segment of small businesses. Looking more broadly, any industry which 1) offers a service/product/utility, and 2) has proven not to inflate beyond its fundamental target audience, is likely to be made up of small businesses. Those are the parameters which stave off corporate takeovers and consolidation, because no one will invest in a small business if the prospect of infinite growth isn’t there. So the business stays small. And small is often perfectly fine.

    That is to say, restaurants (humans can only eat so much food), bicycle stores (humans can only ride so much per day), and local produce shops (even in the Central Valley of California, there’s only so much produce to sell, and humans can’t eat infinite quantities) have these qualities.

    But compare those to a restaurant supply warehouse or a music equipment store, where the items can be shipped and need no customization by the end user. There, consolidation and corporate meddling are possible and probable.

    Then you have industries which are often local and small but are prone to financial hazards, such as real estate agents and used car lenders. Because they get paid as a percentage of the transaction size, if the price of houses or cars go up in an unchecked fashion, the profit margins also increase linearly, which makes them more tempting for corporate involvement.

    There are corporate-owned national chains of real estate agents, self storage, department stores, and payday loan offices in the USA. But I’m not aware of a national chain for bicycle or bicycle accessories. Even regional chains for bicycles are few and far between. Some consolidation has happened there, but by most definitions, a bicycle shop is very much a small business.



  • https://ipv6now.com.au/primers/IPv6Reasons.php

    Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it should have run out of addresses in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, and so NAT was introduced to stave off address exhaustion. That caused huge problems that persist to this day, like mismanagement of firewalls and the need to do port-forwarding. It also broke end-to-end connectivity, which requires additional workarounds like STUN/TURN that continue to plague gamers and video conferencing software.

    And because of that scarcity, it’s become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).

    The want for v6 is technical, moral, and even economical: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, to bolster their domestic technological familiarity and to escape the broad problems with Business As Usual.

    Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Surprisingly, v6-only is very common now on mobile networks for countries that never had many v4 addresses. And it’s an interop requirement for all Apple apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack IP networks, so they can reach services that might be v4-only or v6-only.

    In due course, the unstoppable march of time will leave v4-only users in the past.
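    The sheer scale of the address-space difference is easy to demonstrate with Python's stdlib ipaddress module (a quick illustration of mine, not from the linked primer):

```python
import ipaddress

v4 = ipaddress.ip_network("0.0.0.0/0")       # the entire Legacy IP space
v6 = ipaddress.ip_network("::/0")            # the entire IPv6 space
lan = ipaddress.ip_network("2001:db8::/64")  # one ordinary v6 LAN (doc prefix)

assert v4.num_addresses == 2**32    # ~4.3 billion, hence the land grab
assert v6.num_addresses == 2**128
# A single standard /64 LAN holds the whole IPv4 internet, squared:
assert lan.num_addresses == v4.num_addresses**2
```

    With every LAN getting a /64 by convention, there is simply no scarcity left to hoard.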


  • You might also try asking on [email protected] .

    Be advised that even if a VPN offers IPv6, they may not necessarily offer it sensibly. For example, some might only give you a single address (aka a routed /128). That might work for basic web fetching but it’s wholly inadequate if you wanted the VPN to also give addresses to any VMs, or if you want each outbound connection to use a unique IP. And that’s a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can’t.

    Some VPNs will offer you a /64 subnet, but their software might not check if your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy-extensions enabled to prevent this, but good VPN software should explicitly check for that. Not all software does.
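    Checking for that leak yourself is straightforward: classic EUI-64 interface IDs embed the bytes ff:fe in the middle of the host portion, with the MAC's halves on either side. A rough heuristic (my own helper, not part of any VPN software):

```python
import ipaddress

def looks_like_eui64(addr: str) -> bool:
    """Heuristic: EUI-64 SLAAC addresses insert 0xff,0xfe between the two
    halves of the NIC's MAC, i.e. at bytes 11-12 of the 16-byte address."""
    b = ipaddress.IPv6Address(addr).packed
    return b[11] == 0xFF and b[12] == 0xFE

# MAC 00:11:22:33:44:55 would yield an interface ID like 0211:22ff:fe33:4455
assert looks_like_eui64("2001:db8::211:22ff:fe33:4455")
# A privacy-extension (randomized) address lacks the ff:fe marker
assert not looks_like_eui64("2001:db8::1234:5678:9abc:def0")
```

    Privacy-extension addresses (RFC 4941) are randomized and rotated, so they carry no such marker.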


  • Connection tracking might not be strictly necessary for a reverse proxy, but it’s worth discussing what happens if connection tracking is disabled or if the known-connections table runs out of room. For a well-behaved protocol like HTTP(S), which has a fixed inbound port (eg 80 or 443) and uses TCP, tracking a connection means being aware of the TCP connection state, which the destination OS already has to do. And since a reverse proxy terminates the TCP connection, the effort for connection tracking is minimal.

    For a poorly-behaved protocol like FTP – which receives initial packets on a fixed inbound port but then spawns a separate port for outbound packets – the effort of connection tracking means setting up the firewall to allow ongoing (ie established) traffic to pass in.

    But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a mid-way router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, then the only way in would be if conntrack looks up the connection details from its table and allows the ICMP packet in, as “related” traffic. This is not dissimilar to the FTP case above, but rather than a different port number, it’s an entirely different protocol.

    And then there’s UDP tracking, which is relevant to QUIC. For hosting a service, UDP is connectionless, and so for any inbound packet received on port XYZ, conntrack will permit an outbound packet on port XYZ. But that’s redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service. In the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit an inbound packet on port ABC. If you are doing lots of DNS lookups (typically over UDP), then that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183

    It may behoove you to first look at what’s filling conntrack’s table before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is due to numerous DNS resolution requests from trying to look up spam source IPs, then perhaps the logs should not do a synchronous DNS lookup, or you can also skip connection tracking for DNS.
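    With nftables, skipping conntrack for already-permitted ports looks roughly like this (a sketch assuming ports 80/443, the inet family, and made-up table/chain names; adapt to your actual ruleset):

```shell
# Chains hooked at priority "raw" run before conntrack, so matching
# packets never create entries in the connection-tracking table.
nft add table inet notrack_tbl
nft 'add chain inet notrack_tbl pre { type filter hook prerouting priority raw; }'
nft 'add chain inet notrack_tbl out { type filter hook output priority raw; }'
nft 'add rule inet notrack_tbl pre tcp dport { 80, 443 } notrack'
nft 'add rule inet notrack_tbl out tcp sport { 80, 443 } notrack'
```

    The trade-off: untracked flows lose the “related” ICMP handling described above, so your explicit rules must cover everything those connections need.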


  • https://github.com/Overv/vramfs

    Oh, it’s a userspace (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE will 1) always bounce requests back through userspace, which costs performance, and 2) destroy any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it’s leaving quite a lot on the table, IMO.

    If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it’s all just mapped into PCIe). And if it basically offered VRAM as PCIe memory, then the VRAM could potentially serve certain niche RAM use-cases, like hugepages: some applications need large quantities of memory, plus a guarantee that it won’t be evicted from RAM, and whose physical addresses can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages backed by VRAM, then those applications could benefit.

    And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), then it’s entirely plausible to use VRAM as if it were hot-insertable memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access what parts of PCIe-mapped-VRAM.

    Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I’ve always liked the thought of DDR on PCIe cards. “All technologies are doomed to reinvent PCIe,” I think someone from Level1Techs once said.