I’m the administrator of kbin.life, a general-purpose, tech-oriented kbin instance.

  • 2 Posts
  • 338 Comments
Joined 1 year ago
Cake day: June 29th, 2023



  • Hmm. That would mean it’s likely one of the following (there may be more options, but these spring to mind):

    • A Windows machine that has the network set as a public network, or NetBIOS specifically blocked on the LAN.
    • A Windows machine that has all the NetBIOS services disabled.
    • Not a Windows machine but a container, as others suggested, that’s running some kind of IIS install.
    • Not a Windows machine at all, but for some weird reason it has IIS files and a web server set up.

    I think you suggested in another comment that it’s not in your DHCP client list but has an IP in your normal range, which suggests it is set up with a static IP. That is odd.

    Some other people suggested it could be a container that is using a real IP rather than the NAT that Docker etc. usually use. I do know that you can use real IPs in containers; I’ve done it on my NAS to get a “proper” Linux install on top of the lite Linux the NAS provides. But I would have expected that you’d know about that, since it would require someone to actually choose the IP address to use.

    If you have managed switches, you could find which port on which switch the MAC address is on (find the MAC by looking up the ARP record for the IP using arp -a), provided the switch allows access to its forwarding tables. Of course, if the device is on Wi-Fi, that will only lead you to the access point it’s connecting through.
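As a sketch of the lookup step: here’s one way to pull the MAC address for a given IP out of arp -a output. The line format assumed here is the common Linux/BSD one (Windows separates the hex pairs with dashes instead of colons), so adjust the regex for your platform:

```python
import re

# Typical Linux/BSD `arp -a` line (format assumed):
#   ? (192.168.1.50) at aa:bb:cc:dd:ee:ff [ether] on eth0
MAC_RE = re.compile(r"\(([\d.]+)\) at ([0-9a-fA-F:]{17})")

def mac_for_ip(arp_output, ip):
    """Return the MAC recorded for `ip` in `arp -a` output, else None."""
    for addr, mac in MAC_RE.findall(arp_output):
        if addr == ip:
            return mac.lower()
    return None

# In practice you'd feed it the live table, e.g. the output of
# subprocess.check_output(["arp", "-a"], text=True); a canned sample here:
sample = "? (192.168.1.50) at AA:BB:CC:DD:EE:FF [ether] on eth0\n"
print(mac_for_ip(sample, "192.168.1.50"))  # aa:bb:cc:dd:ee:ff
```

With the MAC in hand, you can then search the switch forwarding tables for it.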









  • When I was talking about memory, I was more thinking about how it is accessed. For example, exactly which actions are atomic and which are not on a given architecture; these details can cause unexpected interactions during multi-core work, depending on byte alignment for example. Also, considering how to make the most of your CPU cache. These kinds of things.
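To make the byte-alignment point concrete, here’s a small illustration using Python’s struct module: the same two fields cost different amounts of space depending on whether the platform’s natural alignment padding is applied (the sizes shown assume a typical 64-bit platform with 4-byte ints):

```python
import struct

# One char followed by one int:
# "=" means standard sizes with no alignment padding.
packed = struct.calcsize("=ci")   # 1 + 4 = 5 bytes
# "@" (the default) means native alignment: the int must start on a
# 4-byte boundary, so 3 padding bytes are inserted after the char.
aligned = struct.calcsize("@ci")  # 1 + 3 (pad) + 4 = 8 bytes

print(packed, aligned)  # 5 8 on a typical 64-bit platform
```

The padding exists precisely because of the hardware behaviour mentioned above: a misaligned field can straddle cache lines, which is one way a load or store that you assumed was atomic stops being atomic.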


  • I’d agree that there’s a lot more abstraction involved today. But my main point isn’t that people should know everything; it’s that a basic understanding of how even a simple microcontroller works would be helpful.

    Where I work, people often come to me with weird problems, and the way I solve them is usually based on a low-level understanding of what’s really happening when the code runs.


  • I’ve always found this weird. I think that to be a good software developer, it helps to know what’s happening under the hood when you take an action. It certainly helps when you want to optimize memory access for speed, for example.

    I genuinely know both sides of the coin. But I do know that the majority of my fellow developers at work most certainly have no clue how computers work under the hood, or about networking, for example.

    I find it weird because being good at software development requires an understanding of the underlying systems. By that I don’t mean following what computer science methodology tells you; I mean having an idea of the best way to translate an idea into a logical solution that can be applied in any programming language, and, most importantly, knowing how to optimize your solution, for example in terms of memory access. And if you write software that sends or receives network packets, it certainly helps to understand how that works, at least enough to consider the best protocols to use.

    But, it is definitely true.
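As a tiny illustration of the protocol-choice point: UDP gives you discrete, connectionless datagrams (a fit for lossy, latency-sensitive traffic), while TCP gives you a reliable byte stream. A loopback sketch of the UDP side (the names and the use of an OS-chosen port are my own choices for the example):

```python
import socket

# A UDP "server" and "client" on loopback. Each sendto() is a discrete
# datagram: message boundaries are preserved, but delivery and ordering
# are not guaranteed on a real network (loopback hides that).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

data, peer = server.recvfrom(1024)
print(data)  # b'ping'

client.close()
server.close()
```

Knowing that each datagram can be lost or reordered, and that there is no connection setup cost, is exactly the kind of under-the-hood knowledge that drives the choice between UDP and TCP for a given application.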


  • The problem with wifi is that things go downhill quickly once you have too many stations online. Even if they’re not actively browsing, the normal amount of chatter on a network will often slow things right down. It would need to be split into smaller wifi networks linked somehow, and that means someone needs to be in a central location that is easily traced.

    In theory, I guess someone with a very fast connection could run a layer 2 VPN. Then you could all run a routing protocol over that network, which is accessed over the internet.

    Lots of ways to do it, really. Wifi alone is probably the worst, though.



  • I mean, you could have an open wifi mesh and/or a network of cheap fibre/ethernet with open switches. Then, using OSPF or a similar routing protocol that supports routing over LAN networks, you could handle the routing between all the remote networks.

    I think you’d need to break the network up at some points to break down the broadcast domains. You could do something similar to defederating by not accepting certain routes, or routes from certain OSPF nodes.

    LANs become a problem when they get too big without being split into new LANs (to limit broadcast domains), and even the most modern wifi definitely becomes problematic with a large number of active stations online (wifi is half duplex in operation). So multiple channels, with a backbone of either point-to-point radio links or cable to connect wifi zones on alternating channels, would improve things somewhat.

    Not sure why you’d want to do something like this. But the tech to do it is fairly inexpensive.
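On breaking up broadcast domains: Python’s stdlib ipaddress module makes the subnetting arithmetic easy. The 10.0.0.0/16 range below is just an illustration, not anything from the comments above:

```python
import ipaddress

# A hypothetical large LAN: one /16 is 65,536 addresses in a single
# broadcast domain. Splitting it into /24s yields 256 smaller LANs,
# each its own broadcast domain, to be stitched back together by a
# routing protocol such as OSPF.
lan = ipaddress.ip_network("10.0.0.0/16")
segments = list(lan.subnets(new_prefix=24))

print(len(segments))                  # 256
print(segments[0])                    # 10.0.0.0/24
print(segments[0].broadcast_address)  # 10.0.0.255
```

Each /24 then contains broadcast traffic to 254 hosts instead of 65,534, which is the whole point of the split.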


  • I’ve used IPv6 at home for over 20 years now, initially via tunnels from Hurricane Electric and SixXS. But around 10 years ago my ISP enabled IPv6, and I’ve had it running alongside IPv4 since then.

    As soon as server providers offered IPv6, I’ve operated it there too (including DNS servers, serving the domains over IPv6).

    I run three NTP servers (one is stratum 1) in ntppool.org, and all three are also on IPv6.

    I don’t know what’s going on elsewhere in the world where they’re apparently making it very hard to gain access to IPv6.
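For anyone checking their own setup: a quick way to see whether the local stack will hand back IPv6 addresses is getaddrinfo with the family pinned to AF_INET6. The IPv6 loopback address is used here so the check works offline:

```python
import socket

# Ask for IPv6 results only; "::1" is the IPv6 loopback, so this
# succeeds on any host with an IPv6-capable stack, no network needed.
# (Port 123 is just NTP, as a nod to the pool servers above.)
results = socket.getaddrinfo("::1", 123, socket.AF_INET6, socket.SOCK_DGRAM)
family, socktype, proto, canonname, sockaddr = results[0]
print(sockaddr[0])  # ::1

# Swap "::1" for a real hostname to test DNS AAAA resolution instead;
# a socket.gaierror there usually means no IPv6 records or no IPv6 path.
```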