Yes, chip and pin has been the established norm for decades now. Wait until we tell them about the last time most of us wrote a cheque (check)!
I’m the administrator of kbin.life, a general purpose/tech orientated kbin instance.
Hmm. That would mean it’s likely one of the following (well, perhaps more options, but these spring to mind):
I think you suggested in another comment that it’s not in your DHCP client list but has an IP in your normal range, which suggests it is set up with a static IP. That is odd.
Some other people suggested it could be a container that is using a real IP rather than the NAT that Docker etc. usually uses. I do know that you can use real IPs in containers; I’ve done it on my NAS to get a “proper” Linux install on top of the lite Linux the NAS provides. But I would have expected that you’d know about that, since it would require someone to actually choose the IP address to use.
If you have managed switches you could find which port on which switch the MAC address (as found by looking up the ARP record for the IP using arp -a) is on (provided the switch allows access to the forwarding tables). Of course, if they’re on Wi-Fi it’s only going to lead to the access point they’re connecting to.
I don’t even think my current wifi kit has WPA (1) as an option. It’s WPA2 or 3 only I’m pretty sure.
So, as others have said, this is just an unconfigured IIS server, which implies it’s either a Windows machine or a Windows-based VM (or someone put the default IIS files on another server, but that’s unlikely).
When you say “weird” IP I’d wonder what you mean by that.
Since it’s probably a Windows machine, typing nbtstat -A <ip> from another Windows machine should give you the computer name and the workgroup or domain it belongs to. See if it matches anything you expect on your network.
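If you want to pull the interesting fields out of that output programmatically, here’s a minimal Python sketch. It assumes the typical nbtstat -A name table layout, where the <00> UNIQUE entry is the machine name and the <00> GROUP entry is the workgroup or domain; the sample text and names are made up for illustration.

```python
import re

def parse_nbtstat(output):
    """Extract (computer name, workgroup/domain) from nbtstat -A output.

    Assumes the usual table format: the <00> UNIQUE entry is the
    machine name, the <00> GROUP entry is the workgroup or domain.
    """
    name = workgroup = None
    for line in output.splitlines():
        m = re.match(r"\s*(\S+)\s+<00>\s+(UNIQUE|GROUP)", line)
        if m:
            if m.group(2) == "UNIQUE":
                name = m.group(1)
            else:
                workgroup = m.group(1)
    return name, workgroup

# Hypothetical sample of the name table section:
sample = """\
    MYPC           <00>  UNIQUE      Registered
    WORKGROUP      <00>  GROUP       Registered
"""
print(parse_nbtstat(sample))  # ('MYPC', 'WORKGROUP')
```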
If not, maybe it’s time to change your WPA wifi key.
You don’t need the router. If you’re on Windows or Linux, just ping the IP and then enter ‘arp -a <ip>’; it will show the MAC address for the IP from your machine’s ARP cache.
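If you end up doing this for a bunch of addresses, you can scrape the MAC out of the arp -a text with a couple of lines of Python. This is just a sketch against the common Linux/macOS output style (“? (192.168.1.50) at aa:bb:cc:dd:ee:ff [ether] on eth0”); the addresses here are made up, and Windows formats its output differently.

```python
import re

def mac_from_arp_output(arp_output, ip):
    """Return the MAC address for the given IP from `arp -a` text,
    or None if the IP isn't in the output.

    Targets the Linux/macOS format where the IP appears in
    parentheses followed by the MAC; accepts ':' or '-' separators.
    """
    pattern = re.compile(
        re.escape("(" + ip + ")")
        + r".*?((?:[0-9a-f]{1,2}[:-]){5}[0-9a-f]{1,2})",
        re.IGNORECASE,
    )
    m = pattern.search(arp_output)
    return m.group(1) if m else None

sample = "? (192.168.1.50) at aa:bb:cc:dd:ee:ff [ether] on eth0"
print(mac_from_arp_output(sample, "192.168.1.50"))  # aa:bb:cc:dd:ee:ff
```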
Packets are lost all the time. Especially when uploading or downloading.
Anyone running a webserver and looking at their logs will know AI is being trained on EVERYTHING. There are so many crawlers for AI that are literally ripping the internet wholesale. Reddit just got in on charging the AI companies for access to freely contributed content. For everyone else, they’re just outright stealing it.
It’s specifically implied? :P
“it goes up to eleven”
When I was talking about memory, I was more thinking about how it is accessed. For example, exactly which actions are atomic and which are not on a given architecture; these can cause unexpected interactions during multi-core work, depending on byte alignment for example. Also considering how to make the most of your CPU cache. These kinds of things.
I’d agree that there’s a lot more abstraction involved today. But my main point isn’t that people should know everything. Rather, a basic understanding of how even a simple microcontroller works would be helpful.
Where I work, people often come to me with weird problems, and the way I solve them is usually based in low level understanding of what’s really happening when the code runs.
I’ve always found this weird. I think to be a good software developer it helps to know what’s happening under the hood when you take an action. It certainly helps when you want to optimize memory access for speed etc.
I genuinely do know both sides of the coin. But I do know that the majority of my fellow developers at work most certainly have no clue about how computers work under the hood, or networking for example.
I find it weird because being good at software development requires an understanding of the underlying systems. By that I don’t mean following what the computer science methodology tells you; I mean having an idea of the best way to translate an idea into a logical solution that can be applied in any programming language, and most importantly how to optimize your solution, for example in terms of memory access. Likewise, if you write software that sends or receives network packets, it certainly helps to understand how that works, at least to consider the best protocols to use.
But, it is definitely true.
The problem with wifi is that things will go downhill quickly once you have too many stations online. Even if they’re not actively browsing, the normal amount of chatter that a network has will often just slow things right down. It would need to be split into smaller wifi networks linked somehow and that means someone needs to be in a central location that is easily traced.
In theory I guess someone with a very fast connection could run a layer 2 VPN. Then you could all run a routing protocol over that network which is accessed over the internet.
Lots of ways to do it really. Wifi alone is probably the worst though.
In fact, forget the internet!
I mean you could have an open wifi mesh and/or a network of either cheap fibre/ethernet with open switches. Then, using OSPF or a similar routing protocol that supports routing over LAN networks, you could handle the routing between all the remote networks.
I think you’d need to break the network up at some points to break down the broadcast domains. You could do a similar thing to defederating, by not accepting certain routes, or routes from certain OSPF nodes.
There are issues with LANs that get too big without being split into new LANs (limiting broadcast domains), and even the most modern wifi definitely becomes problematic with a number of active stations online (wifi is half duplex in operation). So multiple channels, and some backbone over either point-to-point radio links or cable to connect wifi zones on alternate channels, would improve things somewhat.
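For a concrete flavour of what a mesh node’s routing setup might look like, here’s a minimal sketch of an FRRouting ospfd fragment. The router-id, interface subnets, and the 10.42.0.0/16 addressing plan are all made up for illustration; it just announces the node’s local LAN and a point-to-point backbone link into OSPF area 0.

```
! Hypothetical ospfd config for one mesh node (FRRouting).
! Addressing plan (10.42.0.0/16) is an assumption, not from the post.
router ospf
 ospf router-id 10.42.1.1
 ! the node's local LAN
 network 10.42.1.0/24 area 0
 ! a point-to-point backbone link to a neighbouring zone
 network 10.42.12.0/30 area 0
```

The “defederation” idea from the previous comment would then amount to filtering which routes or neighbours a node is willing to accept.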
Not sure why you’d want to do something like this. But the tech to do it is fairly inexpensive.
I’ve used IPv6 at home for over 20 years now. Initially via tunnels by hurricane electric and sixxs. But, around 10 years ago, my ISP enabled IPv6 and I’ve had it running alongside IPv4 since then.
As soon as server providers offered IPv6 I’ve operated it (including DNS servers, serving the domains over IPv6).
I run 3 NTP servers (one is stratum 1) in ntppool.org, and all three are also on IPv6.
I don’t know what’s going on elsewhere in the world where they’re apparently making it very hard to gain access to IPv6.
The subject of how humans might perceive four-dimensional space is covered in a later book of the Three-Body novels (the Remembrance of Earth’s Past series). The author describes it as being able to see into sealed three-dimensional objects as if they had an open top. As such you could easily traverse into sealed rooms etc. from such a perspective.
I thought it was quite an interesting idea.
Pics AND/OR it didn’t happen.
I have auto redirect to 443. But --nginx works fine. I think it overrides stuff for whatever specific URL is used.
I think cheques were almost irrelevant in the 90s already. For sure I remember the two things I had to write a cheque for: DVLA for the olde tax disc every year, and the speeding fine I got in 1996.
Until around 2005, I was using my 1994-issued chequebook, crossing out the 19, initialling the change and writing in the full year.
I still have the chequebook issued in 2005 to this day.