The source and destination addresses don't change. If a bomb takes out a router in-between (the military scenario DARPA had in mind), it is NOT IP (L3) or TCP (L4) that handles it. Rather it is a dynamic routing protocol that informs all affected routers of the changed route. Since the early days of the Internet, that's been the job of routing protocols.
For smaller internets, protocols such as RIP (limited to 15 hops; a metric of 16 means "unreachable") broadcast routing information from each still-working router to its neighbors. Each router built up a picture of the internet (simplifying a bit here: RIP and similar protocols used "distance vector" routing, while more advanced link-state protocols gave each router a complete picture of the internet). So when a packet arrived at a router, that router could forward the packet toward its destination. Such protocols are "interior" routing protocols, used within an ISP's network.
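The distance-vector idea can be sketched in a few lines. This is a simplified illustration, not RIP itself (no timers, no split horizon), and the router names are made up:

```python
# Minimal distance-vector sketch: each router keeps its best-known hop
# count to every destination and merges in a neighbor's advertised table.
INFINITY = 16  # RIP treats a metric of 16 as "unreachable"

def update_routes(my_table, neighbor, neighbor_table):
    """Merge a neighbor's advertised distances into my routing table.

    my_table maps destination -> (hop count, next hop).
    neighbor_table maps destination -> the neighbor's own hop count.
    """
    changed = False
    for dest, dist in neighbor_table.items():
        candidate = min(dist + 1, INFINITY)  # one extra hop via the neighbor
        best = my_table.get(dest, (INFINITY, None))[0]
        if candidate < best:
            my_table[dest] = (candidate, neighbor)
            changed = True
    return changed

# Router A hears router B's table; B is 1 hop from C and 2 hops from D,
# so A records routes to C and D via B, each one hop longer.
table_a = {"B": (1, "B")}
update_routes(table_a, "B", {"C": 1, "D": 2})
```

Real routers repeat this exchange periodically until the tables stop changing, which is how the network "heals" around a dead router.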
The Internet is too big for such automatic routing and uses an "exterior" routing protocol called BGP. This protocol routes packets from one ISP to the next, using route and connectivity information input by humans. (Again I'm simplifying a bit.)
Wifi uses entirely different protocols to route packets between cells.
Fun fact: wifi is not an acronym for anything, the inventors simply liked how it sounded.
It was made to sound like "Hi-Fi" (which stands for high fidelity) crossed with "wireless", but "wireless fidelity" is a meaningless phrase and was never the intended expansion.
This. Except worse: during busy days you had to stand on line for an hour or more for a turn on the machines. I believe the skill of debugging by mentally stepping through a program's execution came from such long turnaround times, a useful skill many younger programmers lack.
Yep, I really hate characterisations that imply people are weaker or worse because they lack a contextually relevant skill.
I spent about 6 months teaching myself how to tie a set of useful knots, and the reality is by now I can't do most of them anymore, because day to day it turns out I just never need to tie a midshipman's hitch (it's super useful when the situation arises... which is rarely for an IT worker).
The computer can single-step through the program far more accurately than you can. You can inspect the full state of the CPU and memory at any moment of execution. The debugger can tell you the real, exact value of a variable at runtime.
There is simply no reason to try doing this in your head. You're worse at it than the debugger is. And I say this as someone who does have the skill. It's just not necessary.
> They don't? It is taught in schools in the early elementary level. I see no indication that most are failing.
Programming in elementary schools typically involves moving a turtle around on the screen. (My mother taught 4th grade in New York for many years, and I believe her when she explained the computer instruction.)
Economically valuable programming is much more complex than what is taught in most schools through freshman-level college courses. (I taught programming at the college level from 1980 till I retired in 2020.)
Because economically valuable programming has to consider what to program, not simply follow a teacher's instructions about exactly where and how to move a turtle on the screen. And nobody disputes that "what to program" is hard: it was explicitly asserted in the very first comment on this topic and has carried through the comments that followed.
Your post reminded me of when Yahoo IM updated their chat protocol to an incompatible version with a gradual rollout! Half their eight servers used v1 and half v2. A v1-only client would connect only half the time, depending on which server the round-robin DNS sent you to. This took me forever to figure out, but the fix was to put the IP addresses of the four v1 servers in the hosts file. (Until the client updated its support for v2.)
> And, especially what most people call big-endian, which is a bastardized mixed-endian mess of most significant byte is zero, while least significant bit is likewise zero.
In the 1980s at AT&T Bell Labs, I had to program 3B20 computers to process the phone network's data. 3B20s used the weird byte order 1324 (or maybe it was 2413), and I had to tweak the network protocols to start packets with a BOM (byte order mark), as the various switches that sent data didn't define endianness, then swap bytes accordingly.
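The BOM trick works the same way Unicode's 0xFEFF mark does: the sender prefixes each packet with a known 16-bit value, and the receiver checks how that value arrived to decide whether to swap. A minimal sketch, assuming simple 16-bit fields (the mark value and layout here are illustrative, not the 3B20's actual format):

```python
import struct

BOM = 0xFEFF  # sender always writes this mark in its own native order

def decode_packet(data):
    """Return the packet's 16-bit payload fields in a consistent order."""
    (mark,) = struct.unpack(">H", data[:2])  # read the mark as if big-endian
    fmt = ">" if mark == BOM else "<"        # seeing 0xFFFE means the sender
                                             # used the opposite byte order
    n = (len(data) - 2) // 2
    return struct.unpack(fmt + "%dH" % n, data[2:2 + 2 * n])

# A big-endian sender and a little-endian sender yield identical fields:
big = struct.pack(">3H", BOM, 0x1234, 0x5678)
little = struct.pack("<3H", BOM, 0x1234, 0x5678)
```

Both packets decode to the same (0x1234, 0x5678) pair, regardless of which order the sender used.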
While I have no personal experience with the 3B2 series, its documentation[1] clearly illustrates the GP's complaint: starting from the most significant binary digit, bit numbers decrease while byte addresses increase.
As for networking, Ethernet is particularly fun: least significant bit first, most significant byte first for multi-byte fields, with a 32-bit CRC calculated for a frame of length k by treating bit n of the frame as the coefficient of the (k - 1 - n)th order term of a (k - 1)th order polynomial, and sending the coefficients of the resulting 31st order polynomial highest-order coefficient first.
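Because Ethernet sends bits LSB-first, the CRC is usually computed in its "reflected" form, where the shift register shifts right instead of left. A minimal software rendering of that bit-at-a-time computation, checked against zlib's table-driven implementation:

```python
import zlib

def crc32_bitwise(data):
    """Reflected CRC-32 as Ethernet uses it, one bit at a time."""
    crc = 0xFFFFFFFF                           # preset register to all ones
    for byte in data:
        crc ^= byte                            # bytes enter LSB-first
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320  # reflected Ethernet polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF                    # final complement

# 0xCBF43926 is the standard CRC-32 check value for "123456789".
frame = b"123456789"
assert crc32_bitwise(frame) == zlib.crc32(frame) == 0xCBF43926
```

The preset-to-ones and final complement steps are part of the Ethernet spec; they make the CRC sensitive to leading zero bytes, which a plain polynomial division would silently ignore.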
I was in charge of the firmware for a modem. I had written the V.42 error correction, and we contracted out the addition of the MNP correction protocol. They used the same CRC.
The Indian (only important because of their cultural emphasis on book learning) subcontractor found my CRC function, decided it didn't quite look like the academic version they were expecting, and added code to swap it around and use it for MNP, thus making it wrong.
When I pointed out it was wrong, they claimed they had tested it. By having one of our modems talk to another one of our modems. Sheesh.
This is an excellent lesson for data transport protocols and file formats.
> I had to tweak the network protocols to start packets with a BOM (byte order mark) (as the various switches that sent data didn't define endianess), then swap bytes accordingly.
(A similar thing happened to me with the Python switch from 2 to 3. Strings all became Unicode, and it's too difficult to add the b sigil in front of every string literal in a large codebase, so I simply ensured that, at the very few places where data was transported to or from files, all the strings were properly converted to what the internal code expected.)
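That "convert only at the boundary" pattern can be sketched in a couple of helpers; the function names here are made up for illustration, and the point is just that bytes/str conversion happens exactly where data enters or leaves the process:

```python
def read_text(path, encoding="utf-8"):
    """Boundary in: file bytes -> internal str."""
    with open(path, "rb") as f:
        return f.read().decode(encoding)

def write_text(path, text, encoding="utf-8"):
    """Boundary out: internal str -> file bytes."""
    with open(path, "wb") as f:
        f.write(text.encode(encoding))
```

Everything between those two calls can then assume str and never touch bytes, which is what keeps the rest of the codebase untouched.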
But, as many other commenters have rightly noted, big-endian CPUs are going the way of CPUs with 18-bit words that use ones-complement arithmetic, so unless you have a real need to run your program on a dinosaur, you can safely forget about CPU endianness issues.
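One caveat worth keeping even in an all-little-endian world: file and wire formats still pin down a byte order explicitly, independent of the host CPU. Python's struct format prefixes make this concrete:

```python
import struct

value = 0x12345678

# ">" forces big-endian (network order), "<" forces little-endian,
# so the bytes produced are the same on any host CPU.
assert struct.pack(">I", value) == b"\x12\x34\x56\x78"
assert struct.pack("<I", value) == b"\x78\x56\x34\x12"

# Round-tripping with the same prefix recovers the value either way.
assert struct.unpack(">I", b"\x12\x34\x56\x78")[0] == value
```

So the endianness you can forget is the CPU's, not the format's: a PNG length field is big-endian and a ZIP header is little-endian no matter what machine reads them.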
My fantasy: After the salesman says (for the 4th time), "Sorry, the manager won't approve that price, but if you could add X hundred dollars, I'm sure I can convince them!", I wait until they are through high-fiving each other and then tell the salesman "Sorry, my trust manager didn't approve that price. I'm sure I can convince him if you lower the price by X hundred dollars".
My reality: I use my bank's car-buying service and pay the bank's negotiated price.
Age verification is simple! A Kevlar strap must be attached to the user before the device will power up. A probe in the strap takes a drop of blood from the user and analyzes the protein markers to determine the user's age. (See the Stanford U. study for details.)
Surprised G. Orwell or A. Huxley didn't think of it first.
Not enough, the user may get someone else to wear the strap for them. The only solution is Neuralink(tm) with a built in secure element and DRM to ensure that content is delivered directly from the source to the age verified user’s brain, without any so called “analog hole” through which minors or non-paying users could view the content.