There seems to be an impression abroad that PPP is slower than SLIP. Several times, someone has commented that they're considering both SLIP and PPP for an application where either might be appropriate, but that they've heard this performance story from ``someone in the group.'' When asked in what way PPP might be slower than SLIP, they usually mention per-packet protocol overhead, but sometimes other issues.
Here are the usual responses to the most common objections. Remember that most people considering both SLIP and PPP are concerned about asynchronous dial-up IP between specific pairs of hosts that can easily run either SLIP or PPP, and don't care about connecting different manufacturers' routers with T1, or carrying any other protocol family besides IP.
Character escaping is an issue of the underlying transport, not of the protocol itself. In fact, this is an advantage in flexibility for PPP: the capability to escape almost any character is there if it's needed.
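To make the point concrete, here is a minimal sketch of the octet-stuffing that the async PPP framing calls for; the function name and interface are illustrative only, not taken from any particular implementation.

    #include <stdint.h>
    #include <stddef.h>

    #define PPP_FLAG 0x7e   /* frame delimiter */
    #define PPP_ESC  0x7d   /* escape octet    */

    /* Octet-stuff one frame's worth of data for an async PPP link.
     * 'accm' is the async-control-character map: a bit set for a
     * control character 0x00..0x1f means that character must be
     * escaped on this link.  LCP can negotiate the map all the way
     * down to zero, so how much escaping actually happens depends
     * on the transport underneath, not on PPP itself.
     * 'out' must have room for 2*len octets in the worst case.
     */
    size_t ppp_stuff(const uint8_t *in, size_t len,
                     uint8_t *out, uint32_t accm)
    {
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            uint8_t c = in[i];
            if (c == PPP_FLAG || c == PPP_ESC ||
                (c < 0x20 && (accm & (1u << c)))) {
                out[o++] = PPP_ESC;
                out[o++] = c ^ 0x20;   /* escaped octets are XORed with 0x20 */
            } else {
                out[o++] = c;
            }
        }
        return o;
    }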
In our experience (running dial-up IP protocols between UNIX workstations), once the need to bring up the line occurs, the modems take 24 seconds to dial, connect a local call, train with V.32/V.42bis, and exchange the login/password handshake. After the login sequence is complete, the PPP process on the ``other end'' is started (so far, the requirements are the same whether with SLIP or PPP), and the PPPs typically negotiate their way up to readiness to pass IP packets within three seconds. Since the telephone company and the modems absorb eight times as much time as the protocol, even on a local call, the extra time spent negotiating options is negligible.
Besides, PPP can be configured to insist upon any of the values rather than negotiate them, or to assign them on the fly, so the two ends come to agreement even more quickly.
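As one illustration (the option names here are those of the freely available pppd implementation and may differ in other implementations or versions), an options file that simply pins the values ahead of time looks something like this:

    # Illustrative pppd options, so that negotiation converges quickly
    asyncmap 0          # propose escaping no control characters
    mtu 1500            # fix our notion of the MTU up front
    mru 1500            # and the MRU we will ask the peer to use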
If a PPP implementation uses address/control field and protocol field compression, a typical PPP packet carries only three octets more baggage than a typical SLIP packet. Two of those three octets carry the FCS error check. It is entirely an individual value judgement whether it matters that data is being delivered correctly at the link level, or whether that sort of worrying should be left to the higher-level protocols in the stack. The author has had a file corrupted inconveniently by non-UDP-checksummed NFS over SLIP, and he doesn't want it to happen again. He's willing to pay two octets per frame, even at 2400 baud.
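Those two octets buy the standard HDLC 16-bit frame check sequence. A bit-at-a-time sketch, equivalent in effect to the table-driven routine given in the PPP spec's appendix (the identifiers below are only suggestive), shows how little work is involved:

    #include <stdint.h>
    #include <stddef.h>

    #define PPP_INIT_FCS 0xffff   /* initial FCS value                        */
    #define PPP_GOOD_FCS 0xf0b8   /* what a correctly received frame yields   */

    /* 16-bit FCS (the reflected CCITT CRC, polynomial 0x8408),
     * computed octet by octet.  A table-driven version is faster;
     * this form just shows the whole algorithm.
     */
    uint16_t ppp_fcs16(uint16_t fcs, const uint8_t *cp, size_t len)
    {
        while (len-- > 0) {
            fcs ^= *cp++;
            for (int bit = 0; bit < 8; bit++)
                fcs = (fcs & 1) ? (fcs >> 1) ^ 0x8408 : (fcs >> 1);
        }
        return fcs;
    }

    /* Sender: compute the FCS over the frame starting from PPP_INIT_FCS,
     * complement it, and append it low octet first.  Receiver: the FCS
     * computed over the frame including those two octets comes out to
     * PPP_GOOD_FCS exactly when the frame arrived intact.
     */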
Also, the typical MTU in SLIP implementations is smaller (256 or 576 or 1006, non-negotiable) than the typical MTU of a PPP link (1500, negotiable), so the protocol overhead octets can be amortized over a larger number of data octets. Of course, if a fast link is to be used only for bulk data transfer, PPP can be instructed to negotiate a truly moby MTU and push the data:overhead ratio even higher. The actual benefit to the user will range anywhere from negligible to negative, but it could be done.
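To put rough numbers on the comparison (assuming address/control and protocol field compression, and counting one flag octet per frame): PPP's four octets of per-frame baggage on a 1500-octet frame is about 0.3% of the bytes on the wire, while SLIP's single END octet on a 256-octet frame is already about 0.4%.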
Cycles are cheap. Memory is cheap. A good PPP implementation can run in user space on a slightly aging UNIX workstation (a Sun SPARCstation-1) at its full async speed, doing all the frame assembly and disassembly and FCS calculation and everything else, while using only a few hundred kilobytes of memory and a few percent of the host's CPU. Even running at T1 speeds, the daemon will use only a few percent of the CPU capacity (the tunnel driver will be busy, though). It can even be performing other gateway-oriented tasks like packet filtering and still provide good performance with negligible system overhead, but that's irrelevant to this point. If you're using an under-powered machine, you have other problems besides your communications overhead.
As in the argument above, the peace of mind is worth the small cost in CPU and memory to calculate the FCS.
SLIP is indeed much easier to implement than PPP. But there now exist several freely available PPP implementations that may be used as the starting point for either free or commercial implementations. The engineering effort invested in PPP for IP now will pay off in the future when it will be much easier to plug in support for other protocol families. There's no reason to invest any more engineering effort in SLIP, except maybe for backwards compatibility.
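For comparison, essentially the whole of SLIP's framing is the loop below, adapted from the C routine printed in RFC 1055; send_char() stands in for whatever routine pushes an octet out the serial port.

    #define END     0xc0   /* frame delimiter             */
    #define ESC     0xdb   /* escape character            */
    #define ESC_END 0xdc   /* END is sent as ESC, ESC_END */
    #define ESC_ESC 0xdd   /* ESC is sent as ESC, ESC_ESC */

    extern void send_char(unsigned char c);   /* serial output routine */

    /* Send one IP datagram over a SLIP line.  This loop is very
     * nearly the entire protocol.
     */
    void slip_send_packet(const unsigned char *p, int len)
    {
        send_char(END);                 /* flush any pending line noise */
        while (len-- > 0) {
            switch (*p) {
            case END:  send_char(ESC); send_char(ESC_END); break;
            case ESC:  send_char(ESC); send_char(ESC_ESC); break;
            default:   send_char(*p);  break;
            }
            p++;
        }
        send_char(END);                 /* mark the end of the datagram */
    }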
From the user's point of view, PPP provides the flexibility to more easily solve a wider range of configuration problems. And you may find out that you need Magic Numbers (loop detection), CHAP (authentication) or LQM (link quality monitoring) someday, even if you don't think so now.
The SLIP spec in RFC 1055 [Romkey 1988] itself describes SLIP as a non-standard, and lists several deficiencies. The PPP spec in RFC 1331 [Simpson 1992] is the product of the Internet Engineering Task Force's Point-to-Point Protocol Working Group, and its introduction describes how it addresses each of those deficiencies. PPP is under active development and is experiencing directed evolution at the hands of a varied group of talented engineers; SLIP has stagnated for several years, having exhausted its potential. SLIP is John the Baptist, self-described as only pointing the way to far greater things to come. SLIP isn't fit to untie PPP's sandals.
SLIP is too simple to be expanded to match the capabilities of PPP; by the time you'd done that, you'd basically be running PPP anyway, so why not just start afresh?