
Performance Issues

There seems to be an impression abroad that PPP is slower than SLIP. Several times, someone has commented that they're considering both SLIP and PPP for a given application where either might be appropriate, but they've heard such a performance comparison story from ``someone in the group.'' When asked in what way PPP might be slower than SLIP, they usually mention per-packet protocol overhead, but sometimes other issues.

Here are the usual responses to the most common objections. Remember that most people considering both SLIP and PPP are concerned about asynchronous dial-up IP between specific pairs of hosts that can easily run either SLIP or PPP, and don't care about connecting different manufacturers' routers with T1, or carrying any other protocol family besides IP.

  1. RFC 1144 ``VJ'' TCP header compression [Jacobson 1990] can be implemented over both (when SLIP is run with VJ, it's often called CSLIP), so that's not an issue for comparison. Just be sure your implementation does it.

  2. Both PPP and SLIP have mechanisms for escaping characters in the data part of the frame that might be interpreted as part of the framing itself, so that's a wash too.

  3. Unlike SLIP, PPP can operate even in the presence of in-band XON/XOFF (``software'') flow control. Therefore, PPP may need to escape those characters in the data, thus requiring more octets to traverse the wire.

    This is an issue of the underlying transport, not of the protocol itself. In fact, this is an advantage in flexibility for PPP: the capability to escape almost any character is there if needed.
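    To make the escaping comparison concrete, here is a minimal sketch of both schemes. The SLIP octet values are the standard ones from RFC 1055; the PPP side assumes the default async control character map (all octets below 0x20), which a real link would negotiate down via LCP:

```python
# SLIP framing characters (RFC 1055)
SLIP_END, SLIP_ESC = 0xC0, 0xDB
SLIP_ESC_END, SLIP_ESC_ESC = 0xDC, 0xDD

def slip_escape(data: bytes) -> bytes:
    """Escape only the two octets that collide with SLIP's framing."""
    out = bytearray()
    for b in data:
        if b == SLIP_END:
            out += bytes([SLIP_ESC, SLIP_ESC_END])
        elif b == SLIP_ESC:
            out += bytes([SLIP_ESC, SLIP_ESC_ESC])
        else:
            out.append(b)
    return bytes(out)

# PPP async framing characters
PPP_FLAG, PPP_ESC = 0x7E, 0x7D

def ppp_escape(data: bytes, accm=frozenset(range(0x20))) -> bytes:
    """Escape the flag/escape octets plus everything in the async control
    character map (default: all control octets, including XON/XOFF) as
    0x7D followed by the octet XORed with 0x20."""
    out = bytearray()
    for b in data:
        if b in (PPP_FLAG, PPP_ESC) or b in accm:
            out += bytes([PPP_ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)
```

    Note that PPP's escaping cost is tunable: once LCP negotiates the control character map down to the empty set on a clean 8-bit path, only the flag and escape octets themselves need escaping, which is the same order of overhead as SLIP.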

  4. Since it has no option negotiation phase during link startup, SLIP can get under way more quickly, and therefore makes better use of precious and expensive telephone time. This is perhaps a valid critique -- if both ends know a priori that they want to exchange certain protocol families and use certain addresses and certain types of compression, why not hot-wire the values and just do it?

    In our experience (running dial-up IP protocols between UNIX workstations), after the need to bring up the line occurs, the modems take 24 seconds to dial, connect a local call, train with V.32/V.42bis, and exchange the login/password handshake. After the login sequence is complete, the PPP process on the ``other end'' is started (so far, the requirements are the same whether with SLIP or PPP), and the PPPs typically negotiate their way up to readiness to pass IP packets within three seconds. Since the telephone company and the modems absorb eight times as much time as the protocol, even on a local call, the extra time spent negotiating options is negligible.

    Besides, PPP can be configured to insist upon any of the values and not negotiate them, or assign them on the fly. The ends can come more quickly to agreement.

  5. SLIP has a much simpler framing scheme, when compared to PPP's kitchen sink approach, so its per-packet overhead is much lower, which means that the percentage of the link bandwidth devoted to paying cargo is much higher.

    If a PPP implementation negotiates address/control field and protocol field compression, a typical PPP packet carries only three octets more baggage than a typical SLIP packet. Two of those three octets are used for FCS error calculations. It is entirely an individual value judgement whether it matters that data is being delivered correctly at the link level, or whether that sort of worrying should be left to the higher-level protocols in the stack. The author has had a file corrupted inconveniently by non-UDP-checksummed NFS over SLIP, and he doesn't want it to happen again. He's willing to pay two octets per frame, even at 2400 baud.
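    Those two FCS octets are a standard 16-bit HDLC-style CRC (the x^16 + x^12 + x^5 + 1 polynomial). A bit-at-a-time sketch follows; a production implementation would use a 256-entry lookup table instead of the inner loop:

```python
def ppp_fcs16(data: bytes) -> int:
    """16-bit PPP FCS: reflected CRC over the frame contents,
    initial value 0xFFFF, final one's complement."""
    fcs = 0xFFFF
    for octet in data:
        fcs ^= octet
        for _ in range(8):
            if fcs & 1:
                fcs = (fcs >> 1) ^ 0x8408  # 0x8408 = bit-reversed polynomial
            else:
                fcs >>= 1
    return fcs ^ 0xFFFF
```

    The receiver recomputes the FCS over the received frame contents and compares it with the two transmitted FCS octets; a mismatch means the frame is silently dropped rather than handed up the stack corrupted.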

    Also, the typical MTU in SLIP implementations is smaller (256, 576, or 1006, non-negotiable) than the typical MTU of a PPP link (1500, negotiable), so the protocol overhead octets can be amortized over a larger number of data octets. Of course, if a fast link is to be used only for bulk data transfer, PPP can be instructed to negotiate a truly moby MTU and push the data:overhead ratio even higher. The user will realize actual benefits anywhere from negligible to negative, but it could be done.
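    Back-of-the-envelope arithmetic shows the amortization effect. The four-octet per-frame figure below is an illustrative assumption (roughly PPP with address/control and protocol field compression), not a negotiated value:

```python
def overhead_fraction(mtu: int, per_frame_overhead: int) -> float:
    """Fraction of on-the-wire octets spent on framing for a full-size frame."""
    return per_frame_overhead / (mtu + per_frame_overhead)

# Overhead shrinks as the MTU grows: about 1.5% at 256, 0.27% at 1500.
for mtu in (256, 576, 1006, 1500):
    print(f"MTU {mtu:5}: {overhead_fraction(mtu, 4):.2%} framing overhead")
```

    At a 1500-octet MTU the entire framing disagreement between SLIP and PPP amounts to a fraction of a percent of the link bandwidth, which is why the per-packet overhead argument carries so little weight in practice.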

  6. Since SLIP's framing scheme is so simple, and since it specifies no finite state machine to implement and no error checking, it absorbs far less system resources than PPP.

    Cycles are cheap. Memory is cheap. A good PPP implementation can run in user space on a slightly aging UNIX workstation (a Sun SPARCstation-1) at its full async speed, doing all the frame assembly and disassembly and FCS and everything else, while using only a few hundred Kilobytes of memory and only a few percent of the host's CPU. Even running at T1 speeds, the daemon will only use a few percent of the CPU capacity (the tunnel driver will be busy, though). It can even be performing other gateway-oriented tasks like packet filtering and still provide good performance with negligible system overhead, but that's irrelevant to this point. If you're using an under-powered machine, you have other problems besides your communications overhead.

    As in the argument above, the peace of mind is worth the small cost in CPU and memory to calculate the FCS.

  7. Since SLIP is so minimalist in its approach to the problem, it is much quicker to implement and verify than PPP, which seems unnecessarily complex and general for most people's problems.

    SLIP is indeed much easier to implement than PPP. But there now exist several freely available PPP implementations that may be used as the starting point for either free or commercial implementations. The engineering effort invested in PPP for IP now will pay off in the future when it will be much easier to plug in support for other protocol families. There's no reason to invest any more engineering effort in SLIP, except maybe for backwards compatibility.

    From the user's point of view, PPP provides the flexibility to more easily solve a wider range of configuration problems. And you may find out that you need Magic Numbers (loop detection), CHAP (authentication) or LQM (link quality monitoring) someday, even if you don't think so now.

  8. But everyone does SLIP, and hardly anyone has PPP. It seems as if SLIP is the standard way of doing things.

    The SLIP spec in RFC 1055 [Romkey 1988] itself describes SLIP as a non-standard, and lists several deficiencies. The PPP spec in RFC 1331 [Simpson 1992] is the product of the Internet Engineering Task Force's Point-to-Point Protocol Working Group, and its introduction describes how it addresses each of those deficiencies. PPP is under active development and is experiencing directed evolution at the hands of a varied group of talented engineers; SLIP has stagnated for several years, having exhausted its potential. SLIP is John the Baptist, self-described as only pointing the way to far greater things to come. SLIP isn't fit to untie PPP's sandals.

  9. If the links are already running SLIP, there is a significant effort required to change over to PPP. Both ends of each link must be able to run it. Why didn't they just design PPP to be upward compatible with SLIP, so we could phase it in gradually?

    SLIP is too simple to expand with the capabilities of PPP. By the time you've done that, you'd be basically running PPP anyway, so why not just start afresh?

http://WWW.MorningStar.Com/