(Source: arstechnica.com)

The rise and fall of FireWire—IEEE 1394, an interface standard boasting high-speed communications and isochronous real-time data transfer—is one of the most tragic tales in the history of computer technology. The standard was forged in the fires of collaboration. A joint effort from several competitors including Apple, IBM, and Sony, FireWire was a triumph of design for the greater good. It represented a unified standard across the whole industry, one serial bus to rule them all. Realized to the fullest, FireWire could replace SCSI and the unwieldy mess of ports and cables at the back of a desktop computer.

Yet FireWire’s principal creator, Apple, nearly killed it before it could appear in a single device. And eventually the Cupertino company effectively did kill FireWire, just as it seemed poised to dominate the industry.

The story of how FireWire came to market and ultimately fell out of favor serves today as a fine reminder that no technology, however promising, well-engineered, or well-liked, is immune to inter- and intra-company politics or to our reluctance to step outside our comfort zone.

The beginning

“It actually started in 1987,” Michael Johas Teener, the chief architect of FireWire, told Ars. He was then a system architect in National Semiconductor’s marketing department, there to impart technical knowledge upon the clueless sales and marketing staff. Around that time, talk started about a new generation of internal bus architectures. A bus is a kind of channel over which various types of data can flow between computer components; an internal bus connects expansion cards, such as scientific instruments or dedicated graphics processors.

The Institute of Electrical and Electronics Engineers (IEEE) quickly saw emerging efforts to build three incompatible new standards: VME, NuBus 2, and Futurebus. The organization looked upon the situation with disdain. Why not, it suggested, work together instead?

Teener was appointed chair of this new project to unify the industry around a single serial bus architecture. (“Serial” meaning that the bus transfers one bit at a time rather than multiple bits simultaneously; a parallel bus is faster at the same signal frequency, but it carries higher overhead and runs into efficiency problems as signal frequencies scale up.)

“Real quickly there were some people—including a guy named David James, who was with Hewlett-Packard architecture labs at the time—who were saying, ‘Yes, we want a serial bus, too,'” Teener said. “‘But we want it to go off the bus to connect to low-speed or modest-speed peripherals,’ like floppy disks and keyboards and mice and all kinds of other stuff like that.”

Enter Apple

Teener joined Apple in 1988. Shortly after he arrived, Apple began looking for a successor to the Apple Desktop Bus, ADB, which was used for very low-speed devices such as keyboards and mice. Apple wanted the next version to be able to carry audio signals. Teener had just the thing.

This early glimmer of FireWire was too slow for the company’s purposes, however. The earliest designs were for a speed of 12 megabits a second (1.5 MB/s); Apple wanted 50. The company feared it would have to go optical (read: expensive) to get there.

To enable this mixed use, Teener and James, who had also joined Apple, invented an isochronous transport method: transfers that happen at regular intervals, guaranteeing the timing of data arrival. Guaranteed timing meant the bus could handle high-bit-rate signals far more efficiently, and it locked down the throughput so there was no jitter in the latency; whatever millisecond delay a signal incurred in going through the interface to the computer, it would always be the same, no matter the circumstances. That made isochronous transport ideal for multimedia purposes like professional audio and video, which had previously required special hardware to transfer onto a computer for editing.
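The core idea is easy to model: bus time is sliced into short, fixed cycles (125 microseconds in 1394), isochronous channels get reserved slots at the same offset in every cycle, and best-effort asynchronous traffic fills whatever time is left. The Python sketch below is illustrative only; the names, the 80% isochronous budget, and the simplified first-come scheduling are assumptions standing in for the real 1394 arbitration protocol.

```python
CYCLE_US = 125                 # nominal 1394 bus cycle length, microseconds
ISO_BUDGET_US = 0.8 * CYCLE_US # share of each cycle reservable for isochronous use

def schedule_cycle(iso_channels, async_queue):
    """One bus cycle: reserved isochronous slots first, async fills the rest.

    iso_channels: list of (name, slot_us) with pre-negotiated bandwidth
    async_queue:  list of (name, duration_us) best-effort packets (consumed)
    Returns (start_time_us, name) pairs for everything sent this cycle.
    """
    assert sum(slot for _, slot in iso_channels) <= ISO_BUDGET_US
    timeline, t = [], 0
    for name, slot in iso_channels:
        # Same offset every cycle: this is the "no jitter" guarantee.
        timeline.append((t, name))
        t += slot
    while async_queue and t + async_queue[0][1] <= CYCLE_US:
        name, dur = async_queue.pop(0)  # best-effort; may wait for a later cycle
        timeline.append((t, name))
        t += dur
    return timeline
```

Run over successive cycles, the audio and video channels always land at the same instants, while bulk transfers slide in around them or wait.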

Apple assigned analog engineers Roger Van Brunt and Florin Oprescu to the group to design the physical layer—the wires and electrical signals that run on them—and to implement the technology in a faster interface. Van Brunt determined that they could avoid optics by using a twisted pair of wires. That would get them the extra speed without increasing the cost.

“About that time some guys from IBM, of all places, were looking for a replacement for SCSI,” Teener recalled. “And since we were using SCSI at the same time, we were thinking maybe we would use this as a replacement for that. We joined forces. But they wanted 100 megabits a second.”

To get the extra bandwidth, the team turned to a company called STMicroelectronics. These guys had a trick that would double the bandwidth on a cable at no cost thanks to a clocking mechanism (in layman’s terms, a way of coordinating the behavior of different elements in a circuit) called data-strobe encoding.
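Data-strobe encoding puts the data bits on one line and toggles a second "strobe" line whenever the data line does not change, so exactly one of the two lines transitions in every bit period. The receiver recovers the clock by simply XORing the two lines, with no separate clock line to keep in sync. The toy Python model below is my own illustration of the scheme; real 1394 hardware does this in the physical layer, not in software.

```python
def ds_encode(bits, d0=0, s0=0):
    """Encode a bit sequence onto (data, strobe) line levels.

    The data line carries the bits directly; the strobe line toggles
    whenever two consecutive bits are equal, so exactly one of the two
    lines changes state in every bit period.
    """
    data, strobe = [], []
    prev_d, s = d0, s0
    for b in bits:
        if b == prev_d:
            s ^= 1          # no data transition, so strobe toggles instead
        data.append(b)
        strobe.append(s)
        prev_d = b
    return data, strobe

def ds_decode(data, strobe):
    """Recover the clock (data XOR strobe) and the bits from the two lines."""
    clock = [d ^ s for d, s in zip(data, strobe)]
    # The recovered clock flips every bit period -- the trick that keeps
    # the receiver in sync without a dedicated clock line.
    assert all(clock[i] != clock[i + 1] for i in range(len(clock) - 1))
    return list(data)
```

Because every bit period produces a transition on one line or the other, the link can signal a new bit on each transition rather than on each full clock cycle, which is where the doubled effective bandwidth comes from.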

Now they needed a connector. “We had marching orders to make it unique so that somebody could just look at the connector and tell what it was,” recalled Teener. Macs of the era had three different round connectors; PCs likewise had a mix of similar-looking connectors.

They asked Apple’s resident connector expert what they should use. He noted that Nintendo’s Game Boy link cable was unlike anything else, and they could make it unique to their technology by swapping the polarization around. The connector could use exactly the same technology—same pins and everything—and it would look different. Better yet, the Game Boy link cable was the first major connector that put the fragile springy parts inside the cable. That way, when the springy bits wear out, you just have to buy a new cable rather than replace or repair the device.

The final design specification ran to more than 300 pages: a complex technology with elegant functionality. Ratified as IEEE 1394 in 1995, it allowed for speeds of up to 400 megabits (50 MB) per second, simultaneously in both directions, over cables up to 4.5 meters long. Cables could power connected devices with as much as 1.5 amperes of electrical current (at up to 30 volts). As many as 63 devices could be networked together on the same bus, and all were hot-swappable. Everything was configured automatically on connection, too, so you didn’t need to worry about termination or device addresses. And FireWire had its own microcontroller, so it was unaffected by fluctuations in CPU load.
