Signal-Processing Resources for High-Density Integrated-Media Communications

Not long ago, communications equipment for both customer-premises and carrier applications was fixed-function, even though it was software-based. And more often than not these products were limited to processing one media technology, such as voice, fax, video, or data. After all, the networks that served as the media transport were typically developed to support one medium, such as voice or data, so there was little incentive for the equipment designer to take on the added complexity of integrated media.

Not any more. The move to next-generation converged-media networks implies a move to next-generation integrated-media equipment at the network’s edge. Today, the equipment designer demands media and application flexibility. As always, high density, low power per port, and low price are major concerns, and availability and maintainability remain as important as ever.

But overlaying all of this is a dramatically changing regulatory, competitive, standards, and financing environment. As the new network emerges, vendors and carriers must place bets on untested standards. Move too soon and run the risk of adopting the wrong ‘standard’; wait too long and be left behind. And with capital markets taking no prisoners, there is little room for a wrong decision.

This is the environment in which the equipment developer must learn to thrive.

Designing In The Midst Of Uncertainty

This degree of uncertainty rewards the designer who can delay investments. In the paper Real Options, Irreversible Investment and Firm Uncertainty, Laarni T. Bulan points out:

Timing flexibility is important because most investments are irreversible due to sunk costs. The firm would want to avoid situations where it cannot recoup the entire amount (or the entire cost) of its investment if the investment opportunity turns out to be unprofitable. The implication of these two assumptions is that the option to delay investment has value. By delaying investment during times of uncertainty, the manager can wait for uncertainty to be resolved (or, perhaps, abate somewhat). If the resolution of uncertainty is favorable, then the firm will benefit from higher profits. If the resolution of uncertainty is unfavorable, then the firm avoids losses by foregoing the investment altogether.

This means that technologies that permit the designer to delay ‘sunk investments’ allow the equipment vendor to cope with thin capitalization and turbulent market conditions. How? Consider the designer who elects to invest in licensed cores, ASICs, all-proprietary boards, chassis, backplanes, and board-level software environments. Such a project requires a headcount in the hundreds, over $50M in capital, and nearly 48 months of market lead time. Compare this with a competitor that elects to take advantage of the latest in catalog silicon and standards-based solutions available on value-adding components.

If the product is a high-capacity multi-service-access, gateway, or mediation switching device, for the proprietary chassis substitute CompactPCI with the latest high-speed packet bus, immediately slashing the investment by millions. Instead of being designed, processor boards are purchased. Network access? Purchased. Host OS? Linux. Media-processing resources? Maybe.

Whether or not a commercial off-the-shelf media-processing resource can be used depends on the system requirements. But the DSP industry has made great progress in the last few years in providing catalog-based answers that eliminate the need to license a processor core and fund the up-front charges of a foundry. DSPs are now rated at 4,800 MIPS. What about packet processing? ASICs are no longer required to meet high-performance packet-processing requirements. A new category of chip, the network processor, now provides 2.5 million packets per second of packet-processing power, and that’s the low end. Newer devices provide the wire-speed processing required by the highest-speed backbone routers.
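
As a rough sanity check on what ‘wire speed’ demands, the sketch below estimates the packet rate a processor must sustain for a few line rates. The line rates chosen and the 64-byte minimum frame size are assumptions made for this illustration, not figures from any particular device.

    /*
     * Illustrative sketch: packets per second required to keep up with a link
     * when every packet is minimum-sized.  Line rates and the 64-byte frame
     * are assumptions for the example.
     */
    #include <stdio.h>

    static double wire_speed_pps(double line_rate_bps, double packet_bytes)
    {
        return line_rate_bps / (packet_bytes * 8.0);   /* packets per second */
    }

    int main(void)
    {
        const double rates[] = { 100e6, 622e6, 2.488e9 };     /* assumed links */
        const char  *names[] = { "100 Mbps", "OC-12", "OC-48" };

        for (int i = 0; i < 3; i++)
            printf("%-8s at 64-byte packets: %.2f Mpps\n",
                   names[i], wire_speed_pps(rates[i], 64.0) / 1e6);
        return 0;
    }

Under these assumptions, 2.5 Mpps comfortably covers an OC-12 link but falls short of minimum-size packets at OC-48, which is exactly where the newer, faster parts come in.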

So an equipment vendor whose system-platform-level requirements can be met by outsourcing value-adding components can develop an advantage over a company that invests tens of millions in the three years preceding the value-adding vendor’s project start date. The value-adding vendor can exercise ‘real options’. In fact, by electing to base the application on an open-standards telephony API, that vendor can choose to develop all application-level functions before purchasing anything. Middleware can abstract more than the underlying hardware and software; it can abstract uncertainty.
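
As a rough illustration of that idea, consider application code written against an abstract media interface and exercised with a stub until the hardware and middleware decisions are made. The interface, names, and stub below are invented for this sketch; they are not a particular open-standards API.

    /*
     * Hypothetical sketch: the application depends only on an abstract
     * interface, so it can be developed and tested before any board or
     * middleware vendor is selected.  All names here are invented.
     */
    #include <stdio.h>

    typedef struct media_port_ops {
        int (*play_tone)(int port, int tone_id);
        int (*start_fax)(int port);
    } media_port_ops_t;

    /* Application logic: knows nothing about the eventual hardware. */
    static void answer_and_confirm(const media_port_ops_t *ops, int port)
    {
        ops->play_tone(port, 1);          /* e.g., a confirmation tone */
    }

    /* Stub implementation used until the real resource is purchased. */
    static int stub_play_tone(int port, int tone_id)
    {
        printf("port %d: play tone %d (stub)\n", port, tone_id);
        return 0;
    }
    static int stub_start_fax(int port) { (void)port; return 0; }

    int main(void)
    {
        media_port_ops_t stub = { stub_play_tone, stub_start_fax };
        answer_and_confirm(&stub, 7);
        return 0;
    }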

The goal of Commetrex is to make developing products with the most demanding requirements feel more like rapid prototyping.

Multi-Vendor Media Technologies

Of course, the hardware resource does nothing without the media-processing software, and that software can be expensive and complex. Most, perhaps all, of the required technology may be available ‘pre-cooked’ on a catalog DSP-resource board. If not, and the board is an open design, the developer can add the missing technology. If the DSP-resource board is a closed-architecture offering, it must either be tossed in favor of a proprietary approach or augmented by additional hardware to support the additional media. Moreover, if the system requirements specify a proprietary form factor for the hardware, the designer may have no choice but to design a proprietary media-processing resource board. But with the new products coming to market today, coming up short one media technology need not mean the extra time and development investment it implied only a year ago.

No equipment vendor has the time or capital to develop all the necessary foundation technologies alone, and today’s technology is so complex that no single media-technology vendor can supply them all. So the equipment vendor may end up sourcing these technologies from multiple companies. Quite possibly, that means negotiating with multiple vendors and integrating technologies delivered in multiple formats into a proprietary media-processing software framework. But it need not.

The MSP Consortium, Inc. has published an open specification, M.100, that the embedded-system developer can use to lower the cost of developing a multi-stream, multi-media telecommunications resource. It does this by specifying a stream-processing environment and the APIs a controlling entity needs to implement specified media-processing functions. M.100 also specifies the algorithm wrapper and a packaging utility that creates the executable stream-processing system resource.
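
To make the idea of an algorithm wrapper concrete, the sketch below shows what such a uniform interface might look like in C: a vendor’s algorithm exposes the same handful of entry points to the stream-processing environment regardless of who wrote it. The structure and field names are invented for illustration and are not the actual M.100 definitions.

    /*
     * Hypothetical algorithm wrapper: a uniform interface a stream-processing
     * environment could use to load media algorithms from different vendors.
     * The names and fields are invented for this sketch, not taken from M.100.
     */
    #include <stddef.h>

    typedef struct media_frame {
        short *samples;      /* PCM samples for one frame      */
        size_t count;        /* number of samples in the frame */
    } media_frame_t;

    typedef struct media_algorithm {
        const char *name;                                 /* e.g., "g729_encoder"       */
        void *(*create) (const void *config);             /* allocate per-channel state */
        int   (*process)(void *state, const media_frame_t *in, media_frame_t *out);
        void  (*destroy)(void *state);                    /* release per-channel state  */
    } media_algorithm_t;

    /* The controlling entity binds an algorithm to a stream and pushes frames
     * through it without knowing anything about the vendor's internals. */
    int run_one_frame(const media_algorithm_t *alg, void *state,
                      const media_frame_t *in, media_frame_t *out)
    {
        return alg->process(state, in, out);
    }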

What all this means is that the equipment developer can either license an M.100-conforming environment (keeping $500,000 and 14 months in the bank) or develop a proprietary implementation. Either way, the designer can then source off-the-shelf media technologies from any of several vendors that supply M.100-conforming media-processing products.

M.100 is not necessary for a single-stream, single-media product. But it, or a proprietary substitute, is required when high-channel-count integrated media is a design requirement: what we at Commetrex call “Any Media…Any Port…Anytime” technology. M.100 does not require a co-processor or host processor, but the most efficient designs supporting high-capacity integrated-media systems do have high-capability embedded processors. In fact, the most efficient designs have all but a small execution environment partitioned onto a co-processor and/or host processor. Other than the actual signal processing, most of the ‘heavy lifting’ in a high-capacity system is in deciding what to do, when to do it, where to do it, and how to do it, none of which are DSP-oriented tasks, making them well suited to scalar processors or network processors.
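
A minimal sketch of that partitioning follows. The message layout and function names are invented for illustration: the host or co-processor makes every what/when/where decision and hands the DSP nothing but signal-processing work.

    /*
     * Hypothetical host-side control code.  The DSP never decides what to run
     * or when; it just receives commands through a mailbox and does the
     * signal processing.  Message format and mailbox are assumptions.
     */
    #include <stdint.h>
    #include <unistd.h>

    typedef enum { CMD_START_TASK = 1, CMD_STOP_TASK = 2 } dsp_cmd_t;

    typedef struct {
        uint16_t cmd;           /* one of dsp_cmd_t                    */
        uint16_t channel;       /* which stream/timeslot               */
        uint16_t algorithm_id;  /* e.g., echo canceller, tone detector */
    } dsp_msg_t;

    /* All the "administrative" work lives on the host/co-processor side. */
    int attach_algorithm(int dsp_mailbox_fd, uint16_t channel, uint16_t alg_id)
    {
        dsp_msg_t msg = { CMD_START_TASK, channel, alg_id };
        return write(dsp_mailbox_fd, &msg, sizeof msg) == (ssize_t)sizeof msg ? 0 : -1;
    }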

So M.100 allows the system developer to license an M.100-conforming media-processing environment (Commetrex’s is called OpenMedia), license M.100-conforming stream-processing technologies, snap them together, and get on with system integration.

The DSP

Embedded-system developers take pride in squeezing the last byte and the last fraction of a MIPS out of hand-coded assembly, but that is very expensive and time-consuming. Although it makes sense for ultra-high-volume, fixed-function products, an extra RAM chip in each of 2,000 units costs only about a week of engineering time. And the MIPS continue to follow Mr. Moore’s curve.
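
A back-of-the-envelope check of that claim is sketched below. The loaded engineering rate and the RAM price are assumptions chosen for the illustration, not quoted figures.

    /*
     * Illustrative break-even arithmetic.  Both cost figures are assumptions.
     */
    #include <stdio.h>

    int main(void)
    {
        const double eng_week_cost = 4000.0;  /* assumed loaded $/engineer-week */
        const double ram_chip_cost = 2.0;     /* assumed $ per extra RAM chip   */
        const double units         = 2000.0;

        double extra_bom = ram_chip_cost * units;          /* added BOM cost */
        printf("Extra RAM across %.0f units: $%.0f\n", units, extra_bom);
        printf("Engineering weeks it buys:   %.1f\n", extra_bom / eng_week_cost);
        return 0;
    }

With these assumed numbers, the added memory across the whole production run buys roughly one engineer-week, which is why hand-tuning to save a few kilobytes rarely pays off outside ultra-high-volume products.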

In a fixed-function system, little code is needed to decide which algorithm to attach to which stream and when. But in high-capacity, any-media, anytime systems, a much higher proportion of the system code is concerned with these functions, which is why co-processors are more common in such systems. As the fraction of system resources allocated to these ‘administrative’ tasks increases, it becomes more important to factor where those resources should live into the design.

For example, if all of the stream-processing resources for a 100-channel system are handled by one DSP, proportionally fewer resources are required to allocate calls to DSPs and bring the required stream-processing tasks into execution than if the system were implemented on 10 DSPs, each capable of handling 10 streams. With high MIPS concentration, the allocation of a task to a DSP is less complex (one DSP instead of 10), and fewer MIPS are wasted. (Think of RAM lost to fragmentation when memory is carved into many small fixed partitions.)
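
The arithmetic below illustrates the point with invented MIPS figures: headroom stranded a few MIPS at a time on ten small DSPs is unusable, while the same headroom pooled on one large DSP can still take on work.

    /*
     * Illustrative fragmentation arithmetic.  All MIPS figures are assumptions:
     * an ordinary channel needs 12 MIPS, a heavy (e.g., fax-relay) channel
     * needs 30, a small DSP offers 120 MIPS, a large DSP offers 1,200.
     */
    #include <stdio.h>

    int main(void)
    {
        const int task_mips      = 12;
        const int heavy_task     = 30;
        const int small_dsp_mips = 120;
        const int small_dsp_cnt  = 10;
        const int big_dsp_mips   = 1200;

        /* After 9 ordinary channels, each small DSP has 120 - 9*12 = 12 MIPS
         * left: too little for a heavy task, so those MIPS are stranded. */
        int leftover_per_dsp = small_dsp_mips - 9 * task_mips;
        int stranded         = leftover_per_dsp * small_dsp_cnt;

        printf("Stranded on 10 small DSPs: %d MIPS\n", stranded);
        printf("Same headroom pooled on one big DSP fits %d heavy tasks\n",
               (big_dsp_mips - 9 * small_dsp_cnt * task_mips) / heavy_task);
        return 0;
    }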

So in high-capacity systems that support any media on any call on any port, very high-capacity DSPs will generally yield a more efficient and cost-effective system design.

As we have shown, the media complexity of the new network, uncertain capital markets, in-flux regulations, and emerging standards require new approaches to solving the time-to-market puzzle. Increasingly, equipment developers are turning to enabling-technology vendors to dramatically reduce sunk investments and time-to-market.