CEO Commentary Archives

Telephone Numbers

Telephone numbers (actually, session addressing) are one of the major issues, problems, or opportunities in telecom, depending on your point of view. Along with so-called Net Neutrality, they are at the center of one of the critical battles being waged in the quest for a competitive communications market. Session addressing is a major concern to many telecom strategists because it is partitioned into separate universes: the traditional telecom universe (10-digit (E.164) numbers), SIP addressing, the Skype universe (your Skype ID), the Google universe (your G-Mail account), and so on (Yahoo!, AOL, MSN, etc.).

 


 

In the US, you must be a legal “telephone company” to get a block of telephone numbers to assign to subscribers, so Skype, Google, and AOL chose to develop their own schemes to identify a user. So, except for Google, which uses a protocol, XMPP, that supports addressing-server federation, each of these services is an addressing island, reducing the value of each network compared with what it would be if all addresses could be reached, regardless of network. Currently, each of these network developers jealously guards its control of addressing on its network.

 

Ten-digit telephone numbers within the North American numbering plan are administered by NeuStar, Inc., the North American Numbering Plan Administrator (NANPA). Assignments are made based on the central office designated by the carrier, which is hopelessly antiquated for the Internet age. The carriers have an interest in keeping this system in place for as long as possible since it allows them to exert some degree of market control (they own the numbers). (Actually, with number portability, you own the number, but it can only be transferred between telcos.)

 

Not only is this system exclusive, it is country-specific, and it is specifically designed to support the legacy model of the country-specific telecom monopoly. The ITU, an agency of the UN, administers the country codes. However, Skype, for example, wasn’t and isn’t interested in national borders. Except in a few countries, the Internet is a global platform, and Skype needed a global session-addressing scheme. It had one sitting on the shelf from the company’s Kazaa origins, and it used it.

 

However, there is an emerging international standard, ENUM, that is modeled on the Internet’s Domain Naming System (DNS). ENUM maps 10-digit telephone numbers to Internet address resources via an ENUM server. The ENUM server provides the requesting client (for example, a SIP proxy) with the stored information, such as the subscriber’s VoIP SIP address, fax address, e-mail address, or other resources, such as secondary addresses for services to which the number’s owner has subscribed. The network operated by Telecom Austria (which uses Commetrex’ BladeWare) has had ENUM in commercial operation since December 2004.
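For the technically curious, here is a minimal sketch of what a public-ENUM lookup looks like from the client’s side. It assumes the third-party dnspython package and the public e164.arpa tree keyed by the full E.164 number (country code included); a carrier-owned private ENUM deployment, including the Austrian service mentioned above, queries its own tree and may differ in detail.

```python
# Minimal public-ENUM lookup sketch (assumes the third-party dnspython package).
# Illustration only; a private, carrier-owned ENUM tree would be queried instead
# of e164.arpa, and the records returned depend entirely on the registry.
import dns.resolver

def enum_domain(e164_number: str) -> str:
    """Convert an E.164 number, e.g. '+43 1 555 0100', into its ENUM domain."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

def lookup(e164_number: str):
    """Return the NAPTR records registered for the number, if any."""
    domain = enum_domain(e164_number)  # e.g. 0.0.1.0.5.5.5.1.3.4.e164.arpa
    answers = dns.resolver.resolve(domain, "NAPTR")
    # Each record carries a service tag (E2U+sip, E2U+mailto, ...) and a regexp
    # that rewrites the number into a URI the requesting client can contact.
    return [(r.order, r.preference, r.service, r.regexp) for r in answers]

if __name__ == "__main__":
    print(enum_domain("+43 1 555 0100"))
    # lookup() raises dns.resolver.NXDOMAIN if the number isn't provisioned.
```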

 

Although ENUM, especially carrier-owned private ENUM, still pays its respects to the established order, the extensible nature of the information (Internet resources) provided by ENUM servers can effectively decouple the location of value-adding services from the provision of transport. So don’t expect carriers to warmly embrace ENUM, as it makes service invocation seamless.

 

There’s been plenty of discussion lately in the US about how to foster innovation. All the US Congress needs to do to ensure that market forces in telecom will continue to support innovation is to establish an open, national ENUM system similar to the one in Austria.


We Make FoIP Work!

I recently spoke with a vice president of engineering at a large enterprise-fax vendor. He wanted to know if we had a program to establish interop certifications with IP carriers. Prior to that, I had spoken with a product-management director at another large enterprise-fax-server vendor (actually, our largest customer), and he wanted to know if we had a certification lab for various gateways. I didn’t have good answers to either question. We just don’t have the resources and scale to make a reasonable dent in the massive numbers of carriers and gateway vendors.

 


 

I imagine that with all the OEMs we have out there, nearly all of the carriers and gateways have been covered by now. And, on a practical basis, most of the carriers use Commetrex’ T.38 technology in their gateways. Moreover, most of the value-adding service networks use Commetrex-based fax media servers. And then there was our hugely successful T.38 Interoperability Test Lab, which, in 2002-2004, made Commetrex’ T.38 the industry’s T.38 interoperability benchmark. But if you’re looking for certifications, other than our own, we don’t have them.

 

Now don’t get me wrong, I understand the marketing value of these certifications, especially when the “other guy” has them, and he does. But with BladeWare’s six years of field experience, we have found that, on a practical basis, their importance is secondary to the question of performance and transaction success in carrier networks. This is because certifications are done in near-lab-like environments, but the challenges come in real-life networks.

 

In previous letters I’ve noted that the industry’s deployment of FoIP is proceeding in phases. Phase I, which extended from T.38’s publication in October 1998 to just a few years ago, was characterized by the need for interoperability between intra-enterprise network elements, such as between an ATA and a gateway. These problems were relatively easy to solve since both manufacturers were usually eager to cooperate.

 

But now that enterprises are extending the reach of their IP networks by connecting directly with IP carriers and SIP trunking, we are confronted with challenges of a different sort. The front lines of the FoIP wars have moved. Yes, sometimes we encounter rear-guard harassment of registration, authentication, and configuration problems, but they are usually dispatched with a good read of a Wireshark capture. Today’s real battles go way beyond simple interop and equipment configuration to getting a call from an ATA or an IP-based fax server to a PSTN-connected fax terminal after transiting multiple IP-carrier networks.

 

We are fully engaged in “making FoIP work.” Today, we are working with carriers to correct the problems within their networks that are keeping FoIP from achieving the transaction success rates of PSTN fax. If IP is going to replace TDM, it has to happen, and Commetrex is leading the way.

 

If you want to know more about what Commetrex is doing to “Make FoIP Work,” give us a call.

Innovation Grows The Industry

We like to think of Commetrex as the leading innovator of fax technology without qualification. We’re not the biggest, but when it comes to innovation, we’re the best. Consider: In the mid-90s we produced the industry’s first software fax add-in product, by adding fax to the NMS voice boards. In 2000, we invented TerminatingT38. Then, there was Multi-Modal Terminating Fax, which supported both T.38 and G.711 IP-fax termination. In 2002, it was the T.38 Interop Lab. And now, we believe we’ve solved a huge problem that’s been plaguing the industry ever since enterprises began to use SIP trunking and direct SIP peering for their IP-fax servers, gateways, and ATAs. We believe this to be a breakthrough in IP-based fax reliability.

 

Phase II, the use of FoIP for carrier-based calls, got underway a few years ago as some of the major IP carriers, such as XO Communications and Global Crossing, began to offer T.38 service agreements to enterprise customers. The VoIP service providers began to offer SIP trunking and direct SIP peering, obviating the need for enterprise gateways. And suddenly, FoIP became a big problem for the fax-server vendors and their enterprise customers.

 

In partnership with Copia International, a noted vendor of mission-critical fax-broadcast servers and a BladeWare user, Commetrex was able to achieve broadcast success rates on a par with those obtained with a multi-line intelligent fax board. We’ve even applied for a patent. The new technology works entirely within the framework of the SIP and T.38 standards. We have done extensive A-B testing with VoIP service providers and IP carriers and have found that servers that only support T.38 have an outbound fax-call completion rate five percent lower than BladeWare equipped with G.711 pass-through support and Smart FoIP.

 

The ITU’s T.38 recommendation was released in 1998, and, for 10 years, in what we call T.38 Phase I, its use was relegated to intra-enterprise applications, so interop and transaction reliability were fairly easy to achieve since the user controlled all the equipment. There was little choice in the matter since almost no carriers supported T.38, and what we call “G.711 pass-through fax,” where the fax call is, essentially, treated as a voice call, was found to be too error-prone over the public Internet. The problem we have now solved stems from enterprise users’ recent push to use FoIP (fax over IP) beyond the confines of enterprise gateways and analog telephone adapters (ATAs) as the industry enters Phase II of T.38 deployment, described above.

 

Instead of working with the more-accessible gateway manufacturer, the business user had to try to work with the less-accessible service provider and IP carrier when he found that, for some reason, the reliability of making a fax call was well below the standards established by old reliable TDM fax over the PSTN. The problem has become so bad that many businesses still use the PSTN for all faxing, even though the enterprise has moved to 100-percent VoIP for voice. The reasons for this state of affairs were many. Some vendors even blamed the T.38 recommendation for the problem. But the industry has responded with improvements in T.38 interoperability and broader deployments of T.38-capable networks. Even so, problems placing calls from ATA-connected fax terminals and T.38-based fax servers persist.

 

According to Steve Hersee, CEO of Copia International, “Our broadcast-fax customers, looking for the promised saving of FoIP, have been asking for FoIP-based fax broadcast for the last few years. So, we ran some trials with our CopiaFacts server running on Commetrex’ BladeWare FoIP platform. Our first tests were with BladeWare supporting both T.38 and G.711. We were disappointed as the completion rates were 15-percent below that of our PSTN fax boards. Then, the Commetrex engineers dug into the problem and found that many of the fax sessions failed when BladeWare accepted the network’s SIP re-invite to go from G.711 to T.38. After several weeks of testing, retesting, and head scratching, they found that the re-invites from our carrier partners were often arriving so late that the server and the PSTN-connected fax terminal were well into the G.711-based T.30 fax transaction, with the switchover to T.38 effectively killing the session. This was proven when we disabled G.711 support in BladeWare and, suddenly, the success rate shot up 10-percent.”

 

But we still weren’t satisfied. We want BladeWare to not just be nearly as good, but as good as the traditional fax server for connection rates. And in T.38-only mode, we still had five-percent of the sessions failing when the re-invite was so late in arriving that the called terminal would just give up waiting for a response in G.711 mode. So, we went to work and developed our patent-applied-for T.38 fax-server technology that solves that problem by allowing the fax to complete in G.711 mode if the T.38 re-invite arrives too late.
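To make the failure mode concrete, here is a minimal sketch of the kind of decision involved. It is not the patent-applied-for Smart FoIP implementation: the phase names and the cut-off are hypothetical, and a real endpoint would key off the actual T.30 signals observed on the G.711 leg.

```python
# Hypothetical sketch of handling a late T.38 re-INVITE (not Smart FoIP itself).
# The idea from the text: if the G.711 fax transaction is already under way,
# switching to T.38 mid-session kills the call, so decline and finish in G.711.
from enum import Enum, auto

class T30Phase(Enum):
    ANSWERED = auto()      # call answered; CNG/CED tones may be present
    NEGOTIATING = auto()   # DIS/DCS capability exchange under way over G.711
    TRANSFERRING = auto()  # training done, page data in progress

# Hypothetical cut-off: once T.30 negotiation has started in G.711, stay there.
SWITCH_ALLOWED = {T30Phase.ANSWERED}

def handle_t38_reinvite(current_phase: T30Phase) -> str:
    """Return the SIP response to send for a re-INVITE that offers T.38."""
    if current_phase in SWITCH_ALLOWED:
        return "200 OK (answer with image/t38 media)"       # switch to T.38
    # Declining the offer keeps the established G.711 session in place,
    # letting the fax complete as a pass-through call.
    return "488 Not Acceptable Here (stay in G.711 pass-through)"

if __name__ == "__main__":
    print(handle_t38_reinvite(T30Phase.ANSWERED))     # early enough: switch
    print(handle_t38_reinvite(T30Phase.NEGOTIATING))  # too late: stay in G.711
```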

 

And our users don’t need to be concerned with the G.711 pass-through session. In much of the world, our carrier friends have virtually eliminated packet loss, leaving PCM-clock synchronization as the chief concern. But even that works because of Commetrex’ proprietary “smart buffering” technology for G.711 mode. It eliminates PCM-clock synchronization as a source of error in G.711 pass-through calls, which is otherwise a common reason longer faxes fail.
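For readers wondering why clock offset matters at all, here is a generic sketch of the textbook way an elastic buffer absorbs clock drift: when its depth drifts out of bounds, it quietly drops or repeats a sample instead of overflowing or underrunning. This is only an illustration of the general technique; Commetrex’ smart buffering is proprietary and certainly more sophisticated.

```python
# Generic elastic-buffer sketch for absorbing PCM clock drift between endpoints.
# Illustration of the general technique only; not Commetrex' smart buffering.
from collections import deque

class ElasticBuffer:
    def __init__(self, target_depth=160, slack=40):
        self.samples = deque()
        self.low = target_depth - slack    # underrun threshold
        self.high = target_depth + slack   # overrun threshold

    def push(self, sample: int) -> None:
        self.samples.append(sample)
        if len(self.samples) > self.high:
            # Far-end clock slightly fast: drop one sample so the buffer never
            # overflows and the receiving modem never sees a burst of lost data.
            self.samples.popleft()

    def pull(self) -> int:
        if self.samples and len(self.samples) < self.low:
            # Far-end clock slightly slow: repeat the oldest sample rather than
            # underrun, which would corrupt the modem signal mid-page.
            return self.samples[0]
        return self.samples.popleft() if self.samples else 0  # silence if empty

if __name__ == "__main__":
    buf = ElasticBuffer()
    for n in range(1000):           # writer runs faster than the reader here
        buf.push(n % 256)
    print(len(buf.samples))         # depth stays pinned at the high-water mark
```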

 

With this latest improvement in the state of the art, we believe our innovation creds remain intact.

Pardon The Expression: ‘A New Paradigm?’

If you read our Commetrex Outlook newsletter, you know that BladeWare now supports the entire Sangoma product line of PCI and PCI Express PSTN-interface boards. Suddenly, Commetrex has the industry’s broadest line of products for the enterprise-fax OEM. “But how can that be?” you say. “The incumbents have spent decades developing their boards and HMP fax platforms.” Well, yes, but they were unable to take advantage of the industry’s evolution, which is proceeding along the same path blazed by the PC industry.

 

So why is this happening now? One answer is HMP. Because Sangoma developed its hardware product line to meet the need for PSTN interfaces created by HMP IP-PBXs, such as Asterisk, there was no need for DSPs or media processing except for, possibly, echo cancellers. This had the effect of partitioning Sangoma’s product-development investment onto PCM-interface boards, while Commetrex was investing in telephony middleware and media-processing (BladeWare).

 

Imagine, for a moment, how inefficient it would be if Dell, Toshiba, and HP had to develop hard drives and micro-processors in order to come out with a new laptop. That would be ridiculous given today’s margins, time-to-market, and ROI pressures. But when I was a young engineer, that’s exactly what IBM had to do, and it’s what the incumbent fax players had to do back in the ‘90s. But it’s not what Commetrex has to do today–perhaps yesterday, but not today.

 

When we began developing our fax technologies back in the mid-‘90s, there were no open-architecture DSP boards to run our modems. And forget about HMP, the MIPS just weren’t there. As a cash-strapped start-up we needed to get to market fast. What to do? The answer we came up with was to prevail upon our good friends at NMS (where I had been VP, Marketing and Sales) to give us access to their embedded code (Remember the VBX boards? Probably not.) so we could turn their voice boards into voice-fax boards by adding our newly developed fax-modem software. This produced our first product, MultiFax for VBX, which, in 1993, was the industry’s first third-party software add-in product. NMS later licensed the software, which is where NaturalFax came from. We went on from there to become a major licensor of fax technologies to carrier, test-equipment, and semi-conductor OEMs, while we were busy developing BladeWare, our HMP telephony platform.

 

But we never forgot the lesson of how powerful a market force technical specialization could be.

 

So, in early 2009, following some major design wins for BladeWare in the enterprise-fax OEM market, it became obvious that our OEM customers wanted more. They were enjoying the benefits of BladeWare’s high function and performance at an unbeatable price, and wanted the same in applications where PSTN connectivity was required. BladeWare had been designed to readily support hardware as both PSTN interfaces and media-processing resources, so we knew if we could find a product line that met our requirements it would be simple to add it to BladeWare.

 

So, what were those requirements?

  • Analog boards supporting both station and office trunks up to 24 ports, preferably configurable.
  • E1/T1/J1 digital boards supporting at least four spans, robbed-bit, and ISDN signaling.
  • ISDN BRI supporting at least four lines yielding 8 ports.
  • PCI and PCI Express in a 2U form factor.
  • Windows and Linux support.
  • Costs low enough to price our products well below the incumbents and still meet target margins. (This was aided in some cases by high incumbent pricing and in others by the cost disadvantages of non-HMP solutions.)

 

Then, Sangoma entered the picture. They meet these requirements and then some. And today we’ve added a Sangoma resource service manager (RSM) to work alongside our SIP RSM, giving the OEM, in addition to the above,

  • TerminatingT38 V3 IP fax with V.34 Fax Modem support,
  • Terminating G.711 pass-through fax with V.34,
  • Analog station and office trunks to 24 ports requiring only one expansion slot,
  • Single-, dual-, quad-, and octal digital boards,
  • BRI to 24 line/48 ports, and
  • Unbeatable pricing.

 

As I noted above, Sangoma focused its product-development investment on PCM-interface boards while Commetrex invested in telephony middleware and media processing (BladeWare). This tightening of investment focus produced best-in-class products at both companies. And they are even more powerful when combined.

 

The times, they are a-changin’.

Whither The Enterprise Fax Server?

It’s ironic that, to a point, the smaller the business the greater the productivity benefit of computer-based fax servers, yet the fax-server industry, for the most part, has been unable to develop a business model that addresses the needs and budget of the small business.

 


 

But why does the small business see a disproportionately greater benefit? It begins with the absence of administrative employees. So, for example, the CEO sends his own fax and checks the fax machine himself to see if that important PO or contract has arrived. Send a fax? Print the document; then prepare a cover page. Go to the printer, pick up the two printouts, go to the fax machine, key in the destination number. No fax answer; it’s the recipient’s voice number. Back to the office to look up the correct number, and so on. Fifteen minutes wasted.

 

The ability of a computer-based fax server to turn this time sink into a one-minute exercise is well documented. No printouts are required. (Save a tree!) Typically, no cover page needs to be filled in manually since the recipient is already in the server’s phone book. Inbound faxes are announced via a pop-up, end up in the recipient’s inbox, and don’t need to be printed.

 

Until recently, the price of the enterprise-fax OEM’s product, at $4,000-plus, was quite simply out of the reach of a small business. Several factors contributed to this, but recent technical innovations from Commetrex have eliminated some of them. The remaining ones vanish with a little marketing innovation. Consider:

  • Fax-platform pricing,
  • Sales channel costs, and
  • Installation and maintenance complexity and costs.

 

Ten years ago, enterprise fax was based on on-premises systems provided by the fax-server vendor. And nearly all of these products were based on a hardware-software platform and multi-line fax resources that cost over $500 per channel, inviting disruptive innovation and increased competition. In the last few years, much of that competition has come from functional substitution by so-called virtual-fax services, where the service provider takes care of the fax sending and receiving, using e-mail for the final or initial leg of the transaction. But with fax resources becoming 100-percent software based, the artificially high per-channel costs for the server vendor were bound to fall, and they have.

 

In addition to those high built-in costs, the complexity of the TDM-based systems demanded a high-cost channel that required high price points for its support. Businesses typically do not purchase a $10,000 system that must connect with the telephone network, the corporate LAN, and other corporate infrastructure by going on-line and placing an order. Multiple sales calls are required to resolve the many issues that inevitably arise: type of network interface, gateways, if required, and infrastructure connections, such as with Microsoft Exchange. And what about NAT and firewall traversal if the system is IP-based?

 

Then there’s installation. If it’s a TDM-based system, trunks must be purchased and installed. If a PBX is to serve as the fax system’s front end, station-interface boards need to be added. What about those gateways for IP installations? Do they support T.38 Fax Relay? Are they compatible? Does the access provider support T.38? If not, and G.711 is to be used for fax transport, will clean faxes result?

 

Solving these cost and channel-friction problems, combined with Commetrex’ “realistic” fax-resource costs, will allow the small-medium enterprise (SME) to afford this productivity-boosting product. This allows the OEMs to finally construct a business model that can address the SME market. But why the interest in the SME, anyway? Well, one good reason is that for every company that can afford a $3,000-plus fax system there are 100 companies that can afford one for under $1,000.

 

Non-commodity products, such as communications systems, typically exhibit a non-linear price-vs.-unit-volume curve. There is always a price high enough to ensure that the volume will be zero. As the selling price is reduced, those companies with the greatest need and highest ability to pay will purchase. As price is further reduced, volumes increase slowly since the price is still high relative to value and capacity to pay for the bulk of the market, so significant price reductions result in a relatively small pickup in volume. However, in most markets there is a price point, a knee in the curve, where volumes increase significantly, and incremental decreases in price result in large increases in volume, yielding a major boost in sales and margins for the OEM.

 

So, how do we get rid of the obstacles and get this valuable communications facility to the companies that need it most? The answer begins with Commetrex’ all-software BladeWare HMP fax platform. For a four-port system, platform cost to the OEM plummets from $3,000 to under $1,000, even when the PC is included. If the computer platform can be piggybacked onto an existing platform, such as an IP PBX, the entire fax platform cost approaches just $100 per port. But we can get it even lower by integrating the fax-server functions into an IP PBX.

 

Today, IP PBXs support voice messaging, but why not fax? After all, fax is now a software function, not necessarily a stand-alone server. Right? Well, the answer is “right”, provided the I-PBX vendor either integrates fax into his system or allows the fax-server software to be co-resident on the same computer and OS.

 

OK. That takes care of the first point, platform cost, but what about the sales channel with its sales and installation costs? If the fax functions are integrated into the PBX, they will be sold along with the PBX, just as the voice-mail features are today. This makes the fax function a PBX add-on in the sales process, greatly reducing channel cost by essentially eliminating the fax-specific sales costs. Moreover, with the fax function a PBX feature, installation costs are greatly reduced.

 

But what about the gateway and/or the VoIP-carrier questions? The best answer is to get rid of the gateway and use a service provider that delivers error-free T.38 faxes, such as Packet8 (8X8). Service providers, such as Packet8, offer SIP trunking over DSL with faxes delivered with T.38. It is this combination—SIP trunking and T.38—that produces the ultimate in low sales and installation costs. And the user’s satisfaction is high with trouble-free faxes.

 

Commetrex has been pitching this to the I-PBX vendors for 7 years now, and it’s finally paying off. They are beginning to bring the solution in house, rather than simply pointing to a high-cost fax-server product. One quick way to do that and deliver a high-function fax-server capability is to partner with an enterprise fax-server vendor. So they are integrating and they are partnering. In any event, we’re getting closer to a dramatic expansion of the market for enterprise computer-based fax.

Asterisk, YATE, Freeswitch, And BladeWare … BladeWare?

There’s been plenty of buzz on the Internet of late regarding the suitability of PC-based open-source PBXs that use host MIPS for media processing (HMP) for so-called carrier-class applications. The blogs and discussion forums have been comparing these PC-based software-only telephony platforms, such as Asterisk and Freeswitch, to what is frequently referred to as “real” carrier-grade—read commercial—hardware-based products.

 


 

Not many of these commentators bother to address what is actually meant by “carrier grade”. Perhaps it’s like pornography: I can’t define it, but show me an example and I’ll tell you whether or not it is. But those that do bother to discuss the term often come down to considering “scale”. Can the system scale? If it can scale to thousands of ports, it’s carrier-class; if it can do so without falling over, it’s carrier-grade. I know…I know…there’s a lot more to it, but that’s beside the point because what I really want to talk about is HMP system architectures that support scaling to carrier-class dimensions.

 

When these discussions get cranked up, the comparison is inevitably between what can be done on one PC versus what can be done on a DSP-based system that scales by adding DSP blades. But is the discussion being properly framed? Should the comparisons be between a PC and non-PC-based systems, or should the comparisons be made between architectures and their scalability, regardless of the underlying hardware? And what about software-only systems whose architecture supports a hardware-software architecture, or even a fully embedded version as well as HMP? It turns out that the PC versus non-PC debate is actually a proxy for the real issue of scalable architectures.

 

But, in all fairness, it’s difficult to compare system architectures since there is little publicly available information that really exposes the architecture of these systems in a useful way. The vendors of commercial (real) products don’t normally publish their product’s architecture for competitive reasons. And open-source is notorious for its lack of documentation. For example, I’ve been unable to find a usable diagram of the Asterisk architecture on the Web. It may be there, I just can’t find it. There’s one in the Asterisk Handbook, by Mark Spencer, but it is not detailed enough to support an analysis.

 

So the discussion usually centers on how well a system scales in practice, not why. We have the same problem when comparing Commetrex’ BladeWare telephony platform to that of other HMP and hardware-based systems. Other than BladeWare and Commetrex’ Open Telecommunications Framework®, on which it is based, I can’t find another system, open-source or otherwise, HMP or otherwise, with an open distributed client-server architecture, which happens to be the foundation of BladeWare’s scalability.

 

Even if you limit a processor blade running OpenMedia, BladeWare’s media-processing environment, to, say, 200 simultaneous T.38 fax transactions or G.711 voice calls, a 10-blade chassis will yield 2,000 channels. Is that carrier-class? No? How about multiple 7U chassis in a 7-foot relay cabinet? Five of them get you 10,000 ports. Are we there yet? Yes, there are issues of footprint and power that would need to be compared, but the discussion here is whether the architecture supports carrier-class capacities, and it does, even though it’s a PC-based HMP system.

 

OTF Kernel is Commetrex’ vendor- and hardware-independent telephony-middleware system kernel that provides the core telephony system services for a high-capacity digital-media system in a Linux or Windows NT environment. It’s used by developers of access equipment, gateways, service platforms, switching systems, and enterprise equipment that require strategic control of their product platform, yet see that time-to-market, investment hurdles, and on-going maintenance preclude internal development.

OTF Kernel can be extended by the OEM to support multi-vendor system services, media resources, and client APIs. This means that the equipment developer can gain the time-to-market advantages of a closed-architecture integrated platform, such as those available from several vendors, yet maintain control of his strategic product platform as if it were developed in house. With OTF, the system developer can assume full control of his strategic platform in every dimension.

Commetrex’ key requirements for OTF, a service platform for any system required to process telephony media streams, were

  • Scalability,
  • Feature extensibility,
  • Vendor independence,
  • Resource independence,
  • Resource efficiency,
  • Availability, and
  • Portability.

 

The requirement for scalability led to the selection of a distributed client-server architecture. The result is that OTF can be scaled by adding industry-standard processors (PCs) to provide more compute resources in any dimension: client applications, system services, signaling, and media-processing resources. We call the HMP version of OTF “BladeWare” since it can leverage the architecture of blade servers to scale. With OTF, server blades are added just as DSP blades are added in DSP-based systems. In fact, OTF also allows the processors to be DSP based. It just does not matter.

 

Regression problems are eliminated since adding a new service does not touch the balance of the system. This is facilitated by OTF’s use of a standard entity shell, called the Application Interface Adapter (AIA). The AIA ensures that the developer’s OTF entity meets all requirements for inclusion in an OTF system domain, whether it’s host-based or embedded.

 

Vendor independence means that nothing in the system is tied to a proprietary Commetrex product, including media-processing resources. In fact, Commetrex developed a related software environment, OpenMedia, that hosts independent media-processing technologies just as effectively and efficiently as it hosts Commetrex’ media-processing products. OEM customers have used OTF with more than Commetrex DSP and TDM-interface boards: Dialogic boards (both the Dialogic and Brooktrout lines) are being used today in OTF-based systems.

 

Resource efficiency is a benefit of OTF’s shared-resource management facility, which supports multi-vendor applications with resource sharing on a common system platform. OTF abstracts signaling, switching, conferencing, and media resources through software objects that model those resources and are managed by the system. The resource manager’s API allows client applications and system services to assemble the required resources on a per-call basis and relinquish them upon call completion, making them available to other applications. Other applications, even from other vendors, can then use the same resource. Moreover, this architecture naturally supports hot-swap for high availability since resources are dynamically allocated.
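As an illustration of that per-call assemble-and-relinquish pattern, here is a minimal sketch. Every name in it (ResourcePool, call_resources, and so on) is invented for the example; it is not the actual OTF resource-manager API.

```python
# Hypothetical sketch of per-call resource allocation and release. The names
# are invented for illustration and are not the OTF resource-manager API.
from contextlib import contextmanager

class ResourcePool:
    def __init__(self, kind: str, count: int):
        self.kind = kind
        self.free = list(range(count))   # simple model of a shared resource pool

    def allocate(self) -> int:
        if not self.free:
            raise RuntimeError(f"no free {self.kind} resource")
        return self.free.pop()

    def release(self, resource_id: int) -> None:
        self.free.append(resource_id)    # back in the pool for any application

@contextmanager
def call_resources(*pools: ResourcePool):
    """Assemble resources for one call; relinquish them when the call ends."""
    held = [(pool, pool.allocate()) for pool in pools]
    try:
        yield [resource_id for _, resource_id in held]
    finally:
        for pool, resource_id in held:
            pool.release(resource_id)

if __name__ == "__main__":
    signaling = ResourcePool("signaling", 4)
    fax_media = ResourcePool("fax-media", 2)
    with call_resources(signaling, fax_media) as (sig, fax):
        print(f"call up with signaling resource #{sig} and fax channel #{fax}")
    # On exit the resources are free again, even for an application from
    # another vendor sharing the same system platform.
```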

 

OTF allows the OEM to bypass the several-dozen developer-years of effort required to produce a viable system foundation, which, prior to OTF Kernel and OpenMedia, has been required if the developer intended to maintain control of his strategic platform. And all of the media technologies in Commetrex’ extensive library are available for use in OTF systems.

Redefining Hosted Media

The Pulver people asked me to participate in the VON.x Spring 2008 panel, Redefining Hosted Media, which prompted me to think about some of the considerations a service provider needs to make when planning how a new service deployment will be implemented. One of the big benefits of new service architectures and software-only implementations is how a service network’s core can be either repurposed or extended at a very low cost compared with the initial buildout.

 


 

What I intend to discuss on the panel is how a vendor may offer a network architecture that does a fine job of implementing the current service, voice messaging, for example, but cannot be extended to support voice-fax unified messaging if the original voice media server does not support fax…and most don’t.

 

The central point of my presentation is that the service network should have third-party call control (3PCC) from the perspective of the gateway and the media servers, enabling the call to be moved from one media server to another as the caller’s responses and the application logic require. In a SIP service network, 3PCC requires a back-to-back user agent (B2BUA). A B2BUA, essentially a switch, mediates a call by maintaining call state for each of the two call legs, in this case the media-gateway and the media-server call legs. Since the B2BUA can, therefore, exert its will over both call legs, it’s free to move the call’s media stream from, for example, the voice-messaging media server to the fax media server when the application requires.
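Here is a minimal conceptual sketch of why holding both legs matters. It is not a real SIP stack, and the SDP strings are only placeholders; the point is simply that the B2BUA alone can re-INVITE the gateway leg when application logic decides the media must move.

```python
# Conceptual 3PCC sketch (not a real SIP stack; SDP strings are placeholders).
# The B2BUA keeps state for both call legs, so it alone can re-INVITE the
# gateway when the application decides the media stream must move.
from dataclasses import dataclass

@dataclass
class Leg:
    peer: str          # who is on the far end of this dialog
    remote_sdp: str    # last SDP received from that peer

class B2BUA:
    def __init__(self, gateway: Leg, media_server: Leg):
        self.gateway = gateway
        self.media_server = media_server

    def move_media(self, new_server: Leg) -> str:
        """Swap in a new media server and re-INVITE the gateway with its SDP."""
        self.media_server = new_server
        # Neither media server needs to know about the other; only the B2BUA
        # touches the gateway leg, which is what third-party call control means.
        return f"re-INVITE {self.gateway.peer} with SDP: {new_server.remote_sdp}"

if __name__ == "__main__":
    b2bua = B2BUA(Leg("media-gateway", "m=audio 4000 RTP/AVP 0"),
                  Leg("voice-media-server", "m=audio 6000 RTP/AVP 0"))
    # The application detects a fax answer tone and routes the call to the fax
    # media server, which is then free to renegotiate to T.38 with the gateway.
    print(b2bua.move_media(Leg("fax-media-server", "m=audio 7000 RTP/AVP 0")))
```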

 

But you can implement a voice-messaging system quite nicely without a 3PCC entity by routing all calls directly to the voice media server (where else?). The voice media server might include a VXML voice browser that accesses VXML scripts from a Web server. OK so far, but suppose the service provider wants to later add fax to his UM system and the voice media server does not support fax. You then have the need for the call’s media stream to be moved from the voice to the fax media server. Although the voice media server may be able to move the RTP stream through itself to the fax media server, what happens when the fax media server wants to REINVITE the gateway over to T.38? Ooops! Voice media servers don’t do this.

 

Since the voice media server isn’t going to do it, you need … third-party call control!

 

The point? Even though a service-network architecture may support today’s application, it might not support tomorrow’s. The IMS architecture is specifically designed to support service networks that have multiple application servers and multiple media servers. So, even if you don’t intend to acquire a full-up IMS network, consider using the basic IMS network architecture and you’ll enjoy many of the benefits. You can then layer in additional components as follow-on projects.

Telephony & The Web

Something’s going on here. In the not-too-distant past, detecting change in telecom was a little like watching a glacier move. But not today. Keeping up with all the changes is a challenge, and one of the rapidly moving areas is the service network. We all know about IMS, but can we pick out a watershed event from the clutter of innovation? Perhaps we should keep an eye on the separation of content and application logic from the service network.

 


 

Yes, I’m talking about VoiceXML, a mark-up language specified by the World Wide Web Consortium (www.w3c.org), the same folks that brought you HTML, XML, SOAP, and other recommendations critical to the growth of the Web.

 

VoiceXML is a markup language that allows Web developers to implement voice dialogs between caller and machine, where the caller’s input is typically DTMF or speech. Since the dialogs are specified by Web-oriented developers in a high-level language, we have skill-set separation and a big jump in programming productivity. But it’s much more than a software productivity tool. It is, perhaps, more important that VXML supports the architectural separation of application logic from the authentication, routing, and billing functions of the service network.

 

This is big.

 

Why? Because the people that are close to the enterprise-based service (SOA) user or telecom subscriber’s needs are able to move at a faster pace than those whose primary concern is the network infrastructure…the network elements that provide access, OSS/BSS, and media-server functions. Let the latter move at something closer to the “telecom pace”, and let the user-facing folks move at Web pace. VoiceXML allows that to happen.

 

The developers of the VoiceXML language were targeting enterprise applications, and that’s still its primary use. But now the advantages of separating the application from the more telephony-specific network elements and placing it on a Web server, where it can be fetched on a per-call basis, are so pronounced that the technology is being increasingly adopted by telecom service providers.

 

In these applications, the VoiceXML interpreter is integrated into what is known as a VXML browser (or simply a “voice browser”). A voice browser is analogous to a graphical Web browser, such as Firefox and Microsoft Internet Explorer®. But instead of rendering and interpreting HTML (like a graphical browser), the voice browser renders and interprets the VoiceXML script, which determines how the service application interacts with the caller/subscriber. Rather than clicking a mouse and using a keyboard, the caller uses her voice and a telephone (and the phone keypad) to access Web-based information and services.

 

One of the primary functions of the voice browser is to fetch VoiceXML documents from the Web server, just as a graphical Web browser fetches HTML documents. The request to fetch a document can be generated either by the interpretation of another VoiceXML document, or in response to an external event, such as a SIP-based command from an application server in an IMS network. The VoiceXML browser uses HTTP over a LAN or the Internet to fetch the documents (the very same HTTP requests that are used by the graphical Web browser).

 

The voice browser interprets and renders the VoiceXML document. It manages the dialog between the application and the user by causing audio prompts to be played and accepting and acting on the caller’s input. The action might involve jumping to a new dialog, fetching a new document, or submitting user input to the Web server for processing.
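To make that loop concrete, here is a toy sketch of the fetch-interpret-collect-submit cycle, driven by a hard-coded VoiceXML fragment so it runs without a Web server. It is not BladeWareVXML or OpenVXi; a real browser renders audio, performs DTMF and speech recognition, and implements the full 2.1 language.

```python
# Toy sketch of a voice browser's fetch-interpret-collect-submit cycle.
# Not BladeWareVXML/OpenVXi: a real browser plays audio prompts, runs DTMF and
# speech recognition, and implements the complete VoiceXML 2.1 language.
import xml.etree.ElementTree as ET

SAMPLE_VXML = """<vxml version="2.1">
  <form id="main">
    <field name="account">
      <prompt>Please enter your account number.</prompt>
    </field>
  </form>
</vxml>"""

def fetch_document(url: str) -> str:
    # A real browser issues an HTTP GET here, exactly like a graphical browser;
    # we return a hard-coded document so the sketch runs stand-alone.
    return SAMPLE_VXML

def run_dialog(url: str, caller_input: str) -> dict:
    """Interpret one document: play each prompt, collect input for each field."""
    root = ET.fromstring(fetch_document(url))
    results = {}
    for field in root.iter("field"):
        prompt = field.find("prompt")
        print("PLAY PROMPT:", prompt.text.strip())   # would be TTS or audio file
        results[field.get("name")] = caller_input    # would be DTMF/ASR result
    return results  # a real browser would submit this back to the Web server

if __name__ == "__main__":
    print(run_dialog("http://example.com/app.vxml", "12345"))
```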

 

Since the user’s interaction is with a Web server, the server can be connected to enterprise or carrier databases without requiring that the database interaction be any different than it is with non-telecom applications, leading to operational efficiencies.

 

The VoiceXML Forum (now 376 companies strong) published VoiceXML 1.0 in 2000 and then transitioned control of the specification to the World Wide Web Consortium (W3C). Since then, the W3C has published VoiceXML 2.1, and is currently working on VoiceXML 3.0 (“V3”).

 

Despite the substantive opportunity that exists in this marketplace, there are only a few significant VoiceXML platform vendors. A number of industry moves have affected that: VoiceGenie Technologies, the number-two player, was acquired by the number-one player, GenesysLabs. Vocalocity, the leader in VoiceXML OEM solutions, was sold to Zivva, which took the Vocalocity name and ownership of OpenVXi, but no longer focuses on the OEM marketplace. This means that OpenVXi, the most widely-used VoiceXML interpreter, has become dormant following its ownership change. The result is an increased market need and the opening for a new player to emerge, particularly with V3 on the horizon. Commetrex is that player and BladeWareVXML is the product.

 

We took OpenVXi and enhanced it, resulting in dramatically improved performance in an interpreter that strictly adheres to the 2.1 standard. But that’s not all we did to it. So, give it a test drive, as BladeWareVXML is now on www.sourceforge.net.

When Will We See The Last Gateway?

Well, don’t hold your breath! As you know, when it comes to technical inertia, telecom can’t be beat. But we are seeing some movement, and it’s called SIP trunking, with the new SIPconnect recommendation providing the catalyst.

 

Although SIP trunking has not reached the tipping point, we now have multiple carriers, PBX vendors, and a hosted-solution vendor. And, if you need a value-adding media server with SIP trunking support, check out BladeWare.

 

In 2004, Chris Gatch, Cbeyond Communications’ CTO, pulled together folks from Avaya, Broadsoft, Centerpoint Technologies, Cisco, and Mitel to form the SIPconnect initiative. Their objective was to improve the interoperability between SIP-based premises systems and SIP-enabled service providers so that business systems could place and accept calls to and from the PSTN without enterprise gateways.

 

With SIP trunking, all parties benefit.

  • Gateway costs go away for the enterprise.
  • Without unnecessary analog-digital conversions, call quality goes up.
  • Direct SIP signaling supports greater function.
  • Power and footprint are reduced.

 

The recommendation, which you can download from www.sipforum.org, points out that all of the necessary IETF RFCs for SIP trunking already exist, but that, given the “sheer number of these standards documents, service providers, and equipment manufacturers have no clear ‘master reference’ that outlines which standards they must specifically support in order to ensure success.” SIPconnect solves the problem by providing a clear framework of MUSTs, SHALLs, and MAYs. It provides a reference architecture by addressing protocols, messages, codec support, packetization, fax and modem handling, DTMF handling, NAT, and authorization and security by referencing the appropriate RFCs.

 

And it’s gaining traction. Cbeyond has been joined by (we believe, as some of these vendors don’t have information on their support for SIPconnect on their Websites) 360 Networks, Bandtel, Bandwidth.com (it’s all over their Website), IP-Only, Level3, and Voex on the carrier side. And at least 15 IP PBX vendors are supporting the recommendation including Altigen, Allworx, Avaya, Digium, Epygi, Fonality, Guardian, Linksys, ShoreTel, SwitchVox, Talkswitch, and Telechoice.

 

Oh! And Commetrex. We’re adding SIPconnect support to BladeWare. It will be available in early Q3.

 

I spoke with Chris Gatch, Cbeyond CTO and SIP Forum Board Member about the effect SIPconnect was having on SIP trunking. He said, “The SIP Forum is pleased with the rate of industry adoption of SIPconnect. Given the number of PBXs and service providers that now support the standard, it’s apparent this initiative is having a positive impact on the industry and driving SIP-trunking implementations.”

 

In the interest of disclosure, Cbeyond has been Commetrex’ service provider for over five years, and we are currently installing SIPconnect for a new IP PBX. Also, Cbeyond’s fax-to-email service is based on 14 servers continuously running BladeWare Fax-to-Email. Cbeyond’s SIPconnect includes a bunch of SIPconnect trunks and DIDs, so what we don’t need for basic voice and fax we’ll use for testing SIPconnect on BladeWare, our HMP media server. If you’re an OEM, you will be able to use BladeWare as the foundation of your media server or IP PBX, and it will give you a SIPconnect interop-proven value-adding platform.

 

There is little on the Cbeyond Website about SIP trunking and SIPconnect, but you can learn more about their free half-day VAR SIPconnect training course which is based on Cbeyond’s over two years of experience with the recommendation. The course addresses issues like LAN configuration and firewalls in a SIP-trunking environment. To sign up for the course (and learn more about Cbeyond), send an email to sales.engineers@cbeyond.net.

 

The recommendation is available for download from the SIP Forum at http://www.sipforum.org/sipconnect. In addition, the Bandwidth.com Website is loaded with useful information and so is the BandTel site. VoEX has some interesting white papers. Check them out.

 

Also, as possibly the only hosted-solutions vendor to the IP service provider to have announced SIPconnect support, Broadsoft deserves special mention. Announced at Spring VON 2007, Broadworks Business Trunking release 14, with SIPconnect support, is now generally available.

 

So, if you are an OEM developing a media server or IP PBX, consider beginning with BladeWare, a SIPconnect-proven platform. If you’re an IP service provider to the SME, ask your system integrator for SIPconnect support. If you’re an enterprise, ask your service provider and PBX vendor for SIPconnect support and skip the investment in gateways. Oh, yes. Even though BladeWare handles G.711 pass-through faxes just fine, in all cases ask for T.38 support to be included.

Here Comes Web 2.0!

Just when we thought we had today’s technology framework figured out, here comes the Web 2.0 crowd to make things complicated. Wikipedia, which is itself an example of Web 2.0, says Web 2.0 is characterized by applications that use the Web as a platform. Web 2.0 is an “architecture of participation” that innovates through the “assembly of systems and sites.” Web 2.0 is ad hoc and dynamic; stasis is anathema. The hallmark of Web 2.0 is collaboration, but not just any collaboration…facile collaboration.

 


 

And if you have your finger on the pulse of telecommunications, you might feel it quickening. The ITU standards framework is being ignored by Web 2.0, even when it provides a telephony function. Do you find that, where the ITU used to be your standards focus, it has now shifted to the IETF and the W3C? And although the 3GPP had some decidedly un-Web 2.0 organizers, its IMS specification could acquire many Web 2.0 attributes, depending on how deeply the incumbents get their hooks into it. Speaking of incumbents, all this must be at least a little scary to them. An open collaborative framework is just not in their DNA, and, predictably, they’re responding with a conservative rear-guard action.

 

So don’t be too surprised if you read the following news release in a few years:

 

Dateline: Mountain View, CA, November 17, 2011 – Google, Inc. announced today that it has reached agreement with AT&T to acquire all of the outstanding stock of the telecom operator. The all-cash transaction requires the approval of the shareholders of AT&T, but it is widely assumed that the offer will be enthusiastically accepted since the telco’s fortunes have been sinking as the Web 2.0 phenomenon has swamped the sluggish telco sector.

 

Far-fetched? I don’t think so. AT&T’s strategy is to stifle competition and invest in IP TV to counter the cable operators. But, according to Google:

 

Google’s mission is to make the world’s information universally accessible and useful. Google Talk, which enables users to instantly communicate with friends, family, and colleagues via voice calls and instant messaging, reflects our belief that communications should be accessible and useful as well. We’re committed to open communications standards, and want to offer Google Talk users and users of other service providers alike the flexibility to choose which clients, service providers, and platforms they use for their communication needs.

 

This reads as if it’s a Web 2.0 manifesto.

 

So where and how will the Web 2.0 phenomenon affect telephony? That’s anyone’s guess, but you can see some of it happening today at AOL, Google, Skype, and Yahoo! They have shown an utter disregard for telephony tradition. For example, Google Talk uses XMPP for signaling, and it’s all open.
And speaking of signaling, telephone numbers (actually, session addressing) remain one of the major issues, problems, or opportunities in telecom, depending on your point of view. Along with Net Neutrality, they are at the center of one of the critical battles being waged in the quest for a competitive communications market. Session addressing separates universes: the traditional telecom universe (10-digit numbers), the Skype universe (your Skype ID), the Google universe (your G-Mail account), the AOL universe (your AOL account), and so on (Yahoo!, MSN, etc.).

 

Since only legal “telephone companies” can get a block of telephone numbers to assign to subscribers in North America, Skype, Google, and AOL chose to develop their own session-addressing schemes. So, except for Google, which uses XMPP, a protocol that supports presence and addressing-server federation, each of these services appears to be an addressing island, holding in check the value of each network. And each network developer will jealously guard its control of addressing on its network…at least initially. One recalls when AOL came out with IM and offered it as an AOL exclusive. Then Microsoft did the same thing. But it didn’t take the two giants long to figure out that Metcalfe’s law (the value of a network is geometrically related to the number of connections) applied to them as well, and they federated their addressing. Today’s addressing islands will probably follow suit. However, among the Internet companies, a further complication for voice peering will be media-stream compatibility, since they use different codecs, and none of them directly supports data or fax.

 

Ten-digit telephone numbers within the North American numbering plan are administered by NeuStar, Inc., which was appointed by the FCC to administer the plan as the North American Numbering Plan Administrator (NANPA). Assignments are made based on the central office designated by the carrier, a system hopelessly antiquated for the Internet age, since before too long there will be no central offices. The carriers have an interest in keeping this system in place for as long as they can, since it allows them to exert some degree of market control (“We own the numbers”). Actually, with number portability, you own the number, but today it can only be transferred between telcos on your behalf.

However, there is an emerging international addressing standard, ENUM, which is modeled on the Internet’s Domain Naming System (DNS). ENUM maps 10-digit telephone numbers to Internet address resources via an ENUM server. The ENUM server provides the requesting client (for example, a SIP proxy) with the stored information, such as the subscriber’s VoIP SIP address, e-mail address, or other resources, such as secondary addresses for services to which the number’s owner has subscribed. The network operated by Telecom Austria (which uses Commetrex’ BladeWare for fax services) has had ENUM in commercial operation since December 2004.

 

The extensible nature of the information (Internet resources) provided by ENUM servers can effectively decouple the telephone number from the subscriber’s local-access service provider, and the location of value-adding services from the provision of transport. Don’t expect US-based carriers to warmly embrace ENUM, since it makes service invocation seamless and removes control of 10-digit numbers from the local telco.

 

All the US Congress needs to do to ensure that market forces will continue to support innovation, and the boost it provides to America’s economic competitiveness, is to ensure Net Neutrality and an open, national ENUM system similar to the one in Austria.

Let’s Get Movin’!

I had lunch with an investment-banker friend a few months ago. He asked me about my views on the current market conditions and drivers for Commetrex’ value-adding products. I explained that the market was extremely weak, and that my hope was that we were having lunch during the market’s bottom. But I added that, in my opinion, the current conditions of lowered investment in product development, standards and regulatory uncertainty, and rapid changes in technology should combine to increase demand for value-adding products. He asked why that hadn’t increased the sales of the industry’s incumbents. I explained that even the strongest drivers could not sell products for projects that did not exist. But there is another factor that will become extremely important as the market turns around and new development projects are funded.

 


 

The value-adding component market began when developers of enterprise systems were offered a way to avoid the investment in the underlying platform of a digital-media telephony system, procuring it instead from a company such as Brooktrout or NMS Communications. Product development was extremely efficient since only the application had to be developed. This increased efficiency created a $10-billion industry. But there was a downside: the developer had to cede control of his product platform to his vendor. If a new project required an unsupported media technology, for example, there were usually no options, not even developing it in house, since the platform’s closed architecture did not allow it. Lack of control of the platform is the primary reason the carrier-equipment industry has been reluctant to embrace the model.

 

If a platform vendor’s system framework, streams framework, and media technologies were perfect for an application, but the platform vendor’s proprietary hardware did not support the system requirements, the OEM was out of luck. A $10-15-million investment and a two-year time-to-market delay were required to develop these system elements. One thing the industry did supply the developer of digital-media systems was the media technologies: Texas Instruments’ Third-Party Developer’s Network was there to offer any member licensed media technologies for TI’s TMS320 family of DSPs.

 

But there were no licensed system-framework software products.

 

And there were no licensed vendor-independent media-processing frameworks.

 

OK, you already know where I’m heading: Commetrex has these products: Open Telecommunications Framework® Kernel and OpenMedia. OTF Kernel is a resource- and vendor-independent digital-media client-server system framework. And OpenMedia is a streams framework that allows the OEM to integrate first- and third-party media technologies on the same system. And it works with and without OTF Kernel. So the OEM can pick and choose which major system components to make and which to buy. It’s no longer an all-or-nothing deal.

 

We at Commetrex believe that this decomposition of the high-capacity integrated-media telephony system heralds a new value-adding architecture for the industry. This new architecture promises to bring the efficiencies formerly only enjoyed by the computer industry to telecom.

 

For example, Urmet Sistemi (Rome, Italy), a media-server manufacturer, was looking to improve margins and increase system density of its next-generation product by developing a proprietary media blade. Urmet was able to build the business case for the project by licensing Commetrex’ OTF Kernel.

 

Structurally, this is similar to a PC OEM licensing Windows.

 

Another OEM needed to produce an IP endpoint on a proprietary form factor. The system was small enough to not require a system framework. But at 64 channels and multiple media (voice, fax, and data), a high-function media-streams framework was required. The answer was OpenMedia.

 

These products and the architectural decomposition behind them represent just the next step in the industry’s march to improved efficiency. It all began in 1984 with Dialogic’s voice board, which put anyone with a PC-AT in the telecom-system business. The next step was NMS’s MVIP initiative, which made systems more extensible and media-rich. CompactPCI, with the objective of creating an open hardware ecosystem, was an important step. It’s now being followed by Advanced Telecommunications Computing Architecture (ATCA), which updates the standard to meet today’s density and power requirements.
Now comes Commetrex with a standards-based (ECTF S.100 and MSP Consortium) decomposition that lets the OEM take advantage of ATCA without having to cede control of his strategic product platform or invest the millions of dollars or years required to develop the necessary system-framework software to do so.

 

I hope you have the time to check out this new architecture we call the Open Telecommunications Framework (https://commetrex.com/OTF_Portal.html), and its two category-defining products, OTF Kernel (https://commetrex.com/products/CTMiddleware/OTF/OTFKernel.html) and OpenMedia (https://commetrex.com/products/mspe/openmedia/OpenMediaSDK.html). These two products give you a foundation for PSTN, IP, or dual-network systems. In fact, these products are the foundation of Commetrex’ host-signal-processing system, BladeWare.

 

In closing, I wish you the very best of everything in the coming year.

The End Of Telephony?

After a 13-decade run, telephony as a separate industry will cease to exist within the next 15 years. Bell patented the telephone in 1876. Since telegraphy predated telephony by about 30 years, in a sense, data networking predated voice. But just like the telegraph, telephony required a specific physical plant composed primarily of wires and poles. Telephone companies had to acquire rights-of-way. People (manual switchboard operators) handled call routing. Eventually, the single-call-per-wire network was replaced by today’s time-division multiplexed (TDM) network. But TDM is still telephony-specific, so we still need telephone companies. But not for much longer.

 

After a 13-decade run, telephony as a separate industry will cease to exist within the next 15 years.

 

Today, the TDM network is being replaced by packet-based (primarily IP) broadband access that gives businesses and consumers relatively high-speed, non-telephony-specific access to the Internet. Incumbent carriers still offer traditional TDM telephony on top of the digital subscriber line through line-sharing technologies. But maintaining the TDM network is putting the incumbents at a cost disadvantage. Moreover, VoIP is typically much more feature-rich than TDM service, and the cost of maintaining the TDM network in some cases exceeds the cost of installing new IP telephony infrastructure.

 

With broadband “pipes” connecting subscribers to a converged network, voice becomes just another application. Yes, it’s an application that stresses the IP network in ways that data applications, such as HTTP and file transfer, do not, but it’s still just another application. And, without a telephony-specific infrastructure we don’t need telcos; we need broadband-access providers.

 

Now, the telcos will tell you that the access network and the IP networks behind it really are telephony-specific. They have a quality of service that supports telephony. They have built-in billing and other features needed by a telco, such as application-level routing. But as the robustness of the public network improves, the audio quality of Vonage and Skype improves, and it’s already pretty good. Moreover, the telcos know it. That’s why we’re already seeing attempts by some carriers to block competitive VoIP.

 

You can count on the incumbents, and even the competitive carriers, to do what they can to erect barriers to services offered by independent providers. Remember the IN—the Intelligent Network? It was supposed to enable third-party value-adding service providers to easily connect with the PSTN. Never really happened. Now comes IP Multimedia Subsystem (IMS), which holds the promise of separating the provision of services from the supporting network. That’s good. But it also embeds the service core deep within the telco’s network. That’s bad.

 

The incumbent-centric Alliance for Telecommunications Industry Solutions (ATIS), which is defining the “Next-Generation Network (NGN) Framework” for North American carriers, deleted from section 1.1 of its document, “Part I: NGN Definitions, Requirements, and Architecture,” a characterization of the NGN that appeared in an earlier draft: “Unrestricted access by users to different service providers.” That deletion speaks volumes about their intent. Yes, the document mentions third-party providers, but so did the IN documents.

 

However, it’s not a private party everywhere. Take Austria, the first country where ENUM was available for commercial services. ENUM is a standard that maps global telephone numbers (E.164 numbers) to SIP addresses. So, if you register your telephone number with an ENUM domain, any call to that number will be sent to the SIP addresses you have registered. Then the Austrian regulator (RTR) went one better. It created a special value-adding ENUM service exchange, 780. With 780 registration, you don’t begin with a telephone number. Instead, registration gets you the number, which begins with 43 780. Once you receive the call, you can do what you wish, including providing value-adding services, even voice.
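
For the curious, here is a minimal sketch, in Python with the dnspython package, of the lookup a SIP proxy or other ENUM client performs: the E.164 number’s digits are reversed, dot-separated, and placed under the e164.arpa tree, and the NAPTR records returned carry the registered SIP (or other) URIs. The telephone number in the example is illustrative only, not a real 43 780 registration.

# Minimal ENUM lookup sketch, assuming the dnspython package is installed.
import dns.resolver  # pip install dnspython


def e164_to_enum_domain(number: str, apex: str = "e164.arpa") -> str:
    # Keep only the digits, reverse them, dot-separate, and append the ENUM apex.
    digits = [ch for ch in number if ch.isdigit()]
    return ".".join(reversed(digits)) + "." + apex


def lookup_naptr(number: str) -> None:
    domain = e164_to_enum_domain(number)
    print("ENUM domain:", domain)
    try:
        for rdata in dns.resolver.resolve(domain, "NAPTR"):
            # The service field says what kind of URI this is (e.g. E2U+sip);
            # the regexp field rewrites the dialed number into that URI.
            print(rdata.service.decode(), rdata.regexp.decode())
    except dns.resolver.NXDOMAIN:
        print("No ENUM registration found for", number)


if __name__ == "__main__":
    lookup_naptr("+43780123456")  # made-up number, for illustration only

Because the lookup is ordinary DNS, any client on the Internet can resolve a registered number to its owner’s chosen services, which is exactly why the 780 exchange decouples the service from the underlying carrier.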

 

In 2006 the US Congress will likely rewrite the laws governing telecom regulation. The Telecommunications Act of 1996 needs to be replaced, but with what? If telephony is becoming an application on the Internet, a so-called information service, the incumbent telcos will be challenged by the Internet crowd unless the ILECs can keep them out, and you can bet the telcos will put up a fight. The fur will surely fly since the stakes are huge. After all, we’re talking about the future of the Internet. Will it remain the platform for freewheeling entrepreneurism, including in the voice arena? Or will it follow the low-innovation path that has characterized telephony for the last 129 years?

 

If you believe that dollars buy votes, bet on the telcos. According to Business Week, the incumbent telcos and the major cablecos “invested” $3 million in congressional candidates in 2005 through October 31, while Internet companies, such as Yahoo! and eBay, contributed less than $1 million. A late-December article in the Atlanta Journal-Constitution reported that BellSouth had “entertained” at least 80 US congressmen and aides through October 2005. And if you’re not cynical enough to believe that funding congressional campaigns and buying swank dinners can affect legislation, consider that some states have passed laws prohibiting municipalities from offering WiFi to their citizens. Now Pete Sessions (R-Texas) has proposed federal legislation banning municipal wireless networks. Expect the battle to be waged as Congress rewrites the Telecommunications Act of 1996.

 

OK, what’s the answer?

 

We must separate access and services, not just in different “planes”, but as different businesses. To do so requires that offering access services be both profitable and competitive. And we’ll probably need more than competition between DSL/FTTP, cable, wireless, and broadband over powerline. Why not multi-tenant fiber in the access network? Why not private ownership of access? Why not ENUM with only enough restrictions and safeguards to make it fair and unabused?

 

But the one thing we should do, if nothing else, is pay attention to what Congress does this year.

Where Do We Go From Here?

I was thinking about the state of public communications networks when the brouhaha in a neighboring county over what the school system should teach in biology, evolution or what some of the parents call “Intelligent Design”, came to mind. You probably find that a little weird, but the thought chain had to do with the evolution of the network and how today it really represents Dumb Design, at least for tomorrow’s network. Unfettered survival of the fittest never happened in telecom. Instead, government policies have created mutant giants from protected telecom monopolies.

 

Can reasonably fair market forces be brought to bear on telecom? While today’s regulatory environment (in the US) might have been appropriate for the low-innovation past, is it what we need tomorrow?

 

How should they be handled? Can reasonably fair market forces be brought to bear on telecom? While today’s regulatory environment (in the US) might have been appropriate for the low-innovation past, is it what we need tomorrow? Do we need more competition in the access network? Will the way numbers are assigned today be optimum for the next-generation network? What is the NGN, anyway, and how will its architecture be defined? Who will define it?

 

One of the promises of the Internet is that of the “Stupid Network” (not Dumb Design). The hope is that intelligence will move out of the network to the edge…to the user, lowering the barriers to market entry for innovative services. It will be interesting to see whether the Alliance for Telecommunications Industry Solutions (ATIS, www.atis.org), which comprises all the ILECs as well as many others, comes up with standards that move intelligence from the edge into the network, increasing the control of the incumbents. I’m hopeful that ATIS and the 3GPP’s IMS will provide a standards foundation that allows wireline, wireless, and cable to converge by offering seamless access to a service core that presents a level playing field to all value-adding service providers. That’s the promise. Let’s see how it plays out.

 

Of this I am sure: something has to happen. We will see the industry evolve into something quite different, since its current structure just won’t realize the network’s value potential. The current telecom environment is, for the most part, an artifact of the regulated past, and not suitable for telecom’s future. But what must happen to allow the forces of the market to come to bear, while recognizing that some providers have a huge competitive advantage gained from over 100 years of regulation? Will we see intelligent or dumb design? And how quickly will it evolve, or will we see atavisms appear?

 

The FCC does not appear up to the task. It seems to fail to shape the network except through reactive decisions. There is little evidence that it is proactively shaping policy to address the fundamental issues facing the industry. Perhaps the judicial and legislative framework in which it works does not permit it to do so. Can it do a better job of understanding the competitive dynamic, defining competitive layers, and then exposing them to competition? I don’t think so. The FCC must operate within a legislative framework, and the courts are supposed to interpret the laws. So real, fundamental change must come from Congress. And it doesn’t appear to be interested.

 

Nearly a year ago, at my VON Fall 2004 presentation, I said, “Just as the business of building roads is not the business of building trucks, and neither of those is the business of hauling goods, so too the business of building networks is not the business of building network equipment, and neither is the same as the business of offering value-adding services. It works in transportation and it will work in telecom. The 1984 AT&T Consent Decree separated equipment from the mix, but networks and network services are still stubbornly entwined.” Obviously, this is true in wireline telephony, cable, and cellular.
So, where would separation of transport and service get us? A level playing field for all, that’s where. Need a group of numbers? Go to NANPA and get them. Need a network? Build one or rent one. Get the access you need and launch that innovative service.

 

For example: Suppose you want to deploy a messaging service: voice, fax, and e-mail. Using today’s technology and network design, you would buy servers at $80,000 apiece, secure colo space, and install and maintain this infrastructure. First-year tab for Tier 1-and-2 coverage: over $9,000,000.
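
As a back-of-the-envelope illustration only: the $80,000-per-server figure comes from the paragraph above, but the site count and the per-site colocation and operations costs below are assumptions of mine, included only to show how a first-year tab can exceed $9,000,000.

# Back-of-the-envelope sketch only. The per-server cost is from the text above;
# the site count and per-site colo/operations figures are illustrative assumptions.

server_cost = 80_000               # per server, from the text
sites = 80                         # assumption: rough Tier 1-and-2 metro coverage
colo_per_site_per_year = 15_000    # assumption
install_and_ops_per_site = 25_000  # assumption

first_year = sites * (server_cost + colo_per_site_per_year + install_and_ops_per_site)
print(f"First-year tab: ${first_year:,}")  # $9,600,000 with these assumptions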