In this article, Rory Conaway summarizes the difficulties and failures of early municipal Wi-Fi networks, the problems with mesh networks, and the solution he is proposing.
* * * * * * *
I’ll be right upfront and tell you that this article is about a system that I’ve designed and that we are going to sell as a complete solution. However, it’s so unique, and press releases are so brief, that I thought an article explaining our system in greater detail would be a good idea. I’m not going to spill all the secrets, but some of the concepts used, and comparisons to existing municipal products, are valuable to municipalities trying to decide between nothing and really expensive fiber. It’s wonderful that Google wants to pay to wire two cities on the planet, but what about the other 2,469,499 (reference: WikiAnswers)? This is almost like starting over from Chapter 1, but it is actually an evolution of everything I’ve talked about in the previous Tales From The Towers chapters, with some secret sauce thrown in.
A brief walk through the history of municipal wireless networks
Let’s start with the original dream of mesh systems: coverage everywhere, cheap to deploy and maintain, and the digital divide was supposed to come down like the Berlin Wall. Yes, and I’ve got some oceanfront property here in Arizona to sell you. Then EarthLink collapsed due to technical cluelessness, followed into the abyss by MetroFi, which died from drinking too much advertising Kool-Aid. And that’s not counting the tens of millions of dollars, along with the hundreds of millions in investor money, plowed into hardware manufacturers such as Ricochet, Vivato, SkyPilot, and a few others, money that will never be recovered. I believe the idea was badly oversold. Every city wanted it, and probably used the same financial spreadsheet author for their city savings projections that the for-profit companies were using for their investors. My other conspiracy theory is that those accountants moved into the banking industry and came up with the great idea of credit default swaps that collapsed the financial industry a few years later.
However, the need for municipal wireless never went away; instead, it grew. Everyone wants broadband everywhere, and between tablets and smartphones, the demand for bandwidth is increasing exponentially. The only way that need is going to be fulfilled is if someone breaks the last-mile monopolies of cable and telco operators. Cellular companies, who were supposed to be able to break those monopolies with advanced high-bandwidth technologies, have instead created their own, and limit you to monthly allotments you wouldn’t tolerate from wireline services. Then they laughingly try to also sell it as residential coverage!
Other wireless options are not as practical or affordable
Compared to satellite broadband service, which has a place anywhere there is no option other than two cans and a string (better known as a dial-up POTS line), cellular broadband is not that bad. Of course, if you want to use any of the really cool features that have been introduced on the Internet over the last, oh I don’t know, 10 years, then prepare to open up the checkbook. If you hit the monthly bandwidth limit of your cellular contract, just watching a single HD streaming movie could cost you $40. Of course, having some congressmen, senators, and the FCC in your pocket to protect that monopoly, by limiting some of the most efficient small business operators on the planet to just a few crowded frequency ranges, helps too.
Point-to-multipoint (PTMP) networks just don’t function well in most municipal areas due to density, vegetation, buildings, and vertical assets, to name just a few challenges. The hub-and-spoke model will never come out of the farmlands. It will never compete in most cities with the density of wireline systems, although it will always have a little niche around Aunt Bessie’s cow. Unless an equipment manufacturer comes up with some really inexpensive and flexible equipment that can handle 500Mbps through subspace (and I don’t see that happening in the near future), the municipal idea is actually the concept that has the best chance, mesh or otherwise. Of course, if someone did figure out subspace, the politicians would immediately trash the concept of eminent domain, confiscate it, and sell it to the highest bidder, which, hmmm, happens to be the same monopolies that own them.
The concept of a lot of local access points (APs) is still the best idea to penetrate the suburbs and high-density user markets; figuring out how to make that happen the right way logistically and financially is the critical issue. Before we go into how to solve it though, we need to know the nature of the problems we face and how to get around them. The solutions can be found in previous articles I have written here (the Tales From The Towers series) but a quick review never hurts.
Problems encountered in deploying large scale mesh Wi-Fi networks
Most mesh APs are not designed to deliver the kind of capacity that today’s users need. Mesh systems also need to be deployed on street poles, close to the users, due to limitations that were so eloquently explained to us by Shannon and compounded by our government’s spectrum restrictions. Assuming that your city doesn’t have fiber to every street pole (wait, that’s Triadland), and I rather doubt it, APs with a single 5.8GHz radio for backhaul need several injection points, since they lose half the total bandwidth on every hop. Throw in the fact that 300Mbps in 802.11n ranks right up there with Santa Claus and the Tooth Fairy, and the reality is that, at best, the gateway radio has about 80Mbps maximum to start with in a 20MHz channel (and believe me, I’m being overly generous in some cases). A 40MHz-wide 5.8GHz backhaul is hard to deploy due to interference and rarely, if ever, delivers a doubling of bandwidth.
Let’s assume that a mesh radio really can get 80Mbps from the gateway, or egress point of a cluster, to the first hop. Then what happens when it makes the second hop if it only has one internal backhaul radio? The second hop will only get half of that, or 40Mbps. The reality is that it also drops the total throughput back to the first hop to 40Mbps, and that’s assuming no users are connected to the first-hop AP. Users who were connected to the first-hop AP just had their bandwidth drop significantly. Now, not every manufacturer has that exact problem, as some use a store-and-forward methodology that maintains the bandwidth of the first hop at the expense of the second hop or of increased latency. There is no free lunch here, and it still amazes me to see even new mesh radios that have hit the market in the last two years still using a single radio for backhaul. They simply can’t be wireline replacements with one radio. In the real world, with trees and buildings, you might have to live with 5-10 hops before the next bandwidth injection point, and I’m constantly designing systems around this limitation.
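To make the halving effect concrete, here is a minimal back-of-the-envelope sketch in Python. This is my own simplification, not a vendor tool: it assumes an idealized single half-duplex backhaul radio that must receive and then retransmit on the same channel, so usable throughput halves at every relay.

```python
def hop_throughput(gateway_mbps, hops):
    """Idealized per-hop throughput for a mesh with a single half-duplex
    backhaul radio: each relay receives and retransmits on the same radio,
    so usable bandwidth halves at every hop past the gateway."""
    return [gateway_mbps / (2 ** (h - 1)) for h in range(1, hops + 1)]

# Starting from a generous 80Mbps at the gateway:
for h, mbps in enumerate(hop_throughput(80, 5), start=1):
    print(f"hop {h}: {mbps:.1f} Mbps")
```

At the 5-10 hops mentioned above, this idealized model leaves 5Mbps or less of the original 80Mbps, which is why extra injection points or a second backhaul radio become unavoidable.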
The other things that make no sense to me are the one-radio, three-antenna design and the three-radio, three-antenna design. The former means that instead of using three inexpensive radios and some type of synchronization, that money went into an amplifier and a splitter. End result: half the bandwidth lost. If you are going to create a dual-polarity, dual-frequency 802.11n MIMO system, why cripple it with a single-radio backhaul? I’ve never understood that, and it took some digging at the FCC into the Motorola 7181 to even figure that much out. The brochure wasn’t much help.
The latter is even weirder. Three radios means that each radio/antenna pair covers 120 degrees. It’s clearly not as limiting as a single radio and is actually sort of clever. It’s cheaper than the Rolls-Royce concept from BelAir, whose original four-radio systems (three at 120 degrees for backhaul and one 2.4GHz radio for coverage) cost $12,000 per AP, and it addresses the fact that no single radio is going to handle a hop that is 180 degrees away. However, you now have to watch orientation on a street so that corners don’t end up on the same radio if they are the primary path, or you lose half your bandwidth. The problem then becomes that the center of one or two antennas, the highest-gain point, is pointing directly into the corner, which is usually where a building or house lies, while the edges of the antennas, the lowest-gain point, are aiming down the street. The alternative is an antenna with no dropoff at the edges of the 120 degrees, but then you are sacrificing antenna gain. Like I said, weird idea.
Trees are a huge problem, since they are usually right in line with the light poles along the street and block egress injection bandwidth. Since the backhaul of most mesh radios is 5.8GHz, and that signal isn’t getting through trees at full modulation at any distance, if at all, that’s a problem. It’s also one of the reasons that most mesh systems had to go back and install two to four times more radios than the software model suggested. Most models only worried about the 2.4GHz coverage. None of them could possibly model the growth rate of every single tree and how it affected the backhaul. The way it was handled, adding more mesh radios, created more problems than it fixed. Adding another 2.4GHz radio either increased the interference level if you didn’t need the additional coverage, or just added one more hop, reducing the bandwidth down the street even further. To make it worse, many trees hang over into the street.
The catch is that this isn’t just one problem; it’s actually two. Since we are using two different frequencies on the APs, we need to address it with two different RF analyses. Historically, mesh companies have dealt with it as one and have come up with the same solution: more APs. However, more APs also means more capital cost, and if there is only one internal backhaul radio, more mesh-hop bandwidth loss or more injection points.
This problem alone creates a lot of headaches for mesh companies and costs them a lot of money. Logistically, most trees don’t overhang that far, or in some cities they are trimmed back. That means the more centrally an AP is mounted over the street, the less chance it will have tree obstruction. However, that may also be the difference between paying for a police officer to direct traffic or paying a traffic-control company to put up barriers for installation and service. The expense goes way up, and it continues if APs have to be replaced in the future. For cost reasons, the solution we came up with allows either scenario but is cheaper in the long term if it’s mounted at the vertical pole. When engineers designed these systems, they sat back and used software to tell them how many poles to skip. Based on deployment results, I would be surprised if the engineers actually went on-site and visually inspected every single AP location for proper placement. It’s these details that have to be considered for the financial viability of a large-scale wireless network. The solution found for tree obstructions for the backhaul in S.P.I.R.I.T. is much simpler and less expensive.
Here we have evaluated just one of the most common challenges that we ran into with S.P.I.R.I.T. For those of you who have followed my articles, the answer is obvious and is already covered. I’m going to make you do some homework on this one. The best part is that we didn’t lose any appreciable bandwidth, added very little latency, 1-2ms under average load, and didn’t have to add a several thousand dollar AP. However, when this system is deployed, engineers are going to have to do a site survey of the location of every single AP for proper placement. Nike is going to be sending me thank you letters for all the shoes that are going to wear out getting engineers into the field.