Verifying Radio Network Coverage

Coverage prediction software models the propagation of radio waves as part of the radio network design process. It takes into account (among other things) distance from transmitters, variations in topography, the curvature of the earth, and changes in atmospheric density with altitude. But it is only part of the story.

In this article, Tait coverage expert Stephen Bunting takes us beyond the mathematical theory, to explain how Coverage Verification Testing increases confidence in radio network coverage.

How coverage engineers prove that your radio network is giving you the coverage you paid for.

Coverage predictions define the mathematical likelihood that a randomly-selected location will have a signal strength equal to, or greater than, your specified threshold. But there are limits: because the prediction is averaged across your entire service area, you cannot be 100% confident that a specific signal level will occur at a given time and place. No matter how good the data, it cannot perfectly represent the real world.

Once your network is in place, you will want to confirm that you are getting what you paid for – effectively a coverage guarantee. Coverage Verification Testing can give a specified confidence (usually 99%) that your network is delivering the area reliability you specified, by randomly sampling coverage across the network service area.

To verify that your installed network meets coverage requirements, your coverage must be specified as either:

  • Covered area reliability – the proportion of randomly-selected locations within the predicted coverage boundary where service can be expected; or
  • Service area reliability – the proportion of all locations within the service area where service can be expected (the short sketch below illustrates the distinction).
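
To make the distinction concrete, here is a minimal Python sketch using hypothetical pass/fail sample records; the data and function names are illustrative only, not from any real test:

    # Each hypothetical sample: (meets_threshold, inside_predicted_boundary)
    samples = [
        (True, True), (True, True), (False, True),   # inside the boundary
        (True, False), (False, False),                # outside the boundary
    ]

    def covered_area_reliability(samples):
        """Proportion of in-boundary samples that meet the threshold."""
        inside = [ok for ok, in_boundary in samples if in_boundary]
        return sum(inside) / len(inside)

    def service_area_reliability(samples):
        """Proportion of all samples that meet the threshold."""
        return sum(ok for ok, _ in samples) / len(samples)

    print(covered_area_reliability(samples))  # 2/3 ≈ 0.67
    print(service_area_reliability(samples))  # 3/5 = 0.60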

The level of service you require also needs to be defined. Where possible, the service threshold should be a single, measurable, objective value. Common coverage design thresholds are signal strength (RSSI) and Bit Error Rate (BER), which may have been derived from a specified Delivered Audio Quality (DAQ) requirement.

Coverage Verification Testing (CVT) physically measures area reliability in a robust, repeatable and affordable way. In this situation, reliability refers to the proportion of locations that meet or exceed the coverage design threshold.

Where to sample

Statistical sampling requires each sample to be randomly and independently selected. Obviously, if all samples were taken right next to radio sites, the test would not be valid. Nor would it be if all samples were taken in deep valleys at the edge of coverage. Neither extreme would provide an accurate measure of reliability.

If time and money were no object, every possible location could be tested and a very precise reliability measure achieved. Clearly this is impractical, so another approach is needed to balance precision and affordability: controlled randomization, which combines random sampling with even distribution by spreading sufficient samples evenly across the service area.

To distribute samples across the service area in an unbiased way, coverage engineers create a test grid, which divides the service area into evenly-sized test tiles, typically one to two kilometers square. A random sample is taken within each test tile. So, while not random in the strictest sense, sampling is randomized within each tile. When designing the coverage verification test, the coverage engineer can adjust the tile size to ensure that enough samples are taken to meet the specified confidence level, while keeping the sampling as evenly spread as possible. The tiles themselves are not tested; they are simply a device for distributing the samples.

(As the actual sampling is performed by vehicles on public roads – which conveniently replicates actual mobile radio use – any tiles that do not have at least partial road access are excluded.)
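
As a rough illustration of this controlled randomization, the Python sketch below grids a hypothetical rectangular service area into 2 km tiles and draws one random point per tile. The dimensions are invented; a real test plan would work in projected map coordinates and drop tiles without road access:

    import random

    TILE_KM = 2.0  # tile edge length; adjusted to hit the required sample count

    def grid_sample_points(x0, y0, x1, y1, tile=TILE_KM):
        """Return one random, independent sample point inside each grid tile."""
        points = []
        y = y0
        while y < y1:
            x = x0
            while x < x1:
                points.append((x + random.uniform(0, min(tile, x1 - x)),
                               y + random.uniform(0, min(tile, y1 - y))))
                x += tile
            y += tile
        return points

    # A hypothetical 60 km x 30 km service area -> 450 tiles, one sample each
    samples = grid_sample_points(0, 0, 60, 30)
    print(len(samples))  # 450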

The method

From random signal-quality samples, you can estimate the percentage of locations that meet or exceed the coverage design threshold. The confidence associated with that estimate then depends on the number of samples you take.
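
As a sketch of that relationship (using the usual normal approximation to the binomial, and borrowing the 90% specification from the example that follows; the figures are illustrative):

    from statistics import NormalDist

    def confidence_reliability_met(passes, n, spec):
        """One-sided confidence that the true area reliability >= spec,
        given `passes` threshold-meeting samples out of n (normal approx.)."""
        p_hat = passes / n
        se = (spec * (1 - spec) / n) ** 0.5  # standard error at the spec value
        return NormalDist().cdf((p_hat - spec) / se)

    # Roughly the same measured reliability, very different confidence:
    print(confidence_reliability_met(92, 100, 0.90))   # 92.0% measured -> ≈ 0.75
    print(confidence_reliability_met(831, 900, 0.90))  # 92.3% measured -> ≈ 0.99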

For example:

A radio network has a specified area reliability of 90%, and a coverage design threshold of -100 dBm signal strength. You require a confidence level – the likelihood that the test result is accurate – of 99%.

The number of samples required depends on the gap between the reliability specification and the predicted reliability – the smaller the gap, the more samples are needed. Let’s look at some possible outcomes, based on different numbers of samples.

Looking at the table below, the first two examples fall well short on confidence because of their very small sample counts. The third example exceeds both the reliability and the confidence targets, suggesting that the system may in fact have more radio sites than it needs to meet the specified criteria.

The final example – with 900 samples and a measured reliability of around 92% – meets the confidence criterion, and best represents a realistic, well-executed CVT.

What happens if we push the confidence figure higher? Diminishing returns set in quickly: 99.9% confidence requires 2150 samples, more than double the effort, so testing costs can escalate rapidly.
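
Both figures above can be reproduced with a normal-approximation “greater-than” test – a sketch of one common approach, not necessarily the exact procedure used on any given project. The 92% predicted reliability passed to samples_needed below is an assumption, chosen to match the measured figure above:

    import math
    from statistics import NormalDist

    def pass_threshold(n, spec, confidence):
        """Minimum measured reliability for n samples to demonstrate
        the specified area reliability at the given one-sided confidence."""
        z = NormalDist().inv_cdf(confidence)
        return spec + z * (spec * (1 - spec) / n) ** 0.5

    def samples_needed(spec, predicted, confidence):
        """Samples required for a network performing at its predicted
        reliability to just reach the pass threshold."""
        z = NormalDist().inv_cdf(confidence)
        return math.ceil((z * (spec * (1 - spec)) ** 0.5 / (predicted - spec)) ** 2)

    print(round(pass_threshold(900, 0.90, 0.99), 4))  # ≈ 0.9233, i.e. "around 92%"
    print(samples_needed(0.90, 0.92, 0.999))          # ≈ 2149, i.e. ~2150 samples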

To sum up: coverage prediction is theoretical, and can tell only part of the story. Physical signal measurements at sampled locations, from a well-designed and well-executed coverage verification test, can verify coverage in a robust, repeatable and affordable way.


This article is taken from the latest issue of Connection Magazine. Read the full piece here. And if you like our articles, subscribe to Connection to be the first to know when new issues are released!
