MUMBAI: Gone are the days when television channels and shows had free rein to claim leadership at will, conveniently tweaking ratings data with whatever permutations and combinations suited them best.
In light of false claims of leadership based on selectively cited data by television channels, which mislead the public at large, the new ratings measurement body Broadcast Audience Research Council (BARC) India has established bounds within which ratings can be used, particularly in the public domain.
BARC India has laid down the following tests, which must be applied before making a claim of leadership:
1) The period of comparison must cover at least four consecutive weeks of data.
2) The period of comparison must cover at least four consecutive clock-hours of data.
3) The tabulations used must be direct outputs of BARC India’s BMW user interface. Any number derived by extrapolating or interpolating BMW outputs is not permitted for use in the public domain.
Additionally, claims of leadership must meet the following standards:
1) Clear definition of target audience within BARC India audience taxonomy
2) Clear definition of comparison set
3) Period of comparison to cover at least 4 consecutive weeks
4) Period of comparison to cover at least 4 consecutive clock-hours
5) All data must be available directly and without interpolation or extrapolation from the BMW.
Viewership Research – A domain of statistics
BARC India has been established in pursuit of the vision of measuring “What India Watches.” At current reckoning, India has over 153 million TV homes, about 77 million each in rural and urban India. No presently available technology can capture and report every home on a ‘census’ basis. Consequently, the only practical approach to the measurement task is to carefully recruit, and then closely track, a representative sample drawn from this huge population.
The scale of the sample bears a direct relationship to the width and depth of coverage it can realistically provide; one comes at the cost of the other. The greater the width over which a sample is distributed, the less the depth of coverage available for any particular geography. Statistics provides reliable sampling techniques to capture the diversity in a population, and analytical techniques to quantify the errors in the estimates produced. It should be intuitive that errors tend to ‘average out’ across large aggregates but get amplified when small slices are examined. A well-designed sample makes its sampling logic, and the errors of estimate associated with it, explicit.
Measurement of television viewership boils down to answering the following questions:
· What was watched? (Content attribution)
· Who watched? (Reach)
· When and for how long was it watched? (Time spent)
The concept of ‘rating’ is merely the product of the second and third.
Rating = Reach x Time Spent
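As a quick illustration of this identity, the sketch below computes reach, time spent and rating from a small, invented minute-level panel. The data and variable names are hypothetical, chosen only to show the arithmetic; this is not BARC India output or methodology.

```python
# Hypothetical minutes watched by each of 10 panellists
# during a 60-minute slot (invented data, not BARC output).
minutes_watched = [60, 30, 0, 15, 0, 45, 0, 60, 0, 0]
universe, period = len(minutes_watched), 60

viewers = [m for m in minutes_watched if m > 0]
reach = len(viewers) / universe                    # fraction who watched at all
time_spent = sum(viewers) / len(viewers) / period  # avg fraction of slot watched

rating = reach * time_spent  # Rating = Reach x Time Spent
print(reach, time_spent, rating)

# The same figure falls out directly as total viewing time
# over total possible viewing time across the whole universe:
assert abs(rating - sum(minutes_watched) / (universe * period)) < 1e-12
```

Note that the identity holds because non-viewers contribute zero to both sides: halving the audience but doubling each remaining viewer’s time leaves the rating unchanged.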
BARC collects data and publishes measurement statistics on all these variables at both Household and Individual levels. Sampling ratios vary across different geographies and town classes. What may be measurable as a slice or segment in one market or geography may be too small to measure in another.
Challenges of sampling India
While India is only the second most populous country, its economic, ethno-cultural, geographic, social and demographic diversity is by far the most multi-hued on the planet. A wide and constantly expanding spectrum of television channels seeks to slice and segment this variegated audience. Disparate rates of economic advancement across linguistic/geographic segments are echoed in the range of broadcast content that courts them. More simply, greater prosperity cues greater choice. Though a large proportion of cable or DTH homes pay a monthly subscription, only a small portion of this reaches broadcasters. Not surprisingly, a numerically dominant majority of mostly small channels realises nothing from subscriptions and is wholly advertising dependent.
By its nature, advertising is data driven. Ad placement is based on finding the right segment at the right time at the most competitive price. The first two considerations are all about audience measurement while the third reflects commercial negotiation, which is also inextricably linked with it.
This, then, is the great measurement conundrum. The more desperately a channel needs measurement to survive commercially, the harder it is to measure.
Priorities and Choices
Panel size, while designed to grow steadily over the years, is fixed at any given moment. BARC India entrusts responsibility for setting measurement priorities and making allocation choices to its Technical Committee. The Committee comprises representatives drawn from the stakeholder community and must perform an intricate balancing act: keeping coverage wide enough to justify the “What India Watches” vision while delving deep enough to find and measure the burgeoning ‘long tail’.
BARC India’s panel is already without precedent in terms of its coverage of Urban India. With its imminent expansion into rural India, it will be entering virgin ground for television measurement. As new markets get covered, or previously covered markets are put under higher magnification, many new audience segments, and by implication, content delivery opportunities are bound to be revealed. More measurement and better measurement will fire up the creative engine and a feedback loop will raise the bar further on future needs from the BARC India panel.
The Indian broadcasting industry is entering a virtuous cycle of better measurement leading to more content differentiation leading to even better measurement and so on.
Measurement and Comparison
The two are inseparable. The moment anything is measured, it becomes possible to compare it with another thing measured using the same metric. With television viewership, it is almost a reflex. Any content producer, or advertising inventory trader, starts comparing her reach, time spent and ratings with those secured by her competitor(s) as soon as the week’s data are published. On the one hand, this serves a crucial function in content evaluation and planning. On the other, it helps set prices for trading advertising inventory. In both instances, the key players are looking closely at their “Share of Market”: creative content professionals seek to lead or dominate share of time spent, at least within their genre and ideally across multiple genres; advertising sales people want to win the maximum, highest-value-per-viewer revenue and, by implication, starve their competition. This is fine so long as it stays within the broadcast organisation. Issues begin only when these professionals use the data to establish their leadership with their respective ‘customer’ communities. When a television station announces that it is “Number 1” in its genre and offers BARC India data to substantiate this claim, the claim is no longer an internal matter but has entered public discourse.
Ratings Leadership
In its most essential sense, television measurement is just a special case of attempting to make sense of human behaviour. The constant battles between fickleness and loyalty, emotion and intellect, frivolity and seriousness play out vividly in the way in which we wield the remote. As is commonplace in nature, order eventually arises from this chaos.
One aspect of this order is a marked propensity to inertia. Purchase behaviour, of which viewership behaviour is a special case, is known to fall into two broad patterns, ‘Repertoire’ and ‘Subscription’. ‘Repertoire’ purchasing is when a consumer has a set of acceptable, quasi-peer brands across which she switches. ‘Subscription’, by contrast, connotes a high level of loyalty to a single brand. In general, television viewing falls in the ‘Repertoire’ basket. Only the rarest content earns ‘Subscription’ status, when it comes to be seen as ‘appointment viewing’.
It is in this context that ‘Leadership’ in television must be understood. A leader is not created overnight. A given moment, or a given daypart on a particular day, may show one channel ahead and another behind. This does not constitute leadership; such a momentary blip is a very weak foundation on which to base a leadership claim.
Rules for commercial use of BARC India data
1. All BARC India data are based on a sample, not a census, of India’s television viewing population.
a. Samples produce estimates of population parameters that lie within a range or ‘interval’. The midpoint of the range is used as the point estimate but what the sample actually produces is an ‘interval estimate’.
b. Some events are commonplace in the population; others appear less often. The rarer an event is, the harder it is to detect in a sample. Consider an example. A Cricket match is viewed by 30 per cent of all viewers in a population of 10 million, while a Golf tournament is viewed by 0.1 per cent of the same population. A sample of 632 individuals would suffice to estimate the Cricket match viewership with a 10 per cent Relative Error, i.e. ±3 percentage points, or between 27 per cent and 33 per cent. To get the same relative accuracy for the Golf tournament, i.e. an estimate within ±0.01 percentage points, we would need a sample of over 263,000 individuals. However, if we were prepared to accept a 100 per cent Relative Error, i.e. a range of ±0.1 percentage points, or 0 to 0.2 per cent, the sample size comes down sharply to 2,703 individuals.
c. Two events cannot be meaningfully contrasted if both are rare. Imagine comparing the Golf tournament cited above with a Chess Championship also watched by 0.1 per cent of the population.
Assume that we are working with a sample of 2703 to keep both estimates in the 0-0.2 per cent range. Let us say that the sample produces an estimate of 0.05 per cent for Chess and 0.17 per cent for Golf. It would be tempting to declare Golf more popular by a factor of 3:1 but this would simply be a trick played by the sample and a grievous falsification of reality.
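The arithmetic behind these sample sizes can be checked with the standard normal-approximation formula n = z²·p(1−p)/e². The confidence level is an assumption on my part (a z of about 1.645, roughly 90 per cent confidence, reproduces the 632 and 2,703 figures quoted above); this is an illustrative sketch, not BARC India’s published methodology.

```python
import math

def sample_size(p, margin, z=1.6449):
    """Sample needed to estimate a proportion p to within +/- margin
    (in absolute terms) via the normal approximation n = z^2 p(1-p)/e^2.
    z = 1.6449 (~90% confidence) is an assumed value."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# Cricket: 30% viewership, 10% relative error => +/-3 percentage points
print(sample_size(0.30, 0.03))      # 632
# Golf: 0.1% viewership, 10% relative error => +/-0.01 points
print(sample_size(0.001, 0.0001))   # runs into the hundreds of thousands
# Golf: 100% relative error tolerated => +/-0.1 points
print(sample_size(0.001, 0.001))    # 2703
```

The quadratic blow-up in n as the margin shrinks is exactly why the rare-versus-rare comparison in (c) is so treacherous: at a sample of around 2,700, both the Golf and Chess estimates carry error bands wider than the estimates themselves, so their apparent 3:1 gap is well within sampling noise.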
2. BARC India data are best understood as ‘Time Series’ data and not ‘Point’ data.
a. Aggregating across periods, for example by using moving totals or moving averages, damps out random variability. BARC India encourages the use of 4-, 8- or 12-weekly moving totals or moving averages when evaluating a proposition.
b. Time series data provide insights that a single point does not. While extraordinary events will trigger the occasional spike in viewing, most viewing follows almost metronomically predictable patterns. The illustration below tracks overall viewership measured across the entire BARC India panel for four consecutive weeks between May and June 2015.
Every genre/type of content creates a mix of appointment and occasional viewing. Plotting the viewership across multiple weeks helps to visualise the direction in which its popularity is headed. Two points on the path may suggest a pattern contrary to the broad trend and only plotting multiple periods can reveal this. Selective use of BARC India data to bestow an artificial advantage on a channel is not permitted.
c. ‘No. 1’, ‘Leader’, ‘Winner’ and suchlike labels make sense in an Olympic athletic event but only serve to mislead in the context of viewership measurement. Viewers do not tune into a winning or losing channel. For a viewer, the channel they choose to watch at a particular moment, however popular or unpopular it might be with the rest of the universe of viewers, wins their attention for as long as they stay on it. As options multiply, programming targets ever more tightly defined audience/need combinations.
Even the biggest entertainment channel may not appear at all in the viewing repertoire of an international news addict. Audiences can and will be defined in endless combinations of gender, age, NCCS segment, geography and town class. Even if two channels pick nearly identical target audiences, they will attempt to differentiate their content from one another. While some viewers may consistently pick one over the other, many will distribute their time across both.
d. Audience shares can easily mislead, particularly when comparing small channels or platforms.
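The moving-average smoothing recommended in 2(a) above can be sketched as follows. The weekly figures are invented for illustration; they are not BARC India data.

```python
def moving_average(weekly, window=4):
    """Trailing moving average over `window` consecutive weeks,
    damping out week-to-week sampling noise."""
    return [sum(weekly[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(weekly))]

# Hypothetical weekly ratings with one spiky week:
weeks = [2.0, 2.1, 1.9, 2.0, 3.5, 2.0, 2.1]
print(moving_average(weeks))
```

The one-week spike of 3.5 barely moves the smoothed series; a leadership claim built on that single week would be exactly the kind of momentary blip the rules above guard against.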