Second-Screen Advertising: A Tale of Two Assumptions

Fresh off completing a second screen ad study on a notable TV show (you know, the kind where you go in with a ‘let’s-confirm-or-deny-what-we-think-we-know’ and come out with some ‘hmm-now-we-know-what-we-didn’t-know-we-didn’t-know’), I wanted to use this moment to reflect on a topic of much procrastination: the emerging economics of Second Screen Ads. So here goes, with the caveat of an adapted Yogi-ism: 90% of all prognostication is 50% mental.

As with any emerging market, Second Screen Ad prognostications are heavily dependent on your assumptions, which might include:

  • Pace of growth of the TV Apps ecosystem, in both volume and ‘share of TV eyeballs’.
  • Emergence of new Ad Units. Surely there’s more to second screen ads than banners and videos: something that matches the increasing computing power of tablets and the increasingly short spans of focus on any single screen at a time. But what that something is remains anyone’s guess, and until then, there are banners and click-throughs.
  • The relative valuation of new Ad Units against the incumbent ones. What should the base case of the Ad Unit revenue breakdown projection be? Banner Ad rates or better? CPM, CPA, CPC, CP(TBD?)?
  • The trajectory of ‘input costs’. What assumptions should be made about the cost of creating a large variety of ad units for the same campaign?
  • Analytics and reporting. How do we handle the potential ‘apples and oranges’ issues of adding up engagement across devices, exposures, and interaction styles into something simple and tractable?

Given a position on the above, we have the sunny side uppers, the ‘whoops, is that a Chasm I see?’ crowd (harkening back to The Gorilla Game), and the umbrella-carrying sun worshippers.

  • Sunny Side Uppers. The assumptions made by sunny side uppers such as this focus on the *potential* total addressable market. Take the best case: a) 80% of all (4 billion!?) TV watchers have TV Apps running on their tablets while they watch TV, b) their TV App dwell time is a significant fraction of the program length, and c) that dwell time is handsomely rewarded by some unspecified but superior-to-banner ad style, with a lucrative ‘action-per-thousand-eyeballs’ metric.
  • Whoops, Is That a Chasm I See? This school of thought holds that a large pivot needs to happen in the second screen space before early adoption turns into revenue. Some studies point out that actual second screen use is closer to 10% than to 80%. This is in line with VC Fred Wilson’s 30/10/10 rule of thumb for any App (TV or non-TV): less than 10% of your app’s downloaders are active users, and about 1% might be on concurrently (for Social TV or Social Ads). Combine this with my earlier blog post (pie chart included below for convenience) showing that most apps have 100K downloads or less, and you have an ad targeting some 10K users and social experiences across some 1K users. Not that 10K is a small number, but it is nowhere near enough for banner-style ad economics to add up to more than a month’s rent on a studio apartment.

TV App Graphs

  • Umbrella-Carrying Sun Worshippers. There is a ‘third way’, one that acknowledges but triages the best of both of the above. It concedes the ‘chasm’ view’s point that getting the cross product of app eyeballs and dwell time into the millions isn’t exactly around the corner. But the sunny side school also has a point: new ad categories may move the economics away from the obsession with eyeball counts in the millions, and away from trying to conjure up large $ numbers by multiplying minuscule web CPMs. A number of companies are working in this space without yet showing their hand, and a maybe plus a perhaps don’t add up to two yeses. However, between YuMe’s award-winning multi-screen ad units and Adobe’s multi-screen ad inventory management, there are in-market examples of tangible current products trying to create a superior ad experience and more nuanced analytics .. and therefore a more viable economic base.
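The ‘chasm’ arithmetic above can be laid out in a few lines. This is a sketch under the rule-of-thumb assumptions already stated (the 100K-download starting point and the 10%/1% funnel percentages come from the discussion, not from measured data):

```python
# 30/10/10-style funnel applied to a typical TV app at the
# 100K-download mark. The shares are rule-of-thumb assumptions.
downloads = 100_000
active_share = 0.10       # <10% of downloaders are active users
concurrent_share = 0.01   # ~1% of downloaders are on concurrently

ad_audience = int(downloads * active_share)          # targetable for ads
social_audience = int(downloads * concurrent_share)  # concurrent, for Social TV

print(ad_audience, social_audience)  # 10000 1000
```

At roughly 10K targetable users, CPM-based banner math stays firmly in studio-apartment-rent territory.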

And in the process of building a viable economic edifice for multi-screen TV, second screen ads could blur the lines between ads and content, between click-throughs and audience participation, and, in an oh-by-the-way manner, commoditize user panels. After all, why ask a user what they would do when you can observe what they actually do?


The State of TV (2nd Screen) Apps: The iOS View

In an earlier post, I talked about the state of Second Screen Apps on Android. No discussion can be complete without an iOS equivalent (after all, iOS was first off the starting block, and is still the first target of many app developers). Not surprisingly, measuring TV App activity on iOS presents a different set of challenges than Android does.

  • In the ‘iTunes wins here’ category: unlike the Android marketplace API tediousness, iTunes provides a simple market query interface and returns a no-muss, no-fuss JSON structure.
  • Conversely, in the ‘ah, typical Apple’ category: getting iTunes download counts is like pulling teeth, whereas Google Play was happy enough to give out coarse Android download ranges without much ado.
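To illustrate the ‘no-muss, no-fuss’ point, here is a minimal sketch of consuming an iTunes Search API response (a live query would look like `https://itunes.apple.com/search?term=TV&entity=software&limit=200`). The payload below is a hand-built stand-in using the documented field names; notice that nothing in it carries a download count:

```python
import json

# Hand-built stand-in for an iTunes Search API response. The field names
# (resultCount, results, trackName, primaryGenreName, averageUserRating,
# userRatingCount) match the documented response format; the app entries
# themselves are invented examples.
payload = """{
  "resultCount": 2,
  "results": [
    {"trackName": "Example TV Guide", "primaryGenreName": "Entertainment",
     "averageUserRating": 4.0, "userRatingCount": 1200},
    {"trackName": "Example Sports TV", "primaryGenreName": "Sports",
     "averageUserRating": 4.5, "userRatingCount": 300}
  ]
}"""

data = json.loads(payload)
apps = [(a["trackName"], a["userRatingCount"]) for a in data["results"]]
print(apps)  # download counts are conspicuously absent
```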

Given that the download numbers are kinda important (in fact, kinda the point), one is left with a few alternatives for inferring them.

The first alternative is to use a marketing heuristic that estimates the number of downloads as 30 times the number of user ratings. Beyond my skepticism of any such one-size-fits-all formula (Occam’s Razor notwithstanding), this heuristic has been verified only for Paid Apps, and the number of TV Apps that are paid is not large enough for the inference to be useful for this population.

The second alternative is to combine iTunes metadata about Apps with a bit of web scraping from app search engines such as Xyologic. This idea has legs, but the effort is significant, especially since Xyologic is only one of several App search engines and likely not the gospel truth.

Here, I settle on a third alternative, a hybrid of the previous two. As with the first approach, I apply a static multiplier to get from the number of user ratings to the number of downloads for any TV App. But as with the second approach, I sample Xyologic data to calculate that multiplier separately for each of the following quantized download ranges: <50, 50-10K, 10K-50K, 50K-250K, and >250K downloads. The quantization is just as well, as it turns out the multiplier depends significantly on the download range.
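As a sketch of this hybrid model, assume the per-band multipliers have already been calibrated from sampled Xyologic data. Only the 180 for the 10K-50K band is quoted in this post; every other multiplier below is an illustrative placeholder:

```python
# Per-download-band multipliers: downloads ~= (# user ratings) x multiplier.
# Only the 10K-50K value (180) is from the post; the rest are placeholders.
MULTIPLIER = {
    "<50": 20,
    "50-10K": 300,
    "10K-50K": 180,
    "50K-250K": 90,
    ">250K": 40,
}

def estimate_downloads(rating_count, band):
    """Estimate downloads from the iTunes user-rating count, given the
    coarse download band sampled from an app search engine."""
    return rating_count * MULTIPLIER[band]

print(estimate_downloads(150, "10K-50K"))  # 150 ratings -> 27000 downloads
```

In practice the coarse band itself comes from the app-search-engine sample; the point is only that the rating-to-download conversion is band-dependent.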

The table below shows the multiplier to apply to the number of user ratings returned by iTunes, for each download category, to arrive at a download number resembling what Xyologic provides.

iOS TV App Downloads

Eyeballing the multipliers:

  • It’s intuitive that the multiplier should decrease for more popular apps (as it does here). A lower multiplier means more reviews per 1000 downloads, and popular Apps are likely to have more engaged users and therefore a greater proportion of user reviews.
  • The multiplier of 180 for the 10K-50K download range is roughly equivalent to 5 reviews per 1000 downloads (1000/180 ≈ 5.6), which is also the average reviews:downloads ratio on the Android TV App Marketplace (as I described here).

Combining this download model with the iTunes ‘TV App’ data yields an App population (with clean records) of about 633 Apps. The App distribution looks something like the below.

iOS TV App

Somewhat surprisingly (or not, depending on your p.o.v.), this looks almost identical to the distribution I published earlier on Android TV Apps (re-included below for visual convenience).

android TV App

The similarity of the two datasets arguably increases the credibility of both (unless they are erroneous in remarkably similar ways). Some intuitions for the similarity across platforms:

  • Most serious App developers have both Android and iOS offerings.
  • They take the two platforms equally seriously, and do about as good an execution job on both.
  • It could also be that App popularity is a function of marketing budget, not platform, since the predominant way of finding TV Apps (as opposed to regular apps) is still via the program, not via search engines such as Xyologic, Hunch, or Play.
  • Or that the quality of a TV App experience is driven heavily by access to supplementary TV content (all of it a walled garden), and app developers’ access to TV content is largely agnostic to the development platform.

These and other related conundrums will be the topic of a future set of musings.

TV App Engagement : Beyond Download Counts

Those of us in the TV App space would like to understand the extent and trajectory of user interest in TV apps. A positive trajectory might indicate early excitement maturing into a user habit, upon which an industry can then exist. Unfortunately, app downloads (especially in a free app ecosystem) indicate awareness, but fall short of calibrating involvement. All of us who download apps are familiar with the “use once, forget forever” set in our Apps folders. More nuanced engagement studies around dwell time or biometrics are closely guarded secrets, revealed on an uneven basis across the App Ecosystem. The question is – can we do better than downloads in calibrating App Engagement across the TV App Ecosystem, using marketplace data that is commonly available for most apps?

One simple approach is to compute normalized ratings (i.e., reviews as a fraction of total downloads). The intuition is that people are more likely to take the trouble to rate or review an app if it is interesting, and if it is interesting, they probably use it more. Below, I’ve calculated a Figure of Merit (1000 × #Reviews / #Downloads) and its associated behaviors.
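The Figure of Merit calculation, together with the ‘tip of tail’ filter used for the summary, sketches out as follows (the app records are made-up stand-ins, not actual marketplace data):

```python
def figure_of_merit(reviews, downloads):
    """Reviews per thousand downloads."""
    return 1000 * reviews / downloads

def eligible(app):
    # Ignore 'tip of tail' apps: fewer than 5000 downloads or 50 reviews.
    return app["downloads"] >= 5000 and app["reviews"] >= 50

# Made-up stand-ins for scraped marketplace records.
apps = [
    {"name": "BigNetworkApp", "reviews": 525, "downloads": 100_000},
    {"name": "NicheSportsApp", "reviews": 100, "downloads": 5_000},
    {"name": "TipOfTailApp", "reviews": 3, "downloads": 200},
]

foms = {a["name"]: figure_of_merit(a["reviews"], a["downloads"])
        for a in apps if eligible(a)}
print(foms)  # {'BigNetworkApp': 5.25, 'NicheSportsApp': 20.0}
```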

A summary of the Figure of Merit data (ignoring ‘tip of tail’ apps with fewer than 5000 downloads and fewer than 50 reviews) yields the following:

    • Average Figure of Merit: 5.25 (i.e., on average, about 5 reviews per 1000 downloads)
    • Average TV App rating: 4.1
    • Average App ratings count: 1250

The middle 80% of apps get between 2 and 20 reviews per 1000 downloads, with a distribution that looks something like the below.


As one goes to either end of the distribution, the very popular apps get an order of magnitude more engagement (and conversely on the long tail).


Overall, this Figure of Merit (normalized ratings) is moderately useful, and generally correlates with intuitive notions of App quality. Qualitative observations based on the Figure of Merit data include:

  • Global brand + app strategy does well as a pair. 50% of the apps in the top 25 on the FOM scale (with FOM numbers ranging from 15 to 200) are either global brands or brands with a strong local presence (e.g., media companies of note in Sweden, Vietnam, and India).
  • Brand awareness doesn’t save bad apps. A number of well-known content brands have put out apps without a discernible content strategy or user benefit. These apps garner downloads .. and user disappointment. 42 of the 73 apps with over 250K downloads have below-average ratings; 8 of those 42 are from global media/entertainment companies and have ratings well under 3 (against a median rating of 4.1). So if you are a large, well-known brand and put out an app that ‘snookers’ people, people will take the trouble to publicly call you out.
  • Sports has a natural engagement advantage. Sports TV Apps score marginally higher on average ratings (4.25 vs. 4.1), but about 25% higher on the average Figure of Merit. People are thus more vocal (and generally more positive) about Sports Apps.

The State of TV (& 2nd Screen) Apps: Android

Thinking. Folks (including me) have been talking about the Appification of TV via second screen TV apps for some time now. This expectation has led to a plethora of announced and implied corporate second screen strategies. But no one seems to have put a number on how many of these announcements are backed up by concrete apps, or on how those apps are doing. This is a ‘start small’ exercise toward putting some numbers to that picture.

The simple exercise here is to collect and quantify app data in the Android marketplace, using a bit of marketplace scraping and some R ‘data wrangling’ to understand the Google Play TV App landscape.

The Data at First Blush. To my knowledge there are no public Java APIs to datamine Google Play, but thanks to the android market api and some tediousness, one gets the following preliminary result: there are about 500 (543 being the exact count returned by the API) TV apps on Android. Their breakdown by category, and the relative distribution of app downloads, are shown in the pie charts below.


Some of the more interesting observations here are:

  • Roughly equal distributions of apps self-categorizing as Media, Entertainment, and Games. Games is the Rodney Dangerfield category: with the exception of Mark Suster, no one in the community has called out Games as a category deserving of respect, and perhaps of a disproportionate amount of investment.
  • A 10-15% representation for Sports? Both surprising and not so surprising. Sports Apps aren’t easy to write, but if written well they find a ready and enthusiastic audience.
  • Now to the downloads. If one thinks of 100K or fewer downloads as the ‘poverty line’ (i.e., no amount of cleverness can lead to a lucrative $ number in terms of app monetization), about 66% of TV Apps live below the poverty line.
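The category and poverty-line tallies come straight out of counting the scraped records. A minimal sketch, with made-up stand-in records in place of the ~543 scraped apps (and exact download numbers standing in for Google Play’s coarse download ranges):

```python
from collections import Counter

# Made-up stand-ins for the scraped Google Play TV App records.
records = [
    {"category": "Media", "downloads": 250_000},
    {"category": "Entertainment", "downloads": 40_000},
    {"category": "Games", "downloads": 8_000},
    {"category": "Media", "downloads": 60_000},
    {"category": "Sports", "downloads": 500_000},
    {"category": "Games", "downloads": 90_000},
]

by_category = Counter(r["category"] for r in records)

POVERTY_LINE = 100_000  # downloads
below = sum(1 for r in records if r["downloads"] <= POVERTY_LINE)
share_below = below / len(records)

print(by_category.most_common())
print(f"{share_below:.0%} below the poverty line")
```

With these toy records, 4 of 6 apps sit at or below the line, echoing the roughly two-thirds figure observed in the real data.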

Speculation. So why is the data the way it is? A few theories:

  • Why so many games? Because independent publishers can create compelling (largely textual) experiences even without access to copyrighted TV-related content, assuming a Twitter future that is still somewhat ‘open access’.
  • Why the relative paucity of apps? 500 is a lot better than the 5 interactive TV applications of the recent past, but it is still a disproportionately small slice of the app space. Why? Because good apps need good content. On mobile, the content is the user plus web services; for TV Apps, the content is (copyrighted) TV.
  • Why the paucity of downloads? Because a TV show is currently the best way to get a TV app discovered, and not every app developer owns a TV show. Advances in the App Discovery space could go a long way toward making the download picture less bleak. A product punch line from Apple’s acquisition of Chomp, plus more activity around TV App containers, could change the picture rather quickly.

Unfinished Business. There’s a bunch of stuff I haven’t covered here (left for future little experiments): the state of TV Apps on iOS; a monetization argument for why I consider 100K downloads the poverty line; the extent of replication of capability (e.g., TV guides) across geographies; app property variance across large studios vs. small developers; and several other topics.

Fine Print. 

  1. The Android Market scraping isn’t foolproof, due to its limited query capability. True; however, 543 is a large enough sample size that I would posit it mirrors the actual TV app population in statistical behavior, even if the actual population size is off by a bit.
  2. Google Play isn’t 100% of the Android market. It may not include the dark matter (other marketplaces, or direct downloads from large publishers). But for our purposes, it’s close enough.
  3. Why focus on downloads, when downloads do not equal engagement? It’s true that downloads may not imply dwell time. But a lack of downloads is likely to imply a lack of engagement with an App, which is the pertinent point here.