[Billing Code 1410-72]
LIBRARY OF CONGRESS
Copyright Royalty Board
[Docket No. 16-CRB-0009-CD (2014-17)]
Distribution of Cable Royalty Funds
AGENCY: Copyright Royalty Board (CRB), Library of Congress.
ACTION: Final allocation determination.
SUMMARY: The Copyright Royalty Judges announce the allocation of shares of cable
royalty funds for the years 2014, 2015, 2016, and 2017 among six claimant groups.
DATES: This determination is effective [INSERT DATE OF PUBLICATION IN THE
FEDERAL REGISTER].
ADDRESSES: The final determination is posted in eCRB at https://app.crb.gov/. For
access to the docket to read the final determination and submitted background
documents, go to eCRB and search for docket number 16-CRB-0009-CD (2014-17).
FOR FURTHER INFORMATION CONTACT: Anita Brown, CRB Program
Specialist, (202) 707-7658, crb@loc.gov.
SUPPLEMENTARY INFORMATION:
Final Determination of Royalty Allocation
The purpose of this proceeding is to determine the allocation of shares of the
2014-2017 cable royalty funds among six claimant groups: the Joint Sports Claimants,
Commercial Television Claimants, Public Television Claimants, Canadian Claimants
Group, Settling Devotional Claimants, and Program Suppliers.1 The parties have agreed
to settlements regarding the shares to be allocated to the Music Claimants and National
Public Radio (NPR). Joint Notice of Settlement Regarding 2014-2017 Royalty Claims of
Music Claimants . . . at 1-2 (June 29, 2022); Joint Notice of Settlement and Motion for
Final Distribution Regarding Royalty Claims of National Public Radio at 1 (Jan. 7, 2022).
Between 2016 and 2022, the Judges ordered partial distributions of the 2014-2017
cable funds to the “Phase I” participants (including Music Claimants and NPR) according
to allocation percentages agreed upon by the participants. Order Granting Motion for
Partial Distribution (May 22, 2019); Order Granting Motion for Partial Distribution,
Docket No. 16-CRB-0009 CD (2014) (Aug. 15, 2016); Order Granting Motion for Partial
Distribution, Docket No. 16-CRB-0020 CD (2015) (June 6, 2017); Order Granting
Motion for Partial Distribution, Docket No. 17-CRB-0017 CD (2016) (Jul. 30, 2018).
In 2022, the Judges ordered the final distribution of the settled shares from the
remaining funds to Music Claimants and National Public Radio. Order Granting Motion
for Final Distribution to National Public Radio (Feb. 14, 2022), Order 23 Granting 2014-15 Cable Final Distribution to Music Claimants . . . (Dec. 7, 2022).
When the Judges ultimately order the final distribution of the remaining 2014-17
cable royalty funds, they will direct the Licensing Division of the Copyright Office to
adjust distributions to each participant to account for partial distributions and to apply the
allocation percentages determined herein.
Based on the record in this proceeding, the Judges make the following allocation
of deposited royalties.

1 The program categories at issue are as follows: “Canadian Claimants.” All programs broadcast on
Canadian television stations, except: (1) live telecasts of Major League Baseball, National Hockey League,
and U.S. college team sports, and (2) programs owned by U.S. copyright owners; “Commercial Television
Claimants.” Programs produced by or for a U.S. commercial television station and broadcast only by that
station during the calendar year in question, except those listed in subpart (3) of the Program Suppliers
category; “Devotional Claimants.” Syndicated programs of a primarily religious theme, but not limited to
programs produced by or for religious institutions; “Joint Sports Claimants.” Live telecasts of professional
and college team sports broadcast by U.S. and Canadian television stations, except programs in the
Canadian Claimants category; “Program Suppliers.” Syndicated series, specials, and movies, except those
included in the Devotional Claimants category. Syndicated series and specials are defined as including (1)
programs licensed to and broadcast by at least one U.S. commercial television station during the calendar
year in question, (2) programs produced by or for a broadcast station that are broadcast by two or more
U.S. television stations during the calendar year in question, and (3) programs produced by or for a U.S.
commercial television station that are comprised predominantly of syndicated elements, such as music
videos, cartoons, “PM Magazine,” and locally-hosted movies; “Public Television Claimants.” All programs
broadcast on U.S. noncommercial educational television stations. Order Lifting Stay and Adopting
Claimant Categories (Apr. 5, 2021). The categories are mutually exclusive and, in aggregate,
comprehensive.

Table 1: Royalty Allocations
(shares stated in percent)

                        2014     2015     2016     2017
Basic Fund
  CCG                   6.19    14.59    14.60    15.77
  CTV                  20.55    19.78    17.36    17.50
  JSC                  36.13    11.42    10.72    12.36
  Program Suppliers    21.21    28.29    25.53    23.29
  PTV                  11.07    19.18    24.78    25.25
  SDC                   4.85     6.74     7.01     5.83
3.75% Fund
  CCG                   6.96    18.05    19.41    21.10
  CTV                  23.11    24.48    23.08    23.41
  JSC                  40.63    14.13    14.25    16.53
  Program Suppliers    23.85    35.00    33.94    31.16
  SDC                   5.45     8.34     9.32     7.80
Syndex Fund
  Program Suppliers      100      100      100      100

PTV and JSC filed timely requests for rehearing on September 21, 2023
(Rehearing Requests). The Judges issued their ruling on the Rehearing Requests on
March 21, 2024 (Order on Rehearing), denying rehearing on any basis asserted by JSC in
its Rehearing Request and granting rehearing on a basis asserted by PTV in its Rehearing
Request to correct arithmetic errors. This Final Determination includes the corrections
contained in the Initial Determination of Royalty Allocation (Corrected and Redacted)
filed on March 29, 2024, which addressed technical and clerical errors.2 This Final
Determination also includes the corrections set forth in the March 29, 2024 Order on
Rehearing, which is included herein, as “Addendum A”, to be published in the Federal
Register.3
I. BACKGROUND
A. Legal Context
In 1976, Congress granted cable television operators a statutory license to enable
them to clear the copyrights to over-the-air television and radio broadcast programming
which they retransmit to their subscribers. The license requires cable operators to submit

2 See Initial Determination of Royalty Allocation (Corrected and Redacted) at 1.
3 See Order on Rehearing at 83 n.63 (“To the extent that corrections set forth in this Order might be
construed to reach beyond those identified in the Motions for rehearing or the rehearing authority in 17
U.S.C. 803(c)(2), the Judges also make such corrections under their authority to correct technical or clerical
errors in 17 U.S.C. 803(c)(4). For this reason, the Judges set forth the analysis herein also as a written
addendum to the Initial Determination, which is distributed to the participants of the proceeding via this
Order and will be published as part of the Final Determination, pursuant to 17 U.S.C. 803(c)(4).”)
semi-annual royalty payments, along with accompanying statements of account, to the
Copyright Office for subsequent distribution to copyright owners of the broadcast
programming that those cable operators retransmit. See 17 U.S.C. 111(d)(1). To
determine how the collected royalties are to be distributed among the copyright owners
filing claims for them, the Copyright Royalty Judges (Judges) conduct a proceeding in
accordance with chapter 8 of the Copyright Act. This determination is the culmination of
one of those proceedings.4 Proceedings for determining the distribution of the cable
license royalties historically were conducted in two phases. In Phase I, the royalties were
divided among programming categories. The claimants to the royalties have previously
organized themselves into eight categories of programming retransmitted by cable
systems: movies and syndicated television programming; sports programming;
commercial broadcast programming; religious broadcast programming; noncommercial
television broadcast programming; Canadian broadcast programming; noncommercial
radio broadcast programming; and music contained on all broadcast programming. In
Phase II, the royalties allotted to each category at Phase I were subdivided among the
various copyright holders within that category.5 In the most recent proceeding, regarding
cable royalties for the 2010-2013 period, the Judges broke with past practice by
combining Phase I and Phase II into a single proceeding in which the functions of
allocating funds between program categories and distributing funds among claimants

4 Prior to enactment of the Copyright Royalty and Distribution Reform Act of 2004, which established the
Judges program, royalty allocation determinations under the section 111 license were made by two other
bodies. The first was the Copyright Royalty Tribunal, which made distributions beginning with the 1978
royalty year, the first year in which cable royalties were collected under the 1976 Copyright Act. Congress
abolished the Tribunal in 1993 and replaced it with the Copyright Arbitration Royalty Panel (“CARP”)
system. Under this regime, the Librarian of Congress appointed a CARP, consisting of three arbitrators,
which recommended to the Librarian how the royalties should be allocated. Final distribution authority,
however, rested with the Librarian. The CARP system ended in 2004. See Copyright Royalty and
Distribution Reform Act of 2004, Pub. L. No. 108-419, 118 Stat. 2341 (Nov. 30, 2004).
5 The Judges last adjudicated an allocation (Phase I) determination for royalty years 2010 to 2013.
See Final Allocation Determination, Distribution of the 2010 to 2013 Cable Royalty Funds, 84 FR 3552
(Feb. 12, 2019) (2010-13 Determination).
within those categories proceeded in parallel.6 This determination addresses the
Allocation Phase for royalties collected from cable operators for the years 2014, 2015,
2016 and 2017.
The statutory cable license places cable systems into three classes based upon the
fees they receive from their subscribers for the retransmission of over-the-air broadcast
signals. Small- and medium-sized systems pay a flat fee. See 17 U.S.C. 111(d)(1).
Large cable systems (“Form 3” systems)7—whose royalty payments comprise the lion’s
share of the royalties distributed in this proceeding—pay a percentage of the gross
receipts they receive from their subscribers for each distant over-the-air broadcast station
signal they retransmit.8 The amount of royalties that a cable system must pay for each
broadcast station signal it retransmits depends upon how the carriage of that signal would
have been regulated by the Federal Communications Commission (“FCC”) in 1976, the
year in which the current Copyright Act was enacted.
The royalty scheme for large cable systems employs a statutory device known as
the distant signal equivalent (DSE), which is defined at 17 U.S.C. 111(f)(5). The cable
systems, other than those paying the minimum fee, pay royalties based upon the number
of DSEs they retransmit. The greater the number of DSEs a cable system retransmits, the
larger its total royalty payment. The cable system pays these royalties to the Copyright

6 Second Reissued Order Granting in Part Allocation Phase Parties’ Motion to Dismiss Multigroup
Claimants and Denying Multigroup Claimants’ Motion for Sanctions Against Allocation Phase Parties,
Docket No. 14-CRB-0010-CD (2010-13) (Apr. 25, 2018). The Judges discontinued use of the terms Phase
I and Phase II and use the terms Allocation Phase and Distribution Phase instead. Id. n.4. This
determination addresses the Allocation Phase of the proceeding.
7 “Form 3” cable systems, so named because they account to the Copyright Office for retransmissions and
royalties on “Form 3.” The Form 3 filing is required because they have semiannual gross receipts in excess
of $527,600. These systems must submit an SA3 Long Form to the US Copyright Office. They are the
only systems required to identify which of the stations they carry are distant signals. Royalty payments
from Form 3 systems accounted for over 90% of the total royalties that cable systems paid during 2014–
2017. Expert Report of Christopher J. Bennett, Ph.D., Amended Corrected, Trial Ex. 7203, ¶ 11 n.2
(Bennett ACWDT).
8 The cable license is premised on the Congressional judgment that large cable systems should only pay
royalties for the distant broadcast station signals that they retransmit to their subscribers and not for the
local broadcast station signals they provide. However, cable systems that carry only local stations are still
required to submit a statement of account and pay a basic minimum fee. See Distribution Order,
Distribution of the 2000–2003 Cable Royalty Funds, 75 FR 26798 n.2 (May 12, 2010) (2000-03
Distribution Order).
Office. These fees comprise the “Basic Fund.” See 17 U.S.C. 111(d)(1)(B). In addition
to the Basic Fund, large cable systems also may be required to pay royalties into one of
two other funds that the Copyright Office maintains: the Syndex Fund and the 3.75%
Fund.
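To make the structure of the Form 3 calculation concrete, the following is a minimal sketch of a tiered, DSE-based royalty subject to a minimum fee. It is illustrative only: the tier percentages are assumed placeholder values and are not drawn from this determination, while the 1.064% minimum-fee rate is the figure used in the record evidence discussed later in this determination.

```python
# Illustrative sketch only: a Form 3 CSO's base royalty is a percentage of its
# gross receipts for each distant signal equivalent (DSE) it retransmits, and
# the CSO owes at least a minimum fee.  The tier rates below are assumed for
# illustration; the 1.064% minimum-fee rate is the figure used in the record.

def form3_base_royalty(gross_receipts: float, dses: float,
                       tier_rates=(0.01064, 0.00701, 0.00330)) -> float:
    """Hypothetical tiered DSE calculation: the first DSE at tier_rates[0],
    DSEs two through four at tier_rates[1], each further DSE at tier_rates[2]."""
    first, middle, extra = tier_rates
    royalty = min(dses, 1.0) * first * gross_receipts
    royalty += min(max(dses - 1.0, 0.0), 3.0) * middle * gross_receipts
    royalty += max(dses - 4.0, 0.0) * extra * gross_receipts
    return royalty

def form3_royalty_owed(gross_receipts: float, dses: float) -> float:
    """The CSO owes the larger of its DSE-based base royalty and the minimum fee."""
    minimum_fee = 0.01064 * gross_receipts
    return max(form3_base_royalty(gross_receipts, dses), minimum_fee)

# A CSO with $10 million in semiannual gross receipts carrying 0.25 DSE owes the
# minimum fee ($106,400); carrying 3.0 DSEs, its tiered base royalty ($246,600) binds.
print(form3_royalty_owed(10_000_000, 0.25))
print(form3_royalty_owed(10_000_000, 3.0))
```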
As noted above, the utilization of the cable license is linked with how the FCC
regulated the cable industry in 1976.9 FCC rules at the time restricted the number of
distant broadcast signals a cable system was permitted to carry (“the distant signal
carriage rules”). National Cable Television Assoc., Inc. v. Copyright Royalty Tribunal,
724 F.2d 176, 180 (D.C. Cir. 1983). FCC rules also allowed local broadcasters and
copyright holders to require cable systems to delete (or blackout) syndicated
programming from imported signals if the local station had purchased exclusive rights to
the programming (“syndicated exclusivity” or “syndex” rules). Id. at 187. In 1980, the
FCC repealed both sets of rules. Id. at 181.
The Copyright Royalty Tribunal (CRT) initiated a cable rate adjustment
proceeding to compensate copyright owners for royalties lost as a result of the FCC’s
repeal of the rules. Final rule, Adjustment of the Royalty Rate for Cable Systems; Federal
Communications Commission’s Deregulation of the Cable Industry, Docket No. CRT 81-2, 47 FR 52146 (Nov. 19, 1982). The CRT adopted two new rates applicable to large
cable systems making section 111 royalty payments. The first, to compensate for repeal
of the distant signal carriage rules, was a 3.75% surcharge of a large cable system’s gross
receipts for each distant signal the carriage of which would not have been permitted
under the FCC’s distant signal carriage rules. Royalties paid at the 3.75% rate—
sometimes referred to by the cable industry as the “penalty fee”—are accounted for by

9 FCC regulation of the cable industry was impacted by passage of the 1976 Copyright Act that created the
compulsory license for cable retransmissions codified in section 111. See Report and Order, Docket Nos.
20988 & 21284, 79 F.C.C. 663 (1980), aff’d sub nom. Malrite T.V. v. FCC, 652 F.2d 1140, 1146 (2d Cir.
1981).
the Copyright Office in the “3.75% Fund,” which is separate from royalties kept in the
Basic Fund. See id.; see also 17 U.S.C. 111(d); 37 CFR part 387. The second rate the
CRT adopted, to compensate for the FCC’s repeal of its syndicated exclusivity rules, is
known as the “syndex surcharge.” Large cable operators were required to pay this
additional fee for carrying signals that were or would have been subject to the FCC’s
syndex rules. Syndex Fund fees are accounted for separately from royalties paid into the
Basic Fund.10
Royalties in the three funds—Basic, 3.75%, and Syndex—are the royalties to be
distributed to copyright owners of non-network broadcast programming in a section 111
cable license distribution proceeding. See 37 CFR part 387.11
Cable system operators (CSOs) are required to file Statements of Account with the
Copyright Office detailing subscription revenues and specific television signals they
retransmit distantly, and to deposit section 111 royalties calculated according to the
reported figures. Testimony of Gregory S. Crawford, Ph.D., Corrected (2010-2013),
Trial Ex. 7031, ¶ 74 & n.37 (“Crawford 2010-2013 CWDT”).
B. Posture of the Current Proceeding
In February 2019, the Copyright Royalty Board (CRB) published notice in the
Federal Register announcing commencement of proceedings and seeking Petitions to

10 In 1989, in response to changes in the cable television industry and passage of the Satellite Home Viewer
Act of 1988, the FCC reinstated syndicated exclusivity rules. The reinstated rules differed from the
original syndex rules, giving rise to a petition to the CRT for adjustment or elimination of the syndex
surcharge. See Final Rule, Adjustment of the Syndicated Exclusivity Surcharge, Docket No. 89-5-CRA, 55
FR 33604 (Aug. 16, 1990). The CRT held that “the syndicated exclusivity surcharge paid by Form 3 cable
systems in the top 100 television markets is eliminated, except for those instances when a cable system is
importing a distant commercial VHF station which places a predicted Grade B contour, as defined by FCC
rules, over the cable system, and the station is not “significantly viewed” or otherwise exempt from the
syndicated exclusivity rules in effect as of June 24, 1981. In such cases, the syndicated exclusivity
surcharge shall continue to be paid at the same level as before.” Id. See Final Rule, Cable Television
Services; Program Exclusivity in the Cable and Broadcast Industry, 54 FR 12913 (Mar. 29, 1989), aff’d
sub nom. United Video, Inc. v. FCC, 890 F.2d 1173 (D.C. Cir. 1989); 47 CFR 73.658(m)(2) (1989); 47
CFR 76.156 (1989). The present proceeding deals only with allocation of those royalties among copyright
owners in the various program categories.
11 The CRB last adjusted cable Basic, 3.75%, and Syndex rates in 2021, for the period January 1, 2020,
through December 31, 2024. See Final Determination, Adjustment of Cable Statutory License Royalty
Rates, Docket No. 20-CRB-0008-CA (2020-2024), 86 FR 72845 (Dec. 23, 2021). This adjustment was
pursuant to a negotiated agreement.
Participate to determine distribution of 2014, 2015, 2016, and 2017 royalties under the
cable and satellite licenses.12
On March 20, 2019, the Judges issued a Notice of Participants and Order for
Preliminary Action to Address Categories of Claims. On April 5, 2021, they issued an
Order . . . Adopting Claimant Categories in which they identified eight categories of
claimants for the proceeding: (1) Canadian Claimants, (2) Commercial Television
Claimants; (3) Devotional Claimants, (4) Joint Sports Claimants, (5) Music Claimants,
(6) National Public Radio, (7) Program Suppliers, and (8) Public Television Claimants.
National Public Radio and Music Claimants reached settlements with the other claimant
groups and received respective final distributions. Order Granting Motion for Final
Distribution to National Public Radio (Feb. 14, 2022), Order 23 Granting 2014-15 Cable
Final Distribution to Music Claimants . . . (Dec. 7, 2022).
With the settlement of the Music Claimants’ share, only the Program Suppliers
claimant group has an interest in the royalties in the Syndex Fund. Program Suppliers’
Post Hearing Brief ¶ 81 (PS PHB). Public TV Claimants claim a share only of the Basic
Fund. Public Television’s Post-Hearing Brief at 83 (PTV PHB).
The hearing in the present proceeding commenced on March 20, 2023, and
concluded on April 20, 2023. During that period, the Judges heard live testimony from
33 witnesses and admitted written and designated testimony from a number of additional
witnesses. The Judges admitted into the record more than 400 exhibits. Many motions

12 Notice…, Distribution of Cable Royalty Funds, Docket No. 16-CRB-0009-CD (2014-17), 84 FR 2930
(Feb. 8, 2019); Notice…, Distribution of Satellite Royalty Funds, Docket No. 16-CRB-0010-SD (2014-17),
84 FR 2931 (Feb. 8, 2019). The CRB received Petitions to Participate from Broadcast Music, Inc.
(“BMI”), the American Society of Composers, Authors and Publishers (“ASCAP”), and SESAC
Performing Rights (jointly, the “Music Claimants”); Canadian Claimants Group (“CCG”); Global Music
Rights; Public Broadcasting System (“PBS”) on behalf of Public Television Claimants (“PTV”); Settling
Devotional Claimants (“SDC”); Joint Sports Claimants (“JSC”); Major League Soccer (“MLS”);
Multigroup Claimants; Commercial Television Claimants represented by the National Association of
Broadcasters (“CTV”), National Public Radio for NPR Joint Claimants (“NPR”); David Powell; and the
Motion Picture Association of America for MPAA-represented Program Suppliers (“Program Suppliers”
or “PS”). Subsequently, MLS filed a notice that it would not participate separately in the allocation phase,
eCRB no. 26935, and Mr. Powell was dismissed as a participant, eCRB. no. 22314. Multigroup Claimants
expressed an intention to participate in the allocation phase, eCRB no. 25455, but did not file a written
direct statement and did not participate.
related to the hearing were filed and ruled on. Participants made closing arguments on
June 12, 2023, after which time the Judges closed the record.
C. Allocation Standard
Congress did not establish a statutory standard in section 111 for the Judges (or
their predecessors) to apply when allocating royalties among copyright owners or
categories of copyright owners. However, through determinations by the Judges and
their predecessors (the Copyright Royalty Tribunal, the CARPs, and the Librarian of
Congress), the allocation standard has evolved, and the present standard is one of
“relative marketplace value.”13 See Distribution Order, Distribution of the 2004 and
2005 Cable Royalty Funds, 75 FR 57065 (Sept. 17, 2010) (2004-05 Distribution Order).
“Relative marketplace values” in these proceedings have been defined as
valuations that “simulate [relative] market valuations as if no compulsory license
existed.” Final Rule, Distribution of 1998 and 1999 Cable Royalty Funds, 69 FR 3608
(Jan. 26, 2004) (1998-99 Librarian Order). Because such a market does not exist (having
been supplanted by the regulatory structure), the Judges are required to construct a
“hypothetical market” that generates the relative values that approximate those that
would arise in an unregulated market. 2004-05 Distribution Order at 57065; see also
Program Suppliers v. Librarian of Congress, 409 F.3d 395, 401-02 (D.C. Cir. 2005)
(“[I]t makes perfect sense to compensate copyright owners by awarding them what they
would have gotten relative to other owners ….”).14
II. INTRODUCTION TO REGRESSION SECTION
Four parties have proposed that the Judges utilize regression analysis to estimate
the relative marketplace value of each party’s programs distantly retransmitted by CSOs

13 In this proceeding, the Judges distinguish between “relative values” (to describe the allocation shares),
and absolute “fair market values.” Because the royalties at issue in this proceeding are regulated and not
derived from any actual market transactions, they do not correspond with absolute dollar royalties that
would be generated in a market and thus would not reflect absolute “fair market value.”
14 The Judges discuss the relative marketplace value standard in more detail, infra, as applied to the facts of
this proceeding.
during the four-year period 2014-2017. Each party relies on testimony from economic
experts to support its position. CCG relies on the testimony of Dr. Lisa George. CTV
relies on the testimony of Dr. Leslie Marx and the supportive testimony of Dr. Christopher
Bennett. Program Suppliers rely on the testimony of Dr. Cleve Tyler and the supportive
testimony of Dr. Gray. Finally, PTV relies on the testimony of Dr. John Johnson.
Two parties oppose all of the regression approaches on which each of the above
parties relies. The SDC, through the testimony of economists Drs. Erkan Erdem and
Daniel Rubinfeld, opposes the regression approach for many of the same reasons it
(unsuccessfully) opposed the regressions proffered in the 2010-13 allocation proceeding,
which was the most recent section 111 allocation proceeding. However, the SDC has
also presented arguments that are differentiated from those it made in that prior
proceeding. JSC, although it relied in part on a regression approach in the prior
proceeding, opposes the regression approaches through the testimony of two economists,
Dr. W. Robert Majure and Dr. John Asker, and a statistician, Mr. R. Garrison Harvey.
Dr. Marx, identified above as an expert who relies on the regression approach,
does so only for the 2014 royalty year. For the 2015-2017 period, she opposes the use of
the regression approach, based on industry changes that she maintains (consistent with a
criticism from the other opposing experts listed above) diminished the quality of the
available economic data necessary to conduct an appropriate regression.
The models of each of the four experts who proffered regression analyses are
discussed individually below, together with the rebuttals levied by the opposing experts.
However, in order to understand and contextualize the regression-related evidence, it is
helpful to address several overarching issues that color the Judges’ analysis and
conclusions. Accordingly, before jumping into the specific regression models, the Judges
first (1) consider in greater detail their allocation standard of “relative marketplace
value”, (2) address the changing impact of the “minimum fee” in the 2014-2017 period,

(3) evaluate assertions of inappropriate econometric practice (“specification searching”)
that may compromise the regression approaches, and (4) analyze questions regarding
whether certain types of PTV programs are properly included within the regression
analyses.
After clearing this analytical underbrush, the Judges proceed to a discussion of the
sequential presentation of the parties’ regression models, followed by the Judges’
“Analysis and Conclusions” regarding those models. Finally, the Judges consider several
additional important issues arising from the regressions that relate specifically to (1) the
CCG claims for Canadian programming issues and (2) the 3.75% Fund.
III. THE DATA RELIED ON BY THE PARTIES
All of the parties’ experts who relied on data detailing royalty reporting and
programming information essentially utilized the same data sources and processed the
data in basically the same manner. Specifically, the parties engaged in the following
steps:
1. Establish a method to link the CSOs’ distant signal carriage to the programs
carried on each signal, by merging CSO and distant signal carriage data to
television programming and scheduling data (as detailed below).
2. Obtain a dataset on distant signal carriage from Cable Data Corporation
(CDC) that covers each semiannual accounting period from 2014-1 through
2017-2 for the larger “Form 3” cable systems.15 CDC compiles and digitizes
this dataset directly from the SA3 Statement of Account (SOA) forms
that Form 3 cable systems are required to file semiannually at the Licensing

15 “Form 3” systems are cable systems with semiannual gross receipts in excess of $527,600 that are
required to submit an SA3 Long Form to the US Copyright Office. They are the only systems required to
identify which of the stations they carry are distant signals, and they account for over 90% of the total
royalties paid by all cable systems during 2014–2017.
Section of the Copyright Office. (The CDC data is set forth in the Written
Direct Testimony of Jonda K. Martin.)
3. Obtain through these SOAs, for each CSO, information about its (a)
ownership, rates, gross receipts, total number of subscribers, and communities
served, and (b) the identity of every broadcast television station carried and a
calculation of royalties owed for the transmission of distant signals under
section 111.
4. Obtain station, program, and scheduling data from Red Bee Media (formerly
FYI Television, Inc.) to merge with the foregoing carriage and royalty data.
(Red Bee Media is an international broadcasting and media services company
that publishes television airing data, using programming data that it sources
directly from stations in the form of interactive program guides.)
5. Examine Red Bee Media’s database of U.S. and Canadian broadcast and
cable channels carried by U.S. CSOs, together with network data and detailed
program and scheduling data for the period January 1, 2014, through
December 31, 2017, to identify, per station, (a) program titles, (b) program
type/category, (c) originating station, and (d) date and time of program airing.
6. Obtain Canadian television program log data from the Canadian
Radio‑Television and Telecommunications Commission (CRTC), which
regulates and supervises broadcasting and telecommunications within Canada.
7. Develop and apply an algorithm, using the aforementioned data, that assigns
program airings to their correct categories.
8. Review and confirm the results and make any modifications that are
appropriate.
Amended Corrected Written Direct Testimony of Christopher Bennett, Ph.D., Trial Ex.
7203, ¶¶ 10-27 (Bennett ACWDT) (describing the CTV data process); Corrected Written

Direct Testimony of R. Garrison Harvey, Trial Ex. 7105, tech. app., pt. A (Harvey
CWDT) (describing the JSC data process); Written Direct Testimony of John H. Johnson,
IV, Trial Ex. 7300, ¶¶ 46-51 & app. G (Johnson WDT) (describing the PTV data
process); Written Direct Testimony of Lisa M. George, Ph.D., Trial Ex. 7403, at 47-50 &
app. B (George WDT) (describing the CCG data process, also supplemented with U.S.
Census income information); Amended Corrected Written Direct Testimony of Jeffrey S.
Gray, Trial Ex. 7605, ¶¶ 16-18; 32-34, & 39 n.23 (describing the Program Suppliers’ data
process).
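The following is a minimal sketch, in pandas, of the data assembly described in the numbered steps above: merge the CDC distant-signal carriage data with the Red Bee Media scheduling data and roll program minutes up to claimant categories. The column names and the assign_category() helper are hypothetical stand-ins; the actual linking keys and category-assignment algorithms are those described in the cited testimonies.

```python
import pandas as pd

def assign_category(airing: pd.Series) -> str:
    """Hypothetical stand-in for the parties' category-assignment algorithms
    (step 7 above), which map each program airing to one claimant category."""
    return airing.get("category", "Program Suppliers")

def build_category_minutes(cdc_carriage: pd.DataFrame,
                           red_bee_schedule: pd.DataFrame) -> pd.DataFrame:
    """Link each CSO's distant-signal carriage to the programs aired on those
    signals, then total minutes per CSO, accounting period, and category."""
    schedule = red_bee_schedule.copy()
    schedule["category"] = schedule.apply(assign_category, axis=1)
    # Step 1: merge carriage data with programming/scheduling data by station
    # and semiannual accounting period (column names assumed).
    merged = cdc_carriage.merge(schedule,
                                on=["station_id", "accounting_period"],
                                how="left")
    # Aggregate to the level used in the analyses: minutes by category.
    return (merged
            .groupby(["cso_id", "accounting_period", "category"], as_index=False)
            ["program_minutes"].sum())
```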
Given the voluminous nature of the data relating to programming and minutes, the
experts encountered several problems during data assembly and analysis. The record
reflects that most of the data-based problems were resolved
before the experts filed their direct testimonies, and there were some data-related
amendments and corrections set forth in subsequent testimonies. To the extent any of the
data problems were unresolved, material, and need to be addressed in order for the Judges
to properly allocate shares, those data problems are discussed in this determination.
IV. THE ROLE OF REGRESSION ANALYSIS IN THE STATUTORY
CONTEXT
Section 111 sets forth no standard for the Judges (or their predecessors) to apply
in allocating royalties arising from the payments made by CSOs. This was no mere
oversight. The legislative history makes it clear that Congress intentionally omitted a
standard to guide the Judges:
[T]he bill does not include specific provisions to guide … determining the
appropriate division among competing copyright owners of the royalty fees
collected from cable systems under section 111 [because] it would not be
appropriate to specify particular, limiting standards for distribution. Rather,
the Committee believes that the [adjudicator] should consider all pertinent
data and considerations presented by the claimants.

House Report No. 94-1476, Notes of Committee on the Judiciary. This standardless
delegation has led the parties, as well as the Judges and their predecessors, to invoke an
evolving set of five broad factors, which have waxed and waned, to consider when
allocating royalties among program category claimants. As the Judges recounted in a
prior proceeding:
[T]he standards for determining distribution awards have changed
dramatically since the inception of the license. In the first Phase I
[allocation] proceeding, the Copyright Royalty Tribunal identified three
primary factors to guide its determinations: (1) The harm to copyright
owners caused by distant signal retransmissions; (2) the benefit derived by
cable systems from those retransmissions; and (3) the marketplace value of
the copyrighted works retransmitted. 45 FR 63026, 63035 (September 23,
1980). The Tribunal also identified two secondary factors: (1) The quality
of the retransmitted material; and (2) time-related considerations. Id. By
the time of the last fully litigated Tribunal determination, the Tribunal
dropped its consideration of the two secondary factors. 57 FR 15286 (April
27, 1992). The first CARP to undertake a Phase I distribution, the 1990–92
proceeding, discarded the ‘‘harm’’ criterion in its consideration …. That
action was upheld by the Librarian of Congress and, subsequently, the Court
of Appeals. Nat’l Ass’n of Broadcasters v. Librarian of Congress, 146 F.3d
907 (DC Cir. 1998). The 1998–99 CARP refined the approach further still,
noting that ‘‘every party to this proceeding appears to accept ‘relative
marketplace value’ as the sole relevant criterion that should be applied by
the Panel.’’ CARP Report at 10 (emphasis in original). As a consequence,
the CARP announced that its ‘‘primary objective is to ‘simulate [relative]
market valuation’ as if no compulsory license existed.’’ Id. The Librarian
upheld this conclusion as well, and the Court of Appeals once again
affirmed. Program Suppliers v. Librarian of Congress, 409 F.3d 395 (DC
Cir. 2005).
Distribution Order, Distribution of the 2000–2003 Cable Royalty Funds, 75 FR 26798,
26801-02 (May 12, 2010) (2000-03 Distribution Order).16
The D.C. Circuit Court of Appeals has recognized that “the process that Congress
ordained” has placed the Judges and their predecessors in a context where “mathematical

16 “Fee-generation,” discussed elsewhere in this determination, is a method proffered to identify relative
marketplace value. Id. at 26804 (the “fee generation approach should be accorded deference, not as the
methodology to determine the relative marketplace value but as a methodology to determine that value.”).
Other approaches proffered more recently have been advanced in order to apply the present standard,
“relative marketplace value.” See 2010-13 Determination at 3556 (identifying “[r]egression analyses, CSO
survey results, viewership measurements, a changed circumstances analysis, and a cable content analysis”
as approaches to estimate relative marketplace value).
exactitude … appears well-nigh impossible [and] rough justice in dividing up the royalty
pie seems to be …inevitable.” Nat’l Ass'n of Broadcasters v. Copyright Royalty
Tribunal, 772 F.2d 922, 926 (D.C.Cir.1985) (emphasis added) (“NAB”). Moreover,
despite the shifts in the administrative standard for allocating royalties, the D.C. Circuit
has continued to note this practical concern. See, e.g., Settling Devotional Claimants v.
Copyright Royalty Board, 797 F.3d 1106, 1121 (D.C. Cir. 2015).
It is in the context of this “rough balancing of hotly competing claims,” NAB at
940, that the Judges find it appropriate to rely (in part) on regression approaches in this
proceeding. The counter-argument that the regressions do not generate a proxy for price
that meets the exactitudes of econometric theorizing may be correct, but it appears to be a
precise answer to the wrong question, namely, what is the price that would obtain in a
marketplace ill-defined in the record in this proceeding?
The Judges have experience in considering market proxies when exercising their
companion jurisdiction of setting royalty rates for certain forms of music and sound
recording distributions. In those proceedings, the parties proffer, and the Judges
consider, benchmark evidence from analogous markets, market-based evidence from the
regulated market itself, economic models, economic experiments, and survey evidence –
all in an attempt to identify applicable market factors. Often, more than one of these
approaches are proffered in the same proceeding, and the Judges consider whether to
apply more than one model in rendering a determination. Here, the parties have provided
evidence from the regulated market itself, in the form of regression analyses, and survey
evidence, in the form of the Bortz Survey.
Focusing here on the criticism of the regression evidence generated from the
regulated market itself,17 the Judges consider the emphasis of the regression opponents
upon the exactitude of the price proxies, and find that fixation to be dubious. As the
17 The Judges focus on the Bortz Survey infra.

Judges have explained, also in their rate determinations, intellectual property goods
(whether retransmitted television stations or streams of musical works or sound
recordings) are often licensed at various royalty rates because the nature of these goods
invites price discrimination. See, e.g., Final rule and order, Determination of Royalty
Rates and Terms for Making and Distributing Phonorecords (Phonorecords III), 84 FR
1918, 1980 (Feb. 5, 2019) (dissent, Strickler, J.) (for intellectual property goods there
“exist many alternative rate structures with varying rates for various segments of the
market … forms of ‘price discrimination,’ which, in the broadest sense, means simply a
departure from a single, per-unit price.”). Thus, the very idea of a single econometrically
correct price for the royalties at issue in this proceeding is fanciful, particularly in the
absence of any evidence of such prices or even a methodology to establish price.
Additionally, in line with the D.C. Circuit’s acknowledgment that these allocation
proceedings may afford the Judges only the ability to dispense “rough justice,” the Judges
note an economic corollary: It is better to be “roughly correct” than “precisely wrong.”18
Similarly, in matters of econometrics, Professor Kennedy, cited infra by parties on both
sides of the regression divide in this proceeding, has cautioned econometricians against
making what he calls “Type III errors[,] . . . when a researcher produces the right answer
to the wrong question.” Peter Kennedy, A Guide to Econometrics 391 (5th ed. 2003).
Indeed, Professor Kennedy, then echoing the quote attributed to Keynes, advises that in
econometric practice “a corollary of this rule is that an appropriate answer to the right
question is worth a great deal more than a precise answer to the wrong question.” Id.
In this proceeding, counsel for the SDC, a party vigorously advancing the price-based criticism of the regressions, argues that application of any regression analyses
would indeed be “rough” but acknowledges that, as for “justice,” only the Judges could

18 Attributed to John Maynard Keynes. See, e.g., https://graciousquotes.com/john-maynard-keynes/ (last
accessed August 28, 2023).
say. 6/12/23 Tr. 6007-08 (closing argument). Counsel is essentially correct on both
points. First, the use of regression analyses is not precise, but rather “rough,” at least
compared to the exactitude of a full-fledged hedonic regression or a discrete choice
approach noted by SDC’s economic witnesses as possible alternatives (but not proffered
as alternative models). And further, Congress most clearly left to the Judges the decision
as to the standard to be applied and the methods by which the standards could be
effectuated.19
V. MINIMUM FEE ISSUE
A. CCG Position on the Minimum Fee Issue
CCG argues that “[it] is incorrect to claim that regressions are not useful . . .
because of the minimum fee structure,” or because of “the presence of more minimum
fee or ‘excess capacity’ systems” in the 2015-2017 period compared to the prior four
years. Proposed Findings of Fact and Conclusions of Law of the Canadian Claimants
Group (CCG PFF) at 72-73. In support of this argument, CCG asserts that the
regressions proffered in this proceeding do not require accurate measures when the
royalty fees “actually paid” are the minimum fees, even though they may be “poor
proxies for price.” CCG PFF ¶ 197 (and record citations therein) (emphasis added).
Rather, CCG maintains that the regression coefficients – which are calculated using
unpaid subscriber-group base fees – nonetheless provide useful information regarding the
correlation between “carriage decisions and royalty payments.” CCG PFF ¶ 197 (and
19 SDC’s counsel’s argument was in line with the D.C. Circuit’s understanding that the Judges must by
necessity engage in “rough justice” in these allocation proceedings, but he protested that any rough variant
of justice that relied on one or more of these regressions would not constitute “rough economic justice.” Id.
(emphasis added). The Judges disagree, as do their predecessors who have relied on these models, and as
do the economists/econometricians who have proffered regression-based models in this and prior
proceedings. In this regard, the Judges were struck by a warning given by SDC’s counsel that, if the
Judges “adopt[ed] the Tyler [M]odel on a theory of “rough economic justice” without discarding the
“relative market value” standard, [they] would inhibit the parties’ ability to present top-shelf
economists ….” SDC PHB at 64 (emphasis in original). The Judges agree with Program Suppliers’
counsel who rightly took umbrage at the “not-so-subtle condescending posture of this remark ….”
Program Suppliers PHRB at 41. The expert witnesses certainly do disagree among each other, but the
experience and education of the economists/econometricians who have proffered their regression
approaches belie the ad hominem argument by SDC’s counsel.
record citations therein). In further support, CCG cites to a statement by the Judges in the
prior proceeding, citing Final Allocation Determination, Distribution of Cable Royalty
Funds, Docket No. CONSOLIDATED 14-CRB-0010-CD (2010-2013), 84 FR 3552,
3555-56 n.17 (Feb. 12, 2019) (2010-13 Determination).20
CCG acknowledges, though, that reliance in these regressions on minimum-fee-paying CSOs generates “measurement error,” but claims that this is not a concern,
because it is “an ordinary part of regression . . . reduc[ing] precision but . . . not bias[ing]
claimant shares.” CCG PFF ¶ 198 (citing 4/18/23 Tr. 5125-26 (George)). In fact, CCG
maintains that the data pertaining to CSOs that pay only the minimum fee reveals that, for
them, the value of the distant signal is essentially zero – information that could not have
been ascertained from data in an unregulated market.21 CCG PFF ¶ 199 (citing 4/18/23 Tr.
5139-41 (George); Written Rebuttal Testimony of Lisa George, Trial Ex. 7404, at 15-16,
47 (George WRT)).
Focusing on the dramatic increase in the number of minimum-fee-only CSOs,
CCG dichotomizes this cohort. With regard to CSOs that “do not carry distant signals” at
all, CCG reasons that their voluntary refusal to retransmit means that they cannot be
used to determine the value of distant signals in a regression.22 CCG PFF ¶ 201 (citing
George WRT at 15; 4/18/23 Tr. 5141 (George)). And, with regard to the CSOs that do
carry some distant signals, but still have “excess capacity” and thus also pay only the
minimum fee, CCG maintains that “these are the same ones that would determine value
absent the compulsory license.” CCG PFF ¶ 201 (citing George WRT at 15; 4/18/23 Tr.
5141 (George)).

20 In fact, footnote 17 cited by CCG does not address this minimum fee issue.

21 The minimum fee is a fixed (sunk) cost. A CSO that pays only the minimum fee has a marginal royalty
cost to retransmit a signal equal to zero. Thus, a minimum-fee-paying CSO’s decision not to retransmit any
signal indicates that the net value of retransmittal is zero for that CSO (and may even be negative given
transmission and/or opportunity costs).
22 CCG maintains that these non-transmitting CSOs also cannot be utilized in the Bortz Survey.

B. Program Suppliers Position on the Minimum Fee Issue
According to Program Suppliers, notwithstanding the increase in the number of
minimum-fee-only CSOs, regression remains the most useful technique for estimating
relative marketplace value. Program Suppliers’ Proposed Findings of Fact and
Conclusions of Law (PS PFF) at 78. They note that, despite this increase, still “20% of
CSOs who carry distant signals have a calculated royalty fee which is approximately the
size of the minimum fee.” This “cluster of CSOs at the threshold . . . provides evidence
that . . . certain CSOs that paid the minimum fee nevertheless engaged in economic
decision-making with regard to distantly retransmitted signals carried.” Amended and
Corrected Written Direct Testimony of Cleve B. Tyler, Ph.D., Trial Ex. 7600, ¶¶ 151-52
(Tyler ACWDT). Further elucidating this point, Program Suppliers rely on additional oral

testimony by Dr. Tyler, explaining that his regression model “is based in part on the . . .
likely uncertainty, at the time that carriage decisions are made, as to whether the
minimum fee or the calculated rate [i.e., the base rate] would bind . . . increas[ing] the
economic content within the decision-making process, even where the minimum fee
ultimately binds.” PS PFF ¶ 323 (citing 4/19/23 Tr. at 5521-22 (Tyler)) (emphasis
added).23 Further in this regard, Program Suppliers aver that even CSOs with zero distant
signal carriage derive “option value” from the section 111 license, because they are
always permitted (“privileged” in the language of section 111) to engage in such
retransmission. Tyler ACWDT ¶ 102. According to Dr. Tyler, the base fee calculation
would tacitly reflect this option value. Id.
In any event, Dr. Tyler rejects as “too extreme” the alternative of “[d]ropping
most of the observations” by excluding the minimum-fee-only CSOs, because that would
implicitly incorporate the assumption that “there is essentially no value associated with

23 In a following colloquy with Judge Strickler, Dr. Tyler acknowledged that, by contrast, where the base
fees calculated by CSOs were well below the minimum fee ultimately paid, their base fees provided “less
economic content.” 4/19/23 Tr. 5525 (Tyler).
any of the minutes for the systems paying the minimum fee.” 4/19/23 Tr. 5474 (Tyler).
In support of this point, Program Suppliers note that “[n]o expert in this proceeding took
the approach of dropping minimum fee systems from the analysis.” PS PFF ¶ 327 (and
record citations therein).24, 25
Despite Program Suppliers’ assertion that there is economic evidence from the
carriage decisions of minimum-fee-only CSOs, they acknowledge that there is also merit
to considering a version of the model that includes only CSOs paying above the minimum
fee. Tyler ACWDT ¶¶ 155-156. According to Dr. Tyler, this restricted data set presents
with the “highest degree of confidence” the CSO tradeoffs between different stations and
categories of minutes. Tyler ACWDT ¶ 155. To this end, Dr. Tyler undertook a
“sensitivity” analysis that considered only CSOs paying more than the minimum fee, and
determined the following estimated shares (and standard errors):
FIGURE 6.3
Model Including Only CSOs Paying More than the Minimum Royalty
(estimated shares, with standard errors in parentheses)

Year   Program Suppliers   JSC            CTV            PTV            SDC            CCG
2014   29.1% (4.7%)        32.4% (9.2%)   11.3% (2.6%)   14.3% (1.9%)    5.1% (1.2%)    7.6% (1.1%)
2015   41.0% (2.4%)         2.1% (1.5%)   11.3% (2.2%)   12.7% (0.8%)    9.7% (1.2%)   23.2% (0.9%)
2016   31.3% (3.0%)         1.3% (1.9%)   13.3% (3.4%)   14.7% (0.8%)    8.3% (1.0%)   31.1% (1.4%)
2017   33.0% (2.2%)         0.5% (1.0%)    9.9% (2.0%)   14.2% (0.8%)    7.8% (1.0%)   34.6% (2.1%)
Adjusted R2: 83.8%

Tyler ACWDT fig.6.3.

24 This argument is misleading. As described infra, the SDC, JSC, and CTV, through their experts, all
relied on the large number of minimum-fee-only CSOs as a basis to throw out the regressions entirely for
the 2015-2017 period (and the SDC and JSC also reject the minimum-fee-only data for 2014 as part and
parcel of their wholesale rejection of the regression approach).
25 Program Suppliers also note that the Bortz Survey likewise considers the stated preferences of survey
respondents whose systems pay only the minimum fee. PS PFF ¶ 328.
According to Dr. Tyler, these shares are sufficiently close to the shares he
proposes through his analysis of all CSOs, i.e., including those only paying the minimum
fee. Compare Tyler ACWDT fig.3.2, with Tyler ACWDT fig.6.3. According to Dr.
Tyler, this “sensitivity” comparison of his recommended share allocation and the
allocation generated by above-minimum-fee-only CSOs reveals that his “modeling
approach . . . is reasonably robust and . . . sufficiently reliable for informing allocation of
the 2014-2017 Cable Royalties among the Allocation Phase claimant categories.” Tyler
ACWDT ¶ 105.
C. PTV Position on the Minimum Fee Issue
PTV, like CCG, finds economic significance in the choices of a CSO “to
retransmit a distant signal to particular subscriber groups” despite the fact that the CSO
pays the minimum fee, relying in part on Dr. Marx’s testimony that those choices reveal
only ordinal preferences as to distant programming types. Public Television’s Proposed
Findings of Fact and Conclusions of Law (PTV PFF) ¶ 58 (citing, inter alia, 4/11/23 Tr.
4165 (Marx)). Thus, PTV finds it appropriate to rely on what it describes as the “ample
variation in the decision-making of CSOs that pay the minimum fee . . . to . . . inform[] . .
. relative marketplace value. . . .” PTV PFF ¶ 59.
As an alternative basis for finding relevance in the decision-making of CSOs that
paid only the minimum fee after the WGNA conversion, PTV finds relevance in the fact
that many CSOs had distantly carried certain PTV signals pre-conversion together with
WGNA, paying above the minimum fee, and continued to transmit that companion signal
post-conversion, when only the minimum fee applied. According to PTV, this continuity
of PTV carriage is record evidence of the value of the PTV carriage during the minimum-fee-only periods. PTV PFF ¶ 60; Johnson WRT ¶ 78 (“The WGN conversion in 2015
does not mean the value of KAET-DT [Public Television signal] declined or disappeared
altogether.”); see generally Johnson WRT ¶ 79 (As in the KAET example, “there were

1,115 CSO-Public Television distant signal combinations in the 2015-2017 period where
the CSO paid a minimum fee during those years [and] [f]or 55 percent of these
combinations, the same CSO also carried the same Public Television distant signal, at a
different point in time, when it paid section 111 royalties greater than the minimum
fee.”(emphasis added)).
As another alternative, Dr. Johnson, on behalf of PTV, and like Dr. Tyler,
undertook a “sensitivity test” that excluded the minimum-fee-paying CSOs. According
to PTV, the results of this sensitivity test were sufficiently consonant with the coefficients
in Dr. Johnson’s preferred “baseline” fee-based regression, which included the minimum-fee-only CSOs, to suggest that decisions made by CSOs that paid minimum fees are
informative as to the question of relative value. PTV PFF ¶ 84 (and record citations
therein); compare Johnson WDT fig.11 (baseline model coefficients), with Johnson WDT
fig.14 (“sensitivity test” coefficients excluding minimum-fee-paying CSOs). This
consonance was important, according to Dr. Johnson, because it justified his use of the
“baseline” model, which, because it included the minimum-fee-paying CSOs, relied on
18,666 observations, and therefore was more precise than his “sensitivity test” approach.
Johnson WDT ¶ 84.
From yet another economic perspective, PTV maintains that, for minimum-fee-paying CSOs making some retransmissions, the retransmitted programming
must have some marginal value, in excess of “opportunity costs” regarding alternative
uses of bandwidth including streaming alternatives. PTV PFF ¶¶ 62-63. Taken together,
PTV asserts that the foregoing facts support the inclusion of the base-fee decisions of
minimum-fee-paying CSOs. PTV PFF ¶ 97.
D. CTV Position on the Minimum Fee Issue
CTV presents a nuanced argument regarding the relevancy of minimum-fee-only
CSOs, consistent with the opinions of their economic expert, Dr. Leslie Marx. On the

one hand, CTV and Dr. Marx maintain that the retransmission decisions of minimum-fee-only CSOs were not so numerous as to preclude the use of base fee data from minimum-fee-only CSOs in a regression for the years 2010-2013 (addressed in the prior
determination) and for 2014 (the earliest year addressed in the present proceeding).
4/11/23 Tr. 4157 (Marx) (testifying that “the mere presence of royalties from excess
capacity CSOs” does not make the fee-based regressions invalid because “it's a matter of
degree ….”). On the other hand, CTV and Dr. Marx maintain that the retransmission
decisions of the minimum-fee-only CSOs were so pervasive during the years 2015-2017
as to preclude the use of fee-based regressions for those three years. Id. at 4157-58. See
generally Commercial Television’s Proposed Findings of Fact and Conclusions of Law
(CTV PFF) at 38 (describing CTV’s and Dr. Marx’s approach as measured, because it
“utiliz[ed] a fee-based regression only for 2014, [which was] the sole year at issue in
this proceeding without significant marketplace changes.”)26
CTV continues its argument on this point by pointing out that when a CSO elects
to carry a set of distant signals resulting in a payment higher than the minimum fee, that
indicates the CSO sufficiently values the programming minutes bundled into the carriage
to make it willing to pay marginal royalty payments above the minimum fee. Written
Rebuttal Testimony of Leslie M. Marx, Ph.D., Trial Ex. 7208, ¶ 21 (Marx WRT).
Alternatively stated, for these CSOs, which CTV accurately describes as “above-capacity,” i.e., retransmitting more than 1.0 DSE and thereby paying above the minimum

26 This nuanced position is not an inconsistent economic argument. Rather, it is an argument regarding data
differentiation and the concomitant weighing of evidence. CTV and Dr. Marx assert that, as a matter of
“degree,” too high a percentage of the number of CSOs paying only the minimum fee (and/or too high a
percentage of all royalties paid by minimum-fee-only CSOs) will render the incorporation of the
retransmission decisions of those CSOs (and/or the royalties they paid) fatal to a fee-based regression.
However, they assert that when those minimum-fee-only CSOs and their royalties are only approximately
half of the CSOs and royalties paid, as in the 2010-2013 period, and when they principally apply to CSOs
with only one subscriber group (and thus are excluded anyway from the Crawford-style regression), their
inclusion is too small to preclude use of a fee-based regression. See generally CTV PFF at 20 et seq. (“The
lack of informative data renders any fee-based regression inappropriate and unreliable for 2015, 2016 and
2017.”).
fee, the base fee royalties reported by their subscriber groups are their actual royalty
payments, revealing the CSO’s perceived value of the distantly retransmitted stations and
their constituent programs. Written Rebuttal Testimony of Christopher Bennett, Ph.D.,
Trial Ex. 7035, ¶ 15 (Bennett WRT); CTV PFF ¶ 158.
In contrast to the “above-capacity” CSOs, CTV and its experts examine the
carriage decisions of CSOs that had carried WGNA in 2014, either solely or with other
signals, but could not, and thus did not, carry WGNA after 2014. CTV asserts that
because the WGNA conversion generated the explosion of minimum-fee-only CSOs, the
majority of the royalties and CSOs do not reflect incremental costs associated with
incremental carriage. CTV PFF ¶¶ 177, 186. This change is reflected in a series of
figures presented by Dr. Marx. First, she demonstrates the share of royalty payments by
CSOs carrying distant signals relative to the minimum fee, across the relevant years:
Figure 2: Shares of royalty payments, by the extent of royalties relative to the
minimum fee

Bucket                                        2014-1  2014-2  2015-1  2015-2  2016-1  2016-2  2017-1  2017-2
Royalties paid by CSOs carrying
  distant signals ($ millions)                $106.9  $108.5   $85.1   $80.2   $72.7   $71.6   $73.0   $73.4
> the minimum fee                                59%     57%     24%     12%      7%      7%      7%      7%
= minimum fee                                    23%     24%     18%     14%      2%      3%      3%      3%
< the minimum fee                                18%     19%     58%     74%     91%     91%     91%     90%
Excess-capacity CSOs, by imputed royalties as a % of the calculated minimum fee:
  75%-99%                                        34%     38%      4%      4%      3%      5%      5%      4%
  50%-75%                                        10%      8%      9%      6%      4%      4%      5%      4%
  25%-50%                                        19%     19%     16%     20%     20%     21%     20%     25%
  < 25%                                          38%     35%     71%     70%     73%     71%     70%     67%

Note: For each accounting period (2014-1 – 2017-2), the SOA reports the imputed royalties for a given
subscriber group of a CSO. The sum across the CSO’s subscriber groups is the imputed royalties of the
CSO. For each CSO, I calculate the minimum fee as 1.064% of the CSO’s gross receipts. I categorize
CSOs as (1) “minimum fee” CSOs if they paid [99%, 101%] of the calculated minimum fee, (2) “above the
minimum fee” CSOs if they paid more than 101% of the calculated minimum fee, and (3) “excess-capacity”
CSOs if their imputed royalties are less than 99% of the calculated minimum fee. Excess-capacity CSOs
are further categorized into those whose imputed royalties are [75%, 99%), [50%, 75%), [25%, 50%), and
less than 25% of the calculated minimum fee. The share of royalties in each category is the share of
royalties associated with CSOs in each category in that accounting period. Source: CDC data
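The categorization described in the note to Figure 2 (and repeated for Figure 3 below) can be stated compactly. The sketch below follows the note's bands; the per-CSO inputs (imputed royalties and gross receipts) are assumed to come from the SOA data.

```python
def categorize_cso(imputed_royalties: float, gross_receipts: float) -> str:
    """Bucket a CSO by its imputed (base) royalties relative to the minimum fee,
    computed as 1.064% of gross receipts, per the note to Figures 2 and 3."""
    minimum_fee = 0.01064 * gross_receipts
    ratio = imputed_royalties / minimum_fee
    if ratio > 1.01:
        return "above the minimum fee"
    if ratio >= 0.99:
        return "minimum fee"  # paid within [99%, 101%] of the calculated minimum
    # Excess-capacity CSOs, further categorized by how far below the minimum
    # their imputed royalties fall.
    if ratio >= 0.75:
        return "excess capacity: 75%-99% of minimum fee"
    if ratio >= 0.50:
        return "excess capacity: 50%-75% of minimum fee"
    if ratio >= 0.25:
        return "excess capacity: 25%-50% of minimum fee"
    return "excess capacity: < 25% of minimum fee"
```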

Next, Dr. Marx identifies the percentage of all CSOs carrying distant signals that
are paying the minimum fee over the relevant years:
Figure 3: Categorization of CSOs, by the extent of royalties relative to the minimum
fee

Bucket                               2014-1  2014-2  2015-1  2015-2  2016-1  2016-2  2017-1  2017-2
Count of CSOs carrying
  distant signals                        819             585             515             508
> the minimum fee                       48%     47%     30%     21%     19%     19%     20%     19%
= minimum fee                           35%     36%     25%     16%      6%      7%      6%      8%
< the minimum fee                       17%     17%     45%     63%     74%     74%     74%     73%
  % of minimum fee:
    75%-99%                             17%     22%     10%     11%     11%     14%     14%     11%
    50%-75%                             18%     13%     12%     11%      9%     12%     13%     13%
    25%-50%                             21%     22%     23%     23%     26%     24%     23%     23%
    < 25%                               43%     42%     55%     55%     54%     51%     49%     53%

Note: For each accounting period (2014-1 – 2017-2), the SOA reports the imputed royalties for a given
subscriber group of a CSO. The sum across the CSO’s subscriber groups is the imputed royalties of the
CSO. For each CSO, I calculate the minimum fee as 1.064% of the CSO’s gross receipts. I categorize
CSOs as (1) “minimum fee” CSOs if they paid [99%, 101%] of the calculated minimum fee, (2) “above the
minimum fee” CSOs if they paid more than 101% of the calculated minimum fee, and (3) “excess-capacity”
CSOs if their imputed royalties are less than 99% of the calculated minimum fee. Excess-capacity CSOs
are further categorized into those whose imputed royalties are [75%, 99%), [50%, 75%), [25%, 50%), and
less than 25% of the calculated minimum fee. The share of royalties in each category is the share of
royalties associated with CSOs in each category in that accounting period. Source: CDC data

These data present the contrast between the actual royalty obligations through 2014,
which were directly linked to base fees at the subscriber-group level, and the actual
royalty obligations in the 2015-2017 period, which were instead predominantly a
function of the minimum fee. CTV PFF ¶ 167 (citing Bennett WRT fig.5). Likewise, Dr.
Marx testified that there was no substantial dissimilarity in the 2010-2014 period
between: (1) the overall regression coefficients (not allocation shares) for all CSOs and
(2) the regression coefficients for only CSOs carrying fewer distant signals than the

minimum fee would permit, which Dr. Marx aptly described as “excess capacity” CSOs.
Marx WRT ¶ 62. This substantial similarity was depicted as follows by Dr. Marx:
Figure 4: Normalized coefficients from Crawford model using 2010-2014 data

Samples                            Sports   Program     Commercial    PTV    Canadian   Devotional
                                            Suppliers       TV
All CSOs                            76.2%     4.0%         7.7%       2.9%     7.5%        1.7%
CSOs with no excess capacity        77.2%     3.9%         7.8%       3.0%     7.4%        0.8%
Average absolute difference          0.4%

Note: Estimated coefficients multiplied by 1,000,000.
Source: Crawford CWDT; CDC data and Red Bee Media data

Moreover, according to Dr. Marx, many of the CSOs with “excess capacity” also
had fewer than the two subscriber groups necessary to be observed by the Crawford
regression, thus making their “excess capacity” status inconsequential to the regression
for this independent reason. 4/11/23 Tr. 4157 (Marx).
The scenario for the 2015-2017 period was drastically different, according to Dr.
Marx. She also presents coefficients (not allocation shares) for this latter three-year
period, and shows how the coefficients for all CSOs differed from those with no excess
capacity:
Figure 5: Normalized coefficients from Crawford model using 2015-2017 data

                                   Sports   Program     Commercial    PTV    Canadian   Devotional
                                            Suppliers       TV
All CSOs                            63.7%     3.3%         9.9%       3.9%    14.7%        4.5%
CSOs with no excess capacity        15.0%     2.2%        17.9%       2.8%    43.4%       18.7%
Average absolute difference         17.0%

Note: Estimated coefficients multiplied by 1,000,000.
Source: CDC data and Red Bee Media data

Marx WRT fig.5.
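The “Average absolute difference” entries in Figures 4 and 5 can be checked directly from the tabulated coefficients; the following is an arithmetic check only, using the values shown above:

```python
import numpy as np

# Normalized coefficients, in the column order Sports, Program Suppliers,
# Commercial TV, PTV, Canadian, Devotional (values from Figures 4 and 5).
fig4_all       = np.array([76.2, 4.0, 7.7, 2.9, 7.5, 1.7])     # 2010-2014, all CSOs
fig4_no_excess = np.array([77.2, 3.9, 7.8, 3.0, 7.4, 0.8])     # 2010-2014, no excess capacity
fig5_all       = np.array([63.7, 3.3, 9.9, 3.9, 14.7, 4.5])    # 2015-2017, all CSOs
fig5_no_excess = np.array([15.0, 2.2, 17.9, 2.8, 43.4, 18.7])  # 2015-2017, no excess capacity

print(round(np.mean(np.abs(fig4_all - fig4_no_excess)), 1))    # 0.4  (substantially similar)
print(round(np.mean(np.abs(fig5_all - fig5_no_excess)), 1))    # 17.0 (drastically different)
```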
With regard to the necessity of at least two subscriber groups within a system
during an accounting period (required by Dr. Crawford’s system-accounting period fixed

effect), Dr. Marx reported that, beginning in 2015, fully 62% of CSOs, accounting for
almost 35% of total royalties, did not satisfy this requirement. Amended Corrected
Written Direct Testimony of Leslie M. Marx, Ph.D., Trial Ex. 7204, ¶ 58 (Marx
ACWDT). By 2017, 93.8% of the royalties were paid via the minimum fee, rather than
the base fees. CTV PFF ¶ 189 (citing Marx WRT fig.14).
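The mechanics of that requirement can be seen in a minimal sketch, using hypothetical data: a system-accounting-period fixed effect demeans each variable within the CSO-period cell, so a CSO with only one subscriber group in a period contributes no within-cell variation to the estimation.

```python
import pandas as pd

# Hypothetical subscriber-group observations; CSO "B" has a single subscriber group.
df = pd.DataFrame({
    "cso_period": ["A-2015-1", "A-2015-1", "B-2015-1"],
    "royalties":  [120.0, 80.0, 100.0],
})

# Demeaning within each CSO-accounting-period cell, as the fixed effect does:
within = df["royalties"] - df.groupby("cso_period")["royalties"].transform("mean")
print(within.tolist())   # [20.0, -20.0, 0.0]: the single-group CSO supplies no variation
```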
Although CTV and Dr. Marx do not consistently characterize the evidentiary
weight of the royalty data from “excess-capacity” CSOs as wholly uninformative, they
unambiguously report Dr. Marx’s own opinion that the 2015-2017 minimum fee royalty
data is decidedly “less informative” than the royalty data from CSOs that transmitted
more than 1.0 DSE. Marx WRT ¶ 22.
Further bolstering the point that minimum-fee-only-CSO royalty data dominated
the 2015-2017 landscape, CTV points to the following data:
--CSO carriage of fewer distant signals after 2014 sharply increased the percentage
of excess-capacity CSOs, from less than 20% of CSOs in 2014 to 73% of CSOs in
2016 onward. Marx WRT ¶ 64.
--The percentage of CSOs paying more than the minimum fee decreased from 48%
in 2014 to only 19% by the end of 2017 (measured by including CSOs with zero
retransmittals).
CTV PFF ¶¶ 209-210 (and record citations therein).
Based on the foregoing, CTV relies on Dr. Marx’s conclusions that:
The changed circumstances in the real-world market have infected the quality of
the data and reduced the quantity of the data utilized by the proffered fee-based
regressions, making those regressions in the 2015 to 2017 timeframe unreliable.
4/11/23 Tr. 4510-12 (Marx).
A regression requires reliable data that fits the underlying assumptions; otherwise
the model is putting “garbage in” and getting “garbage out.” The data no longer
represents carriage decisions based on royalty payments from the CSOs.
4/11/23 Tr. 4147; 4194 (Marx).
CTV PFF ¶¶ 299-300. See also Marx WRT ¶ 82 (“[F]or a minimum fee-paying CSO, the
inclusion of a distant signal in the channel line-up to a subscriber group . . . reflects the
CSO’s choice over other alternative signals that also have no incremental cost. This can

be informative as to the value of the program minutes on whatever signal the CSO elects
to offer.”).
E. JSC Position on the Minimum Fee Issue
Like CTV, JSC contrasts the 2010-2014 period with the years 2015-2017. In the
former period, JSC notes, most CSOs calculated “a Base Fee + their 3.75% Fee that
equaled or exceeded the Minimum Fee.” More particularly, JSC specifies that, “in 2014,
71.8% of all CSOs calculated a Base + 3.75% Fee that met or exceeded their minimum
fee obligation, and during the 2010-13 period, 73.0% of all CSOs did so . . .
account[ing] for 76.5% of total royalties paid in 2014 and 79.9% of total royalty fees paid
during the 2010-13 period.” Proposed Findings of Fact and Conclusions of Law of the
Joint Sports Claimants (JSC PFF) ¶ 17 (citing 3/30/23 Tr. 2578 (Majure); Harvey CWDT
¶ 17 & tbl.3; Corrected Bortz Report, Trial Ex. 7101, at 9 (Bortz Report)).
Further, JSC maintains that even if an economic model could produce reliable
ordinal rankings, which none of the regressions in evidence attempted, it is not possible
to make the leap from such rankings to cardinal relative values, i.e., allocation of specific
royalty amounts to each of the claimant categories in this proceeding. 3/30/23 Tr. 2512-13 (Asker).
JSC also maintains that the base fee calculations of any minimum-fee-only CSO
cannot reveal the programming preferences of such CSOs or otherwise be useful in the
estimation of relative marketplace value. Specifically, JSC first maintains that “[a]ny
alleged uncertainty about application of the Minimum Fee is speculative.” Reply
Proposed Findings of Fact and Conclusions of Law of the Joint Sports Claimants (JSC
RPFF) at 11. Not only does JSC find this uncertainty to be speculative, they further
argue that it is “highly unlikely that most Minimum Fee CSOs would have been uncertain
about whether a carriage decision would affect their royalty payment.” JSC RPFF ¶ 32.
In support of this point, JSC notes that, after 2014, among minimum-fee-only CSOs that

retransmitted at least one distant signal, approximately 86% calculated a base fee +
3.75% Fee that was 75% or less of the CSO’s minimum fee. JSC RPFF ¶32. Further to
this point, JSC takes note of Dr. Tyler’s acknowledgement that “the further you are away
from the minimum fee threshold, the less likely it would be that there would be that risk
of exceeding it.” JSC RPFF ¶ 32.27
In further criticism of the usefulness of regressions, particularly for the two-year
2016-2017 period, JSC notes that only 55.2% of [CSOs chose to carry] distant signals.
Harvey CWDT ¶ 26. JSC further notes that, out of this 55.2%, approximately 74% paid
only the minimum fee.
Additionally, JSC notes that during the two-year 2016-2017 period, 14% of all
CSOs met or exceeded the minimum fee, accounting for but 6.8% of total royalty
payments, which reflected a 91% decrease compared to 2014. Harvey CWDT tbl.11.28
With regard to 2015, JSC relies on Mr. Harvey’s finding that, after he removes
reported WGNA carriage, 72% of CSOs carrying at least one distant signal then paid
only the minimum fee. JSC notes that Mr. Harvey found that these minimum-fee-only
CSOs accounted for 85.2% of total royalty payments for that year.
JSC PFF ¶ 46 (citing the Harvey CWDT).29 Considering these 2015 data from the
opposite perspective, JSC cites Mr. Harvey’s calculation that only 13.4% of CSOs

However, JSC also acknowledges that the Bortz Survey, on which it relies, likewise “decided to adopt
Base [Fee] + 3.75% Fee … weighting” “[o]nce Bortz realized that many … systems were paying the
Minimum Fee….” JSC RPFF ¶ 105.
More particularly, in the years 2016-2017, only 3.2% of CSOs calculated a base fee + 3.75% Fee that
“met” (rather than “exceeded”) the minimum fee. JSC PFF ¶ 54 (citing Harvey CWDT tbl.14).
It is hardly clear that Mr. Harvey was justified in removing reported carriage of WGNA in 2015. The
record reflects the existence of SOAs filed for 2015 that reported such carriage, and there is uncertainty as
to whether those SOAs were erroneous or whether there was residual WGNA carriage as WGNA
transitioned from a broadcast channel to a cable station. But see Kent Gibbons, WGN America Converts to
Cable in Five Markets, Broadcasting & Cable (Dec. 14, 2014) (“Tribune Media Co. said its WGN America
is debuting on cable television systems in Chicago, Boston, Philadelphia, Seattle and Washington, D.C.,
starting Tuesday, as it begins converting from a superstation to a cable network … on Comcast systems
[with] more launches and conversions … happening on distributors this month and throughout 2015.”)
(emphasis added).
calculated a base fee + a 3.75% fee in excess of the minimum fee, reflecting only 9.8% of
the total royalties paid in that year. JSC PFF ¶ 47 (further citing the Harvey CWDT).
JSC also relies on another of its expert witnesses, the economist Dr. W. Robert
Majure, who explained that, in the 2015-2017 period, most CSOs that formerly carried
WGNA under the section 111 license chose not to replace it with an equivalent number
of DSEs, and as a result “made far less use of the section 111 license.” JSC PFF ¶ 49
(citing Written Direct Testimony of W. Robert Majure, Ph.D., Trial Ex. 7103, ¶ 77 (Majure
WDT)).
Based on these data related to the minimum fee, JSC maintains that the fee-based
regressions, as they relate to the 2015-2017 period, wrongly use base fees (with or
without the 3.75% fee) as “price proxies,” in that when the minimum fee binds, the
marginal royalty cost of carriage is zero. JSC PFF ¶¶ 148-152 (and record citations
therein).
In econometric terms, Dr. Asker, on behalf of JSC, measured the alleged errors
that Drs. George, Johnson, and Tyler introduced into their regressions by using the
incorrect base-fee-related price proxies. These alleged "measurement errors,” according
to Dr. Asker, were correlated with the variables measuring distant signal content minutes
in the entire 2014-2017 period and equal the difference between the improper price
proxies y and the zero price implied by the payment of the minimum fee. Written
Rebuttal Testimony of John Asker, Ph.D., Trial Ex. 7114, ¶ 79 (Asker WRT).
JSC further notes in this regard that Dr. George herself conceded that the link
between base rate royalties and actual CSO demand is “not super tight,” and adds the
very sort of “measurement error to the dependent variable” that Dr. Asker has calculated.
JSC PFF ¶ 154 (citing Dr. George’s hearing testimony).
Dr. Asker also takes issue with the regression experts’ use of the base fee as a
price proxy even for CSOs paying above the minimum fee. He explains that for a

perfectly rational CSO calculating price, the true marginal cost of distantly retransmitting
a local station in this context – the difference in cost to the CSO between retransmitting
and not retransmitting – is not the base fee, but rather the difference between (1) the total
fees that would bind, which may have been the minimum fee, without retransmitting that
local station, and (2) the total base fees that would bind (the minimum fee having been
exceeded) if that local station was distantly retransmitted. See Asker WRT ¶¶ 59-77
(applying the definition of price, stated in ¶ 61, as “the extra expenditure required to have
it, as compared to not having it.”).
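A simplified sketch of that definition follows. It uses the 1.064% minimum fee rate noted in the figures above and a hypothetical CSO, and it abstracts from the DSE-based rate schedule and other details of the actual section 111 computation; it also illustrates why, when the minimum fee binds both with and without the added signal, the incremental royalty cost is zero, so that a base-fee proxy overstates the price by the full amount of the calculated fees.

```python
MIN_FEE_RATE = 0.01064   # minimum fee: 1.064% of a CSO's gross receipts

def royalty_owed(gross_receipts: float, calculated_fees: float) -> float:
    """Royalty actually owed: the calculated base (+3.75%) fees, but never less
    than the minimum fee."""
    return max(calculated_fees, MIN_FEE_RATE * gross_receipts)

def incremental_cost(gross_receipts: float, fees_without: float, fees_with: float) -> float:
    """The 'extra expenditure required to have it, as compared to not having it.'"""
    return royalty_owed(gross_receipts, fees_with) - royalty_owed(gross_receipts, fees_without)

# Hypothetical CSO with $10 million in gross receipts (minimum fee = $106,400).
# The minimum fee binds with or without the added signal: the incremental cost is
# zero even though the calculated base fee rises by $30,000.
print(incremental_cost(10_000_000, 60_000, 90_000))    # 0.0

# If the added signal pushes the calculated fees past the minimum fee, the
# incremental cost is only the excess over the minimum fee, not the full base fee.
print(incremental_cost(10_000_000, 90_000, 120_000))   # 13600.0
```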
Finally, JSC takes note of Dr. Asker’s point that it is standard practice among
statisticians and econometricians to test the validity of a regression against other available
external evidence, as a sort of “reality filter.” JSC PFF ¶ 169 (citing Asker WRT ¶ 104);
see also 3/28/23 Tr. 1910-11 (Harvey) (agreeing with Judge Strickler that “validity test”
is synonymous with “reality filter”). Here, JSC points out that the validity of the
regressions is refuted by the fact that, during the 2015-2017 period, CSOs did not behave
in accordance with the assumption behind the regressions. That is, despite the
assumption that the incremental benefits of distant carriage were positive (according to
the regression estimates) and the incremental royalty cost was zero, most CSOs elected
not to add additional distant signals. Thus, the regressions purportedly were invalid,
unrealistic, and self-contradictory (“false within their own premise” one might say),
according to JSC. Written Rebuttal Testimony of W. Robert Majure, Ph.D., Trial Ex. 7104,
¶¶ 15, 47-50 (Majure WRT); 3/30/23 Tr. 2594-95, 2598-99 (Majure).
F. SDC Position on the Minimum Fee Issue
At the outset, when framing the relevant minimum fee issue, the SDC maintain
that, “while it may be true” that CSOs’ ordinal decision-making shows their ranked
preferences, “no regression model in this case has been specified for such a theory.” SDC

PFF ¶ 39. Rather, these regressions consider the calculated (but not paid) base fees (and
the 3.75% Fee, depending on the regression at issue) of these minimum-fee-only CSOs.
But the SDC maintain that the minimum fee “confounds any interpretation of a
fee-based regression” premised on the CSOs’ “willingness-to-pay.” Settling Devotional
Claimants’ Proposed Findings of Fact and Conclusions of Law (SDC PFF) at 27. In this
regard, the SDC point to the testimony of several experts who opine that the minimum
fee structure “largely obviate[s] the purported causal theory based on ‘willingness-to-pay,’” because the minimum-fee-only CSOs “are required to pay a minimum fee
equivalent to a 1.0 DSE . . . whether they are ‘willing’ or not.” SDC PFF ¶ 60 (citing
Asker WRT ¶¶ 78-86; Marx WRT ¶ 22.). Stating the point in economic terms, the SDC
state that “there is no marginal cost” incurred by a CSO unless and until “the minimum
fee is exceeded.” SDC PFF ¶ 60.
The SDC do not limit their criticism of the minimum fee issue to the regressions
proffered in this proceeding. They also look back to the 2010-13 proceeding, where
“approximately 50% of the CSOs paid only the Minimum Fee,” which, the SDC maintain
now (as they did in the 2010-13 proceeding), constituted a “serious problem” for the
Crawford regression upon which the Judges relied in the prior proceeding. SDC PFF ¶
61.
But the SDC assert that their criticism in the 2010-13 proceeding is even more
relevant in the present proceeding, in that this minimum fee problem is “exacerbated after
2014, [because] the proportion of fees paid by systems paying the Minimum Fee went up
from 39.2% to 93.8%.” SDC PFF ¶ 62 (citing Ex. 7204 at 29, Marx ACWDT ¶ 65). In
this environment, the SDC maintain, it is difficult to see how any inferences could be
drawn about “willingness to pay.” SDC PFF ¶ 62.
The SDC then evaluate the attempts by the regression experts to address the
minimum fee issue, as summarized below:

--The SDC acknowledge that Dr. Tyler’s “sensitivity test of this issue,” in which
he dropped the minimum-fee-only CSOs, “might provide some rough guidance as
to the potential direction and magnitude of bias introduced by the presence of
minimum fees.” SDC PFF ¶ 63 (emphasis added) (citing Tyler ACWDT ¶ 156).
But the SDC take note of what they characterize as “the vast amount of data” that
Dr. Tyler had to discard to apply this sensitivity test, leading the SDC to conclude
that Dr. Tyler’s attempt to drop all minimum-fee-paying CSOs was “probably too
extreme.” SDC PFF ¶ 63 (citing 4/19/23 Tr. 5473-74 (Tyler).
--Dr. Johnson’s sensitivity test, in which he too applied his model only to systems
paying above the minimum fee, resulted in large swings in the JSC coefficients,
rendering them statistically insignificant. SDC PFF ¶ 104.
--The SDC acknowledge that Dr. Marx “makes good points about the
confounding effects of minimum fee-paying systems . . . in the 2015-2017
timeframe,” but find “her position on the reliability of the model before 2015 . . .
too convenient to credit.” Harkening back to their criticism of the 2010-13
Determination’s adoption of the Crawford regression, the SDC maintain that Dr.
Marx’s Bayesian regression for 2014 is deficient with regard to this minimum fee
issue because “‘CSOs paying the minimum fees accounted for a large proportion
already before the conversion of WGNA,’” and any 2014 modeling “‘should have
been specified’” to address this issue. SDC PFF ¶ 130 (citing Written Rebuttal
Testimony of Daniel L. Rubinfeld, Trial Ex. 7505, ¶ 95 (Rubinfeld WRT) (“The
fact that Dr. Crawford’s model does not hold up when applied to 2014-2017 data
in the current proceeding reveals that the regression specification put forth by Dr.
Crawford was not robust or informative.”).
G. The Judges’ Analysis and Conclusions Regarding the Minimum Fee Issue
The Judges find that the dramatic increase in the number of minimum-fee-only
CSOs (i.e., those with no distant retransmittals and those with some distant retransmittals
but with “excess capacity”) renders regression analyses that include those CSOs less
reliable; such analyses thus can be accorded only very limited economic evidentiary weight.
Moreover, the Judges accord significantly more evidentiary weight to regression
modeling that focuses only on the CSOs that actually revealed their preferences by
willingly paying above the minimum fee, i.e., at the base fee level.
In particular, as discussed infra, the Judges rely on the Tyler Model, as Dr. Tyler
applied his model to the CSOs paying above the minimum fee. See Tyler ACWDT ¶ 156
& fig.6.3 (discussed infra). Although there is hardly a consensus as to the adoption of
this variant of the Tyler Model, the Judges are struck by the supportive argument of the

SDC, set forth below, regarding the Tyler Model as applied to above-minimum-fee-paying CSOs:
Dr. Tyler, whose rate-based methodology is the most explicitly based on a
“minimum willingness to pay” theory … offers a sensitivity test of this
issue. Tyler [ACWDT] ¶ 156. (It is a fairer sensitivity test than Dr.
Johnson’s similar test, which was selected retrospectively out of hundreds
of tests that were tried and is performed in the presence of the distortion of
multiple misspecifications). Dr. Tyler’s sensitivity test might provide some
rough guidance as to the potential direction and magnitude of bias
introduced by the presence of minimum fees.
SDC PFF ¶ 156. See also 4/19/23 Tr. 5473 (SDC’s counsel’s statement to Dr. Tyler on
cross-examination) (“I do want to point out to your credit that your first sensitivity test
tries to address this issue.”). This argument is generally consistent with Dr. Tyler’s
response to SDC counsel on this point, agreeing that it was important to be “cognizant”
of this minimum fee issue and that it be “considered and addressed” because there is
“reasonable disagreement about how to handle the issue.” Id. at 5473-74.
The Judges do not see the disagreement as necessarily “reasonable” regarding
whether to rely on the calculated base fee data of all CSOs (including the CSOs paying
only the minimum fee) or only those who actually paid their calculated base fees. But,
however one couches this disagreement, the Judges find the latter approach appropriate,
and that – to borrow the SDC’s phrase – the variant of the Tyler Model in Figure 6.3 of
the Tyler ACWDT offers the Judges “rough guidance” in the allocation of shares.30
With regard to the issue of precision, mathematical or economic, the Judges do
not adopt Dr. Asker’s analysis, discussed above, that the appropriate method to calculate
royalties for above-minimum-fee-paying CSOs should be based on the difference
between (1) the actual royalty amount paid when a distant station is added; and (2) the
amount that the CSO would have paid pursuant to the minimum fee calculation if it
Evidence that provides “rough guidance” is useful evidence in these proceedings. As noted elsewhere in
this determination, the D.C. Circuit has acknowledged that the nature of this statutorily-mandated, but
statutorily standardless, allocation process can require a measure of “rough justice,” in the face of
inevitable mathematical imprecision.
would bind in the absence of transmittal of that station. Although in theory that would
appear to be a rational approach, there is no evidence that any CSO actually engages in
such an activity. Further, as the Judges note elsewhere in this determination, they credit
the designated testimony of Ms. Hamilton, a cable industry expert, who stated that the
amount of money at issue regarding section 111 royalties is essentially de minimis to the
CSOs (although quite significant to the parties in this proceeding), and that the CSOs do
not devote much attention to issues regarding distant retransmittals. In this context, and
in the absence of any evidence to the contrary, the Judges cannot assume, let alone apply,
a pricing rationale that suggests a tunnel-vision sort of hyperrationality, when Ms.
Hamilton’s testimony suggests a broader rationality, whereby CSOs rationally apply their
scarce time and attention to more economically consequential matters.31
VI. THE ALLEGATIONS OF “SPECIFICATION SEARCHING”32
A. Allegations of Concealed Specification Searching by Dr. Crawford Applicable
to the Present Proceeding
In their determination in the 2010-13 cable proceeding, the Judges relied
predominantly, although not solely, on the fee-based regression model presented by Dr.
Crawford, who was then a witness on behalf of CTV. In deciding to rely on Dr.
Crawford’s regression (the Crawford Model), the Judges credited his testimony denying
allegations by the SDC that he had improperly attempted and rejected many alternative

This finding is consistent with a broader point made by the economist Ronald Coase, who won the Nobel
Prize for his foundational work on transaction costs, regarding an overemphasis on what he coined
“blackboard economics.” As Dr. Coase explained: “[When] [t]he policy under consideration is one which is
implemented on the blackboard [and] [a]ll the information needed is assumed to be available and the
teacher plays all the parts … there is no counterpart to the teacher within the real economic system … no
one who is entrusted with the task that is performed on the blackboard.” R. Coase, The Firm, the Market,
and the Law 19 (1990). Substitute “expert witness” for “teacher” and “in the testimony” for “on the
blackboard” and Dr. Coase’s point applies here.
Specification searching (also known as “data fishing”) is defined as “the practice of searching numerous
research methodologies – including different models, design components, analytical methods, and
hypotheses – and selectively reporting only those that produce significant or otherwise favorable results.”
H. Bavli, Credibility in Empirical Legal Analysis, 87 Brook. L. Rev. 501, 509 (2022).
regression models. 2010-13 Determination at 3566-3567; see also SDC PFF ¶ 68 (and
record citations therein).
The SDC maintain that three of the four fee-based regression models presented in
this proceeding, PTV’s, CCG’s, and CTV’s, are based upon the Crawford Model. In
order to understand the relationship of these three models to the Crawford Model, the
SDC argue (and the Judges agree) that it is necessary to understand the characteristics
and history of the Crawford Model, comparing what was known at the time of the 2010-13 cable proceeding with what was subsequently uncovered. SDC PFF ¶ 69 (and record
citations therein).
To begin its review of the Crawford Model, the SDC point to the basic hypothesis
undergirding the approach – attempting to “relat[e] a measure of royalty fees to numbers
of [program] category minutes.” SDC PFF ¶ 70. The SDC state that, although the
Crawford Model “followed a framework that somewhat resembled … the model offered
by Dr. Waldfogel [the Waldfogel Model] in the 2004-05 cable proceeding,” Dr. Crawford
actually made “multiple dramatic departures.” SDC PFF ¶ 70 (citing 2010-13
Determination at 3557 for a description of Dr. Waldfogel’s model). Dr. Crawford
departed from the Waldfogel Model, according to the SDC, because after he “tested Dr.
Waldfogel’s model as a starting point using 2010-13 data (which he falsely denied
doing), the Waldfogel [M]odel yielded implausible results … demonstrating, at a
minimum, that [the Waldfogel Model] … performed poorly on out-of-sample data.”
SDC PFF ¶ 70 (and record citations therein). Moreover, the SDC assert that Dr. Crawford
undertook, but failed to disclose, his sensitivity testing when he constructed the Crawford
Model, which showed that the results of the Waldfogel Model were extremely sensitive
to annual changes, suggesting that the Waldfogel Model may have been “selected to fit
the data in 2004-05.” SDC PFF ¶ 70 (and record citations therein).

Expanding on the foregoing, the SDC imply that specification searching is
widespread, noting that “[a]t least 10 different expert witnesses have presented at least 10
different fee-based regression models in the last five allocation proceedings: Dr. Rosston
(CTV, 1998-99 cable), Dr. Waldfogel (CTV, 2004-05 cable), Dr. Crawford (CTV, 2010-13 cable), Dr. Israel (JSC, 2010-13 cable), Dr. George (CCG, 2010-13 cable, 2014-17
cable), Dr. Heeb (CTV, 2010-13 satellite), Dr. Gray (PS, 2010-13 satellite), Dr. Johnson
(PTV, 2014-17 cable), Dr. Tyler (PS, 2014-17 cable), and Dr. Marx (CTV, 2014-17
cable).” Further, the SDC emphasize that only Dr. George has appeared more than once,
and that her models in the 2010-13 proceeding and in this proceeding are “very different”
from each other. SDC PFF ¶ 73 (and record citations therein).
Dr. Erdem also later discovered, based on CTV’s compelled production in the
2010-13 satellite case, that Dr. Crawford had actually tested many different functional
forms before deciding to use the log-linear form. Only then did he perform the
appropriate statistical test (the “Box-Cox” test), which Dr. Erdem claims “specifically
rejected the log-linear form.” Dr. Erdem further claims that Dr. Crawford improperly
failed to run the test on the independent variables, limiting the test to the dependent
variable (the royalty measure). Amended Written Direct Testimony of Erkan Erdem,
Ph.D., Trial Ex. 7502, ¶¶ 41-42 (Erdem AWDT); see also Supplemental Written
Testimony of Erkan Erdem (2010-13 satellite proceeding), Trial Ex. 7054, ¶¶ 16-18 &
Ex. 3. See SDC PFF ¶ 76 (and record citations therein).
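For readers unfamiliar with the test, the following generic illustration shows how a Box-Cox estimate informs the log-versus-level choice for a variable; it uses synthetic data and is not the test performed in the record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic, positively valued stand-in for a royalty measure.
royalties = rng.lognormal(mean=10.0, sigma=1.0, size=500)

transformed, lam = stats.boxcox(royalties)   # estimate the Box-Cox lambda
print(f"estimated lambda = {lam:.2f}")
# A lambda near 0 supports a log transform; a lambda near 1 supports leaving the
# variable in level (linear) form.  The same question can be asked of the
# explanatory variables, which is the omission Dr. Erdem criticizes.
```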
According to Dr. Erdem, the failure of Dr. Crawford and CTV, in the 2010-13
cable proceeding to disclose, in Dr. Crawford’s direct testimony or in discovery, this
testing and the results thereof served to conceal the potential for “distortion and bias” in
the Crawford Model arising from the use of a “linear form” of a control variable for the
number of subscribers in the subscriber group during the prior accounting period (the so-called “lagged subscribers”) as affecting the dependent variable (royalties) expressed not

in level (i.e., linear) form, but rather in log form. See Erdem AWDT ¶¶ 51, 71; see also
Asker WRT ¶¶ 98-99; Written Rebuttal Testimony of R. Garrison Harvey, Trial Ex.
7106, ¶¶ 194, 197, 202 & Ex. H (Harvey WRT); see also SDC PFF ¶ 77.
The SDC maintain that the foregoing exemplifies the “poor economic practice”
and econometric “sin” of specification searching broadly undertaken by Dr. Crawford.
SDC PFF ¶ 87 (citing Kennedy, supra, at 367).33 Moreover, the SDC assert that Dr.
Crawford did not merely commit the “sin” of specification searching; he also lied by
repeatedly denying his econometric misconduct. Erdem AWDT ¶ 36; Written Rebuttal
Testimony of Erkan Erdem, Ph.D., Trial Ex. 7503, ¶ 77 (Erdem WRT). According to the
SDC, Dr. Crawford instead “acknowledged performing only a single alternative
analysis,” and the Judges trusted and relied on his testimony. SDC PFF ¶ 88 (citing
2010-13 Determination at 3568 (finding that Dr. Crawford “had not run such an
alternative regression by generating a regression and then discarding it ….”)). In fact,
according to the SDC, Dr. Crawford “had performed and rejected . . . undisclosed
alternative models . . . with different combinations of variables, interactions of variables,
no fixed effects, different forms of fixed effects, and a wide range of functional forms . . .
produc[ing] wide ranges of implied shares, including 0% shares for every … category in .
. . some models.” SDC PFF ¶ 88 (and record citations therein).
According to the SDC, a telltale sign that Dr. Crawford had engaged in
specification searching was the Crawford Model’s inclusion of “indicator variables that
had no function . . . [given] his system-accounting period fixed effects . . . [thereby]

A pernicious aspect of covert specification searching is that it masks from the reader (whether Judge,
adversary party, journal editor or academic referee) conduct that bears importantly on the regression
ultimately produced. The classic example of a simple hidden specification search is the following:
“[Although] the probability of flipping a coin and obtaining heads in ten consecutive flips out of ten tries is
almost zero. … if 15,000 individuals attempt this, it is virtually certain that one or more will succeed.” M.
Klock, Finding Random Coincidences while Searching for the Holy Writ of Truth: Specification Searches
in Law and Public Policy or Cum Hoc Ergo Propter Hoc, Wis. L. Rev. 1007, 1010 (2001). An
experimenter who “searches” for, and reports only, the 1 out of 15,000 times the experiment generates ten
consecutive heads, and who conceals the 14,999 times this result did not occur, is misrepresenting his or
her work and the usefulness of the result.
suggesting that he had tested the regression with no fixed effects or at other levels of
fixed effects . . . . [But] Dr. Crawford repeatedly denied trying a specification without
fixed effects or at a different level of fixed effects.” SDC PFF ¶ 90 (and record citations
therein). Moreover, the SDC claim that, in response to a question from Judge Feder, Dr.
Crawford lied by claiming he did not test regressions without fixed effects; his test
results, later produced in the satellite proceeding, showed that he “ran most of his
hundreds of models without fixed effects and at different levels of fixed effects, searching
for the best results.” SDC PFF ¶ 91 (and record citations therein) (emphasis added).
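The coin-flipping example in the footnote above is easy to verify; the following is an illustrative calculation only:

```python
p_ten_heads = 0.5 ** 10                       # one attempt: about 0.001, "almost zero"
attempts = 15_000
p_at_least_one = 1 - (1 - p_ten_heads) ** attempts
print(p_ten_heads, p_at_least_one)            # ~0.00098 and ~0.9999996 ("virtually certain")
```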
Returning to the issue of whether to transform variables from linear to log form,
the SDC claim to have identified “[p]erhaps the clearest fingerprint” of Dr. Crawford’s
specification search. Specifically, although Dr. Crawford had testified that he did not
perform a sensitivity test on a log-log form of regression because he “strongly fe[lt] that
including log subscribers is not an appropriate specification as an explanatory variable”,
this “was a lie” because the discovery in the satellite proceeding showed that Dr.
Crawford did test a log-log form of regression, which resulted in “an approximately 10-point drop in CTV shares (about an $80 million value).” SDC PFF ¶ 93 (and record
citations therein).
After reviewing the satellite discovery, which included approximately 500
regression model runs, and weighing it against Dr. Crawford’s cable testimony, SDC
expert Dr. Rubinfeld stated: “I’ve never seen anything on this scale ….” 4/6/23 Tr. 3638
(Rubinfeld). The SDC characterize Dr. Crawford’s purported specification searching and
related alleged untruths as “[e]vidence of fraud in a past proceeding” that constitutes
“changed circumstances,” thus “requir[ing] a reevaluation of those characteristics of a
Crawford-like regression that have infected the regression models presented in this
proceeding.” SDC PFF ¶ 96 (emphasis added).

In this regard, the SDC take particular note that Dr. Marx acknowledges that,
because her Bayesian model relies directly on Dr. Crawford’s results, her results are
unreliable if Dr. Crawford’s results are unreliable. SDC PFF ¶ 129 (citing 4/11/23 Tr.
4323-24 (Marx)).
B. CCG Response Regarding Alleged Specification Searching by Dr. Crawford
CCG’s “primary response” to the SDC’s claim is that any specification searching
by Dr. Crawford is irrelevant because “regression has the advantage of transparency and
replicability.” CCG PFF ¶ 217 (and record citations therein). This occurred in the
present proceeding, CCG maintains, as the work of various experts presenting testimony
in this case showed, that every aspect of a regression such as the Crawford Model could
be and was examined and tested. 4/18/23 Tr. 5177-79 (George); George WRT at 53.
Further, CCG maintains it is appropriate for experts in the present proceeding not
to “mov[e] away from an approach that the Judges have found highly useful in
determining relative market value” unless there were “clear theoretical or empirical
reasons” to do so. CCG PFF ¶ 218 (and record citations therein). CCG analogizes to the
“academic setting,” in which “differing views” among econometricians can be
“addressed through the ‘referee’ process … where the most important criterion for
evaluating a proposed alternative model is whether the proposed change undermines the
theoretical relationships in some way ….” George WRT at 52.
Applying the foregoing points, Dr. George was unconcerned that Dr. Crawford’s
procedures appeared to include “more than one model.” She analyzed the Crawford
Model on its merits, concluding that it “was tightly linked to the economics of the cable
marketplace and estimated to minimize bias.” It was on this basis, as well as the Judges’
endorsement of the model, that Dr. George used the Crawford Model as the basis for her
work in this proceeding. 4/18/23 Tr. 5131, 5176 (George); George WDT at 6; Ex. 7404;
George WRT at 10-11, 13, 43-44; see also CCG PFF ¶ 220.

C. CTV Response Regarding Alleged Specification Searching by Dr. Crawford
When asked whether she believed Dr. Crawford had or had not engaged in
improper specification searching, Dr. Marx demurred, stating that she was “not offering
that opinion.” 4/11/23 Tr. 4119 (Marx). When asked specifically about the more
detailed arguments made by the SDC witnesses regarding Dr. Crawford’s alleged
specification searching based on supplemental discovery, Dr. Marx sought to make sure
her “no-opinion” testimony was unambiguous:
I want to be clear that I didn’t reach an opinion about whether or not [Dr.]
Crawford had a fair underlying theoretical structure behind the regressions
that he ran. I didn’t see anything in what I reviewed that raised red flags that
that was not the case, but what I saw was consistent with or at least not
inconsistent with proper econometric practice.
4/11/23 Tr. 4121 (Marx) (emphasis added). See also 4/11/23 Tr. 4226 (Marx) (testifying
similarly in response to questioning by Judge Strickler); 4/11/23 Tr. 4257 (Marx) (same).
On cross-examination, Dr. Marx elaborated while reiterating her “no opinion” regarding
the characterization of Dr. Crawford’s consideration of hundreds of regression
alternatives:
[Dr. Marx]
[I]n my direct testimony … I wanted to emphasize that I am not opining
that [Dr.] Crawford had an underlying theoretical structure. I’m just saying
that what I saw was consistent with that. What I saw was not inconsistent
with proper econometric practice, but I’m not offering an opinion about
what [Dr.] Crawford was thinking in the process of running these tests. And
I’m not trying to speak for [Dr.] Crawford.
[SDC counsel Mr. MacLean]
So you would agree that … running hundreds of different models and then
selecting models based on preferred or expected results or what you referred
to as casting about, that would not be a good research practice …?
[Dr. Marx]
It is not a good research practice to cast about without thinking and without
an underlying theoretical structure … without the underlying economics
being kept in mind. The mere observation of a large number of regressions

being run, by itself, in the context of the 2010 to 2013 proceeding, I don’t
find at all surprising, and seeing that did not raise any concerns in my mind
about either the reliability of the work or my ability to use my usual
procedure and thinking to assess the reliability of the work.
4/11/23 Tr. 4325-27 (Marx).
However, after being confronted with Dr. Crawford’s testimony that he had
“perform[ed] only one alternative analysis, that he hadn’t provided” in discovery, in
contrast to what was uncovered in the satellite discovery, Dr. Marx acknowledged that as
to Dr. Crawford’s oral testimony “there are statements that were made that seem in
retrospect not accurate.” 4/11/23 Tr. 4332 (Marx). Dr. Marx then nonetheless retreated
to one of her stock statements, asserting that “nothing that I saw raised any concerns in
my mind that [Dr.] Crawford’s results were not reliable ….” 4/11/23 Tr. 4334 (Marx).
Accordingly, rather than render her own judgment as to the appropriateness of Dr.
Crawford’s conduct or adjust her application of the Crawford Model in light of these
issues, Dr. Marx testified that she reviewed and assessed Dr. Crawford’s 2010-13
regression model as she would consider any such model, whether in her role as an
economist or as an academic journal referee (which is a function she performs). On this
basis, she determined that Dr. Crawford’s model was reliable, i.e., regardless of any of
the specification searching and dissembling that SDC claimed had been uncovered in the
satellite proceeding discovery. Marx WRT ¶¶ 42-54; 4/11/23 Tr. 4112-20, 4325-4327,
4334 (Marx); CTV PFF ¶¶ 366-69; Reply of the Commercial Television Claimants to
Proposed Findings of Fact and Conclusions of Law (CTV RPFF) ¶ 169.
A key reason why Dr. Marx declined to express an opinion as to Dr. Crawford’s
alleged specification searching is the following: What the SDC characterize as Dr.
Crawford’s wrongful experimentation with alternative model specifications, Dr. Marx
maintains can also be understood as a form of sensitivity analysis – not only a standard
activity, but actually a best practice in econometric analysis. Marx WRT ¶ 10; 4/11/23
Tr. 4120-21 (Marx). More broadly, CTV asserts that what Drs. Erdem and Rubinfeld

criticize as evidence of the improper practice of specification searches can all be
understood as the “standard practice of economists” – involving “[r]obustness checks,
sensitivity analyses, and differences across economists in regression specifications.”
CTV PFF ¶ 371 (citing Marx WRT ¶¶ 31-36).
D. PTV Response Regarding Alleged Specification Searching by Dr. Crawford
PTV’s expert economic witness, Dr. Johnson, did not address the soundness of
Dr. Crawford’s 2010-13 regression methodology, which, to repeat, the SDC economic
experts characterize as the wrongful undertaking of a specification search.34 But PTV
emphasizes that, although Dr. Johnson acknowledges that his own regression analysis is
based on the economic theory and principles underlying Dr. Crawford’s regression
analysis, Dr. Johnson modified and improved some aspects of Dr. Crawford’s regression
model. PTV PFF ¶¶ 113, 115 (citing Crawford WDT ¶¶ 32-36, 46.) Thus, PTV argues,
even if Dr. Crawford engaged in wrongful specification searching to construct his 2010-13 model, “it makes no sense for it to adversely affect the reliability of Dr. Johnson’s
regression specification, which has a different set of variables and has been tested on the
2014–17 data.” PTV PFF ¶ 143.
E. Allegations of Concealed Specification Searching by Dr. Johnson in This
Proceeding
Turning from the work of Dr. Crawford to the work of Dr. Johnson, on behalf of
PTV in the present proceeding, the SDC accuse PTV and Dr. Johnson of similar
misconduct as they allege was committed by Dr. Crawford in the 2010-13 proceeding.
SDC charge that Dr. Johnson concealed numerous regression modeling tests in discovery,
limiting production to only a few sensitivity tests. SDC PFF ¶105. Despite this modest
discovery, based on the documentation that had been produced by PTV, Dr. Erdem saw

Dr. Johnson testified he never received Dr. Crawford’s workpapers unearthed in discovery in the 2010-13 satellite proceeding on which the SDC relies for its specification search allegation (despite the
production of those documents by the SDC to all counsel, including PTV’s counsel, in this proceeding).
evidence suggestive of specification searching. 4/5/23 Tr. 3429; 4/6/23 Tr. 3552-55
(Erdem). These suspicions gave rise to the SDC’s motion to compel PTV’s production
of all regression models that Dr. Johnson had considered, and the Judges granted the
motion. See Order 24 Granting the SDC Motion to Compel PTV to Produce Documents
(Jan. 19, 2023).
F. SDC Assertions After Further Discovery
After PTV was compelled by the Judges to provide further discovery, it produced
documents revealing that Dr. Johnson’s team had selected the four models that he
presented out of more than four hundred models. He and his professional subordinates
had actually engaged in over 400 runs of regression approaches over several different
data sets (resulting in numerous different results in terms of program category
coefficients and implied allocation shares). Erdem WRT ¶ 82; Supplemental Written
Rebuttal Testimony of Erkan Erdem, Trial Ex. 7504, ¶ 3 n.3 (Erdem SWRT); 4/5/23 Tr.
3403 (Erdem); SDC PFF ¶106. Further, the SDC cataloged the use by Dr. Johnson and
his professional subordinates of 44 different dependent variables (including log
transformations) and wide ranges of shares (negative as well as positive) in all claimant
categories. Erdem WRT ¶ 82; Supplemental Written Rebuttal Testimony of Daniel L.
Rubinfeld, Trial Ex. 7506, ¶ 21, tab 2 (Rubinfeld SWRT).
Dr. Erdem analyzed these tests according to dates and sequence numbers included
in the documents produced by PTV and claimed to find that the successive testing by Dr.
Johnson and/or his team was correlated with a steady rise in PTV’s allocation share.
Erdem SWRT Ex. 2.
The SDC dismissed as implausible Dr. Johnson’s explanation of this correlation.
Specifically, the SDC rejects Dr. Johnson’s claims that the correlation was a

“coincidence” or that it could be explained by incomplete and erroneous data that needed
to be corrected or updated. SDC PFF ¶ 109 (citing 3/22/23 Tr. 737-39 (Johnson)).35
In any event, Dr. Erdem testified that if Dr. Johnson and his team were not
engaged in specification searching, the allocation results arising from the data updates or
corrections should have been more randomly distributed, and, further, that as a matter of
regression methodology it was inexplicable that data changes would serve to generate
hundreds of regressions with different combinations of specifications. 4/6/23 Tr. 3565-67 (Erdem). Moreover, Dr. Erdem accused Dr. Johnson and his professional
subordinates of self-servingly searching not only for the specifications that would
increase PTV’s allocation share, but also of attempting to search for an optimal
combination of a specification set and a dataset for increasing PTV’s allocation share.
4/6/23 Tr. 3552-55 (Erdem). As purported proof, Dr. Erdem points to his running of Dr.
Johnson’s preferred (“baseline”) model, but with Dr. George’s dataset, which caused
PTV’s allocation share to decrease by 8 percentage points, with the share of every other
category increasing. Erdem WRT Ex. 8.
In addition to the more technical econometric evidence relied on by the SDC, they
also point to physical evidence. Specifically, the SDC relies on notes left by a project
manager on this assignment, Ms. Yan, which showed the search criteria that Dr.
Johnson’s team applied: a search for positive and statistically significant coefficients on
all content and a high allocation share for PTV, denoted in a document as “PBS↑” (i.e.,
an “increase value to shift w/ lots of minutes”). Erdem SWRT ¶¶ 8-9 & app. E; SDC
PFF ¶ 114. The SDC’s other econometric expert, Dr. Rubinfeld, using the essentially

It is important to note here that the SDC is mischaracterizing Dr. Johnson’s specific testimony. He
clearly did not say the correlation was a mere coincidence or explainable as a data issue. Rather he
claimed in his testimony that the increase in PTV shares was coincidental with and caused by the inputting
of additional and correct data, and that it was the data that generated PTV’s higher share. See 3/22/23 Tr.
738 (Johnson) (“I completely refute … that it's a coincidence. The reason that this happened is … tied to
specific data issues … [and] the data is what it is.”) (emphasis added).
synonymous phrase “p hacking” to describe the alleged specification searching conduct
of Dr. Johnson’s professional subordinates, asserts that this behavior “invalidates” Dr.
Johnson’s statistical tests. Rubinfeld SWRT ¶ 23. SDC’s counsel characterizes this note
from Ms. Yan as the proverbial “smoking gun.” SDC PFF ¶ 115.
The SDC further assert that when the hundreds of regression models developed by
Dr. Johnson and his team were culled to a sub-group of those with “positive and
statistically significant coefficients for all categories,” only four had higher share
allocations for PTV. Moreover, Dr. Erdem opined that these other four had data and
statistical anomalies that would have made them difficult for Dr. Johnson to defend in
any event. 4/5/23 Tr. 3424-25 (Erdem). The SDC thus concludes that Dr. Johnson and
his team essentially chose the model with the highest PTV share that they thought they
could defend. SDC PFF ¶ 116.
The SDC also maintain that there was an intentional separation between Dr.
Johnson and other professionals at his consulting firm, Edgeworth Economics
(“Edgeworth”), designed to shield Dr. Johnson from regression specifications that would
have generated lower shares for PTV – a form of “plausible deniability.” In support of
this assertion, the SDC point to written communications within Edgeworth indicating that
certain documents needed to be kept from Dr. Johnson or else PTV would be required to
turn them over in discovery. See, e.g., Erdem SWRT ¶ 8 (reproducing notes of
Edgeworth employee Eduardo Munoz-Alonso, dated 7/8/2021, distinguishing between
material for “John’s report (he’ll see) [and] other stuff (John won’t)”); Erdem SWRT ¶¶ 8-9 & app. E (note written by Esther Yan, dated 5/26/2022, stating “Anything we show
John gets turned over. …”); and Erdem SWRT ¶ 8 (an email containing a link to CDC
distant signals data sent to Dr. Johnson’s team includes the caveat: “…these data files are
being shared for consulting purposes only and should not be shared with John”).

Looking at the entirety of the record regarding the procedures undertaken by Dr.
Johnson and others at Edgeworth, Dr. Rubinfeld, one of the two SDC expert witnesses,
opined:
Dr. Johnson’s practices (or the practices of other experts or their staff on
behalf of PTV Claimants) are counter to sound empirical research practices.
Their analyses involve the misuse of the regression methodology to obtain
statistically significant results that deliver coefficient values that generated
relatively high shares for PTV Claimants.
Rubinfeld SWRT ¶¶ 28-30.36
G. Rebuttals to the SDC’s Assertions of Specification Searching
Dr. Johnson maintains that the SDC and other critics of his work (including Dr.
Tyler and Mr. Harvey) have misunderstood the nature of the many regression
specifications that were generated and run on behalf of PTV. More particularly, he
explains in detail that he and his team ran many of the regression specifications for the
purpose of testing the data, a process that needed to be repeated to incorporate corrections
and updates to the data. 3/21/23 Tr. 416-23, 627–745 (Johnson) (explaining the
regression log, the research process, Edgeworth team structure and personnel, timing of
data receipts and updates from vendors and scope of discovery productions). See also
PTV PFF ¶¶ 139, 145.
Dr. Johnson further maintains that, assuming arguendo there was any untoward
activity in the nature of a specification search, it is essentially a moot point because

A JSC expert statistical witness, Mr. Harvey, likewise concluded that Dr. Johnson had engaged in a
specification search. However, the JSC did not emphasize this point, maintaining instead that “it is
unnecessary to conclude that Dr. Johnson intentionally searched for a specification favoring PTV in order
to find his model untrustworthy [because] the selection of data inputs and specifications” was improperly
undertaken. JSC PFF ¶¶ 195-196 (and record citations therein).
Program Suppliers’ expert economic witness, Dr. Tyler, also concluded that the work by Dr. Johnson
and/or his team “provides evidence that, rather than letting the facts of the industry guide the modeling
decision, [they] tested many different models, and then sought to justify certain specifications with
economic theory.” PS PFF ¶ 377 (and record citations therein). Further, Program Suppliers maintain that
“[t]he evolution of Dr. Johnson’s calculated shares for PTV over time provides evidence that data mining
[i.e., specification searching] and/or overfitting occurred.” Id. Further, Program Suppliers find it
problematic that, in this context, “[o]ut of the many regression specifications that Dr. Johnson ran, he
selected for his baseline model one in which the PTV share is substantially higher than the median results
from the models considered ….” Id. at ¶¶ 377-378 (and record citations therein).

through discovery (including the discovery PTV at first withheld and later produced only
in response to an order compelling production) every regression specification that he and
his team ran has now been produced. This production, according to Dr. Johnson,
eliminates any concern that the Johnson Model was misspecified, whether intentionally
or otherwise. 3/21/23 Tr. 641 (Johnson) (“Again, you actually have everything. . . . I
followed … what counsel instructed me to do in terms of what I was required to turn
over. And when we were required to turn over everything, everything has been turned
over that my team ever ran, so we have given you everything.”). See also PTV PFF ¶
146.
Additionally, many of the regression models generated and run by Dr. Johnson
and other professionals at Edgeworth Economics (Dr. Johnson is the founder and CEO),
according to Dr. Johnson, reflected their efforts to understand the Crawford Model
proffered in the 2010-13 proceeding and to determine whether the Crawford Model could
be applied to the 2014-17 data. 3/21/23 Tr. 367–68, 370–73 (Johnson). Those purposes,
PTV maintains, are inconsistent with a characterization of their work as specification
searching. Public Television’s Reply Proposed Findings of Fact and Conclusions of Law
(PTV RPFF) ¶ 208.
Overall, given the full disclosure of all the work by Dr. Johnson and his fellow
professionals on behalf of PTV, PTV maintains that this comprehensive body of evidence shows
that the Johnson Model generated regression results that are unbiased and best reflect the
data available to be input into the Johnson Model. PTV RPFF ¶ 210.
H. The Judges’ Analysis and Conclusions
As an initial matter, the Judges reject SDC’s argument that Dr. Crawford’s
deviations from the prior regression models presented by Drs. Joel Waldfogel and
Gregory Rosston ipso facto demonstrate, or even suggest, that Dr. Crawford engaged in
the wrongful process of specification searching. The record reflects no legal, economic

or econometric principle that an expert cannot alter, revise, add to or subtract from a prior
economic model. Indeed, the history of the Judges’ acceptance of fee-based regression
models as evidence shows quite the opposite. A brief examination of the evolution of the
regression methodology, set forth immediately below, makes that clear.
In the allocation (Phase I) proceedings for the 1998-99 royalties, the CARP
described the first fee-based regression relied upon in such proceedings:
Dr. Rosston's regression attempts to analyze the relationship between
royalties paid by cable operators for the carriage of distant signals in 1998-1999 and the quantity of programming minutes by programming category
on those distant signals. … It compares the relative volume of the various
Phase I categories of programming contained in the station signals actually
purchased by CSOs in 1998-1999 with the total royalties each CSO actually
paid for that programming … identifying the amount of royalties as the
dependent variable ….
…
Dr. Rosston included more than royalties and programming minutes in the
dataset he used for his regression analysis. In order to account for the non-programming factors that may affect the royalties paid by a cable system,
Dr. Rosston added the following variables: (1) the number of subscribers to
the cable system in the prior period (the so-called "lagged subscribers"
variable); (2) the number of activated channels for the cable system; (3) the
average household income of the market in which the cable system was
located; (4) the total number of local channels carried; (5) a variable to
account for the payment of 3.75% royalties; and (6) a variable to account
for the carriage of partially distant signals.
Report of the Copyright Arbitration Royalty Panel to the Librarian of Congress, in
Docket No. 2001–8 CARP CD 98–99 (‘‘1998–99 CARP Report’’) at 45-46 (Oct. 21,
2003). The CARP accepted Dr. Rosston’s fee-based regression, but only as corroborative
of survey results also in evidence. Id. at 50. The CARP declined to give more
evidentiary weight to the Rosston regression, relative to the Bortz Survey (which the
CARP found to be “extremely robust,” id. at 30).
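The general structure described above, with royalties as the dependent variable regressed on programming minutes by category plus non-programming controls, can be sketched as follows; the data are synthetic, the variable names are hypothetical, and the actual specifications in the record differ in functional form, controls, and fixed effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
cso_data = pd.DataFrame({
    "sports_minutes": rng.integers(0, 500, n).astype(float),
    "program_supplier_minutes": rng.integers(0, 5_000, n).astype(float),
    "lagged_subscribers": rng.integers(1_000, 50_000, n).astype(float),
    "activated_channels": rng.integers(40, 200, n).astype(float),
})
cso_data["royalties"] = (
    50.0 * cso_data["sports_minutes"]
    + 2.0 * cso_data["program_supplier_minutes"]
    + 0.5 * cso_data["lagged_subscribers"]
    + rng.normal(0.0, 5_000.0, n)
)

# Royalties regressed on minutes by claimant category plus (a subset of) the controls.
model = smf.ols(
    "royalties ~ sports_minutes + program_supplier_minutes"
    " + lagged_subscribers + activated_channels",
    data=cso_data,
).fit()
print(model.params)
```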
In the allocation (Phase I) proceeding for the 2004-05 years, the Judges received
in evidence the Waldfogel fee-based regression. Dr. George has described in her

testimony in this proceeding the key changes made by Dr. Waldfogel to the Rosston
regressions:
(1) estimating the marginal value of additional programming minutes (regression
coefficients) using pooled data for all years, improving the precision of the
estimates;
(2) calculating claimant shares using only compensable programming; and
(3) estimating the regression model with a sample of programming covering three
full weeks per accounting period.
George WDT at 24 n.22. See also 2004-05 Distribution Order at 57068 (noting that the
Waldfogel regression was “similar” to the Rosston regression, not identical).
Similarly, in the 2010-13 proceeding, the Judges found that the regression
approach on which they relied – the Crawford Model – reflected an improvement over
the Waldfogel Model, because, inter alia, the Crawford Model: (1) relied on more
granular subscriber group data (made available by statutory changes in CSO reporting
requirements); and (2) employed “fixed effects” to diminish the impact of potentially
“omitted variables.” 2010-13 Determination at 3569. See also George WDT at 24-26
(identifying the improvements made by Dr. Crawford).
This history clearly shows that the Judges have not found that the mere presence
of model modifications reveals any inherent defect in fee-based regressions writ large or
in any such model in particular. Rather, a modification of a fee-based regression model
may properly reflect (1) improvements in the model; (2) improvements in the data; (3)
changes in the underlying industry; (4) changes in applicable economic theory; and/or (5) wrongful specification searching. Without further analysis, deviations from prior models are not themselves informative.
But the SDC maintain that Dr. Crawford’s development of his model was – to say
the least – troubling, and not consistent with an attempt simply to improve upon prior
regression models or to generate a more relevant model for this proceeding. As noted
supra, SDC argue essentially that Dr. Crawford engaged in the improper process of specification searching, and lied on the witness stand to cover up that improper conduct.
To summarize, SDC contends that Dr. Crawford lied under oath about the following:
-- his testing of many different functional forms
-- his development and rejection of many undisclosed alternative models
-- his inclusion of indicator variables with no apparent function
-- his running of hundreds of models without Fixed Effects when he actually ran
these models at various levels of Fixed Effects.
See SDC PFF ¶¶ 90-91, 99, 106.
As Chief Judge Shaw noted at the hearing, the Judges are not in a position to find
whether Dr. Crawford did or did not engage in improper professional conduct, as alleged
by SDC, because he is not appearing as a witness in this proceeding. 3/22/23 Tr. 894-95
(Shaw, C.J.). Thus, the Judges were loath to conduct a “trial-within-a-trial” as to Dr.
Crawford’s work and procedures.
However, that is hardly the end of the matter. SDC has presented compelling
evidence of potential specification searching and dissembling by Dr. Crawford.
Moreover, SDC provided to the other parties in this proceeding, as voluntary discovery
disclosures, Dr. Crawford’s internal workpapers, which the Judges had ordered produced
in the 2010-13 satellite proceeding that followed on the heels of the 2010-13 cable
proceeding – disclosed only after SDC’s Motion to Compel and the Judges’ in camera
review of those documents.
The fee-based regression experts view Dr. Crawford’s potential transgressions
with less concern. Dr. George, CCG’s expert witness, maintains that Dr. Crawford’s
non-disclosures and untruths, as cataloged and characterized by SDC, are of no
consequence, because she, and the other experts, were able to examine the Crawford
Model as it was presented, and evaluate it on its merits. George WRT at 53. In essence,
this response is in the nature of a “no harm, no foul” rationale for disregarding any of Dr.
Crawford’s improprieties as alleged by SDC. And, in that context, Dr. George
examined the Crawford Model and found no cause to reject it as a starting point for her
analysis (although she modified the Crawford Model to account for marketplace changes,
arising predominantly from the WGNA conversion, that she found to necessitate
modeling changes particularly with regard to the use of fixed effects). George WRT at
50-54.
Dr. Marx’s carefully repeated testimony is similar, but nuanced, hedged and cast
in the form of a double negative: “[W]hat I saw was consistent with or at least not
inconsistent with proper econometric practice.” 4/11/23 Tr. 4121 (Marx). She does
make a more specific defense of Dr. Crawford, offering her opinion that, the “mere
observation of a large number of regressions” in Dr. Crawford’s workpapers is “not
surprising,” and is what one would expect to see as a “sensitivity” analysis, which is a
“best practice” in regression modeling. Marx WRT ¶ 10. As a final defense of Dr.
Crawford’s modeling conduct, Dr. Marx analogizes his proffer of expert testimony before
the Judges to an academic economist’s submission of a proposed article to a professional
journal, which would be reviewed by an editor and referees, in a process that is within the
ambit of Dr. Marx’s professional responsibilities. In that context, Dr. Marx would not
require that all the modeling decisions by the econometrician be set forth in the proposed
article, 4/11/23 Tr. 4328 (Marx) (“in my work as a professional economist, as a referee,
as an editor, I don't expect to see the full list of every regression that was ever run.”) and
she notes that she was able to evaluate Dr. Crawford’s submission on its own merits, like
a proposed article, without all the prior regression runs. Id. at 4111-4115. 37

The Judges also take note of Dr. Marx’s awkward position as to this issue. As SDC notes, she is a
partner at Bates White, an economic and econometric consulting firm (in addition to her position as an
economics professor at Duke University’s Fuqua School of Business). Dr. Crawford likewise is a partner
at Bates White (as is another CTV testifying expert in this proceeding and in the 2010-13 proceeding, Dr.
Bennett). Further, Dr. Crawford testified in the prior proceeding on behalf of CTV, whereas Dr. Marx is
the economic expert now testifying on behalf of the same party, CTV.
The Judges find that the other experts in this proceeding – particularly Drs.
Johnson, George and Marx – who proffered fee-based regression models – were
obligated to adequately address the impact of Dr. Crawford’s workpapers, as well as the
assertion that they demonstrated he lied in his testimony in the prior proceeding. This
obligation existed because, as SDC witness Dr. Rubinfeld testified, in his decades of
experience, he has “never seen anything on this scale” where “a researcher selected a
model from hundreds that were tried.” 4/6/23 Tr. 3638 (Rubinfeld). The economists’
careful analysis of Dr. Crawford’s work is necessary, because – as explained in more
detail infra – the discovery of his potential concealment and dissembling, which was
unearthed in discovery in the satellite proceeding, may have been procedural in origin,
but procedural matters can be outcome-determinative, or at least impactful as to the
outcome of a legal proceeding.38 As explained below, Drs. George, Johnson and Marx
all failed in this regard.
The fundamental problem with the self-exculpations by these experts is that they
failed to address an issue that the Judges made explicit in the 2010-13 Determination.
Specifically, in response to the SDC’s speculation that Dr. Crawford had engaged in
specification searching, the Judges agreed that the problem inherent in such improper
behavior was that it would “consum[e] … ‘phantom degrees of freedom,’ i.e., ‘variables
that were tried and rejected – rather than included in the regression model in evidence.’”
2010-13 Determination at 3566.39
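For clarity, the definition quoted in the footnote below can be restated in notation supplied here solely for illustration (the symbols do not appear in the cited authority). Reported degrees of freedom are df = n − k, where n is the number of observations and k the number of estimated parameters. One simple way to formalize the “phantom” concern is that a modeler who privately estimates and discards specifications containing, in total, m additional parameters still reports df = n − k, even though the search has effectively drawn on something closer to n − (k + m); the hidden difference is the artifact the Judges describe.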

Courts have long been concerned with whether what appears facially to be procedural is in actuality
outcome-determinative. See Erie R. Co. v. Tompkins, 304 U.S. 64 (1938). The Judges in the present case
expected the same concern from the economic experts in the context of their analysis.
As the Judges noted in that prior proceeding:
‘Degrees of freedom’ are defined “[i]n multiple regression analysis, [as] the number of
observations minus the number of estimated parameters.” [citation omitted] Accordingly,
statisticians understand “degrees of freedom’ to be measures of how much can be learned from a
regression, with the quality of knowledge improved by increasing the number of observations,
reducing the number of estimated parameters, or by some combination of both that serves to widen
the difference between the number of observations and parameters. [citation omitted] … [A]
‘phantom degree of freedom’ can be generated when the modeler reduces the number of parameters by his or her rejection of other models that would have added a greater number of parameters – nothing more has really been learned but the explicit number of degrees of freedom appears larger, as an artifact (a ‘phantom’) arising from the econometrician’s rejection of models containing additional parameters. [citation omitted].
2010-13 Determination at 3566 n.63.

Although the following is a summary, with citations omitted, the Judges adopt in full herein their reasoning in Order 24.

In that prior proceeding, the Judges found that the record did not reveal evidence
of specification searching (recall that this finding was made prior to the CTV’s compelled
production of Dr. Crawford’s workpapers in the companion satellite proceeding).
However, in response to an SDC Motion to Strike Dr. Crawford’s testimony, which the
Judges denied given the absence of evidence of specification searching, they did reserve
the right to reduce the weight they accord to the regression Dr.] Crawford presented. Id.
n.64. Ultimately though, the Judges declined to reduce the weight they accorded to Dr.
Crawford’s regression analysis based on the claim of specification searching. Id.
Of course, between the two cable proceedings then and now, the satellite
proceeding intervened. In Order 24 in the present proceeding, the Judges granted SDC’s
Motion to Compel another party, PTV, to produce documents that might reflect
specification searching by its expert Dr. Johnson (discussed infra). The Judges’
discussion of specification searching in Order 24 also bears on the Judges’ present
consideration of how Dr. Crawford’s modeling procedures impacted the models proffered
by Drs. George, Johnson and Marx in this proceeding, all of which were based on the
Crawford Model. In summary fashion,40 below is what the Judges stated regarding
specification searching in Order 24:
-- the particular importance of discovery relating to econometric evidence is
underscored by the potential for models to be manufactured in the service of a
particular result, where findings are presented with “notoriously misleading
accounts of how the research itself was conducted.”
-- it is important that econometricians explain fully their specification search in
order to judge how the results may have been affected.
-- econometricians should disclose “all the regressions that were run, not just the
good ones … basically an ‘honesty is the best policy’ approach.”
-- these criticisms are of special import here, where the applied econometric work can
affect the allocation of significant royalty sums.
-- specification searching is a concern here because the “hired gun” role of the
expert creates an environment in which specification searching can cause “far-reaching harm.”
--but what can be construed as improper “specification searching” can “in fact
constitute good econometric practice” by using the empirical evidence to rank
models and let the data speak for itself;
--adding specifications to the modeling can assist in solving the econometric
problem at hand
--suppressing failed specifications and arbitrarily presenting one successful
specification is a “spurious success,” but it is not necessarily dishonest.
--it would be fallacious to prefer not to search but simply to write down a model
and to conduct a one-shot test….
--there are search methodologies that support, rather than distort statistical
hypothesis tests.
--specification searches are necessary, provided there is a “full accounting” of all
alternative models, specifications and datasets
Order 24 at 48-51 & n.65 (citations omitted).
In sum, as one authority cited by the Judges concluded: “[T]here are good and
bad search procedures.” Order 24 at 51 (emphasis added).
The foregoing summary makes clear that, on the surface, the methods and practice
of an econometrician may look either like improper specification searching or like a
proper searching for the appropriate model specifications. In order to determine which
characterization is more accurate, further expert analysis is needed.
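The distinction can be illustrated schematically. The sketch below does not describe any witness's actual workflow; the formulas, variable names, and selection rule are hypothetical. A disclosed sensitivity analysis estimates a theory-driven baseline first and then reports every perturbation, whereas a result-driven specification search runs many candidate models and reports only the most favorable one.

import statsmodels.formula.api as smf

BASELINE = "royalties ~ minutes_ptv + minutes_sports + lagged_subscribers"  # theory-driven baseline

def sensitivity_analysis(data, extra_controls):
    # Estimate the baseline first, then perturb it; every run is retained and disclosed.
    runs = {"baseline": smf.ols(BASELINE, data=data).fit()}
    for control in extra_controls:
        runs[control] = smf.ols(BASELINE + " + " + control, data=data).fit()
    return runs  # readers can judge whether the key coefficients are stable

def specification_search(data, candidate_formulas):
    # Try many specifications and keep only the one most favorable to a chosen coefficient;
    # the rejected runs are never reported.
    fits = [smf.ols(formula, data=data).fit() for formula in candidate_formulas]
    return max(fits, key=lambda fit: fit.params.get("minutes_ptv", 0.0))

On the Judges' summary of Order 24, what separates the two postures is the "full accounting" of all alternative models, specifications, and datasets, not the number of runs in itself.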
However, as to this, the parties that relied on the Crawford Model punted. Most
startlingly, Dr. Johnson testified that he never received the satellite case documents that
SDC’s counsel produced to PTV’s counsel (and to all counsel) or the testimony by Dr.
Erdem in the satellite proceeding that was designated as evidence (Ex. 7054) in this
proceeding by the SDC. 3/21/23 Tr. 340-41; 611, 616-17 (Johnson).41
For her part, Dr. Marx in essence simply restates the difficult nature of the
process, testifying that she was unable to distinguish Dr. Crawford’s process as either an
improper specification search or a useful sensitivity search. But Dr. Marx did not
indicate that she examined the documents produced by SDC in any detail approximating
the analysis engaged in by Dr. Erdem on behalf of SDC, before figuratively throwing up
her hands and declaring the characterization of Dr. Crawford’s position as unknowable.
Moreover, although Dr. Marx was troubled by Dr. Crawford’s apparently false statements
under oath, she remained incurious as to whether his troubling testimony was indicative
of a covering-up of specification searching.42
Moreover, when the specification process has been shrouded, as here, the position
taken by Drs. Johnson and George becomes untenable. Their analysis and replication of
the Crawford Model is materially incomplete, given that it has credibly been described as
allegedly constructed by a specification search that may have generated the “phantom
degrees of freedom” discussed supra, or through a process analogous to the spurious coin-flip experiment also discussed supra. The problem for the
regression experts who ignore the evidence of potential specification searching is that
they simply cannot appreciate the problems that may have been generated, unless and
until they have engaged in a reasonable forensic analysis of the work (and workpapers) of the expert who constructed the model at issue.

The record does not reflect whether PTV’s counsel ever provided copies of these materials to Dr. Johnson.

The SDC also convincingly explained that whatever it was that Dr. Crawford was doing, it did not qualify as a “sensitivity” test. Settling Devotional Claimants’ Proposed Reply Findings of Fact and Conclusions of Law ¶ 2. The Judges agree. A sensitivity test is “[t]he process of checking whether the estimated effects and statistical significance of key explanatory variables are sensitive to inclusion of other explanatory variables, functional form, dropping of potential out-lying observations, or different modes of estimating.” 2010-13 Determination at 3562 n.48 (citation omitted). But the same authority quoted in note 34 situates the “sensitivity analysis” as occurring after the econometrician has estimated his or her original model, not during the specification process. Wooldridge, Introductory Econometrics 687 (3d ed. 2006). To engage in what would otherwise be a sensitivity analysis in order to search a model places the cart before the horse, and may be a telltale sign of “data mining,” i.e., specification searching. See Wooldridge, supra, at 688 (The “inclination … to try different models, different estimation techniques, or perhaps different subsets of data until the results correspond more closely to what was expected [is] data mining [which] violates the assumptions we have made in our econometric analysis.”).
The failure of Drs. George, Johnson and Marx to thoroughly re-examine the
Crawford Model in light of the discovery obtained by SDC in the 2010-13 satellite
proceeding has consequences. Although, as noted supra, the Judges are not in a position
to engage in a “trial within a trial” and render findings regarding the Crawford Model in
this proceeding (where Dr. Crawford is absent), these three expert witnesses were not
similarly constrained. They had a duty to review all materials relevant to their
assignments, in a sufficient manner, and the satellite discovery pertaining to Dr.
Crawford’s work clearly falls within that category of materials. For Dr. Johnson to have
not even received that material is inexplicable. For Dr. Marx to acknowledge the
problematic nature of Dr. Crawford’s apparent dissembling under oath without further
analysis of his work is troubling. And for Dr. George to dismiss the assertions of
improper specification searching by claiming that she could independently evaluate the
Crawford Model is to dismiss the very idea that specification searching may generate
hidden problems.
Indeed, among the witnesses proffering regressions, only Dr. Tyler appeared to
respond reasonably, relying (in part) on the troubling facts uncovered in the satellite
proceeding regarding Dr. Crawford’s processes to generate his own model that deviated
in important ways from the Crawford Model.
The impact of Dr. Crawford’s troubling conduct is that it raises an issue familiar
to judges and lawyers in another context – how to handle testimony and evidence that
may be characterized as the “fruit of the poisonous tree.” Although this evidentiary
concept is typically pertinent to the criminal law, it is instructive in other areas, including
intellectual property matters:
The animating principle of the fruit of the poisonous tree doctrine is
causation: If you had not violated the law, you wouldn't have found the
evidence, and you wouldn't have followed whatever investigative path that
was triggered by finding that evidence. The newly discovered evidence-the
fruit-is tainted by the poison of the illegal search. Civil law also concerns
itself with chains of causation … [b]ut it does not typically apply the logic
of the fruit of the poisonous tree to chase down every consequence of a
wrong.
M. Lemley, The Fruit of the Poisonous Tree in IP Law, 103 Iowa L. Rev. 245, 246
(2017). As to the present issue, the “fruit of the poisonous tree” logic – if the source of
the evidence or evidence itself is tainted, then anything gained from it is tainted as well –
has application because it would be inequitable for the Judges to adopt regression
evidence built on the Crawford Model, when the witnesses who proffered that evidence
inadequately addressed reasonable questions regarding the appropriateness of the
methods used to generate the Crawford Model.
If the Crawford Model had been the first regression model utilized in these
allocation proceedings, the Judges might consider rejecting the models proffered by Drs.
George, Johnson and Marx for their failure to address in more and sufficient detail how
the factual bases for the allegations of Dr. Crawford’s specification searching impacted
their models. But, as described supra, the Crawford Model itself was built upon, but
differentiated from, the prior regressions produced by Drs. Rosston and Waldfogel and
relied upon by the Judges. Thus, the regression models of Drs. George, Johnson and
Marx are not the product merely of the Crawford Model, but also of those models that
preceded it. Moreover, Drs. George and Johnson take pains to explain how their models
are different from Dr. Crawford’s, particularly in the reduction or elimination,
respectively, of fixed effects, in order to generate more observations (as discussed
elsewhere in this determination).43 So, it is clear that these two experts engaged in
independent economic analysis separate and apart from what was undertaken by Dr.
Crawford.

Whether those particular differentiations from the Crawford Model were appropriate is likewise
discussed elsewhere in this determination.
Dr. Marx’s full adoption of the Crawford Model, as it pertained to the year 2013, in order to generate her Bayesian model for 2014, must
be considered separately. Dr. Marx explicitly relies on the Crawford Model, despite her
inability to explain or address his apparent prevarications and despite her unwillingness
to determine whether his methods constituted specification searching, sensitivity analysis
or something else. However, Dr. Marx’s qualitative and directional economic (not
econometric) testimony regarding the years 2015-2017 is not compromised in this
regard.
Accordingly, among the regression approaches proffered in this proceeding, the
experts’ responses and non-responses to Dr. Crawford’s conduct lead the Judges, ceteris
paribus, to give diminished weight to the Johnson and George Models, and the least
weight to the Marx Model for 2014. The Judges do not diminish the weight they shall
give to the Tyler Model on this basis, given his deviation from the Crawford Model.
I. The Allegation That Dr. Johnson Engaged in Improper Specification
Searching
Unlike the specification searching issue regarding the Crawford Model, there is
no valid allegation that Dr. Johnson made any material misrepresentations in his
testimony. Although SDC correctly notes that PTV did not provide full discovery of the
work by Dr. Johnson and other professionals at Edgeworth until compelled to do so
pursuant to SDC’s motion and the Judges’ Order 24, PTV appears to have withheld
production of documents regarding this regression work based on its understanding that
the Federal Rules of Civil Procedure do not require production of documents which
related to regressions that an expert had rejected or had not otherwise seen or upon which
he did not rely.44

In Order 24, the Judges noted that, although they look to the Federal Rules of Civil Procedure for guidance, they are bound on this issue by 37 CFR 351.10(e), regarding the production of documents relating to an expert witness’s methodology, and that this rule also applies to the production of documents in discovery pertaining to expert methodology.
However, the Judges remain troubled, as they so expressed in Order 24, that PTV
appeared to allow for the creation of two different “teams” within Dr. Johnson’s firm –
one identified as the “consulting team,” and the other as the “testifying” team. As noted
supra, the regression-related documents generated by the “consulting team” were not
provided to Dr. Johnson. The Judges noted in Order 24 that a “consulting team” of
experts can be utilized by a party’s law firm, to allow for work product confidentiality in
connection with the law firm’s evaluation of the facts. However, as Order 24 further
explained, when the “consulting team” is created within the same firm of economists
who are also preparing testimony and actually testifying, there is the risk that work by the
“consulting” team will be utilized as a screening device for work that should have been
undertaken by the “testifying” team. Thus, the use of a “consulting” team can also allow a party to cloak expert work from discovery by claiming the protection of the work-product rule.
This is essentially what SDC alleges, when it points to evidence, as noted supra,
that Edgeworth had shielded Dr. Johnson from certain documents. Moreover, the
soundness of the “wall” between the “consulting” team and the “testifying” team was
questionable, given that the “consulting” team was led by Drs. Michael Kheyfets and
David Colino, but they also were the senior members of the “testifying” team that
reported to Dr. Johnson, along with dual team members Dr. Stephanie Cheng and Esther
Yan. 3/21/23 Tr. 664-65 (Johnson). Additionally, when PTV first produced documents
to SDC, it did not also provide a privilege log describing the Edgeworth documents
otherwise withheld because of an assertion of a privilege relating to a consulting team.
(After SDC’s motion to compel, PTV provided a privilege log, but, after Order 24 issued,
PTV produced virtually all of the previously withheld material.) Thus, the Judges find
some evidence that PTV attempted to avoid discovery of the work undertaken by the firm
it engaged for expert work in this proceeding – the work that has been characterized by
SDC as evidence of specification searching.45 This evidence serves to diminish the
Judges’ reliance on the Johnson Model that was generated out of this scenario, although
the Judges stop well short of any finding that Edgeworth, or any of its professionals,
engaged in any misconduct.46
Turning to the substance of the documents produced in response to Order 24, the
Judges are struck, as was SDC, with the sheer number of regression runs undertaken by
Edgeworth. In particular, the Judges agree with SDC that the experimentation with 44
dependent variables is specifically troubling, as it suggests that the model-building was
not well-grounded in economic theory.
Also troubling was the fact that, over a prolonged period, successive testing by
Dr. Johnson and other Edgeworth Economics professionals was highly correlated with a
steady rise in PTV’s allocation shares. Although the Judges disagree with SDC’s
distortion of Dr. Johnson’s testimony as to the “coincidental” nature of this correlation, as
noted supra, the Judges do not find any sufficient basis in the record to explain this
correlation between sequential regression runs and the growth of PTV’s allocation share.
Although PTV argues that this correlation subsided as data corrections were completed,
PTV presented no sufficient basis to rebut SDC’s charge that data changes should not
consistently be correlated with the growth of PTV’s share allocation, as opposed to a
randomized effect on share percentages.47
The Judges take particular note of the fact that an e-mail that was withheld from Dr. Johnson as
“consulting” team material contained “a link to CDC distant signals [with] the caveat: ‘. . . these data files
are being shared for consulting purposes only and should not be shared with John’.” Rubinfeld SWRT at
6. It is difficult to fathom why raw data regarding distant signals would be withheld from the testifying
expert.
Rather, the Judges perceive from the facts that PTV and its experts took a very aggressive litigation
posture, one that SDC successfully challenged, leading to the issuance of Order 24.
The Judges are less concerned with SDC’s assertion that proof of PTV’s specification searching is supported by evidence that PTV’s goal was to maximize PTV’s share. The Judges are not naïve, and they recognize that experts will work to produce the best results for the party on whose behalf they provide testimony. Rather, the Judges are concerned with whether the evidence suggests that experts may have engaged in any inappropriate or questionable acts in the course of attempting to maximize the return to the party on whose behalf they give testimony.
On balance, the Judges find that the regression analyses undertaken on behalf of
PTV at least demonstrate an appearance – in the words of SDC’s expert, Dr. Rubinfeld –
of practices that ran “counter to sound empirical research practice,” and that this work
may well have been undertaken with an overzealous attempt “to obtain … results that …
generated relatively high shares for PTV Claimants.” Rubinfeld SWRT ¶ 28. For this
reason – and for other reasons set forth elsewhere in this determination – the Judges give
reduced weight to the Johnson Model.
VII. ISSUES SPECIFIC TO PTV
A. How Should “Must-Carry” PTV Stations be Analyzed in the Regression
Analyses?
1. PTV’s Position on the “Must-Carry” Issue
PTV first emphasizes its legal argument. They begin by acknowledging that
under the Cable Television Consumer Protection and Competition Act of 1992 (the
“Cable Act”) and the regulations of the Federal Communications Commission (“FCC”)
(the “must-carry” rules), CSOs are required to retransmit certain broadcast signals. PTV
PFF ¶ 70 (citing 47 U.S.C. 534–35). Nonetheless, PTV maintain that “the Judges and
their predecessors … have never found that must-carry requirements materially affect the
value of distant retransmissions of Public Television programming.” PTV PFF ¶ 71
(emphasis added).
PTV follows this legal point with a factual issue, challenging the testimony of
JSC’s witness, Mr. Harvey, who identifies 15.5 percent of PTV distant signals as having
been retransmitted in compliance with these must-carry rules, using criteria that Mr.
Harvey believed were “generally indicative” of must-carry carriage. PTV PFF ¶ 72.

Specifically, Mr. Harvey categorized distantly retransmitted signals as “must-carry” if
they were:
(1) carried to all subscriber groups within the system,
(2) local to at least one subscriber group within the system, and
(3) were licensed to a community whose reference point was within 50 miles of the location where the CSO received signals for cable distribution (the “headend”).
PTV PFF ¶ 72 (and record citations therein). A primary assertion by PTV is that,
because of the third criterion above, these stations, designated as “must-carry” while
technically “distant” within the meaning of section 111, “were more likely to reflect the
demands and preferences of regional viewers” and thus contained “valuable
programming.” PTV PFF ¶ 72 (and record citations therein).
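Mr. Harvey's screen, as described above, can be expressed as a simple three-part test. The sketch below is illustrative only; the record structure and field names are hypothetical, and the function implements the "generally indicative" screen described in the text rather than the must-carry rules themselves.

from dataclasses import dataclass

@dataclass
class DistantSignalCarriage:
    # Hypothetical record for one distantly retransmitted signal on one cable system.
    carried_to_all_subscriber_groups: bool
    local_to_at_least_one_subscriber_group: bool
    miles_from_community_reference_point_to_headend: float

def generally_indicative_of_must_carry(record: DistantSignalCarriage) -> bool:
    # Apply the three criteria quoted above; this is a screening heuristic, not a legal test.
    return (
        record.carried_to_all_subscriber_groups
        and record.local_to_at_least_one_subscriber_group
        and record.miles_from_community_reference_point_to_headend <= 50.0
    )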
But PTV takes issue with the entirety of Mr. Harvey’s approach to designating
“must-carry” stations. First, PTV points out that “even … expert witnesses whose
opinions rely on Mr. Harvey’s must-carry analysis” acknowledge that his analysis “did
not definitively identify must-carry signals.” PTV PFF ¶ 73 (and record citations therein)
(emphasis added).
Second, PTV argues that “Mr. Harvey failed to provide a reason for adopting his
first criterion that the must-carry rules should apply to signals carried “to all subscriber
groups within the system.” PTV PFF ¶ 74 (and record citations therein). PTV maintains
that there presumably would be no reason to use that as a criterion unless he thought that
the must-carry law required carriage “to all subscriber groups within the system.” PTV
PFF ¶ 74 (and record citations therein). More particularly, PTV understands that a “cable
system,” as defined in the must-carry rules, “is a smaller unit than the ‘cable system’ as
defined in section 111.” PTV PFF ¶ 75 (and record citations therein). Thus, PTV argues
that “carriage of such a signal to all of the subscriber groups in a system may be evidence
of that cable system’s choice to carry that signal more broadly than the must-carry rules
require.” PTV PFF ¶ 75 (and record citations therein). PTV concludes that Mr. Harvey’s
must-carry analysis “likely results in overstating the [number] of [PTV] signals subject to
mandatory carriage, perhaps dramatically so.” PTV PFF ¶ 75 (emphasis added).
PTV further makes what can be characterized as a “no changed circumstance”
argument. Specifically, PTV points out that Mr. Harvey fails to address the fact that
mandatory carriage of PTV distant signals has become no more expansive since the 2010–
2013 proceeding, and that no party argued in that proceeding that the must-carry rules
had any material impact on relative market value. Further, PTV avers that “the fraction of
PTV signals that Mr. Harvey identified as ... must-carry declined substantially over the
period from 2014 to 2017,” suggesting that, even under his analysis, “the share of PTV
distant retransmissions that were subject to must-carry is less than in prior proceedings.”
PTV PFF ¶ 76 (and record citations therein).
Additionally, PTV asserts that Mr. Harvey incorrectly implied that PTV’s
multicast streams48 are subject to the must-carry rules. PTV PFF ¶ 77 (and record
citations therein). To the contrary, PTV avers that “it is undisputed that the must-carry
rules do not require CSOs to retransmit those non-primary signals of a PTV broadcast
station, and all carriage of Public Television multicast streams was due to the voluntary
choice of the cable operators.” PTV PFF ¶ 77 (and record citations therein).
Beyond its legal and factual arguments, PTV adds an argument based on
economic analysis. Taking on a point made by another JSC witness, Dr. Majure, PTV
opines that “there is no basis to assume that any distant signal carried pursuant to the
must-carry rules provide ‘$0’ of value to the CSO, as Dr. Majure argues.” PTV PFF ¶ 78 (and record citations therein). More particularly, PTV explains that “[p]eople are routinely required to purchase things, such as health insurance and seat belts, which they may value highly.” PTV PFF ¶ 78 (and record citations therein). See also PTV PFF ¶ 81 (“Dr. Majure’s theory of ‘$0’ value fails [to pass through a] ‘reality filter’ [by] suggest[ing] that all local [PTV] programming has [zero] value.”)

The Judges define and discuss “multicast streams” infra.
Changing tacks, PTV points out that, without dispute, “many CSOs chose to
retransmit [PTV] distant signals when they could have carried another distant signal
instead.” PTV PFF ¶ 79 (and record citations therein). Additionally, PTV compares this
CSO decision-making to the CSOs’ responses to the Bortz Survey, in which “[s]everal
CSOs that carried the purportedly must-carry [PTV] distant signals attributed significant
value to those Public Television distant signals in their [survey] responses . . . . ” PTV
PFF ¶ 79 (and record citations therein).
PTV further points to various “sensitivity tests” undertaken by Drs. Johnson,
Bennett and George, all of which “found that those purportedly must-carry Public
Television distant signals do not have relative marketplace value that is statistically
significantly different from the relative marketplace value of other Public Television
distant signals.” PTV PFF ¶ 82 (and record citations therein). Thus, PTV takes issue
with any implicit assumption “that any distant signal carried pursuant to the must-carry
rules are, on average, less valuable to the CSOs.” PTV PFF ¶ 82.
But PTV also acknowledges the presence of an indemnification provision in the
must-carry statute, whereby Congress exempted from mandatory carriage any
noncommercial educational signals that qualify as distant signals, “unless [the
noncommercial educational broadcast station] indemnifies the cable operator for any
increased copyright costs resulting from carriage of such signal.” PTV PFF ¶ 84 (quoting
47 U.S.C. 535(i)(2)). Thus, a CSO “was eligible for indemnification only if and to the
extent that its section 111 royalty fee increased due to the carriage of a distant signal that
was subject to must-carry; and the station then had the choice of declining
indemnification, in which case the [CSO] was released from any must-carry obligation.”
PTV PFF ¶ 84. Nonetheless, PTV criticizes any party seeking to exclude must-carry
stations from the regressions based on this statutory provision, which cancels out any
royalty payment, because PTV argues (echoing its criticism of Mr. Harvey’s analysis),
that no party has “reliably identified any distant signals that are subject to mandatory
carriage … for which the retransmitting cable operator received indemnification.” PTV
PFF ¶ 85 (and record citations therein).
PTV also makes a more general argument that would apply to PTV “must-carry”
stations, even assuming they had no value. Specifically, PTV maintains that “[a] fee-based regression model is designed to estimate the average relative value of
programming in a bundle, such that bundling of programming of different values does not
bias the regression estimates of relative marketplace value.” PTV PFF ¶ 91.
2. The Other Parties’ Positions Regarding PTV “Must-Carry” Signals
As a matter of legal interpretation, JSC argues that it would not be reasonable to
remove from the hypothetical market any statutory provisions that apply to the distant
signal market, other than the section 111 license. JSC PFF ¶ 2 (and record citations
therein). Applying this approach, JSC notes that, as a matter of statutory law, the Must
Carry statutory and regulatory provisions are not found within the section 111 license
provisions, but rather are statutorily set forth at 47 U.S.C. 535, and therefore should
remain in effect in the hypothetical market the Judges must construct in this proceeding.
And, because the Must-Carry provisions preclude any finding of Willingness-to-Pay and
fail to reveal CSOs’ preferences, it is also economically reasonable to maintain the impact
of the Must Carry provisions on the regression approach by excluding such stations from
that valuation methodology. JSC PFF ¶ 3 (and record citations therein).

JSC also points to the following 1992 legislative history of the must-carry
provisions as supporting, from both the legal and economic perspectives, a finding that
must-carry PTV stations do not generate additional value that can be incorporated into
the fee-based regressions:
The [House Committee on Energy and Commerce] Committee believes that
absent statutory carriage requirements, there is a substantial likelihood that
local public television stations will be deleted, will not be carried, or will be
switched to undesirable channels on cable systems. Because cable operators
are for-profit enterprises, they necessarily seek to provide customers with
the package of programming and services that will maximize the operators’
profits. As commercial enterprises, cable operators ordinarily lack strong
incentive to carry programming that does not attract sufficient dollars or
audiences. Traditionally, public television has provided precisely the type
of programming commercial broadcasters and cable operators find
economically unattractive. For this reason, the Committee believes that,
without ‘must carry’ provisions, public television service increasingly will
become unavailable to cable subscribers.
JSC PFF ¶ 475 (citing Trial Ex. 1003 (House of Representatives Report 102–628) at 62).
JSC points out that this was not only the Congressional viewpoint at the time of
enactment of the must-carry law, but also that PTV has continued to agree with
Congress’s assessment of the economic circumstances described in the above legislative
history, insisting that public television stations need must-carry status to guarantee
carriage. JSC PFF ¶¶ 476-478, 488-489 (and record citations therein).
Last, but certainly not least, in apparent response to PTV’s criticism of Mr.
Harvey’s estimate of the number of must-carry stations, JSC suggests that PTV knew or
should have known how many of the stations it represents in this proceeding in fact were
must-carry stations. JSC PCOL ¶ 13 (“When a party is in a position to proffer testimony
or evidence that would elucidate a point, or rebut an adverse point, but declines to do so,
a finder of fact may determine that the testimony would not have been supportive of that
party’s position.”) (citing Final Rule and Order, Determination of Rates and Terms for
Digital Performance of Sound Recordings and Making of Ephemeral Copies to Facilitate
Those Performances (Web V), 86 FR 59452, 59476 (Oct. 27, 2021) (Web V Final
Determination), (citing in turn Huthnance v. District of Columbia, 722 F.3d 371 (D.C.
Cir. 2013)), aff’d NRBNMLC v. CRB, 77 F.4th 949, 2023 WL 4831376 (July 28, 2023).
a. The SDC Position on the “Must-Carry” Issue
The SDC apply their broad criticism of minimum-fee-only CSOs to the question
of how to address the must-carry PTV stations: “[N]o inference can be drawn regarding
‘willingness to pay’ or any other potential theory on the basis of cable system decision-making in the presence of mandatory carriage of certain PTV signals.” Asker WRT ¶ 17
n.11; 4/11/23 Tr. 4319-21 (Marx); see also SDC PFF ¶ 64.
Like the JSC, the SDC maintain that, as a legal issue, the Judges’ consideration of
economic market forces to determine relative market value does not mean that the
statutory must-carry rules should be ignored:
The task in these royalty distribution proceedings is to determine the relative
value of the relevant program categories in a hypothetical market that exists
in the absence of the section 111 compulsory license. There is no basis for
assuming away the existence of other aspects of the regulated market, nor
has any party in this proceeding presented a rational framework by which
one could pick and choose which other aspects of the regulated market
would survive. At a minimum, the Retransmission Consent and Must-Carry
Requirements set forth in the Communications Act and Federal
Communications Commission’s (“FCC”) rules would continue to regulate
the relationship between broadcast stations and CSOs. See 47 U.S.C.
325(b); 47 CFR 76.55, 76.64.
SDC PFF ¶ 218.
The SDC also emphasize a point central to their general criticism of the fee-based
regressions – the impact of geography on retransmission decisions:
Unlike commercial stations, the must-carry zone for noncommercial
stations is determined by distance from the cable system rather than by
DMA [Designated Market Area]: a noncommercial station is entitled to
cable carriage under the FCC’s must-carry rules if its city of license is
within 50 miles of the cable system’s principal headend. 47 CFR 76.55.
SDC PFF ¶ 222. Further, the SDC note the indemnification provision, discussed supra,
also compromises the attempt to derive marketplace evidence of the value of must-carry
stations:

[Although] [u]nder section 111, a noncommercial station is only considered
“local” within 35 miles of the cable system’s headend . . . [a] cable operator
is not required to carry a noncommercial station that would be considered
distant for copyright purposes unless the noncommercial station agrees to
indemnify the CSO for any increased copyright liability resulting from such
carriage.
Presumably, this indemnification requirement would be moot in the absence
of section 111, because there would be no cost at all to cable systems
carrying noncommercial signals within the FCC’s 50-mile must-carry zone
in the absence of section 111. There is no basis to believe the inapplicability
of the indemnification requirement would affect the relative marketplace
value of noncommercial stations, as carriage of noncommercial stations
would still result from the federal must-carry mandate rather than any CSO
choice.
SDC PFF ¶ 222 (citing 17 U.S.C. 111(f)(4)).
b. The CTV Position on the “Must-Carry” Issue
CTV emphasizes the substantial importance of the must-carry issue, noting first
that “[d]uring 2014-2017, no less than 33.9% PTV signals were carried pursuant to must-carry rules.” CTV PFF ¶ 249 (citing Harvey CWDT ¶ 87; 3/28/23 Tr. 1836-37
(Harvey)). See also CTV PFF ¶¶ 256-57 (42.6% of all PTV distant reported base fee
royalties are from PTV signals subject to the must-carry rule.)
CTV also expands upon the evidentiary point made by JSC, noted supra,
regarding PTV’s failure to produce evidence as to the number of must-carry stations:
PTV, the claimant with the most accurate information regarding PTV
distant stations carried by CSOs pursuant to the must-carry rules, has
provided no evidence or statistics to refute the foregoing. At most, PTV
economics witness Dr. Johnson contends that Mr. Harvey’s findings are
speculative, but he neither contested nor provided any alternative
calculations to Mr. Harvey’s conclusions.
CTV PFF ¶ 258. Echoing the criticism noted supra, CTV maintains that carriage of a
PTV signal under the must-carry rules does not reflect a CSO’s revealed preference
through a weighing of incremental costs versus incremental benefits, and thus does not
reflect relative marketplace value. CTV PFF ¶ 272 (and record citations therein).
Moreover, CTV also points out that even when CSOs retransmitting must-carry
stations pay more than the minimum fee, they nonetheless cannot reveal a willingness to
pay for that programming because of the indemnification obligation, discussed supra, of
PTV stations to pay back CSOs for any additional royalty costs associated with the
required (i.e., must-carry) retransmission of its programming. CTV PFF ¶ 259.
CTV further notes the “material” effect of the must-carry issue on PTV’s
regression and allocation shares, both individually and jointly. CTV PFF ¶ 264. Pointing
to a sensitivity analysis by one of its expert witnesses, Dr. Bennett, CTV notes that
eliminating the royalty payments the Johnson Model has attributed to must-carry stations
substantially reduces the PTV values on either attribute, and in combination. Bennett
WRT ¶ 95. These adjustments are shown in the figure below:
Figure 38: Effect of removing must-carry PTV stations on Dr. Johnson’s estimated
PTV shares

                               Percentage point change in PTV shares
         Johnson’s     1. Excluding           2. Excluding           3. Excluding must-carry
Year     PTV shares    must-carry in          must-carry in          in both regression
                       allocation only        regression only        and allocation
2014     35.88%        -0.28%                 -9.19%                 -9.43%
2015     46.20%        -8.03%                 -3.55%                 -11.36%
2016     53.43%        -10.75%                -3.40%                 -14.05%
2017     58.87%        -9.43%                 -3.44%                 -12.94%

Bennett WRT fig.38.
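Read as a worked example, the 2016 row of Figure 38 indicates that Dr. Johnson’s estimated PTV share of 53.43% would fall by 14.05 percentage points, that is, to 53.43% − 14.05 = 39.38%, if the must-carry stations were excluded from both the regression and the allocation (column 3).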
Similarly, Dr. Bennett undertakes the same adjustment to Dr. George’s regression
coefficient and allocation share regression results for PTV:
Figure 21: Impact of removing must-carry PTV stations from Professor George’s
implied shares for PTV

                               Percentage point change in PTV shares
         George’s      1. Excluding           2. Excluding           3. Excluding must-carry
Year     PTV shares    must-carry in          must-carry in          in both regression
                       allocation only        regression only        and allocation
2014     30.2%         -0.2%                  -10.3%                 -10.5%
2015     36.6%         -5.6%                  -6.0%                  -11.1%
2016     41.6%         -8.2%                  -6.3%                  -13.7%
2017     47.0%         -7.7%                  -6.9%                  -14.1%

And in like fashion, Dr. Bennett makes the same must-carry adjustment for PTV to Dr.
Tyler’s analysis:
Figure 52: Impact of Tyler’s implied shares for PTV after removing must-carry
PTV stations in allocation

                               Percentage point change in PTV shares
         Tyler’s       1. Excluding           2. Excluding           3. Excluding must-carry
Year     PTV shares    must-carry in          must-carry in          in both regression
                       allocation only        regression only        and allocation
2014     14.02%        -0.45%                 -0.20%                 -0.64%
2015     27.87%        -5.85%                 0.27%                  -5.62%
2016     37.38%        -8.38%                 0.54%                  -7.90%
2017     40.39%        -7.72%                 0.81%                  -6.98%

In conclusion, CTV underscores the existence of a consensus on this must-carry
issue, noting that Drs. Marx, Bennett, and Majure all agree that including PTV must-carry stations in the regressions results in an overestimation of the value of PTV content
for all four years. CTV PFF ¶ 534 (and record citations therein).
c. The Program Suppliers Position on the “Must-Carry” Issue
Program Suppliers join with the other parties that maintain the FCC’s must-carry
rules should still be deemed by the Judges to apply in their modeling of the economic and
marketplace environment necessary to allocate the royalties at issue. That is, in the
hypothetical environment, even though the section 111 conditions are relaxed, Program
Suppliers argue that the parties must “still continue to be subject to the same must-carry
rule and agreement obligations . . .. ” PS PFF ¶ 101 (and record citations therein).
However, Program Suppliers take issue with any assertion that accounting for
PTV’s must-carry stations would have a significant effect. Their expert, Dr. Tyler, noted
that Dr. Bennett’s calculations – reproduced supra – showed that removing the must-carry stations (that were identified by Mr. Harvey) from the Tyler Model barely changed
the PTV share allocation. 4/19/23 Tr. 5456 (Tyler). Moreover, Dr. Tyler opines,
consistent with the testimony by PTV’s expert Dr. Johnson, that “even with must-carry,
CSOs may still have some value related to that carriage.” 4/19/23 Tr. 5456 (Tyler).49 See
also PS PFF ¶ 337.
d. The CCG Position on the “Must-Carry” Issue
CCG is part of the chorus asserting that the Judges should include the impact of
the must-carry provisions in their economic analysis of relative marketplace value. CCG
PFF ¶ 62. However, CCG parts company with those parties arguing that the compelled
nature of such retransmission decisively compromises the informational worth of that
carriage in estimating such value.
Specifically, Dr. George, CCG’s expert, like Dr. Johnson, analogizes public
television programming to other “real-world examples” of goods that have value,
notwithstanding the fact they are mandated by the government. In this regard, as
examples, she points to health insurance, which she says generates value, and to
automobile airbags and seatbelts which, although mandated, increase the value of an
automobile. Similarly, she points to the federal government requirement that individuals
carry health insurance to argue that the mandate does not mean that the product does not
have value to them. 4/18/23 Tr. 5346 (George). Based on these analogies, CCG
maintains that the must-carry rules have a positive effect on the value of PTV
programming. CCG PFF at 81. See also CCG PFF ¶ 224.
Nonetheless, Dr. George recognizes the possibility of an alternative finding – that
any assertion of value in must-carry stations would be rejected. Accordingly, she turns to
Dr. Bennett’s analysis cited supra – at Bennett WRT fig.21 – which she recognizes as
showing the “downward adjustments” to her “regression” to account for a finding of the
absence of value in PTV’s must-carry signals. CCG PFF ¶ 225 (and record citations
therein).

But note Dr. Marx’s point that must-carry stations that were distantly retransmitted by CSOs paying only
the minimum fee would not generate a CSO royalty obligation, mooting the need for a royalty
indemnification payment. Marx WRT ¶ 79.
3. The Judges’ Analysis and Conclusions Regarding the “Must-Carry”
Issue
The Judges agree with JSC and CTV, based on the caselaw cited by JSC, that
PTV, whose clients include the public television stations that are in fact subject to must-carry requirements, bore the twin burdens of proof – the burden of producing evidence
and the burden of persuasion – regarding which stations were subject to the must-carry
provisions and which were not. Further, because PTV is seeking a determination
including must-carry station data in the regression, those burdens are apportioned to PTV
as a matter of statute. See 5 U.S.C. 556(d).
But rather than produce such evidence or prove its significance, PTV elected to
attack Mr. Harvey’s attempt to estimate the number of must-carry stations. Those attacks
are insufficient. The Judges first take note that PTV argues only that Mr. Harvey
“perhaps” or “likely” overstated the number of must-carry stations. But Mr. Harvey
engaged in a reasonable attempt to estimate this number, which PTV could have set forth
in its submissions, but did not.
Further, the Judges do not credit PTV’s argument that the must-carry status of
some PTV stations can be deemed irrelevant because the issue of must-carry stations was
not raised in previous section 111 allocation proceedings. Each of these proceedings is
de novo in nature, and the determination is based on the evidentiary record in that
proceeding, as well as on the pertinent findings and conclusions in prior proceedings.
Although regurgitated factual argument from prior findings may be summarily rejected
by reference back to the findings in prior determinations, and although renewed legal
arguments are cabined by the precedential effect of prior determinations, new arguments
are not similarly restricted. Moreover, the absence of an issue in a prior proceeding, such
as the impact of the must-carry status of PTV stations, certainly does not preclude
consideration of that issue in this proceeding.

The Judges also reject the argument made by PTV and CCG that the must-carry
stations have value, notwithstanding that indemnification provisions would offset any
royalty payments. There are two reasons why this argument is incorrect. First, the point
is not that the programs on must-carry stations, including those subject to royalty
indemnification payment back to the CSOs, lack value; rather, the point is that they lack
objective and measurable value. On the issue of objective value, the experts for PTV and
CCG mistakenly seek to analogize must-carry PTV stations to two “must-buy”
automobile attributes, seat belts and air bags, and to “must-carry” health insurance, which
come at a cost. There are two problems with this argument. First, although one can quite
reasonably argue that these coerced purchases are beneficial, from an economic point of
view the purchase does not reveal a buyer’s preference because seatbelts, air bags, and
health insurance are coerced, not voluntary.50 Second, a price proxy could likely be
generated for seat belts and air bags by comparing the retail price of cars immediately
before and after their inclusion was mandated for new cars, or by comparing the spread in
price between new cars (with such a safety device) and used cars (lacking such safety
devices). Regressions seeking to use such data would be true, full-fledged hedonic
regressions. But here, the task is markedly different and more difficult, because no such
historical or comparative benchmarks were possible. Thus, as noted elsewhere in this
determination, the regressions are “inspired” by, and in the nature of, hedonic
regressions, using the context of section 111 to identify the market-related revealed
preferences of CSOs, just as fee-based regressions have been utilized in previous
allocation proceedings. But the attempted analogy to market-generated attributes included in market-priced products misses the mark and continues the unfortunate strained attempts by the experts supporting and criticizing fee-based regressions to compare the fee-based regressions to hedonic regressions.

It might be reasonable to assume that a consumer would prefer an automobile with these safety features over an automobile lacking them, or the protection of health insurance rather than the risk associated with its absence, but without a structure for monetizing such preferences, the measure is only ordinal in nature, rather than cardinal. PTV alludes to this problem when, as noted supra, it notes that these are items that purchasers “may” value. But that implies that they may not value them in a context where there is an associated out-of-pocket or opportunity cost.
As to the issue of measurable value, PTV and CCG fail to address the fact that, if
these stations do not generate net royalties, then the regressions should not be attributing
(correlating) their minutes with royalties. The regressions will not “see” the
indemnification payments made by the PTV stations back to the CSOs who made royalty
payments. Thus, to the extent these royalty payments are recorded as base fee payments
on the SOA forms relating to subscriber groups, they will falsely be “seen” by the
regressions as indicating that the minutes were associated (correlated) with additional
royalties, when that was not the case. As several witnesses have noted, the regressions
are “dumb,” and will calculate whatever it is they are programmed to calculate. It is up
to the econometrician who constructs and evaluates the regression to “think,” and decide
whether the regression has reflected reality (legal, institutional, and economic) in a proper
manner. The Judges find that Mr. Harvey made a prima facie case regarding the number
of PTV stations that were must-carry.
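The mechanism can be illustrated with a stylized simulation; the numbers below are invented solely for the illustration and are not record evidence. In the simulated data, must-carry minutes add to the gross fees reported on the SOAs but are fully reimbursed through indemnification, so they carry no net cost to the CSO. A regression run on the reported (gross) fees nonetheless attributes a positive coefficient to those minutes, while a regression run on net fees does not; that is the sense in which the regression cannot "see" the indemnification payments.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "other_minutes": rng.uniform(100, 1000, n),
    "must_carry_minutes": rng.uniform(0, 500, n),
})
# Gross fees reported on the SOAs rise with all carried minutes ...
data["reported_royalties"] = (
    50 + 0.10 * data["other_minutes"] + 0.10 * data["must_carry_minutes"]
    + rng.normal(0, 5, n)
)
# ... but indemnification reimburses the must-carry portion, so the net cost excludes it.
data["net_royalties"] = data["reported_royalties"] - 0.10 * data["must_carry_minutes"]

gross = smf.ols("reported_royalties ~ other_minutes + must_carry_minutes", data=data).fit()
net = smf.ols("net_royalties ~ other_minutes + must_carry_minutes", data=data).fit()
print(gross.params["must_carry_minutes"])  # positive: the regression "sees" value in must-carry minutes
print(net.params["must_carry_minutes"])    # approximately zero once reimbursement is netted out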
The Judges also do not credit PTV’s point that many CSOs chose to retransmit
PTV signals when they could have carried another distant signal instead. Not only does
that point ignore the problem of whether a station was subject to indemnification, it also
indicates merely an ordinal preference.
The Judges also reject the argument that the regressions can include the must-carry station data because CSOs responded to the Bortz Survey by attributing value to
such signals. This “whataboutism” argument holds no purchase – either the data belongs
in the regressions, or it does not. The Bortz Survey is a form of model seeking to address
relative marketplace value from a different perspective, and the requisites or output of
one model do not necessarily map onto another model. Cf. NRBNMLC v. CRB, supra,
slip op. at 41 (affirming the Judges’ Web V rate determination that a finding applicable to
one economic model (the issue of opportunity cost) did not automatically apply to the
same issue when addressed in a different type of model).
PTV’s assertions regarding the value of any adjustment for the presence of must-carry stations, with their attendant indemnification requirements, amount merely to an argument regarding the extent of the adjustment, not the need for one. As
noted, the extent of the adjustment varies, depending upon how it is applied and to which
regression model it is applied. The Judges consider that point in making their
adjustments, infra.
Finally, the Judges agree with the argument that the legislative history relating to
the must-carry provisions, and PTV’s own prior positions, reflect an understanding that
public television stations need must-carry status in order to obtain carriage. Such real-world facts serve as “reality filters” that can and should override the “dumb” manner in
which a regression “sees” the royalty and carriage data.
For these reasons, the Judges find that PTV failed to discharge its evidentiary
burdens, failed to demonstrate that Mr. Harvey’s estimation should be rejected by the
Judges, and failed to adequately demonstrate the existence of value in must-carry stations
sufficient to include them as part of the relative marketplace value generated by the
regression approach.
In terms of the necessary adjustments, the Judges agree with Dr. Bennett’s
approach, in which he eliminates the value attributed to the must-carry stations in both
the regressions and the allocations, as there is no evidence or testimony sufficient to
warrant only an adjustment in one of these regards. Thus, the Judges agree with the
adjustments in column number 3 of Dr. Bennett’s figures 38, 21, and
52, respectively, set forth supra.

B. Are PTV’s Multicast Stations Exempt From Royalty Payments?51
The parties dispute whether multicast stations should be included in the fee-based
regressions. Before setting forth the parties’ respective positions, it is helpful to set forth
a brief history of the relevant statutory provisions and the industry reaction. In this
regard, the SDC’s overview of the context is accurate and succinct:
Prior to the analog-to-digital television transition, a broadcast station could
transmit only a single stream of programming. The transition to digital
broadcasting, completed for all full-power stations in 2009, enabled stations to
broadcast multiple streams of programming, i.e., a “primary stream” and one or
more “multicast streams.”
Accordingly, the Satellite Television Extension and Localism Act (“STELA”) of
2010 added a DSE for distant transmissions of multicast streams. STELA, Pub.
L. 111-175, 124 Stat. 1218, 1239 (2010).
Certain multicast streams were temporarily exempted from having a DSE value
assigned, including those that (a) had been carried by a CSO prior to February 27,
2010, or (b) had an agreement in place prior to June 30, 2009, for free carriage on
a CSO. See STELA, 124 Stat. 1218, 1239; see also Marx ACWDT ¶ 70.
The Association of Public Television Stations (“APTS”) entered into such an
agreement with the National Cable and Telecommunications Association
(“NCTA”) in 2005, which was renewed in 2016 …. [REDACTED]. . . .
The PBS-NCTA agreement governed carriage of PTV stations during the 2014-2017 time period and required participating CSOs to carry up to four
programming streams per PTV station (i.e., the primary stream and three
multicast streams). The agreement thus served to “exempt” up to three multicast
streams per station from generating copyright liability until its expiration and
renewal in 2016, at which time the exempted multicast streams were reclassified
for royalty purposes as “non-exempt” streams with a DSE value of 0.25.
SDC PFF ¶¶ 223-224 (and record citations therein). Accord PTV PFF ¶ 67 (and record
citations therein).

The definition of multicasting is not in dispute. Basically, it refers to “a type of national television
service designed to be broadcast terrestrially … on their digital subchannels … by the conversion from
analog to digital television broadcasting, which le[aves] room for additional services to be broadcast from
an individual transmitter . . . . ” Digital multicast television network, Wikipedia,
https://en.wikipedia.org/wiki/Digital_multicast_television_network (last visited Aug. 9, 2023). The
exempt/non-exempt nomenclature is somewhat confusing; “exempt” means CSOs do not pay section 111
royalties, and “non-exempt” means CSOs shall pay section 111 royalties (unless, by agreement with the
copyright owners, section 111 royalty payments are waived).
The record in this proceeding also reflects the parties’ and the industry’s
awareness of the terms of the 2016 renewal of the 2005 PBS-NCTA agreement
referenced above. Accordingly, although the Judges denied the post-hearing admission
of the PBS-NCTA agreement into the record,52 the Judges have relied upon the record
evidence of the parties’ understanding of that agreement.
1. PTV’s Position on Multicast Stations
PTV maintains that, for the years 2016 and 2017, multicast stations should be
treated like all other distantly retransmitted broadcast stations for the purposes of
establishing relative marketplace value through the fee-based regression analysis, noting
that, under section 111, they “are assigned the same DSE value as that station’s primary
stream.” PTV PFF ¶ 66 (citing 17 U.S.C. 111(f)(5)); PTV PFF ¶ 67 (and record citations
therein).
PTV distinguishes the multicast stations from the must-carry rules, asserting “it is
undisputed that the must-carry rules do not require CSOs to retransmit those non-primary
signals of a [PTV] broadcast station, and all carriage of PTV multicast streams was due to
the voluntary choice of the cable operators.” PTV PFF ¶ 77 (and record citations
therein). PTV acknowledges that PTV primary and multicast stations are functionally
retransmitted distantly as a “bundle,” but that fact is neither unique to distant carriage of
PTV stations nor consequential with regard to the inclusion of the multicast stations in a
fee-based regression model. As to the latter point, PTV asserts that, because “[a] fee-based regression model is designed to estimate the average relative value of
programming in a bundle, such … bundling of programming of different values does not
bias the regression estimates of relative marketplace value.” PTV PFF ¶ 91. More
particularly, PTV explains that the Waldfogel-style regressions of Drs. Johnson and
George rely on “average relative valuations,” and that programming which does not
See Order 41 Denying as Moot Public Television’s Motion for Reconsideration of Order 33.

correlate with higher royalties “will be factored into the regression.” PTV PFF ¶ 91
n.140 (citing George WDT at 51; 4/18/23 Tr. 5170–74 (George); 3/21/23 Tr. 350, 456-58:15, 595 (Johnson); Johnson WRT ¶ 65).
Because he understood the programming of multicast streams on distantly retransmitted broadcast signals to be compensable under section 111, Dr. Johnson applied
his regression model to estimate the average relative value of distantly retransmitted
programming inclusive of multicast streaming. And, as indicated supra, he understood
that, to the extent CSOs might value PBS primary and multicast streams differently, these
different values for “multicast streams would be averaged out by the subscriber-weighted
distant minutes.” PTV PFF ¶¶ 133-34 (and record citations therein).
PTV also notes how relative values, as between JSC and PTV programming,
moved in opposite directions during the 2014-2017 period. That is, in 2015, when
WGNA converted from a broadcast station to a national cable network, JSC could not
claim section 111 royalties for sports programming that was televised on WGNA. But
for PTV, the converse was the case: Compensable programming arguably increased
when in 2016 multicast stations transformed from being statutorily exempt (no right to
section 111 royalties) to non-exempt (royalty-generating). PTV PFF ¶ 135.
2. CCG’s Position on Multicast Stations
CCG argues that the minutes of programming on the PTV multicast stations that
were reclassified from exempt to non-exempt should be included in the fee-based
regressions because their continued retransmission as royalty-generating stations is the
consequence of deliberate strategies by CSOs. CCG PFF at 25. Specifically, CCG relies
on the fact that the substantial portion of stations that had been distantly retransmitted by
Bright House (an MSO) while exempt (from royalties) continued to be retransmitted in
2016 as non-exempt (royalty-bearing) contemporaneously with the acquisition of Bright

House by a larger MSO, Charter Communications (formerly Time Warner Cable). CCG
PFF ¶ 79 (citing Marx ACWDT ¶ 78).
According to Dr. George, Charter Communications could have chosen to cease
distantly retransmitting these PTV multicast stations after they became non-exempt
(royalty-bearing), but for commercial purposes they elected to maintain carriage,
indicating that Charter Communications perceived value in these multicast stations.
George WRT at 20. In this regard, Dr. George concluded that the fact that Charter
decided to include the PTV signals in its cable lineup and treat those PTV signals as paid
while deciding not to carry other distant signals “reveals the relative value of the
programming to the cable system.” George WRT at 20. See also CCG PFF ¶ 547.53
3. CTV’s Position on Multicast Stations
Like CCG, CTV states that the reclassification of PTV multicast signals from
exempt to “paid” (i.e., non-exempt, or royalty-bearing) had a “significant impact in the
industry.” CTV PFF at 17. But quite unlike CCG, CTV disagrees with the inclusion of
the “paid” multicast signal minutes in the fee-based regressions. After reciting the same
industry merger history recounted supra, CTV PFF ¶ 75, CTV notes that the
reclassification of these multicast PTV stations increased both (1) PTV subscriber-weighted minutes and (2) the data inputted into the regression (seeking to measure the
correlation between category minutes and royalties). CTV PFF ¶ 76.
More particularly, 231 PTV signals were reclassified from exempt to paid from
2014 to 2017, “with over 90% of the reclassification of PTV minutes taking place in 2016
and 81% of those reclassifications associated with Charter Communications’ acquisitions
of Time Warner and Bright House.” CTV PFF ¶ 77 (and record citations therein). CTV
further notes the combined industry concentration of Charter Communications, Time

Program Suppliers are essentially in agreement with CCG in this regard. See PS PFF ¶ 387 (citing Tyler
WRT ¶ 71 for the assertion that “non-exempt signals are part of the question studied and properly included
in the analysis.”).
Warner, and Bright House prior to the 2016 merger, together accounting for 26.2% of
total cable industry subscribers. CTV PFF ¶ 78.
But CTV argues that the reclassification had no impact on whether those PTV
multicast minutes should have been inputted into the fee-based regressions. Specifically,
CTV asserts, “The increase in PTV paid minutes did not create any changes subscribers
would notice; there was no change in channel line-ups, viewer access to programming, or
content broadcast. Rather, PTV signals that had previously existed on channel lineups
became ‘nonexempt.’” CTV PFF ¶ 79 (and record citations therein). Thus, CTV
concludes that the reclassification merely “created an illusion” of an increase in the number of distantly retransmitted PTV minutes. CTV PFF ¶ 237 (and record citations
therein).
4. SDC’s Position on Multicast Stations
The SDC echoes Dr. Marx’s position on behalf of CTV, that, although
reclassification from exempt to non-exempt “changes the reporting of PTV minutes in
the data, [it] does not change the content or value that CSOs offer to their subscribers.”
SDC PFF ¶ 241 (citing Marx ACWDT ¶ 71).
Further, Dr. Marx takes note, in her consideration of the Charter acquisitions
discussed supra, of the existence of the PBS-NCTA agreement in place that maintained
the exempt (no royalty) status of a number of public television stations. 4/11/23 Tr. 4272
(Marx).
5. JSC’s Position on Multicast Stations
JSC takes note that, although the number of primary PTV signals did not increase
significantly, “CSOs … began carrying significantly more PTV multicast channels, with
the share of PTV volume comprised of multicast channels nearly doubling between the
beginning of 2014 and the end of 2017.” JSC PFF ¶ 74 (and record citations therein)
(emphasis added). More particularly, JSC acknowledges that some of this increase in

reported PTV multicast carriage is attributable to the change in status of certain PTV
multicasts from “exempt” to “non-exempt,” as a result of Charter Communications’
acquisitions of Time Warner Cable and Bright House Networks in 2016. JSC PFF ¶ 75
(and record citations therein).
But JSC rejects the notion that the increase in non-exempt (royalty-bearing)
multicast carriage reflects an increase in value for which the PTV allocation should
increase. In support of this argument, one of JSC’s economic experts, Dr. Majure, opines
that (1) mere reclassification from exempt to non-exempt itself does not reflect an
increase in value and (2) CSOs chose to carry additional PTV multicasts during 2015-2017 when doing so was typically cost-free (even if they were non-exempt) because their
carriage addition did not cause the CSO to exceed the minimum fee. JSC PFF ¶¶ 76-77
(and record citations therein).
Moreover, JSC relies on the testimony of PTV’s own witness, Dr. Johnson, who
acknowledged that the PBS-NCTA agreement provides for CSOs who were NCTA
members to carry up to three PTV multicasts in addition to the carriage of the primary
PTV signal, that PTV would not require payment for the carriage of these multicasts, and
that, should the CSO incur financial liability under section 111 for such multicast
carriage, PTV would be obligated to either indemnify the CSO for the royalty costs (as
with must-carry primary signals), or waive the PTV station’s right to compel carriage.
JSC PFF ¶ 7 (citing 3/22/23 Tr. 985-88 (Johnson)).
Based on the foregoing, JSC claims that, without the multicast provisions in the
PBS-NCTA agreement, which JSC characterizes as “marketplace” facts, CSOs would
pay “little or nothing” for the programming on the multicast stations. JSC PFF ¶ 9 (and
record citations therein). See also JSC PFF ¶¶ 25, 395; Harvey CWDT tbls.37-39.

6. The Judges’ Analysis and Conclusions Regarding Multicast Stations
The Judges have the same type of problem with PTV’s claim for royalties for the
multicast programming as they do for the must-carry station programming discussed
supra. That is, there was evidence available to be produced by PTV, namely the PBS-NCTA agreement, as well as the number of entities it represents, that would provide
significant marketplace evidence of how PTV stations and the licensor CSOs valued
multicast station programming. But, as noted supra, PTV did not produce either this
agreement or the number of entities bound by it as evidence, although its own expert
witness testified as to some of the agreement’s contents.
Thus, the Judges were deprived of full knowledge of the terms of the agreement,
the parties’ fulsome testimony as to the meaning of its provisions and the number of
entities signing on to the agreement. Moreover, PTV opposed the admission of that
agreement into evidence. See Order 41 Denying as Moot Public Television’s Motion for
Reconsideration of Order 33. Accordingly, the Judges here, too, find that PTV bore, but
failed to discharge, the burdens of production and persuasion with regard to the details of
the agreement and the extent of its coverage. See Web V Final Determination at 59452;
Huthnance v. District of Columbia, 722 F.3d 371 (D.C. Cir. 2013); see also 5 U.S.C.
556(d) (placing the burden of proof regarding facts on the party seeking an order based
on those facts).
Nonetheless, relevant terms of the PBS-NCTA agreement were well-understood
by the parties, without dispute. As noted supra, PTV’s own expert, Dr. Johnson,
understood what the agreement provided with regard to multicast stations and the absence
of a royalty obligation attendant to their carriage. This constitutes a market-based fact,
which has two implications. First, as a direct agreement among parties in the sector at
interest in this proceeding, it is an agreement that reflects actual value, not hypothetical
value. As such, it is more credible than attempts to tease out market value via regression-

derived price proxies or a constant sum survey such as the Bortz Survey. Second, within
the context of a fee-based regression, the existence of such zero valuations would
certainly affect the regression as well as the number of minutes by which the impacted
PTV regression coefficient would be multiplied. But without any information regarding
the number of PTV stations covered by the PBS-NCTA agreement, the Judges cannot
simply assume that no multicast stations that generated zero net royalties were covered
by this agreement.54
If the Judges had full information regarding the PBS-NCTA agreement from
PTV, whose clients are signatories thereto, as well as information from PTV regarding
the number of its station clients and base fee royalties impacted by the agreement, the
Judges’ analysis could have been different. For example, the Judges are not convinced
that the fact that these signals had been exempt (not royalty-bearing) previously is a
dispositive point. The argument in favor of that position is that the mere change in legal
obligation has no impact on economic value. But a simple thought experiment
demonstrates the paucity of that reasoning: What if these multicast signals had started off
as non-exempt (royalty-bearing) and then were changed to exempt (non-royalty-bearing)?
It would have been the same change, only in reverse. Would the original classification
remain in place in this juxtaposed scenario, such that royalties would continue to be
included in the regression?
Also, there was a contentious dispute regarding whether the multicast PTV
stations’ programming was “duplicative” of the PTV primary signal programming or of
each other. Questions arose regarding whether duplication should be narrowly tailored to

The fact that Charter changed some PTV multicast stations from exempt (non-royalty-bearing) to non-exempt (royalty-bearing) after acquiring certain CSOs is anecdotal evidence that suggests these PTV
multicast stations were generating royalties, but anecdotes are not substitutes in this context for more
comprehensive data. (And some of these royalty-bearing PTV stations may also have been retransmitted by
CSOs with excess capacity, thereby not actually generating any revealed preference information for the
retransmitting CSOs.)
mean the retransmitting of the identical program at the identical time, at the same
proximate time or within a certain period of time, and whether different episodes from
the same series retransmitted at the same or some proximate time or day were likewise
duplicative. But without information as to whether any multicast station that had
retransmitted such potentially duplicative programming was contractually unable to
generate royalties under the PBS-NCTA agreement in any event, these issues of potential
duplication appear to be indeterminate.55
VIII. PARTIES’ POSITIONS REGARDING REGRESSION MODELS
A. Introduction
Four parties, CCG, CTV (for 2014 only), Program Suppliers and PTV, through
their expert witnesses, proffer regressions that they assert are useful methodologies to
determine relative market value. An overview of each regression model and the
criticisms thereof are set forth below.
B. CTV’s Regression Approach: The Marx Model
On behalf of CTV, Dr. Leslie Marx56 adopted a fee-based regression model (the
“Marx Model”) applicable to 2014, but not for the 2015-2017 period, because she found
that data issues rendered the use of such a regression approach “substantially less reliable
and informative” for the 2015-2017 timeframe. 4/11/23 Tr. 4117 (Marx). More
particularly, for 2014, she adopted a “Bayesian” approach in her fee-based regression

As explained infra, among the regression approaches, the Judges rely on the Tyler Model’s allocation of
shares based upon CSOs that actually paid the base fee (not the minimum fee). But although Dr. Bennett’s
testimony (Bennett WRT fig.52) provides evidence for a downward adjustment of PTV’s share to reflect
the Must Carry issue discussed supra, the Judges see no clear evidence in the record to identify how much
of a downward adjustment should be made to the PTV share to reflect the Multicast and Duplicative
Programming issues. However, because the PBS-NCTA agreement indicates that CSOs would carry up to
three Multicast stations as Must Carry stations, i.e., without a net royalty obligation, the Judges find that
their application of Dr. Bennett’s downward adjustment for Must Carry stations essentially embodies any
Multicast adjustment, including any duplicative programming within those Multicast channels.
Dr. Marx was received by the Judges as an “expert economist and econometrician with experience in
statistical methods and measurements.” 4/11/23 Tr. 4109 (Marx).
model, using that methodological technique to mitigate concerns regarding the reduction
in the quantity and quality of 2015-17 data.
At a high level, she described the Bayesian approach as “a technique that allows
[an econometrician] to use results from one period and add additional data to it to then
update . . . inferences based on . . . that earlier period.” 4/11/23 Tr. 4209:3-6 (Marx).57
According to Dr. Marx, three basic reasons supported her use of a Bayesian regression:
1. In the prior proceeding, the Judges found Dr. Crawford’s approach to be
appropriate for allocating, inter alia, 2013 royalties.
2. The 2014 data largely patterns the 2013 data analyzed by Dr. Crawford
because (unlike the 2015-2017 data) the 2014 data had not been affected by
the growing predominance of excess capacity CSOs, reductions in the number of
SGs, or the reclassification of PTV stations.
3. Although the 2014 data alone would not be robust enough to adequately or
reliably model a regression, the Bayesian approach incorporates a
methodological technique that helps to resolve concerns regarding the
quantity of data.
4/11/23 Tr. 4207-08 (Marx).
Accordingly, Dr. Marx ran her Bayesian fee-based regression only for 2014. The estimates generated from her regression produced 2014 shares aligned with the shares calculated from Dr. Crawford’s fee-based regression in the 2010-13 Determination. 4/11/23 Tr. 4126:16-4127:4 (Marx).58

See also Marx ACWDT ¶ 101 (“Bayesian regression is a well-accepted tool in economic and scientific
research that is well-suited to situations in which the researcher has a ‘prior belief’ about the distribution
(e.g., mean and variance) of parameters of interest and wishes to use additional data in order to update
conclusions about the parameters.”).
In her Bayesian model, Dr. Marx adopted Dr. Crawford’s model that had removed simultaneous
“duplicated minutes” (i.e., minutes of distantly retransmitted programming that were also transmitted on
local stations), opining that CSOs would not realize incremental value from offerings of duplicative
programming. 4/11/23 Tr. 4213 (Marx). In this regard, Dr. Marx’s approach deviated from the Judges’
As noted supra, Dr. Marx found that the data generated for the 2015-17 period
was insufficient to allow her to use a fee-based regression for those years. To be clear,
the paucity of data she identified was not a data collection problem, but rather what she considered to be an insufficient quantity of data arising from significant “changed circumstances,” namely the 2015 conversion of WGNA to a cable station from a local station that had previously been the most widely distantly retransmitted. These changed
circumstances led Dr. Marx to highlight as a key finding from her analysis that “a
regression similar to [Dr.] Crawford’s would [be] less informative and less reliable.”
Marx ACWDT ¶ 9(c) (emphasis added); see also Marx ACWDT ¶ 67 (reiterating after
her full analysis that in her opinion a Crawford-style regression would be “less
informative and less reliable for estimating relative marketplace value after 2014.”)
(emphasis added).
In granular detail, Dr. Marx identified the following dramatic modeling
ramifications arising from the WGNA conversion:
1. The fulsome data set utilized by Dr. Crawford in the 2010-13 proceeding did
not exist for the 2015-2017 period.
2. The number of CSOs carrying at least one distant signal declined substantially after 2014. More particularly, more than 800 CSOs carried distant signals in 2014, but only approximately 500 CSOs carried distant signals by 2017.
3. Total royalties declined by approximately 32% from 2014 to 2017.
4. There was a dramatic reduction in the number of subscriber-weighted
minutes.

prior determination in which they found a problem with Dr. Crawford’s duplicated minutes analysis and
elected instead to rely upon his nonduplicated minutes analysis. See 2010-13 Determination at 3562. Dr.
Marx’s specific change in this regard does not materially affect the Judges’ consideration of her Bayesian
approach in this proceeding.

5. The number of “excess capacity” CSOs increased dramatically.59
6. More than 90% of royalties in 2016 and 2017 were paid by these “excess
capacity” CSOs, i.e., systems that could have carried more DSEs but declined,
notwithstanding the zero marginal royalty cost associated with additional
carriage.
7. Alternately stated, less than 10% of the SG-level calculated royalties reported by CSOs reflect royalties actually paid for retransmission of signals by CSOs in 2016 and 2017.
8. Consequently, because of the minimum fee requirement, the royalties calculated for each subscriber group in a cable system do not all represent actual or incremental costs paid by the CSO.60
9. Underscoring the impact of the WGNA conversion, 92% of CSOs that had previously carried WGNA (with or without an additional distant signal) in 2014 were paying only the minimum fee.
10. Finally, the percentage of all CSOs that carried no distant signals increased from only 13% in 2014 to 30% in 2015, 44.6% in 2016, and 44.8% in 2017.
CTV PFF ¶¶ 93-94; 156-163; 167; 170; 195-199 (and record citations therein).
With regard to the effect of these changed circumstances on a fee-based
regression, Dr. Marx testified that Dr. Crawford’s regression model relies on variation
between the distant retransmission decisions at the SG level – but only within a given

In her rebuttal testimony, Dr. Marx coined the apt phrase “excess capacity CSO” as an identifier of a
CSO that distantly retransmitted less than one Distant Signal Equivalent (DSE), had the capacity to
distantly retransmit one or more additional distant signals without increasing its royalty obligation above
the minimum fee, and yet chose not to make any such additional retransmissions. Marx WRT ¶¶ 6, 13. The
Judges adopt this phrase throughout this Determination.
The minimum fee issue is separately discussed elsewhere in this determination. It is referenced in this
section discussing the experts’ models to provide a more complete context.
CSO. Marx ACWDT ¶ 57. Thus, the Crawford Model included a CSO only if the CSO had at least two SGs. But with the dramatically changed circumstances caused principally by the WGNA conversion and the resulting increase in the number of excess-capacity CSOs, there were far fewer CSOs in the 2015-2017 period that created the necessary multiplicity of SGs. Id. (An illustrative sketch of this data requirement appears after the list of facts below.) More particularly, Dr. Marx relied on the following
facts:
1. In 2015, 62% of CSOs – accounting for almost 35% of total royalties – did not
meet the Crawford regression threshold that a CSO have at least two
subscriber groups.
2. The proportion of CSOs with fewer than two SGs increased from 54.9% to
68.8%.
3. The percent of CSOs with zero SGs increased from 13% to 44.8% from 2014
to 2017.
4. The number of CSOs qualified to be included in a Crawford fee-based
regression continued to decline throughout the relevant time period, with only
31.2% of CSOs included in 2017.
Marx ACWDT ¶¶ 58-59 & fig.12; 4/11/23 Tr. 4178 (Marx).
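The data requirement just described can be illustrated with the following hedged sketch, which is not Dr. Crawford’s or Dr. Marx’s actual code or data: subscriber-group observations are retained only where a CSO reports at least two subscriber groups in an accounting period, and subscriber-group royalties are then regressed on claimant-category minutes with CSO-by-period fixed effects. All column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical subscriber-group (SG) level data; column names are illustrative only.
    df = pd.read_csv("sg_level_data.csv")

    # Keep only CSO/accounting-period observations with at least two subscriber groups,
    # since identification comes from variation across SGs within a CSO and period.
    sg_counts = df.groupby(["cso_id", "period"])["sg_id"].transform("nunique")
    sample = df[sg_counts >= 2]

    # Fee-based regression: SG royalties on claimant-category minutes,
    # with CSO-by-accounting-period fixed effects.
    model = smf.ols(
        "royalties ~ ps_min + jsc_min + ctv_min + ptv_min + sdc_min + ccg_min"
        " + C(cso_id):C(period)",
        data=sample,
    ).fit()
    print(model.params.head(7))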
These are the detailed changed circumstances, referred to supra, which Dr. Marx
found to render a Crawford fee-based regression less informative and reliable in the
present proceeding than in the 2010-13 proceeding. Marx ACWDT ¶¶ 64, 67. More
particularly, she noted that, in her opinion, the relatively small percent of CSOs that
otherwise satisfied the requisites for inclusion in a Crawford-style regression could not be
considered a representative sample or a representation of the Willingness to Pay of the
larger CSO market. 4/11/23 Tr. 4161, 4173 (Marx).61
Dr. Marx testified that the other regression experts essentially agreed with her opinion that the Crawford-style fee-based regression would suffer from an absence of sufficient data on SG variations within a CSO.
For the foregoing reasons, Dr. Marx utilized a fee-based regression only to
estimate the regression coefficients and share allocations for 2014. Her results are set
forth in the figures below:
Figure 6. Regression coefficients on minutes of claimant group programming: Crawford (2010-2013) and Bayesian updates (2014), including duplicative minutes

Year        Program Suppliers  Sports  Commercial TV  Public TV  Devotional  Canadian
2010-2013   2.31               32.55   4.88           1.84       1.08        4.08
2014        2.39               35.16   4.44           1.41       1.11        3.95

Source: Crawford CWDT, Figure 15; CDC data and Red Bee Media data.
Note: All estimates are statistically significant; for coefficients with standard errors, see Appendix C.

Figure 7. Regression coefficients on minutes of claimant group programming: Crawford (2010-2013) and Bayesian updates (2014), excluding duplicative minutes

Year        Program Suppliers  Sports  Commercial TV  Public TV  Devotional  Canadian
2010-2013   2.49               34.96   5.77           1.98       1.17        4.26
2014        2.73               43.01   5.64           1.62       1.31        4.11

Source: Crawford CWDT, Figure 18; CDC data and Red Bee Media data.
Note: All estimates are statistically significant; for coefficients with standard errors, see Appendix C.

Source: Marx ACWDT ¶¶ 37, 39, figs.6-7.62

She identified such agreement in the testimonies of Drs. George, Johnson and Tyler by their relaxation of
the number and types of “fixed effects” used by Dr. Crawford to isolate the correlation of category minutes
and royalties which his regression seeks to identify. However, as discussed in more detail infra, Dr. Marx
criticizes the removal of some or all of these “fixed effects” by these other experts as introducing “omitted
variable bias” into their regressions, thus compromising their usefulness in this proceeding. See Marx WRT
¶¶ 14, 20 & 37; 4/11/23 Tr. 4179, 4181, 4255 (Marx) (removing “fixed effects” in order to introduce into
the model different variations across CSOs and across time to address the problem of fewer subscriber
groups is improper because it generates a new problem – the introduction of “omitted variable bias,” which
metaphorically was adding “garbage” into their regressions). The Judges consider the alteration of “fixed
effects” by these other experts, and the criticisms of that decision infra, in their consideration of those
proffered regression models.
In her Bayesian regression for 2014, Dr. Marx adjusted the valuation analysis for PTV by addressing
certain alleged anomalies in the PTV minutes, including those arising from the presence of PTV “must
carry” stations, the transition of PTV stations from exempt (no royalty paid) to non-exempt (royalty paid)
and the indemnification of CSOs for royalties paid to transmit PTV signals. The figures reproduced in the
text, supra, from Dr. Marx’s WRT embody Dr. Marx’s conclusions in these regards. The Judges consider
these PTV-specific issues elsewhere in this Determination.
Dr. Marx then multiplied the subscriber-weighted minutes for each program category, as calculated by another CTV expert, Dr. Christopher Bennett,63 by her Bayesian coefficients (as adjusted pursuant to her PTV analysis) and estimated the following allocation shares for 2014 (a simplified arithmetic sketch of this step appears after the listed shares below):
(A) Applying Dr. Marx’s preferred analysis excluding duplicated minutes:64
Estimated 2014 Shares
PS—19.73%
JSC—43.89%
CTV—15.56%
PTV—16.41%
SDC—0.48%
CCG—3.93%.
(B) Applying the inclusion of duplicated minutes as in the 2010-13
Determination:
Estimated 2014 Shares
PS—20.69%
JSC—41.73%
CTV—13.94%
PTV—18.85%
SDC—0.47%
CCG—4.31%.
Marx ACWDT ¶ 39.
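As a simplified arithmetic sketch of the step described above, each category’s estimated coefficient is multiplied by its subscriber-weighted minutes and the products are normalized to percentage shares; the numbers below are placeholders rather than the coefficients or minutes in the record.

    # Placeholder inputs for illustration only (not the record figures).
    coefficients = {"PS": 2.7, "JSC": 43.0, "CTV": 5.6, "PTV": 1.6, "SDC": 1.3, "CCG": 4.1}
    sub_weighted_minutes = {"PS": 100.0, "JSC": 14.0, "CTV": 38.0, "PTV": 140.0, "SDC": 5.0, "CCG": 13.0}

    # Multiply each category's coefficient by its subscriber-weighted minutes,
    # then normalize so the resulting shares sum to 100 percent.
    weighted = {cat: coefficients[cat] * sub_weighted_minutes[cat] for cat in coefficients}
    total = sum(weighted.values())
    shares = {cat: round(100 * value / total, 2) for cat, value in weighted.items()}
    print(shares)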

See Bennett ACWDT figs.1 & 2.

In the 2010-13 Determination, the Judges adopted Dr. Crawford’s model that included duplicate minutes
because the duplicated minutes calculation was more accurate than the unduplicated minutes calculation.
See 2010-13 Determination at 3565. Dr. Marx calculates coefficients (and thus shares) under both
scenarios, noting that there is minimal difference between the two approaches. Marx ACWDT ¶ 38.
1. Dr. Marx’s “Directional” Analysis for 2015-2017
Having rejected the use of a fee-based regression to estimate relative marketplace
value for the 2015-2017 period, Dr. Marx switches gears in two contexts. First, she shifts
the demand-side focus, by analyzing how choices of downstream consumers of cable
television programming have purportedly changed – and how those changes impact the
“derived demand”65 for categories of programming delineated in this proceeding.
Second, Dr. Marx uses this analysis to provide what she describes as a “directional”
approach, which she opines should guide the Judges regarding the relative increases or
decreases in category royalty shares. This “directional” approach is in contrast to both
the regression and the survey methods for ascertaining relative marketplace value, which
seek to provide specific estimates of the category values. See Marx ACWDT ¶ 83 (“This
is a ’directional’ analysis in that I do not quantitatively measure the effect of streaming
on relative market values.”).66
More particularly, Dr. Marx evaluates the changes in how consumers viewed
cable television programming content in the 2014-2017 period, compared to viewing in
prior years. Specifically, Dr. Marx examined how the introduction and growth of
streaming of programming through over-the-top (OTT) platforms during the 2014–2017
period affected not only how consumers chose to access content but also, derivatively, the
“differential effects” of this change in distribution “across the claimant groups.” Marx
ACWDT ¶ 82.
Dr. Marx’s directional “derived demand” evaluation proceeds as follows:

In the 2010-13 Determination, the Judges explained that the concept of “derived demand” was applicable
to “[t]he demand for programming at each step in the [distribution] chain . . . all the way to the television
viewer,” although, with regard to distant retransmissions of local stations, this derived demand is impacted
by “the role of bundling and ‘niche’ programming” that can affect “the premium that certain categories of
programming fetch in an open market” that would impact “value among disparate program categories” in
these allocation proceedings. 2010-13 Determination at 3600.
Dr. Marx’s “directional” analysis is akin to the testimony of television industry witnesses discussed infra.
In fact, Dr. Marx opines that her “directional” analysis is consistent with the testimonies of five industry
witnesses – Mr. Singer, Mr. Warren, Ms. Witmer, Mr. Hartman and Ms. Alany. 4/11/23 Tr. 4234 (Marx).
1. She summarizes the expansion of streaming prior to and during the 2014–
2017 period.
2. Dr. Marx then uses viewership data67 to identify evidence indicating how the
growth of streaming was likely to have increased or decreased the relative
value of the claimants’ respective program categories to a CSO. More
particularly, Dr. Marx opines that a program category with “content [that] had
a larger shift to streaming would, all else equal, be likely to have a decrease
in relative importance when it comes to delivery as a distant signal by CSOs
[and] [c]onversely, claimant groups whose content had smaller shifts to
streaming likely would, all else equal, have an increase in relative
importance.”
3. She next reviews data on household viewership over the relevant period,
focusing on the “directional relative effects of streaming growth on CTV,
PTV, and Program Suppliers categories . . . .”68
Marx ACWDT ¶¶ 83-84.
Through this analysis, Dr. Marx reaches the following conclusions:
1. From as far back as 2010, “streaming and smart device penetration have
increased while CSOs have lost subscribers.”
2. Viewership data reveals a reduction in TV viewership over the 2014–2017
period.

Dr. Marx relies on local viewing data generated by the Nielsen audience research firm. The probative
value, vel non, of viewership data, and local viewership in particular, as a proxy for changes in the relative
marketplace value of distantly retransmitted local stations, is discussed infra.
Dr. Marx focuses on these three categories because her data source only contains one Canadian station,
and because the small size of the SDC category renders it less reliable and impactful. She also testifies that
“sports content is more challenging to evaluate with this [Nielsen] data due to geographic and temporal
variation in ratings driven by factors unrelated to the growth of streaming,” and that she understood
“streaming of [JSC] content was limited during the 2014-2017 period.” Marx ACWDT ¶ 84 n.66.
3. Because of increased streaming and lower cable subscribership, the “importance of PTV and Program Suppliers content appear[s] to have
diminished . . . relative to CTV content.”
4. Although the data reveal a decline in the absolute number of households
watching content within the CTV, PTV, and Program Suppliers categories, the
relative declines were greater for PTV’s and Program Suppliers’ content than
for CTV’s content.
5. The absolute and relative decline in the share of viewership on cable of
Program Suppliers content is consistent with the contemporaneous
improvement in the “quality of streaming video content provided on platforms
such as Netflix, Amazon Prime Video, and Hulu.”
6. In addition to licensed TV shows, these streaming platforms also transmit
original content which they have produced, with quality levels generating
Emmy Award nominations, indicating the growth and high quality of content
carried by streaming platforms.
Marx ACWDT ¶¶ 85-98 & figs.21-24.
Applying the foregoing to the Judges’ present task of estimating relative
marketplace value across the claimant categories, Dr. Marx concludes as follows:
In sum, streaming grew rapidly during 2014–2017 [and] Nielsen data show
concomitant declines in viewership of the PTV and Program Suppliers
claimant groups’ content. CTV content viewership also declined, but that
decline was smaller than for PTV and Program Suppliers. This implies that
the growth of streaming likely had a greater adverse impact on Program
Suppliers and PTV claimants than on CTV claimants. All else equal, this is
consistent with a higher relative market value for CTV claimants over the
2014–2017 period as compared with Program Suppliers and PTV claimants.
Marx ACWDT ¶ 99.

2. Rebuttals to Dr. Marx’s Analyses
a. Rebuttals to Dr. Marx’s WDT by SDC Witness Dr. Erdem
One of the SDC’s expert economic witnesses, Dr. Erkan Erdem,69 characterizes
Dr. Marx’s rejection of the applicability of the fee-based regression approach in a broader
context than Dr. Marx. Instead, Dr. Erdem avers that the inconsistency between the
2010-2013 data and the data over the entirety of the 2014-2017 period reveals something
more profound: that the “Crawford model was made specifically only for the 2010-2013
data . . . [and] is not robust enough to measure the market value of distant minutes per
claimant to fit data from other proceedings.” Erdem WRT ¶ 126. By this criticism, Dr.
Erdem tacitly criticizes Dr. Marx’s Bayesian approach for not applying her criticism with
appropriate breadth, maintaining that “[e]ven if there is a shift in the trend of this
proceeding’s data, [her modeling] should still theoretically be useful for this proceeding,
if one were to believe it was useful in the first place, since they are dealing with the same
variables.” Erdem WRT ¶ 126. See also Erdem WRT ¶ 130 (opining that Dr. Marx was
wrong to maintain that after the WGNA conversion all that was needed was “an
adjustment . . . in the Crawford model” because, although “[t]he underlying trends in the
data . . . shifted, . . . the variables used are still the same, as well as the computation of
distant minutes and distant signals.”).
Whereas Dr. Erdem finds the foregoing criticism of the use of a Crawford fee-based regression incomplete, he finds a second criticism by Dr. Marx to be
exaggerated. Specifically, he takes issue with her concern that the number of CSOs with
two or more subscriber groups had decreased after 2014, thereby reducing the presence of
the sufficient observations of programming decisions arising from the different stations
retransmitted by such subscriber groups. Erdem WRT ¶ 131. Dr. Erdem finds this

Dr. Erdem was received as an expert in the fields of economics, econometrics, and data analysis. 4/5/23
Tr. 3395 (Erdem).
criticism overblown because the percentage of CSOs with fewer than two subscriber
groups only increased from 54.9% to 68.8% from 2014 to 2017, and the CSOs thus
excluded from the fee-based regressions would “only account for 38% of the total
royalties.” Erdem WRT ¶ 131. Thus, he finds that Dr. Marx’s reliance on this changed circumstance obscures his essential point, to wit, that if the Crawford Model had been
“correctly specified in the first place” it would not need “to be adjusted for changes in the
data,” but rather “should be able to withstand [data changes] to remain accurate.” Erdem
WRT ¶ 131.
Finally, but in the same vein, Dr. Erdem disagrees with Dr. Marx’s conclusion that the reduction in the percentage of CSOs paying more than the minimum fee limits only the applicability of the fee-based regression approach, as opposed to (as Dr. Erdem maintains) demonstrating the overall incorrectness of the model’s specifications. Erdem
WRT ¶ 132. More specifically, Dr. Erdem characterizes the 39% of CSOs paying only
the minimum fee in 2014 as itself a “large proportion,” which would have required the
Crawford Model, or a model fashioned in the manner of the Crawford Model, to have
been “specified” for this effect. Instead, Dr. Marx treats the post-2014 increase in CSOs paying only minimum fees merely as grounds to find the fee-based regression model inapplicable for 2015-2017, rather than misspecified, and, in Dr. Erdem’s view, she wrongly deemed the 39%
figure in 2014 sufficient to incorporate into her Bayesian regression. Erdem WRT ¶
132.70

The SDC’s other econometric expert, Dr. Rubinfeld, criticizes Dr. Marx’s use of a fee-based regression
in her Bayesian approach for the same reasons he criticizes fee-based regressions writ large, and those
criticisms are addressed elsewhere in this determination. But the Judges note here that Dr. Rubinfeld found
Dr. Marx’s “directional” analysis for 2015-2017, relating to the growth of streaming as impacting relative
share values, as proof that “the regression specification put forth by Dr. Crawford was not robust or
informative [because] the model does not adequately characterize the changing U.S. video distribution
marketplace.” Rubinfeld WRT ¶ 95.
b. Rebuttals to Dr. Marx’s WDT by Program Suppliers Witness Dr.
Tyler
Dr. Tyler71 levies three criticisms at Dr. Marx’s direct testimony. First, he
criticizes Dr. Marx’s regression-based approach for estimating 2014 values for the same
reason he criticizes all the other fee-based regressions proffered in this proceeding (and
Dr. Crawford’s model as well).72 That is, Dr. Tyler criticizes Dr. Marx’s 2014 modeling
because her dependent variable, as in the models of Drs. Crawford, George and Johnson,
is “a royalty amount.” Written Rebuttal Testimony of Cleve B. Tyler, Ph.D., Trial Ex.
7601, ¶ 30 (Tyler WRT). Dr. Tyler’s criticism of this form of dependent variable is that it

“contain[s] a substantial amount of variability due to factors other than categories of
distantly retransmitted minutes for a subscriber group.” Tyler WRT ¶ 31. According to
Dr. Tyler, these models then need to include fixed effects to limit this unrelated
variability, but Dr. Crawford’s model – subsumed in Dr. Marx’s 2014 model – suffers
from a loss of information arising from these fixed effects.
Moreover, Dr. Tyler notes that, for the 2015-2017 period, Professor Marx’s
inability to apply a fee-based regression arises from data limitations generated by the
WGNA conversion, but such data limitations are obviated by the change in the dependent
variable to his Subscriber Group Royalty Percentage (“SGRP”), which he avers does not
require fixed effects, and thus his model does not discard information from the substantial
number of CSOs that have just one Subscriber Group. Tyler WRT ¶ 70.
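The record excerpt above does not set out the formula for Dr. Tyler’s SGRP variable, so the following is only a hypothetical rendering of the general idea, not Dr. Tyler’s actual model or code: a share-type dependent variable, assumed here to be each subscriber group’s percentage of its CSO’s calculated royalties, is regressed on claimant-category minutes without fixed effects, so CSOs with a single subscriber group are not discarded. All column names and the construction of the dependent variable are assumptions made for illustration.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical subscriber-group level data; column names are illustrative only.
    df = pd.read_csv("sg_level_data.csv")

    # Assumed, illustrative construction of a share-type dependent variable: each
    # subscriber group's calculated royalties as a percentage of its CSO's total
    # in the same accounting period.
    cso_totals = df.groupby(["cso_id", "period"])["royalties"].transform("sum")
    df["sg_royalty_pct"] = 100 * df["royalties"] / cso_totals

    # No fixed effects in this sketch, so single-subscriber-group CSOs stay in the sample.
    model = smf.ols(
        "sg_royalty_pct ~ ps_min + jsc_min + ctv_min + ptv_min + sdc_min + ccg_min",
        data=df,
    ).fit()
    print(model.params)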
Dr. Tyler also maintains that because Dr. Marx relies on Dr. Crawford’s 2010-2013 model, she began her regression analysis from an “imprecise starting point” and a
potentially biased “prior belief.” Tyler WRT ¶ 57. That is, because Dr. Tyler is of the

Dr. Tyler was received as an expert in the fields of economics, data analysis, and econometrics. 4/19/23
Tr. 5428 (Tyler).
To be clear, Dr. Tyler does not criticize Dr. Marx’s application of a Bayesian approach to the 2014
allocation issue.
opinion that Dr. Crawford’s process in generating his model generates “serious
questions,”73 she has implicitly ported those problems into her model, which “cast[s] a
substantial shadow of doubt on any of her conclusions.” Tyler WRT ¶ 57.
Finally, Dr. Tyler takes aim at Dr. Marx’s default to a directional analysis in
which she opined that expanded streaming services likely “reduc[ed] the value of
Program Suppliers and PTV claimants’ retransmitted programming relative to the
programming offered by CTV claimants.” While not disputing the relative value shift
posited by Dr. Marx, Dr. Tyler maintains that an appropriate regression analysis, such as
his approach, would capture this effect and in a manner superior to the inappropriate
speculation embodied in Dr. Marx’s “directional” analysis. Tyler WRT ¶ 72.
c. Rebuttals to Dr. Marx’s WDT by Program Supplier Expert Dr. Gray
Dr. Gray74 raises the following criticisms of Dr. Marx’s approach:

1. In support of her “directional” analysis, Dr. Marx claims only that local
viewership declined for each of the Program Suppliers, Commercial Television,
and Public Television claimant categories, but she fails to provide information on
the level or trend of distant viewing of these locally produced programs. Written
Rebuttal Testimony of Jeffrey S. Gray, Trial Ex. 7606, ¶¶ 47-48 (Gray WRT).

2. Relatedly, the Judges have previously ruled that local viewing patterns are not probative of distant viewing patterns absent contemporaneous local and distant measures demonstrating that local viewing patterns are sufficiently informative as to subscribers’ distant viewing patterns; yet Dr. Marx offers only local viewing data rather than evidence of distant viewing patterns. Gray WRT ¶ 48

Dr. Tyler’s criticisms of Dr. Crawford’s work are set forth at Tyler ACWDT ¶¶ 106-127 tech. app. A.
The Judges discuss elsewhere in this determination the impact of the criticism of Dr. Crawford’s work on
the fee-based regressions proffered in this proceeding.
The Judges received Dr. Gray as an expert in the fields of economics, statistics, and econometrics.
4/13/23 Tr. 4850 (Gray).
n.40 (citing Order Reopening Record and Scheduling Further Proceedings,
Consolidated Docket Nos. 2012-6 CRB CD 2004-2009 (Phase II) and 2012-7
CRB SD 1999-2009 (Phase II) at 3-4 (May 4, 2016)).

3. Dr. Marx fails to account for the substantially diminished number of households
which even had distant-retransmitted access to CTV programming in the years
2015-2017. Thus, she fails to address the fact that “the relative number of
subscribers receiving [CTV] programming on a distant basis declined
precipitously over the 2014-2017 royalty years,” as shown even in “[s]tatistics
presented in Dr. Marx’s direct testimony show[ing][CTV’s] share of claimant
category minutes weighted by the number of distant subscribers reached [had]
declined 72% between 2014 and 2017.” Gray WRT ¶ 49.
d. Rebuttals to Dr. Marx’s WDT by PTV’s Expert Dr. Johnson
Dr. Johnson75 recognizes that he and Dr. Marx essentially agree as to the use of
fee-based regressions and allocation methodologies for 2014, but that they disagree with
regard to the usefulness of a fee-based regression to determine allocation shares for the
2015-2017 period. Johnson WRT ¶ 88. With regard to the latter three years, Dr. Johnson
takes issue with Dr. Marx’s opinion that the WGNA conversion would necessarily “‘exclude a large proportion of CSOs and royalties from the analysis,’” rendering a fee-based regression approach “‘less informative and reliable.’” Johnson WRT ¶ 89. More
particularly, Dr. Johnson criticizes Dr. Marx for not presenting in her WDT “any
regression analysis or testing that would support this claim,” and, moreover, that although
she produced what appeared to be “computer code . . . appl[ying] Dr. Crawford’s model
to the entire 2014-2017 period,” she did not provide any explanation how that code might
have supported her otherwise conclusory opinion that a fee-based regression for the

The Judges received Dr. Johnson as an expert in the fields of economics and econometrics. 3/21/23 Tr.
362 (Johnson).
2015-2017 period would be “‘less informative and less reliable.’” Johnson WRT ¶ 89
n.163.
Regarding Dr. Marx’s substitution of her “directional” analysis for a regression
approach to analyze the 2015-2017 period, Dr. Johnson raises two criticisms. First, he
finds her decision to not apply any modeling approach for that period to be too severe.
Johnson WRT ¶ 91. Second, Dr. Johnson criticizes Dr. Marx’s “directional analysis” as
lacking any specificity, information or guidance as to what any particular claimant
groups’ royalty shares should be in the 2015-2017 period. Rather, her analysis is nothing
more than a recitation of purported “qualitative changes Dr. Marx believes were ‘likely’
to have happened.” Johnson WRT ¶ 92.
e. Rebuttals to Dr. Marx’s WDT by CCG’s Expert Dr. George
Dr. George76 first addresses Dr. Marx’s critique of Dr. Crawford’s model
somewhat obliquely – not by disputing the critique that his model reduces the available number of meaningful variations (among subscriber groups within CSOs), but by asserting that Dr. Marx purportedly fails to recognize (as Dr. George opines) that relaxing fixed effects in Dr. Crawford’s model would increase the number of subscriber group variations, thus salvaging the use of a fee-based regression. That is, an adjustment allowing for
“estimating coefficients from variations within systems over time rather than within each
system each accounting period,” allows for a regression to analyze “all systems carrying
distant signals in two or more accounting periods [to be] included, regardless of the
number of subscriber groups.” George WRT at 18.77
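As a hedged illustration of the sample-inclusion trade-off Dr. George describes, and not her actual code or data, the sketch below contrasts the two inclusion rules: CSO/period observations with at least two subscriber groups, versus CSOs carrying distant signals in two or more accounting periods regardless of subscriber-group count. Column names are hypothetical.

    import pandas as pd

    # Hypothetical subscriber-group level data; column names are illustrative only.
    df = pd.read_csv("sg_level_data.csv")

    # Stricter rule (CSO-by-period fixed effects): a CSO/period enters only if it
    # reports at least two subscriber groups in that accounting period.
    sg_counts = df.groupby(["cso_id", "period"])["sg_id"].nunique()
    strict_sample = sg_counts[sg_counts >= 2]

    # Relaxed rule (CSO fixed effects only): a CSO enters if it carries distant
    # signals in two or more accounting periods, regardless of subscriber-group count.
    period_counts = df.groupby("cso_id")["period"].nunique()
    relaxed_csos = period_counts[period_counts >= 2]

    print(f"CSO/period observations with at least two subscriber groups: {len(strict_sample)}")
    print(f"CSOs observed in two or more accounting periods: {len(relaxed_csos)}")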

The Judges received Dr. George as an expert in the fields of economics, with experience in econometrics,
media markets, and industrial organization. 4/18/23 Tr. 5111 (George).
Dr. George acknowledges that relaxing Dr. Crawford’s “fixed effects” in this manner risks the
introduction of bias from omitted variables created by industry and system changes over time left
unobserved by the regression, but she believes this trade-off is acceptable. George WRT at 18. By
contrast, Dr. Marx maintains that allowing for the introduction of potential “omitted variable bias” would
invite application of the metaphor “garbage in, garbage out.”
Further, although Dr. George “agrees with Dr. Marx that programming on streaming services is likely a closer substitute for [PTV] and Program Supplier programming than other claimant types,” Dr. George finds that Dr. Marx’s analysis “likely overstates the
relative decline of Program Supplier and Public Television programming relative to
Commercial Television content.” George WDT at 21. She reaches this finding by noting
that Dr. Marx’s reliance on local (rather than distant) viewing neglects the likely fact that
local CTV news programming would be less popular in distant markets, whereas
Program Suppliers’ content is not geographically distinct and would not be less valued
for this reason. George WRT at 21.
Finally, Dr. George takes issue with Dr. Marx’s use of a Bayesian regression
incorporating 2013 data into the methodology used to calculate 2014 share estimates.
Dr. George emphasizes that pooled data from 2010-2013 reflects the choices
made by CSOs in that earlier period with different market conditions. In this regard, Dr.
George notes that decisions in 2010–2013 reflect neither the WGNA conversion nor later
cable industry acquisitions and entry. George WDT at 22.78
3. The Judges’ Analysis and Findings Regarding the Marx Model and
Directional Approach
Having considered all aspects of the CTV Marx Model and directional analysis
presented by Dr. Marx, as well as all the criticisms of those approaches contained in the
submissions by the other parties, the Judges find as follows:
1. Dr. Marx’s Bayesian modeling, ceteris paribus, is an appropriate econometric
tool to use in the process of estimating relative marketplace value across the
program categories for 2014. The Judges do not credit Dr. George’s criticism that

None of the JSC witnesses levied substantive criticisms of Dr. Marx’s 2014 Bayesian regression or her
2015-2017 “directional” analysis. This is perhaps unsurprising, because a JSC expert witness, Dr. Majure,
does not take issue with the results of Dr. Marx’s 2014 Bayesian regression or with her “directional”
analysis.
Dr. Marx’s Bayesian approach is deficient because it pools 2014 data with data
from the 2010-2013 period. Dr. Marx opined, and the Judges agree, that 2014 was
sufficiently similar to this prior period to justify the Bayesian approach.79
2. Dr. Marx’s directional analysis for the 2015-2017 period can be useful,
despite the absence of any allocation share estimates, in that it suggests to the
Judges which of the quantitative estimates on which the Judges do rely could
be more probative, i.e., those that are consonant with Dr. Marx’s directional
analysis. However, in the present proceeding, as discussed infra, the Judges
adopt the Tyler Model as a regression model that is probative of relative
marketplace values over the entire 2014-2017 period. Accordingly, the
Judges find Dr. Marx’s “directional” analysis, although useful, not as
probative or definitive as the Tyler Model. Nonetheless, the Judges will
utilize the Marx Model, as appropriate, to reconcile differences between the
Tyler Model and the adjusted Bortz approach undertaken infra.
3. Nonetheless, the Judges emphasize the appropriateness of Dr. Marx’s “directional” analysis, because they do not want to leave the implication that such qualitative analyses are inappropriate. Dr. Marx’s 2015-2017 directional analysis was an appropriate alternative to a fee-based regression – because (as discussed elsewhere in this determination) the WGNA conversion substantially increased the number of minimum-fee-only CSOs and the number of CSOs with fewer than two subscriber groups – significantly reducing the number of CSOs and subscriber groups relative to those accepted by the Judges in the 2010-13 Determination. In this regard, the Judges do not credit Dr. Erdem’s reliance on separate arguments seeking to discredit Dr. Marx’s use

The Judges also note that Dr. George herself pooled data from 2014 with the 2015-2017 data, where the
data distinction was dramatic, having arisen from the WGNA conversion.

of the regression approach for 2014, i.e., separate arguments regarding the impact of (a) the reduction in the number of CSOs with two or more subscriber groups; and (b) the increase in the number of minimum-fee-only CSOs. Rather, Dr. Marx has considered the combined effect of these factors.

4. Although Dr. Marx’s “directional” approach is probative and useful, she
overstated the point that the reduction in above-minimum-fee-paying CSOs
rendered their revealed preferences without benefit. Rather, their channel
selections/programming preferences are also probative and useful, even if less
so than in the 2010-13 Determination because of the reduction in the number
of such CSOs and in the percentage of royalties they represent.

5. Dr. Marx’s allocation of shares including “duplicated” minutes is superior to her share allocation excluding “duplicated” minutes, because the Judges adopted the former approach in the 2010-13 proceeding due to problems relating to the latter, as described in the prior determination. See 2010-13 Determination at
3565, 3569, 3591, and 3610-11.

6. The evidentiary weight of Dr. Marx’s “directional” analysis for the 2015-2017
period is not diminished due to her reliance on local viewership data, because
the evidence in this proceeding indicates that a substantial percentage of
distant viewing is retransmitted to areas in close proximity to the origin of the
local signal. See, e.g., Erdem WRT 59 (“91% of systems are retransmitting
the same signal on a local basis to some subscriber groups and on a distant
basis to other subscriber groups [and] [o]f these systems, on average, 76% of

the channels that are distant to a subscriber group are retransmitted as local to
another subscriber group ….”).

7. Dr. Marx’s “directional” analysis provides evidence suggesting that PTV and
Program Suppliers content declined in viewership relative to CTV, implying,
ceteris paribus, a higher relative share value for CTV. The Judges note that
Dr. George agrees with this point (but see point (8) below).

8. However, the Judges’ prior reluctance to use viewership as a direct proxy for
value in the allocation (Phase I) proceedings cautions against applying too
much probative weight to this “directional” analysis. Accordingly, the Judges
adopt Dr. Gray’s criticism regarding Dr. Marx’s reliance on local viewership
data, but only as a caution regarding its evidentiary weight. In this regard, Dr.
George agrees that the weight placed on Dr. Marx’s viewership-based
approach be limited.

9. The Judges further limit the evidentiary weight of Dr. Marx’s “directional”
analysis, because, as Dr. Gray further notes, Dr. Marx’s own data show that
CTV’s share of claimant category minutes declined significantly between
2014 and 2017.
C. Program Suppliers’ Regression Approach: The Tyler Model
On behalf of Program Suppliers, its expert witness, Dr. Tyler, proffered a
regression analysis that, while within the broad category of fee-based regressions, is
differentiated in ways that Dr. Tyler opines to be important in this proceeding. The
Judges’ review of his testimony, infra, highlights these broad similarities and the
assertedly important differences.

At a high level, Dr. Tyler agrees with the finding in the 2010-13 Determination
that regression analysis is very informative for estimating relative marketplace value in
this case. But by way of differentiating his approach, Dr. Tyler notes that a regression
seeking to establish relative marketplace value should estimate incremental value, which
he posits here to be the marginal value of an additional minute of different types of
programming content – rather than a value relative to a reference or base
category, as in the other proffered regressions. Tyler ACWDT ¶ 65; 4/19/23 Tr. 5439-40
(Tyler).
Next, Dr. Tyler notes that – although the statutory royalty formula in section 111
prevents the setting of market prices for distantly retransmitted stations – a regression can
observe how CSOs reveal their preferences for different types of stations bundling
various types of programming content, given the pre-existing section 111 royalty rate
provisions. In turn, the observations of the decision-making by CSOs provide insight
into their willingness-to-pay (WTP) for different programming categories on their
distantly retransmitted local stations. The final link in this analytic chain, according to Dr.
Tyler, is that the regression can measure this WTP and thus estimate the “relative values
of market outcomes” that cannot be directly observed. Tyler ACWDT ¶ 65.
More particularly, Dr. Tyler explains that regression analysis as applied to
determine relative marketplace value in these proceedings “exploits the fact that CSOs
make choices as to which bundles of content they retransmit.” Tyler ACWDT ¶ 66. He
adds that the regression will estimate the incremental royalty amount that CSOs paid (or,
more accurately appeared willing to pay)80 to acquire different types of content, which,

The Judges understand that Dr. Tyler found it necessary to include this qualifier because in a majority of
instances in the 2015-2017 period, CSOs paid the minimum fee rather than the “base fee” calculated on a
subscriber group basis. See Tyler ACWDT ¶ 67 (tacitly acknowledging that where the minimum fee is
binding, a fee-based regression does not provide the CSOs’ actualized revealed preferences, but rather only
“insight into how the CSOs would actually value these program categories in an unregulated market.”).
In this regard, the Judges discuss elsewhere in this determination the distinction in evidentiary value
between instances where the CSO actually pays the calculated subscriber group base fee, and instances
where the CSO actually pays the minimum fee (not the calculated subscriber group base fee).

he opines, “is akin to finding the relative value of programming content, based on actual
choices made by marketplace participants.” Id. Finally, Dr. Tyler explains that the
“marginal values” calculated via his regression must be multiplied by the quantities of
minutes “to compute relative marketplace value.” Tyler ACWDT ¶ 68.
Notwithstanding his broad agreement with other experts in this and prior
proceedings that fee-based regressions are useful, he parts company with them in an
important way. Rather than start from the assumption that Dr. Crawford’s 2010-13 model
is useful or correct, Dr. Tyler constructed a regression model that differed from the
approach taken by Dr. Crawford and from Drs. Johnson, George, and Marx (for 2014),
whose approaches were modified versions of Dr. Crawford’s model. More specifically,
he avers that the Tyler Model diverges importantly and beneficially from prior fee-based
regressions and from the fee-based regressions proffered by the other experts here,
because of his model’s use of a rate (rather than a dollar royalty amount) as the dependent variable.
In this regard, Dr. Tyler explains that Crawford-style regressions use actual dollar
royalty amounts as the dependent (left-hand side) variable, which is problematic because
“substantial variability exists across the royalty amounts calculated for each subscriber
group . . . . ” More particularly, because “copyright royalties are determined on the basis
of gross receipt percentages . . . greater [dollar] royalty amounts . . . for a subscriber
group [may occur] for no other reason than that one CSO has more subscribers or higher
prices, or both, than another CSO.” Tyler ACWDT ¶ 83.
Accordingly, a regression model using calculated royalty amounts (such as the
Crawford Model) “must control for these sources of variability to attempt to isolate the
incremental value of minutes by category type.” Tyler ACWDT ¶ 83. This control is
made in the Crawford-style regression by the use of “fixed effects,” which “discard
information from the substantial number of CSOs that have just one subscriber group,” a
loss of data that is unnecessary in the Tyler model. Tyler WRT ¶ 70.

Dr. Tyler’s use of royalties as a percentage of gross receipts, at the subscriber
group level, allows him to calculate what he coins (as noted supra) the “Subscriber
Group Royalty Percentage” (“SGRP”). When the SGRP is regressed against the number
of transmitted minutes for each category, Dr. Tyler obtains coefficients for his regression
equation that he describes as “represent[ing] a type of price.” Tyler ACWDT ¶ 84.
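In schematic terms (the notation here is ours, not Dr. Tyler’s), the relationship he describes can be written as:

$$\mathrm{SGRP}_g \;=\; \frac{\text{base fee royalty}_g}{\text{gross receipts}_g} \;=\; \beta_0 + \sum_{c} \beta_c \, m_{g,c} + \varepsilon_g,$$

where $g$ indexes subscriber groups, $m_{g,c}$ is the number of distantly retransmitted minutes in claimant category $c$, and each coefficient $\beta_c$ corresponds to the “type of price” per minute of category-$c$ programming that Dr. Tyler describes.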
This attempt by Dr. Tyler to characterize the SGRP dependent variable as a “type
of price” is no mere academic detail. By making this characterization, Dr. Tyler claims
that his model sits within a well-accepted class of econometric regressions known as
“hedonic regressions,” which he defines as follows:
Hedonic regression . . . model[s] . . . estimate the influence that various
factors have on the price of a good, or sometimes the demand for a good. In
a hedonic regression model, the dependent variable is the price (or demand)
of the good, and the independent variables are the attributes of the good
believed to influence utility for the buyer or consumer of the good. The
resulting estimated coefficients on the independent variables can be
interpreted as the weights that buyers place on the various qualities of the
good.
Tyler ACWDT ¶ 85.
Dr. Tyler then constructs his purported hedonic regression by using what he
describes as the calculated “actual”81 royalty rate per subscriber – determined by the base
fee royalty as a percent of each subscriber group’s gross receipts. Tyler ACWDT ¶ 87.
He proceeds to weight the regression model by the gross receipts of the CSOs, which he
opines is “consistent with assessing relative marketplace value [because] [s]ubscriber
groups with larger gross receipts would tend to contain more information [and] CSOs

The word “actual” in this context is rather Orwellian. For the 2015-2017 period, a substantial majority of
the CSOs in which the subscriber groups are situated “actually” paid the minimum fee. A Base Fee was
“actually” calculated, as required by the regulations, but not “actually” paid, because the Minimum Fee
bound. Dr. Tyler’s misleading semantic use of the adjective “actual” does not assist the Judges in deciding
whether any or all of the Base Fee calculations have objective evidentiary weight.
would be expected to scrutinize decisions regarding distantly retransmitted signals more
carefully when there are more dollars at stake.” Tyler ACWDT ¶ 88.82
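To make the weighting concrete, the following is a minimal illustrative sketch (not Dr. Tyler’s actual code or data) of a gross-receipts-weighted regression of the SGRP on category minutes, using the statsmodels library; the column names and figures are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical subscriber-group-level records; all values are illustrative only.
df = pd.DataFrame({
    "royalty":        [1200.0, 800.0, 450.0, 300.0, 950.0],
    "gross_receipts": [90000.0, 60000.0, 30000.0, 25000.0, 70000.0],
    "ps_minutes":     [5000, 3000, 1000, 800, 4200],   # Program Suppliers minutes
    "jsc_minutes":    [400, 900, 100, 50, 600],        # Joint Sports minutes
})

# Dependent variable: royalties as a share of gross receipts (the SGRP), a rate
# rather than a dollar amount.
df["sgrp"] = df["royalty"] / df["gross_receipts"]

# Weight each observation by the subscriber group's gross receipts, reflecting the
# testimony that groups with more dollars at stake carry more information.
X = sm.add_constant(df[["ps_minutes", "jsc_minutes"]])
result = sm.WLS(df["sgrp"], X, weights=df["gross_receipts"]).fit()
print(result.params)  # per-minute coefficients, the "type of price" described above
```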
Dr. Tyler’s regression model “includes interaction terms for each year . . . which
allows for estimated valuations that vary” for each year in the 2014-17 period. This
annualizing of the valuations is distinguishable from the “pooled” approach of other
regression experts in this proceeding, who (in the models proffered in their direct
testimonies) “pool” their data across all four years. Dr. Tyler rejects this approach and
utilizes an annualized approach instead, because, he opines, utilizing the same coefficient
across the four years is both (1) legally inappropriate because calculating share
allocations for specific years is statutorily required and (2) inconsistent with “best
practices” for hedonic regressions (data permitting), which allow the underlying
relationships between types of minutes and SGRP to vary over time. Tyler ACWDT ¶
91.
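Schematically (again in our notation, not Dr. Tyler’s), the year interaction terms allow the per-minute coefficients to differ by year rather than being held constant across the pooled 2014-2017 period:

$$\mathrm{SGRP}_g = \sum_{y=2014}^{2017} \sum_{c} \beta_{c,y} \, m_{g,c} \, \mathbf{1}[\,\text{year}_g = y\,] + \varepsilon_g \qquad \text{rather than} \qquad \mathrm{SGRP}_g = \sum_{c} \beta_c \, m_{g,c} + \varepsilon_g.$$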
Summing up, Dr. Tyler identifies what he understands to be the many advantages
of his model:
1. SGRP – as a type of price – reflects a “minimum willingness to pay” and thus
has a “clear economic interpretation.” PS PFF ¶ 285 (and record citations
therein).
2. The focus of the regression is on “nearly 20,000 observations/data points, and
more than 2,000 distinct pricing relationships,” providing the variation needed
for a meaningful regression. PS PFF ¶ 286 (and record citations therein).
3. By using SGRP as the dependent variable instead of royalties (in any
functional form), the Tyler Model is not influenced by variability in gross

The use of weights in hedonic regressions has support in the economic literature. See Tyler ACWDT ¶
88 n.72 (citing sources). (Dr. Tyler also includes a sensitivity analysis in which he shows the results of his
model without weights. Tyler ACWDT § VI.G. (tech. app. C).)

receipts caused by the number of subscribers in a subscriber group or higher
CSO subscription prices arising from, for example, the number and type of
cable networks carried, the quality of (or deficiency in) customer service, and
the bundled pricing of cable, internet and/or phone. Unlike the regressions
that use royalties as the dependent variable, the Tyler Model does not need to
control for these statutorily unrelated effects, thus avoiding the potential for
bias when fixed effects are introduced. PS PFF ¶¶ 290-292 (and record
citations therein).
4. Because the SGRP is a “type of price,” the Tyler Model is “closer to the
definition of a traditional hedonic model.” PS PFF ¶ 293 (and record citations
therein).
5. By establishing values and shares for each year, rather than pooling the results
over the four-year period, the Tyler Model: (a) is in line with the Judges’
statutory task; (b) captures annual industry changes; and (c) is consistent with
“best practices for hedonic regressions.” PS PFF ¶¶ 294-296 (and record
citations therein).
6. The Tyler Model looks at the more economically logical hypothetical
marginal expansion per minute of a program type to determine value rather
than the hypothetical shift of minutes among program categories. PS PFF ¶
298 (and record citations therein).
7. The Tyler Model avoids the problem inherent in the other regressions that
must rely on incorrect subscriber number estimates. PS PFF ¶¶ 299-300, 358,
360-362 (and record citations therein). Unlike the models proffered by Drs.
George, Johnson and Marx, the Tyler Model is not based on the Crawford
Model. Therefore, unlike those models, the Tyler Model is not tainted by the
potential “specification searching” suggested by the high number of models

and specifications tested by Dr. Crawford. Moreover, Dr. Tyler only
considered the results of fewer than two dozen models (all linear in functional
form) many of which were robustness/sensitivity checks and not generated as
potential alternative base models. PS PFF ¶¶ 305, 307, 311-313, 315-316,
376-379 (and record citations therein).
8. Despite its differentiation from the Crawford Model, particularly with regard
to the SGRP as the dependent variable in the Tyler Model and in the absence
of a need for fixed effects, the Tyler Model is an improvement of the fee-based regression approach, not a departure. PS PFF ¶ 317 (and record
citations therein).
9. The Tyler Model does not cherry-pick or otherwise overstate an allocation
share for Program Suppliers, for whom Dr. Tyler presented testimony. PS PFF
¶¶ 308-309 (and record citations therein).
Applying his model in the foregoing manner, Dr. Tyler estimates royalty shares (and
standard errors) for each year as follows:
FIGURE 3.2
Royalty Allocations Based on Regression Analysis for Basic Fund, 2014-2017
(standard errors in parentheses)

Year    Program Suppliers    JSC             CTV             PTV             SDC            CCG
2014    26.6% (3.8%)         37.2% (7.5%)    11.3% (2.6%)    14.0% (1.7%)    4.3% (0.9%)    6.5% (0.9%)
2015    39.7% (1.5%)         2.8% (1.0%)     10.2% (1.5%)    27.9% (0.6%)    6.2% (0.6%)    13.3% (0.5%)
2016    34.0% (1.5%)         2.5% (0.9%)     8.2% (1.8%)     37.4% (0.7%)    4.4% (0.6%)    13.6% (0.5%)
2017    31.8% (1.1%)         1.8% (1.0%)     6.9% (0.9%)     40.4% (0.6%)    4.0% (0.4%)    15.2% (0.9%)

Adjusted R2: 83.3%

1. Criticisms of the Tyler Model
a. Criticisms of the Tyler Model by SDC Expert Witness Dr. Erdem
Dr. Erdem opines that, notwithstanding Dr. Tyler’s claim that his model is
differentiated to address defects in the approach used by Dr. Crawford, the Tyler Model
“essentially carries the same flaws.” Erdem WRT ¶ 43. But before examining alleged
flaws in the Tyler Model, Dr. Erdem acknowledges that, in his opinion, the other
regression experts’ modeling is more “egregious” than Tyler’s model. Erdem WRT ¶
121. More particularly, Dr. Erdem recognizes that Dr. Tyler has made what Dr. Erdem
understands to be the following salutary changes from the approach used by Dr.
Crawford:
1. A change in the dependent variable from the log of royalties into a
fees/revenue ratio.
2. The removal of fixed effects.83
3. Division of each claimant category into “Canada” and “non-Canada” zone
minutes.84
4. Removal of the effect of “the number of subscribers” by “divid[ing] the . . .
fees paid by a metric [gross receipts] that scales with the number of
subscribers.”
Erdem WRT ¶¶ 43, 61.
However, according to Dr. Erdem, despite the positive significance in these model
changes, the core principle of the Tyler Model remains unchanged from other
regressions, because “the dependent variable Dr. Tyler uses is still driven by fees [and]
attempt[s] to estimate the relationship between fees and programming minutes.” Erdem

Dr. Erdem opined that the inclusion of fixed effects obscured the more impactful predictive effects of
other independent variables on the royalty-based related dependent variable.
The experts’ treatment of issues relating specifically to the Canada Zone is set forth infra in this
determination.
WRT ¶ 43.85 More granularly, Dr. Erdem criticizes Dr. Tyler’s use of the SGRP as the
dependent variable because it “basically boils down to the number of DSEs.” In this
regard, Dr. Erdem further opines:
This is because a system’s royalty fees are calculated by multiplying their
revenues by a specified amount that increases as the system adds more
DSEs, so dividing the fees by revenue will produce a number that correlates
strongly with the number of DSEs the system carried. As a result, Dr. Tyler
is essentially saying that DSEs equate to market value.
Erdem WRT ¶ 122. Dr. Erdem asserts that this change in the dependent variable from the
log of royalties to the SGRP does not cure the fundamental problem in all fee-based
regressions, to wit: fee-based regressions are “trying to calculate market value when no
market exists, using variables determined by regulation.” Erdem WRT ¶ 122.
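Dr. Erdem’s mechanical point can be illustrated with a simple sketch. Only the 1.064 percent first-DSE figure appears in testimony quoted elsewhere in this determination; the remaining tier rates below are placeholders for illustration only and are not drawn from the record.

```python
# Illustrative only: a stylized base-fee rate schedule as a function of DSE count.
# Only the 1.064% first-DSE figure is quoted in this determination; the other tier
# rates are placeholders, not the statutory schedule.
def base_fee_rate_pct(dse_count: float) -> float:
    """Stylized base fee, expressed as a percentage of gross receipts."""
    first = min(dse_count, 1.0) * 1.064                    # quoted first-DSE rate
    middle = max(min(dse_count, 4.0) - 1.0, 0.0) * 0.701   # placeholder rate
    extra = max(dse_count - 4.0, 0.0) * 0.330              # placeholder rate
    return first + middle + extra

# Because the base fee equals gross receipts times base_fee_rate_pct(DSEs) / 100,
# dividing the fee by gross receipts (the SGRP) yields a number driven by the DSE
# count alone, which is the core of Dr. Erdem's criticism.
for dses in (0.25, 1.0, 2.5):
    print(dses, round(base_fee_rate_pct(dses), 3))
```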
b. Criticisms of the Tyler Model by SDC Expert Witness Dr. Rubinfeld
Dr. Rubinfeld testifies about the deficiencies in all the fee-based regressions, but
he pointedly criticizes Dr. Tyler for characterizing his regression as a hedonic regression.
Rubinfeld WRT ¶ 71. Dr. Rubinfeld levies this objection because he is of the opinion
that Dr. Tyler’s dependent variable, the SGRP, does not equate or analogize to a “market
price” – a necessary element for a regression to qualify as hedonic. Rubinfeld WRT ¶ 71.
Thus, according to Dr. Rubinfeld, Dr. Tyler’s dependent variable, the SGRP, falls victim
to the same deficiency as the other regressions, in that there is “no reason to believe that a
regression based on statutory royalty fees—whether in dollar terms or expressed as a
percentage of gross receipts—will identify the marginal value of programming that
would prevail if the royalty fees were determined in a free market.” Rubinfeld WRT ¶
75.

This is a reprise of the overarching criticism that Dr. Erdem made in the 2010-13 Determination, which
was rejected by the Judges.
However, Dr. Rubinfeld approvingly cites Dr. Tyler’s testimony (in the same vein
as Dr. Erdem) for its critique of the modeling undertaken by Dr. Crawford. In this
regard, Dr. Rubinfeld notes:
1. Dr. Tyler applies Dr. Crawford’s regression model to the 2014-2017 data
available in the current proceeding and finds a “serious” underlying modeling
problem in the fact that “the Crawford Model estimates zero shares for JSC in
2014 (as well as the other years) . . . .”
2. Dr. Tyler analyzes the troubling pattern of the regression’s “residuals” in Dr.
Crawford’s model – again using 2014-2017 data – and finds that the latter’s
regression model is “not well specified for the 2014-2017 data.”86
Rubinfeld WRT ¶ 93.
In sum, Dr. Rubinfeld does not find economic support for Dr. Tyler’s regression
model, but does find common cause with Dr. Tyler’s broad criticism of other fee-based
regressions.87
c. Criticisms of the Tyler Model by CTV Expert Witness Dr. Bennett
As an initial criticism, Dr. Bennett avers that Dr. Tyler’s use of his SGRP as the
dependent variable, instead of royalties, may potentially and illogically fail to link
“variation in the composition of minutes [to] value unless that variation is also
accompanied with a change in . . . the SGRP.” Bennett WRT ¶ 124. To make this point,
Dr. Bennett hypothesizes a scenario in which two minimum-fee-paying CSOs make
subscriber-increasing changes in distantly retransmitted stations, thus increasing
royalties, but each maintains the same SGRP because royalties have not increased
(remaining at the minimum fee level). Bennett WRT ¶ 125.

More technically, Dr. Rubinfeld (like Dr. Erdem) finds the “hammer-shaped pattern of residuals violates
the classical zero conditional mean of the disturbance assumption for the OLS estimator to be unbiased.”
Erdem WRT ¶ 93. This means that the residuals exhibit non-random data points, whereas a well-specified
regression would have random error terms. In (perhaps) somewhat less technical terms, Dr.
Rubinfeld is agreeing with Dr. Tyler that the unexplained portions of the Crawford Model are actually
correlated with one or more omitted independent variables.

Another SDC expert witness, Mr. John Sanders, likewise does not “endorse” Dr. Tyler’s modeling, but
relies on Dr. Tyler’s critiques to discredit the fee-based regressions proffered by other experts. See, e.g.,
Sanders WRT ¶ 3 nn.4, 9, & 20. Mr. Sanders also notes the divergence of Dr. Tyler’s estimated share for
PTV and, respectively, SDC content, from the results of other fee-based regressions as, in his opinion,
indicative of the unreliability of such regressions in these proceedings. Sanders WRT ¶¶ 11, 18.
Moving to another critique, Dr. Bennett opines that Dr. Tyler’s regression sample
“is based on a relatively small and non-representative sample of the CSOs whose royalty
payments comprise the aggregate of the royalty pool.” Bennett WRT ¶ 135. Dr. Bennett
does not suggest that this small sample is unique to Dr. Tyler among the regression
experts, acknowledging that this applies to “the other witnesses relying on regressions for
2014-2017.” Bennett WRT ¶ 136.88
d. Criticisms of the Tyler Model WDT by JSC Expert Witness Dr.
Majure
In addition to his general criticisms of all fee-based regressions, Dr. Majure levies
criticisms that he aims most particularly against Dr. Tyler’s regression approach. Dr.
Majure acknowledges that “[p]rior to WGNA’s conversion, there was some variation in
the royalty rate a CSO would pay for incremental content,” such that only “[t]he
regressions that rely on data for 2015–2017 have little to no connection with how much
CSOs value the content.” Majure WRT ¶¶ 75, 77. Thus, he opines that “only after the
WGNA conversion [the regressions] do not – and cannot – estimate the value of a minute
of content to CSOs.” Majure WRT ¶ 75.
Dr. Majure maintains that the Tyler Model well-demonstrates the foregoing point,
and that the Tyler Model essentially estimates only “the equation given by the statutory
formula . . . .” Majure WRT ¶ 78. Thus, he opines that the SGRP in the Tyler Model
does not establish a “price” that can be explained and applied as in a bona fide hedonic
regression. Majure WRT ¶¶ 78-79 (“For example, [in the Tyler Model] the ‘price’
calculated for the subscriber groups of a CSO carrying a full DSE or less than a full DSE

Although Dr. Bennett does not state here why the sample is so truncated, the Judges understand this point
to be based on the growing number of CSOs, without any distant retransmissions and thus no subscriber
groups, which Dr. Bennett indicates increased over the 2015-17 period.
across all subscriber groups would be 1.064 percent of the subscriber group’s revenues
multiplied by its total number of DSEs.”).
However, Dr. Majure is careful to acknowledge that “the statutory formula could
lead to variation in Dr. Tyler’s ‘price’ beyond what comes from the DSE value” in 2014
but “this is not the case after 2014 [because] after 2014, the vast majority of subscriber
groups belong to CSOs that paid the minimum fee, leaving little variation in the
percentage of royalties they would owe.” Majure WRT ¶ 80. Thus, Dr. Majure appears
to recognize that for 2014 the Tyler Model presented an acceptable proxy for “price” as
its dependent variable.
e. Criticisms of the Tyler Model WDT by JSC Expert Witness Mr.
Harvey
Although Mr. Harvey opines that the Tyler Model, like the other regression
models, is unable to correctly value JSC programming for the 2015-17 period, he
acknowledges that the Tyler Model is superior to the others in one respect: it calculates
annual coefficients rather than “pooled” coefficients for all four years (2014-2017).
Harvey WRT ¶¶ 28, 35.
But Mr. Harvey is otherwise decidedly critical of the Tyler Model – maintaining
first that it does not “reliably estimat[e] [JSC] value[] in 2015-2017,” because “[s]ixty-six percent (4 of 6) of the compensable sports coefficients are not statistically
significantly different than zero.” Harvey WRT ¶ 45 & tbl.9.
Next, Mr. Harvey separates out minimum fee systems from the Tyler Model, in
order to isolate those CSOs making retransmission decisions that Mr. Harvey asserts had
economic consequence in terms of royalty payments. Harvey WRT ¶ 46 & tbl.10. He
then turns to various “sensitivity tests” undertaken by Dr. Tyler, that were not contained
in the Tyler Written Direct Testimony but which were produced in discovery by Program
Suppliers. Harvey WRT ¶ 68. Looking at these tests, Mr. Harvey notes that Dr. Tyler

“selected a specification that, among his many sensitivity analyses, resulted in one of the
lowest shares for JSC and one of the highest for Program Suppliers.” Harvey WRT ¶ 70.
See also Harvey WRT fig.6.89
f. Criticisms of the Tyler Model by CCG Expert Witness Dr. George
At the outset, Dr. George avers that Dr. Tyler’s model “diverges from economic
theory” through his consideration of the SGRP, rather than a measure of royalties, as the
dependent variable affected by claimant programming minutes. George WRT at 11-12.
More particularly, Dr. George maintains that this change in the dependent variable:
removes the link between the value of distant signal programming to [CSO]
and royalty cost that lies at the heart of the theoretical framework [and]
effectively replicates the regulatory formula [rather than] reflect value.
George WRT at 12. Further to this point, Dr. George asserts that the inclusion of the
SGRP as the dependent variable “attenuate[s]” the differentiated marginal value of
assorted types of programs. She explains that by looking at royalties from all
retransmitted programming as a proportion of gross receipts, the Tyler Model
“understates the value of high-quality, differentiated program[ming] and overstates the value of
undifferentiated, low-quality programming.” George WRT at 12.
Another criticism levied against the Tyler Model by Dr. George is that (as with
the Johnson Model, discussed infra) it suffers from the consequential defect of:
includ[ing] no fixed effects at all [and the] coefficients [thus] are estimated
using variation across different cable systems . . . the variation most likely
to be contaminated by the effect of unobserved factors, also known as bias
from omitted variables . . . [the coefficients therefore] cannot be relied on
to reflect underlying value.
George WRT at 13 (emphasis added).

To be clear, figure 6 generated by Mr. Harvey shows that the share allocations arising from the proffered
Tyler Model were neither higher than all the Program Supplier shares nor lower than all the JSC shares
generated by the sensitivity tests. Moreover, Mr. Harvey does not state why the sensitivity test results
should have led Dr. Tyler to alter his share allocations, nor does Mr. Harvey state why Dr. Tyler should
have abandoned the Tyler Model merely because the shares differed in the sensitivity test, albeit not in a
manner that even Mr. Harvey avers had called into question the model’s robustness.
g. Criticisms of the Tyler Model by PTV Expert Witness Dr. Johnson
Although Dr. Johnson finds that he and Dr. Tyler agree on a number of points, see
Johnson WRT ¶ 26, Dr. Johnson takes issue with the following aspects of Dr. Tyler’s
WDT.
At the outset, Dr. Johnson criticizes Dr. Tyler’s use of the SGRP as the dependent
variable in the Tyler Model because, according to Dr. Johnson, “the SGRP does not
capture the CSO decision-making process and identify their valuation of such
programming,” because the SGRP essentially replicates the statutory formula without
regard to “the type of programming . . . on the signals the CSO retransmits.” Johnson
WRT ¶ 34. Thus, according to Dr. Johnson, the SGRP dependent variable in the Tyler
Model fails to capture the “chain of logic” of the correlation in the fee-based regressions,
i.e., that “[t]o the extent . . . a CSO’s bundle of programming includes more valuable
programming, the price of that bundle will be higher, the CSO’s gross receipts will be
higher, and thus the amount of royalties that the CSO pays will be higher.” Johnson
WRT ¶ 35.
Next, Dr. Johnson looks at the “sensitivity tests” Dr. Tyler applied to his own
model and notes “the extreme variability in Dr. Tyler’s regression results” uncovered by
these tests relative to Dr. Johnson’s more stable results, which, according to Dr. Johnson
“suggests that modeling royalty amounts rather than the statutory royalty rate is more
appropriate.” Johnson WRT ¶ 40.
2. The Judges’ Analysis and Findings Regarding the Tyler Model90
The Judges make the following findings with regard to the Tyler Model:

The Judges’ analysis and findings in this section are separate and apart from their analysis and findings
on the specific issues considered in separate sections of this determination.
1. Dr. Tyler’s measurement of “an additional minute” of programming content, as
contrasted with a “value relative to a reference or base category” in other
regressions, is appropriate, but neither approach is superior inter se.
2. The base fee calculations of minimum-fee-only CSOs do provide some “insight”
into how those CSOs might actually value different program categories, but that
“insight” is limited, because it is predominantly informative as to ordinal rankings
of relative value, rather than cardinal measures, as required in these proceedings.
See 2010-13 Determination at 3578 (“the Judges do not place much weight on the
relative rankings of the program categories”); cf. Phonorecords III, Initial Ruling
and Order after Remand at 38 (July 1, 2022) (distinguishing the benefit of an
economic model’s “insight” from a useful “real-world relationship”).
3. The base fee calculations of a CSO that are more proximate to the minimum fee it
eventually paid would be more probative of CSOs’ willingness-to-pay than when
there is a large gap between the calculated base fee and the paid minimum fee,
because the CSO could have understood that the base fee might bind. However,
the record provides insufficient evidentiary basis to apply this point in the present
proceeding.
4. On the present factual record, the Tyler Model’s SGRP is preferable to the log of
royalties, or royalties themselves, as the dependent variable in a fee-based
regression, because it does not require the use of questionable controls and fixed
effects, and remains appropriate even in the absence of such controls and fixed
effects. However, the log of royalties, or royalties themselves, are appropriate
dependent variables, provided the factual record and the specifications of the
regression are appropriate.
5. The Tyler Model is not a hedonic regression as generally understood by
economists, because it is not based on actual market prices. Dr. Tyler at times

acknowledges this point, by describing his SGRP as a “type” of price, rather than
an actual price and by also describing the SGRP as “closer” to the definition of a
traditional hedonic model. However, the approach taken by the Tyler Model is in
the nature of a hedonic regression, in that it utilizes a similar approach by creating
a useful proxy for price in the form of a budget constraint, i.e., the SGRP.
(See also the discussion regarding “relative marketplace value” supra and the
section, infra, comparing the Tyler Model to a “fee generation” approach).
6. The Tyler Model’s weighting of observations by each CSO’s gross receipts is
appropriate because the decisions of CSOs with larger gross receipts have a
greater impact on the royalty pool, making the programming category information
they provide more important.
7. The Tyler Model, calculating coefficients for each year, is superior to the other
regression models in this proceeding to the extent those models were originally
proffered as “pooled” models, using one coefficient for the entire 2014-2017
period. (However, this advantage is mitigated where there is evidence or
testimony that such “pooled” models were themselves subsequently recalculated
on an “unpooled” basis either by the proffering regression expert or by other
expert witnesses in their rebuttal testimonies.)
8. The Tyler Model provides sufficient variation among the CSOs’ decisions
because it contains approximately 20,000 data points for observation, and more
than 2,000 distinct pricing relationships. 4/19/23 Tr. 5436 (Tyler).
9. The Tyler Model is superior to the other fee-based regressions by not requiring as
a control variable an estimate of the number of subscribers in a subscriber group,
which cannot be estimated without measurement error. PS PFF ¶¶ 300, 360-362
(and record citations therein). This issue is a critical reason why the Judges give

greater weight to the Tyler Model vis-à-vis the other regression models, and thus
necessitates getting “into the weeds” for a more detailed explanation.
The control for the number of subscribers is very important in the other fee-based
regressions where the dependent variable is a functional form of royalties,
because the number of subscribers clearly would have a substantial effect on the
level of royalties (i.e., more subscribers = more royalties). Moreover, the number
of subscribers must be controlled because the number of subscribers could also be
positively correlated with the number of minutes. Thus, it must be controlled in
order to isolate the “effect” of interest, which is the impact of different program
category minutes on the royalties. However, there is no data available regarding
the number of subscribers in a subscriber group, and the other fee-based
regression experts are forced to make an estimate by “proportionally assigning
the number of overall CSO subscribers to each subscriber group based on the
gross receipts for each subscriber group.” Tyler WRT ¶ 41 (emphasis added). (A brief
illustrative sketch of this proportional assignment appears after these numbered findings.)
The problem with this estimate is two-fold, inaccuracy and impact on the
regression. As Dr. Tyler explains:
The estimate is “inaccurate because allocating the number of subscribers based on
the distribution of gross receipts is akin to assuming that customers in each
subscriber group are paying the same monthly rates on average. [T]his
assumption is flawed because, as Dr. Johnson acknowledges, CSOs may
broadcast one set of stations to one set of subscribers and a different set of
stations to another set of subscribers [and] cable prices vary across customer type,
geography, and over time. . . . The only way that subscriber groups would have
the same average prices is if they all bought the same products at the same prices
in the same proportions across groups. Thus, one would expect the average prices

to be different across subscriber groups, not the same as assumed by Dr. Johnson
and Dr. George.” Tyler WRT ¶¶ 42-43; 45-46.91
This inaccurate estimate of the number of subscribers is also impactful on the
other fee-based regressions that must use the number of subscribers as a control
variable. Dr. Tyler explains:
For example, assume that customers in suburbs have a higher average price than
downtown customers, such that Dr. George and Dr. Johnson undercount
subscribers in the suburbs and overcount subscribers in urban areas. The
types of distantly retransmitted signals that are broadcast to these two types of
customers are likely to vary. Thus, the use of inaccurate subscriber group numbers
would lead to a mismeasurement of the incremental value of the minute categories
in the regression analysis.
In short, the use of inaccurate subscriber group numbers is potentially a serious
problem for Dr. George and Dr. Johnson. The use of “filled-in” data when actual
numbers are not available may have introduced bias into their results and this
could have important consequences for their estimates.
Tyler WRT ¶¶ 49, 52.
10. Because the Tyler Model is not based on the Crawford Model, it is not tainted by
the potential “specification searching” that haunts the Crawford Model through its
consumption of “phantom degrees of freedom,” as discussed in the 2010-13
Determination. Moreover, there is no persuasive evidence that Dr. Tyler engaged
in anything that could be construed as specification searching.

Dr. Tyler provides an empirical example of the varying subscription rates among a CSO’s subscribers.
Tyler WRT ¶ 44.
11. The Tyler Model is also not the subject of the criticisms levied against the other
fee regressions. For example. Dr. Erdem applauds the Tyler Model for its
abandonment of the royalty-based dependent variable, the unnecessity and
removal of fixed effects and the use of a dubious measure of the number of
subscribers as a control variable.
12. The overarching criticism that Dr. Erdem does levy against the Tyler Model is
insufficient to damage its usefulness. Specifically, Dr. Erdem states the obvious
as a criticism: “[T]rying to calculate market value when no market exists . . . .”
Erdem WRT ¶ 122. But that is simply a restatement of the problem created by the
structure of section 111. As the Judges explain in more detail elsewhere in this
determination, as they explained in the 2010-13 Determination and as
acknowledged by the D.C. Circuit, the regressions identify market-based behavior
among CSOs, in the form of revealed preferences for different program
categories, and such behavior is relevant evidence useful for estimating relative
marketplace value. And, with specific reference to the Tyler Model, the SGRP is
reflective of, first, the budget constraint that limits the CSOs’ distant
retransmittals and, second, the program categories they select when so
constrained. (This point is discussed further infra in the discussion comparing the Tyler
Model to a fee-generation approach.)
13. The other SDC expert, Dr. Rubinfeld, likewise applauds Dr. Tyler’s approach to
the problem, agreeing with him that there exist serious modeling problems in
connection with the Crawford Model and those based on that model. However,
Dr. Rubinfeld – like Dr. Erdem – restates the statutory problem – the absence of a
“market price,” in order to argue that the Tyler Model is not a true “hedonic”
regression. (Dr. Majure makes the same argument.) As noted supra, the Judges
find that the Tyler Model is not a true “hedonic” regression, as Dr. Tyler (albeit

sometimes grudgingly) seems to concede. However, as discussed in more detail
elsewhere in this determination, the Judges find the Tyler regression to be a
“Hedonic-inspired” regression, useful in this proceeding to identify an appropriate
market-factor driven allocation of royalty shares.
14. Dr. Bennett’s attacks on Dr. Tyler for originally engaging in an erroneous
critique of the Crawford Model are inconsequential. Dr. Tyler acknowledged his
error and withdrew the portion of his original WDT that contained his erroneous
critique of the Crawford Model. There is no reason to consider this issue relevant,
and, if anything, it indicates that Dr. Tyler is willing to acknowledge a mistake.
15. More broadly, the Judges do not find the criticisms by Dr. Bennett or by Dr.
George that relate to Dr. Tyler’s other criticisms of the Crawford Model to be
relevant to the issues pertaining to the Tyler Model itself.
16. Dr. Bennett’s Tyler Model-specific criticism – regarding the impact of channel
lineup changes by two hypothetical CSOs paying the minimum fee – is of no
consequence in the Judges’ analysis, because the Judges – as discussed elsewhere
in this determination – are focusing on the above-minimum-fee CSOs in their
application of the Tyler Model. More specifically, the Judges credit the testimony
of JSC’s expert, Mr. Harvey, who separated out the minimum-fee-only systems
from the Tyler Model, in order to isolate those CSOs making transmission
decisions that had economic consequences in terms of royalty payments. See
Harvey WRT ¶ 46 & tbl.10.
17. The Judges do not question the Tyler Model for selecting a specification that
resulted in “one of the lowest shares for JSC and one of the highest for Program
Suppliers.” Absent a showing of specification searching, which is not even
alleged against Dr. Tyler, these results are not indicative of any wrongdoing.

18. Dr. Majure’s criticism that the Tyler Model essentially estimates only “the
equation given by the statutory formula” is incorrect. See the discussion of the
Tyler Model as related to a “fee generation” approach, infra.
19. The absence of a “reference category” (a/k/a “numeraire” or index) in the Tyler
Model is not a fault. As noted above, the Tyler Model measures the minimum
willingness to pay for an additional minute of distant programming across each
program category, not the value of a minute of one program replacing minutes
from a reference category.
20. Any greater precision or stability in the Johnson Model compared with the Tyler
Model is a consequence of Dr. Johnson’s decision to remove “fixed effects” from
his model where, unlike in the Tyler Model, the dependent variable was royalty-based, not the SGRP. That is, Dr. Johnson obtained more precision, but at the
expense of generating “omitted variable bias.” Although this econometric jargon
suggests an analysis “deep in the weeds,” it is of great importance: Precision and
stability are not particularly helpful if the model is measuring the wrong thing –
here, with the Johnson Model more in the nature of predicting the royalty level
by omitting “fixed effects” rather than focusing on the effect of program category
minutes on royalties (subject to the cost constraint reflected in the SGRP).
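The proportional subscriber assignment described in finding 9 above, and the equal-average-price assumption it embeds, can be illustrated with a minimal sketch (the figures are hypothetical; this is not any expert’s actual code):

```python
# Illustrative sketch of the "fill-in" method criticized by Dr. Tyler: allocate a
# CSO's total subscribers to its subscriber groups in proportion to gross receipts.
def assign_subscribers(total_subscribers: int, group_gross_receipts: list[float]) -> list[float]:
    total_receipts = sum(group_gross_receipts)
    return [total_subscribers * gr / total_receipts for gr in group_gross_receipts]

# 10,000 subscribers split across two groups with $60,000 and $40,000 in receipts.
print(assign_subscribers(10_000, [60_000.0, 40_000.0]))  # -> [6000.0, 4000.0]

# This allocation is exact only if both groups pay the same average price
# ($10 per subscriber here); if, say, suburban and downtown groups pay different
# average prices, the estimated subscriber counts are mismeasured, as Dr. Tyler explains.
```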
D. CCG’s Regression Approach: The George Model
Dr. Lisa George, a CCG expert witness,92 explicitly relied on Dr. Crawford’s
approach from the 2010-13 proceeding, “[b]ecause [Dr.] Crawford’s approach was
determined by the Copyright Royalty Board to be ‘highly useful in estimating relative
values’ . . . .” George WDT at 26-27.93 More particularly, Dr. George followed Dr.
Crawford’s approach by “estimat[ing] a regression model at the subscriber group level
with fixed effects and [royalties as] a logged dependent variable.” George WDT at 27.

Dr. George was received as an expert witness in the “field of economics, with experience in econometrics,
media markets, and industrial organization.” 4/18/23 Tr. 5111 (George).

The Judges must emphasize here the fact that the SDC provided to CCG (and all of the other
participants), in voluntary discovery in the present proceeding, promptly after the filing of written direct
statements, copies of materials from the 2010-13 satellite allocation proceeding that at the least suggested
Dr. Crawford may have engaged in inappropriate specification searching in the development of his
regression framework. However, neither Dr. George nor any other CCG witness specifically addressed in
written rebuttal testimony the discovery from the 2010-13 satellite proceeding suggesting Dr. Crawford’s
potential specification searching. (However, Dr. George more generally explained how she was able to
evaluate Dr. Crawford’s regression work, even though she did not address the discovery suggestive of Dr.
Crawford’s specification searching and of dissembling in his testimony before the Judges in the 2010-13
proceeding. See George WRT at 50-54.)
However, Dr. George adjusted the specifications in her model in a manner that
differentiated her model from Dr. Crawford’s model in two ways: (1) to reflect changes
in the distant signal market; and (2) to address comments from the Judges in the 2010-13
Determination. George WDT at 27. The key differentiators are (1) Dr. George’s
inclusion of separate “system accounting period fixed effects” rather than Dr. Crawford’s
“interacted system-accounting period fixed effects” and (2) the elimination of the
interaction of controls for (a) the top multi-system operators (MSOs) with (b) lagged
subscribers (i.e., subscribers from the preceding accounting period). George WDT at 27.
More particularly, Dr. George significantly reduced the number of fixed effects in
her preferred regression model compared to Dr. Crawford’s number of fixed effects.
Specifically, Dr. George testifies that her preferred model “includes one fixed effect for
each system plus one for each accounting period (number of systems plus 8 [six-month
accounting periods]),” whereas Dr. Crawford’s model included “one fixed effect for
every system every accounting period (number of systems times 8 [six-month accounting
periods])”. George WDT at 27 (emphasis added). According to Dr. George, this
deviation from Dr. Crawford’s approach was measured and beneficial:
Since fixed effects operate by narrowing the variation used to identify
coefficients, my specification is less restrictive than [Dr.] Crawford’s. In
other words, I make use of variation within cable systems over time but not
across cable systems. [Dr.] Crawford’s specification did not make use of
variation within cable systems over time or across cable systems, identifying
coefficients using only variation within systems each accounting period.


George WDT at 27 (emphasis added).
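The difference between the two fixed-effects structures can be summarized schematically (the notation is ours; $s$, $t$, and $g$ denote systems, accounting periods, and subscriber groups, and $X$ is the vector of control variables):

$$\ln R_{s,t,g} = \sum_{c}\beta_c\, m_{c,s,t,g} + \gamma' X_{s,t,g} + \alpha_s + \delta_t + \varepsilon_{s,t,g} \quad \text{(George: number of systems plus 8 effects)}$$

$$\ln R_{s,t,g} = \sum_{c}\beta_c\, m_{c,s,t,g} + \gamma' X_{s,t,g} + \alpha_{s,t} + \varepsilon_{s,t,g} \quad \text{(Crawford-style: number of systems times 8 interacted effects)}$$

The additive form absorbs only system-level and period-level differences, leaving variation within a cable system over time available to identify the $\beta$ coefficients; the interacted form absorbs that variation as well, as Dr. George explains above.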
As in the Crawford Model, Dr. George’s dependent variable is the natural log94 of
royalty fees and, as in the Crawford Model, is related by the regression to the subscriber
groups’ respective distant programming minutes for each claimant’s program category.
George WDT at 51. The regression process produces an estimate of coefficients, one for
each claimant program category, showing the effect of one additional programming
minute on the natural log of royalty payments. George WDT at 51. She then uses these
coefficients to calculate, in dollars, the “average marginal value” of an additional
programming minute for each claimant category. George WDT at 51-52.
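Because the dependent variable is in logs, each coefficient is approximately a percentage effect; converting it to a dollar “average marginal value” requires scaling by a royalty level. Schematically (our notation):

$$\beta_c \approx \frac{\partial \ln R}{\partial m_c} \quad\Longrightarrow\quad \frac{\partial R}{\partial m_c} \approx \beta_c \times R,$$

so the dollar value of an additional category-$c$ minute is the coefficient times the royalty amount, averaged over the relevant observations.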
To calculate shares, Dr. George likewise adopts the method used by Dr. Crawford
and, indeed, consistently across fee-based regression models. That is, she multiplies
these average marginal values by compensable programming minutes for each subscriber
group, thus producing a value of compensable programming for each claimant program
category. For each category, she uses that category’s value as the numerator in a fraction
where the denominator is the sum of those totals over all claimant categories.
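The share calculation described above, used consistently across the fee-based regression models, can be sketched as follows (the dollar-per-minute values and minute totals are hypothetical illustrations, not record figures):

```python
# Multiply each category's average marginal value (dollars per additional minute)
# by its compensable minutes, then express each category's value as a share of the total.
marginal_value_per_minute = {"PS": 0.012, "JSC": 0.030, "CTV": 0.008}      # hypothetical $/min
compensable_minutes = {"PS": 4_000_000, "JSC": 500_000, "CTV": 2_000_000}  # hypothetical minutes

category_value = {c: marginal_value_per_minute[c] * compensable_minutes[c]
                  for c in marginal_value_per_minute}
total_value = sum(category_value.values())
shares = {c: v / total_value for c, v in category_value.items()}
print(shares)  # each category's implied share of the royalty pool
```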
Dr. George reported the following claimant shares:

Technically, the “natural log” (shorthand for logarithm) is “[a] mathematical function defined for a
positive argument; its slope is always positive but with a diminishing slope tending to zero,” and it “is the
inverse of the exponential function X = ln(e^X).” James H. Stock & Mark W. Watson, Introduction to
Econometrics 821 (3d ed. 2015). Practically, for purposes of applied econometrics, using the logarithmic
functional form, which shows the percentage changes in the variables, may be more practical.

Table 22: Implied Claimant Shares, 2014-2017

Year    Program Suppliers    Joint Sports     Commercial TV    Public TV        Devotional Claimants    Canadian Claimants
2014    20.86% (1.99%)       25.64% (5.16%)   14.88% (2.13%)   30.21% (2.74%)   1.91% (0.49%)           6.49% (0.95%)
2015    31.71% (1.75%)       3.61% (0.94%)    12.04% (1.72%)   36.56% (1.89%)   2.41% (0.55%)           13.67% (1.91%)
2016    29.53% (1.61%)       3.45% (0.90%)    11.43% (1.65%)   41.59% (1.99%)   1.70% (0.39%)           12.30% (1.75%)
2017    26.11% (1.43%)       3.23% (0.85%)    10.19% (1.49%)   47.03% (2.08%)   1.40% (0.32%)           12.03% (1.73%)

Note: The table reports the implied claimant shares of distant signal royalties each year derived from the
regression model, which includes system and accounting period fixed effects. Standard errors in
parentheses.

Highlighting an important aspect of her analysis, Dr. George states that “[a]s expected,
estimated shares for 2014 are substantially different from those for 2015–2017 due to exit
of WGNA.” George WDT at 57.
Delving deeper into her regression equation, Dr. George explains that she
includes a number of control variables. As she explains, “[T]hese control variables are
included in the econometric model based on the expected economic relationship with
royalty payments [and] [e]ach of these terms has been included in prior regression
models for these proceedings.” George WDT at 54.
Specifically, Dr. George includes, explicitly or implicitly, the following controls:
CSOs paying minimum fees
CSOs paying into the 3.75 fund
CSOs paying into the Syndex fund
Canada Zone (system in Canadian re-transmission zone)
Number of permitted stations in the subscriber group
Number of distant stations in the subscriber group
Number of local stations in the subscriber group

Activated channels in the prior accounting period (lagged channels in subscriber
group)
Subscribers in prior accounting period (lagged subscribers in subscriber group)
Median income in primary county served by the system
System operated by top MSO, i.e., Comcast, Verizon, AT&T, Charter, Cox, Time
Warner, Cablevision, Altice.
George WDT at 53 tbl.19. Dr. George explained her reasons for including these controls as follows:
[I]ndicators for systems paying minimum fees, syndicated exclusivity surcharges,
or 3.75 fees as well as the number of permitted stations carried in the subscriber
group [are] all variables expected to be correlated with royalty payments.
An indicator for systems in the Canadian Zone is needed because re-transmission
rules are different in this region and may affect subscribers and royalty payments.
The (lagged) number of subscribers is an important control because royalties
increase with gross receipts, which in turn increase with the number of
subscribers. The number of subscribers is entered in lagged form to avoid the
possibility of reverse causality biasing the coefficients on program minutes.
(Channels activated enters as a lag for the same reason.)
The number of distant stations is included to ensure that the coefficients on
programming minutes are estimated all else equal. In other words, estimates of the
. . . coefficients should measure how a change in claimant minutes affects royalty
payments holding constant the total number of distant minutes broadcast, which is
a function of the number of distant signals re-transmitted.
Indicators for each of the top MSO’s (Comcast, Verizon, AT&T, Charter, Cox,
Time Warner, Cablevision and Altice) are included to account for potential
differences in strategies that might affect the demand for system offerings not
otherwise included in the econometric model. For example, changes in strategy by
Time Warner Cable systems acquired by Charter Communications would be
captured by the MSO indicators. While [Dr.] Crawford included indicators for
only the top six MSO’s, I add Cablevision and Altice because the largest
transaction in the 2014–2017 period was the Altice acquisition of Cablevision,
which was the 7th largest MSO at the time of acquisition.
George WDT at 53-54.

To determine whether her regression model was robust to certain specification
changes, Dr. George conducted sensitivity checks whereby she made certain changes to
her model. Specifically, she conducted the following three robustness/sensitivity checks:
(1) Changing her regression model specifications to include “interacted system-accounting period fixed effects (number of systems times 8).”
(2) Changing her regression model specifications to include “not only indicators
for the top MSO’s but also these indicators interacted with lagged subscribers.”
(3) Changing her regression model to include “both adjustments [i.e., (1) and (2)
above] . . . thus correspond[ing] to the model estimated by [Dr.] Crawford for his
2010–2013 analysis.”
George WDT at 58.
Dr. George found that the estimated shares in these three robustness/specification
tests “are close to those derived from the preferred model.” George WDT at 59; see also
id. at tbls.25-26. She also notes that the confidence intervals are tighter in the third
alternative robustness/sensitivity check, see George WDT tbl.27, reflecting the smaller
standard errors contained in that check, which she attributes to the fact that the changed
specifications in that check are “restricting the variation on which coefficients are
estimated.” George WDT at 61-62. Despite her acknowledgement that this greater
precision is “useful,”95 Dr. George is willing to tolerate “the point estimates from [her
preferred] baseline model because they make use of more variation in the data while still
precisely estimated.” George WDT at 62.

The Judges understand that the usefulness of this greater precision is that the increased types of fixed
effects limit the variation in the regression to variation caused by the difference in programming category
minutes, whereas Dr. George prefers to obtain additional data points in order to observe more variation,
notwithstanding that relaxing fixed effects in these manners opens the door for bias, in the form of
variations caused by unobserved variables otherwise captured by the fixed effects. The Judges discuss this
tradeoff in greater detail elsewhere in this determination.
1. Criticisms of the George Model
a. Criticisms of the George Model by SDC Expert Witness Dr. Erdem
Beyond his criticisms of the Crawford Model that are derivatively applicable to
Dr. George’s model, Dr. Erdem levies further criticisms of the George Model. He asserts
that although she has altered and reduced the number of fixed effects from the Crawford
Model, her alterations do nothing to redeem her approach. Rather, he notes that Dr.
George’s specifications continue to remain very close to those in versions that Dr.
Crawford ran in the previous proceeding.
But, Dr. Erdem acknowledges that, unlike in the Crawford Model, Dr. George
applies two separate fixed effects for accounting period and system ID, and yet he finds
this to be a difference that fails to rescue her model from the overfitting defects that he
claims to pervade Dr. Crawford’s regression approach. Dr. Erdem also opines that Dr.
George retains some variables from the Crawford Model which lack a “clear basis for
their helpfulness in the model, such as the lag of subscribers (subscribers in the previous
accounting period).” Erdem WRT ¶ 41. Finally, he opines that Dr. George aggravates an
already-present overfitting problem by adding “other variables such as median county
income,” without adequately supporting her decisions. Erdem WRT ¶ 41.
b. Criticisms of the George Model by SDC Expert Witness Dr. Rubinfeld
Dr. Rubinfeld likewise notes that although Dr. George essentially “applied Dr.
Crawford’s specification to the 2014-2017 data,” she “replaced system-period fixed
effects with separate system and period fixed effects [and dropped] [s]ome explanatory
variables . . . . ” But, like Dr. Erdem, he did not find that these alterations salvaged her
model from the defects that, in his opinion, pervade the Crawford Model and, indeed, all
fee-based regressions. Rubinfeld WRT ¶ 94.96

Another SDC Expert, Mr. Sanders, essentially echoes and refers to the critiques by Drs. Erdem and
Rubinfeld. But Mr. Sanders also notes that Dr. George’s approach is remarkable when compared with
other fee-based regressions proffered in this proceeding, in that “the various regressions yield significantly
divergent results which raise[] the questions not just of which ones are wrong but whether any of them
could be right,” and he particularly notes the divergence among the SDC share across the fee-based
regressions. Sanders WRT ¶ 18.
c. Criticisms of the George Model by JSC Expert Witness Mr. Harvey97
Mr. Harvey opines that Dr. George introduced “multicollinearity”98 into her
regression by including “a variable on the independent side of [her] regression equation[]
that controls for the number of distant stations broadcast to the subscriber group.”
Harvey WRT ¶ 170. Mr. Harvey understands that this control variable was likely
introduced “to control for non-compensable broadcast minutes, such as Big-3 minutes,”
but he asserts that the regression should have been specified by “simply includ[ing] the
‘Big-3’ variables . . . achiev[ing] the same stated goal more directly while avoiding
problems of multicollinearity.” Harvey WRT ¶ 174.
There is a formal statistical test to identify multicollinearity called the variance
inflation factor (VIF). Harvey WRT ¶ 176. When he ran the VIF test on the George
Model, Mr. Harvey found meaningful multicollinearity between these variables. Harvey
WRT ¶ 182. Accordingly, Mr. Harvey performed a sensitivity test on the George Model
in which he removed the distant stations and permitted stations variables. Harvey WRT ¶
183. The resultant change in the coefficients for the program categories translated into
revised share allocations that included substantially higher JSC shares, as set forth in the
table below:99

The criticisms of the George WDT by the two other JSC expert witnesses, Drs. Majure and Asker, relate
to broader themes common to the fee-based regressions, discussed separately in this determination. Mr.
Harvey also raises the broad-based criticisms that are discussed separately herein.
For a definition of “multicollinearity,” see 2010-13 Determination at 3562 n.47.

Mr. Harvey also administered two other sensitivities to address this multicollinearity: (1) adding a
control variable for non-compensable minutes to the model and (2) including compensable claimant
minutes in the regression and dropping the number of permitted and distant stations. In both tests, he
reports that the multicollinearity fades, and the share allocations also change, with JSC shares again
increasing compared to the JSC shares in the George Model. Harvey WRT ¶¶ 185-187.
Table 31: George Regression Model Share Estimates
Exclude Distant and Permitted Station Variables

Year                   Educational   Joint Sports   Devotional   Canadian   Commercial TV   Program Suppliers
2014                   7.0%          71.8%          1.1%         10.5%      3.3%            6.4%
2015                   15.5%         18.6%          2.5%         40.6%      4.9%            17.9%
2016                   18.6%         18.7%          1.8%         38.4%      4.9%            17.5%
2017                   21.5%         18.0%          1.5%         38.5%      4.5%            15.9%
2014-2017              12.0%         48.9%          1.4%         22.8%      3.9%            11.0%
% Change in Total
vs Base Model          -67.8%        290.7%         -22.5%       124.9%     -69.0%          -57.2%

Sources:
• Electronic file “programs/208_george_regressions.do”.
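The VIF diagnostic Mr. Harvey describes can be sketched as follows, using the statsmodels implementation (the data and variable names are hypothetical illustrations, not Mr. Harvey’s actual inputs):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
distant_stations = rng.integers(1, 6, size=n)
# Category minutes rise mechanically with the number of distant stations carried,
# which is the source of the collinearity Mr. Harvey identifies.
ps_minutes = distant_stations * 3000 + rng.normal(0, 500, size=n)
jsc_minutes = distant_stations * 200 + rng.normal(0, 100, size=n)

X = pd.DataFrame({
    "distant_stations": distant_stations,
    "ps_minutes": ps_minutes,
    "jsc_minutes": jsc_minutes,
})
X = sm.add_constant(X)

# A VIF well above conventional thresholds (often 5 or 10) flags meaningful multicollinearity.
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 1))
```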

d. Criticisms of the George Model by CTV’s Expert Witnesses Dr. Marx
and Dr. Bennett
CTV’s experts criticize the George Model for the following reasons:
1. Because of the dramatic increase in the number of minimum-fee-only CSOs,
the George Model relies too heavily on royalty payments that do not reflect
the revealed preferences of CSOs. CTV PFF ¶¶ 289, 302 (and record citations
therein).
2. The “pooling” of data to generate common coefficients within each claimant
category skews the share allocations because of the sharp distinction between
2014 and 2015-2017 due to the WGNA conversion. Moreover, the
“precision” generated by lumping all the data points together across these four
years is overhyped, because it is a statistical precision unreflective of reality,
and Dr. George did not perform any statistical tests to confirm that pooling
was appropriate. CTV PFF ¶¶ 331, 334 (and record citations therein); Bennett
WRT, figs.12-13; see also 4/18/23 Tr. 5309, 5366-68 (George).
3. Dr. Bennett unpooled Dr. George’s calculations, revealing the lack of actual
precision compared with her pooled approach. CTV PFF ¶¶ 335-36, 342 (and
record citations therein).

e. Criticisms of the George Model by Program Suppliers’ Expert
Witness Dr. Tyler
Dr. Tyler levied the following criticisms at the George Model:
1. Royalties in any functional form are inferior as the dependent variable
compared with the SGRP in the Tyler Model. PS PFF ¶¶ 351-52 (and record
citations therein).
2. Pooling of data across all four royalty years is distortionary and improper. PS
PFF ¶ 363 (and record citations therein).
3. Dr. George’s reliance on the Crawford Model, without regard to the potential
specification searching that may have marred its genesis, calls into question
the reliability of the George Model. By way of example, Dr. Tyler takes note
of the “hammer-shaped” graphical plotting of residuals in the George Model.
Residuals would typically be random rather than concentrated in a
“hammer-shaped” pattern, so Dr. Tyler views this pattern as indicative of one
or more model specification errors, such as the omission of important
independent variables or improper or mismatched functional forms (e.g., the
misapplication of the linear form or an improper log transformation of data).
PS PFF ¶ 365. (A residual-vs-fitted diagnostic of this kind is sketched below.)
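The sketch below illustrates the residual-versus-fitted diagnostic Dr. Tyler invokes. It is illustrative only, is not drawn from Dr. Tyler’s workpapers, and the variable names are hypothetical.

    import matplotlib.pyplot as plt
    import statsmodels.api as sm

    def plot_residuals(y, X):
        """Fit an OLS regression and plot residuals against fitted values.

        Residuals from a well-specified model should scatter randomly around zero;
        a concentrated ("hammer-shaped") pattern suggests omitted variables or a
        mismatched functional form (e.g., a missing log transformation).
        """
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        plt.scatter(fit.fittedvalues, fit.resid, s=5, alpha=0.4)
        plt.axhline(0.0, linewidth=1)
        plt.xlabel("Fitted values")
        plt.ylabel("Residuals")
        plt.title("Residual diagnostic for possible misspecification")
        plt.show()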
2. The Judges’ Analysis and Findings Regarding the George Model100
The Judges make the following findings with regard to the George Model:
1. The George Model reasonably altered the Crawford Model by estimating a
model with fewer fixed effects, in an attempt to recover observations lost
after the WGNA conversion, balancing precision against an acceptable
increase in omitted variable bias.

The Judges’ analysis and findings in this section are separate and apart from their analysis and findings
on the specific issues considered in separate sections of this determination.
2. The George Model reasonably included control variables in order to isolate
the effect of interest, the correlation between program category minutes and
royalties.
3. Dr. George utilized appropriate sensitivity tests that modified her fixed
effects, which showed a level of robustness in the George Model.
4. But Dr. George’s tolerance for greater bias, in the form of omitted variable
bias, eliminated the benefit that had given the Crawford Model a level of
primary weight vis-à-vis other methodologies for estimating relative
marketplace value.
5. There is no sufficient evidence that the George Model suffers from overfitting,
and her decision to include certain control variables, such as a control for
“median county income,” was a reasonable exercise of discretion that an
econometrician could make in specifying her model.
6. The George Model reasonably utilized the Big 3 network minutes as a
reference category (a/k/a numeraire or index). Contrary to Mr. Harvey’s
critique, this choice was unrelated to the separate control in the George
Model for the number of distant stations, which was included in order to
avoid a source of changes in the number of minutes that would bias the
relationship between program category minutes and royalties, the “effect”
the regression was seeking to evaluate.
7. The pooling of all four years over the 2014-2017 period in the George Model
was inappropriate, given the substantial break in market conduct created by
the WGNA conversion commencing in 2015.
8. Dr. Bennett’s recalculation of an unpooled version of the George Model is a
more probative model.

9. Dr. Bennett’s further revision of the George Model, correcting for Dr.
George’s admitted mis-categorization of JSC programming, is more accurate
than the George Model as originally proffered.
10. The non-random (hammer-shaped) residuals in the George Model are
suggestive of omitted variables or misspecification of functional form, as in
the Crawford Model upon which the George Model is predicated, and appear
to be examples of the problems that may have arisen because of Dr.
Crawford’s alleged specification search.
E. PTV’s Regression Approach: The Johnson Model
Dr. Johnson, PTV’s expert witness,101 constructed a fee-based regression model
based on the framework of a “Waldfogel-type” regression. Johnson WDT ¶ 55. He also
acknowledges that he reviewed Dr. Crawford’s testimony from the 2010-13 proceeding,
and that his model “generally follows the framework used by [Dr.] Crawford” and,
parenthetically, he notes a general consistency with the model proffered by Dr. Joel
Waldfogel in a prior proceeding. Johnson WDT ¶ 57. See also 3/21/23 Tr. 367-68
(“[T]he starting point . . . was to look at the prior work, particularly [Dr.] Crawford's
Waldfogel-type regression model that was adopted in the prior proceeding. . . . However,
I did not, and my assignment was not to just simply blindly accept Dr. Crawford's work,
but to put it to the test, understand what it did, understand how it worked, and then build
that model and determine whether it could apply here.”).102

Dr. Johnson was received as an expert in “economics and econometrics.” 3/21/23 Tr. 362 (Johnson).

However, Dr. Johnson testified that he did not review – or even have access to – Dr. Crawford’s
underlying regression workpapers from the 2010-13 satellite allocation proceeding (regarding the same
regression model as in the 2010-13 cable allocation proceeding), even though PTV’s counsel had received
those workpapers in voluntary disclosures made by the SDC. 3/21/23 Tr. 340-41 (Johnson). (The hearing
record does not indicate whether or not PTV’s counsel provided those workpapers to Dr. Johnson.) See
also 3/21/23 Tr. 617 (Johnson) (Dr. Johnson acknowledging that he also never saw designated testimony
filed in the present proceeding by the SDC comprising their experts’ testimony in the satellite proceeding,
with Dr. Crawford’s documents attached).
Dr. Johnson also “assessed the Judges’ deliberation from the previous
proceeding,” and “address[ed] econometric modeling concerns . . . raised by the Judges
in the previous proceeding [and] changes in the industry from the 2010-2013 to the 2014-2017 period.” Johnson WDT ¶ 57.
Dr. Johnson identifies the following aspects of his regression model:
1. The regression analyzes each subscriber group in each six-month accounting
period.
2. The dependent variable is the “natural log” of the base royalties accrued by a
CSO for each subscriber group in an accounting period.
3. The explanatory variables include – as the variable of interest – the number of
minutes of each claimant group’s programming content distantly retransmitted
to that subscriber group in that accounting period.
4. The coefficients for this explanatory variable for each claimant group’s
content estimate the percentage change in base royalties (the dependent
variable) associated with an additional minute of that type of content.
5. The control variables below:
a. A control for the number of subscribers in each subscriber group and
accounting period, because, “[in] addition to being driven by CSOs’
distant retransmission decisions, royalties paid also increase with the
number of subscribers (and associated gross receipts) in each subscriber
group.” By adding a control variable for the number of subscribers, the
regression accounts for this relationship.
b. A control for the number of distant broadcast stations retransmitted by
each CSO to its subscriber groups because it “creates a ‘control group’
against which the relative marketplace valuations for each claimant group

at issue are estimated[,]” with this control group consisting of
“programming that is either ‘off-air,’ ‘Big 3’ network programming that is
not compensable or associated to any relevant claimant group, or content
for which program information was not specified in the data, including
‘To Be Announced’ programs.”
c. An indicator variable for CSOs that paid the minimum fee, in order to
account for the possibility that decision-making is systematically different
between CSOs that paid the minimum fee (i.e., those that potentially could
have retransmitted distant signals without experiencing an increase in their
royalty payment) and CSOs that paid royalties above the minimum fee
(and thus, would have faced an incremental cost to any additional distant
signal). This indicator variable does not separate out the model’s reported
coefficients, but “allows [the] model” to generate information “to account
for these differences . . . . ”
d. An indicator variable distinguishing between subscriber groups that also
generated 3.75 fees (in addition to the base fee payments included in the
regression) and subscriber groups that did not generate 3.75 fees.
Johnson WDT ¶¶ 55-56.103
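In stylized notation (offered here for illustration only and not drawn from Dr. Johnson’s testimony), the specification enumerated above amounts to a log-linear regression at the subscriber-group and accounting-period level:

    \ln(\text{BaseRoyalty}_{g,t}) = \alpha + \sum_{c} \beta_c \,\text{Minutes}_{c,g,t} + \gamma\,\text{Subscribers}_{g,t} + \delta\,\text{DistantStations}_{g,t} + \theta\,\text{MinFeeOnly}_{g,t} + \lambda\,\text{Paid375}_{g,t} + \varepsilon_{g,t}

where g indexes subscriber groups, t indexes six-month accounting periods, c indexes the six claimant categories, and each \beta_c is read, approximately, as the percentage change in base royalties associated with one additional distantly retransmitted minute of category-c programming, all else held constant.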
Dr. Johnson also emphasizes what he has omitted from his regression model that
had been included in Dr. Crawford’s model. First, Dr. Johnson omits a set of controls in
the form of “system-accounting period fixed effects.” Although Dr. Johnson
acknowledged that these fixed effects had attempted to establish a relative value unbiased
by factors irrelevant to the correlation of interest (the effect of programming minutes on
the log of royalties) by isolating and comparing variation only in “a given CSO’s

Note that the Johnson Model includes far fewer control variables than the George Model. See text
following this footnote.
retransmission decisions across its subscriber groups,” Dr. Johnson wanted to address the
Judges’ statement in the 2010-13 Determination that they were “troubled” by Dr.
Crawford’s inadequate response to the argument that these controls “effectively
discarded” approximately 15% of his observations [generated by] “approximately half of
all systems in his data set . . . . ” Johnson WDT ¶ 59. Dr. Johnson claimed that the same
issue exists to a greater extent in the present proceeding, because “49 percent of CSOs
that retransmitted at least one distant signal reported only one subscriber group,” thus
excluding them from the regression through the inclusion of these “system-accounting
period fixed effects.” Johnson WDT ¶ 59.104
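By way of illustration (the notation is hypothetical, not Dr. Crawford’s or Dr. Johnson’s), the omitted “system-accounting period fixed effects” amount to replacing the single intercept with a separate intercept for every CSO/accounting-period pair:

    \ln(\text{BaseRoyalty}_{i,g,t}) = \alpha_{i,t} + \sum_{c} \beta_c\,\text{Minutes}_{c,i,g,t} + \cdots + \varepsilon_{i,g,t}

where i indexes CSOs. Because \alpha_{i,t} absorbs everything common to CSO i in accounting period t, the \beta_c coefficients are identified only from variation across subscriber groups within the same CSO and period; a CSO reporting a single subscriber group in a period contributes no such variation and effectively drops out, which is the loss of observations Dr. Johnson sought to avoid.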
Second, Dr. Johnson also omits from his regression several so-called “lagged”
variables included by Dr. Crawford, because these “lagged” variables “assume[] that
outcomes from an earlier point in time affect outcomes in the present time.” Johnson
WDT ¶ 59 & n.84. Whatever merit lay in these lagged variables was a moot point for Dr.
Johnson, because he found that the available data was insufficient to measure this
“lagged” effect, and because the data did not allow for subscriber groups to be
“consistently tracked over time” (due to, most notably, the WGNA conversion and
the cable system acquisitions by Charter Communications). More particularly, and by
way of example, Dr. Johnson explained that there was insufficient data to construct a
“prior period” for the first six-month period of 2014, which (if he had retained the lagged

To be clear, in the 2010-13 proceeding, the Judges found that Dr. Crawford’s use of these fixed effects
and other controls did not “diminish the Judges’ reliance on Professor Crawford’s regression analysis.”
More particularly, the Judges explained that Dr. Crawford’s use of “system-accounting period fixed
effects” was the “result of a tradeoff,” necessitated by Dr. Crawford’s use of a “subscriber group analysis
[which] reduced the number of observations in [Dr.] Crawford’s data set.” Although this decision could
result in an “overfitting” of the model (see 2010-13 Determination at 3565 defining “overfitting”), his use
of data from the entire population of Form 3 CSOs provided him with a wealth of data that mitigated the
potential problem of overfitting arising from sampling that provided too little data relative to the number
of parameters. 2010-13 Determination at 3566-67 & n.65. The Judges discuss
elsewhere in this determination the impact of the decision by Dr. Johnson (and Dr. George) to make a
different trade-off in their regression models through their handling of this specific fixed effects issue,
particularly in the context of the purpose of these fee-based regressions as “explanatory” of an isolated
“effect,” rather than “predictive” of the total royalties paid.
subscriber variable) would have “effectively discard[ed] data on CSO distant
retransmission decisions [for] about one-eighth of all data.” Johnson WDT ¶ 59.105
Further, Dr. Johnson excluded from his model the following additional controls
included by Dr. Crawford in his model, which Dr. Johnson found to be “redundant or
inappropriate . . . [and] also hind[rances] to the model’s ability to perform the task at
hand”:106
1. A control for county-level median income, which Dr. Crawford had included
to account for variation in demand for cable services by impacting the number
of subscribers, the total CSO revenue and, accordingly, “the royalty paid by
that CSO.” Dr. Johnson omitted this control because he found it to be
redundant and confounding, in that it seeks to control for the number of
subscribers, which is already included in the model at the more informative
subscriber group level. This subscriber count at the subscriber group level,
according to Dr. Johnson, implicitly takes account of variations in
demand and the impact of relatively different values in high-demand areas.
2. Controls for the number of local stations and the (lagged) number of activated
channels. Although Dr. Crawford opined that these controls would have the
salutary effect of “account[ing] for other features of the cable service on
which distant signals may be offered which could influence the number of
subscribers to that service,” Dr. Johnson found these controls unnecessary and
potentially problematic because (1) Dr. Crawford did not explain how the
second of these controls, i.e., the number of local and “activated” channels

That is, if the lagged variable control was included despite the unavailability of data for the second
accounting period of 2013, the model would not have generated results in a consistent manner for the first
accounting period of 2014, and one accounting period reflects 1/8 of the eight six-month accounting
periods in the four-year 2014-2017 period.
Dr. Johnson also discarded controls from the Crawford Model “for whether a CSO lies in the area where
it is permissible to carry Canadian signals (“Canada zone”).” The Judges consider the Canada zone issues
separately, infra.
would impact CSOs’ decision-making process with respect to distant channels
and (2) as proffered proxies for factors that might “influence the number of
subscribers,” they too are redundant and potentially confounding, given the
presence in the regression model of a direct control for the number of
subscribers.
3. Controls for the six largest MSOs, which Dr. Crawford included “to capture
potential differences in factors not included in the econometric model that
could shift demand for bundles that include imported distant broadcast
signals.” Dr. Johnson notes that Dr. Crawford provided no explanation as to
what “factors” these controls were intended to reflect, and Dr. Johnson asserts
that these controls are redundant and potentially confounding. Dr. Johnson
avers that potential differences between and among the six largest MSOs
“could shift demand,” and thus “[r]eflect[] valuable information for the
model’s estimation of relative value.”
Johnson WDT ¶ 60.
Dr. Johnson further explains that his regression (like the regressions of Dr.
George and Dr. Crawford, and the 2014 Bayesian regression by Dr. Marx) calculated the
relative coefficients for the six compensable program categories by relating them to a
“control group” of program minutes that are “non-compensable” in section 111
proceedings. Specifically, Dr. Johnson testified:
The number of distant broadcast stations [compensable and noncompensable] retransmitted by each CSO represents the universe of that
CSO’s distantly retransmitted content. . . . [T]he difference between the
universe of content and that corresponding to the claimant groups at issue
is content that does not correspond to any claimant group at issue. This nonclaimant content “control group” is a mix of programming that is either
“off-air,” “Big 3” network programming that is not compensable or
associated to any relevant claimant group, or content for which program
information was not specified in the data, including “To Be Announced”
programs. [The] model is specified in a way that allows for the “control
group” content to have absolute value to subscribers (and thus to cable

operators), even if it is not compensable in this proceeding. However, using
this content as a control group allows my model to estimate relative
valuations for the compensable claimant groups.
Johnson WDT ¶ 55 n.76.107
Utilizing the foregoing inputs, Dr. Johnson calculates regression coefficients
estimated by his model, as well as the associated standard errors. Johnson WDT fig.11.
In words, Dr. Johnson helpfully describes these coefficients, which are the common
output of fee-based regressions, as
measur[ing] the percent change in royalties associated with an additional
minute of each claimant’s programming, after controlling for the other
relevant factors present in the regression [and] represent[ing] the relative
value of each claimant group’s content on a per-minute basis.
Johnson WDT ¶ 61. Dr. Johnson, in the model he recommends (his “baseline” model),
and like Dr. George and Dr. Crawford – but unlike Dr. Tyler – did not generate separate
coefficients for each of the four years. Crawford WDT fig.14. (However, Dr. Johnson
did an annualized break-out as well. See 3/21/23 Tr. 467-68 (Johnson).)
Dr. Johnson reports that the estimated regression coefficients in his preferred
“baseline” model “are all statistically significant, at the 99 percent level or higher.”
Johnson WDT ¶ 62. In lay terms, he again helpfully explains that this level of statistical
significance means that “given the data analyzed, [the] regression can reject with 99
percent (or higher) certainty the hypothesis that an additional minute of programming of
each of the claimant groups has no effect on royalties.” Johnson WDT ¶ 62. According
to Dr. Johnson, his regression can estimate coefficient value with this high level of
“precision” because the model is based on “over 18,000 subscriber group-level
observations . . . . ” Johnson WDT ¶ 62.

This “control group” is alternatively denominated by the experts in this proceeding as a “numeraire,” a
“reference group,” and a “benchmark.” The Judges discuss the use of this device to establish coefficients in
their Analysis, infra.
Next, Dr. Johnson uses these coefficient values to generate his estimated royalty
shares, in dollars, a step undertaken in all fee-based regressions. Specifically, and as in
regressions proffered in previous proceedings and in this case, he multiplies the
coefficient by the total number of compensable minutes for the respective program
category. This product generates the shares of base royalties associated with each
claimant group in each year. Johnson WDT ¶ 63.
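Stated in stylized form, and assuming (as the reported figures imply) that the products are normalized to sum to one within each year, the share calculation is:

    \text{Share}_c = \frac{\hat{\beta}_c \times \text{CompensableMinutes}_c}{\sum_{k} \hat{\beta}_k \times \text{CompensableMinutes}_k}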
In the figure below, Dr. Johnson presents the implied shares of the Basic Fund
royalty, but excluding the 3.75 Fund and the Syndex Fund royalties that can also accrue
to one or more of the six claimant groups:
FIGURE 13
IMPLIED BASIC FUND ROYALTY SHARES
BASELINE MODEL
2014-2017

Claimant                   2014      2015      2016      2017      2014-2017
  [a]                       [b]       [c]       [d]       [e]         [f]
Public Television          35.9%     46.2%     53.4%     58.9%       48.5%
Joint Sports               17.1%      2.4%      1.8%      1.7%        5.8%
Devotional Programs         0.9%      0.8%      0.7%      0.6%        0.7%
Canadian Claimants          4.2%      7.8%      6.8%      6.3%        6.3%
Commercial Television      16.1%      9.1%      8.2%      7.2%       10.2%
Program Suppliers          25.8%     33.7%     29.0%     25.3%       28.5%

Sources: CDC Royalties Data; CRTC Program Logs; Red Bee Data.

Johnson WDT ¶ 67 fig.13.
Dr. Johnson explains why the implied relative share values are starkly different:
[A]lthough the relative value of a minute of [JSC] content, on average, is
typically larger than that of other content types, the quantity of compensable
[JSC] content is relatively small (and decreased substantially after the WGN
conversion). As a result, the implied royalty share for Sports claimants is
smaller than . . . for . . . Program Suppliers, which had a lower per-minute

value but much more distantly retransmitted content during the relevant
period.
Johnson WDT ¶ 66.
In addition to his foregoing proffered regression model, Dr. Johnson performed
what he described as a sensitivity analysis, to test the robustness of that model against
alternative specifications and to assess the “key drivers” of the results of his model.
Johnson WDT ¶ 68.108 Specifically, Dr. Johnson conducted two such analytical tests.
First, he looked at the subset of CSOs from his proffered model that only “paid
above the minimum fee.” Johnson WDT ¶ 68. His purpose in performing this test was to
address the concern in the 2010-13 Determination that the “carriage decisions of CSOs . .
. pay[ing] minimum fees [were] ‘potentially less informative than discretionary decisions
by CSOs to incur an additional royalty expense in order to distantly retransmit particular
stations.’” Johnson WDT ¶ 68 (citing 2010-13 Determination at 3575). This first
sensitivity test, according to Dr. Johnson, found “positive relative valuations” for the
coefficients of all six claimant categories, although the valuations were “not statistically
significant” for the JSC and SDC content. Johnson WDT ¶ 69 fig.14, cols. [a]-[c]; and
app. K.109 Apparently focusing on the absence of statistical significance for the JSC and
SDC content, Dr. Johnson concludes that this sensitivity test shows the appropriateness –
indeed, the “importance” – of his proffered model’s inclusion of “CSOs that paid
minimum fees,” because exclusion of such CSOs “would cause the model to lose
precision with respect to” the JSC and SDC claimant content. Johnson WDT ¶ 69. In

Dr. Johnson also asserted that he performed two “other sensitivities,” on missing CCG programming
data and program descriptions that were ambiguous as to the claimant category to which they belonged,
respectively. Johnson WDT ¶ 49 n.64 & ¶ 50 n.68. But although he tried to categorize these tests in this
manner, by his own acknowledgement, the “purpose of those tests [was to] assess[] the effects of different
approaches to treating the imperfections in the available data.” Johnson WDT ¶ 68 n.102.
Dr. Johnson does not report share allocations for minimum-fee-only CSOs in his WDT. However, in
response to criticism of his direct testimony, Dr. Johnson included in his WRT figures showing a close
relationship between: (a) the allocation shares based on the subscriber group Base Fees calculated (but not
paid) by these minimum-fee-only CSOs on an annualized (unpooled) basis for 2014-2017; and (b) the
allocation shares in his proffered baseline model (presented on an unpooled basis) for all CSOs considered
in his analysis. Johnson WRT app. D, figs.D-6 and D-7.
further support of his interpretation of this sensitivity test’s results, Dr. Johnson adds that
CSOs paying only the minimum fee nonetheless “still make affirmative distant
retransmission decisions that can be informative about the relative value of content.”
Johnson WDT ¶ 69 & n.103.
In his second sensitivity/robustness analysis, referred to supra, Dr. Johnson
“allow[s] the coefficients to vary from year to year.” Johnson WDT ¶ 68; see also id. at
fig.14, cols. [a], [d]-[g]. He opines that this analysis “indicates . . . there is a statistically
significant difference” in the coefficient values between 2014 and 2015-2017 for JSC
program content. Johnson WRT ¶ 121 fig. K-3 (notes).
According to Dr. Johnson, this second sensitivity test shows the following:
1. Relative marketplace values for the PTV, SDC, CCG, CTV and Program
Suppliers claimant categories were not statistically different across the 2014 to
2017 period.

2. However, the relative marketplace value of JSC content significantly declined
from 2014, when WGNA was the most distantly retransmitted signal
(broadcasting high volumes of MLB, NBA, and NFL game content), to the
2015-2017 period, after WGN converted to a cable network, and the volume
of such games was concomitantly significantly reduced.110
3. This second sensitivity test demonstrates that Dr. Johnson’s proffered baseline
model has “appropriately captur[ed]” the declining value of JSC content in the
average “over the entire 2014-2017 period . . . . ”
Johnson WDT ¶¶ 70-71; see also id., fig.15.

The coefficient for JSC content in the 2015-2017 period remained high, but was not statistically
significant. Johnson WDT ¶ 70 & fig.14.
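In illustrative notation (again, not Dr. Johnson’s own), this second sensitivity replaces the single pooled coefficient for each category with year-specific coefficients:

    \ln(\text{BaseRoyalty}_{g,t}) = \alpha + \sum_{c} \sum_{y=2014}^{2017} \beta_{c,y}\,\mathbf{1}[\text{year}(t)=y]\,\text{Minutes}_{c,g,t} + \cdots + \varepsilon_{g,t}

so that, for example, \hat{\beta}_{JSC,2014} can be compared directly with \hat{\beta}_{JSC,2015}, which is how a statistically significant difference between the 2014 and 2015-2017 JSC coefficients can be assessed.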
1. Criticisms of the Johnson Model111
a. Criticisms of the Johnson Model by CCG Expert Witness Dr. George
Dr. George levies the following criticisms of the Johnson Model:
1. The Johnson Model produces biased results because it excludes 3.75% fees,
failing therefore to reflect the full willingness-to-pay of all claimant
categories, either in the base fee or the separate 3.75% calculations made by
Dr. Johnson. George WRT at 23-24.
2. The Johnson Model is “subject to bias from unobserved market characteristics
and time trends” because Dr. Johnson abandoned all system effects and
accounting-period effects, whether separately considered (as in the George
Model) or interacted (as in the Crawford Model), without appropriately
considering how that abandonment would likely generate omitted variable
bias. The omitted variables risk inclusion of bias regarding variations in
programming. Moreover, Dr. Johnson misconstrued the 2010-13
Determination as justification for this error. George WRT at 24-25.
3. Dr. Johnson’s substitution of “contemporaneous” for “lagged” subscribers
“undermines causal inference” because “[l]agged control variables . . .
common in applied regression . . . minimize the potential for unobserved
shocks [that can] bias coefficients . . . such as the acquisition of a cable system
by a large MSO . . . . ” Further, the lagged subscriber input has been used in
fee-based regressions since Dr. Waldfogel’s regression in the 2004-05
proceeding and Dr. Johnson wrongly claims that “lagged subscriber” data was

An overarching procedural critique of the manner in which Dr. Johnson generated his model – alleging
that he engaged in improper econometric activities, in the form of what is known as “specification
searching” and its related questionable activities, “data mining” and “p-hacking” (George WRT) – is
separately discussed elsewhere in this determination.
unavailable, because they are readily available from Cable Data Corporation.
George WRT at 26-27.
4. The Johnson Model excludes controls – included in past proceedings – for
unobservable factors that undermine causal interpretation, specifically
excluding controls for market income, the number of local stations offered,
and MSO ownership of CSOs. Dr. Johnson fails to recognize that “these
controls establish the ‘all else equal’ conditions that allow coefficient
estimates to take a causal interpretation as value per minute.”112 Because “it is
not possible to express, let alone control for, all the factors that vary across
cable systems,” the econometrician must judiciously use control variables
(and fixed effects, discussed supra), or otherwise bear “the burden . . . to
justify why coefficients are not absorbing the effects of omitted variables and
warrant the desired causal interpretation.” George WRT at 27-30.
b. Criticisms of the Johnson Model by CTV Expert Witness Dr. Bennett
Dr. Bennett lodges the following criticisms specific to the Johnson Model:
1. The base fees and the 3.75% Fees reported by CSOs are decoupled from each
other and are often less than the CSOs’ actual royalty payments. Bennett
WRT ¶¶ 66-69, figs.24-26. This is problematic because CSO carriage
decisions underlying the base fees and the 3.75% fees are “inextricably
linked,” in that the cost factor in the decision whether to add a station is based
on the total royalty cost, which includes both the (1) the base fee or minimum
fee, as applicable, and (2) the 3.75% fee. But by treating the two royalty
funds separately, the Johnson Model materially increases PTV’s overall share,

For example, Dr. George notes that FCC data indicates that cable subscription prices (and thus royalties)
are lower in less wealthy markets. Likewise, Dr. Crawford showed in 2010–2013 that “top MSO’s earned
higher revenues per subscriber than other systems, suggesting that large MSO’s are able to charge higher
prices for cable packages.” George WRT at 28.
compared to what it would be if the two royalty funds were jointly considered.
Bennett WRT ¶¶ 74-78, figs.27-28.
2. Dr. Johnson provides no basis for extrapolating from the subset of Subscriber
Groups with positive Base Rate fees to the broader royalty pool. Bennett
WRT ¶¶ 70-73.
3. The Johnson Model excludes fixed effects, which means that his regressions
do not account for omitted variable bias. But Dr. Johnson introduces the risk
of such bias based on a trumped-up concern that the Judges noted in the 2010-13 Determination but which had no impact. Moreover, the resulting bias in
the regression coefficient is caused by eliminating fixed effects that would
have impacted royalties but were unrelated to program category minutes, for
example, where different CSOs charge different subscription prices because of
differences in the number of specialty channels they provide in their basic
service. Similar omitted variables arise when fixed effects are eliminated
because of uncontrolled differences in subscription revenue (and thus section
111 royalties) between and within MSOs. Bennett WRT ¶¶ 79-89, figs.29-35.
4. Dr. Johnson’s decision to eliminate fixed effects was particularly puzzling,
because he had endorsed Dr. Crawford’s “regression framework” as
“appropriate” for present purposes and acknowledges that he “generally
follows the framework used by [Dr.] Crawford.” Nonetheless, he eliminated
Dr. Crawford’s fixed effects, inflating PTV’s shares as reported in the
Johnson WDT. See Bennett WRT ¶¶ 90-92, figs.36-37.
c. Criticisms of the Johnson Model by CTV Expert Witness Dr. Marx
Dr. Marx essentially echoes the criticisms of Dr. Bennett with regard to Dr.
Johnson’s allegedly improper removal of fixed effects from the regression. She
emphasizes that Dr. Johnson did not appear to test or evaluate the size or direction of the

bias created by eliminating fixed effects, even for 2014, which was “a year that in most
significant respects was similar to 2010–2013, which is the time period for which the
Judges found the Crawford regression with fixed effects to be ‘highly useful.’” Marx
WRT ¶ 39.
d. Criticisms of the Johnson Model by Program Suppliers Expert
Witness Dr. Tyler
Dr. Tyler does not raise any specific criticisms of the Johnson Model. Rather, he
criticizes it in the same way he criticizes all the other regressions that use a form of
royalties as the dependent variable (as explained supra, in the Judges’ summary of Dr.
Tyler’s advocacy for the model he has proffered in this proceeding). See Tyler WRT ¶
29. To summarize, Dr. Tyler rebutted the Johnson Model by asserting the following:
1. The Johnson Model needed to avoid the substantial degree of variability,
causing a loss of observations.
2. The Johnson Model, like the George Model, “guesses” at the number of
subscribers in each Subscriber Group, introducing potential bias into the
regression.
3. The Johnson Model, like the George Model, has “hammer-shaped” residuals,
which indicate that a regression is misspecified.
See Tyler WRT ¶¶ 29-55.
e. Criticisms of the Johnson Model by JSC Expert Witnesses Dr. Asker,
Dr. Majure, and Mr. Harvey
JSC’s several expert economic witnesses levy the following criticisms at the
Johnson Model:
1. The Johnson Model (like the George Model) improperly engages in the
pooling of data across the 2014-2017 period to estimate a single coefficient

for each program category. According to the JSC economic witnesses, such
pooling generally results in “unreliable” coefficients and, specifically, led in
this case to an underestimation of JSC’s 2014 share. More particularly, three
JSC experts testified as follows:
a. Dr. Asker testified that “there was a significant change in behavior
following the conversion of WGNA in 2015. . . . To adopt a specification
that doesn’t recognize that change and then allow the regression to adjust .
. . is a considerable flaw.” 3/30/23 Tr. 2431 (Asker); see also Asker WRT
¶ 103.
b. Dr. Majure testified that “[t]he data are very different between these two
periods, reflecting changes in distant signal carriage patterns from the exit
of WGNA. Given the differences in the data, it is important to run separate
regressions on the different time periods.” Majure WRT ¶ 38.
c. According to Mr. Harvey, the Johnson Model estimates that JSC went
from the highest per minute value in 2014 to the lowest in 2015-2017 and,
moreover, CSOs would pay less for a minute of JSC content during 2015-2017 than for a minute of any of the other claimant categories. Harvey
WRT ¶ 37 tbl.5; 3/28/23 Tr. 1883-87, 1889-90 (Harvey).
d. Mr. Harvey further testified that for the 2015-2017 period data alone,
using the Johnson Model (and the George Model) generated JSC sports
coefficients that were not statistically significant and, according to Mr.
Harvey, were thus unreliable in that the data implied that JSC
programming had no value in those years. Harvey WRT ¶¶ 37-38 & tbl.5.
e. Mr. Harvey calculated that when a 2015-17 coefficient is estimated
only for systems paying more than the minimum fee, the Johnson Model

then estimates a statistically significant negative coefficient for JSC
content. Harvey WRT ¶ 38 & tbl.6; 3/28/23 Tr. 1895-96 (Harvey).
2. The Johnson Model lacks “robustness” and is “unstable.” According to Mr.
Harvey, these defects are evidence that Dr. Johnson had engaged in a
specification search (discussed elsewhere in this Determination). But Mr.
Harvey asserts that even if Dr. Johnson had not engaged in an intentional
specification search, his many specifications generated results that evidenced
the lack of robustness and stability. 3/28/23 Tr. 2091 (Harvey); Harvey WRT
¶ 155 & tbl.26; see also JSC PFF ¶ 196.

3. Reiterating a criticism rejected in the 2010-13 Determination, the Johnson
Model (like the George Model) wrongly utilizes a log-linear specification,
with the dependent variable (royalties) expressed in log form and the
subscriber count variable expressed in linear form. Harvey WRT ¶ 170;
3/28/23 Tr. 1965-66 (Harvey).
4. The Johnson Model wrongly omits fixed effects (as also noted by other
witnesses, discussed supra). According to Mr. Harvey, applying the fixed
effects contained in the George Model triples Dr. Johnson’s estimate of the
JSC share. Harvey WRT ¶ 111 & tbl.5.
f. Criticisms of the Johnson Model by SDC Expert Witness Dr.
Erdem113
Dr. Erdem levies the following criticisms at the Johnson Model:
1. The specification in the Johnson Model (i.e., Dr. Johnson’s preferred
“baseline” model) is but “a stripped-down version” of the fatally flawed

In addition to the specific criticisms by Dr. Erdem of the particulars of the Johnson Model, Dr. Erdem
criticizes Dr. Johnson for engaging in the improper process of specification searching (also described as
“data mining” and “p-hacking”). The Judges consider that issue separately in this Determination.
Crawford Model, shorn of “numerous control variables such as MSO
indicators and the lag of subscribers and . . . fixed effects . . . . ” Erdem WRT
¶ 42.
2. When the Johnson Model’s regression was run “using the CCG data that Dr.
George used for her regressions . . . PTV shares decreased by eight points
[and] [e]very other claimant . . . had their implied shares . . . with JS[C]
[gaining] a five-point increase in shares.” This allegedly indicated that “[t]he
processed data that PTV used for their regression was clearly made to benefit
their shares . . . . ” Erdem WRT ¶¶ 98-99.
3. All of Dr. Erdem’s sensitivity tests showed a similar tendency, i.e., compared
to the Johnson Model, “all the sensitivities . . . [gave] PTV lower implied
shares.” Erdem WRT ¶ 101.
2. The Judges’ Analysis and Findings Regarding the Johnson Model114
The Judges make the following findings with regard to the Johnson Model:
1. Although Dr. Johnson used the Crawford Model as his “starting point,” he
made changes to the Crawford Model.
2. A major change Dr. Johnson made to the Crawford Model was to eliminate all
“fixed effects” in the Johnson Model.
3. By removing all “fixed effects,” Dr. Johnson altered the Crawford Model by
eliminating the protection against “omitted variable bias.” That is, Dr.
Johnson failed to capture the effects of differences among systems (CSOs)
and across accounting periods that impacted the dependent variable in the
Johnson Model, i.e., the log of royalties. The absence of these “fixed effects”

The Judges’ analysis and findings in this section are separate and apart from their analysis and findings
on the specific issues considered in separate sections of this determination.
therefore significantly reduces the evidentiary usefulness of the
Johnson Model.115
4. A purpose in Dr. Johnson’s removal of “fixed effects” from his regression
model was to generate what he understood to be a sufficient number of
observations of CSO decisions regarding program category retransmittal
decisions (through their retransmitted channel selections) to generate the
variation needed for a useful regression. These additional observations were
required because, after the WGNA conversion, there was a significant
reduction in the number of CSOs with two or more subscriber groups,
reducing the variation created by the “fixed effects” control in the Crawford
Model. But, as Dr. Marx, for example, has explained, this attempt at greater
“precision” came at the unacceptable expense of the generation of “omitted
variable bias” discussed above.
5. Dr. Johnson’s further claim – that he eliminated “fixed effects” in response to
a statement in the 2010-13 Determination that the Judges were troubled by the
resulting loss of 15% of the otherwise observable CSO decisions – is a red
herring. The Judges in the 2010-13 Determination did not rely on the loss of
such observations as a basis for diminishing the evidentiary weight of the
Crawford Model. And regardless, if the lost number of observations increased
in the present proceeding because of the aforementioned reduction in useful
subscriber groups, the more appropriate response was not to inject “omitted
variable bias” into the regression, but rather to utilize other approaches (as, for
example, in the Tyler Model).

The irony of this criticism is that Dr. Johnson relied on the Crawford Model as a “starting point” for his
modeling, deemphasizing the need to develop an independent economic theory, and ignored the potential
specification searching in Dr. Crawford’s modeling, but removed the feature of the Crawford Model
(“fixed effects”) that was the positive basis for the Judges’ elevation of the regression approach to a
position of evidentiary primacy in the 2010-13 Determination.
6. Dr. Johnson’s inclusion in his regressions of data regarding the programming
decisions of the vast majority of CSOs paying no more than the minimum fee
significantly reduces the evidentiary weight of the Johnson Model for the
three-year 2015-2017 period. (This finding of course also applies to the
George and Tyler Models.) These decisions did not reveal the CSOs’ preferences
in a cardinal manner, that is, these CSOs did not reflect relative values
because their choices did not affect the actual fees paid. At most, their
decisions reflected ordinal values, in terms of which program categories they
valued more than others, but not how much more, which is necessary for the
distribution of the royalty fund.116
7. But Dr. Johnson properly relied on the data relating to the subset of CSOs in
his model that only paid above the minimum fee. The Judges credit that data
as reflective of actual economic decision-making that is useful in determining
the allocation shares in this proceeding. This cohort of CSOs can properly be
viewed as essentially the only CSOs who provide revealed preference
information as to the variation in relative values among the program
categories (in contrast with CSOs who did not retransmit any distant local
stations or those with “excess capacity”), which in that sense is a cohort unto
itself, rather than a sub-sample. On the other hand, this cohort can also
reasonably be viewed as but a small sample of all the CSOs, which reduces the
evidentiary weight of their preferences. Both perspectives on the revealed
preferences of these above-minimum-fee-paying CSOs are properly
considered in weighting the various strands of useful evidence in order to
allocate royalty shares in this proceeding.

In the 2010-13 Determination by contrast, as Dr. Marx has explained, the Judges found there was a
sufficiently high percentage of CSOs paying above the minimum fee and thus making decisions with an
economic (royalty) impact that served as a strong evidentiary basis for allocating shares.
8. The probative value of the Johnson Model is incomplete and thus weakened,
because it excludes the 3.75% fees paid by most of the claimants, thus not
reflecting the full willingness-to-pay of all claimant categories. Further, Dr.
Johnson’s separation of the basic royalty fund and the 3.75% royalty fund
materially increased PTV’s overall share.
9. The probative value of the Johnson Model is weakened because it wrongly
substitutes “contemporaneous” for “lagged” subscribers. This substitution is
incorrect because: (a) lagged controls minimize the subsequent impact of
potential unobserved factors such as the acquisition of a CSO by a large MSO;
(b) “lagged” subscribers have been used since the Waldfogel regression in the 2004-05 proceeding; and (c) contrary to Dr. Johnson’s assertion, “lagged
subscriber” data was available from Cable Data Corporation, the source of
much of the data utilized in the regressions proffered in this and prior
allocation proceedings.
10. The probative value of the Johnson Model is weakened because its omission
of certain control variables lessens its ability to identify the causal
interpretation of interest, i.e., the correlation between program category
minutes and the log of royalties. Specifically, the evidentiary weight of the
Johnson Model is compromised by its exclusion of control variables for
market income, the number of local stations offered and MSO ownership of
CSOs. In this regard, Dr. Johnson has essentially ignored the 2010-13
Determination which explains at length why the inclusion of an MSO control
variable is necessary. 2010-13 Determination at 3566-67 (describing
“differences . . . among the six largest MSOs in terms of their average receipts
per subscriber . . . . suggest[ing] . . . important differences . . . regarding their
signal carriage strategies, pricing, and other relevant dimensions,” and

contrasting “a regression without the six MSO Interaction variables [where]
unobserved differences in average revenue per subscriber could bias estimates
of relative value of different programming.”).117
11. The Johnson Model improperly “pools” data across the 2014-2017 period to
estimate a single coefficient for each program category. Although “pooling”
in this manner is not inherently improper in these allocation proceedings,
when there is a sharp demarcation in the relevant data, as existed here as of
2015 upon the WGNA conversion, “pooling” data to generate a single
coefficient obscures reality. The most consequential impact of “pooling” was
the underestimation of the JSC share for 2014 and its overestimation for the
years 2015-2017.
IX. A GENERAL CRITICISM OF THE REGRESSIONS: DR. ERDEM’S EIGHT-MODEL ARGUMENT IN REBUTTAL TO THE USE OF THE PROFFERED REGRESSIONS
Undaunted by the Judges’ findings in the 2010-13 Determination discussed supra,
Dr. Erdem endeavors to convince the Judges to reverse course by once more presenting
an argument that all fee-based regressions should be rejected as probative evidence of
relative market value, as that standard has been defined by the Judges and their
predecessors.118 To this end, Dr. Erdem presented in rebuttal eight models as pedagogical

One might fairly ask: Why rely on Dr. Crawford’s specification decisions now, after raising the
concerns about his potential specification searching? The answer is that Dr. Crawford’s detailed and
persuasive explanation for adding this additional control variable in the course of specifying his model was
a reason why the Judges did not agree with the SDC in the 2010-13 proceeding that it was evidence of
inappropriate specification searching. The troublesome facts were generated subsequently, in the discovery
phase of the companion 2010-13 satellite proceeding.
Nothing in the prior determinations precludes the Judges from considering what appear to be new
arguments by Dr. Erdem, because the Judges’ (and their predecessors’) reliance on fee-based regressions
constitutes a factual finding, not a legal conclusion, and thus there is no “precedent” that precludes a new
line of factual expert argument. See 2010-13 Determination at 3557 & n.26 (distinguishing “legal
precedent” from the oxymoronic concept of a “factual precedent”). See also 17 U.S.C. 803(a) (directing the
Judges to act on the basis of both: (1) “a written record” which includes record evidence; and (2) prior
“determinations and interpretations” of identified judicial and administrative entities).
tools only (not as proposed models for use in allocating shares). He and the SDC aver
that his are “simple models,” demonstrating that “all fee-based regression models” do not
estimate “any plausible measure of fair market value,”119 but rather are “leveraged on
correlations driven predominantly by geography (location of cable systems and the
subscriber groups) and other features of the copyright royalty system ….” SDC PFF ¶ 44
(quoting Erdem WRT ¶ 2).120
The Judges go through each of the eight models below. Also set forth below are
the rejoinders to these models presented comprehensively through the submission by
CCG and the testimony of CCG’s economic expert, Dr. George.
A. Erdem’s Rebuttal Model 1
Model 1 shows “a negative correlation between the number of minutes
retransmitted on a distant basis and the amount of subscriber group base fees.” SDC PFF
¶ 45 (citing Erdem WRT ¶¶ 52-53). This means, according to Dr. Erdem, that subscriber
groups retransmitting fewer distant minutes tend to pay more in royalty fees. Erdem
WRT ¶ 53. Dr. Erdem, interpreting the model as a “hedonic” regression, reads these negative
coefficients as “implying that CSOs place negative value on retransmission of distant signals.” SDC PFF

However, factual matters that the Judges decided in the 2010-13 Determination need not be fully
revisited in this proceeding, in the absence of any new persuasive argument to the contrary. Such factual
matters include: (1) the rejected sweeping claim that fee-based regressions do not embody economic
principles such as profit maximization (see 2010-13 Determination at 3560), (2) the rejected
characterization of fee-based regressions as merely “volume analyses” (see id. at 3560-61), (3) the rejected
argument that it was wrong for fee-based regressions to ignore distant local signals that CSOs chose not to
carry (see id. at 3563), and (4) the rejected argument that the fee-based regressions used the wrong form for
the control variable for number of subscribers (see id. at 3563-64).
It is not lost on the Judges that Dr. Erdem uses the phrase “fair market value” here, rather than the actual
standard of “relative marketplace value.” In the 2010-13 Determination, the Judges explicitly distinguished
the two concepts. 2010-13 Determination at 3555 n.17 (“Because the royalties at issue in this proceeding
are regulated and not derived from any actual market transactions, they do not correspond with absolute
dollar royalties that would be generated in a market and thus would not reflect absolute “fair market
value.”) See also the Judges’ discussion of the “relative marketplace value” standard, supra.
Elsewhere in his testimony, Dr. Erdem offers a more sinister conclusion from his “eight-model”
analysis: “[A]s I will show, it is precisely these modeling choices that allow the analyst to select a model
based on expected or desired results.” Erdem WRT ¶ 51. Thus, his argument is that the very structure of
the fee-based regressions provides all the expert witnesses, not just the two he singled out, Drs. Crawford
and Johnson, with the opportunity to engage in specification searches.
¶ 45 (citing Erdem WRT ¶ 53) (emphasis added).121 Given the perverse nature of this
result, the SDC maintains that its negative value puts the lie to the claim that the number
of minutes has something “to do with value,” but rather shows that the regression
coefficients are artifacts “of the regulatory structure.” SDC PFF ¶ 45.122
Dr. Erdem advances what he argues is an alternative explanation for the inverse
relationship between minutes and royalties that he claimed to identify: “[This] result can
be explained by distance between the signal and the subscriber group [because] I argue
that the number of subscribers reduce with distance, implying that the signal is being retransmitted to fewer subscribers over longer distances.” Erdem WRT ¶ 53. See also
Erdem WRT ¶ 59 (“91% of systems are retransmitting the same signal on a local basis to
some subscriber groups and on a distant basis to other subscriber groups[,] … [and] on
average 76% of the channels that are distant to a subscriber group are retransmitted as
local to another subscriber group”); SDC PFF ¶¶ 46-47; see also Bennett ACWDT ¶ 33
(Across 2014–2017, nearly 95% of the distant signals imported were within 150 miles of
the community served, and over 97% were within 200 miles.).
B. Erdem’s Rebuttal Model 2
In his second rebuttal model, Dr. Erdem analyzed the relationship between
claimant category minutes and base royalty fees. He testified that, quite similar to the
results from his Model 1, he found that a negative or statistically insignificant
relationship largely persists (except for JSC minutes). As with Model 1, Dr. Erdem
interprets this result through the lens of a hedonic regression, finding that it implies that
CSOs place a negative value on all distant retransmissions of local programming, except

The Judges discuss elsewhere in this determination the concept and label of a hedonic regression and
their significance in this proceeding.
Dr. Erdem states that to test the hypothesis of a positive correlation, on average, between royalties and
minutes, he would need to “control[] for appropriate variables.” Erdem WRT ¶ 52. However, there is no
sufficient indication in the record that Dr. Erdem applied control variables, or any other controls through
fixed effects with regard to his Model 1.
for JSC. Erdem WRT ¶ 54. And also as with Model 1, Dr. Erdem recognizes that these
results are “counterintuitive” in the context of reflecting value, but rather are a function
of the fragmentation of subscriber groups. Erdem WRT ¶ 54. See also SDC PFF ¶ 48.123
C. Erdem’s Rebuttal Model 3
In his third rebuttal model, Dr. Erdem tested the effect of the number of
subscribers in a subscriber group (the independent variable) on subscriber group royalty
fees and found a strong positive correlation. Erdem WRT ¶ 58. Dr. Erdem, again
viewing the modeling as a hedonic regression, has a ready and what he describes as an
obvious explanation for this positive correlation: “[C]able systems place a high positive
value on the number of subscribers in a subscriber group.” Erdem WRT ¶ 58. As
alternatively stated by Dr. Erdem, “[W]e may need to treat the number of subscribers as a
measure of volume.” Erdem WRT ¶ 58. Relatedly, Dr. Erdem opines that “there is a
negative correlation between the number of subscribers in a subscriber group and the
number of distant minutes the subscriber group receives” – meaning that, for the more
populous subscriber groups, fewer distant signals (and minutes) are retransmitted to them
and, thus, the more sparse the number of subscribers in a subscriber group, the greater the
number of distant signal minutes. According to Dr. Erdem, this negative correlation is
inconsistent with the positive correlation between distant minutes and royalties posited by
the theoretical underpinnings of the fee-based regressions. See Erdem WRT ¶ 59.124
D. Erdem’s Rebuttal Model 4
Dr. Erdem’s Model 4 seeks to address a finding from his Model 3: “[T]he
relationship between the number of subscribers and royalty fees is positive.” Erdem

Again, Dr. Erdem does not indicate whether he applied control variables, and, if he did, what they were.

The Judges note Dr. Tyler’s testimony, discussed elsewhere in this determination, that there is no data
identifying the number of subscribers in a subscriber group, in the course of his positive differentiation of
the Tyler Model from the other regression models (which unlike the Tyler Model, must estimate the
number of such subscribers in an inaccurate manner). It is not apparent from the record that Dr. Erdem had
estimated the number of such subscribers in an accurate manner.
WRT ¶ 58 & fig.4. In keeping with his interpretive context, which treats these regressions
as hedonic in nature, Dr. Erdem posits that “[a]n analyst … will conclude that [CSOs]
place a high positive value on the number of subscribers in a subscriber group,” such that
“we may need to treat the number of subscribers as a measure of volume.” Erdem WRT
¶ 58. But he then asks, rhetorically: Could it be that, on average, subscriber groups with
fewer subscribers receive more distant minutes of programming? Erdem WRT ¶ 58
(emphasis added). Dr. Erdem then turns to his next pedagogical regression model, Model
4, to address this issue.
Dr. Erdem’s Model 4 indeed demonstrated a “negative correlation between the
number of subscribers in a subscriber group and the number of distant minutes the
subscriber group receives.” Erdem WRT ¶ 59. Dr. Erdem explained his intuitive
explanation for this negative correlation:
One of the principal reasons why a rational CSO might choose to use
subscriber groups is because the cable system’s reach straddles the edge of
the 35-mile radius in which a station is considered “local” for cable royalty
purposes. In this situation, a signal is “local” to some subscribers and
“distant” to other subscribers. The cable system can save money by
breaking its subscribers into geographically based subscriber groups so
that it is paying for the distant retransmission only for the subscribers
receiving it on a “distant” basis.
Erdem WRT ¶ 59 (emphasis added). Dr. Erdem then presents the data (discussed supra)
regarding the localized emphasis on “distant” retransmission contiguous to the 35-mile
legal boundary between local and distance transmissions. Erdem WRT ¶¶ 59-60.
Dr. Erdem recognizes that the several regression experts sought to remove this
cost-based negative effect of the number of subscribers in a subscriber group on the
number of distant minutes a subscriber group receives. First, he noted that Dr. Tyler,
with his SGRP, divided the dollar value of fees (the numerator in Dr. Tyler’s SGRP) by “a
metric that scales with the number of subscribers,” i.e., total receipts (the denominator in
Dr. Tyler’s SGRP). Second, as an alternative approach, Drs. George and Johnson (and
apparently Dr. Crawford previously) introduced a control variable to remove the

influence of the number of subscribers (whose increasing numbers would increase
receipts and potentially increase royalties either through higher binding base fees or by
triggering a base fee obligation in excess of the minimum fee that would otherwise bind).
Erdem WRT ¶ 61.125
E. Erdem’s Rebuttal Model 5
Dr. Erdem then apparently adds to his pedagogical model the control variable that
Drs. George and Johnson include, “controlling for the number of subscribers.” When Dr.
Erdem does so (using lagged and unlagged subscriber numbers, respectively, in his
modeling), he finds that his “correlation between total minutes and royalty fees is now
positive.” Erdem WRT ¶ 62 & fig.6 (emphasis added). He emphasizes that what he
terms the “fixed price” for the retransmissions in his modeling is “based primarily on the
type and number of signals and revenues for the subscriber group,” despite the fact that
“[r]evenues are largely based on the number of subscribers.” Erdem WRT ¶ 62.
What still remains uncontrolled, Dr. Erdem notes, is the “impact … from the
number of distant signals.” Erdem WRT ¶ 62. He notes the perhaps self-evident point
that “[t]he more signals there are, the more minutes there are, so I would expect a positive
relationship after controlling for subscribers.” Erdem WRT ¶ 62.
F. Erdem’s Rebuttal Model 6
Dr. Erdem then breaks the retransmitted minutes into their respective
programming categories, and proceeds to test whether the positive correlation between
total minutes and royalties (which the regression experts understood to exist) continues to
hold on a per-category basis. Erdem WRT ¶ 63. He finds that this positive relationship
between minutes and royalties – on a program category basis – remains positive and is
statistically significant for four of the six category participants – PTV, Program

Note that when discussing Model 7 considered infra, Dr. Erdem admits that “inclusion of a variable for
subscribers … could be justified as a volume-based control.” Erdem WRT ¶ 69.
Suppliers, JSC, and the SDC. However, his modeling resulted in mainly positive but
statistically insignificant results for CTV and CCG, and, for a minority of CCG
observations, a negative relation. (Dr. Erdem’s modeling also showed negative
correlations for “network programming,” a category not at issue.) Erdem WRT ¶¶ 63-64
& fig.7. Dr. Erdem interpreted these results to mean that “the control for the number of
subscribers lifted the coefficients for program categories into positive territory by
removing the influence of the number of subscribers, but not enough to give all
categories a positive and statistically significant coefficient.” Erdem WRT ¶ 64.
Dr. Erdem asserts that these results “pose a problem for any analyst hoping to
interpret the model as a hedonic regression.” Erdem WRT ¶ 65. More particularly,
continuing from the binary perspective of whether the fee-based regressions are hedonic
or not, he unambiguously opines that these regressions are invalid because they are not
hedonic, in that “[t]he price is not actually varying based on the valuation of minutes,”
but rather varying based on “other factors such as the type of signal or the revenue-per-subscriber for the subscriber group or system.” Erdem WRT ¶¶ 65-66.
Dr. Erdem then states that the regression analyst who nonetheless “wishes” to
describe his or her regression as hedonic must manipulate the negative coefficients into
positive coefficients, so that they “appear” plausible as proxies for prices. Erdem WRT ¶
67.
It is in this context that Dr. Erdem accuses the regression experts of “leveraging”
the “negative coefficients for network programming” (which are ineligible for an
allocation of the royalties to be divided in this proceeding). Erdem WRT ¶ 68. To
generate this leverage, Dr. Erdem asserts that the fee regression analysts engage in two
manipulations: (1) they add another control variable for “the number of distant signals,
which correlates directly with the total number of minutes” and (2) they exclude the
variable for “the number of distant minutes of network programming,” “render[ing] all

category coefficients ‘relative’ to the negative coefficient for network programming.”
Erdem WRT ¶ 68. Dr. Erdem emphasizes the elementary point that “[b]ecause any
number is positive in relation to the largest negative number, the exclusion of the variable
for network programming has the effect of lifting the variables for all category minutes
comfortably into positive territory, creating an apparent positive and statistically
significant correlation where there previously was none in some categories.” Erdem
WRT ¶ 68.
G. Erdem’s Rebuttal Model 7
To the adjustments included through Models 1-6, Dr. Erdem now injects a control
for “the number of distant stations on royalty fees.” Also, his Model 7 “drops network
distant minutes in order to get relative numbers” in the manner undertaken by the fee
regression experts. Erdem WRT ¶ 69.
Although (as noted supra) Dr. Erdem concedes that the prior “inclusion of a
variable for subscribers … could be justified as a volume-based control,” he finds “no
econometric justification for seeking to value category minutes relative to the negative
coefficient value of network programming.” Erdem WRT ¶ 69. He states that as a
general matter, “even if one believed that the coefficients were related to value, there
could be no justification for trying to measure value relative to an arbitrarily chosen
category with a negative value.” Erdem WRT ¶ 69 (emphasis added).
Dr. Erdem also characterizes the negative coefficient for network programming as
“an artifact of the operation of the copyright royalty system, not a measure of how much
anyone values programming, and certainly not a measure of how programming would be
valued in the free market.” Erdem WRT ¶ 70. Alternately stated, he declares that “[t]here
is no intuitive reason why network programming would be expected to have negative
market value when retransmitted on a distant basis.” Erdem WRT ¶ 70.

Dr. Erdem does acknowledge that, through what he calls this excluded network
minute “manipulation,” all the coefficients in the categories of interest (for the distant
retransmission that is permitted by law) now become positive. Erdem WRT ¶ 70 (“This
is exactly how Professor Crawford’s model – and, by extension, Dr. George’s model and
Dr. Johnson’s model – works.”).126
From this point forward, Dr. Erdem maintains that the fee-based regression
experts “are free from the constraints of econometric reasoning.” More particularly, he
asserts they can, without appropriate justifications, use various (1) control variables, (2)
fixed effects, (3) transformations and functional forms, and (4) unspecified miscellaneous
fine-tuning, all in the service of “generat[ing] whatever coefficients [they] desire or
expect.” Erdem WRT ¶ 71.
H. Erdem’s Rebuttal Model 8
The final model, Model 8, is actually not a “model” at all, but rather Dr. Erdem’s
more particular catalog of “manipulations” in which a fee-based regression expert could
engage, with a model built up through Dr. Erdem’s Models 1-7. Without linking any of
the following “manipulations” specifically to any of the experts in this proceeding, Dr.
Erdem states in this “Model 8” that the following “manipulations” are possible:
1. “bringing in variations in the number of subscribers to increase or decrease the
effect on the dependent variable. For example, we can try the lagged number
of subscribers;”
2. “add[ing] interactions with the number of subscribers” (as he states Dr.
Crawford did in his model); and
3. “add[ing] fixed effects, which controls for any variation due to inherent
characteristics of a subscriber group.”

To be clear, Dr. Erdem does not lodge this criticism at Dr. Tyler’s model.

Dr. Erdem does not assert that such additions would be ad hoc, but rather that, consistent
with the fundamental defect he finds in the fee-based regressions, they would “merely
leverage the features of the copyright royalty system.” Erdem WRT ¶ 72.
I. Dr. George’s and CCG’s Rejoinder to Erdem’s Modeling Exercise127
At a high level,128 CCG takes issue with the SDC’s emphasis on the assertion that
fee-based regressions are predominantly rooted in correlations with (a) the geographic
location of CSOs and their constituent subscriber groups and (b) statutory features of the
copyright royalty system. In this regard, CCG essentially attacks this assertion as much
ado about nothing, because the reason why CSOs and their subscriber groups retransmit
signals as they do does not bear on the fundamental point of the regressions, i.e., to
identify what the CSOs actually retransmit in order to appropriately compensate
copyright owners. Dr. George emphasizes that whether or not subscriber group
configurations are geographic artifacts, they nonetheless reflect the strategic profit-maximizing decisions of CSOs as to where they will transmit distant signals. It is this
profit-maximizing retransmission decision that is the kernel of information that provides
insight into “what would determine relative market value absent regulation.” See George
WDT at 15-17, 27-28; The Canadian Claimant Groups’ Reply to Proposed Findings of
Fact and Conclusions of Law (CCG RPFF) ¶¶ 21-22.
More broadly, CCG characterizes Dr. Erdem’s eight-model analysis as
incomplete and economically flawed. In this regard, Dr. George criticizes Dr. Erdem’s
rebuttal pedagogical modeling because therein he analyzes relationships in the data
across CSOs, whereas the George Model emphasizes variation within CSOs to identify

CCG and Dr. George, among the other regression experts and parties, were the ones who responded to
Dr. Erdem’s testimony, apparently because Dr. Erdem’s pedagogical modeling was based on “Dr. George’s
methodology and production.” Erdem WRT ¶ 51 n.23.
This high-level “General Criticism” also responds specifically to Dr. Erdem’s Model 4 discussed supra,
regarding “geographic” effects, which are “key” elements of Dr. Erdem’s general critique of fee-based
regressions. See 4/6/23 Tr. 3643-44 (Rubinfeld) (identifying “changes in the number or size of subscriber
groups” as a “key issue.”).
coefficients. Thus, CCG and Dr. George essentially attack Dr. Erdem’s modeling as a
straw man exercise. CCG RPFF ¶ 22; George WDT at 27-28.
At the conceptual economic level,129 Dr. George takes note of the point (identified
by the Judges supra) that Dr. Erdem has contextualized his analysis in the wrong
economic and legal standard:
SDC’s false criticism that regressions are not driven “by any plausible
measure of fair market value” suggests that measuring fair market value was
a goal of regression. … No pro-regression expert claims that correlations
are driven by “fair market value.” As the Judges wrote in the prior
proceeding: “In this proceeding, the Judges distinguish between ‘relative
values’ (to describe the allocation shares), and absolute ‘fair market values.’
Because the royalties at issue in this proceeding are regulated and not
derived from any actual market transactions, they do not correspond with
absolute dollar royalties that would be generated in a market and thus would
not reflect absolute ‘fair market value.’”
CCG RPFF ¶ 12 (quoting 2010-13 Determination at 3555 n.17) (emphasis added).
More granularly, CCG asserts that the negative correlations in Dr. Erdem’s
modeling between royalties (the dependent variable) and, respectively, (a) total distant
minutes (Model 1), (b) claimant distant minutes (Model 2), and (c) subscriber group size
(Model 3), do not, as Dr. Erdem claims, reveal a modeling “hurdle” or “problem” that
bedevils the fee-based regressions. Rather, it is claimed that Dr. Erdem’s first three
pedagogical rebuttal models fail to consider that CSOs configure their subscriber groups
strategically to maximize profits and therefore will only retransmit distant signals to
groups of subscribers when the anticipated benefit (essentially, more new or retained
subscriptions) exceeds the anticipated costs (royalties). CCG RPFF ¶ 24 (citing, from the
2010-13 proceeding, Crawford CWDT ¶¶ 66-68; Israel WDT ¶¶ 12-14.).

All the economic experts in this proceeding agree that the initial step in building a regression model is to
identify “a theory that describes the variables to be included in the study.” American Bar Association,
Econometrics, Legal, Practical and Technical Issues 8 (1st ed. 2005) (“ABA Econometrics”). See also
Stock & Watson, supra note 92, at 282 (“First, a core or base set of regressors should be chosen,” which
includes the “variables of primary interest” and the “control variables” suggested by, inter alia, “economic
theory.”) (emphasis added); Kennedy, supra, at 391 (identifying as “Rule 1” of applied econometrics:
“Use common sense and economic theory.”) (emphasis added). Perhaps even more pertinent here is
Professor Kennedy’s “Rule 2,” which states that an econometrician must avoid attempting to “produce[] the
right answer to the wrong question.” Id. at 391.
CCG also takes note of the finding in Dr. Erdem’s Model 3130 of “a positive
relationship between the number of distant signals and subscriber group royalties,”
suggestive of the regression experts’ hypothesis that cable systems “place a high positive
value on the number of subscribers in a subscriber group.” CCG RPFF ¶ 24.
Unsurprisingly, CCG and Dr. George do not disagree with his finding.131
CCG and Dr. George then address Dr. Erdem’s next point regarding: (1) the
purportedly problematic “negative correlations” in Models 4 and 6 “between the number
of subscribers in a subscriber group and the number of distant minutes the subscriber
group receives” (Erdem WRT ¶ 59); and (2) the attempt to control for the number of
subscribers considered in Model 5 (Erdem WRT ¶ 62). In defending the use of a control
for the “number of subscribers” as an important feature for a fee-based regression, CCG
states:
The negative correlations documented in Dr. Erdem’s models are not
“problems.” The negative correlation with subscriber group size results
from the strategic choices of cable systems to minimize the cost associated
with distant signal carriage. The negative coefficient for one category of
minutes reflects the fact that programming minutes per station sum to a total
of 24 hours per day. The goal of the regression is to evaluate how royalty
expenditure correlates with claimant programming on distant signals
retransmitted, all else equal. A control variable for the number of
subscribers in a subscriber group creates these all-else-equal conditions.
CCG RPFF ¶ 25 (citing George WDT at 52-54; George WRT at 60) (emphasis added).
Turning to Dr. Erdem’s pedagogical rebuttal Model 7, Dr. George and CCG assert
that Dr. Erdem has changed the fee-based regression modeling in two ways by (1)
“excluding the variable for network minutes” and (2) “including a variable for the
number of distant signals.” CCG RPFF ¶ 26. Regarding the alleged error in Dr. Erdem’s
exclusion of the network minutes variable, CCG avers:

CCG misidentifies this point as within Dr. Erdem’s Model 4. CCG RPFF ¶ 24.

As noted supra regarding CCG’s and Dr. George’s “General Criticisms” of Dr. Erdem’s pedagogical
modeling, they dispute his assertion in Model 4 that fee-based regressions do not reflect the category-by-category preferences of CSOs as revealed by the minutes of program categories retransmitted.
Since all stations broadcast approximately 24 hours per day, and subscriber
groups must have whole numbers of distant signals, programming minutes
sum to a constant equal to the number of distant signals times 24 hours per
day for 6 months. Dr. Erdem has … effectively forc[ed] one of the program
categories to produce a negative coefficient. [Dr.] Crawford and [Dr.]
George address th[is] … by specifying a model with a control for … a
reference category of “big-3” network minutes. Network minutes are a
convenient reference choice because they are non-compensable and no
coefficient for this category need be estimated.
CCG RPFF ¶ 26 (citing George WRT at 31-32, 57-58, 63-64) (emphasis added).132
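For illustration only, the adding-up relationship that CCG describes can be stated in stylized form (the notation below is illustrative and is not drawn from any party’s model):

(Cat 1 minutes) + (Cat 2 minutes) + … + (Cat 6 minutes) + (network/off-air minutes)
= (number of distant signals) x 24 hours x 60 minutes x (days in the accounting period)

Because the category minutes thus sum to a constant multiple of the number of distant signals, a regression that included a variable for every category of minutes alongside a control for the number of distant signals would be perfectly collinear; omitting one category as the reference removes that collinearity, and the remaining coefficients are then read relative to the omitted, non-compensable network category.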
Turning to her objection to Dr. Erdem’s second alteration identified in the
immediately preceding paragraph, viz., the removal of the control for the number of distant
signals, Dr. George responded as follows:
[R]emoving the control for distant stations changes the interpretation of
program coefficients so that they no longer show the effect of an additional
program minute taking away a minute of network or off-air
programming. … removing the control for distant signals [thus] alters the
“all else equal” framework of the model so that program coefficients no
longer isolate the effect of additional program minutes, but instead also
capture the (omitted) incremental value of additional distant signals.
George WRT at 57-58 (emphasis omitted); see also id. at 63-64.133
Finally, in responding to Dr. Erdem’s conclusory Model 8, CCG concludes by
describing Dr. Erdem’s pedagogical exercise as merely his recapitulation and criticism of
“his own incomplete models,” rather than “a criticism of the well-specified Crawford
[M]odel or those presented in this proceeding.” CCG RPFF ¶ 21. See also George WRT
at 53 (“just as there is the potential for experts to ‘cherry-pick’ results, there is the
potential for adversaries to ‘cherry-pick’ their critiques.”).

Moreover, Dr. George pointed out that, at first, Dr. Tyler made the same mistake as Dr. Erdem,
neglecting to include or address this reference category when critiquing the Crawford Model. When he
realized his error, Dr. Tyler withdrew his attempted replication of the Crawford Model. See George WRT
at 31-32; see also Bennett WRT ¶¶ 127-134.
Dr. George had the opportunity to express this criticism in her WRT because Dr. Erdem had made this
particular criticism in his amended direct testimony (which he later incorporated into his eight-model
exercise).
J. The Judges’ Analysis and Conclusions
The Judges find that Dr. Erdem’s pedagogical eight-model approach does not
support an abandonment of the Judges’ long-standing reliance on fee-based regressions
as evidence of relative market value in these section 111 allocation proceedings. The
Judges make this finding based on the following:
1. The Judges agree with SDC’s counsel that Dr. Erdem’s eight-model analysis
is not substantively any different than what he presented in the 2010-13
proceeding. As such, it does not raise new factual arguments.
2. Dr. Erdem acknowledges at the outset that his critique is intended to show that
the fee-based regressions fail to generate “fair market value.” This is a
consequential error on his part, because (a) the Judges’ long-existing standard
is “relative marketplace value,” (b) the Judges expressly distinguished their
standard from “fair market value” in the 2010-13 Determination, and (c) Dr.
Erdem did not attempt to explain his switch in standards. Accordingly, it
appears to the Judges that Dr. Erdem expressly characterized his eight-step
modeling approach in a manner that attempted to answer “the wrong
question,” in violation of Professor Kennedy’s econometric “Rule #2”
discussed supra.
3. Dr. Erdem’s approach is to build up from models which lack control variables,
and then to posit that the relationships he finds are inconsistent with the
hypothesis behind the fee-based regressions. But that approach leaves out all
the control variables that the fee-based regression experts have included in
their models, essentially causing Dr. Erdem’s simple models to be burdened
by omitted variables, which cause regressions to suffer from the aptly
named “omitted variable bias.” Moreover, in Models 1 and 2, Dr. Erdem is

thus not even engaged in “multiple regression” analysis, because he is
analyzing only the effect of a single independent variable.
4. Related to the immediately preceding criticism, Dr. Erdem’s rebuttal
modeling approach thus reflects his own modeling choices and approach, not
one utilized by the fee-based regression experts. Thus, his approach is in the
nature of a straw man argument. Moreover, his approach appears to be not so
much pedagogical in nature as an attempt to utilize his
rebuttal testimony to set forth the rudiments of an alternative modeling
exercise – after SDC had declined to proffer any such modeling approach in
its original or amended written direct statement (when it was fully aware of
the points it subsequently raised on rebuttal through Dr. Erdem’s eight-model
approach).
5. Dr. Erdem does not clearly explain how he estimated the number of
subscribers in a subscriber group. If he did so by the same estimation
approach as Drs. George, Johnson, and Marx (via Dr. Crawford), then his
criticism is as questionable as their analyses in this regard. Moreover, the
deficiency of this criticism underscores the relative strength of the Tyler
Model, which did not require a control for the number of subscribers, given its
use of SGRP as the dependent variable.
6. The Judges cannot credit Dr. Erdem’s criticism of the relationship between the
negative coefficients he discussed and the use of a “reference category” of
“Big-3” network minutes in the fee-based regressions. The Judges are struck
by the fact that Dr. Erdem ignored the rationale given by Dr. George (and
other regression experts), viz., that a “reference category” serves as a measure
of value generated by the regression but not a value at issue under the
statutory scheme, and thus the six categories of value can be measured against

that “reference category.” (Other experts have characterized such a “reference
category” approach as an “index” or “numeraire.”134). Any sufficient
criticism of this approach would need to address the “reference category”
purpose head-on, rather than ignore it.
7. Further, with regard to the reference category issue, the Judges agree with Dr.
George that Dr. Erdem’s rejection of a reference category/numeraire
effectively forced program categories at issue in this proceeding to produce a
negative coefficient, because in a 24-hour day, absent this control, any
increase in one royalty-generating category’s minutes would necessarily
reflect a decrease in another category’s minutes.
Separate and apart from the Judges’ evaluation of Dr. Erdem’s testimony, as
discussed above, the Judges note an aspect of Dr. Erdem’s testimony that called into
question its reliability. By way of brief background, Dr. Erdem testified in the 2010-13
proceeding as well as the present proceeding, and his testimony was consistently reliable
and thought-provoking, regardless of whether the Judges ultimately agreed with his
opinions. But he also inexplicably endorsed in his testimony the present Bortz Survey as
“very useful.” 4/5/23 Tr. 3465 (Erdem). Dr. Erdem’s testimony in this regard was
inexplicable – and jarring – because SDC did not seek to have Dr. Erdem qualified as a
survey expert, he was not received as such by the Judges, and, perhaps even more
unsettling, he pronounced his endorsement of the Bortz Survey “sight unseen,” that is, he
endorsed it without reading it. 4/5/23 Tr. 3466 (Erdem) ([Q]: “[I]n your initial testimony
that was submitted in this proceeding, you expressed your support for the Bortz Survey

See, e.g., 4/11/23 Tr. 4141-42 (Marx) (referring to “the Big 3 network programming” – which is already
available on local affiliates in the CSO system and therefore has the lowest coefficient – as the “numeraire”
that allows the six category coefficient values to be positive in relationship to those “numeraire”
/ ”reference category” minutes.)
sight unseen, correct? [Dr. Erdem] That's correct.”) (emphasis added)). Nonetheless, Dr.
Erdem continued to attempt to justify this testimony in colloquy with Judge Strickler:
Q: Dr. Erdem, but you are not qualified as a survey expert. How can you weigh
the value of a survey …. I understand [you] to say while there may be no perfect
way to estimate relative market value, you say I'll tell you one way that isn't, and
that's these fee-based regressions. I understand your testimony. But why would
we credit your testimony about the survey being appropriate when it comes to that
issue? You're just a lay witness.
Dr. Erdem: You are correct, Your Honor, I am not a survey expert as an
economist.
4/5/23 Tr. 3476 (colloquy) (emphasis added). Dr. Erdem could have chosen to stop there,
but he elected to keep digging, seeking to justify his Bortz Survey endorsement:
Dr. Erdem: I am involved in projects and analyses that rely on survey
methodologies and survey data. I have a team that supports me in those.
* * * * *
Judge Strickler: Before you gave your testimony in this case about the Bortz
Survey being an appropriate tool to measure relative market value, did you
consult with that survey team?
Dr. Erdem: I did, Your Honor. You may recall the name Hilary Johnson, who is
my director. She is a statistician by training. And I also have a Ph.D. statistician
who supported me in the 2010-'13 proceeding. He reviewed the materials. … I
had conversations with him about methodology. So I had a team that supported
me in my reports.
Judge Strickler: Well, I don't remember you saying anything in your testimony
that you relied on your survey team in any way. Hilary Johnson's name I recall,
[but] [s]he didn't testify in her written testimony … about the survey at all, did
she?
* * * * *
Dr. Erdem: Correct.
Judge Strickler: Why didn't she give testimony that the Bortz Survey was a good
and proper way to estimate value if she's an expert in this field and you're not?
Dr. Erdem: That's a good question.
Judge Strickler: That's why I asked it.

Dr. Erdem: [W]e didn't specifically focus on the methodology aspects of Bortz
Survey, you are correct in that.
Judge Strickler: Thank you, Doctor.
4/5/23 Tr. 3476-79 (colloquy) (emphasis added).
The foregoing rather remarkable testimony damaged Dr. Erdem’s credibility,
suggesting he would be willing to testify regarding matters as to which he lacked both
expertise and knowledge. Moreover, it is ironic that he would attempt to salvage his
Bortz Survey opinion by reference to his “team” of other professionals with the necessary
background to offer such an opinion, only to admit in short order under questioning from
the bench that they did not “specifically focus on the methodology aspects of the Bortz
Survey.” His testimony in this regard is rich with irony because Dr. Erdem is the witness
who has most forcefully attacked Dr. (John) Johnson of PTV for delegating work to his
team of professionals without personal involvement or knowledge of the work of the
team.
Thus, separate and apart from the enumerated points set forth above that lead to
the Judges’ finding that Dr. Erdem’s eight-model analysis is insufficient to invalidate the
use of fee-based regressions, his foregoing survey-related testimony casts doubt as to his
credibility.
In sum, the Judges find that Dr. Erdem’s eight-model pedagogical exercise is
insufficient to discredit fee-based regressions as a form of evidence on which the Judges
may rely.
X. SUB-CATEGORY VALUES
JSC, through its statistical expert, Mr. Harvey, ran what he described as “validity
tests” that decomposed certain program categories to isolate the coefficients attributable
to the decomposed elements. Specifically, he concentrated on (1) paid programming
(including “infomercials”) within the Program Suppliers category and (2) the rare NFL
football games that appeared on distantly retransmitted local stations (as opposed to being

broadcast on network or cable stations, which are noncompensable in these section 111
proceedings). Harvey WRT ¶¶ 71-90.
With regard to paid programming, Mr. Harvey separated the paid programming
out of the Program Suppliers category and created a new category for paid programming.
Joint Sports Claimants’ Post-Hearing Brief in Support of Proposed Royalty Allocations at
32-33 (and citations therein) (JSC PHB). Performing this task on the Johnson Model,
Mr. Harvey calculated that the coefficient for paid programming is larger than the
coefficients for the other Program Suppliers content, PTV content, SDC content, and
CCG content, and that, on average, the Johnson regression would assign paid
programming a share of about 6.8% of the royalty pool per year. JSC PFF ¶ 176. For
further perspective, Mr. Harvey computed that this paid programming share is greater
than the share of royalties that the Johnson Model assigned to the approximately 2,000
annual JSC games, and approximately three times greater than all the 2015-2017 royalties
for all JSC content. Id.
In response, Program Suppliers argue that Mr. Harvey failed to properly place
his findings within the context of the regression approaches in these proceedings.
Specifically, PTV’s expert, Dr. Johnson, testified that it was incorrect to decompose the
entire category of Program Suppliers’ programming and focus on any one sub-category,
because the regressions offer “average relative valuations” for entire categories. More
granularly, Program Suppliers take note of the following testimony on this issue by
PTV’s expert, Dr. Johnson:
[I]t is an average relative valuation, so I don't think that's an appropriate use
of the model. But his theory is that paid-programming has no value at all,
but he didn't remove them from the model. If he had simply removed the
minutes that he thinks are problematic, he would have found that the
estimates really don't change very much at all. So I just don't think that's a
valid critique.
3/21/23 Tr. 605 (Johnson).

As a second response, Program Suppliers assert that Mr. Harvey’s paid
programming argument is “cherry-picked,” because he admitted to running other
“validity tests whose subject matters and results he and JSC did not produce in these
proceedings.” PS PFF ¶ 346 (and record citations therein).
CCG, relying on the testimony of its economic expert, Dr. George, also levied
Program Suppliers’ first criticism above, asserting that Mr. Harvey’s validity test on paid
programming ignores the very purpose of the fee-based regressions: to estimate the
average relative values of the six programming categories at issue. CCG PFF ¶ 148 (and
record citations therein). CCG adds, in this regard, that none of the economic expert
witnesses who proffered fee-based regressions in this proceeding has maintained that it
was the purpose or capacity of their models to precisely estimate the relative value of
sub-groups of programs. Id.
At the hearing, Dr. George provided further detail with regard to this criticism:
So the paid programming is fixed hours at night. There's just not
independent variation with other Program Supplier category. So … when
[Mr. Harvey] breaks this up, he effectively forces one of the coefficients to
be negative because … you can't really independently increase paid
programming without decreasing the other Program Suppliers'
programming.
* * * * *
[T]he coefficients for claimant programming … reflect an average. So right
now the values per minute are telling us the average of the different -- like
the diversity of this kind of programming. So, Program Supplier
programming has different sorts of things. And so the value per minute is
an average [a]nd we're applying it to quantities. And so if I were to design
a regression that really wanted to get at the value of paid versus non-paid
programming, I could do that, but it would be a pretty different model.
4/18/23 Tr. 5163, 5166-67 (George).
In its post-hearing filings, JSC responds by emphasizing more narrowly that
this “validity” test reveals the pitfall of the regression models’ use of retransmission
decisions by minimum fee-paying CSOs:

The failure of the regressions to accurately capture revealed preferences
from Minimum Fee CSOs is clearly demonstrated by Mr. Harvey’s validity
tests, which reveal that the regressions would attribute substantial value to
programming with no value (i.e., infomercials) ….
JSC PHRB at 16-17 (and citations therein) (emphasis added).
With regard to the rare NFL game that appeared on a distantly retransmitted
station (as opposed to a broadcast or cable network), Mr. Harvey performed an additional
“validity” test. Specifically, he separated NFL games from other JSC content, in order to
ascertain whether the regression models had the capacity to realistically estimate the
relative value of NFL programming. JSC PFF ¶ 180. Mr. Harvey found that across the
Johnson, George, and Tyler Models, the NFL retransmissions had lower coefficients than
other JSC programming (and sometimes negative coefficients). JSC PFF ¶¶ 181-85 (and
record citations therein). Based on these results, Mr. Harvey opined that these regression
models were unable to identify realistic values because the high value of NFL games on
television is common knowledge and undisputed, and should have been confirmed by this
validity test. JSC PFF ¶ 180.
In response, Program Suppliers and Dr. Tyler first reiterate the same points they
made with regard to Mr. Harvey’s “validity test” pertaining to paid programming, i.e., (1)
that the regressions offer average relative values across a category, (2) the program
category is too small to generate meaningful results, and (3) the test was “cherry-picked”
out of a number of validity tests that Mr. Harvey elected not to disclose. But Program
Suppliers specifically home in on the second criticism above, that the program category
is simply too small. In this regard, Program Suppliers maintain:
During the 2014-17 time period, WNBC (one of the handful of distant
signals that Mr. Harvey chose to highlight) carried just one compensable
regular season NFL game, meaning that compensable regular season NFL
content accounted for less than one one-hundredth of one percent of the
content on that station.
PS PHB at 3 & 25 (citing to PS PFF ¶ 174 (and record citations therein)). See also
3/29/23 Tr. 2062-64 (Harvey) (admitting to this percentage calculation).

Regarding this NFL “validity test,” CCG made the same argument it made in
criticism of Mr. Harvey’s “validity test” relating to paid programming, described supra.
In her oral testimony, Dr. George elaborated more broadly regarding the attempt to
decompose JSC programming into the rarely retransmitted NFL games, stating that Mr.
Harvey failed to appreciate that because “there's a fixed number of regular season and
post-season games in the NFL ... we don't have independent variation there [and] our 24
model isn't capable of [that] separation … and it doesn't need to.” 4/18/23 Tr. 5162
(George).
In his oral testimony, Dr. Johnson responded to Mr. Harvey’s NFL de-composition of
JSC programming in a manner consonant with his own response regarding the paid
programming issue. Dr. Johnson testified:
Mr. Harvey argues that he can change the model and try to separate out NFL
or playoffs. He says: Look, I get nonsensical results. I get negative values
for these things. The problem is … he is trying to parse the regression so
finely that he has got less than .01 and .04 of the total minutes that are used
in the entire estimation. … The model wasn't intended to only estimate
isolated values for NFL and playoffs. It's an average relative valuation for
the claimants. It can do that well. And that's the purpose of the model.
3/21/23 Tr. 605-06 (Johnson).135
The Judges find that Mr. Harvey’s “validity tests” do not serve to invalidate the
usefulness and relevance of the regressions proffered in this proceeding. There are
several reasons for this finding.
First, the Judges agree with the criticisms that Mr. Harvey’s “validity tests” fail to
appreciate the fact that the regressions are estimated average valuations. When an
average is de-composed, looking at any one element in the average fails to consider the

The Judges find no merit in the allegation that Mr. Harvey may have “cherry-picked” which “validity
tests” to produce. The issue here is the importance, vel non, of his validity tests. In that regard, the Judges
find that the tests he discussed in his WRT, including but not limited to the ones highlighted here, all suffer
from the problems inherent in de-composing the regression results. Moreover, because Mr. Harvey is a
JSC witness, it was incumbent upon JSC to bear the burdens of production and persuasion regarding the
impact of these de-composed sub-categories on the regression results, burdens which they have not
satisfied.
average itself and, depending on the question at hand, may offer an interpretation that is
off-point.136
Second, if it in fact is the case that paid programming, by some other metric, or by
the use of common sense, can clearly be found to have far less value than other program
types, the fact that the regression provides paid programming with value via the
averaging function of the regression does not mean that the Program Suppliers category
(where paid programming is situated) received an inflated coefficient. In this regard, the
Judges note Dr. Johnson’s testimony, cited above, in which he notes that Mr. Harvey did
not even attempt to show how, if at all, the coefficients in the regression would have
changed if he had simply removed the paid programming minutes from the regression.137

For example, consider the grade point average (GPA) of a college student for a semester, where the
student received 3 As in English Literature, World History, and Economics, and one C in biology.
Assuming an A=4.0 points and a C = 2.0 points, the student has a GPA of 3.5. This is the relevant data
point if one wants to know generally whether the student is performing well. But if the question is whether
the student is showing an aptitude to perform well in medical school, the de-composition is more
appropriate, because the 2.0 in biology is the more relevant data point. Here, there is no reason why the
paid programming or the NFL data points should be separated out, when the purpose of the regression is to
obtain the average.
It appears that there would be no change. A simple thought experiment is instructive. Assume the
Program Suppliers category consists of two types of programs: (1) situation comedies and (2) paid
programming. For simplicity, assume equal subscriber minutes for both categories and that each situation
comedy has the same value to a CSO as any other situation comedy, and each Paid Programming segment
has the same value as another such segment to a CSO. Also assume, as Mr. Harvey has, not
unreasonably, posited, that all paid programming has zero value to a CSO.
Because the regression is constructed to correlate royalties with minutes of programming, none of
the minutes attributable to paid programming would correlate with royalties because it is assumed CSOs do
not value paid programming. So, all the royalties attributable to Program Suppliers would have been
generated by the situation comedies. However, the total subscriber minutes would include both situation
comedy and paid programming minutes, reducing the per minute coefficient value (and diluting (by 50%)
the value generated by the situation comedies).
Consider some hypothetical numbers: Situation comedies and paid programming each accounted
for 262,800 minutes (50% of the 525,600 minutes in a year). The regression, de-composed, gives situation
comedies, hypothetically, a .0005 coefficient. But paid programming gets a zero coefficient. The average
coefficient across both categories is .00025 which, when multiplied by the number of annual programming
minutes (as the regressions do) of 525,600, yields 131.4, and that is the figure that would be compared to
the figure similarly computed for the other claimant categories.
What if we excluded paid programming from the regression? There would be 262,800 minutes of
situation comedy programming, with a coefficient value of .0005, as assumed. What would be the figure to
be used for allocation purposes? It would be 262,800 x .0005, which also equals 131.4. Thus, there is no
reason to assume zero-value paid programming is inflating the value of the category in which it is situated
if the validity/reality assumption of zero value is correct. (Economists will recognize this result as
analogous to the point made by Nobel laureate George Stigler in his explanation of block-booking of
movies by a studio to a theatre. See G. Stigler, United States v. Loew's Inc.: A Note on Block-Booking,
1963 Sup. Ct. Rev. 152 (1963)).
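The arithmetic in this thought experiment can be verified mechanically. The following short calculation is purely illustrative; it uses only the hypothetical figures assumed above and reproduces the identical 131.4 result under both specifications:

# Hypothetical figures from the thought experiment above (not record evidence).
total_minutes = 525_600              # minutes in a year
sitcom_minutes = total_minutes / 2   # 262,800
paid_minutes = total_minutes / 2     # 262,800

sitcom_coeff = 0.0005   # assumed per-minute coefficient for situation comedies
paid_coeff = 0.0        # paid programming assumed to have zero value

# Combined category: the average coefficient is applied to all category minutes.
avg_coeff = (sitcom_coeff + paid_coeff) / 2
combined_value = avg_coeff * total_minutes        # 0.00025 x 525,600 = 131.4

# Paid programming excluded from the regression entirely.
excluded_value = sitcom_coeff * sitcom_minutes    # 0.0005 x 262,800 = 131.4

print(combined_value, excluded_value)   # both approximately 131.4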

Third, Mr. Harvey indicates that the paid programming issue is a factor (or
perhaps more of a factor) as it pertains to minimum-fee-only CSOs, as noted supra. But
because the Judges are relying on the results from the cohort of above-minimum-fee
CSOs, Mr. Harvey’s point in this regard is of less importance.
Further, the program categories were configured by the parties. Although the
parties have raised the issue of whether the definitions of the program categories should
be changed, the categorizations in this proceeding are the same as the parties have long
utilized. The Judges understand these program categories to have been designed to
reduce transaction costs, so that each sub-category, or each program, does not make its
own claim for royalties, rendering the process prohibitively costly. (The bifurcation of
the process into allocation (formerly Phase I) and distribution (formerly Phase II)
proceedings is in furtherance of the reduction in transaction costs.)138
However, these tests do underscore the importance of integrating the Bortz
Survey as an approach to ascertaining relative marketplace value. It may be the case that
a small number of games has value, outside of what is measured by the regression, in
retaining subscribers, a measure of value which might be captured by the Bortz Survey,
but not by the regressions.
More broadly, the question of the value of different sub-categories of
programming takes on salience when the issue is whether certain types of programming
have a relative marketplace value independent of the number of minutes they contribute
to the category in which they are situated. And an entire category may have value not
reflected in the minutes of programming associated with that category and its
programming. That is, because these various categories and sub-categories are bundled
together in the local stations that are distantly retransmitted, minutes alone may well not

If paid programming indeed contributes little or nothing in royalties, the Program Suppliers’
representative may address that in the distribution (Phase II) process, but that is of no moment in this
proceeding.
reflect the relative values of key drivers of the decision of a CSO to retransmit a station
with a bundle of programming category content. For this reason, the Judges are also
utilizing the results of the Bortz Survey, which reflect (albeit imperfectly) how CSOs
value different types of programming.
XI. REGRESSION DECISION
A. Regression Analyses
In the 2010-13 Determination, the Judges placed “primary reliance” on a
regression analysis139 to allocate royalty shares among the six program categories. 2010-13 Determination at 3610. In particular, they found a regression model presented by
CTV’s econometric expert, Dr. Gregory Crawford, “on balance … to be highly useful in
estimating relative values in this proceeding.” Id. at 3569. Accordingly, the Judges gave
greater weight to regression analysis than they had in prior proceedings, both in absolute
terms and relative to other evidence and approaches, such as surveys and descriptive
industry witness testimony. An important reason for the Judges’ increased reliance on
regression analysis was that this methodology “approached the relative marketplace value
from the perspective of what CSOs actually had done in terms of deciding which distant
signals to retransmit on their systems.” Id. at 3610 (emphasis in original).
The general form of this regression model is identified, alternatively, as a “fee-based” regression, a “Waldfogel-style” regression, and, subsequent to the 2010-13
proceeding, a “Crawford-style” regression.140 At a high level, a fee-based regression is
characterized by the following elements:
1. It attempts to correlate variation in the program category composition of
distant signal bundles with the royalties paid by CSOs to estimate the relative
marketplace value of programming;

139 For an overview of the general concept of regressions, see 2010-13 Determination at 3556.
The Judges use these monikers interchangeably in this determination.

2. It regresses observed royalty payments for the bundle on the numbers of
minutes in each programming category; and
3. It may employ econometric controls in the form of “control variables” and
“fixed effects” in order to isolate the correlation between the dependent
variable (some measure of royalties) and the independent (explanatory)
variable of interest (the number of programming minutes) from other drivers of
CSO payments that are controlled for.
See 2010-13 Determination at 3557 (record citations omitted).
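In stylized form, and purely for illustration (the notation is hypothetical and does not reproduce any particular expert’s specification), such a fee-based regression can be written:

Royalties = a + b1(Cat 1 minutes) + b2(Cat 2 minutes) + … + b6(Cat 6 minutes)
+ c1(Control 1) + c2(Control 2) + … + fixed effects + u

where the b coefficients on the per-category minutes are the estimates of interest, and the control variables and fixed effects hold constant other drivers of CSO payments.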
In proceedings prior to the 2010-13 Determination, the Judges (and their
predecessors) relied on fee-based regressions but did not place a primary weight on this
approach. In the allocation proceeding for 1998-99 royalties, a Copyright Arbitration
Royalty Panel (CARP) relied on such a regression model put forth by an economist, Dr.
Gregory Rosston, not as a primary allocation measure, but rather as corroboration of the
allocation shares generated by the Bortz survey. See 1998–99 CARP Report at 46.
Subsequently, in the allocation proceeding for 2004-05 royalties, the Judges relied on the
fee-based regression model advanced by Dr. Joel Waldfogel (the now eponymous
“Waldfogel-regression”) as “generally reasonable” and thus “helpful to some degree”
because it “more fully delineat[es] all of the boundaries of reasonableness with respect to
the relative value of distant signal programming” and “provid[es] some additional useful,
independent information about how cable operators may view the value of adding distant
signals based on the programming mix on such signals.’’ 2004-05 Distribution Order at
57063, 57068. Accordingly, the Judges found, as did their predecessors in the 2004-05
proceeding, that the fee-based regression approach served to “corroborate” some aspects
of the Bortz survey and that it also served “to provide an independent reasoned basis” for
departing in one respect from the Bortz methodology. Id. at 57069.

Chronologically, the 2010-13 Determination was the next allocation decision to
consider the evidentiary weight to be given to a fee-based regression. In that case, the
Judges elevated the regression methodology, namely the model proffered by Dr. Gregory
Crawford (the Crawford Model), to a primary body of evidence in terms of explanatory
power. The Judges noted that the Crawford Model, like the Rosston and Waldfogel
regressions that preceded it, contained a useful differentiating feature: In contrast with
the survey approach, regression modeling “analyzed value from the perspective of what
CSOs actually had done in terms of deciding which distant signals to retransmit on their
systems.” 2010-13 Determination at 3610.141 But why did the Judges elevate the fee-based regression approach from the junior status of corroborative tool to a position of
evidentiary primacy?
The answer mainly lies in the improved way in which the Crawford Model was
constructed. Explaining this answer requires the Judges to present a brief tutorial on
regressions, based upon the testimony of the econometricians in this proceeding, the
textbooks they cited, and the background information set forth in the 2010-13
Determination.
Regression analysis is a “method of determining the relationship between two or
more variables, and it can be a valuable tool for resolving factual disputes.” 2010-13
Determination at 3556 (citation omitted). When a regression attempts to identify the
correlation between a “dependent variable”142 and more than one “independent
variable,”143 the approach is known as a “multiple regression analysis.”144 This is the
By contrast, the survey approach, as in the Bortz Survey proffered in this proceeding, asked each CSO-employed survey respondent, for a given year: “What percentage, if any, of [a] fixed dollar amount would
your system have spent for each category of programming?” Bortz Survey, app. B, attached to Trautman
WDT (emphasis added).
142 Typically, the dependent variable has been a functional form of royalties, see 2010-13 Determination at
3557 n.27, but in this proceeding, Dr. Tyler specifies a different dependent variable, the SGRP.
143 An “independent variable” serves to explain the dependent variable and is therefore also described as an
“explanatory” variable. 2010-13 Determination at 3567.
144 Multiple regression analysis ‘‘is the technique used in most econometric studies, because it is well
suited to the analysis of diverse data necessary to evaluate competing theories about the relationships that
technique that was employed by Dr. Crawford (and Dr. George) in the 2010-13
proceeding and in the present proceeding by Drs. George, Johnson and Tyler.145 Multiple
regression “is the technique used in most econometric studies, because it is well-suited to
the analysis of diverse data necessary to evaluate competing theories about the
relationships that may exist among a number of explanatory facts.” 2010-13
Determination at 3556 (citing ABA Econometrics, supra note 127, at 4). The basic
notation for a multiple regression would be, for example:
Y = a + bX + cZ + u
where
Y is the dependent variable
X is an independent (explanatory) variable
Z is a different independent (explanatory) variable
a is the intercept with the vertical axis (on a graphed
regression)
b is the coefficient (value) of X
c is the coefficient (value) of Z
u is the error term, a/k/a the “regression residual”
(reflecting unobserved factors that determine Y)
See 2010-13 Determination at 3556 n.23; Stock & Watson, supra note 92, at 158-59. If
econometricians are specifically interested in the impact of, say, independent
(explanatory) variable X on dependent variable Y, they will hold constant the effect of
any other independent (explanatory) variable, such as Z in the above example, which
reclassifies Z as a “control variable.”146
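The role of a control variable can be seen in a short simulation, offered purely as an illustrative sketch (the data are synthetic and the variable names hypothetical; nothing below is drawn from the record). Omitting a correlated explanatory variable distorts the coefficient of interest, and reintroducing that variable as a control restores the estimate:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Z is correlated with X; Y depends on both (true b = 2.0, true c = 3.0).
z = rng.normal(size=n)
x = 0.7 * z + rng.normal(size=n)
y = 1.0 + 2.0 * x + 3.0 * z + rng.normal(size=n)

# Regression omitting Z: the estimated coefficient on X absorbs part of Z's effect.
short_reg = sm.OLS(y, sm.add_constant(x)).fit()

# Regression including Z as a control: the coefficient on X is estimated near 2.0.
long_reg = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print(short_reg.params)   # coefficient on x biased well above 2.0
print(long_reg.params)    # coefficient on x near 2.0; coefficient on z near 3.0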
Because of changes in generated data as a result of statutory changes that
occurred subsequent to the determination covering the 2004-05 royalty years, Dr.
Crawford was able to construct a fee-based regression with more granular detail. The
Judges explained this change in data generation in their 2010-13 Determination:

may exist among a number of explanatory facts.’’ 2010-13 Determination at 3556 (citing ABA
Econometrics, supra note 127, at 4).
145 Dr. Marx utilized a Bayesian regression (described in detail infra) for 2014 that builds upon the multiple
regression work done by Dr. Crawford for 2013.
146 For the definition of a “control variable” see 2010-13 Determination at 3558 n.33.

Between the time of the last adjudicated cable royalty allocation proceeding
and the present [2010-13] proceeding, Congress passed the Satellite
Television Extension and Localism Act of 2010 (STELA). Before STELA, cable
operators were required to pay for the carriage of distant signals on a
system-wide basis, even though each signal was not made available to every
subscriber in the cable system. … STELA … amend[ed] section 111(d)(1)
of the Copyright Act, which details the method by which cable operators
can calculate royalties on a community-by-community or subscriber-group
basis. Id. From the 2010/1 accounting period and all periods thereafter,
cable operators have been required to pay royalties based upon where a
distant broadcast signal is offered rather than on a system-wide basis.
2010-13 Determination at 3554 (emphasis added).
This statutory change permitted the participants in these section 111 allocation
proceedings to analyze relative value at the subscriber-group level. 2010-13
Determination at 3554 (citing Corrected Written Direct Testimony of Gregory Crawford,
Ex. 2004 (Crawford CWDT) ¶ 66). More particularly, Dr. Crawford’s regression “looked
for a correlation in a subscriber group between changes in the number of minutes of
programming the subscribers watched by categories and changes in the percentage of
royalties the subscriber group paid while holding constant other potential explanatory
variables (called control variables).” 2010-13 Determination at 3558. As Dr. George
succinctly explained in her testimony in the present proceeding, “[w]ith [Dr.] Crawford’s
specification, coefficients are identified using only variation within systems in each
accounting period.” George WDT at 9 (emphasis added).
Dr. Crawford’s approach thus required the existence of at least two subscriber
groups in a cable system in order for the retransmission (and thus the programming)
decisions of a cable system operator (CSO) to be used in the regression. The purpose of
so limiting the regression was to focus on the relationship of interest in the regression,
which is the association between the minutes of per-category programming retransmitted
and the CSO’s royalties calculated at the subscriber group level. However, by so doing,
the Crawford Model reduced the number of observations that it could utilize. In the
2010-13 proceeding, the Crawford model was criticized by the SDC and one of its

experts, who argued that his regression approach was “compromised” by this limitation,
which “‘effectively discarded’ approximately 15% of his observations by disregarding
observations from systems with a single subscriber group … ‘approximately half of all
systems in his data set’ ….” 2010-13 Determination at 3566 (citations omitted).
But what the SDC saw as vice, Dr. Crawford (and ultimately, the Judges)
understood as virtue. That is, Dr. Crawford included this combined control limiting the
observations to intra-cable system subscriber group variations in a particular six-month
accounting period in his regression to avoid introducing (i.e., to control for) effects on
royalties of different business strategies among CSOs (“system” effects) and different
economic conditions over time (“accounting period” effects). In a regression, these two
joint interactive controls are examples of a particular form of control known as a “fixed
effect.”147
Additionally, while his regression was a work in process, Dr. Crawford added
another fixed effect for the “top-six” MSOs148 for similar reasons, i.e., to control for their
variable “average receipts … signal carriage strategies, pricing, and other relevant
dimensions.” 2010-13 Determination at 3567 (record citations omitted).
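A compact sketch of how such joint fixed effects can be entered as indicator (dummy) variables follows; it is hypothetical and uses simulated data, and the column names do not correspond to any party’s data set:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Hypothetical panel: one row per subscriber group per accounting period.
df = pd.DataFrame({
    "system": rng.integers(0, 20, size=n),
    "period": rng.integers(0, 4, size=n),
    "cat_minutes": rng.uniform(0, 1_000, size=n),
})

# System-level differences stand in for differing CSO business strategies.
system_effect = rng.normal(size=20)
df["royalties"] = (5 * df["cat_minutes"]
                   + 100 * system_effect[df["system"].to_numpy()]
                   + rng.normal(scale=50, size=n))

# Joint system-by-accounting-period fixed effects, entered as dummy variables.
df["system_period"] = df["system"].astype(str) + "_" + df["period"].astype(str)
fe_model = smf.ols("royalties ~ cat_minutes + C(system_period)", data=df).fit()

print(fe_model.params["cat_minutes"])  # estimate should lie near the assumed value of 5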
More broadly, Dr. Crawford explained that his fee-based regression was intended
to explain the association between program category minutes and royalties paid. To that
end, it was necessary to control for other factors, specifically including “the numbers of
local and distant stations, the number of activated cable channels, and the size of the
CSO.” 2010-13 Determination at 3558 (record citations omitted). These were in

For the definition of “fixed effects,” see 2010-13 Determination at 3563 n.52. Graphically, the inclusion
of “fixed effects” generates different intercepts, such that “a” in the example supra would have a different
value for each “fixed effect.” (Econometricians sometimes describe “fixed effects” as a type of “control
variable,” but they are more often specifically characterized as “indicator” or “dummy” variables. See
2010-13 Determination at 3562 n.45.)
148 “MSO” is an acronym for a “multi-system operator,” for example Verizon, 3/21/23 Tr. 347 (Johnson),
and refers to “an operator of multiple cable or direct-broadcast satellite television systems [and is] usually
reserved for companies that own multiple cable systems, such as Altice USA, Charter Communications,
Comcast and Cox Communications ….” List of Multiple-System Operators, Wikipedia,
https://en.wikipedia.org/wiki/List_of_multiple-system_operators (last visited Aug. 10, 2023).
addition to other independent variables that Dr. Waldfogel identified as “control
variables”, including “the number of subscribers, local median income, and the number
of local channels.” 2010-13 Determination at 3557.
In the present proceeding, Dr. George has well stated the role of control variables
in multiple regressions relied upon by Dr. Crawford and by experts in the present
proceeding:
The purpose of control variables is to account for factors other than
coefficients of interest that might affect the dependent variable. In the case
at hand, control variables are chosen to account for market factors other than
distant signal programming minutes that might affect royalty payments. Of
particular concern are factors that affect demand for cable services, which
in turn can affect the number of subscribers, system revenue, and royalty
payments. Failing to control for factors that shift demand and are correlated
with programming minutes can lead to bias in the … coefficients that are of
primary interest. Income, the number of local stations and (lagged) number
of activated channels are all factors that might affect the number of
subscribers or revenue so are included as controls.
George WDT at 52. Indeed, as the Judges explained in the 2010-13 Determination, Dr.
Crawford’s approach was designed so as to accept some loss of precision (i.e., a greater
variance and larger standard errors) in exchange for less bias (by excluding other
independent variables). This tradeoff is an inevitable problem for an econometrician, and
how an econometrician balances these impacts is just as much an art as it is a science.
2010-13 Determination at 3565 & n.59. The Judges noted, though, that the tradeoff was
moderated because Dr. Crawford “used the universe of all programming on all distant
signals, rather than a sampling” which created a “rich data set” that served to “mitigate”
the impact of his fixed effects “so that his parameters remained relatively precise.” 2010-13 Determination at 3569.
Accordingly, in the 2010-13 Determination, the Judges essentially agreed with
Dr. Crawford’s modeling decision to include his fixed effects, because he threaded the
needle, minimizing bias while maintaining a sufficiently precise relationship between
per-category programming minutes and royalties generated. Indeed, a key reason the

Judges elevated the Crawford Model to primary evidentiary status was that “his use of a
fixed effects approach avoided the criticism that he had omitted key variables.” 2010-13
Determination at 3569 (citing Crawford CWDT ¶ 107; 2/28/18 Tr. 1398 (Crawford))
(emphasis added).
According to all the experts utilizing fee-based regressions, in whole or in part,
this econometric virtue extended through 2014. But in 2015, a commercial earthquake
struck the retransmission market: WGNA, by far the most distantly retransmitted
channel, converted from a broadcast station into a cable channel. See, e.g., Majure WDT
¶ 75 (JSC expert witness noting that “[t]he removal of the widely carried WGNA
materially changed the manner in which CSOs used the section 111 license.”). This
metamorphosis had several dramatic effects, one of which was the diminished evidentiary
value of Dr. Crawford’s new approach of limiting the observations to subscriber group
variations within a cable system (accomplished by imposing his systems-accounting
period fixed effects).149
After the WGNA conversion, commencing in 2015, the number of cable systems
with more than one subscriber group declined significantly. Moreover, what had been a
robust source of data for analysis of variation of distantly retransmitted program
categories among the local channels distantly retransmitted by CSOs had shrunk. To
address the loss of this robust set of data, the fee-based regression experts in the present
proceeding each constructed a model that, although premised on the Crawford Model,
sought a work-around for this significant change.
Dr. Johnson addressed the problem by eliminating all fixed effects from his
preferred model, i.e., the “baseline” model presented in his WDT. In doing so, the
Johnson Model was able to generate observable data points that showed programming

149 The WGNA conversion also (1) substantially reduced the number of CSOs paying the base fee (and
concomitantly increased the converse, the number of CSOs paying only the minimum fee) and (2)
drastically reduced the number of JSC subscriber-minutes distantly retransmitted.
variations not just among subscriber groups within a cable system in a specific
accounting period (as the Crawford Model had done), but also program variations among
subscriber groups across systems and across (not within) the six-month accounting
periods in the SOAs.
Curiously, Dr. Johnson’s justification for this change was that it allowed for an
increase in the number of observations for his regression, thus addressing what he
understood to be a key concern of the Judges in the 2010-13 Determination. Compare
Johnson WDT ¶ 59 (“Professor Crawford’s model was criticized because it ‘effectively
discarded’ approximately 15% of his observations … which totaled approximately half of
all systems in his data set”) with id. at ¶ 62 (touting his model for containing “18,000
subscriber group-level observations”).
The Judges in that proceeding did not find the number of Dr. Crawford’s
observations to be a debilitating problem, declining to find that the Crawford Model was
overfit. Rather, the Judges found that Dr. Crawford’s balancing of a
minimization of explanatory bias with an acceptable loss of measurement precision was
appropriate to the task the regression was seeking to measure, i.e., the correlation
between program category minutes and the log of royalties paid. In so finding, the
Judges had acknowledged the value of the fixed effects (and the control variables) in his
model in allowing for the isolation of the correlation. 2010-13 Determination at 3569.
Accordingly, Dr. Johnson’s claimed justification for eliminating all of these
important fixed effects rings hollow. Moreover, their absence from his model increased
the bias in his measurements, which meant that the correlation was subject to
mismeasurement. More particularly, the bias in question is what econometricians and
statisticians in general refer to as “omitted variable bias.” Here, the “omitted variables”
are the ones that the Crawford Model had accounted for with its fixed effects, but whose
biasing influence Dr. Johnson injects into his model by eliminating those fixed effects. Accordingly, by this

change, the Johnson Model became less probative of the claimed correlation between
program category minutes and royalties, and for that reason alone the Judges place less
weight on the Johnson Model in this proceeding than they did on the Crawford Model in
the 2010-13 Determination.
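As a general textbook illustration (not specific to any model in the record): if the true relationship is y = β1·x1 + β2·x2 + ε, but x2 is omitted from the regression, the estimate of β1 tends toward β1 + β2·δ, where δ is the slope from a regression of the omitted x2 on the included x1. The estimate is unbiased only if the omitted variable has no effect (β2 = 0) or is uncorrelated with the included variable (δ = 0). Removing fixed effects that absorb system and accounting-period influences correlated with programming minutes therefore risks attributing those influences to the programming-minute coefficients themselves.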
Dr. George, unlike Dr. Johnson, did not eliminate all fixed effects. Rather, as
discussed supra, she eliminated some, retained and/or modified others, and included new
fixed effects. Most importantly, the George Model modified Dr. Crawford’s “systems-accounting period fixed effects.” Whereas the Crawford Model limited the observed data
points to differences among subscriber groups within a cable system during an
accounting period, Dr. George relaxed that fixed effect. Specifically, she only limited the
number of observed data points by separately fixing the effect at the “systems” level and
at the “accounting period” levels. So, for example, if there were two subscriber groups in
the Verizon Buffalo cable system, the Crawford Model would only observe the variations
between them in a given (six-month) accounting period. By contrast, the George Model
would: (1) observe variations between those two subscriber groups in the given (six-month) accounting period; and also (2) observe variations between them beyond the (six-month) accounting period. Thus,
Dr. George maintained a fixed effect that still controlled for the difference in CSO
business practices and a fixed effect control for changes over time (the “accounting
period” control), but, unlike Dr. Crawford, she did not combine the two fixed effects.
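In stylized terms (again, illustrative notation only), the Crawford Model estimated a single combined intercept a_(s,t) for each system-accounting period pair, whereas the George Model estimated separate intercepts a_s (one per system) and d_t (one per accounting period) and entered them additively. The combined intercept absorbs any influence particular to a given system in a given period; the separated intercepts absorb only influences that are constant for a system across all periods, or constant for a period across all systems, leaving within-system changes over time to enter the estimated correlation.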
Alternately stated, Dr. George sought to address the loss of observable data points
caused by the 2015 WGNA conversion by making a different tradeoff in the inevitable
bias/variance dilemma faced by the econometrician in this context. She opted to accept
somewhat more bias in exchange for somewhat greater precision, generating what she
understood to be a useful number of observations for her regression to analyze.
Although Dr. George makes a less draconian change from the Crawford Model
than the Johnson Model does in this regard, she nonetheless introduces “omitted variable

bias” into her regression. That is, by allowing variations over time (within a cable
system) to impact the correlation, the George Model treats temporal changes as reflective
of a correlation between program category choices and royalties.150 In sum, the George
Model introduces omitted variable bias that was absent from the Crawford Model, but to
a lesser degree than the Johnson Model. Accordingly, ceteris paribus, the Judges give
more evidentiary weight to the George Model than to the Johnson Model.
By contrast, Dr. Tyler’s approach circumvents this fixed effects dilemma. As
explained supra, the Tyler Model does not use royalties (linear or log form) as the
dependent variable. Rather, the Tyler Model uses the SGRP as the dependent variable.
Recall that the SGRP is a fraction: the dollar amount of base fee royalties calculated by a
subscriber group divided by the SG’s gross receipts. The Tyler Model then looks at the
variability in this SGRP across all cable systems. So, what happens to the effects arising
from different CSOs (the “systems” effects) and the changes over time (the “accounting
period” effect) for which Drs. Crawford and George (but not Dr. Johnson) sought to
control with “fixed effects”? As Dr. Tyler explains, the system and temporal
(“accounting period”), indeed, essentially all fixed effects, are rendered inapplicable
when the dependent variable is the SGRP, rather than a form of royalties:
The Crawford Model used fixed effects. The inclusion of fixed effects
would make sense if the SGRPs varied across CSOs due to unobserved
factors in the marketplace (other than and apart from choices related to
stations, and the minutes in those stations). If that were the case, the use
of … fixed effects would focus the model on the economic decision-making
by a CSO for an accounting period across subscriber groups, having
controlled for these unobserved factors.
However, my model … instead … us[es] SGRP for the dependent variable.
The SGRPs for each subscriber group are specified by statute (following the
carriage decisions made by CSOs) – an industry characteristic that greatly
reduces (and possibly eliminates) concerns over unobserved factors that
might impact SGRPs.
150 This bias is particularly pertinent vis-à-vis the cleave between 2014 and the 2015-2017 period, given the
WGNA conversion that shook the distantly retransmitted sector. Moreover, Dr. George (like Dr. Johnson)
“pooled” her data and applied it to generate one set of coefficients spanning the entire four-year (2014-17)
period. By relaxing the fixed effects to obscure the impact of changes over time, the George Model failed
to appropriately address the WGNA-conversion effect.
Tyler ACWDT ¶ 87 n.71. Program Suppliers added an equivalent explanation of this
point in their post-hearing briefing:
Substantial irrelevant variability exists across the royalty amounts
calculated for each subscriber group. For example, greater royalty amounts
might be determined for a subscriber group for no other reason than one
subscriber group has more subscribers or higher prices, or both, than another
subscriber group. PFF ¶¶ 290, 351. And those prices may vary based on
factors like cable networks carried, customer service, bundling with internet
and phone, or other factors unrelated to distant signal carriage. PFF ¶ 290.
A regression model using royalty amounts calculated as the dependent
variable must control for these sources of variability in an attempt to isolate
the incremental value of minutes by category type. PFF ¶ 290. Unlike
royalty dollar amounts, SGRP does not vary across CSOs due to unobserved
factors in the marketplace – other than from choices related to distant
signals. Thus, because the Tyler Model uses the more targeted SGRP, and
not royalties, the Tyler Model can more precisely measure the incremental
value of various types of minutes within each year. PFF ¶¶ 291-92. With
less irrelevant variability to explain in the dependent variable, the Tyler
Model can focus on the relationships at issue in a way that other models,
which use royalties as the dependent variable, cannot. PFF ¶¶ 291-92.
Furthermore, because SGRP does not vary for reasons unrelated to distant
signal carriage, fixed effects (meant to control for unobserved sources of
irrelevant variability) are not necessary. PFF ¶ 292.
PS PHB at 38.
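A purely hypothetical arithmetic illustration (the dollar figures are invented for this purpose and appear nowhere in the record) makes the point concrete. A subscriber group with $1,000,000 in gross receipts for an accounting period and carrying 1.0 DSE would have a calculated base fee of 1.064% × $1,000,000 = $10,640, for an SGRP of 0.01064. A second subscriber group carrying the same 1.0 DSE but with $2,000,000 in gross receipts would have a calculated base fee of $21,280 – twice the royalty dollars – yet an identical SGRP of 0.01064. A royalty-denominated dependent variable thus varies with subscriber counts and prices even when carriage choices are identical, while the SGRP varies only with the carriage (DSE) choices themselves.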
Thus, the Judges understand that other demand effects (such as the impact on
demand from differences in, e.g., service quality, pricing, etc.) impact the gross receipts,
not the royalty decisions.151 The Judges further note that – although other parties and
their experts criticize the Tyler Model for not including fixed effects and note how shares
would change if fixed effects were added – none of the parties or experts addresses Dr.
Tyler’s point, discussed supra, that when the dependent variable is the SGRP rather than
a form of royalties, fixed effects are unnecessary because there is no variable omitted that
will impact the dependent variable.

151 Critics of the Tyler Model maintain that by avoiding the fixed effects problem in this manner, the Tyler
Model throws out the baby with the bathwater, in that it fails to correlate the royalties paid with the discrete
categories of program minutes, which is the entire point of the exercise. That is, the Tyler Model allegedly
fails to uncover the variation in royalties associated with different categories of programming minutes.
(And, as some econometric critics of the Tyler Model have testified, it merely “reproduces the statutory
formula.”). As explained infra, the Tyler Model, like the other regression approaches, multiplies its derived
coefficients by the number of program minutes associated with each of the six program categories,
generating allocation shares on a per-program category basis.
Another way to understand the evidentiary problem caused by eliminating or
relaxing the fixed effects (as in the Johnson and George Models (but not the Tyler
Model)) is to consider a crucial point made in the 2010-13 Determination and again in
this proceeding – the difference between an “explanatory” regression and a “prediction”
regression. In this regard, the Judges stated in the 2010-13 Determination:
The Waldfogel-type regression is an example of modeling utilized to
explain the effects of different program categories on the relative payment
of royalties – rather than an attempt to predict the level of royalties. Thus, …
the choice of variables can reasonably be based on the “underlying
theoretical model.” [G. Shmueli, To Explain or to Predict?, 25 Statistical
Science 289, 290-91, 297 (2010)]; see also F. M. Fisher, Econometricians
and Adversary Proceedings, 81 J. Am. Stat. Ass’n 277, 279 (1986) (“There
is a natural view that models are supposed to do nothing other than
predict …” resulting in the “danger” of ignoring “better models that do not
fit or predict quite so well but are in fact informative about the phenomena
being investigated.”) (emphasis added).
2010-13 Determination at 3564. As in that prior proceeding, the purpose of the fee-based
regressions is to “explain” the posited correlation between distantly retransmitted
program minutes and royalties. It is unsurprising that other variables may be more useful
as “predictors” of royalties, but that is quite another matter. In this regard, in the 2010-13
Determination the Judges approvingly cited the following testimony by Dr. Crawford:
Dr. Erdem misunderstands the purpose of an econometric analysis in this
proceeding. . . . For the goal of prediction, the focus is on finding the
explanatory variables that best predict the outcome of interest …. [I]f the
goal is to predict stock prices[,] and the price of tea in China helps, then …
include it in the model (and don’t worry about the economic interpretation
of its coefficient).
That is not the purpose in this proceeding, however. In this proceeding,
experts are using econometric analyses to help the Judges determine …
relative marketplace value …. The dependent variable in these regressions,
the royalties cable operators pay for the carriage of the distant signals, are
informative of this relationship …. The key explanatory variables in this
relationship, the minutes of programming of the various types carried on
distant signals, are informative as the impact they have on royalties reveals
the relative market value of each programming type. Other explanatory
variables are included in the model to control for other possible
determinants of cable operator royalties. This helps improve the statistical
fit of the regression (to “reduce its noise”), providing more precise estimates
of the impact of programming minutes that are the focus of the analysis…

The goal here is to find the econometric model that can best reveal relative
marketplace value. Doing so means crafting the econometric model to
reflect the institutional and economic features of the environment that is
generating the data being used …. Crawford WRT ¶¶ 91–94 (footnotes
omitted) (emphasis added).
2010-13 Determination at 3564. No critic of the regression approach has persuasively
addressed this finding in the 2010-13 Determination that relies on the distinction between
a regression designed for “prediction” and a regression designed to measure the “effect”
of a variable of interest.
Consistent with this testimony, the Judges held that it is not their “statutory task . .
. to identify and rank all the causes of a change in total royalties.” Rather, the Judges’
“legal, regulatory, and economic task … is to determine the relative market value of
different categories of programming,” and thus correlations between royalties and other
independent variables, for example, between royalties and the number of subscribers, “is
not in furtherance of that objective.” 2010-13 Determination at 3564.
The WGNA conversion not only reduced the number of subscriber groups, as
discussed supra, but also significantly reduced the number of CSOs that actually paid the
base fee, as opposed to the minimum fee. A number of experts captured this undisputed
effect, and Dr. Marx’s testimony below in this regard is clear and illustrative:
For necessary context, it is instructive at the outset of this section to consider how
the minimum fee issue was addressed in the 2010-13 Determination. There, the Judges
found as follows:
1. “[A] CSO’s decision to distantly retransmit any particular station, when that
CSO is otherwise obligated to pay the minimum royalty fee, does not indicate
a direct correlation between the decision to retransmit and the decision to
incur a royalty obligation.” 2010-13 Determination at 3568.

2. “[D]uring the 2010-2013 period, on average 527 out of the 1,004 Form 3
CSOs analyzed (52.5%) chose to retransmit the exact or fewer number of
signals than the regulated fees permitted [and] 83 paid the minimum fee yet
elected not to retransmit any local stations. … Those decisions reveal that the
CSO has concluded (whether by analysis or resort to a heuristic) that any of
the marginal costs (physical or opportunity) associated with retransmission
likely exceed the value to the CSO of such retransmission, even accounting
for minimum royalties, which the CSO must pay in any event.” 2010-13
Determination at 3568.
3. “Although there is no marginal royalty cost associated with th[e] decision [to
retransmit stations when … obligated to pay only the minimum royalty], the
CSO’s decision as to which stations to retransmit remains a function of
choice, preference, and ranking. Thus, the CSO in this context would still
have the incentive to select distant local stations for retransmission that are
more likely to maximize CSO profits, through either an increase in
subscribership or, as Ms. Hamilton emphasized, by avoiding the loss of
subscribers through the preservation of ‘legacy carriage’ through the non-analytical heuristic of maintaining the status quo.” 2010-13 Determination at
3569.
4. “There are substantial economic bases for this finding. Because the ‘tax’ of
the minimum fee is paid regardless of whether distant retransmission occurs,
that ‘tax’ is also in the nature of a sunk cost. Fundamental economic analysis
provides that a seller should ignore sunk costs when making marginal
decisions (although they should try to recoup these costs if the buyers’
willingness-to-pay allows it). Nonetheless, a CSO that decides to distantly
retransmit a station when the marginal royalty cost is zero has revealed that

the particular station contains programming that would increase marginal
value to that CSO, over and above the next best alternative ‘retransmittable’
local station and above any other marginal costs (e.g., physical retransmission
costs or the opportunity cost of foregoing a different type of cable channel in
the CSO’s channel lineup).” 2010-13 Determination at 3569.
5. “CSOs that pay only the minimum royalty fee and elect to distantly retransmit
one station might have elected to pay a positive fee in the absence of the
minimum fee. For example, assuming Program Suppliers’ programs were
more valuable to a CSO than the minimum fee and disproportionately more
valuable than any other program category, that CSO would have retransmitted
a station that disproportionately included Program Supplier content and
willingly paid the minimum fee (or more).” 2010-13 Determination at 3659.
6. “[A]n analysis of the CSOs paying only the minimum fee might provide some
useful information. However, … the record does not provide an adequate
basis to incorporate any “relative value” differences based on a distinction
between CSOs that do and do not pay only the minimum fee.” 2010-13
Determination at 3582. See also id. at 3575 (“[T]he Judges find no basis in
the record by which they could or should make a reasonable ‘relative value’
adjustment based on whether a CSO did or did not pay only the minimum
fee.”).
7. “[T]he data regarding the carriage decisions of CSOs who pay only the
minimum fee should not be disregarded [because] even when a CSO is
obligated to pay the minimum royalty fee, it still has the incentive to select
stations for distant retransmission that it believes will maximize the benefits
(or, in economic terms, utility) to the CSO. However, because carriage
decisions are not tied even indirectly to a contemporaneous discretionary

decision to pay royalties (beyond the mandatory minimum 1.064% for the first
DSE), they strike the Judges as potentially less informative than discretionary
decisions by CSOs to incur an additional royalty expense in order to distantly
retransmit particular stations.” 2010-13 Determination at 3575.
The Judges consider these minimum-fee-related points in the context of the
present factual record, which reveals a dramatically different retransmittal landscape for
the final three years of the period at issue, 2015 through 2017.152
There is a sub-group within the minimum-fee-only CSOs that decided not to
distantly retransmit any local signals despite their duty to pay the minimum fee. Exactly
what this decision indicates as to their revealed preferences is unclear from the record.
One industry witness suggests that some or all of these CSOs had alternative uses for
their bandwidth, for, e.g., other cable programming or internet traffic. Written Rebuttal
Testimony of Lynne Costantini, Trial Ex. 7304, at 4-5 (Costantini WRT); 3/27/23 Tr.
1597-1605 (Costantini). But several other witnesses testified that bandwidth concerns no
longer existed in the 2014-2017 period, because cable television had converted from
analog to digital signals. Written Direct Testimony of Allan Singer, Trial Ex. 7108, ¶ 15
n.1 (Singer WDT); Written Rebuttal Testimony of Allan Singer, Trial Ex. 7109, ¶ 8 n.1
(Singer WRT); 4/3/23 Tr. 2764-65 (Singer); Written Rebuttal Testimony of Melinda
Witmer, Trial Ex. 7115, ¶ 13 n.3 (Witmer WRT); 4/10/23 Tr. 4069-70 (Witmer).
Other evidence indicated that CSOs that previously retransmitted WGNA until its
conversion to a cable channel simply found no alternative out-of-market
local channel programming sufficiently attractive to existing or potential new subscribers
to be worth retransmitting. Of course, this argument raises its own questions,
because, given that the marginal royalty cost is zero, the presumption of economic
rationality strongly suggests that, ceteris paribus, these CSOs would have distantly
152 The Judges discuss the minimum fee issues separately and in depth elsewhere in this determination.

retransmitted some out-of-market local channels’ programming.153 But the reasonable
presumption of economic rationality requires the presumption that these CSOs were
incentivized not to distantly retransmit additional stations. One logical reason would be
that they saw no value at all in retransmitting those stations and programming, such that
any organizational effort in that regard would be a soft cost sufficient to preclude such
transmissions. In this regard, the Judges again take note of Ms. Hamilton’s designated
testimony, in which she emphasized the de minimis nature of the revenues at issue with
regard to these potential retransmissions.154
But the foregoing points hardly end this analysis. When CSOs have “excess capacity” to retransmit signals/programming at zero marginal royalty cost, or when a
CSO has declined to exercise its section 111 “privilege” to retransmit any signals or
programming, they have differentiated themselves from above-minimum-fee-paying
CSOs in a manner that is of both significant economic and evidentiary importance.
The minimum-fee-paying CSOs have revealed a marginal willingness-to-pay of zero for
the distant retransmittal of local broadcast stations. The several parties and their
economic experts opposing the regression approach in this proceeding make a reasonable
objection that it is improper to treat the calculated-but-unpaid base fees of these CSOs as
any evidence of the revealed preferences and willingness-to-pay of a minimum-fee-only

153 This point also applies to CSOs that distantly retransmitted some local stations, but had excess capacity,
i.e., the capacity to distantly retransmit more of these stations and still not pay more than the minimum fee.
154 Ms. Hamilton’s point would tend to explain more than why some CSOs do not retransmit any signals. It
may explain, for example, why Bortz Survey respondents have a myriad of job titles, and why the
respondents are not consistently the same from year-to-year (i.e., that no one is really dedicated to this
function). Her point would also seem to explain why the CSO decisions from 2010-13 and from 2014 were
so consistent: because concomitant with Ms. Hamilton’s de minimis argument is her point that the CSOs
focused on preserving existing subscribers whose subscription decisions might turn on the continued
presence of niche programming from distantly retransmitted stations. Indeed, Ms. Hamilton seems to have
been prescient: After 2014, the abandonment of all distant retransmissions by CSOs that had only distantly
retransmitted WGNA is consistent with her emphasis on legacy carriage. (That is, viewers who had valued
WGNA enough to subscribe to a CSO on that basis were no longer legacy viewers who could be retained
once WGNA converted.)
The Judges are also struck by the absence of evidence that would be compelling, to wit, the absence of
evidence that any CSO has marketed its service to any subscribers who might be induced to remain or
become subscribers based on the program offerings by out-of-market stations they distantly retransmit.
The Judges decline to take administrative notice that CSOs (or their subscribers) actually contemplate these
offerings when considering subscription decisions; in fact, the Judges’ own “reality filter” would suggest
that the opposite presumption would be more realistic.
CSO. But, assuming, arguendo, that this reasonable objection is entirely correct,155 what
is the appropriate way to consider the decisions of CSOs who do not reveal a positive
value for such distant retransmittals?
The Judges find that these CSO decisions can be construed in two ways. First,
they can be considered to reveal a zero value for these retransmittals, given that the
marginal royalty cost of retransmittal is zero through a retransmittal of 1.0 DSE. And
second, they could be construed as simply not providing any useful data regarding the
value the CSOs assign to these retransmittals, because that value, although perhaps
positive, is still less than the (non-royalty) cost of retransmitting.156 But in either
construal, the relevant takeaway is that these CSO decisions do not provide the Judges
with any useful information157 regarding the relative value of the retransmittal of the
various programming categories, the determination of which is the statutory task assigned
to the Judges under section 111.
So understood, why should the decisions of these minimum-fee-only CSOs serve
to diminish the economic and evidentiary usefulness of the decisions of the other CSOs
who pay base fees above the minimum fee? That is, it is misleading, to say the least, to
categorize the base-fee-paying CSOs as merely a small cohort of the larger population of
CSOs, when they are differentiated by the key marker for section 111 purposes: whether
they assign a relative value to the retransmittals and thus relative values to the
retransmitted programs. The Judges find it more accurate and appropriate to consider the

155 It is not entirely correct. As noted by Dr. Tyler, discussed infra, the calculated-but-unpaid base fees of
CSOs that ultimately pay the minimum fee would have some probative weight as those base fees approach
the minimum fee, given the uncertainty, ex ante royalty payment, as to whether the base fee or the
minimum fee would ultimately bind. However, the record does not provide the Judges with disaggregated
data sufficient to analyze the minimum-fee-paying CSOs on this basis.
156 These non-royalty costs include, but are not necessarily limited to, (1) the physical cost of retransmittal
and (2) the transaction costs and opportunity costs associated with expending effort making retransmittal
choices regarding distant local stations that had de minimis value (the choices, if not the stations and
programming themselves) relative to the other decision-making undertaken by CSOs.
157 That is, a zero value for all retransmitted programming is invariant and thus uninformative of relative
value, and an absence of a revealed value fails to provide absolute value as well as relative value.
base-fee-paying CSOs essentially as a separate cohort of CSOs whose decision-making is
pertinent to a regression analysis in this statutory context.
Indeed, this is precisely how the Judges perceived the issue in the 2010-13
Determination. There, only a minority of CSOs, 47.5%, paid above the minimum fee, but
their decisions were extrapolated to the entire market. 2010-13 Determination at 3568
(“during the 2010–2013 period, on average 527 out of the 1,004 Form 3 CSOs analyzed
(52.5%) chose to retransmit the exact or fewer number of signals than the regulated fees
permitted [and] 83 paid the minimum fee yet elected not to retransmit any local stations”
– meaning that less than half of CSOs “voluntarily paid a royalty greater than the
minimum fee.”). Nonetheless, the Judges deemed that minority of CSOs sufficient to
justify using the entirety of the base fee calculations (whether paid or unpaid) to establish
relative marketplace value.
But that extrapolation was hardly precise in the context of the slight majority
presence of minimum-fee-only CSOs, a context which could have suggested a need for a
proportionate weighting of the decisions of the base-fee-paying CSOs.158 However, when the
base-fee-paying CSOs are considered as the separate and only cohort actually revealing
their relative programming valuations, rather than a mere subsample of the entire
population of CSOs, then their revealed preferences are seen to reflect 100% of the
information regarding relative value generated from CSO decision-making. Implicitly,
that is what the Judges did in the 2010-13 Determination.

158 Dr. Marx noted that the 52.5% of CSOs not covered in the Crawford Model included many that had only
one subscriber group and would have been excluded from Dr. Crawford’s regression anyway, so 80% of all
the CSOs eligible for inclusion in the Crawford Model (and their programming and royalty data) were in
the regression. There are two problems with this point. First, because only 80-85% of the CSOs were
covered, even then the evidentiary weight of the decision-making of those CSOs should have been
discounted proportionately, if proportionality is relevant. Indeed, in this proceeding, Dr. Marx testified
that, in her opinion, whether to consider the revealed preferences of some CSOs should be a matter of
“degree,” which is distinct from treating some proportion as a tipping point sufficient to be used in toto.
Second, the reason why “only” 47.5% of the CSOs were included in the Crawford Model is not really
relevant to the question of why this minority cohort should generate the entirety of revealed preference
value for regression purposes.
Further, the Judges are mindful of the testimony by Dr. Marx (herself no fan of
the application of the fee-based regression for the 2015-2017 period) that “the most
informative observations in a Crawford-style regression are ones in which a CSO elects
to pay more than the minimum fee in royalties in order to carry additional distant signals
….” Marx WRT ¶ 64 (emphasis added).159
Colloquially, the issue may be characterized as whether the Judges should let the
perfect be the enemy of the good. Here, the “perfect” fact pattern would be where all or
most of the data is generated by CSOs paying above the minimum fee. That is not the
factual context here. But there is “good” evidence from the CSOs who did retransmit
enough programming to trigger the base fees of their subscriber groups, and the
Judges do not ignore that data.160
Accordingly, the Judges will give due weight to the minority of CSOs that, in the
2015-2017 period, paid above the minimum fee and thus revealed their preferences by
paying an additional royalty in order to retransmit one or more additional stations. To be
clear, in their weighing of this evidence, the Judges perceive the above-minimum-fee
CSOs as providing evidence from three perspectives: (1) reflecting 100% of all the CSOs
who did reveal their preferences in a cardinal manner, which supports the assignment of
due weight to their station and programming choices; and (2) reflecting only a minority
of the revealed preferences of the CSOs that found the value in distant retransmissions of
local broadcast stations sufficient to add such stations to their lineup – a lower
percentage which therefore would support a lower evidentiary weight; and (3) reflecting

159 Dr. Marx also equates a CSO paying above the minimum fee with a CSO that “pays the minimum fee
with no capacity for carrying additional signals.” Marx WRT ¶ 64. The Judges disagree. Such a
minimum-fee-paying CSO is not revealing a preference in the same manner as a CSO paying above the
minimum fee, but rather is taking full advantage of the zero-marginal-royalty cost feature of the minimum
fee obligation. The Judges find it more appropriate to treat such minimum-fee/no-excess-capacity CSOs in
the same manner as an excess-capacity CSO because the actual marginal cost of their respective
retransmittal preferences is zero.
160 Even information from data that includes CSOs paying only the minimum fee has an evidentiary
purpose, as noted infra regarding an adjustment to the allocations based on the Tyler Model.
the revealed preference of an even smaller slice of CSOs and their programming, thus
supporting the lowest level of evidentiary weight among these three perspectives.161
B. A Separate Criticism: The Tyler Model as a “Fee Generation” Model
Two parties, SDC and PTV, ask the Judges to reject the Tyler Model by
characterizing it as “similar” to a “fee generation” approach to the section 111 royalty
allocation issue, asserting that this approach is improper and has been rejected previously
by the Judges and their predecessors. SDC and PTV are incorrect, and this criticism
deserves its own separate section.
The fee generation approach has been defined as ‘‘a valuation method that
attempts to measure the amount of royalties actually generated by a particular claimant
group.’’ Report of the Copyright Arbitration Royalty Panel to the Librarian of Congress,
Docket No. 2001–8 (CARP CD 98–99) at 60. In its attempt to characterize the Tyler
Model as a fee generation approach, SDC maintains as follows:
[Dr. Tyler’s] approach could be viewed as similar in notion to the “fee
generation” approaches that the Judges and their predecessors rejected in
days long past (see, e.g., 2004-05 Distribution Order, 75 FR at 57071-73
(“[F]ee generation is not persuasive as the best method for determining
relative marketplace value because of the Canadian Claimants’ failure to
firmly link the relationship between section 111 royalties to that value”)).
SDC PFF ¶ 138. See also 6/12/23 Tr. 6007 (SDC counsel’s closing argument)
(describing the Tyler Model as “a fee-generation methodology.”).
Similarly, PTV argues:
Dr. Tyler’s regression resembles the fee generation methodology, which
attempts to assess relative value based on statutory royalties generated by
cable retransmissions. [The] [j]udges have repeatedly considered and
rejected the fee generation methodology because the statutory royalties do
not relate to the relative value of the distantly retransmitted programming.
PTV PFF ¶ 159.

161 As noted supra, the Judges will discuss infra the evidentiary weights they apply, in combination with
the evidentiary weights they give to all of the probative evidence.
Of course, to assert, as SDC and PTV do, that the Tyler Model may merely
“resemble,” or be “similar to” a fee generation model, is also to say that the Tyler Model
is not a fee generation model. Moreover, the Judges disagree with these fee-generation-based arguments for two further reasons. First, the assertion that the Judges have rejected
the fee generation methodology is simply wrong. Second, the argument (that the Tyler
Model’s passing resemblance to a fee-generation approach invalidates its use) fails to
address the particular merit of this approach given the evidentiary record.
With regard to the prior rulings regarding fee-generation approaches, Program
Suppliers accurately and compellingly demonstrate the incorrectness of the claim that
these rulings have rejected a fee-generation approach and precluded its use (or the use of
any similar model) in these allocation proceedings. Specifically, Program Suppliers
emphasize the Judges’ most recent ruling on this issue, in the 2010-13 proceeding:
[T]he Judges ruled that fees-based regression analyses are distinguishable
from analyses of fees-generated. In their post-Initial Determination Order
Denying Rehearing [in the 2010-13 proceeding] … the Judges specifically
rejected the claim that fee-based regressions are the same as “fee
generation” approaches. They held that fee-based regressions “identif[y] a
positive statistical relationship between (a) royalties paid by CSOs; and (b)
program categories on distant local stations that had been retransmitted to
subscribers by CSOs. Clearly, any ‘fee generation’ approach that did not
make use of this regression approach is distinguishable.” See Order
Denying Rehearing at 5 (emphasis added).
Even if the Tyler Model could be likened to a fee generation approach, SDC and
PTV are wrong to suggest that such approaches have been categorically rejected by the
Judges and their predecessors. Again, the Judges considered and rejected the identical
argument in their Order Denying Rehearing:
[N]either the Judges nor their predecessors have categorically rejected use
of the broad category of fee generation approaches to ascertain relative
value in section 111 allocation proceedings. As the Librarian concluded
when accepting in full the CARP Report for the 1998-99 distribution years:
“[W]hile it is true that fees generated do not measure the absolute value of
programming, it does not mean that they are not capable of measuring the
relative value of programming between the claimant groups.” Librarian’s
Order, 69 FR at 3618 (emphasis added). In that Order, the Librarian
expressly noted that ‘there does exist precedent,’ in the 1990-1992 CARP

Report, for using the “fee generation” approach to determine relative market
value. Id. When the Judges succeeded to the CARP’s jurisdiction, they
likewise stated that “we are not persuaded that we are precluded from ever
considering fee generation as a distribution methodology ….” 2000–03
Determination, 75 FR at 26805. In fact, in the [Initial 2010-13]
Determination, the Judges acknowledged the ongoing use of a fee
generation approach in particular instances, notwithstanding that it had been
“generally discounted” in some prior cases. See Determination at 48 n.45;
78 n.145.
Program Suppliers’ Reply to Proposed Findings of Fact and Conclusions of Law (PS
RPFF) ¶ 88 (and record citations therein). See also id. ¶ 96. Program Suppliers have also
properly relied on the earlier rulings of the Judges and their predecessors in this regard.
See 2000-03 Distribution Order at 26805 (after detailing the “origins” and the “history”
of the fee generation approach, the Judges stated this approach never had been “flatly
rejected … as a methodology,” and the Judges thus held that they were “not persuaded
that we are precluded from ever considering fee generation as a distribution methodology
….”); 1998-99 Librarian Order at 3606, 3618 (the CARP panel rejecting opposition to
“the fee generation method” because “there does exist precedent” for using this
methodology). More broadly, the Judges’ predecessors have long understood the
appropriateness of incorporating fee-generation models in the precise process in which
the present Judges are now engaged – analyzing, weighing, and combining multiple
approaches to the allocation of royalties – when, as now, the Judges cannot identify only
“a single formula or rationale adequate to reach our determination and allocations in [the]
proceeding.” 1979 Cable Royalty Distribution Determination, 47 FR 9879, 9892 (Mar. 8,
1982) (considering a fee generation approach together with eight other allocation
methods) (emphasis added).
As to the second point, assuming arguendo the Tyler Model bears a passing
resemblance to a fee-generation approach, the Judges find, on this evidentiary record,
such affinity constitutes virtue rather than vice. A key criticism of the Tyler Model’s fee-generation resemblance is premised on the fact that both appear to “ignore[] variation

relevant to revealing CSO preferences” among program categories. CTV PFF ¶ 354 (and
record citations therein); accord CCG PFF ¶ 186 (and record citations therein) (“Dividing
the royalty payment by gross receipts removes the variation different signals contribute to
revenue.”). However, that argument misapprehends Dr. Tyler’s approach. It is decidedly
not merely a “measure [of] the amount of royalties actually generated by a particular
claimant group,” which is the definition of a fee-generation model, as set forth supra.
Rather, the Tyler Model calculates coefficients that “represent the incremental impact on
the SGRP for each type of compensable minute.” Tyler ACWDT ¶ 90. Further, the
Tyler Model then weights these coefficient values by total receipts, Tyler ACWDT ¶ 88,
and then multiplies these weighted coefficients by the number of minutes of each
claimant’s program category. Tyler ACWDT ¶ 144. That is quite different from the basic
fee-generation approach.
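In schematic terms – a simplification offered for clarity rather than a restatement of Dr. Tyler’s actual computations – the final step can be understood as computing, for each program category, a value equal to that category’s receipts-weighted coefficient multiplied by its compensable minutes, and then expressing each category’s allocation share as its value divided by the sum of those values across all six categories. The coefficient captures the estimated incremental impact of a category’s minutes on the SGRP, the receipts weighting scales that impact by the dollars at stake, and the multiplication by minutes converts the per-minute estimate into a category-level figure.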
But the proponents of the other fee-based regressions are onto something in their
observation that the Tyler Model generates less variation than would otherwise be
captured when the dependent variable is royalty-specified rather than specified as the
SGRP. However, the Judges see this distinguishing feature of the Tyler Model as an
improvement over the other fee-based regressions proffered in the present case.
From the perspective of the parties proffering fee-based regressions, the only way
to estimate the appropriate variations among program categories is by utilizing a royalty-based parameter (the log of royalties, to be precise) as the dependent variable. That is,
these more traditional forms of fee-based regressions posit that there is an ascertainable
and measurable correlation between program category minutes and the log of royalties,
detectable once sufficient fixed effects and control variables are specified. So, there is a
black-and-white debate: Which is the preferable dependent variable for the fee-based
regressions in the present case, a royalty based variable or Dr. Tyler’s SGRP?

Recall that the first step in any regression modeling is to identify an economic
theory which will guide the selection of model specifications. What is that economic
theory? Perhaps the more salient phrasing of this question is: What economic theory is
most consonant with the record evidence of the industry details? Let’s take stock:
1. The royalties paid by CSOs for 1.0 DSE are a minimum of 1.064% of gross
receipts, with two marginally lower brackets of percentage rates for additional
DSEs, flattening out at 0.330% at 5.0 DSE. A CSO needs to decide how
many, if any, local broadcast stations to distantly retransmit.
To answer this question, all the economist witnesses attempt to zero-in on
what, in their respective opinions, would constitute economically rational
decision-making. However, in identifying what is rational, they implicitly
assume a CSO would be able to determine if it is retransmitting a profit-maximizing or a sub-optimal bundle of distant programming, but there is no
record evidence as to how a CSO would know this.
More particularly, there is no evidence of a measure of estimated subscribers
retained, obtained, or lost, or of a change in subscription rates, caused by
distant retransmission decisions. Are such changes even occurring because of
the configuration of distantly retransmitted stations? On this, the record is
barren.162
2. But, as all the witnesses acknowledge, over the last three years of the relevant
period, 2015-2017, the overwhelming percentage of CSOs pay only the

162 The Judges also find it telling that there is no evidence in this proceeding, nor apparently in any other
allocation proceeding, that any CSO has solicited subscriptions by touting its distantly retransmitted lineup.
That this dog has not barked speaks loudly as to the de minimis impact of the distant retransmission market.
Also absent from the record is any evidence that there is a derived-demand effect at play. That is, there is
no evidence that consumers make subscription decisions based on the programming content of distant
retransmissions. In this regard, a corollary to the need for identifying an economic theory from the record
evidence to guide this Determination is the concomitant need for a “reality filter,” by which the Judges can
address the reality that the market in question is relatively miniscule (although substantial royalty dollars
are most certainly at stake!).
minimum fee, and the vast majority of section 111 royalties are generated by
those minimum-fee-paying CSOs. That is, most CSOs do not even retransmit
enough distant signals to trigger a base fee obligation. Moreover, a large
minority of those CSOs elect not to retransmit any signals, demonstrating, as
Dr. George notes, that they have a zero willingness-to-pay for programming
that is royalty costless. Why have these changes occurred?
3. The answer is to be found in the evidentiary record. An industry expert
witness, Sue Ann R. Hamilton (whose 2010-13 testimony was properly
designated as evidence in this proceeding by Program Suppliers), stated (as
summarized in the 2010-13 Determination) that:
[A] CSOs’ selection of stations for distant retransmission is marked
by inertia, not by an affirmative analysis and weighing of alternative
stations, [because: (1)] distant retransmission costs represent a non-material expenditure for CSOs compared with their other more
expensive programming and carriage decisions [and (2)] CSOs are
more concerned with losing existing subscribers [‘legacy distant
carriage’] if they drop certain stations and the associated programs
than they are with whether or not any new retransmitted station and
its associated programs might entice new subscribers[, or with]
adjusting the roster of distantly retransmitted stations.
2010-13 Determination at 3567 (emphasis added).
4. Ms. Hamilton’s testimony regarding the CSO’s primary concern over
retaining legacy subscribers proved prescient when CSOs did not
meaningfully substitute for the lost sports programming on WGNA, but rather
just retransmitted fewer stations and programs, and thus defaulted to a binding
minimum fee rather than a calculated base fee. That is, the phenomenon that
Ms. Hamilton described has been validated by the impact of the WGNA
conversion. JSC professional and college team sports that were retransmitted
on WGNA clearly were valuable, both in terms of the regressions (with the
highest coefficients) and in terms of the survey results. But when WGNA

converted to a cable station, despite the high value of JSC programming (its
coefficient fell but remained higher than other category coefficients), JSC
programming value vis-à-vis the retransmission sector, as measured by the
regression methodologies, dropped precipitously, because the number of
subscribers to whom JSC sports were transmitted dropped by over 90%.
Although at first blush it may seem odd given the high value of JSC
programming that CSOs did not “backfill” that loss, Ms. Hamilton’s “inertia”
and “legacy” arguments explain the absence of such a “backfill.”163 Such
inertia, and the loss of WGNA as a legacy channel, apparently made it not
worth the effort for CSOs to search for and retransmit a sufficient number of
replacement channels and programs.
5. In the context of this backdrop, Dr. Erdem’s drumbeat that CSOs’ priority is
to minimize their costs takes on a bit more significance. CSOs appeared to be
relatively less concerned with the “demand side” for distantly retransmitted
channels and programming, and thus, relatively more concerned with the
“supply side,” particularly with the royalty costs.
6. In this more cost-centric context, Dr. Tyler’s regression appears to the Judges
to better reflect the realities of the market than the other fee-based
regressions. The Tyler Model does not put the cart before the horse; that is, it
does not place priority on program category (“demand side”) decisions.

163 The loss of WGNA should be contrasted with the loss years earlier of TBS, another sports-based
superstation that had been distantly retransmitted. That loss did not eliminate all such sports-based superstation retransmittals, because WGNA remained available. But after WGNA transformed itself into a
cable station, there was no other sports-based superstation to substitute in order to satisfy legacy viewers of
such programming. (Also, recall that the JSC is simply a representative of the major professional sports
leagues and the NCAA, and the record does not reflect that they suffered any economic loss because of the
reduction of subscriber minutes distantly retransmitted. Indeed, the Judges take administrative notice that
their games have been aired on ESPN and other cable stations, national networks, and regional sports
networks. The Judges decline to assume that these leagues and associations voluntarily abandoned local
broadcasting and thereby deprived themselves of profits, but rather they assume these sports leagues and
associations moved to these more lucrative distribution methods.)
Rather, it prioritizes the “budget constraint” (“supply side”) decisions of
CSOs, by which they calculate the percentage of their subscriber group’s
gross receipts they will pay in royalties.
7. However, CSOs transmitting above 1.0 DSE have economic
decisions to make regarding the mix of programming they will transmit via
their signal decisions. Given the economics and reality of this retransmission
market, as described above, only then will the relative value of program
categories be of material economic importance. It is at this stage that the
Tyler Model generates information as to relative value, through the Tyler
model’s coefficients.
8. To return to the issue at hand, as its critics assert: Does the Tyler Model
identify fewer variations across program categories compared to the other
regression models? Apparently, the answer is yes. But those other
regressions, although not without evidentiary value, do not appear to be as
consonant with the evidentiary record as the Tyler Model.
C. The Economics of the Tyler Model
The foregoing points help to focus on the underlying economics of the Tyler
Model. By using the SGRP as the dependent variable, the Tyler Model reflects economic
principles relating to the value of a “public good,” which is a good “for which the
marginal costs of providing it to an additional person are strictly zero and for which it is
impossible to exclude people from receiving the good.” Joseph E. Stiglitz & Jay L.
Rosengard, Economics of the Public Sector 107 (4th ed. 2015). But when the good is
excludable but still bears a marginal cost of zero (non-rivalrous in “econo-speak”), it is
considered an “impure” (or “quasi-”) public good. See also 3/27/23 Tr. 1496 (Boyle) (a
PTV expert witness with a Ph.D. in applied economics agreeing that there are

“characteristics” and “elements” of a “quasi-public good” in these distantly retransmitted
channels and programs.).
Unlike “private goods” (rivalrous and excludable), the demand curve for public
goods, impure or otherwise, “can be thought of as a ‘marginal willingness-to-pay’ curve
[which], at each level of output of the public good, . . . says how much the individual
would be willing to pay for an extra unit of the public good.” Stiglitz & Rosengard,
supra, at 107. This is consistent with the economic logic of the Tyler Model. See Tyler
ACWDT ¶ 67 (“Even though the amount of the royalty is determined by statute – and so
constitutes a measure of minimum willingness to pay as opposed to the outcome of a
negotiation – the estimated incremental royalties for the different program types relative
to one another provide insight into how the CSOs would actually value these program
categories in an unregulated market.”) (emphasis added). Also, the Tyler Model’s SGRP
is in the nature of an economist’s “budget line” (a/k/a “budget constraint”), limiting the
combinations of goods that a buyer can purchase. See Robert S. Pindyck & Daniel L.
Rubinfeld, Microeconomics 82 (8th ed. 2013).164 The Tyler Model’s SGRP identifies the
percentage of total costs (including profits, which reflect opportunity costs) incurred by
CSOs across their subscriber groups in the form of section 111 royalties. With that
percentage/budget line established, the Tyler Model then allocates the portions of the
weighted category minutes attributable to that SGRP calculation.
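In the textbook formulation, a budget line for two goods takes the form p1·x1 + p2·x2 = I, where the prices (p) and the budget (I) constrain the attainable combinations of quantities (x). See Pindyck & Rubinfeld, supra, at 82. The analogy drawn here is a loose one, offered for illustration only: the SGRP fixes the fraction of a subscriber group’s gross receipts devoted to section 111 royalties, and the CSO’s carriage choices determine how the program-category minutes are arrayed within that constraint.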
In sum, there is a real economic and market-based foundation for the Tyler Model
in the context of the present record relating to the 2014-2017 retransmission market.
Moreover, the Tyler Model is essentially a fee-based regression, with characteristics of
the fee-generation approach, constructed in a manner that reflects both Ms. Hamilton’s

164 The Judges examined two of the expert witnesses at the hearing regarding the concept of the “budget
line” as it relates to the estimation of section 111 royalties. See 3/23/23 Tr. 1080-86 (PTV’s Johnson);
4/3/23 Tr. 2671-73 (JSC’s Majure). Dr. Johnson found the concept applicable to the regressions at issue,
but Dr. Majure disagreed.
persuasive testimony and the reduction in distant retransmissions following the WGNA
conversion.
XII. CANADA ZONE
CTV maintains that Dr. George’s calculation of the CCG share is incorrect for
two related reasons: (1) the George Model as specified implies that CCG had
compensable programming outside the Canada Zone; and (2) the George Model
overrepresents the Canada Zone. CTV PFF ¶ 330.
This problem arises because the George Model assumes that CCG programming
would be available and valuable throughout the United States (i.e., outside of the
Canada Zone) if the geographic limitation in the section 111 license is assumed away
for purposes of estimating relative marketplace value for CCG
programming. Dr. George explains why this assumption is adopted in the George Model:
It is in most circumstances right to infer that programming on distant signals
re-transmitted has higher value than other programming not transmitted.
The primary exception is when cable systems are prohibited from carrying
particular signals, such as the case with Canadian signals outside of the
Canadian re-transmission zone.
…
Failing to control for the fact that transmission of Canadian stations is
prohibited outside of the Canadian re-transmission zone introduces
downward bias in the value of Canadian Claimant programming since the
absence of carriage is equated with zero value.
…
It is worth repeating that the underlying economic framework is what
governs model specification. The prohibition on distant signal carriage on
its face imposes a restriction on cable system choices so must be reflected
in the model. No further “evidence” is needed, or, in fact, possible, since we
cannot observe prohibited carriage.
George WRT at 16, 25-26 (emphasis added).
Program Suppliers, through Dr. Tyler, makes the same argument as CTV, and
responds to Dr. George’s point above as follows:
Within the Canada zone, CSOs can choose among all of the content
categories. But outside the Canada zone, CSOs do not have the option of
choosing CCG content. There is a difference between having something
available and not chosen versus not having something available at all.
Estimating the relationships separately when the Canadian minutes are
available or not recognizes this, and this approach makes more economic
sense.
PS PFF ¶ 297 (and record citations therein).
Dr. Bennett, on behalf of CTV, calculated and tabulated the impact on allocation
shares of the difference between the approaches of Drs. George and Tyler as summarized
above165:
Figure 20. Comparison of CCG shares from George Model with and without
correcting for imbalance.

Year   Total royalties      CCG shares and royalties          CCG shares and royalties corrected for imbalance
       paid by system       Allocation (%)   Allocation ($)   Allocation (%)   Allocation ($)
2014   $225,787,643         6.5%             $14,662,427      5.6%             $12,746,544
2015   $207,614,933         13.7%            $28,373,785      8.7%             $18,147,786
2016   $200,603,016         12.3%            $24,679,633      8.0%             $16,045,116
2017   $200,192,670         12.0%            $24,090,393      8.3%             $16,549,108
Bennett WRT ¶ 55 & fig.20.
Based on the foregoing, the Judges find that the George Model of Canadian
programming’s relative marketplace value is not adequately proven by her assumptions
regarding the value of such signals if Canadian signals had been made available outside
the Canada Zone. Rather, such values are speculative, and no extrapolations can be
credibly made from the royalty data. To be clear, the Judges are not saying that
programming on Canadian signals would not have value outside of the Canada Zone.
But, like the programming retransmitted by minimum-fee-only CSOs, the value of
retransmitted programming is not subject to accurate measurement via a revealed
preference approach, which is the economic concept behind these regressions. Indeed,
because this point applies even with regard to minimum-fee-only CSOs who actually
retransmitted distant programming, a fortiori it applies to the hypothetical retransmission
of programming outside of the Canada Zone.
Dr. Bennett also accounted for the fact that the George Model “assigns too much weight to the minutes
within the Canada Zone … because [the George Model] bases [its] weights on the minutes within [its]
non-representative regression sample (which is over-representative of the Canada Zone) instead of on the
contribution that each zone makes to the aggregate royalty pool.” Bennett WRT ¶ 54. See also id. ¶ 50 &
fig.17.
Further, not only is the value of any
hypothetical retransmission outside the Canada Zone speculative, there is also no
showing that, as a technical matter, such transmissions further away from the Canada
Zone would be feasible. See SDC PFF ¶ 219 (“[W]hile the statutory limitation restricting
carriage of Canadian television stations to within 150 miles of the U.S.-Canada border or
north of the forty-second parallel (the “Canadian Zone”) is set forth in section 111(c)(4)
and could therefore be rendered inapplicable in a hypothetical market without the section
111 compulsory license, the laws of physics would still operate as a practical physical
limitation on Canadian station broadcast signals, absent an alternative (and more costly)
delivery method such as fiber or satellite feeds.”) (emphasis added).166
XIII. THE JUDGES’ ALLOCATION OF SHARES PURSUANT TO THE
REGRESSION APPROACH
The Judges have considered all of the regression models proffered by the parties
in this proceeding. None of the models were excluded from consideration. Based on the
Judges’ analysis and conclusions regarding each model, as set forth supra, and comparing

The Judges note CCG’s argument that in prior proceedings, including one applying the fee-generation
approach, the Judges and their predecessors did not make this geographic distinction. See CCG PFF ¶ 567-568
(and cases cited therein). But those cases either did not involve regression analysis or did not rely on
the regression approach (Dr. Rosston’s model) as anything other than corroboration. In the regression
context, the Judges find it too speculative to assign value by correlating royalties to distant minutes that
were never retransmitted. Moreover, although the Tyler Model, on which the Judges place the most
evidentiary weight among the regression models, resembles a fee-generation approach, it is not a
fee-generation approach, as discussed supra. As the Judges have also noted supra, a benefit of the Tyler Model
is that it better looks at the actual nature of the market and uses the evidence available over the years in
question. To allow for value to be estimated by consideration of hypothetical programming retransmission
outside of the Canada Zone would be inconsistent with this “real-world” rationale for crediting the Tyler
Model. Additionally, because the regression approach, unlike the constant sum survey approach, is based
on what CSOs actually retransmitted, in order to identify their market-based revealed preferences from
those actual decisions, a grafting of the hypothetical retransmission of Canadian signals onto that approach
appears inconsistent to the Judges. However, the Judges emphasize that these critiques apply only to the
regression models of relative marketplace value, and are not intended to address any other adjustments that
have been proffered in connection with the Bortz Survey, or with any other evidence, in this proceeding.
each of them, the Judges find the Tyler Model to be the most appropriate regression
model in this record.167 To recapitulate the principal reasons:
1. On the present factual record, the Tyler Model’s SGRP is preferable to the
log of royalties, or royalties themselves, as the dependent variable in a fee-based regression.

2. The Tyler Model avoids the variance/bias dilemma that is of particular
concern in this case for other proffered regression models. By contrast, Drs.
George and Johnson found themselves on the horns of this dilemma. They
require fixed effects to avoid bias by isolating the effect of program category
minutes on royalties. But given the post-WGNA conversion, the use of fixed
effects, as in the model applied in the prior proceeding, would not generate
enough observations. And yet relaxing or eliminating fixed effects to obtain
more observations weakens the isolation of the effect of interest, the impact of
program minutes on royalties, and creates bias.
3. Among the control variables that the Tyler Model does not require is the
control for the number of subscribers in a subscriber group, which is required
in the other fee-based regressions but cannot be estimated without
measurement error.
4. The Tyler Model utilizes, as a useful analogy to price, a price proxy in the form
of a budget constraint, i.e., the SGRP.

The Judges also considered variations proffered by Drs. Johnson and George on their preferred models
in their direct and rebuttal testimonies. Although some of those iterations mitigated certain problems in
their models, none of them was sufficient to overcome the Judges’ preference for the Tyler Model.
5. Although the Tyler Model is not based on a hedonic regression,168 it can
reasonably be described as a “hedonic-inspired” regression.169
6. The Tyler Model’s use of weighting by each CSO’s gross receipts is
appropriate.
7. The Tyler Model calculates coefficients for each year, rather than “pooling”
the data to generate a single coefficient for each program category across all
four years.
8. The Tyler Model provides sufficient variation among the CSOs’ decisions.
9. There is no credible evidence (or even a credible allegation) that Dr. Tyler
engaged in anything that could be construed as specification searching. In
fact, the SDC and JSC experts – who criticize the other regression models
(including Dr. Marx’s) for ignoring the impact of potential specification
searching – acknowledge that the Tyler Model alone is free from this
infirmity. The Judges agree, because the absence of specification searching in
connection with the Tyler Model allows it to be transparent and, specifically,
free from the consumption of “phantom degrees of freedom.”
10. The alleged superficial resemblance of the Tyler Model to a fee-generation
model is not only factually off-the-mark and legally irrelevant; the shared
characteristics of the two models in fact better reflect the real-world
decision-making of CSOs, as described in Ms. Hamilton’s testimony.
However, as also discussed supra, the Judges cannot simply adopt (for all
circumstances) the Tyler Model to the extent it includes the base fees of CSOs who only
paid the minimum fee from 2015-2017. Rather, for those years, the Judges, for the most
part, rely on Dr. Tyler’s calculation of allocation shares as derived from the coefficients
he calculated for the CSOs paying more than the minimum fee.
Dr. Tyler at times appears to describe his approach as a “hedonic” regression, see Tyler ACWDT ¶¶
10(e), 85, perhaps on the mistaken belief that such a label was necessary to enhance his approach.
Cf. Final rule and order, Determination of Royalty Rates and Terms for Making and Distributing
Phonorecords (Phonorecords III), 84 FR 1918, 1947-48, 1950 (Feb. 5, 2019) (the Judges relied in part
upon an economic model that was admittedly not an established model (the Shapley Model), but rather was
a Shapley-“inspired” model), vacated and remanded on other grounds sub nom. Johnson v. Copyright
Royalty Board, 969 F.3d 363 (D.C. Cir. 2020).
In applying Dr. Tyler’s approach, the Judges first note that, for 2014, the
allocation of shares can be identified by reference to all the CSOs, including those who
paid the minimum fee, as explained, for example, by Dr. Marx. See Marx ACWDT ¶ 34
(“data on programming minutes and royalties based on the carriage of distant signals for
2014 are a close match to comparable data from the 2010–2013 proceeding.”) The
allocation shares for 2014 in the Tyler Model, using the data for all CSOs in the
regression, are the following:
Allocation shares for 2014 in the Tyler Model

Year   Program Suppliers   JSC            CTV            PTV            SDC           CCG
2014   26.6% (3.8%)        37.2% (7.5%)   11.3% (2.6%)   14.0% (1.7%)   4.3% (0.9%)   6.5% (0.9%)

(standard errors in parentheses)
See Tyler ACWDT fig.3.2.
However, for the years 2015-2017, the Judges principally rely on Dr. Tyler’s
allocation share calculations pertaining only to the CSOs who paid more than the
minimum fee, i.e., those whose preferences were revealed by their retransmission
decisions. These allocation shares as calculated by Dr. Tyler are the following:
Fig.6.3 Royalty Allocations based on Tyler Model Regression
(only CSOs Paying More than the Minimum Royalty)

Year   Program Suppliers   JSC            CTV            PTV            SDC           CCG
2014   29.1% (4.7%)        32.4% (9.2%)   11.3% (2.6%)   14.3% (1.9%)   5.1% (1.2%)   7.6% (1.1%)
2015   41.0% (2.4%)        2.1% (1.5%)    11.3% (2.2%)   12.7% (0.8%)   9.7% (1.2%)   23.2% (0.9%)
2016   31.3% (3.0%)        1.3% (1.9%)    13.3% (3.4%)   14.7% (0.8%)   8.3% (1.0%)   31.1% (1.4%)
2017   33.0% (2.2%)        0.5% (1.0%)    9.9% (2.0%)    14.2% (0.8%)   7.8% (1.0%)   34.6% (2.1%)

(standard errors in parentheses)
See Tyler ACWDT fig.6.3.
Dr. Tyler noted that these 2015-2017 share allocations were not “strikingly”
different from the share allocations he recommended by reliance on his regression results
for all CSOs, even if they paid only the minimum fee. Tyler ACWDT ¶ 103. Moreover,
as a theoretical economic matter, Dr. Tyler opined that he was not aware of “any logic, a
priori,” that there would be any difference in “relative marketplace values” as between
“Above Minimum Fee CSOs” and “Positive Carriage Minimum Fee CSOs” (i.e.,
including excess capacity CSOs). Id. In this regard, compare Tyler ACWDT fig.6.3
(above) with Tyler ACWDT fig.3.2 (below):
Figure 3.2 Royalty Allocations based on Tyler Model Regression (all CSOs)

Year   Program Suppliers   JSC            CTV            PTV            SDC           CCG
2014   26.6% (3.8%)        37.2% (7.5%)   11.3% (2.6%)   14.0% (1.7%)   4.3% (0.9%)   6.5% (0.9%)
2015   39.7% (1.5%)        2.8% (1.0%)    10.2% (1.5%)   27.9% (0.8%)   6.2% (0.6%)   13.3% (0.5%)
2016   34.0% (1.5%)        2.5% (0.9%)    8.2% (1.8%)    37.4% (0.7%)   4.4% (0.6%)   13.6% (0.5%)
2017   31.8% (1.1%)        1.8% (1.0%)    6.9% (0.9%)    40.4% (0.6%)   4.0% (0.4%)   15.2% (0.9%)

(standard errors in parentheses)

However, Figure 6.3 reports an anomalous increase in the share allocated to the
CCG claimants. This anomaly is explainable.
CCG programming is unique among the program categories in this proceeding
because it is limited in geographic scope to CSOs located within a 150-mile belt below
the U.S./Canadian border. See CCG PFF ¶ 59 (“Under the section 111 compulsory
license, it is prohibited for a cable company to distantly retransmit a Canadian broadcast
signal to communities located more than 150 miles from the United States-Canada border
and also south of the 42nd parallel.”) (citing 17 U.S.C. 111(c)(4)(A)).
As such, the data reported in Tyler ACWDT fig.6.3 – limited to CSOs paying
above the minimum fee – would reflect the unique value of Canadian programming in
that region. More particularly, CCG programming is uniquely valuable in the Canada

Zone in good measure because of the retransmittal of French language programming, a
niche sub-category. See CCG PFF ¶ 20 (“The programming on Canadian French-language stations plays an
important role for Americans living in the northeast United States and either speak French or have French
ancestry. … An example … is in the successful grassroots campaign of Sanford, Maine residents who lobbied
the Metrocast cable company and their local government to restore carriage of the CBC’s French-language
station CKSH.”); see generally id. ¶ 19 (noting the distinct nature of French
language programming in demand by CSOs to serve residents of “New York, Vermont,
Maine, New Hampshire, and Massachusetts – that have a sizeable proportion of residents
with connections to the French language through a current spoken language or
ancestry.”); see also Written Direct Testimony of Beverley Kirshenblatt, Trial Ex. 7400, p. 6
(Kirshenblatt WDT) (the CBC programming on Canadian stations retransmitted into the
Canada Zone is provided “in English, French, and eight Indigenous languages …
broadcast …from around the world [as] a pan-Canadian service reflect[ing] Canada and
Canadians in both official languages … and is a significant contributor to the cultural
fabric of Canada through the promotion and creation of a variety of programming.”).
Thus, in addition to the demand for the usual complement of distantly
retransmitted programming that exists throughout the wider United States, in the Canada
Zone there exists this additional demand. Such greater demand means that CSOs would
choose to pay more than the minimum fee by adding CCG stations, and thus Canadian
claimant programming, to their channel lineup. Accordingly, CSOs in the Canada Zone
would very likely be overrepresented in the cohort of above-minimum-fee-paying CSOs
in Tyler ACWDT fig.6.3.
The problem this creates, for present purposes, is that the Judges are allocating a
royalty pool for which, over the period 2015-2017, more than 90% of the funding came
from minimum-fee-only CSOs. Thus, while the data from above-minimum-fee-paying

CSOs (i.e., in Tyler ACWDT fig.6.3) provides useful economic evidence of CSOs’
revealed preferences for other claimant categories, with regard to CCG content and value,
this data is distortionary as applied to the Judges’ task of allocating all U.S. royalties.
Confirmatory of this distinction is the fact that CCG itself has not proposed that it
receive the anomalously high allocations suggested by the data in Tyler ACWDT fig.6.3
(23.2% in 2015, 31.1% in 2016, and 34.6% in 2017). Rather, CCG has proposed that it
receive 14.8% for 2015, 13.7% for 2016, and 13.6% for 2017. CCG PFF ¶ 617 fig.53.
Further, CCG filed its Proposed Findings of Fact on June 15, 2023, and it was aware of
the higher CCG shares in Tyler ACWDT fig.6.3 since that document was filed on
September 2, 2022. And yet at no time did CCG ever seek to adopt the higher CCG share
set forth in Tyler ACWDT fig.6.3.
Accordingly, in their allocations based on the Tyler Model regression, for 2015-2017, the Judges
utilize the CCG shares reported at Tyler ACWDT fig.3.2. The
difference in shares, compared to the CCG share in Tyler ACWDT fig.6.3, is allocated
proportionately among the other five categories, as set forth in the table for Adjustment A
below:
Adjustment A Table

Year   Program Suppliers   JSC      CTV      PTV      SDC      CCG
2014   26.6%               37.2%    11.3%    14.0%    4.3%     6.5%
2015   46.29%              2.37%    12.76%   14.34%   10.95%   13.3%
2016   39.25%              1.63%    16.68%   18.43%   10.41%   13.6%
2017   42.79%              0.65%    12.84%   18.41%   10.11%   15.2%

The Judges recalculated the shares of the other five claimant categories by: (1) calculating the percentage
each category represents of all the categories’ shares except CCG; (2) multiplying each percentage by the
reduction in the CCG share generated by replacing the CCG column of Tyler ACWDT fig.6.3 with Tyler
ACWDT fig.3.2; and (3) adding that product to the shares of each claimant category.
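For illustration, the proportional reallocation described in the preceding note can be sketched in Python as follows; the function and variable names are illustrative only and are not part of the record.

```python
# Minimal sketch of the Adjustment A arithmetic described above: replace the CCG
# share from Tyler ACWDT fig.6.3 with the CCG share from fig.3.2, then spread the
# reduction across the other five categories in proportion to their fig.6.3 shares.
# Shares are in percentage points.

def adjust_ccg(shares_fig63: dict, ccg_share_fig32: float) -> dict:
    reduction = shares_fig63["CCG"] - ccg_share_fig32
    others_total = sum(v for k, v in shares_fig63.items() if k != "CCG")
    adjusted = {}
    for claimant, share in shares_fig63.items():
        if claimant == "CCG":
            adjusted[claimant] = ccg_share_fig32
        else:
            # Each category receives its proportionate slice of the CCG reduction.
            adjusted[claimant] = share + (share / others_total) * reduction
    return adjusted

# 2015 shares from Tyler ACWDT fig.6.3 (CSOs paying more than the minimum fee).
fig63_2015 = {"Program Suppliers": 41.0, "JSC": 2.1, "CTV": 11.3,
              "PTV": 12.7, "SDC": 9.7, "CCG": 23.2}
print(adjust_ccg(fig63_2015, 13.3))
# Reproduces the 2015 row of the Adjustment A Table (after rounding):
# Program Suppliers 46.29, JSC 2.37, CTV 12.76, PTV 14.34, SDC 10.95, CCG 13.3
```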

A further adjustment is still required. As noted supra regarding the PTV share,
the Judges are adopting the downward adjustments made by Dr. Bennett to reflect the
presence of Must Carry PTV stations. See Bennett WRT fig.52. The Judges apply those
adjustments, and recalculate the shares of the other parties as set forth in the table for
Adjustment B below:
Adjustment B Table

Year   Program Suppliers   JSC      CTV      PTV      SDC      CCG
2014   26.80%              37.48%   11.38%   13.36%   4.33%    6.55%
2015   47.67%              2.44%    13.14%   11.78%   11.28%   13.70%
2016   40.75%              1.69%    17.32%   15.32%   10.81%   14.12%
2017   44.07%              0.67%    13.23%   15.96%   10.41%   15.66%

The Must Carry adjustment in Bennett WRT fig.52 was based on the PTV shares of all CSO royalties,
whereas the Judges are applying this adjustment to the shares of CSO royalties attributable to shares
generated by CSOs paying above the minimum fee (subject to the prior adjustment for CCG, discussed
supra). So, for 2014, the percentage point adjustment to the PTV share is the percentage point adjustment
in Bennett WRT fig.52. For 2015-2017, the percentage point adjustment to the PTV share is calculated for
each year by (1) dividing the PTV share from Tyler WRT fig.6.3 by the PTV share from Tyler WRT
fig.3.2, (2) multiplying that percentage by the percentage point
adjustment in Bennett WRT fig.52, and (3) subtracting that product from the PTV share from the table
above.
The shares of the other claimants are adjusted upward by: (1) calculating the percentage each category
represents of all the categories’ shares except PTV, (2) multiplying each percentage by the Bennett Must
Carry adjustment (reduced as set forth above), and (3) adding that product to the shares of each claimant
category.
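A minimal sketch of the Adjustment B arithmetic described in the notes above follows. The Bennett WRT fig.52 Must Carry adjustment (in percentage points) is not reproduced in this excerpt, so the value used below is a hypothetical placeholder, and the function names are illustrative only.

```python
# Sketch of the Adjustment B arithmetic, assuming a hypothetical Bennett fig.52
# percentage point adjustment. Shares are in percentage points.

def apply_must_carry(shares: dict, bennett_adj_pp: float,
                     ptv_fig63: float, ptv_fig32: float) -> dict:
    # Scale the Bennett adjustment by the ratio of PTV's fig.6.3 share to its
    # fig.3.2 share, subtract it from PTV, and spread the same amount upward
    # across the other claimants in proportion to their shares.
    applied = bennett_adj_pp * (ptv_fig63 / ptv_fig32)
    others_total = sum(v for k, v in shares.items() if k != "PTV")
    out = {}
    for claimant, share in shares.items():
        if claimant == "PTV":
            out[claimant] = share - applied
        else:
            out[claimant] = share + (share / others_total) * applied
    return out

# 2015 row of the Adjustment A Table, with a hypothetical Bennett adjustment of 5.6.
adj_a_2015 = {"Program Suppliers": 46.29, "JSC": 2.37, "CTV": 12.76,
              "PTV": 14.34, "SDC": 10.95, "CCG": 13.3}
print(apply_must_carry(adj_a_2015, bennett_adj_pp=5.6,
                       ptv_fig63=12.7, ptv_fig32=27.9))
```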

There remains a final adjustment. The Judges note that PTV argued that a
significant number of its stations were retransmitted by CSOs together with WGNA prior
to the WGNA conversion, thereby generating a base fee royalty and an expressly
revealed preference and willingness-to-pay. PTV further notes that, after the WGNA
conversion, many of these CSOs continued to retransmit the same PTV station, but this
did not trigger the base fee because the minimum fee applied (with WGNA gone). PTV
maintains that the pre-WGNA conversion carriage is probative of the fact that the post-WGNA
conversion carriage evidences economic value as if it were generating base fee royalties.
PTV PFF ¶ 60 (and record citations therein). The Judges agree.
On this issue, there is evidence in the form of Mr. Harvey’s analysis done on
behalf of JSC. Specifically, Mr. Harvey reported:
The number of PTV Only systems increased after the WGNA conversion
from 44 at the end of 2014 to 173 by the end of 2017. PTV Only Systems

that had carried WGNA and PTV in 2014 account for three-fifths of that
increase.
Harvey WDT ¶ 106. The Judges find that Mr. Harvey’s reporting demonstrates that 44%
of the PTV stations that were identified as retransmitted by minimum-fee-paying CSOs
after the WGNA conversion had been transmitted pre-conversion and generated base fee
royalties. That is persuasive evidence of ongoing marketplace value. Accordingly, the
Judges use that factual finding to increase by 44% the PTV share modification, as set
forth in the table for Adjustment C below:
Adjustment C Table
Applying the PTV Adjustment to Reflect WTP of CSOs that
Maintained PTV Carriage After WGNA Conversion

Year   Program Suppliers   JSC      CTV      PTV      SDC      CCG
2015   44.87%              2.30%    12.37%   16.96%   10.62%   12.90%
2016   37.51%              1.56%    15.94%   22.06%   9.95%    13.00%
2017   40.39%              0.61%    12.12%   22.98%   9.54%    14.35%

The Judges recalculated the shares of the other five claimant categories by: (1) calculating the percentage
each category represents of all the categories’ shares except PTV, (2) multiplying each percentage by the
increase in the PTV share generated by adjusting to reflect WTP of CSOs that maintained PTV carriage
after WGNA conversion, and (3) subtracting that product from the shares of each claimant category.
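For illustration, the Adjustment C arithmetic described above can be sketched as follows, reading the 44% figure as a multiplicative increase to the PTV share, with the increase drawn proportionately from the other five categories; function and variable names are illustrative only.

```python
# Sketch of the Adjustment C arithmetic: increase the PTV share by 44%, and
# reduce the other claimants proportionately by the same total amount.
# Shares are in percentage points.

def increase_ptv(shares: dict, pct_increase: float = 0.44) -> dict:
    increase = shares["PTV"] * pct_increase
    others_total = sum(v for k, v in shares.items() if k != "PTV")
    out = {}
    for claimant, share in shares.items():
        if claimant == "PTV":
            out[claimant] = share + increase
        else:
            out[claimant] = share - (share / others_total) * increase
    return out

# 2015 row of the Adjustment B Table.
adj_b_2015 = {"Program Suppliers": 47.67, "JSC": 2.44, "CTV": 13.14,
              "PTV": 11.78, "SDC": 11.28, "CCG": 13.70}
print(increase_ptv(adj_b_2015))
# Approximately reproduces the 2015 row of the Adjustment C Table:
# Program Suppliers 44.87, JSC 2.30, CTV 12.37, PTV 16.96, SDC 10.62, CCG 12.90
```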

Returning to Tyler ACWDT fig.6.3, upon which the Judges principally rely, the
Judges’ decision to utilize and adjust the share allocations therein is strengthened by
consideration of the confidence intervals at various levels of statistical significance,
relating to those share allocations. That is, those confidence intervals serve to confirm
the reasonableness of their share allocation approach. In that regard, as set forth in the
table below, only one claimant category, JSC, has a negative lower bound in its
confidence interval at the 90%, 95%, and 99% confidence levels. Moreover, the
negative value diminishes as the confidence interval widens. The Judges do not find that
this one lower bound issue is sufficient to call into question the usefulness of the share
allocations on which they rely.

Additionally, at the 55% confidence interval, this lower bound in fact turns
positive, as also noted in the table below.
55%/90%/95%/99% Confidence Intervals for Claimant Shares from Tyler “Only
CSOs Paying More Than Minimum Fee” Model
Source: Derived from data in Tyler ACWDT fig.6.3.

2015
Claimant            Share           55% Confidence       90% Confidence       95% Confidence       99% Confidence
                                    Interval             Interval             Interval             Interval
Program Suppliers   41.0% (2.4%)    39.19% to 42.81%     37.05% to 44.95%     36.3% to 45.7%       34.82% to 47.18%
JSC                 2.1% (1.5%)     0.97% to 3.23%       -0.37% to 4.57%      -0.84% to 5.04%      -1.76% to 5.96%
CTV                 11.3% (2.2%)    9.64% to 12.96%      7.68% to 14.92%      6.99% to 15.61%      5.63% to 16.97%
PTV                 12.7% (0.8%)    12.10% to 13.30%     11.38% to 14.02%     11.13% to 14.27%     10.64% to 14.76%
SDC                 9.7% (1.2%)     8.79% to 10.61%      7.73% to 11.67%      7.35% to 12.05%      6.61% to 12.79%
CCG                 23.2% (0.9%)    22.52% to 23.88%     21.72% to 24.68%     21.44% to 24.96%     20.88% to 25.52%

2016
Program Suppliers   31.3% (3.0%)    29.04% to 33.57%     26.37% to 36.24%     25.42% to 37.18%     23.57% to 39.03%
JSC                 1.3% (1.9%)     -0.13% to 2.735%     -1.83% to 4.43%      -2.42% to 5.02%      -3.59% to 6.19%
CTV                 13.3% (3.4%)    10.73% to 15.87%     7.71% to 18.89%      6.64% to 19.96%      4.54% to 22.06%
PTV                 14.7% (0.8%)    14.10% to 15.30%     13.38% to 16.02%     13.13% to 16.27%     12.64% to 16.76%
SDC                 8.3% (1.0%)     7.55% to 9.06%       6.66% to 9.95%       6.34% to 10.26%      5.72% to 10.88%
CCG                 31.1% (1.4%)    30.04% to 32.16%     28.80% to 33.40%     28.36% to 33.84%     27.49% to 34.71%

2017
Program Suppliers   33.0% (2.2%)    31.34% to 34.66%     29.38% to 36.62%     28.69% to 37.31%     27.33% to 38.67%
JSC                 0.5% (1.0%)     -0.26% to 1.26%      -1.15% to 2.15%      -1.46% to 2.46%      -2.08% to 3.08%
CTV                 9.9% (2.0%)     8.39% to 11.41%      6.61% to 13.19%      5.98% to 13.82%      4.75% to 15.05%
PTV                 14.2% (0.8%)    13.60% to 14.80%     12.88% to 15.52%     12.63% to 15.77%     12.14% to 16.26%
SDC                 7.8% (1.0%)     7.05% to 8.56%       6.16% to 9.45%       5.84% to 9.76%       5.22% to 10.38%
CCG                 34.6% (2.1%)    33.01% to 36.19%     31.15% to 38.05%     30.48% to 38.72%     29.19% to 40.01%

Note: standard errors in parentheses.
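For illustration, the interval bounds in the table above can be reproduced from the fig.6.3 point estimates and standard errors under a normal approximation (share ± z × SE) at each stated confidence level; this sketch uses only the Python standard library, and the values shown are for the Program Suppliers 2015 share.

```python
# Minimal sketch: normal-approximation confidence intervals from a point
# estimate (share) and its standard error (se), in percentage points.
from statistics import NormalDist

def confidence_interval(share: float, se: float, level: float) -> tuple:
    z = NormalDist().inv_cdf(0.5 + level / 2)  # two-sided critical value
    return (share - z * se, share + z * se)

# Program Suppliers, 2015: 41.0% share with a 2.4% standard error.
for level in (0.55, 0.90, 0.95, 0.99):
    lo, hi = confidence_interval(41.0, 2.4, level)
    print(f"{level:.0%} CI: {lo:.2f}% to {hi:.2f}%")
# 55% CI: 39.19% to 42.81%; 90% CI: 37.05% to 44.95%;
# 95% CI: 36.30% to 45.70%; 99% CI: 34.82% to 47.18%
```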

The Judges take note of the 55% confidence level because, as they stated in the
2010-13 Determination, there is nothing sacrosanct about the three confidence levels of
90%, 95%, and 99% when a court is considering econometric analyses. In this regard,
the Judges take note of the position of the United States Supreme Court regarding the
limited evidentiary value of confidence intervals/statistical significance. See Matrixx
Initiatives, Inc. v. Siracusano, 563 U.S. 27, 40 (2011) (“the premise that statistical
significance is the only reliable indication of causation … is flawed.”).
In this regard, the Judges stated in the 2010-13 Determination:
A statistical significance level of .01, .05 and .1 … is “often referred to
inversely as the … confidence level,” equivalent to 99%, 95% and 90%,
respectively. [ABA ECONOMETRICS at 18]. Although “[s]ignificance levels
of five percent and one percent are generally used by statisticians in testing
hypotheses … this does not mean that only results significant at the five
percent level should be presented or considered [because] [l]ess significant
results may be suggestive, even if not probative, and suggestive evidence is
certainly worth something.” [F. M. Fisher, Multiple Regression in Legal
Proceedings, 80 Colum. L. Rev. 717-718 (1980)]. Thus, “[in] multiple
regressions, one should never eliminate a variable that there is a firm
foundation for including, just because its estimated coefficient happens not
to be significant in a particular sample.” Id. However, care must be taken
not to confuse the “significance level” with the “preponderance of the
evidence” standard, because “the significance level tells us only the
probability of obtaining the measured coefficient if the true value is zero,”
so one cannot “subtract[] the significance level from one hundred percent”
to determine whether a hypothesis is more or less likely to be correct. Id.
See also D. Rubinfeld, Econometrics in the Courtroom, 85 Col. L. Rev.
1048, 1050 (1985) (“[I]f significance levels are to be used, it is
inappropriate to set a fixed statistical standard irrespective of the substantive
nature of the litigation.”); D. McCloskey & S. Ziliak, The Standard Error
of Regressions, 34 J. Econ. Lit. 97, 98, 101 (1996) (“statistically significant”
means neither “economically significant” nor “significant [in] everyday
usage [where] ‘significant’ means ‘of practical importance’ ….”).
2010-13 Determination at 3571 n.78. The Judges apply the foregoing principles here. To
be clear, the Judges are not substituting the significance levels/confidence levels for the
preponderance of evidence (marginally greater than 50%) standard. Rather, the Judges
are looking to various levels of statistical significance/confidence intervals to determine
the probability of obtaining Dr. Tyler’s measured coefficient if the true value was in fact
zero. And, the Judges are not wedded to the convention of the 90%, 95% and 99%
confidence levels, because they agree with Dr. Rubinfeld, whose treatise is cited above,

for the proposition that “if significance levels are to be used, it is inappropriate to set a
fixed statistical standard irrespective of the substantive nature of the litigation.”
The nature of this litigation, as the D.C. Circuit has held (discussed elsewhere in
this determination), is an intensely practical endeavor, one in which mathematical
precision is not possible, and where “rough justice” is the norm. In this regard, the
Judges also follow – in addition to the Supreme Court holding in Matrixx – the guidance
of two scholars (also quoted above in the 2010-13 Determination) who have written
extensively to caution, as a matter of economic ethics, against a fixation on statistical
significance:
Statistical significance is not equivalent to economic significance nor
to …legal … significance. … The core problem is that statistical
significance is neither necessary nor sufficient for testing … material fact
in a court of law ….
Stephen T. Ziliak & Deirdre McCloskey, Lady Justice Versus Cult of Statistical
Significance, in George F. DeMartino & Deirdre McCloskey, The Oxford Handbook of
Professional Economic Ethics 352-53 (2016). The need to avoid overreliance on low levels
of statistical significance (i.e., large confidence intervals) has been emphasized by Dr.
Kennedy, in his textbook cited by the parties and the Judges in this proceeding. See
Kennedy, supra, at 366 (listing as one of his “Ten Commandments of Applied
Econometrics”: “Do not confuse statistical significance with meaningful magnitude.”).
Accordingly, the Judges note specifically that the table above shows, with regard
to the confidence intervals for Dr. Tyler’s shares, only positive numbers for all claimant
categories in 2015 at the 55% level. Further, the table also shows only positive numbers
for all claimant categories for all confidence levels in all years except for JSC, with a
lower bound value for JSC of only -0.13 in 2016 and -0.26 in 2017.170
Although the Judges find these data to be persuasive in demonstrating that Dr.
Tyler’s shares are reasonable, they are concerned that the intervals remain somewhat
wide, and they do not simply dismiss out-of-hand the one negative lower bound at the
higher confidence intervals. Relatively wide ranges in regression results have been a
previous concern in these proceedings, as noted with regard to the Waldfogel Model
applied to the 2004-05 proceeding:
[W]hile the Waldfogel regression analysis provides useful information, we
also find that there are limits to that usefulness in corroborating the Bortz
survey, largely stemming from the wide confidence intervals for the
Waldfogel coefficients. Thus, the implied share of royalties calculated by
Dr. Waldfogel would change substantially if the true value of the variable
was at one end of the confidence interval rather than at the point estimate
value used by Dr. Waldfogel in his calculations. … Nevertheless, while one
may question the precision of the results on this basis, it only cautions
against assigning too much weight to its corroborative value.
2004-05 Distribution Order at 57063, 57068.
The reconciliation is different here than in the 2004-05 proceeding, because here
the Judges are considering the regression evidence and the Bortz Survey evidence as
essentially equally weighted and useful (but not flawless) evidence, rather than treating
the regression evidence as merely corroborative of the survey evidence. Likewise, the
reconciliation will be different than in the 2010-13 proceeding, because the Judges are
not giving any primacy to the regression evidence in this proceeding, given how the
changes in the retransmission sector after the WGNA conversion have affected the
available data. But the overall point remains: As in prior proceedings, the Judges take
note of the wide confidence intervals (and the negative JSC coefficient at the lower
bound), as one reason to balance the shares implied by the Tyler Model, as adjusted
above, against the results of the Bortz Survey, also as adjusted.
The negative JSC number at the higher confidence intervals may be the consequence of the lower
number of minutes in the regression after the full WGNA conversion. As noted supra with regard to small
sub-categories of programming, when there are very few minutes in the regression, the estimates can be
inaccurate.
XIV. 3.75% FUND
In the 2010-13 Determination, the Judges made no distinction within the
regression approaches themselves between allocation shares attributable to the Basic
Fund and to the 3.75% Fund. Rather, as here, the Judges first made their overall
allocation share decision after applying all the useful evidence, including evidence from
the surveys and regressions. Only then did the Judges consider how to allocate the
claimants’ royalty shares as between the Basic Fund and the 3.75% Fund.
Specifically, the Judges in the 2010-13 Determination engaged in the following
approach in reconciling the 3.75% Fund with the Basic Fund: (1) The Basic Fund
percentage allocations were made without disaggregating royalties attributable to the
3.75% Fund and (2) the 3.75% Fund percentage allocations were made by “reallocat[ing]
the PTV share from [the Basic Fund] proportionally among the categories that participate
in that fund.” 2010-13 Determination at 3611. In reaching this ruling, the Judges
“considered and rejected PTV’s arguments that the allocations of Basic Fund royalties
must be adjusted to account for PTV’s non-participation in the 3.75% Fund.” Id. (It is
undisputed that PTV cannot receive any share from the 3.75% Fund.)
In the present case, all the parties, except PTV, made arguments and presented
testimony proposing that the Judges make the 3.75% Fund allocations in the same
manner as in the 2010-13 Determination.171 PTV, however, through the Johnson Model,
has departed from the prior approach and calculated, via regression analysis, separate
allocations for the Basic Fund and for the 3.75% Fund. According to PTV, this is
warranted because, even though it was not the method used previously, the Judges have
acknowledged the “need to allocate the Basic Fund and the 3.75% Fund separately.”
PTV PHRB at 36-37. But PTV elides the fact that Dr. Johnson’s separate modeling of
the two rates is not how the separate allocations were accomplished in the 2010-13
Determination, as noted supra.
CTV, through its counsel, proposed an alternative method for allocating the 3.75% Fund in its RPHB at
64-65. However, this proposed alternative was not linked to any portion of the record, directly or
indirectly. Factual assertions cannot be made after the close of evidence and, in any event, cannot be made
by counsel. The Judges therefore do not consider CTV’s alternative 3.75% Fund proposal. See Johnson v.
Copyright Royalty Board, 969 F.3d 363, 383 (D.C. Cir. 2020) (rejecting the Judges’ reliance on a party’s
proposal made “for the very first time after the evidentiary record was closed.”).
As other parties note, the approach sought by PTV and Dr. Johnson is not only
inconsistent with the Judges’ prior approach, but also inconsistent with the facts and with
economic theory. As Dr. George comprehensively explained:
Dr. Johnson’s model produces biased results because it excludes 3.75%
fees. Dr. Johnson’s model relates base rate royalties rather than total
royalties to claimant programming minutes. … [T]his approach does not
align with the economic theory that supports regression estimates in these
proceedings. Specifically, profit maximization dictates that systems add
distant signals if the full incremental value exceeds the full incremental cost.
By excluding royalties associated with 3.75% fees, coefficient estimates do
not reflect the full cost of distant signal carriage and hence do not reflect the
full value of claimant programming. Stated another way, a cable system’s
choice to carry a signal subject to 3.75% fees reveals the system’s
willingness to pay for signals to be higher than the royalty expenditure Dr.
Johnson includes in his regression. Omitting 3.75% fees from the dependent
variable will produce regression coefficients that systematically overstate
the value of public television programming not subject to 3.75% fees and
systematically understate the value of other programming.
Dr. Johnson separately estimates his regression model using only fees paid
to the 3.75% fund. This model suffers from the same problem as considering
base rate royalties alone: the dependent variable does not reflect the full
incremental costs of carriage, so the model produces biased estimates of
program values. These estimates also cannot be used to estimate the relative
market value of programming because they do not reflect the economic
choices of systems in the cable marketplace.
George WRT at 23-24 (emphasis added). See also Commercial Television Claimants’
Post-Hearing Brief in Support of Proposed Royalty Allocations at 48 (CTV PHB) (“Dr.
Johnson’s isolation of the base and 3.75% fees is inconsistent both with basic economic
intuition and statistical evidence of a correlation between those carriage decisions and
thus does not account for the link between these retransmission decisions.”); PS PHB at
55 (“There is no rational economic reason to exclude decisions relating to the carriage of
non-permitted stations in assessing CSO preferences.”).

The Judges agree that it makes no economic sense to separate out the two royalty
fund payments when the CSOs would economically make no distinction between the two
funds when identifying their royalty costs and benefits. (That is, money is fungible, and
the CSOs would be indifferent as to how their royalty payments were divided between
the two funds.) Further, the Judges are struck by the fact that PTV and Dr. Johnson did
not take note of this point when proposing their novel approach, and that PTV’s novel
approach just so happened to significantly increase PTV’s allocation share in the Basic
Fund. See CTV PHB at 48 (and record citations therein) (“[I]f Dr. Johnson had estimated
his regression using both the base fee and 3.75% fee, the implied shares for PTV would
have dropped by more than 5% … from 2015 to 2017.”). See also Johnson WRT tbl.4
(acknowledging a five-percentage point increase in PTV’s Basic Fund share over the
2014-2017 period, from 43.5% to 48.5% (an 11.5% increase in PTV’s share), by separating out
the allocations for the two funds).
Accordingly, nothing was persuasively presented in the regression analyses to
support a deviation by the Judges from establishing the 3.75% Fund allocations in the
manner they adopted in the 2010-13 Determination.
XV. INDUSTRY EXPERTS
A. Assumptions Regarding CSO Behavior
PTV offered industry expert testimony from Lynne Costantini, who testified that cable
companies evaluate whether to add, delete or maintain channels on their lineups by
analyzing the overall value a particular channel adds to their content offerings and the
ability of the programs on the channel to attract and retain pay TV subscribers, within the
context of the programming mix on the then-current lineup, as well as technological and
economic constraints.172 Written Direct Testimony of Lynne Costantini, Trial Ex. 7301, at 5
(Costantini WDT); 3/27/23 Tr. 1591-92 (Costantini). She then offered her opinion that,
based on the aforementioned programming goals of CSOs, the relative value to cable
companies of programs included in PTV Distant Broadcast Stations had increased.
Costantini WDT at 8-10. Several other industry experts attested to the value of
programming that attracts and retains subscribers. See, e.g., Written Direct Testimony of
Kate Alany, Trial Ex. 7302, at 2 (Alany WDT); Singer WDT at 7-8; Written Direct
Testimony of Daniel Hartman, Trial Ex. 7110, at 7-9 (Hartman WDT); Witmer WRT at
7; Written Direct Testimony of Alex Paen, Trial Ex. 7603, at 13.
This testimony is consistent with the Judges’ findings in prior distribution proceedings. See 2010-13
Determination at 3590 (“CSO executives’ valuations reflect their conclusions regarding the extent to which
the category of programming contributes to the return on that investment; i.e., helps the cable system attract
and retain subscribers.”).
Sue Ann Hamilton, an industry expert whose testimony on behalf of Program
Suppliers in the 2010-13 Cable Proceeding has been submitted as designated testimony in
this proceeding, testified that a CSO’s selection of stations for distant retransmission is
marked by inertia, not by an affirmative analysis and weighing of alternative stations.
Written Direct Testimony of Sue Ann Hamilton (2010–2013), Trial Ex. 7061, at 7
(Hamilton WDT (2010-13)). She identified two reasons for CSO inertia. First, distant
retransmission costs represent a non-material expenditure for CSOs compared with their
other more expensive programming and carriage decisions. Id. at 9. Second, she testified
that CSOs are more concerned with losing existing subscribers if they drop certain
stations and the associated programs than they are with whether or not any new
retransmitted station and its associated programs might entice new subscribers. Id. In
industry jargon, CSOs are more concerned with legacy distant signal carriage than with
adjusting the roster of distantly retransmitted stations. Id. at 15. Thus, Ms. Hamilton
implied, any correlation between program categories and royalties is spurious, because it
is “inconsistent with [her] understanding of how CSOs actually make distant signal
carriage decisions.” Id.


The Judges again find that Ms. Hamilton was a knowledgeable and credible
witness, particularly with regard to the de minimis impact of distantly retransmitted
stations on CSOs and the importance of “legacy carriage.” Moreover, the Judges take
note that CSO time and effort are themselves finite resources (opportunity costs), and, as
Ms. Hamilton implied, it would behoove a rational CSO to expend more of those
resources making carriage and programming decisions with a greater financial impact.173
Based on the entirety of the record, the Judges do not find that the relative
unimportance of distantly retransmitted stations to a CSO has deprived the regressions in
evidence of value in this proceeding. Even if CSOs emphasize legacy carriage over
potential increases in value from adding or substituting different local stations for distant
retransmission, otherwise well-constructed regressions remain a reliable approach to
capture the relative values of those legacy-based decisions. The Judges are mindful that
regression analyses provide benefit because they look for a correlation between economic
actors’ choices (the independent explanatory variables) and the dependent variables as
potential circumstantial evidence of a causal relationship, but they do not purport to
explain what lies behind such a potential causal relation.
B. Value
1. Volume of Programming Minutes
Several industry expert witnesses testified that, from a distributor’s perspective,
the value and volume of certain categories of programming are not correlated. See, e.g.,
Witmer WRT at 11; 4/10/2023 Tr. 4050:11-4051:8 (Witmer); 4066:1-3; Singer WDT at
19; Singer WRT at 8; Hartman WDT at 23; Written Rebuttal Testimony of Daniel
Hartman, Trial Ex. 7111, at 9 (Hartman WRT); Written Direct Testimony of John S.

Given the low value of retransmitted stations, a CSO might rationally emphasize the value of “legacy
carriage” as a heuristic (without further analytical effort), assuming as Ms. Hamilton implies, that
eliminating a distantly retransmitted legacy station and its programs is more likely to cause a loss in
subscribers than a change in station lineup is likely (without further and costly analytical effort) to increase
the number of subscribers.
Sanders, Trial Ex. 7500, at 25 (Sanders WDT).174 Such testimony was generally offered
to challenge the regression analyses that look to the relationship between the total
royalties paid by cable operators for carriage of distant signals and the quantity of
programming minutes by programming category as a reliable method to assign relative
market value. A similar indication, that value and volume of certain categories of
programming are not necessarily correlated, was also expressed by industry experts who
testified on behalf of proponents of regression analyses using minutes of programming.
For instance, Lynne Costantini, an industry expert offered by PTV, testified that “you don’t
sell programming or buy programming based upon the number of minutes.” 3/28/23 Tr.
1735-36 (Costantini). However, industry experts also cautioned against simply looking
at the price of programming and not weighing the volume of licensed content available to
consumers when assessing relative marketplace value. 4/19/23 Tr. 5406-07 (Homonoff).
Based on the entirety of the record, the Judges are not persuaded by industry
expert testimony that the value and volume of programming are not correlated. The
industry expert evidence is set against the better-established, sound economic
reasoning underlying the regression analyses in this proceeding. The Judges’ reasons for
finding logical economic bases to rely on allocations based on programming
minutes by programming category from the regression analyses are addressed supra.
That is not to say that regressions correlating program category minutes and a
measure of royalties are necessarily the only way to determine value. As discussed
elsewhere in this determination, and as confirmed by some of the industry testimony, the
Judges recognize that certain categories of programming, particularly JSC programming,
bundled together with programming from other claimant categories, can have a value (in
terms of retaining or adding subscribers) that is not necessarily well-correlated with
overall program minutes. To the extent that this bundling of programming with varying
values is not smoothed out by the averaging undertaken by the regressions, survey
analysis would be an appropriate tool to identify such value to a CSO within a station
bundle.
At the same time, several of the same JSC experts conceded that there is a relationship between price, or
willingness to pay, and quantity of live team and professional sports games. 4/3/23 Tr. at 2798-99 (Singer);
4/05/23 Tr. at 3317, 3318 (Warren); 4/10/23 Tr. at 4072-73 (Witmer).
2. Unique Niche Content
CCG, JSC, and SDC assert that the regression analyses fail to adequately capture
the value of “niche” programming or to appropriately reflect the testimony of industry
expert fact witnesses concerning the salient market conditions in the cable industry
during the years at issue in this proceeding. CCG PFF at 178-79 and record citations
therein; SDC PFF at 64 and record citations therein; JSC PFF at 58-59 and record
citations therein.175 The Judges were urged to test the validity of regression analyses
against other evidence of value, as a “reality filter.”176
JSC’s industry expert witnesses testified that JSC content is unique as
“perishable” content. 4/3/23 Tr. 2750 (Singer). That is, each live game is a singular,
real-time event. Mr. Singer asserted that JSC content is largely unique in the marketplace
as among the last regularly scheduled “tune-in” programs. He added that live sports
competitions are mostly only important while they are taking place, do not lend
themselves to recording, and are not compelling on replay. He further stated that sports
are popular with a passionate segment of customers of the type that television distributors
focus on retaining. Singer WRT at 4-5.177 Such sentiments, offered as an indication of
the unreliability of regression analyses and their results, were reiterated by additional JSC
industry expert witnesses. 4/5/23 Tr. 3349-50 (Hartman); Witmer WRT at 9; 4/10/23 Tr.
4061-62 (Witmer); Hartman WDT at 10; JSC PFF at 134-41 and record citations therein.
JSC also asserted that regression analysis was unreliable as it overvalued certain content types in
relation to JSC content, pointing to valuations of paid programming, devotional content, and public
television content. JSC PFF at 60-65 and record citations therein.
Asker WRT at 45 (“It is standard practice in econometric research to test the external validity of
findings whenever alternative methods are available to answer the same question.”); Harvey WRT at 38-41;
3/28/23 Tr. 1910:3-1911:3 (Harvey) (agreeing with Judge Strickler that “validity test” is synonymous with
“reality filter”); 4/18/23 Tr. at 5168:8-5169:8 (George) (urging that the reality filter should reflect the
relevant marketplace being considered/measured). See also CCG PPFCOL at 31 and record citations
therein.
Mr. Singer asserted that games are particularly valuable cases of retransmission to geographic areas with
deep affinity to specific teams. Singer WDT at 17-18. Several examples of such transmissions were cited
to by JSC. JSC PFF at 28-30 and record citations therein. This assertion was disputed by Program
Suppliers as merely anecdotal. PS PFF at 43-44 and record citations therein.
SDC points to similar assertions from industry experts regarding the value of its
niche content. Written Direct Testimony of Toby Berlin, Trial Ex. 7508, at 7-10; Written
Rebuttal Testimony of John S. Sanders, Trial Ex. 7501, at 27 (Sanders WRT); SDC PFF
at 76-79 and record citations therein. Program Suppliers noted that niche programming is
not limited to devotional content, and that non-JSC “Other Sports” programming is
valued as niche programming. PS PFF at 18-19, citing 4/10/23 Tr. at 3824-25 (Berlin),
and other record citations therein. Similarly, CCG observed that its programming,
including French content, qualifies as unique and valuable niche programming that
attracts and retains subscribers. CCG PFF at 178-79, citing Kirshenblatt WDT at 10-18.
The Judges find Mr. Singer and Mr. Berlin to be particularly credible witnesses in
relation to their testimony regarding the unique value of JSC content and SDC content in
relation to the other content categories during the relevant time period. Based on the
entirety of the record, the Judges are persuaded that evidence of the unique value of
CCG, JSC, and SDC content serves as a limitation on the applicability of certain
proposed regression analyses and their resulting proposed allocation results. These
validity test or reality filter findings do not negate valid application of regression analyses
as a basis for allocation. However, these factors are taken into account within the Judges’
weighting of the allocation methodologies, including application of the Bortz survey, as
addressed infra.


3. Streaming and Availability on Other Platforms
JSC testified that the value of programming is diminished when that same type of
content is available elsewhere, especially at lower or no cost. 4/3/23 Tr. 2749
(Singer); 4/5/23 Tr. 3357 (Hartman); Hartman WDT at 18-19. The JSC industry expert
witnesses testified that there is a lower risk of losing any subscribers when such content
is not carried. Witmer WRT at 14; 4/5/23 Tr. 3378:12-24 (Hartman); Hartman WDT at
18-19. These sentiments were echoed by PTV’s industry expert Lynne Costantini.
Costantini WDT at 7; 3/28/23 Tr. 1718-19 (Costantini).
SDC pointed to testimony of a similar dilutive effect from streaming, regarding
Program Suppliers’ programming. SDC noted that syndicated series and movies,
represented in Program Suppliers content, historically had often exclusively run on
broadcast stations, but were increasingly becoming available on streaming platforms,
which grew in popularity during the relevant period. SDC PFF at 108, citing Costantini
WDT at 7; Hartman WRT at 10-11. SDC also argued that its content did not suffer from
a dilutive effect from streaming, as streaming services were not designed to cater to
devotional audiences, thus preserving the retentive value of SDC content to CSOs. SDC
PFF at 109-10 and record citations therein.
Program Suppliers asserted that while syndicated shows and movies are available
on streaming platforms, that does not necessarily detract from the value of such programs
on distant signals. It noted that as streaming rose, the volume of Program Supplier
content carried on distant signals rose as well. 4/19/23 Tr. at 5408 (Homonoff).
CCG testified that while significant CCG content was offered through streaming,
it was generally only after an exclusive premiere via broadcast. Kirshenblatt WDT at 11-13;
see also Written Direct Testimony of Tom Cox, Trial Ex. 7401, at 1-2. PTV offered
testimony that during the relevant years significant portions of PBS programming were
offered and viewed free through various digital streaming options. PTV also testified that
PBS sold streaming devices related to such free streaming content. 3/27/23 Tr. 1545-50
(Alany).
CTV offered that during the relevant period, the dilutive effects of streaming were
not present for original live and local CTV programming or for JSC programming, which
was largely unavailable on streaming platforms. Written Rebuttal Testimony of Robert
Papper, Trial Ex. 7206, at 45 (Papper WRT); Written Rebuttal Testimony of Mike
Vaughn, Trial Ex. 7205, at 4. CTV PFF at 11-15, and record citations therein. CTV’s
industry experts, as well as Professor Marx, were especially convincing in distinguishing
the effects that streaming had on CTV content versus other types of programming. See,
e.g., 4/11/23 Tr. 4240:22-4241:12; Tr. 4234:6-10 (Marx).
The Judges find credible evidence that Program Suppliers’ content was more
predominantly available through streaming channels during the relevant period.
Therefore, based on the entirety of the record, the Judges find evidence of dilutive effects
to be persuasive as an indicator of decreased relative value of Program Suppliers content.
Additionally, the Judges find that CTV content, especially original live local news
content, was generally not diluted by streaming and that this is a persuasive indicator of
relative increased value of CTV content. The Judges apply these factors in their
weighting of allocation methodologies.
Duplication
Industry executives testified that duplicative content does not add value as it does
not further CSOs’ goals of subscriber retention. Singer WRT at 15-16; Hartman WRT at
12; Witmer WRT at 15. JSC asserted that a significant proportion of the programming on
distant PBS signals was duplicative of what was already available from CSOs to
subscribers, and reiterated that such duplication did not provide value. Harvey CWDT at
51; Witmer WRT at 14; 4/10/23 Tr. 4064:18-4065:4 (Witmer).178 JSC pointed to a study
that found rates of duplication for these programs to be as high as 98.9%. Harvey CWDT
at 55 tbl.28. Mr. Papper also asserted that programming on PTV stations is mostly
duplicative and much of it airs at the exact same time. Papper WRT at 15. Mr. Papper
provides specific examples to demonstrate duplicative airing of programming, all
demonstrating higher duplication than the overall result average. Id. at 16-41. Mr. Papper
notes that the duplication was somewhat lower in 2016 and 2017, but there still is significant
duplication of programming. Id. at 41. In contrast, duplication with CTV signals was
perceived as minimal. Id. at 42. Mr. Papper argues that the large amount of duplicative
programming rarely provides a good reason to import a distant PTV signal unless there
really is not a local one. He argues this is supported by data showing that, during the
2014-2017 period, only slightly more than a third of the systems and slightly over a quarter of
the subscriber groups had both a distant and local PTV signal. Id.
SDC offered a similar view of PTV content. See SDC PPFCOL at 112, and record citations therein.
The assertions against finding value of duplicative programming were criticized
for treating programs as duplicative even if they did not air at the same time on both the
distant and the local signal or even if the distant and local signals aired different episodes
of the same program. Johnson WRT Ex. 7303 at 40-44.179 Dr. Johnson argued that
different episodes of the same program are distinct programming, and a single episode of
a program can create incremental value if shown at a different time. Dr. Johnson
conducted an analysis of duplication and found that only approximately 20 percent of
PTV programs were retransmitted to subscriber groups at the same time as a local
broadcast. Id. at 41. JSC addressed the former point by asserting that the minimal value
of time-shifted programming does not accrue to retaining cable subscribers. 4/3/23 Tr.
2764:13-19 (Singer).
Based on the entirety of the record, the Judges find that significant duplicative
content does not, in general, have the same value as non-duplicative programming. The
industry experts presented reliable testimony that simultaneous or near-simultaneous
programming does not enhance the ability to attract and retain customers. However, the
Judges also find that time-shifted programming does have some value to customers,
affording them greater flexibility in their viewing, and therefore provides customer
retention value to CSOs. The Judges address this factor in making adjustments to
regression methodologies (the Bennett adjustment) and in the Judges’ weighting of the
allocation methodologies.
PTV’s witness Ms. Alany acknowledged duplication as an issue, suggesting that local public television
stations may adjust programming schedules in order to avoid or minimize duplication, but did not offer any
evidence of such adjustments having taken place. Alany WDT at 21; 3/27/23 Tr. 1557:20-25 (Alany).
4. Bandwidth
Ms. Costantini testified that CSOs’ programming decisions should reflect the
highest and best use of scarce bandwidth, and that all decisions to carry programming are
thus necessarily indicative of value. Regarding bandwidth issues, Ms. Costantini
challenged the testimony of other industry experts (addressed below) by asserting that
bandwidth considerations were a significant factor in the programming decision-making
of cable companies during the relevant time period. Costantini WRT at 3-6. She testified
that during the relevant period, many cable companies provided three distinct products:
pay TV, broadband internet (important to support internet video products), and IP phone,
each of which competed within the CSO for bandwidth as the CSO sought its most
profitable use. Ms. Costantini testified that CSOs placed more value
on broadband Internet than on CSO television programming. Costantini WRT at 4; 3/27/23
Tr. 1597-1605 (Costantini). In support of this view, she pointed to her professional
experience while seeking cable distribution during the period 2012–2016, including
negotiations with CSOs that oftentimes cited bandwidth allocation as a reason not to
carry a new channel. Costantini WRT at 5-6. However, Ms. Costantini also testified to
an inability to determine whether “most or many or the majority” of CSOs even provided
Internet service (bandwidth) during the relevant time period. 3/27/23 Tr. 1613
(Costantini).

Ms. Witmer testified that during the relevant period, advances in digital
technology meant that bandwidth was no longer a significant driver of carriage decisions.
Witmer WRT at 7 n.3. Ms. Witmer asserted that deployment of switched digital
technology, headend consolidations, and reclamation of analog bandwidth cable channels
opened up considerable digital bandwidth on systems that enabled the launch of more
channels and other consumer products such as telephone and broadband services.
Several other industry experts also testified that bandwidth was no longer a constraint
during the relevant period. Singer WDT at 7; Singer WRT at 5; 3/30/23 Tr. 2595:13-2597:24 (Majure); 4/3/23 Tr. 2764:20-2765:14 (Singer).
Based on the entirety of the record, the Judges are not persuaded that bandwidth
remained a significant concern for most CSOs, which, the record established, employed more
advanced technology than in previous periods. Bandwidth allocation may have been a
legitimate but nonspecific concern for smaller CSOs that had not employed improved
digital technologies in the early years of the relevant time period. However, on the
current record, the Judges are not able to discern the extent to which bandwidth was
a significant concern for CSOs in relation to programming decisions. Therefore, the
issue does not impact the Judges’ consideration of the methodologies or resulting
allocations offered in this proceeding.
5. Other Factors: Cost, Acclaim, Trust
Ms. Alany offered testimony to indicate that the relative market value of PTV content is
demonstrated by production cost and the quality/acclaim of content, as well as the level of
trust that PBS enjoys in the public eye. See, e.g., Alany WDT at 6–12, citing PBS Trust
Brochures 2014-2018; 3/27/23 Tr. 1535:16–1537:1 (Alany). Other industry experts also
offered similar testimony regarding production cost matters and quality/acclaim. 4/13/23
Tr. at 4918-21 (Paen).

In response, other expert witnesses argued that such characteristics do not equate
to the ability to attract and retain subscribers and economic value. Singer WRT at 17-18;
Hartman WRT at 13-15; Witmer WRT at 16. Ms. Witmer, on behalf of JSC, added that
the notion that costs of such programming should be considered in royalty share
allocation is contrary to the standard for determining the share allocation, namely what
a cable system would pay for the content absent the section 111 license. Witmer WRT at
15.
Based on the entirety of this record, the Judges are not persuaded that issues of
production cost, quality/acclaim of content or the level of trust that a producer enjoys in
the public eye are meaningful toward the Judges’ determination of relative market value.
The Judges understand that, at some level, programming cost and acclaim may impact
value. However, the present record does not equip the Judges to evaluate these factors on
a comparative level. Sufficiently established studies of comparative public trust in a
producer’s content, especially news content, might be properly presented as a valid
indication of relative market value. However, the present record, including the PBS-commissioned trust survey, does not provide a reliable basis for determining the ability to
attract and retain subscribers or for adjusting the Judges’ determination of relative market
value. In this regard the Judges note that PTV did not adequately correlate levels of
public trust with what CSOs might be willing to pay for programming. Therefore, these
factors do not impact the Judges’ weighting of the main methodologies or resulting
allocations offered in this proceeding.
C. Industry Experts Regarding Bortz Survey Respondents’ Identity and
Capacity
In her rebuttal and hearing testimony for PTV, Ms. Costantini challenged the
Bortz survey by asserting that the survey likely did not reach the executive most
responsible for programming carriage decision-making in more than 75 percent of

the surveyed cable systems across the four years for the following reasons. Costantini
WRT at 6-10, 18-47; 3/27/23 Tr. 1621-25, 1595-96 (Costantini). She maintained that the
survey likely did not interview the individuals most responsible for programming
carriage decisions for these cable systems. Id. She appeared to accept that Bortz Media
used the Television & Cable Factbook (Factbook) to identify contacts for each respective
system, particularly telephone numbers, and that Bortz Media usually selected the
senior-most executive from that cable system to list as the initial point of contact on the survey
questionnaire. Costantini WRT at 6-7. However, she indicated the approach was faulty
because the Factbook does not specifically identify programming carriage decision-makers. She stated that, in her experience, job position titles at cable companies are
insufficient without other data points to assess whether the individual is likely to be most
responsible for programming decisions. She testified that, in the majority of instances, the
descriptions of Bortz respondents’ positions do not indicate programming decision-making responsibilities. Costantini WRT at 8.
Ms. Costantini also noted that while some respondents are unlikely to be most
responsible for programming carriage decisions, especially for larger cable companies, in
some instances, they may provide valuable input regarding programming carriage to the
ultimate decision-makers. She added that the persons holding regional management
positions are not necessarily more likely to be most responsible for making programming
decisions and that at larger cable companies persons holding regional management
positions would not be the persons most responsible for making programming decisions.
3/27/23 Tr. 1621-22 (Costantini). She also found that it would be highly unlikely for the
title or position of the person most responsible for making programming decisions at a
cable system to change year to year, as was alleged to be the case in the Bortz survey.
Costantini WRT at 9. These factors led Ms. Costantini to opine that Bortz likely did not
interview the persons most responsible for programming carriage decisions for more than

75% of the surveyed cable systems across the four survey years. A summary of these
issues was included as Table 1 to her rebuttal testimony. Costantini WRT at 18-47.
Ms. Costantini added that the Factbook data are potentially unreliable as a
foundation from which Bortz could ascertain the persons most responsible for making
programming decisions at the surveyed CSOs. Costantini WRT at 6-7. She also found
fault with the Bortz survey’s failure to attempt to independently validate the respondents’
roles and responsibilities utilizing publicly available sources such as LinkedIn or cable
companies’ websites, or by asking other questions to confirm they were speaking to the
appropriate person. Costantini WRT at 6-7.
Ms. Costantini testified that the questions asking respondents to assign
importance, cost, and value to programming on distant broadcast stations are inconsistent
with how programming carriage decisions are made by cable companies. Costantini
WRT at 10. She maintained that station carriage decisions are not made based upon
inclusion or exclusion of a category or genre of programming, but rather on the entire
bundle of the distant broadcast station’s programming schedule. Costantini WRT at 9.
Ms. Costantini opined that the Bortz survey questions lacked the qualitative and
quantitative specificity needed for respondents to accurately answer questions and that
respondents would not necessarily understand the terminology used in the questions, and
that the questions do not sufficiently address the interplay and overlap across some
categories. Costantini WRT at 12-13. A similar concern was also asserted by Sue Ann
Hamilton who testified in the 2010-13 Cable Proceeding that the programming categories
adopted in royalty distribution proceedings are unique and “quite different from the
industry understanding of what programming typically falls in a particular programing
genre.” Hamilton WDT (2010-13) at 10. Oral Testimony of Sue Ann Hamilton (2010-13), Trial Ex. 7063, at 4309, 4312; Written Rebuttal Testimony of Sue Ann Hamilton
(2010-13), Trial Ex. 7062, at 17-18 (Hamilton WRT (2010-13)). For example, she

testified that “most cable operators” would not recognize that pre- and post-game
interviews and highlight compilation telecasts would fall into the Program Suppliers
category, or that locally produced high school team sports would fall into the Commercial
Television category. Id. at 11. Ms. Hamilton further opined that cable operators were
not likely to differentiate between network and non-network sports telecasts and that
migration of live team sports programming to regional cable networks further
complicates the equation. See Hamilton WRT (2010-13) at 17-18.
Ms. Costantini criticized the Bortz Survey for not providing enough information
and time for the respondents to answer the questions accurately. Ms. Costantini expressed
doubt that any respondent could accurately answer the survey questions in the course of
the telephone interview. She also testified that answering accurately would require
access to extensive information that would not be readily available to most
respondents. Costantini WRT at 10-13.
Mr. Singer and Ms. Witmer, testifying on behalf of JSC, disagreed with Ms.
Costantini regarding inappropriate respondents in the Bortz survey. They testified that,
while ultimate responsibility for carriage decisions may be at the corporate level, the
individuals with the knowledge of why specific distant signals were carried, and why
they were valuable to the system in a specific area, would be at the local or regional level.
4/3/23 Tr. 2769-73 (Singer); 4/10/23 Tr. 4054-55, 4061 (Witmer). Mr. Trautman also
agreed with this assessment, adding that there is no one-size-fits-all standard for what
position or level within a cable system is going to be associated with the person most
responsible for programming decisions. 4/3/23 Tr. 2845-46; 2849 (Trautman).180 Mr.
Singer noted that the relevant titles at cable systems for individuals responsible for
programming were “all over the place” and that there was not necessarily just one person

JSC also noted that in a prior proceeding the Judges noted that it is not unreasonable to think that CSOs
have maintained an institutional memory of the requirements of these proceedings. JSC RPFF at 32 and
citations therein.
responsible for programming carriage decisions at CSOs. 3/20/23 Tr. 2770-71 (Singer).
Ms. Witmer also testified that the titles of relevant executives were a legacy of the history
of many small systems rolling up into bigger consolidated systems; executives often had
various titles, which were not necessarily consistent from one system to the next.
4/10/23 Tr. 4060-61 (Witmer).
Mr. Trautman testified that use of the Factbook to identify the initial point of contact on the
survey questionnaire is a feature, not a flaw, of the Bortz survey, and is an effective tool
for assuring survey respondents are qualified. 4/3/23 Tr. 2848-49 (Trautman). He added
that the initial target is often not the survey respondent because, ultimately, the
survey’s goal is to speak with the person most responsible for carriage decisions. Id.
Regarding the alleged difficulty of accurately answering the survey questions or
understanding the categories at issue, Ms. Witmer testified that the respondents would have
been able to answer the questions. She further testified that the categories of
programming listed in the questionnaire make sense to her as a cable executive. She
explained that it is common in the cable industry for channels to have different kinds of
content on them, but that people working in the cable industry and the programming area
would be more than capable of understanding the categories of content separate and apart
from particular linear channels. 4/10/23 Tr. 4052-55 (Witmer).
Regarding the alleged complexity of the Bortz
questions, JSC pointed to designated testimony from the 2010-13 proceeding from Mr.
Hartman who explained that “when you look at the type of linear channels that we
negotiate for, they really do fall into categories.” Mr. Hartman also testified that “it’s our
day-to-day job to kind of know . . . that type of programming.” 2010-13 Hartman Oral
Testimony Tr., Trial Ex. 7056, at 74-75.
While Ms. Costantini raises some reasonable concerns about the Bortz survey,
including concerns that the titles of some respondents may not be indicative of those most

responsible for programming carriage decisions, the Judges observe that her criticisms
were routinely accompanied by significant caveats, such as being only generally applicable
and focused on larger cable companies. Furthermore, the Judges note her acknowledgment
that “there are lots of corner cases” regarding appropriate titles of respondents.181 3/27/23
Tr. at 1621-22 (Costantini). Based on the entirety of the record, the Judges are not
persuaded that the issue of the respondents’ titles is reason to disregard reliance on the
Bortz survey. Furthermore, the Judges find that use of the Factbook as a starting point in
pursuing the appropriate respondents is not unreasonable. The Judges do not discount the
reasonable concerns that were established regarding titles, which is a factor the Judges
take into account in weighing their reliance on the various allocation methodologies.
Additionally, the Judges find some aspects of Ms. Costantini’s criticism of the
Bortz survey questions are undermined by her testimony, which depicted a high level of
competency as a cable industry executive who possessed a detailed understanding of
nuances underlying the questions in the Bortz survey. The Judges note Ms. Costantini’s
testimony of her own prior roles in which she held significant responsibility for
programming carriage decisions for the Time Warner cable system and was
[REDACTED] 3/27/23 Tr. at 1642-43 (Costantini). Ms. Costantini’s written and oral
testimony indicated that she would be capable of providing meaningful responses to the
sort of questions posed in the Bortz survey, including while in roles in which she was not the
person most responsible for programming carriage decisions.
With regard to the categories in the Bortz survey questions and the categories in
this proceeding, the Judges observe that they have not changed for decades, giving CSOs
time to acquaint themselves fully with the programming comprising each agreed

The reference to lots of “corner cases” represents the use of an engineering term indicating a situation
that occurs outside normal operating parameters. See Corner Case, Wikipedia,
https://en.wikipedia.org/wiki/Corner_case (last visited Aug. 28, 2023).
category. In the Judges’ view, it is not unreasonable to conclude that, even with changes
in personnel, the CSOs have maintained an institutional awareness of the subjects and
categories at issue in the survey and in this proceeding, and therefore that the Bortz
respondents had adequate ability to understand the relevant terminology in the Bortz
questions.
Based on the entirety of the record, the Judges find that the industry experts that
responded to the Bortz survey were sufficiently equipped to offer reliable evidence
indicative of relative marketplace value. The Judges do not find that the respondents’
capacity to accurately answer the survey questions or understand the categories at issue
serves as a reason to disregard the Bortz survey. Furthermore, the Judges do not find that
respondents’ capacity serves as a significant negative factor in the weighting of the
various allocation methodologies at issue in this proceeding.
In sum, the Judges agree that the Bortz surveys are far from a perfect measure of
relative market value, as discussed infra. However, based on the entirety of the record,
the Judges find that despite the offered criticisms, the surveyed cable system executives
were sufficiently identified, competent and familiar with the subject matter to provide
reasonably reliable responses.182
XVI. CHANGED CIRCUMSTANCES
The Judges may vary from prior decisions when there are (1) changed
circumstances from a prior proceeding or (2) evidence on the record before the Judges
that requires prior conclusions to be modified regardless of whether there are changed
circumstances.183

Regarding faulting the survey for excluding PTV-only CSOs from the 2014 through 2017 surveys
received in this proceeding, the Judges address and account for the issue infra (addressing
application of adjustment).
183 2010-13 Determination at 3557, citing 1998-99 Librarian Order at 3613-14.

In the 2014-2017 period, several widely agreed-upon changed circumstances have
taken place, including 1) WGNA’s conversion to a cable network,184 2) the
reclassification of PTV signals from exempt to non-exempt,185 and 3) the rise in
streaming on alternative platforms.186 Additionally, the Judges observe that the record
regarding the conduct and development of the survey and regression methodologies has
become more detailed than in prior proceedings. Based on the agreed-upon record and the
Judges’ findings here and throughout the determination, the Judges find that significant
changed circumstances occurred across the relevant period.
XVII. SURVEY EVIDENCE AND EXPERT TESTIMONY RELYING ON
SURVEYS
A. Background
Three of the six parties in this proceeding rely on survey evidence to support their
arguments concerning the allocation of shares of the subject royalty funds. For more than
40 years, a survey approach has been offered in royalty distribution proceedings before
the CRB and its predecessor bodies (the CRT and CARP), more recently in Distribution
of the 2004 and 2005 Cable Royalty Funds187 and Distribution of Cable Royalty Funds,
Docket No. CONSOLIDATED 14-CRB-0010-CD (2010-2013).188 In the latter
proceeding, data from three separate surveys administered to cable system operators
(CSOs) were offered during the hearing, and then analyzed by the Judges in connection
with their final allocation distribution. See 2010-13 Determination at 3582; 4/3/2023 Tr.
2825 (Trautman). In this proceeding, only one survey was conducted for use in possible

See, e.g., Harvey CWDT ¶ 7. (Distant signal carriage patterns in 2014 closely resembled those from the
2010-2013 period. By contrast, starting in 2015, following the conversion of WGNA from a superstation to
a cable network at the end of 2014, CSOs significantly decreased their use of the section 111 license, with
the vast majority of systems electing to carry far fewer distant signals.); See also, Marx WRT ¶¶ 6, 60;
Marx ACWDT at 16, 20-26, ¶ 43; Bennett ACWDT at 11.
185 See, e.g., Marx ACWDT ¶¶ 76-77, pp. 28-29.

See, e.g., Witmer WRT ¶ 33, p.14; Costantini WDT ¶ 20, p.7; Alany WDT at 12.

See 2004-05 Distribution Order.

See 2010-13 Determination at 3552, 3582.

litigation in connection with royalty distribution pursuant to section 111 of the Copyright
Act, produced during discovery in accordance with applicable regulations,189,190 and then
offered by a party during the hearing. In particular, JSC, as supported by fact and expert
testimony, argues that a constant sum survey (in which survey respondents allocate a
fixed sum across different categories, at least in this case, adding up to 100 percent) is
well-suited to revealing relative market values of distant signal programming to CSOs.
Specifically, JSC argues that the Bortz Surveys,191 which it commissioned and offered for
the years 2014 through 2017, reliably reveal market value relevant to this proceeding.192
See, e.g., JSC PHB at 43-71; 4/3/2023 Tr. 2822-23 (Trautman). CTV and SDC also make
arguments that rely on the Bortz Surveys, as did some of their experts who testified
during the hearing. See, e.g., CTV PHB at 1-3, 42-79; Settling Devotional Claimants’
Post-Hearing Brief at 64-85 (SDC PHB). Yet, CCG, Program Suppliers and PTV,
supported by testimony of their experts, oppose reliance on the Bortz Surveys. See, e.g.,
Post-Hearing Brief of The Canadian Claimants Group at 50-77 (CCG PHB); PS PHB at
9-10, 57-77; PTV PHB at 38-71, 81-82.
In addition, CTV called, as an expert witness, Prof. Robert A. Papper,193 who
testified as to trends in the local television news industry, and particularly his opinion as
to the impact of those trends on the relative value of CTV programming during the period

See, e.g., Order 27 Granting in Part and Denying in Part PTV Motion to Compel JSC to Produce
Documents (Feb. 15, 2023); Order 30 On Public Television’s Order to Enforce Order 27 (Mar. 31, 2023);
Order 31 Further to Order 30 on Public Television’s Motion to Enforce Order 27 (Apr. 12, 2023).
The Judges entered a Protective Order on February 17, 2022, pursuant to a Joint Motion filed by all
participants. Order No. 27 created a subset of further restricted information consisting of the identities or
other personally identifiable information (PII) of Bortz Survey respondents for the years 2014-2017. See
Order 27 at 5 n.6, 57.
191 JSC presented the Bortz Survey in documentary form in a report, entitled “Cable Operator Valuation of
Distant Signal Non-Network Programming: 2014-17” (Bortz Report). During the hearing, the Bortz Report
was received into evidence as Trial Ex. 7101. 3/20/2023 Tr. 305, 316.
JSC offered the first Bortz Survey to the CRT in 1983. 4/3/2023 Tr. 2824-25 (Trautman); Bortz Rep.
app. A; 2010-13 Determination at 3582.
Prof. Papper was qualified as an expert in broadcast and digital journalism. 4/11/23 Tr. 4370 (Papper).
He was retained by the National Association of Broadcasters on behalf of CTV (i.e., the CTV claimants in
this proceeding). Papper WDT at 1.
2014-2017. His opinion relied in large part on the results of an annual survey that he has
directed for many years, which is called the Radio Television Digital News Association
Annual Survey (RTDNA Survey),194 and especially on articles and studies (mainly authored or
co-authored by Prof. Papper) that concern the results of the RTDNA Surveys for the
period 2014-2017. RTDNA Survey information, and the articles and studies on which
Prof. Papper relied, are appended to his written direct testimony. See, e.g., 4/11/23 Tr.
4361-63 (Papper); Written Direct Testimony of Robert Papper, Trial Ex. 7201 (Papper
WDT); Papper WRT.
An issue was raised as to whether or not large portions of Prof. Papper’s
testimony should be viewed as the introduction of a survey or surveys, governed by 37
CFR 351.10(e) and, if so, whether CTV has complied with the production requirements
set forth therein. Indeed, before the hearing, Program Suppliers filed their Motion in
Limine to Exclude Portions of the Testimony of Professor Robert A. Papper (MIL)
(eCRB no. 27485). In denying the MIL, the Judges determined, inter alia, that the written
direct and rebuttal testimonies, including the portions subject to the MIL, “express
detailed opinions based in large part on certain RTDNA Surveys, allowing Professor
Papper to be examined on his opinions,” but that “would not necessarily mean that the
surveys were offered or received into evidence.” Order 29 at 8. Application of section
351.10(e) was not required at that time. Id. Program Suppliers made similar objections to
portions of the Papper testimonies during the hearing. See 4/11/23 Tr. 4354-55, 4366
(Papper); 4/12/23 Tr. 4445-52 (Papper). Subsequently, Program Suppliers filed their
Motion to Strike Portions of the Written and Oral Testimony of Robert A. Papper (eCRB
no. 28213). As discussed in Order 39 denying the motion to strike, the RTDNA Surveys
were not conducted for the purpose of litigation or offered independently during the

The RTDNA survey was conducted for at least two decades before Prof. Papper began to administer it
in 1994. 4/11/23 Tr. 4367 (Papper).
hearing as evidence. Rather, the RTDNA Surveys were relied on by Prof. Papper in
forming and presenting his expert opinions, and the weight to be accorded data from the
RTDNA Surveys shall be determined within the context of evaluating Prof. Papper’s
expert opinions.
B. The Bortz Surveys
1. Conduct of the Bortz Surveys for 2014 Through 2017
During the hearing, JSC called James M. Trautman, Managing Director of Bortz
Media & Sports Group, Inc. (aka Bortz Media), to sponsor the Bortz Surveys, and their
report (Bortz Report) which formed part of Mr. Trautman’s written direct testimony.
Indeed, the Bortz Surveys, including their report, were prepared under Mr. Trautman’s
direct supervision at the request of Major League Baseball, the National Football League,
National Basketball Association, Women’s National Basketball Association, National
Hockey League and the National Collegiate Athletic Association (i.e., JSC in this
proceeding). Written Direct Testimony of James M. Trautman, Trial Ex. 7100, at 1
(Trautman WDT); 4/3/2023 Tr. 2816-20 (Trautman). For nearly forty years, Mr.
Trautman has supervised market research addressing a wide range of issues, for a variety
of clients, affecting the cable and satellite television industries, including issues related to
the valuation of television programming. Mr. Trautman has had primary responsibility
for management of previous CSO studies conducted by Bortz Media for JSC and has
testified concerning these studies in several proceedings before the Judges of the CRB
and their predecessors. In the 2010-13 cable royalty distribution proceeding, he was
qualified as an expert; and in this proceeding, he was qualified as an expert in market
research, including survey research, applied market analysis and valuation in the cable
and broadcast television industries. 4/3/2023 Tr. 2821 (Trautman).
As explained by Mr. Trautman, the Bortz Survey is a telephone survey. He further
testified that each Bortz Survey offered in this proceeding is a survey of local CSOs and

was designed to address the relative value that distant signal programming has to cable
operators, or would have in a free market. See 4/3/2023 Tr. 2821-22 (Trautman). As
explained by Dr. Mathiowetz,195 the Bortz Survey may be termed an establishment
survey because respondents answered questions on behalf of a business or other entity
rather than themselves. 4/10/2023 Tr. 3835 (Mathiowetz).
Since a Bortz Survey was first offered in a royalty proceeding in 1983, changes
have been made to the design of the survey, sometimes in consultation with experts
outside Bortz Media or its predecessor company. Changes were made for the Bortz
Surveys offered in this proceeding, as compared to those offered in prior royalty
proceedings, including the most recent proceedings for distribution of 2010-2013
royalties. See 4/3/2023 Tr. 2824 (Trautman); 4/4/2023 Tr. 3013 (Trautman); 2010-13
Determination at 3582. For example, in 2015-2017, the number of cable systems eligible
for inclusion in the Bortz survey had decreased, falling from 788 (in 2014) to 328-361
(for 2015-2017). Bortz Media responded by shifting from sampling eligible systems for
2014 (as it had also done in earlier surveys) to attempting what it refers to as a census of
all eligible systems for the surveys conducted for 2015, 2016 and 2017.196 Thus, for
2015-2017, Bortz Media states that all eligible systems had an opportunity to respond to
the surveys. See Bortz Rep. at 21. Furthermore, in response to additional changes in the
cable industry, Bortz Media modified its questionnaire in 2015-2017 to account for

Dr. Nancy Mathiowetz was called by JSC as an expert witness at the hearing, and was qualified as an
expert in survey research methodology, questionnaire design and statistics. Dr. Mathiowetz has testified
before on behalf of JSC. 4/10/2023 Tr. 3828, 3835 (Mathiowetz); Mathiowetz CWDT; 2010-13
Determination at 3587.
Dr. Mathiowetz testified that she treated each of the Bortz Surveys for 2015 through 2017 as a sample
rather than a census. She testified that while the Bortz Survey goal was to include each eligible CSO, there
is a different expectation with respect to those Bortz Surveys and the data collection effort compared to, for
example, that of the decennial census in the United States in which the goal is to measure absolutely every
single person in the country. 4/10/2023 Tr. 3842-47 (Mathiowetz). Thus, when Dr. Mathiowetz made
computations of standard errors for the Bortz Survey for 2015 through 2017, she treated each survey as a
sample. 4/10/2023 Tr. 3844 (Mathiowetz).
WGNA’s conversion to a cable network, which has already been discussed with respect
to the regression evidence received in this proceeding.197
As in earlier surveys, for the 2014-2017 period at issue in this proceeding, Bortz
Media surveyed so-called “Form 3” cable systems. Form 3 systems are those that had at
least $527,600 in semiannual gross receipts from retransmitting broadcast signals to their
subscribers.198 According to the Cable Data Corporation (CDC), which compiles data
from the statements of account (SOAs) that cable systems file with the Copyright Office,
Form 3 systems accounted for more than 95 percent of total royalty payments made by
cable operators from 2014-2017. Furthermore, Form 3 systems, unlike the smaller Form
1 and 2 systems, are well-suited for Bortz surveys because they identify in their SOAs the
distant signals that they retransmitted. Bortz Rep. at 20. Nevertheless, inasmuch as some
Form 3 cable systems carry either no distant signals, or carry only distant signals
representing a single programming category (i.e., only PTV signals or only Canadian
signals), Bortz Media determined that it would not be possible to obtain a comparative
value judgment from survey respondents regarding their distant signal programming.
Therefore, as it has done in connection with surveys offered in previous proceedings,

Specifically, Bortz Media used two survey instruments for the 2014 cable operator survey. There was
one form for survey respondents whose cable systems carried distant signals in addition to, or other than,
WGNA. Appendix B (entitled “Survey Instruments”) to the Bortz Report contains the additional distant
signals (ADS) questionnaire that was used with those survey respondents. There was a second form for
respondents whose cable systems carried WGNA as their only distant signal (also included in the Bortz
Report, app. B). When using the second form, respondents were provided with specific information about
(and asked to value only) the compensable programming on WGNA. For the years 2015 through 2017,
only the ADS questionnaire was used because WGNA was no longer a distant signal. Bortz Rep. at 24-25.
Similarly, changes were made to the Bortz weighting and projection approach for 2015-2017 to account for
the changes to the distant signal landscape in that time period. See id. at 21 (citing Bortz Rep., Section II).
198 As indicated by Dr. Mathiowetz in her written direct testimony, pursuant to section 111 of the Copyright
Act, cable systems are classified into three tiers based on the level of gross receipts that they receive from
their subscribers for the retransmission of over-the-air broadcast signals. Small-sized and medium-sized
systems pay a flat royalty fee. With respect to large cable systems (that use “Form 3” when filing their
SOAs at the United States Copyright Office), royalties are calculated as a percentage of their gross receipts
based on the distant signals they retransmit. Yet, without regard to what (if any) distant signals a system
retransmits, all Form 3 systems must pay at least a minimum royalty fee. See Mathiowetz CWDT at 6-7
(citing 2010-13 Determination at 3553 and 17 U.S.C. 111(d)(1)(B)-(C)). See also United States Copyright
Office, Statement of Account, SA3 (Long Form), https://www.copyright.gov/forms/sa3.pdf (current) (for
use when a system’s “semiannual gross receipts for secondary transmissions (the figure you give in space
K of the form) is $527,600 or more . . . .”) ; United States Copyright Office, Old Cable Statement of
Account Forms, https://www.copyright.gov/licensing/saold.html.

Bortz Media did not interview, or attempt to interview, those systems in connection with
the 2014-2017 Bortz Surveys. Id.
The level of copyright royalty payments played an additional role with respect to
the 2014 Bortz Survey. As discussed above, for the 2014 survey, Bortz Media attempted
to contact what it terms “a stratified random sampling of Form 3 cable systems,” with the
stratification based on copyright royalty payments. Bortz Rep. at 20. JSC’s expert
witness, Dr. Mathiowetz testified that as in the proceeding for 2010-2013 royalties, her
opinion is that “the use of a stratified sample results in an efficient sample that assures the
resulting sample mirrors the population of interest.” Corrected Written Direct Testimony
of Nancy Mathiowetz, Ph.D., Trial Ex. 7107, at 7 (Mathiowetz CWDT). In this case,
Bortz Media obtained data from records compiled by CDC, indicating the royalty
amounts paid by all Form 3 systems, based on SOAs filed by cable systems for the first
accounting period of each survey year. Bortz Media then constructed a sampling plan so
that proportionately more systems with large royalty payments were sampled relative to
systems with small royalty payments. Specifically, the stratified sample included 361
Form 3 cable systems that collectively paid approximately 86 percent of the total Form 3
royalties. Bortz Media reasoned that cable systems that carried distant signals in 2014
were overwhelmingly paying copyright royalties that were derived directly from the
distant signals they actually chose to carry, and further, while systems paying the largest
royalties were typically larger systems (as measured by subscribers served), they also
reported carrying more distant signals on average. Thus, Bortz Media concluded that, in
general, systems paying more royalties were making more use of the section 111 license.
Bortz Rep. at 20-21.
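For illustration only, the following minimal Python sketch shows the general mechanics of a royalty-stratified sampling plan in which higher-paying strata are sampled more heavily; the population, strata boundaries, and sampling fractions are assumptions and do not reproduce Bortz Media’s actual plan.

# Illustrative sketch (Python) of a royalty-stratified sampling plan: systems are
# grouped into strata by royalty payments, and higher-paying strata are sampled at
# higher rates so the sample accounts for most royalties. Population, strata cut
# points, and sampling fractions are assumptions, not Bortz Media's actual figures.
import random

random.seed(0)
systems = [(i, random.lognormvariate(10, 1.5)) for i in range(788)]  # (id, royalties)

strata = [
    ("large",  lambda r: r >= 100_000, 1.00),   # sample every large payer
    ("medium", lambda r: 20_000 <= r < 100_000, 0.60),
    ("small",  lambda r: r < 20_000, 0.25),
]

sample = []
for name, in_stratum, fraction in strata:
    members = [s for s in systems if in_stratum(s[1])]
    sample.extend(random.sample(members, round(len(members) * fraction)))

share = sum(r for _, r in sample) / sum(r for _, r in systems)
print(f"Sampled {len(sample)} of {len(systems)} systems, covering {share:.0%} of royalties")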
Once the CSOs for inclusion in the surveys were identified, Bortz Media used the
Television & Cable Factbook (Factbook), as it has in the past, to identify contacts for
each respective system, particularly telephone numbers. The Factbook usually lists

approximately three to six managers or executives for each system. Bortz Media usually
selects the senior-most executive from that cable system to list as the initial point of
contact on the survey questionnaire. 4/3/2023 Tr. 2844-55 (Trautman); Bortz Rep. at
A-17 n.57.
Bortz Media retained Sandra Grossman (then, of THA Research) to conduct
telephone interviewing for the 2014-2017 cable operator surveys. Ms. Grossman
specializes in conducting executive interviews, particularly in the cable industry. Indeed,
she has provided market research to cable television industry clients for more than two
decades, during which she and her company have been retained by Bortz Media or its
predecessor for 17 cable operator surveys, starting with the 2001 survey and continuing
through the 2017 survey received in this proceeding. Ms. Grossman personally conducted
approximately 65 percent of the interviews for the 2014-2017 surveys. It is unclear
whether Ms. Grossman relied solely on the information compiled by Bortz Media from
the Factbook to contact potential respondents, or whether she also performed Internet
searches to obtain contact information. Three or four additional interviewers were
supervised by Ms. Grossman, and each specialized in surveying professional and
managerial personnel, with at least five years of such experience. Interviewers were
instructed to call back each cable system as often as necessary to obtain a completed
interview or refusal. For almost every completed interview, no more than three direct
contacts with the eventual respondent were required. Tr. 2841-45, 3258 (Trautman);
Bortz Rep. at A15-17.
Interviewers were instructed that once they had made contact with a cable system,
they should ask first for the system executive identified in advance as most likely to have
responsibility for programming decisions, and to confirm that the individual was the
person “most responsible for programming carriage decisions made” by the system. The
interviewers were instructed that if the identified executive did not fit the description, the

interviewer was to ask for the person who was most responsible for programming
carriage decisions. Calls were placed to the cable system until the individual on the
telephone indicated that he or she was the individual most responsible for programming
carriage decisions. In all cases, the eventual survey respondents were required to confirm
that they were most responsible for programming carriage decisions made by their
systems. Bortz Rep. at A-17.
Indeed, the ADS questionnaire which, as discussed above, was used for many
respondents for 2014, and all respondents for 2015-2017, comprised four questions for
the respondent.199 Question 1 asked the respondent, “Are you the person most responsible
for programming carriage decisions made by your system during [the year in question] or
not?” Bortz Rep. app. B. If the response was no, the questionnaire (e.g., for 2014)
instructs the interviewer, “ASK TO SPEAK WITH PERSON MOST RESPONSIBLE
FOR THE SYSTEM’S PROGRAMMING CARRIAGE DECISIONS IN 2014. REPEAT
INTRODUCTION AND Q.1.” Id.
After the survey respondents were qualified, the interviewers proceeded to the
next questions. Questions 2 and 3 in the cable operator survey are designed by Bortz
Media as preliminary questions intended to focus respondents on the particular distant
signals carried by the system in the survey year, the types of programming on those
signals, and certain factors (importance and cost)200 that contribute to the key allocation
(which Bortz Media sometimes calls a “budget” question) that will be required in the

The WGNA questionnaire used for 2014 had differences in wording specific to carriage of WGNA. See
Bortz Rep. at 83-86.
The Bortz Report notes that in the 2010-13 Determination, the Judges stated that the reference to
expense in Question 3 “muddled the concepts of cost and value” and that “[t]his may have injected some
confusion into the respondent’s estimation of relative value.” Bortz Rep. at 27 n.38 (quoting 2010-13
Determination at 3590); 4/3/2023 Tr. 2895 (Trautman); 4/5/2023 Tr. 3466 (Trautman). Mr. Trautman, on
behalf of Bortz Media stated in the report that he respectfully disagrees with this criticism, and did not find
any evidence of confusion in the 2010-13 Bortz surveys, or in the 2014-2017 Bortz surveys. In any event,
the 2010-13 Determination was not available until October 2018, when the 2014-2016 surveys had already
been completed, and the 2017 questionnaires were in the field. Thus, there was no opportunity for Bortz
Media to evaluate potential changes to this survey question. Id.
fourth and final survey question. Bortz Rep. at 27, 30. In Question 2, the interviewer
identified the particular distant signals (including call letters) for a specific respondent’s
cable system (Question 2a). Bortz Media obtained the distant signals for each system by
reviewing each system’s SOA for the year in question that was filed at the Copyright
Office.201 The interviewer then asked the respondent to rank up to seven202 non-network
programming categories on those distant signals in order of how important it was for the
system to offer each category.203 Id. at 24-27; 4/3/2023 Tr. 2861-64 (Trautman). Indeed,
for Questions 2, 3 and 4, the number of programming categories provided to each
respondent depended on whether the distant signals listed on the respondent’s SOA
included public television, Canadian, or live professional and college team sports
programming, with the corresponding categories excluded when the respondent CSO did
not carry the relevant programming on a distant basis. Bortz Rep. at 26 n.36.
When asking Question 3, the interviewer asked the respondent to rank the same
categories of non-network programming broadcast by the same stations in order of how

For each of questions 2, 3 and 4, respondents that reported carrying more than eight distant signals were
only asked about their eight most widely carried distant signals. This approach was also followed in the
2010-2013 surveys. Bortz Rep. at 25 n.35; 2010-13 Determination at 3587 (“In the Bortz Survey,
interviewers asked respondents about a maximum of eight distant signals even if their systems carried
more.”).
The seven categories, which could be tailored for each respondent, were: (1) Movies; (2) Live,
Professional and College Team Sports; (3) Syndicated Shows, Series and Specials; (4) News and Other
Station-Produced Programs; (5) PBS and All Other Programming Broadcast by Noncommercial Station(s)
; (6) Devotional Programs; and (7) All Programming Broadcast by Canadian Station(s) . Bortz Rep.
at 32 & app. B at 79. These categories were intended by Bortz Media to correspond with the program
category definitions adopted by the Judges. Id. at 26, app. C (“Program Category Definitions”).
For example, for 2014, Question 2b of the survey instrument reads: “Now, I’d like to ask you how
important it was for your system to offer certain categories of programming that are carried by these
stations. When you consider this, please exclude from consideration any national network programming
from ABC, CBS and NBC. I’ve grouped the non-network programming on these broadcast stations into
seven categories. I will read these seven categories to you to give you a chance to think about their relative
importance (READ EACH CATEGORY BELOW, STARTING WITH THE CATEGORY MARKED BY
THE NUMBER “1”). Considering only the non-network programming on these broadcast stations, please
rank these seven categories in order of their importance to your system in 2014, with one being the most
important category and seven being the least important category. What is your ranking of importance for
the 2014 (READ FIRST CATEGORY, AS MARKED BY THE NUMBER “1”) programming on the
broadcast stations I listed. (REPEAT FOR ALL SEVEN CATEGORIES, IN ORDER LISTED BELOW.
ENTER NUMERICAL RANK ON TABLE BELOW.)”
Bortz Rep. app.B at 79.

expensive it would have been to acquire that programming if the system had been
required to purchase it directly in the marketplace. Id. at 26-27, app. B (Ex. 7101 at 80).
The final question, again for the ADS questionnaire only, was Question 4, the
constant sum question. In this question, the interviewer asked the respondent to value the
various types of non-network programming on the distant signals that the respondent’s
system carried during the relevant year. This required the respondent to allocate a
percentage of a finite dollar amount to each of the program categories on the distant
signals that the system retransmitted. Id. at 27-29. For example, Question 4a in the
survey instrument that incorporated the year 2014 in the text was, as follows:
4a. Now, I would like you to estimate the relative value to your cable system
of each category of programming actually broadcast by the stations I
mentioned during 2014, excluding any national network programming from
ABC, CBS and NBC. Just as a reminder, we are only interested in U.S.
commercial station(s) , U.S. non-commercial station(s) , and Canadian station(s) .
I'll read each of the seven programming categories we’ve been discussing
again to give you a chance to think about them; please write the categories
down as I am reading them. (READ PROGRAM CATEGORIES IN
ORDER, STARTING WITH CATEGORY MARKED BY THE NUMBER
“1”.)204 Assume your system spent a fixed dollar amount in 2014 to acquire
all the non-network programming actually broadcast during 2014 by the
stations I listed. What percentage, if any, of the fixed dollar amount would
your system have spent for each category of programming? Please write
down your estimates, and make sure they add to 100 percent. What
percentage, if any, of the fixed dollar amount would your system have spent
on (READ PROGRAM CATEGORY MARKED BY THE NUMBER
“1”)?205 And what percentage, if any, would your system have spent on
For each questionnaire, the interviewer was provided with a preset, computer-generated random order in which to read the program types, in order to prevent ordering bias. Bortz Rep.
at 29.
For Question 4, the categories, among other things, incorporated the survey year, and other slight
variations to the categories listed for Questions 2 and 3. The possible seven categories, to be identified by
the interviewer, were: (1) Movies broadcast during (survey year) by the U.S. commercial stations I listed;
(2) Live professional and college team sports broadcast during (survey year) by the U.S. commercial
stations I listed; (3) Syndicated shows, series and specials distributed to more than one television station
and broadcast during (survey year) by the U.S. commercial stations I listed; (4) News and public affairs
programs produced by or for any of the U.S. commercial stations I listed, for broadcast during (survey
year) only by that station; (5) PBS and all other programming broadcast during (survey year) by U.S.
(READ NEXT PROGRAM CATEGORY)? (COMPLETE LIST IN THIS
MANNER.)
Id., app. B (Ex. 7101 at 81).
The survey instrument instructed the interviewer to prompt the respondent if the
percentages did not add up to 100 percent. Id., app. B (Ex. 7101 at 81). As Question 4b,
the interviewer read back the categories and estimates, and then asked whether each
respondent wanted to make any changes. Question 4b concludes the survey, with the
interviewers thanking the respondents for their time and
cooperation. Id., app. B (Ex. 7101 at 82).
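For illustration only, the following minimal Python sketch captures the constant-sum bookkeeping described above: a respondent’s percentages are usable only if they total 100, and otherwise the respondent is asked to revise. The abbreviated category labels and the sample response are hypothetical, not taken from any actual questionnaire.

# Illustrative sketch (Python) of the constant-sum check behind Question 4: a
# response is usable only if the category percentages total 100; otherwise the
# respondent is prompted to revise. Categories are abbreviated and the sample
# response is invented; neither is taken from an actual questionnaire.
CATEGORIES = [
    "Movies",
    "Live professional and college team sports",
    "Syndicated shows, series and specials",
    "News and public affairs programs",
    "PBS and other noncommercial programming",
    "Devotional and religious programming",
    "Canadian station programming",
]

def constant_sum_ok(allocation):
    # All named categories must be recognized and the percentages must total 100.
    return set(allocation) <= set(CATEGORIES) and abs(sum(allocation.values()) - 100.0) < 0.01

response = {
    "Live professional and college team sports": 40.0,
    "News and public affairs programs": 30.0,
    "Movies": 20.0,
    "Syndicated shows, series and specials": 10.0,
}
print("Accepted" if constant_sum_ok(response) else "Please revise: estimates must add to 100 percent")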
The interviews were conducted after the calendar year in question.206 Interviews
were completed with between approximately 54 and 58 percent of eligible cable
systems.207 Upon completion of the survey, THA Research returned the completed
questionnaires to Bortz Media for proofing and data entry. Bortz Rep. at A-16.
2. Results Reported From the Bortz Surveys

As in prior distribution proceedings, in order to address the issues relevant to this
proceeding, the responses provided by the Bortz Surveys, particularly the constant sum
rankings obtained through Question 4, must be expressed in terms of percentage
allocations of the cable royalty funds to be distributed for the years surveyed, which in

noncommercial station(s) ; (6) Devotional and religious programming broadcast during (survey year)
by the U.S. commercial stations I listed; and (7) All programming broadcast during (survey year) by
Canadian station(s) . Bortz Rep. at 28, app. B (7101 at 81) (2014 survey instrument). These
categories were intended to correspond with the program category definitions adopted by the Judges. Id. at
28, app. C (“Program Category Definitions”).
For 2014, the survey period was 8/11/15-4/7/16; for 2015, the survey period was 8/11/16-4/23/17;
for 2016, the survey period was 10/06/17-4/26/18; and for 2017, the survey period was 7/01/18-6/26/19.
Bortz Rep. at A-16.
For 2014, the response rate was 53.8% (170 surveys completed); for 2015, the response rate was 54.3%
(197 surveys completed); for 2016, the response rate was 57.7% (199 surveys completed); and for 2017, the
response rate was 54.6% (179 surveys completed). Bortz Rep. at A-16.
this case are 2014 through 2017. The procedures used by Bortz Media to obtain
such results are in the Bortz Report. See, e.g., Bortz Rep. at A-18 through A-26.208
Table I-1. Bortz Survey Relative Value Allocation by Year, 2014-17 from the
Bortz Report shows the following compiled results:
Table I-1. Bortz Survey Relative Value Allocation by Year, 2014-17

                                              2014      2015      2016      2017    Average
                                            (n=170)   (n=197)   (n=199)   (n=179)   2014-17
Live Professional and College Team Sports    40.4%     28.5%     28.5%     31.5%     32.2%
News and Public Affairs Programs              26.0%     29.7%     30.0%     30.6%     29.1%
Syndicated Shows, Series and Specials         10.4%     12.7%     14.8%     14.9%     13.2%
Movies                                        11.4%     13.8%     13.1%      9.0%     11.8%
PBS and All Other Programming on
  Noncommercial Distant Signals                5.9%      7.9%      6.8%      7.8%      7.1%
Devotional and Religious Programming           5.6%      6.5%      6.0%      5.4%      5.9%
All Programming on Canadian Signals            0.3%      1.0%      0.8%      0.6%      0.7%
Total                                        100.0%    100.0%    100.0%    100.0%    100.0%

Bortz Rep. at 2; see CTV PHB at 81 (summary of results for 2014 through 2017, with
acronyms of claimant groups substituted for program categories).
Nevertheless, as discussed below, no party unequivocally proposes that the initial
results, or allocations, of the 2014 through 2017 Bortz Surveys, reflected in Table I-1 of

Bortz weighted survey results for 2014 based on the royalties paid by responding systems in the first
half of 2014, and applied those results to the universe of Form 3 system royalties (consistent with the
weighting approach used in all prior Bortz surveys). For the 2015 through 2017 surveys, inasmuch as most
systems carrying distant signals had become Minimum Fee Systems, the methodology was changed to
weight the results based on the Base-plus-3.75 fees attributable to the actual signal carriage of the Form 3
systems, and to apply the results using signal carriage-based fee calculations rather than actual royalties
paid. Bortz Rep. at 21-24, A-18.
the Bortz Report, be used directly to allocate shares of the royalty funds that are the
subject of this proceeding.209
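For illustration of the weighting concept described in the preceding footnote, and not as a reproduction of Bortz Media’s computations, the following minimal Python sketch shows how constant-sum allocations may be averaged with each responding system weighted by its royalties or carriage-based fees rather than counted equally; all figures are hypothetical.

# Illustrative sketch (Python) of royalty- or fee-weighted averaging: each
# responding system's constant-sum allocation is weighted by its royalties (or,
# for 2015-2017, carriage-based fees) rather than counted equally. All systems,
# weights, and percentages below are invented for illustration.
responses = [
    (250_000, {"Sports": 50.0, "News": 30.0, "Movies": 20.0}),  # (weight, allocation)
    (40_000,  {"Sports": 10.0, "News": 60.0, "Movies": 30.0}),
    (10_000,  {"Sports": 0.0,  "News": 80.0, "Movies": 20.0}),
]

def weighted_allocation(resps):
    total_weight = sum(w for w, _ in resps)
    categories = {c for _, alloc in resps for c in alloc}
    return {c: sum(w * alloc.get(c, 0.0) for w, alloc in resps) / total_weight
            for c in categories}

for category, share in sorted(weighted_allocation(responses).items()):
    print(f"{category}: {share:.1f}%")   # the largest payer dominates the average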
3. Issues Raised With Respect to the Bortz Surveys

a. The Exclusion of PTV-Only and Canadian-Only Systems

As already detailed, Bortz Media chose not to survey Form 3 cable systems that
carried no distant signal, or that carried only distant signals representing a single
programming category. Thus, as it has for surveys used in connection with prior
proceedings, Bortz Media excluded all PTV-only CSOs and Canadian-only CSOs from
the 2014 through 2017 surveys received in this proceeding. See Bortz Rep. at 20; 2010-13
Determination at 3583; 2004-05 Distribution Order at 57067. Bortz Media’s stated
rationale for this decision is that if PTV-only and Canadian-only CSOs were survey
respondents, they would not be able to provide comparative value judgments regarding
their distant signal programming. Id. While PTV-only and Canadian-only CSOs may be
limited in their ability to provide a response to the Bortz Survey value
question as formulated, in prior proceedings, the Judges have found that, while one must
not “overstate the impact of this problem,” the exclusion of such cable systems “clearly
biases the Bortz estimates downward for PTV and Canadian programming;” and further,
it has been observed that “the Bortz survey may well be improved in this regard, either
through the reformulation of the questions asked in the survey and/or by revisiting the
underlying survey sample plan.” Id. In any event, the Bortz Media surveys at issue in this
proceeding exclude PTV-only and Canadian-only CSOs, and even the parties that rely on
the Bortz Surveys, cognizant of adjustments made in prior proceedings, offer certain
adjustments to the initial results of the Bortz Surveys. See, e.g., JSC PHB at 83-84; SDC
PHB at 82-85; CTV PHB at 79-84.

See JSC WDS at 12-13 (“Claim of JSC”); but see JSC PHB at 82 (“the evidence demonstrates that the
adjusted Bortz survey results are the most accurate and reliable basis for allocating the 2014-17 cable
royalty funds”), 84.
The adjustments were offered largely with the so-called “McLaughlin
Adjustment” in mind, which has a long history in connection with the Bortz Survey. For
example, in the 2004 and 2005 proceeding, Linda McLaughlin, an economist, set forth
calculations adjusting the Bortz Survey results to make what the Judges deemed to be an
“appropriate adjustment to the PTV share,” although her efforts did not fully mitigate
deficiencies in the Bortz results with respect to others, such as the Canadian claimants. 2004-05
Distribution Order at 57064, 57070, 57073 (her “efforts to correct for cable systems
excluded from the survey because they only carry a distant Canadian signal do somewhat
ameliorate the under-representation of Canadian signals in the overall survey results”). In
the 2010-13 proceeding, Ms. McLaughlin and another witness, David Blackburn, set
forth methodologies for augmenting the PTV and CCG shares, referred to as the
“McLaughlin/Blackburn adjustments,” which assume, for example, that the PTV-only
systems would assign a relative value to PTV of 100%.210 2010-13 Determination at
3583-85, 3602. In that proceeding, three surveys were received: the Bortz Survey, the
Horowitz Survey (which “did not exclude from its sample systems that distantly carried
only PTV and/or Canadian signals”) and the Ringold Survey (which “focused on
Canadian signals”).211 Id. at 3582, 3591. Despite the availability of

In her testimony during the 2010-2013 proceeding, Ms. McLaughlin explained the adjustment, as
follows: Q. In order to do your augmentation of the Bortz survey, what were your initial assumptions? A. I
assumed that the systems that I was adding back in would have to answer the survey in the same way it was
asked for the other people, and that is they were only allowed to respond to the category they are carrying
and they are supposed to split up their value among the categories they are carrying. So they would have to
say 100 percent for PTV, if that's all they carried. And if all they carried was Canadian signal, they'd have
to say 100 percent for Canadian. And if they carried both, they'd have to say something between, you
know, zero for one and 100 to the other or 100 for one and zero to the other. Q. How about with regard to
response rate? Did you make any assumptions about that? A. Oh, when I added them in, I -- I followed the
same response rate. If you look at the -- some of the highlighted numbers, so in the final eligible sample
for the year that we're looking at, 2010, in all the strata together, there were 288 cable systems but only 163
of them completed the surveys. So the response rate, 163 over 288, or, you know, maybe that's, you know,
60 percent, say, 50, 60 percent. So I used that same response rate and I did it actually by strata and applied
that to the omitted signal. So I didn't assume that all 16 were included. I only assumed, you know,
approximately half of the 16 were included.
Oral Testimony of L. McLaughlin (2010-2013), Trial Ex. 7017, at 27-29.
Professor Ringold has previously testified, or otherwise given evidence, in proceedings before the
CARP, and the CRB. See CCG PFF 601; 2010-13 Determination at 3585. In this proceeding, Prof.
McLaughlin/Blackburn adjustments “to augment” the Bortz Survey results, the Judges
placed more weight on the Horowitz results, for several reasons but “particularly the
acknowledged systematic bias against PTV and CCG programming,” and thus “the
Judges accord relatively less weight to the ‘Augmented’ Bortz Survey.” Id. at 3591. The
weighting of the Bortz Survey evidence below that of the Horowitz survey did not,
however, mean that the Bortz Survey evidence had no weight or played no role in the
Judges final allocations. To the contrary, before setting forth the Judges’ final Basic Fund
allocation, the Judges defined “ranges of reasonable allocations for each program
category, and in doing so relied on “[t]he Bortz and Horowitz Surveys, together with the
McLaughlin ‘Augmented Bortz’ results and the Crawford and George regressions, taking
into account the confidence intervals (when available) surrounding the point estimates . . .
.” Id. at 3610.
In this proceeding, only the Bortz Surveys were offered (i.e., no survey such as
Horowitz was offered by any party), and the surveys continue to exclude the PTV-only
and Canadian-only distant signal cable systems. Although Bortz Media and Mr.
Trautman are highly critical of the McLaughlin Adjustment, Bortz Media nevertheless
includes two approaches for adjusting its initial results, both of which bear some
relationship to the McLaughlin Adjustment. Bortz Media’s “Adjustment One”212 accepts
(while not agreeing with) the McLaughlin assumption of attributing 100 percent of value
to the PTV (or Canadian category) when that is the only category the system carries
distantly, but does not do so for PTV-only systems in 2015 through 2017 that previously
carried WGNA. As to the latter group of systems, Bortz Media instead attempts to predict
the average valuation from all systems that carried only PTV and WGNA in 2014. The

Bortz Media’s Adjustment One is referenced in some of the parties’ post-hearing filings as Adjustment
1. See, e.g., SDC PHB at 85; CTV PFF 434.
stated rationale is that there is no reason to assume that a CSO changed its valuation of PTV
content simply because of the WGNA conversion; indeed, CSOs surveyed in 2015-2017 that carried
signals containing both PTV and other claimant categories did not increase their relative
valuation of PTV. As with the Bortz-eligible systems that were surveyed, Bortz Media
weighted the results based on the Base-plus-3.75 fees attributable to the distant signals
actually carried by the PTV-only systems.213 See id. at
42-43, app. D (“Potential Bortz Adjustments”). Bortz Media obtained the following,
applying its Adjustment One:
Potential Allocation of Royalties Among Claimant Groups, 2014-17
(Adjustment One)
                         Year
               2014     2015     2016     2017    Average 2014-17
JSC           39.1%    25.6%    24.3%    26.0%    28.8%
CTV           25.2%    26.6%    25.6%    25.3%    25.7%
PS            21.0%    23.7%    23.7%    19.8%    22.1%
PTV            8.2%    14.0%    16.6%    19.5%    14.6%
Devotional     5.5%     5.8%     5.1%     4.5%     5.2%
Canadian       1.0%     4.4%     4.8%     4.9%     3.8%
Total        100.0%   100.0%   100.0%   100.0%   100.0%

Id. at 43 (Table IV-1).214
Bortz Media’s “Adjustment Two” also attributes 100 percent of value to either the
PTV or Canadian category when that is the only category the system carries distantly,
even for systems that became PTV-only by default as a result of the WGNA conversion.
However, PTV-only systems that only carried distant PTV signals within those signals’
originating DMAs are excluded. The stated rationale is that those systems have not

In Adjustment One, systems that carried both PTV and Canadian distant signals (but no U.S.
commercial distant signals) are weighted in the same manner, but with the fees allocated equally among the
PTV and Canadian categories. Bortz Rep. at 43 n.45.
The Adjustment One results for 2014 are nearly identical with Mr. Trautman’s calculation of the 2014
Bortz results when subjected to the McLaughlin Adjustment. See JSC Production Materials, Trial Ex. 3049
(discussed in detail later in the main text).
demonstrated any preference for distant PTV programming based on their actual carriage
patterns. Again, consistent with the treatment of Bortz-eligible systems that were
surveyed, Bortz performed weighting based on the Base-plus-3.75 fees attributable to the
distant signals actually carried by the PTV-only systems. See id. at 43, app. D (“Potential
Bortz Adjustments”). Bortz Media obtained the following application, applying its
Adjustment Two:
Potential Allocation of Royalties Among Claimant Groups, 2014-17
(Adjustment Two)
                         Year
               2014     2015     2016     2017    Average 2014-17
JSC           39.8%    25.2%    23.5%    24.8%    28.3%
CTV           25.7%    26.2%    24.8%    24.1%    25.2%
PS            21.4%    23.3%    23.0%    18.9%    21.6%
PTV            6.5%    15.3%    19.2%    23.4%    16.1%
Devotional     5.6%     5.7%     4.9%     4.3%     5.1%
Canadian       1.0%     4.3%     4.6%     4.6%     3.6%
Total        100.0%   100.0%   100.0%   100.0%   100.0%

Id. at 43-44 (Table IV-2).
JSC endorses the adjustments calculated by Bortz Media, rather than the
McLaughlin Adjustment.215 JSC does so first by raising a number of supposed faults in
the McLaughlin Adjustment. It is argued that PTV-only systems were almost all well
below the minimum fee, and by 2016 and 2017, an average of over 93% of PTV-only
systems could have carried at least one additional PTV signal to all of their subscribers
without having to pay more than the minimum fee, and the calculated Base + 3.75 royalty
fee attributable to the signals actually carried on PTV-only systems amounted to only 14
percent of the minimum fee royalties ultimately paid by these systems. Yet, JSC
observes, the McLaughlin Adjustment would assume that these systems have an extreme
Mr. Trautman did calculate a McLaughlin Adjustment, which he does not recommend. The table he
prepared in that regard is set forth infra.
preference for distant PTV programming based on their carriage decisions, even though
there was almost never an incremental royalty payment associated with those carriage
decisions. Furthermore, JSC argues, over 30 percent of the distant signals carried by
PTV-only systems in 2014-17 were carried pursuant to the Must Carry rules or the related
multicast agreement. The McLaughlin Adjustment nonetheless would assume that these
systems valued their distant PTV signals more than any other categories of programming,
even though the systems were required to carry the signals, and PTV was prohibited from
charging for the content. JSC argues that inasmuch as the price of these signals would be
$0 in the hypothetical market, it makes no sense to assign them 100% of the relative
value. JSC PHB at 65-67.
Additionally, JSC argues that while more than half of the PTV-only systems
during 2016-17 had carried both WGNA and PTV prior to the WGNA conversion, back
in 2014, systems that carried WGNA and one or more PTV distant signals valued PTV in
Bortz surveys at just 8.8%. JSC argues that the McLaughlin Adjustment would assume a
sudden and major shift in valuation. Id. at 67 (quoting 3/30/2023 Tr. 2621 (Majure)).216
Finally, with regard to the McLaughlin Adjustment, JSC argues that the majority of PTV-only systems only carried PTV signals within the signals’ originating DMA. Yet,
because only the PTV signal is deemed distant, the McLaughlin Adjustment would
assume that these systems only care about the PTV content in that bundle of
programming, thereby improperly inferring a set of preferences based on distinct
regulatory treatment rather than the actual behavior of the cable systems. It is argued that
there is no reason to assume that these systems value distant PTV programming more
highly than any other category of content, much less at a 100% relative valuation. Id. at
67-68.

Dr. Majure was qualified as an expert in economics and industrial organization, including their
application to the cable industry. 3/30/2023 Tr. 2551 (Majure).
In contrast, JSC argues, the alternatives calculated by Bortz Media, Adjustment
One and Adjustment Two, are supported by evidence and economic theory, and yield
similar valuations among the program categories. Id. at 68-69 (citing, inter alia, JSC PFF
414 (citing Majure)). Indeed, JSC’s expert witness, Dr. Majure, testified that the Bortz
Adjustments “avoid these gross misinterpretations that the McLaughlin adjustment would
otherwise be adding into the calculations. I don’t know that they completely resolve the
fundamental issue of the McLaughlin adjustment, however. There’s still no reason to
think, for any particular PTV system, they have this very strongly different set of
preferences, that the only thing they like is Public Television content.” 3/30/23 Tr. 2624
(Majure).
JSC’s allocation request is based only on the Bortz survey, specifically Bortz
Media’s Adjustment One, whose results are reproduced above. JSC states that it prefers
Adjustment One because it accounts for the fact that CSOs did not change their valuation
of PTV simply because WGNA was no longer available as a distant signal. JSC PHB at
83-84. JSC claims no share of the Syndex royalties. With respect to the 3.75% royalty
fund, JSC argues that the Judges should reallocate the shares attributable to PTV
proportionally among the other parties, as PTV is not entitled to a share of the 3.75%
royalty funds, as follows:
JSC's Proposed Reallocation of Shares of the 3.75% Royalty Funds
Year           2014     2015     2016     2017
JSC           42.6%    29.8%    29.1%    32.3%
CTV           27.5%    30.9%    30.7%    31.4%
PS            22.9%    27.6%    28.4%    24.6%
PTV            0.0%     0.0%     0.0%     0.0%
Devotional     6.0%     6.7%     6.1%     5.6%
Canadian       1.1%     5.1%     5.8%     6.1%

Id. at 83-84.
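The arithmetic behind this proposed reallocation is a simple pro rata redistribution of the PTV percentage among the remaining categories. The following is an illustrative sketch only, written for this discussion in Python (it is not any party's workpaper), using the 2014 Adjustment One percentages reproduced above:

    # Illustrative sketch only: pro rata reallocation of the PTV share of the
    # 3.75% fund among the remaining claimant categories. The inputs are the
    # 2014 Adjustment One percentages reproduced above.
    def reallocate_excluding(shares, excluded="PTV"):
        """Redistribute the excluded category's share proportionally among the others."""
        remaining = {k: v for k, v in shares.items() if k != excluded}
        total = sum(remaining.values())
        result = {k: round(100.0 * v / total, 1) for k, v in remaining.items()}
        result[excluded] = 0.0
        return result

    adjustment_one_2014 = {"JSC": 39.1, "CTV": 25.2, "PS": 21.0,
                           "PTV": 8.2, "Devotional": 5.5, "Canadian": 1.0}
    print(reallocate_excluding(adjustment_one_2014))
    # {'JSC': 42.6, 'CTV': 27.5, 'PS': 22.9, 'Devotional': 6.0, 'Canadian': 1.1, 'PTV': 0.0}

Applied to the 2014 column of Adjustment One, this sketch reproduces the 42.6%, 27.5%, 22.9%, 6.0%, and 1.1% figures shown for 2014 in JSC's table above.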

Similarly, SDC supports reliance on the Bortz Surveys for 2014 through 2017 in
this proceeding, and supports the application of Bortz Media’s Adjustment One. SDC
PHB at 81. SDC argues that the McLaughlin Adjustment has always been economically
unsound, and in this proceeding, there is new evidence that militates against an
application of a McLaughlin Adjustment that assigns a 100% value to PTV- and CCG-only stations. Id. at 82.
SDC argues that, unlike in past proceedings, the record here shows that a majority of
PTV-only systems’ distant carriage occurred exclusively within the DMAs in which the
PTV signals originate, and that such carriage was treated as distant only as a result of a regulatory
reporting requirement. Indeed, it is argued, PTV signals are the only category of distant content that
CSOs can be required to report as “distant” under section 111 when such a signal is
actually carried locally to subscribers within the signal’s DMA, and all other similarly
situated, but commercial, signals would be reported as local signals that are ineligible for
section 111 royalties. Accordingly, SDC argues, a CSO’s choice to carry a PTV signal within its
originating DMA cannot be compared to a CSO’s choice to carry other signals and
programming, and there is no economic basis to assume that a majority of the PTV-only
CSOs had a relatively greater preference for PTV programming than for other categories of
programming, much less valued it at a 100% relative valuation (as past adjustments have
assumed). Id. at 83 (citing, inter alia, Harvey WRT ¶¶ 126-131;217 Bortz Rep. at 17-18
(“throughout 2016-17 approximately 77 percent of the aggregate subscribers served by
the PTV Only Systems did not receive any distant signals.”); Majure WDT ¶¶ 150-51).
Additionally, SDC argues that adjusting the Bortz survey results to account for
PTV-only systems that were excluded from the Bortz sample would inappropriately
assign a 100% value to PTV content on the significant number of systems that were

Mr. R. Garrison Harvey was called to testify by JSC, and was qualified as an expert in statistics and
applied mathematics. 3/28/2023 Tr. 1772, 1777-78 (Harvey).
compelled to carry PTV programming and reimbursed for such carriage pursuant to the
Must Carry rule. See id. at 83-84 (citing Bortz Rep. at 46; Majure WDT ¶ 144; Harvey
CWDT ¶ 119 (“[a]pproximately 36 percent of the time that a PTV Only system distantly
retransmitted a primary PTV call sign, it was pursuant to the Must Carry rule”)). It is
argued that there is no reason to expect that PTV-only systems value PTV content that
they were compelled to carry at all, let alone at 100%. See Majure WDT ¶¶ 144-45.
Thus, it is argued, there is also no economic basis to apply a McLaughlin Adjustment to
the significant number of PTV-only stations carried under the primary channel or multicast
subchannel Must Carry rules. Id. at 84 (citing Tr. 2566 (Asker)).218
Nevertheless, SDC argues, SDC’s and JSC’s valuation experts have
acknowledged that some adjustment to the PTV and CCG shares is appropriate, and the
only potential Bortz adjustments presented in this proceeding were set forth by JSC and
in the Bortz Report. It is argued that, as its valuation expert John Sanders testified,219
Bortz Adjustment One in the Bortz Report is preferable to the historic McLaughlin
Adjustment and to Bortz Adjustment Two because Adjustment One is substantially
“grounded in the survey data that was collected” and yields reasonable relative value
allocations for each of the participating claimant groups. Id. at 84 (citing, inter alia,
Sanders WRT ¶¶ 43-44).
SDC argues that the Judges should conclude that the Bortz survey is the
methodology that best reveals relative market value in this proceeding, but that there is
no economic basis for applying the conventional McLaughlin Adjustment in this
proceeding. Rather, it is argued, the Judges should find that some modest adjustment for
PTV and CCG may be appropriate, and the Judges should additionally find that the Bortz

Professor Asker was called to testify by JSC, and was qualified as an expert in economics, industrial
organization, and econometrics. 3/30/2023 Tr. 2390-91 (Asker).
Mr. John Sanders was called to testify by SDC and was qualified as an expert in the valuation of media
assets, including television programs. 4/6/2023 Tr. 3694 (Sanders).
survey’s point estimates should be adjusted under Bortz Adjustment One. SDC argues
that thus the following relative value allocations are appropriate shares for the Devotional
claimants with respect to the Basic Fund: 5.5% for 2014; 5.8% for 2015; 5.1% for 2016;
4.5% for 2017; with 5.2% as the average. Id. at 85 (citing Bortz Rep. at 48, SDC PFF
246). SDC further argues that to arrive at the Devotional allocation for the 3.75% Fund,
the Judges should, consistent with their decision in the 2010-13 proceeding, reallocate the
PTV share of royalties proportionally among the categories that participate in that fund,
and make the following allocation of the 3.75% Fund to the Devotional claimants: 6.0%
for 2014; 6.7% for 2015; 6.1% for 2016; 5.6% for 2017; with 6.1% as the average. Id. at
85; SDC PFF 247 (citing 2010-13 Determination at 3611).
CTV argues that the fee-based regression estimates for 2014 that were made by
Prof. Marx,220 and the Bortz survey results for 2014-2017 provide the most appropriate
starting point to determine the relative value of claimant shares in this proceeding. It is
argued that the cumulative evidence of record in this proceeding shows that the fee-based
regressions overestimate the value of PTV programming, while the Bortz survey
underestimates the value of PTV and CCG programming. CTV proposes an adjustment
to the Bortz initial results, but not the McLaughlin Adjustment, or Adjustment One or
Adjustment Two calculated by Bortz Media. Rather, CTV proposes a share adjustment
approach that relies on the estimates from the Marx model and the Bortz Surveys in an
attempt to address what it terms “the primary challenge of both methodologies,” which is how to
obtain a reasonable and more reliable estimate of the value of PTV programming during
the 2014-17 period. See CTV PHB at 79-80.
CTV argues that the Bortz Survey’s underestimation of PTV and CCG
programming due to the purposeful exclusion of PTV-only and CCG-only systems from

Professor Marx was called by CTV and was qualified as an expert economist and econometrician with
experience in statistical methods and measurements. 4/11/2023 Tr. 4109 (Marx).
the survey, affects results in each year, but not the year-to-year trends obtained from the
survey. Thus, CTV proposes a share adjustment approach that combines the Marx non-duplicated minute estimates for 2014 with the Bortz results for 2014 to establish a
starting point for allocating shares, and then applies the year-to-year net change in each
category derived from the Bortz survey results for each year in 2015, 2016 and 2017.
In CTV’s view, this provides the only reliable basis to use regression estimates
offered in this proceeding to assist in the determination of relative value of the shares. Id.
at 81-82. To establish the starting point for shares in 2014, CTV proposes taking the
average of the Marx 2014 Bayesian regression and Bortz survey estimates in 2014 for PS,
JSC, CTV and PTV, and the maximum amount under either method in 2014, inexplicably
for SDC,221 and also for CCG, as illustrated in the following table. Id. at 81-82.
CTV’s Proposed Starting Point for Shares in 2014
Valuation Method & Steps

PS

JSC

CTV

PTV

SDC

CCG

Total

Marx 2014 – excluding duplicates

19.7%

43.9%

15.6%

16.4%

0.5%

3.9%

100.0%

Bortz 2014

21.8%

40.4%

26.0%

5.9%

5.6%

0.3%

100.0%

Step 1: average of Bortz and Marx

20.8%

42.1%

20.8%

11.2%
5.6%

3.9%

5.6%

3.9%

Step 2: maximum of Bortz and Marx
Step 1 + 2
Normalizing 1 + 2
(to add up to 100%)

20.8%

42.1%

20.8%

11.2%

104.4%

19.9% 40.4% 19.9% 10.7% 5.4% 3.8% 100.0%

Id. at 81-82. Applying the net change from the Bortz survey results in 2015, 2016, and
2017 to the starting points established for 2014 provides the proposed shares reflected in
the following table, which are presented along with the shares awarded in the 2010-13 Final
Determination for reference.
CTV’s Proposed Shares
Year     PS      JSC     CTV     PTV     SDC     CCG     Total    Source
2010    26.5%   32.9%   16.8%   14.8%    4.0%    5.0%   100.0%   2010-13 Final determination
2011    23.9%   30.2%   16.8%   18.6%    5.5%    5.0%   100.0%   2010-13 Final determination
2012    21.5%   33.9%   16.2%   17.9%    5.5%    5.0%   100.0%   2010-13 Final determination
2013    19.3%   36.1%   15.3%   19.5%    4.3%    5.5%   100.0%   2010-13 Final determination
2014    19.9%   40.4%   19.9%   10.7%    5.4%    3.8%   100.0%   Combined 2014 Bortz and Marx shares
2015    24.6%   28.5%   23.6%   12.7%    6.3%    4.5%   100.1%   2014 proposed shares + 2015 Bortz net change
2016    26.0%   28.5%   23.9%   11.6%    5.8%    4.3%   100.0%   2015 proposed shares + 2016 Bortz net change
2017    22.0%   31.5%   24.5%   12.6%    5.2%    4.1%    99.8%   2016 proposed shares + 2017 Bortz net change

Cf. Commercial Television Claimants’ Post-Hearing Reply Brief in Support of Proposed Royalty
Allocations at 63-64 (CTV RPHB) (referring to the adjustments proposed by Bortz Media).
Id. at 82. CTV argues that no individual valuation method or share adjustment approach
is perfect, but its proposed share adjustment approach helps address several evidentiary
trends established in this proceeding, including: (1) correcting the over-estimation of
PTV programming value under the fee-based regressions and aligning PTV shares more
closely with the overwhelming evidence in the record that CSOs would not be willing to
pay much, if anything, for the right to retransmit distant PTV stations absent the
compulsory license; (2) aligning the value of shares during the 4-year period in a manner
that reflects the impact of streaming on the value of programming to CSOs, which
supports an increase in CTV and JSC programming relative to Program Suppliers and
PTV programming; (3) providing a consistent allocation of shares for PS, JSC, CTV,
SDC and CCG since 2010 which more reasonably and realistically reflects how CSOs
would assess relative value over time; and (4) providing a reliable and reasonable basis for
adjusting shares during the 2015-2017 time period when the estimates from the fee-based
regressions are meaningless and uninformative and should not be given any weight in
determining shares in this case. Id. at 83.
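As an arithmetic illustration of the 2014 starting-point step summarized above (averaging the Bortz and Marx estimates for PS, JSC, CTV, and PTV, taking the maximum of the two for SDC and CCG, and then normalizing so the shares sum to 100 percent), consider the following sketch. It uses only the figures reproduced in CTV's tables above; it is not CTV's actual model.

    # Illustrative sketch of CTV's proposed 2014 starting-point arithmetic,
    # using the Marx and Bortz 2014 figures reproduced in the tables above.
    marx_2014  = {"PS": 19.7, "JSC": 43.9, "CTV": 15.6, "PTV": 16.4, "SDC": 0.5, "CCG": 3.9}
    bortz_2014 = {"PS": 21.8, "JSC": 40.4, "CTV": 26.0, "PTV": 5.9,  "SDC": 5.6, "CCG": 0.3}

    combined = {}
    for group in marx_2014:
        if group in ("SDC", "CCG"):
            # Step 2: maximum of the two estimates for SDC and CCG.
            combined[group] = max(marx_2014[group], bortz_2014[group])
        else:
            # Step 1: simple average of the two estimates for PS, JSC, CTV, and PTV.
            combined[group] = (marx_2014[group] + bortz_2014[group]) / 2

    total = sum(combined.values())  # approximately 104.4, as in the table above
    normalized = {g: round(100.0 * v / total, 1) for g, v in combined.items()}
    print(normalized)
    # {'PS': 19.9, 'JSC': 40.4, 'CTV': 19.9, 'PTV': 10.7, 'SDC': 5.4, 'CCG': 3.7}
    # (CTV's table reports 3.8% for CCG, a rounding difference.)

Under CTV's proposal, the 2015 through 2017 shares in its table then follow from adding the year-to-year net changes in the Bortz results to this 2014 starting point.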
PTV argues that the Bortz Surveys for 2014 through 2017 should be rejected in
their entirety due to numerous deficiencies in the way that that they were conducted,
including their overwhelming bias against Public Television. Nevertheless, PTV
acknowledges that the Judges and their predecessors have accepted the Bortz survey
results but only after applying the conventional McLaughlin Adjustment to account for
the bias against Public Television, and even then, only as a relative value floor for Public
Television’s allocation award. PTV PHB at 81-82 (citing PTV PCL ¶ 41; PTV PFF ¶

204 (citing Distribution of 1998 and 1999 Cable Royalty Funds, Dkt. No. 2001-8 CARP
CD 98-99, Determination at 24; Report of the Copyright Arbitration Royalty Panel, Dkt.
No. 94-3-CARP-CD-90-92, at 123-24; 1989 Cable Royalty Distribution Proceeding, Dkt.
No. CRT-91-2-89CD, 57 FR 15286, 15299–300 (Apr. 27, 1992); 1983 Cable Royalty
Distribution Proceeding, Dkt. No. CRT-84-1 83CD, 51 FR 12792, 12811 (Apr. 15,
1986); 2004-05 Distribution Order at 57070–71 n.20; 2010-13 Determination at 3610;
4/4/2023 Tr. 3139-41 (Trautman)).
PTV argues that at the hearing, Mr. Trautman conceded that he calculated a
McLaughlin Adjustment for this proceeding two years before filing his written direct
testimony, which showed Public Television’s annual shares for 2014–17 as 8.4%, 43.6%,
48.4%, and 48.2%, respectively, with average shares of 37.1%. PTV argues that,
although Mr. Trautman then embarked on a multi-year quest “to conjure up” additional
adjustments that would reduce Public Television’s shares, neither of Mr. Trautman’s
alternative proposed adjustments has any reliable basis. Indeed, it is argued, the Bortz
Survey results, and Mr. Trautman’s two proposed adjustments, give Public Television a
lower share of royalties than the Judges awarded in 2013, despite significant changed
circumstances such as the elimination of WGN as a distant signal and the substantial
changes in the quantity and quality of compensable JSC and Public Television
programming—all of which are realities that would warrant substantially increasing
Public Television’s relative share from 2013 levels. PTV PHB at 42 (citing, inter alia,
4/4/2023 Tr. 3142-43 (Trautman) (concerning table in Trial Ex. 3049)).
PTV argues that if the Judges were to use the Bortz survey to guide allocations in
this proceeding, which PTV believes would be inappropriate, given their unreliability,
several adjustments, at a minimum, would be needed to correct for clear methodological
biases and flaws. It is argued that the adjustments offered by JSC (Bortz Media’s
Adjustment One and Adjustment Two), which result in shares for Public Television that

are less than Public Television’s 2013 share, are not credible. PTV argues that only the
conventional McLaughlin Adjustment adopted in prior proceedings yields shares that
approximate relative valuations for Public Television in 2014–17. Id. at 82. Mr.
Trautman testified during direct and cross-examination that he calculated the
conventional McLaughlin Adjustment to the 2014 through 2017 Bortz surveys. A table
prepared by him, and upon which PTV relies, is as follows:
Weighted Bortz Survey Results by Year, 2014-17 (after Conventional McLaughlin
Adjustment)

                 2014       2015       2016       2017      Average
               (n=171)    (n=199)    (n=199)    (n=179)     2014-17
PBS              8.4%      43.6%      48.4%      48.2%       37.1%
Sports          39.0%      12.7%      12.2%      14.8%       19.7%
News            25.2%      19.2%      15.3%      17.2%       19.2%
Syndicated      10.0%       9.3%       9.8%       9.8%        9.7%
Movies          11.0%       9.1%       8.0%       5.0%        8.3%
Devotional       5.4%       4.4%       5.0%       3.9%        4.7%
Canadian         1.0%       1.8%       1.3%       1.2%        1.3%
Total          100.0%     100.0%     100.0%     100.0%      100.0%

PTV PHB at 82; PTV PFF 208; Trial Ex. 3049 (from calculations prepared by Mr.
Trautman); 4/4/2023 Tr. 2881-82, 3142-43 (Trautman).
PS argues that there are fundamental issues with the Bortz Survey that cannot be
remedied by after-the-fact adjustments, such that putting ex-post fixes on the Bortz
Survey is like putting a Band-Aid on a bad wound. Indeed, the requests for royalty
allocation shares made by Program Suppliers are based on Dr. Tyler’s regression
model,222 and do not reference the Bortz Surveys. PS PHB at 80-82 (citing PS PFF ¶ 502
(3/27/2023 Tr. 1490-91 (Boyle)223)); see PS PRFF ¶¶ 59-62.

Dr. Tyler was called by PS, and was qualified as an expert in the fields of economics, data analysis, and
econometrics. 4/19/2023 Tr. 5423, 5428 (Tyler).
Professor Boyle was called by PTV and was qualified as an expert in the field of survey research and
design. 3/27/2023 Tr. 1400, 1410-11 (Boyle).
CCG argues that it is time for the Judges to abandon reliance on the Bortz Survey,
and does not propose any adjustment to the Bortz initial results. CCG PHB at 66-71, 77.
In its reply briefing, CCG again argues that the Bortz results should not be used for any
party, and further argues that Bortz results have never been used, and should never be
used, for the CCG, with or without these adjustments. CCG argues that the proposed
adjustments do not correct the Bortz Survey’s fundamental failure to measure relative
market value, and do not remedy their utter inapplicability to the CCG. Reply Post-Hearing Brief of The Canadian Claimants Group at 56 (CCG RPHB). Indeed, CCG
specifically criticizes the adjustment to Bortz offered by CTV, which is based on Prof.
Marx’s regression analysis, arguing, “CTV offered no evidence that would support that
conclusion that even though the relative quantity of their programming declined by 60%
their relative unit price went up by 370%. The CTV hybrid model represents the worst of
both worlds, an incomplete regression model that relies on data from the wrong period
combined with the faulty Bortz Survey results.” CCG RPHB at 56-57.
With respect to the issue of which, if any, adjustment should be made to the Bortz
initial results for 2014-2017, it is remarkable that no party had its expert calculate the
McLaughlin Adjustment for those results, at least not for presentation at the hearing.
While no party argues that royalty fund allocations in this proceeding should be made
strictly according to the Bortz initial results subject to the McLaughlin Adjustment, all
parties knew that the Judges applied the McLaughlin Adjustment to the Bortz Survey
initial results in the 2004 and 2005 proceeding, as well as in the more recent 2010-13
proceeding. Moreover, several parties knew that they would raise the McLaughlin
Adjustment at the hearing and in their post-hearing filings. As summarized above, some
parties specifically criticized the McLaughlin Adjustment and some, despite their
criticisms or the criticisms of others, argued for application of the McLaughlin
Adjustment in the alternative, or for a calculation that is based upon or otherwise relates

to the McLaughlin Adjustment. To see the figures obtained when the McLaughlin
Adjustment is applied to the Bortz Survey initial results at issue in this proceeding, the
Judges refer to a chart taken from a spreadsheet prepared by Mr. Trautman,
originally for Bortz Media’s internal use (Trial Ex. 3049, duplicated above). Fortunately,
no party has disputed that the figures contained therein accurately reflect application
of the McLaughlin Adjustment to the Bortz Survey initial results; and as previously
noted, the figures on the chart resemble those presented in connection with Bortz Media’s
Adjustment One to the extent that one would expect similar figures.
The application of the McLaughlin Adjustment to the initial Bortz results for the
years now at issue, 2014 through 2017, is relevant, and the adjusted results (or
“augmented” results, as they were termed in the 2010-13 proceeding) should be given
varied weight, depending on whether one is considering the adjusted results for 2014, or
for 2015 through 2017. With respect to 2014, the Bortz Survey for that year covers the
year immediately following the last year at issue in the 2010-13 proceeding. For the
2014 survey, Bortz Media used a similar sampling method, and asked similar questions.
While other factors, such as the Horowitz survey results and regression evidence,
weighed more heavily in the Judges’ decision, the 2013 Bortz results with the
McLaughlin Adjustment were taken into consideration by the Judges, even when making
their final allocations. See 2010-13 Determination at 3591, 3610-11. Thus, the 2014
adjusted results may be used for comparison with earlier results, and would be expected
to provide useful insight into relative marketplace value of distant broadcast signal
programming retransmitted by cable systems during that year.
Nevertheless, when weighing all the evidence presented in this proceeding,
including regression evidence, a concern is presented by the fact that the McLaughlin
Adjustment assigns value to PTV content on cable systems that were compelled to carry
PTV programming and reimbursed for such carriage pursuant to the Must Carry rule; and

further, the value it assigns to PTV, even in such circumstances, is 100 percent. As
discussed above, the evidence shows that more than 30 percent of PTV-only systems
were subject to the Must Carry rule. See, e.g., Majure WDT ¶ 144; Harvey CWDT ¶
119 (“[a]pproximately 36 percent of the time that a PTV Only system distantly
retransmitted a primary PTV call sign, it was pursuant to the Must Carry rule”). That
certain PTV signals are subject to the Must Carry rule is not a new circumstance, and
neither is the fact that the McLaughlin Adjustment brings PTV-only systems into the
Bortz results with an assigned value of 100% for PTV. Inasmuch as PTV-only systems
are still not surveyed by Bortz Media, and there is no empirical evidence to show how
PTV-only systems value PTV distant signals, there is no cause now to discard the
McLaughlin Adjustment due to the Must Carry rule, especially for the 2014 results which
pertain to circumstances similar to 2013. The McLaughlin Adjustment has always been
presented as a 100-percent or nothing approach, and the Judges can take that
characteristic of the adjustment into consideration. To the extent that one would
specifically exclude Must Carry signals, such as in a regression analysis, the fact that the
McLaughlin Adjustment is applied to Must Carry signals diminishes the value of such
adjusted Bortz results when making a comparison to such other evidence that devalues
Must Carry signals.
It has also been shown that PTV signals comprise the only category of content
that CSOs can be required to report as “distant” under section 111 when such signals are
actually carried to subscribers within the signals’ DMA, and further that a majority of the
PTV-only systems reported such distant signals during the years at issue. As discussed
above, it has been argued that similarly situated commercial signals would be reported as
local, and thus would be ineligible for section 111 royalties. Bortz Rep. at 17-18
(“throughout 2016-17 approximately 77 percent of the aggregate subscribers served by
the PTV Only Systems did not receive any distant signals.”); Majure WDT ¶¶ 150-51.

Yet, the designation as “distant” is rooted in statutory definitions and requirements, and
thus it is not established that such signals have no place in the hypothetical marketplace
considered in this proceeding.
Furthermore, with respect to distant signals carried within their DMAs, again
certain parties argue that there is no basis to assume that a majority of the PTV-only
CSOs had a relatively greater preference for PTV programming over other categories of
programming, much less at 100% of relative value. Yet, it has always been the nature of
the McLaughlin Adjustment to augment the Bortz results with PTV-only signals, and to
impute a 100-percent valuation. Accordingly, the McLaughlin Adjustment is recognized
as an adjustment that helps to remedy a bias in the Bortz methodology but may do so on
an imprecise basis.
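To make that imprecision concrete, the mechanics of a McLaughlin-style augmentation can be sketched as follows. All of the system counts, fee weights, and survey shares in the sketch are hypothetical placeholders rather than record figures; the sketch simply illustrates the steps described in the record, i.e., adding back omitted single-category systems at a 100-percent valuation, scaled by the stratum response rate, and recomputing fee-weighted category shares.

    # Hypothetical sketch of a McLaughlin-style augmentation. All figures below
    # are placeholders for illustration; they are not record evidence.
    surveyed = [
        # (fee weight, {category: share}) for systems that answered the survey
        (100.0, {"PTV": 0.10, "JSC": 0.40, "CTV": 0.30, "PS": 0.20}),
        (80.0,  {"PTV": 0.05, "JSC": 0.35, "CTV": 0.35, "PS": 0.25}),
    ]

    # Omitted PTV-only systems are added back at a 100% PTV valuation, in numbers
    # scaled by the stratum's response rate (e.g., 163 of 288 eligible systems).
    response_rate = 163 / 288
    omitted_ptv_only = [(20.0, {"PTV": 1.00})] * round(4 * response_rate)

    augmented = surveyed + omitted_ptv_only
    total_fees = sum(weight for weight, _ in augmented)
    shares = {category: sum(weight * answers.get(category, 0.0)
                            for weight, answers in augmented) / total_fees
              for category in ("PTV", "JSC", "CTV", "PS")}
    print({category: round(100 * value, 1) for category, value in shares.items()})
    # {'PTV': 24.5, 'JSC': 30.9, 'CTV': 26.4, 'PS': 18.2}

The defining assumption, as discussed above, is the 100-percent valuation imputed to each added system.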
For 2015 through 2017, the Bortz results, when subjected to the McLaughlin
Adjustment, show a dramatic increase in the PTV results, i.e., an increase from 8.4% in
2014 to 43.6% in 2015, then to 48.4% in 2016, and by 2017, the result is 48.2%. A
significant change is also seen for JSC, whose result is 39% in 2014 but only 12.7% for
2015, declining to 12.2% in 2016, with the 2017 JSC result at only 14.8%. See
Trial Ex. 3049. The unadjusted, initial Bortz results show increases for PTV, and
decreases for JSC, but they are not nearly as precipitous between 2014 and 2015, and not
nearly as steep overall. See Bortz Rep. at 2. Considering the relative value question that
the Bortz Surveys set out to answer, and the adjusted Bortz results, it is hard to
see why within only about one year many CSOs went from ascribing relatively small
value to PTV to considering it the most valuable. See 3/30/2023 Tr. 2621 (Majure) (“just
coincidentally at the point where WGNA converted, the system suddenly went from
having a small value for the Public Television content to that being the only thing they
like.”). Thus, an issue is raised as to whether the Bortz Surveys, particularly after

application of the McLaughlin Adjustment, are best suited for the years 2015 through
2017.
With the loss of WGNA as a distant signal, many CSOs that had retransmitted
only PTV and WGNA as distant signals became PTV-only systems, which meant that
they were no longer eligible for participation in the Bortz Survey. They also became
subject to the McLaughlin Adjustment; and according to the adjustment, the value
assigned to PTV was, as always, 100 percent.224 It was also during this time that the
universe of Bortz-eligible CSOs declined.225 That change in the number of eligible CSOs
during 2015-2017 was so great that, as already discussed, Bortz Media went from the use
of a sampling technique in 2014, which was similar to that employed for many preceding
years, to a new and different technique in 2015 and thereafter, which Bortz Media and
Mr. Trautman described as an attempt at a census.
Although the bias caused by exclusion of PTV-only systems from the Bortz
Survey became more profound in 2015-2017, as many systems that carried only PTV and
WGNA as distant signals became PTV-only systems after the WGNA conversion, as
illustrated above, there is little evidence to indicate that the application of the
McLaughlin Adjustment rectifies the situation. Indeed, no party, not even PTV, argues
that the Bortz Survey with the McLaughlin Adjustment is the best methodology of record
for arriving at an allocation for 2015-2017.
Adjustment One, proposed by Bortz Media and Mr. Trautman, and supported by
JSC and SDC, is offered as a response to the situation in which CSOs once carrying only

The number of PTV-only systems grew substantially in 2015-2017. In the second accounting period of
2014, there were 44 PTV-only systems, but that number increased to 173 in the second half of 2017. This
increase occurred in large part because systems that previously carried both PTV and WGNA became PTV-only systems when WGNA converted to a cable network at the end of 2014. Indeed, between 50 and 55
percent of the PTV-only systems in 2016-2017 had carried WGNA in 2014. Bortz Rep. at 10-11; Harvey
CWDT tbl.32.
This decline in Form 3 CSOs carrying distant signals was largely the result of systems that had
previously carried only WGNA electing not to carry any distant signals. Out of the 275 systems that
carried WGNA as their lone distant signal in 2014, only 15 (5.5%) of these systems carried a non-WGNA
distant signal from 2015-2017. Bortz Rep. at 8.
PTV and WGNA as distant signals suddenly became PTV-only systems. Adjustment
One also addresses Canadian-only systems, although it is opposed by CCG; and it has not
been shown that Adjustment One calculations would be useful in allocating CCG’s share
of the subject royalty funds.
As described more fully above, Adjustment One uses the McLaughlin assumption
of attributing 100 percent of value to the PTV (or Canadian category) when that is the
only category the system carries distantly, but does not do so for PTV-only systems in
2015 through 2017 that previously carried WGNA. As to those systems, Adjustment One
attempts to predict the average valuation from all systems that carried only PTV and
WGNA in 2014 because it is not assumed that a CSO changed its valuation of PTV
content simply because of the WGNA conversion. Furthermore, systems that carried
both PTV and Canadian distant signals (but no U.S. commercial distant signals) are
weighted in the same manner, but with the fees allocated equally among the PTV and
Canadian categories. See Bortz Rep. at 42-43.
The results seen from the application of Adjustment One tend to confirm the fact
that the conversion of WGNA had a profound effect on the way that the McLaughlin
Adjustment affected the Bortz results for 2015-2017. The application of Adjustment One
prevents the steep swings seen in the McLaughlin-adjusted results. Yet, as pointed out by
PTV, it does so at a cost. Adjustment One keeps the new PTV-only CSOs from bringing
100-percent PTV value into the calculation because they may have once valued another
signal that no longer exists. It treats the class of new PTV-only CSOs differently from
other PTV-only CSOs, even though they clearly have not replaced WGNA with other
distant signals. Moreover, due to the fact that Adjustment One calculates shares for 2015
through 2017 based on the average valuation from all systems that carried only PTV and
WGNA in 2014, the application of Adjustment One, for the purpose of allocating
royalties, would in effect attribute a portion of section 111 royalties according to the

former existence of WGNA, even though WGNA no longer existed as a distant signal in
2015-2017. Consequently, while Adjustment One is worth considering in the context of
gauging the impact of the WGNA conversion on the Bortz results, it does not provide
figures that can be used to calculate the allocation of shares of the subject royalty
funds.226
CTV’s proposed adjustment is not a proposed adjustment to the survey evidence
available in this proceeding, i.e., the Bortz Survey for 2014 through 2017. Rather, CTV
proposes that data connected to the survey for 2014 (without adjustment for the exclusion
of PTV-only CSOs) be used to expand the application of regression evidence from its
expert, Dr. Marx. As detailed above, CTV proposes a share allocation approach that
combines the Marx non-duplicated minute estimates for 2014 with the Bortz results for
2014 to establish a starting point for allocating shares, and then applies the year-to-year
net change in each category derived from the Bortz survey results for each year in 2015,
2016 and 2017. There is a dearth of expert testimony concerning CTV’s proposal.
CTV’s proposal is supported by no other party. CTV’s proposal hinges on acceptance of
Dr. Marx’s fee-based regression estimates for 2014, which as discussed above has not
been accorded the greatest weight.
Accordingly, the McLaughlin Adjustment, provided one understands its
aforementioned limitations, is most helpful among the proposed adjustments in
understanding the Bortz results. The following table shows the McLaughlin Adjustment
allocations when organized according to the claimant groups in this proceeding.

Additionally, Bortz Media’s Adjustment Two addresses the question of whether PTV signals transmitted
within their DMA should be treated differently. It also attempts to address the exclusion of Canadian-only
systems. As already described in the main text, Adjustment Two accepts (while not agreeing with) the
McLaughlin assumption of attributing 100 percent of value to either the PTV or Canadian category when
that is the only category the system carries distantly, even for systems that became PTV-only by default as
a result of the WGNA conversion. However, PTV-only systems that only carried distant PTV signals within
those signals’ originating DMAs are excluded. Bortz Rep. at 43. Adjustment Two, therefore, does not
accept the definition of a distant signal imposed by statute, and may also create a gap in compensation for
copyrighted programming within a DMA. Furthermore, no party presents its requested allocation based on
implementation of Adjustment Two, or made an adequate record concerning this potential adjustment.
McLaughlin-Adjusted Royalty Allocations
Basic Fund                2014     2015     2016     2017
Canadian Claimants        1.0%     1.8%     1.3%     1.2%
Commercial TV            25.2%    19.2%    15.3%    17.2%
Devotional Programs       5.4%     4.4%     5.0%     3.9%
Program Suppliers        21.0%    18.4%    17.8%    14.8%
Public TV                 8.4%    43.6%    48.4%    48.2%
JSC                      39.0%    12.7%    12.2%    14.8%

b. The Constant Sum Methodology
In the 2010-13 proceeding, some criticisms of Bortz and other survey evidence
went to the way constant sum questions were worded or executed, but some criticisms
went to use of the methodology per se. Dr. Mathiowetz provided an opinion in support
of the particular methodology used in the Bortz Surveys received in that proceeding. See
2010-13 Determination at 3587. Ultimately, the Judges found certain regression analyses
to be more persuasive than the survey results. Yet, far from rejecting the survey results,
the Judges concluded, after considering all of the evidence presented in that proceeding,
“the constant sum survey methodology, with adjustments, provides relevant information
relating to the relative value for each of the six categories remaining at issue.” Id. at
3591 (emphasis added).
Many criticisms have been leveled against the Bortz Surveys now at issue. Yet,
even among parties that do not support use of the Bortz Survey in this proceeding, for the
most part there has been an acknowledgement that constant sum surveys, if properly
designed and executed, might yield useful data, even if the Bortz Surveys presented in
this proceeding fall short.227 In this proceeding, Dr. Mathiowetz testified that a constant
sum methodology was used as early as the 1980s in royalty allocation proceedings before
the CRB’s predecessors. Her testimony in this proceeding is that a constant sum question
offers a perfect solution to the relevant research question. Mathiowetz CWDT at 4-6;

See, e.g., CCG PHB at 50-51; CCG RPFF at 40-41, 47-48.

4/10/2023 Tr. 3849-54 (Mathiowetz). The Judges must allocate 100% of the royalty
funds at issue across several different categories, and an increased allocation for one
category will necessarily require a decrease elsewhere so that the total remains 100 percent.
Consequently, survey evidence that employs constant sum methodology, such as the
Bortz Survey, could again provide relevant evidence.
PTV has a one-paragraph subsection in its main brief devoted to an argument,
which it claims is unrebutted, that the key constant sum question in the Bortz Surveys
(Question 4) is incapable of producing valid and reliable results because it is not
“incentive compatible.” It is argued that PTV’s expert witness Dr. Boyle is one of the
foremost experts on stated preference surveys, of which Bortz’s constant-sum question is
an example, and further that his written and oral testimony is that the literature has
developed on stated preference surveys, and it is now settled that stated preference
surveys must be “incentive compatible.” His opinion is that the Bortz Survey constant
sum question fails multiple requirements for incentive compatibility. PTV PHB at 68
(citing PTV PFF ¶¶ 355-57 (essentially tracking PTV’s brief, or vice versa)); see Written
Rebuttal Testimony of Kevin J. Boyle, Trial Ex. 7306, at 15, 32-36, 42-43 (Boyle WRT).

A review of the parties’ briefs and proposed findings of fact shows that, contrary
to PTV’s claim, PTV’s incentive compatibility argument was not in any sense
unrebutted.228 JSC addressed the issue of incentive compatibility at least as much as PTV
did in its briefs.229 See JSC PHB 45-46; JSC RPFF 40; JSC PFF ¶¶ 247, 248-51; JSC
PRFF ¶ 67. Furthermore, during the hearing, PTV conducted a substantive direct
examination concerning incentive compatibility; and then JSC conducted a vigorous

A review of the parties’ filings shows that incentive compatibility was addressed primarily, if not
entirely, only by PTV and JSC.
PTV’s reply brief and reply proposed findings provided more substance to its argument than PTV
provided in its initial briefing. See PTV RPHB at 39-40, 44-45; PTV PRFF ¶¶ 252-63.
cross-examination of Prof. Boyle on his opinion regarding incentive compatibility. Prof.
Boyle also answered questions from the bench on this topic.230
See 3/27/2023 Tr. 1419-21, 1453-54, 1492-512 (Boyle).
There is some discussion in PTV’s reply as to whether, in response to Prof.
Boyle’s opinion about incentive compatibility, JSC was wrong to set out to show that
constant sum surveys are reasonable or widely used. See Public Television’s Post-Hearing Reply Brief at 45 (PTV RPHB). Yet, Prof. Boyle’s written testimony linked the
reliability of constant sum methodology to incentive compatibility, at least for purposes
of PTV’s case. Furthermore, the presentation of his incentive compatibility opinion
appears as an alternative to evidence concerning the validity and reliability of constant
sum questions. In particular, under the heading “Validity and Reliability of Constant-Sum Questions,” Prof. Boyle testified in writing, “There is limited peer-reviewed
research on the validity and reliability of constant sum questions. In the absence of
evidence on the credibility of constant-sum questions for eliciting preferences to support
decision making, I turn to the well-known concept in economics and political science of
incentive compatibility (Groves and Ledyard, 1987; Ledyard, 1989) to consider the
validity of the Bortz survey constant-sum question.” Boyle WRT at 32-33 (footnote
omitted, which shows Prof. Boyle’s reliance on a Google Scholar search, with his search
terms, to show limited peer-reviewed research). Far from leaving that statement
unrebutted, at the hearing, JSC questioned Dr. Mathiowetz, and she responded, as
follows:
Q. * * * Professor Mathiowetz, did you see the assertion by Dr. Boyle that
there is a “absence of evidence on the credibility of constant sum questions
for eliciting preferences to support decision-making”?
A. I did see that by Dr. Boyle. And I disagree with that assertion. First of
all, we still see constant sum being used and appearing in the peer-reviewed
journal literature. Whether it is being used as an end in and of itself for a
substantive topic or sometimes you see the constant sum question being
used as a benchmark to compare other relative value methodologies.

Second, in light of Dr. Boyle's comment, I thought it would be useful to
go and look at recent marketing research text, because constant sum is often
taught in MBA programs dealing with marketing research. And I found
textbooks published as recently as 2017, I think was the most recent one, I
found, that are still teaching constant sum methodology.
4/10/2023 Tr. 3852-53 (Mathiowetz). Accordingly, in view of that testimony, the use of
constant sum evidence in prior proceedings, and other record evidence concerning
constant sum methodology, the Judges do not adopt an opinion that there is an absence of
evidence on the credibility of constant sum questions, or in the absence of such evidence
one must turn to incentive compatibility (notwithstanding the importance that incentive
compatibility may otherwise have).
PTV’s reply brief, and the proposed reply findings cited therein, provide a
summary of Dr. Boyle’s testimony on incentive compatibility to the effect “that (1) a
stated-preference question must be incentive compatible for it to produce valid and
reliable results; (2) there are four requirements for a stated-preference question to be
incentive compatible; and (3) Bortz’s constant-sum question is fatally flawed because it
fails multiple requirements for incentive compatibility.” Yet, the requirements for a
stated-preference question are not explained in detail. See PTV RPHB at 44 (citing PTV
PHB at 68; PTV RPFF ¶¶ 254-63). Turning to Prof. Boyle’s hearing testimony, he
explained, as follows:
A. So the constant sum, as I said before, is one example of stated
preference surveys. And the literature for that has been developing for a
long time.
And as it has developed in a variety of different areas of economics,
in terms of stated preference questions, it’s developed standards that a
question needs to be incentive-compatible. And that started, really,
evolving in the early 1990s and codified, really, in the 2000s.
But there are kind of four basic axioms of it; that it needs to be
consequential, it needs to be truthful, it needs to be a binary choice, and
payment needs to be coursed.
And so if you fail one of them, then you’re in problems for incentive
compatibility. If you fail more than one, you’re even more in trouble in
terms of incentive compatibility. And, you know, I have -- three of them
are listed here on the slide, but probably the two most important ones are

the truthful and binary because they apply directly to the way the constant
sum question is framed.
3/27/2023 Tr. 1419-20 (Boyle); cf. Boyle WRT at 34 (quoting Carson and Groves,
Incentive and Informational Properties of Preference Questions, 37 Environmental and
Resource Econ., 181-210 (2007), and a different formulation of the axioms).
Prof. Boyle also testified as to why, in his opinion, the Bortz Survey, particularly
Question 4, is not incentive compatible, as follows:
Q. And why isn’t the constant sum question incentive-compatible?
A. It’s not incentive-compatible because it’s not a binary question and a
single application. And so when I was talking about what we did with the
Deepwater Horizon, that was a specific dollar amount for a specific
valuation that you answered yes or no.
There’s no incentive for somebody to answer wrong on that. You
have got to answer yes or no. And if you answer wrong, you get an
undesirable outcome for yourself. With the Bortz Survey, when you have
the different categories that you can allocate percentages to, there's a
potential there for somebody to misallocate across categories when you
have what’s called an open-ended response that you can fill in.
You know, in the Bortz Survey, there was an enumerator, so they
were giving the information to the enumerator to fill in.
But, you know, I think one of the examples I used in my report was
that if someone had a devotional affinity, they could explicitly or implicitly
allocate more to devotional or less to others. If they are an atheist, it could
be the opposite one.
So there’s an opportunity, by how you allocate the percentages, that
you could either explicitly, implicitly, or accidentally misconstrue what the
true value is that is estimated from the questions.
3/27/2023 Tr. 1420-21 (Boyle).
PTV’s argument concerning incentive compatibility is not persuasive. As pointed
out by JSC, Prof. Boyle held up as a positive example an incentive compatible public
resource survey in which respondents may in fact have had a financial interest in the
outcome of the survey. JSC PFF ¶ 249; 3/27/2023 Tr. 1406-07, 1420 (Boyle) (“And if
you answer wrong, you get an undesirable outcome for yourself”). Additionally, whether
a Bortz survey respondent’s personal beliefs, such as religious beliefs (or the absence

thereof), might cause a respondent to “misconstrue” true value in the Bortz Surveys
remains highly speculative.
Moreover, with respect to the Bortz Surveys, Dr. Mathiowetz explained that Prof.
Boyle’s argument is wrong because “[c]able system operators are paying [the] royalty fee
regardless of how they allocate” value to program categories in the surveys. See
4/10/2023 Tr. 3854 (Mathiowetz). Indeed, it was not shown that Prof. Boyle had any
knowledge of whether or how respondents’ answers to Bortz Survey questions might
actually affect respondents or their CSOs, and what respondents’ perceptions might be on
the subject. Further, JSC’s suspicion that Prof. Boyle lacked knowledge in this area was
confirmed on cross-examination, when Prof. Boyle could not provide clear answers to
simple questions on this topic. He was, for example, specifically asked, “whether you
have an understanding as to whether cable system operators have a financial interest in
the outcome of these proceedings,” and he testified, “I am not testifying as an expert on
cable systems. I’m testifying as an expert on survey design. And that’s how I am
answering you.” Furthermore, when forming his opinions, Prof. Boyle did not consult
with anyone who had worked at a cable system. 3/27/2023 Tr. 1502-05, 1513-15
(Boyle).
c. Value Measurement
On behalf of Program Suppliers, Dr. Stec231 testified that, at best, the Bortz Survey
results represent an estimate of the cable system operators’ relative willingness to pay for
the different program categories they were asked to consider, but willingness to pay is not
the same as a market price or market value.232 Furthermore, it is his opinion that the
Dr. Stec was called to testify by PS, and was qualified as an expert witness in economics and survey
research. 4/19/2023 Tr. 5641 (Stec).
Dr. Stec, citing to an article on willingness to pay at the point of purchase, opines “research studies show
that, when controlling for question formats, the hypothetical bias in consumer-intent type measures, like
willingness-to-pay, can be substantial with the hypothetical willingness to pay exceeding the real
willingness to pay. Even in the absence of any other flaws, by not accounting for this hypothetical bias, the
Bortz Survey likely measured willingness to pay, in the form of budget percentages, inaccurately.” Stec
WRT at 26 (footnote omitted); PS PHB at 70-71; PS PFF ¶¶ 527, 529. The relevancy of this consumer-intent,
point of purchase opinion to the Bortz Survey remains unclear, especially in view of a dearth of
testimony on the subject.
Bortz Survey does not account for the supply side of the transactions, which was noted as
early as the CARP 1990-1992 cable royalty proceeding. He opined that although Mr.
Trautman indicates that the survey respondents are familiar with the rates charged for
programming, as CSOs they do not purchase the individual programming categories as
identified in the survey and instead purchase entire broadcast signals that include multiple
categories of programming. He opined that survey respondents are unfamiliar with the
actual prices charged in the marketplace for the specific programming categories when
they are retransmitted on distant signals. Written Rebuttal Testimony of Jeffrey Stec,
Trial Ex. 7608, at 21-22 (Stec WRT); 4/19/2023 Tr. 5655 (Stec); PS Brief at 67-71; PS
PFF ¶¶ 513-29.
Measurement of sheer willingness to pay may not be identical with a
determination of market value. Yet, as discussed throughout this determination,
including with respect to regression evidence presented by another Program Supplier
expert witness, Dr. Tyler, evidence concerning CSOs’ willingness to pay is an important
indicator when examining the hypothetical market examined by the Judges in this and
prior proceedings.
Furthermore, as pointed out by JSC, Dr. Stec expressed some of the same
negative opinions about the Bortz Survey in the 2010-13 proceedings, and although
considered by the Judges, the opinions did not prevent the Bortz Survey results from
being used by the Judges in making their allocations. See JSC PHB at 46; JSC PFF ¶
253. Indeed, the Judges recognized that the CARP had determined that in the relevant
hypothetical market, the supply of programming would be fixed and value would be
determined only by the CSOs’ demand as reflected in their willingness to pay.
Additionally, in the 2010-13 proceeding, the Judges “agree[d] with the pronouncement in


prior determinations that the royalties that would be paid in the hypothetical market
would essentially be a function only of the CSOs’ demand and the copyright owners’
costs, and their supply curves (if any) would not be important determinants of the market-based royalty.” See 2010-13 Determination at 3583, 3555 n.18 (citing, as an example,
1998-99 Librarian Order at 3606, 3608).233 In any event, the wording of Question 4 of
each Bortz Survey for a particular year does not seek a response about actual prices
charged in the marketplace, referenced by Dr. Stec. Rather, it seeks a CSO response
about percentages of a fixed dollar amount the system “would have spent” on specific
categories of programming that the system carried as distant signals in the subject year.
The parties have made further arguments to the effect that the Bortz Survey, and its
results, are unable to shed light on market value relevant to this proceeding. For
example, Program Suppliers argue that the Bortz results are not credible because they are
inconsistent with market changes, noting that with the conversion of WGNA to a cable
network, the share of compensable minutes for JSC and CTV content significantly
declined; and further, while in 2014, over 90% of the sports programming was JSC
content, by 2015 that share dropped to approximately 65%, with the balance of 35%
being Program Suppliers or CTV content, yet changes to programming shares observed
in the marketplace are not reflected in the Bortz Survey results. It is argued, among other
things, that despite the 94% decline in JSC content, the Bortz Survey suggests that JSC's
value fell by only 22% and that JSC remained the most valuable category in 2017. See PS PHB
at 77. Similarly, CCG argues that according to the Bortz Survey results, JSC content
retains a constant relative value, and is ranked the most expensive and most valuable
category, but that such a result is unrealistic after 2014 when WGNA
converted to a cable station. Such consistency, it is argued, does not comport with

As Dr. Majure testified, Question 4 is essentially a budget-setting exercise, and as such it is his opinion
that importance and expected cost are relevant to the value of distant signal programming, as they are to
forming a budget. 3/30/2023 Tr. 2616 (Majure).
reality, inasmuch as WGNA carried 94.2% of compensable distant JSC programming
minutes in 2014, and with WGNA’s conversion, compensable distant programming
minutes of JSC content dropped precipitously. CCG argues that the year-to-year
consistency in average JSC relative values from Question 4 despite a loss of more
than 90% of retransmitted content after 2014 can only be explained through heuristics,
question order bias, and the possible knowledge of the survey’s purpose. See CCG PHB
at 60.
JSC argues that while CCG and Program Suppliers take the position that the Bortz
Survey responses are not sensitive enough (to some unspecified degree) to the change in
volume of subscriber-weighted minutes resulting from the WGNA conversion, the Bortz
results show a strength of the Bortz survey that the Judges’ predecessors have
highlighted. JSC points out that in the 1998-1999 proceeding, following the conversion
of WTBS from a superstation to a cable network, the Bortz survey results showed only a
modest decrease in JSC’s relative value allocation, despite a similar drop in volume as
the one at issue in this proceeding. Indeed, JSC argues, it is wrong to expect that changes
in value will track with changes in the volume of programming, as might be the case in
other industries where value is driven by per-unit sales. Further, it is argued, it is entirely
reasonable that, as the Bortz Surveys show, CSOs continue to value highly the other JSC
programming they carry after a superstation conversion, and perhaps value it even more.
JSC points to the CARP’s assessment that the “Bortz respondents take account of
changes in volume, viewing, and all other material factors;” and argues that as a result,
the Bortz surveys, unlike other methodologies, would not lead the factfinders astray by
confusing volume with value. Rather, it is argued, as the CARP found in its
determination, affirmed by the Circuit Court, the surveys would “best inform [the CARP]
as to whether any changes in sheer programming volume, viewing minutes, subscriber
instances, or any other volume metric, truly translate into changes in value.” Joint Sports

Claimants’ Post-Hearing Reply Brief at 54-55 (JSC RPHB); JSC PFF at 167 ¶¶ 17, 18
(quoting 1998-99 CARP Rep. at 30-31 and Program Suppliers v. Libr. of Cong., 409
F.3d 395, 401-02 (D.C. Cir. 2005)).
JSC correctly argues that value, particularly as ascertained for the purpose of
royalty allocation, is not merely reflective of compensable minutes or of the volume of
programming. Furthermore, as recognized by the CARP, when determining the value of
programming, CSOs, such as Bortz respondents, have the ability to take account of
changes in volume, viewing, and all other material factors when assigning value.
Therefore, to some extent, the Bortz results may show that the CSOs contacted for the
Bortz Surveys, as argued by JSC, always valued JSC programming highly, and taking
many factors into consideration may have continued to do so, or may have done so to an
even greater extent, after the loss of WGNA as a distant signal. Thus, to retain usefulness
in allocation proceedings, the Bortz Survey results need not track precisely the
availability of WGNA. Furthermore, as JSC suggests, it is unclear exactly how closely
the Bortz results would have to track such a market change for its detractors to be
satisfied.
Nevertheless, the magnitude of the changes caused by the conversion of WGNA
is so great that one could expect some appreciable reflection of that event in the Bortz
results, particularly if there had not been significant changes in the Bortz methodology as
changes in the market occurred. Indeed, the Bortz results do show diminished
percentages for JSC after 2014. Yet, as already detailed, it was at the time of the
conversion that, citing various factors, Bortz Media made a radical change in its
methodology such that it abandoned its prior sampling methodology in favor of an
attempt to contact all CSOs it deemed eligible to participate in a Bortz Survey, while still
excluding CSOs that carried only PTV or Canadian programming as distant signals.
Bortz Media also calculated alternative adjustments to be used when interpreting the

Bortz initial results after the WGNA conversion to replace the McLaughlin Adjustment
used previously by the Judges. Thus, it is not simply a question of whether the Bortz
Surveys were sensitive to changes that occurred from 2014 through 2017. There should
be a realization that after 2014, one is looking at Bortz results that in certain respects are
based on a different methodology, and that different adjustments have been proposed.
Consequently, one must exercise caution when comparing results from 2014 (or before)
with results for 2015-2017.
As explained by Dr. Stec, for 2014, Bortz Media sought to interview a random
sample of Bortz-eligible CSOs, but for 2015 through 2017 Bortz Media attempted
something like a census while failing to interview anywhere near all eligible CSOs. In
fact, about 46% of eligible CSOs did not participate in those surveys. Dr. Stec testified
that participation or non-participation in the surveys was “self-selected,” which may be an
accurate appellation; but in any case, the sampling that Bortz Media obtained was not a
random sample. Thus, in Dr. Stec’s opinion, one cannot ignore whatever differences
might exist between respondents and non-respondents and, relying on the statistical
properties of randomness, impute the results obtained from the respondents to the
non-respondents, and thus to the entire target population. To do so, he opines, could
introduce bias or inaccuracies into the results. See 4/19/2023 Tr. 5671-74 (Stec); CCG
PFF ¶ 354.234
Somewhat similarly, PTV argues there is no dispute that the massive number of
Public Television and/or Canadian-Only Systems excluded from the 2014 through 2017
Bortz surveys would have responded differently than the CSOs Bortz actually surveyed,
and further, Bortz’s exclusion also creates a clear non-response bias in the years that

Even after the WGNA conversion in 2014, small numbers of cable systems continued to report carriage
of the signal. The reasons for doing so may be varied on the part of the cable systems, but in any event
remain unclear. See Trautman WRT at 2-3; Bortz Rep. at 7 n.6. As discussed, supra note 27, there may
have been some residual WGNA carriage as WGNA transitioned from a broadcast channel to a cable
station.
Bortz attempted to conduct the surveys as a “census.” It is argued that Bortz defined its
target population, in part, based on the amount of the section 111 royalties they represent,
but by 2017, the scope of Bortz’s exclusion of PTV- and/or Canadian-only systems
exceeded the scope of CSOs that were actually surveyed as part of the attempted census,
including in terms of the numbers of systems (37% of systems were excluded while 34%
of systems were surveyed), the section 111 royalties they paid (45% of royalties were
paid by excluded systems while 28% of royalties were paid by surveyed systems), and
the number of subscribers they represented (41% of subscribers were subscribed to
excluded systems while 30% of subscribers were subscribed to surveyed systems). PTV
PHB at 40 (citing PTV PFF ¶¶ 199–200 (relying in part on Boyle WRT at 38-39)).
JSC argues that the Bortz opponents fail to rebut Dr. Mathiowetz’s finding that
there was no evidence of non-response bias impacting the Bortz estimates in any year. It
is argued that, as Dr. Mathiowetz explained, the “risk and type of non-response bias” is
the same under either the sampling or the “census” approach, with no assumed statistical
difference or indeterminacy in one compared to the other. JSC argues that there is an
established method to test for non-response bias, which Dr. Mathiowetz applied, and
found no bias. JSC RPHB at 48; JSC PFF ¶ 381. Indeed, during the hearing, Dr.
Mathiowetz provided a succinct explanation of her assessment, as follows:
Q. * * * Just in the interest of time, if you could give us at a high
level what you did to assess whether there was a problem of non-response
bias here and what you concluded?
A. So as we have already established, right, there are respondents
and there are non-respondents. And you worry about non-response bias to
the extent that those who don't respond differ from those who do respond to
the survey.
In order to make that assessment, you have to take two steps. First
of all, you have to take and look at characteristics or variables that you have
for both respondents and non-respondents.
So we have a lot of information about these cable systems. We know
their total royalty payments. We know the region of the country. We know
the distant signal equivalents. We know the programming mix being offered
by those cable systems.

So the first step is to say: Are there any of these characteristics
related to non-response? And as Dr. Boyle asserts, there is -- we see that
there is a relationship between size of royalty and non-response.
But you have to take the second step and you have to say: Now,
among the respondents, is the characteristic that I saw related to non-response related to valuations? And when you look at that, total royalty
payments is not related to average program valuations.
So while we see a difference in non-response rates, there is no
indication of non-response bias in any of the years of the Bortz Survey.
4/10/2023 Tr. 3906-08 (Mathiowetz). Dr. Mathiowetz’s opinion expressed at the hearing
is supported by her written testimony.235 See Mathiowetz CWDT at 18-19.
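In general terms, the two-step assessment described above asks, first, whether an observable characteristic of the target population (such as royalty size) is related to whether a system responded and, second, whether that same characteristic is related, among respondents, to the survey outcome (the relative valuations). The sketch below is purely illustrative: the data are invented placeholders, the variable names are assumptions added for exposition, and the simple group means and correlation stand in for, and are not, the analysis Dr. Mathiowetz actually performed on the record evidence.

```python
# Illustrative sketch of a two-step non-response check; invented data, not record evidence.
from statistics import correlation, mean  # Python 3.10+

# Each tuple: (annual royalty payment, responded?, valuation share if responded)
csos = [
    (1200, True, 38.0), (900, True, 41.0), (300, False, None),
    (2500, True, 36.0), (150, False, None), (700, True, 40.0),
    (2000, False, None), (450, True, 39.0),
]

# Step 1: is the characteristic (royalty size) related to non-response?
resp_royalties = [r for r, answered, _ in csos if answered]
nonresp_royalties = [r for r, answered, _ in csos if not answered]
print("mean royalty, respondents:    ", mean(resp_royalties))
print("mean royalty, non-respondents:", mean(nonresp_royalties))

# Step 2: among respondents, is that same characteristic related to the
# survey outcome (here, a hypothetical relative valuation)?
xs = [r for r, answered, v in csos if answered]
ys = [v for r, answered, v in csos if answered]
print("royalty vs. valuation correlation:", round(correlation(xs, ys), 3))

# If royalty size differs between respondents and non-respondents (step 1)
# but is unrelated to valuations among respondents (step 2), the difference
# in response rates does not, by itself, indicate non-response bias in the
# valuation estimates.
```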
Dr. Mathiowetz’s analysis does not answer the theoretical question of whether or
not the samples obtained through Bortz's census-type approach in 2015 through 2017
can be treated the same way as random samples. Nevertheless, with respect to the target
population of the Bortz Surveys, Dr. Mathiowetz’s analysis provides actual evidence of
the absence of non-response bias in the Bortz Surveys for 2014 through 2017, which the
Judges take into consideration when determining the extent to which the Bortz results
indicate value.
Yet, Dr. Mathiowetz’s analysis does not speak to a different bias, which is the
bias in the design of the Bortz Survey caused by the complete exclusion of PTV-only and
Canadian-only CSOs. The hypothetical allocation by those CSOs under Question 4
would presumably have to have been 100% for the only distant signal that they carried.
See 3/23/2023 Simonson Tr. 1228;236 4/4/2023 Tr. 3131-34 (Trautman). The changes in
the Bortz results that occur when PTV-only or Canadian-only CSOs are taken into
account, especially after the conversion of WGNA, are significant and have already been
discussed.

In his written rebuttal to Dr. Mathiowetz’s written direct testimony, Prof. Boyle questions Dr.
Mathiowetz’s use of Census regions when reviewing cable system responses, opining that her investigation
might have been appropriate if one were doing a survey of the population but not for a survey to provide
input to cable royalty revenue allocations. Boyle WRT at 43-44.
Dr. Simonson was called by PTV, and qualified as an expert in the fields of survey
methodology, marketing, and managerial decision-making. 3/23/2023 Tr. 1170-71 (Simonson).
d. The Identification and Qualification Process of Survey Respondents
Questions have been raised concerning the identification and qualification of the
respondents that Bortz Media contacted for participation in its surveys. One inaccuracy
among the criticisms of the Bortz surveys is the assertion that the executives identified as initial
contacts for the interviewers (whose identities and phone numbers were obtained
primarily through the Factbook) were themselves the targets, or the target populations, of the
surveys.237 Yet, the target for the interviewers, and for the
surveys, was always the person most responsible for programming carriage decisions.
While the initial contacts may in fact serve as the survey respondents, in most cases, the
interviewer was referred to a subsequent contact within the CSO. Notwithstanding some
arguments to the contrary, the method of making an initial contact, and then pursuing a
referral when needed, is not a new method for the 2014-2017 surveys. See 2010-2013
Trautman Oral Testimony, Trial Ex. 7043, at 103-05. Furthermore, despite suggestions
to the contrary, Mr. Trautman’s hearing testimony on this topic is consistent with the
Bortz Report, and with the interviewer instructions of the survey instrument.238
The Bortz Survey has also been criticized as failing to reach the person most
responsible for programming carriage decisions because decision-making authority
within the systems might be at the national or corporate level, or because the survey
respondents worked in the marketing or video product departments. While one cannot
say with certainty that in all cases the Bortz interviewers reached the right respondents,

See, e.g., PS PHB at 63 (“Since Mr. Trautman only reached between 5.9% and 9.0% of his intended
target population, there should have been a process for qualifying respondents who were not the intended
targets.”).
The survey instrument instructs interviewers, when introducing themselves, to ask to speak with the
listed respondent, and if unavailable to confirm he/she is the person most responsible for programming
carriage decisions for the system and to arrange for a call back; and if not, then to ask to speak with the
person most responsible for programming carriage decisions for the system. In addition, Question 1 on the
survey instrument is: “Are you the person most responsible for programming carriage decisions made by
your system during [year] or not?” If the response is negative, the interviewer is instructed by the survey
instrument to ask to speak with the person most responsible for the system’s programming carriage
decisions for the subject year, and then to repeat the introduction and Question 1. See Bortz Rep. app. B;
4/5/2023 Tr. 3220-21 (Trautman).
the evidence shows that during the time period in question, individuals with the
knowledge of why specific distant signals were carried often worked at the local or
regional level, and furthermore could work in departments with titles such as marketing
or video rather than programming. See 4/3/2023 Tr. 2769-73 (Singer);239 4/10/2023 Tr.
4054-55, 4060-61 (Witmer);240 3/28/2023 Tr. 1714-16 (Costantini);241 4/17/2023 Tr.
5066-67 (Ringold).
e. Whether There Was Interviewer Error, Interviewer Bias, or a Lack of
Training
Opponents of the Bortz Survey argue that they have found “error” by the
interviewers in as many as 90% of the survey responses, although none of the alleged errors
seems to involve the recording of the substantive survey answers. The alleged errors, it is argued,
occurred in recording information such as “partial names” or “multiple positions” for the same
respondent. There are even criticisms based on respondents’ LinkedIn profiles (which
assumes, without record evidence, that LinkedIn accounts would be accurate, and up-to-date for the survey periods in question). See, e.g., PS PHB at 64-65; PTV PHB at 52-53;
CCG PHB at 54; Tr. 1278-79 (Simonson). Yet, as explained by Mr. Trautman,
respondents in these telephone surveys often hesitate to provide detailed information
about themselves such as full names, or happen to provide abbreviated titles.242 4/4/2023
Tr. 2992, 3004-05 (Trautman). Furthermore, it is not uncommon for regional personnel
to oversee activities at individual systems, depending on the size and individual system

Mr. Singer was called by JSC, and qualified as an expert in the operation of cable systems and cable
networks, including the valuation of television programming in the cable industry. 4/3/2023 Tr. 2738, 2745
(Singer).
Ms. Witmer was called by JSC, and qualified as an expert in the operation of cable systems, including
the valuation of cable and broadcast television programming. 4/10/2023 Tr. 4035 (Witmer).
Ms. Costantini was called by PTV, and qualified as an expert in the cable television industry and
valuation of television programming. 3/27/2023 Tr. 1583, 1588 (Costantini).
There is an email in which Mr. Trautman asks his contractor running the interview process to make sure
interviewers do not record titles in short-hand form. While Mr. Trautman was doing the due diligence of
quality control, there is no proof of actual error. See 4/10/2023 Tr. 3967-68 (Mathiowetz).
characteristics and responsibilities. Nor is it uncommon to find individuals who are
responsible for more than one function within a company. See 3/27/2023 Tr. 1622
(Costantini).
Bortz opponents argue that Ms. Grossman’s long experience working on the Bortz
surveys, and the large number of interviews she conducted, could have resulted in bias in
the surveys she performed. That criticism is somewhat speculative. Furthermore, Dr.
Mathiowetz tested for that possibility, and found no such bias. Specifically, it was found
that on average, responses to the surveys Ms. Grossman performed did not differ from
those obtained from other interviewers. 4/10/2023 Tr. 3893-94 (Mathiowetz). On the
other hand, despite the long history Bortz Media has with Ms. Grossman, there are
criticisms about a supposed lack of training materials, although the record shows that it is
standard to use the survey instrument, or the questionnaire, as the training material when
there is a small team of interviewers as in the case of the Bortz Surveys.243 4/10/2023 Tr.
3895-96 (Mathiowetz). Moreover, Bortz Media conferred with Ms. Grossman and her
team with respect to the 2014-2017 interviews before starting each survey. 4/3/2023 Tr.
2841 (Trautman); 4/4/2023 Tr. 3006 (Trautman). Subsequently, Bortz Media monitored
approximately 20 percent of the interviews “to ensure accurate interviewing techniques
and to observe any issues related to the respondent’s comprehension or ability to respond
to the constant sum valuation question.” Bortz Rep. at A-15.
f. Whether the Bortz Survey Questions Are Overly Complex or Caused
Confusion or Recall Bias
When examining the actual Bortz Survey constant sum question, industry experts
explained that cable system executives are more than capable of understanding the
categories of content separate and apart from particular linear channels, that they know

Bortz only used a separate, one-page training document for these surveys in the late 1980s to early
1990s, when it worked with a large contractor whose interviewers were not as clearly experienced in
executive interviewing. 4/4/2023 Tr. 3168-69 (Trautman).
these types of programming as part of their day-to-day job. The survey respondents also
have experience running businesses and managing expenses. Thus, the constant sum question is the
type of question one would ask them. See 4/10/2023 Tr. 4052-55 (Witmer); 4/3/2023 Tr.
2769 (Singer).
With respect to the terms used during the Bortz Survey interviews, there is
argument and testimony that in some cases the terms used to describe the program
categories are undefined or vague. See, e.g., PS PHB at 72. The terms used to describe
the program categories are by necessity generalizations. Yet, there is no showing of
widespread confusion among survey respondents. On the contrary, there is evidence that
the categories are generally understood, in particular a term such as “live professional and
college team sports.” See 2010-2013 Hartman Oral Testimony, Trial Ex. 7056, at 73-77;
3/28/2023 Tr. 1722-23 (Costantini).
With respect to the general complexity of the Bortz Survey, and especially
Question 4, Dr. Mathiowetz, who has studied and conducted establishment surveys,
testified that the Bortz constant sum question was similar in complexity to other
establishment survey questions, and underscored that the executives contacted for the
survey have a sophisticated level of knowledge about the concepts in the survey.
4/10/2023 Tr. 3854-55 (Mathiowetz). Indeed, Dr. Ringold has conducted surveys of
CSO employees, and has asked respondents a constant sum question that required
respondents to allocate 100 points among seven different claimant categories. See
4/17/2023 Tr. 5014-16 (Ringold).
Furthermore, one well-known indication of respondents who were overwhelmed
or confused could be what is termed “satisficing,” in which a respondent may take a
cognitive short cut to stay in the role of a respondent albeit at a minimum.244 See

With respect to satisficing, during the hearing, Dr. Mathiowetz quoted from the Encyclopedia of Survey
Research Methods, as follows: “Satisficing has been posited to at least partly explain several response
4/10/2023 Tr. 3855-56 (Mathiowetz). Yet, Dr. Mathiowetz found no pattern of
respondent confusion or satisficing behavior in the Bortz survey data. There was, for
example, a case cited by PTV of a Bortz respondent who gave the same rankings and
value allocations for two different systems. Dr. Mathiowetz testified, however, “[w]hat
you want to see when you’re looking for evidence that there are problems with the
question is that you see that pattern [of satisficing] overall across most respondents,” not
just “one or two.” 4/10/2023 Tr. 4015-26 (Mathiowetz).245
A question has been raised as to whether the timing of the Bortz surveys led to
recall error or bias. Mr. Trautman testified that as a matter of best survey practices, in
general it is better to perform the Bortz Survey closer to the end of the survey year, rather
than farther from it. As discussed above, the Bortz Surveys did not begin until several
months after the end of the preceding calendar year. Nonetheless, Mr. Trautman did not
conclude that there was recall bias in the surveys now at issue. 4/4/2023 Trautman
Tr. 3012, 3029-34. Yet, as Dr. Simonson observed, “the Bortz Survey mistakenly asked
a few respondents about programming categories that they did not actually carry.”246
3/23/2023 Simonson Tr. 1223. In all such cases, the respondents should have realized
that their systems had not carried distant signal programming in those categories, and
allocated zero value to such programming. Yet, Dr. Simonson testified, for 2017, for
example, over 11 percent of respondents allocated values of up to 50 percent to
categories they did not carry. Id. Dr. Mathiowetz was candid about the fact that there are

effects, including acquiescence effects, non-response order effects, no opinion option effects, and nondifferentiation in answering batteries of rating scales.” 4/10/2023 Tr. 3856 (Mathiowetz).
Even before the production of more detailed information, as originally produced, the redacted Bortz data
contained anonymized respondent identifications showing every time the same individual responded on
behalf of multiple systems in a given survey year. 4/10/2023 Tr. 2922-24 (Mathiowetz). It appears,
therefore, that early in this proceeding any party could have used such information to track potential
satisficing.
Such occurrences are indeed few in number, but not to be ignored. Specifically, for 2014 through 2017,
90 respondents overall, four in 2014, 33 in 2015, 24 in 2016, and 29 in 2017, provided relative value
allocations to compensable programming that they did not carry. See PS PFF ¶ 541 (citing Stec WRT at 41).
some errors in the Bortz Survey. She testified, “I think there are cases in any data
collection effort where there is misinformation, respondent error, respondent recall.
That’s the nature of the beast when you go and interview humans. And the best you can
do is understand how that can impact the data.” It was her opinion, which appears
reasonable, that incorrect answers in those cases, i.e., answers other than zero for a
programming category that was not carried, could be the result of recall error. She
explained that “a respondent is under the impression that the interviewer is giving them -- most respondents work under the impression that the information being conveyed by an
interviewer is accurate. And so we may have cases of recall error as opposed to just not
understanding.” 4/10/2023 Mathiowetz Tr. 4030-31.
Despite a relationship between importance and cost, already discussed, there is a
concern that because “warm-up” Question 3 asks about cost, it might have influenced
responses to Question 4, which asks about value. See, e.g., 2010-13 Determination at
3590 (“This may have injected some confusion into the respondent’s estimation of
relative value.”); 3/27/2023 Boyle Tr. 1422 (“But if I was doing it, I probably would not
have had Question 3 before Question 4, if it was something that was important. I would
have had Question 3 after Question 4, after the primary source of information that I was
looking to get.”).247
In this proceeding, there is no strong evidence offered either way to show whether
Question 3 unduly influenced responses to Question 4. The best evidence was, however,
found in the opinion of Dr. Mathiowetz, who testified, “when you look at the relationship

Dr. Conrad was called by CCG, and qualified as an expert in survey methodology with specialization in
questionnaire design and data collection. 4/13/2023 Tr. 4796-97, 4806 (Conrad). He expressed concern
over Question 3, and its order in the survey. See Written Rebuttal Testimony of Frederick Conrad, Ph.D.,
Trial Ex. 7405, at 4 (“The cost question (Q3) was intended as a warm-up but the information respondents
used to answer it was almost certainly salient and particularly accessible in their working (short-term)
memory when they answered the value question (Q4) immediately afterward, allowing the cost information
to dominate the valuation process; if the order of these two questions had been reversed, i.e., if Q4 had been
asked before Q3, cost information would less likely be the central consideration in the valuation process.
This pattern, if observed, would be what survey researchers call a question order effect -- considered a type
of measurement error”) (emphasis added).
between importance and relative value, you see a stronger relationship in the [Bortz] data
between importance and relative value than you do between expense and relative value.”
When asked whether Question 3 biases responses to Question 4, she answered that “My
analysis suggests that it is not biasing, that there is a very logical relationship, but it is
one that also includes understanding how respondents answered the importance
question.” 4/10/2023 Tr. 3878 (Mathiowetz); see Mathiowetz CWDT at 11 (“One means
by which questionnaire designers can signal the distinction among related concepts is by
employing different question forms, thereby presenting the respondent with a different
task. In the case of the Bortz surveys, the warm-up questions require the respondent to
rank order among the program categories, from 1 to k, whereas the key question of
interest related to relative valuations is a constant sum task”).
g. Whether Pre-Testing and Post-Testing Verification Procedures Were
Needed
PTV and CCG criticize the Bortz survey for not performing “qualitative pretesting” or “post-survey verifications.” For example, CCG argues that pretesting is a best
practice even for longitudinal surveys that are fielded with the same instrument over a
long period of time, according to the American Association for Public Opinion Research
(AAPOR),248 so that changes or adjustments can be made to the questions asked. CCG
PHB at 51-52. PTV argues in favor of pre-testing, and also that Bortz failed to conduct
any post-survey verification to confirm validity and reliability, such as test/retest
reliability or recontacting respondents to confirm “that they actually exist, the survey
actually happened, or that the respondents were qualified, and to learn how the
respondent understood and answered.” PTV PHB at 58.

The AAPOR is a leading organization on survey research standards, and its past presidents include
JSC’s expert witness Dr. Mathiowetz. In 2015, she was awarded the AAPOR Award for Exceptionally
Distinguished Achievement. See Mathiowetz CWDT at 1-2; 4/10/2023 Tr. 3943-44 (Mathiowetz).
JSC argues that it is inaccurate to suggest that pre-testing is the only way to assess
whether the surveys produce valid and reliable results. JSC argues that there are many
ways to test for, for example, internal consistency in responses, evidence of satisficing,
and bias; and Dr. Mathiowetz tested for all of those things, even if other experts did not
do so. JSC RPHB at 52.
While neither JSC nor Dr. Mathiowetz disputes the value of pre-testing in general,
Dr. Mathiowetz testified that pretesting of the 2014-2017 Bortz surveys was not
necessary because the survey has been fielded for many years and has been established in
prior proceedings as a valid approach to looking at relative market value. She explained
that the need for pre-testing is different than if one were undertaking brand new
questionnaire development. Furthermore, Dr. Mathiowetz testified that there is also a
significant downside to pre-testing a survey such as the Bortz Survey because there is a
small population, and Bortz Media goes back to them in the next year. Also, any cases
used for pre-testing usually would not be used in the main study. Tr. 3863-64, 3958-60
(Mathiowetz).
With respect to post-survey verification, Dr. Mathiowetz explained that due to the
small population, and recurring nature of the survey, “you don’t want to burn bridges” by
recontacting CSOs that Bortz Media knows it will want to survey again, just to verify
their prior identification of the respondent. Indeed, Dr. Mathiowetz had never seen such
a verification process for an establishment survey in the literature, nor had she done it
herself. 4/10/2023 Tr. 3897-98 (Mathiowetz). Similarly, Mr. Trautman’s reason for not
contacting survey respondents after each survey is a concern about “placing an additional
burden on respondents or potential respondents,” who are “busy executives,” and the
resulting “risk of not being able to continue to interview respondents in the future.”
4/4/2023 Tr. 3106-07 (Trautman).

h. Whether Bortz Media Used Undisclosed Quotas, Financial Incentives,
and Pressure to Produce “Extraordinary” Results That Biased the
Data
PTV argues in one paragraph of its brief that JSC has trumpeted the high response
rates achieved for the Bortz surveys, but never disclosed any response rate quotas it
imposed. According to PTV, compelled discovery revealed that Bortz imposed substantial
quotas on Ms. Grossman and her team, and pressured them to produce “extraordinary”
results;249 and that despite persistent and increasing difficulty, Bortz specifically pressured them to
“keep the response rate as high as possible because it has been a big selling point for the
Bortz survey in these proceedings . . . based on past emphasis by the Judges.” It is
further argued that Mr. Trautman admitted, and documents confirmed, that Ms.
Grossman and her team had a financial interest in meeting these quotas in order to keep
the surveys going, and did “everything possible to reach those numbers that [Mr.
Trautman] needed,” including placing many calls, pleading, calling neighboring systems,
disregarding institutional policies against participating in surveys, and staying in the field
for a longer time. See PTV PHB at 50 (citing PTV PFF ¶¶ 266-73).
JSC argues that Bortz Media appropriately sought to obtain high response rates,
and to do so through its contractor, and at higher expense, spent more time in the field
and made more efforts to reach respondents than one might otherwise do. It is argued
that no expert testified to the existence of “quotas” or resulting bias in the Bortz results.
It is argued that to the contrary, Dr. Mathiowetz testified that there is “absolutely not”
anything problematic about telling a survey organization to work hard to obtain good
response rates, even if that requires interviewers to make more frequent calls or leads to

Dr. Simonson testified that he never heard the term “establishment survey” before testifying, and had
never heard of a business or organization survey obtaining a response rate of 50% without offering
compensation (and did not know of any compensation for respondents in connection with the Bortz Surveys).
3/23/2023 Tr. 1248-51 (Simonson).
cost overruns. Furthermore, it is argued, Mr. Trautman testified unequivocally that
interviewers were never paid for completing an individual interview or completing a
specific number of interviews. JSC RPHB at 4, 55.
JSC argues that PTV is simply misreading the AAPOR disclosure standard, which
it never submitted into evidence and never showed to any of the numerous testifying
survey experts, including former AAPOR President Dr. Mathiowetz. Furthermore, JSC
argues that the AAPOR standards require disclosure of quotas used as part of the
“methods of sampling” for the survey, sometimes referred to as “quota sampling.” Quota
sampling is used to “achieve a pre-specified distribution on some set of variables” (such
as gender or Census region) within a survey sample, and there is no suggestion that Bortz
used quota sampling or anything like it, and thus nothing that Bortz improperly failed to
disclose. See id. at 55-56.
Indeed, there was a lack of development of any accusation that Bortz Media, or
any party associated with the Bortz Surveys at issue, used undisclosed sampling quotas,
let alone to obtain extraordinary results. Furthermore, it has not been established that
interviewers or anyone else associated with Bortz Media or its contractors received
undisclosed financial incentives to obtain results,250 or that Bortz Media or anyone else
associated with the Bortz Surveys engaged in “quota sampling,” as it has been explained
in the meager record on the topic.
C. The Testimony of Professor Papper
CTV argues that the testimony of its expert witness Prof. Papper, referenced above,
is based on empirical analysis and his decades-long expert assessment of trends in the

PTV’s Proposed Finding of Fact 270 contains the statement: “Ms. Grossman and her team were
financially incentivized to meet Mr. Trautman’s quotas because their compensation was a product of
keeping the study going, and the time and effort needed to do so. Ms. Grossman and her team required,
inter alia, more money, resources, longer time in the field.” PTV PFF at 96 (footnotes omitted) (emphasis
added). An examination of the evidence cited in supporting footnotes (i.e., 4/4/2023 Tr. 3195-202
(Trautman)) confirms that the financial incentives involved were, as indicated in PTV’s proposed finding,
only in the nature of compensation for the time, effort and resources needed to keep the study going and to
exceed expectations.
local television news industry generally and their impact on the relative value of CTV
programming during the 2014-2017 period. In particular, Prof. Papper opines that there
has been a steady rise in the production and airing of local news.251 Thus, CTV argues
that it is entitled to an increased share of royalties. See CTV PHB at 4-6. In its reply,
CTV argues that despite criticisms of RTDNA surveys, Program Suppliers provide no
evidence, empirical or otherwise, to rebut or refute what Prof. Papper consistently
presents throughout his testimony, which is that local television stations across the
country, including those that were distantly retransmitted, were producing and airing
increasingly more local news programming over the course of 2014-2017.252 Further,
CTV argues that as Dr. Marx testified, CSOs’ inability to offer as much CTV content in
2015-2017 was divorced from any actual choice made by the CSOs, and was due to the
reduction of available CTV programming as a result of the WGNA conversion. CTV
Reply at 52-53.
Program Suppliers argue that the RTDNA Surveys should be given no weight for
several reasons, including the fact that Prof. Papper failed to provide the information
necessary to evaluate his target population, sample design, the data he collected (and did
not collect) from the RTDNA Surveys, the quality of that data, or the accuracy of the data
collection and recording of that data. Moreover, Program Suppliers argue that Prof.
Papper's hearing testimony revealed that the reliability issues are more severe, pervasive,
CTV, based on the written testimony of Prof. Papper, argues that there has been a steady increase in the
amount of news broadcasts by station, including an increase in the amount of local news from 5.3 hours in
2014 to 5.7 hours in 2017; and the amount of local news also went up on the weekend, from an average of
2 hours per Saturday in 2014 to 2.1 hours in 2017, while the amount of local news on Sunday rose from 1.9
hours in 2014 to 2.1 hours in 2017. Further, it is argued, the number of stations running local news rose
from 1026 in 2014 to 1062 in 2017, and as television stations continued to increase their local news budgets
during the four-year period, they added more local newscasts to their lineups in the 4 PM to 7 PM time
slots, and the 5 AM to 7 AM time slots. See CTV PHB at 5; CTV PFF ¶¶ 10-11.
CTV argues that in support of the value of their own content, Program Suppliers continue to rely on
reports that are like those they find objectionable from Prof. Papper, and the articles Prof. Papper writes
that in part rely on the RTDNA Survey. Specifically, CTV argues that Program Suppliers rely on the
content of the Nielsen Year in Sports Media Report, U.S. 2017. It is argued that this Nielsen Report, which
includes and relies on a variety of sports media data, studies and survey results, is no different from Prof.
Papper’s articles and opinions that are informed, in part, by results from the RTDNA Survey. CTV RPHB
at 53 n.267 (citing PS PHB at 13).
and disqualifying than originally thought. Indeed, it is argued, the RTDNA Surveys are
not surveys at all, but are instead part of what CTV terms a “fact-gathering exercise,”
presumably because Prof. Papper admitted that he is not a survey expert and lacks the
expertise necessary to sponsor the RTDNA Surveys as evidence in this proceeding. CTV
PHB at 4. In addition, Program Suppliers argue that while CTV takes the position, based
solely on Prof. Papper's RTDNA Survey, that there was an increase in the amount of CTV
programming appearing on distant signals, this summary conclusion is directly contrary
to the quantitative study conducted by the other CTV experts, Dr. Bennett253 and Dr.
Marx, which shows the dramatic decline in CTV distant carriage over time. Program
Suppliers’ Post Hearing Reply Brief at 22 (PS RPHB).
The RTDNA Surveys were not offered or received as survey evidence, but rather
as information, along with articles, that Prof. Papper relied upon in forming his expert
opinions. As such, the RTDNA Surveys were not scrutinized as, for example, the Bortz
Surveys were scrutinized in this proceeding. Based on the totality of Prof. Papper’s
opinions and the sources upon which he relies, including his involvement in the broadcast
journalism industry, it is found that there was a trend toward increased production and
airing of local news during the 2014-2017 time period, although the extent of that trend is
difficult to gauge from Prof. Papper’s testimony. Furthermore, that trend does not in and
of itself translate to a greater allocation of section 111 royalties for CTV, and the
opinions of Dr. Bennett, Dr. Marx and others who testified on the subject of CTV
programming are addressed elsewhere.
For the foregoing reasons, the Judges accord evidentiary weight to the Bortz
Survey, with the McLaughlin Adjustment, relatively equivalent to the weight given to
the regression analysis as discussed supra. A reconciliation of these two useful (albeit

Dr. Bennett was called by CTV, and qualified as an expert in statistical methods and measurement.
4/12/2023 Tr. 4497, 4504-05 (Bennett).
imperfect) approaches, augmented by the testimony of industry witnesses, is set forth
below.
XVIII. CONCLUSION AND AWARD
Regression evidence was presented through Drs. Johnson, Tyler, George and
Marx, with the Johnson, Tyler and George regression models generating proposed royalty
fund shares for each of the claimant groups in each of the years 2014 through 2017.
Furthermore, survey evidence was presented only in the form of the Bortz Survey, which
was conducted for each of the years at issue, along with adjustments that could be made
to the initial results to account for certain factors (most notably the exclusion of CSOs
from the surveys because they carried only PTV or only Canadian programming as
distant signals). In addition, the Judges received evidence from industry experts who
testified from their unique perspectives about the regressions and annual surveys
presented at the hearing, as well as the valuation of programming relative to several of
the claimant groups.
For the reasons detailed in this determination, the Judges have found that no form
of evidence, be it a regression, the Bortz Survey or the testimony of industry experts,
provided data that translates directly into the allocation of royalty fund shares needed for
this determination.254 The results of all regression models in evidence have been
considered, but the Judges find that the Tyler Model is the most appropriate regression
model in this record, and have accorded it the most weight. The Bortz Surveys provide
relevant illustrations of the values placed on distant signal programming during the
relevant time period. For 2014-2017, the Bortz Surveys had limitations that other Judges
and tribunals have long recognized. In some cases, a more comprehensive assessment of
values can be made by applying adjustments proposed by various parties, especially the

To the extent that any criticism of, or deficiency in, the record evidence was not discussed, it is because
said criticism or deficiency does not change the outcome of this determination.
McLaughlin Adjustment, which has been used at least since the 2004 and 2005
proceeding. The Judges have also taken into consideration the fact that the Bortz Survey
methodology, like the regression models, faced challenges over the period following
2014, especially due to the conversion of WGNA.
In view of the totality of the evidence presented in this proceeding, the Judges
find that a synthesis of regression and survey results is necessary to arrive at the required
allocations. In particular, with respect to JSC, the Judges weighted heavily evidence
from the Bortz Surveys. While the record shows that minute volume is not as applicable
to sports programming (which is more dependent, for example, on games carried), JSC’s
allocation must be limited by the fact that significantly less sports programming was transmitted after
the WGNA conversion. Yet, with respect to PTV, the regression evidence was accorded
greater weight for 2014, and dispositive weight for 2015-2017. As already described, the
regression evidence accounted for the reduction of shares due to the Must Carry signals,
as well as increases due to the implicit willingness to pay as shown by cable systems that
continued to carry PTV even when WGNA was no longer available as a distant signal.
By contrast, the Bortz Surveys did not examine such circumstances, and there is no
rationale for augmenting the survey results with the McLaughlin Adjustment for all the
PTV-only systems that came into existence after 2014.
For CTV, the Bortz Surveys weighed heavily in making the allocation, which is
not inconsistent with evidence presented by industry experts Mr. Vaughn and Prof.
Papper, as well as the industry analysis provided by Dr. Marx. Relatively speaking, the
value of CTV should have increased since 2013, with the rise of streaming and over-the-top
programming, more than one sees when simply looking at the regression results.
Much of the CTV programming was not available on streaming, which would increase its
relative value in what was technically distant signal programming because it was
retransmitted to a contiguous area.

With respect to the allocation for the Program Suppliers, the Bortz Survey
evidence weighed more heavily than the regression evidence. Expert testimony showed
that streaming services could substitute for retransmitted signals. This factor was not
reflected in the regression evidence, but the Bortz Survey respondents, as cable industry
executives, would have understood the factors affecting the value of Program Suppliers
programming in much the same way as the testifying industry experts.
There is ample evidence in the record that SDC provides niche programming
whose value is not so much determined by minutes, and might not show up well in
regressions. Yet, the niche value of SDC has been reflected well in the Bortz Surveys
received in this proceeding, and previously, and is reflected in relatively consistent
numbers. Inasmuch as the allocations for SDC, by any parties’ estimation, resulted in
low numbers, one sees share allocations with relatively steep jumps or declines between
years, but when compared to the overall allocations to be made, the variations are not
great in absolute terms.
With respect to CCG, in general, the regressions examined the value of Canadian
programming in detail, and were relied upon in making allocations. Yet, even the
regression evidence was weighed carefully because although CCG had strength as a niche
offering, it also overwhelmed some regressions, including the above-minimum-fee
programming model. The Bortz Surveys were considered, but accorded no weight when
arriving at the Basic Fund allocations because much Canadian programming is not taken
into consideration in those surveys, and the Bortz results were clearly off the mark.
Accordingly, the allocations are as follows:

Table 2: Basic Fund Royalty Allocations (percent)

Basic Fund            2014     2015     2016     2017
CCG                   6.19    14.59    14.60    15.77
CTV                  20.55    19.78    17.36    17.50
JSC                  36.13    11.42    10.72    12.36
Program Suppliers    21.21    28.29    25.53    23.29
PTV                  11.07    19.18    24.78    25.25
SDC                   4.85     6.74     7.01     5.83

With respect to the 3.75% fund, it is recognized that PTV is a nonparticipant. To
arrive at the allocations for the 3.75% fund set forth in Table 1, the Judges have
reallocated the PTV shares proportionally among the claimant categories that participated
in that fund.
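Although Table 1 is not reproduced in this excerpt and the Judges' actual computation is not restated here, a proportional reallocation of a nonparticipant's share can be sketched, purely for illustration and with symbols introduced only for exposition, as follows (shares expressed in percent):

$$ s_i' \;=\; s_i \times \frac{100}{100 - s_{\mathrm{PTV}}}, \qquad \sum_{i \neq \mathrm{PTV}} s_i' = 100. $$

Under this illustrative formula, for example, a participating category holding 36.13 percent of a fund in which PTV held 11.07 percent would hold approximately 36.13 × 100 / 88.93 ≈ 40.63 percent after reallocation; these figures merely illustrate the arithmetic and are not the Table 1 allocations.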
The Register of Copyrights may review the Judges’ Final Determination for legal
error in resolving a material issue of substantive copyright law. The Librarian shall cause
the Judges’ Final Determination, and any correction thereto by the Register, to be
published in the Federal Register no later than the conclusion of the 60-day review
period.


David R. Strickler
Copyright Royalty Judge


Steve Ruwe
Copyright Royalty Judge


David P. Shaw
Chief Copyright Royalty Judge
DATED: April 17, 2024

The Register of Copyrights closed her review of this Determination on June 13,
2024, with no finding of legal error.
Dated: June 13, 2024.
David P. Shaw,
Chief Copyright Royalty Judge.
Approved by:

Carla B. Hayden,
Librarian of Congress.

ADDENDUM A

Before the
COPYRIGHT ROYALTY JUDGES
The Library of Congress
In re

DISTRIBUTION OF CABLE ROYALTY FUNDS

DOCKET NO. 16-CRB-0009 CD (2014-17)

PUBLIC

ORDER 46 GRANTING IN PART AND DENYING IN PART PTV's MOTION FOR
REHEARING AND DENYING JSC's MOTION FOR REHEARING

I. PROCEDURAL BACKGROUND AND LEGAL STANDARD

a. Procedural Background

On September 6, 2023, the Copyright Royalty Judges (“Judges”) issued their
Initial Determination of Royalty Allocation (“Initial Determination” or “ID”) in the
captioned proceeding (eCRB no. 28762).
On September 21, 2023, the Public Television Claimants (“PTV”) and the Joint
Sports Claimants (“JSC”) filed motions for rehearing (eCRB nos. 30637 and 30638,
respectively).
On September 25, 2023, the Judges issued Order 43, permitting written responses
to the motions for rehearing by October 5, 2023.
On October 5, 2023, the Canadian Claimants Group (“CCG”), Program Suppliers
(“PS” or “Program Suppliers”) and Settling Devotional Claimants (“SDC”) filed a Joint
Response in Opposition to the Motions for Rehearing (eCRB no. 32670) (“Joint
Response”).
On October 5, 2023, JSC and the Commercial Television Claimants (“CTV”)
filed responses in opposition to PTV’s Motion for Rehearing (eCRB nos. 32671 and
40001, respectively).
On October 5, 2023, PTV filed a Response in Opposition to JSC’s Motion for
Rehearing (eCRB no. 32673).

On October 10, 2023, the Judges issued Order 44, granting movants leave to file
replies by October 19, 2023.
On October 19, 2023, JSC filed a reply in support of its motion for rehearing
(eCRB no. 33842) and PTV filed a reply in support of its motion for rehearing (eCRB no.
33843).
b. Legal Standard

Pursuant to the Copyright Act, the Judges may grant a motion for rehearing in
exceptional cases. 17 U.S.C. 803(c)(2). Applying this statutory “exceptional case”
requirement, the Judges’ regulations state that the movant must show that an aspect of the
determination is “erroneous,” i.e., “without evidentiary support in the record or contrary
to legal requirements.” 37 CFR 353.1-.2.
In applying these statutory and regulatory standards, the Judges grant rehearing
only “when (1) there has been an intervening change in controlling law; (2) new evidence
is available; or (3) there is a need to correct a clear error or prevent manifest injustice.”
See Order Granting in Part and Denying in Part Motions for Rehearing at 2 n.3,
Determination of Royalty Rates and Terms for Making and Distributing Phonorecords
(Phonorecords III), Docket No. 16-CRB-0003-PR (2018-2022) (Oct. 29, 2018) (citing
Order Denying Motion for Reh’g at 1, Determination of Rates and Terms for Preexisting
Subscription Services and Satellite Digital Audio Radio Services (SDARS I), Docket No.
2006-1 CRB DSTRA (Jan. 8, 2008) (applying federal district court standard under Fed.
R. Civ. P. 59(e))). See also Order Granting in Part and Denying in Part Sirius XM’s
Motion for Rehearing and Denying Music Choice’s Motion for Rehearing at 1-2,
Determination of Royalty Rates and Terms for Transmission of Sound Recordings by
Satellite Radio and “Preexisting” Subscription Services (SDARS III), Docket No. 16-CRB-0001 SR/PSSR (2018-2022) (Apr. 18, 2018) (“SDARS III Order”) (same).
Moreover, in the SDARS III Order, the Judges made clear what would not be sufficient to

warrant rehearing: “A rehearing motion does not provide a vehicle ‘to re-litigate old
matters, or to raise arguments or present evidence that could have been raised prior to the
entry of judgment.’” 255 Id. at 2 (quoting Exxon Shipping Co. v. Baker, 554 U.S. 471, 485
n.5 (2008) (quoting C. Wright & A. Miller, Federal Practice and Procedure § 2810.1 (2d
ed. 1995))).256
II. JSC'S MOTION FOR REHEARING
Pursuant to 17 U.S.C. 803(c)(2) and 37 CFR 353.1, JSC requests rehearing,

arguing that the Judges’ allocations must conform to the record evidence and the law by:
“(1) correcting the Initial Determination’s reliance on an outdated and unreliable version
of the ‘McLaughlin adjustment’ calculation; (2) adjusting JSC’s 2014 share to align with
the record evidence and the reasoning of the Initial Determination; and (3) eliminating
reliance on a regression model for the 2015-17 time period that no witness endorsed and
is at odds with the record evidence.” JSC Motion at 1.
a. JSC's Motion Is Deficient Because It Does Not State a Standard Under Which It Can Seek Rehearing

The JSC Motion fails to explicitly set forth a governing rehearing standard for the
Judges to apply that would support the substantive arguments on which JSC seeks
rehearing. As the Judges noted supra, a party may seek rehearing if (1) it demonstrates
the existence of an “exceptional” case under the applicable statutory section, which, (2)
by regulation, means that a party must show that the aspects of the determination

An attempt to re-litigate old matters, or to raise arguments or present evidence that could have been
raised prior to the entry of judgment, is colloquially referred to as an improper attempt at “a second bite at
the apple.”
In determining whether to grant motions for rehearing, the Judges have also previously relied on Fresh
Kist Produce, LLC v. Choi Corp., 251 F. Supp. 2d 138, 140 (D.D.C. 2003), which involved a Rule 59(e)
motion in a case relating to economic rights. See, e.g., SDARS III Order at 2, 7. In view of the facts in
Fresh Kist, the district court held that “[a]lthough the court disapproves of parties raising arguments that
they could have advanced earlier, the court recognizes that the interests of justice and fairness support
reviewing the plaintiff’s motion.” Fresh Kist, 251 F. Supp. 2d at 141. Accordingly, the Judges recognize a
tension between the proscription against using a rehearing motion to obtain a “second bite at the apple” and
the need to prevent an unfairness that constitutes a “manifest injustice,” which can be addressed on a case-by-case basis.
identified by the movant were “erroneous,” pursuant to (3) specific grounds, such as, e.g.,
“clear error” or “manifest injustice.” 257 JSC does not express and apply these specific
standards, let alone maintain that its arguments meet these standards.
The Judges should not have to guess at the standard on which a movant relies for
seeking rehearing. Accordingly, the standardless nature of the JSC Motion renders it
deficient on this basis alone.258
Further, the Judges note that JSC sets forth an incorrect standard for consideration
of requests for rehearing, by repeating three times that the Judges’ adjustments were
“arbitrary.” Motion at 8-10. However, that standard is an appellate standard, not a
standard for rehearing. See, e.g., Hammond v. Reynolds Metals Co. Pension Plan for
Hourly Emps., 2006 WL 8436765, at *2 (N.D. Ala. May 25, 2006) (holding that the
“arbitrary and capricious” appellate standard of review is inapplicable to the court’s
“stringent standard” for consideration of a Rule 59(e) motion and “the judicial interest in
finality of decisions ….”); Perrin v. Hartford Life Ins. Co., 2008 WL 11472191, at *2
(E.D. Ky. Mar. 24, 2008) (“the court finds that the defendant cannot attain arbitrary and

As also noted supra, a “negative” requirement for a proper rehearing motion is that the motion cannot
simply attempt to relitigate matters that were addressed at the hearing (the so-called “no second bite at the
apple” requirement) or to raise issues that the movant could have presented at the hearing but did not.
JSC does cite 17 U.S.C. 803(c)(2) and 37 CFR 353.1, which provide parties with the right to seek
rehearing, but those mere citations are not enough. The Motion must attempt to tie the movants’
substantive arguments regarding the challenged aspects of the determination to specific rehearing
standards.
The Judges also note that JSC does attempt to tie its arguments to actual standards in its Reply. However,
the Judges are highly reluctant to permit new arguments to be made for the first time in a Reply, because
such delinquent assertions sandbag the adverse parties, who had already filed their permitted Responses
and are unable to address the delinquent arguments in the Reply.
In any event, the Judges’ discussion infra rejecting JSC’s arguments makes it clear that, even had JSC
made a timely attempt to identify allegedly applicable specified standards for rehearing and attempted to
connect its factual arguments to those standards, the JSC Motion would nonetheless be denied (in part). (In
this regard, the Judges note that, in its Reply, JSC cites the Judges’ order in the 2010-13 allocation
proceeding which noted the rehearing standard in 37 CFR 353.2, requiring a movant to state why it
believes the determination is “without evidentiary support in the record or contrary to legal requirements.”
JSC Reply at 2. JSC makes no allegation of legal error and, as discussed infra, there is abundant
evidentiary support for the factual findings with which JSC takes issue.)

capricious review of its decision. The court concludes that the defendant has failed to
demonstrate appropriate grounds for relief under Rule 59(e).”).259
Despite the legal deficiency of JSC’s “arbitrariness” argument as a basis for
rehearing, in the interest of completeness, the Judges explain infra why JSC’s substantive
assertion that the adjustments were arbitrary is factually deficient.
b. Whether the Judges’ Initial Determination Relies on an Incorrect Version of the McLaughlin Adjustment

As JSC states in its pending motion, in the Initial Determination, the Judges relied
in part on the Bortz Survey with the McLaughlin Adjustment, as the adjustment is found
in Exhibit 3049. JSC Motion at 1-2 (citing ID at 177-78, 181, 197-98). JSC argues,
however, that Exhibit 3049 is an “inaccurate version of the McLaughlin adjustment,” and
reliance upon Exhibit 3049 reflects two separate errors. Id. at 1.
According to JSC, the first error is that Exhibit 3049 was an early, preliminary
calculation of the “conventional McLaughlin adjustment,” as proposed in prior
proceedings, that was subsequently updated in Exhibit 3105, and “[t]hus, as between

The paucity of cases in which a party even attempted to rely on the appellate issue of whether a decision
was “arbitrary and capricious” is indicative of the inapplicability of that issue in the context of a Rule 59(e)
type of motion. But see Arias v. DynCorp, 752 F.3d 1011, 1016 (D.C. Cir. 2014) (“We have squarely held
that a party must preserve an issue for appeal even if the only opportunity was a post-judgment motion.”);
see also Jones v. Horne, 634 F.3d 588, 603 (D.C. Cir. 2011) (same). The Judges perceive JSC’s “arbitrary
and capricious” arguments as potentially prophylactic measures intended to preserve this issue on appeal,
rather than a proper basis for rehearing pursuant to statute, regulation, and the Judges’ prior rulings
regarding rehearing, which are expressly patterned on Fed. R. Civ. P. 59(e).
Further, JSC relies on a case which does not involve a Rule 59(e) motion, but rather addresses the standard
by which the D.C. Circuit reviews a district court’s entry of summary judgment. See N. Cent. Airlines, Inc.
v. Cont'l Oil Co., 574 F.2d 582, 587 n.14 (D.C. Cir. 1978) (cited in Reply at 2). But in the same breath,
JSC acknowledges the narrower Rule 59(e) standard. Reply at 2 (citing School for Arts in Learning Public
Charter School v. Barrie, 810 F. Supp. 2d 52, 55 (D.D.C. 2011) for the narrow standard, as “routinely”
held by courts (and CRB Judges), that Rule 59(e) motions are not vehicles for (1) rearguing facts and
theories upon which a court has already ruled or (2) for raising new issues that could have been raised
previously, and that such motions are disfavored and granted only upon a showing of “extraordinary
circumstances”). Additionally, JSC relies on another case, Dyson v. Winfield, 129 F. Supp. 2d 22 (D.D.C.
2001), in which the district court found an error regarding a question of law, rendering that decision
inapposite. But again, the broader defect is that JSC afforded Respondents no opportunity to address the
JSC Reply’s application of these prior decisions.
Accordingly, the Judges understand JSC’s Reply as setting forth the same standards that the courts in the
D.C. Circuit routinely apply to Rule 59(e) motions and, as stated in the prior footnote, consider the JSC
Motion on that basis.

Exhibit 3049 and 3105, Exhibit 3105 is the more accurate calculation of the McLaughlin
adjustment.” Id. at 1-2. The second error, according to JSC, is that “Exhibit 3049, as
well as Exhibit 3105, rely on royalty-based weighting that is economically inappropriate
after the conversion of WGNA and the enormous increase in minimum fee systems.” Id.
at 2. JSC argues that
Bortz subsequently implemented a revised weighting system (referred to as
“base plus 3.75”) that takes account of the proliferation of minimum fee
systems in 2015-17 by weighting based on what the CSO would have paid
according to the system’s distant signal usage absent the minimum fee. Use
of royalty-based weighting for 2015-17 conflicts with the Judges’ findings
regarding minimum fee systems.
Id. JSC further argues, “[i]f the Judges are relying on Bortz with the McLaughlin
adjustment, they should use the version set forth in Exhibits 4001-4003, which applies
base plus 3.75 weighting.” Id. Each of these two alleged errors (i.e., (1) using Exhibit
3049 rather than Exhibit 3105, and (2) not using a “base plus 3.75” adjustment
supposedly set forth in Exhibits 4001-4003) is further detailed separately in JSC’s motion,
and each is addressed separately, as follows.
i. Whether Exhibit 3049 Is Outdated, and Should Not Be Used to Determine Shares
1. Summary of the Parties’ Arguments
a. The JSC Motion

In addition to the JSC arguments recounted above, specifically with respect to the
use of Exhibit 3049, JSC argues:
Mr. Trautman prepared Exhibit 3049 in July 2020, roughly two
years before he submitted testimony in this proceeding. See Tr. at 3142:22-3143:8, 3145:2-3146:11 (Trautman); Ex. 7100 (Trautman Corrected WDT).
As Mr. Trautman testified, it takes an extensive period of time—well
beyond when the surveys are fielded—for Bortz to obtain and evaluate the
voluminous programming data presented in this proceeding. See Tr. at
2886:21-2887:9 (Trautman). That programming data is used in the Bortz
results to project allocations to non-respondents according to programming
carriage patterns. See Ex. 7101 (Corrected Bortz Report), at 29 (“Bortz
projected non-respondent values based on signal carriage characteristics,”
including “the carriage (or lack thereof) of JSC programming”). Thus, while

the survey responses are not changed over time, the weighted results of the
survey can be expected to become more accurate over time, as Bortz
evaluates more comprehensive programming information.
Mr. Trautman performed, and JSC produced, “UPDATED”
calculations of the weighted Bortz Survey results and “conventional
McLaughlin adjustment” dated “1-21-21” which are different in small but
significant respects from the July 2020 calculations. These “UPDATED”
calculations are in the record at Exhibit 3105 and a copy is attached as
Exhibit 1 hereto. See Tr. at 3099:12-21 (admitting Exhibit 3105).
There is no reasoned basis or record support for relying on the
outdated, incorrect version of the “conventional McLaughlin adjustment”
calculation in Exhibit 3049 given that an updated version is in the record at
Exhibit 3105 and was cited to the Judges. Indeed, the proposed findings of
fact of Public Television Claimants (“PTV”) cite to Exhibit 3105 (not
Exhibit 3049) in presenting the “Proposed Shares” of PTV and JSC
“Determined by Various Analyses of Relative Marketplace Value in 2014-17.” PTV Corrected PFF ¶ 12, Table 3 & ¶ 43, Table 5. At a minimum, if
the Judges are to rely on Mr. Trautman’s calculation of the “conventional
McLaughlin adjustment,” they should rely on the “UPDATED” calculation
in Exhibit 3105.
The existing record supports the use of Exhibit 3105 rather than
Exhibit 3049. However, if the Judges believe that additional information on
this issue would be helpful, JSC respectfully requests that rehearing be
granted to present additional evidence. Throughout the course of this
proceeding, “[n]o party argue[d] that royalty fund allocations . . . should be
made strictly according to the Bortz initial results subject to the McLaughlin
adjustment,” and “no party had its expert calculate the McLaughlin
adjustment . . . for presentation at the hearing.” Initial Determination at 178.
As a result—while JSC vigorously argued that the McLaughlin adjustment
should not be used in the abstract, see, e.g., JSC Post-Hearing Br. at 65-68—JSC has not had an opportunity to present evidence on which specific
version of that calculation is most accurate and reliable.
JSC Motion at 2-4.
b. The CCG, PS, and SDC Joint Response
In their joint response, CCG, the Program Suppliers, and SDC oppose JSC’s
motion with respect to the McLaughlin Adjustment, arguing that merely because Exhibit
3049 was an “early” calculation that Mr. Trautman subsequently “updated” with a
recalculation “does not by itself render the original version outdated or incorrect.” Joint
Response at 4-5. Furthermore, they argue,
JSC has only itself to blame for failing to explain away the earlier results or
to advocate more forcefully for reliance on the later results, particularly

considering that Mr. Trautman was specifically asked about Exhibit 3049
and his preparation of ‘other documents regarding potential adjustments and
weights that would alter those shares’ on cross-examination.
Id. at 5 (citing 4/4/2023 Tr. 3142-3145 (Trautman)). Indeed, they argue that, contrary to
JSC’s assertion, nothing precluded JSC from “present[ing] evidence on which specific
version of that calculation is most accurate and reliable.” Id. at 5 (quoting JSC Motion at
3-4). They argue, “[a]s the Initial Determination observes, ‘all parties knew that the
Judges applied the McLaughlin [A]djustment to the Bortz Survey initial results in the
2004 and 2005 proceeding, as well as in the more recent 2010-2013 proceeding.’” Id.
(quoting ID at 178). According to CCG, the Program Suppliers, and SDC, “JSC was on
notice and cannot use rehearing as a vehicle to present arguments or evidence that it
could have raised prior to issuance of the Initial Determination. Exxon Shipping Co., 554
U.S. at 485 n.5.” Id.
c. The PTV Response
PTV argues that JSC and the other parties devoted considerable time and pages
during the hearing and in post-hearing briefing to the question of the appropriate
weighting for the Bortz Survey responses, and the Judges, having evaluated those
arguments, reached a conclusion based on the evidence and the arguments. PTV argues
that JSC’s motion for rehearing “merely attempts to relitigate these issues, and now
inappropriately advocates for yet another of its panoply of preferred weighting
methodologies (another version of a ‘base plus 3.75’ weighting scheme), among dozens
of options that JSC’s experts mined to identify shares that increased JSC’s allocation.”
PTV Response at 3 (citing Ex. 3039). PTV argues that JSC, apparently aware that its
attempt to advance yet another weighting methodology does not meet the standard for
rehearing,
argues alternatively (indeed, primarily) in favor of a more modest
adjustment—that the Judges should use Exhibit 3105 rather than Exhibit
3049 as the most accurate calculation of the conventional McLaughlin-adjusted Bortz Survey results. While the differences between these two

exhibits appear relatively small, the record lacks evidence supporting JSC’s
argument, and JSC had more than ample opportunity to introduce evidence
during the hearing on this point but chose not to do so.
Id. Accordingly, PTV argues, rehearing is inappropriate under the well-established
requirements for a motion for rehearing. Id.
Specifically with respect to Exhibit 3105, PTV argues that “[k]nowing that its
broad arguments for re-weighting pursuant to a new methodology exceed what has
typically been allowed on rehearing, JSC’s more modest lead argument is that the Judges
should rely on a purportedly ‘updated’ calculation of the conventional McLaughlin
[A]djustment. JSC’s argument should be rejected because JSC failed to argue the
point . . . .” Id. at 3. It is further asserted that
JSC failed to . . . introduce evidence supporting its argument prior to its
motion for rehearing, despite ample opportunity to respond to Public
Television’s questioning at the hearing and arguments in its post-hearing
briefing. JSC’s request does not meet the rehearing standard because it
seeks “to raise arguments or present evidence that could have been raised
prior to the entry of judgment.”
Id. at 3-4 (citing Order Denying Program Suppliers’ Motion for Rehearing and
Correcting 2012–13 Allocations for Certain Parties, Docket No. 14-CRB-0010-CD, at 1
(Dec. 13, 2018) (“2018 Rehearing Order”)). Indeed, PTV argues that during the hearing,
Mr. Trautman was questioned extensively about Exhibit 3049, and Exhibit 3049 was the
basis for Public Television’s request, in the alternative, that the Judges use the
McLaughlin-adjusted Bortz Survey results as the “royalty floor.” See id. at 4 (citing PTV
PFFCL ¶ 208 & n.327; PTV Post-Hearing Br. at 42–43 (citing PTV PFFCL ¶ 208
(depicting Ex. 3049))). PTV argues, “Despite these arguments, JSC chose not to
introduce evidence regarding the relative accuracy of Exhibits 3105 and 3049, and chose
not to challenge the figures in Exhibit 3049 until its rehearing motion.” See id. PTV
observes,
[a]ccordingly, in the Initial Determination, the Judges noted that they were
“referred to a chart taken from a spreadsheet prepared by Mr. Trautman,
originally for Bortz Media’s internal use (Ex. 3049 . . .),” and correctly

observed that, “[f]ortunately, no party has challenged the figures contained
therein as accurately reflecting application of the McLaughlin adjustment
to the Bortz Survey initial results.” Initial Determination at 178.
Id. at 4.
PTV argues that JSC belatedly asserts that Exhibit 3049 is an “outdated, incorrect
version of the ‘conventional McLaughlin adjustment’” and that Exhibit 3105 is “an
updated version.” Id. (quoting JSC Motion at 3). Yet, PTV argues, “There is no support
in the record for this assertion. Nor is there support (or even any citation) for JSC’s
assertion that ‘the weighted results of the survey can be expected to become more
accurate over time.’” Id. Rather, it is argued, “there was substantial evidence that over
time, Mr. Trautman attempted to develop a number of creative weighting schemes with
the purpose of seeking to increase JSC’s share, not to achieve more ‘accurate’ results.”
Id. at 4-5.
Finally, PTV argues that JSC is incorrect to argue that JSC lacked the opportunity
to present evidence on which specific version of the conventional McLaughlin
Adjustment is most accurate and reliable. Id. at 5. PTV argues that JSC “had ample
opportunity to present evidence and argument on this issue, including during the
extensive examination of Mr. Trautman regarding Exhibit 3049, or in response to Public
Television’s post-hearing submissions.” Id. It is argued that while JSC asserts that PTV
cited to Exhibit 3105 (not Exhibit 3049), such citation “was only in two illustrative
comparison tables collecting various calculations by various witnesses, in order to show
that all allocation methodologies showed an increase in Public Television’s share, and a
decline in JSC’s shares.” Id. (citing PTV PFFCL ¶¶ 12, 13 & tbls.3, 5; PTV Post-Hearing Br. at 41–42). PTV argues that it “proposed that Exhibit 3049 could be used in
the alternative as a ‘royalty floor.’ See PTV PFFCL ¶ 208 & n.327; PTV Post-Hearing

Br. at 42–43. Public Television did not advocate for the adoption of Exhibit 3105 as a
basis for share allocation.” Id. (footnote omitted).260
d. The JSC Reply
In its reply, JSC reiterates that one reason Exhibit 3049 is incorrect is because it is
an early, preliminary calculation that was updated in Exhibit 3105. JSC Reply at 5-6
(citing JSC Motion at 1-4). JSC argues that “[n]o party disputes that Exhibit 3105 is a
more recent, ‘UPDATED’ version of the calculation in Exhibit 3049”, or that “over time,
Bortz incorporates more comprehensive programming information into its calculations.”
Id. at 5. JSC argues the “Responding Parties’ speculative attempts to justify reliance on
Exhibit 3049 instead of Exhibit 3105 are contrary to the record.” Id. JSC argues that
while the
Joint Respondents posit that a “later” calculation “does not by itself render
the original version outdated or incorrect” . . . Exhibit 3105 is not simply a
“later” calculation; the record supports the conclusion that Exhibit 3105 is
more accurate because it incorporates more comprehensive programming
data to project allocations to non-respondents.
Id. (citing, inter alia, JSC Motion at 2-3). JSC further argues that while PTV speculates
that Mr. Trautman may have applied some creative weighting scheme with the purpose of
seeking to increase JSC’s share in Exhibit 3105, there is no evidence of that. Id. (citing
PTV Resp. at 4-5). Rather, JSC argues, “Exhibit 3105 was created for Bortz’s internal
use, not to present a proposed share allocation in these proceedings.” Id. (citing 4/3/2023
Tr. 2881-2882 (Trautman)).
2. Discussion

As addressed in the Initial Determination, the parties knew going into the hearing
that the McLaughlin Adjustment, having been applied to Bortz surveys in the 2004 and
2005 allocation proceeding, and in the 2010-2013 allocation proceeding, would be

In the footnote, PTV argues, “That said, the differences between Exhibit 3049 and Exhibit 3105 appear
relatively small, although the record evidence does not explain the basis for those differences.” PTV
Response at 5 n.1.
relevant to the issues addressed during the allocation hearing for 2014-2017. See ID at
178. Indeed, during its opening argument, JSC expressed its disagreement with use of
the McLaughlin Adjustment to allocate shares, particularly with respect to 2015 through
2017. See 3/20/23 Tr. 69. JSC also knew that it had produced calculations found in
Exhibits 3049 and 3105,261 which showed that Mr. Trautman, JSC’s witness from Bortz
Media who sponsored the Bortz 2014-2017 surveys, had calculated the McLaughlin
Adjustment for the 2014-2017 time period. See, e.g., JSC Motion at 2-3; ID at 161.
During Mr. Trautman’s direct examination at the hearing, JSC asked Mr. Trautman
questions about the McLaughlin Adjustment, including questions concerning the fact that
he had performed the McLaughlin Adjustment, as follows:
Q. * * * In the course of doing your work for 2014 to ‘17, did you ever run
the McLaughlin adjustment?
A. Early on, I did, yes.
Q. Why did you do that?
A. Well, I was aware that some form of the McLaughlin adjustment had
been applied in past proceedings, including in 2010 to ‘13, and so I was
interested to see what the outcome would be if that were applied for 2014
to 2017.
Q. And if someone were to say: Well, the fact that Mr. Trautman ran the
McLaughlin adjustment shows that it was his view that McLaughlin
adjustment was appropriate, what would your response be?
A. That that’s not the case at all. I was simply performing a calculation in
order to see what the outcome would be.
4/3/2023 Tr. 2881-2882 (Trautman). Thus, Mr. Trautman testified that “[e]arly on” he
performed “a calculation.”
Subsequently, during the cross-examination of Mr. Trautman, PTV raised the fact
that he calculated the McLaughlin Adjustment, as follows:

Exhibits 3049 and 3105 were received into evidence with no objection and no argument. See 4/4/23 Tr.
3099.
Q. * * * Mr. Trautman, you did attempt to calculate the McLaughlin
adjustment for the 2014 to ‘17 Bortz Survey results before you filed your
written direct testimony in this proceeding, correct?
A. Yes. Early on in my review of 2014 to ‘17, I did prepare spreadsheets
that calculated what the outcome of the McLaughlin adjustment would be
or could be.
Q. So let’s take a look at Exhibit 3049, which was produced as JSC
00081249. Mr. Trautman, you recognize Exhibit 3049 as one of your
documents, correct?
A. Yes.
Q. And I’ll represent to you that the last modified date on this document,
as it was produced to us, is July 27th, 2020, nearly two years before
written direct testimony was due in this case. Is that consistent with your
recollection?
A. It is, yes.
Q. And there are two tables in Exhibit 3049, correct?
A. Correct.
Q. And the bottom table is titled “Weighted Bortz Survey Results By
Year, 2014-‘17 (After Conventional McLaughlin Adjustment).” Correct?
A. Correct.
Q. And the bottom -- and in this table, PBS is identified in the first column
at the top -- well, in the first row at the top of the table, row 25, correct?
A. Correct.
Q. And there are columns labeled, from left to right, 2014, 2015, 2016,
2017, and average 2014 to ‘17, correct?
A. Correct.
Q. And in this table, you calculated PBS’s share as 8.4 percent in 2014,
43.6 percent in 2015, 48.4 percent in 2016, and 48.2 percent in 2017, with
a 37.1 percent average from 2014 to 2017, correct?
A. That’s correct.
Q. And then let’s go down to the next row the Sports share. The Sports
share is listed as 39 percent in 2014, 12.7 percent in 2015, 12.2 percent in
2016 and 14.8 percent in 2017, with an average 2014-to-‘17 share of 19.7
percent; is that correct?
A. Yes, it is.

Q. Now, after Bortz prepared this document that we just looked at -- and
we can take that down. And let me, I guess, rephrase that. I mean, I don’t
know whether you used the term “Bortz” or you interchangeably. I’m
happy -- do you have a preference in that, Mr. Trautman?
A. I really don’t.
Q. Okay. Well, after you prepared the document we just looked at, you
prepared other documents regarding potential adjustments and weights
that would alter those shares, correct?
A. I recall that I did, yes. I don’t recall a specific sequence or, you know,
exactly which took place when in the sequence, but I did look at other
ways of examining the issue.
4/4/2023 Tr. 3142-3145 (Trautman).
As seen from the preceding transcript portion, the witness’s attention, and the
attention of the Judges, was directed exclusively to Exhibit 3049. On redirect, JSC did
not conduct any examination to show that there was any error in Exhibit 3049 as a
calculation of the McLaughlin Adjustment, or that it had been in any way updated or
superseded, for example, by Exhibit 3105 or the calculations contained therein. In
neither JSC’s pending motion nor its reply is there any such citation to the hearing record.
Given the hearing testimony concerning Exhibit 3049 and the McLaughlin
Adjustment, it was not surprising that PTV relied on pertinent portions of Exhibit 3049 in
its Proposed Finding of Fact (PTV PFF ¶ 208). The Judges expressly relied on this
proposed factual finding in the Initial Determination. See ID at 177 (citing PTV PFF ¶
208); see also PTV Post-Hearing Br. at 82. In its pending motion and reply, JSC has
cited to no initial or reply filing in which it pointed out any particular error in Exhibit
3049.262

In the Initial Determination, the Judges stated, “To see the figures obtained when the McLaughlin
adjustment is applied to the Bortz Survey initial results at issue in this proceeding, the Judges are referred
to a chart taken from a spreadsheet prepared by Mr. Trautman, originally for Bortz Media’s internal use
(Ex. 3049, duplicated above). Fortunately, no party has challenged the figures contained therein as
accurately reflecting application of the McLaughlin adjustment to the Bortz Survey initial results . . . .” ID
at 178.
Not even in the pending motion and reply has JSC shown that any data point
contained in Exhibit 3049 is erroneous. Although Exhibit 3105 is labeled “UPDATED”
and the data were calculated after the tables in Exhibit 3049, it cannot be presumed that
Exhibit 3049 contains error.
The closest JSC has come to explaining why Exhibit 3105 should be considered
“updated” appears only in its pending motion, in which JSC argues,
it takes an extensive period of time—well beyond when the surveys are
fielded—for Bortz to obtain and evaluate the voluminous programming data
presented in this proceeding. See Tr. at 2886:21-2887:9 (Trautman). That
programming data is used in the Bortz results to project allocations to
non-respondents according to programming carriage patterns. See Ex. 7101
(Corrected Bortz Report) at 29 (“Bortz projected non-respondent values
based on signal carriage characteristics,” including “the carriage (or lack
thereof) of JSC programming”). Thus, while the survey responses are not
changed over time, the weighted results of the survey can be expected to
become more accurate over time, as Bortz evaluates more comprehensive
programming information.
JSC Motion at 2-3.
Consequently, only now after the hearing, JSC argues that Exhibit 3105 can be
considered “updated” because when the tables in Exhibit 3105 were calculated, Bortz
Media projected allocations for non-respondents differently than it had at the time that
the tables in Exhibit 3049 were calculated. JSC refers to such differences as “small but
significant.” Id. at 3. Yet, inasmuch as JSC’s citation to a documentary exhibit is general
in nature and does not reference Exhibit 3105 and the calculations contained therein, and
further JSC did not examine Mr. Trautman about his McLaughlin Adjustment
calculations at the hearing (even after the relevant cross-examination by PTV), there is no
way to determine whether JSC’s belated characterization of Exhibit 3105 is accurate, or
whether the data contained therein are accurate.
In its pending motion, JSC argues, “the proposed findings of fact of Public
Television Claimants (‘PTV’) cite to Exhibit 3105 (not Exhibit 3049) in presenting the
‘Proposed Shares’ of PTV and JSC ‘Determined by Various Analyses of Relative

Marketplace Value in 2014-17.’ PTV Corrected PFF ¶ 12, Table 3 & ¶ 43, Table 5.”
JSC Motion at 3; see JSC Reply at 8. That argument does not portray the full picture.
PTV cited expressly to Exhibit 3105 in its Proposed Finding of Fact ¶ 12, in a string cite
showing support for a table it created to illustrate proposed share allocations resulting
from seven proposed methodologies. See PTV PFF ¶ 12; see also PTV PFF ¶ 43 (table
with citation to Ex. 3105). Yet, as already discussed, PTV cited, and reproduced a table
from, Exhibit 3049 in its Proposed Finding of Fact. See PTV PFF ¶ 208. Furthermore,
PTV cited to Exhibit 3049 (rather than Exhibit 3105) in a table found in the PTV initial
post-hearing brief, and again cited to Exhibit 3049 (via PTV PFF ¶ 208) when making its
substantive argument concerning a “relative value floor” for PTV. See PTV Post-Hearing Br. at 15, 42-43. None of the citations made by PTV in its post-hearing brief and
proposed findings clarify or contextualize the content of Exhibit 3105, or, more
importantly, diminish the weight the Judges were able to accord to Exhibit 3049.263
ii. Whether Use of the McLaughlin Adjustment Requires Base Plus 3.75 Weighting Rather than Royalty-Based Weighting
1. Summary of the Parties’ Arguments
a. The JSC Motion

The Judges also remain concerned by the fact that Mr. Trautman twice stated in his testimony in this
proceeding that he initially generated a version of the original McLaughlin Adjustment “to see what the
outcome would be.” 4/3/2023 Tr. 2881-2882 (Trautman). But an expert generating his prior preferred
approach in order “to see what the outcome would be” (here, what the allocations would be) undermines
his role as an objective expert, who should first identify the elements of his or her methodology and then
disclose – for better or worse – the results of that action. Here, Mr. Trautman acknowledged that he ran his
prior McLaughlin Adjustment “to see what the outcome would be” and then abandoned it in favor of
making other adjustments (increasing the JSC share), which, as PTV stated, indicates that “Mr.
Trautman … embarked on a multi-year quest ‘to conjure up’ additional adjustments.” Initial Determination
at 176. Indeed, Mr. Trautman’s sequential modeling of the McLaughlin Adjustment resembles the
revisionary work of other experts, which the Judges criticized as evidencing improper “searches” for an
allocation model that would increase the allocations of the parties by whom they were engaged. See Initial
Determination at 39 & n.45 (“Also troubling was the fact that, over a prolonged period, successive testing
by [the expert] was highly correlated with a steady rise in PTV’s allocation shares” … “[T]he Judges are
concerned with whether the evidence suggests that experts may have engaged in any inappropriate or
questionable acts in the course of attempting to maximize the return to the party on whose behalf they give
testimony.”).
In addition to the JSC arguments recounted above, specifically with respect to the
use of base plus 3.75 weighting, JSC argues:
There is a second, independent issue concerning the Judges’
application of the McLaughlin adjustment. Both Exhibit 3049 (the outdated
version) and Exhibit 3105 (the updated version) use royalty-based
weighting. However, after creating these exhibits, Mr. Trautman
determined that royalty-based weighting is not appropriate for 2015-17 due
to the overwhelming number of minimum fees systems. Mr. Trautman
subsequently ran the Bortz results with the McLaughlin adjustment using
the revised base plus 3.75 weighting, as set forth at Exhibits 4001-4003. If
the Judges are relying on the Bortz Survey with the McLaughlin adjustment,
they should use this version that applies base plus 3.75 weighting rather than
royalty-based weighting.
As Mr. Trautman and Dr. Majure testified, use of royalty-based
weighting improperly skews the survey calculations by giving inordinate
weight to minimum fee systems that typically did not even use their full
minimum fee budget. See JSC PFOF ¶ 302. The Judges similarly concluded
that decisions by minimum fee systems during the 2015-17 period are not
probative of relative market value. See Initial Determination at 129 & n.155
(“[T]hese [minimum-fee-paying] CSO decisions do not provide the Judges
with any useful information regarding the relative value of the retransmittal
of the various programming categories . . . .”).
The Initial Determination explains that in “2015-2017, the
overwhelming percentage of CSOs pay only the minimum fee, and the vast
majority of section 111 royalties are generated by those minimum-fee-paying CSOs.” Id. at 134. The Initial Determination likewise discusses how
both the regression and survey methodologies changed (or should have
changed) to account for the “dramatic increase in the number of minimum-fee only” systems in these years. See, e.g., id. at 21-22, 167 n.206. As
relevant here, the Bortz Survey methodology “changed to weight the results
based on the Base-plus-3.75 fees attributable to the actual signal carriage of
the Form 3 systems, and to apply the results using signal carriage-based fee
calculations rather than actual royalties paid.” Id. at 167 n.206. This change
in the weighting was necessary to avoid “‘introduc[ing] a distortion, by
giving excessive weight to systems with large Minimum Fee payments even
when they have chosen to carry very little distant signal programming.’”
JSC Post-Hearing Br. at 56 (quoting testimony of Dr. Majure). No party
disputed the propriety of Bortz’s new weighting approach, nor is it
questioned in the Initial Determination.
Bortz developed its revised base plus 3.75 weighting approach over
time, after recognizing that there were many more CSOs paying the
minimum fee in 2015-17. See Tr. at 3149:11-3151:11 (Trautman). The first
calculation in the record using an early version of the revised weighting
approach (initially only applied to PTV-only systems) was performed in
June 2021. See Ex. 3048; Tr. at 3147:19-3149:5 (Trautman). The
“conventional McLaughlin adjustment” calculations in Exhibits 3105 and
3049 predate that change, see supra at pp. 2-3, instead applying the

historical, royalty-based weighting that undisputedly distorts the results,
making them unreliable for 2015-17.
The record contains more recent calculations of the McLaughlin
adjustment for the years 2015-17 applying the corrected, base plus 3.75
weighting. These calculations are part of the Bortz Survey data that JSC
produced in connection with Mr. Trautman’s written direct testimony. See
Ex. 4001, “2015 Data File” at Rows 588-590, Columns W-AD (showing
“Adjusted Royalties” after “PTV/Canadian Adjustment” for 2015); Ex.
4002, “2016 Data File” at Rows 573-575, Columns W-AD (same for 2016);
Ex. 4003, “2017 Data File” at Rows 571-573, Columns W-AD (same for
2017); see also Tr. at 4792:7-4793:20 (Carbert) (identifying and admitting
Exhibits 4000-4003). These calculations are the most accurate and reliable
version of the McLaughlin adjustment in the record, on which the Judges
should rely to the extent they give weight to the adjustment. A table setting
forth the relevant results from Exhibits 4001-4003 is attached as Exhibit 2
hereto.
If the Judges conclude that identifying the correctly weighted
McLaughlin adjustment calculation requires further information, JSC
respectfully requests that the Judges grant rehearing to present additional
evidence on the issue. In the post-hearing briefing, JSC raised the problem
of royalty-based weighting in the “conventional McLaughlin adjustment”
calculation in response to PTV’s citation to Exhibits 3049 and 3105. See
JSC Post-Hearing Reply Br. at 62 (“[B]lindly applying the McLaughlin
adjustment as it was proposed in prior proceedings, PTV argues that it
should be attributed . . . 100% of all of those royalties, massively inflating
its share . . . . PTV overlooks that almost all PTV Only CSOs were paying
the Minimum Fee in 2015-17, so their substantial royalty payments have
nothing to do with their distant signal usage.”). However, because PTV first
embraced this calculation in its post-trial briefing, without having
previously offered any witness who endorsed it, JSC did not have an
opportunity to directly address the reliability of the calculation through its
own witnesses.
JSC Motion at 4-6 (footnote omitted).264
b. The CCG, PS, and SDC Joint Response
As discussed above, CCG, Program Suppliers, and SDC argue that “coming up
with a different calculation or weighting system later does not by itself render the original
version outdated or incorrect.” Joint Response at 4-5. Furthermore, they argue, JSC was
on notice that the McLaughlin Adjustment was relevant to the hearing, “and cannot use

JSC argues, “With proper weighting, the Bortz Survey results with the McLaughlin adjustment estimate
shares for PTV that are within 4 percentage points of the Judges’ final award to PTV in each year 2015-17.” JSC Motion at 6 n.1.
rehearing as a vehicle to present arguments or evidence that it could have raised prior to
issuance of the Initial Determination.” Id.
c. The PTV Response
PTV argues:
In a transparent overreach that is plainly improper on a motion for
rehearing, JSC now argues for yet another alternative weighting
methodology for the Bortz Survey that purportedly uses a “base plus 3.75”
weighting scheme. JSC never presented this calculation on its own as a
potential allocation methodology during the proceeding. The two Bortz
adjustments that JSC actually did choose to advocate in the hearing were
fully vetted in written testimony, at the hearing, and in post-hearing
submissions, and the Judges ultimately rejected them. JSC had every
opportunity to also present this calculation of the McLaughlin adjustment
with “base plus 3.75” weighting, and chose not to do so. JSC’s request
accordingly must be denied. See 2018 Rehearing Order at 7.
PTV Resp. at 5-6.
PTV argues that while JSC acknowledges that Mr. Trautman originally focused
on the conventional McLaughlin-adjusted Bortz Survey results, he
argues that he later preferred alternative weighting methods, including
various versions of a “base plus 3.75 weighting” for which JSC now
belatedly advocates. JSC Motion for Reh’g at 4. In fact, Mr. Trautman
testified that, after initially calculating the conventional McLaughlin
adjustment, he spent years testing multiple adjustments and weights,
including those that specifically singled out Public Television, to reduce
Public Television’s shares from those that result from the conventional
McLaughlin Adjustment.
Id. at 6 (citing PTV PFF ¶ 209; Tr. 3142–3154 (Trautman); Exs. 3048, 3049) (footnote
omitted).265 PTV argues that, “[c]ontrary to JSC’s suggestion, there is no reason to
believe that Mr. Trautman’s weighting innovations became more reliable over time, as
they appear to have been focused instead on achieving his results-oriented purpose of

In the omitted footnote, PTV’s response directs the reader to representative portions of the hearing
transcript. See PTV Response at 6 n.2 (“Tr. 3150:15–20 (Q. ‘[T]he analysis there would have applied the
McLaughlin adjustment but then would have weighted systems that carried only Public Television distant
signals differently from all the other systems? Is that Right?’ A. ‘My recollection is that’s correct.’); Tr.
3153:4–14 (Q. ‘So you then considered other adjustments that could be combined with the new weighting
approach, correct?’ A. ‘Broadly, I think that’s correct.’ Q. ‘Those included assigning various values of less
than 100 percent to Public Television for systems that carried only Public Television distant signals, right?’
A. ‘Well, certainly my two adjustments do employ that approach based on the particular characteristics of
some of the PTV-only systems.’)”).
reducing Public Television’s shares as generated by the conventional McLaughlin
adjustment.” Id. (citing PTV PFF ¶¶ 208-13).
Moreover, PTV argues,
[t]he “base plus 3.75” weighting is inconsistent with the weighting
principles that undergirded the McLaughlin-adjusted Bortz Survey in prior
proceedings. The Bortz Surveys ask respondents to value only the signals
that their CSOs actually distantly carried, and instruct that the sum of the
values must equal 100%. As a result, the conventional McLaughlin
Adjustment reflects the only possible response when a CSO distantly
carried only Public Television signals: 100% to Public Television.
Id. at 6-7. Further, specifically with regard to the weighting of the McLaughlin-adjusted
Bortz Survey results, it is argued,
Mr. Trautman testified unequivocally in the 2010–13 proceeding that
weighting by total royalties was the correct approach—even as to PTV-only
systems, which by definition were almost always “minimum-fee systems.”
When asked, “But in your view . . . , the McLaughlin-Blackburn
augmentation of the Bortz survey assures that an appropriate weight is
applied to the PTV-only systems; correct[?],” Mr. Trautman said, “Yes, it
considers the systems in the context of royalties, the total royalties that they
pay.”
Id. at 7 (citing Ex. 7043 at 551 (2010–13 Trautman Oral Testimony)). Accordingly, PTV
observes,
the Initial Determination rejected JSC’s proposed adjustment that would
have assigned less than 100% of the value to Public Television. Initial
Determination at 180; see also id. at 178–79 (“Inasmuch as PTV-only
systems are still not surveyed by Bortz Media, and there is no empirical
evidence to show how PTV-only systems value PTV distant signals, there
is no cause now to discard the McLaughlin adjustment . . . . The McLaughlin
adjustment has always been presented as a 100-percent or nothing approach,
and the Judges can take that characteristic into consideration.”).
Id. at 7.
d. The JSC Reply
In its reply, JSC argues against using Exhibit 3049 or Exhibit 3105 “because they
use incorrect, royalty-based weighting.” JSC Reply at 6. JSC further argues that its
“witnesses explained at the hearing that royalty-based weighting would improperly skew
the survey calculations in the 2015-17 period due to the overwhelming number of

minimum fee systems.” Id. (citing JSC Motion at 4). JSC also seeks to analogize to the
Judges’ analysis of the regression evidence, arguing that,
in the context of the regression analyses, the Judges similarly recognized
that the increase in minimum fee systems during the 2015-17 period
required methodological changes.
Initial Determination at 21-22.
Accordingly, Bortz revised its methodology to use base plus 3.75
weighting. JSC Mot. at 4. Calculations of the McLaughlin adjustment for
the years 2015-17 applying the corrected, base plus 3.75 weighting are in
the record at Exhibits 4001-4003. Id. at 5-6.
Id.
JSC argues,
None of the Responding Parties opposed Bortz’s change to base plus 3.75
weighting during the proceeding (indeed, SDC and PTV affirmatively
bolstered it), and none of them can explain why the reliance on royalty-based weighting in Exhibit 3049 is anything but clear error. The Joint
Respondents do not address the issue at all.
Id. (footnote omitted).
JSC argues that PTV,
lacking any evidence from the 2014-17 proceeding, attempts to rely on
testimony from the 2010-13 proceeding supporting royalty-based
weighting. See PTV Resp. at 6-7. But the difference between this
proceeding and the last one is critical: royalty-based weighting became a
problem in 2015-17 when, as the Judges found, there was a ‘dramatic
increase in the number of minimum-fee only’ systems.
Initial
Determination at 21. Testimony that royalty-based weighting was
appropriate in 2010-13 does not support its use in the changed landscape of
2015-17.
Id. at 6-7.
In addition, JSC argues in its reply that it was diligent, and
promptly objected to PTV’s belated embrace of the McLaughlin adjustment
with royalty-based weighting when it first arose in post-hearing briefing.
See JSC Post-Hearing Reply Br. at 62. Nothing in the rehearing standard,
or common sense, justifies requiring a party to spend its limited hearing
time and briefing space clarifying the most accurate version of each unendorsed calculation that comes up, particularly where, as here, the
alternative calculations presented for even a single base regression
numbered in the hundreds.
Id. at 7.

JSC argues, with respect to the cross-examination of Mr. Trautman, that “pointing
a witness to his own alternative calculation is a common form of criticizing a
methodology, not an affirmative endorsement of the alternative,” and with respect to
PTV’s citations, JSC argues, inter alia, “JSC had no reason to argue for the use of
Exhibit 3105 over Exhibit 3049 because PTV’s average share does not meaningfully
differ between the two exhibits (only the shares of the other parties do).” Id. at 7-8.
JSC argues,
The implausible degree of foresight that the Joint Respondents and
PTV would demand of any party seeking rehearing is well beyond anything
necessary to deter parties from “re-litigat[ing] old matters” or raising new
arguments out of time. PTV Response at 2 & Joint Response at 2. Rather,
denying rehearing on this record would incentivize parties to disguise their
intent to rely on a specific calculation as long as possible, so as to immunize
that calculation from the full adversarial vetting process.
Id. at 8-9.
2. Discussion

As an initial matter, the proposed adjustment contained in JSC’s Motion Exhibit 2
(derived from Exs. 4001-4003) would, as indicated in the pending motion, apply only to
the Bortz survey results for 2015 through 2017. Thus, the adoption of JSC’s Motion
Exhibit 2 would leave unanswered any questions pertaining to the McLaughlin
Adjustment for 2014. In any event, the underlying problem that gives rise to the
McLaughlin Adjustment, and all other adjustments advanced by the parties, is in the way
that the Bortz surveys exclude certain PTV and Canadian signals. While the problem
should not be overstated, the Bortz surveys contain downward biases with respect to
relevant PTV and Canadian programming. See ID at 168. The McLaughlin Adjustment
has been recognized as an adjustment, or augmentation, that helps to remedy bias in the
Bortz methodology but may do so on an imprecise basis. Id. at 168, 179. There is no
indication that any adjustment exists that compensates completely for weakness in the
design of the Bortz surveys.

With respect to JSC’s newly advanced adjustment, there is no indication in JSC’s
pending motion and reply that the adjustment derived from Exhibits 4001-4003 was the
subject of hearing testimony. Indeed, the available details surrounding the calculations
made therein, and condensed in JSC’s Motion Exhibit 2, remain scant. JSC argues,
“because PTV first embraced this [McLaughlin] calculation in its post-trial briefing,
without having previously offered any witness who endorsed it, JSC did not have an
opportunity to directly address the reliability of the calculation through its own
witnesses.” JSC Motion at 6. Yet, this argument is unavailing for several reasons. As
discussed above, all parties knew that the McLaughlin Adjustment would be at issue in
the hearing. JSC even addressed the McLaughlin Adjustment in its opening argument,
and later during the direct examination of its witness Mr. Trautman. As JSC expected,
PTV cross-examined Mr. Trautman on the McLaughlin Adjustment, yet without
corresponding redirect by JSC.
Moreover, JSC did not need to wait, nor did it wait, to find out what PTV would
say in its post-hearing filings in order to set forth JSC arguments and evidence
concerning adjustments to the Bortz survey results, including its own proposed
adjustments. Indeed, during the hearing, JSC presented evidence with respect to its
proposed “Adjustment One” and “Adjustment Two,” which were discussed at length in
the Initial Determination.266 See, e.g., ID at 170-180. One feature of the adjustments
proposed by JSC was that Bortz Media weighted the results based on base-plus-3.75 fees
attributable to the distant signals actually carried by the PTV-only systems. See id. at
170, 171. Aside from the substantive deficiencies in this alternative adjustment, it is not
appropriate for JSC to use the rehearing process to advance this argument, when it could
have (and should have) been articulated during the hearing.

In view of the hearing that JSC has already received, PTV argues that “the Judges should deny JSC’s
motion for rehearing, to the extent that the prospective rehearing would rehash which weighting
methodology should be applied to the Bortz Surveys . . . .” PTV Response at 10.
In addition, JSC’s motion fails to adequately address the fact that in the Initial
Determination, the Judges already recognized strengths and weaknesses of the Bortz
surveys, particularly after application of the conventional McLaughlin Adjustment. See,
e.g., id. at 178 (“The application of the McLaughlin adjustment to the initial Bortz results
for the years now at issue, 2014 through 2017, is relevant, and the adjusted results . . .
should be given varied weight, depending on whether one is considering the adjusted
results for 2014, or for 2015 through 2017.”); id. at 179 (“To the extent that one would
specifically exclude Must Carry signals, such as in a regression analysis, the fact that the
McLaughlin adjustment is applied to Must Carry signals diminishes the value of such
adjusted Bortz results when making a comparison to such other evidence that devalues
Must Carry signals.”); id. at 180 (“no party, not even PTV, argues that the Bortz Survey
with the McLaughlin adjustment is the best methodology of record for arriving at an
allocation for 2015-2017”). Having reviewed all adjustments proposed by the parties
during the hearing, the Judges determined, “the McLaughlin adjustment, provided one
understands its aforementioned limitations, is most helpful among the proposed
adjustments in understanding the Bortz results.” Id. at 181. Consequently, in allocating
shares, the Judges made judicious use of the Bortz surveys (with the McLaughlin
Adjustment), in some instances according the Bortz survey evidence no weight at all. Id.
at 197-98.
iii. Conclusion Concerning the McLaughlin Adjustment and the Request for Rehearing267
For the reasons detailed above, the Judges find that it has not been shown that an
exceptional case exists, or that an aspect of the Initial Determination is erroneous due to
its reliance on Exhibit 3049 and the data contained therein. The movant for rehearing, JSC,

JSC’s argument, noted supra, seeking to justify rehearing by analogy to the Judges’ analysis of the
impact of the Minimum Fee CSOs on the regression methodology, is discussed separately, infra.
has not demonstrated that aspects of the determination relating to the McLaughlin
Adjustment and Exhibit 3049 are without evidentiary support in the record or are
contrary to legal requirements. In that regard, it has not been shown that there is a need
to correct a clear error or to prevent manifest injustice with respect to the Initial
Determination’s cautious use of the Bortz surveys with the McLaughlin Adjustment.
Rather, a review of the parties’ filings and relevant portions of the hearing record shows
that evidence concerning Exhibit 3049 went unrebutted during the hearing, and there is
no reason to disturb the hearing record or the findings of the Initial Determination in
favor of another exhibit or exhibits (and other calculations contained therein) as to which
there is less evidentiary support, whether that be Exhibit 3105 or JSC’s newly advanced
adjustment as summarized in JSC’s Motion Exhibit 2. Furthermore, other approaches to
adjustment or augmentation of the Bortz Survey results were presented by JSC during the
hearing. It has not been shown that it is necessary or appropriate to rehear any portion of
the case with respect to yet another proposed adjustment. As the Judges noted supra, the
rehearing process cannot be utilized to obtain a “second bite at the apple,” i.e., to relitigate old matters or to raise arguments or present evidence that could have been raised
prior to the entry of judgment.
Consequently, JSC’s motion for rehearing with respect to reliance on the
McLaughlin Adjustment is denied.
c. Whether JSC’s Share for 2014 Is Inconsistent with the Record Evidence and the Reasoning of the Initial Determination
i. Introduction

As explained above, it is clear that in the Initial Determination the Judges
appropriately and sufficiently considered – and rejected – JSC’s proffered alternative
adjustments to the Bortz Survey. JSC’s request for rehearing as to this issue is properly

dismissed, as indicated supra, as an attempt to relitigate the issue, i.e., a violation of the
“second bite at the apple” proscription.
However, JSC also argues something else – that rehearing is required because,
according to JSC, the Judges erred in the Initial Determination by applying the Minimum
Fee issue differently to the survey methodology than they did to the regression
methodology.268
ii. The Parties’ Positions
1. The JSC Motion

To put JSC’s “inconsistency” argument in context, it is helpful to begin by taking
note of the basic argument in JSC’s Motion regarding the alleged effect of Minimum Fee
royalty payments on the Bortz Survey results. In this regard, JSC maintains the
following:
[R]oyalty-based weighting is not appropriate for 2015-17 due to the
overwhelming number of minimum fees systems…. [U]se of royalty-based
weighting improperly skews the survey calculations by giving inordinate
weight to minimum fee systems that typically did not even use their full
minimum fee budget …. As relevant here, the Bortz Survey methodology
changed to weight the results based on the Base-plus-3.75 fees attributable
to the actual269 signal carriage of the Form 3 systems, and to apply the
results using signal carriage-based fee calculations rather than actual
royalties paid.
…
This change in the weighting was necessary to avoid “‘introduc[ing] a
distortion, by giving excessive weight to systems with large Minimum Fee
payments ….’”
JSC Motion at 4-5 (citations omitted).

This specific argument cannot be rejected under the “second bite at the apple” proscription because
JSC’s claim of inconsistency is based on a comparison of two aspects of the Initial Determination.
However, as explained infra, this argument fails to support JSC’s request for rehearing for other reasons.
JSC’s use of the word “actual” here is misleading, in the manner previously described by the Judges.
See Initial Determination at 69 n.79 (“The word “actual” in this context is rather Orwellian. For the 2015-2017 period, a substantial majority of the CSOs in which the subscriber groups are situated “actually” paid
the minimum fee. A Base Fee was “actually” calculated, as required by the regulations, but not “actually”
paid, because the Minimum Fee bound. … [M]isleading semantic use of the adjective “actual” does not
assist the Judges in deciding whether any or all of the Base Fee calculations have objective evidentiary
weight ….”).
But, as noted supra, the JSC Motion also maintains something more than an error
occurred in the Judges’ adopting of this weighting. JSC asserts as well that the Judges
acted inconsistently, because their “[u]se of royalty-based weighting for 2015-17
conflicts with the Judges’ findings regarding minimum fee systems.” JSC Motion at 2.270
2. The PTV Response271

Relating to this issue, PTV responded that it is JSC that is inconsistent as to this
issue:
[T]he “base plus 3.75” weighting is inconsistent with the weighting
principles that undergirded the McLaughlin-adjusted Bortz Survey in prior
proceedings…. Specifically … Mr. Trautman testified unequivocally in the
2010–13 proceeding that weighting by total royalties was the correct
approach—even as to PTV-only systems, which by definition were almost
always “minimum-fee systems.” When asked, “But in your view . . . , the
McLaughlin-Blackburn augmentation of the Bortz survey assures that an
appropriate weight is applied to the PTV-only systems; correct[?],” Mr.
Trautman said, “Yes, it considers the systems in the context of royalties, the
total royalties that they pay.” Ex. 7043 at 551:9–15 (2010–13 Trautman Oral
Testimony).
PTV Response at 6-7.
3. The JSC Reply272

In Reply, JSC explained why the PTV Response fails to rebut JSC’s argument as
to this issue. Specifically with regard to the issue of inconsistency vis-à-vis the treatment
of the Minimum Fee in the regression analyses, JSC argued:
1. The evidentiary weight the Judges gave to Minimum Fee royalty payments in
the Bortz Survey model was inconsistent with the lesser evidentiary weight the
Judges gave to Minimum Fee royalty payments in the regression models.

The Judges discuss infra at footnote 28 JSC’s problematic use of the word “weighting” to characterize
its application of the Bortz Survey allocations. For clarity, the Judges defer that discussion until after they
have explained the error in JSC’s argument that the Judges should have treated the Bortz Survey results and
the regression analyses in the same manner vis-à-vis the Minimum Fee royalties.
The Joint Respondents did not address this issue and, as noted supra, CTV did not file a response to the
JSC Motion.
As noted supra, JSC described the Judges’ finding as to this (and all other) rehearing issues as “clear
error” for the first time in the JSC Reply.
2. The Judges found that – with regard to the regression models – Minimum Fee
royalty payments, standing alone, for the most part did not provide useful
information regarding the “relative value” of the retransmitted programming,
therefore requiring “methodological changes” to the regression approach.
3. Bortz revised its methodology used in prior allocation proceedings,

substituting instead its new “base plus 3.75 weighting,” to account for
Minimum Fee royalty payments as applied to the Bortz Survey model.
4. The adverse parties fail to rebut the argument that the Judges wrongly
employed a royalty-based weighting approach which gives undue weight to
Minimum Fee royalty payments during the 2015-17 period. Specifically, all
the responding parties except PTV ignored the issue. And, as for PTV, it cites
no evidence from the present proceeding, and instead relies on testimony from
the 2010-13 proceeding supporting royalty-based weighting – ignoring the
JSC’s assertion that royalty-based weighting only became a problem in 2015-17, with the significant increase in the number of Minimum Fee-only CSOs.
JSC Reply at 1-2, 6-7.
iii. The Judges’ Analysis
JSC Wrongly Maintains that the Judges Erred by Inconsistently Applying the Bortz Survey Results to the Royalties Actually Paid, Inclusive of Minimum Fee Payments, while Declining to Similarly Rely on Minimum Fee Payments when Considering the Regression Results
The Judges categorically reject JSC’s argument that they acted inconsistently, and
thus committed “clear error,” by giving less evidentiary weight to Minimum Fee royalty
payments in the regression models compared to the weight they gave to Minimum Fee
royalties in the Bortz Survey model. Indeed, as explained infra, by comparing JSC’s
rehearing argument with the hearing testimony of its economic experts and its post-

hearing filings, it is clear that it is the JSC analysis (incorrectly advanced in support of its
motion for rehearing) that is inconsistent.273
Specifically, JSC argues on rehearing that the Judges clearly erred because their
“use of royalty-based weighting improperly skews the survey calculations by giving
inordinate weight to minimum fee systems” which, JSC maintains, is inconsistent with
the Judges’ conclusion that in the regression models “decisions by minimum fee systems
during the 2015-17 period are not probative of relative market value.” JSC Motion at 4
(citing Initial Determination at 129 n.155, 134). Moreover, in this regard JSC claims that
“[t]he Initial Determination likewise discusses how both the regression and survey
methodologies changed (or should have changed) to account for the ‘dramatic increase in
the number of minimum-fee only’ systems in these years.” JSC Motion at 4-5 (emphasis
added) (citing Initial Determination at 21-22, 167 n.206).
Before proceeding to discuss the substance of this argument, the Judges take note
that JSC has misleadingly utilized the Initial Determination in the quote above from the
JSC Motion. In the Initial Determination, the Judges addressed the
Minimum Fee problem only in the context of a regression model. See Initial
Determination at 21-22, 129 n.155, 134. By contrast, when referring to the Bortz Survey,
the Judges simply recited how Bortz, not the Judges, sought to insinuate the Minimum
Fee issue into the survey approach. See Initial Determination at 167 n.206. In this
regard, the Judges note that the emphasized parenthetical quote from the JSC Motion in
the paragraph immediately above wrongly intimates that the Initial Determination
expressly discusses how “both the regression and survey methodologies … should have
changed” to address the Minimum Fee issue. JSC Motion at 4-5 (emphasis added). The

To be clear, the Judges’ analysis and findings as to this issue do not rely on PTV’s argument, noted
supra, that the testimony of the Bortz Survey witness, Mr. Trautman, in the prior 2010-13 proceeding,
precluded or diminished JSC’s ability to assert its “inconsistency” argument.
Judges in fact made no such finding in the Initial Determination regarding how the Bortz
Survey methodology should have changed.
Accordingly, the overt inconsistency that JSC suggests is set forth in the Initial
Determination simply does not exist (and as explained infra, for good reason). With the
foregoing misconstrual of the Initial Determination corrected, the Judges proceed infra to
explain the substantive error and inconsistency in JSC’s argument that the Judges erred
in their consideration of the effect of the Minimum Fee on the regression approach
compared to its non-effect on the Bortz Survey approach.
To make clear the fundamental error in JSC’s argument, it is instructive to begin
with certain first principles. The statutory scheme supplants marketplace pricing of
distantly retransmitted local programming by CSOs. Thus, the parties proffer economic
models that they claim to be sufficient to represent relative marketplace value.274 Here,
and as in prior proceedings, the Judges were presented with two starkly different types of
models – the regression model and the survey model.275 In the difference between how
these two models approach the concept of relative marketplace value lies the explanation of why
the Minimum Fee issue is a concern in the regression context, but not in the survey
context.276

The models may be supported by the testimony of industry witnesses and industry documents. Parties
who eschew formal modeling may elect to rely solely on industry-based evidence and testimony (as did
CTV through the “directional analysis” undertaken by its expert witness, Dr. Leslie Marx, for the 2015-17
period. See Marx ACWDT ¶ 83).
The existence of competing models in economic litigation is hardly uncommon. As the Judges have
previously explained: “Benchmarks, Shapley and Nash models, surveys and experiments are all models, in
that a model is a representation of something beyond itself being used as a representative of that something,
and in prompting questions of resemblance between the model … and their target systems.” Initial Ruling
after Remand at 87 n.125, in Final Determination after Remand at App. A, Phonorecords III (June 22,
2023).
As the Judges noted in the Initial Determination, the D.C. Circuit has approvingly noted that there is no
reason to require that assumptions or findings applicable to one type of economic model addressing an
issue necessarily apply to a different type of economic model attempting to address the same issue. See
Initial Determination at 48 (citing NRBNLMC v. CRB, 77 F.4th 949, 971 (D.C. Cir. 2023) (affirming the
Judges’ finding in their Web V Determination declining to apply the “opportunity cost” value in one
economic model (a Shapley Value model) to an economic model (a benchmarking model) with different
assumptions)). Of course, the assumptions in each economic model must be internally consistent. See J.
Schlefer, The Assumptions Economists Make at 29 (2012) (an economic model “provides a check on
thinking: it restricts us to at least consistent economic worlds ….”) (emphasis added).
Broadly stated, the regression approach seeks to identify value from the
expressions of the willingness-to-pay of CSOs, by analyzing their actual decisions (i.e.,
their “revealed preferences”) as to which local stations, and thus which program
categories on those stations, they decide to retransmit. See, e.g., Initial Determination at
78 (“the regressions identify market-based behavior among CSOs, in the form of revealed
preferences for different program categories, and such behavior is relevant evidence
useful for estimating relative marketplace value.”). The “value” element of this
willingness-to-pay (the CSO’s “revealed preference”) is the royalty-based value of a
minute of retransmittal of programming within the program categories. However, the
presence (indeed, the prevalence) of Minimum Fee-only CSOs complicates this form of
value analysis because such CSOs did not incur any royalty cost associated with their
specific choices. Accordingly, the Judges needed to take into account this Minimum Fee
factor in order to reasonably apply the regression approach. ID at 21 (“The Judges find
that the dramatic increase in the number of minimum fee-only CSOs … renders
regression analyses that include those CSOs less reliable and thus can be accorded only
very limited economic evidentiary weight.”).
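To make this point concrete, the following minimal sketch (with entirely hypothetical figures and category names, and not any party’s actual model) illustrates why Minimum Fee-only CSOs weaken a fee-based regression: because their royalty is a flat minimum that does not vary with the programming they choose to retransmit, including them attenuates the estimated per-minute coefficients.

```python
# Illustrative sketch only (hypothetical data and category names; not any party's actual model).
# It shows why minimum-fee-only CSOs weaken a fee-based regression: their royalty is a flat
# minimum that does not vary with the programming minutes they choose to retransmit.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # hypothetical CSOs
sports_min = rng.uniform(0, 100, n)        # minutes of "sports" programming carried (hypothetical)
other_min = rng.uniform(0, 100, n)         # minutes of "other" programming carried (hypothetical)

# Assume above-minimum-fee CSOs pay royalties that scale with what they carry
# (value per minute: 2.0 for sports, 0.5 for other), while minimum-fee-only CSOs
# pay a flat minimum fee of 50 regardless of their carriage choices.
above_min = rng.random(n) < 0.3            # roughly 30% pay above the minimum fee
royalty = np.where(above_min, 2.0 * sports_min + 0.5 * other_min, 50.0)

def ols_coeffs(mask):
    """Regress royalties on category minutes (plus an intercept) for the masked CSOs."""
    X = np.column_stack([np.ones(mask.sum()), sports_min[mask], other_min[mask]])
    beta, *_ = np.linalg.lstsq(X, royalty[mask], rcond=None)
    return beta[1], beta[2]                # per-minute coefficients for the two categories

print("above-minimum-fee CSOs only:", ols_coeffs(above_min))                      # recovers roughly (2.0, 0.5)
print("all CSOs (incl. minimum-fee-only):", ols_coeffs(np.ones(n, dtype=bool)))   # attenuated, noisier
```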
By contrast, a constant-sum survey, such as the Bortz Survey, does not seek to
estimate relative value by examining actual decision-making, in a regression or
otherwise. Rather, the Bortz Survey seeks to estimate relative value by examining
hypothetical decision-making by presumably informed CSO employees, who are asked to
allocate a fixed but unspecified monetary budget by percentages across identified
program categories, totaling 100%. See JSC PFF ¶ 296 (and record citations therein).
But at no point in the survey are the respondents asked to consider whether the relative
values are affected by the CSO’s payment of the Minimum Fee for any programming.277

Also, there is no record evidence that survey respondents took into account – or even knew – whether
their CSO employer had paid the Minimum Fee or the Base Fee for such programming.
Rather, the Bortz Survey is an attitudinal survey, asking respondents to state the relative
values they would hypothetically assign to some program categories (but not to PTV-only
and CCG-only categories as discussed elsewhere in this order and in the Initial
Determination), whereas the regressions seek to reveal relative value based on how much
CSOs in fact paid in royalties to retransmit programs within all the program categories.
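For illustration only, the following sketch (with hypothetical respondents and percentages, and not the actual Bortz instrument or any weighting applied to it) shows the mechanics of a constant-sum question: each respondent divides a fixed 100% budget across program categories, and relative value is read from the stated allocations rather than from any royalty actually paid.

```python
# Illustrative sketch only (hypothetical respondents and percentages; not the actual Bortz
# Survey instrument or its weighting). A constant-sum question asks each respondent to
# divide a fixed budget of 100% across program categories; relative value is then read
# off the allocated shares rather than from any royalty actually paid.
responses = [
    # each hypothetical respondent's allocation must total 100%
    {"sports": 40, "movies and series": 35, "devotional": 5, "canadian": 10, "public television": 10},
    {"sports": 30, "movies and series": 45, "devotional": 10, "canadian": 5, "public television": 10},
    {"sports": 50, "movies and series": 25, "devotional": 5, "canadian": 10, "public television": 10},
]

assert all(abs(sum(r.values()) - 100) < 1e-9 for r in responses)  # constant-sum constraint

categories = responses[0].keys()
avg_share = {c: sum(r[c] for r in responses) / len(responses) for c in categories}
print(avg_share)  # simple unweighted mean of the hypothetical allocations
```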
Indeed, the JSC’s own expert economic witnesses dismissed the very idea that
any royalty-based valuation could be probative, characterizing all statutory royalty
amounts as “uninformative” and as mere “artifacts” of the statutory system. Dr. Asker,
on behalf of JSC, testified in this regard:
[F]ollowing the WGNA conversion, the experts’ price proxies, which are
based on base rate (plus 3.75%) royalty fees and therefore ignore the
minimum fee, were uninformative measures of the incremental cost cable
system operators paid for distant signal content. … As a result, these price
proxies became biased ….
...
[V]ariation introduced solely due to this feature of the base rate (plus 3.75%)
royalty fee calculation is an artifact of the computation of the fee ….”
Asker WRT ¶¶ 58, 98 (emphasis added).
In like manner, another JSC economic expert witness, Dr. Majure, testified that all
the regression models merely reflect “the statutory relationship [between DSEs, revenues,
and royalties owed] parrot[ing] back the relative values of distant signals set by
Congress.” Majure WRT ¶ 8.278
Importantly for the issue at hand, Dr. Majure explicitly opined that the Bortz
Survey did not share this defect:
By contrast with the regression models …, the Bortz [S]urvey method does
not have the same problem of a disconnect between the data and the
conceptual model that is necessary to interpret the data within a
regression. … [T]he survey does not rely on the notion that a minute of each
type of content has a specific incremental value. The Bortz survey only
Dr. Majure offered the same opinion with regard to the 3.75% Fund as he did regarding the Basic Fund,
testifying that “the 3.75 royalty fee … after 2014 … explains only the Congressionally-mandated
framework ….” Majure WRT ¶ 80.
requires that respondents have some experience with different types of
content available on distant signals, so that they will have formed
preferences for these types of content. … The Bortz survey thus connects
directly to actual market value.
Majure WRT ¶¶ 59, 61 (emphasis added).
The economic import of this point was emphasized in further testimony by Dr.
Majure, explaining this distinction between the regression model and the survey model:
[T]he scarcity of valid observations for the regression method due to the
increase, post-WGNA conversion, in CSOs carrying fewer signals than they
could without exceeding the minimum royalty fee … results in a significant
gap between a CSO’s distant carriage decisions and how much that system
paid in royalties. This creates an issue peculiar to the regression method
[which] depends on statistical inferences that are more powerful and reliable
when applied to more independent observations that are derived from the
same underlying model of economic choices. Unlike the regression, which
depends critically on the relationship between these measures to identify
the relative values of content, the survey does not … because the survey
does not rely on the incremental cost of the content to identify value.
Whether a survey respondent carried enough distant signals to be above or
below the minimum royalty, their response can address equally well how
that CSO would apportion a fixed sum between the content types that it did
carry.
A survey can reveal CSO preferences reliably because the survey does not
rely upon inference but instead directly poses the relative value question to
the buyers in the hypothetical market.
***
In summary, the survey method has the advantage of not suffering from any
of the problems that make the regression method unreliable in the wake of
WGNA’s conversion.
Majure WDT ¶¶ 129-130, 133 (emphases added).
This expert testimony distinguishing the regression and survey approaches was
foundational to JSC’s economic theory of the case. See JSC PFF ¶ 236 (quoting Majure
WDT ¶ 130 to distinguish the survey model from the regression model because the
former model “reveal[s] CSO preferences reliably because the survey does not rely upon
inference but instead directly poses the relative value question to the buyers in the
hypothetical market.”); JSC Post-Hearing Brief at 3 (“Unlike the Bortz Survey, the fee-based
regressions are entirely incapable of estimating relative value in the post-WGNA
world predominated by minimum fee systems.”) (emphasis added).
Likewise, in its Post-Hearing Reply Brief (responding to Program Suppliers’
argument), JSC expounded upon this fundamental difference between the regression
approach and the survey approach to the Minimum Fee issue:
Program Suppliers mistakenly conflate the manner in which the
Bortz Surveys and the fee-based regressions treat Minimum Fee CSOs,
arguing that “like the regressions offered in this case, the Bortz Survey
considers the stated preferences of survey respondents whose systems pay
only the Minimum Fee—in this way, the Bortz Survey considers Minimum
Fee systems the same way as the regressions do.” Program Suppliers
misunderstand a fundamental difference between the Bortz Surveys and the
regressions.
The fee-based regressions attempt to estimate relative marketplace
value by associating minutes of programming with calculated royalty fees.
For Minimum Fee CSOs, this presents an insurmountable issue, because
Minimum Fee CSOs do not pay their calculated royalty fees but instead face
an incremental royalty cost of $0 for the distant signals they choose to
retransmit. In contrast, the Bortz Surveys do not rely upon a nominal
royalty fee calculation to draw inferences about CSO preferences. Instead,
the Bortz Surveys avoid the problem … by directly asking knowledgeable
CSO executives to assign relative values to the distant signal programming
they carry.
JSC Post-Hearing Reply Brief at 26 (footnotes omitted) (emphases added).
And yet, having repeatedly claimed that the Bortz Survey avoided the alleged
analytical vice of associating the statutory nature of the royalties with relative
marketplace value, JSC nonetheless now seeks to convert that vice into virtue, by seeking
to justify its use of a different survey-weighting approach because of the problem of the
Minimum Fee. Not only is that argument self-contradictory, as explained supra, it is also
lacking in substantive merit regarding the analysis of economic models, as discussed
infra.279 In more general economic terms, the regression approach and the survey
approach each considers relative marketplace value from different modeling perspectives.

PTV also argues that JSC’s experts “mined” this and other “weighting scheme[s]” to “increase[] JSC’s
allocation.” PTV Response at 3. In rejecting this rehearing argument, the Judges need not and do not
inquire into the motives of JSC’s experts.
The Bortz Survey approach does not seek to define value a priori – rather it surveys
industry employees who, in response to Question 4 of the Bortz Survey, assign their
relative value to the several program categories identified by the Bortz interviewer. That
is, the respondent may, for example, be focused on demand-side concepts regarding
subscriber growth or retention, or supply-side issues such as the hypothetical cost of
acquiring the signals necessary to obtain the retransmitted programming, or both. But the
reasons why survey respondents assign particular values are neither sought nor known by
Bortz. In particular, the Bortz Survey respondents are not asked to address any potential
impact on value arising from the statutory nature of the royalties actually paid, whether
via the Minimum Fee, the Base Fee, the 3.75% fee, or otherwise.
Thus, for the Judges to make any adjustments to the Bortz Survey results based on
how the respondents may or may not have incorporated concepts relating to the statutory
royalty framework would be untenable, because the underlying economic reasons lurking
in the minds of the respondents are not in the record.
Moreover, the thought processes of the survey respondents are irrelevant to what
constitutes the probative value according to JSC and the Bortz Survey. That is, it is the
status of the survey respondents as knowledgeable industry participants that makes the
Bortz Survey responses probative and allows the Judges to give it an appropriate
evidentiary weight. In this regard, the survey approach shares a characteristic of the
benchmarking approach used by the Judges in their ratemaking cases, in which the
underlying economic considerations of market participants are deemed to have been
“baked-in” to the decisions of licensors and licensees, and their subjective reasons for
establishing value are not relevant. See Web IV Determination, 86 FR 26316, 26326
(May 2, 2016) (“The Judges hold in this determination, as they have held consistently in
the past, that the use of benchmarks ‘‘bakes-in’’ the contracting parties’
expectations ….”), aff’d SoundExchange, Inc. v. Copyright Royalty Bd., 904 F.3d 41
(D.C. Cir. 2018). So understood, any connection between the Bortz Survey results and the
statutory fees is both unknowable and irrelevant.
By contrast, as noted supra, the regression approach is based on an a priori
assumption as to what constitutes value in this proceeding, positing that a CSO’s relative
valuation of the various program categories can be derived from their actual decision-making, i.e., their revealed preferences, based upon the royalties associated with a minute
of programming in each category. Thus, for the regression approach, the Judges found
(rejecting the arguments of the regression proponents) that the existence of the Minimum
Fee royalties was a matter to be addressed, because the evidentiary strength of this a
priori assumption is compromised by the presence of the royalties paid by Minimum Fee-only CSOs, which are not associated with the cost of any programming (absent particular
circumstances necessitating adjustments (such as discussed in the Initial Determination
regarding PTV and CCG programming)).
iv. Conclusion
Simply put, whereas the value proposition in the regression model lies in the
actual retransmission decisions by CSOs, the value proposition in the Bortz Survey
approach lies in the responses to the survey instrument. Properly understood, the
evidentiary weight of the Bortz Survey approach, compared to the regression modeling,
lies in the fact that the survey model circumvents what JSC and its expert witnesses
characterize as the economic irrelevancy of the Minimum Fee and other elements of the
statutory royalty formula set forth in 17 U.S.C. 111. That is, rather than rely on what
they claim to be economic “artifacts,” JSC and Bortz rely instead on the survey responses
of CSO representatives as a practical way to value and allocate royalties that are paid
according to statutory fiat rather than by revealed preference. However, by attempting to
inject concerns regarding the Minimum Fee that apply to regression analyses – through
its misconceived plea for consistency – JSC actually reveals its inconsistent

understanding of its own survey model,280 converting it into a tool that, so to speak, is
neither fish nor fowl. The Judges appropriately declined to make this analytical error.
For the foregoing reasons, the Judges find no inconsistency in their decision to
address the Minimum Fee issue in connection with the regression model, but not with
regard to the Bortz Survey model. Indeed, as explained supra, the
inconsistency revealed by JSC’s rehearing argument lies in JSC’s own willingness to
abandon its experts’ testimonies regarding the fundamental economic modeling
differences between the regression and survey approaches, and to pollute the survey
approach with irrelevant aspects of the statutory fee.281
Accordingly, the Judges’ decisions in these regards do not constitute error – let
alone “clear error,” or otherwise serve as a basis for granting rehearing.282

As noted supra, an economic model’s assumptions need to be internally consistent. See Schlefer, supra.

One might question why the Judges criticize JSC for making an inconsistent argument, when the Judges
used Dr. Tyler’s above-Minimum Fee data but found two instances in which it was necessary and
appropriate to utilize his full set of calculated Base Fee royalty data. But the Judges did not engage in an
inconsistent analysis. Rather, there were unique fact-based reasons, as described in this Order and in the
Initial Determination, which made the above-Minimum Fee data an incomplete measure of regression-based value, to an extent, for PTV and CCG. The needed adjustments that followed did not demonstrate
inconsistency, but rather a careful parsing of the record evidence. By contrast, JSC’s position is
inconsistent at the conceptual level – it first argues (as explained supra) that the statutory royalty fee
structure does not provide evidence of value and that the survey method is the appropriate valuation tool –
only to then alter course and adjust the royalty shares by relying on that very statutory fee structure it
discredits as a value metric.
Alternately stated, it would be contrary to the evidence for the Judges to ignore the divergent marketplace
impact of the WGNA conversion on Minimum Fee royalty payments. In this regard, the Judges are
mindful of the aphorism that a “foolish consistency is the hobgoblin of little minds.” See generally R.W.
Emerson, SELF-RELIANCE AND OTHER ESSAYS 24 (Dover unabridged ed. 1993) (emphasis added).
Further, even if JSC’s approach somehow could be construed, like the Judges’ approach, as not internally
inconsistent, it was hardly error, let alone “clear error,” for the Judges to exercise their fact-finding duty
and their discretion by adopting the approach they found reflects the record evidence and the relative
marketplace value standard – and reject one (JSC’s approach) they found to be logically questionable and
insufficiently probative of marketplace value. (That is, even if the general “logic” of JSC’s argument were
correct, the Judges were under no duty to adopt it.)
As stated in footnote 16, supra, the Judges’ foregoing analysis indicates why JSC’s use of the word
“weighting” can be misleading in the context of its shift away from its former weighting method. One
common meaning of “weighting” is an “allowance or adjustment made in order to … compensate for a
distorting factor.” https://en.bab.la/dictionary/english/weighting. (For example, weighting is often used to
correct for perceived inaccuracies in “unweighted” values – as when an election survey has failed to poll a
representative sample of voters from a political party or other sub-set of the population of voters.) Here,
JSC/Bortz are not changing the weighting of the survey results to correct for a distorting factor; rather, the factor
they invoke is, in their own experts’ opinions, not only non-distorting, but wholly irrelevant (as discussed in detail, supra). That is,
JSC and its expert economic witnesses acknowledge that the Bortz Survey methodology, unlike the
regression modeling, is not distorted by the nature of the statutory formula for royalty fees.
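To illustrate the corrective sense of “weighting” described in this footnote, consider the following sketch (hypothetical poll figures only): when a sample under-represents a sub-group, each response can be re-weighted by the ratio of the group’s population share to its sample share, so that the weighted result compensates for the distorting factor.

```python
# Illustrative sketch only (hypothetical poll figures). "Weighting" in the corrective sense
# described above: if a sample under-represents a sub-group, each response is re-weighted by
# (population share / sample share) so the weighted result compensates for the distortion.
population_share = {"party_a": 0.50, "party_b": 0.50}   # assumed true composition of voters
sample = [("party_a", 1)] * 70 + [("party_b", 0)] * 30  # skewed sample: 70/30 instead of 50/50

sample_share = {g: sum(1 for s, _ in sample if s == g) / len(sample) for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted_mean = sum(weights[g] * v for g, v in sample) / sum(weights[g] for g, _ in sample)
print(round(weighted_mean, 3))  # 0.5 after correction, versus 0.7 unweighted
```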
d. Whether the Judges Adopted a Version of the Tyler Model that No Witness
Endorsed for the 2015-2017 Time Period, and Whether It Is at Odds with the
Record Evidence

i. The Parties’ Filings

1. The JSC Motion
In its Motion for Rehearing regarding the Judges’ adoption of the Tyler Model
and the adjustments thereto, JSC argues the following points:
1. The Initial Determination adopts a version of the Tyler Model that no witness
endorsed for the 2015-17 time period. JSC Motion at 8-9.
2. The other experts opined that the Tyler Model merely “parroted” the statutory
formula. JSC Motion at 9.
3. The Initial Determination makes “arbitrary” adjustments to the Judges’
adopted Tyler Model contrary to record evidence. JSC Motion at 9-10.
4. The Initial Determination allocates shares to PTV and CCG that are beyond
“reasonable limits” because for PTV they are greater than the unadjusted
levels, and, for CCG, they are greater than levels from prior years. JSC
Motion at 10.
5. The Initial Determination fails to credit allegedly unrebutted testimony of
industry fact witnesses inconsistent with the allocations made by the Judges to
PTV and CCG. JSC Motion at 10.
2. The Adverse Parties’ Responses283

a. The Joint Response

CTV did not file a response to the JSC Motion for Rehearing or otherwise oppose it in any other filing.

In their Joint Response, CCG, Program Suppliers, and SDC respond as follows:
1. JSC does not satisfy any standard for rehearing because it is merely raising points as to which it did
not meet its burden of persuasion. Joint Response at 3-4.
2. JSC’s attempt to litigate issues already considered or which it failed to consider constitutes an
improper attempt to obtain the so-called “second bite at the apple” that the Judges reject as a proper
basis for rehearing. Joint Response at 4.
3. The Judges’ adoption of and adjustment to a version of the Tyler Model based on record evidence is
consistent with the D.C. Circuit’s prior ruling that the Judges are “not strictly limited to choosing
from among proposals set forth by the parties,” but, like agencies in general, “have authority to
modify proposals set forth by the parties, or to suggest models of their own.” Joint Response at 4
n.2; see also id. at 6.
4. JSC fails to note that the higher shares for PTV and CCG were consistent with the regression
evidence on which the Judges relied, and, by contrast, JSC asks the Judges instead to rely fully on
the Bortz Survey evidence, an argument which the Judges expressly considered and rejected. Joint
Response at 6.
b. The PTV Response
In its Response, PTV argues as follows:
1. JSC correctly asserts that the record contains no evidence to support the
Judges’ reliance on the Tyler above-Minimum Fee Model.
2. The record contains “minimal” yet “disputed” evidence – i.e., the
“conventional McLaughlin-adjusted Survey” and the Tyler Model inclusive of
Minimum Fee-paying CSOs – to support a higher PTV share than determined
by the Judges.

3. JSC incorrectly maintains that there is no record evidence to support what JSC
characterizes as the “large shares” awarded to PTV in the Initial
Determination for the 2015–17 period.
PTV Response at 1-2, 9-10.
JSC’s Reply contains the following points:
1. JSC identifies the “clear error” standard as its specific standard for seeking
rehearing. JSC Reply at 2.
2. JSC’s arguments in its Motion regarding alleged methodological errors cannot
be construed as a mere “rehashing” of arguments previously considered at the
hearing and in the Initial Determination (a/k/a seeking a “second bite at the
apple”) because the above-Minimum Fee version of the Tyler Model was not
“endorsed” by any witness. JSC Reply at 2, 9.
3. JSC minimizes the importance of its own motion argument that cited industry
executive testimony supporting its request for rehearing. Rather, JSC states
in its Reply that this testimony is not the “heart” of its argument, but only
reveals that the differences between the regression results and the cited
industry witness testimonies “are so at odds” as to indicate problems with the
regression evidence on which the Judges relied. JSC Reply at 9.
ii. The Judges’ Analysis and Conclusion
1. The Judges’ Adoption of a Version of the Tyler Model in the Record
Does Not Warrant Rehearing

a. The Judges Did Not Err by Adopting the Above-Minimum Fee
Tyler Model, Let Alone Commit “Clear Error.”

JSC maintains that the Judges wrongly adopted the above-Minimum Fee analysis
undertaken by Program Suppliers’ expert economic witness, Dr. Tyler. As recounted in
detail below, the Judges explained in the Initial Determination why regression modeling
for 2015-17 that relied only on above-Minimum Fee CSOs was more useful and why, by
contrast, modeling that relied on the Base Fees calculated by the subscriber groups of
CSOs who actually paid only the Minimum Fee was of limited usefulness (as when used
to adjust for economic value from the regressions uncaptured by the above-Minimum Fee
modeling). See Initial Determination at 21 (“The Judges find that the dramatic increase
in the number of minimum fee-only CSOs … renders regression analyses that include
those CSOs less reliable and thus can be accorded only very limited economic
evidentiary weight [and] the Judges accord significantly more evidentiary weight to
regression modeling that focuses only on the CSOs that actually revealed their
preferences by willingly paying above the minimum fee, i.e., at the base fee level.”); id.
at 142-144 (noting particular regression adjustments284 to economic value necessitated by
the evidence).
The Judges further recognized that, despite the evidentiary usefulness of the
royalties paid by the above-Minimum Fee cohort in this proceeding, that group generated
a smaller portion of the CSO market than in the prior (2010-13) allocation proceeding.
Accordingly, the Judges did not accord this regression approach primary weight vis-à-vis
the results of the Bortz Survey, as they had in that prior proceeding. See Initial
Determination at 147 (“[T]he Judges are not giving any primacy to the regression
evidence in this proceeding, given how the changes in the retransmission sector after the
WGNA conversion have affected the available data.”); id. at 197 (“[T]he Judges accord
evidentiary weight to the Bortz Survey, with the McLaughlin Adjustment – relatively
equivalent with the weight given to the regression analysis …. [T]he Judges find that a
These are the three adjustments (Adjustments A through C) in the Initial Determination.

synthesis of regression and survey results is necessary to arrive at the required
allocations.”).
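Purely to illustrate the arithmetic of such a synthesis (with hypothetical shares, not the Judges’ actual figures or method), the following sketch averages regression-based and survey-based category shares with equal weight and renormalizes the result to 100%.

```python
# Illustrative arithmetic only (hypothetical shares; not the Judges' actual figures or method).
# It shows one simple way two strands of evidence can be synthesized when neither is given
# primacy: average the regression-based and survey-based shares with equal weight, then
# renormalize so the combined shares still sum to 100%.
regression_shares = {"A": 10.0, "B": 30.0, "C": 25.0, "D": 35.0}   # hypothetical
survey_shares     = {"A": 40.0, "B": 20.0, "C": 15.0, "D": 25.0}   # hypothetical

combined = {k: 0.5 * regression_shares[k] + 0.5 * survey_shares[k] for k in regression_shares}
total = sum(combined.values())
combined = {k: 100.0 * v / total for k, v in combined.items()}     # renormalize to 100%
print(combined)
```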
Turning to a more granular review, the record is replete with evidence, argument,
and judicial colloquy regarding the use of above-Minimum Fee evidence as a building
block for the ascertainment of relative value. See Initial Determination at 12-13. There,
the Judges relied on the testimony of Dr. Tyler, who expressly found “merit” in a
“version of the model that includes only CSOs paying above the minimum fee [which]
presents with the “highest degree of confidence” the CSO tradeoffs between different
stations and categories of minutes.” Id. at 12-13 (quoting Tyler ACWDT ¶ 155)
(emphasis added). As a general matter, when the Judges have decided to rely, as here, on
the specific opinion testimony of an expert whom they have credited and who himself has
the “highest degree of confidence” in that specific opinion, under no standard could the
Judges’ ruling in that regard be subject to rehearing.
Moreover, further support exists in the record for the Judges’ adoption of this
above-Minimum Fee modeling. See id. at 15 (“for these CSOs which CTV accurately
describes as ‘above-capacity’ … paying above the minimum fee, the base fee royalties
reported by their subscriber groups are their actual royalty payments, revealing the CSO’s
perceived value of the distantly retransmitted stations and their constituent programs.”
(citing Bennett WRT ¶ 15 (a CTV economic expert)); CTV PFF ¶ 158 (For above
capacity CSOs, “the reported [Subscriber Group] royalties reflected the amount of
royalties actually paid … [by CSOs] [that] decided to incur an increased marginal royalty
cost[,] … revealing the CSO’s perceived value of the distantly retransmitted stations.”).
Additionally, the Judges were persuaded by the following supportive argument of
the SDC (no fan of the regression approach, to say the least) regarding the Tyler Model
as applied to above-Minimum Fee-paying CSOs:
Dr. Tyler, whose rate-based methodology is the most explicitly based on a
“minimum willingness to pay” theory … offers a sensitivity test [the above-Minimum
Fee modeling] of this issue. Tyler [ACWDT] ¶ 156. … Dr.
Tyler’s sensitivity test might provide some rough guidance as to the
potential direction and magnitude of bias introduced by the presence of
minimum fees. SDC PFF ¶ 156. See also 4/19/23 Tr. 5473 (SDC’s
counsel’s statement to Dr. Tyler on cross-examination) (“I do want to point
out to your credit that your first sensitivity test tries to address this issue.”).
This argument is generally consistent with Dr. Tyler’s response to SDC
counsel on this point, agreeing that it was important to be “cognizant” of
this minimum fee issue and that it be “considered and addressed” because
there is “reasonable disagreement about how to handle the issue.” Id. at
5473-74. … [T]he Judges find …. the variant of the Tyler Model in Figure
6.3 of the Tyler ACWDT offers the Judges’ “rough guidance” in the
allocation of shares.
Initial Determination at 21-22 (quoting SDC and its counsel) (emphasis added).
Additionally, the Judges carefully considered this issue at the hearing, questioning
witnesses from the bench. See 4/13/23 Tr. 4719 (Bennett) (CTV economic expert
responding to Judge Strickler that “the idea that you're relating carriage with the cost or
willingness to pay for that carriage, I think, is an entirely reasonable modeling approach
where the data exists to link the carriage to … those payments. And that is certainly true
where you have above-minimum-fee-paying systems for which the incremental cost is
apparent …”) (emphasis added); 4/18/23 Tr. 5125 (George) (CCG expert Dr. Lisa George
responding to Judge Ruwe that “the royalty payments are not exact measures of
incremental cost. They are more so when we're above minimum fees.”) (emphasis
added); see also 4/19 Tr. 5503 (Tyler) (agreeing on cross-examination that “CSOs paying
above the minimum fee [is] where you have economic decision-making because the costs
that they're paying for each of those distant signals are actual binding costs ….”).
The Judges further noted at length multiple perspectives from which an above-Minimum Fee cohort of CSOs can be viewed:
This cohort of CSOs can properly be viewed as essentially the only CSOs
who provide revealed preference information as to the variation in relative
values among the program categories (in contrast with CSOs who did not
retransmit any distant local stations or those with “excess capacity”), which
in that sense is a cohort unto itself, rather than a sub-sample. On the other
hand, this cohort can also reasonably be viewed as but a small sample of all
the CSOs, which reduces the evidentiary weight of their preferences. Both
perspectives on the revealed preferences of these above-minimum fee paying
CSOs are properly considered in weighting the various strands of
useful evidence in order to allocate royalty shares in this proceeding.
...
[I]t is misleading, to say the least, to categorize the base-fee-paying CSOs
as merely a small cohort of the larger population of CSOs, when they are
differentiated by the key marker for section 111 purposes: whether they
assign a relative value to the retransmittals and thus relative values to the
retransmitted programs. The Judges find it more accurate and appropriate
to consider the base-fee-paying CSOs essentially as a separate cohort of
CSOs whose decision-making is pertinent to a regression analysis in this
statutory context.
...
Colloquially, the issue may be characterized as whether the Judges
should let the perfect be the enemy of the good. Here, the “perfect” fact
pattern would be where all or most of the data is generated by CSOs paying
above the Minimum Fee. That is not the factual context here. But there is
“good” evidence from the CSOs who did retransmit enough programming
to trigger the base fees of their subscriber groups, and the Judges do not
ignore that data.
Accordingly, the Judges will give due weight to the minority of
CSOs that, in the 2015-2017 period, paid above the minimum fee and thus
revealed their preferences by paying an additional royalty in order to
retransmit one or more additional stations.
Initial Determination at 100, 130-131 (emphasis added).
The Judges made it clear that they found important economic evidence in the
above-Minimum Fee version of the Tyler Model:
[F]or those CSOs transmitting above 1.0 DSE, they have economic
decisions to make regarding the mix of programming they will transmit via
their signal decisions. Given the economics and reality of this
retransmission market, as described above, only then will the relative value
of program categories be of material economic importance. It is at this stage
that the Tyler Model generates information as to relative value, through the
Tyler model’s coefficients.
Initial Determination at 136.
Relying on this abundant record, the Judges held as follows:
[T]he Judges rely on the Tyler Model, as Dr. Tyler applied his model to the
CSOs paying above the minimum fee…. [A]bove-minimum fee paying
CSOs[’] channel selections/programming preferences are … probative and
useful, even if less so than in the 2010-2013 Determination because of the
reduction in the number of such CSOs and in the percentage of royalties
they represent.”

Initial Determination at 21, 66.
But, as indicated supra, the Judges did not ignore the fact that the above-Minimum Fee CSO cohort was substantially smaller than identified in the 2010-13
Determination. Specifically, the Judges stated:
[H]ere the Judges are considering the regression evidence and the Bortz
Survey evidence as essentially equally weighted and useful (but not
flawless) evidence …. [T]he reconciliation will be different than in the
2010-13 proceeding, because the Judges are not giving any primacy to the
regression evidence in this proceeding, given how the changes in the
retransmission sector after the WGNA conversion have affected the
available data.
Initial Determination at 147.
To be sure, in its Motion, JSC disagrees with the Judges’ adoption of the above-Minimum Fee modeling undertaken by Dr. Tyler. But JSC made its disagreements
known at the hearing stage of this proceeding, and supported those disagreements with
expert testimony. See Initial Determination at 19-20.
In particular, one criticism, as described by the Judges, was levied by one of
JSC’s expert economic witnesses, Dr. Asker, who maintained that it was improper to
“use … the base fee as a price proxy even for CSOs paying above the minimum fee.” Id.
at 19.285 The Judges declined to adopt Dr. Asker’s analysis because: (1) it amounted to
mere “blackboard economics,”286 in that there was “no evidence” that any CSO actually
engages in the “tunnel-vision sort of hyperrationality” described by Dr. Asker; and (2) it
was at odds with the testimony of a cable industry expert witness, Sue Ann Hamilton,
who stated, in testimony credited by the Judges, that “CSOs do not devote much attention
to issues regarding distant retransmittals.” Id. at 22 & n.29.

More specifically, Dr. Asker opined that a rational CSO would calculate the actual “price” of an above-Minimum Fee retransmission of a local station as the difference between: “(1) the total fees that would
bind, which may have been the minimum fee, without retransmitting that local station, and (2) the total
base fees that would bind (the minimum fee having been exceeded) if that local station was distantly
retransmitted.” Initial Determination at 20.
See id. at 22 n.29 for the Judges’ application of the economic criticism of unrealistic “blackboard
economics.”
As a second criticism regarding this issue, JSC also relied – at the hearing stage
of the proceeding – on what its statistical expert, Mr. Harvey, opined was the lack of
“statistical significance” in Dr. Tyler’s above-Minimum Fee modeling. See JSC RPFF ¶¶
29-30; Harvey WRT ¶¶ 45-46 & tbl.10287 (More specifically, JSC and Mr. Harvey
maintained that Dr. Tyler’s above-Minimum Fee modeling “failed to obtain statistically
significant results for JSC minutes in 2015, 2016 and 2017 ….”); see also JSC Post-Hearing Brief at 27; Harvey WRT ¶¶ 45-46.
In the Initial Determination, the Judges explained in detail why they disagreed,
finding that the above-Minimum Fee Tyler Model was statistically sufficient to carry the
level of evidentiary weight the Judges accorded to that model. See Initial Determination
at 144-148. Accordingly, although JSC may disagree with the Judges’ reasoning as to this
issue (even though JSC does not in fact address the Judges’ reasoning in its Motion
seeking rehearing), its disagreement does not remotely suggest that rehearing is
warranted as to this issue.
In its present Motion seeking rehearing, JSC makes a further criticism of the
Judges’ reliance on the above-Minimum Fee Tyler Model. Specifically, JSC relies on Dr.
Tyler’s recommendation at the hearing that the Judges rely on his preferred model in
which he applies all the Base Fees calculated by the Subscriber Groups within CSOs,
including those for whom the Minimum Fee would bind. But JSC’s present post-hearing
reliance on Dr. Tyler’s preference is seriously misleading.

JSC premises its argument on the fact that far fewer CSOs paid royalties at above-Minimum Fee levels
in the years 2015-17 than in the pre-WGNA conversion period of 2010-2014 (which straddles this and the
prior allocation proceeding). See Initial Determination at 18-20. As explained in the Initial Determination,
and recounted elsewhere in this Order, the Judges did not dispute this point, and therefore accorded Dr.
Tyler’s above-Minimum Fee results less evidentiary weight than when more CSOs paid above-Minimum
Fee royalties, but they declined to adopt JSC’s argument that the Judges therefore should give zero weight
to the evidence of CSO decision-making by CSOs that did pay above-Minimum Fee royalties. Id. at 131
(“there is ‘good’ evidence from the CSOs who did retransmit enough programming to trigger the base fees
of their subscriber groups, and the Judges do not ignore that data.
Accordingly, the Judges will give due weight to the minority of CSOs that, in the 2015-2017 period, paid
above the Minimum Fee and thus revealed their preferences by paying an additional royalty in order to
retransmit one or more additional stations.”).

Although Dr. Tyler preferred one of his models over another, his preference does
not dictate which of his analyses the Judges may credit. Here, the Judges declined to
defer to his preference because regression models that included the royalty payments of
CSOs paying only the Minimum Fee were less useful in reflecting economic decision-making (an argument advanced by JSC and other parties). Instead, the Judges relied
heavily on the Tyler Model based on only above-Minimum Fee paying CSOs, for the
reasons explained supra, as supported by abundant aspects of the record evidence. Initial
Determination at 21 (“The Judges find that the dramatic increase in the number of
minimum fee-only CSOs (i.e., those with no distant retransmittals and those with some
distant retransmittals but with ‘excess capacity’) renders regression analyses that include
those CSOs less reliable and thus can be accorded only very limited economic
evidentiary weight. Moreover, the Judges accord significantly more evidentiary weight to
regression modeling that focuses only on the CSOs that actually revealed their
preferences by willingly paying above the minimum fee, i.e., at the base fee level.”).
JSC also overplays its hand. Dr. Tyler did not maintain that his above-Minimum
Fee modeling lacked probative value. Quite the contrary, he testified (as noted supra)
that his above-Minimum Fee modeling showed, with the “highest degree of confidence,”
actual economic tradeoffs made by CSOs, even though he preferred his model inclusive
of the Minimum Fee-paying CSOs. Initial Determination at 13 (quoting Tyler ACWDT ¶
155).
Moreover, as a general matter, there is no doubt that the Judges may give greater
weight to evidence that the proffering witnesses recommend should have less weight.
Indeed, such an expert’s disagreement in this regard ultimately is of little value, as it
intrudes upon the Judges’ exercise of their core judicial function to weigh evidence, and,
for present purposes, cannot support a claim for rehearing under any of the available
standards.

In a related criticism, JSC maintains that the Judges wrongly adopted the above-Minimum Fee Tyler Model because other experts supported their own models and
approaches over the adoption of any version of Dr. Tyler’s modeling. Motion at 9.288
But again, because one of the Judges’ core duties is to weigh competing testimony,
including expert testimony, their decision to adopt an opinion proffered by one expert
which clashes with opinions of others is certainly not ipso facto erroneous.
More broadly, the Judges are not locked into the recommendations of the parties
and the experts. This statutory process is not like “final offer” arbitration. As noted by
the Joint Respondents, the D.C. Circuit has held that the Judges are “not strictly limited to
choosing from among proposals set forth by the parties,” but, like agencies in general,
“have authority to modify proposals set forth by the parties, or to suggest models of their
own.” Joint Response at 4 n.2; see also id. at 6; see also Johnson v. Copyright Royalty
Bd., 969 F.3d 363, 381-82 (D.C. Cir. 2020) (citing SoundExchange, Inc. v. Copyright
Royalty Bd., 904 F.3d 41, 50-51, 57 (D.C. Cir. 2018); Association of American
Publishers, Inc. v. Governors of USPS, 485 F.2d 768, 773 (D.C. Cir. 1973)).
b. JSC is Improperly Seeking a “Second Bite at
the Apple” by Asking to Submit Additional
Evidence Regarding Dr. Tyler’s Above-Minimum Fee Model.
As discussed supra, JSC submitted testimony from two expert witnesses, Dr.
Asker, an economist, and Mr. Harvey, a statistician, in unsuccessful attempts to
undermine Dr. Tyler’s above-Minimum Fee modeling. Thus, this issue has already been
considered and, as Joint Respondents assert, JSC cannot obtain rehearing to introduce

Imagine that – the other experts preferred their own models over another expert’s opinion: Quelle
surprise.
further evidence that JSC “could have submitted at the hearing, but did not,” and as to
which JSC “did not meet their burden of persuasion.” Joint Response at 3-4.
Alternately stated, the JSC Motion fails to satisfy the “negative” standard for
rehearing noted earlier in this order – a demonstration that the movant is not seeking the
“second bite at the apple” that the Judges have ruled is insufficient to support a request
for rehearing.
2. The Judges’ Adjustments to the Version of the Tyler Model They
Adopted Do Not Support JSC’s Motion for Rehearing

a. Introduction

JSC also argues that rehearing is warranted because the Judges made two
“adjustments” via the Initial Determination that were improper.289 JSC’s argument is
deficient for several reasons. At a high level, JSC simply ignores the Judges’
explanations in the Initial Determination for why the above-Minimum Fee version of the
Tyler Model – albeit a highly useful lens for broadly identifying relative value –
generated certain results that required the Judges to make relative value adjustments for
CCG and PTV programming. It is quite simple, but also simply wrong, for JSC to argue
that the Judges erred in their reasoning while omitting any reference to the Judges’ actual
reasoning.
To highlight the importance of these omissions, the Judges recapitulate the
reasoning in the Initial Determination which JSC ignores.

JSC’s “adjustment” argument comes in two varieties. First, JSC objects to “Adjustment C” in the Initial
Determination which increased PTV shares. Second, JSC objects to the adjustment of the shares allocated
by the Initial Determination to CCG and PTV for 2015-17, in comparison to their share percentages in the
prior years of 2010-13 (in the prior allocation proceeding) and 2014 (in this proceeding). JSC does not
object to “Adjustment A” in this proceeding that lowered CCG’s allocation share, or to “Adjustment B” in
this proceeding that lowered PTV’s share. Alternately stated, JSC claims error by the Judges in the
adjustments that reduced its royalty allocation, but asserts no error in adjustments that increased JSC’s
royalty allocation. (JSC’s argument pertaining to Adjustment B does identify a computational error in the
Initial Determination that the Judges acknowledge and correct infra.)
b. The CCG Share Adjustment (Adjustment A)
First, with regard to the CCG share (Adjustment A), the Judges reasoned as follows in
the Initial Determination:
1. The above-Minimum Fee Tyler Model generates “an anomalous increase” in
the share allocated to the CCG claimants.
2. This anomaly arose because “CCG programming is unique among the
program categories in this proceeding [in that] it is limited in geographic
scope to CSOs located within a 150-mile belt below the U.S./Canadian
border” (known as the “Canada Zone”).
3. Thus, the above-Minimum Fee Tyler Model “reflect[s] the unique value of
Canadian programming in the Canada Zone, including the uniquely
valuable … French language programming, a niche sub-category.”
4. Accordingly, in addition to the demand for the usual complement of distantly
retransmitted programming that exists throughout the wider United States, in
the Canada Zone there exists this additional demand. Such greater demand
means that CSOs would choose to pay more than the Minimum Fee by adding
CCG stations, and thus Canadian claimant programming, to their channel
lineup.
5. Therefore, CSOs in the Canada Zone would very likely be overrepresented in
the above-Minimum Fee Tyler Model.

Although JSC does not seek rehearing on Adjustment A regarding CCG, that adjustment is relevant to
this discussion because it is part and parcel of the Judges’ derivation of the CCG share that JSC claims to
be too high relative to prior years. The deficiency in JSC’s argument in that regard is best understood by
including in the text following this footnote a summary of the reasoning for Adjustment A.
6. This phenomenon creates a problem because the Judges are allocating a
royalty pool for which, over the period 2015-2017, more than 90% of the
funding came from Minimum Fee-only CSOs. Accordingly, although the data
from the above-Minimum Fee Tyler Model provides useful economic
evidence of CSOs’ revealed preferences for other claimant categories, with
regard to CCG content and value, this data is distortionary.
7. Confirming this anomaly, CCG itself did not propose receiving the high
allocations suggested by the above-Minimum Fee Tyler Model (23.2% in
2015; 31.1% in 2016; and 34.6% in 2017). Rather, CCG proposed that it
receive 14.8% for 2015, 13.7% for 2016, and 13.6% for 2017.291
8. Accordingly, in their 2015-2017 allocations, the Judges utilize the lower CCG
shares reported by Dr. Tyler for all CSOs, rather than only the above-Minimum Fee Tyler Model (see the illustrative sketch following this list).
Initial Determination at 142-143.
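The following sketch illustrates the arithmetic consequence of item 8 above: substituting the lower, all-CSO CCG share for the higher above-Minimum Fee figure and renormalizing necessarily raises the remaining claimants’ shares, ceteris paribus. Only the CCG figures (23.2% and 14.8% for 2015) are taken from the discussion above; the other claimants’ shares, and the proportional renormalization itself, are hypothetical rather than the Judges’ actual computation.

```python
# Illustrative arithmetic only (hypothetical shares apart from the CCG percentages quoted
# above; not the Judges' actual computation). Replacing an anomalously high CCG share with a
# lower one and renormalizing necessarily raises every other claimant's share, ceteris paribus.
shares = {"CCG": 23.2, "Claimant X": 30.0, "Claimant Y": 25.0, "Claimant Z": 21.8}  # hypothetical mix

def substitute_and_renormalize(shares, category, new_value):
    """Set one category's share to new_value and rescale the others so the total is 100%."""
    others_total = sum(v for k, v in shares.items() if k != category)
    scale = (100.0 - new_value) / others_total
    return {k: (new_value if k == category else v * scale) for k, v in shares.items()}

adjusted = substitute_and_renormalize(shares, "CCG", 14.8)   # the lower, all-CSO-based CCG figure
print(adjusted)   # every non-CCG share rises relative to the unadjusted mix
```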
As noted supra, JSC studiously ignores this substantial downward adjustment of
CCG’s 2015-17 share, which benefited JSC and the other claimants by raising their share
allocations, ceteris paribus. Rather, as noted supra, JSC focuses on a comparison of the
CCG shares for 2015-17 with the CCG shares for 2010 through 2014 and claims error
sufficient to warrant rehearing based on the increase in CCG shares in this proceeding.
Simply put, JSC does not object to the Judges’ adoption of adjustments to the above-Minimum Fee approach, but rather only to those adjustments that reduce its inter-year
allocations. That argument, now in proper context, is addressed in the subsection below.

It is also noteworthy that CCG has not sought rehearing to challenge this significant downward
adjustment in its 2015-17 share of the royalty pool nor to criticize the wider application of the above-Minimum Fee Tyler Model.
1. JSC Misapprehends the Process for Ascertaining Relative Value in
Allocation Proceedings.
JSC argues that the sheer increases in the size of the Judges’ allocations for PTV
and CCG are “arbitrary.” Motion at 8. More particularly, JSC calculates that “after the
Judges made multiple adjustments to the results, PTV’s share in the adjusted regression
increased by 51% in 2015, by 69% in 2016, and by 105% in 2017.” JSC Motion at 9.
With regard to CCG, JSC makes an inter-period argument, asserting that CCG’s shares
more than doubled in the 2015-17 period compared to the pre-WGNA conversion years
of 2010-13 (in the prior allocation proceeding) and 2014 (in the present proceeding). JSC
Motion at 10. As explained infra, JSC’s argument in these regards fundamentally
misapprehends the statutory process by which relative values and shares are
determined.292
Addressing first the CCG inter-period share increase, the Judges note that they do
not begin with some pre-determined allocation of shares and then make certain that they
can “back into” that “pre-determination” by conjuring up a comporting analysis. That
would not only, to put it colloquially, “place the cart before the horse,” but would also be
antithetical to the Judges’ fact-finding duty. In this regard, as the Judges proceed through
their analysis, as here, by applying the probative facts, they do not decide ex ante that
their factual findings cannot exceed (or fall below) some arbitrary level (whether an
interim pre-adjusted level or a level from a prior proceeding). Indeed, that too would be
an improper exercise by the Judges of their duty to weigh the facts. Alternately stated,
when the Judges weigh the evidence, they are agnostic as to the share percentages that
would ultimately result.

In addition to the specific points discussed infra regarding the CCG and PTV adjustments, it is important
to remain mindful that the Judges are ascertaining relative values, not absolute values. That is, the WGNA
conversion significantly scrambled CSOs’ retransmission decisions, which the record reflects changed the
relative value of program categories. This does not necessarily indicate that, in an absolute sense, any one
program category became more or less valuable.
Nonetheless, as noted supra, JSC complains that CCG’s shares are higher than the
shares CCG received in the 2010-13 Final Allocation Determination and in 2014 in the
present proceeding. But JSC cites no authority to suggest that allocations should equal or
approximate allocations in prior years or from prior proceedings. Indeed, there is no
authority in that regard because in each allocation proceeding the Judges consider the
allocation issues de novo, based on the record developed in that proceeding. To be sure,
a party can argue that the underlying facts in the later proceeding mirror those of the
prior proceeding, suggesting it would be correct for the Judges not to deviate from the
allocations in the prior proceeding. And because factual patterns may remain relatively
stable across years within a given proceeding, a party may argue that the annual years at
issue should all reflect similar allocations.
Of course, the converse is true as well: If the facts reveal substantial differences
between the years in different proceedings, or across years within a proceeding, the
allocations made by the Judges should reflect those facts. Indeed, the Judges have
described their consideration of this issue as a “Changed Circumstances” analysis.
In the present case, the Judges addressed this very issue in section XVI of the
Initial Determination:
XVI. Changed Circumstances
The Judges may vary from prior decisions when there are (1)
changed circumstances from a prior proceeding; or (2) evidence on the
record before the Judges that requires prior conclusions to be modified
regardless of whether there are changed circumstances.
In the 2014-2017 period, several widely agreed upon changed
circumstances have taken place including 1) WGNA’s conversion to a cable
network, 2) the reclassification of PTV signals from exempt to non-exempt,
and 3) the rise in streaming on alternative platforms. … Based on the agreed
upon record and Judges’ findings here and throughout the determination,
the Judges find that significant changed circumstances occurred across the
relevant period.
Initial Determination at 159-160 (citing the testimonial consensus regarding these
changed circumstances).

Thus, not only was it permissible for the Judges to deviate from allocation shares
in prior years and/or proceedings, but the facts of the case required the Judges to adjust the
share allocations. Quite clearly, therefore, the Judges did not make any findings that –
under any standard – would support rehearing based on changes in the Judges’ share
adjustments.
Second, with regard to the upward adjustment for PTV’s relative value
(Adjustment C), the Judges reasoned as follows in the Initial Determination:
1. PTV argued that, when WGNA was a local station retransmitted by CSOs
pursuant to section 111, a significant number of PTV’s stations were retransmitted
by CSOs together with WGNA.
2. Thus, prior to the WGNA conversion, a CSO’s decision to retransmit PTV and
WGNA jointly generated a Base Fee royalty and demonstrated that CSO’s revealed
preference and willingness-to-pay.
3. PTV further noted that, after the WGNA conversion, many of these CSOs
continued to retransmit the same PTV station, but this did not trigger the Base Fee
because the Minimum Fee applied (with WGNA gone).
4. PTV maintained that the pre-WGNA conversion carriage is probative of the fact
that the PTV carriage post-WGNA conversion demonstrates economic value.
The Judges agreed with this analysis, increasing PTV’s 2015-17 share of royalties
as calculated in Adjustment C.293

As explained in the section of this order denying PTV’s request for rehearing, to adjust for this increase
in PTV’s relative value, the Judges found probative the analysis and testimony by a JSC expert statistical
witness, Mr. Harvey. His analysis and testimony indicated that 44% of the PTV stations that were
identified as being retransmitted by Minimum Fee-paying CSOs after the WGNA conversion had also been
transmitted pre-conversion jointly with WGNA and thus generated Base Fee (above-Minimum Fee)
royalties. The Judges adopted this testimony via Adjustment C, increasing PTV’s share of the royalties.
But JSC objects to this Adjustment C on the same general basis that it objects to
the CCG increase – it is simply too large an increase. As to this issue, JSC compares the
Judges’ interim work-in-progress (i.e., pre-adjustment) PTV shares with the Judges’ final
post-adjustment analysis. But its argument hinges on the same mistaken assumption
made by JSC regarding the CCG share increase across the relevant years – that the
Judges are somehow precluded from increasing a party’s shares by too great a
percentage, regardless of where the Judges’ factual findings lead.
3. JSC’s Proposal that the Judges Disregard the Regression Evidence on Which They Relied – and Instead “Fully Rely” on JSC’s Industry Witnesses by Adopting the Bortz Survey – Is a Blatantly Impermissible Request for a “Second Bite at the Apple”.

Further, JSC’s proposed alternative to the Judges’ approach underscores the
paucity of its argument. JSC argues that the Judges should “fully rely” on their version of
the Bortz Survey approach, which the Judges rejected in the Initial Determination. JSC
Motion at 8.
But this argument, like other JSC arguments discussed supra, constitutes a request
for the proverbial “second bite at the apple” that is an insufficient basis for granting
rehearing. The Judges agree with the Joint Respondents that because “JSC forcefully
advocated for reliance on the Bortz Survey before, during and after the 5-week hearing,”
this argument is “‘nothing more than a recapitulation of arguments that the Judges fully
considered in fashioning their [Initial Determination] and therefore do[es] not present the
type of exceptional case that would warrant a rehearing or reconsideration.’” Joint
Response at 6. See also PTV Response at 2. More particularly, as explained below, in the
Initial Determination, the Judges credited industry witness testimony from JSC witnesses
by significantly increasing the JSC shares above the small shares arising from the above-Minimum Fee Tyler Model (and all other regression modeling).

To place JSC’s present argument – and the Judges’ rejection of same – in
appropriate context, it is necessary to begin with the Judges’ factual finding that, in the
2015-17 period, “[t]he WGNA conversion … drastically reduced the number of JSC
subscriber-minutes distantly retransmitted.” Initial Determination at 122 n.147. There
was no dispute as to this fact. See generally JSC PFF ¶ 101 (stating, without denying,
that “[a]ccording to multiple non-JSC witnesses [citing Dr. Tyler and multiple other
expert and fact witnesses], the absolute and relative volume of JSC programming
declined significantly following the WGNA conversion when measured in subscriber-weighted minutes.”); id. at ¶ 111 (citing JSC’s own expert witness, Dr. Majure, who did
not deny the drastic reduction in the number of JSC subscriber-minutes, but instead
argued “that it would be wrong to infer a drop in JSC value from a drop in subscriber-weighted minutes ….”). In like manner, JSC relied on the testimony of three industry
witnesses who, while not denying the drastic reduction in JSC subscriber-weighted
minutes, testified that, from a CSO’s perspective, “the value and volume of different
categories of programming are not correlated.” JSC PFF ¶ 112. See also Program
Suppliers RPFF ¶ 26 (“JSC’s witnesses did not dispute that JSC’s relative subscriber-weighted volume share declined by 91 to 92 percent between 2014 and 2015, and []
JSC’s relative volume share fell from approximately 7% in 2014 to 0.6% in 2015, and by
2017, it had fallen to 0.4%, representing a 94% decline.”).
This background is pertinent to JSC’s present argument because the Judges (1) in
fact did credit the testimony by JSC industry witnesses that subscriber-weighted minutes
alone were insufficient to determine relative value for JSC programming; and (2)
therefore substantially increased the relative value of JSC shares above the levels
generated by the above-Minimum Fee Tyler Model and other regression modeling.
However, the Judges declined to ignore the significant impact on relative value of the

substantial reduction in the volume of subscriber-weighted JSC minutes distantly
retransmitted. See Initial Determination at 122 n.147.
The following portions of the Initial Determination make this point in detail:
Based on the entirety of the record, the Judges are not persuaded by
industry expert testimony that the value and volume of programming are
not correlated. The industry expert evidence is set against the more well-established sound economic reasoning underlying the regression analyses
in this proceeding.
…
That is not to say that regressions correlating program category
minutes and a measure of royalties is necessarily the only way to determine
value. … [A]s confirmed by some of the industry testimony, the Judges
recognize that … JSC programming, bundled together with programming
from other claimant categories, can have a value (in terms of retaining or
adding subscribers) … that is not well-correlated with overall program
minutes.
…
The Judges find [JSC witnesses] to be particularly credible …
regarding the unique value of JSC content …. Based on the entirety of the
record, the Judges are persuaded that evidence of the unique value of …
JSC content … serves as a limitation on the applicability of certain
proposed regression analyses and their proposed allocation results. These
[findings] do not negate valid application of regression analyses as a basis
for allocation. However, these factors are taken into account within the
Judges’ weighting of the allocation methodologies, including application of
the Bortz survey ….
Initial Determination at 151-152 (emphasis added).
Consequently, the Judges set the 2015-17 post-WGNA conversion allocation
shares for JSC substantially above the shares proposed by the above-Minimum Fee Tyler
Model, as can be seen in the comparison of the two tables below:
Shares Awarded to JSC in Initial Determination

     2015        2016        2017
     11.44%      10.76%      11.91%

Initial Determination at 2 tbl.1.294

Shares Allocated to JSC by Above-Minimum Fee Tyler Model

     2015        2016        2017
     2.1%        1.3%        0.5%

Initial Determination at 13.
As a comparison of these two tables shows, by departing from the above-Minimum Fee Tyler Model, and giving due weight to the Bortz Survey, as suggested by
JSC’s industry witnesses, the Judges increased JSC’s shares by 445% for 2015, 728% for
2016, and by 2,282% for 2017. To be sure, these higher shares are still well below what
the Bortz Survey proposed, and what JSC sought, both at the hearing and again via
rehearing. But, as noted above, the JSC share of subscriber-weighted minutes declined
by over 90% during this period, which is reflected in the effect of the regression analysis
in the above-Minimum Fee Tyler Model, and which the Judges found highly relevant.
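For reference, the percentage increases cited in the preceding paragraph follow from simple arithmetic on the two tables above; they are illustrative restatements only, not additional findings. Using the 2015 figures as an example:

\[
\frac{11.44\% - 2.1\%}{2.1\%} \approx 4.45, \qquad \text{i.e., an increase of approximately } 445\%.
\]

The 2016 and 2017 figures (728% and 2,282%) follow in the same manner from the corresponding table entries.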
Thus, JSC’s claim of purported error regarding this issue is not premised on any
failure by the Judges to ignore its expert witnesses or the Bortz Survey. Rather, JSC’s
complaint is that the Judges did not give zero weight to the regression model and 100%
weight to the Bortz Survey (based on the survey itself and the industry witnesses JSC
proffered). Of course, as noted supra, a party’s disagreement as to the Judges’ weighing
of record evidence, including expert testimony, does not satisfy any grounds for granting
a motion for rehearing.295

These final totals are changed marginally via the correction of a mathematical error in the Initial
Determination, as discussed infra.
Implicit in JSC’s argument is that JSC should not suffer such a loss in royalty revenues compared to past
years. But no implied assumption regarding a JSC loss in royalty revenues arising from these lower shares
is warranted by the record. Rather, the record indicates that “JSC sports content has been migrating from
broadcast stations to other platforms, including cable networks like TNT, TBS, and ESPN, regional sports
networks, and pay-TV platforms.” See Program Suppliers PFF ¶ 237 (citing witness testimony, including
the testimony of JSC expert Allan Singer). Further, the record reflects that such migration “has increased
significantly for the past several years, resulting in corresponding decreases of distantly retransmitted JSC
programming volume” [indicating that] [t]he significantly low 2014 through 2017 JSC programming
volumes are consistent with a continuing migratory pattern. Id. ¶¶ 239-40.
4. JSC’s Argument – That Rehearing Is Necessary Because the Tyler Modeling Simply “Parrots” the Statutory Formula – Cannot Be Grounds for Rehearing Because this Argument Was Made at the Hearing, and Because JSC Fails to Note in Its Motion the Judges’ Detailed Explanation for Rejecting that Argument.

JSC argues that the Tyler modeling (in its several varieties) should have been
rejected because it simply “parrots” the statutory formula. JSC Motion at 9. Ironically,
this basis for rehearing must be denied because it “parrots” the argument made by JSC
and other parties at the hearing. See Initial Determination at 74 (“Dr. Majure maintains
that the Tyler Model … essentially estimates only ‘the equation given by the statutory
formula . . ..’”); id. at 75 (noting that CCG’s expert economic witness, Dr. Lisa George,
likewise criticized the Tyler modeling because it “effectively replicates the regulatory
formula ….” and noting that PTV’s expert, Dr. John Johnson, likewise maintained that
the Tyler modeling “essentially replicates the statutory formula ….”).
However, the Judges comprehensively analyzed and then rejected this argument, in
all its iterations. See id. Section XIB at 131-136. Nonetheless, JSC simply ignores the
Judges’ detailed explanation why this “statutory formula”/“fee generation” criticism
lacks merit.
In sum, JSC once again asks for that improper “second bite at the apple” by
seeking to reargue an issue. Moreover, JSC does not even claim that the Judges’
extended discussion and findings as to this issue were incorrect. Accordingly, this JSC
point is insufficient to justify rehearing.

Thus, as the Judges explained in their Initial Determination, there is no reason to assume that the reduction
in JSC shares caused JSC to lose revenue realized from the transmission of JSC content formerly on
WGNA. That is, there is no record evidence to support an assumption that JSC had irrationally sought out
less profitable distribution outlets than distantly retransmitted local stations after the conversion of WGNA
to cable station status. See Initial Determination at 135 n.161 (“[T]he JSC is simply a representative of the
major professional sports leagues and the NCAA, and the record does not reflect that they suffered any
economic loss because of the reduction of subscriber minutes distantly retransmitted.”)

5. Conclusion

Accordingly, JSC’s Motion for Rehearing as to these issues is denied.296
III. PTV’S MOTION FOR REHEARING
a. Whether “Adjustment B” in the Judges’ Initial Determination Is Premised on Clear Error that Must Be Corrected

The PTV Motion seeks rehearing with regard to the Judges’ application of
“Adjustment B” in the Initial Determination, which is a downward adjustment of the PTV
shares derived from the Tyler Model for above-Minimum Fee CSOs. This adjustment
was made by the Judges to reflect the presence of must-carry PTV signals, whose value
had not been adequately demonstrated to be included as part of the relative marketplace
value generated by regression approaches. However, PTV maintains that the adjustment
is incompatible with the record evidence and amounts to an erroneous double-counting of
the Judges’ intended adjustment. PTV Motion at 1.
PTV alleges that it is clearly erroneous for the Judges to derive its shares from the
Tyler above-Minimum Fee Model for the 2015–17 period and also apply a downward
adjustment based on Bennett Figure 52. PTV notes that the Tyler above-Minimum Fee
Model excludes CSOs that paid the Minimum Fee, whereas Dr. Bennett (Figure 52)
carried out the analysis applied by the Judges only based on CSOs that paid the Minimum
Fee. PTV Motion at 3.
In their Joint Response, CCG, Program Suppliers, and SDC clarify that the Judges
explained Adjustment B as weighting Dr. Bennett’s Figure 52 analysis in order to avoid
the double counting that is alleged in PTV’s motion. Joint Response at 7, citing ID at 143 (note to Adjustment B Table). The Joint Response adds that the applied adjustment is likely a conservative one, understating the bias from must-carry PTV signals, because must-carry signals were also retransmitted by above-Minimum Fee cable systems. Joint Response at 7, citing ID at 45.

The Judges also do not credit PTV’s invitation for the Judges to “amend[] the Initial Determination to award [PTV] shares for the 2015-2017 royalty years based on or adjusted upward from either the conventional McLaughlin-adjusted Bortz Surveys or Dr. Tyler’s primary regression model ….” PTV Response at 10. PTV’s representation that it would be amenable to this alternative is little more than the statement by a party that it supports an approach that increases its allocation. Obviously, such argument based on naked self-interest does not support a rehearing or amendment of the Initial Determination.
Similarly, JSC’s response to PTV’s proposed elimination of Adjustment B notes
the Judges’ recognition of the need to lower the Tyler Model’s estimates for PTV to
correct the issue of fee-based regressions falsely associating must-carry signals with
additional royalties. JSC Response at 2. JSC challenges PTV’s view that excluding
Minimum Fee systems from the Tyler Model somehow accounts for must-carry carriage
within the Tyler regression. JSC argues that the Judges were correct to conclude that all
must-carry signals are being falsely interpreted by the regressions. Furthermore, JSC
observes that reliance on the Tyler above-Minimum Fee Model without adopting
Adjustment B, would incorporate the false inferences from must-carry signals, because
the regression would “see” systems carrying those stations and making royalty payments,
but would not “see” indemnification payments made by the PTV stations back to the
CSO. Id.
CTV asserts that PTV’s motion regarding Adjustment B reflects a fundamental
misunderstanding of the evidence. CTV notes that the Tyler Model does not exclude any
PTV stations that were retransmitted pursuant to must-carry requirements. CTV
Response at 3, citing Ex. 7207 (Bennett WRT) at 63-64 and 4/12/23 Tr. 4608 (Bennett);
Ex. 7600 (Tyler ACWDT) at 37, 64. And, for that reason, Dr. Bennett developed a must-carry sensitivity analysis to measure the impact of must-carry signals on share
allocations, which is reflected in Figure 52. Id. CTV also notes that the Judges’
weighting methodology effectively decreases the downward adjustment to PTV’s share
determination based on the ratio of the PTV shares reflected in Dr. Tyler’s baseline
regression model, Figure 3.2 (including all CSO royalties), and the PTV shares reflected

in Dr. Tyler’s Figure 6.3 (including only above-Minimum Fee-paying CSO royalties), as
explained by the Judges’ note accompanying Adjustment Table B on page 143 of the
Initial Determination. Id.
PTV’s Reply reiterates its initial arguments regarding Adjustment B and argues
that any weighting contained within the adjustment is also unsupported. PTV asserts that
in order for the applied weighting to be appropriate, the proportion of Public Television
value derived from must-carry signals estimated by Dr. Bennett must have been the same
within the above-Minimum Fee CSOs as within the Minimum Fee-paying CSOs. PTV
Reply at 1-2.
PTV asserts that Dr. Bennett’s analysis only examined the value of must-carry
signals carried by Minimum-Fee-paying CSOs. PTV maintains that the values estimated
by Dr. Bennett are not proportionally distributed among Minimum Fee and above-Minimum Fee CSOs. PTV argues that such estimates do not reflect carriage among
above-Minimum Fee CSOs, and that there is no basis for using the numbers calculated by
Dr. Bennett to attempt to estimate that value. Id. at 3.
PTV asserts that the CSOs paying more than the Minimum Fee could have chosen
to decline to carry any distant PTV signals. PTV argues that, under the relevant must-carry regulations, for the above-Minimum Fee CSOs, distant retransmission of a must-carry signal necessarily incurs an incremental royalty cost. PTV notes that under those
regulations above-Minimum Fee CSOs thus have the right to demand indemnification
from the originating station for that incremental royalty burden. If a station refuses
indemnification, then the CSO is not obligated to carry the signal under the must-carry
rules. Therefore, PTV argues, a CSO’s decision to carry the signal without
indemnification necessarily demonstrates value of the programs on that signal. PTV adds
that the record indicates that no indemnification payments were made. Id. at 4.

i. The Judges’ Analysis and Conclusion Regarding PTV’s
Adjustment B Rehearing Motion Arguments
The Initial Determination clearly explains the finding that must-carry signals are
problematic when fee-based regressions are used to establish relative value, and thus
require an adjustment. More particularly, this need for adjustment exists for Dr. Tyler’s
allocation share calculations pertaining only to the CSOs who paid more than the
Minimum Fee. The Tyler Model does not exclude any PTV stations that were
retransmitted pursuant to must-carry requirements. PTV proposes to ignore the effect of
must-carry signals on the Tyler Model. PTV takes the position that the must-carry issue
is addressed because the adopted Tyler Model excluded Minimum Fee systems. But
excluding Minimum Fee systems from the Tyler Model does not account for PTV must-carry signals that are carried by above-Minimum Fee CSOs. Therefore, the Judges’
determination on this proceeding record makes clear that the absence of an adjustment,
rather than the adjustment itself, would more likely constitute clear error and manifest
injustice.
PTV asserts that the Judges cannot apply an adjustment based on Dr. Bennett’s
analysis because Dr. Bennett examined only the value of must-carry signals carried by
Minimum Fee paying CSOs. This argument does not undermine the need for an
adjustment. It simply attacks the applied Adjustment B as supposedly having inadequate
precision or basis in the record. There is a reason that the record evidence does not
provide for greater precision, and that is the noted evidentiary failure of PTV regarding
which stations were subject to the must-carry provisions and which were not. See ID at
47. However, the application of Adjustment B is reasonable, and is clearly based on
evidence in the record and the Judges’ assessment of the entirety of the record.297

Dr. Bennett’s adjustments are based upon Mr. Harvey’s identification of stations likely carried pursuant to the must-carry provision. See Bennett WRT at 57. Furthermore, as the Judges observed, “Mr. Harvey engaged in a reasonable attempt to estimate this number, which PTV could have set forth in its submissions, but did not.” ID at 47.
Further, Adjustment B, which is properly weighted, does not amount to an
erroneous double-counting of the intended adjustment. While employing the best
evidence available to determine a necessary adjustment, the Judges weighted the Bennett
analysis, for 2015-2017, prior to applying it to the Tyler regression allocations. This is a
reasonable approach, with sufficient evidentiary support, consistent with the relevant
legal requirements.
As explained in the Initial Determination:
The Must Carry adjustment in Bennett WRT fig. 52 was based on the PTV
shares of all CSO royalties, whereas the Judges are applying this adjustment
to the shares of CSO royalties attributable to shares generated by CSOs
paying above the minimum fee (subject to the prior adjustment for CCG,
discussed supra). So, for [2014], the percentage point adjustment to the
PTV share is the percentage point adjustment in Bennett WRT Fig 52. For
2015-2017, the percentage point adjustment to the PTV share is calculated
for each year by: (1) finding the percentage of PTV shares reflected by the
PTV shares from Tyler WRT fig. 6.3 ÷ PTV’s shares from Tyler WRT fig.
3.2; (2) multiplying that percentage by the percentage point adjustment in
Bennett WRT fig 52; and (3) subtracting that product from the PTV share
from the table above.
ID at 143 (note to Adjustment B Table).
The weighting described above, for 2015-2017, serves to discount the Bennett
downward adjustment by ratios derived from PTV allocations of above-Minimum Fee
CSOs divided by the PTV allocations of all CSOs. As the Joint Response notes, these
ratios and the resulting downward adjustments are conservative in that they may tend to
understate the bias introduced by Dr. Tyler’s inclusion of must-carry PTV signals,
precisely because they do not exclude must-carry signals retransmitted by above-Minimum Fee systems. At the same time, the approach remains based in record evidence
and is a reflection of reasonable and conservative judgments derived from the entirety of
the record. The Judges appropriately employed the thusly discounted Bennett
adjustments (derived for Minimum Fee-paying systems) when applied to the Tyler model
allocations for above-Minimum Fee CSOs.
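For clarity, and using only the quantities identified in the note quoted above (no record values are supplied here), the 2015-2017 Adjustment B computation described in that note may be restated in formula form as:

\[
\text{Adjusted PTV share}_t \;=\; S_t \;-\; \left(\frac{\text{PTV share, Tyler WRT fig. 6.3}_t}{\text{PTV share, Tyler WRT fig. 3.2}_t}\right) \times \Delta_t^{\text{Bennett WRT fig. 52}}, \qquad t \in \{2015, 2016, 2017\},
\]

where \(S_t\) is the pre-adjustment PTV share for year \(t\) (the “table above” referenced in the note) and \(\Delta_t^{\text{Bennett WRT fig. 52}}\) is the percentage-point must-carry adjustment from Bennett WRT fig. 52. This restatement is offered for illustration only and adds nothing beyond the note itself.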
For the reasons explained herein, and based on the entirety of the record, PTV has
not shown that an exceptional case exists, or that the Initial Determination is erroneous in
relation to Adjustment B. Further, PTV has not demonstrated that aspects of the
determination relating to Adjustment B are without evidentiary support in the record or
are contrary to legal requirements. In that latter regard, PTV has not shown that, with
respect to the Initial Determination’s application of Adjustment B, there exists either
clear error or manifest injustice that would support granting of PTV’s request for
rehearing.298
b. Whether “Adjustment C” in the Judges’ Initial Determination Reflects a Clear Error that Must Be Corrected

The PTV Motion also seeks rehearing with regard to the Judges’ application of
what the Judges identified as “Adjustment C” in the Initial Determination. By this
Adjustment, the Judges substantially increased the value of certain PTV stations, and
thus PTV’s share of royalties. However, PTV maintains now that the Judges should have
used “Adjustment C” to increase its share even more. PTV Motion at 1-2.
By way of background, the Judges found in the Initial Determination that “the
dramatic increase in the number of minimum fee-only CSOs (i.e., those with no distant
retransmittals and those with some distant retransmittals but with ‘excess capacity’)
renders regression analyses that include those CSOs less reliable and thus can be
accorded only very limited economic evidentiary weight.” Initial Determination at 21.
In so holding, the Judges rejected PTV’s argument (proffered through the testimony of its
economic expert, Dr. John Johnson) that the Judges should find predominant “economic
significance in the choices of a CSO ‘to retransmit a distant signal to particular subscriber
groups’ despite the fact that the CSO pays the minimum fee ….” Initial Determination at
13 (emphasis added) (explicitly rejecting the argument in PTV PFF ¶ 58 that “[t]he
decision of a CSO paying the minimum fee to retransmit a distant signal to particular
subscriber groups shows the CSO’s preference for distantly retransmitted programming
without the effect of the statutory royalty, which is an economic context that more closely
resembles the hypothetical marketplace.” (citing, inter alios, at n.83 therein, Dr.
Johnson’s hearing testimony)).299

PTV’s Reply raises concerns regarding indemnification, in relation to the value of must-carry signals. The Judges point to section VII.A.5. of the Initial Determination, “The Judges’ Analysis & Conclusions regarding the ‘Must-Carry’ Issue,” and the Judges’ undisturbed and valid analysis and conclusions as to why must-carry signals lack objective and measurable value. See Initial Determination at 47-49.
In contrast with the Judges’ misgivings as to Dr. Johnson’s regression testimony,
they agreed with his argument that, ceteris paribus, the record contained sufficient
evidence to increase PTV’s allocation. In this regard, the Judges found that – although
certain PTV stations were only retransmitted by Minimum Fee-paying CSOs – these
CSOs had previously retransmitted PTV stations when such retransmissions had been
combined with retransmissions of WGNA, the most retransmitted local station, thereby
triggering a CSO royalty obligation above the Minimum Fee. As Dr. Johnson testified,
there was evidence that CSOs’ immediately prior retransmissions of PTV stations that
triggered an incremental royalty cost revealed an incremental value in those
retransmissions and that it was reasonable to conclude that the PTV stations continued to
have incremental value when they were uncoupled from WGNA (and thus generated only
the Minimum Fee). PTV made this specific argument in its post-hearing PFF and post-hearing brief. See PTV PFF ¶ 60 (and record citations therein); PTV Post-Hearing Brief
at 27-28. The Judges were persuaded that this WGNA-related evidence reflected
“ongoing marketplace value,” notwithstanding the general principle that Minimum Fee
royalty payments did not otherwise disclose actual economic decision making or reveal
the preferences of CSOs. Initial Determination at 143-144.

The Judges also declined to rely on Dr. Johnson’s analysis (including his broad Minimum Fee and above-Minimum Fee arguments) and PTV’s case, because of certain decisions regarding methodological approaches and decisions which the Judges found troubling, as discussed infra.
To calculate PTV’s upward adjustment based on this point, the Judges identified
evidence and testimony proffered by a JSC statistical expert, Mr. R. Garrison Harvey.
Mr. Harvey testified as follows: “[T]he number of PTV Only systems increased after the
WGNA conversion from 44 at the end of 2014 to 173 by the end of 2017. PTV Only
Systems that had carried WGNA and PTV in 2014 account for three-fifths of that
increase.” Harvey WDT ¶ 106.
The Judges found that Mr. Harvey demonstrated that 44% of the PTV stations
that were identified as retransmitted by Minimum Fee-paying CSOs after the WGNA
conversion had been transmitted pre-conversion and generated Base Fee royalties. That is
sufficient evidence of ongoing marketplace value. Moreover, Mr. Harvey supported this
testimony with reference to specific data, citing to his underlying workpapers, which
were not called into question or contradicted at the hearing. Harvey WDT ¶ 106 n.86.
Accordingly, the Judges used that factual finding to increase by 44% the PTV share
modification, as set forth in the table for Adjustment C. Initial Determination at 144.
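Although the quoted testimony and the Initial Determination do not set out the underlying arithmetic expressly, one illustrative reading of Mr. Harvey’s figures that is consistent with both the 44% adjustment and the 44.5% figure discussed later in this order would be:

\[
\tfrac{3}{5} \times (173 - 44) \approx 77 \text{ systems}; \qquad 77 \div 173 \approx 44.5\%, \text{ truncated to } 44\%.
\]

This reconstruction is an assumption offered solely for orientation; the governing figures are those in Mr. Harvey’s cited workpapers.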
This adjustment substantially increased PTV’s allocation of the royalties.
Compare Adjustment B Table with Adjustment C Table, Initial Determination at 143-144. The PTV Motion does not challenge the accuracy or the credibility of this evidence
or Mr. Harvey’s testimony in this regard.
But PTV maintains that other testimony indicates that this increased adjustment
was insufficient. In this regard, PTV avers that the Judges erred by limiting their
adjustment to evidence concerning the specific combination of Public Television signals
with WGNA. That is, PTV claims that testimony it had proffered showed that PTV’s
upward adjustment should have been 55% rather than 44%. PTV Motion at 5.

In support of this argument, PTV points to a single one-paragraph statement in
Dr. Johnson’s Written Rebuttal Testimony, wherein he claimed, without identifying any
underlying workpapers or other evidence:
There were 1,115 CSO-Public Television distant signal combinations in the
2015-2017 period where the CSO paid a minimum fee during those years.
For 609 (or 55 percent) of these combinations, the same CSO also carried
the same Public Television distant signal, at a different point in time, when
it paid section 111 royalties greater than the minimum fee. In those
instances, the CSOs elected to pay incremental royalties for these signals
(because they generated more than one DSE). Put differently, the CSOs’
carriage decisions indicate that these Public Television signals did have
value.
PTV Motion at 6 (quoting Ex. 7303 ¶ 79 (Johnson WRT)) (emphasis added).300
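For reference, the 55 percent figure in the quoted passage is simply the ratio of the two counts Dr. Johnson identified:

\[
609 \div 1{,}115 \approx 0.546, \text{ or approximately } 55\%.
\]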
PTV also maintains that Mr. Harvey’s testimony, quoted above, refers to the
number of CSOs (systems) that continued to retransmit PTV stations after WGNA was
unavailable, rather than the number of PTV stations retransmitted after the WGNA
conversion. PTV Motion at 5 n.4.
On these bases, PTV invokes two aspects of the standard for rehearing.
Specifically, PTV contends that “the Judges’ ‘Adjustment C’ reflects a clear error that
must be corrected to prevent manifest injustice.” PTV Motion at 5 (emphasis added).
In their Joint Response, CCG, Program Suppliers, and SDC assert that PTV’s
argument regarding this rehearing issue, like the others, fails to satisfy the requisites for
granting a rehearing, particularly the assertions of “clear error” and “manifest injustice”
levied by PTV with regard to “Adjustment C.” Joint Response at 1-3. More particularly,
these parties assert that:
1. The WGNA conversion was a “supply-side phenomenon” inapplicable to PTV
+ non-WGNA commercial station combinations.
2. Record evidence suggests that CSOs retransmitting PTV stations may have
been indemnified by the latter for any royalties paid above the Minimum Fee.
3. PTV acknowledges that it presented these very facts and arguments at the
hearing (citing PTV Motion at 6), and PTV’s failure to persuade the Judges to
apply these facts and adopt this argument at the hearing precludes PTV from
using the rehearing process to get a “second bite at the apple.” (citing 2010-2013 Rehearing Order at 2).
Joint Response at 4, 7-8.

In their Motion, PTV also cites to Johnson WRT ¶¶ 76–78 as attribution for this quote. PTV Motion at 6. However, no portion of the quote is contained in those paragraphs, and none of those paragraphs support this rehearing argument. Moreover, paragraph 78 sets forth as an example a PTV station that had been retransmitted by an Arizona CSO together with WGNA and continued to be retransmitted after WGNA was no longer a broadcast station that could be distantly retransmitted. This example supports the Judges’ increase in PTV’s share for the reason set forth in Adjustment C in the Initial Determination, and in no way supports PTV’s rehearing argument for a more lucrative adjustment.
In its Reply to the Joint Response, PTV argues:
1. The Joint Response wrongly concludes, without explanation, that the issues
relating to, inter alia, Adjustment C, “could have been ‘address[ed] … during
the hearing’”, despite the fact that “it was impossible to anticipate that the
Judges would apply [inter alia] their Adjustment[] C to Dr. Tyler’s sensitivity
limited to Above Minimum Fee CSOs.” Thus, PTV maintains, the rehearing
process constituted the first occasion for it to litigate this issue, and the
rehearing motion thus is not an impermissible attempt to “re-litigate” a matter
considered at the hearing. PTV Reply at 1-2.
2. The Joint Response wrongly maintains that the Judges acted “well within their
discretion by limiting Adjustment C to “PTV + WGNA” combinations,
because the Judges did not account for their differentiation of “PTV + non-WGNA combinations that also generated a base fee royalty ….” PTV Reply
at 10-11 (quoting 17 U.S.C. 803(c)(3) (“A determination of the Copyright
Royalty Judges shall be supported by the written record and shall set forth the
findings of fact relied on by the Copyright Royalty Judges.”). PTV Reply at
10.
In its separate response, JSC argues that PTV’s request for rehearing regarding
“Adjustment C” should be denied because:

1. Any initial royalty obligation for the CSO above the Minimum Fee is subject
to offset via indemnification;301
2. Adjustment C “fails to account for the must-carry issue,” an issue which
uncouples continuing carriage of PTV signals after 2014 from any finding of
“CSO’s revealed willingness to pay for those signals;”
3. More broadly, Adjustment C wrongly relies on data from Minimum Fee-only
CSOs; and
4. Adjustment C treats similarly situated parties differently because some
Minimum Fee-only CSOs in 2017 also carried commercial signals that
“generated base fee royalties” in 2014.
JSC Response at 4-7.
In its Reply to the JSC Response, PTV argues:
1. JSC’s criticism of Adjustment C as arbitrary is wrong, because this adjustment
is “necessary to mitigate the unreasonably low estimates of [PTV’s] shares” as
set forth in the Tyler Model’s analysis of only “Above Minimum Fee CSOs.”
PTV Reply at 6.
2. JSC’s criticism of Adjustment C for supposedly treating different parties
differently is an incorrect criticism because the Judges explained that the
“Above Minimum Fee-Only” version of the Tyler Model disproportionately
ignored circumstantial evidence demonstrating post-2014 PTV value through
the continuation of PTV retransmittals in that period after the retransmittal of
a combination of “WGNA + PTV” signals became moot (with the WGNA
conversion to a cable system). By contrast, no other program category

This argument echoes the argument made in the Joint Response, as noted supra.

suffered from a similar loss of share value because of the WGNA conversion.
PTV Reply at 9-10.
In its separate response to the PTV Motion, CTV maintains that there is no basis
to find that the Judges’ adoption of Adjustment C was incorrect or incomplete – let alone
“clearly erroneous” or that it caused PTV “manifest injustice”. CTV Response at 5-6. In
support, CTV argues the following points:
1. PTV wrongly asserts that the Judges committed clear error in the way they
applied Adjustment C to the share allocations, because the Judges articulated
in the Initial Determination a proper rationale for applying Adjustment C; and
2. The Judges were within their authority to adopt Mr. Harvey’s record
testimony and evidence, rather than Dr. Johnson’s record testimony, to
calculate Adjustment C, particularly because Adjustment C focused on PTV’s
specific argument “regarding demonstrated willingness to pay” by CSOs for a
PTV signal after the WGNA conversion.
CTV Response at 2, 5-6.
In Reply to the CTV Response, PTV maintains:
1. Instead of offering a substantive argument, CTV incorrectly argues that, as a
matter of law, the Judges may adopt whichever percentage (Mr. Harvey’s or
Dr. Johnson’s) they deem “most appropriate”; and
2. The Judges do not have such discretion; rather, their findings “may not be
arbitrary[,] must be supported by substantial evidence” and shall be the
product of a “reasoned decision.”
PTV Reply at 10.
i. The Judges’ Analysis and Conclusion Regarding PTV’s
Adjustment C Rehearing Motion Arguments

1. Application of the Rehearing Bases on Which PTV Relies for Adjustment C: “Manifest Injustice” and “Clear Error”
a. PTV Has Not Satisfied the “Manifest Injustice” Standard

As an initial matter, the Judges find that – for several reasons – PTV’s basis for a
requested rehearing regarding the Adjustment C issue fails to satisfy the “manifest
injustice” standard. First, the Judges agree with the Joint Respondents that the concept of
“manifest injustice” is “exceptionally narrow,” requiring a showing of not only “clear and
certain prejudice” to the movant, but also a harm to the movant that is “fundamentally
unfair.” Joint Response at 3 (citing Leidos, Inc. v. Hellenic Republic, 881 F.3d 213, 217
(D.C. Cir. 2018); Mohammadi v. Islamic Republic of Iran, 947 F.Supp.2d 48, 78 (D.D.C.
2013). Here, PTV maintains that even though the Judges recognized that their primary
regression model (the Tyler Model for above-Minimum Fee CSOs) failed to adequately
reflect a revealed preference for PTV signals – and accordingly increased PTV’s share
substantially – other evidence indicated that the PTV share should have been increased
even more. The Judges detect neither “fundamental unfairness” nor “prejudice” (let
alone “clear and certain prejudice”) arising from the fact that PTV’s increase was not as
great under the evidence relied upon by the Judges (44%, pursuant to Mr. Harvey’s
calculations) as it would have been had the Judges instead relied on PTV’s witness, Dr.
Johnson.
In applying the above D.C. Circuit test for “manifest injustice,” a district court
noted that “a dollar-and-cents comparison” serves to “undercut[] the significance of the
manifest injustice standard.” Fraenkel v. Islamic Republic of Iran, 326 F.R.D. 341, 345
(D.D.C. 2018), rev’d on other grounds, 892 F.3d 348 (D.C. Cir. 2018) (abuse of discretion in

applying a statute).302 The Judges agree, especially where, as here, the movant is
complaining of “manifest injustice” because a substantial upward adjustment in its favor
should have been even greater.303
With regard to a specific point made by JSC, the Judges reject JSC’s argument for
eliminating Adjustment C en toto on the basis that this adjustment is itself erroneous
because it purportedly treats similarly situated parties differently. JSC Response at 6-7.
Although the Judges address this argument, and the opposition thereto, in the section of
this order denying JSC’s Motion seeking to eliminate Adjustment C en toto, the Judges
here take specific note of an important concession by JSC in its Response. Although JSC
claims that categories of programming other than PTV might have benefitted from the
same pre- and post-WGNA conversion analysis of CSO retransmissions, JSC concedes,
in a footnote, that no witness, including its witness, Mr. Harvey, “analyze[d] whether
these CSOs were carrying the same non-WGNA signals in 2017 as they were in 2014.”
JSC Response at 7 n.2. So, not only did no party other than PTV make the argument that
this analysis might favor its particular programming, the evidence cited does not permit
an allocation among other program categories based on this argument.
b. PTV Has Not Satisfied the “Clear Error” Standard

The D.C. Circuit reversed because the district court misconstrued a statute by finding that the damages awarded to relatives of a
person with American citizenship murdered by terrorists should be lower if the murder victim had dual
Israeli citizenship and was targeted for death because of his latter citizenship. Fraenkel, 892 F.3d 348
(D.C. Cir. 2018). That holding is clearly not analogous to the present issue of “manifest injustice.”
PTV’s reliance on the Judges’ order on rehearing in SDARS III is misplaced. There, the Judges found
that “it would be manifestly unjust to maintain a royalty rate … not based on the … calculation that
prevailed at the time the record was closed,” and the alternative methodology could change the royalty
obligation by $150 million. SDARS III Order at 7-8. The Judges’ reference to the potential royalty dollars
at issue, standing alone, was not the dispositive basis for finding potential manifest injustice; rather
manifest injustice would be the consequence of the use of a calculation methodology not prevailing
according to the extant record. The reference to the $150 million disparity underscored the importance of
the manifest injustice of using an improper methodology. By contrast, in the present case, the differing
methodologies for calculating PTV’s upward adjustment (Mr. Harvey’s or Dr. Johnson’s) both are in the
record, and they are discussed infra.
Pursuant to the Judges’ rules, the statutory “exceptional case” requirement for
rehearing – based on an allegedly “erroneous” factual aspect of a determination – is
satisfied only if that factual finding is “without evidentiary support in the record.” 17
U.S.C. 803(c)(2); 37 CFR 353.1-.2; see also Order Denying Motion for Rehearing at 1, In
re Distribution of 2000-03 Cable Royalty Funds, Docket No. 2008-02 CRB CD 2000-2003 (Phase II) (Aug. 7, 2013). Further, pursuant to D.C. Circuit precedent, when the
movant’s asserted factual predicate for the assertion of “clear error” relies on the
uncredited testimony of its expert, a Rule 59(e) motion304 must be denied if the expert’s
testimony does not provide sufficient “factual … reasons for [the expert’s] conclusion.”
Martin v. Omni Hotels Mgmt. Corp., 321 F.R.D. 35, 40 (D.D.C. 2017) (citing New York
State Ophthalmological Soc. v. Bowen, 854 F.2d 1379, 1391 (D.C. Cir. 1988)), aff'd, 409
F. App’x 362 (D.C. Cir. 2011).
Moreover, a request for rehearing based on a judge’s reliance on a “specific
factual determination[]” does not satisfy the “clear error” test if (1) the evidence which
the motion challenges is “sufficiently reliable to credit” or (2) the evidence on which
the movant relies is inconsistent with “the entire evidence,” and thus the court is “left
with the definite and firm conviction that a mistake has been committed.” Obaydullah v.
Obama, 688 F.3d 784, 792 (D.C. Cir. 2012) (emphasis added).
Applying these standards, PTV’s motion for rehearing with regard to Adjustment
C must be denied. First, the Judges’ Adjustment C is based on evidence in the record,
i.e., the testimony of JSC’s statistical expert witness, Mr. Harvey, and the documentation
on which he relied. Moreover, this testimony and evidence was not challenged, either at
the hearing or on rehearing. On this basis alone PTV’s motion for rehearing fails to
demonstrate any error, let alone clear error.

As noted supra, the Judges pattern their rehearing analysis pursuant to the standards applicable to
motions under Fed. R. Civ. P. 59(e).
Second, PTV relies upon the testimony of its own economic expert, Dr. Johnson,
which PTV maintains is superior to the testimony of Mr. Harvey on this issue. However,
this argument fails the second “clear error” standard cited above, because Dr. Johnson’s
testimony, on which PTV relies to seek, via rehearing, a 55% Adjustment C increase in
its royalty share (instead of the 44% Adjustment C increase provided by the Judges) does
not provide sufficient factual reasons for his conclusion. Specifically, Dr. Johnson’s
opinion regarding the 55% increase sought by PTV is not supported by any record
evidence cited by PTV. See PTV Rehearing Motion at 6; Johnson WRT ¶ 79.305
Additionally, PTV does not maintain that Mr. Harvey’s analysis that led to the
Judges’ 44% upward adjustment in favor of PTV was erroneous; rather, PTV argues that
it is Dr. Johnson’s opinion, which would favor a 55% adjustment, that “best comports”
with the Initial Determination. PTV Motion at 10. However, the Judges’ exercise of
their discretion in deciding which of two (or more) alternative factual approaches to
follow cannot constitute “clear error” (or any error at all) when the party seeking
rehearing merely maintains that its preference is better. Moreover, for the
reasons articulated below, the Judges had good cause to rely on Mr. Harvey’s testimony
over that of Dr. Johnson.
2. PTV’s Claims of “Manifest Injustice” and “Clear Error” also Fail Because PTV Is Seeking to Relitigate an Issue Raised and Determined in the Initial Determination

Although PTV also cites to Johnson WRT ¶¶ 76-78, which are irrelevant as to the Adjustment C rehearing issue, the Judges note that those paragraphs likewise do not cite to or provide any documentary support for Dr. Johnson’s opinion. (By contrast, Mr. Harvey’s testimony, on which the Judges relied, was supported by documentary evidence, in the form of Mr. Harvey’s cited workpapers. Harvey WDT ¶ 106 n.86. Moreover, Mr. Harvey’s testimony was not subject to challenges that the Judges found sufficient to call into question his testimony, unlike the case with Dr. Johnson’s testimony, as discussed in the text immediately following this footnote.)

As the Judges have noted previously, a motion seeking rehearing based on, inter
alia, assertions of “manifest injustice” or “clear error,” shall be rejected if the movant has
“merely restate[d] … evidence that was presented during the proceeding.” Order
Denying Motions for Rehearing at 2, In re Digital Performance Right in Sound
Recordings and Ephemeral Recordings, Docket No. 2005-1 CRB DTRA (Webcasting II)
(Apr. 16, 2007). It is in such context that the movant seeks rehearing – over an issue that
was raised and determined in the Initial Determination. This principle has been aptly
described by the Judges, and other tribunals, as an improper attempt to seek “a second
bite at the apple”:
[When] the Judges consider whether there exists … a need to correct a clear
error or prevent manifest injustice[] … the Judges must subject the
rehearing arguments to a strict standard, in order “to dissuade repetitive
arguments on issues that have already been fully considered ….” Order
Denying Motions for Reh’g, Docket No. 2005-1 CRB DTRA, at 1-2 (Apr.
16, 2007). Under this strict standard, a rehearing motion does not provide a
litigant with a “second bite at the apple,” allowing it “to re-litigate old
matters, or to raise arguments or present evidence that could have been
raised prior to the entry of judgment.” Exxon Shipping Co. v. Baker, 554
U.S. 471, 485 n.5 (2008) (quoting C. Wright & A. Miller, Federal Practice
and Procedure § 2810.1 (2d ed. 1995)).
Order Denying Program Suppliers’ Motion for Rehearing . . . at 1, Distribution of Cable
Royalty Funds, Consolidated Proceeding Docket No. 14-CRB-0010-CD (2010-13) (Dec.
13, 2018).
Here, PTV is seeking the metaphorical “second bite at the apple.” In this regard,
it has not escaped the Judges’ notice that PTV does not meaningfully attempt to counter
the “second bite” problem – but rather simply avoids it. Perhaps that is because the
Judges explicitly did take note in the Initial Determination that Dr. Johnson had made this
precise claim. See Initial Determination at 13-14 (citing and quoting Johnson WRT ¶
79). Clearly, PTV’s rehearing argument regarding Adjustment C is – to say the least –
complicated by the fact that the Judges were fully aware of Dr. Johnson’s relevant
testimony – yet did not adopt that testimony in the Initial Determination.306

The Judges recalled Dr. Johnson’s testimony in this regard, even though it was not set forth expressly in PTV’s Proposed Findings of Fact or Conclusions of Law (or PTV’s replies to other parties’ post-hearing submissions). In fact, in both of its post-hearing filings regarding proposed factual findings, PTV only expressly referenced this issue in connection with CSOs retransmitting PTV + WGNA, and failed to argue for the wider application it now seeks via rehearing. See PTV PFF ¶¶ 60, 126; PTV RPFF 136 & n.188. That failure on PTV’s part alone would have sufficed for the Judges to have disregarded PTV’s argument. See 37 CFR 351.14 (“A party waives any objection to a provision in the determination unless the provision conflicts with a proposed finding of fact or conclusion of law filed by the party.”). Although PTV claims that “it was impossible to anticipate that the Judges would apply their Adjustment[] … C to Dr. Tyler’s sensitivity limited to Above Minimum Fee CSOs,” PTV Reply at 1, a crucial theme of Dr. Johnson’s testimony was that the Minimum Fee data should have been used en toto to establish value. Thus, it was incumbent upon PTV to make this point by including it explicitly in its post-hearing submission. But nonetheless the Judges, sua sponte, recalled, referenced, and quoted testimony as to this issue, rather than deem PTV’s upward adjustment argument to have been waived. However, the Judges did decline to credit Dr. Johnson’s testimony (as discussed in the following text), adopting instead the substantial 44% upward adjustment indicated by the testimony of JSC’s statistical expert, Mr. Harvey. PTV’s argument strikes the Judges as a fine example of chutzpah, or as Joint Respondents put it, “looking a gift horse in the mouth,” by characterizing only a 44% upward adjustment as “manifest injustice” and “clear error.” See Joint Response at 7.

In this vein, PTV also takes issue (when assuming arguendo the correctness of Mr. Harvey’s analysis) with the Judges’ setting of PTV’s Adjustment C share percentage increase at 44%, rather than setting the adjustment at 44.5%. PTV Motion at 5 n.4. The Judges disagree with PTV’s argument as to this issue. An agency has the discretion to truncate a value expressed in decimal form. See North Carolina v. E.P.A., 531 F.3d 896, 915-916 (D.C. Cir. 2008) (“[W]e cannot say that EPA's decision to truncate rather than round … was arbitrary. … Without a rule mandating any particular method, EPA is free to round or truncate the numbers it is comparing … as long as its choice is reasonable.”). Here, there was no regulation guiding the Judges. Moreover, given the uncertainties generated by PTV’s failures, as discussed elsewhere in this order, to proffer sufficiently credible evidence and to meet its evidentiary burdens regarding which PTV signals among the CSO systems were must-carry, multicast or subject to royalty indemnification – truncating the percentage to 44% continues to strike the Judges as a reasonable decision, and certainly not one that generated “manifest injustice” or “clear error,” as those standards are described in this order. (It should be noted that PTV has not argued on rehearing that the Judges should have rounded the percentage increase to 45%, rather than truncate the increase to 44%, nor did PTV argue that the Judges are bound by a mathematical convention to do so.)

To recount, these materials revealed “compelling” evidence of “potential specification searching and [of] dissembling” by the expert econometric witness on whose testimony the Judges had relied in the 2010-13 cable allocation proceeding (before serious questions were raised in the companion satellite proceeding). Initial Determination at 33. That prior testimony and modeling served as a starting point for Dr. Johnson’s econometric work in the present proceeding. Id. at 27. The Judges thus found in this proceeding that, inter alios, Dr. Johnson – in order to support his testimony – was “obligated,” yet failed, “to adequately address the impact of Dr. Crawford’s workpapers, as well as the assertion that they demonstrated he lied in his testimony in the prior proceeding.” Id. at 36.

Id. (“[S]tartlingly, Dr. Johnson testified that he never received the satellite case documents that SDC’s counsel produced to PTV’s counsel … or the [relevant] testimony … [from] the satellite proceeding that was designated as evidence [in the present proceeding] ….”).

A bona fide “‘consulting team’ of experts can be utilized by a party’s law firm, to allow for work product confidentiality in connection with the law firm’s evaluation of the facts.” Initial Determination at 38.

As made clear in the Initial Determination, the Judges had substantial problems
with regard to Dr. Johnson’s testimony and analyses, which should have made obvious
their unwillingness to credit his testimony on which PTV relies for its objection that the
Judges’ 44% Adjustment C in favor of PTV is too low. To make this point explicit, the
Judges recount their difficulties in connection with Dr. Johnson’s hearing testimony, as
expressed in the Initial Determination.
First, the Judges were troubled by Dr. Johnson’s reliance on the modeling of a
witness in a prior proceeding because the testimony and modeling of that witness had
been called into serious question. Initial Determination at 36.307 Second, and relatedly,
the Judges were stunned when Dr. Johnson claimed at the hearing that he had “never
received” the satellite case documents calling into question the modeling and testimony
on which Dr. Johnson had relied, which SDC’s counsel had produced (as voluntary
discovery) to PTV’s counsel (and to all counsel).308 Third, and also relatedly, PTV’s
counsel never volunteered whether it had in fact transmitted that important discovery to
Dr. Johnson, or whether PTV’s counsel had (intentionally or otherwise) not transmitted
that material. Initial Determination at 36 n.39. Thus, the Judges were unable to
determine whether the failure to consider and address this important evidence was the
fault of Dr. Johnson, PTV’s counsel, or both. For these three related reasons, the Judges
gave “diminished weight” to Dr. Johnson’s testimony. Id. at 38.
Fourth, as explained in the Initial Determination, the Judges were also “troubled”
that PTV appeared to have created two different “teams” within Dr. Johnson’s firm,
Edgeworth Economics (“Edgeworth”), in order to allow Edgeworth to use a so-called
“consulting team” which excluded Dr. Johnson, in order for PTV to provide him with
deniability about specification searching and to withhold discovery of such dubious
activity.309 More particularly, the Judges explained that, “when the ‘consulting team’ is
created within[] the same firm of economists who are also preparing testimony and
actually testifying, there is the risk that work by the ‘consulting’ team will be utilized as a
screening device for work that should have been undertaken by the ‘testifying’ team . . .
[and] the use of a ‘consulting’ team can allow a party to also cloak from discovery expert
work by claiming the protection of the work-product rule.” Id. In this regard, the Judges
took particular note that
an e-mail that was withheld from Dr. Johnson as “consulting” team material
contained a link to CDC distant signals with the caveat: “these data files are
being shared for consulting purposes only and should not be shared with
John”). It is difficult to fathom why raw data regarding distant signals would
be withheld from the testifying expert.
Initial Determination at 39 n.43.
Additional detailed facts only further undermined the credibility of PTV and Dr.
Johnson:
Moreover, the soundness of the “wall” between the “consulting” team and
the “testifying” team was questionable, given that the “consulting” team
was led by Drs. Michael Kheyfets and David Colino, but they also were the
senior members of the “testifying” team that reported to Dr. Johnson, along
with dual team members Dr. Stephanie Cheng and Esther Yan. …..
Additionally, when PTV first produced documents to SDC, it did not also
provide a privilege log describing the Edgeworth documents otherwise
withheld because of an assertion of a privilege relating to a consulting team.
(After SDC[’s] motion to compel, PTV provided a privilege log, but, after
[being ordered to produce the documents,] PTV produced virtually all of the
previously withheld material.)
Initial Determination at 39. The Judges thus determined that not only was there evidence
that PTV attempted to avoid discovery of its alleged specification searching, but that this
attempted concealment “serves to diminish the Judges’ reliance on the Johnson
Model ….” Id.
Fifth, when evaluating the substance of the work undertaken by Dr. Johnson, the
Judges were further concerned by the absence of “any sufficient basis in the record to
explain [the] correlation between sequential regression runs and the growth of PTV’s
allocation share,” and PTV’s failure to present a “sufficient basis to rebut SDC’s charge
that data changes should not consistently be correlated with the growth of PTV’s share
allocation, as opposed to a randomized effect on share percentages.” Id. Thus, the
Judges agreed with SDC’s economic expert, Dr. Daniel Rubinfeld, finding that Dr.
Johnson’s work demonstrated “an appearance … of practices that ran counter to sound
empirical research practice ….” Initial Determination at 39-40. For these reasons alone,

the Judges decided to “give reduced weight” to the work undertaken by Dr. Johnson on
behalf of PTV. Initial Determination at 40.
Sixth, the Judges were frustrated by PTV’s failure to produce important evidence
with regard to another issue. Although PTV claimed royalties for multicast programming
and must-carry stations, PTV failed to produce sufficient proof in that regard.310 As the
Initial Determination explains:
[T]here was evidence available to be produced by PTV, namely the PBS-NCTA agreement as well as the number of entities it represents that would
provide significant marketplace evidence …. But … PTV did not produce
either this agreement or the number of entities bound by it as evidence,
although its own expert witness testified as to some of the agreement’s
contents.
Thus, the Judges were deprived of full knowledge of the terms of
the agreement, the parties’ fulsome testimony as to the meaning of its
provisions and the number of entities signing on to the agreement.
Moreover, PTV opposed the admission of that agreement into evidence. …
Accordingly, the Judges … find that PTV bore, but failed to discharge, the
burdens of production and persuasion with regard to the details of the
agreement and the extent of its coverage.
Initial Determination at 53.
Regarding the “Must Carry” issue, PTV’s failure to carry its burdens of production and persuasion is especially instructive, because it is juxtaposed against the testimony of Mr. Harvey, as in the rehearing issue pertaining to Adjustment C. Mr.
Harvey identified 15.5% of PTV distant signals as having been retransmitted in
compliance with these must-carry rules. Initial Determination at 40. But, as the Judges
noted, “PTV takes issue with the entirety of Mr. Harvey’s approach to designating ‘must-carry’ stations.” Id. The Judges rejected PTV’s argument, chastising PTV for failing to
satisfy its burden of proof to provide affirmative evidence and for instead attempting to cast doubt on Mr. Harvey’s otherwise credible testimony and analysis.
310 “Must Carry” stations were those PTV stations which CSOs were legally obligated to transmit, potentially belying any assertion that the value of such stations was demonstrated by their carriage. See Initial Determination at 47-49; see also id. at 40, 42-43.
As the Initial Determination states:
The Judges agree with JSC and CTV, based on the case law cited by JSC,
that PTV, whose clients include the public television stations that are in fact
subject to must-carry requirements, bore the twin burdens of proof – the
burden of producing evidence and the burden of persuasion – regarding
which stations were subject to the must-carry provisions and which were
not. Further, because PTV is seeking a determination including must-carry
station data in the regression, those burdens are apportioned to PTV as a
matter of statute. See 5 U.S.C. 556(d).
But rather than produce such evidence or prove its significance, PTV
elected to attack Mr. Harvey’s attempt to estimate the number of must-carry
stations. Those attacks are insufficient. … Mr. Harvey engaged in a
reasonable attempt to estimate this number, which PTV could have set forth
in its submissions, but did not.
Initial Determination at 47 (emphases in original).311
Seventh, and finally, as noted at the outset of this discussion of PTV’s rehearing
request vis-à-vis Adjustment C, Dr. Johnson’s rebuttal testimony on which PTV relies
does not include a reference to documentation on which he relied to support that
testimony. The Judges are hesitant (to say the least) to grant rehearing based upon an
expert’s testimony when the party relying on that testimony fails to cite to any underlying
documentation of factual analysis or support for that opinion. Moreover, when the
Judges consider the absence of such documentation in the cumulative context of the assorted problems with PTV’s failures to meet its evidentiary burdens and Dr. Johnson’s lack of knowledge of critical facts and evidence (as cataloged supra), their reluctance to grant the “exceptional” section 803 relief of rehearing is reinforced.
311 PTV also questions the use of Mr. Harvey’s analysis because it identifies the number of “systems” (i.e., CSOs) that continued to retransmit a PTV signal after the WGNA conversion, rather than the total number of PTV stations retransmitted by these CSOs. PTV Motion at 5 n.4. The Judges do not agree with this criticism. Recall the problems (discussed supra) related to PTV’s failure to meet its evidentiary burdens related to “must carry” and multicast signals, as well as to indemnified transmissions. The Judges find it prudent to rely on Mr. Harvey’s “system” calculation, which is equivalent to establishing one PTV signal per CSO as retaining in the 2015-2017 post-WGNA era its pre-2014 value, as evidenced by its above-Minimum Fee carriage in that year. Utilizing PTV’s per station approach would require the Judges to assume that the retransmissions of all PTV stations in 2015-2017 were generating royalties, regardless of whether they were “must carry” or multicast signals, or whether they were subject to indemnification of any royalties due. As noted supra, the Judges declined to adopt PTV’s arguments regarding the number or percent of “must carry” stations (for which no net royalty obligation exists), because of PTV’s failure to meet its evidentiary burdens in those regards (a point unaddressed in the PTV Motion). As the D.C. Circuit has noted, the daunting factual nature of the statutory task of allocating royalties necessitates a measure of “rough justice,” which the Judges find to be well-administered as to this issue by making allocation decisions dependent in part on whether a party had met its evidentiary burden. See Initial Determination at 9 (and citations therein).
The foregoing analysis makes it clear that the Judges had – and continue to have –
serious questions regarding the credibility, reliability, and sufficiency of the evidence and
testimony put forth by PTV and Dr. Johnson. Each of the Judges’ findings and
conclusions in these multiple areas is sufficient grounds for the Judges’ election to rely
on the testimony and evidence provided by JSC’s expert statistician, Mr. Harvey, rather
than PTV’s Dr. Johnson, regarding the basis for, and size of, Adjustment C. Moreover,
when the foregoing seven points calling into question the testimony of Dr. Johnson and
PTV’s position are considered as a whole, the Judges’ decision to rely on Mr. Harvey’s
testimony instead of that of Dr. Johnson most certainly did not constitute an error, let
alone clear error that could serve as a basis for rehearing.
For these reasons, the Judges agree with the Joint Respondents that the Judges
acted within their discretion in making Adjustment C as set forth in the Initial
Determination.312, 313
312 PTV appears to implicitly argue that the “second bite at the apple” argument is not applicable because it did not know that the Judges would apply Dr. Johnson’s opinion in favor of applying the Minimum Fee royalty data as an adjustment (Adjustment C). PTV Motion at 1 (arguing it was “impossible to anticipate that the Judges would apply their Adjustment[] C to Dr. Tyler’s sensitivity limited to Above Minimum Fee CSOs.”). This argument is meritless. PTV argued emphatically for the Judges to utilize Minimum Fee royalty data to establish program values and allocation shares in this proceeding. The Judges did use Minimum Fee evidence in making Adjustment C in PTV’s favor – just not the Minimum Fee evidence that PTV prefers, nor as extensively as PTV had sought. As noted supra, the D.C. Circuit has held that the Judges are “not … strictly limited to choosing from among the proposals set forth by the parties” and, like all agencies, “have the authority to modify proposals set forth by the parties, or to suggest models of their own.” Johnson v. Copyright Royalty Bd., 969 F.3d 363, 381–82 (D.C. Cir. 2020). See also SoundExchange, Inc. v. Copyright Royalty Bd., 904 F.3d 41, 50–51, 57 (D.C. Cir. 2018) (upholding the Judges’ decision to modify a party’s proposed rates in light of the Judges’ application of the relevant statute); Ass’n of American Publishers, Inc. v. Governors of USPS, 485 F.2d 768, 773 (D.C. Cir. 1973) (when a rate-setting agency partially disregards two experts in connection with “suggested adjustments … [the] rate-making body may fashion its own adjustments within reasonable limits.”).
313 The Joint Respondents’ argument – that the PTV Motion as it relates to Adjustment C should be denied because the analysis of WGNA + PTV transmissions is a “supply-side” scenario and thus differentiated from PTV pairing with other signals – is moot in light of this order.
IV. CORRECTION OF TYPOGRAPHICAL AND ARITHMETIC ERRORS
The PTV Motion noted errors in the Adjustment B Table for 2014, observing that
“typographical errors result in total 2014 shares that do not equal 100%.” PTV Motion at
4 n.2. PTV argued that, in order to correct the 2014 shares, “Program Suppliers’ share should be changed from 28.8% to 26.8%, JSC’s share should be changed from 37.5% to 37.48%, and CTV’s share should be changed from 11.39% to 11.38%.” Id.314
The Judges have reviewed the Adjustment B calculations questioned by PTV and
agree that they are erroneous as a consequence of a typographical error. PTV’s proposed
corrected shares adjust for this error. The Judges grant the motion for rehearing regarding
the identified typographical errors, finding that there is a need to correct a clear error or
prevent manifest injustice. Having found the Motions for rehearing and related filings a
sufficient rehearing record from the participants, the Judges correct the typographical
errors for 2014.
Further, the Judges correct mathematical errors, not only in 2014 but in all years,
that affected the shares reported in the Adjustment B Table. PTV, JSC, and CTV note
that PTV’s share of 19.09% reported in the Adjustment B table for 2017 is in error.315
PTV Motion at 4 n.3; JSC Motion at 9 n.4; CTV Response to PTV Motion at 6. The
Judges grant the motion for rehearing regarding these arithmetic errors, finding that there
is a need to correct a clear error or prevent manifest injustice. Having found the Motions
for rehearing and related filings a sufficient rehearing record from the participants, the
Judges correct the arithmetic errors.316

314 When computing the allocation shares in the adjustment tables, the Judges necessarily rounded figures. When such rounding was applied, it was done consistently across parties and years. Due to rounding, the sum of allocation shares may not equal exactly 100% for a given year.
315 PTV and CTV describe the error as an arithmetic error.
316 The first arithmetic error corrected was in the calculation of the proportional increase to other claimants’ shares relating to the reduction in the PTV share due to the presence of “must-carry” stations. The second arithmetic error corrected was in the calculation of the PTV share for 2017 to account for this “must-carry” issue.
All of these corrections are applied in the Adjustment B Table below:317

Adjustment B Table

Year    Program Suppliers    JSC       CTV       PTV       SDC       CCG
2014    26.80%               37.48%    11.38%    13.36%    4.33%     6.55%
2015    47.67%               2.44%     13.14%    11.78%    11.28%    13.70%
2016    40.75%               1.69%     17.32%    15.32%    10.81%    14.12%
2017    44.07%               0.67%     13.23%    15.96%    10.41%    15.66%

317 The Must Carry adjustment in Bennett WRT fig. 52 was based on the PTV shares of all CSO royalties, whereas the Judges are applying this adjustment to the shares of CSO royalties attributable to shares generated by CSOs paying above the Minimum Fee (subject to the prior adjustment for CCG, discussed supra). So, for 2014, the percentage point adjustment to the PTV share is the percentage point adjustment in Bennett WRT fig. 52. For 2015-2017, the percentage point adjustment to the PTV share is calculated for each year by: (1) finding the percentage of PTV shares reflected by the PTV shares from Tyler WRT fig. 6.3 ÷ PTV’s shares from Tyler WRT fig. 3.2; (2) multiplying that percentage by the percentage point adjustment in Bennett WRT fig. 52; and (3) subtracting that product from the PTV share from the table above. The shares of the other claimants are adjusted upward by: (1) calculating the percentage each category represents of all the categories’ shares except PTV; (2) multiplying each percentage by the Bennett Must Carry adjustment (reduced as set forth above); and (3) adding that product to the shares of each claimant category.
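For readers who wish to trace the arithmetic, the steps described in note 317 can be expressed as a short calculation. The following Python sketch is purely illustrative and is not the Judges’ worksheet; the function name and the sample inputs are hypothetical stand-ins for the Tyler WRT and Bennett WRT exhibit figures.

# Illustrative sketch only; the numeric inputs below are hypothetical
# stand-ins for the Tyler WRT and Bennett WRT figures cited in note 317.

def must_carry_adjustment(shares, ptv_above_min_fee, ptv_all_csos, bennett_fig52):
    """Apply the 2015-2017 form of the Must Carry adjustment:
    (1) scale the Bennett fig. 52 percentage-point adjustment by the ratio of
        PTV's above-Minimum-Fee share (Tyler fig. 6.3) to its all-CSO share
        (Tyler fig. 3.2);
    (2) subtract the scaled adjustment from PTV's share; and
    (3) add the same number of points back to the other claimant categories
        in proportion to their shares of the non-PTV total.
    `shares` maps claimant names (including "PTV") to percentage-point shares.
    """
    scaled = (ptv_above_min_fee / ptv_all_csos) * bennett_fig52   # step (1)
    adjusted = dict(shares)
    adjusted["PTV"] = shares["PTV"] - scaled                      # step (2)
    non_ptv_total = sum(v for k, v in shares.items() if k != "PTV")
    for claimant, share in shares.items():                        # step (3)
        if claimant != "PTV":
            adjusted[claimant] = share + (share / non_ptv_total) * scaled
    return adjusted

# Hypothetical example: a 2.0-point Bennett adjustment, scaled by 8.0/10.0.
example = {"Program Suppliers": 46.0, "JSC": 2.5, "CTV": 13.5,
           "PTV": 13.5, "SDC": 11.0, "CCG": 13.5}
print(must_carry_adjustment(example, ptv_above_min_fee=8.0,
                            ptv_all_csos=10.0, bennett_fig52=2.0))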

The Judges recalculate the Adjustment C Table to reflect the corrections to the
Adjustment B Table:
Adjustment C Table

Year    Program Suppliers    JSC      CTV       PTV       SDC       CCG
2015    44.87%               2.30%    12.37%    16.96%    10.62%    12.90%
2016    37.51%               1.56%    15.94%    22.06%    9.95%     13.00%
2017    40.39%               0.61%    12.12%    22.98%    9.54%     14.35%

The Judges recalculated the shares of the other five claimant categories by: (1) calculating the percentage each category represents of all the categories’ shares except PTV; (2) multiplying each percentage by the increase in the PTV share generated by adjusting to reflect WTP of CSOs that maintained PTV carriage after WGNA conversion; and (3) subtracting that product from the shares of each claimant category.
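The proportional reduction described in the preceding note is the mirror image of the redistribution illustrated after note 317. The sketch below is likewise illustrative only, not the Judges’ worksheet; as a check, applying it to the corrected 2017 Adjustment B shares with the PTV increase implied by the two tables (22.98% less 15.96%, or 7.02 percentage points) appears to reproduce the 2017 Adjustment C shares to within rounding.

# Illustrative sketch only.

def apply_ptv_increase(shares, ptv_increase):
    """Increase PTV's share by `ptv_increase` percentage points and reduce the
    other five claimant categories in proportion to their shares of the
    non-PTV total, so the year's allocation keeps the same overall sum."""
    non_ptv_total = sum(v for k, v in shares.items() if k != "PTV")
    adjusted = {}
    for claimant, share in shares.items():
        if claimant == "PTV":
            adjusted[claimant] = share + ptv_increase
        else:
            adjusted[claimant] = share - (share / non_ptv_total) * ptv_increase
    return adjusted

# Corrected 2017 Adjustment B shares (from the table above) with a 7.02-point
# PTV increase (22.98% - 15.96%).
b_2017 = {"Program Suppliers": 44.07, "JSC": 0.67, "CTV": 13.23,
          "PTV": 15.96, "SDC": 10.41, "CCG": 15.66}
print({k: round(v, 2) for k, v in apply_ptv_increase(b_2017, 7.02).items()})
# -> roughly {'Program Suppliers': 40.39, 'JSC': 0.61, 'CTV': 12.12,
#             'PTV': 22.98, 'SDC': 9.54, 'CCG': 14.35}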

As discussed in the Initial Determination, the Judges allocated shares of the Basic Fund to each party based on their review and weighting of the record evidence. ID at 197-198. The corrected Basic Fund and 3.75% Fund allocations incorporate the corrections discussed above.
To the extent that corrections set forth in this Order might be construed to reach beyond those identified in the Motions for rehearing or the rehearing authority in 17 U.S.C. 803(c)(2), the Judges also make such corrections under their authority to correct technical or clerical errors in 17 U.S.C. 803(c)(4). For this reason, the Judges set forth the analysis herein also as a written addendum to the Initial Determination, which is distributed to the participants of the proceeding via this Order and will be published as part of the Final Determination, pursuant to 17 U.S.C. 803(c)(4).
For each year, the share allocations for the Basic Fund did not sum to exactly 100%. In 2014, the allocations summed to marginally greater than 100 percent and, in 2015-2017, marginally less than 100 percent. The Judges therefore adjusted the allocated shares proportionally to achieve an aggregate allocation of 100%; in 2014 this process required a modest downward adjustment in shares and, in 2015-2017, a modest upward adjustment.
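The proportional adjustment described above amounts to rescaling each claimant’s share by a common factor of 100 divided by the year’s total. The following sketch is illustrative only; the input shares are hypothetical and are not the figures in the tables below.

# Illustrative sketch only; the input shares below are hypothetical.

def rescale_to_100(shares):
    """Scale every claimant's share by the same factor, 100 / (current total),
    so that the year's allocations sum to 100%, then round to two decimal
    places for presentation."""
    total = sum(shares.values())
    return {claimant: round(share * 100.0 / total, 2)
            for claimant, share in shares.items()}

# A hypothetical year whose raw shares sum to slightly more than 100%,
# requiring the modest downward adjustment described above.
raw = {"CCG": 6.20, "CTV": 20.60, "JSC": 36.20, "Program Suppliers": 21.25,
       "PTV": 11.10, "SDC": 4.87}
print(round(sum(raw.values()), 2))   # 100.22
print(rescale_to_100(raw))           # each share scaled down by 100/100.22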
The resulting corrected Basic Fund and 3.75% Fund318 allocations are as follows:

Basic Fund Royalty Allocations

                      2014      2015      2016      2017
CCG                   6.19%     14.59%    14.60%    15.77%
CTV                   20.55%    19.78%    17.36%    17.50%
JSC                   36.13%    11.42%    10.72%    12.36%
Program Suppliers     21.21%    28.29%    25.53%    23.29%
PTV                   11.07%    19.18%    24.78%    25.25%
SDC                   4.85%     6.74%     7.01%     5.83%

3.75% Fund Royalty Allocations

                      2014      2015      2016      2017
CCG                   6.96%     18.05%    19.41%    21.10%
CTV                   23.11%    24.48%    23.08%    23.41%
JSC                   40.63%    14.13%    14.25%    16.53%
Program Suppliers     23.85%    35.00%    33.94%    31.16%
SDC                   5.45%     8.34%     9.32%     7.80%

318 For years 2015 and 2017, the calculated allocation shares did not equal 100%. In the case of 2015, the total calculated shares were just below 100%. To achieve the full 100%, the Judges reviewed the results and provided an increase to the claimant whose share was the closest to being rounded up at the second decimal place. In 2017, the total calculated shares were just above 100% and the Judges did not round up the claimant whose share was the closest to not being rounded up at the second decimal place to achieve a 100% allocation.
V. RULING AND ORDER
For the foregoing reasons, PTV’s motion for rehearing is GRANTED IN PART
and DENIED IN PART and JSC’s motion for rehearing is DENIED.
The affected parties shall file a joint proposed redacted public version of this
Order for public viewing within TEN DAYS.

SO ORDERED.
/s/
David P. Shaw
Chief Copyright Royalty Judge
DATED: March 21, 2024

[FR Doc. 2024-13597 Filed: 6/27/2024 8:45 am; Publication Date: 6/28/2024]