“The studies are absolutely critical for deregulation”: Q&A With Gabriel Rossman on the FCC’s Suspended Newsroom Study

Federal Communications Commissioner Ajit Pai set off a storm of controversy this week with a Wall Street Journal op-ed warning that the Federal Communications Commission (FCC) was planning an intrusive study of newsroom decision-making—a study that could pave the way for the agency to pressure news organizations about their editorial decisions. On Friday afternoon, the study was suspended, with the FCC promising to revise its methods.

Critics said the decision vindicated their concerns. But is
there a possible downside for those who favor a less regulated
communications industry?

Yesterday afternoon, as the agency’s decision to put the study on hold was being announced, I corresponded over email with University of California, Los Angeles sociology professor Gabriel Rossman, who studies the diffusion of culture and mass media, about the nature and history of the FCC’s proposed study and others like it. Rossman argued that the study was not only fairly typical for the agency, but that for more than a decade, studies like it have actually resulted in deregulation of the communications marketplace. A lightly edited version of our correspondence follows:

Suderman: The FCC said it was planning a study in which
it would go into newsrooms and question editorial staff about their
news choices. As a result, a lot of people seem worried about
potential First Amendment abuses—possibly even the return of the
Fairness Doctrine. How concerned should we be about the free speech
implications here? 
 

Rossman: The Pai op-ed in The Wall Street Journal and much of the discourse around it assume that this will work out as something like, “This story you wrote seems to be insufficiently supportive of the president’s agenda; it would be a shame if you had to explain this at your next license renewal hearing.”

Having read the Pai op-ed, the public notice, and a few other stories, and being generally familiar with how communications scholarship and the FCC’s Media Bureau work, I think the FCC is proposing an ethnography. This is a standard part of the social science methodological repertoire, used by government researchers in other agencies and by faculty at schools of communication. So your mental image should be an anthropologist, not a party commissar.

As to concerns about free speech or the fairness doctrine, I
don’t really see any evidence for that. Although I don’t blame
people for distrusting the FCC’s respect for its statutory
authority given the way in which they attempted to implement net
neutrality, the statutory language is pretty clear that these
reviews are about market structure and not about any more direct
interference in press content decisions.

The basic idea of this kind of review is to study how media markets work and what sorts of markets tend to produce content consistent with the FCC’s goals, and then to use this to shape merger guidelines that push markets towards the kinds of structures that tend to produce that content. Generally speaking, the FCC’s goals are “localism” and “diversity,” which are certainly debatable but more or less innocuous. Note that the FCC generally uses “diversity” with a lower-case “d” to mean a variety of viewpoints and a variety of content.

For instance, I do a lot of work on pop music radio, and in that context the FCC understands diversity to mean a lot of different “formats” or genres being represented. Now you could make an argument that the FCC should be indifferent to this even if the market were to sort itself out so that every station plays “Everything is Awesome” on a continuous loop. But it’s not exactly at the top of anyone’s list of signs of tyranny that the state prefers that radio markets have stations that play country, and rock, and hip hop, and whatever.

The FCC’s emphasis on diversity superficially resembles the fairness doctrine, but there’s an enormous difference between saying that media outlets owned by big (or small) companies tend naturally to produce a variety of viewpoints, and therefore using merger guidelines to push the market towards having more big (or more small) companies, and simply saying that each media outlet must have a variety of viewpoints and that if it doesn’t we’ll fine it or yank its license. You may or may not be enthusiastic about antitrust policy that is intended to reach goals about content, but even so, the goals about content are fairly innocuous and the means to achieve them are very indirect. So worst case, this is central planning, but it’s a big stretch to call it censorship.

Suderman: Put this in context. Has the FCC done this
kind of study before? What were the results? 

Rossman: If you want to think in terms of court cases, this sort of study and review involves issues similar to Associated Press v. United States (1945), where the Supreme Court ruled
that the government could require the AP to license its content to
new market entrants. It bears almost no resemblance to the infamous Red Lion v. FCC (1969) case that allowed direct government
interference in content. I think Red Lion was wrongly
decided and the facts of the case (which involved Democratic party
proxies harassing enemies of LBJ) exemplify the worst abuses of the
fairness doctrine and so I’m glad that the Reagan FCC suspended the
policy.

However, I’m not worried about it coming back, since even if the FCC or Congress did try to reinstate the fairness doctrine, the current Supreme Court would almost certainly overturn it on the grounds that even if the “scarcity” rationale was valid in the 1960s, it
has since been rendered moot by technological improvements that
allow more efficient use of spectrum (digital FM, satellite radio,
streaming audio over 4G) and substitutes for spectrum
(podcasting).

The FCC frequently does studies of media markets and their impact on content. You can find a lot of that work here or here. In general, these studies are quantitative studies of market structure and media content, but they treat the media outlet’s internal operations as a black box. The one thing that is a bit new is the ethnographic component.

There are lots of media ethnographers in academia (one of my
favorites is
Pablo Boczkowski
), so this is an established mode of research,
but as far as I know—and I could be wrong—the FCC hasn’t done much
of this kind of work itself.

I’m not familiar with why the FCC wants to do qualitative work, but it makes sense. It’s generally a legitimate research technique, and it’s well-suited to evaluating one of the main concerns people have about media consolidation — that it can lead to top-down control by publishers. As such, interviewing journalists about their experiences with editorial guidance and control will help show whether or not this is a realistic concern.

Most of the prior FCC work I’m familiar with is quantitative work. They study media content, they study structural conditions, and then they correlate the two and use the results to shape policies that are not facially content-based. This could be caricatured to sound ominous, but it’s usually stuff like this: Do TV stations and newspapers owned by the same company tend to show the same political biases? (Answer: no, their biases are not correlated, which implies it’s OK to relax “one to a market.”) Or when consolidation increases in the radio industry, does format diversity increase or decrease? (Answer: it increases, which implies that deregulation is fine.) As you can see from these examples, this sort of study and review often has the policy result of deregulation.
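To make the flavor of that quantitative work concrete, here is a minimal sketch of the kind of analysis being described: compute a format-diversity measure and an ownership-concentration measure for each radio market, then correlate the two. The station records, column layout, and choice of measures (Shannon entropy for format diversity, a Herfindahl-Hirschman index for consolidation) are illustrative assumptions for the sketch, not the FCC’s actual data or methodology.

    # Hypothetical sketch: correlate ownership consolidation with format
    # diversity across radio markets. Data and measures are invented for
    # illustration; this is not the FCC's code or dataset.
    from collections import Counter
    from math import log

    # Each record is (market, owner, format) -- invented sample data.
    stations = [
        ("Springfield", "AcmeMedia", "country"),
        ("Springfield", "AcmeMedia", "rock"),
        ("Springfield", "BetaBroadcast", "hip hop"),
        ("Shelbyville", "AcmeMedia", "country"),
        ("Shelbyville", "AcmeMedia", "country"),
        ("Shelbyville", "AcmeMedia", "talk"),
        ("Capital City", "BetaBroadcast", "rock"),
        ("Capital City", "GammaRadio", "talk"),
        ("Capital City", "DeltaAudio", "country"),
    ]

    def hhi(owners):
        # Herfindahl-Hirschman index of ownership shares (0 to 10,000).
        counts = Counter(owners)
        total = sum(counts.values())
        return sum((100.0 * c / total) ** 2 for c in counts.values())

    def format_entropy(formats):
        # Shannon entropy of format shares; higher means more format diversity.
        counts = Counter(formats)
        total = sum(counts.values())
        return -sum((c / total) * log(c / total) for c in counts.values())

    def pearson(xs, ys):
        # Plain Pearson correlation, no external dependencies.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else float("nan")

    markets = sorted({m for m, _, _ in stations})
    consolidation = [hhi([o for m, o, _ in stations if m == mk]) for mk in markets]
    diversity = [format_entropy([f for m, _, f in stations if m == mk]) for mk in markets]

    for mk, c, d in zip(markets, consolidation, diversity):
        print(f"{mk}: ownership HHI = {c:.0f}, format entropy = {d:.2f}")
    print("consolidation vs. diversity correlation:", round(pearson(consolidation, diversity), 2))

In a real study the data would cover thousands of stations over many years and the model would control for market size and other confounders, but the basic move is the same: measure structure, measure content, and see how the two covary before writing rules.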

Suderman: You say these studies have typically informed
deregulation. How important are they in those decisions? Would we
end up with a more regulated market in the absence of this sort of
work? 

Rossman: The studies are absolutely critical for deregulation. The FCC only has statutory authority to deregulate consolidation rules if it has studies justifying the deregulation. And in fact, in Prometheus v. FCC, anti-consolidation activists were able to get an injunction from the 3rd Circuit because they persuaded the court that the FCC’s proposed regulatory framework (the “diversity index,” which in effect was a relaxation of “one to a market”) was not closely enough related to the FCC’s studies. By inference, if the FCC proposed future deregulation without any studies at all to justify it, there is no way it would survive a court challenge.

Suderman: Is there a reason to be concerned simply about
the fact that the FCC is collecting all this information—even if
it’s intended as just a data-gathering exercise? 

Rossman: I rather doubt it. I’m not aware of previous cases
where anyone has used FCC quantitative data about a particular
outlet to obstruct a merger or license renewal and I don’t see any
reason to expect it would be any different for the qualitative
data. These media studies are about informing rules, not making
particular decisions in implementing those rules.

Suderman: If these sorts of studies are so common, why
did this one get so much attention?

Rossman: I think part of it is that it’s using ethnographic methodology, which can be caricatured as government agents questioning newsroom decisions, whereas prior research has been much more passive, collecting data from media content or from almanacs and databases summarizing media content. I also think part of it is the entrepreneurship of people who want to make an issue of it. As Ari Adut has shown in his reading of things like the Oscar Wilde case, scandals don’t happen directly because somebody did something scandalous, but because there is an accuser dedicated to making an issue out of it. In recent memory, the most familiar case of this is the Gingrich Congress and Ken Starr successfully pushing Clinton on his womanizing, whereas a few decades earlier JFK got a pass on his even more egregious behavior.

Suderman: The FCC has suspended the study, pending a
redesign. Any guesses as to what the agency might change, or how it
might affect results? 

Rossman: I don’t have any inside knowledge on this, but my best guess is that they’ll eliminate or dramatically restrain the qualitative component of the study and go back to their traditional approach of treating the media outlet as a black box. This fundamentally changes the research design from a study of process to one of outcomes. It’s not clear how it would affect the bottom-line results, but the style of scholarship is radically different. It’s a bit like asking how it would have changed the results if Clifford Geertz could not attend a Balinese cockfight but could only count the dead chickens in dozens of Balinese villages.

Suderman: You seem to think that critics of the study overreacted. Even so, is there maybe some value to overreacting when the intent is to protect free speech rights? It seems like it’s not unreasonable to be hyper-vigilant about this. How should folks who care about freedom of the press think about the FCC and its work going forward? What should they be watching out for—and what should they worry less about?

Rossman: Although it’s not a particular area of my academic expertise, I care a lot about freedom of the press, and if I thought this was actually related to anything like reinstating the fairness doctrine, I would be militantly and vocally opposed to it.

As far as threats to a free press from the FCC, I have mixed feelings about net neutrality, since I’m not sure what I fear more: Comcast or a state that second-guesses Comcast. (My Straussian reading of Wu’s Master Switch is that regulatory capture is a pervasive problem in telecom, which implies that we should be skeptical of telecom regulation.) It particularly bothers me how the FCC tried to implement net neutrality by applying common-carrier-type rules to “information services,” as this opens the door to “search neutrality,” which is a profoundly intrusive concept.

I also worry a bit about more ad hoc actions by local politicians calling up outlets and demanding they explain themselves for various insensitivities. Probably the biggest threat to free speech is a shift to a permission culture driven by fears about piracy and by other attempts to protect and extend intellectual property.

If you want to publish an op-ed saying the president was negligent over Benghazi, no ethnographers working as subcontractors for the FCC are going to do anything to stop you. However, try making a documentary and getting the rights cleared for the various incidental music appearing in the background. Even after the SOPA backlash, there are still proposals (mostly through trade diplomacy) to tighten the DMCA in a pro-rightsholder direction and to extend copyright terms beyond even the absurdities of the Bono Act. That sort of thing scares me a lot more than anything the FCC is likely to do.

Overall, my hope and expectation for the future is one in which broadcasting becomes less important and IP-based content becomes more important. Partly I just want the spectrum freed up for higher-valued uses (and the Obama administration has been good on this), but another reason I’d like to see a shift away from broadcasting is precisely that broadcasting policy traditionally has by far the most intrusive content-based regulations. The term “public airwaves” is basically throat-clearing for a censorship pitch, so the less we rely on the public airwaves the better. Fans of a free press rightly lament the Red Lion decision, but what everybody forgets is that five years later a case involving a newspaper but otherwise having almost identical facts was decided the other way in Miami Herald v. Tornillo. That is, the same court that thought concepts like “scarcity” and “licensee in the public interest” justified intrusive content-based regulation for broadcasting unanimously laughed at such claims for non-broadcasting media. Likewise, nobody would dream of requiring every website to carry a quota of educational content (Cass Sunstein notwithstanding), and nobody will ever get the fairness doctrine applied to your website or print edition.

