Some Notes on Balaji Srinivasan’s Talk on the Future of Media

A few weeks ago Balaji Srinivasan was featured in a virtual meetup hosted by Joshua Fox. Balaji gave a talk on decentralized alternatives to legacy media, with a definite overtone of New York Times delenda est. It was a very good talk. I feel the need to get my head around what he said, and I often find the “5th grade book report” approach of summarizing something to be highly useful in this regard. So this post is just me going over some of his arguments and predictions.

Of Oracles and Advocates

Balaji imagines a future where there is a separation between truth aggregation and narrative in journalism. That is, he predicts that mechanisms will arise that reliably correlate with reality, expressing appropriate degrees of uncertainty. On top of these mechanisms, either decentralized contractors or eventually text models like GPT-3 will apply a layer of narrative gloss, obsoleting the publisher and maybe even the journalists altogether. He refers to this class of mechanisms as “feeds/oracles” and to the people or algorithms that apply the narrative gloss as “advocates.”

He mentions financial and sports journalism as prototypical examples where such a separation exists today. Many such stories are largely just raw scores or stock prices converted rather mechanically (whether by a human or an algorithm) into narrative form. Narrative Science, a company with a rather old-fashioned template-based text generator, is writing millions of such stories every day using this feed-plus-narrative-gloss approach. He has a great line about this in the talk:

Stock market stories are just text wrappers around Bloomberg data, sports stories just text wrappers around scores, and political stories are just text wrappers around tweets.

I particularly like “political stories are just text wrappers around tweets”: true, discomfiting, and amusing all at once.
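
To make the feed-plus-gloss pattern concrete, here is a toy sketch of the template approach; the feed record format and the phrasing rules are my own inventions, not anything from the talk or from Narrative Science.

```python
# Toy "advocate": a thin template layer over a raw score feed, in the
# spirit of the feed + narrative gloss pattern described above. The
# feed record format and templates are invented for illustration.
feed = {"home": "Lakers", "away": "Celtics",
        "home_score": 112, "away_score": 104}

def gloss(game):
    # Pick winner/loser keys from the raw numbers.
    winner, loser = (("home", "away")
                     if game["home_score"] > game["away_score"]
                     else ("away", "home"))
    margin = abs(game["home_score"] - game["away_score"])
    verb = "edged past" if margin <= 5 else "beat"
    return (f"The {game[winner]} {verb} the {game[loser]} "
            f"{game[winner + '_score']}-{game[loser + '_score']}.")

print(gloss(feed))  # The Lakers beat the Celtics 112-104.
```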

After introducing the distinction between “feeds/oracles” and “advocates,” he described the “ledger of record,” his term for the generalized space of all active blockchains.

To place my possibly unjustified biases on the table: ever since reading the white papers of some of the highest-market-cap ERC20 tokens, I have been pretty skeptical of cryptocurrency. I will refrain from writing the three-paragraph rant that would be required to adequately describe the amount of stupid present in some of those white papers, but suffice it to say it turned me off the whole scene.

But despite my distaste for the stuff, a good cook can make a fine meal of the most exotic ingredients, and Balaji waxes eloquent on the value of decentralized ledgers. The main value-adds he mentions are payments and trustless timestamps, particularly the role of timestamps in allowing pseudonyms to accrue reputation. His slogan for this is “first payments, then truth.”

One easy thing you can prove with a blockchain is that a particular string was signed with a particular key and existed at a particular time. By hashing the string, you can do this without revealing its contents. One can imagine a pseudonymous data aggregator establishing a reputation by timestamping facts in this way before they become commonly known, and then selling access to a stream of such data.
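
A minimal sketch of that commit-and-reveal idea, assuming the digest gets anchored in some blockchain transaction (for example via Bitcoin's OP_RETURN); the signing step is omitted for brevity and the claim text is made up:

```python
import hashlib
import json
import time

# The claim the aggregator wants provable priority on, kept secret
# for now. The claim text is invented for illustration.
claim = b"Company X will miss Q3 earnings by more than 10%"

# Commit: publish only the hash, e.g., inside a blockchain
# transaction. The chain timestamps the digest without revealing
# the contents.
digest = hashlib.sha256(claim).hexdigest()
print(json.dumps({"digest": digest, "committed_at": int(time.time())}))

# Reveal (later): anyone can check the disclosed claim against the
# digest that was timestamped on-chain.
assert hashlib.sha256(claim).hexdigest() == digest
```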

One problem that pops up in my head is that it seems like you would end up with a lot of Baltimore stockbrokers (the scam where you mail bullish predictions to half your marks and bearish ones to the other half, then keep working the lucky subset until a few of them think you are infallible). Maybe this would not be a problem, though, as once you are aware of a data aggregator you are immune to such selection effects going forward. Time is not on the side of a Baltimore stockbroker, so maybe prominence would screen off such people. But these types of selection effects confuse my squib brain and my intuitions here are not very clear. I wish I had asked him about the Baltimore stockbroker problem during the Q&A.
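
A quick simulation of the worry; the parameters are arbitrary:

```python
import random

# Baltimore stockbroker dynamics: start with many pseudonyms making
# coin-flip predictions and see who survives with a perfect record.
# Selection alone manufactures an apparent oracle.
random.seed(0)
pseudonyms, rounds = 1024, 10

survivors = pseudonyms
for _ in range(rounds):
    # Each surviving pseudonym calls a fair coin; roughly half are right.
    survivors = sum(random.random() < 0.5 for _ in range(survivors))

print(f"{survivors} of {pseudonyms} pseudonyms are 'perfect' after "
      f"{rounds} rounds, by luck alone")  # expected: 1024 / 2**10 = 1
```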

Regardless, a plain-text public feed does not have this problem and will probably be the most common form, especially if it is not pseudonymous. If it is pseudonymous, there is always the temptation to defect, especially if your feeds are used by smart contracts to settle various bets. I know Augur and other projects are working on solutions to this, and his opinion on the efficacy of such mechanisms is another question I regret not asking.

My impression so far is that there is not much activity on decentralized prediction markets, despite their having existed for years. Perhaps this is related to it being provably irrational to participate in an unsubsidized prediction market (the Milgrom–Stokey no-trade theorem).

Subsidizing a prediction market with inflation seems like it could have been a good idea for Bitcoin or Ethereum, but would be hard to implement now. In general, inflation strikes me as an ideal way to subsidize public goods (such as information aggregation) and I think it is a big lost opportunity that Ethereum and Bitcoin use it exclusively to fund security. It’s a shame inflation is such a dirty word among the blockchain crowd, as it is ironically the most powerful tool in their arsenal.
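
The talk did not go into mechanism details, but for concreteness, here is a sketch of one standard subsidy design, Hanson's logarithmic market scoring rule (LMSR). The sponsor's worst-case loss is capped at b · ln(N) for a market with N outcomes, so an inflation budget could fund exactly that bound. The parameters below are arbitrary.

```python
import math

# Hanson's logarithmic market scoring rule (LMSR): one standard way
# to subsidize a prediction market. The sponsor's worst-case loss is
# bounded by b * ln(num_outcomes), so the subsidy is known up front.

def lmsr_cost(q, b):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i, i.e., its implied probability."""
    total = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / total

b = 100.0          # liquidity parameter: the subsidy knob
q = [0.0, 0.0]     # shares outstanding in a fresh binary market

print(f"max subsidy needed: {b * math.log(len(q)):.2f}")  # b*ln(2) ~ 69.31

# A trader buys 50 shares of outcome 0, paying the cost difference.
new_q = [50.0, 0.0]
paid = lmsr_cost(new_q, b) - lmsr_cost(q, b)
print(f"trader pays {paid:.2f}; outcome 0 now priced at "
      f"{lmsr_price(new_q, b, 0):.3f}")  # ~28.11 at price ~0.622
```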

Advocates or Agents?

He talked about the incentives for citizen journalists, one of which is a notion of duty. One problem with duty as a sole incentive is that a pathological means of amplifying one’s sense of duty is indoctrination into an ideology. And I wonder whether, in a world where duty is one of the few remaining incentives for journalism, this will only amplify ideologues of various stripes.

I would prefer a sort of “mechanisms all the way up” approach, and (as mentioned) Balaji speculated about a means to do this. It certainly seems science fictional now, but the idea of prediction markets, centralized or decentralized, solidifying a sort of agreed-upon ontology of the present and future is highly appealing to me.

Scott Alexander describes this potential well here:

A democratic vote among the scientific establishment is insufficient to settle these topics. The most important problem is that it gives massive power to the people who determine who gets to be part of “the scientific establishment”. … So not having any Schelling point – being hopelessly confused about the legitimacy of academic ideas – sucks. But a straight democratic vote of academics would also suck and be potentially unfair.

Prediction markets avoid these problems. There is no question of who the experts are: anyone can invest in a prediction market. There’s no question of special interests taking it over; this just distributes free money to more honest investors. Not only do they escape real bias, but more importantly they escape perceived bias. It is breathtakingly beautiful how impossible it is to rail that a prediction market is the tool of the liberal media or whatever. …

Nate Silver might do better than a prediction market, I don’t know. But Nate Silver is not a Schelling point. Nobody chose him as Official Statistics Guy via a fair process. And if someone objected to his beliefs, they could accuse him of bias and he would have no recourse until it was too late. If a prediction market is almost as good as Nate, and it is also unbiased and impossible to accuse of bias, we have our Schelling point. …

In Balaji’s vision, this Schelling point would be reified in the “ledger of record” mentioned earlier.

A humorous thought I had: if we do get an effective, agreed-upon method of selecting policies, then a utilitarian could have their GPT-N bot rationalize these policies to them through a utilitarian lens, while others could have their GPT-N rationalize the same policies as what Marx truly intended, the culmination of the non-aggression principle, the obvious result of Kantian universality, or the culmination of neo-liberalism. A recipe for Utopia if I ever heard one!

Balaji addressed the obvious objection that most news consumption is largely about affiliation rather than truth-seeking, saying that those people who want truth will have an incentive to use the best means available even if most people prefer a circus. And this should undoubtedly be an improvement over what we have today.

After the talk, Balaji hung out for a bit in the breakout rooms with our regulars.

I won’t go too far into the details here, but he mentioned this on Twitter so I think it is safe to share: I made a claim about the inability to transfer reputation between pseudonyms, and within about twenty seconds he came up with a scheme, using known cryptographic primitives, that partially bypassed my objections. He pointed out that things like Reddit and Stack Overflow karma are fungible, and so can be transferred to pseudonyms without unmasking them by using something like Zcash. I did a little Googling afterward, and it appears to be an entirely novel idea that seems likely to work; it just popped into his head before I had finished making my point.

I am still skeptical about pseudonyms being useful given the 33-bit problem (it takes only about 33 bits of identifying information to single out one person among eight billion), but it was impressive.
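
The arithmetic behind the name, with made-up entropy figures for the attributes (illustrative guesses, not measurements):

```python
import math

# The "33 bit problem": ~33 bits of information suffice to single
# out one person among roughly eight billion.
population = 8_000_000_000
print(f"{math.log2(population):.1f} bits")  # ~32.9

# Every independent attribute an observer learns about a pseudonym
# shrinks its anonymity set. Illustrative guesses, not measurements:
leaked_bits = {"timezone": 3.0, "writing style": 10.0,
               "posting schedule": 5.0}
remaining = math.log2(population) - sum(leaked_bits.values())
print(f"~{remaining:.0f} bits of anonymity left")  # ~15
```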

It is quite plausible that my interpretations of his arguments are much weaker than his actual arguments, so to get the pure stuff, sign up for his mailing list at https://balajis.com/signup/, where he will be releasing a video of a more polished version of the same talk.

A Brief Explanation of Egan’s Dust Theory

Permutation City is a great novel with a very interesting philosophical argument embedded inside it. Many people don’t want to read the whole novel, so here is my short explanation of Egan’s Dust Theory.

If you think experience is simulatable, that is, if you buy computational or “patternist” theories of identity, there are some very strange implications that are not commonly brought up.

Because computation is sort of subjective, just a set of relations that can be usefully interpreted as representing a function, this seems to imply that the function you identify with can be interpreted as being simulated by any sufficiently complex set of relations.

That is, for any random set of relations with enough complexity to represent a human mind, Egan claims, there exists an exotic encoding scheme that can interpret this set of relations as representing any arbitrary function, including the function that you would call “I”.
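
Here is a toy version of the move Egan is making. The “exotic encoding” below is just a lookup table, which is the point: all of the structure lives in the interpretation and none of it in the dust.

```python
import os

# A tiny deterministic "mind": a few states of an arbitrary rule.
def tiny_computation(steps):
    state, history = 0, []
    for _ in range(steps):
        state = (3 * state + 1) % 17  # arbitrary update rule
        history.append(state)
    return history

target = tiny_computation(8)

# The "dust": bytes with no structure whatsoever.
dust = os.urandom(8)

# The exotic encoding scheme: map each (position, byte) of the dust
# to the computation state it "represents".
encoding = {(i, b): s for i, (b, s) in enumerate(zip(dust, target))}

# Under that scheme, the dust "computes" the target exactly.
decoded = [encoding[(i, b)] for i, b in enumerate(dust)]
assert decoded == target
```

Constructing the table required already knowing the target computation, and Egan’s FAQ (linked at the end of this post) takes up objections along those lines.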

Now, you cannot access this information usefully, so most functions can be said not to usefully exist within this randomness. But if you buy the whole “I think therefore I am” business, this does not matter for the functions that are conscious, because consciousness is self-justifying.

So the function that is you is being “run” far more often in random noise than in engineered computers or “real” physical reality. This is sort of like the Boltzmann brain idea, but even worse, because it applies to any set of relations. According to Egan’s argument, all possible experiences that can be computed by any sufficiently complex set of relations are in fact computed by them.

In fact, even if you imagine a static set of relations that do not evolve in time, there still exist encoding schemes that can interpret this set of relations as any arbitrary function, so in some sense the set of mathematical objects that represent “you” and “me” and all possible minds can be said to exist in these timeless relations.

Egan discusses some counterarguments to the idea here: https://www.gregegan.net/PERMUTATION/FAQ/FAQ.html