Category: Web/Tech

The AstraZeneca saga, according to one source

This seems unconfirmed, and do note some sources in the story do not believe this account, but here goes:

AstraZeneca, whose Phase 3 coronavirus vaccine clinical trial has been on hold for more than a month, did not get critical safety data to the US Food and Drug Administration until last week, according to a source familiar with the trial.

The FDA is considering whether to allow AstraZeneca to restart its trial after a participant became ill. At issue is whether the illness was a fluke, or if it may have been related to the vaccine.

The source said the root of the delay is that the participant was in the United Kingdom, and the European Medicines Agency and the FDA store data differently.

“They had to convert data from one format to another format. It’s like taking stuff off a PC and putting it onto an Apple. They had to spend a lot of hours to get what they wanted,” the source said.

On Friday, a federal official hinted there might be some word this week on the trial’s future.

Or maybe they just fooled CNN with it?

Otherwise, good thing we are kept safe from such dangerous data formats!  Would it really not be better to move to reciprocal recognition procedures?  Not to mention a unified data format, or perhaps some FDA methods to read data produced for the EU?
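For what it is worth, the kind of conversion the source describes is mundane data plumbing. Here is a purely hypothetical sketch — the field names and formats are invented, not the actual EMA or FDA submission schemas:

```python
from datetime import datetime

# Purely hypothetical illustration: these field names and formats are
# invented, not the real EMA or FDA submission schemas.
def convert_record(ema_record):
    """Map an EU-style adverse-event record onto a US-style schema."""
    return {
        "patient_id": ema_record["subject_ref"],
        # EU-style DD/MM/YYYY dates -> ISO 8601
        "onset_date": datetime.strptime(
            ema_record["event_date"], "%d/%m/%Y"
        ).strftime("%Y-%m-%d"),
        "severity": ema_record["seriousness"].upper(),
    }

record = {"subject_ref": "UK-0042", "event_date": "08/09/2020", "seriousness": "severe"}
print(convert_record(record))
# {'patient_id': 'UK-0042', 'onset_date': '2020-09-08', 'severity': 'SEVERE'}
```

The point being: field mapping is tedious at scale, but it is hardly a PC-versus-Apple chasm.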

For the pointer I thank Jackson Stone.

Could we detect it if we are living in a simulation?

“If quantum computing actually materializes, in the sense that it’s a large scale, reliable computing option for us, then we’re going to enter a completely different era of simulation,” Davoudi says. “I am starting to think about how to perform my simulations of strong interaction physics and atomic nuclei if I had a quantum computer that was viable.”

All of these factors have led Davoudi to speculate about the simulation hypothesis. If our reality is a simulation, then the simulator is likely also discretizing spacetime to save on computing resources (assuming, of course, that it is using the same mechanisms as our physicists for that simulation). Signatures of such discrete spacetime could potentially be seen in the directions high-energy cosmic rays arrive from: they would have a preferred direction in the sky because of the breaking of so-called rotational symmetry.

Telescopes “haven’t observed any deviation from that rotational invariance yet,” Davoudi says. And even if such an effect were to be seen, it would not constitute unequivocal evidence that we live in a simulation. Base reality itself could have similar properties.
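The rotational-invariance test above has a simple statistical core: under isotropy, arrival directions have no preferred axis, so the "dipole" (the mean arrival vector) should shrink like one over the square root of the sample size. A toy sketch of that idea, with simulated rather than real cosmic-ray data:

```python
import math
import random

# Toy illustration: isotropic arrival directions have no preferred axis,
# so the mean arrival vector ("dipole") should be small. Simulated data,
# not real cosmic-ray observations.
def random_direction(rng):
    """Uniform unit vector on the sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def dipole_amplitude(directions):
    n = len(directions)
    mx = sum(d[0] for d in directions) / n
    my = sum(d[1] for d in directions) / n
    mz = sum(d[2] for d in directions) / n
    return math.sqrt(mx * mx + my * my + mz * mz)

rng = random.Random(0)
events = [random_direction(rng) for _ in range(10_000)]
amp = dipole_amplitude(events)
# Under isotropy the amplitude scales like 1/sqrt(N); a value far above
# that would hint at a preferred direction (broken rotational symmetry).
print(amp, 1 / math.sqrt(len(events)))
```

A real analysis is far more involved (exposure maps, energy cuts), but the null hypothesis being tested is essentially this.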

Here is further discussion from Anil Ananthaswamy.  Via Robert Nelsen.  As you may already know, my view is that there is no proper external perspective, and the concept of “living in a simulation” is not obviously distinct from living in a universe that follows some kind of laws, whether natural or even theological.  The universe is simultaneously the simulation and the simulator itself!  Anything “outside the universe doing the simulating” is then itself “the (mega-)universe that is simultaneously the simulation and the simulator itself,” etc.

What should I answer Vitalik Buterin?

Not a CWT, as he will be interviewing me, but likely a discussion too; who knows what I may fling back at him?

What do you all recommend I read to prepare?  I prepared a great deal for my CWT with Vitalik a few years ago (one of my favorite episodes), but what is new with Ethereum and blockchain and other matters since then?

I thank you all in advance for your usual wise counsel and advice.

Are Nobel Prizes worth less these days?

It would seem so, now that there are lots of them. Here is one part of my Bloomberg column:

The Nobel Peace Prize this year went to the World Food Programme, part of the United Nations. Yet the Center for Global Development, a leading and highly respected think tank, ranked the winner dead last out of 40 groups as measured for effectiveness. Another study, by economists William Easterly and Tobias Pfutze in 2008, was also less than enthusiastic about the World Food Programme.

The most striking feature of the award is not that the Nobel committee might have gotten it wrong. Rather, it is that nobody seems to care. The issue has popped up on Twitter, but it is hardly a major controversy.

I also noted that the Nobel Laureates I follow on Twitter, in the aggregate, seem more temperamental than the 20-year-olds (and younger) that I follow.  Hail Martin Gurri!

And this:

The internet diminishes the impact of the prize in yet another way. Take Paul Romer, a highly deserving laureate in economics in 2018. To his credit, many of Romer’s ideas, such as charter cities, had been debated actively on the internet, in blogs and on Twitter and Medium, for at least a decade. Just about everyone who follows such things expected that Romer would win a Nobel Prize, and when he did it felt anticlimactic. In similar fashion, the choice of labor economist David Card (possibly with co-authors) also will feel anticlimactic when it comes, as it likely will.

Card with co-authors, by the way, is my prediction for tomorrow.

The Agorics era at Mercatus and GMU

This started in the late 1980s, and was led by GMU economist Don Lavoie, who earlier had been a computer programmer.  Here is one bit from Don’s extensive essay, co-authored with Howard Baetjer and William Tulloh:

The market for scholarly ideas is now badly compartmentalized, due to the nature of our institutions for dispersing information. One important aspect of the limitations on information dispersal is the one-way nature of references in scholarly literature. Suppose Professor Mistaken writes a persuasive but deeply flawed article. Suppose few see the flaws, while so many are persuaded that a large supportive literature results. Anyone encountering a part of this literature will see references to Mistaken’s original article. References thus go upstream towards original articles. But it may be that Mistaken’s article also provokes a devastating refutation by Professor Clearsighted. This refutation may be of great interest to those who read Mistaken’s original article, but with our present technology of publishing ideas on paper, there is no way for Mistaken’s readers to be alerted to the debunking provided by Clearsighted. The supportive literature following Mistaken will cite Mistaken but either ignore Professor Clearsighted or minimize her refutations.

In a hypertext system such as that being developed at Xanadu, original work may be linked downstream to subsequent articles and comments. In our example, for instance, Professor Clearsighted can link her comments directly to Mistaken’s original article, so that readers of Mistaken’s article may learn of the existence of the refutation, and be able, at the touch of a button, to see it or an abstract of it. The refutation by Clearsighted may similarly and easily be linked to Mistaken’s rejoinder, and indeed to the whole literature consequent on his original article. Scholars investigating this area of thought in a hypertext system would in the first place know that a controversy exists, and in the second place be able to see both (or more) sides of it with ease. The improved cross-referencing of, and access to, all sides of an issue should foster an improved evolution of knowledge.

A potential problem with this system of multidirectional linking is that the user may get buried underneath worthless “refutations” by crackpots. The Xanadu system will include provisions for filtering systems whereby users may choose their own criteria for the kinds of cross-references to be brought to their attention. These devices would seem to overcome the possible problem of having charlatans clutter the system with nonsense. In the first place, one would have to pay a fee for each item published on the system. In the second place, most users would choose to filter out comments that others had adjudged valueless and comments by individuals with poor reputations. In other words, though anyone could publish at will on a hypertext system, if one develops a bad reputation, very few will ever see his work.

And this:

Miller and Drexler envision the evolution of what they call agoric open systems–extensive networks of computer resources interacting according to market signals. Within vast computational networks, the complexity of resource allocation problems would grow without limit. Not only would a price system be indispensable to the efficient allocation of resources within such networks, but it would also facilitate the discovery of new knowledge and the development of new resources. Such open systems, free of the encumbrances of central planners, would most likely evolve swiftly and in unexpected ways. Given secure property rights and price information to indicate profit opportunities, entrepreneurs could be expected to develop and market new software and information services quite rapidly.

Secure property rights are essential. Owners of computational resources, such as agents containing algorithms, need to be able to sell the services of their agents without having the algorithm itself be copyable. The challenge here is to develop secure operating systems. Suppose, for example, that a researcher at George Mason University wanted to purchase the use of a proprietary data set from Alpha Data Corporation and massage that data with proprietary algorithms marketed by Beta Statistical Services, on a superfast computer owned by Gamma Processing Services. The operating system needs to assure that Alpha cannot steal Beta’s algorithms, that Beta cannot steal Alpha’s data set, and that neither Gamma nor the George Mason researcher can steal either. These firms would thus under-produce their services if they feared that their products could be easily copied by any who used them.

In their articles, Miller and Drexler propose a number of ways in which this problem might be overcome. In independent work, part of the problem apparently has already been overcome. Norm Hardy, senior scientist of Key Logic Corporation, whom we met at Xanadu, has developed an operating system called KeyKOS which accomplishes what many suspected to be impossible: it assures by some technical means (itself an important patented invention) the integrity of computational resources in an open, interconnected system. To return to the above example, the system in effect would create a virtual black box in Gamma’s computer, in which Alpha’s data and Beta’s algorithms are combined. The box is inaccessible to anyone, and it self-destructs once the desired results have been forwarded to the George Mason researcher.

There is really quite a bit more at the link, noting that at the time Don had assembled a group of about ten people working on these ideas.  As for the hyperlinks, I recall thinking at the time something like: “People don’t value reading so much, so making reading better with hyperlinks won’t have a huge marginal value!”
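The bidirectional-link-plus-filter scheme in the first excerpt is easy to sketch in code. A minimal toy version (all names invented), showing the two ideas the essay pairs: links visible from the target article, and reader-chosen reputation thresholds:

```python
# Minimal sketch of the two ideas in the excerpt: downstream links from
# an article to later commentary, plus a reader-chosen reputation filter.
# All names and scores here are invented for illustration.
class LinkRegistry:
    def __init__(self):
        self.links = {}        # article -> list of (commenter, comment)
        self.reputation = {}   # commenter -> community-assigned score

    def publish_comment(self, target_article, commenter, comment):
        # Unlike a paper citation, this link is visible FROM the target.
        self.links.setdefault(target_article, []).append((commenter, comment))

    def comments_on(self, article, min_reputation=0):
        # Each reader chooses their own filtering threshold.
        return [c for (who, c) in self.links.get(article, [])
                if self.reputation.get(who, 0) >= min_reputation]

reg = LinkRegistry()
reg.reputation = {"Clearsighted": 9, "Crackpot": 1}
reg.publish_comment("Mistaken 1990", "Clearsighted", "Refutation: the proof fails at step 3.")
reg.publish_comment("Mistaken 1990", "Crackpot", "It's all aliens.")
print(reg.comments_on("Mistaken 1990", min_reputation=5))
# ['Refutation: the proof fails at step 3.']
```

Notice that today's web delivered the links but not, by default, the reputation-weighted filtering the essay envisioned.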

Emergent Ventures winners, new India cohort

A further Covid-19 India Prize goes to award-winning journalist Barkha Dutt for her reporting on the Covid pandemic and related crises in India.

Because of the Covid lockdown (March-June 2020), Indian news reporting and broadcasting faced severe disruptions in March-April 2020. For the first 50 days, as television networks remained studio-bound, Dutt and her small team traveled across India to report from the ground, producing over 250 ground reports. All the videos and reports are available on the MoJo YouTube channel.

One of the world’s most severe lockdowns unleashed a massive internal migration from the cities to the villages in India. Dutt’s team was one of the first to shed light on the erroneous state policies concerning economic migrants in India during the lockdown, often while walking alongside migrants. Her sustained coverage eventually led other stations and newspapers to report similar stories, and provoked a policy response from the government.

Another Covid-19 India Prize goes to award-winning data journalist Rukmini S, for The Moving Curve podcast, covering the data issues in India. She is currently an independent journalist writing for Mint, The Print, India Today (where she is tracking the pandemic daily), and IndiaSpend (where she is tracking Covid mortality), and she writes occasionally for The Guardian, SCMP, and The Hindu.

She distills all the information, data, and her daily insights into a 5-7-minute audio update in the form of a free podcast, now at 92 episodes. The episodes range from getting to the heart of India’s death statistics, to interviewing a rural doctor about what it’s like waiting for Covid to hit, to attempting to cut through India’s public/private healthcare binary, and they have had significant influence on many state governments. The Moving Curve podcast is produced by a small team of two – Rukmini S and sound engineer Anand Krishnamoorthi. The podcast is available on the major platforms as well as on Medium.

My Conversation with Audrey Tang

For me one of the most fun episodes, here is the audio, video, and transcript.  And here is the longer than ever before summary, befitting the chat itself:

Audrey Tang began reading classical works like the Shūjīng and Tao Te Ching at the age of 5 and learned the programming language Perl at the age of 12. Now, the autodidact and self-described “conservative anarchist” is a software engineer and the first non-binary digital minister of Taiwan. Their work focuses on how social and digital technologies can foster empathy, democracy, and human progress.

Audrey joined Tyler to discuss how Taiwan approached regulating Chinese tech companies, the inherent extraterritoriality of data norms, how Finnegans Wake has influenced their approach to technology, the benefits of radical transparency in communication, why they appreciate the laziness of Perl, using “humor over rumor” to combat online disinformation, why Taiwan views democracy as a set of social technologies, how their politics have been influenced by Taiwan’s indigenous communities and their oral culture, what Chinese literature teaches about change, how they view Confucianism as a Daoist, how they would improve Taiwanese education, why they view mistakes in the American experiment as inevitable — but not insurmountable, the role of civic tech in Taiwan’s pandemic response, the most important remnants of Japanese influence remaining in Taiwan, why they love Magic: The Gathering, the transculturalism that makes Taiwan particularly open and accepting of LGBT lifestyles, growing up with parents who were journalists, how being transgender makes them more empathetic, the ways American values still underpin the internet, what they learned from previous Occupy movements, why translation, rotation, and scaling are important skills for becoming a better thinker, and more.

This bit could have come from GPT-3:

COWEN: How useful a way is it of conceptualizing your politics to think of it as a mix of some Taiwanese Aboriginal traditions mixed in with Daoism, experience in programming, and then your own theory of humor and fun? And if you put all of that together, the result is Audrey Tang’s politics. Correct or not?

TANG: Well as of now, of course. But of course, I’m also growing, like a distributed ledger.

And this:

COWEN: You’re working, of course, in Taiwanese government. What’s the biggest thing wrong with economists?

TANG: You mean the magazine?

COWEN: No, no, the people, economists as thinkers. What’s their biggest defect or flaw?

TANG: I don’t know. I haven’t met an economist that I didn’t like, so I don’t think there’s any particular personality flaws there.

Finally:

COWEN: Now, my country, the United States, has made many, many mistakes at an almost metaphysical level. What is it in the United States that those mistakes have come from? What’s our deeper failing behind all those mistakes?

TANG: I don’t know. Isn’t America this grand experiment to keep making mistakes and correcting them in the open and share it with the world? That’s the American experiment.

COWEN: Have we started correcting them yet?

TANG: I’m sure that you have.

Definitely recommended.

It’s getting better and worse at the same time

That is the title of my latest Bloomberg column.  Here is one excerpt:

The larger question is how to know when this great stagnation is ending. Counterintuitively, the answer might be when people are most upset — because that’s generally how most humans react to change, even when it proves beneficial in the longer run. These feelings arise in part from the chaos and disruption brought about by some pretty significant changes.

And:

People, here is the good news and the bad news: Change is upon us. We are entering a new era of crises — in politics and biomedicine, with climate and energy, and not incidentally, about how prudently we spend our time.

The regrettable truth is that progress is never going to be easy. The great technological advances of the late 19th and early 20th centuries, remember, were followed by two world wars and the rise of totalitarianism. Innovations such as radio and the automobile improved countless lives but also broadcast Hitler’s speeches and led to destructive tanks.

I’m not predicting the same catastrophe for today. I’m only saying that when the discontent is palpable, as it is right now in America, keep in mind that true breakthroughs may already be underway.

The examples are in the longer text.  Recommended!

Do algorithms collude?

Yes, in short.  Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò and Sergio Pastorello cover this topic in the latest issue of the American Economic Review:

Increasingly, algorithms are supplanting human decision-makers in pricing goods and services. To analyze the possible consequences, we study experimentally the behavior of algorithms powered by Artificial Intelligence (Q-learning) in a workhorse oligopoly model of repeated price competition. We find that the algorithms consistently learn to charge supracompetitive prices, without communicating with one another. The high prices are sustained by collusive strategies with a finite phase of punishment followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand, changes in the number of players, and various forms of uncertainty.

Here is the paper.
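The mechanism is easier to grasp with a toy version of the experiment. What follows is a simplified sketch, not the authors' exact environment (they use logit demand and a much finer price grid): two Q-learning sellers repeatedly pick prices, the cheaper one takes the market, and each agent conditions on last round's price pair.

```python
import random

# Toy version of a repeated-pricing duopoly with Q-learning agents.
# Simplified relative to the paper: winner-take-all demand, four prices.
PRICES = [1, 2, 3, 4]                 # 1 is the competitive (Bertrand) price
ALPHA, GAMMA, ROUNDS = 0.1, 0.95, 50_000

def profit(p_own, p_rival):
    if p_own < p_rival:
        return float(p_own)           # undercutting captures all demand
    if p_own == p_rival:
        return p_own / 2.0            # tie: demand is split
    return 0.0                        # undercut by the rival: no sales

def train(seed=0):
    rng = random.Random(seed)
    Q = [{}, {}]                      # one Q-table per agent; state = last price pair
    state = (PRICES[0], PRICES[0])
    for t in range(ROUNDS):
        eps = max(0.02, 1.0 - t / (ROUNDS / 2))   # decaying exploration
        acts = []
        for i in (0, 1):
            q = Q[i].setdefault(state, [0.0] * len(PRICES))
            if rng.random() < eps:
                acts.append(rng.randrange(len(PRICES)))
            else:
                acts.append(max(range(len(PRICES)), key=q.__getitem__))
        nxt = (PRICES[acts[0]], PRICES[acts[1]])
        for i in (0, 1):
            reward = profit(nxt[i], nxt[1 - i])
            q_next = Q[i].setdefault(nxt, [0.0] * len(PRICES))
            Q[i][state][acts[i]] += ALPHA * (
                reward + GAMMA * max(q_next) - Q[i][state][acts[i]]
            )
        state = nxt
    return state                      # the price pair the agents settled on

final_prices = train()
print(final_prices)   # whether these sit above the competitive price of 1 is the collusion question
```

The paper's striking result is that, in their richer setting, such agents reliably settle above the competitive price and punish deviations — with no communication channel between them.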

Australian price discrimination

The cheapest deal, at $6.99, was offered to queer females aged under 30. City-based straight men over 50 were meanwhile given the most expensive rate, at $34.37. Choice investigators could not find a pattern to explain the differences, and have appealed to Australia’s consumer watchdog, the ACCC, to investigate whether Tinder might be in breach of national consumer law.

Here is the full story, via an MR reader.

Crypto art markets in everything

Christie’s is set to sell its first nonfungible token in an upcoming auction of what has been characterized as “the largest artwork” in the history of Bitcoin (BTC).

Art historian turned blockchain artist Robert Alice has created “Portrait of a Mind” — a monumental series of 40 paintings stretching over 50 meters in length.

Drawing on the history of 20th century conceptualism as well as the founding myth of Bitcoin’s creation, “Portrait of a Mind” is a complete hand-painted transcription of the 12.3 million digits of the code that launched the cryptocurrency.

By scattering the codebase into 40 globally distributed fragments, the project will “draw up a global network of 40 collectors where no one individual will hold all the code,” Alice said.

He explained: “In each work, an algorithm has found a set of hex digits that together are highlighted in gold. These read a set of coordinates that are unique to each painting. 40 locations across 40 paintings – each location is of particular significance to the history of Bitcoin.”

Speaking to Cointelegraph, Alice said he remains curious as to why much of the commemoration of Bitcoin emphasizes the publication of the whitepaper over and above the codebase itself, which, for him, is “the real historical document.”

Christie’s will sell one painting from the series, “Block 21 (42.36433° N, -71.26189° E),” as part of its “Post-War and Contemporary Day Auction” on Oct. 7, at the end of a week-long exhibition of auctioned works in New York.

The piece includes a unique nonfungible token as an integral part of the work and will be offered at an estimated price of $12,000 to $18,000.

Here is the full story, via Shaffin Shariff.
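The artist's description of the coordinate-finding step is vague, and the actual algorithm is not public. Purely as a speculative sketch of one way such a search could work: scan a hex dump for runs of decimal digits that parse as a latitude-like value.

```python
import re

# Speculative sketch only: the artist's actual algorithm is not public.
# One plausible reading: scan the hex dump of the codebase for runs of
# decimal digits that can be read as coordinate-like values.
def find_coordinates(hexdump):
    hits = []
    for m in re.finditer(r"[0-9]{4,}", hexdump):
        run = m.group()
        lat = float(run[:2] + "." + run[2:4])   # "4236..." reads as 42.36
        if lat <= 90:                            # keep only latitude-plausible values
            hits.append((m.start(), lat))
    return hits

sample = "fa4236bc77d1"   # invented snippet, not real Bitcoin source code
print(find_coordinates(sample))
# [(2, 42.36)]
```

With 12.3 million digits to search, such matches would be plentiful, which is presumably why the artist curated 40 locations "of particular significance."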

How should America regulate TikTok and other Chinese tech companies?

I say focus on data protection but let them keep the algorithm.  From my new Bloomberg column here is one excerpt:

A second principle for good policy is that the U.S. government should not cut off the U.S. — including of course Chinese Americans and visiting Chinese — from the Chinese internet. Let’s say TikTok and WeChat are banned altogether, along the lines of the (now court-halted) Trump executive order banning WeChat. Are all Chinese apps to be kept out of the country? How about clicking on Chinese links, which also could compromise security? Would Chinese newspapers (including from Hong Kong) be allowed?

The costs of these restrictions would be very high, most of all for Hong Kong, but for America too. Americans would become more ignorant about China, and China would fall out of touch with America. Chinese students and tourists would find it much more difficult to come to the U.S. and stay in touch with home, and as a result many of them would avoid the U.S. altogether. America’s world knowledge and soft power would decline. These too are major national security disadvantages, in addition to their economic costs.

More generally, China is America’s No. 1 trading partner. Can it really make sense to cut off the flow of so much information across the internet? For how long?

There is also a problem of enforcement. The rest of the world is unlikely to take a comparably harsh approach to Chinese technology. Will the U.S. also have to stop Americans from downloading an app from a privately owned joint Cambodian/Chinese company? Where exactly will these lines be drawn?

Regulating the algorithm won’t work, so the deal on the table, despite its ugly, politicized origins, is perhaps the best we can do at this point.  There is much more at the link, and here is more from Elaine Ou at Bloomberg.

On-line education in Oklahoma, from my email box

I have not applied further indentation:

“…this is seemingly starting to be a big deal in OK, but flying under the radar.

Background:
  • 10-15 years ago Oklahoma passed a law allowing online-only charter schools with a separate regulatory structure from physical charter schools.
  • Critically, the unions did not think to push for an enrollment cap.
  • There are 5-10 schools, all quite small, except for one named EPIC.
About EPIC:
  • Has enrollment (~38,000) that is larger than any district in the state. This enrollment is currently surging faster than its usual high growth because of COVID-19 and could reach 46,000 by the Oct 1 “Money Head Count” deadline.
  • From Oct 1, 2018 to Oct 1, 2019, EPIC’s enrollment grew more than the enrollment growth for the entire state of OK.
  • Like all public charters in OK, the school is free to attend. Parents get paid $1000 per student per year for school supplies and activities.
  • They have 100% online and blended learning options. Teachers in the online-only program are paid by how many students they take on and can earn over $100,000. The state average pay for teachers is just over $50,000/yr.
  • They are a non-profit but they are run by a closely related for-profit management company that is paid 10% of gross revenue. (Incentives!)
  • Everyone in OK education that isn’t EPIC hates EPIC. The state has multiple lawsuits and audits alleging that they have been committing fraud. These go back as far as 2012 but none have yet been resolved, even with open investigations by the Oklahoma State Bureau of Investigation. The alleged amounts are less than 1% of cumulative revenue.
Comparison to Regular Schools:
  • An Oklahoma Watch survey from several years ago found that parents were choosing EPIC primarily because they felt their students were falling behind at their districted school, were escaping bullying, or had a desire to pursue other activities, such as competitive gymnastics.
  • On the Oklahoma State Dept. of Education A-F scorecard, EPIC scores better than every traditional Oklahoma City Public Schools and Tulsa Public Schools middle school or high school. It performs roughly near the state average.
  • 4-year high school graduation rates are SIGNIFICANTLY lower than traditional schools.
  • It seems likely this is because with the online format you cannot graduate without completing assignments on time. There are OKCPS schools that have 1% of students performing at grade level and 95% graduation rates.
  • Total Oklahoma K-12 enrollment for 2019-2020 was ~700,000. So EPIC is now over 5% of total state enrollment. They have been growing roughly 50%/year, but that was starting to slow some before the pandemic.

Enrollment Article:

https://oklahoman.com/article/5667424/surge-pushes-epic-charter-schools-to-highest-enrollment-in-state

And they are trying to scale gamification of learning:

https://oklahoman.com/article/5667051/in-oklahoma-remote-learning-goes-to-the-next-level

Like most online education providers, retention has been their weakest point.

Oklahoma schools are required to have each school facility staffed with a certain number of non-teaching positions (librarian, counselor, etc.) so fixed costs are very high. Teacher salaries are usually 35-40% of the budget and are one of the only variable cost centers. Most money is allocated by the state, following the student. EPIC is not far from doing real damage to traditional school finances. This does not seem to be on most people’s radar. It could get more interesting yet.

Austin Vernon”