Here was my original post; here is an email response from a specialist in the area, channeled by a reader:
The issue is really, really complicated. I have a lot of data on it because I spent time with Mark Goldenson interviewing a lot of folks, segmented into those who chose to seek mental health assistance from a clinician (both those who stayed with that treatment and those who turned away relatively early), and those who experienced severe mental health conditions that made them think they should have seen a therapist, but who ultimately chose not to, for reasons other than economic ones.
And we also talked to clinicians on the other side of that equation.
So between that and knowing the literature reasonably well, I have a lot of perspective on this.
The first thing is that talk therapy is in general not effective for most people. And I know the paper under examination showed that it’s more effective than antidepressants, but most people do not stick with talk therapy. They get a benefit at a reasonably low rate for a reasonably short period of time…
Moreover, there’s some pretty strong evidence that talk therapy or at least CBT is becoming less effective over time – the effect sizes in studies & meta-analyses are going down. And there could be reasons for that that aren’t an indictment of the therapeutic model.
So for example, the modern world could just be becoming more stressful and the therapy is less equipped for it… It could be that as the treatment becomes more popular, rather than the more advanced or cutting-edge therapists using it, it’s used by an increasingly broad set of therapists that include low-skilled or ineffective ones.
So there are a lot of reasons that may not have to do with the merits of CBT as an approach, but the data are reasonably convincing on that front.
I think a lot of people are making a reasonably rational choice that, especially if they’re not going to stick with it for a long period of time, even starting therapy is a low-value proposition.
George Ainslie (the psychologist) has this kind of notion of playing a prisoner’s dilemma with your [future] self… let’s just say I want to start an exercise habit… there are a lot of parallels with exercise and talk therapy.
If I knew for a fact that I was going to stop doing it after one month, it actually doesn’t make sense to start at all. Right, because the benefits accrued will pretty rapidly deteriorate and it’ll be as if I never did it…
People are not just considering, “Should I try talk therapy?”, they’re considering, “Will I do this for a sufficiently long period of time, or especially can I afford it for a long period of time, to where I will get and maintain the benefits from doing it?”
And many people do in fact have misinformation about how quickly they can experience certain types of benefits, and how much work is involved – it’s clear that there’s a lot of work involved, and many people don’t want to do that work.
From an operant conditioning standpoint, the experience of a therapy session is frankly more punishing than it is rewarding (for many people, a lot of the time). As with any negative stimulus, they’re going to engage in behaviors that cause that stimulus to be experienced at a lower rate.
Sometimes the benefits don’t accrue during the session, they accrue afterwards. It takes a lot of work to experience them and [can] involve emotional trauma to even retrieve them.
It’s not consistent with people’s ROI calculation, or what they would like to see in their ROI calculation. Again, it’s really similar to physical exercise – we know physical exercise works. It works better than antidepressants. It accrues all the benefits that this paper Cowen cited discovered in terms of energy and mood and earnings and so on and so forth.
But people still don’t engage in exercise, and in fact I think the rate of physical activity is actually on the decline, in the industrialized world at least. So, it’s more complex than “Does the behavior accrue benefits if you do it consistently?” It’s also not entirely about access, because many forms of physical activity are free, and as the paper shows, the seeking of talk therapy is not very sensitive to [price].
So it goes beyond the mere cost of the service, although the cost of the services is definitely prohibitive for a large cross-section of people.
How does ketamine or any other substance relate to this?
I think it relates very favorably in that people may actually have the opposite misconception around psychedelic-assisted therapy. They might view regular talk therapy as something where they’re going to have to do this tedious hour a week for months before they get any benefits or they solve any problems in their lives.
[With ketamine] they probably think that they’re going to do one ketamine session, and all of their issues are going to be solved right away: their PTSD is cured and they no longer experience any symptoms of anxiety, depression, etc… It’s probably a little bit overhyped in the minds of people who have only casually exposed themselves – they’re seeing an article in The New Yorker, or they’re seeing it on a blog, or someone goes on a podcast and talks about an experience. They’re not looking at it with the measured view of someone from the Johns Hopkins team or whatever. So I think that it does work in your favor….
People may overestimate the level of benefit they’re likely to achieve and it seems like the medicine is doing the work, rather than them. Even though I know that that isn’t really the case….
By the way, fun stuff from that research sprint we did with Goldenson – the average person in our cohort (who did ultimately get therapy), put it off for over two years.
It was a pretty wide range – some people sought help after perhaps six weeks, which I think was the shortest. Nobody has a bad day or thinks they’re experiencing depression or dysfunction in their work life or their romantic life or whatever it is and goes straight to a therapist…
They also tend to do a fair bit of research – they research different therapeutic methods and kind of choose one that fits their personality or their values, almost more so than efficacy.
And most of the people who ended up with a stable relationship with a provider trialed between two and five different folks.
Of course, Bell Labs itself later grew to be one of the marquees of commercial labs—in the late 1960s it employed 15,000 people including 1,200 PhDs, who between them made too many important inventions to list, from the transistor and the photovoltaic cell to the first digitally scrambled voice audio (in 1943) and the first complex number calculator (in 1939). Fourteen of its staff went on to win Nobel Prizes and five to win Turing Awards.
Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned—that is, that the two must refer to roughly the same set of hypothetical observations. Here I argue that most inferential statistical tests in psychology fail to meet this basic condition. I demonstrate how foundational assumptions of the “random effects” model used pervasively in psychology impose far stronger constraints on the generalizability of results than most researchers appreciate. Ignoring these constraints dramatically inflates false positive rates and routinely leads researchers to draw sweeping verbal generalizations that lack any meaningful connection to the statistical quantities they are putatively based on. I argue that failure to consider generalizability from a statistical perspective lies at the root of many of psychology’s ongoing problems (e.g., the replication crisis), and conclude with a discussion of several potential avenues for improvement.
That is from a recent paper by Tal Yarkoni.
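One way to see the abstract’s central claim concretely: if experimental stimuli are themselves a random sample but the analysis treats them as fixed, error rates inflate well past their nominal level. Here is a toy simulation of that mechanism (my own construction for illustration, not code or numbers from the paper):

```python
import math
import random
import statistics

def null_experiment(n_subj=30, n_stim=8, stim_sd=0.5, noise_sd=1.0, rng=random):
    """One experiment under the null: two conditions, each using its own
    random sample of stimuli. The true condition effect is zero, but each
    stimulus has an idiosyncratic effect (stim_sd). The analysis averages
    over stimuli and t-tests subject means, treating stimuli as fixed."""
    def condition():
        # one stimulus sample, shared by every subject in the condition
        stim = [rng.gauss(0, stim_sd) for _ in range(n_stim)]
        return [statistics.mean(e + rng.gauss(0, noise_sd) for e in stim)
                for _ in range(n_subj)]
    a, b = condition(), condition()
    se = math.sqrt(statistics.variance(a) / n_subj +
                   statistics.variance(b) / n_subj)
    return abs(statistics.mean(a) - statistics.mean(b)) / se  # |t| statistic

def false_positive_rate(reps=500, crit=2.0, seed=7, **kw):
    """Share of null experiments declared 'significant' at |t| > crit."""
    rng = random.Random(seed)
    return statistics.mean(null_experiment(rng=rng, **kw) > crit
                           for _ in range(reps))
```

With `stim_sd=0` the rate stays near the nominal 5 percent; with even modest stimulus variability it climbs severalfold, because the stimulus-sampling variance never enters the error term — which is the gap between the verbal generalization and the statistical test that Yarkoni describes.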
The available data seems to meet the bar for an EUA (Emergency Use Authorization).
I found this Adam Rogers Wired piece insightful and the best single treatment so far, and also interesting more generally on RCTs:
“Fifty thousand people have been given a treatment, and we cannot know whether it worked or not,” says Martin Landray, one of the leaders of the Randomised Evaluation of Covid-19 Therapies (or Recovery) Trial in England, a large-scale, multi-center, multi-drug randomized controlled trial that showed that the corticosteroid dexamethasone saved the lives of Covid-19 patients and the autoimmune drug hydroxychloroquine did not. (That 50,000 number was from a few weeks back, just after the plasma preprint came out.)
The main arguments against the decision from Trump/FDA seem to be “do RCTs” and “convalescent plasma isn’t shown to be so great.” But those points have it exactly backwards. Patients for trials are extremely scarce right now, and if convalescent plasma is not the highest probability big winner (and I suspect it isn’t), you won’t want to waste scarce patients on doing the RCT. Moreover, if you can’t get the RCT done with 98,000 or so patients, maybe you’re just not up to doing it period! (Please do think at the margin.) In the meantime, convalescent plasma does not seem to involve harms or risks, and it may offer some benefits. So why not let more people have easier access to it?
And might there be a tiny chance that American citizens demand stronger payment incentives for the relevant supplies here and also for other treatments?
If all people have is “do RCTs and CP isn’t shown to be so great,” I don’t think they have begun to engage with the arguments. And additionally politicizing the FDA is definitely a real cost to be reckoned with, but the Twitter noise I am seeing from public health experts seems oblivious to the fact that the FDA’s ex ante risk-averse stance was politicized to begin with (which is not necessarily a bad thing, but yes this is a basic fact — “politicization for me, but not for thee,” etc.).
…it looks like Avi Loeb (Harvard astronomer) is writing a book that will argue that we have been visited by aliens.
Harvard’s top astronomer lays out his controversial theory that our solar system was recently visited by advanced alien technology from a distant star.
In late 2017, scientists at a Hawaiian observatory glimpsed an object soaring through our inner solar system, moving so quickly that it could only have come from another star. Avi Loeb, Harvard’s top astronomer, showed it was not an asteroid; it was moving too fast along a strange orbit, and left no trail of gas or debris in its wake. There was only one conceivable explanation: the object was a piece of advanced technology created by a distant alien civilization.
The FDA has just approved a new and important Covid-19 test:
“Wide-spread testing is critical for our control efforts. We simplified the test so that it only costs a couple of dollars for reagents, and we expect that labs will only charge about $10 per sample. If cheap alternatives like SalivaDirect can be implemented across the country, we may finally get a handle on this pandemic, even before a vaccine,” said Grubaugh.
One of the team’s goals was to eliminate the expensive saliva collection tubes that other companies use to preserve the virus for detection. In a separate study led by Wyllie and the team at the Yale School of Public Health, and recently published on medRxiv, they found that SARS-CoV-2 is stable in saliva for prolonged periods at warm temperatures, and that preservatives or specialized tubes are not necessary for collection of saliva.
Of course this part warmed my heart (doubly):
The related research was funded by the NBA, National Basketball Players Association, and a Fast Grant from Emergent Ventures at the Mercatus Center, George Mason University.
The NBA had the wisdom to use its unique “bubble” to run multiple tests on players at once, to see how reliable the less-known tests would be. This WSJ article — “Experts say it could be key to increasing the nation’s testing capacity” — has the entire NBA back story. At an estimated $10 a pop, this could especially be a game-changer for poorer nations. Furthermore, it has the potential to make pooled testing much easier as well.
Here is an excerpt from the research pre-print:
The critical component of our approach is to use saliva instead of respiratory swabs, which enables non-invasive frequent sampling and reduces the need for trained healthcare professionals during collection. Furthermore, we simplified our diagnostic test by (1) not requiring nucleic acid preservatives at sample collection, (2) replacing nucleic acid extraction with a simple proteinase K and heat treatment step, and (3) testing specimens with a dualplex quantitative reverse transcription PCR (RT-qPCR) assay. We validated SalivaDirect with reagents and instruments from multiple vendors to minimize the risk for supply chain issues. Regardless of our tested combination of reagents and instruments from different vendors, we found that SalivaDirect is highly sensitive with a limit of detection of 6-12 SARS-CoV-2 copies/μL.
No need to worry and fuss about RNA extraction now. Here is the best simple explanation of the whole thing.
The researchers are not seeking to commercialize their advance, rather they are making it available for the general benefit of mankind. Here is Nathan Grubaugh on Twitter. Here is Anne Wyllie, also a Kiwi and a Kevin Garnett fan. A further implication of course is that the NBA bubble is not “just sports,” but also has boosted innovation by enabling data collection.
All good news of course, and Fast at that. And this:
“This could be one of the first major game changers in fighting the pandemic,” tweeted Andy Slavitt, a former acting administrator of the Centers for Medicare and Medicaid Services in the Obama administration, who expects testing capacity to be expanded significantly. “Rarely am I this enthusiastic… They are turning testing from a bespoke suit to a low-cost commodity.”
And here is coverage from Zach Lowe. I am very pleased with the course of Fast Grants more generally, and you will be hearing more about it in the future.
T-Cell immune response (not to be confused with invulnerability) is hardly a new idea in public health. Yet what is striking is how long it took you to hear about it — from the mainstream at least — in the context of coronavirus.
If you go back to February, March, even April or dare I say May, you will not find too many mainstream public health commentators suggesting “there is some possibility of T-cell immunity playing a major role here. That could significantly ease the future casualties and economic burden of Covid-19.” David Wallace-Wells dates the beginning of the discussion to late May, and the “dark matter” hypothesis of Friston, though I believe earlier precursors will be found.
You didn’t even hear much of: “We really are not sure T-cell immunity is a factor. But it could be a factor with probability [fill in the blank], and it is worth keeping that in mind.”
Think about the underlying equilibrium that could lead to such a strange result.
If you do public health, your status incentives are to deliver warnings, not potential good news.
Your status incentives are always to hedge your bets, and to be reluctant to introduce new hypotheses.
Your status incentives are to steer talk away from the virus “simply continuing to rip,” even if you are quite opposed to that outcome. Other than hitting it with an immediate scold, you are not supposed to let that option climb on to the discussion table for too long.
Your status incentives are to discourage individuals from thinking that they might have some pre-existing level of protection. That might lead them to behave more irresponsibly, and then you in turn would look less responsible.
Since public health commentators are so concerned with “doing good by us,” they fail to see that their altruistic (and status) motives in these matters mean they do not end up telling us the truth. Not the entire truth, and not upfront in a very prompt manner.
To be fair, I don’t recall seeing mainstream commentators making false claims about T-cell immunity, rather their filters end up being very selective ones and they bring it up only slowly. And because they smush together in their minds the actually quite distinct concepts of “doing good,” “status,” and “informing the public,” they genuinely have no idea that they are not entirely on the side of truth.
And they genuinely have no idea why so many smart people look to “the cranks” for advice and counsel.
And, to be clear, the commentary of “the cranks” in this area has plenty of problems of its own, even though in some ways they have turned out to be a more informative (as distinct from accurate) source on T-cell immunity.
Finally, to recap, we still are not sure how much overall social protection T-cell immunity will bring. Furthermore, we are pretty sure that not many places have a chance of current herd immunity from “a mix of previous Covid exposure plus pre-existing T-cell immunity.”
So I am not trying to induce you to overrate the T-cell immunity idea. I am trying to illuminate the biases of the filters at work in your everyday consumption of Covid-19 information. Those biases too, the mainstream commentators are not so keen to tell you about.
Excellent and interesting throughout, here is the transcript, video, and audio. Here is part of the summary:
He joined Tyler for a conversation about which areas of science are making progress, the factors that have made research more expensive, why government should invest more in R&D, how lean management transformed manufacturing, how India’s congested legal system inhibits economic development, the effects of technology on Scottish football hooliganism, why firms thrive in China, how weak legal systems incentivize nepotism, why he’s not worried about the effects of remote work on American productivity (in the short-term), the drawbacks of elite graduate programs, how his first “academic love” shapes his work today, the benefits of working with co-authors, why he prefers periodicals and podcasts to reading books, and more.
Here is an excerpt:
COWEN: If I understand your estimates correctly, efficacy per researcher, as you measure it, is falling by about 5 percent a year [paper here]. That seems phenomenally high. What’s the mechanism that could account for such a rapid decline?
BLOOM: The big picture — just to make sure everyone’s on the same page — is, if you look in the US, productivity growth . . . In fact, I could go back a lot further. It’s interesting — you go much further, and you think of European and North American history. In the UK, which has better data, there was very, very little productivity growth until the Industrial Revolution. Literally, from the time the Romans left in, whatever, roughly 400 AD, until 1750, technological progress was very slow.
Sure, the British were more advanced at that point, but not dramatically. The estimates were like 0.1 percent a year, so very low. Then the Industrial Revolution starts, and it starts to speed up and speed up and speed up. And technological progress, in terms of productivity growth, peaks in the 1950s at something like 3 to 4 percent a year, and then it’s been falling ever since.
Then you ask about that rate of fall — it’s 5 percent, roughly. That’s the rate at which productivity growth would have fallen if we held inputs constant. The one thing that’s been offsetting that fall in the rate of progress is we’ve put more and more resources into it. Again, if you think of the US, the number of research universities has exploded, as has the number of firms with research labs.
Thomas Edison’s lab, for example, was the first, about 100 years ago, but post–World War II, most large American companies have been pushing huge amounts of cash into R&D. But despite all of that increase in inputs, productivity growth has been slowing over the last 50 years. That’s the sense in which it’s harder and harder to find new ideas. We’re putting more inputs into labs, but productivity growth is falling.
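Bloom’s 5 percent figure implies some quick compounding arithmetic: at that decay rate, research effort must roughly double every 13 to 14 years just to hold idea output constant. A sketch (illustrative numbers only, assuming a steady exponential decline):

```python
import math

def researchers_needed(years, decay=0.05):
    """Relative research effort required to keep idea output constant
    if per-researcher productivity falls by `decay` per year."""
    return 1.0 / ((1.0 - decay) ** years)

# Doubling time of required research effort at a 5 percent annual decline
doubling_time = math.log(2) / -math.log(1 - 0.05)  # about 13.5 years
```

That doubling time is in the same ballpark as the observed growth in research inputs Bloom describes: ever more universities and labs, roughly flat or declining aggregate productivity growth.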
COWEN: Let’s say paperwork for researchers is increasing, bureaucratization is increasing. How do we get that to be negative 5 percent a year as an effect? Is it that we’re throwing kryptonite at our top people? Your productivity is not declining 5 percent a year, or is it? COVID aside.
BLOOM: COVID aside. Yeah, it’s hard to tell your own productivity. Oddly enough, I always feel like, “Ah, you know, the stuff that I did before was better research ideas.” And then something comes along. I’d say personally, it’s very stochastic. I find it very hard to predict it. Increasingly, it comes from working with basically great, and often younger, coauthors.
Why is it happening at the aggregate level? I think there are three reasons. One actually comes back to Ben Jones, who had an important paper called, I believe, “[Death of the] Renaissance Man.” This came out 15 years ago or so. The idea was, it takes longer and longer for us to train.
Just in economics — when I first started in economics, it was standard to do a four-year PhD. It’s now a six-year PhD, plus many of the PhD students have done a pre-doc, so they’ve done an extra two years. We’re taking three or four years longer just to get to the research frontier. There’s so much more knowledge before us, it just takes longer to train up. That’s one story.
A second story I’ve heard is, research is getting more complicated. I remember I sat down with a former CEO of SRI, Stanford Research Institute, which is a big research lab out here that’s done many things. For example, Siri came out of SRI. He said, “Increasingly it’s interdisciplinary teams now.”
It used to be that one or two scientists could come up with great ideas. Now, you’re having to combine a couple. I can’t remember if he said this about Siri specifically, but he said there were three or four different research groups in SRI being pulled together to do that. That of course makes it more expensive. And when you think of biogenetics, combining biology and genetics, or bioengineering, there are many more cross-field areas.
Then finally, as you say, I suspect regulation costs, various other factors are making it harder to undertake research. A lot of that’s probably good. I’d have to look at individual regulations. Health and safety, for example, is probably a good idea, but in the same way, that is almost certainly making it more expensive to run labs…
COWEN: What if I argued none of those are the central factors because, if those were true as the central factors, you would expect the wages of scientists, especially in the private sector, to be declining, say by 5 percent a year. But they’re not declining. They’re mostly going up.
Doesn’t the explanation have to be that scientific efforts used to be devoted to public goods much more, and now they’re being devoted to private goods? That’s the only explanation that’s consistent with rising wages for science but a declining social output from her research, her scientific productivity.
COWEN: What exactly is the value of management consultants? Because to many outsiders, it appears absurd that these not-so-well-trained young people come in. They tell companies what to do. Sometimes it’s even called fraudulent if they command high returns. How does this work? What’s the value added?
What determines the success of a COVID-19 Test & Trace policy? We use an SEIR agent-based model on a graph, with realistic epidemiological parameters. Simulating variations in certain parameters of Testing & Tracing, we find that important determinants of successful containment are: (i) the time from symptom onset until a patient is self-isolated and tested, and (ii) the share of contacts of a positive patient who are successfully traced. Comparatively less important is (iii) the time of test analysis and contact tracing. When the share of contacts successfully traced is higher, the Test & Trace Time rises somewhat in importance. These results are robust to a wide range of values for how infectious presymptomatic patients are, to the amount of asymptomatic patients, to the network degree distribution and to base epidemic growth rate. We also provide mathematical arguments for why these simulation results hold in more general settings. Since real world Test & Trace systems and policies could affect all three parameters, Symptom Onset to Test Time should be considered, alongside test turnaround time and contact tracing coverage, as a key determinant of Test & Trace success.
That is from a new paper by Ofir Reich.
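The paper’s setup can be illustrated with a toy version: an SEIR agent-based model on a random contact graph, with the two levers the abstract highlights as parameters — onset-to-test time and the share of contacts traced. This is a minimal sketch with made-up parameter values, not the authors’ calibrated model:

```python
import random

def simulate(n=2000, degree=8, beta=0.06, incubation=3, infectious=7,
             onset_to_test=2, trace_share=0.6, steps=60, seed=1):
    """Toy SEIR agent-based model on a random contact graph with test &
    trace. All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    # Random graph: give each node roughly `degree` random neighbors.
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        while len(nbrs[i]) < degree:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    state = ["S"] * n            # S, E, I, R
    timer = [0] * n              # days spent in current state
    isolated = [False] * n
    for i in rng.sample(range(n), 10):   # seed infections
        state[i] = "I"
    for _ in range(steps):
        new_exposed, found_cases = [], []
        for i in range(n):
            if state[i] == "I" and not isolated[i]:
                for j in nbrs[i]:
                    if state[j] == "S" and not isolated[j] and rng.random() < beta:
                        new_exposed.append(j)
                if timer[i] >= onset_to_test:    # case detected and tested
                    found_cases.append(i)
        for j in new_exposed:
            state[j], timer[j] = "E", 0
        for i in found_cases:
            isolated[i] = True
            for j in nbrs[i]:                    # trace a share of contacts
                if rng.random() < trace_share:
                    isolated[j] = True
        for i in range(n):
            if state[i] == "E":
                timer[i] += 1
                if timer[i] >= incubation:
                    state[i], timer[i] = "I", 0
            elif state[i] == "I":
                timer[i] += 1
                if timer[i] >= infectious:
                    state[i] = "R"
    return sum(s != "S" for s in state)  # everyone ever exposed/infected
```

Sweeping `onset_to_test` and `trace_share` while holding everything else fixed is the toy analogue of the paper’s sensitivity analysis over Test & Trace parameters.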
Formal pre-doc programmes have burgeoned, especially in elite universities such as Harvard, Stanford, the University of Chicago and Yale. Participants clean and analyse data, write papers and do administrative tasks. In exchange they may receive free or subsidised classes, a salary in the region of $50,000, potential co-authorship of the papers they work on, and, most prized of all, a letter of recommendation to a top programme.
In part pre-docs show how economic research has changed. “Economics has become more like the sciences in terms of both the methods and the production process,” says Raj Chetty of Harvard, who directs the Opportunity Insights team, a group with a reputation for working its pre-docs hard. When analysing tax records that gave access only to a certain number of people, he switched away from using part-time research assistants to a lab-like team, inspired by his own family of scientists. As bigger data sets, new techniques and generous funding made such collaboration worthwhile, others followed.
Here is much more on pre-docs from Soumaya Keynes at The Economist. I suspect this development is inevitable, but I see at least two things going on here. First, letter writers are internalizing the very high value of those letters in the form of personal services received. Second, this will push out “weirdos” and make the profession more homogenized, more obedient, more elite, more dependent on school of origin, and less interesting. I do understand the value of the training received, and don’t propose any mechanism to “stop this,” but overall it does not make me an entirely happy camper.
One part of the mycelium had access to a big patch of phosphorus. Another part had access to a small patch. She was interested in how this would affect the fungus’s trading decisions in different parts of the same network. Some recognizable patterns emerged. In parts of a mycelial network where phosphorus was scarce, the plant paid a higher “price,” supplying more carbon to the fungus for every unit of phosphorus it received. Where phosphorus was more readily available, the fungus received a less favorable “exchange rate.” The “price” of phosphorus seemed to be governed by the familiar dynamics of supply and demand.
Most surprising was the way that the fungus coordinated its trading behavior across the network. Kiers identified a strategy of “buy low, sell high.” The fungus actively transported phosphorus — using its dynamic microtubule “motors” — from areas of abundance, where it fetched a low price when exchanged with a plant root, to areas of scarcity, where it was in higher demand and fetched a higher price. By doing so, the fungus was able to transfer a greater proportion of its phosphorus to the plant at the more favorable exchange rate, thus receiving larger quantities of carbon in return.
We still do not understand how those behaviors are controlled. And that is all from the new and excellent Merlin Sheldrake book Entangled Life: How Fungi Make Our Worlds, Change Our Minds, & Shape Our Futures.
Let’s not give them Twitter:
They found that the bat noises are not just random, as previously thought, reports Skibba. They were able to classify 60 percent of the calls into four categories. One of the call types indicates the bats are arguing about food. Another indicates a dispute about their positions within the sleeping cluster. A third call is reserved for males making unwanted mating advances and the fourth happens when a bat argues with another bat sitting too close. In fact, the bats make slightly different versions of the calls when speaking to different individuals within the group, similar to a human using a different tone of voice when talking to different people. Skibba points out that besides humans, only dolphins and a handful of other species are known to address individuals rather than making broad communication sounds.
I don’t view this as a formal answer, but it is interesting nonetheless:
Mycelium is how fungi feed. Some organisms — such as plants that photosynthesize — make their own food. Some organisms — like most animals — find food in the world and put it inside their bodies, where it is digested and absorbed. Fungi have a different strategy. They digest the world where it is and then absorb it into their bodies…
The difference between animals and fungi is simple: Animals put food in their bodies, whereas fungi put their bodies in the food.
…to embed oneself in an irregular and unpredictable food supply as mycelium does, one must be able to shape-shift. Mycelium is a living, growing, opportunistic investigation — speculation in bodily form.
That is from the new and excellent book by Merlin Sheldrake, Entangled Life: How Fungi Make Our Worlds, Change Our Minds, & Shape Our Futures.
Fungi are prodigious decomposers, but of their many biochemical achievements, one of the most impressive is this ability of white rot fungi to break down the lignin in wood. Based on their ability to release free radicals, the peroxidases produced by white rot fungi perform what is technically known as “radical chemistry.” “Radical” has it right. These enzymes have forever changed the way that carbon journeys through its earthly cycles. Today, fungal decomposition — much of it of woody plant matter — is one of the largest sources of carbon emissions, emitting about eighty-five gigatons of carbon to the atmosphere every year. In 2018, the combustion of fossil fuels by humans emitted around ten gigatons.
That is from the new and excellent book by Merlin Sheldrake, Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures.
From an email from Agustin Lebron, noting that I will impose no further indentation:
“One thing that’s worth noting:
The degree of excitement about GPT-3 as a replacement for human workers, or as a path to AGI, is strongly inversely correlated with:
(a) How close the person is to the actual work. If you look at the tweets from Altman, Sutskever and Brockman, they’re pumping the brakes pretty hard on expectations.
(b) How much the person has actually built ML systems.
It’s a towering achievement to be able to train a system this big. But to me it’s clearly a dead-end on the way to AGI:
– The architecture itself is 3 years old: https://arxiv.org/abs/1706.03762. It is not an exaggeration to say that GPT-3’s architecture can be described as “take that 2017 paper and make 3 numbers (width, # layers, # heads) much bigger”. The fact that there hasn’t been any improvement in architecture in 3 years is quite telling.
– In the paper itself, the authors clearly say they’re quite near fundamental limits in being able to train an architecture like this. GPT-3 isn’t a starting point, it’s an end-point.
– If you look at more sober assessments (http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html, https://minimaxir.com/2020/07/gpt3-expectations/), without the tweet selection bias, it starts to look less impressive.
– Within my fairly heterogeneous circle of ML-expert friends, there’s little disagreement about dead-end-ness.
The most interesting thing about GPT-3 is the attention and press that it’s gotten. I’m still not sure what to make of that, but it’s very notable.
Again, it’s incredibly impressive and also piles of fun, but I’m willing to long-bet some decent money that we’re not replacing front-end devs with attention-layer-stacks anytime soon.”
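The email’s “take that 2017 paper and make three numbers bigger” point is easy to illustrate with published hyperparameters and a back-of-envelope parameter count. The figures below are approximate values from the GPT papers, and the counting formula (about 12 · layers · d_model² per layer, plus embeddings) is a standard rough estimate, not an exact one:

```python
# Approximate published hyperparameters for the GPT series.
# The architecture barely changes across rows; the sizes do.
gpt_configs = {
    "GPT-1 (2018)": {"d_model": 768,   "layers": 12, "heads": 12},
    "GPT-2 (2019)": {"d_model": 1600,  "layers": 48, "heads": 25},
    "GPT-3 (2020)": {"d_model": 12288, "layers": 96, "heads": 96},
}

def approx_params(d_model, layers, vocab=50257):
    """Back-of-envelope decoder-only transformer parameter count:
    roughly 12 * d_model^2 weights per layer (attention + MLP blocks),
    plus the token embedding matrix. Head count splits the attention
    weights but doesn't change the total."""
    return 12 * layers * d_model**2 + vocab * d_model

for name, c in gpt_configs.items():
    print(name, f"{approx_params(c['d_model'], c['layers']):.2e}")
```

Running this puts GPT-3 on the order of 1.7 × 10¹¹ parameters, close to the reported 175 billion, and the jump from GPT-2 comes almost entirely from scaling those few numbers rather than from any architectural change.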
I remain bullish, but it is always worth considering other opinions.