On-line classes really do work, at least sometimes

There is a new report of interest, admittedly specific to MIT physics:

…for the first time, researchers have carried out a detailed study that shows that these classes really can teach at least as effectively as traditional classroom courses—and they found that this is true regardless of how much preparation and knowledge students start out with.

The findings have just been published in the International Review of Research in Open and Distance Learning, in a paper by David Pritchard, MIT’s Cecil and Ida Green Professor of Physics, along with three other researchers at MIT and one each from Harvard University and China’s Tsinghua University.

“It’s an issue that has been very controversial,” Pritchard says. “A number of well-known educators have said there isn’t going to be much learning in MOOCs, or if there is, it will be for people who are already well-educated.”

But after thorough before-and-after testing of students taking the MITx physics class 8.MReVx (Mechanics Review) online, and similar testing of those taking the same class in its traditional form, Pritchard and his team found quite the contrary: The study showed that in the MITx course, “the amount learned is somewhat greater than in the traditional lecture-based course,” Pritchard says.

A second, more surprising finding, he says, is that those who were least prepared, as shown by their scores on pretests, “learn as well as everybody else.” That is, the amount of improvement seen “is no different for skillful people in the class”—including experienced physics teachers—”or students who were badly prepared. They all showed the same level of increase,” the study found.
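
The comparison at the heart of the study is between pretest and posttest scores across preparation levels.  A common way to make gains comparable across different starting points in physics-education research is Hake’s normalized gain, g = (post − pre) / (max − pre); whether the paper uses exactly this statistic is my assumption, but a toy calculation shows how students with very different starting points can display “the same level of increase”:

```python
# Toy illustration (not the paper's actual data or necessarily its
# exact metric): Hake's normalized gain g = (post - pre) / (max - pre)
# measures what fraction of the possible improvement was realized.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the available headroom that the student gained."""
    return (post - pre) / (max_score - pre)

# Hypothetical students at three preparation levels:
students = [
    ("weak preparation",   30.0, 58.0),
    ("medium preparation", 50.0, 70.0),
    ("strong preparation", 70.0, 82.0),
]

for label, pre, post in students:
    print(f"{label}: g = {normalized_gain(pre, post):.2f}")
# All three print g = 0.40: the same relative improvement despite
# very different pretest scores.
```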

For the pointer I thank Samir Varma.

For whom are the moochers actually voting?

It is a pretty mixed bag, as illustrated by this newly published paper by Dean Lacy; the abstract is here:

The 2012 election campaign popularized the notion that people who benefit from federal spending vote for Democrats, while people who pay the preponderance of taxes vote Republican. A survey conducted during the election included questions to test this hypothesis and to assess the accuracy of voters’ perceptions of federal spending. Voters’ perceptions of their benefit from federal spending are determined by family income, age, employment status, and number of children, as well as by party identification and race. Voters aged 65 and older who believe they are net beneficiaries of federal spending are more likely to be Democrats and vote for Barack Obama than seniors who believe they are net contributors to the federal government. However, the 77.5 percent of voters under age 65 who believe they are net beneficiaries of federal spending are as likely to vote for Romney as for Obama and as likely to be Republicans as Democrats. Voters who live in states that receive more in federal funds than they pay in federal taxes are less likely to vote for Obama or to be Democrats. For most of the electorate, dependence on federal spending is unrelated to vote choice.

Hat tip goes to Kevin Lewis.  I am not able to find an ungated copy.

Kevin also points us to this interesting paper interpreting the Scandinavian model.  The authors are Erling Barth, Karl O. Moene, and Fredrik Willumsen, and the abstract is this:

The small open economies in Scandinavia have for long periods had high work effort, small wage differentials, high productivity, and a generous welfare state. To understand how this might be an economic and political equilibrium we combine models of collective wage bargaining, creative job destruction, and welfare spending. The two-tier system of wage bargaining provides microeconomic efficiency and wage compression. Combined with a vintage approach to the process of creative destruction we show how wage compression fuels investments, enhances average productivity and increases the mean wage by allocating more of the work force to the most modern activities. Finally, we show how the political support of welfare spending is fueled by both a higher mean wage and a lower wage dispersion.

Again, I cannot find an ungated copy.

Will the major central banks evolve into mega-hedge funds?

Here is the latest from Japan:

Bank of Japan officials are considering maintaining a large balance sheet for the central bank even after it achieves its inflation target, reducing the risk of a surge in long-term bond yields, sources said.

Under the potential strategy, the BOJ would use cash from maturing securities in its portfolio to buy long-term government debt, the sources said, asking not to be named as the talks are private. Gov. Haruhiko Kuroda and his colleagues have yet to meet their inflation target, and pledge to continue asset purchases until consumer prices are rising at a 2 percent pace.

The possibility of permanently large balance sheets — in Japan’s case, now amounting to more than half the size of the economy — may become a global legacy of unprecedented stimulus measures. The BOJ discussions parallel preparations at the U.S. Federal Reserve to avoid an exit strategy of asset sales.

“There’s no need for the BOJ balance sheet to go back to where it was,” said Hiromichi Shirakawa, chief Japan economist at Credit Suisse Group AG in Tokyo and a former BOJ official. “It’s a realistic approach to keep the size of the balance sheet large for a while to avoid a spike in yields.”

Any abrupt end to government bond purchases by the BOJ could send borrowing costs soaring, because the bank currently purchases the equivalent of about 70 percent of the new securities issued.

Were not these exit strategies supposed to be easy and painless?  Maybe they are, except having no exit strategy is all the more easy and painless.  In their shoes, I would not do differently, but my level of unease with this situation continues to increase.

There is more here, via www.macrodigest.com.  And here is a new VoxEU piece on what we know about the macroeconomic effects of asset purchases.  And here is Noah Smith mounting a defense of Abe.

China estimate of the day (speculative)

Officially, the People’s Republic of China is an atheist country, but that is changing fast as many of its 1.3 billion citizens seek meaning and spiritual comfort that neither communism nor capitalism seem to have supplied.

Christian congregations, in particular, have rocketed since churches began reopening when Communist leader Mao Zedong’s death in 1976 signalled the end of the Cultural Revolution. Less than four decades later, some believe China is now poised to become not just the world’s No. 1 economy but also its most numerous Christian nation.

“By my calculations China is destined to become the largest Christian country in the world very soon,” said Fenggang Yang, a professor of sociology at Purdue University in Indiana and author of Religion in China: Survival and Revival under Communist Rule. “It is going to be less than a generation. Not many people are prepared for this dramatic change.”

China’s Protestant community, which had just one million members in 1949, has already overtaken those of countries more commonly associated with an evangelical boom. In 2010 there were more than 58 million Protestants in China compared with 40 million in Brazil and 36 million in South Africa, according to the Pew Research Centre’s Forum on Religion and Public Life.

Yang, a leading expert on religion in China, believes that number will swell to around 160 million by 2025. That would be likely to put China ahead even of the United States, which had around 159 million Protestants in 2010 but whose congregations are in decline.

By 2030, China’s total Christian population, including Catholics, would exceed 247 million, placing it above Mexico, Brazil and the U.S. as the largest Christian congregation in the world, Yang predicted.

The article is here, via Noah Smith.

Assorted links

1. Is internal devaluation boosting Greek exports much?

2. In praise of the London Review of Books.  And updated economics on the NYT paywall.

3. The NIH is culling the number of labs it supports.

4. Mega-list of links with advice for economists and students of economics, at various stages of their careers.

5. The new Summers version of the secular stagnation argument doesn’t seem to rely on negative natural rates of interest.  That said, it is getting closer to a supply-side version of the view.

6. What’s it like to own a 3-D printer?

Can public libraries offer high school degrees? (hi future)

The Los Angeles Public Library announced Thursday that it is teaming up with a private online learning company to debut the program for high school dropouts, believed to be the first of its kind in the nation.

It’s the latest step in the transformation of public libraries in the digital age as they move to establish themselves beyond just being a repository of books to a full educational institution, said the library’s director, John Szabo.

Since taking over the helm in 2012, Szabo has pledged to reconnect the library system to the community and has introduced a number of new initiatives to that end, including offering 850 online courses for continuing education and running a program that helps immigrants complete the requirements for U.S. citizenship.

The library hopes to grant high school diplomas to 150 adults in the first year at a cost to the library of $150,000, Szabo said. Many public libraries offer programs to prepare students and in some cases administer the General Educational Development test, which for decades was the brand name for the high school equivalency exam.

But Szabo believes this is the first time a public library will be offering an accredited high school diploma to adult students, who will take courses online but will meet at the library for assistance and to interact with fellow adult learners.

The article is here, and for the pointer I thank Robert Tagorda.

What are humans still good for? The turning point in Freestyle chess may be approaching

Some of you will know that Average is Over contains an extensive discussion of “freestyle chess,” where humans can use any and all tools available — most of all computers and computer programs — to play the best chess game possible.  The book also notes that “man plus computer” is a stronger player than “computer alone,” at least provided the human knows what he is doing.  You will find a similar claim from Brynjolfsson and McAfee.

Computer chess expert Kenneth W. Regan has compiled extensive data on this question, and you will see that a striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.  Ken’s explanations are a bit dense for those who don’t already know chess, computer chess, Freestyle and its lingo, but yes, that is what he finds; click on the links in his post for confirmation.  In this list, for instance, the Freestyle teams do very, very well.

Average is Over also raised the possibility that, fairly soon, the computer programs might be good enough that adding the human to the computer doesn’t bring any advantage.  (That’s been the case in checkers for some while, as that game has been solved.)  I was therefore very interested in this discussion at RybkaForum suggesting that this already might be the case, although only recently.

Think about why such a flip might be in the works, even though chess is far from fully solved.  The “human plus computer” can add value to “the computer alone” in a few ways:

1. The human may in selective cases prune variations better than the computer alone, and thus improve where the computer searches for better moves and how the computer uses its time.

2. The human can see where different chess-playing programs disagree, and then ask the programs to look more closely at those variations, to get a leg up against the computer playing alone (of course this is a subset of #1).  This is a biggie, and it is also a profound way of thinking about how humans will add insight to computer programs for a long time to come, usually overlooked by those who think all jobs will disappear.  A minimal sketch of this disagreement check follows the list.

3. The human may be better at time management, and can tell the program when to spend more or less time on a move.  “Come on, Rybka, just recapture the damned knight!”  Haven’t we all said that at some point or another?  I’ve never regretted pressing the “Move Now” button on my program.

4. The human knows the “opening book” of the computer program he/she is playing against, and can prepare a trap in advance for the computer to walk into, although of course advanced programs can to some extent “randomize” at the opening level of the game.
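
To make #2 concrete, here is a minimal sketch of the disagreement check using the python-chess library; the engine paths, the position, and the time limit are invented for illustration, and a real freestyle workflow would be far more elaborate:

```python
# Sketch of advantage #2: ask two engines for their preferred move in
# the same position and flag disagreements, which is where the human
# freestyler directs extra attention and engine time.
# Engine paths, position, and time limit are illustrative assumptions.
import chess
import chess.engine

ENGINE_PATHS = ["/usr/local/bin/stockfish", "/usr/local/bin/komodo"]  # hypothetical
POSITION_FEN = "r1bqkbnr/pppp1ppp/2n5/4p3/2B1P3/5N2/PPPP1PPP/RNBQK2R b KQkq - 3 3"

def preferred_moves(fen: str, seconds: float = 1.0) -> list[chess.Move]:
    """Collect each engine's chosen move for the given position."""
    moves = []
    for path in ENGINE_PATHS:
        with chess.engine.SimpleEngine.popen_uci(path) as engine:
            result = engine.play(chess.Board(fen), chess.engine.Limit(time=seconds))
            moves.append(result.move)
    return moves

moves = preferred_moves(POSITION_FEN)
if len(set(moves)) > 1:
    # The engines disagree: a candidate position for deeper analysis.
    print("Disagreement:", [m.uci() for m in moves])
else:
    print("Engines agree on", moves[0].uci())
```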

Insofar as the above RybkaForum thread has a consensus, it is that most of these advantages have not gone away.  But the “human plus computer” needs time to improve on the computer alone, and at sufficiently fast time controls the human attempts to improve on the computer may simply amount to noise or may even be harmful, given the possibility of human error.  Some commentators suggest that at ninety minutes per game the humans are no longer adding value to the human-computer team, whereas they do add value when the time frame is, say, one day per move (“correspondence chess,” as it is called in this context).  Circa 2008, at ninety minutes per game, the best human-computer teams were better than the computer programs alone.  But 2013 or 2014 may be another story.  And clearly at, say, thirty or sixty seconds a game the human hasn’t been able to add value to the computer for some time now.

Note that as the computer programs get better, some of these potential listed advantages, such as #1, #3, and #4, become harder to exploit.  #2 — seeing where different programs disagree — does not necessarily become harder to exploit for advantage, although the human (often, not always) has to look deeper and deeper to find serious disagreement among the best programs.  Furthermore, the ultimate human sense of “in the final analysis, which program to trust” is harder to intuit, the closer the different programs are to perfection.  (In contrast, the human sense of which program to trust is more acute when different programs have more readily recognizable stylistic flaws, as was the case in the past: “Oh, Deep Blue doesn’t always understand blocked pawn formations very well.”  Or “Fritz is better in the endgame.”  And so on.)

These propositions all require more systematic testing, of course.  In any case it is interesting to observe an approach to the flip point, where even the most talented humans move from being very real contributors to being strictly zero marginal product.  Or negative marginal product, as the case may be.

And of course this has implications for more traditional labor markets as well.  You might train to help a computer program read medical scans, and for thirteen years add real value with your intuition and your ability to revise the computer’s mistakes or at least to get the doctor to take a closer look.  But it takes more and more time for you to improve on the computer each year.  And then one day…poof!  ZMP for you.

Addendum: Here is an article on computer dominance in rock-paper-scissors.  This source claims freestyle does not beat the machine in poker.

Against subjectivism (and for taxonomy)

Aristotle Circle, the company that employs Vanessa and was co-founded by Ms Rheault in 2008, sells an ERB preparation workbook with sample test questions in it such as “Apples and oranges are both . . . ”. Children get two points for “fruit”, one for “sweet things I eat” and none for “yummy”.

The bulk of its business is providing tutors at $350 an hour, who prepare children for the tests, and admissions experts for parents desperate for information. But demand for play date instruction is increasing.

That is to help get your children into the best private schools.  There is more here.

Bets and Beliefs

I fear that Tyler’s latest post on bets and beliefs will obfuscate more than clarify. Let’s clarify. There are two questions: do portfolios reveal beliefs? Do bets reveal beliefs?

Tyler has argued that portfolios reveal beliefs. This is false. If transaction costs were zero and there were an asset for every possible future state of the world then this would be true. Since transaction costs are not zero and there are many more states of the world than there are assets–even when we combine assets–portfolios do not reveal beliefs. Portfolios might reveal a few coarse beliefs but otherwise no go. Since most people have lots of beliefs about the future but don’t even have a portfolio (beyond human capital) this should be obvious.

Do bets reveal beliefs? Usually but not necessarily. Two people made bets with Noah Smith. Each thought Noah was an idiot for making the bet. Noah, however, had arbitraged so that he couldn’t lose. Clever Noah! Noah’s bets, either alone or in conjunction, did not reveal his beliefs.  But is this the usual situation? No.
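
For readers who want the mechanics, here is a toy version of such an arbitrage, with invented stakes: take favorable odds from both counterparties on opposite sides of the same event, and the combined payoff is positive no matter what happens, so the pair of bets reveals nothing about the bettor’s actual beliefs.

```python
# Toy arbitrage across two opposing bets (invented numbers).
# Bet with A: A pays Noah $200 if event E happens; Noah pays A $100 if not.
# Bet with B: B pays Noah $200 if E does not happen; Noah pays B $100 if it does.

def net_payoff(e_happens: bool) -> int:
    bet_a = 200 if e_happens else -100
    bet_b = -100 if e_happens else 200
    return bet_a + bet_b

print(net_payoff(True))   # 100
print(net_payoff(False))  # 100 -- a sure gain either way, belief-free
```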

For the same reasons that portfolios don’t reveal beliefs, high transaction costs and few assets relative to states of the world, it’s going to be difficult to arbitrage all bets. Many bets in effect create a new and unique asset that can’t be easily duplicated and arbitraged away in other markets. I once bet Bryan on what an expert would answer when asked a particular question. Hard to arbitrage that away.

I also agree with Bryan that the question is empirical and not simply theoretical. When I say that a bet is a tax on bullshit, the implication is not just that bullshitters are more likely to lose their bets but also that a tax on bullshit reduces its supply. The betting tax causes people to think more carefully and to be more precise. When people are more careful and precise the quality of communication increases. As Adam Ozimek writes:

In a lot of writing in blogs it is unclear specifically what the writer is trying to say, and they seem to wish to convey an attitude about a certain position without actually having to make a particular criticism of it, or by making a much narrower actual criticism than the rhetoric implies…It is useful to have betting because deciding clearly resolvable terms of a bet leads to specific claims…

Tyler argues that under some conditions betting won’t change what people say (under a wide range of portfolios…a matter of indifference… bets won’t be authentic), but Tyler doesn’t give us a specific, testable prediction. The empirical evidence, however, is that small bets do cause people to change what they say. This is one of the reasons why even small-bet prediction markets work well.

Tyler has his reasons for not liking to bet, but if you think one of those reasons is that he has already revealed his beliefs, then you are surely not a loyal reader.

Industry of Mediocrity

AP: Washington: The nation’s teacher-training programs do not adequately prepare would-be educators for the classroom, even as they produce almost triple the number of graduates needed, according to a survey of more than 1,000 programs released Tuesday.

The National Council on Teacher Quality review is a scathing assessment of colleges’ education programs and their admission standards, training and value.

Not surprisingly, the report is being criticized by the teachers’ unions, who complain that evaluators “did not visit programs or interview students or schools that hired graduates.” Most of the teachers’ colleges, however, refused to cooperate with the evaluators, with some even instructing their students not to cooperate. Do you think the non-cooperators were of better quality than the programs that did cooperate?

According to the report, “some 239,000 teachers are trained each year and 98,000 are hired,” suggesting a poor return for the potential teachers. One wonders about the quality of the teachers not hired.

In any case, the report is consistent with a wide body of research that shows teacher quality is not high and has declined over time, see Launching the Innovation Renaissance for details.

Meanwhile, on the every-cloud-has-a-silver-lining front, Neerav Kingsland, Chief Strategy Officer for the important non-profit New Schools for New Orleans, argues that the great stagnation will increase the supply of high-quality teachers:

Unfortunately, international trade and technology will continue to eliminate middle-class jobs. Personally, I’m worried that our political system will not adequately ease the pain of this transition. However, this economic upheaval will increase the quality of human capital available to schools. The education sector will likely capture some of this talent surplus, so long as schools are well managed. Moreover, if tech progress reduces the amount of educators we need, we may be in a situation where we have both (a) higher quality applicant pools and (b) less education jobs. I do not view the hollowing out of middle-class jobs as a positive economic development, but it will positively affect education labor…

A New FDA for the Age of Personalized, Molecular Medicine

In a brilliant new paper (pdf) (html), Peter Huber draws upon molecular biology, network analysis and Bayesian statistics to make some very important recommendations about FDA policy. Consider the following drugs (my list):

Drug A helps half of those to whom it is prescribed but it causes very serious liver damage in the other half. Drug B works well at some times but when administered at other times it accelerates the disease. Drug C fails to show any effect when tested against a placebo but it does seem to work in practice when administered as part of a treatment regime.

Which of these drugs should be approved and which rejected? The answer is that all of them should be approved; that is, all of them should be approved if we can target each drug to the right patient at the right time and with the right combination of other drugs. Huber argues that Bayesian adaptive testing, with molecular biology and network analysis providing priors, can determine which patients should get which drugs when and in what combinations. But we can only develop the data to target drugs if the drugs are actually approved and available in the field. The current FDA testing regime, however, is not built for adaptive testing in the field.
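
As a rough sketch of the machinery Huber has in mind (my construction for illustration, not Huber’s own specification): maintain a Beta posterior on each drug’s success rate within each molecular subtype, update it as outcomes arrive in the field, and steer each new patient toward the drug that currently looks best for that subtype, Thompson-sampling style.

```python
# Minimal sketch of Bayesian adaptive assignment (my illustration, not
# Huber's own specification): Beta-Binomial updating per (drug, subtype)
# cell, with Thompson sampling to choose each new patient's drug.
import random

drugs = ["A", "B", "C"]
subtypes = ["subtype-1", "subtype-2"]

# Beta(1, 1) = uniform prior on each drug's success rate in each subtype.
posterior = {(d, s): [1, 1] for d in drugs for s in subtypes}

def choose_drug(subtype: str) -> str:
    """Thompson sampling: sample a plausible success rate from each
    posterior and treat with the drug whose sample is highest."""
    draws = {d: random.betavariate(*posterior[(d, subtype)]) for d in drugs}
    return max(draws, key=draws.get)

def record_outcome(drug: str, subtype: str, success: bool) -> None:
    """Update the Beta posterior with one observed field outcome."""
    posterior[(drug, subtype)][0 if success else 1] += 1

# Hypothetical world: drug A works well, but only in subtype-1.
random.seed(0)
for _ in range(200):
    subtype = random.choice(subtypes)
    drug = choose_drug(subtype)
    true_rate = 0.8 if (drug == "A" and subtype == "subtype-1") else 0.3
    record_outcome(drug, subtype, random.random() < true_rate)

print(posterior)  # counts concentrate on drug A within subtype-1
```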

The current regime was built during a time of pervasive ignorance when the best we could do was throw a drug and a placebo against a randomized population and then count noses. Randomized controlled trials are critical, of course, but in a world of limited resources they fail when confronted by the curse of dimensionality. Patients are heterogeneous and so are diseases. Each patient is a unique, dynamic system and at the molecular level diseases are heterogeneous even when symptoms are not. In just the last few years we have subdivided breast cancer into first four and now ten different types, and the subdivision is likely to continue as knowledge expands. Match heterogeneous patients against heterogeneous diseases and the result is a high-dimension system that cannot be well navigated with expensive, randomized controlled trials. As a result, the FDA ends up throwing out many drugs that could do good:

Given what we now know about the biochemical complexity and diversity of the environments in which drugs operate, the unresolved question at the end of many failed clinical trials is whether it was the drug that failed or the FDA-approved script. It’s all too easy for a bad script to make a good drug look awful. The disease, as clinically defined, is, in fact, a cluster of many distinct diseases: a coalition of nine biochemical minorities, each with a slightly different form of the disease, vetoes the drug that would help the tenth. Or a biochemical majority vetoes the drug that would help a minority. Or the good drug or cocktail fails because the disease’s biochemistry changes quickly but at different rates in different patients, and to remain effective, treatments have to be changed in tandem; but the clinical trial is set to continue for some fixed period that doesn’t align with the dynamics of the disease in enough patients.

Or side effects in a biochemical minority veto a drug or cocktail that works well for the majority. Some cocktail cures that we need may well be composed of drugs that can’t deliver any useful clinical effects until combined in complex ways. Getting that kind of medicine through today’s FDA would be, for all practical purposes, impossible.

The alternative to the FDA process is large collections of data on patient biomarkers, diseases and symptoms all evaluated on the fly by Bayesian engines that improve over time as more data is gathered. The problem is that the FDA is still locked in an old mindset when it refuses to permit any drugs that are not “safe and effective” despite the fact that these terms can only be defined for a large population by doing violence to heterogeneity. Safe and effective, moreover, makes sense only when physicians are assumed to be following simple, A-to-B, drug-to-disease prescribing rules and not when they are targeting treatments based on deep, contextual knowledge that is continually evolving.

In a world of molecular medicine and mass heterogeneity, the FDA’s role will change from applying a yes-no single rule that fits no one to being a certifier of biochemical pathways:

By allowing broader use of the drug by unblinded doctors, accelerated approval based on molecular or modest—and perhaps only temporary—clinical benefits launches the process that allows more doctors to work out the rest of the biomarker science and spurs the development of additional drugs. The FDA’s focus shifts from licensing drugs, one by one, to regulating a process that develops the integrated drug-patient science to arrive at complex, often multidrug, prescription protocols that can beat biochemically complex diseases.

…As others take charge of judging when it is in a patient’s best interest to start tinkering with his own molecular chemistry, the FDA will be left with a narrower task—one much more firmly grounded in solid science. So far as efficacy is concerned, the FDA will verify the drug’s ability to perform a specific biochemical task in various precisely defined molecular environments. It will evaluate drugs not as cures but as potential tools to be picked off the shelf and used carefully but flexibly, down at the molecular level, where the surgeon’s scalpels and sutures can’t reach.

In an important section, Huber notes that some of the biggest successes of the drug system in recent years occurred precisely because the standard FDA system was implicitly bypassed by orphan drug approval, accelerated approval and off-label prescribing (see also The Anomaly of Off-Label Prescribing).

But for these three major licensing loopholes, millions of people alive today would have died in the 1990s. Almost all the early HIV- and AIDS-related drugs—thalidomide among them—were designated as orphans. Most were rushed through the FDA under the accelerated-approval rule. Many were widely prescribed off-label. Oncology is the other field in which the orphanage, accelerated approval, and off-label prescription have already played a large role. Between 1992 and 2010, the rule accelerated patient access to 35 cancer drugs used in 47 new treatments. For the 26 that had completed conventional followup trials by the end of that period, the median acceleration time was almost four years.

Together, HIV and some cancers have also gone on to demonstrate what must replace the binary, yes/ no licensing calls and the preposterously out-of-date Washington-approved label in the realm of complex molecular medicine.

Huber’s paper has a foreword by Andrew C. von Eschenbach, former commissioner of the FDA, who concludes:

For precision medicine to flourish, Congress must explicitly empower the agency to embrace new tools, delegate other authorities to the NIH and/or patient-led organizations, and create a legal framework that protects companies from lawsuits to encourage the intensive data mining that will be required to evaluate medicines effectively in the postmarket setting. Last but not least, Congress will also have to create a mechanism for holding the agency accountable for producing the desired outcomes.

You are fairly predictable, perhaps

The new article is “Private traits and attributes are predictable from digital records of human behavior,” by Michal Kosinski, David Stillwell, and Thore Graepel.  Here is the abstract:

We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.
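
The pipeline the abstract describes, dimensionality reduction of the Likes matrix followed by logistic regression, looks roughly like the sketch below in scikit-learn.  The data here are random stand-ins, so the resulting AUC hovers near 0.5 rather than the paper’s reported accuracies.

```python
# Sketch of the abstract's pipeline: reduce a sparse user-by-Like
# matrix (SVD, per the paper), then predict a binary trait with
# logistic regression. Random data stands in for the real dataset,
# so expect AUC near 0.5 here.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 5000

# Binary user-by-Like matrix: 1 means the user Liked that page.
likes = sparse_random(n_users, n_likes, density=0.01, random_state=0)
likes.data[:] = 1.0
trait = rng.integers(0, 2, size=n_users)  # stand-in binary attribute

# Dimensionality reduction of the Likes matrix.
components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)

X_train, X_test, y_train, y_test = train_test_split(
    components, trait, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```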

For the pointer I thank Brandon Robison.

Department of spurious correlations?

Here is the abstract of a forthcoming AER piece, written by M. Keith Chen:

Languages differ widely in the ways they encode time. I test the hypothesis that languages that grammatically associate the future and the present foster future-oriented behavior. This prediction arises naturally when well-documented effects of language structure are merged with models of intertemporal choice. Empirically, I find that speakers of such languages: save more, retire with more wealth, smoke less, practice safer sex, and are less obese. This holds both across countries and within countries when comparing demographically similar native households. The evidence does not support the most obvious forms of common causation. I discuss implications for theories of intertemporal choice.

Here is from a recent article in The Chronicle of Higher Education, by Geoffrey Pullum:

Chen’s data on languages comes from the World Atlas of Language Structures (WALS), and his evidence on prudence from the World Values Survey (WVS). Both are fully Web-accessible. Sean Roberts, who studies language evolution at the Max Planck Institute for Psycholinguistics in Nijmegen, decided to investigate the other linguistic factors treated in WALS to see how they related to prudence. He compared the goodness of fit for linear regressions on each of a long list of properties of languages (the independent variables), using as the dependent variable the answers that speakers gave to the WVS question “Did you save money last year?”
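
Roberts’ procedure is straightforward to picture: regress the savings answer on each linguistic variable separately and rank the fits.  Here is a minimal sketch with synthetic data (all names and values invented), which also shows why some predictors look good by chance alone:

```python
# Sketch of the many-predictors comparison: regress the savings
# measure on each linguistic variable separately and rank by R^2.
# All data here are synthetic; the real analysis used WALS and WVS.
import numpy as np

rng = np.random.default_rng(0)
n_languages = 100

# Fake linguistic variables (future marking, uvular consonants, ...).
predictors = {f"linguistic_var_{i}": rng.normal(size=n_languages)
              for i in range(50)}
saved_money = rng.normal(size=n_languages)  # fake WVS savings measure

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """R^2 of a one-variable ordinary least squares fit."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

fits = sorted(((r_squared(x, saved_money), name)
               for name, x in predictors.items()), reverse=True)
for r2, name in fits[:5]:
    print(f"{name}: R^2 = {r2:.3f}")
# With 50 unrelated predictors, some fit noticeably better than
# others by chance alone -- the multiple-comparisons point at issue.
```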

The results (see this blog post for an informal account) were jaw-dropping. He found that dozens of linguistic variables were better predictors of prudence than future marking: whether the language has uvular consonants; verbal agreement of particular types; relative clauses following nouns; double-accusative constructions; preposed interrogative phrases; and so on—a motley collection of factors that no one could plausibly connect to 401(k) contributions or junk-food consumption.

There is a bit more here.

For the pointer I thank Mike T.  And I would gladly run a response from Chen, if he has interest in drafting one.

Addendum: Here is an important update from the critic, after improving the specification of his alternative fits:

The results showed that there was only one other linguistic variable that improved the fit of the model more than future tense.  That is, future tense was a better predictor than 99% of the linguistic variables.  For comparison, Dediu & Ladd’s test of the link between linguistic tone and Microcephalin/ASPM found that the hypothesised link was stronger than 98.5% of many thousands of links between genetic and linguistic factors.

Greek Islands for Sale

Telegraph: As international inspectors in Athens scrutinise the country’s fitness to receive the latest aid payment, Prime Minister Antonis Samaras has said commercial exploitation of some islands could generate the revenue lenders need to see to continue funding the country.

The shortlist includes islands ranging in size from 500,000 square meters (5.4 million square feet) to 3 million square meters, and which can be developed into high-end integrated tourist resorts under leases lasting 30 years to 50 years, Mr Taprantzis said.

…The fund reviewed 562 of the estimated 6,000 islands and islets under Greek sovereignty. While some are already privately owned, such as Skorpios by the Onassis shipping heiress Athina Onassis, the state owns islands such as Fleves, which is near the coastal resort area of Vouliagmeni, and a cluster of three islands near Corfu. Mr Taprantzis declined to identify any of the islands.

Legislation needs to be passed to allow development of public property by third parties and reduce the number of building, environmental and zoning permits needed before the plan can proceed, Taprantzis said.

It’s a good idea to move these assets into private hands. The U.S. federal government also has a lot of land that could be privatized. (For the U.S., see the map and note that only a small portion is parkland.)

Questions about John Cage

Wednesday will count as his 100th birthday.  Here are a few of my views:

1. Is it actually good music?

Much of it is, once you get past the gimmicks.  For direct musical listening (skip 4’33”) I recommend the piano music, most of all by Herbert Henck or David Tudor or Stephen Drury.  The important pieces have held up very well, and even the lesser pieces still are worth hearing at least once.

2. If I wish to try one important piece?

Perhaps “In a Landscape,” on this CD.

3. What if I am looking for a good sampler to reflect his diverse contributions?

Try the Barton Workshop grab bag.

4. Are you pulling my leg?

No.

5. Is aleatory music interesting?

To me, no.

Here is Wikipedia on John Cage.  Here is John Cage on a 1960 game show, being thwarted by a union dispute.  Here is good commentary on that clip.  Here is TNR commentary on that clip.  Cage was also an expert mycologist.  Here are the Italian prizes he won for mushroom identification.  Here is the iTunes prepared piano app.

Here are good quotations from John Cage.