Here is one new report:
The Problem Generator – which is available to all Wolfram Alpha Pro subscribers now – creates random practice questions for students, and Wolfram Alpha then helps them find the answers step-by-step.
Right now, the Generator covers six subjects: arithmetic, number theory, algebra, calculus, linear algebra and statistics.
Here is a 2011 Kurt VanLehn paper (pdf) on human vs. computer systems of tutoring:
This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. “No tutoring” refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by the granularity of the user interface interaction into answer-based, step-based, and substep-based tutoring systems. Most intelligent tutoring systems have step-based or substep-based granularities of interaction, whereas most other tutoring systems (often called CAI, CBT, or CAL systems) have answer-based user interfaces. It is widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to no tutoring, the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors are believed to be d = 0.3, 1.0, and 2.0 respectively. This review did not confirm these beliefs. Instead, it found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the effect size of intelligent tutoring systems was 0.76, so they are nearly as effective as human tutoring.
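For readers unfamiliar with the d values above: Cohen's d expresses the gap between two group means in units of their pooled standard deviation, so a d of 0.76 means the tutored group scored about three-quarters of a standard deviation above the untutored group. A minimal sketch of the computation — the scores below are invented for illustration, not data from the paper:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (Bessel's correction, dividing by n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pool the two variances, weighted by degrees of freedom
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores: tutored vs. untutored students
tutored = [78, 85, 82, 90, 74, 88]
control = [70, 75, 68, 80, 72, 77]
print(round(cohens_d(tutored, control), 2))  # → 1.71
```

By this yardstick, the conventional wisdom credited human tutors with an enormous d of 2.0; the review's estimate of 0.79 is still a solid effect, just far smaller than the folklore.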
One more specific result from the paper: human tutors very often fail to exploit the supposed advantages of human tutoring, such as the flexibility to decide how to respond to a student's particular difficulties.
By the way, LaunchPad, the new e-portal for our Modern Principles text, contains an excellent adaptive tutoring system.