Category: Web/Tech
How economists use Twitter
When using Twitter, both economists and natural scientists communicate mostly with people outside their profession, but economists tweet less, mention fewer people and have fewer conversations with strangers than a comparable group of experts in the sciences. That is the central finding of research by Marina Della Giusta and colleagues, to be presented at the Royal Economic Society’s annual conference at the University of Sussex in Brighton in March 2018.
Their study also finds that economists use less accessible language, with more complex words and more abbreviations. What’s more, their tone is more distant, less personal and less inclusive than that used by scientists.
The researchers reached these conclusions by gathering data on tens of thousands of tweets from the Twitter accounts of the top 25 economists and the top 25 scientists, as identified by IDEAS and sciencemag. The top three economists are Paul Krugman, Joseph Stiglitz and Erik Brynjolfsson; the top three scientists are Neil deGrasse Tyson, Brian Cox and Richard Dawkins.
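As a rough illustration of how one might quantify tweet accessibility, here is a minimal sketch in Python. The Flesch reading-ease score (via the textstat package) and the all-caps abbreviation heuristic are my own illustrative choices, not necessarily the measures used in the study:

```python
# Minimal sketch: comparing readability and abbreviation use across two sets of tweets.
# The textstat metric and the abbreviation regex are illustrative assumptions,
# not the measures actually used in the Della Giusta et al. study.
import re
import textstat

ABBREV = re.compile(r"\b[A-Z]{2,5}\b")  # crude heuristic: runs of 2-5 capital letters

def tweet_profile(tweets):
    """Average Flesch reading ease (higher = easier) and abbreviations per tweet."""
    ease = sum(textstat.flesch_reading_ease(t) for t in tweets) / len(tweets)
    abbr = sum(len(ABBREV.findall(t)) for t in tweets) / len(tweets)
    return round(ease, 1), round(abbr, 2)

economist_sample = ["Our DSGE estimates imply the ZLB binds once QE is unwound."]
scientist_sample = ["Look up tonight: Jupiter and the Moon will sit side by side."]
print("economists:", tweet_profile(economist_sample))
print("scientists:", tweet_profile(scientist_sample))
```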
Here is further information, via Romesh Vaitilingam. But I cannot find the original research paper on-line. These are interesting results, but still I would like to see the shape of the entire distribution…
Testing the eggheads in the cryptocurrency market
Some of the world’s best-known economists on Thursday announced plans to create what could be described as the thinking person’s cryptocurrency. Saga aims to address many of the criticisms frequently thrown at bitcoin, the world’s biggest cryptocurrency, to position itself as an alternative digital currency that is more acceptable to the financial and political establishment.
It is being launched by a Swiss foundation with an advisory board featuring Jacob Frenkel, chairman of JPMorgan Chase International and former governor of the Bank of Israel; Myron Scholes, the Nobel Prize-winning economist; and Dan Galai, co-creator of the Vix volatility index. The Saga token aims to avoid the wild price swings of many cryptocurrencies by tethering itself to reserves deposited in a basket of fiat currencies at commercial banks. Holders of Saga will be able to claim their money back by cashing in the cryptocurrency.
The currency also aims to avoid the anonymity afforded by bitcoin, which has raised financial crime concerns with regulators and bankers. Saga will require owners to pass anti-money laundering checks and allow national authorities to check the identity of a holder when required.
Oh so respectable sounding! They’re not doing an ICO; instead there is a variable fractional reserve system, and the ruling principle is that Saga, the asset, “entitles its investors to a rising number of Saga as usage of the cryptocurrency grows.” It sounds like a bet on the notion that bootstrapping is central to crypto success. But do investors really want “safe harbours from the raging volatility”? Do investors want a currency at all? By the way, this one is proof of stake, not proof of work.
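For intuition only, here is a toy sketch of how a variable fractional reserve token could behave. This is my own stylized model with invented parameters, not the formula in Saga's white paper: the target reserve ratio starts at 100% and falls toward a floor as market capitalization grows, so early holders are fully backed while later growth is only partially backed.

```python
# Toy model of a variable fractional reserve token (stylized; NOT Saga's actual mechanism).
# Assumption: full backing up to some cap, then the target reserve ratio decays toward a floor.

def target_reserve_ratio(market_cap, full_backing_cap=50e6, floor=0.10):
    """Fraction of market cap that must be held as fiat reserves."""
    if market_cap <= full_backing_cap:
        return 1.0
    excess = market_cap - full_backing_cap
    return max(floor, 1.0 - excess / (10 * full_backing_cap))

for cap in (10e6, 50e6, 200e6, 600e6):
    ratio = target_reserve_ratio(cap)
    print(f"market cap ${cap/1e6:>4.0f}M -> reserve ratio {ratio:.2f}, "
          f"reserves ${cap * ratio / 1e6:.0f}M")
```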
Here is their web site, and here is the White Paper. Here are other readings on the asset. Here is the original FT article; FT Alphaville is less impressed.
Do the participants have too much skin in other games? So far I don’t see the point of doing this one, as it doesn’t create an asset with a truly different risk profile from the others, not from what I can see.
My March 28 talk at MIT
What happens when a simulated system becomes more real than the system itself? Will the internet become “more real” than the world of ideas it is mirroring? Do we academics live in a simulacrum? If the “alt right” exists mainly on the internet, does that make it more or less powerful? Do all innovations improve system quality, and if so, why is a lot of food worse than before, and why was home design better in 1910-1930? How does the world of ideas fit into this picture?
Here are details on the lunch seminar.
What should I ask Balaji Srinivasan?
I will be doing a Conversation with him, no associated public event. Here is his home page, here is his bio:
Balaji S. Srinivasan is the CEO of Earn.com and a Board Partner at Andreessen Horowitz. Prior to taking the CEO role at Earn.com, Dr. Srinivasan was a General Partner at Andreessen Horowitz. Before joining a16z, he was the cofounder and CTO of Founders Fund-backed Counsyl, where he won the Wall Street Journal Innovation Award for Medicine and was named to the MIT TR35.
Dr. Srinivasan holds a BS, MS, and PhD in Electrical Engineering and an MS in Chemical Engineering from Stanford University. He also teaches the occasional class at Stanford, including an online MOOC in 2013 which reached 250,000+ students worldwide.
His latest Medium essay was on ICOs and tokens. I thank you all in advance for your wise counsel.
China estimate of the day
But because of the way trade deficits are measured, almost all the value of those components is attributed to China, which exports the final product. Reuters reports that 61 million iPhones were shipped from China to the US in 2017 and suggests that just a single phone—the iPhone 7 model, released in 2016 and on sale for all of last year—accounted for $15.7 billion of the trade deficit, or 4.4%.
Louis Kuijs, head of Asia economics research at Oxford Economics, told Reuters that if trade deficits were measured to account for the complex nature of global supply chains, like the ones used for sophisticated consumer products such as smartphones, the US-China trade deficit would be about 36% lower, or $239 billion.
That is from Allison Schrager.
In tech, we fear what we can’t control
That is the topic of my new Bloomberg column, here is one bit:
Like drones, driverless cars possess some features of an especially potent scare story. They are a new and exciting technology, and so stories about them get a lot of clicks. We don’t actually know how safe they are, and that uncertainty will spook people above and beyond whatever is the particular level of risk. Most of all, driverless cars by definition involve humans not feeling in direct control. It resembles how a lot of people feel in greater danger when flying than driving a car, even though flying is usually safer. Driverless cars raise a lot of questions about driver control: Should you be allowed to sleep in the backseat? Or must you stay by the wheel? That focuses our minds and feelings on the issue of control all the more.
And:
The recent brouhaha over Facebook and Cambridge Analytica (read here and here) reflects some similar issues. Could most Americans clearly and correctly articulate exactly what went wrong in this episode? Probably not, but people do know that when it comes to social networks, their personal data and algorithms, they don’t exactly feel in control. The murkiness of the events and legal obligations is in fact part of the problem.
When I see a new story or criticism about the tech world, I no longer ask whether the tech companies poll as being popular (they do). I instead wonder whether voters feel in control in a world with North Korean nuclear weapons, an erratic American president and algorithms everywhere. They don’t. Haven’t you wondered why articles about robots putting us all out of work are so popular during a time of full employment?
We are about to enter a new meta-narrative for American society, which I call “re-establishing the feeling of control.” Unfortunately, when you pursue the feeling rather than the actual control, you often end up with neither.
Do read the whole thing.
The wisdom of Ben (Stratechery) Thompson
It seems far more likely that Facebook will be directly regulated than Google; arguably this is already the case in Europe with the GDPR. What is worth noting, though, is that regulations like the GDPR entrench incumbents: protecting users from Facebook will, in all likelihood, lock in Facebook’s competitive position.
This episode is a perfect example: an unintended casualty of this weekend’s firestorm is the idea of data portability: I have argued that social networks like Facebook should make it trivial to export your network; it seems far more likely that most social networks will respond to this Cambridge Analytica scandal by locking down data even further. That may be good for privacy, but it’s not so good for competition. Everything is a trade-off.
Here is the link to the longer piece; to get them regularly you have to pay. Definitely recommended, now more than ever.
Does fake news spread faster on Twitter?
You may recall last week a spate of stories and tweets claiming that fake news spreads further and faster on Twitter. For instance, there is Steve Lohr at the NYT, who doesn’t quite get it right:
And people, the study’s authors also say, prefer false news.
As a result, false news travels faster, farther and deeper through the social network than true news.
That struck me as off-base, and you can find other offenders, so I went back and read the original paper by Vosoughi, Roy, and Aral. And what did I find?:
1. The data focus solely on “rumor cascades.” The paper does not establish the relative ratio of fake news to real news, for instance. The main questions take the form of “within the data set of rumor cascades, what can we say about those cascades?”
2. It still may (or may not) be the case that real news has its major effects through non-cascade mechanisms. Most people are convinced that 2 + 2 = 4, but probably not because they heard it through a rumor cascade.
3. Within the universe of rumor cascades, this paper measures average effects. It does not mean that at the margin fake news is more powerful. For instance, the rumor “Hillary Clinton is Satan” may have been quite powerful, but that does not mean a particular new rumor can achieve the same force. 2 + 2 = 5 won’t get you nearly as far in terms of retweets, I suspect.
4. Overall the results of this paper remind me of another problem/data issue. At least in the old days, children’s movies used to earn more than films for adults, as stressed by Michael Medved. That doesn’t mean you have a quick money-making formula by simply making more movies for kids. It could be that a few major kids’ movies, driven perhaps by peer effects, suck up most of the oxygen in the room and dominate the market. And then, within the universe of cascade-driven movies, kids’ movies will look really strong and indeed be really strong. That also doesn’t have to mean the kids’ movies have more cultural influence overall, even if they look dominant in the cascade-driven category. In this analogy, the kids’ movies are like the fake news (see the sketch after this list).
5. I am not sure how much the authors of the paper themselves are at fault for the misunderstandings. They can defend themselves on the grounds of not being literally incorrect in their statements in the paper. Still, they do not seem to be going out of their way to correct possible and indeed fairly likely misinterpretations.
6. The strongest argument for the paper’s thesis is perhaps the coverage of the paper itself. The incorrect interpretations of the result did indeed spread faster and further than the correct interpretations. I even delayed the publication of this post by a few days, if only to make its content less likely to be true.
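Here is the sketch promised in point 4, a toy simulation with invented numbers rather than the paper's data. Fake items spread mostly through cascades, while true items reach people mostly through non-cascade channels such as broadcast; within the cascade subset fake items look stronger, yet true items reach more people overall:

```python
# Toy simulation of the selection effect in points 3-4 (invented numbers, not the paper's data).
import random
random.seed(0)

items = []
for _ in range(10000):
    fake = random.random() < 0.3                          # 30% of items are fake
    cascaded = random.random() < (0.8 if fake else 0.2)   # fake items cascade far more often
    if cascaded:
        mean_reach = 3000 if fake else 1500               # within cascades, fake spreads further
    else:
        mean_reach = 50 if fake else 8000                 # broadcast etc. favors true items
    items.append((fake, cascaded, random.expovariate(1 / mean_reach)))

def avg(xs):
    return sum(xs) / len(xs)

in_cascades = [(f, r) for f, c, r in items if c]
print("within cascades, mean reach: fake",
      round(avg([r for f, r in in_cascades if f])),
      "vs true", round(avg([r for f, r in in_cascades if not f])))
print("overall mean reach:          fake",
      round(avg([r for f, _, r in items if f])),
      "vs true", round(avg([r for f, _, r in items if not f])))
```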
Big Professor is watching you
At the University of Arizona, school officials know when students are going to drop out before they do.
The public college in Tucson has been quietly collecting data on its first-year students’ ID card swipes around campus for the last few years. The ID cards are given to every enrolled student and can be used at nearly 700 campus locations including vending machines, libraries, labs, residence halls, the student union center, and movie theaters.
They also have an embedded sensor that can be used to track geographic history whenever the card is swiped. These data are fed into an analytics system that finds “highly accurate indicators” of potential dropouts, according to a press release last week from the university. “By getting [student’s] digital traces, you can explore their patterns of movement, behavior and interactions, and that tells you a great deal about them,” Sudha Ram, a professor of management systems, and director of the program, said in the release. “It’s really not designed to track their social interactions, but you can, because you have a timestamp and location information,” Ram added.
That is from Amy X. Wang at Quartz.
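As a hedged sketch of how swipe logs like those described above could be turned into dropout-risk features, here is a minimal Python example. The column names, features, and model are my own assumptions; the University of Arizona has not published its actual pipeline:

```python
# Minimal sketch: per-student features from card-swipe logs feeding a dropout-risk model.
# Column names, features, and labels are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

swipes = pd.DataFrame({  # one row per swipe
    "student_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime(["2017-09-01 08:05", "2017-09-02 08:07",
                                 "2017-09-03 12:30", "2017-09-01 22:00",
                                 "2017-09-20 14:00"]),
    "location": ["library", "library", "dining", "dorm", "dining"],
})

features = swipes.groupby("student_id").agg(
    swipes_per_week=("timestamp",
                     lambda t: len(t) / max((t.max() - t.min()).days / 7, 1)),
    distinct_locations=("location", "nunique"),
)

# In practice the label would come from registrar records; invented here.
labels = pd.Series([0, 1], index=features.index, name="dropped_out")

model = LogisticRegression().fit(features, labels)
print(model.predict_proba(features)[:, 1])  # estimated dropout risk per student
```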
Did Facebook depolarize America?
Based on selective exposure and reinforcing spirals model perspectives, we examined the reciprocal relationship between Facebook news use and polarization using national 3-wave panel data collected during the 2016 US Presidential Election. Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to a modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal news exposure increased over time, which resulted in depolarization. We found no evidence of a parallel model, where pro-attitudinal exposure stemming from Facebook news use resulted in greater affective polarization.
That is from Beam, Hutchens, and Hmielowski. I thank an anonymous correspondent for the pointer.
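For readers unfamiliar with the design, here is a minimal sketch of a cross-lagged panel regression of the kind such studies run, with made-up data rather than the authors' dataset or their exact model:

```python
# Minimal sketch of a cross-lagged panel regression (made-up data, not the study's).
# Idea: regress wave-2 polarization on wave-1 polarization AND wave-1 Facebook news use;
# a negative coefficient on fb_news_w1 is the "depolarization" pattern described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
fb_news_w1 = rng.normal(size=n)
polar_w1 = rng.normal(size=n)
# simulated world in which Facebook news use modestly reduces later polarization
polar_w2 = 0.6 * polar_w1 - 0.1 * fb_news_w1 + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([polar_w1, fb_news_w1]))
print(sm.OLS(polar_w2, X).fit().params)  # [const, polar_w1, fb_news_w1]
```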
One smart guy’s frank take on working in some of the major tech companies
This is from my email; I have done a bit of minor editing to remove identifiers. It is long, so it goes under the screen break:
Background
I joined Google [earlier]…as an Engineering Director. This was, as I understand it, soon after an event where Larry either suggested or tried to fire all of the managers, believing they didn’t do much that was productive. (I’d say it was apocryphal but it did get written up in a Doc that had a bunch of Google lore, so it got enough oversight that it was probably at least somewhat accurate.)
At that time people were hammering on the doors trying to get in and some reasonably large subset, carefully vetted with stringent “smart tests,” was being let in. The official mantra was, “hire the smartest people and they’ll figure out the right thing to do.” People were generally allowed to sign up for any project that interested them (there was a database where engineers could literally add their name to a project that interested them) and there was quite a bit of encouragement for people to relocate to remote offices. Someone (not Eric, I think it probably was Sergey) proposed opening offices anyplace there were smart people so that we could vacuum them up. Almost anything would be considered as a new project unless it was considered to be “not ambitious enough.” The food was fabulous. Recruiters, reportedly, told people they could work on “anything they wanted to.” There were microkitchens stocked with fabulous treats every 500′ and the toilets were fancy Japanese…uh…auto cleaning and drying types.
And… infrastructure projects and unglamorous projects went wanting for people to work on them. They had a half day meeting to review file system projects because…it turns out that many, many top computer scientists evidently dream of writing their own file systems. The level of entitlement displayed around things like which treats were provided at the microkitchens was…intense. (Later, there was a tragicomic story of when they changed bus schedules so that people couldn’t exploit the kitchens by getting meals for themselves [and family…seen that with my own eyes!] “to go” and take them home with them on the Google Bus — someone actually complained in a company meeting that the new schedules…meant they couldn’t get their meals to go. And they changed the bus schedule back, even though their intent was to reduce the abuse of the free food.)
Now, most of all that came from two sources not exclusively related to the question at hand:
Google (largely Larry I think) was fearless about trying new things. There was a general notion that we were so smart we could figure out a new, better way to do anything. That was really awesome. I’d say, overall, that it mostly didn’t pan out…but it did once in a while and it may well be that just thinking that way made working there so much fun, that it did make an atmosphere where, overall, great things happened.
Google was awash in money and happy to spray it all over its employees. Also awesome, but not something you can generalize for all businesses. Amazon, of course, took a very different tack. (It’s pretty painful to hear the stories in The Everything Store or similar books about the relatively Spartan conditions Amazon maintained. I was the site lead for the Google [xxxx] office for a while and we hired a fair number of Amazon refugees. They were really happy to be in Google, generally…not necessarily to either of our benefit.)
I was there for over ten years. Over time, the general rule of “you get what you incent” made the whole machine move much less well and the burdens of maintaining growth for Wall Street have had some real negative impact (Larry and Sergey have been pushing valiantly for some other big hit of course).
So, onto the question at hand:
I know bits and pieces about Google, Facebook, Apple, and Amazon. I’ve known some people who’ve worked at Netflix but generally know less about them. Google I know pretty well. I’ve worked at a bunch of startups and some bigger companies. I haven’t worked for a non-tech company (Ford) since I was 19 (when I was an undergrad I worked in the group that did the early engine control computers…a story in itself).
I think the primary contributions the tech companies make to organizational management are:
significantly decreasing the power that managers hold
treating organization problems as systems problems to be designed, measured, optimized, and debugged [as a manager, I, personally, treat human and emotional problems that way also]
high emphasis on employing top talent and very generous rewards distributed through the company (only possible in certain configurations, of course)
What also went well at Google: Google avoided job categories that were, generally, likely to decrease accountability:
Google avoided the job class of architect — which was both high status and low accountability, making it an easy place for pricey senior people to park and not have much impact (Sun Microsystems was notorious for having lots and lots of architects)
Google avoided the category of project manager, which would have allowed engineering managers to avoid the grungy part of their job (and be out of touch with engineering realities). I don’t know the history of that particular orientation — we did have something called a TPM (“technical program manager”) who were intended to make deep technical contributions, not just keep track of projects.
Google exploited “level of indirection” to avoid giving managers power over their employees or the employees excess emotional bonds to their managers.
hiring committees who would remove the managers from the process of hiring and (mostly, especially in the early days) project assignment
promotion committees who would judge promotion cases, removing the power of promotion from the manager (didn’t scale well, as indicated by the link I sent you)
raises had a strong algorithmic component; promotions and bonuses were both linked to performance ratings in a way such that getting high scores (at the current level) led to big bonuses, so if an employee’s case wasn’t perfect for promotion they wouldn’t feel they were incurring a financial penalty. That gave promotion committees more liberty to say “no by default” and managers less incentive to fight like badgers to get their people promoted.
What didn’t go so well
The industry has its own weird relationship to business: product managers can be valuable if they have either strong business skills or a deep instinct for something amazing that should be built to create a business. Google (and others) explicitly treated product managers as “mini-CEOs” so they attracted a lot of people who…wanted to be a mini-CEO…but weren’t necessarily cut out for a CEO role. (At this point I have a generally low opinion of product managers and people who aspire to product management, with notable exceptions of course.)
Google- and software industry-specific: lots of developers want to make free software, and lots of developers only know how to make things for other developers, so trying to be in a business where deep domain knowledge is required, or where there’s lots of actual business competition (where marketing, awareness, and business strategy are key), means that overfocus on really, really smart software engineers as the almost exclusive hiring target makes it difficult to succeed.
Selling ads…I’m not in favor of it as an engine of commerce. Amazon has profound and distinguished power accrued over time by ruthless exploitation of scale in low margin industries where everyone is “making it for a dollar, selling it for two…” which makes them very dangerous for every competitor.
You get what you incent
product managers were rewarded for launching, which means they’d tend to launch and ditch
it’s hard not to reward managers for group size; Google was no different — this was the place where it was hardest to avoid fiefdoms that come with centralization of power
What degraded over time at Google:
Some things having to do with too much money, not necessarily related to tech management in particular:
sense of company mission vs. sense of entitlement.
pursuing company mission vs. individual advancement.
influx of people responding primarily to financial rewards (related).
Some things related to scale that might work better in an organization based on tight, interpersonal relationships (the opposite of the decreased manager power referenced above):
some processes implicitly dependent on people largely knowing one another or being one degree of separation apart (e.g., promotion)
the ability to reward creative, risky work; the ability to reward engineering work that had little visible outcome.
Other companies in bits and pieces
As indicated I’m very admiring of Amazon’s strategic approach and its business-first focus. Google did a lot of awesome stuff, but it had incalculable waste and missed opportunities because of the level of pampering and scattershot approach. If you want a real tech company model, I’d pick Amazon (even though I’m not sure I’d ever work there).
Facebook is kind of nothing. It’s a product company and I (personally) don’t think the product is very compelling. I think they hit a moment and will see the fate of MySpace in time. I can’t pick out product innovations that were particularly awesome (other than incubating on college campuses and exploiting sex more or less tastefully). And their infrastructure is pretty crude, which means they’ll eventually run into the problem of hiring the kind of people who can do the kind of scaling they’re going to need.
Apple — I don’t know a ton about them currently, but they’re old. Real old. I interviewed there some time ago and they told me they like to set arbitrary deadlines for their projects because once people are late they work harder. I didn’t pursue the job further, although I have no idea if that’s any sort of a broad practice or a current practice. What they *do* epitomize is the notion that new business models are more important than new technologies so things like flat rate data plans, $.99 songs, not licensing their OS, are real, interesting tech company contributions — I haven’t seen much of that sort of thing since Steve Jobs died, but I’m also not that close to them. That’s obviously not exclusive to tech companies, but something that may be more possible where you have new inventions.
Microsoft — the epitome of high pressure big software, abuse of market dominance, decline, and then pivot into new relevance. IBM II. I don’t know that there’s much about their culture or current business that’s particularly admirable. They’ve got this “partner” system that’s insane where they’ve set up a high stakes internal competition that just looks terrible for any kind of team cohesion or morale. I wouldn’t want to work there, either, although (like Amazon) I have a number of friends I really respect who work there. Generally, there are tradeoffs for having an environment with lots of competition for material rewards — I don’t personally like them so they won’t attract people like me… so I’d like to believe they’re terrible for business…although I’m not at all sure that’s true.
Netflix — little info, really. Competent and pivoting but I don’t know much good or bad.
Amazon — totally admirable, really scary, really effective, and very business-focused. Changing capex into opex via the Cloud was one of those changes in business model that I saw in Apple, along with “sell close to cost using Wall Street money so that no one can compete while you push down costs via scale so no one new can afford to enter the market.” They also are willing to ditch products that don’t work. It sounds like a hard place to work.
===
Challenges I see in other industries: low imagination, fiefdoms / politics, inefficiency, communication problems…all could benefit from tech company input. If you’re in a low margin, low revenue business…it’s just going to be hard without the ability to attract and retain top talent, which is usually going to have a money component. But, best practices certainly help along with awareness of the importance of things like business model, systems design within the business, communication and culture, relationships to power, politics, and incentives…
Remaining challenges in tech industry: scaling and incentives (and incentives at scale :). I also see a major extrovert bias, which might seem a little funny for tech. But, again, product managers (or, God forbid, Sales people) are all really subject to the “let’s just get some people in a room” style of planning and problem resolution. I firmly believe some massive amount of productivity is squandered from people choosing the wrong communication paradigm — I think it’s often chosen for the convenience or advantage of someone who is either in an extrovert role or who is just following extrovert tendencies. Massive problem at Google, which is ironic given their composition. Amazon had some obvious nods to avoiding these sorts of things (e.g., “reading time”) but I don’t know how pervasive they were or how effective people believed them to be.
I thank the author for taking the time to do this, of course I am presenting this content, not endorsing it.
Truly driverless cars
California regulators have given the green light to truly driverless cars.
The state’s Department of Motor Vehicles said Monday that it was eliminating a requirement for autonomous vehicles to have a person in the driver’s seat to take over in the event of an emergency. The new rule goes into effect on April 2.
California has given 50 companies a license to test self-driving vehicles in the state. The new rules also require companies to be able to operate the vehicle remotely — a bit like a flying military drone — and communicate with law enforcement and other drivers when something goes wrong.
That is from Daisuke Wakabayashi at the NYT, via Michelle Dawson.
Why Did Trump Pay Less than Clinton For Ads on Facebook?
An article in Wired has sparked controversy with its claim that Trump paid lower prices for his Facebook ads than Clinton did:
During the run-up to the election, the Trump and Clinton campaigns bid ruthlessly for the same online real estate in front of the same swing-state voters. But because Trump used provocative content to stoke social media buzz, and he was better able to drive likes, comments, and shares than Clinton, his bids received a boost from Facebook’s click model, effectively winning him more media for less money. In essence, Clinton was paying Manhattan prices for the square footage on your smartphone’s screen, while Trump was paying Detroit prices.
The claim is plausible, but although the piece was written by a Facebook expert, it never really explains why Google and Facebook price their ads this way. The reason is what I call the “mesothelioma lawyer” problem. A click on an ad for a “mesothelioma lawyer” is extremely valuable because people who aren’t interested in hiring a mesothelioma lawyer are unlikely to click, and those who do click are likely to become profitable clients. Thus, anyone searching for mesothelioma is likely to see an ad for a mesothelioma lawyer.
But suppose that Google or Facebook simply charge for ads by the click. Someone who searches for “funny hat video” isn’t likely to click on an ad for a mesothelioma lawyer but the people who do click are still likely to be very profitable to a mesothelioma lawyer. As a result, the mesothelioma lawyer can outbid the seller of funny hats for ads connected to “funny hat video” even though the search has nothing to do with mesothelioma. If Google or Facebook only charged by the click it would be mesothelioma lawyer ads everywhere, all the time.
To avoid this problem, Google and Facebook calculate how many clicks or interactions your ad is likely to receive and they charge lower prices the greater the predicted number of clicks. As a result, sellers of funny hats get lower prices than mesothelioma lawyers for ads that pop up after the user watches a funny hat video and mesothelioma lawyers get lower prices than sellers of funny hats for ads that pop up after the user searches for information on mesothelioma. In the long run this system better targets ads to customers and thus maximizes the value of the platform to both advertisers and customers.
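A stripped-down version of this mechanism can be sketched as follows (a toy model with invented numbers; the real auctions at Google and Facebook are considerably more elaborate). Advertisers are ranked by bid per click times predicted click probability, so the funny-hat seller can out-rank the mesothelioma lawyer on a funny-hat query despite a far lower bid:

```python
# Toy sketch of click-probability-weighted ad ranking (invented numbers;
# real Google/Facebook auctions are more elaborate).

def rank_ads(ads, query):
    """Rank ads by expected revenue per impression: bid_per_click * predicted CTR."""
    scored = [(ad["bid_per_click"] * ad["pctr"][query], ad["name"]) for ad in ads]
    return sorted(scored, reverse=True)

ads = [
    {"name": "mesothelioma lawyer", "bid_per_click": 100.00,
     "pctr": {"mesothelioma symptoms": 0.05, "funny hat video": 0.0001}},
    {"name": "funny hat seller", "bid_per_click": 0.50,
     "pctr": {"mesothelioma symptoms": 0.0001, "funny hat video": 0.05}},
]

for query in ("mesothelioma symptoms", "funny hat video"):
    print(query, "->", rank_ads(ads, query))
```

The key point is that an advertiser with a high value per click but a negligible predicted click rate effectively pays a very high price per impression, which is exactly what the Wired excerpt describes Clinton doing relative to Trump.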
As the Wired piece eventually states, this isn’t even new:
“I always wonder why people in politics act like this stuff is so mystical,” Brad Parscale, the leader of the Trump data effort, told reporters in late 2016. “It’s the same shit we use in commercial, just has fancier names.”
He’s absolutely right. None of this is even novel: It’s merely best practice for any smart Facebook advertiser.
Addendum: See also Hal Varian’s discussion of the underlying issues in the Online Advertising section of this paper.
Could the tech companies run *everything* better?
Under one view, the major tech companies lucked into some pieces of rapidly scalable software. They are phenomenal at producing and distributing such software, but otherwise they put on their pants one leg at a time, just like the rest of us. They are not especially productive at marginal activities beyond their core competencies.
Under the second view, the major tech companies have developed new managerial technologies for hiring, handling, and motivating super-smart employees. That is the reason why the tech companies have become phenomenal at producing and distributing rapidly scalable software. But if tech companies turn their attention to other productive activities, they would do very very well. Alex for instance thinks that Apple ought to buy a university. Or you might expect that Google’s “scallion fried fish” dish would be especially tasty. After all, do not smarter people make for better cooks?
Yet a third view starts with the idea of labor scarcity, at least for the very talented folks. Good, ambitious, non-risk-averse managerial talent is super, super-scarce. The tech companies have a lot of it — good for them — and they pay for it by producing and distributing readily scalable software. In that setting, there is usually some slack within the tech company, so if the tech company takes on a new activity, it will excel at it, at least provided it does not try to move beyond the margin allowed by its collected, on-call talent. Yet if the tech company were to undertake a massive expansion into many non-tech fields, it would be just as talent-constrained as anyone else.
Which of these three views is correct? What if you had to pick three percentages that sum to one? How about 30-30-40?
Is there another contending view I am missing?
Addendum: A very important question is at what rate the existence of the tech companies boosts the incentive for individuals to become one of these very talented cogs in the machine of grand productivity. Training and talent-spotting matters! And just as tennis players keep on getting better, so can we expect the same from talented, high-cooperation workers, at least as long as the rewards are rising.
Is this actually the variable that determines how much good the big tech companies do for the world as a whole?
Amazon arbitrage, money laundering edition
The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct 7, 2017.
Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.
“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”
Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.
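As a quick check of the arithmetic in the quoted figures (using only the numbers reported above): roughly 70 sales at $555 with a 60 percent author cut is about $23,000 in routed royalties.

```python
# Quick check using only the figures from the quote above.
price, author_share, sales = 555, 0.60, 70
print(f"approximate royalties routed through the fake book: ${price * author_share * sales:,.0f}")
```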
Here are additional points of interest, as the practice is more common than you might have thought. Via the estimable Chug.