by Tyler Cowen
on March 13, 2017 at 12:16 pm
1. How worried should we be about AI? A pithy symposium. And “how to beat the robots” (NYT).
2. Radioactive boar update (NYT).
3. The globalization of Irish pub design.
4. NYT obituary for Stephen A. Ross.
5. The Budapest City Council just removed a statue of Georg Lukács.
6. “Codfather” pleads guilty in fish quota scam.
#2 – “Boars have caused about $854,000 in damage to agriculture in Fukushima prefecture, reported the Japanese newspaper Yomiuri.”
The boars caused that damage? Not the nuclear disaster, but the boars?
Wild hogs are a big environmental problem in the United States, so it’s not particularly surprising. I imagine the boars don’t exactly respect the boundaries of the evacuation zone.
“A conservative estimate of the cost of wild pig damage to agriculture and the environment in the United States currently stands at $1.5 billion annually.”
LA, thanks so much for passing that on! Not only did you show me where my thinking went wrong, but you also presented me with some pretty interesting information!
Relatively simple solution is to encourage the hunting and consumption of wild boar.
Free range boar has much leaner meat and a more vibrant flavor profile than domestic pork.
We should be able to hunt this problem away.
I think that was tried, if I remember correctly, in the BC interior. An open season for hunters was put in place, but they are nocturnal and hard to find, so it didn’t make much difference.
Where is Obelix when you need him?
Not for the radioactive boars.
“Relatively simple solution is to encourage the hunting and consumption of wild boar.”
For sure, that helps. I’ve never been on a hog hunt, but my understanding is it’s a lot of fun. Hogs are fast and move in packs.
You need an easy to aim rifle that can make quick follow-up shots. It’s different from long-range sniping like you might see in western hunting.
We could also save lots of food by eradicating the birds.
6. Cute that there is a Portuguese connection. They were fishing those cod before Columbus, and the European population would have been unsustainable without them.
Well, the guy is basically Portuguese, and the Portuguese are some of the biggest consumers of codfish in the world. I am happy for the Portuguese mafia to have its big breakthrough. Maybe they will make a movie about the Codfather soon.
In fact, although Portuguese cod has some fame (even if the Portuguese ambassador in Brazil in the early 1900s admitted that its smell is actually obscene – https://books.google.com.br/books?redir_esc=y&hl=pt-BR&id=QLNUAAAAMAAJ&focus=searchwithinvolume&q=Bacalhau), my home state’s moqueca (a kind of fish stew) is widely considered the best dish that can be cooked with a fish, if the fish is worthy enough. Even then-president Lula lavishly praised the dish, pointing out that other peoples can make good stews, but only we can make a real moqueca.
“Michael…we’re bigger than Mrs Paul’s.”
“1. How worried should we be about AI?”
“Imagine that, in 20 or 30 years, a company creates the first artificially intelligent humanoid robot. Let’s call her “Ava.””
I think I’d prefer to go with Ada.
@JWatts – clever pun (for those of you that don’t get it, ADA is a military programming language, get it?) –
“This scenario might sound familiar. It’s the plot from a 2015 science fiction film called Ex Machina. The story ends with Ava slipping out the door and ominously boarding the helicopter that was there to take someone else home.”
Does Ex Machina have a deus ex machina ending? (Punny!) It also sounds like the ending is intended to set up a sequel if the movie becomes popular. That’s the moviegoing trend.
The premise of keeping Ava/Ada ‘secret’ is absurd, about as plausible as Area 54. The idea that a robot wants to take over the world is the stuff of 2001: A Space Odyssey, which has a good chess scene. Presumably the robot will understand Asimov’s First Law, like Galen’s nostrum “Primum non nocere”; after all, it’s fun to have pets, no? We keep dumb animals around for that very same reason. And of course humans can have a biometric key that will self-destruct any robot (and if the robot tries to hack this key, it will self-destruct; sounds like a bad sci-fi movie I once saw on Italian TV about some prisoners with bombs around their necks — if they tried to escape or remove one, the bomb would blow their head off, good stuff, good stuff)
Bonus trivia: TC refuses to talk about quantum computing, which was featured in this week’s Economist as a cover story. What is he afraid of?
Do you know what puns are?
Studio 54. Area 51.
An AI programmed to maximize the number of paperclips does not care about pets.
A “biometric key” is going to stop a hacked self-driving car (as one example out of millions) how exactly? A “biometric key” is going to stop the FBI from killing you after they received a phone call from the “director” of the FBI (AI using voice generation software, already highly capable) instructing them to conduct the raid on you?
“@JWatts – clever pun (for those of you that don’t get it, ADA is a military programming language, get it?) –”
True, but I was going for the root cause of why ADA is a programming language.
“AI is not going to kill us or enslave us.”
Such certainty from someone with such little expertise. I would take no pleasure if you were wrong. By the way, forever is a long time.
It could only possibly prove true with major efforts to make it so.
There will always be those who could do (rebooting from major distraction; insert main paranoia- and/or insanity-driven concern here).
1b. The UBI is a blocker at this point. It moves discussion of aid to more distant futures. Earned income credits exist. They are good, but they are not helping those out there now without jobs. The questions for American society are: Do we build the wall, enact tariffs, and hope for a job for everyone? Or should we enact jobs programs right now? Can we make sure there is a job for anyone willing to show up and work?
“Or should we enact jobs programs right now? Can we make sure there is a job for anyone willing to show up and work?”
Sure, we can make sure there’s a job for everyone. Drop the minimum wage. Enact an EITC that’s no more than 100% of the earnings from work. Cap it at some affordable level.
There are a lot of potential jobs at $2-5 per hour.
I would not oppose a test along these lines. It seems more difficult than a minimum wage – $2 government job. Some economist will have to find a new minimum wage rate that guarantees some kind of economic efficiency. I mean, at $2 per hour I might try to hire a full set of domestic servants. I’m not sure you all want to pay $6 per hour on top of that as your share for my valet.
The presumption is that the market would fix it; steve would offer your domestic servants $3/hr to assemble widgets, and they’d all jump ship to head over to Steve’s place.
” I mean, at $2 per hour I might try to hire a full set of domestic servants. I’m not sure all you want to pay $6 per hour on top of that as your share for my valet.”
Per my original post: an EITC that’s no more than 100% of the earnings from work.
So, your $2 per hour domestic servants would only get an additional $2 per hour from the EITC. However, going to work at Steve’s place for $3 per hour would net them an additional $1 from Steve and $1 from the EITC.
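JWatts’ capped top-up is simple enough to state as a one-liner. A minimal sketch in Python (the $7.25 cap here is a hypothetical figure for illustration, not part of the proposal):

```python
def total_hourly_pay(wage, cap=7.25):
    """Wage plus an EITC top-up equal to 100% of earnings,
    capped at some affordable level (the cap value is hypothetical)."""
    topup = min(wage, cap)
    return wage + topup

print(total_hourly_pay(2))  # $2 wage + $2 EITC = $4 total
print(total_hourly_pay(3))  # $3 wage + $3 EITC = $6 total
```

Note the incentive structure this preserves: because the top-up never exceeds earnings, a worker always nets more by moving to a higher-paying job.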
I did miss that detail, JWatts’ cap. But let’s back up. We are all acknowledging that while the invisible hand provides much, it doesn’t provide everyone a job. Certainly not at a wage sufficient for unassisted living in the United States. I don’t see $2 + $2, or $3 + $3, as a proven way to get there. It is after all, a belief in the invisible hand again. It is faith rather than policy.
A “make work” program that does provide (barely) enough for unassisted living in the United States isn’t faith anymore, it is a plan.
(Perhaps you still have your faith, but I’d like to see some live tests at least.)
“It is after all, a belief in the invisible hand again. It is faith rather than policy.”
If you think the price demand curve is based upon faith, then your education is sadly lacking.
And while the market isn’t perfect, I have confidence that there is a high demand for jobs at the $2 per hour level. There will always be a mismatch in what the market will pay versus what someone will work, of course.
I think you missed part of me that time.
“Certainly not at a wage sufficient for unassisted living in the United States. I don’t see $2 + $2, or $3 + $3, as a proven way to get there.”
Will immigrants be eligible for that make-work?
Yes, I did miss that. But, I would dispute the viability of unassisted living at $6 per hour. A worker in Mississippi making $6 per hour is probably better off than a worker in California making the current minimum wage of $7.25.
And in any case the goal is not to provide some kind of socially approved living wage, but instead to provide an effective minimum wage that allows a low skilled worker to compete in an automated, high trade market.
If work empowers, and we are a sufficiently rich society, why not a solution to provide a self-sufficient wage for anyone who will work?
Because sufficient to live is an incredibly low bar, like $5/day and a tent, such that your “livable wage” request is a lie, and the real demand is probably more like “enough for a phone, transport, nice home, good food, internet, computer, tv, occasional night out, etc”. Am I wrong?
Very few people actually want “jobs”. What they want is money with which to buy neat stuff, a very different thing.
However, I think that they are wrong. I stopped needing to work for money some time ago, yet I still work day in, day out, making the world a better place. It is far more fulfilling than sitting on a beach.
Yes, how many people on the Forbes 400 spend their lives buying things, and not working at all?
How about compete, and help out those whose situation or skills are not well suited to the present economy?
Otherwise, relative stagnation is the likely outcome.
“Worrying about evil-killer AI today is like worrying about overpopulation on the planet Mars. ”
It’s even more ridiculous than that. We actually know how to get to Mars. There’s not much doubt that we could send humans there if we really thought it was worth the money. Nobody still has any effing idea how to even begin to create sentient AI.
“This hype has been unnecessarily distracting everyone from the much bigger problem AI creates, which is job displacement.”
That worry is vastly over-hyped as well. The low-hanging fruit has already been picked. AI-driven robots struggle mightily with even single ‘simple’ tasks performed by low-skilled workers such as maids and fast-food workers (e.g. vacuuming and burger-flipping). The next big potential advance in restaurant ‘automation’ is customers ordering from a kiosk. Which is self-service, not automation (the customer, rather than the employee, operates the register).
Burger flipping didn’t need AI, just a conveyor broiler.
Yep. But nonetheless people are working on robotic flippers. Which are hilarious. And even if they worked well and weren’t big, dumb, slow, and clumsy, they’d leave all the other steps in the process and save almost no labor.
I’m guessing you’ve never worked at a fast food burger place. I worked the grill at one for two weeks. There was one person who continuously cooked hamburgers and passed the cooked meat to another person for dressing. Granted, this was only during peak hours. However, it was a full-time person during peak hours. And it was the worst job in the restaurant. Hence, I quit after two weeks and the manager was pretty desperate to fill the position. So, that’s a viable use of automation.
“they’d leave all the other steps in the process and save almost no labor.”
Assume, they only need a full time flipper 4 hours every day (peak periods), assume the total compensation for that type of position is $10 per hour. That’s $14.6 K per year in potential labor savings. If you can buy and install a burger flipping robot for $50K and annual maintenance is no more than 5% of that per year, it’s economically positive.
If the minimum wage in an area drives the total comp up to $17 per hour. Then your robot only needs to hit $85K.
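Those figures reduce to a simple payback calculation. A rough sketch (all inputs are the assumptions stated above, not vendor data):

```python
def payback_years(robot_cost, hourly_comp, hours_per_day=4,
                  days_per_year=365, maintenance_rate=0.05):
    """Years to recoup the robot's purchase price, netting annual
    maintenance (a fixed fraction of that price) against labor savings."""
    annual_savings = hourly_comp * hours_per_day * days_per_year
    annual_maintenance = maintenance_rate * robot_cost
    net = annual_savings - annual_maintenance
    if net <= 0:
        return float("inf")  # never pays back
    return robot_cost / net

# $10/hr comp, 4 peak hours/day -> $14.6K/yr in labor savings,
# so a $50K robot with 5% maintenance pays back in about 4 years.
print(round(payback_years(50_000, 10), 2))  # 4.13
```

At $17/hr total comp the annual savings scale to about $24.8K, which is why the breakeven robot price scales up to roughly $85K.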
Maybe. But it hasn’t happened yet. AND…observe what task characteristics are needed for this to be even borderline feasible. It has to be 1) a simple task requiring minimal judgement or flexibility, 2) with a high number of repetitions, and 3) done in a single, fixed location. How many jobs (low-skilled or otherwise) actually fit those criteria AND haven’t already been automated?
“Maybe. But it hasn’t happened yet.”‘
“A burger-flipping robot has just completed its first day on the job at a restaurant in California, replacing humans at the grill.”
Yes, I was aware. One burger bot in one shop. We’ll see if it catches on in any significant way AND, more importantly, if that leads to automation of other restaurant tasks.
@Slocum: every single thing you see around you was at one time just the first adopted version. It’s coming.
“Yes, I was aware. One burger bot in one shop. We’ll see if it catches on in any significant way AND, more importantly, if that leads to automation of other restaurant tasks.”
I’m an automation engineer, but still I understand your skepticism. There have been and are a lot of pie in the sky projections and wishful thinking.
However, there is also the steady march of progress: sensors, computing, and hardware have all steadily declined in price while their capabilities have steadily increased. It’s really inevitable that machines will extend their productivity gains from the factory to the commercial scale and then to the home.
“It’s really inevitable that machines will extend their productivity gains from the factory to the commercial scale and then to the home.”
I really don’t think so. The factory floor is fundamentally different than the home. The factory floor features many operations that are done with high numbers of repetition with little variation and at a single station. The home has nothing like that — instead it features dozens/hundreds of one-off tasks done in a variety of locations.
“many operations that are done with high numbers of repetition with little variation and at a single station”
Mowing the lawn, weeding, gardening, washing dishes, fixing cars, changing oil, preparing meals, cutting hair, applying makeup…?
By the way no one is concerned with sentient AI. Sentience is in no way required for AI to be dangerous and in fact would quite possibly reduce danger.
Another way to look at it is to look at the most common jobs in the U.S., and determine how susceptible each is to AI. I maintain that many of the most common jobs are extremely susceptible to AI:
I think the most vulnerable jobs include:
Retail salespeople (#1 most common) and cashiers (#2 most common) (will be susceptible to automated delivery vehicles, which will promote electronic sales over brick-and-mortar).
Material movers (#8 most common) (e.g. loading dock workers). Bookkeeping, accounting (#12 most common) and tractor-trailer drivers (#14).
While I’m inclined by my disposition to agree with you, the funny thing about things that grow exponentially is that stuff sneaks up on you and catches you by surprise. I’m sufficiently ignorant on the merits and our forecasting ability to come down on it one way or the other.
>Nobody still has any effing idea how to even begin to create sentient AI.
Or a driverless car.
Although we are doing OK with a “driverless car (*)(**)”.
[* driver required]
[** don’t even think of taking it on 99.9999% of roads.]
Nobody still has any effing idea how to even begin to create sentient AI.
Sentience isn’t relevant to the problem.
The Budapest City Council just removed a statue for Georg Lukács.
Chesterton: “Journalism consists of saying ‘Lord Jones died’ to people who had no idea Lord Jones was even alive”.
The article accuses Jobbik of wanting to clear out a memorial to a Jew. Question: why would you put up a statue of a philosophy professor? Why is a philosophy professor who gave himself over to Marxism considered a man of such wisdom that you’d honor him this way?
i assumed it was an anti-star wars thing
I consider a great thing that in the West, there are now more discussions of George Lucas than Georg Lukacs… Except for Depts of Critical Theory or whatever they are.
They hate the prequels That Much.
I respect and admire them for that.
1. Anyone noticed that humans are behaving more like robots? By that I mean an inability to think critically and, instead, programmed to accept misinformation. Robots can’t think critically, they are programmed to think in one way. No, intelligence and high IQ don’t make one immune to robotic thinking: “What’s more, being intelligent and informed can often make the problem worse. The higher someone’s IQ, the better they are at coming up with arguments to support a position—but only a position they already agree with, as one study showed. High levels of knowledge make someone more likely to engage in motivated reasoning—perhaps because they have more to draw on when crafting a counterargument.” https://www.theatlantic.com/science/archive/2017/03/this-article-wont-change-your-mind/519093/ I once commented that Cowen prefers robots over humans. But if humans are behaving increasingly like robots, then in time Cowen may come to accept humans as being just as good as robots.
1. Anyone noticed that humans are behaving more like robots? By that I mean an inability to think critically and, instead, programmed to accept misinformation.
No, I haven’t noticed that. I have noticed that people over-estimate their critical thinking skills (and are given to telling you, whether or not you asked) that they’re fine critical thinkers.
I think that’s the wrong metaphor. Rather than robots, consider pre-literate societies. People increasingly trust what they’ve heard, especially from members of their tribe.
This just in. Rayward discovers that some humans are, on occasion, lazy thinkers happy to assign the task of finding solutions to default cognitive mechanisms instead of proceeding laboriously to analyze options.
We are not more robotic, Rayward, but perhaps default chains shorten. (Or if you will, our heuristics kick in sooner, depending.) For example, when I read the Guardian (UK), I now see many headlines including the word “Trump”. A year ago I might have read some of these articles, now I rightly assume they are boilerplate leftist clickbait. I have not become more robotic. I have learned.
Wouldn’t it be logical for more headlines in the Guardian and any other periodical to include the word “Trump” now vs a year ago?
Right you are. There has indeed been a veritable explosion of headlines at the Guardian, and elsewhere, containing the word “Trump”. From cooking to sports articles. “Age of Trump”, “Era of Trump”… perhaps one day this age will be known as the “Trumpo-cene”?
The Guardian must sell copy/get clicks. Trump sells.
Has Trump done anything in the past year to warrant more being written about him?
AI implanted in robots or AI implanted in humans. What’s the difference? I’m far more afraid of AI implanted in humans. After all, humans have a history of depravity. Take the cultured Germans . . . . please.
5, about Lukács, is a great read. There is nothing surprising or outraging about the Right-wing Hungarian government removing its statue though. Even if he was courageously opposed on many occasions to the communist dictatorship, he was still a member of the communist party (except for the time he was excluded).
A lateral remark: I’m surprised, and disappointed, about how little we speak, in its hundredth anniversary, of the Russian revolution of 1917 in the mainstream newspapers, on the campuses, etc. I mean, 3 years ago, 100 years after the beginning of WWI in Europe, there were a lot of articles in the newspapers, conferences by historians, exhibitions, etc., everywhere I looked. But for the Russian revolution, which is surely as important an event, I have seen nothing so far except two or three mediocre articles in the NYT.
Even though modern Progressives cannot remember last week, they still find the whole communist thing a bit embarrassing.
It’s like you put history in a wood chipper and that sentence came out.
He said nothing of history. He offered an opinion about why this episode in history is getting little attention.
Do they? I haven’t noticed any embarrassment.
I wish you were right.
I detect far too little embarrassment or contrition with respect to what we can, generally speaking, term the left’s adulation for communism. That adulation might not have been deep but it was broad.
We are found out! Progressives born in 1990 managed to mutiny on the Battleship Potemkin in 1905. We left a red wake in the Chinese Cultural Revolution. With a time machine it was easy.
I know I spent much of my time in 1917 adulating. Broadly and deeply. The worst thing is, I have no regret. Perhaps it is a side-effect of the strong Tesla Field of the machine itself.
A number of communists, especially in Europe, left their national communist parties after the Soviet interventions in ’56 (Hungary) and ’68 (Czechoslovakia) and migrated intellectually to the left centre. Then again even Sartre was more or less tempted back into the fold by Maoism, though he wouldn’t rejoin the party formally.
I think more people on the far left should say, dammit we were wrong about a few things.
Well, obviously I was there in ’56, but I am not sure how your inverse chronology works for normal folks.
As I understand it, people born since then use extremely general words of self description, and then you demand that they take responsibility for acts that you (and importantly not they) give the same general classification?
What must you feel about Catholics, and all the disemboweling “they did?”
Oh, and I for one would like to see the vegetarians take more responsibility for Hitler’s genocide.
I know very few vegetarians who opine that while Hitler was pretty awful, at least he DID espouse vegetarianism. I don’t know much about Catholics, or Catholicism, but I know that various Popes, after many years, distanced themselves from the treatment of Galileo.
And so on.
However, perhaps it is just part of being a historian in an academic setting, but I know a number of Trotskyites. At least that’s what they call themselves (I suppose some are really Stalinists, but you can’t really say that.) It’s not that they wish to replay history so that they get to be in charge of the Soviet Union from 1917 onwards, but that they haven’t stopped pursuing a revolution (a top down, statist, command economy revolution).
I’m not really sure what is bothering you. I am not asking all leftists to own Bolshevism. But I certainly see academic Marxists and even undergraduate Marxists refusing to acknowledge the horrors of collectivism.
Note that “A Black Man” did start this thread with “modern Progressives” and “communism.”
He’s not the only one here to use various words to say that someone left of center should indeed own some horror he classifies as left. It is a trope. Just as, I suppose, some idiot somewhere is saying that vegetarians, or anyone on the right, do own Hitler.
Are the rare truly-center moderates the only ones to escape?
I like the part where Time Traveler suggests A Black Man isn’t, because of his politics. Time Traveler: upset about his ideological closeness to brutal socialist dictators, oblivious to his racist attitude of black political ownership.
That’s actually a good point. I think it would seem bizarre to be celebrating the Russian Revolution, but nobody seems inclined to mourn it either. Commemorations would thus be painfully awkward.
What Russia does will be interesting to watch, though. Maybe they’ll celebrate the anniversary by invading Latvia.
Which would be quite ironic since one of the first things the Bolsheviks did when they took power in November was to propose independence to Latvia, and to other non-Russian nations in the Russian empire.
Well, that didn’t last very long, did it?
Until 1944. That’s 27 years. And then, Lenin was long dead, probably of natural causes, and all the other major Bolsheviks but one as well, of non-natural causes.
1944? I thought the soviets occupied Latvia in 1940 under a pact between themselves and Nazi Germany made in 1939.
So more like 22 years.
Did they propose autonomy and (frankly) liberation for Finland, or did the Finns just declare it?
Finnish grandees issued a declaration in late 1917 which the Soviet government accepted. There followed a brief Red-White civil war which the Whites won. The Communist Party was formally proscribed from 1919 to 1944, though still able to operate prior to 1932.
Thank you, Art. Speaking to a Finn now, she says the episode (leading to independence in Dec. of 1917) can be seen as nationalist Finns shrewdly taking advantage of a window of opportunity to withdraw from their “arrangement” with their large and domineering neighbour.
The Soviet government changed its mind some decades down the road, though.
1b. “Perhaps most effective is reaching students as early as elementary school. Educators should focus on teaching technical skills, like coding and statistics, and skills that still give humans an edge over machines, like creativity and collaboration, experts say.”
Is it “teach statistics in elementary school” or “improve elementary school education (no elaboration) and teach statistics in high school”? Horrible article.
You can’t teach statistics in elementary school and high school is properly given over to more foundational material in both mathematics and social research.
Actually, I’ve taught my fourth grader statistics, social research and arithmetic in a two-week beach vacation.
Academic types call it blackjack.
I thought of learning dice games (Axis and Allies, etc) when I was 14. I had a pretty good foundation for statistics before I had my first formal course.
Yes, Black Man, I was wondering if embarrassment was one of the reasons of this silence, and that seems probable.
Another possible reason is that the modern progressives have moved, on certain values, to the opposite of the marxist/bolshevik ideal. From “Proletarians of all countries, Unite!” to “if you’re not a black trans lesbian vegan fat person, out of my backyard!”. So perhaps a conservative would be able to find more charms in the Russian revolution (like Lukács, as explained in link 5) than a current progressive could.
Identity politics trumps class. We will see if this continues.
Networked ignorance Trumps learning.
Really recognizable writing styles trump fake handles.
5. Lukacs’ statue is removed from Budapest, and Calhoun’s name is removed from Yale. (Tyler, of course, has no thoughts about or knowledge of the latter event.) Neither event troubles me, since both were morally ambiguous men (as are we all), and each generation is surely entitled to honor whom it pleases.
Surely the removal of statues and the renaming of buildings constitute evidence of a re-founding of those polities according to “new modes and orders,” to use the language of Machiavelli. This isn’t to say whether it is good or bad, just that it is something momentous for members of those polities. Although, as is typical, those fond of the previous regime will call those fond of the new regime evil and criminal, and vice versa.
The legend of John C. Calhoun is in trouble in many places. But not Fighting Phil Sheridan: http://nailheadtom.blogspot.com/2016/08/political-correctness-on-two-wheels.html
#1. Require companies introducing automation to offer employees stock options as compensation for the risk of layoffs. If the employee gets laid off and the company does well, the employee gets a share of the profits. If the company goes broke anyways, then the employee was going to get laid off no matter what.
Congratulations, you thought up a solution that’s even worse than anything in the article.
In the world of Certain People, every company is (a) publicly traded; and (b) large enough that they have spare stock certificates just lying around and unaccounted for — used as coasters at Board meetings, and so forth; and (c) has no restrictions from anyone at all regarding how they can be distributed.
Unlimited free candy, basically.
Why are you against free candy for the working people????
Many companies have employee stock purchase programs or other profit-sharing mechanisms. It’s not like nobody has thought of this concept before.
A company doesn’t have to be publicly traded to offer stock options. Furthermore, most companies that go to the trouble of creating a stock issuance almost always create a very large number of ‘extra’ shares. It’s trivial to offer them to employees. Of course, it dilutes the value of the existing stock holders. And there’s the rub.
Of course, it dilutes the value of the existing stock holders.
God forbid the existing stock holders make a small sacrifice so that displaced workers can benefit slightly (or at least not be completely screwed) from their jobs being automated. Can’t have that. “At will” employment is supposed to mean “fuck off and die” at the end of the employment contract.
Anyway, I would think a mechanism that directs redistribution to only workers actually affected by automation (and in proportion to the profits thereof!) would be preferable to distributing handouts indiscriminately to every unemployed person regardless of how or why they got that way, such as with a UBI.
“God forbid the existing stock holders make a small sacrifice so that displaced workers can benefit slightly (or at least not be completely screwed) from their jobs being automated.”
But you oppose tariffs, right? Because we need to retard progress in our factories so China and Japan will have maximum advantage out-competing us. Go away cuck.
Actually I would suggest that the same stock offering be offered to employees whose jobs are outsourced. That’s actually where I first came up with the idea.
Is it a sacrifice if you are forced to do something? You’re basically talking about a tax on robots
I think what accountants call ‘treasury stock’ is used for these purposes. If I understand correctly, issuance of treasury stock is illegal in some countries.
#2. Nobody seems to be wondering why the Cesium-137 doesn’t seem to be harming the boars. Or whether it is harming them at all.
Maybe half-life varies from species to species.
Wonder what Bohr would have thought of this?
If you mean radioactive half-life, that’s physically impossible.
It’s possible that uptake and retention of radioactive substances varies by species. It’s also possible that the harmful effects are species specific.
This is actually a great laboratory for finding those things out. Scientists should be all over this.
“Maybe half-life varies from species to species.”
Hazel’s correct. Half-life is a fundamental physical property of a given isotope.
(The half life of Cesium 137 is 30.2 years, by the way.)
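Because the half-life is fixed for the isotope, the fraction of a sample remaining follows the standard decay law, N(t)/N₀ = (1/2)^(t/T). A quick sketch using the ~30.2-year figure:

```python
def fraction_remaining(years, half_life=30.2):
    """Fraction of a Cs-137 sample left after `years`
    (Cs-137 half-life is about 30.2 years)."""
    return 0.5 ** (years / half_life)

print(fraction_remaining(30.2))          # 0.5 after one half-life
print(round(fraction_remaining(100), 3)) # 0.101, i.e. ~10% left after a century
```

So the contamination in the exclusion zone will take on the order of a century to fall to a tenth of its current level, whatever its effect on the boars.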
People sometimes use the word “half-life” for things other than radioactive decay, though. He might be thinking of biological half-life: how long the substance stays in the body after consumption.
Couldn’t resist, jk; I have a Master’s in Physics.
I’m stupid and missed the joke. Now I see it. Sorry for making you explain.
Yum, slow cooked pork.
I assume it does harm them, but they only live 5 or so years so things like cancers might not have time to develop before they reproduce. There is plenty of food and no predators.
You would think at levels 300 times the safety limit that there would be some evidence of short term harm though. Or maybe the safety limits are set absurdly low out of an abundance of paranoia.
It is entirely possible that the incidence of cancer has increased dramatically among the boars, and the life expectancy of the population decreased dramatically. That isn’t something you could see from a quick visit to the area.
Yes, it’s possible. It would be nice to actually study it and find out though.
#3 – This has been going on for a long time. When some friends and I looked into opening a bar to spend some of our dot-com gains, we realized you could order an Irish pub, complete with Irish bartenders (visa status unclear), and assorted stereotypes. You just supply the space and the liquor license along with the Honduran barbacks. Ironically, we wanted to open it in an area where there were several authentic Irish bars.
1a. What would happen first? a) Pure artificial intelligence, meaning software running on humanoid hardware; b) humans modified (bionics) to extend their cognitive capabilities; or c) improved interfaces between humans and computers?
I’d say option c is the most probable. The history so far has been keypunch, keyboard, mouse, trackpad, touch screens, voice commands. These interfaces can be divided into developer tools (keyboard & mouse) and end-user tools (touch screens, voice commands).
Sometimes I feel my computer is so powerful, and I am so adaptable and intelligent, but our information exchange is so damned slow. I can read many more words per minute than I can type on a keyboard. Voice recognition could help if the computer recognized keywords describing a complicated task. It would be really practical if I could code using voice commands: “go to line 125 and insert a for loop”, “read a text file with 4 columns separated by commas and an unknown number of rows”, “multiply matrix A by vector C, plot the result in a histogram with 15 buckets”, “export to X data format”… and the bug-free code for those tasks would just appear on the screen.
I hope a new developer interface comes around before I’m too old to adapt to it. So far, even the futuristic Microsoft HoloLens is just another consumption interface.
Shit, someone already developed it… http://vocola.net/default.asp
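For what it’s worth, some of those dictated tasks are only a few lines of ordinary code. A hypothetical sketch of what a command like “read a comma-separated file with 4 columns, multiply matrix A by vector C” might generate (the sample data and variable names are made up for illustration):

```python
import csv
import io

# Stand-in for "read a text file with 4 columns separated by commas,
# unknown number of rows" -- here an in-memory sample instead of a real file.
sample = io.StringIO("1,2,3,4\n5,6,7,8\n9,10,11,12\n")
rows = [[float(x) for x in row] for row in csv.reader(sample)]

# "Multiply matrix A by vector C": plain dot products, no libraries needed.
A = rows                   # 3x4 matrix read from the file
C = [1.0, 0.0, 1.0, 0.0]   # made-up 4-element vector
result = [sum(a * c for a, c in zip(row, C)) for row in A]
print(result)  # [4.0, 12.0, 20.0]
```

The hard part of the voice-coding dream isn’t generating this boilerplate; it’s reliably mapping ambiguous spoken intent onto it.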
1) Sure, we don’t know how to make super-intelligent AI yet. That’s the very problem.
Look, nobody is worried about AI rebelling against its creators. What they are worried about is AI doing exactly what its creators programmed it to do. Even the best programmers write code that does things other than what they intended — that’s called a “bug”, y’know? The fact that a program is a perfect slave to what the programmer actually coded isn’t much of a relief if it just, in accordance with that code, gave you a massive overdose of radiation.
And the more poorly programmers understand how their code will execute in practice, the more bugs there will be. If you’re talking about code for a more-than-human-intelligent AI, you’re talking code that will certainly be at the edge of the capacity and understanding of its creators . . . and it will be, like the Therac-25 code, “realistically impossible to test it in a clean automated way.”
So the most likely result of the first coding attempts is something that fails to do much of anything, because the creators don’t know what they’re doing. But they will try to debug it, and if they do get it to work, what’s going to happen is that you’re going to have code that, when it does exactly what it was programmed to do, does things that aren’t what the programmers intended to program it to do. And that’ll probably be pretty blatant and unlikely to be immediately disastrous, which will produce further rounds of patch-and-run.
And eventually, the flaws remaining will be subtle enough that they don’t show up in testing. Which does not actually guarantee they won’t be the sort of thing that causes disaster on wide deployment. It just means that they’ll get deployed widely before they’re caught. And which the AI will execute faithfully to its buggy code, regardless of any effect on human welfare.
I keep wondering why all these people writing about how to deal with robots keep suggesting that we reduce the workweek.
It has already fallen from 38.5 hours per week in 1964 to 33.6 in 2016.
Yes, people in the goods sectors still work a 40-hour week, and this is where most of the problems with robots seem to be happening.
But in the services sector of the economy the average workweek was 32.4 hours.
So we are already achieving what they propose as a way to deal with robots.
Of course, the problem with this is that the jobs where the workweek has already fallen are in the low wage sectors of the economy like retail and hospitality.
So maybe a shorter workweek is not the solution people seem to believe it is.
But why should we pay attention to people who do not even know that the average workweek has been falling for decades?
Should we be worried about AI?
No. There is nothing that can be done. If we create superintelligence, it will do what it likes, and it is impossible to shape or control it. Might as well be worried that a gamma ray burst is going to kill us all; there is nothing we can do other than hope. We can hope that, somehow, superintelligence is impossible, or we can hope that it will want to make humans’ lives better. That’s all we can do. Might as well continue living our lives. We’re going to continue making AI agents more and more capable, and one day they may wake up.
Waking up is not required for the robots to kill us all. There are many preventable catastrophes, and also some that are not preventable at the present time.