Handing the Universe Over to Europe

Back in the day (ten years ago), I served on a NASA panel charged with developing a long-term roadmap for NASA’s astrophysics missions. At the time there were complaints from Congress and the Office of Management and Budget that NASA was asking for lots of things, but without any overarching strategy. Whether that was true or not, we recognized the need to make hard choices and put forward a coherent plan. The result was the Beyond Einstein roadmap. We were ambitious but, we thought, reasonable, and the feedback we received from Congress and elsewhere was generally quite positive.

Hahahahaha. In the end, almost nothing that we proposed is actually being carried out. Our roadmap had different ingredients (to mix a metaphor): two large “facility-class” missions comparable to NASA’s Great Observatories, three more moderate “Einstein Probes” to study dark energy, inflation, and black holes, and more speculative “Vision missions” for further down the road. The Einstein Probes have long since fallen by the wayside, although the dark-energy mission might find new life via one of the telescopes donated to NASA by the National Reconnaissance Office. If we don’t have the willpower/resources to do the moderate-sized missions, you might suspect that the facility-class missions are even more hopeless, and you’d be right.

But never fear! Word out of Europe (although still not official, apparently) is that the ESA has prioritized missions to study the “hot and energetic universe” and the “gravitational universe.” These map pretty well onto Constellation-X and LISA, the two facility-class missions we recommended pursuing in Beyond Einstein. The former would have been an X-ray telescope, while the latter would be a gravitational-wave observatory. Unfortunately the likely launch date for an ESA gravitational-wave mission isn’t until 2034, which is like forever. Fortunately, China has expressed interest in such a project, which might move things along.

For anyone following the news of last year’s Higgs discovery, it’s a familiar story. Here in the US we had a big particle accelerator planned, the SSC, which was canceled in 1993. That allowed CERN time and money to build the LHC, which eventually found the Higgs (and who knows what else it will find in the future). The US makes big plans, loses nerve, and Europe (or someone else) picks up the pieces.

Personally, I could not possibly care less which country gets the credit for scientific discoveries. If we someday map out the spacetime geometry around a black hole using data from a gravitational-wave observatory, whether it was launched by Europe or the US or China or India or Dubai matters to me not one whit. But I do want to see it launched by somebody. And the health of global science is certainly better off when the US is an active and energetic participant — the more resources and more competition we see in the field, the more benefits for everybody. Let’s hope we find a way for US science to shift back into high gear, so that we are players rather than merely spectators in this amazing game.


Billions of Worlds

I’m old enough to remember when we had nine planets in the Solar System, and zero outside. The news since then has been mixed. Here in our neighborhood we’re down to only eight planets; but in the wider galaxy, we’ve obtained direct evidence for about a thousand, with another several thousand candidates. [Thanks to Peter Edmonds for a correction there.] Now that we have real data, what used to be guesswork gives way to best-fit statistical inference. How many potentially habitable planets are there in the Milky Way, given some supposition about what counts as “habitable”? Well, there are about 200 billion stars in the galaxy. And about one in five are roughly Sun-like. And now our best estimate is that about one in five of them has a somewhat Earth-like planet. So you do the math: about eight billion Earth-like planets. (Here’s the PNAS paper, by Petigura, Howard, and Marcy.)
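Written out, that estimate is just the product of the three round numbers quoted above (N here is simply my shorthand for the count of Earth-like planets):

N \approx 2\times 10^{11}\ \mathrm{stars} \times \tfrac{1}{5} \times \tfrac{1}{5} = 8\times 10^{9}.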

[Image: Kepler]

“Earth-like” doesn’t mean “littered with human-esque living organisms,” of course. The number of potentially habitable planets is a big number, but to get the number of intelligent civilizations we need to multiply by the fraction of such planets that are home to such civilizations. And we don’t know that.

It’s surprising how many people resist this conclusion. To drive it home, consider a very simplified model of the Drake equation.

x = a \cdot b.

x equals a times b. Now I give you a, and ask you to estimate x. Well, you can’t. You don’t know b. In the abstract this seems obvious, but there’s a temptation to think that if a (the number of Earth-like planets) is really big, then x (the number of intelligent civilizations) must be pretty big too. As if it’s just not possible that b (the fraction of Earth-like planets with intelligent life) could be that small. But it could be! It could be 10⁻¹⁰⁰, in which case there could be billions of Earth-like planets for every particle in the observable universe and still it would be unlikely that any of the others contained intelligent life. Our knowledge of how easy it is for life to start, and what happens once it does, is pretty pitifully bad right now.
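To make the 10⁻¹⁰⁰ scenario concrete, take roughly 10⁹⁰ particles in the observable universe as a generous ballpark (that figure is my assumption, not a number from the post). Then

a \sim 10^{9} \cdot 10^{90} = 10^{99}, \qquad x = a \cdot b \sim 10^{99} \cdot 10^{-100} = 0.1,

so even with billions of planets per particle, the expected number of other intelligent civilizations would still be less than one.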

On the other hand — maybe b isn’t that small, and there really are (or perhaps “have been”) many other intelligent civilizations in the Milky Way. No matter what UFO enthusiasts might think, we haven’t actually found any yet. The galaxy is big, but its spatial extent (about a hundred thousand light-years) is not all that forbidding when you compare it to its age (billions of years). It wouldn’t have been that hard for a plucky civilization from way back when to colonize the galaxy, whether in person or using self-replicating robots. It’s not the slightest bit surprising (to me) that we haven’t heard anything by pointing radio telescopes at the sky — beaming out electromagnetic radiation in all directions seems like an extraordinarily wasteful way to go about communicating. Much better to send spacecraft to lurk around likely star systems, à la the monolith from 2001. But we haven’t found any such thing, and 2001 was over a decade ago. That’s the Fermi paradox — where is everyone?
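To put a rough number on “not all that forbidding”: even at a leisurely one-thousandth of the speed of light (an illustrative figure of mine, not one from the post), crossing the galaxy takes

t_{\rm cross} \sim \frac{10^{5}\ \mathrm{light\ years}}{10^{-3}\,c} = 10^{8}\ \mathrm{years},

a hundred million years, far less than the billions of years the galaxy has been around.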

It isn’t hard to come up with solutions to the Fermi paradox. Maybe life is just rare, or maybe intelligence generally leads to self-destruction. I don’t have strong feelings one way or another, but I suspect that more credence should be given to a somewhat disturbing possibility: the Enlightenment/Boredom Hypothesis (EBH).

The EBH is basically the idea that life is kind of like tic-tac-toe. It’s fun for a while, but eventually you figure it out, and after that it gets kind of boring. Or, in slightly more exalted terms, intelligent beings learn to overcome the petty drives of the material world, and come to an understanding that all that strife and striving was to no particular purpose. We are imbued by evolution with a desire to survive and continue the species, but perhaps a sufficiently advanced civilization overcomes all that. Maybe they perfect life, figure out everything worth figuring out, and simply stop.

I’m not saying the EBH is likely, but I think it’s on the table as a respectable possibility. The Solar System is over four billion years old, but humans reached behavioral modernity only a few tens of thousands of years ago, and figured out how to do science only a few hundred years ago. Realistically, there’s no way we can possibly predict what humanity will evolve into over the next few hundreds of thousands or millions of years. Maybe the swashbuckling, galaxy-conquering impulse is something that intelligent species rapidly outgrow or grow tired of. It’s an empirical question — we should keep looking, not be discouraged by speculative musings for which there’s little evidence. While we’re still in swashbuckling mode, there’s no reason we shouldn’t enjoy it a little.


Back In the Saddle

So apparently I just took an unscheduled blogging hiatus over the past couple of weeks. Sorry about that — it wasn’t at all intentional, real life just got in the way. It was a fun kind of real life — trips to Atlanta, NYC, and Century City, all of which I hope to chat about soon enough.

Anything happen while I was gone? Oh yeah, dark matter was not discovered. More specifically, the LUX experiment released new limits, which at face value rule out some of those intriguing hints that might have been pointing toward lighter-than-expected dark matter particles. (Not everyone thinks things should be taken at face value, but we’ll see.) I didn’t get a chance to comment at the time, but Jester and Matt Strassler have you covered.

[Image: LUX]

Let me just emphasize: there’s still plenty of room for dark matter in general, and WIMPs (weakly interacting massive particles, the particular kind of dark matter experiments like this are looking for) in particular. The parameter space is shaved off a bit, but it’s far from exhausted. Not finding a signal in a certain region of parameter space certainly decreases the Bayesian probability that a model is true, but in this case there’s still plenty of room.

Not that there will be forever. If dark matter is a WIMP, it should be detectable, as long as we build sensitive enough experiments. Of course there are plenty of non-WIMP models out there, well worth exploring. But for the moment Nature is just asking that we be a little more patient.
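For readers who want the Bayesian statement in the previous paragraph spelled out, the schematic update is just the standard formula (nothing specific to LUX or to any particular WIMP model is assumed here):

P(\mathrm{model} \,|\, \mathrm{no\ signal}) = \frac{P(\mathrm{no\ signal} \,|\, \mathrm{model}) \cdot P(\mathrm{model})}{P(\mathrm{no\ signal})}.

A null result is somewhat less probable if the model is true in the region being probed, so the posterior goes down; but it goes down only modestly as long as most of the model’s parameter space remains untested.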


Is Time Real?

I mentioned some time back the Closer to Truth series, in which Robert Lawrence Kuhn chats with scientists, philosophers, and theologians about the Big Questions. Apparently some excerpts are now appearing on YouTube — here I am talking about whether time is real.

[Video: Sean Carroll - Is Time Real?]

In one sense, it’s a silly question. The “reality” of something is only an interesting issue if it’s a well-defined concept whose actual existence is in question, like Bigfoot or supersymmetry. For concepts like “time,” which are unambiguously part of a useful vocabulary we have for describing the world, talking about “reality” is just a bit of harmless gassing. They may be emergent or fundamental, but they’re definitely there. (Feel free to substitute “free will” for “time” if you like.) Temperature and pressure didn’t stop being real once we understood them as emergent properties of an underlying atomic description.

The question of whether time is fundamental or emergent is, on the other hand, crucially important. I have no idea what the answer is (and neither does anybody else). Modern theories of fundamental physics and cosmology include both possibilities among the respectable proposals.

Note that I haven’t actually watched the above video, and it’s been more than three years since the interview. Let me know if I said anything egregiously wrong. (I’m sure you will.)


Inside the Mind of the Republican Party

The rest of the world is looking at the United States and wondering, with good reason, why we have gone crazy. Not the entire country has gone crazy, of course. But we have a system of government in which a medium-sized minority can bring things crashing down if they so choose, and exactly such a group is rending one of the major parties apart. The minority group is roughly “the Republican base,” an uneasy alliance of Evangelical Christians and the Tea Party.

So it’s interesting and important to understand what these folks really think — something the media, with its valorization of drama, isn’t very good at conveying. The polling organization run by James Carville and Stanley Greenberg has recently tackled the issue, and presents a fascinating summary of what the concerns of the Republican base really are. (Carville and Greenberg are committed Democrats, of course, but I got the link from The American Conservative, where Rod Dreher completely agrees and expresses his horror and dismay.)

Here are the ideas floating in the mind of an average member of the Republican base, expressed in convenient word-cloud form:

[Image: Tea Party word cloud]

For slightly more detail, here are the bullet-pointed main findings:

[Image: key findings from the report]

Most of the Republican base are not fat-cat plutocrats — there aren’t enough of those people to make up a sufficiently substantial voting bloc. A lot of the people described here are poor or at best middle-class, but their cultural identity and self-image are derived in large part from race/nation/religion/lifestyle categories that they see as under attack. The dominant emotions here are fearful ones. (I don’t mean to be condescending by talking about “these people”; this is the environment that I grew up in myself.)

This kind of analysis helps understand why Obamacare — which, for all its faults, is primarily aimed at providing health insurance to more people, many of whom are squarely in the Republican base — is such a hot-button issue. It’s not that they don’t want health insurance; it’s not even that they don’t want the government involved (since they love Medicare and Social Security). It’s that they see Obamacare as a craven ploy to get more people (people not like them) dependent on the government, establishing a permanent Democratic majority, and therefore easing the way for more power going to immigrants, gays, and so on.

Some of their analysis is actually correct! The demographics are tending strongly against what we now think of as the Republican base. The world is changing, and they don’t like it.

The scariest part of the report is that last bullet point, that “climate is next.” The Republican civil war is already bringing the US to the brink of financial disaster. It could end up causing the entire planet immeasurable harm. Scientists need to realize that the climate change debate, like the creationism-in-schools debate from a while back, is actually not about scientific facts. It’s about culture, and that’s a much more difficult problem to address.


Don’t Start None, Won’t Be None

[Final update: DNLee’s blog post has been reinstated at Scientific American. I’m therefore removing it from here; traffic should go to her.]

[Update: The original offender, “Ofek” at Biology Online, has now been fired, and the organization has apologized. Scientific American editor Mariette DiChristina has also offered a fuller explanation.]

Something that happens every day, to me and many other people who write things: you get asked to do something for free. There’s an idea that mere “writing” isn’t actually “work,” and besides which “exposure” should be more than enough recompense. (Can I eat exposure? Can I smoke it?)

You know, that’s okay. I’m constantly asking people to do things for less recompense than their time is worth; it’s worth a shot. For a young writer who is trying to build a career, exposure might actually be valuable. But most of the time the writer will politely say no and everyone will move on.

For example, just recently an editor named “Ofek” at Biology-Online.org asked DNLee to provide some free content for him. She responded with:

Thank you very much for your reply.
But I will have to decline your offer.
Have a great day.

Here’s what happens less often: the person asking for free content, rather than moving on, responds by saying

Because we don’t pay for blog entries?
Are you an urban scientist or an urban whore?

Where I grew up, when people politely turn down your request for free stuff, it’s impolite to call them a “whore.” It’s especially bad when you take into account the fact that we live in a world where women are being pushed away from science, one where how often your papers get cited correlates strongly with your gender, and so on.

DNLee was a bit taken aback, with good reason. So she took to her blog to respond. It was a colorful, fun, finely-crafted retort — and also very important, because this is the kind of stuff that shouldn’t happen in this day and age. Especially because the offender isn’t just some kid with a website; Biology Online is a purportedly respectable site, part of the Scientific American “Partners Network.” One would hope that SciAm would demand an apology from Ofek, or consider cutting their ties with the organization.

Sadly that’s not what happened. If you click on the link in the previous paragraph, you’ll get an error. That’s because Scientific American, where DNLee’s blog is hosted, decided it wasn’t appropriate and took it down.

It’s true that this particular post was not primarily concerned with conveying substantive scientific content. Like, you know, countless other posts on the SciAm network, or most other blogs. But it wasn’t about gossip or what someone had for lunch, either; interactions between actual human beings engaged in the communication of scientific results are a crucial part of the science/culture/community ecosystem. DNLee’s post was written in a jocular style, but it wasn’t only on-topic, it was extremely important. Taking it down was exactly the wrong decision.

I have enormous respect for Scientific American as an institution, so I’m going to hope that this is a temporary mistake, and after contemplating a bit they decide to do the right thing, restoring DNLee’s post and censuring the guy who called her a whore. But meanwhile, I’m joining others by copying the original post here. Ultimately it’s going to get way more publicity than it would have otherwise. Maybe someday people will learn how the internet works.

Here is DNLee. (Words cannot express how much I love the final picture.)

——————————————————–

(This is where I used to mirror the original blog post, which has now been restored.)


Englert and Higgs

Congratulations to Francois Englert and Peter Higgs for winning this year’s Nobel Prize in Physics. However annoying the self-imposed rules are that prevent the prize from more accurately reflecting the actual contributions, there’s no question that the work being honored this time around is truly worthy.

To me, the proposal of the Higgs mechanism is one of the absolutely most impressive examples we have of the precision and restrictiveness of Nature’s workings at a deep level — something that sometimes gets lost in the hand-waving analogies we are necessarily reduced to when trying to explain hard ideas to a wide audience. There they were, back in 1964 — Englert and Higgs, as well as Anderson, Brout, Guralnik, Hagen, and Kibble — confronted with a relatively abstract-sounding problem: how can you make a model for the nuclear forces that is based on local symmetry, like electromagnetism and gravity, but nevertheless only stretches over short ranges, like we actually observe? (None of these folks were thinking about “giving particles mass”; that only came in 1967, with Weinberg and Salam.)

It sounds like a pretty esoteric, open-ended question. And they just sat down and thought about it, with only very crude guidance from actual data. And they went out on a limb (one that had been constructed by other physicists, like Yoichiro Nambu and Jeffrey Goldstone) and put forward a dramatic idea: empty space is filled with an invisible field that acts like fog, attenuating the lines of force and keeping the interaction short-range. How would you ever know that such an idea were true? Only because you could imagine poking that field a bit, to set it vibrating, and observe the vibrations as a new kind of particle.
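The standard way to express “short-range” quantitatively (not something the post itself writes down) is the Yukawa form of the potential between two sources,

V(r) \propto \frac{e^{-m r}}{r},

where m is the mass acquired by the force carrier; the range of the force is roughly 1/m, and a massless carrier (m = 0) gives back the familiar long-range 1/r of electromagnetism.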

And forty-eight years, billions of dollars, and thousands of dedicated people later, that particle finally showed up, as a little bump amidst trillions of collision events. Amazing.

[Images: the 2012 CMS and ATLAS Higgs results]

Here are my Top Ten Higgs Boson Facts. And here I am yakking about it on Sixty Symbols:

[Video: Talking about the Higgs Boson - Sixty Symbols]

Professors Englert and Higgs have every reason to be very proud, but this prize is really a testament to human intellectual curiosity and perseverance. And well deserved, at that.


The Nobel Prize Is Really Annoying

One of the chapters in Surely You’re Joking, Mr. Feynman is titled “Alfred Nobel’s Other Mistake.” The first being dynamite, of course, and the second being the Nobel Prize. When I first read it I was a little exasperated by Feynman’s kvetchy tone — sure, there must be a lot of nonsense associated with being named a Nobel Laureate, but it’s nevertheless a great honor, and more importantly the Prizes do a great service for science by highlighting truly good work.

These days, as I grow in wisdom and kvetchiness myself, I’m coming around to Feynman’s point of view. I still believe that on balance the Prizes are a very good thing, and generally they honor some of the very best work in physics. (Some of my best friends are winners!) But having written a book about the Higgs boson discovery, which is on everybody’s lips as a natural candidate (though not the only one!), I find all of the most annoying aspects of the process immediately apparent.

The most annoying of all the annoying aspects is, of course, the rule in physics (and the other non-peace prizes, I think) that the prize can go to at most three people. This is utterly artificial, and completely at odds with the way science is actually done these days. In my book I spread credit for the Higgs mechanism among no fewer than seven people: Philip Anderson, Francois Englert, Robert Brout (who is now deceased), Peter Higgs, Gerald Guralnik, Carl Hagen, and Tom Kibble. In a sensible world they would share the credit, but in our world we have endless pointless debates (the betting money right now seems to be pointing toward Englert and Higgs, but who knows). As far as I can tell, the “no more than three winners” rule isn’t actually written down in Nobel’s will, it’s more of a tradition that has grown up over the years. It’s kind of like the government shutdown: we made up some rules, and are now suffering because of them.

The folks who should really be annoyed are, of course, the experimentalists. There’s a real chance that no Nobel will ever be given out for the Higgs discovery, since it was carried out by very large collaborations. If that turns out to be the case, I think it will be the best possible evidence that the system is broken. I definitely appreciate that you don’t want to water down the honor associated with the prizes by handing them out to too many people (the ranks of “Nobel Laureates” would in some sense swell by the thousands if the prize were given to the ATLAS and CMS collaborations, as they should be), but it’s more important to get things right than to stick to some bureaucratic rule.

The worst thing about the prizes is that people become obsessed with them — both the scientists who want to win, and the media who write about the winners. What really matters, or should matter, is finding something new and fundamental about how nature works, either through a theoretical idea or an experimental discovery. Prizes are just the recognition thereof, not the actual point of the exercise.

Of course, neither the theorists who proposed the Higgs mechanism nor the experimentalists who found the boson actually had “win the Nobel Prize” as a primary motivation. They wanted to do good science. But once the good science is done, it’s nice to be recognized for it. And if any subset of the above-mentioned folks is awarded the prize this year or next, it will be absolutely well-deserved — it’s epochal, history-making stuff we’re talking about here. The griping from the non-winners will be immediate and perfectly understandable, but we should endeavor to honor what was actually accomplished, not just who gets the gold medals.


Guest Post: Lance Dixon on Calculating Amplitudes

[Photo: Lance Dixon]

This year’s Sakurai Prize of the American Physical Society, one of the most prestigious awards in theoretical particle physics, has been awarded to Zvi Bern, Lance Dixon, and David Kosower “for pathbreaking contributions to the calculation of perturbative scattering amplitudes, which led to a deeper understanding of quantum field theory and to powerful new tools for computing QCD processes.” An “amplitude” is the fundamental thing one wants to calculate in quantum mechanics — the probability that something happens (like two particles scattering) is given by the amplitude squared. This is one of those topics that is absolutely central to how modern particle physics is done, but it’s harder to explain the importance of a new set of calculational techniques than something marketing-friendly like finding a new particle. Nevertheless, the field pioneered by Bern, Dixon, and Kosower made a splash in the news recently, with Natalie Wolchover’s masterful piece in Quanta about the “Amplituhedron” idea being pursued by Nima Arkani-Hamed and collaborators. (See also this recent piece in Scientific American, if you subscribe.)

I thought about writing up something about scattering amplitudes in gauge theories, similar in spirit to the post on effective field theory, but quickly realized that I wasn’t nearly familiar enough with the details to do a decent job. And you’re lucky I realized it, because instead I asked Lance Dixon if he would contribute a guest post. Here’s the result, which sets a new bar for guest posts in the physics blogosphere. Thanks to Lance for doing such a great job.

—————————————————————-

“Amplitudes: The untold story of loops and legs”

Sean has graciously offered me a chance to write something about my research on scattering amplitudes in gauge theory and gravity, with my longtime collaborators, Zvi Bern and David Kosower, which has just been recognized by the Sakurai Prize for theoretical particle physics.

In short, our work was about computing things that could in principle be computed with Feynman diagrams, but it was much more efficient to use some general principles, instead of Feynman diagrams. In one sense, the collection of ideas might be considered “just tricks”, because the general principles have been around for a long time. On the other hand, they have provided results that have in turn led to new insights about the structure of gauge theory and gravity. They have also produced results for physics processes at the Large Hadron Collider that have been unachievable by other means.

The great Russian physicist, Lev Landau, a contemporary of Richard Feynman, has a quote that has been a continual source of inspiration for me: “A method is more important than a discovery, since the right method will lead to new and even more important discoveries.”

The work with Zvi and David, which has spanned two decades, is all about scattering amplitudes, which are the complex numbers that get squared in quantum mechanics to provide probabilities for incoming particles to scatter into outgoing ones. High energy physics is essentially the study of scattering amplitudes, especially those for particles moving very close to the speed of light. Two incoming particles at a high energy collider smash into each other, and a multitude of new, outgoing particles can be created from their relativistic energy. In perturbation theory, scattering amplitudes can be computed (in principle) by drawing all Feynman diagrams. The first order in perturbation theory is called tree level, because you draw all diagrams without any closed loops, which look roughly like trees. For example, one of the two tree-level Feynman diagrams for a quark and a gluon to scatter into a W boson (carrier of the weak force) and a quark is shown here.

[Figure: one of the two tree-level Feynman diagrams for qg → Wq]

We write this process as qg → Wq. To get the next approximation (called NLO), you add the one-loop corrections, all diagrams with one closed loop. One of the 11 such diagrams for the same process is shown here.

[Figure: a one-loop Feynman diagram for qg → Wq]

Then two loops (one diagram out of hundreds is shown here), and so on.

[Figure: a two-loop Feynman diagram for qg → Wq]
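Schematically, the connection between these diagrams and a measurable probability is the standard quantum-mechanical one already mentioned above (nothing beyond that is being assumed here): the amplitude is built up order by order from the diagrams, and the probability is its square,

\mathcal{A} = \mathcal{A}_{\rm tree} + \mathcal{A}_{\rm one\ loop} + \mathcal{A}_{\rm two\ loop} + \cdots, \qquad P \propto |\mathcal{A}|^{2},

with each successive term built from diagrams containing one more closed loop.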

The forces underlying the Standard Model of particle physics are all described by gauge theories, also called Yang-Mills theories. The one that holds the quarks and gluons together inside the proton is a theory of “color” forces called quantum chromodynamics (QCD). The physics at the discovery machines called hadron colliders — the Tevatron and the LHC — is dominantly that of QCD. Feynman rules, which assign a formula to each Feynman diagram, have been known since Feynman’s work in the 1940s. The ones for QCD have been known since the 1960s. Still, computing scattering amplitudes in QCD has remained a formidable problem for theorists.

Back around 1990, the state of the art for scattering amplitudes in QCD was just one loop. It was also basically limited to “four-leg” processes, which means two particles in and two particles out. For example, gg → gg (two gluons in, two gluons out). This process (or reaction) gives two “jets” of high energy hadrons at the Tevatron or the LHC. It has a very high rate (probability of happening), and gives our most direct probe of the behavior of particles at very short distances.

Another reaction that was just being computed at one loop around 1990 was qg → Wq (one of whose Feynman diagrams you saw earlier). This is another copious process and therefore an important background at the LHC. But these two processes are just the tip of an enormous iceberg; experimentalists can easily find LHC events with six or more jets (http://arxiv.org/abs/arXiv:1107.2092, http://arxiv.org/abs/arXiv:1110.3226, http://arxiv.org/abs/arXiv:1304.7098), each one coming from a high energy quark or gluon. There are many other types of complex events that they worry about too.

A big problem for theorists is that the number of Feynman diagrams grows rapidly with both the number of loops and the number of legs. In the case of the number of legs, for example, there are only 11 one-loop Feynman diagrams for qg → Wq. One diagram a day, and you are done in under two weeks; no problem. However, if you want to do instead the series of processes qg → Wqg, qg → Wqgg, qg → Wqggg, qg → Wqgggg, you face 110, 1,253, 16,648 and 256,265 Feynman diagrams. That could ruin your whole decade (or more). [See the figure; the ring-shaped blobs stand for the sum of all one-loop Feynman diagrams.]

[Figure: one-loop diagram counts for qg → Wq plus additional gluons]
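Just to make the growth vivid, here is a tiny script (mine, not part of the guest post) that takes the diagram counts quoted above and prints the factor by which the count multiplies each time one more gluon is added:

# One-loop diagram counts for qg -> Wq + 0..4 extra gluons, copied from the text above.
counts = [11, 110, 1253, 16648, 256265]

for n, (previous, current) in enumerate(zip(counts, counts[1:]), start=1):
    print(f"adding gluon #{n} multiplies the one-loop diagram count by {current / previous:.1f}")

With these numbers the multiplication factor itself keeps growing, from 10 up to roughly 15, which is part of why the brute-force approach runs out of steam so quickly.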

It’s not just the raw number of diagrams. Many of the diagrams with large numbers of external particles are much, much messier than the 11 diagrams for qg → Wq. Plus the messy diagrams tend to be numerically unstable, causing problems when you try to get numbers out. This problem definitely calls out for a new method.

Why care about all these scattering amplitudes at all? …


Poker Is a Game of Skill

Via the Seriously, Science? blog comes what looks like a pretty bad paper:

Is poker a game of skill or chance? A quasi-experimental study
Gerhard Meyer, Marc von Meduna, Tim Brosowski, Tobias Hayer

Due to intensive marketing and the rapid growth of online gambling, poker currently enjoys great popularity among large sections of the population. Although poker is legally a game of chance in most countries, some (particularly operators of private poker web sites) argue that it should be regarded as a game of skill or sport because the outcome of the game primarily depends on individual aptitude and skill. The available findings indicate that skill plays a meaningful role; however, serious methodological weaknesses and the absence of reliable information regarding the relative importance of chance and skill considerably limit the validity of extant research. Adopting a quasi-experimental approach, the present study examined the extent to which the influence of poker playing skill was more important than card distribution. Three average players and three experts sat down at a six-player table and played 60 computer-based hands of the poker variant “Texas Hold’em” for money. In each hand, one of the average players and one expert received (a) better-than-average cards (winner’s box), (b) average cards (neutral box) and (c) worse-than-average cards (loser’s box). The standardized manipulation of the card distribution controlled the factor of chance to determine differences in performance between the average and expert groups. Overall, 150 individuals participated in a “fixed-limit” game variant, and 150 individuals participated in a “no-limit” game variant. ANOVA results showed that experts did not outperform average players in terms of final cash balance…

(It’s a long abstract, I didn’t copy the whole thing.) The question “Is poker a game of skill or chance?” is a very important one, not least for legal reasons, as governments decide how to regulate the activity. However, while it’s an important question, it’s not actually an interesting one, since the answer is completely obvious: while chance is obviously an element, poker is a game of skill.

Note that chance is an element in many acknowledged games of skill, including things like baseball and basketball. (You’ve heard of “batting averages,” right?) But nobody worries about whether baseball is a game of skill, because there are obvious skill-based factors involved, like strength and hand-eye coordination. So let’s confine our attention to “decision games,” where all you do is sit down and make decisions about one thing or another. This includes games without a probabilistic component, like chess or go, but here we’re interested in games in which chance definitely enters, like poker or blackjack or Monopoly. Call these “probabilistic decision games.” (Presumably there is some accepted terminology for all these things, but I’m just making these terms up.)

So, when does a probabilistic decision game qualify as a “game of skill”? I suggest it does when the following criteria are met:

  1. There are different possible strategies a player could choose.
  2. Some strategies do better than others.
  3. The ideal “dominant strategy” is not known.

It seems perfectly obvious to me that any game fitting these criteria necessarily involves an element of skill — what’s the best strategy to use? It’s also obvious that poker certainly qualifies, as would Monopoly. Games like blackjack or craps do not, since the best possible strategy (or “least bad,” since these games are definite losers in the long run) is known. Among players using that strategy, there’s no more room for skill (outside card-counting or other forms of cheating).

Nevertheless, people continue to act like this is an interesting question. In the case of this new study, the methodology is pretty crappy, as dissected here. Most obviously, the sample size is laughably small. Each player played only sixty hands; that’s about two hours at a cardroom table, or maybe fifteen minutes or less at a fast online site. And any poker player knows that the variance in the game is quite large, even for the best players; true skill doesn’t show up until a much longer run than that.
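To see numerically how little sixty hands can reveal, here is a minimal simulation sketch (mine, not from the study; the win rate and standard deviation are illustrative assumptions in the ballpark serious players usually quote, a few big blinds of edge per hundred hands against tens of big blinds of standard deviation per hundred hands):

import random

# Illustrative assumptions, not numbers from the paper:
# a strong winner averages about 5 big blinds of profit per 100 hands,
# while the hand-to-hand standard deviation is about 8 big blinds
# (roughly 80 big blinds per 100 hands).
EDGE_PER_HAND = 0.05    # expert's mean profit per hand, in big blinds
STDEV_PER_HAND = 8.0    # per-hand standard deviation, in big blinds
HANDS = 60              # the sample size used in the study
TRIALS = 100_000        # number of simulated 60-hand sessions

ahead = 0
for _ in range(TRIALS):
    expert = sum(random.gauss(EDGE_PER_HAND, STDEV_PER_HAND) for _ in range(HANDS))
    average = sum(random.gauss(0.0, STDEV_PER_HAND) for _ in range(HANDS))
    if expert > average:
        ahead += 1

print(f"Expert beats the average player in {100 * ahead / TRIALS:.1f}% of {HANDS}-hand sessions")

With these assumptions the expert comes out ahead only a shade over half the time, which is the point about variance in numerical form: sixty hands is nowhere near enough to separate skill from luck.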

More subtly, but worse, the game that was studied wasn’t really poker. If I’m understanding the paper correctly, the cards weren’t dealt randomly, but with pre-determined better-than-average/average/worse-than-average hands. This makes it easy to compare results from different occurrences of the experiment, but it’s not real poker! Crucially, it seems like the players didn’t know about this fake dealing. But one of the crucial elements of skill in poker is understanding the possible distribution of beginning hands. Another element is getting to know your opponents over time, which this experiment doesn’t seem to have allowed for.

On Black Friday in 2011, government officials swept in and locked the accounts of players (including me) on online sites PokerStars and Full Tilt. Part of the reason was alleged corruption on the part of the owners of the sites, but part was because (under certain interpretations of the law) it’s illegal to play poker online in the US. Hopefully someday we’ll grow up and allow adults to place wagers with other adults in the privacy of their own computers.
