I have a long-percolating post that I hope to finish soon (when everything else is finished!) on “Why String Theory Must Be Right.” Not because it actually must be right, of course; it’s an hypothesis that will ultimately have to be tested against data. But there are very good reasons to think that something like string theory is going to be part of the ultimate understanding of quantum gravity, and it would be nice if more people knew what those reasons were.
Of course, it would be even nicer if those reasons were explained (to interested non-physicists as well as to other physicists who are not specialists) by string theorists themselves. Unfortunately, they’re not. Most string theorists (not all, obviously; there are laudable exceptions) seem not to think it worth their time to explain why this theory, with no empirical support whatsoever, is nevertheless so promising. (Which it is.) Meanwhile, people who think that string theory has hit a dead end and should admit defeat — who are a tiny minority of those who are well-informed about the subject — are getting their message out with devastating effectiveness.
The latest manifestation of this trend is this video dialogue on Bloggingheads.tv, featuring science writers John Horgan and George Johnson. (Via Not Even Wrong.) Horgan is explicitly anti-string theory, while Johnson is more willing to admit that it might be worthwhile, though he concedes that he’s not really qualified to pass judgment. But you’ll hear things like “string theory is just not a serious enterprise,” and see it compared to pseudoscience, postmodernism, and theology. (Pick the boogeyman of your choice!)
One of their pieces of evidence for the decline of string theory is a recent public debate between Brian Greene and Lawrence Krauss about the status of string theory. They seemed to take the very existence of such a debate as evidence that string theory isn’t really science any more — as if serious scientific subjects were never to be debated in public. Peter Woit agrees that “things are not looking good for a physical theory when there start being public debates on the subject”; indeed, I’m just about ready to give up on evolution for just that reason.
In their rush to find evidence for the conclusion they want to reach, everyone seems to be ignoring the fact that having public debates is actually a good thing, whatever the state of health of a particular field might be. The existence of a public debate isn’t evidence that a field is in trouble; it’s evidence that there is an unresolved scientific question in which many people are interested, which is wonderful. Science writers, of all people, should understand this. It’s not our job as researchers to hide away from the rest of the world until we’re absolutely sure that we’ve figured it all out, and only then share what we’ve learned; science is a process, and it needn’t be an especially esoteric one. There’s nothing illegitimate or unsavory about allowing the hoi polloi the occasional glimpse at how the sausage is made.
What is illegitimate is when the view thereby provided is highly distorted. I’ve long supported the rights of stringy skeptics to get their arguments out to a wide audience, even if I don’t agree with them myself. The correct response on the part of those of us who appreciate the promise of string theory is to come back with our (vastly superior, of course) counter-arguments. The free market of ideas, I’m sure you’ve heard it all before.
Come on, string theorists! Make some effort to explain to everyone why this set of lofty speculations is as promising as you know it to be. It won’t hurt too much, really.
Update: Just to clarify the background of the above-mentioned debate. The original idea did not come from Brian or Lawrence; it was organized (they’ve told me) by the Smithsonian to generate interest and excitement for the adventure of particle physics, especially in the DC area, and they agreed to participate to help achieve this laudable purpose. The fact, as mentioned on Bloggingheads, that the participants were joking and enjoying themselves is evidence that they are friends who respect each other and understand that they are ultimately on the same side; not evidence that string theory itself is a joke.
It would be a shame if leading scientists were discouraged from participating in such events out of fear that discussing controversies in public gave people the wrong impression about the health of their field.
Cecil wrote:
I don’t know. Maybe zero. No one has found any so far.
If you find one, however, you will find a large number. But there will be a huge degeneracy, in terms of the predictions they make for particle physics. The number of independent particle physics models (i.e., neglecting the values of Newton’s constant and the cosmological constant) will be much, much smaller.
This will yield a sparse set of points in the parameter space of (some extension of) the Standard Model. Either the real world corresponds to one of those points (to within experimental accuracy) or it doesn’t.
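To make the counting concrete, here is a toy illustration in Python (entirely my own sketch; the numbers of vacua and models are made up, and nothing here comes from an actual string construction). The point is just that when vacua differ only in quantities we ignore for particle-physics purposes, such as the value of the cosmological constant, the number of distinct particle-physics models can be vastly smaller than the number of vacua:

```python
# Toy illustration: many "vacua" collapse to far fewer distinct
# particle-physics models once we ignore the quantities (here, a
# stand-in for the cosmological constant) that don't enter the
# particle-physics predictions. All numbers are invented.

import random

random.seed(0)

n_vacua = 100_000

# Each toy vacuum = (particle-physics data, cosmological-constant data).
# Only 50 distinct particle-physics outcomes are possible here.
vacua = [(random.randrange(50), random.random()) for _ in range(n_vacua)]

# Quotient out the ignored data: keep only the particle-physics part.
distinct_models = {pp for (pp, cc) in vacua}

print(n_vacua)               # 100000 vacua ...
print(len(distinct_models))  # ... but at most 50 distinct models
```

Here 50 plays the role of the much smaller number of independent models; in a real landscape count both numbers would of course be enormously larger, but the degeneracy works the same way.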
Is that clear enough?
Marty wrote:
Sure, æsthetics can be a useful guide. But, like everything else, our æsthetic sense is something acquired from experience, not some fixed set of criteria that can be imparted to beginning graduate students and will serve them for the rest of their professional lives.
Concert audiences recoiled in horror when they first heard the music of Stravinsky. Only much later did they (well, most of them) learn to appreciate its beauty.
Mother Nature is more oblivious than any composer ever could be to whether we appreciate what she is telling us.
String theory has proven to be a vastly richer and vastly more beautiful theory than anyone could have suspected in 1984.
I’m sure you read my account of how I became interested in string theory on my blog. Aesthetics had little to do with it.
Jacques,
I would like to repeat my question from comment #250:
A long time ago, in comment #22, Eric Mayes made the statement
“we can now completely derive the MSSM from string theory [..]”.
You wrote later, “Obviously, no one has yet found a convincing candidate for the Standard Model”.
How should one resolve this apparent contradiction?
Had a few minutes this morning before I leave for the airport to look into the literature about the claims being made here that were new to me.
1. Last night before going to bed, people here had me somewhat convinced that there might be more of a possibility of some weak kind of string theory prediction about ratios of superpartner masses than I had thought. Thanks to “c” for providing the reference http://arxiv.org/abs/hep-ph/0702146.
Taking a look at it, I see pretty much the kind of “predictions” I’ve always seen in this area, which I think it is not unfair to characterize as “you can get pretty much whatever you want”. There’s nothing like a prediction that could be used to falsify string theory here (which I guess kind of explains what was bothering me last night, why I hadn’t seen that claim before, since I’ve certainly been looking).
The authors identify 12 different classes of schemes to get supersymmetry breaking, mostly from string theory, and don’t claim the list is exhaustive. Ten of the classes lead to three separate patterns of ratios (although one of these patterns involves an undetermined parameter), and there are various questions about exactly how to relate these ratios to observed masses. For the other two classes (“volume moduli-dominated SUSY breakdown in perturbative heterotic string theory”, and “M-theory compactification on G2 manifolds with moduli stabilization by non-perturbative dynamics”), the authors claim that predictive power is lost.
The claim is that this is actually the best situation as far as SUSY parameters go. It looks just like I would expect. Lots of different complicated classes of models, with some sorts of predictions possible in a given class of models (Joe gives an example of this kind of thing), but no predictions possible independent of a choice of specific model. There’s enough complexity to get whatever you want, so you can’t use this to make a falsifiable prediction of any conventional kind.
2. I took a look again at the Denef-Douglas paper (http://arxiv.org/abs/hep-ph/0702146), since I had read it, and didn’t remember seeing Jacques’s argument about independence of the CC there. I see that’s because it’s not in the summary section describing the significance of the paper, but is somewhat buried near the end of section 6, which I guess I didn’t read carefully enough. Denef and Douglas refer to it as “the optimistic hypothesis” and comment:
“Do we believe in it? It seems fairly plausible for Standard Model observables, but is perhaps less obvious for other properties, for example the properties of the dark matter.”
Off to catch a plane…
James Gates, featured in The Elegant Universe on PBS, gave a lecture last week at the national science teachers’ conference titled “Can String Theory Be an Educational Force Multiplier?”
He talked about string theory and how it can be used as a bridge to educate the public about science.
He wonders if the remarkable public interest in string theory can be used to bring science to the masses.
See the slides from his talk and hear an interview at: http://www.wsst.org/labtable.asp?newsID=302
“Lots of different complicated classes of models, with some sorts of predictions possible in a given class of models (Joe gives an example of this kind of thing), but no predictions possible independent of a choice of specific model.”
Peter, nobody claimed that the patterns are model independent. However, there are not 10^100 CLASSES of models such that you can get 10^100 types of patterns of gaugino mass ratios. There are so far very few actual classes where the patterns are pretty robust, and a couple of models which are still in their early development, such as the G2 compactifications, where additional threshold corrections have to be computed to make a more robust prediction. So if one of those patterns is confirmed by the LHC, it would be a huge boost to string phenomenology. If none of those patterns is confirmed, then we’ll have to look for alternative ways to stabilize the moduli.
“There’s enough complexity to get whatever you want, so you can’t use this to make a falsifiable prediction of any conventional kind”
No, there is not enough complexity to get anything you want.
As you said: “10 different classes lead to 3 separate pattern of ratios”.
Note that counting “classes” of models makes this degeneracy much bigger. If a “class”, for example the large-volume Type IIB compactifications, has 10^100 models within it and predicts THE SAME ratios for the gaugino masses, then that’s a prediction for this whole class of 10^100 models.
The claim that “There’s enough complexity to get whatever you want, so you can’t use this to make a falsifiable prediction of any conventional kind” in this context basically implies that there is an infinite number of alternative CLASSES of models with different mechanisms stabilizing the moduli and vacua with spontaneously broken SUSY. Well, this sounds quite implausible. It took many years to figure out how to fix the moduli in string theory. So far there are only a handful of ways, i.e. fluxes, non-perturbative corrections, alpha-prime and string loop corrections. Classes of models in which all the moduli are fixed and SUSY is spontaneously broken, so that one can honestly compute the F-terms (and hence the gaugino masses, scalar masses and trilinears) in terms of the microscopic parameters coming from the Kahler potential and the superpotential, are hard to come by; people have so far been able to construct only a few.
The vast majority of the “string phenomenology” models don’t do these computations and instead set the F-terms and the moduli vevs to some ad-hoc values and do phenomenology with this stuff – and these are clearly not the types of models I was talking about.
Hi Peter,
“Lots of different complicated classes of models, with some sorts of predictions possible in a given class of models (Joe gives an example of this kind of thing), but no predictions possible independent of a choice of specific model.”
‘Complicated’ is in the eye of the beholder, but otherwise – great! The LHC comes along, it skittles most or all of the models and ideas that exist, and we can all forget our theoretical prejudices, give up on the stuff that doesn’t work, and instead follow the data and the models that fit it. To me this just seems to be normal science, and not a problem in any way.
Actually, I think your comments above aren’t string-specific in any way: they could be applied, unaltered, to all BSM phenomenology. SUSY, technicolor, extra dimensions, little Higgs, invisible Higgs: doesn’t your complaint above apply equally to all of these, and other, scenarios?
Best wishes
Joe
Jacques:
Thanks for the reply. However, I am still left unclear as to the size of the set of vacua. You use terms like “large”, “huge” and “sparse” to describe it. You will admit these are relative terms. What I was hoping for was something more quantitative. The reason I say this is that if one is looking for this expected vacuum, then it would be nice to have some idea of the potential difficulty. Agreed?
Let me ask you: Do you believe that ST must produce a vacuum that is consistent with the SM, i.e., it is necessary in order for ST to be viable? If the answer is yes then shouldn’t it also be important to understand the difficulty of finding such a vacuum?
I gave more quantitative estimates above, and you said “I cannot follow your detailed mathematical arguments.” So I gave a non-technical response.
Make up your mind!
In the discussion above, I had in mind n of order 100, n’ of order 1, and R of order 10. The parameter space of the Standard Model is 19 dimensional. Simple supersymmetric extensions of the Standard model can have parameter spaces of dimension ~100.
10^{400} points in a 100-dimensional parameter space could be dense (and hence unpredictive). 10^4 points in a 100-dimensional parameter space is incredibly sparse (and hence predictive).
These are all really crude estimates. (For instance, I need to put in what range of values one might expect to obtain for each of the parameters in question and with what accuracy we can hope to measure them. Otherwise “dense” and “sparse” have little meaning. Still, this should give you some idea.)
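To see where estimates like these come from, here is a back-of-envelope sketch (my own, using the standard heuristic that N points spread over a d-dimensional unit cube have typical per-axis spacing of order N^(-1/d); the values d = 100, N = 10^400, and N = 10^4 are the ones quoted above):

```python
# Back-of-envelope: N points spread over a d-dimensional unit cube have
# a typical per-axis nearest-neighbor spacing of order N**(-1/d).
# Here d ~ 100, the dimension of the parameter space of a simple
# supersymmetric extension of the Standard Model.

def log10_spacing(log10_N: float, d: int) -> float:
    """log10 of the typical per-axis spacing of 10**log10_N points
    in a d-dimensional unit cube (spacing ~ N**(-1/d))."""
    return -log10_N / d

d = 100

print(log10_spacing(400, d))  # -4.0  : spacing ~ 10^-4 of each range
print(log10_spacing(4, d))    # -0.04 : spacing ~ 0.9 of each range
```

So 10^400 points sit roughly 10^-4 of each parameter’s range apart, far finer than any plausible measurement resolution (dense, hence unpredictive), while 10^4 points sit nearly the full range apart (sparse, hence predictive).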
OK?
Sean Carroll deleted my comment because I asked him to provide the journal title, article title, volume number, issue number and pages of any peer-reviewed HEP physics journal in which a falsifiable testable prediction from string theory has appeared.
Do you really think you’re going to win the public debate on string theory, Sean, by deleting posts asking for falsifiable testable predictions made by string theory?
That’s the scientific method.
Censoring people who demand the application of the scientific method is not a winning debate strategy, Sean.
I repeat my request.
What are the journal titles, article titles, volume numbers, issue numbers and page numbers of the peer-reviewed HEP articles containing testable falsifiable predictions made by string theory?
Can you even provide one published peer-reviewed testable falsifiable prediction in all the string theory literature?
One thing I think is safe to say: if the LHC and ILC do not see any traces of SUSY and instead just see a scalar Higgs and nothing else, a good percentage of phenomenologists will stop doing stringy constructions, and instead try to tackle the much more pressing problem of trying to solve the hierarchy problem. Forget about the CC; we will have a naturalness problem at accessible energies the likes of which will deeply shake the community.
String theory won’t be falsified, but surely the intellectual and numerical dominance it has had over the field will be lessened by that most troubling scenario.
So science will proceed just as before, and Peter’s and Lee’s worries will not come to fruition in that case. Experiment ultimately is the trend-setter, which is as it should be.
Really interesting comments with regard to string phenomenology. Thank you all for giving us the chance to read them. Even if neither side can win the “debate” (in terms of convincing their opponent), it definitely helps to inform interested observers like myself.
Back from an interesting time at the University of Central Florida, where I had the honor and pleasure of speaking together with Jim Gates. I think people were hoping for controversy and disagreement, but there was relatively little. In particular, while he should speak for himself, we didn’t seem to disagree about the string theory landscape. Will try and write something about this on my blog tomorrow or, in any case, soon.
c,
You’re deleting the relevant parts of what I wrote, then making up something I didn’t write and arguing with it. I didn’t say anything at all about an infinite number of classes of alternative models.
What I did do was point to the fact that the reference you gave says that two out of the twelve models described don’t actually predict a specific pattern of gaugino mass ratios. So, there are enough models to provide ones where you can get anything you want. This set-up is not falsifiable. No matter what gaugino mass ratios you see, some model can accommodate it.
Joe,
For the LHC to decide between SUSY models, first it has to see SUSY. What do you think will happen to this whole field of model-building if the LHC doesn’t see SUSY? I’d like to believe that Haelfix is right that people will give up on it, but I’m not sure.
As for the comparison to other ideas in beyond SM phenomenology, sure, none of them are very convincing, although some at least have more distinctive experimental signatures. Given the lack of a convincing idea, the best thing to do is to try and come up with new ideas. That’s hard, so sure, some people should continue to try and see if they can get something out of ideas that so far haven’t worked very well. But personally I think it’s not very healthy if one unpromising idea completely dominates the field the way “string phenomenology” does.
Jacques:
Thanks for the reply. I cannot follow the detailed technical arguments that you use to arrive at a conclusion; however, I certainly am able to form some opinion about the quantitative conclusions of your technical argument.
You said:
10^{400} points in a 100-dimensional parameter space could be dense (and hence unpredictive). 10^4 points in a 100-dimensional parameter space is incredibly sparse (and hence predictive).
These are all really crude estimates.
I am a retired engineer and I can understand crude estimates. But really, 396 orders of magnitude difference!!!
Look, Jacques, I have been an avid layperson follower of high energy physics, cosmology, gravity, etc., for 40 years. I have probably over 100 books on the subject, some very technical that I really don’t grasp, like GSW String Theory, and others not so technical, like Guth, Feynman, Susskind, Randall, and Greene’s books, but I try, OK?
Some ST critics like Woit do not believe in ST. I could be convinced by a simple argument, one I have been searching for over the last several years. After 22-23 years, I would have thought that ST could make some connection with the real world, reduce to GR and the SM (I guess GR is considered part of the SM), and actually make a single prediction that goes beyond the SM. This is the way all new physical theories have been developed. I am trying to understand why this is so hard for ST to do.
So trying to understand the effort involved in reducing the magnitude of the crude estimate does not seem so unreasonable.
So in the spirit of Sean’s suggestion that it would be more helpful to the ST cause if more ST researchers provided feedback to the lay public. I believe my question really gets to the heart of the matter. Just how difficult is it to connect ST to the real world?
Thanks for your time and believe me I am very receptive to a straightforward argument that any lay person can follow.
“What I did do was point to the fact that the reference you gave says that two out of the twelve models described don’t actually predict a specific pattern of gaugino mass ratios. So, there are enough models to provide ones where you can get anything you want. This set-up is not falsifiable. No matter what gaugino mass ratios you see, some model can accommodate it.”
At the end of the section about the G2 compactifications the authors state:
“In this case, the gaugino mass ratios can be determined only when one can reliably compute the values of the highly UV sensitive Gamma_phi and Omega_{aphi}, which is not available with our present understanding of M theory compactification.”
Peter, how many papers are there on those types of fluxless G2 compactifications? Did you check? There is only ONE paper, which came out in January. You are ready to make your claims that this class of models can accommodate “anything you want” based on the fact that nobody has computed the threshold effects for the anomaly mediated gaugino masses in those models? Well, there are plenty of very smart people around who will figure out how to compute those reliably and make predictions for the gaugino mass ratios.
Your claim that “you can get anything you want”, based on the fact that certain effects have not yet been computed in a brand new class of models, cannot be serious. The technical difficulty in computing the quantities Gamma and Omega certainly does not imply that this class of models can accommodate any ratio for the gaugino masses. You do understand that there is a huge difference between a model which can IN PRINCIPLE accommodate anything you want and a model where a technical problem does not YET allow one to make a robust prediction?
In fact, if you check the original paper on these G2 compactifications, there is already a very interesting prediction from the entire class of the G2 compactifications without fluxes: light gauginos with masses less than 1 TeV and very heavy scalars with masses of O(100) TeV. Hence, if the LHC discovers squarks or sleptons, this entire class of G2 compactifications will be ruled out!
c,
You’re still ignoring the last class of models in which the authors say no prediction is possible and you aren’t providing any evidence that G2 compactifications will become predictive about this (whether they give predictions about something else is irrelevant). I think “a” a while back is a phenomenologist who is telling you what the informed consensus about this question is:
“c, some phenomenological models make specific predictions for the two ratios among gaugino masses (and combining them one can fit whatever will be measured), but I am not aware of any stringy prediction for gaugino masses. ”
I don’t think it’s at all helpful for the credibility of string theory to be making public claims of predictions which aren’t sustainable. There has been a lot of that going on…
“There is only ONE paper, which came out in January.”
Correction: To be more precise, there was a 4 page paper by the same authors which came out last summer, but it was a very short version of the one in January and had no analysis of the dS vacua Choi and Nilles are talking about in the “Gaugino Code”.
By the way, now that I have thought about it, the UV sensitivity of the threshold correction to the anomaly mediated contribution (which is itself a one-loop effect) is not likely to change the final result by much, maybe O(10-20)% at the most. Although this would not be the kind of “robust” prediction Choi and Nilles want to obtain, it would still be far from “anything you want”. If they are correct in their claim that the UV physics does affect the gaugino masses to some extent, this class of models would directly probe the UV physics! Don’t you think this is exciting?
“c, some phenomenological models make specific predictions for the two ratios among gaugino masses (and combining them one can fit whatever will be measured), but I am not aware of any stringy prediction for gaugino masses. ”
See, the G2 paper: hep-th/0701034 page 56 for the tree-level gaugino mass calculation and pages 58-60 for the anomaly mediated contributions. Those are pretty honest “stringy” calculations coming from M-theory on G2.
“You’re still ignoring the last class of models in which the authors say no prediction is possible and you aren’t providing any evidence that G2 compactifications will become predictive about this”
About the last class of models:
For the specific toroidal compactification discussed in Nilles and Choi, the string threshold correction tilde{M}_a|_{string} has been computed in terms of the Dedekind eta function, and the explicit expression is given. Hence the UV sensitivity is known for this particular model. Now, Nilles and Choi speculate that the more generic models (which have not yet been constructed) would also be UV sensitive. I don’t see why this would necessarily be the case. The reason for the UV sensitivity in the simple toroidal model is the suppression of the dilaton F-term F_s, which contributes to the tree-level gaugino mass. Has anyone demonstrated that it will be suppressed in the general Calabi-Yau case? I don’t think that Choi and Nilles in their review paper have demonstrated this. So, their remark is a speculation about general models which have not yet been constructed. The specific toroidal model they are discussing is predictive because the UV physics contribution is known.
About the G2 models, in more detail: the authors cite the Friedmann and Witten paper hep-th/0211269, where the M-theory threshold corrections to the gauge kinetic function have been explicitly computed, and they are actually constants, independent of either the moduli or the hidden sector matter phi. So, I don’t know what Nilles and Choi have in mind about Omega being dependent on phi. The assumption about the dependence of Omega on phi through e^(K/3)Z or the other coupling is a speculation which assumes that the heavy M-theory modes contribute via SUGRA-type couplings like e^(K/3)Z. But we know, for instance, that in the heterotic example I discussed above, the string threshold corrections are expressed in terms of the Dedekind eta function of the moduli, which has nothing to do with N=1 D=4 SUGRA couplings.
The bottom line is that the Nilles and Choi paper is a review of models in which many classes make robust predictions about the gaugino mass patterns, which can be tested by the LHC. The G2 model is less robust (this is excusable since this is a very new class of compactifications), but the corrections can be taken into account, i.e. using Friedmann and Witten’s work. Corrections to Gamma can be easily estimated and should not give more than O(10-20)% error in the total gaugino masses. You can just add terms with higher powers of phi to see this.
My apologies, Cecil. We really have been talking at cross-purposes.
Peter Woit and I were arguing about the proposition that the Landscape renders string theory unpredictive even in principle (“you can get anything you want”). That is, about the proposition that there are “too many” SM-like vacua.
You are asking why we have not yet found any vacua which are viable candidates for the SM (though we’ve found lots of vacua that come close). These are orthogonal issues and little of my argument with Woit has bearing on your question.
The problem of constructing such vacua and calculating their properties is a hard one, and the technical tools are still under development (see, e.g., c’s comment above). None of the requisite tools existed 20 years ago. Some crucial ones did not exist 5 years ago.
But, then, you’ve surely heard that before.
The fact that the problem is hard does not make it unworthy of study (people have been studying the Navier-Stokes equation for over a century, with relatively little progress). I suppose one could review the progress that has been made on this, and explain why people continue to be optimistic.
(Of course, a prerequisite to optimism is that the “you can get anything you want” argument is clearly wrong; but then you’ve heard that, too.)
But, rereading your comments, it’s not clear to me that’s what you are really interested in either.
Peter,
Just out of curiosity, if the LHC does find superpartners (as I know you expect it won’t), would you reconsider your opposition to string theory? I know that supersymmetry is possible even without string theory, but it still seems like it would be one crucial element falling into place.
Jacques:
Thanks for the reply. That is EXACTLY what I would like to hear. It is hard, if not very hard, to find a single vacuum that is consistent with the SM. How hard is the problem? Who really knows? It would be very helpful if ST researchers would at least attempt to explain how hard it is, and exactly where in this quest the current research effort stands, and do it in layman’s language.
I do appreciate your responding to my questions. Sometimes lay people can ask questions that really separate the wheat from the chaff if given the opportunity.
Hi Peter,
“For the LHC to decide between SUSY models, first it has to see SUSY. What do you think will happen to this whole field of model-building if the LHC doesn’t see SUSY? I’d like to believe that Haelfix is right that people will give up on it, but I’m not sure.”
Give up and go home 🙂 No, seriously, if the LHC sees anything clearly BSM it’s obviously tremendously, hugely, career-definingly exciting. In that case, I say follow the data and enjoy the ride.
But let’s not be too confident the LHC won’t see SUSY: there is after all > 3 sigma evidence for BSM physics in muon g-2, which would seem to be best explained by susy 😉
“That’s hard, so sure, some people should continue to try and see if they can get something out of ideas that so far haven’t worked very well. But personally I think it’s not very healthy if one unpromising idea completely dominates the field the way “string phenomenology” does. ”
I don’t agree that string phenomenology ‘dominates the field’. Certainly, hep-ph is not being smothered by stringy constructions of the MSSM. The ‘try to compute the superpartner mass spectrum’ part of string phenomenology is a relatively recent development, from 2005 onwards. It only makes sense to try to do this computation once you can stabilise the moduli, and moduli-stabilising constructions are relatively new. Even then, it requires lots of care, knowledge about how the SM is embedded in the compactification, etc. I think the reason there has been lots of work on this recently is that the technical tools are now sufficiently developed that there is a sporting chance of doing the computation and believing the result. There is real progress being made here, and I think this justifies the current interest in this topic.
The other point to make is that, whatever the LHC ends up seeing, the best preparation for the data that will come out of the LHC has to be studying particular models in detail. Even if scenario X is wrong, thinking carefully about how scenario X manifests itself in a collider will massively help in understanding scenario Y which is actually relevant.
Best wishes
Joe
TimG,
Just finding superpartners wouldn’t necessarily address the problems with the lack of predictivity of string theory. So, for generic values of the observed superpartner properties, no, this wouldn’t change my opposition (which is not to “string theory”, but to the way of using it to unify particle physics and gravity that people are pursuing). If the superpartners have properties that are explained by a reasonably simple string theory model (in the sense that the inputs to the model are simpler than the outputs it is matching to experiment) I’d certainly reconsider, and if the model makes testable predictions that were checked, of course I would agree that I had been completely wrong, highly foolish, and shut up about this for good.
For any values of the superpartner parameters, I surely would start taking a lot more interest in the details of the various supersymmetry breaking schemes out there.
c,
People can read it for themselves, but I think the Choi-Nilles review is clear in its claims, and they are explicitly not claiming what you are, that string theory leads to specific values for the gaugino mass ratios. I’ve never seen in the literature a claim that string theory leads to solid LHC predictions, and the review just explains one aspect of why this is true. You argue that some of these models are very recent, so we need to wait before evaluating the situation. Extrapolating from the past, it seems likely that all that will happen in the future will be that there will be more models and more possibilities.
Cecil,
As usual, please don’t believe a word of what Jacques Distler tells you my argument is, without reading my actual argument, which is made all too extensively above. The problem with string theory unification is not the abstract problem Jacques would like to discuss (which I don’t think we’ll ever know the answer to), but the undeniable fact that string theory now makes no predictions about particle physics. There are clear reasons for this undeniable current failure, and no good reasons to believe that further elaboration of this failed framework will change the situation.