We’ve mentioned before that Richard Feynman was way ahead of his time when it came to the need to understand cosmological initial conditions and the low entropy of the early universe. (Among other things, of course.) Feynman actually wrote three different books in the early 1960s — in his way of “writing books,” which consisted of giving lectures and having others transcribe them — all of which made a point of discussing this problem. The Character of Physical Law was aimed at a popular audience, the Feynman Lectures on Physics were aimed at undergraduate physics majors, and the Feynman Lectures on Gravitation were aimed at advanced graduate students — and in every case he emphasized that we can only account for the Second Law of Thermodynamics by assuming a low-entropy boundary condition in the past, for which we currently have no reliable explanation. (These days we have a larger number of speculations, but still nothing reliable.)
Here’s a video clip from about ten years afterward, in 1973, where Feynman raises a similar point in a conversation with Fred Hoyle, the accomplished astronomer and a pioneer of the Steady State cosmology. (Thanks to Ronan Mehigan.) They don’t go into details, but Feynman introduces the idea as a kind of meta-issue in physics:
“What, today, do we not consider part of physics, which may ultimately be part of physics?”
His answer (which should be cued up here at the 7:10 mark) is the initial conditions of the universe, as well as the possibility that the physical laws themselves evolve with time. (Conversation continues for a tiny bit in the follow-up video. Listen on to hear Feynman explain how he doesn’t like to speculate about things.)
What’s interesting is that now, four decades later, it’s commonplace to address the issue of initial conditions in a scientific context, and even to consider the evolution of local physical laws, as we do with the multiverse and the string theory landscape. I’m not sure of the precise history of this endeavor, but in the very same year this interview was aired, Collins and Hawking wrote an early paper asking why the universe is isotropic. In 1979, Dicke and Peebles published “The Big Bang Cosmology — Enigmas and Nostrums,” which set out many of the puzzles that Alan Guth would attempt to address with the inflationary universe scenario. When we marry inflation with the idea of a landscape of vacua (whether from string theory or elsewhere), we are naturally led to the idea of an evolving set of local physical laws, raising the possibility that we might actually be able to explain (using the anthropic principle or simple probability arguments) why we observe one set of laws rather than some other. Not that we have, or even seem very close, but the scientific agenda is clear.
So how could we answer Feynman’s question today? What do we not consider part of physics, which someday we might?
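To make the role of the low-entropy boundary condition concrete, here is a toy illustration (nothing from the interview, just the standard Ehrenfest urn model with made-up parameters). N particles hop randomly between the two halves of a box; started from the low-entropy state with everything on one side, the coarse-grained entropy almost always climbs, while started from equilibrium it merely fluctuates. A minimal Python sketch:

```python
import math
import random

def entropy(n_left, n_total):
    # Coarse-grained Boltzmann entropy: the log of the number of
    # microstates with n_left particles in the left half, S = ln C(N, n_left).
    return math.log(math.comb(n_total, n_left))

def ehrenfest(n_total=100, steps=2000, n_left_start=0, seed=0):
    # Ehrenfest urn: each step, one randomly chosen particle hops to
    # the other half of the box. Returns the entropy history.
    rng = random.Random(seed)
    n_left = n_left_start
    history = [entropy(n_left, n_total)]
    for _ in range(steps):
        # The chosen particle is on the left with probability n_left/n_total.
        if rng.random() < n_left / n_total:
            n_left -= 1
        else:
            n_left += 1
        history.append(entropy(n_left, n_total))
    return history

# Low-entropy start (all particles on the right): S climbs toward its
# maximum near n_left = N/2 and then just fluctuates around it.
low = ehrenfest(n_left_start=0)
print(f"low-entropy start: S = {low[0]:.2f} -> {low[-1]:.2f}")

# Equilibrium start: no arrow of time, only fluctuations.
eq = ehrenfest(n_left_start=50)
print(f"equilibrium start: S = {eq[0]:.2f} -> {eq[-1]:.2f}")
```

The hopping rule is identical in both runs; the arrow of time in the first one comes entirely from the initial condition, which is exactly the point Feynman keeps making.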
@Sean Carroll,
“So how could we answer Feynman’s question today? What do we not consider part of physics, which someday we might?”
Considering that you subscribe to MWI as a possibility worth considering, and are playing with the idea Feynman stated “… as the possibility that the physical laws themselves evolve with time”, you really can have no physics left, since anything goes. Science is all about consistent measurement; if you have no constant meter of time and length/distance with which to operate, you can’t make meaningful calculations, analyses, or predictions.
Your question really should be about something you can comparatively analyze, like “What do you consider part of physics, which someday we might not?” This is a question you could fit all manner of ‘dippy processes’ and Anthropic Principia into and actually reflect upon intelligently. For reference you could read up on the history of science and physics, which is chock full of bad ideas and discarded concepts, and perform a real calculation on the ratio of accepted vs. discarded ‘parts/ideas of/about physics’ up to the present. From this ratio, you could actually calculate a probability of the likelihood your current ideas will be accepted or discarded some specified time from now, and extrapolate into the future as many billions of years as you like, ad absurdum, ad nauseam, ad infinitum, etc. If this suggestion for an actual calculation seems a tad farfetched, I can assure you it is more likely to be accurate than one in 10^520, which passes for a prediction of some sort in some parts these days.
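For what it’s worth, the calculation being proposed here is trivial to set up; a toy version in Python, where the idea counts are pure placeholders rather than any real tally from the history of physics:

```python
# Toy version of the proposed survival-rate estimate. The counts are
# hypothetical placeholders, NOT a real tally from the history of physics.
accepted = 120    # ideas still considered part of physics
discarded = 480   # ideas once proposed and later abandoned

survival_rate = accepted / (accepted + discarded)
print(f"naive survival rate: {survival_rate:.2f}")

# Extrapolate by treating each century as an independent trial with the
# same survival rate (a strong assumption doing all the work here).
centuries = 10
p_survive = survival_rate ** centuries
print(f"chance an idea survives {centuries} centuries: {p_survive:.2e}")
```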
Feynman isn’t emphasizing initial conditions per se; he only mentions initial conditions in relation to the much more interesting question of whether the laws of physics themselves have changed over time.
To pretend this recording supports Sean’s view that eggs only break because of a low entropy initial condition of the universe is misleading and quite rude to Feynman really.
The reason why Sean and other people think initial conditions are such a big deal is because they believe the universe is deterministic (via MWI or whatever)!
I doubt Feynman would agree with that.
@9: You have a point, though I am sure that all professional astronomers realize Hoyle’s contributions. However, all descriptions of him I have read or heard paint him as an arrogant, difficult individual, so perhaps he is partly to blame.
The reason the early universe had low entropy is because that’s what “early” means. If the entropy was high, you could always go to an earlier time when it was lower. That’s how entropy works.
People such as Sean who go around claiming that it is a great mystery to “explain” why the entropy was low are inevitably running around in circles logically, and are never really sure what kind of answer they are after.
@Fred
The problem is they extend this explanation to say it’s the “explanation” of why eggs only break rather than why they might reform from broken bits.
In Sean’s muddled argument, we could have a high entropy initial condition going backwards, but in my SENSIBLE (non-deterministic) scenario you either have increasing entropy OR NOISE – backwards (global) entropy is only possible in a ridiculous deterministic scenario – so we kinda have a reductio ad absurdum against determinism there, don’t we?
And the real killer against determinism: why doesn’t everything happen all at once? What’s the hold-up?
You need randomness to “slow down” the universe’s evolution.
In Sean’s scenario, entropy can grow forwards from the big bang or backwards from the big bang. So Sean is assuming there was a very low entropy state around 13.7 billion years ago… OK, so what has that possibly achieved? It’s precisely the thing Sean is claiming is a problem, and yet he is invoking it in his model… this is circular, illogical reasoning about a problem that doesn’t exist.
I don’t think there will be anything that we will consider part of physics in the future that isn’t already physics because I think we’ve extended our understanding as far as it can go.
In regard to the initial conditions: would whatever caused the universe to become what it is from the initial conditions be considered part of our physics, or the limit between whatever existed before and our physics? What is the limit of physics, even if we have an understanding beyond that limit?
…physics.
If you re-define physics, then there is no limit to what can become part of physics.
There is so much that is eminently testable in the world today that is not adequately explained, or even explained at all. I truly believe that the greatest physicists, with their rigor, their brilliance, their purity, could contribute immensely to the solution of these mysteries, problems that have stymied their peers in other disciplines.
But for whatever reason, the greatest physicists seem uninterested in the squalid state of messy and complex reality, and wish to study the multiverse beyond our causal horizon and its anthropic implications. I truly believe this is a sad loss of much-needed talent.
What would science look like today if scientists stuck to working on theories that were considered “eminently testable”? This was originally intended to be purely rhetorical, but the more I think about it, the less I feel I know the answer…
Anyway, I’m guessing that computer science will largely be subsumed by physics, or maybe the other way around. After all, what is it that physical systems do, if not compute?
If one changes the strict boundary of ‘physics is the observable, testable, etc.’, everything will change; a must, seeing the postulate of, say, dark energy. With so many gaps in the current worldview, I foresee a slow process of postulating and accepting non-observable concepts like multi(higher)-dimensionality to fill the gaps.
The erosion of the boundary of physics will allow ‘meta’ concepts to take an acceptable stand in physics. That alone will undermine orthodox sentiments, which in turn will revolutionise physics on a Copernican scale.
So my understanding of the initial condition of the universe is that there was a point of no dimensions containing space and time and everything we now observe. This point is defined by infinity: infinitely small, hot, dense and so forth. It could be said not to have existed because it contained existence. Asking how long it existed in that condition is meaningless, but still I wonder about that. Anyway, at 0.(however many 0’s you like, but not an infinite number)1 this all changed and the arrow of time began, as did inflation, leading to us here today. So until we understand infinity itself we cannot understand that infinite condition; the closest we can come is that load of 0’s and a 1.
Anyone who isn’t a religious fundamentalist should be open to the idea, if not convinced, that the Big Bang wasn’t a beginning, nor, in a suitable sense, will the universe have an end.
On that assumption, one possible answer, indeed a near logical certainty, is an infinite sequence of scales, of unlimited smallness, such that in an unimaginably distant future when the present scale becomes a pea soup of maximum entropy, a rescaling or “zooming in” will effectively reduce the entropy at the resulting scale by bringing new asymmetries to the fore.
Needless to say this would not be considered as physics now, on account of the Planck scale being generally regarded as the minimum possible physical scale. Also, the idea would no doubt make physicists uncomfortable, due to the obvious difficulty if not impossibility of verifying it experimentally.
But the question was what may one day be considered physics, and if some sound mathematical principle could allow effective entropy reduction of a dynamic system by rescaling (possibly combined with dualization) then that might be one possible answer.
If one wanted to reduce Sean’s question to the most irreducible mathematical and philosophical framework then there is a path for it. And almost unbelievably there is actually physical data that points in just one direction. Everyone who keeps themselves scientifically informed already knows this data.
There are two basic mathematical constructs (I know it will seem overly simplified to some, but spherical cows have their uses). The first is the concept of infinity, or plenitude if you will. The other idea, which opposes that, is oneness, that is, finiteness. The first, the infinity premise, even though inherently unconstrained, must still match up to scientific observation. Physicists have through various methods tried to consolidate that plenitude. Mostly it has been by matching a positive energy that is already known, and that is infinite in nature, with a hypothetical negative energy to balance it to zero. Various methods over the years have been attempted.
The most basic attempt was by trying to adapt the quantum mechanical rule of quantum superposition and probability to this end. This is the well known MWI idea. Later attempts were through supersymmetry, in which physical particles, superpartners representing the negative side of the infinite energy spectrum, were added to make the total energy of the universe appear to be zero. Finally we come to the multiverse hypothesis, which has no more scientific basis than any devoutly religious doctrine. In other words it comes directly from the belief that there is plenitude in reality. Nothing else stands behind it scientifically. It is simply a belief, period, end of discussion.
So the division in the road comes down to an energy equation for plenitude of -infinity + infinity = zero. The alternative finite philosophical position is that the energy of the universe is simply 1 when reduced to unit values for all parameters. The physics for this currently is far from reducing the equations in which the substitution of unit values will end up with the energy equation for the universe equaling 1. But the 1998 observation of type 1a supernovae accelerating away from us, albeit at a slow rate, completely demolishes the idea that we need to balance a positive and negative energy to exactly equal zero. There now is firm evidence that the vacuum energy is NOT detectable as zero, but is a small positive value.
So we now have a new requirement in physics: we need to balance a high vacuum energy density at the beginning of the universe that is compatible with the initial energy requirements for forming the fundamental particles. We must balance that with the low energy density observation we see now, after the universe has nearly ended its acceleration and expansion. An expanding and thinning vacuum, in which photons from the vacuum are absorbed into matter as it has accelerated away from the point source of its initial condition, is the only realistic method to do that. Like an expanding gas, particles precipitate out of it as it expands and cools.
Again, the 1998 supernova observation has effectively eliminated the plenitude philosophical stance from legitimacy. Any physicist who now says otherwise, regardless of their mathematical ability, is prostituting their mathematics for a hidden agenda of plenitude. The facts no longer support them.
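For the record, the “small positive value” can be pinned down with standard round numbers. A back-of-envelope sketch, using illustrative textbook parameters (an H0 of roughly 70 km/s/Mpc and a dark energy fraction of roughly 0.7, neither taken from the comment above):

```python
import math

# Back-of-envelope check on the "small positive vacuum energy" claim,
# using round-number cosmological parameters. These values are
# illustrative, not taken from the comment.

G = 6.674e-11                 # gravitational constant, m^3/(kg s^2)
c = 2.998e8                   # speed of light, m/s
H0 = 70 * 1000 / 3.086e22     # Hubble constant, converted to 1/s
Omega_Lambda = 0.7            # dark energy fraction of critical density

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_vac = Omega_Lambda * rho_crit          # vacuum (dark energy) density

print(f"critical density  ~ {rho_crit:.2e} kg/m^3")
print(f"vacuum density    ~ {rho_vac:.2e} kg/m^3")
print(f"as energy density ~ {rho_vac * c**2:.2e} J/m^3")
```

This gives a vacuum energy density of order 10^-9 J/m^3: small and positive, as the supernova observations indicate, but emphatically not zero.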
Discrimination of levels of emergent phenomena as phase-shifts
I’d never say limit yourself to what is eminently testable, but what about subjects that look highly likely, or certainly, to be eminently UNtestable?
From the OP: “the Feynman Lectures on Physics were aimed at undergraduate physics majors”
This is news to me. My understanding is that they were delivered in a course labeled as the intro course for undergraduate physics majors, but that he aimed them at (i.e., intended them for) his colleagues and perhaps advanced graduate students. Of course, this was about a half century ago, and what with better prep and the Flynn effect, perhaps nowadays they are appropriate for UG majors.
@marcel: It’s 30 years since I was a UG major, and I found FLoP to be incredibly useful volumes that I used in conjunction with the set texts.
@Brian Too “What effects of quantum mechanics could manifest above ‘normal’ quantum space-time scales? Could we ever directly attribute observed macro results to quantum mechanical causes?”
Is not the quantum computing endeavor an attempt to achieve just that?
@marcel #43 (and also the OP) The Feynman lectures were the textbook for Physics I at Caltech in the 1970s. Not just for physics majors; it was a required course for everyone. Of course he was around, and he would sometimes lurk in the background or even do a lecture himself.
The inner workings of cells are currently rarely treated in a physics way. There are all sorts of numbers which have been determined empirically — reaction or conformational change rates, charge transfers and so forth — which could in principle be treated as mechanics (cf. quantum biology).
There is no reason to think that the behaviour of even a complicated eukaryotic cell could not be as fully determined as an artificial machine of the same mass. This will require some new tools — theoretical, observational and computational — however. Some will be borrowed from field theories familiar to people here (some biological applications of things like the Nernst-Planck equation come to mind), but inside a cell viscosity totally dominates inertia and thermal noise is nontrivial.
Inside the cell is neat. Exact models of organelles and simple archaeans cannot be too far off.
http://www.arcfn.com/2011/07/cells-are-very-fast-and-crowded-places.html
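Since the Nernst-Planck equation is invoked above, here is a minimal numerical sketch of the kind of calculation meant: the 1D flux J = -D (dc/dx + (zF/RT) c dphi/dx) for an ion crossing a membrane. Every number is an illustrative order-of-magnitude choice, not something taken from the comment or the linked page:

```python
# Minimal 1D Nernst-Planck flux estimate for an ion crossing a membrane.
# All parameter values are illustrative order-of-magnitude choices.

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 310.0        # body temperature, K
z = 1            # ion valence (e.g. K+)
D = 1.0e-9       # diffusion coefficient in water, m^2/s

L = 5.0e-9                 # membrane thickness, m
c_in, c_out = 140.0, 5.0   # concentrations, mol/m^3 (~mM), e.g. K+ in/out
V = -0.07                  # membrane potential, V (inside minus outside)

# Crude finite-difference version of J = -D (dc/dx + (zF/RT) c dphi/dx),
# with x running from inside to outside and the drift term evaluated
# at the mean concentration.
dc_dx = (c_out - c_in) / L
dphi_dx = -V / L           # potential gradient across the membrane
c_mean = 0.5 * (c_in + c_out)
J = -D * (dc_dx + (z * F / (R * T)) * c_mean * dphi_dx)
print(f"flux J ~ {J:.3e} mol/(m^2 s)")
```

Even this crude version shows the competition the comment alludes to: the concentration gradient pushes the ion out while the electrical term pulls it in, and neither inertia nor any mechanical term appears anywhere.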
This is peripherally related (perhaps directly if you want to treat it as a sort of reductionist or emergentist answer to Sean’s question about what will be treated as physics in the future), and hopefully of interest to someone wading through the comments.
http://m.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/?single_page=true
“The motivation for the interview was in part that Chomsky is rarely asked about scientific topics nowadays. Journalists are too occupied with getting his views on U.S. foreign policy, the Middle East, the Obama administration and other standard topics.”
A good overview, and an interesting interview.
There is a link to videos of the interview immediately before the first question.
Chomsky makes a couple of picayune errors or lapses in rigour especially early on, but it’s likely worth the effort of not being distracted by those when reading the first time. He raises some interesting points about scientific enterprise and things akin to the correspondence principle that likely will seem familiar to some of the physicists here.
For example, the answer after “So it shifted to engineering…” is a little provocative. 🙂