Simon DeDeo is a theoretical physicist and very smart guy, who started out as a cosmologist and has made the transition to complexity theorist at the Santa Fe Institute. (He’s not smart because he made that transition, it just so happens that both statements are true.)
This summer he gave a series of three lectures at SFI’s Complex Systems Summer School, on the general topics of Emergence and Complexity. These are big ideas, and obviously one cannot say everything interesting there is to say about them in three lectures, but Simon manages to cover a lot of extremely important and fascinating topics such as coarse-graining, renormalization, computation, and effective theories. Worth a listen!
This looks nice and interesting!
What is the level of these lectures, and what kind of audience are they targeted at?
I’ll probably watch them next weekend 🙂
It’s a shame the audio is so poor. I declined the offer to watch due to the low quality.
It’s the basement tapes!
Dilaton — the School is pitched to a broad level perhaps best described as “quantitative graduate student”; that means we have people with theoretical physics (and cognate) backgrounds, as well as those from more empirical fields (and correspondingly greater statistical sophistication). In general, we tend to aim at students who have finished their coursework and are beginning their research careers proper. Some background material is at http://bit.ly/PZS22W
Information on the 2013 program is at http://www.santafe.edu/education/schools/complex-systems-summer-schools/2013-program-info/
Wow @Simon DeDeo,
thanks a lot for the links to the additional material, I will study it 🙂
And I hope the audio is not as bad as Michael says. I briefly clicked through the first lecture and it seemed to work well enough.
Can someone please explain to me if there is such a thing as 'fine graining'? One hears the term coarse graining all the time, and the adjective suggests there are two types; however, I've never heard the former used in any physics context.
The audio is not that bad; only the questions asked are sometimes a bit hard to understand.
The lectures are very enjoyable and fun, and after studying some of the additional material I will rewatch them carefully, since I'm particularly interested in learning how concepts such as effective theories and renormalization can be applied to turbulence theory 🙂
So thanks a lot @Simon DeDeo for giving these nice lectures, and @Sean Carroll for sharing them here!
Cheers
I’m only through the second lecture, but many thanks to both of you, Simon and Sean, for making this material accessible. The lectures really repay close listening and careful thought. I think of them through the lens of how we aim to coarse grain theories about large masses of people into theories about nation-states, for example.
Some questions I have coming out of the second lecture are about the results of coarse graining on the FSMs. Is the decomposition unique? And I think you made reference to a hierarchy in the resulting decomposition: is the hierarchy a property of the semigroups that result in the decomposition, or is it a reflection that some system states are more interesting than others? Is it more like Gram-Schmidt, where I get to pick the order of the vectors, or is it more like prime factorization, where I’ll always end up with a clearly largest factor, no matter what divisor I might first choose to start the factorization with?
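To make my factorization analogy concrete, here is a minimal Python sketch (the function names are just my own illustration): whichever prime divisor you peel off first, the sorted multiset of factors comes out the same, whereas a Gram-Schmidt basis genuinely depends on the order you feed in the vectors.

```python
def smallest_prime_factor(n):
    """Return the smallest prime dividing n (or n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def factor(n, first=None):
    """Factor n into primes, optionally peeling off a chosen prime divisor first."""
    if n == 1:
        return []
    if first is not None and n % first == 0:
        return [first] + factor(n // first)
    p = smallest_prime_factor(n)
    return [p] + factor(n // p)

# Whatever prime divisor we start with, the sorted factor multiset agrees:
assert sorted(factor(360, first=5)) == sorted(factor(360, first=2)) == [2, 2, 2, 3, 3, 5]
```

Order-independence up to sorting is exactly what fails for Gram-Schmidt, where reordering the input vectors produces a different orthogonal basis.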
Dear Mark —
Thank you for your comments; it is lovely to have the material reach a wider audience.
The decomposition is not unique (this is in contrast to the group case); the holonomy decomposition, the easiest way to prove the Krohn-Rhodes theorem by construction, does produce a canonical one, but there may be many others that also work. A good example is provided in Holcombe’s book, where he considers the one-generator (monogenic) semigroup, which has a sort of tadpole structure and for which the HD gives an unnecessarily long chain. I do not know the current status of the computational complexity of “find the smallest decomposition”, for some notion of small. The HD is NP, I *think*.
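As a toy illustration of the irreversible building block in these decompositions, the flip-flop monoid can be written down directly as three transformations of a two-state set. A minimal Python sketch (the names here are my own, not standard notation):

```python
# The flip-flop monoid: three maps on the state set {0, 1},
# each written as a tuple (image of 0, image of 1).
IDENTITY = (0, 1)   # leave the state alone
RESET0   = (0, 0)   # set the state to 0, whatever it was
RESET1   = (1, 1)   # set the state to 1, whatever it was

def compose(f, g):
    """Apply f first, then g, as transformations of {0, 1}."""
    return tuple(g[f[s]] for s in (0, 1))

elements = {IDENTITY, RESET0, RESET1}

# Closure: composing any two elements stays inside the monoid.
assert all(compose(f, g) in elements for f in elements for g in elements)

# Irreversibility: the resets destroy information (they are not bijections),
# which is what distinguishes the flip-flop from the group components.
assert set(RESET0) != {0, 1} and set(RESET1) != {0, 1}
```

The Krohn-Rhodes theorem says any finite automaton can be built as a cascade of copies of this little monoid together with finite simple groups.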
That said, there are properties of the decomposition that are invariant; these are discussed in Rhodes’s book (see the lecture bibliography, Applications of Automata). He suggests counting the number of times a group is sandwiched between two irreversible levels.
The hierarchy you do find, by whatever means (genius, insight, rote application of the holonomy decomposition in a computer algebra program like GAP), is a true hierarchy, in the sense that a correct specification requires stating which group/flip-flop is above which, and getting that mixed up will in general not give a valid decomposition. Some references in the Chaos paper might help; e.g., Oded Maler’s papers.
Yours,
Simon
Thanks for sharing these talks. I have watched the first two and found them fascinating.
I have a question for Simon DeDeo though. How often can we coarse-grain without being affected by some odd effect at the lower levels that percolates upwards? For example, knowledgeable or influential individuals may have a disproportionately large say in the future course that the group takes. This seems evident in sociology at least, where you have hubs that are more influential.
I feel a bit skeptical about applying this to social phenomena for the above mentioned reason.
Hello Joebevo —
There are a few ways into your question here.
The first is to suggest that the methods we talk about in the group theoretic case, and (more generally) in the coarse-graining discussion, are the right way to acknowledge the inhomogeneity and differential influence of agents and (at higher levels) emergent agents. Not only is there nothing in principle wrong with retaining fine-grained information about some aspects of the system (e.g., mental states of a Congressman) while coarse-graining away others (e.g., mine), but one expects this.
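That selective retention can be phrased as a simple projection on the state space. A toy Python sketch, with variable names that are purely my own illustration:

```python
# A micro-state: every agent's internal state, influential or not.
micro_state = {
    "congressman_mood": "hawkish",   # kept at full resolution
    "my_mood": "grumpy",             # coarse-grained away
    "neighbor_mood": "cheerful",     # coarse-grained away
}

def coarse_grain(state, keep):
    """Project the micro-state onto only the coordinates we choose to keep."""
    return {k: v for k, v in state.items() if k in keep}

# A macro-description that stays fine-grained about the influential agent
# while discarding everyone else's detail.
macro_state = coarse_grain(micro_state, keep={"congressman_mood"})
assert macro_state == {"congressman_mood": "hawkish"}
```

The point is just that a coarse-graining map need not treat all coordinates uniformly; which ones to keep is part of choosing a good effective description.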
The second is to return to the buckling strut of the first lecture. This seems to be the kind of account one wants to give of the start of the Arab Spring, which one explains by reference not to the particular aspects of Mohamed Bouazizi himself, but to some large-scale property of the system that makes it sensitive to the actions “people in his position” might take. In a much less world-historical example, consider the symmetry breaking seen in the traffic jam of Lecture Three.
The third is to suggest that these small-scale effects persist throughout the system, are properties of all agents, and do not coarse-grain out (“we are all congressmen”). This would be analogous to the claim, in physics, that a particular theory was not renormalizable(!). In contrast to physics, where non-renormalizable theories are considered pathological, there is no in-principle reason to exclude them from the biological and social sciences.
The claim, however, that the world works that way is somewhat contradicted by the fact that we do believe in the existence of emergent, coarse-grained descriptions with some level of predictive power — we give accounts of historical and social phenomena formally and informally all the time. We talk about corporations and groups and nations, and think that these are good approximations and units of explanation — even and often in the absence of detailed accounts of their internal compositions.
Yours,
Simon
Thanks for your reply, Simon.
I might have misunderstood your idea of coarse-graining. As long as you retain some unique aspects of at least certain individuals, as you mention in your first point, I don’t see any problem. I thought that coarse-graining would homogenize everybody, which certainly would cause some problems.