

April 11, 2013

So I am about four chapters into Philosophy and Simulation and, quite frankly, I have little to no idea what DeLanda is attempting to prove, and I think that is because of how little I understood of chapter 2. DeLanda’s section on the “Game of Life” made for a comical conversation between Martin and me in the hallway yesterday (yes, it would seem that he was NOT referring to the board game…disappointing; here I was thinking I would have a firm grasp on a philosophical analogy). But even after looking up the Game of Life, cellular automata, and lattice-gas automata, and with only cursory knowledge of Turing machines, I have no idea how any of this proves that simulation can account for emergence. I follow that the chapter is meant as a justification for simulation, and I have really enjoyed some of the directions he is going (mainly thermodynamics and the prebiotic soup), but I don’t understand why he feels the need to go into vast detail about sixth-grade science while offering little to no explanation of the far more complicated simulation programs he is offering up. I fear that without a firm grasp of this second chapter, the rest of the work will have little to no meaning for me, so if anyone can explain it in simple, techno-ignorant terms, I will be very grateful.
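Having looked these things up myself, the cellular automaton behind DeLanda’s “Game of Life” discussion can at least be stated compactly. Below is a minimal sketch (in Python; the language choice and the glider example are my own illustration, not the book’s) of how a trivially simple local rule produces an emergent pattern that no single cell’s rule mentions:

```python
# Minimal sketch of Conway's Game of Life. Each cell lives or dies based
# solely on its eight neighbors, yet a five-cell "glider" emerges that
# travels across the grid -- emergence from a purely local rule.
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) tuples."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # birth on exactly 3 live neighbors; survival on 2 or 3
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# after 4 steps the glider reappears shifted one cell down and right
```

The glider is the canonical example of emergence here: nothing in the rule refers to motion, yet the pattern travels diagonally, one cell every four generations.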

Having said that, I’m not sure how much trust I can place in the idea of gradients. If he is trying to account for emergence (which I am still inclined to believe took place in a state of completely unordered chaos that preceded time, space, the rules of physics, etc.), then how can he make the claim that “in the material world nothing interesting happens if we start a process from a maximally disordered state” (33)? Wouldn’t a mechanism-independent component be one of maximum disorder, with no attachment to the general machinery of physics? Again, have I missed the point?
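For what it’s worth, the quoted claim can be illustrated with a toy model (my own sketch, not anything from the book): one-dimensional heat diffusion, where a temperature gradient drives change while a maximally disordered, uniform profile drives none:

```python
# Toy 1-D heat diffusion with insulated ends (explicit finite differences).
# A steep gradient relaxes toward uniformity; a uniform profile is already
# at maximum disorder, so nothing happens -- exactly the quoted claim.

def diffuse(temps, alpha=0.1, steps=1000):
    """Relax a temperature profile toward uniformity."""
    t = list(temps)
    n = len(t)
    for _ in range(steps):
        t = [
            t[i] + alpha * (t[max(i - 1, 0)] - 2 * t[i] + t[min(i + 1, n - 1)])
            for i in range(n)
        ]
    return t

hot_into_cold = [100.0] * 5 + [0.0] * 5   # steep gradient: something happens
lukewarm = [50.0] * 10                    # maximum disorder: nothing happens

assert diffuse(lukewarm) == lukewarm      # no gradient, no change
# diffuse(hot_into_cold) relaxes toward 50 degrees everywhere
```

The uniform profile is the fixed point: start a process there and, just as DeLanda says, nothing interesting happens.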

I am really at a loss with this text, and I REALLY want to understand it. Thermodynamics, pre-evolution, and simulation are all wildly interesting to me, and based on the chapter titles I have a particular interest in the topics he will address later (mainly early economics). But to be honest, I have no idea why this is even considered philosophy so far; it strikes me more as a science book with a terrible case of ADHD. It just keeps throwing topics at me without ever really saying why they are part of the discussion, or for that matter what discussion he is even having. I haven’t seen anything that accounts for the purpose of the book: what is he trying to prove? I just don’t see it. If he wants to prove the validity of simulation, I follow; I just don’t get how anything he has presented thus far really does that (minus the scraps that allowed me to piece together that certain programs he mentions can produce results not originally accounted for in the programming). What is at stake here? What is he adding to OOO and SR?


From → Quadruple Object

  1. Please ignore the part about mechanism-independent components; I just re-read the section that clearly explains it, but that only led me to be confused by singularities. The science jargon is overwhelming here, and I feel that the language is needlessly complicated.

  2. I’ve read the second and third chapters three times now and I really don’t follow. Maybe the issue I am having is this: he seems very selective about what details he allows into a simulation and what can simply be “assumed,” like the folding of the enzyme. I understand that in the presence of RNA we can assume that folding has already occurred, but that seems to skip a pretty big step. If folding is an absolutely necessary condition of everything that follows, how do you just leap forward and note that most of our supercomputers are still working on it? That seems like something of a blow to the general case for simulation, that it has yet to unravel what is perhaps the most crucial moment, and that he kind of brushes that off.

    If a lattice-gas automaton just simulates where a given particle will move or collide over a period of time, how does that extend to simulating lock-and-key molecules, if all it handles are collisions and propagation? While he goes into other simulations for increasingly complex processes, if he is still skipping very important steps like enzyme polymerization, I don’t see how he can claim the simulation has “sufficient overlap so that what we learn from the former can be used as a mechanism independent part of the explanation of the latter” (46) (the former here referring to simulation, the latter to real environments). If the simulation is given a limited set, or a “recursive function language” (45), how can you trust those results when “spatial relations are not modeled explicitly, that is, the simulated polymers are not related to one another by relations of proximity” (45)? He goes on to say this simulates constant stirring, but enzyme creation on Earth most certainly required spatial relations. What overlap with reality can there be without them?

    • For me, the issue of enzyme folding, while a significant research question, is not a fatal problem for the value of simulation. We might understand this, in part, through the ontological status of objects that are in excess of the objects from which they are formed. For example, there are open questions in the field of quantum mechanics; however, statistical mechanics and thermodynamics operate on a macro level above that. We can still study thermodynamics even though we can’t fully explain how we get there from the quantum level. Maybe when the enzyme-folding simulation problem is solved it will dramatically alter other things; maybe not. Such is the provisional nature of science.

      A lattice-gas automaton is a simulation that maps fluid dynamics, so it isn’t about specific particles so much as statistical mechanics. The lock-and-key molecules (aka catalysts) alter the gradients that drive fluid dynamics when fluids are composed of different substances. So a simple fluid dynamic might be the dispersal of temperature (e.g. hot water added to cold). The tendency of a gradient is toward randomness (i.e. warm water). In this chapter, DeLanda is explaining how we try to account for polymerization (the creation of long molecule chains) when such reactions would seem to require going uphill against such gradients. Catalysts effectively create new reactions and new gradients that facilitate polymerization. I don’t think you would call the simulations that employ Turing gases and metadynamics (discussed at the end of chapter 3) lattice-gas automata, though they still operate at the level of statistical mechanics. Because the operation is statistical, measuring the proximity (the spatial relations) among the parts might not be necessary.

      I think the main complaint one could have is that the research cited is quite old in some instances. For example, the part we’re discussing here draws on research published in 1992. Indeed, one can now earn a degree at UB in Bioinformatics and Computational Biology, so what DeLanda is writing about here has become one of the more significant areas of contemporary science. In fact, the enzyme (or more generally, protein) folding problem has resulted in the largest distributed computing initiative yet developed.

  3. OK, after some further research and continued reading, I am starting to see the general idea. My area of difficulty came at the macro/micro level, and your explanation makes perfect sense, particularly considering that several of the simulations he refers to essentially work backwards. My instinct was that to accurately depict these processes in a simulation one should start from the beginning, which now seems senseless to me: if you are trying to uncover the beginning, you don’t start from it (yup…feeling a bit foolish). I think my larger issue was a more general misunderstanding of the aim of the work. While I was awaiting a philosophical revelation discovered in simulation, the goal DeLanda is working toward seems to be a philosophical trust in the validity of simulation as not being preempted by human concepts, though I still struggle with some of the details there. For instance, I think the idea of gradients is a very interesting approach, but any time the talk moves to assigning values to such gradients (here I am specifically referring to the section on fitness values), I am instantly suspicious. That suspicion is probably largely due to my ignorance of computer simulation, but assigning figures to “the consequences of reproductive success of the catalytic capacities” seems almost totally at the whim of the examiner. If we are talking about a numerical value drawn from the number of successful replications (which would seem to be the strongest indicator of fitness or the lack thereof), I see why that number would have objective validity. Of course, if we are again working backwards here, then this just provides a general model for which we still need some starting point.

  4. Ironically, it seems that the larger the life forms grow in DeLanda’s progression (and the more complicated the simulations become), the better I follow him.
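Returning to the fitness-value worry in comment 3: the least examiner-dependent version is the one suggested there, where fitness is nothing but a tally of successful replications. Here is a hypothetical sketch (the rates, names, and setup are invented for illustration, not drawn from DeLanda’s sources):

```python
# Hypothetical replicator sketch: "fitness" is measured only as the tally
# of successful replications, with no examiner-assigned values. A type
# with higher catalytic success copies itself more often and the tally
# records that fact objectively.
import random

def run_selection(rates, population, generations, rng):
    """rates: replication probability per type; returns replication tallies."""
    tally = {t: 0 for t in rates}
    pop = list(population)
    for _ in range(generations):
        parent = rng.choice(pop)
        if rng.random() < rates[parent]:           # successful replication
            tally[parent] += 1
            pop[rng.randrange(len(pop))] = parent  # copy displaces a random slot
    return tally

rng = random.Random(0)                             # fixed seed for repeatability
rates = {"fast": 0.9, "slow": 0.1}                 # invented catalytic capacities
tallies = run_selection(rates, ["fast"] * 10 + ["slow"] * 10, 2000, rng)
# the "fitness value" of each type is nothing but its replication count
```

The point of the sketch is only that the number comes out of the run itself rather than being assigned in advance; working backwards, as comment 3 notes, one still has to choose the starting rates.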
