

 * ActInf Livestream #039 ~ “Morphogenesis as Bayesian inference”

Discussion of the 2020 paper by Franz Kuchling, Karl Friston, Georgi Georgiev, and Michael Levin, “Morphogenesis as Bayesian inference: A variational approach to pattern formation and control in complex biological systems.”

[**https://pubmed.ncbi.nlm.nih.gov/31320316/**](https://pubmed.ncbi.nlm.nih.gov/31320316/)

Presented by Active Inference Institute in 2022


 * 1) Session 039.0, February 25, 2022



This video is an introduction to some of the ideas in the paper.


 * 1) SESSION SPEAKERS

Daniel Ari Friedman, Dean Tickles


 * 1) CONTENTS

- 00:26 Intro and welcome.
- 02:57 The role of epigenetics.
- 06:10 Aims and claims of the paper.
- 07:41 Bayesian inference framework.
- 09:01 Recent advances in molecular biology.
- 11:07 Variational free energy principle.
- 15:05 Modeling morphogenesis.
- 15:51 Introduction to Bayesian inference.
- 20:00 Math roadmap.
- 21:31 Morphogenesis and neoplasia.
- 24:12 Bayes and his theorem.
- 28:19 How to build a model of hidden states.
- 32:45 The Helmholtz decomposition.
- 37:05 The Lyapunov functions.
- 43:35 The Lyapunov as a potential function.
- 46:13 Tension between attractive and dissipative forces.
- 49:50 Variational free energy.
- 51:15 Generalized coordinates of motion.
- 54:02 Steady-state flow in terms of a scalar function.
- 57:51 The least-action principle.
- 1:00:51 The Markov blanket.
- 1:07:10 Modeling of morphogenesis.
- 1:09:00 Empirical biology.
- 1:18:49 The model can be modeled.




 * 1) TRANSCRIPT

00:26 DANIEL ARI FRIEDMAN: All right. Hello and welcome everyone, to ActInf Lab Livestream number 39.0. It's February 25, 2022. Welcome to the Active Inference Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. 00:46 You can find us at the links here on the slide. This is a recorded and archived livestream, so please provide us with feedback so we can improve our work. All backgrounds and perspectives are welcome, and we'll be following good video etiquette for livestreams. If you want to learn more about Coms, which is the organizational unit that puts on all these livestreams and many other products, or any of the other activities in the active lab, go to ActiveInference.org. All right, well, today in ActInf stream number 39.0, the goal is to learn and discuss this really awesome paper called “Morphogenesis as Bayesian Inference: A Variational Approach to Pattern Formation and Control in Complex Biological Systems.” 01:39 It's by Franz Kuchling, Karl Friston, Georgi Georgiev and Michael Levin, from 2020. And just like all the dot zero videos, and indeed all our videos, it's just an introduction to some of these ideas. It's not a review or a final word. That being said, we're going to go over the aims, claims, abstract, roadmap, keywords, etc., of the paper. 02:04 And then, with a focus on the first parts of the paper, and less so on the formalism and more on the big ideas, we're going to go over the paper and the figures. And that will put us in a good position (cognitively, niche-wise, morphologically, we might even say) for the discussions in the coming weeks, 39.1 and 39.2, when we'll get to unpack this. So if you would like to, it'd be awesome to have you participate live, or if it's past 39.2, you can still ask questions. All right, we'll start with introductions and warmups.
We can each say hi and maybe one thing that was exciting or interesting about the paper, which I'll reserve for after you, but I'm Daniel, I'm a researcher in California. 02:53 And Dean, thanks a ton for joining. 02:57 DEAN TICKLES: Thanks, Daniel. I'm Dean. I'm here in Calgary. And what really got me excited about this was the idea that they talk about a little bit later, closer to the end (we may not even talk about it today): the role that epigenetics might play in terms of how cells behave, and what sorts of things we might be able to turn, in terms of simply what we see in biological form, into some sort of statistical prediction device. So I'm really excited about that. 03:31 Daniel: Awesome. Yeah, it's a complex paper about complex systems, so we'll see what we can get done today. But I'm really excited to bring active inference into this domain, at least as far as our discussions go, into the morphological and the morphogenetic areas. And it's just bringing, like, another lens, another way that we can apply active inference as a filter, another set of phenomena that we're bringing into this active inference realm. So we'll just go right into the big question. 04:12 So I'll read this and then feel free to give a thought. The big question, or at least one way to put one of the questions, is: how can we find an integrative path between math and morphogenesis? Morphogenesis means form, like physical form (morpho), and genesis, meaning how it arises. So how does physical form arise, and how is that related to math? And to kind of unpack that, here are some quotes: "The molecular mechanisms" (which are like signaling pathways of all different kinds) "underlying development and regeneration in biological organisms have undergone a tremendous amount of knowledge gain in the last several years due to genomics and biotechnology advances."
04:58 So that's kind of the one side, with the biological. And then from the mathematical and the formal: "the emerging field of the free energy principle" in pattern formation (not always morphogenetic patterns, but could be cognitive patterns, rhythmic patterns) "provides an essential quantitative formalism for understanding cellular decision making in the context of embryogenesis, regeneration and cancer suppression." So if that's the if, then how might the mathematical formalism reveal new understandings about developmental change in evolution, and for designing new interventions in regenerative medicine settings? So what did you think about that? Well, I just love anytime that two things that seem disparate or distant are brought together, because I think all sorts of good things can happen, and I'm going to do a little foreshadowing. 05:54 Dean: That's kind of what I would like to see if we can organize in August with a couple of other papers that don't start out necessarily appearing to be close or proximal, but by bringing them together, all kinds of interesting things can happen. Awesome. Okay. To the main aims and claims of the paper. So the paper is in Physics of Life Reviews, 2020, and we already mentioned the title and authors. Just to select a few aims that the authors present, and then a couple of key claims that are relevant for the paper. So as far as aims go, one of their aims is to introduce an overarching concept that can predict the emergence of form and the robust maintenance of complex anatomy. 06:42 Daniel: Another aim is to derive the mathematics behind Bayesian inference, and to use simulations within the Bayesian framework to show that the formalism can reproduce experimental top-down manipulations of complex morphogenesis. And then just a few of their claims, more of which we'll be unpacking in the coming minutes. Classic (i.e., dynamical systems and analytical mechanics) approaches, such as least-action principles, are difficult to use when characterizing open, far-from-equilibrium systems that predominate in biology. So that's kind of a big challenge that's come up again and again, which is: we might have really great equations for a pendulum, maybe even a pendulum with friction, but how about an active pendulum that's doing strategy and trying to run away from you? A bit harder to put those same equations on. The Bayesian inference framework treats cells as information processing agents, where the driving force behind morphogenesis is the maximization of a cell's model evidence, which is to say its reduction of uncertainty relative to a generative model, which is what we're going to be unpacking. 08:01 And we had previously read this about how they're applying the free energy principle to morphogenesis: when it goes well, like during embryogenesis, and results in a phenotype encoding like an adult life form, as well as all of the ways that it's maintained during that stationarity of healthy life, and including cases where it might be relevant to even think about making interventions, like medical settings. So those are a few of the aims and claims. Anything to add on it? Yeah, just real quick. It's really interesting that they said top down, and then we're going to see some slides where there's a top top and a down down. 08:47 Dean: So stick around, because there's actually going to be something kind of interesting happening here. Okay, how about you read the first half of the top half of this abstract slide? Yeah, sure. "Recent advances in molecular biology, such as gene editing, bioelectric recording and manipulation, and live cell microscopy using fluorescent reporters, especially with the advent of light-controlled protein activation through optogenetics, have provided the tools to measure and manipulate molecular signaling pathways with unprecedented spatiotemporal precision."
This has produced ever-increasing detail about the molecular mechanisms underlying development and regeneration in biological organisms. 09:35 However, an overarching concept that can predict the emergence of form and the robust maintenance of complex anatomy is largely missing in the field. Classic (i.e., dynamic systems and analytical mechanics) approaches, such as least-action principles, are difficult to use when characterizing open, far-from-equilibrium systems that predominate in biology. Similar issues arise in neuroscience when trying to understand neuronal dynamics from first principles. So they're basically covering their bases here by making sure that we understand this is an open system. 10:18 Daniel: Yes. Increases in tools in biological systems, like genomics, microscopy, optogenetics, are helping us reduce our uncertainty about the what, but without a bit more information about the how and the why, etc. That's the overarching concept that's largely missing. So we can grind up the cells and sequence the genome, we can extract the RNA, and we can even tell what the ratio of different RNA molecules is. But it turns out that that type of information alone doesn't help us understand why two related species, or the same species in two different contexts, or the same genome in two different tissues, or the same tissue in two different environments, has such different morphology. 11:04 Okay, and then the second half. In this neurobiology setting, a variational free energy principle has emerged, based upon a formulation of self-organization in terms of active Bayesian inference. The free energy principle has recently been applied to biological self-organization beyond the neurosciences, for biological processes that underwrite development or regeneration. The Bayesian inference framework treats cells as information processing agents, where the driving force behind morphogenesis is the maximization of a cell's model evidence.
11:35 This is realized by the appropriate expression of receptors and other signals that correspond to the cell's internal, i.e., generative, model of what type of receptors and other signals it should express. The emerging field of the free energy principle in pattern formation provides an essential quantitative formalism for understanding cellular decision making in the context of embryogenesis, regeneration and cancer suppression. Okay, and then if you read the first two parts here on the second slide. Yeah, no problem. 12:09 Dean: In this paper, we derive the mathematics behind Bayesian inference as understood in this framework, and use simulations to show that the formalism can reproduce interesting experimental top-down manipulations of complex morphogenesis. First, we illustrate the first-principles approach to morphogenesis through simulated alterations of anterior-posterior axial polarity, the induction of two heads or two tails (ah, spoiler alert; don't you hate it when the abstract spoils the paper?) as in planarian regeneration. Then we consider average signaling and functional behavior of a single cell within a cellular ensemble as a first step in carcinogenesis (the formation of cancer) as false beliefs about what a cell should sense and do. 13:06 Daniel: Inference, perception, action: inference. We further show that simple modifications of the inference process can cause and rescue mispatterning of development and regeneration without changing the implicit generative model of a cell as specified, for example, by its DNA. This formalism offers a new roadmap for understanding developmental change in evolution and for designing new interventions in regenerative medicine settings. Speaking of the new roadmap, let's look at the roadmap. It's kind of cool that they actually say that it is providing a roadmap. 13:48 So we've talked about guides, we've talked about roadmaps. Where do you see all of that, before we look at the section titles?
13:59 Dean: Well, a guide, I think, basically sort of gives you the parameters. I think a roadmap is a lot more specific in terms of the sequencing, but that's just my interpretation of it. I think other people can interpret roadmap differently, in terms of sort of the relational aspect of how the different parts come together. And we've talked about that in ActInf livestreams. But I don't want to get bogged down on this part right now, because I think we've got a lot of stuff to cover here. 14:34 Daniel: Cool. Yeah. So the paper begins with an introduction to Bayesian inference. Section two is mathematical foundations that are building on Bayesian inference, so going from kind of Bayes' theorem to system dynamics and internal representation, some of which we've discussed before, others of which will push it into a little bit of a new area. 14:59 Then the key contribution and the focus of the paper is modeling morphogenesis. And so instead of just presenting Bayes, and then informational Bayes, and then free energy minimization on a bacterium like we saw Axel Constant do in livestream number 34, we're going to take that momentum and move it towards an application of modeling in morphogenesis. And that's where we'll talk about, like, what model they construct and how it relates to some of the formalisms that were brought up earlier. And then there's a discussion and conclusion section, which we won't go too much into today in the dot zero, but it's going to be awesome to go into it in the dot one and dot two, because there's a ton of awesome content and material. All right, so here are the keywords that were listed. 15:56 And we won't go into most of them, just to say that they're going to come up in this discussion. We have free energy principle, Bayesian inference, morphogenesis and developmental biology, regeneration.
Kind of got some mathematical stuff we've seen before. We have a triad of developmental biology, which, broadly understood, includes embryogenesis as well as aging and cancer and all these other developmental things that happen during biology, and top-down modeling, and maybe we'll see what that means coming up. Okay, well, to get right into it, section one is an introduction to Bayesian inference. 16:45 So here's a nice picture by Sasha, and here are the opening lines of the paper they wrote, and then feel free to give a thought. "Evolutionary change results from mutations in DNA and selection acting on functional bodies. Thus it is essential to understand how the hardware encoded by the genome enables the behavioral plasticity of cells that can cooperate to build and repair complex anatomies. Indeed, most problems of biomedicine (repair of birth defects, regeneration after traumatic injury, tumor reprogramming, etc.) could be addressed if prediction and control could be gained over the processes by which cells implement dynamic pattern homeostasis. 17:35 The fundamental knowledge gap and opportunity of the next decades in the biosciences is to complement bottom-up molecular understanding of mechanisms with a top-down computational theory of cellular decision making and infotaxis (movement towards information)." So, Dean, what do you think? What should we study and why, and how is this related to active inference and the FEP? Okay, well, my background in genetics is near zero, but what I did come away with from that last bullet in particular was one of the things that I beat like a dead animal, which is that you need a minimum of two. And I think that the bottom-up, sort of building-blocks aspect of it... 18:22 Dean: ...with the top-down, how do we get a better grip on what the signals are that are passing between all of those bricks, was really interesting. And so, carrying on through, I kind of wanted to open that up and see what that marriage looked like. Awesome.
So, speaking as someone who started the science journey with genetics, pretty much, it always was so interesting how genetics would frame itself as if the bottom-up information flow could just simply replace theory. We'll sequence so many genomes, so we won't need to speculate about evolution; or we'll have your genome, so we won't even need to ask about what medicine is best. 19:06 Daniel: And that's kind of this bottom-up molecular understanding, which is vital, it's essential, it's important, it's changing a lot. But it's this bottom-up subunit description, the what and the where of the subunits and how they touch and all of that. And that is always met, implicitly and increasingly explicitly, with a top-down theory of cellular decision making and infotaxis. And so we're always kind of meeting in the middle, with empirical wet lab biology revealing these mechanisms and these findings, and we're meeting that as researchers doing model-based science with what are being called here top-down modeling, top-down computational theories. Okay, that's just how they open up, and also set a challenge for the coming decades; it's their expectation of the next few decades. 19:59 Then they provide a math roadmap, which is an awesome paragraph. So they write: we lay out the mathematical foundations of a type of Bayesian modeling that they're going to use to simulate pattern formation. So red, Bayesian modeling; orange, we start by identifying a Lyapunov function that can be used to analyze and solve any dynamic system using the fundamental theorem of vector calculus, i.e., the Helmholtz decomposition. 20:28 So we're using Bayesian modeling and layering in what we'll talk about soon, the Lyapunov function. Green: we use it to characterize the generalized flow of systemic states of the whole system in terms of convergence to a nonequilibrium steady state, a NESS.
Next layer, blue: we then introduce the notion of a Markov blanket that separates the external and internal states of the system. And finally, purple: we can then replace the Lyapunov function with a variational free energy function to solve for the evolution of internal and active states (that's of the particle) and thereby characterize self-organization in far-from-equilibrium systems that can be partitioned into a cell and an internal model. That's the math that they go through; that's the math 21:25 rainbow. Then sections after that apply this formalism to illustrate morphogenesis and neoplasia (the growth of new forms) using simulations. And what's so cool is this trajectory from 400 to 700 nm, from the Bayes through the vector calculus and flow states to the finding or the figuring out of a Markov blanket, and then the usage of the Markov partitioning, legitimated by the orange and the green steps, to find a heuristic way to go about solving what was needing to be addressed right in the beginning. We've seen some variants of this trajectory multiple times, and then in this paper they study morphogenesis. 22:23 And so then they model morphogenesis. But save for a couple of words, a different paper could have said: we do the Bayesian thing, the Lyapunov thing, the Helmholtz, the NESS, the Markov, and then we're back to doing far-from-equilibrium strategy in online teams, and then we applied it to online teams. And so let's keep in mind, like, the paper can be long, there are a lot of equations, but where is this broader trajectory that's helping us fine-tune our regime of attention and update our generative models about what we are doing here and how it's different than other approaches? And then where is the part that's actually studying morphogenesis? 23:09 Just things to keep in mind. Okay. Anything to add on this?
Well, correct me if I'm wrong on this, but I think what we're doing is essentially taking, as you said... we can look at a variety of things. We can look at the physical nature of ants, we can look at cells, we can look at online teams. Whatever we're looking at, what they basically said is: there is a parallel way of looking at the formalisms that exist in those concepts with that type of content. 23:44 Dean: That's essentially what this said to me. There's a way of bumping it up into that sort of statistical, mathematical realm. And now we're going to walk you through what that is. Great. Okay, so into the Bayes area, where I grew up, sitting by the dock of the Bayes. 24:08 Daniel: All right. I thought it was going to say where the watermelons grow. Carry on. Bayes' theorem rests on three basic axioms of probability theory and is used to relate the conditional probability of an unobservable event A. Okay, so the vertical bar means conditioned upon, or given. 24:27 And so what we really want to know is, like, the probability of something that we cannot observe, conditioned on what we did observe, B. So you're never observing the lightning strike. You're observing photons on your receptor. And then you want to know about whether there was lightning in that cloud over a few miles away. So we want to know the conditional probability of an unobservable event A. 24:52 That's this left hand side. We want to relate that to an observable quantity B. That's what the data are. And that's going to be calculated using Bayes' theorem as the probability of the data given the unobservable state, which is called the likelihood. So that's, like, had to do with cats and meowing in previous examples, or it has to do with, like, if there were lightning, yes or no, what would be the likelihood of me observing photons; multiplied by the probability of those unobservable states happening, which is, of course, a prior about them.
25:36 But it could be set with empirical data; divided by how likely the data are, which is called P of B, the marginal likelihood or evidence. Okay, so P of A given B is called the posterior, because it's like after you've done your inference, you want the correct posterior, which is like updated beliefs about how likely the thing that you care about, A, is, given what has come in. Priors are like: prior to this round of inference, what are your beliefs about how likely A is? And then the evidence coming in is B. Okay. 26:19 And the likelihood is how likely the data are under the prior distributions. So that's Bayes' theorem, and we've talked about it multiple times, and I'm sure there are other awesome videos and courses that can explain it even better. How are they using it here? To describe the dynamics of an ensemble of information processing agents like cells as a process of Bayesian belief updating, we need to relate the stochastic differential equations governing Newtonian motion and biochemical activity to the probabilistic quantities above. Long sentence, but it's kind of saying: we need to connect these variables, which are informational (they're probabilistic, they're statistical), to movement in space, movement in a Newtonian coordinate system, because we need these variables to measure spatial movement so we can study morphogenesis. This is fairly straightforward to do 27:22 if we associate biophysical states with the parameters of a probability density and ensure their dynamics perform a gradient flow on a quantity called variational free energy. Variational free energy is a quantity in Bayesian statistics that, when minimized, ensures the parameterized density converges to the posterior belief, as we will see below.
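The lightning-and-photons walkthrough above is just Bayes' rule arithmetic, and it can be made concrete in a few lines of Python. All of the probabilities below are invented for illustration; they are not from the paper or the livestream:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# A = "lightning struck" (unobservable), B = "photons hit my receptor" (observed).
# All numbers below are invented for illustration.

p_lightning = 0.01                 # prior P(A)
p_photons_given_lightning = 0.95   # likelihood P(B|A)
p_photons_given_none = 0.05        # false-positive rate P(B|not A)

# Marginal likelihood (evidence): P(B), summing over both hidden states
p_photons = (p_photons_given_lightning * p_lightning
             + p_photons_given_none * (1 - p_lightning))

# Posterior: updated belief about lightning after seeing the flash
p_lightning_given_photons = p_photons_given_lightning * p_lightning / p_photons
print(round(p_lightning_given_photons, 3))  # 0.161
```

Turning the same crank with a sharper likelihood or a stronger prior shifts the posterior accordingly, which is the "belief updating" being described in the passage.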
So yes, this is our first slide on Bayes' theorem, and we did just kind of go from the total description of the variables and what Bayes' theorem is at a first pass, to this introduction of the variational free energy. So it can be accordioned out and unpacked a lot more. But what variational free energy is doing is helping us tractably converge to the appropriate posterior belief, and that's its role here. Okay, anything to add on this one? 28:21 Dean: Daniel, I'll ask you a question. So if you were to talk about this to somebody for the first time, not people who have probably listened to others of our livestreams, would you say that it would be dangerous to say that we're taking something that's kind of ratio based, and now using that to try to build a model of what our predictions are going to be based on? Is that fair to say? 28:53 Daniel: I believe so. It's like one way to approach that. I hope this would be accessible. The ratio-based part is on the right hand side, and one could say, depending on how much data you're getting and what you're trying to estimate, it could be very hard to go from observations to inference on hidden states. Pretty fair conclusion. 29:18 So what we could do would be to construct a smooth, approximatable model, so that when we slide to the bottom of our model, just like the ball sliding down the hill, we would actually be doing the same thing as fitting this effectively. So instead of number crunching and putting everything into this equation and running the calculation, we're going to construct a bowl, which is the variational free energy, that we can do gradient descent on, that is going to converge to a really good solution. Do we construct it, or do they kind of tell us that we should assume it? If we want to make it tractable, assume the bowl. 30:08 When we pick up this instrument, we're making variational free energy bowls. Yeah. Okay, good. And we've seen this before in livestreams 26, 32 and 34.
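The "bowl" picture here, sliding down a smooth objective instead of computing the Bayes ratio directly, can be sketched with the simplest conjugate-Gaussian case. The prior, likelihood, observation, and step size below are illustrative assumptions, chosen so the exact Bayesian answer is known in closed form:

```python
# Sliding down the "bowl": gradient descent on a quadratic objective
# whose minimum coincides with the exact Bayesian posterior mean.
# Prior: mu ~ N(0, 1). Likelihood: y ~ N(mu, 1). Observation y = 2.
# Conjugacy gives the exact posterior mean (0/1 + 2/1) / (1/1 + 1/1) = 1.0.

y, prior_mean, prior_var, lik_var = 2.0, 0.0, 1.0, 1.0

def dF(mu):
    # Gradient of the bowl (negative log joint, up to a constant) w.r.t. mu
    return (mu - prior_mean) / prior_var + (mu - y) / lik_var

mu = -3.0   # arbitrary starting point on the side of the bowl
lr = 0.1    # gradient-descent step size
for _ in range(200):
    mu -= lr * dF(mu)

print(round(mu, 4))  # converges to the posterior mean, 1.0
```

The point of the sketch is that no ratio P(B|A)P(A)/P(B) is ever evaluated; following the gradient of the bowl recovers the same answer that exact Bayes would give.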
We did some variants on moving from Bayes' theorem and exact Bayes to variational inference, which uses this variational free energy value. 30:43 So those are a few times that we've seen this discussed before in neuroscience. The minimization of variational free energy is referred to as active inference. We will see that when the basic condition for an inference-type description of a system, namely the existence of a Markov blanket separating external and internal states, is satisfied, agents such as biological cells form into organized conglomerations based on their generative model of how their blanket states influence and are influenced by external states in the external milieu. So the connection between Bayes' theorem and Markov blankets is: the unobservables are external states, the observables are blanket states. 31:34 It's a pretty fair model. Whether it also provides, like, strong guarantees on optimal perception or optimal action, that's another discussion. But at the very least, it's a fair framing that external states are unobserved. If you could observe them, they'd be observables; then they'd be more like B than A. So we're talking about a situation where we have some As and we have some Bs, and we're going to think about that in terms of a blanket formalism. And the dynamics of a system with a Markov blanket that self-organizes to nonequilibrium steady states 32:05 can be described as a gradient flow on this computable variational free energy bound. So we can set up the blanket, and it turns out that we don't have to do the ratio number crunch. We can do the bowl gliding down. That's the beginning part on just Bayes. That takes us from Bayes' theorem to introducing variational free energy and doing gradient descent on variational free energy, so that one can effectively perform Bayesian inference. 32:42 Cool. Perfect. Okay. Section two: mathematical foundations.
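The blanket partition described in this passage (internal states "see" external states only through blanket states) can be illustrated numerically: in a chain external → blanket → internal, internal and external states are strongly correlated, but conditioning on the blanket removes that dependence. The mixing weights and sample size below are arbitrary choices for the sketch:

```python
import numpy as np

# Markov blanket sketch: external -> blanket -> internal.
# Internal states only receive external influence via the blanket,
# so conditioned on the blanket the two become (nearly) uncorrelated.
rng = np.random.default_rng(0)
n = 100_000
external = rng.normal(size=n)
blanket = external + 0.5 * rng.normal(size=n)   # sensory-like mixing
internal = blanket + 0.5 * rng.normal(size=n)   # internal tracking of the blanket

def residual(x, z):
    # Regress z out of x (least squares); keep what z cannot explain
    slope = np.dot(x, z) / np.dot(z, z)
    return x - slope * z

raw = np.corrcoef(internal, external)[0, 1]
partial = np.corrcoef(residual(internal, blanket),
                      residual(external, blanket))[0, 1]
print(round(raw, 2), abs(partial) < 0.05)  # strong raw correlation, near-zero partial
```

This is the statistical content of "the unobservables are external states, the observables are blanket states": everything the inside can learn about the outside is already carried by the blanket.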
Alright, so section 2.1, stability and convergence in coupled dynamical systems. 32:55 And subsection one, the Helmholtz decomposition. So they write: the Helmholtz decomposition states that any sufficiently smooth (i.e., possessing continuous derivatives) vector field F can be decomposed into an irrotational, curl-free field (so kind of straight up and down, putting the ruler on and finding the slope of a hill) and a solenoidal, divergence-free vector field. The divergence-free solenoidal is like putting the ruler at ground level and then being able to make an isocontour around a hill. Because an irrotational vector field has only a scalar potential and the solenoidal vector field has only a vector potential, we can express the vector field as in formula two. 33:48 And just so people know, the upside-down triangle is called del, or nabla, which means harp, due to the shape. So del phi is the irrotational vector field; del cross A is the solenoidal field. We talked about this mostly in number 26 and 32, where we talked about the partition of the decomposition into a solenoidal and a gradient, as well as this housekeeping term, which was everyone's favorite housekeeping term. So we're not having a housekeeping term here, but we're keeping in mind that it's not implausible that it's still there. 34:41 It's just that there was a whole paper coming out after this morphogenesis paper that talked about the housekeeping, et cetera. So, again, it wouldn't be surprising if a future morphogenesis paper or project did have this housekeeping term. But 32 was actually a later paper, so that might be a more advanced place or updated place to check out how the partition, I'm sorry, the Helmholtz decomposition works. And here was an image from 26, just showing how you could have, like, a kind of hill. 35:21 And in physics, you could have hill climbing and trying to get to the top. Or sometimes it's easier to think about that as, like, the flip, and the ball rolling to the bottom of the hill.
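A minimal numerical check of the decomposition being quoted: build a toy 2D field as a gradient part plus a solenoidal part, and verify that the gradient part is curl-free and the solenoidal part divergence-free. The example field is invented for illustration, not taken from the paper:

```python
# Helmholtz decomposition sketch: a smooth 2D field F split into an
# irrotational (curl-free) part grad(phi) and a solenoidal
# (divergence-free) part. Illustrative choice:
#   phi(x, y) = x^2 + y^2        -> gradient part (2x, 2y)
#   solenoidal (rotation) part   ->               (-y,  x)
#   F = gradient + solenoidal    = (2x - y, 2y + x)

def grad_part(x, y):
    return (2 * x, 2 * y)

def sol_part(x, y):
    return (-y, x)

def curl_z(f, x, y, h=1e-5):
    # 2D curl: dFy/dx - dFx/dy, via central differences
    dfy_dx = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    dfx_dy = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    return dfy_dx - dfx_dy

def div(f, x, y, h=1e-5):
    # Divergence: dFx/dx + dFy/dy, via central differences
    dfx_dx = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    dfy_dy = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return dfx_dx + dfy_dy

# The gradient piece has zero curl; the solenoidal piece has zero divergence
print(abs(curl_z(grad_part, 0.3, -1.2)) < 1e-6)   # True
print(abs(div(sol_part, 0.3, -1.2)) < 1e-6)       # True
```

The gradient piece is the "ruler up the slope" part of the picture, and the solenoidal piece is the circulation around the isocontours.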
So whether it's the optimistic biologist talking about going up to the fitness peak, or whether it's the pessimistic physicist talking about going to the bottom of the energy well, it's the same idea: that there's a particle on a landscape. And there's been a ton of work on passive particles. That's not to say immobile particles, but particles that roll and get trapped. And then once they're trapped, they stay, or they jiggle 36:02 stochastically. We're studying active particles, and so that's where action selection comes into play. Landscapes of energy with balls rolling down them, but they're kind of like jumping beans, where they also have a few tricks that they can play. Okay, anything to add here? Just a reminder to people: that's a density that we're looking at, not a vacuum. 36:31 Dean: I know most people, like, if I was to talk to younger people and they started imagining a ball rolling down a hill, they wouldn't necessarily confirm that it's an actual density that we're looking at. So just reinforce that. Yeah, like the bottom right here, it's kind of like you're looking down at an ink drop, and where it's densest is where it's darkest. And then that is like a landscape where wherever it's darkest is highest. Right. 37:01 Daniel: Okay. 37:05 Section 2.1.2, the Lyapunov functions. Okay, so Lyapunov functions have been used extensively in dynamical systems theory and engineering to characterize the stability of fixed points of a dynamical system. Lyapunov functions are generally defined for smooth systems through the following conditions. So L is going to be the Lyapunov function, and x is going to be a point. 37:35 So a fixed point is going to be x star, and a non-fixed point is going to be just regular x. So x star: the special points where, if you are there, you're perfectly balanced. So this is saying that at fixed points, the Lyapunov function evaluates as zero, and it's not zero if it's not a fixed point. And the Lyapunov function change, the change in L with respect to time, is below zero for all x. Equation 3a requires the Lyapunov function to be minimal at fixed points x star, representing local minima, and 3b denotes convergence to these fixed points over time. 38:26 So what does it mean to have a Lyapunov value of zero, above zero, or below zero? So if you're at zero, just like it said with the fixed points, that means that it's not moving at all. But a Lyapunov exponent below zero means that things are converging towards the fixed point; bigger than zero means diverging away. Specifically, what it means is: two points that are really, really close to each other, do they stay exactly the same distance apart? That's zero. A negative Lyapunov exponent means that two points close to each other will converge. 39:03 So that's a convergent attractor, and then positive Lyapunov exponents mean that two points that are placed very close to each other will diverge away from each other.
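The conditions being paraphrased here (zero at the fixed point, positive elsewhere, decreasing along trajectories; nearby points converging versus diverging) can be checked for the simplest stable system, dx/dt = -x with candidate function L(x) = x². The step size, horizon, and starting points below are arbitrary choices for the sketch:

```python
# Lyapunov-style check for the 1-D system dx/dt = -x (fixed point x* = 0),
# with candidate function L(x) = x^2:
#   L(0) = 0 at the fixed point, L(x) > 0 elsewhere,
#   dL/dt = 2x * dx/dt = -2x^2 < 0 for x != 0, so trajectories converge.

def L(x):
    return x * x

def dL_dt(x):
    return 2 * x * (-x)   # chain rule, substituting dx/dt = -x

assert L(0.0) == 0.0 and all(L(x) > 0 for x in (-2.0, 0.1))
assert all(dL_dt(x) < 0 for x in (-2.0, -0.1, 0.1, 2.0))

def simulate(x0, flow, dt=0.01, steps=1000):
    # Forward-Euler integration of dx/dt = flow(x)
    x = x0
    for _ in range(steps):
        x += dt * flow(x)
    return x

# Two nearby starting points: converge under dx/dt = -x, diverge under dx/dt = +x
gap_stable = abs(simulate(1.00, lambda x: -x) - simulate(1.01, lambda x: -x))
gap_unstable = abs(simulate(1.00, lambda x: x) - simulate(1.01, lambda x: x))
print(gap_stable < 0.001, gap_unstable > 1.0)  # True True
```

The shrinking gap between nearby trajectories is the "negative exponent" case in the passage; flipping the sign of the flow produces the diverging, positive-exponent case.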
39:17 I hope I have not oversimplified or made unforced errors about the Lyapunov functions, because it's obviously a really technical area. And they write, following [16], we can generalize this local Lyapunov function of stability, so zero equals fixed point, positive equals divergence, negative equals convergence, to a global Lyapunov function that plays the role of a potential function of any dynamical system. And we won't go into it, but there is the citation to this 2013 paper, and they do some pretty interesting math. 39:56 So if somebody is more familiar with this, it would be awesome to hear about. But what this is doing is it's taking the same dynamical systems perspective that we've been thinking about, about things changing through time, and then overlaying on that landscape a stability landscape. And so if we think about, like, this hill on the top right, again, hopefully not being wrong or misleading here: at the very top of the hill, it's like a fixed point, an unstable fixed point. Even if you got it perfectly balanced there, it could get knocked off by stochastic things, which will come back later. But it's perfectly balanced. 40:39 It's like a fixed point. Two points that were very close to each other on a hill, two balls placed right next to each other, would start to diverge as they rolled down the hill. So that would be like a slight divergence. And so that is what the Lyapunov function as potential function is allowing. It's allowing us to take the landscape of ball rolling, where gravity is the potential function. The higher up, the more potential you have. 41:08 And then it rolls down because of the dissipation of potential energy. And they're going to show that any dynamics with a Lyapunov function, so any smooth, differentiable, etc., landscape, has a corresponding physical implication: a friction force, a Lorentz force, and a potential function.
So we can think about the Lyapunov function being related to potential functions, like something that draws the system back. That's where the negative Lyapunov exponent comes into play. Okay, Dean, just a quick thing. 41:42 Dean: Daniel, when we were setting up the slides, between those two bullets they talked about something called a saddle point. Did you understand what a saddle point was previous to reading this, or do you know what a saddle point is now? Yes, let's see, I'm just pulling up the paper in the window, okay, so people can see it. 42:10 Daniel: Okay. So it's the following sentence. So we can generalize the Lyapunov function to a global function that plays the role of a potential function of any system. This follows by generalizing condition (a) to allow for saddle points. So a saddle point is when the landscape is shaped like a saddle. If the ball were on top of a hill, the curvature would be negative with respect to all directions. 42:39 Dean: Right? In a saddle point, the curvature is negative with respect to one of the dimensions, but it's positive with respect to another dimension. And so it's just like the saddle. Yeah. Okay. 42:53 Daniel: Why do they call it that? And so the ball on the top of the hill, with just a little jiggle, could go any direction, because it's unstable in every direction. Saddle points induce asymmetry, where that ball is never going to roll up the hill. It's going to fall to one side of the saddle or the other side of the saddle. 43:20 And one of those could be like a certain attractor, and the other one could be like another elevation. Basically, though, I don't know the details of how the saddle point is related to some of this Lyapunov stuff. Okay. 43:40 All right, so, following [16], they're showing that the Lyapunov function is equivalent to a potential function in physics.
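The saddle-point geometry described here, curved up along one axis and down along the other, is easy to verify with finite differences. A hypothetical sketch, not from the paper:

```python
def curvature(f, axis, h=1e-4):
    """Second derivative of f(x, y) at the origin along one axis,
    via a central difference."""
    def shift(t):
        p = [0.0, 0.0]
        p[axis] = t
        return f(*p)
    return (shift(h) - 2 * shift(0.0) + shift(-h)) / h**2

# The classic saddle z = x^2 - y^2: curved up along x, down along y.
saddle = lambda x, y: x**2 - y**2
cx = curvature(saddle, axis=0)   # positive: a valley along x
cy = curvature(saddle, axis=1)   # negative: a ridge along y
```

A hilltop, by contrast, would give negative curvature along both axes; the saddle's mixed signs are what make the ball fall to one side rather than any side.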
A potential function, psi, can be constructed to describe the flow of, or force acting on, a particle through a potential energy gradient. So the force f is given by the potential gradient, del psi. So here's just one way to state the big idea of equations five through nine, which are not discussed here. So the idea is: create a potential function. 44:17 And when you think of potential function, think of something like kinetic energy based upon elevation. But it's not for gravity and elevation. It's like balls falling down tall buildings, but not for gravity, for something that's related to the dynamics' convergence. So if you were, like, at the bottom of the bowl, you'd only have a little bit of potential energy; the potential function would be low. If you are really far up the bowl wall, you have a high potential function. 44:50 You have a long way to fall. That is going to be reflected by a balance of forces, namely an attractor force, which is the combination of the Lorentz force and the potential-energy-induced force. And then there's the dissipative force, and that's the frictional force due to dissipative random fluctuations. So that's like the velcro ball in the velcro bowl: the friction is so high that it still is being kept on the side, or it's rolling down very slowly, or something like that. Combining these definitions, we can express the total force as a balance of the forces defined above, resulting in, and so, just like if there were two colliding vectors, and then we were going to express the resultant as, like, the total force being applied on something. It's like that, but it's not about the physical forces and gravity. It's more about these forces. 45:52 Abstractly. And so equation ten is expressing this balance of forces on x using a gradient landscape. So there's that del psi, and here's the combination of forces being expressed as del psi. Dean, so they use the word tensor here, and when I was reading this, I didn't see it.
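The "ball rolling to the bottom of the bowl" picture of a potential function can be sketched as a plain gradient flow. Everything here, the quadratic psi, the step size, the starting point, is an illustrative assumption, not the paper's model:

```python
def descend(grad_psi, x0, rate=0.1, steps=200):
    """Follow the flow dx/dt = -grad psi(x): the particle dissipates
    potential energy until it settles at a minimum of psi."""
    x = x0
    for _ in range(steps):
        x -= rate * grad_psi(x)
    return x

# psi(x) = (x - 3)^2 / 2, so grad psi = x - 3;
# the 'bottom of the bowl' sits at x = 3.
final = descend(lambda x: x - 3.0, x0=10.0)
```

Starting far up the bowl wall at x = 10, the flow converges to the minimum at x = 3; adding the velcro (a smaller rate) slows the descent without changing where it ends.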
46:20 Dean: Okay, so if we liken it to a tug of war on a rope, it might appear to the outside observer that there's a balance. But really, what I read into this, and maybe I'm wrong, was that there's a tension between these two forces, between the attractive force and the dissipative force. Am I misinterpreting that? 46:44 Daniel: If someone has a different answer, it would be awesome to hear it. A tensor is like a generalization of a matrix. It's like a matrix through time. I don't think that it is directly related to tensile or tensegrity. 47:00 Dean: Okay. That being said, you're right that there is a tension between these forces, and tension can reflect, like, a compromise position. Like, if you have a rope with tension on it, then the endpoint of the rope is going to be like some compromise between the various forces. And so we're thinking about, like, the various forces, and then we're expressing it as, like, this matrix through time. That's what makes it a tensor, not the fact that the forces are in tension or opposition with each other. 47:37 Daniel: Yeah. Okay. I hope that's accurate. Okay. And then they follow [16] again and transform that expression in ten into a more standard form using the diffusion tensor gamma and a tensor Q. 47:56 This is like a rewriting that helps them get to this format where f of x is describing the flow of states. So that's the change in states through time, and it's like a vector field, and it is going to be Q minus gamma, so tensor minus diffusion tensor, multiplied by that landscape. 48:26 So that's kind of like the decomposition. 48:35 But if someone could explain this more, let's hear it. Yeah. In my simple words, this is where I keep bringing up the idea that maybe when we're looking at these things through an active inference lens, we're not just looking through a frame, we're also looking through a filter. And they talk about filters later on.
So how do we make sure that we're including both, and sort of the tension between both of those views, working together and being in tension with one another? 49:10 Dean: So, yeah, this is very formal, but I think it kind of reinforces that there's what's inside

the partitions and then what's going on within them as well. Yeah, so it's describing the flow of states resulting from these conservative and dissipative forces. So if Q and gamma were the same, then the flow would be zero. If one is larger than the other, then it's going to be dominated by that component, with respect to being multiplied by this landscape. So we'll return to it. 49:50 Daniel: Okay, variational free energy. 49:54 What did you want to add on this slide? 50:01 Dean: Well, I don't know that most people already know what we're talking about in terms of variational free energy. What they're doing is introducing it in terms of: now, where does the model, the model of a biological entity, where is that sourced from? And that's this sort of idea of these things like posterior beliefs, or the idea that we don't walk into a situation completely blind. We have some sort of prior, and we're modeling situations where internal states can be interpreted and framed as a generative model. So they're focusing on the model aspect of this, as opposed to the generative process side. 50:48 They're going to talk about what the external milieu is and where the generative process is here shortly. Great. So, good point to pause the video and read this, because we're not going to read it out here. But you're absolutely right. They're focusing on the generative models of the agent, of the cell, not doing that analysis on the niche, for example. 51:15 Daniel: All right, Section 2.2.1. Okay. So this is a fun part. We can describe dynamics in generalized coordinates of motion, denoted with a tilde, where x tilde is defined as x tilde equals x. That's like the absolute state of x. And then x with a dot is the first derivative of x. 51:37 That's called velocity. x with two dots, or x double prime depending on the notation, is the acceleration. What people don't always know is that there are higher derivatives, and they also have names.
So the third derivative is the jerk, and then the fourth, fifth, and sixth are the snap, crackle, and pop. So that's just kind of funny. But the generalized coordinates of motion, which came up in [26], have to do with doing prediction on not just how high or low something is, or, like, on the number line, left and right. 52:16 That would be like if x were just one number. But even if it's just one number that we're tracking, like body temperature, for example, we might be interested in kind of unpacking that into this generalized coordinate, where we want to know body temperature and how fast it's changing and how fast that change is changing. And so one can imagine that once zero has been achieved, if it's zero through time, the higher derivatives are also zero. If it's zero only instantaneously, that's not always the case. 52:52 But if it's zero through time, then it's kind of like you've cashed out the derivatives. The derivative of zero just stays at zero. And so it's like: if the position is unchanging, the velocity is zero. And if the position is changing at 5 miles an hour in the same direction, the velocity is constant, but the acceleration is zero. So keep on carrying that out to how fast things are changing, how fast those changes change, etcetera, etcetera. 53:20 And then they're going to take that x tilde, how x tilde changes with time. x tilde dot equals the flow on x tilde plus a noise or fluctuation, a random fluctuation term. This is the form of the Langevin equation: generalized coordinates of motion plus a random fluctuation term. 53:52 For any of the mathematical colleagues, it would be awesome to hear how this is related to the Wiener assumptions, etc. But we're going to continue. So, using the Helmholtz decomposition, we can now express steady-state flow in terms of a divergence-free component and a curl-free gradient descent on a scalar Lyapunov function L of x tilde.
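The generalized coordinates of motion, x tilde = (x, x', x'', ...), can be approximated for a sampled trajectory with central differences. A toy sketch under our own choice of test function, just to make the position/velocity/acceleration stack concrete:

```python
def generalized_coords(x, t, h=1e-4):
    """Generalized coordinates (position, velocity, acceleration) of a
    trajectory x(t), estimated by central differences -- a numerical
    stand-in for the tilde-x = (x, x-dot, x-double-dot, ...) stack."""
    pos = x(t)
    vel = (x(t + h) - x(t - h)) / (2 * h)
    acc = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return pos, vel, acc

# x(t) = t^2: at t = 3 the stack is (9, 6, 2); the jerk and everything
# above it would be zero for this trajectory.
pos, vel, acc = generalized_coords(lambda t: t * t, t=3.0)
```

Tracking the whole stack, rather than position alone, is what lets the schemes discussed here predict where a trajectory is heading, not just where it is.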
To obtain f of x tilde equals, there's the Q minus gamma that we saw a few slides ago, multiplied now by the L. So before we had f of x, not in generalized coordinates of motion: Q minus gamma, del psi of x. 54:38 That was the potential function. Now, because the potential function and the Lyapunov function are related to each other, we're looking at the generalized coordinates of motion using the same Q minus gamma Helmholtz decomposition, but multiplying it now by the Lyapunov function on x tilde. It's the solution at nonequilibrium steady state, and it is exactly the same solution as for the flow of particles in the classical treatment above. Crucially, we can now see the Lyapunov function is the negative log probability of finding the system in any generalized state. 55:18 The L of x tilde is the negative natural log of p of x tilde, of how surprised one should be. This is known as the self-information of a state in information theory, surprisal or surprise; in Bayesian statistics, it is the negative log evidence. So the change in the generalized coordinates of motion, f of x tilde, is an irrotational-solenoidal decomposition. 55:47 That's the left part of the right-hand side, multiplied by a Lyapunov function which also has the interpretation of surprisal. So if you were exactly resting, so to speak, you'd be totally unsurprised. In summary, any weakly mixing dynamical system at nonequilibrium steady state will look like it is a flow that can be decomposed into a gradient flow on surprise and an accompanying solenoidal flow. Because we can associate the Lyapunov function in (18) with a free energy, the system is effectively minimizing a free energy in its convergence to a set of attracting states which have a high probability of being occupied. 56:39 So it's a little circular. It's like we're finding ourselves where we expect to find ourselves. And so if we just said, hey, how surprised are we, and let's gradient ascend towards not being surprised.
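The claim that the flow splits into a solenoidal part, which circulates along level sets of the Lyapunov function, and a dissipative part, which descends it, can be checked at a single point. The matrices Q and Gamma below are illustrative stand-ins, not values from the paper:

```python
# Flow f(x) = (Q - Gamma) applied to grad L(x): Q antisymmetric
# (solenoidal, circulates around level sets), Gamma symmetric positive
# (diffusion/friction, descends the gradient).
Q = [[0.0, 1.0], [-1.0, 0.0]]       # antisymmetric
Gamma = [[0.5, 0.0], [0.0, 0.5]]    # diffusion tensor

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

grad = [2.0, -1.0]                  # gradient of L at some point

solenoidal = matvec(Q, grad)
dissipative = [-c for c in matvec(Gamma, grad)]

# The solenoidal component does no 'work' on L (it is orthogonal to
# the gradient); the dissipative component strictly decreases L.
circulation = dot(solenoidal, grad)
descent = dot(dissipative, grad)
```

That orthogonality is why, as noted a moment ago, the flow vanishes when Q and Gamma cancel: only the difference of the two tensors acts on the landscape.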
Just as a first approximation, we would find ourselves in regions of low surprise, and we would be surprised to find ourselves in regions of high surprise. 57:12 Dean: Is it circular, or is it kind of the byproduct of a recursive process? 57:20 Again, I'm asking, I'm not questioning. It's a great question in, like, a really meta way. All of math is circular once the axioms have been stated, because the equal sign is there. So you can add one to both sides and it would still be equal. So it's that kind of circular; it's useful, and that's what matters for this paper. 57:44 Daniel: Okay, yeah, we're not going to talk about it too much, but in the dot one we absolutely will. And that's the least-action principle in Section 2.2.2. And they even give the example we love to see: for example, in colonies, ants find the path of least action to harvest food and bring it to the colony. Citation needed. 58:08 Let's see that. But conceptually, no problem. This example considers their paths as flow channels or trajectories, finding the least average action for each instance of foraging given available resources. So what does the least-action principle mean? What does least action mean, and what is it necessary or sufficient for? 58:33 Because certainly foraging is not the least biochemical expenditure of energy. The sort of naive interpretation of least-action planning would be: just stay where you are. But we're thinking about a usage of least action that impels this ant to forage in an adaptive way, such that even when its foraging trip is hard and the seed is heavy and it gets lost or something, that is still a realization of least action on something. Still a ball rolling down a hill as efficiently as it can in a space. So it's a nuanced topic that it will be great to discuss, because they also write: minimization of action in an open system leads to structure formation. 59:22 So that'd be a good thing to unpack. Okay, carrying on.
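The surprisal L(x) = -ln p(x) discussed above is minimized exactly where the density is highest, which is the sense in which "finding ourselves where we expect to find ourselves" cashes out. A minimal numeric illustration, assuming a standard Gaussian density (our choice, not the paper's):

```python
import math

def surprisal(p):
    """Self-information -ln p(x): low for expected states, high for rare ones."""
    return -math.log(p)

def gaussian_pdf(x, mean=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Surprisal is minimized at the most probable (expected) state ...
at_mean = surprisal(gaussian_pdf(0.0))
# ... and grows as the state gets less probable.
far_out = surprisal(gaussian_pdf(3.0))
```

A gradient flow on this quantity would carry the state toward the mean, i.e., toward the region of low surprise.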
59:29 In dissipative random dynamical systems, action is not minimized for each element of the system, but on average over an ensemble of elements or repeated trajectories of the same element. 59:43 Since self-organizing open systems are not conservative, their structured flow is quintessentially dissipative. 59:53 Good to think about that and hear many perspectives and questions on it. 1:00:00 Classically, for a conservative system, the Lagrangian is defined as L. This is the Lagrangian L, not the Lyapunov function. Now it's Lagrangian equals T minus V, where T is the kinetic energy of the particles and V is the potential energy. 1:00:21 And so that's how the trajectory of system states could be solved for a conservative system, where the total energy is going to be conserved. But they're opening up into this dissipative and open scenario, where this is not going to cut it. A little bit more detail on least action, which we're not going to go into. Okay, continuing on: Section 2.2.3, the Markov blanket. Okay. A robust literature is developing around the ability of cells and other neural and non-neural systems to measure aspects of their environment via specific sensors. 1:01:16 They introduce here the Markov partition as this general case of separation of states into four categories: external states E, sensory states S, active states A, and internal states I. And so the little e is, like, a realization of big E, the bigger space, and then x tilde comes back, and the generalized coordinates are going to be a realization of values of those partitioned states. So we've talked about the Markov blanket before, but now we're interested in the generalized flow on the blanket-partitioned states. So here's what the Markov blanket looks like, and we'll talk more about it in the dot one and dot two. What do the arrows mean, and why are sense and action connected? 1:02:26 What part of the cell are we looking at? Is it a part of the cell? Is it a model of a cell?
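The four-way partition just introduced can be sketched as a toy dependency graph; the point of the blanket is that external and internal states never couple directly. The edge set below is an illustrative assumption following the usual active-inference diagrams, not the paper's equations:

```python
# A toy Markov-blanket dependency graph over external (E), sensory (S),
# active (A), and internal (I) states. Edges follow the usual
# active-inference sparsity; E and I have no direct coupling.
edges = {
    "E": {"S"},   # the niche impinges on the senses
    "S": {"I"},   # sensation informs internal states
    "I": {"A"},   # internal states drive action
    "A": {"E"},   # action changes the niche
}

def directly_coupled(a, b):
    """True if there is an edge between a and b in either direction."""
    return b in edges.get(a, set()) or a in edges.get(b, set())

# Internal and external states only communicate through the blanket (S, A).
blanketed = not directly_coupled("E", "I")
```

This sparsity is what licenses treating S and A as the interface: every influence between organism and niche has to pass through them.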
All these things have come up before, but just to say, that's their figure one, and table one gives the math for it. Okay, anything to add on Markov blankets before we continue? 1:02:44 Dean: No, because we'll unpack it. Yeah. So this is where they apply the Markov blanket. So M is describing the Markov partition that defines the underlying random dynamical system, such as the cell. So that is, like, the model; that's the Markov partition of the cell. 1:03:08 Daniel: So inserting C into A and B is going to give, there's A is actions and I is internal states. So the top equation is basically saying the flow on action is a function of sense, action, and internal states, and then it has a right-hand side. So the flow on action, the performance of action, is, and then there's that Q minus gamma, but now they have a subscript a, because it's about the action landscape, multiplied by the del operator on generalized action, and then a surprisal-looking term, the self-surprisal of sense, action, and internal states conditioned on the M partition. And the bottom equation, (30b), is basically the same, but it's the flow on internal states, f sub i, on generalized states of sense, action, and internal states, and that same Q minus gamma del self-surprisal conditioned on the model. 1:04:19 So we're seeing some patterns come up, and even if it's, like, confusing, and we all definitely are confused by it to some extent, we're starting to see some patterns. Why does it matter to focus so much on A and I, the action and the internal states? Because those are, like, the ones we control. We know we don't control external states directly. Maybe we can intervene so that they change differently, but we know we don't control them. 1:04:46 We also don't directly control sensed states. We're getting dealt this hand every second by the niche, the generative process. And so we can take action so that we can expect different sensory outcomes, but even then we wouldn't be controlling sensory outcomes.
And so a lot of active inference comes down to doing inference on internal states and the generative model, learning, and action: action selection, flows on action. 1:05:25 This bounds the surprise on the particular states, which is the internal and the blanket states, through control of the autonomous states, which is just the action and internal states, the states we control. 1:05:44 Because of the sparsity of the blanket, not every node is connected to every node, there might be a factorized, tractable form to bound our surprise about the particular states. In general, it would be a very difficult problem to solve. However, we can replace the Lagrangian, that's the one here that was going to be used for conservative systems, but it didn't really apply, with a variational free energy functional of a probabilistic model of how a system thinks it should behave, or how we think the system should think it should behave. So it's hard to solve these generally, but in practice there are heuristics such that these are approximable. Section 2.4 goes, like, one layer deeper into KL divergence and the VFE. 1:06:40 But we're not going to go into it, okay? We're just going to keep on plowing through. Section 2.3 is Bayesian filtering and self-organization. We're not going to talk about it now, but the big questions here are, like, how does Bayesian statistics relate to identity, and what is self-evident? And there's probably a lot of other good questions we could ask, like, why does Bayesian filtering and self-organization come here? All right. 1:07:13 Modeling morphogenesis. So now we actually get to the contribution of the paper, which is the modeling of morphogenesis. So they're going to illustrate self-organization to nonequilibrium steady state using the variational principles above, by trying to explain the behavior of a model of pattern regulation by consideration of information processing and error minimization with respect to a specific target morphology.
In this setting, the game changes subtly but profoundly. That's a Dean line if I ever read one. Above, which is what we've reviewed: 1:07:52 the dynamics of any random dynamical system equipped with a Markov blanket can be formulated in terms of a gradient flow on informational free energy. That was just here: the flow on action and the flow on internal states. Here we turn this formulation on its head by specifying a generative model, and implicitly a VFE function, and simulate self-organization by solving the equations of motion in equation 34. So they're going to specify the form of the attracting set V, the generative models, and then they're going to let it ride. Yeah, they specify the external dynamics as a generative process. 1:08:43 That's the niche, and the generative models of that process, which are described by the flow of internal states. Okay, so we're going to look at the figures, and we're not going to even go into the details. So let's just look at the figures and see what they're doing. So here's some empirical biology happening in the lab of Mike Levin and others. So in A, it turns out that when you dissect out the center of a flatworm, a planarian, those cells will remodel into a new worm. 1:09:18 So cutting out the middle, and it reforms into this new flatworm. And red is the head, and then, like, blue is the tail. So there's a self-organization of this target morphology, even from an initial clump of cells. What they're going to do in B is show, like, the final fixed point of these different cells. So here we have, like, four different cell types. 1:09:49 There's, like, the red neural or head cells, then yellow cells, green cells, and blue cells. And then basically, like, this: three, two, one, one. And then the bottom one, it's this one. Maybe they changed the green and the blue. 1:10:07 I think these two should be green and the bottom should be blue. But minor point; it's an attractor on this target morphology.
1:10:18 And then C is describing a little bit about how it happens, which is that cells are constantly comparing their sensed signal concentration, like of a gradient of some morphogen, to expectations, by minimizing their free energy functional. So it's like: what kind of a cell type am I, and what should I be expecting? Flip side: what should I be surprised by? Because to say what one expects is the other side of the coin of what one is surprised by. Okay, so they're modeling this empirical scenario, where a flatworm can regenerate its form from just a clump of cells, and we think about the location of these cells as being, like, a steady-state attractor. 1:11:08 Like, we want to have this body form last. Then if all the cells just get in line like that, everything's going to work out. All right, how do we take this first jump that we made from the empirical biology to the state-space framing? Here it is, an x-y state space. And now take the next jump, one more mathematical. 1:11:37 Here on the right side is that x-y positioning of each cell. 1:11:46 This is reflected by the position in x and y of the cells. So, like, negative nine and zero, that's like this top red one; zero is the midline. So it's like y and x. And here, like, negative five, it's a little lower. 1:12:03 And then, like, negative four and four: they're on the same y elevation, but one is four to the left and one is four to the right. And so this e star x is the position. So it's a matrix of positions, because there are, like, eight cells and two location variables per cell. So it's like a two-by-eight matrix, and it changes through time. So it's a tensor. 1:12:36 The e star c is the external signals. And so in this steady-state attractor, all three red cells are getting, like, signal one. Here, it's a binary signal. There are four signals, but they only have an on or an off state. In reality, there are many, many more signals than four, and there's a lot more nuance than just on or off.
1:13:01 But it's a toy example. All three red cells are getting the exact same signaling milieu of 1100. So factors A and B are diffusing near me, but not C and D. The yellow cells are experiencing those two diffusing factors and one other one, the blue cells are experiencing the first and the fourth, and then the green one is another combination. And so this is, like, the attractor state for a stable location and signaling expression. 1:13:40 So converging to this spatially makes your body look like that. But the way that you get a beach-ready planarian body like that is actually by reducing your surprise on signal expression. 1:13:58 Okay, so here's them running it through time. So it's a time-lapse movie montage of simulations of morphogenesis. And so here it is converging, towards the right side, to the morphology that's been discussed here. And so the cells start out undifferentiated. They know what they're sensing, but they don't know what kind of cell they are, and they don't know where they are. 1:14:29 And then they sharpen their expectation about what kind of cell they are while also moving into a different spatial niche. But they're not tracking their location, like, I'm at three comma two, where are you? It's like: this is what I'm biochemically sensing. And so it's a relational morphology that doesn't need the blueprint in the nucleus. 1:14:56 So it's a lot like an ant colony. There's not the nest architecture blueprint in the brain; there's the process of stigmergy. And then they show that with some other changes in the generative process, a positive squared gradient in the generative process, they get double-head formation, which some modulations have been shown to empirically result in in the lab. And they can also make it so it gets double-tail formation. 1:15:29 So it's just recapitulating this basic example and then showing that, like, modifying the external field changes the morphology that the system gets attracted to.
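The idea that a cell reaches its place in the target morphology by reducing surprise about what it senses, rather than by reading coordinates off a blueprint, can be caricatured in a few lines. The signal field, learning rate, and expected concentration below are all invented for illustration and are far simpler than the paper's model:

```python
# A toy 'cell' that reduces its surprise about a sensed morphogen
# concentration by moving along a one-dimensional signal field.
# All names and numbers here are illustrative, not from the paper.

def field(x):
    """Morphogen concentration: highest at x = 0, decaying outward."""
    return 1.0 / (1.0 + x * x)

def settle(expected, x, rate=0.05, steps=1000, h=1e-4):
    """Descend the squared prediction error (sensed - expected)^2 in x,
    using a numerical gradient of the error with respect to position."""
    for _ in range(steps):
        err = lambda p: (field(p) - expected) ** 2
        grad = (err(x + h) - err(x - h)) / (2 * h)
        x -= rate * grad
    return x

# A cell expecting concentration 0.5 settles where the field actually
# takes that value (x = 1 on this side of the peak), without ever
# representing its coordinates explicitly.
x_final = settle(expected=0.5, x=2.0)
error = abs(field(x_final) - 0.5)
```

The cell ends up "in position" purely as a side effect of matching its sensed milieu to its expectation, which is the relational, blueprint-free story told above.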
1:15:41 So those are the key pieces. That's modeling morphogenesis as Bayesian inference: reducing surprise with a variational free energy flow on action and internal states. Action states, internal states. Okay, we're not going to go into it here, definitely for the dot one and dot two. So 3.1 is the construction of the model. 1:16:12 They then unpack that signal matrix, the communication of the signals, and then they give even more information about modeling signals. And they also imbue it here with, like, this stemness, which helps the cells start out with, I believe, a weaker prior about what kind of cell they are, which translates to where they should be. 1:16:48 In figure five, they do a targeted intervention in their studying of anomalous cell behavior. And so, basically, they start off with these cells that have an initially unspecified state, and in the panels of A, by 32 time points, they converge to the target morphology. But in B, they show that if one of the cells, the one that gets hit with a white arrow here, has a perturbed signaling response, it fails to correctly infer its place in the ensemble. So B could be, like, a genetic mutation or a targeted modification of the signaling. And it's like that third red cell never forms. 1:17:36 And then in C, the same aberrant cell from B is rescued by an increased signaling sensitivity of other cells, leading another cell (green arrow) to switch position with the aberrant cell (pink arrow). 1:17:55 Pretty cool. Okay. Yeah. I wanted to come up with an analogy that would actually work, that would explain this. Like, if you're out geocaching one afternoon, and then all of a sudden something was able to turn all the lights off, and then the person that was geocaching beside you changed their behavior. 1:18:15 Dean: But I can't even come up with a valid analogy to try to explain this.
How about: we're playing American football, and so there's a plan, and everyone is trying to reduce their uncertainty about how the plan is playing out from their perspective. And so if all goes to plan, we're all going to deploy into the exact right positions, but then somebody doesn't move. If each nest mate on the football team has low sensitivity, then they're going to continue on their own expected trajectory as if nothing had happened. That's kind of like what we see in B. 1:18:51 Daniel: It's like a partial rollout, whereas this is changing the environment, mixing our metaphors, et cetera, so that a high-sensitivity teammate fills in for that critical position, so that the attractor state of the strategy can still be morphologically realized, even though there still is that one individual who's not moving; but they're not interfering with the strategy's attraction. But we'll come up with probably some other and better ways to talk about it. Okay. And then they just mention that this is something that can be modeled, and it's in SPM and also on Franz Kuchling's GitHub. So maybe we will look at the code. And then just to say that we're not going to go into it at all today. 1:19:46 But all of section four is super interesting. They review some of the mathematical assumptions and limitations of the model. There's a really fascinating discussion about variational principles in open systems. And so, just to read: we have shown that the variational free energy minimization in active inference is related to the variational principle of least action. It is worth pointing out where these two approaches diverge, two roads diverged in a wood. 1:20:22 And then it's all about that divergence and the contrasting. And so I think there's going to be lots to unpack there.
And then they have some closing remarks on the applicability in biological systems, and some of the predictive capacities of the simulations, in terms of, like, you could make a simulation of a healthy, functioning tissue and then have predictive capacity about asking counterfactuals: what if this happened? What might I expect? So that is a lot of info. 1:20:56 So thanks to everybody who has been watching, and again, hope that, especially with some of the technical parts, we were able to represent it with high integrity. So you won't have the last word, but what would you say in closing as we move from the dot zero into the dot one, dot two, and beyond? Well, first of all, you went through a heck of a lot of stuff in a very short period of time, so that's quite the gradient flow right there. So that's impressive. I don't think much in terms of, this is a classic example of where you have to unpack, as opposed to lots of the other papers that are maybe more philosophically based, where you pack a bunch of stuff and then you kind of go off on a bit of a journey. 1:21:46 Dean: And I think in the dot one and the dot two, we may still be doing a little bit more unpacking, just because of the density of this kind of information, but, I mean, you want to be able to do both. So I think being able to cover this, and hopefully we got it right. I did my best to try to understand it, and I think he did a good job of explaining it. So we'll see who shows up in the dot one and dot two, whether the dot one and dot two are more unpacking, or maybe starting to actually think of ways that we can employ this. You already mentioned that we can apply it to digital teams, but whether or not people have the confidence to be able to take this and apply it like they've done to morphogenesis, maybe that's part of that conversation. 1:22:34 Daniel: Cool. Yeah.
I'll be looking forward to going through with multiple concurrent regimes of attention, some of the formalisms that we either glossed over or skipped in this discussion and then keeping it open to think about morphogenesis. Like, where has morphogenesis been in numbers one through 38? Why haven't we been talking about morphogenesis? 1:23:04 And if not morphogenesis, then what else have we been focusing on? How does it relate to embodiment and to spatial and physical aspects of cognition? There are so many interesting angles and I'm sure we'll have a lot to discuss. So Dean, thanks a lot for all the help on the slides and for this discussion. See you in the coming weeks. 1:23:31 Dean: Alright. Thanks, Daniel. Take care. Peace. See you later.


 * 1) Session 039.1, March 2, 2022



First participatory group discussion on the 2020 paper by Kuchling, Friston, Georgiev, and Levin, “Morphogenesis as Bayesian inference: A variational approach to pattern formation and control in complex biological systems.”


 * 1) SESSION SPEAKERS

Daniel Ari Friedman, Bleu Knight, Stephen Sillett, Dean Tickles


 * 1) CONTENTS


 * 00:27  Intro to ActInf Lab.
 * 01:03  Introduction and questions.
 * 03:08  The least action principle.
 * 05:27  Time evolution of particle systems.
 * 11:42  Noise and fluctuation.
 * 15:42  Equations 15 and 16.
 * 23:49  Using these functions in multiple particles.
 * 28:13  The motion of cells.
 * 30:25  The least-action principle.
 * 33:29  The biological non-equilibrium.
 * 41:59  Where's least action?
 * 46:08  What can we model?
 * 48:22  Rapid rate of change.
 * 50:38  An action-policy model.
 * 53:25  First author Franz joins.
 * 53:56  Active inference in cells.
 * 1:04:16 Is more flow “better”?
 * 1:07:01 Is more information flow better?
 * 1:08:57 Proliferation in figure 4.
 * 1:13:46 Where does least action come from?
 * 1:16:34 The Bayesian model of cell formation.
 * 1:27:24 Stem cell derived islets for diabetes.
 * 1:29:29 The relationship with foraging.
 * 1:34:13 Is nestedness noise?
 * 1:37:29 Helmholtz and the block matrix operator.
 * 1:42:33 Quantum perspective on quantum mechanics.
 * 1:46:41 Multiple kinds of invisibility.
 * 1:49:32 See you next week!


 * 1) TRANSCRIPT

00:27 DANIEL FRIEDMAN: All right. Hello. Welcome to ActInf Lab Livestream number 39.1. It's March 2, 2022. 00:37 Welcome to the ActInf Lab. We're a participatory lab that is communicating, learning and practicing applied active inference. This is a recorded and archived livestream. Please provide feedback so we can improve our work. 00:50 All backgrounds and perspectives are welcome and we'll follow good video etiquette for live streams. To learn more about the ActInf Lab, go to ActiveInference.org. We're here in ActInf stream number 39 Dot 1, and we are learning and discussing this paper “Morphogenesis as Bayesian Inference: A Variational Approach to Pattern Formation and Control in Complex Biological Systems” by Kuchling, Friston, Georgiev and Levin from 2020. And we had a fun dot zero, and we've probably all individually spent time going through the paper. In 39 Dot One, we'll just kind of go over introductions and then can go to a blank page and just see where especially Stephen and Bleu want to raise any questions, and also anyone can ask in the live chat. Okay, so we'll just do introductions. 01:48 So I'm Daniel, I'm a researcher in California and I'll pass it to Bleu. 01:56 BLEU KNIGHT: I'm Bleu. I'm a researcher in New Mexico, and I will pass it to Stephen. 02:03 STEPHEN SILLETT: Hello, I'm Stephen Sillett. I'm based in Toronto. I'm sort of an action researcher and practitioner in community development and participatory theater. And I will pass it to Dean. 02:18 DEAN TICKLES: Good morning, I'm Dean. I'm here in Calgary and not much to say other than I'm kind of looking forward to hearing what Bleu and Stephen have to add to this rather big math question and how that fits with morphology. So back to you, Dan. 02:42 Daniel: Okay, well, if anyone wants to start with a specific question or figure, or we can review just sort of the main points of the paper, like let's look at the roadmap. So what section would people like to enter in or have a question or idea related to?
03:06 Or we can just go yeah. Dean: So you wanted to talk about this and I wanted to talk about this. And it was this whole section on the least action principle, which I think we both find quite interesting because yeah, why? And it led me to the question of is there something efficient about search over somebody sort of passing along and sharing? So I'm kind of curious what people think of that in terms of the math and the morphology. 03:49 Daniel: Alright. Least action, math, morphology. Stephen? Yeah, I don't know if we could unpack a little bit more what Dean was saying there. I'm interested in some of the implications of that work around looking at the math and morphology as it scales from these, what you might call smaller scales than we can normally perceive. 04:18 Stephen: Often what's going on with morphogenesis is not something we can be consciously aware of a lot of the time, however, who we are is built upon morphogenesis. So I'm very curious around how some of these first principles can be thought of as rolling out. So when we talk about least action as being one way of approaching it, where people often think about energy and efficiency of energy processes as being the route that everything will take, the more informational Bayesian inference approaches and learning that is implied by this. Without distracting from the paper, I'm curious about people's thoughts about what sort of implications that has for thinking about these bottom up ways of knowing. 05:27 Daniel: Okay. Bleu: So I have some questions going back even to before the action in the paper talking about generalized flow. And it's really interesting just in light of the paper that's coming up in number 40. But here we see that this is like a spatial, like we're talking about spatial motion here, or like the foundation for the way that they model morphogenesis is positions of particles through space, right. And time.
06:10 Bleu: And I just wonder, we have a time evolution here and we're talking about motion in space, but could we do the reverse and talk about like a space evolution in terms of time, in terms of can we invert space and time here? So that's just something that I was curious about, if it's possible or mathematically illegal or why am I even thinking like that? I don't know. And another question that I had you guys were really talking about in the generalized flow, we talked about the equations of motion, coordinates of motion and the derivatives. And something that I saw in the paper specifically at equation 15 that I've not seen before is p with a dot, and it was not defined in the paper, p with a dot. 07:14 But I'm assuming it's the first derivative of p and I just thought that that was interesting. 07:24 So it's the probability density, but is it the derivative of the probability density with respect to x with the tilde or is it just p dot because x has a tilde? 07:35 Anyway, that's my question there. 07:43 Daniel: Alright. Wow. Well, we have a lot kind of on the table. 07:53 I think the thing Bleu appended with generalized flow is a good entry point because this is sort of what the system's state evolution is. And in livestream number 26 with Bayesian mechanics, we also talked about the generalized coordinates of motion with relationship with control. So it's kind of like a thermometer cybernetic model. If you only have the position of the temperature, there are certain strategies that can be applied given that modeling just simply the temperature. And then if you are modeling progressively higher and higher levels of how the system changes, it's kind of like a Taylor series. 08:36 It equates to basically a better depth of control. And then part of the hard part is that because over the time course that action is being planned for, even hypothetically changes in like the unknown consequences of one's action complicate the estimation of different policies.
So this is just not like a total perfect way to roll out the system and understand how it's going to happen. But it gives like approximation terms to higher and higher levels of the system changing, and all of the levels of analysis are being linked to each other by being derivatives of each other. So then some stationarity is capped out at some point, so the derivative becomes zero, and then that's like a heuristically approximatable depth of that model. 09:32 So that's just like one note of what is being estimated on. So it's like temperature and its higher order components in some either analytical kind of infinite generalized way or just more practically with some smaller realization. And then the dot is like a derivative, pretty sure, to Bleu, like it's like a prime, like prime prime for double derivative, two dots for two derivatives. And then morphogenesis is like position of particles here. It's like the cell body or the cell nuclei or the center of gravity of something and then those are spatially moving. 10:16 So that's the lowest level of this chain of the integrator chain with the generalized coordinates of motion. So that's how it's from a Bayesian mechanics getting to morphogenesis which is how particles are moving in space. But instead of just modeling like their x and their y, it's like x and y and the higher level derivatives. So that's the developmental motion of particles. Just to summarize that part before the space of time. 10:51 Okay, any thoughts on that? Because the generalized flow is definitely where whatever least action is, is going to be based in. 11:10 Okay, so then let's just look at the formulas here. Is the derivatives okay? Anything on this? Yeah, so if you're going to look at the formalism, did you pull it up here? Yeah, generalized. 11:28 Bleu: There you go. So go ahead and then I'll introduce. Yeah, here's two two, one x tilde is the total blur of all the dot one top. So yeah, go for it, Bleu.
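The integrator chain Daniel describes, position plus its higher derivatives with each level linked by differentiation and the chain capped where the derivative is treated as zero, can be sketched numerically. This is an illustrative toy, not code from the paper; the function name and values are made up:

```python
import numpy as np

def rollout_generalized(x_tilde, dt, n_steps):
    """Roll a state forward from its generalized coordinates of motion.

    x_tilde = [x, x', x'', ...]: position plus higher-order derivatives.
    Each level is driven by the level above it; the chain is capped by
    holding the highest derivative constant (its derivative is zero).
    """
    x_tilde = np.asarray(x_tilde, dtype=float).copy()
    trajectory = [x_tilde[0]]
    for _ in range(n_steps):
        for k in range(len(x_tilde) - 1):   # Euler step down the chain
            x_tilde[k] += dt * x_tilde[k + 1]
        trajectory.append(x_tilde[0])
    return np.array(trajectory)

# Two levels vs three: the deeper chain captures a deceleration that
# the shallow one cannot (a Taylor-series-like depth of model).
shallow = rollout_generalized([0.0, 1.0], 0.01, 100)      # x, x'
deep = rollout_generalized([0.0, 1.0, -2.0], 0.01, 100)   # x, x', x''
```

With only position and velocity the rollout is a straight line; adding the acceleration term bends it, which is the "better depth of control" being described.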
So just here my question more is the noise and fluctuation term, and is that just like Brownian motion, or I don't know, where is the omega random fluctuation in terms of Brownian motion, or is there some other random fluctuation positionally? 12:13 Do things wiggle? 12:21 Daniel: Definitely someone who knows more about stochastic processes, and I know several people listening who do, could help with this, like in a math stream, on a little bit of the stochastic aspect here. But I think at a first pass there's situations where what is modeled as noise versus what is modeled as signal have differential ratios. There's times where the f, even if it's a regression or another function, is very tight and then the noise is modeled as low, versus one where just the location of x is dominated by the noise term. But I'm sure it's way more than that too. 13:15 Stephen or Dean. 13:22 Stephen: I'm just thinking broadly in terms of the modeling approaches and active inference. Often it brings in temperature as a way to bring in the kind of noise fluctuations into models, whereas here it's directly seen as a noise fluctuation term. So I think that's maybe again, because we're going down to even more first principles here, it's getting down to stochastic dynamics, which isn't necessarily even thought of as temperature, but as something more foundational. 14:06 Daniel: Okay. Yeah, Bleu. So temperature is like, yeah, right. Like increasing temperature, random motion would increase. So that does make sense to me. 14:22 Bleu: So another point out of this section, or maybe if anybody else has a point here, going down into the next set of equations, unless anybody else? Just to summarize that, it's kind of like the temperature parameter is understood sometimes to play this role of mediating the difference between the particle's motion being driven by some sort of directed motion, like change in position through time, not at stationarity of position.
Which is like kind of the observable in that chain, that directed motion versus thermal, Brownian, and then that might be kind of where these statistical assumptions come into play. 15:13 Daniel: So it's like variance estimation, but temperature is just one kind of variance with the motion of particles and it has a physical interpretation. It's not an insignificant piece. I don't know, we could explore more where temperature specifically comes into play. But Stephen, I think you're getting at the right thing that this is not temperature bound, but it does relate to that sort of situation. Okay. 15:42 Bleu: So I don't know if you want to go down to equations 15 and 16, probably where my next question or comment is. Do you have them in there or no. Oh, copied in right now. Cool. Go for it. 15:58 So it says equation 15, I think that's x tilde dot, yes, p x tilde is a probability current, which I thought that that was cool because I've never really heard of that, like, term, a probability current, like how probability changes over time or how the probability density changes over time. What is a probability current? It's an interesting, I don't know, like, term that I hadn't heard before. And then in the second equation, 16, it says that this is a partial differential equation that describes the time evolution of the probability density under dissipative and conservative forces. 16:51 So where the term on the left is the dissipative forces and the term on the right is the conservative forces. And I thought that that was interesting, especially relating back to the dot zero video where you guys were talking about a tensor and Dean was talking about attention. And I think that there is like an interesting tension here between, like, conserving and dissipating. And it just makes me wonder, is death when the dissipative term is winning?
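The directed-versus-thermal distinction can be made concrete with a toy Langevin simulation: a drift term f(x) (the directed, signal part) plus a Brownian fluctuation omega (the noise part, which a temperature-like parameter would scale). This is a generic stochastic-dynamics sketch, not code from the paper or from Kuchling's repository:

```python
import numpy as np

def langevin_step(x, f, dt, noise_scale, rng):
    """One Euler-Maruyama step of dx = f(x) dt + omega.

    omega is modeled as a Brownian increment: Gaussian, with standard
    deviation sqrt(dt), so its variance grows linearly in time.
    """
    return x + f(x) * dt + noise_scale * np.sqrt(dt) * rng.normal()

rng = np.random.default_rng(0)
drift = lambda x: -x   # directed motion toward an attractor at x = 0

# Low noise: the trajectory hugs the deterministic flow.
# High noise: the position is dominated by the fluctuation term.
for sigma in (0.05, 2.0):
    x = 1.0
    for _ in range(1000):
        x = langevin_step(x, drift, 0.001, sigma, rng)
    print(f"noise_scale={sigma}: x(t=1) = {x:.3f}")
```

The signal-to-noise contrast Daniel describes is just the ratio of the two terms: with `sigma` small the endpoint sits near the deterministic decay toward the attractor, while with `sigma` large the wiggle dominates.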
Like, is that like, what causes, like, a system, or I mean, I know that the term winning or term being greater doesn't cause a system to die, or when the system is decomposing, if that can be represented here in terms of these dissipative and conservative forces in the process of an organism dying or ceasing to exist. 17:55 Okay, sorry, I have way more questions. Nice. Let's look at 15 and 16 a little bit. So, yes, probability current, it's kind of in this dynamical systems framing. It's being bridged with formalisms that are more like a current, like a flow on a dynamical landscape. There's that sort of angle, but then maybe there's also more of a fluid flow component. 18:27 Daniel: Bleu. 18:35 Bleu: I was needed. So it's funny, when I look up a probability current, like also foreshadowing the dot zero, it says in quantum mechanics, the probability current, sometimes called probability flux, is a mathematical quantity describing the flow of probability. So it's interesting that it's related to quantum mechanics in terms of these, like, instead of these classical physics things that we've kind of been looking at or dealing with so far. 19:10 Daniel: Yeah, so this 14 is okay, this might be only a partial element or example, hopefully one that is useful. So this is x tilde with a dot. So that's x tilde, all the coordinates of motion, and then x with a dot means derivative of. So this left parameter with a tilde and a dot is how the generalized coordinates are changing through time. So that's kind of what is being modeled, the data that are being, whether they're an observable or an unobserved estimated state in a Bayesian graph. 19:52 So there's an O or an S plus all of its derivatives, and then that is being partitioned into one big complex function, f, modeling the flow. And then this noise, omega tilde, both are over tilde. So these both carry tildes. Like each of these higher derivatives also has an associated noise term.
And then it says yeah, okay, Stephen. 20:29 Stephen: In some ways there could be an analogy. I'm not saying it's the same, but it seems to be a mirroring of the enthalpic term and entropic term that you get in Gibbs free energy, where one is where the energy is bound within the molecules at play. And then the other term, this entropic term, where it's kind of dissipating. So it's another because these things are translatable, just like mass and energy are translatable and they're known to ultimately be conserved. 21:11 The overall mass energy of a system is always conserved. But it could be translated between, normally we don't think about that because it only happens in nuclear reactions. But if you're thinking information, and I see they talk about this idea of probability mass. So there's almost an ability to think about that conservation of energy in a form which has mass, which has form, as opposed to when things are dissipated as light, where it doesn't have mass anymore. 21:56 Daniel: Okay. Bleu. 22:00 Bleu: So I think where they're talking about the conservation of probability mass, I think that they mean like the conservation of the total probability there, so the probability mass function, I think it's that, which relates to, like, it's the conservation of mass, but it's also the conservation of information. Using that conservation of probability, like the probability of all of the possible things happening has to add up to, that probability is never going to change. And so I think that is there, that it has to add up to one, like a fraction or 100%. There's the realist and instrumentalist take: the realist take is something has to happen. 22:53 Daniel: And then the instrumentalist take is if we model it this way, we're going to be able to use statistics. But if it was like, well, the chance of getting heads is 60% and the chance of getting tails is 60% in this next flip, it would open up the space in a different way. It's kind of outside the bounds of the model.
You can have a model where both happen, but still, it's a different thing. Can you model things that can't happen in the real world? 23:24 Or if it's not a good measuring stick for statistics, which is kind of like that one, something must happen. We have to model something happening. We have to model a data set row being added. Otherwise, what is happening? If it were like one every 1.2 rows? 23:46 Stephen: In relation to the point you just made, I'm just curious whether these functions can be used in multiple particles. Say, for instance, you have the idea the probability of a coin being flipped is 50 50, for instance, or a rigged coin could be 60 40 in one way. However, if there's noise on how rigged something is, so for instance, you've got a coin and it's flipping between being rigged 60 40, 50 40, then there's a random fluctuation in that behavior which may be contextually contingent. That in itself wouldn't be in the one equation. But if you had multiple particles, they could start to behave not just as one predictable thing. 24:43 They themselves could have the change in probability approaches. Thanks, Bleu. So I think specifically as related to here, and I think that this is derived, like, I don't really know. I don't really do these derivations. It's definitely not my area of expertise. 25:03 Bleu: But I think in terms of the conservation of mass, we're not creating or destroying matter. The conservation of the probability mass, it just says that all the particles, if you're looking at it, doesn't matter if it's one particle or a thousand particles, the particles are somewhere, right? So the particles are not nowhere. Like the particles are somewhere. 25:30 That's like the idea that all the probability has to add up to one. Like there is 100% chance that you will find the particle of interest somewhere. I mean, you're right. 25:47 Daniel: Okay. 25:52 Good, Bleu. Good section to pull out. Some important bridge equations.
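The point that "the particles are somewhere" is exactly the normalization constraint: however the density flows, its total mass stays at one. A small numerical check, with illustrative parameters (not from the paper): scatter particles, let them drift and diffuse, and the normalized histogram still sums to one.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 particles drifting toward 0 with Brownian noise.
x = rng.normal(2.0, 0.5, size=10_000)
dt = 0.01
for _ in range(200):
    x += -x * dt + 0.3 * np.sqrt(dt) * rng.normal(size=x.size)

# A normalized histogram is a probability mass function over the
# bins, so it must sum to one: every particle is *somewhere*,
# no matter how the flow rearranged them.
counts, edges = np.histogram(x, bins=50)
pmf = counts / counts.sum()
print(round(pmf.sum(), 12))   # 1.0
```

The same constraint is what makes the flux (probability current) form of the dynamics special: probability is transported and spread, never created or destroyed.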
So we have the signal to noise. Now when the signal that we're modeling with the instrumentalist take, we're modeling how much the cow weighs, defined as the one that comes to walk over to me when I call for it or something like that, that could be dominated by a different function or could be driven by different noise on different timescales. 26:22 So the signal, sort of the directed action component or modelable component of the motion through time, is the purple. And here is like some flow with the upside down triangle that we talked a little bit about in dot zero, but it would of course be good to hear from someone who knows more about this flow operator object. Signal is like the movement on a probability distribution, on those states changing, and then the noise is the dissipative component. 27:07 Stephen: It appears as well that by breaking it into dissipative and conservative components, it's serving a similar role to separating into complexity and accuracy. It's giving a way to break apart for dissipation, because you're not actually talking about a final target, but more like things just doing what they do in some ways and having a very kind of physics fundamental process. I think that's proving to be useful because it's also, again, something which can be translated into different types of math, different types of physics. 28:13 Daniel: Okay, so let's connect it to the morphogenesis. So what do they do in the paper, the modeling, the motion of cells. So why does it matter, this whole discussion that we're having about how to get from the motion of generalized coordinates? If you're the modeler, you get to define that and get to set the parameter between this. If you're the experimentalist, then you're inferring this, kind of like a regression, like if this was a regression term and a noise term on the regression. 28:43 And then there's some function that helps you fit a certain solution to the one linear regression that fits it best.
So it's kind of in that same genre here. So if we take the realist take on that, it's like how the actual system is defined as or inferred to be from experimental data, and then flowing the probability density on that, of our inference on that, whether we're getting that from the data or whether we're defining that system formally. So that modeling, we could use help from the authors or someone else. Like it engages some new future. 29:34 It's like a bridge to some other set of equations. So that's where it would be good to learn more, like what is actually enabled here. But it enables a new vista that separates, that enables this separation, so we can talk about, instead of this potentially extremely open ended and nonlinear change on the system, it can be bounded within a statistical inference, zero to one inference framework that is more amenable to certain kinds of analysis. So that would help model cell location and all of that. But I think there's still a lot more in there to unpack. Okay, Dean, yes. 30:23 Dean: So I want to just sort of step back for a second here. Because I want to go, we started out talking about what are the implications for least action, so the act part, and then we sort of stepped into the idea of what came before the action. 30:42 Well, it was these flow states, and now I kind of want to go back to the action part. Under that section 2.2.2, the second paragraph says that the least action principle can predict the emergence of form. So what Daniel was mentioning there, the sort of the morphogenesis in terms of the flow or paths of least action in biological systems. For example, in colonies, ants find the paths of least action to harvest food and bring it to the colony. The example considers their paths as flow channels or trajectories finding the least average action for each instance of foraging given available resources.
31:18 So this gets me thinking about, well, what do we mean by least action as opposed to, say, most action, and the general way that things tend to follow. Like if you thought of an analogy of maybe 19th century war, where armies would line up facing one another, parallel to one another. This is speaking more like a flow state, more like you find in nature, where the molecules we talked about here are sort of flowing and following down the side of a bowl to the least potential energy. And so now my question is why does nature tend to flow with the advantage of least action? Whereas people with instruments don't necessarily flow or follow one another unless they're mirroring some other form or some other shape. So it's interesting to me because I think by nature we are quite adaptable, but when we start making nonliving things as instruments, to necessarily take the nature away, the life away, we're maybe not quite as adaptive. 32:52 So I'm trying to step back and now pull us back into this idea of what advantage is there in this physics view of least action. There must be something advantageous there in nature. So what is it? I'm not the geneticist, so I don't know what this signal milieu is that makes that so. I want to tap you guys' and gals' expertise on this because I don't have it. 33:26 Daniel: All right. Thanks, Dean. Stephen? So following on what Dean's saying, I think in the traditional stance we tend to take what we call equilibrium dynamics approaches. So for instance, the armies are lined up like Dean said, then there's a battle, and then there's the final state. 33:47 Stephen: So we talk about the initial condition, which is equilibrium, everyone lined up, and the final state, and then a bit in the middle we sort of talk into our shirt or our shoulder or whatever, cough a few times, and somehow it happened. And in some ways with the biological approach it's like all that bit in the middle is where the flow is happening.
So essentially speaking, biology doesn't have those initial conditions in the sort of traditional sense of pure equilibrium. It's always in this nonequilibrium, or at least a large component is in nonequilibrium states. 34:25 This is an interesting thing. It's flowing down towards the least energy; it never quite reaches it. So, for instance, in the chemical reactions, you take your pure reactants, you mix them together, you stir it up, you do the reaction. And then once you've got your product, which could be a precipitate, for instance, it could be a ton of the precipitate. It then gets filtered, it gets washed, it gets dried, and it gets measured for how pure it is. 35:01 And now it's in this kind of stable product form. 35:09 This question about what does it mean to flow is really important. 35:19 Daniel: Thanks, Stephen. Alright, Dean. 35:24 Dean: What this flowing and following thing kind of says is that the desire line or the termite mound, the result, is of signaling. 35:41 I don't know if that's collective signaling, I don't know if it's the stigmergy part of it. Like I don't know why nature sees the advantage in the following, but there has to be something there. Why is that advantageous? Why is that more adaptable versus the alternative? Okay. 36:05 Daniel: Thanks, Dean. Stephen: I suppose one way of looking at that is it's the only cue. If we're going to take a realist route, recognizing that maybe it's the only plausible route on the table, they don't get to force teeth in a kiln. In a way, that's partly why all chemicals that are available in nature have to be within certain plausible temperature scales and pressure scales that can yield those products. That kind of doesn't answer the question exactly, because I know that's kind of a bit of a, but it's part of what's available, I think, in terms of biological plausibility. And I suppose in some ways humans, we've adapted a cognition to try and move outside that.
37:13 Dean: One last thing on this, and that is that following implies that there's also something leading; there always has to be something taking that initiative for others to fall in line behind. So shifting that and making leadership something that's more lateral, so sort of going forward together as opposed to who do we decide to follow behind. That's an interesting thing in terms of sort of setting out how this least action principle turns into things that we actually see phenomenologically. So again, we're going to get into what the final form takes. 38:01 I just think that when we were sort of treating this in the .0, this was kind of a pivotal moment, because it went from what are all the things that have to be in place for formation to occur to now let's look at what form now results. As it said in here, in that one paragraph, well, the previous paragraph is, physics offers a useful formalism to understand at a quantitative level the ability of biological systems to work towards. So now we're basically moving to something that is adaptable. So like I said, I don't want to overtalk this one point, but I think it's an interesting part, especially when we get into the discussion at the end around what some of the assumptions were and yada, yada. 39:04 Daniel: Thanks. Stephen: I suppose this work towards idea, I mean, that can be thought of as a modeling and a realist problem. How does that work towards, and in some ways accuracy and complexity can be used, once if we're at the level of assuming that we are actively inferring to stay alive. But prior to that, something like conservation and distinctions, or what is conserved, are a bit more foundational. 39:44 Okay?
Because if by conserving, or working within this conserving-dissipating noise dynamics at the levels which are below what we are consciously aware of, however, which all our cognition is effectively built upon somewhere, we're able to potentially work towards that without necessarily having set a goal. And I think that might be an interesting, see how that plays out in the real, real world of active inference lab and papers and stuff. Daniel: Yeah, cool. Yeah, a lot you linked there, Stephen, just now, like accuracy, complexity, and then okay, we'll leave it to a future day and work, and who knows how many of these are just concordances versus biological. 40:49 But accuracy within a model is like pragmatic aims within a model with the imperative to prefer to fit as much data as possible. And then it's like conservative, using the reward structure that worked at one time step, using that generative model moving forward versus changing it. Any sort of dimension around the parameters as evidence at the lowest level of the chain. The position of the particles or the actual parameter that's being modeled, like the actual image that's being percepted on, versus the higher generative models, which don't realize, like, at the kind of tip of a javelin, so to speak. 41:36 So then the signal and noise is also like conservation and distinctions with respect to the model, and then that is coming all the way back to morphology with the position in stasis versus vibration of the particle from thermal. So it's like a lot that gets linked here. But where's least action in all of this? 42:08 Yeah. Bleu. 42:13 Bleu: So just to read a quote from the paper, it says since self-organizing open systems are not conservative, their structured flow is quintessentially dissipative. And that's down in the third paragraph in the least action principle section. So that goes back to what I was saying earlier. Does that equate to death?
Like when the system totally dissipates — it's ultimately dissipative. 42:51 So we stay in this nest for a little while, and then we dissipate, right? Can these processes be modeled using these equations? I find that super interesting. 43:07 Daniel: Thanks. Stephen? 43:13 Stephen: This idea of least action — I think what Bleu was saying there as well is, it's least action. Maybe we should say dynamical least action, as opposed to least action as we normally think of it — as I was saying, the action integral. 43:30 What's the route to get from A to B, or from the initial to the final state? Normally we are taking starting conditions and ending conditions, or: where I am now, what's my goal, what's the least-action path to get there? Those two things are defined — in some ways they're definable. And all the bit in between is trying to go to least action. 44:05 Daniel: In some way, in the space of chaos, I suppose, if we're going to take the complexity approach in human systems: it's in a chaotic state where you're sometimes trying to make sense of how to act. But in some ways there's only chaos moving into complexity which is available to biological systems. Theoretically, maybe not — the idea of complicated and kind of simple, reproducible steady equilibrium state systems, like mechanical systems, aren't necessarily available. 44:54 Stephen: Although you could argue that maybe certain properties of our morphology, like our bone structure, give something which is approximate to that. Once we know how to move our arm, we have a relatively simple thing that we can now bring into our regime of attention. But yeah, I wonder what people think — whether dynamical least action is somehow what's going on here. 45:24 Daniel: Yeah. Thanks, Stephen. Here's one take on that. It's a good idea, or question. I hope this is accurate too, because I think we're all learning here.
45:34 But this is definitely a really challenging area in some ways to approach, especially given our realistically limited familiarity. So again, it'd be awesome for people who have more familiarity with these equations to join us, either in preparation or even just joining us on the stream, so that we can actually learn and connect to these other areas — somebody who sees this formalism every day, just like we might see some other one. Okay, but Stephen brought up: is this a dynamical least action? Okay, so first, a hopefully non-contentious point: we're analyzing what we model. 46:14 DANIEL FRIEDMAN: We can only ask the computer to calculate the numbers that we ask it to model, and we can't expect something that goes truly beyond that. Not to say that the models can't have surprising outputs or interactions, et cetera, but we can't have it go beyond what we specify. And the model is over x tilde, which is the generalized coordinates of motion: the position and all of the higher derivatives, the whole integrator chain — position and all higher-order moments. They're called moments, like the first, second, third moments, the derivatives of the statistical distribution. 46:55 Daniel: And these are basically terms of approximation of motion that allow a higher-order, like a Taylor series approximation or a Volterra series approximation, to allow snapshot modeling alongside real-time flow: action, cognition, perception. It's the flow over, as it gets explored, the blanket states. This is like the flow over sensory, active, and internal states. So this is enabling a flow description of Markov blanket states and their perception, cognition, action — their flow over states, their change through time, including higher-order moments of time. 47:47 So it is a dynamical systems model, but still there has to be a snapshot, like a time series model of a stock price.
It has a value at whatever time resolution, continuous or discrete, it's being time-series modeled at. So it's kind of the relationship between snapshot modeling and capturing higher-order trends, and this is a certain way to think about that, in the direction that they're going to take it: towards the Markov blanket partitioning and bridging it to everything that that affords. 48:22 Okay, Dean, Stephen. 48:27 Dean: This is really important, because I want to tease something that maybe we can look at right here, but only in the .2. So between us doing the livestream zero and today, there was a report — I think it was a UN report — about the rate at which climate change is happening: so rapidly that people who are in that highly changing flow state aren't necessarily going to be able to adapt to the rate of change. And so I wonder if some of what this paper speaks to might help us understand why the authors of that report are indicating to us that the environment in which we are existing is going to change so quickly that, as cells within that larger structure, our form is not going to be able to adapt quickly enough to the way that the environmental system is changing. 49:38 So I want to save that for the .2, but I think this least action principle part would be a good entry point into some of these bigger questions going forward. So I just want to park that. But I think it matters. Yeah, just to give one note on that. The rate of change, and how it changes — those are kind of natural-language descriptions of derivatives, of how things are changing. 50:05 Right. Might be a simple claim to some, but it's really important to keep in mind. So how things are changing — and then that's always going to be unknown to some degree. So how things change, and so on. This is the generalized coordinates of motion, and so it is about modeling rates of change, and the predictability of systems that have rates of change.
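As a rough illustration of the generalized-coordinates idea discussed above — representing a trajectory by its position together with its higher temporal derivatives, and rolling the snapshot forward with a truncated Taylor series — here is a minimal sketch. The function name `predict_ahead` and the state values are hypothetical, for illustration only, not from the paper:

```python
import math

def predict_ahead(x_tilde, dt):
    """Predict the state a short time dt ahead from generalized coordinates
    x_tilde = [x, x', x'', ...] via a truncated Taylor series:
    x(t + dt) ~= sum_k x^(k)(t) * dt^k / k!"""
    return sum(xk * dt**k / math.factorial(k) for k, xk in enumerate(x_tilde))

# Hypothetical snapshot: position 1.0, velocity 2.0, acceleration -0.5.
x_tilde = [1.0, 2.0, -0.5]
print(predict_ahead(x_tilde, 0.1))  # 1.0 + 2.0*0.1 - 0.5*0.01/2 = 1.1975
```

The point of the sketch is only that a single "snapshot" of position plus higher derivatives already encodes a local flow — the relationship between snapshot modeling and higher-order trends mentioned above.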
50:32 Daniel: Okay, yes. Stephen. 50:38 Stephen: With this, we mentioned there your action, perception, cognition, and in some ways the cognitive niche — cognition on the generative model, and the generative model is an action policy model. Okay. So we've got this recognition coming in, and it is driven by this action. And of course, one challenge you've got in the climate change scenario is: knowing something is one thing; being able to act is another. The danger is we hit a point where, no matter how much we know, it's beyond our ability to act, our capabilities and capacities as a species. 51:29 Yeah. I think with the word "cognitive niche" it's maybe useful to be careful, just in that the inference process is more general, because it can be a process which is, say, happening beyond thought. So thought is another action, almost giving a higher-order understanding on the generative model for an active inference agent. Of course, a lot of what's going on underneath is beyond our perception, literally, way beyond. So that can be interesting or useful to strip out. 52:13 Daniel: Nice. Yeah, Bleu. 52:26 Bleu: Thanks. So just to go back to dynamical least action, I do think that's what they're referring to. At the very end of the section, the authors say: from our perspective, the key observation here is that any dissipative random dynamical system can be formulated as gradient flow on the log likelihood of its states. This is reflected in our solution to the Fokker-Planck equation in (17), which means the action is the time or path integral of the marginal likelihood, or self-information, for any system or model M. So this is really the key thing. This means the least action integral over the Lagrangian turns into an integration over the information of states, which is known as entropy in information theory.
In short, the principle of least action manifests as a principle of least entropy for systems that possess a random dynamical attractor and thereby attain nonequilibrium steady state. 53:21 Daniel: Thanks a lot, Bleu. Great point. Good day, Franz! Hi! 53:28 FRANZ KUCHLING: Sorry I'm late, by only half an hour or so. I'm in the middle of finding a preschool, so daycare is a whole mess right now. I do apologize; I don't want to take this time. It's a little bit of a mess. 53:40 Next week should be better, I think. We found a school, so hopefully next year. 53:48 Stephen: Cool. Unexpected but preferred — how will we model that? But thanks a lot for joining. This is really cool. 53:58 Daniel: So we were just describing some of the formalisms, but where would you like to begin? It'd be awesome to hear any, just, interoception and context on the paper — and then, we kind of jumped in at least action, and would love to hear your take on that. Sure. Interoception, I guess. 54:24 Franz: I work in Michael Levin's lab at Tufts, doing a PhD in biology, and the whole active inference part of my work came actually before my entire PhD, before I joined Michael. So I wanted to learn what kind of information physics can do in a biology context. I didn't want to study — I'm still doing it as a technique, but I didn't want to have to focus on protein interactions and, you know, genetic information, where you measure all these things. I was after something a bit more broad than that. 54:56 And so when I reached out to him, once I got accepted, I had some months to kill — can you send me somewhere to do something cool? And then he sent me to Karl Friston, and that's the first time I heard the term active inference. So I spent about four months in Karl's lab, and I think half the work in the paper was actually done in those couple of months, when I was in Karl's lab.
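Stepping back to the least-action passage Bleu quoted a moment ago — any dissipative random dynamical system can be formulated as gradient flow on the log likelihood of its states — that observation can be caricatured in a few lines: a noisy state that ascends the log probability of a Gaussian steady-state density (i.e. descends surprise) drifts into the high-likelihood region. This is only a toy sketch; the distribution, step size, and noise amplitude are made-up illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x, mu=0.0, sigma=1.0):
    # Gradient of log N(mu, sigma^2); ascending it = descending surprise.
    return -(x - mu) / sigma**2

x, dt = 5.0, 0.01
for _ in range(2000):
    # Gradient flow on log p, plus a small random fluctuation (dissipative dynamics).
    x += dt * grad_log_p(x) + 0.1 * np.sqrt(2.0 * dt) * rng.standard_normal()

# x has relaxed from 5.0 into the high-likelihood region around mu = 0.
print(round(x, 2))
```

The time integral of the surprise along such a trajectory is the "action" in the quoted sense, which is why, at steady state, least action reads as least entropy.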
55:21 So the overall goal of the paper, and of that part of my work, is to see: if we look at morphogenesis and similar biological processes, can we look at something where you have a very baseline, stem-cell-like behavior? We have cells that can do, like, a morphogenesis, basically. I won't get into the whole biology aspect of this paper, but essentially the idea is that you model cells that are uninformed. They have some kind of genetic code, something that encodes their structure — basically you have the generative process already ingrained in the generative model, in the cells — but they're completely naive, stem-cell-like: they haven't actually yet achieved their final form. 56:01 And, in the simulation as in biology, right, they're starting off with very few asymmetries. There's a famous — who said that? — some physicist once said most of physics is basically just asymmetries and symmetries everywhere. And that's exactly how most biologists think of morphogenesis: there's some asymmetry in the beginning, in the egg, or there's some external information, some correlation of something, some localization of certain agents. 56:30 And then from that on, basically everything else just follows through. But it's hard to believe that's the whole side of the story, because there's just so much complexity, and so many cells doing it at the same time. So if there isn't any capability for them to adapt to signals in the environment, to learn from each other, it's hard to believe that in the time frame they're given they can really achieve the complex morphological outcomes that we know do exist. So coming to this paper: this model — the father or mother of this paper — was already done before I even touched this topic; the encoding was in place. So the model structure itself was already done, and I think Karl did most, if not all, of it.
57:11 And Mike, also on the paper, and Georgi as well — they basically gave him some input on what the background biology would look like. And so they had already figured: basically, let's do a simplistic model. As far as I know from how I ran my simulations myself, there are many practical concerns: don't make it too complex; if you have too many cells at once, then you run into problems in the beginning. 57:39 Something I actually added onto the model, even the baseline model, was that damping parameter, which I talk about later on in the results. It's not a key part, but I find it very interesting, because it's one of the requirements for active inference, for free energy minimization: a smooth landscape. And when you have all these cells initially clustered together, trying to infer their place, and they all have some randomly initialized prior beliefs, they don't know where to go from there. And if you already have high precision in the sensory apparatus initially, they have a really hard time. 58:17 They still end up mostly getting there, later, but they jump all over the place, and you can imagine that'd be very bad in a biological scenario, if every cell immediately jumps to the first cue it gets. We actually think now in the Michael Levin lab that at least cancer initiation happens a little bit like that: precisions are set too high. There's a paper coming out soon by a colleague of mine that makes a bit more of that argument. But so, basically, we had the target morphology already encoded, and what you see in the baseline control experiments is what we've done. 58:51 What I was interested in, basically — again, coming back to my question and my approach to this from the biology side of view — is: how can we use these aspects of information flow, specifically active inference, to manipulate and better control morphogenesis outcomes, and control biology?
So the two main results in that paper are looking at: if you have, like a normal biology experiment, where you basically interrupt one part of the machinery and then see how it reacts, can you actually control the information processing itself? So one was basically: if you put in an asymmetry in the response of the cells to the signaling ligands, to the signaling concentrations in the environment, how can you completely remodel your entire morphogenesis outcome, even though you actually left the entire code\[?\] itself unchanged? That's always one thing I tried to express in the paper: the target morphology in all the simulations is exactly the same. 59:52 None of them changed; it basically serves as an encoding of where everything is supposed to be, but how the process unfolds has been changed. So the first figure is where you have those two heads and two tails — whether the two-tails one was in the paper or I just did it myself and published it later, I think it's in there as well — and that's basically something that we also see in the lab. I think that was mentioned as well: that's something the Levin lab has done with planaria, where they basically induce those two-headed types, right? 1:00:19 That's something that's really weird biologically, and again, in the type of manipulations done in this lab, they didn't manipulate the genetic code. Again, the genetics were exactly the same, but they perturbed the actual bioelectric network itself — the state space, essentially. So that's kind of where this inspiration came from. And then the second part, the malformation, where I also talk about cancer initiation — of course, it's called cancer even though we didn't put proliferation in this simulation, so it's the initiation stage. But the idea is that — again, this is something that we talk about also in the lab, and we've exchanged many emails about afterwards — it kind of also goes back to this whole idea of how different psychiatric disorders work.
1:01:09 In the brain, you have some of these inference processes being disrupted, and they lead to large-scale outcomes, but they kind of start somewhere. The idea here was: if we disrupt only the information flow from the other cells to one cell, then what would that cell do? We basically just completely reduce its sensitivity to the environment. And what happened was that one cell basically didn't move a whole lot; it got the wrong kind of cell type, and it was completely out of place, right? And that's always bad biology — a cell doing something it's not supposed to, in a place where it's not supposed to be. 1:01:52 And what I thought was interesting afterwards: how can we rescue that? And again, not trying to rescue it in the traditional cancer therapy sense, where you bombard the cell or just kill it — how can you actually have the system remodel itself? So the idea was: the sensitivity was disrupted for that one cell, but then the flow — how much the other cells reacted to it — we manipulated how the information flowed from the other cells to that cell, and increased that. And what was interesting is that they were just kind of coming closer into interaction — you can see this in the time-lapse figure — and then eventually they reshape. 1:02:34 So, normally — it's a deterministic simulation, right? It's always the same random seeds. So you run the simulation, and run it again, with all these outcomes, so you kind of know which cell goes where — even though it's not like each cell knows "I'm supposed to be there"; that's not how it happens. But because we run the simulation with a fixed random seed, we always know which cell ends up going where. In that cancer simulation, that was not the same anymore.
So actually, even though at the end, after that rescue experiment, they ended up with a perfectly normal shape and morphology, it wasn't what normally was supposed to happen. 1:03:11 The cells that had normally gone to those fixed positions didn't all do that. So there was some reconfiguration, which is something of an ideal strategy: you would like to exploit the system's own attractor to rescue it overall. You want to work on the phenotype; you want to work on what's actually wrong. You don't care exactly what each cell does and how it's implemented. There's a lot of cool science — a little tangential, but it's really cool work. 1:03:37 There's work on neural networks in lobsters, where they see that this works in a crustacean — a simple enough system to work on at the level of single neurons. And what they're seeing is that there's actually a multitude of implementations that the neural network can use, and different individuals use entirely different parameters — with, like, two to four different channel concentrations — to achieve the same end result. 1:04:02 And the response, the results of stress, of perturbations in the environment, is in the details, but in the end the actual homeostatic aspect of it is kept the same, right? The goal of keeping something in the same state. Who would like to speak? Go ahead. Thanks. 1:04:20 Stephen: I was just going to ask one question. Is the idea of flow, in a way, that more flow is better? Even though — so unlike a generative model, where you maybe don't want too much complexity — you can have a lot of flow, as long as that flow is not noise; it is just able to inform. That's the first part. And the second part is: are you saying with this flow, it's about finding where the choice point is, or where the threshold is, to say, okay, that's now going to decide, that's going to create a morphological target — as opposed to just being a gradual gradient descent on energy?
It is more the gradual one. So I would say it's not ideal to think about decision points — decision points are something that we as agents are used to at our level, but on the level of the cells, it's pretty much a gradual process. Right. 1:05:24 Franz: Basically, you have all these different priors for the probabilities of being one cell type or the other in the simulation, and the idea of increased information flow, or increased sensitivity — which are the aspects you're modeling and manipulating, or precision as well — is that you are trying to get the variational free energy to be minimized, but also specifically to converge on a state where those probabilities actually end up somewhere meaningful. You don't want to be in a state where they constantly fluctuate, basically randomized. You mentioned noise: that's one aspect you want to avoid. You also want to avoid that they do this — and that's where, as you say, the decision point is, right? 1:06:01 The cell is basically going towards one state, and then, in the cancer rescue simulation, you increase the information flow from the other cells to bring it away from that. But you're not actually first identifying that point and then fixing it. You're basically just watching the evolution overall: you look at the probability changes over time. How are they updating, how fast are they updating, and then of course, in the end, what are they updating towards? 1:06:30 And that gives you a cue. If you think from an experimental point of view, or even an experimental-simulation point of view, that tells you: okay, if that happened too fast — if there was malformation happening too fast — then you probably want to act, and you want to increase sensitivity or something early on. You don't do it at one time point; you just set the parameters for the whole simulation.
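Franz's description — cells with randomly initialized priors over cell types, gradually updated by precision-weighted evidence, where too-high precision makes cells "jump at the first cue" and a damping parameter smooths the landscape — might be sketched like this. All numbers, the template values, and the update scheme are hypothetical illustrations, not the paper's actual generative model:

```python
import numpy as np

def update_belief(belief, signal, templates, precision, damping):
    """One gradual update step on a cell's beliefs over cell types.
    belief: probabilities over types; templates: expected signal per type.
    precision scales how hard evidence counts; damping slows each step."""
    fit = -precision * (signal - templates) ** 2   # precision-weighted (mis)fit
    log_b = np.log(belief) + damping * fit
    b = np.exp(log_b - log_b.max())                # renormalize stably
    return b / b.sum()

belief = np.ones(4) / 4                      # uninformed, stem-cell-like prior
templates = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical per-type signal levels
for _ in range(50):                          # gradual descent, not a decision point
    belief = update_belief(belief, signal=2.1, templates=templates,
                           precision=1.0, damping=0.2)
print(belief.argmax())  # settles on the type whose template best matches: index 2
```

With a large `damping * precision` product, the belief would snap to a type after one noisy cue — the "very bad for biology" regime described above; small values give the gradual convergence Franz emphasizes.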
But it tells you basically the strength of the sensitivity, with respect to how early and how strongly that malformation is happening. Does that answer the question somewhat? 1:07:00 Stephen: Yeah, I think that does. I suppose one question just on that: is there an optimum level in terms of this flow, where it peaks? Or is it like the more flow you can get, the better — it's just literally a limit on how much information flow is possible? 1:07:24 Franz: Hang on for a second. 1:07:29 The short answer — the boring answer — is that it will always depend on the context. I don't think you can say more information flow is always better; that doesn't make sense. I think you already gave the answer to why that is. Why? 1:07:43 Because in the end it will just end up as too much noise for the system. The answer has to be with respect to how much information actually can be processed. What is the time scale of the sensory parameters? My experimental work involves a lot of feeding different variations of input signals to my model system, which is actually algae — I won't get into that. 1:08:05 But long story short, I once got the question — I had these randomized signals that I was feeding it, and someone asked, well, aren't you actually giving it more information? And I was like, yeah, white noise technically carries more variation, but it's not information, right? More information flow by itself is not more informative information flow. The informative part is actually how it changes over time — a lot of the important aspect is in the derivative there — and with respect also to the sensory precision and the timing; there are certain timescales involved, like how fast sensory states lead to updates of internal states.
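Franz's point — that white noise carries variance but not usable information — can be illustrated with a lag-1 autocorrelation check: a high-variance white signal is unpredictable from its own past, while a slowly varying (smoothed) signal is highly predictable. The window length, variances, and seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
white = rng.standard_normal(n) * 5.0   # high variance, but no temporal structure
# A slowly varying signal: the same noise passed through a moving average.
slow = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")

def lag1_autocorr(x):
    """Correlation between the signal and itself shifted by one step."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1_autocorr(white))  # near 0: variation without usable information
print(lag1_autocorr(slow))   # near 1: the past predicts the next value
```

So "more flow" in the white-noise sense adds variance a receiver cannot exploit; what matters is structure at timescales the sensory apparatus can actually track.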
1:08:45 All these things together make up basically how fast an agent can react and how much information it can process, and the optimal level will be dependent on that, essentially. Okay, awesome. Dean or Bleu, before we have anyone go again? 1:09:04 Bleu: So I have a question. Can you switch to figure four, Daniel, please? Yup. So you had mentioned earlier — hi, by the way, it's nice to see you. 1:09:15 I'm glad you could make it; it's a pleasant surprise. But here in figure four, you mentioned that you didn't model proliferation in this model. But it seems to me that it looks proliferative here. So is it just a very dense cluster at the first time step, or do you duplicate at every time step? Or how did you end up with so many more — whatever — agents, cells, at the end than at the beginning? 1:09:49 Franz: There might be an optical accommodation problem on my part. So the actual cells are eight in each of those images, all of them. But what you're seeing is the trace, right? So only the ones that have the strongly colored dot, and then the little star around it at the end — if you look at the last frames on there, those are actually the cells. What you see in the back is basically time-lapse kind of snapshots. 1:10:16 They are actually all in there. It would actually be interesting — now that you say that, it would actually be fairly simple to introduce a new cell at each of those points. That would be cool. 1:10:28 Actually, I think when Karl initially published this, he had, at the end of the simulations, some new cells, but he never went further with that. And I think it's the same problem that I mentioned: when you have, initially, in the first time frame at t equals one, too many cells close together, you have to set the precision lower, because otherwise they're too close together.
I mean, it works somewhat, but it's very susceptible to perturbation at that stage. So if you basically want to introduce proliferation — either, like you said (you gave me the idea), in between, or at the end — you would have to model all of that, what cells do, right? The new cells are not going to have the same kind of sensory apparatus as the rest, the mature or formed cells. 1:11:17 But that was, I think, just making the simulation a little bit complicated. I'm not actively working on that simulation right now anymore; if I were, that would be great. Yeah, maybe I'll come back to that. No, that would be way cool. 1:11:31 Bleu: And if it needs to be less sensitive at the beginning, you could just double the sensitivity at each time step, and that way you would get that enhanced sensitivity over time. And it's also very interesting from the point of view of fractals. And there's some really cool — now, I recently read that, yeah, the guy behind Mathematica, he has a whole new physics approach where they basically just have some sort of rules, and then they use fractal kinds of multiplications, and each time that goes on, they derive all these kinds of physical laws — which I haven't yet gotten deep into. 1:12:02 Franz: I can't really say anything about how good and useful it is, but fractals, as we all know, are a very informative mechanism that's used in biology extensively. And that's something you could use here as well, where basically at each step where you introduce a new cell, you kind of just have the same rule set as at the initial step, and you might make a small simulation within that simulation, essentially. And with the same simple set of rules you could probably build up something much more complex. It probably looks nothing like this — I wouldn't expect it to have the same shape — but cool stuff. 1:12:33 A cool idea right now.
One day we'll just kind of unpack that, because it's a great suggestion, and thanks for giving your take on it too, offhand. It's really cool to hear. Like, when the cells divide, you may need a first symmetry break, or to introduce a symmetry break you get some gradient: it could be gravity, it could be a nutrient gradient, it could be the entry point of the sperm that triggers a calcium wave. 1:12:59 Daniel: But it's like asymmetries can give rise to asymmetries, and then, interacting, you get two morphogens, and now it's high-high and low-low, and then the alternations influence gene regulation — and that's very complex. But adjacency effects are really important, and that's how we get self-organization, like of the insect eye, or of tissues, because they don't have to do what this challenge is, which is sort of getting information spaced out. This is like the bird flock morphology, which is also so cool, because it does apply to other systems too, which I'm sure we'll explore more. But, like, bodies fill out into a morphology, not just dissolve. 1:13:41 This is cool, though. I mean, of course, we really learned a lot. Let's return to least action — feel free to just leave anytime you'd like — but on least action, where does it come into play, or what is it doing here? 1:13:57 Franz: Yes, that was more of a background section. So Georgi Georgiev was also on the paper, also my kind of side supervisor — you'd want to talk to him, because this is not something that I did. It was not novel in the way you framed it; I didn't make new math for that. But the idea was — what Georgi was doing with all these action principles: classically, action principles apply to physics equations, and yeah. I think there's a communication aspect missing here, where it's not clear how this fits into action principles, and where it diverges, essentially. And he actually did write a paper on it. 1:14:46 He brings it in there.
But I wanted to make it more explicit, essentially, that the minimization principle is a least action principle, by definition, essentially. What I was trying to show here is simply the whole idea of the paper from the background: that anyone who has more of a physics background could start from this and then work their way to the cells, to the biology. And it's something that physicists are very familiar with, right? 1:15:12 And the question is actually: what is the interesting take on it from the informational, free energy side? And that's what I think — so you start with a general equation, what is extremized, and then you see later on that the whole definition of the variational free energy, with the Kullback-Leibler divergence, very much corresponds to that. And it's also interesting to see basically where it comes from. So, hopefully that answers your question. 1:15:41 Basically, the point of that was not the least action that I used, that particular formulation — I did use the classical, typical equations that Karl used for free energy. It's more of a motivation, and I hoped to show essentially where

this all flows from: how do you get from the Lyapunov function in the beginning, the potential function — how do you get from that to the variational formulation, to really make that integration with physics tighter? Which I know I failed at, if you read the comments to that paper. 1:16:15 They were published with it. And if you read them, one or two of them were like, you still don't see how the physical interaction is doing this. And my response to that — I think in that mini response paper I called for, like, a deeper integration specifically. So I know I didn't succeed in that. I apologize, but that was the goal. 1:16:33 Daniel: Okay, awesome. Dean, would you like to go anywhere or ask anything? 1:16:41 Dean: No, I'm just going to sit back. 1:16:47 For me — you've talked about a whole bunch of stuff around what a form might turn into based on what signals it's capable of acting on. Is that — besides being a gross oversimplification — can we start there? Does that make sense? Yeah, so, like, yes, that makes sense. 1:17:14 Franz: Basically, I always like talking about what you put in and what you get out of it. So what we put into the model is an encoding of signaling responses, which is like a classic: if you see signaling A and B but not much of C and D, then you think you're cell type whatever-that-means — head type. And then you adjust your own signaling expression. So each cell basically has four signaling molecules that they can exchange: they can express them, and they can sense them. 1:17:40 And based on different combinations of those, they think they're one type or the other. And that is semi place-encoded. So from that map, basically, that's what the target morphology means. So what we put in, basically, is that encoding: the generative model, the generative process, how they update it, the diffusion constants, all that stuff — basically how the information spreads between them.
And then what we get out, essentially, is what kinds of shapes, like you said, can emerge from that. 1:18:10 Of course, the initial model could basically only produce the one shape that you put into it. So the point of this paper was keeping that same target, the same setup where each cell has these options of what it can be, based on what it's sensing and what it's secreting. But keeping all that, can you get different shapes just by messing with the way that information flows into and out of the cell and how it's being processed? The inspiration for that comes from the Levin lab work, where we do a lot of this: we don't actually mess with the genetic code, so we don't mess with the generative model, in active inference terms, but we mess with how, basically, the active state space looks at each point in time and how it's updated. 1:18:53 And that's the idea of this. 1:18:59 Daniel: What does it look like to engineer or design a different target morphology? How did you fit the Bayesian model, or fit the flow description, whatever it was, in its representation towards the anatomy that's described in figure three? Franz: Yes. So the outcomes that we get, like the outcomes here, we didn't design them beforehand. Like I said, we keep all the same. 1:19:31 We just basically had some intuitions, some of which turned out to be true, of how we could change it based on just the asymmetry we put in the information flow. How would we design this? So this is on a general level, and again, this model itself, that was already coded before I joined, so I didn't do that part new. But how do you approach this? So the idea, a fairly biological idea, is that you have joint encoding, where certain transcription factors, based on whatever you sense, will then activate gene expression in the cells and thereby differentiate the cells.
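Franz's description above, four signaling ligands that each cell expresses, secretes, and senses, with the cell softly inferring one of four cell types from what it senses, can be sketched as a simple Bayesian belief update. Everything here (the likelihood matrix, noise level, and number of updates) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_TYPES = 4    # hypothetical cell types (head, tail, ...)
N_LIGANDS = 4  # signaling molecules each cell can sense/secrete

# Assumed likelihood mapping: row t is the mean ligand profile a cell
# expects to sense when it is (surrounded by) type t. Made-up values.
A = np.eye(N_TYPES) * 0.8 + 0.05

def update_beliefs(prior, sensed, noise=0.2):
    """One Bayesian update of a cell's beliefs about its own type."""
    # Gaussian log-likelihood of the sensed ligand vector under each type
    log_lik = -np.sum((sensed - A) ** 2, axis=1) / (2 * noise ** 2)
    log_post = np.log(prior) + log_lik
    post = np.exp(log_post - log_post.max())  # stabilized softmax
    return post / post.sum()

# Start "stem-like": flat, low-precision beliefs over the four types
beliefs = np.full(N_TYPES, 1.0 / N_TYPES)
# Sense a ligand mix dominated by ligand 2 (noisy observation)
sensed = A[2] + rng.normal(0, 0.05, N_LIGANDS)
for _ in range(5):  # repeated sensing sharpens beliefs: "differentiation"
    beliefs = update_beliefs(beliefs, sensed)

print(beliefs.argmax())  # prints 2 -- the cell has settled on type 2
```

The point of the sketch is only the qualitative behavior Franz describes: beliefs start near-uniform (stem-like) and collapse onto one type as signaling evidence accumulates.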
1:20:07 Most of morphogenesis essentially starts off from something that I mentioned initially: you have some asymmetry in signaling. Even when you already have the fertilized egg in an animal system, there is some asymmetry that's being set up actively by the whole organism of the mother, essentially. And based on that asymmetry, you then have a gradient, a morphogen gradient of differences, and then some cells will sense more of signaling type A and less of B, and from that, certain things will be expressed that will then change the cell fate to a sub-state, and then you have more cell fates, and the same thing happens there. You have now more asymmetry, but it started off from the same one. So you again will have cells that will sense more and less of some signaling ligands, which will then basically be incorporated, through transcription factors and differential expression, into differentiation. 1:21:05 So this is essentially what this is doing. What the codes here are, essentially, are those different relationships between signaling receptors, transcription factors, and the downstream genetic code, where then, basically, we don't do the full thing again, but this is the first step: there's basically one snippet of the type morphology, one aspect where, at that point in this hypothetical morphogenesis experiment, you have four different signaling ligands that can be expressed by the cells. And based on where you are in the lattice, you will see more or less of each, and based on that information, you will then yourself start to express more or less. Of course, you basically start to differentiate, so then you will start expressing one,
1:21:52 less of another, and at some point you reach a certain threshold where they actually fully undergo the differentiation process and have a new cell fate, and that's what these different head, tail, and so on are. So of course, there's no animal system I know of that has eight cells and four cell types and makes a whole body. That would be crazy. Although there are some interesting organisms and systems, C. elegans being one example, where they have a complete map from cell to cell, where basically they know exactly which cell turns out to be what at the end. 1:22:31 They can go from the embryo and they can tell exactly where this cell is going to be at the end of development. But this is the minority of examples. Most models I know are a lot messier than that, and that's what this is trying to answer: given that you have a lot of messiness, cells can become different things, and perturbations happen, like they happen all the time in the environment. And often we do see misformed embryos, both ones we induce in the lab and ones we see in nature all the time, where you get completely different outcomes in the morphology, and not all of these occur because of some genetic mutation. That's not always the case. 1:23:09 Sometimes it's just some perturbation of the environment with the same genetic code, and that's where the active inference comes in here. Basically, the difference here between one simulation and another, if you did it that way, we didn't, at least not in this paper, but if you had a different initialization of the prior beliefs, you could also get different outcomes. But even with one initialization of the initial beliefs, you will get certain outcomes for those cell types, based on the encoding, but also on what the environment does. 1:23:38 That's kind of the goal: you have cell types, and you want to see how you can change the line of cell type differentiation based on interplay,
not just, like, saying, you do this, and you tell the cell to do this, but based on how they interface with each other. 1:23:56 I hope that answers the question. 1:24:01 Daniel: It makes me think of Waddington's landscape, the epigenetic landscape, which I'll copy in. How would you say it relates to epigenetics or the epigenome? Sorry, do I have my shirt? Franz: Yeah, I have a shirt with it on right now. Daniel: Yeah, I love the Waddington on it. Yeah, exactly right, so that was the big one. 1:24:23 I'll put it onto a slide. 1:24:39 Franz: Yeah. Basically, the whole view initially, when we started understanding genetic information, was that that's the end of it. We know basically that there are certain signals that we observe, and then you will get gene expression [?], and from that it's all straightforward. What was new later on was the epigenetic part of it: modifications happening not at the level of the genome sequence itself, but affecting which genes were more or less accessible for expression. So basically you kept the coding the same, but based on modifications, histone modifications, the way that the DNA is wrapped around itself in chromosomes, the more or less tightly the chromosomes are coiled, they are more or less accessible to transcription factors. And that side is basically happening later on. 1:25:17 Now we think a lot more about this, and we even see that it's heritable, and it's also interactive with a lot more than that, including the physical forces on it. So perturbations in the mechanical or biological environment will lead to much faster epigenetic transformations.
Of course, it's also important to mention that this faster process, modifying some of the proteins that make up the histones and thereby the accessibility of the generative model, is faster than starting off on some evolutionary experiment, where you do resetting and crossover until you get the right combination. So it's a different layer of information flow that is interactive with the genetic level. So where this paper interacts with that a little bit, intersects with it, is the aspect that in that initial information geometry you have a set genetic code within basically one lifetime. 1:26:12 How

do certain modifications and sensory precision, which you can think of like... I didn't make that explicit claim; I hate to make claims and then afterwards have to answer for some relevant biological mechanism, but I could easily think of it that way: that these modifications of the response function, the sensitivity in the double-tail experiments and the reduced sensitivity in the cancer initiation, those can easily be thought of as short-term epigenetic modifications. And we do see that a lot. This happens especially in stressful environments and in morphogenesis, where they very quickly change and lead to drastic outcomes. 1:26:59 And it's thought of as a quick response mechanism to perturbations. And it's involved in cancer as well; that's well known. So that's kind of how I would think about that. The landscape here can be thought of as a version of Waddington's landscape, in the sense of what different outcomes you can get, and how quickly your probability space changes based on the changes in the environment. 1:27:23 Daniel: Awesome. Thank you. Dean? 1:27:28 Dean: When I was reading this, I was thinking of a company that I'm invested in that has taken stem-cell-derived islets, because people have diabetes, and they have created an environment to protect against a perturbation. So instead of injecting those pancreatic cells directly into the portal vein, they have, or are working on, trying to sort of create the environment in which the islets survive. So essentially, again, rather than looking from the bottom up, it's sort of creating that safe environment in which these islets can then carry out their function. And that's what I was kind of thinking about throughout, when I was reading your paper. Now, I'm not sure, again, whether I was thinking of the correct analogy, but yeah, what the pancreatic islets can do depends on their survivability.
1:28:28 So if you don't have the right kind of environment for that, they tend to die. Or worse, they don't die, they become cancerous. So it's kind of interesting that you're looking at it here. And then I know of some sort of practical application, Franz. The research is really trying to build these environments so that they can actually protect against the perturbations. 1:28:50 Franz: Yeah, no, that's exactly the applied side of things as well. I have two more minutes, if that's okay, and then I have to go. Daniel: Really appreciate you joining. If it works for you to drop in and out at any point in the dot two, please feel welcome. And any other time, you or anyone else are welcome to participate. 1:29:12 But thanks a lot for joining. Franz: You're welcome. My pleasure. See you later. Okay, bye bye. 1:29:20 Stephen: Thank you. Excellent. That was great. Fantastic. Great. 1:29:26 Daniel: Yeah. Fun times. Okay, so here's one thought on the relationship with foraging, kind of picking up on what we were just looking at. So he described some of the manipulations as kind of like interventions that were epigenetic. 1:29:43 So it wasn't, like, the addition of another, changing the model structure, like going from four to five transcription factors. It was just changing their state. It's like picking one cell up and moving it: that would be changing its position. But then there are other levers of intervention, like changing its sensitivity or other aspects of its cognitive, or its generative, models. 1:30:08 And the whole point with science and active inference, though, see "The Anticipating Brain Is Not a Scientist," Bruineberg's paper, so it's not exactly the same thing, but it's about informed or directed experimentation. And that's true from epistemic foraging all the way to the scientific experiment. 1:30:33 When we make targeted interventions into systems based upon a good understanding of them, we can learn about the cognitive model that's underlying it, or the as-if cognition model.
So there are a lot of interventions that would not be informative about morphogenesis, like, jokingly, hitting the petri dish with a hammer. It's an intervention that does kill cells. It kills cancer. But of course, isn't that the joke? 1:31:01 Right? Like, there needs to be more to it in the experimental design and in the measurement of the outcomes, so that we're actually learning something. And so to make increasingly nuanced designs and learn more and more and apply better and better, that's where having this kind of formal modeling really matters. So this is kind of taking, just saying, well, changes in receptors might be a precursor or a biomarker or an early mechanistic warning of X disease. 1:31:33 You'll read that a thousand times in the molecular biology literature, and this is picking up there and putting it into a modeling architecture where we can actually talk about changes in signaling and position in an interpretable way, so that it might be usable in applied settings [?], just like people use the epigenetic landscape heuristically, but also increasingly quantitatively, to model cell fate decisions. But just to kind of close that gap: it's the interventions that we as the investigator make, based upon our understanding of the natural history of the system, that give us updates on our cognitive model of the system. So, like, an ant foraging example: they'll take a desert ant that forages alone, that goes out without following simply a pheromone trail, and pick the ant up when it's, let's just say, 30 meters away from the nest, or 20 meters away, and then move it, like, 90 degrees rotated at the same distance. So one extreme case would be it continues from that location as if it were walking back from the north. 1:32:42 Let's just say that would be a pure direction-and-step path integrator model. There might be an angle it takes between that and, for example, heading towards the nest.
That might reflect the usage of other sensory or cognitive features, like detecting the polarization of light, or scene memory, or other ways that it might be able to adjust and turn its vector to some other direction. So that's the kind of experiment where we can start to learn, like, how much are the nest mates in foraging adjusting between their own onboard memory versus these external cues that might be picked up, like, in one instance, by any nest mate in that location. So there are some cells where, moving them into a different tissue, the first thing they would do is die, because it's just too acidic or just not a good environment for them. 1:33:39 But there are certain interventions that do seem to happen that lead to, at the very least we can say, states we don't prefer, or, just more directly, states that are abnormal or unhealthy. Though it really does all come back to Bleu's question about death and aging: how is that related, and what is this living phase between the developmental part and the death part, even though there's such a complex continuum between them? Stephen? Stephen: Can I just ask a question on what you were just saying there, Daniel? Do you feel, or do you think, that the level of what's happening with epigenetics is fairly flat, and once these types of changes happen, the adjacencies permeate out and the cells are kind of finding their place? And is that a bit different to the nestedness? 1:34:46 You know, the nestedness that we're talking about, because, like, for instance, a tissue: does it then fall into some other type of behavior, like into this idea of the nested kind of Markov blanket states? Is it then a case of: the types of information flows which were conserved at one scale become noise? Does that make sense? Maybe. I'm just curious about whether that is in here. 1:35:29 Daniel: It very well might be.
I mean, what it just makes me think about is eye specification in fruit flies, and similar in other insects. So this is an example of a developmental genetics pathway at a relatively downstream level being worked out through a lot of work. These cells are pre-committed to being eye precursor cells, but then they undergo a second, or a further, stage of differentiation that results in a compound eye. And then it's the modularity of this developmental framing, first eye precursor cell and then a specified compound eye cell, that makes the insect eye scale in different ways, for example, than the mammal eye developmentally. 1:36:17 So over evolutionary time, there are different affordances for insect eye evolution versus mammalian eye evolution, because they're on different pathways. So it's kind of fractal, because this eye field precursor is not the second one that comes out of the embryo. So there are multiple, like, fractal layers and pre-commitments. And it's not possible to jump from a canyon way at the bottom directly over. There are a few cancer cell papers that do a little bit more of, like, a reversion, or a perturbation that pushes, and then changes to susceptibility are kind of like changes in the elevation of the hills between. 1:37:00 So it's a landscape which isn't just simply flat. It has some scale. And then the relationship between how jagged it is and how much noise or flow there is, is about how navigable that landscape is in that one model. Bleu? 1:37:29 Bleu: So I have a question for Daniel and Dean specifically. When Franz was here, he talked about the block matrix operator that he used in equation 39. And you guys, maybe it was Daniel specifically, were talking about some kind of housekeeping term in the dot zero. And I couldn't remember or place the housekeeping term that you were referring to. 1:37:55 And I wondered if it was, like, the same or similar, or used in a similar way. Daniel: I think it was Connor's.
I'll bring in a housekeeping term slide. It was from livestream 32, "Stochastic Chaos and Markov Blankets." 1:38:12 And here is the slide, in slide 45 here. 1:38:23 Basically, the Helmholtz decomposition is usually discussed in terms of just the gradient and the solenoidal, so a sort of directed term and then the isocontour circulation: change in elevation, roll down the hill, and then around the hill on the isocontour. And then the paper in 32 brought in the housekeeping term, and there was, like, a supplement, so we didn't really totally go into it, but together they represent the total flow. So it's a slight Karl J. 1:38:57 Friston variation on the Helmholtz decomposition in this context. But as Connor mentioned in 32.3, it also has sort of antecedents, and maybe even similarities with other areas. So it totally remains to be a little bit unpacked. But it's the influence of how movement changes the landscape, at least that's a little bit how we were framing it, like change on a trampoline, where the movement is going to influence it. You can't just snapshot the landscape and then calculate what would happen and what the gradient and the solenoidal flow would be at every point, because it's going to differ in the future. 1:39:32 Bleu: That might be related to what Franz was saying, because he said that he used it to smooth out the landscape, because otherwise you get stuck in, like, a little local pocket when you're dealing with something that's bumpy. So it sounds like it could be a similar type of housekeeping thing, or just, like, a smoothness when you're going into the bowl: you don't want to get stuck in a bump along the way. Right. You're a marble traveling down. 1:39:58 Dean: I thought it was a cleanse. 1:40:04 Daniel: There are just too many jokes to be made. 1:40:10 Wow. I think in our last ten-ish minutes we should just chill, land the plane, prepare for dot two. It was pretty unexpected and awesome to have the time that we got to speak.
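The gradient / solenoidal / housekeeping split Daniel describes here is, in the related "stochastic chaos" literature discussed in livestream 32, often written roughly as below. This is a hedged reconstruction rather than an equation from this transcript, up to sign conventions, with \(\Im(x) = -\ln p(x)\) the surprisal, \(\Gamma\) a symmetric diffusion operator, and \(Q\) an antisymmetric (solenoidal) operator:

```latex
\underbrace{f(x)}_{\text{total flow}}
  \;=\; \underbrace{-\,\Gamma(x)\,\nabla \Im(x)}_{\text{gradient (dissipative)}}
  \;+\; \underbrace{Q(x)\,\nabla \Im(x)}_{\text{solenoidal (isocontour)}}
  \;+\; \underbrace{\Lambda(x)}_{\text{housekeeping}},
\qquad
\Lambda_i(x) \;=\; \sum_j \partial_{x_j}\!\bigl(Q_{ij}(x) - \Gamma_{ij}(x)\bigr).
```

On this reading the housekeeping term \(\Lambda\) is a divergence correction that vanishes when \(\Gamma\) and \(Q\) are state-independent, which matches the trampoline intuition: it only shows up when the "landscape" itself varies with the state.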
And I think that's like the housekeeping term in action, which is a process and a protocol, and then people who have different perspectives on a situation, but the group can always re-adapt to a changing landscape. And so even if that's interpreted as structure learning at a higher level, from the sort of agent view looking up, it's equivalent to adapting to a different situation in some fractal way, slightly different or very different, using the same affordances and perceptive and cognitive features as it had in the time step before. 1:41:01 And that's why this model that looks at the flow of the autonomous states, so the states that can actually be controlled in the Markov blanket partition in 30, by defining a specific function that we want to be fitting, well, by really focusing on the flow over the blanket and internal states, it allows inference on that component, perception, cognition, and action of the given modeled entity, to be uncoupled in a certain way from inference on the hidden external state evolution, which is fundamentally unknowable, but in a slightly different way, if that makes any sense. 1:42:05 And a classic moment, Bleu, to suggest the mitotic elements, and just to bring in the continuity and the way that that gives rise to symmetry breaking sort of for free. Cool conversation that we had today. And as a dot one might have it, I didn't get to mention, and I was going to mention it, but there's that fractal fMRI brain pattern paper that I'm, like, dying to read and would love to discuss with the authors also. But it's interesting just in terms of the fractal dimension that Franz was talking about, which is super cool. 1:42:58 Is there anything else that we want to, like, write down to think about in the dot two, or questions that we'd like to ask Franz if he returns, or just things that we want to take moving forward into the dot two already? 1:43:22 Yeah, I agree. It's like you're eating dinner, and it's like, what do you want for dinner tomorrow?
1:43:29 Inverting space and time, we didn't really get to. And some of these initial points: least-action path and morphology, and then generalized flow, then taking us all the way back. Dean, I would like to maybe ask Franz about quantum mechanics and what role, if any, that played in it. Is there a quantum perspective that he was perhaps viewing it from when he was writing this paper? I would like to ask just that one question, because that will encompass inverting space and time, and it will encompass what we're talking about with the probability current and the probability mass conservation, so that all is bridging into the quantum, where we'll go in the 40s. 1:44:20 Bleu: So I think I would like to ask that; that's one thing. Dean, what are you looking forward to? What would you want to add here at the end? Dean: I was going to go back to the drawing board and basically look a little more closely at some of those math operations, because I don't tend to give those as much attention as I need to. I think it's really interesting now that we've had a chance to sort of bring more people into the conversation. 1:44:48 There are three parts to this, I think we can agree. There are three things that we can really center on now. There's the forming, which is what figures four and five do. But as soon as you go to figure five, what's interesting is that the attention seems to be on the shape that the cells eventually take. And yet in the blackness around each one of those moment captures, there's signaling and there's changing, right? 1:45:21 So there's all the part we've talked about today, which is the whole flowing piece: where is it going, and how is it channeling, and how is it narrowing, or how is it spreading, and how is it dissipating, and all that stuff. But even in those diagrams, it's hard to figure out where the signal is. Even when they introduce arrows, they introduce the arrow pointing at the form, not necessarily the signaling and the changing.
And that's where the math really comes in, I think, because without the math, you lose sight of those other two present factors that are all moving and mixing and turning into something. So I hope Franz comes back, because, man, he's got a lot more to tell us about, really, because this is such a dense paper. 1:46:15 Like, I think I mentioned to you, Daniel, before the .0: this could be a 13-week course, this one paper, if you really wanted. I don't know how many hours he spent on this, but he already admitted to going to Karl's lab, and from the lab he's at, there are way more multiples of 10,000 hours of work that were done before he started typing things out. 1:46:41 Daniel: Awesome. With that, maybe something to think about is this idea of, like, multiple kinds of invisibility, or overlay, or unpacking, like, projection up into a bigger dimension of interpretation. Like, you pointed out how the focus was on morphology, which is to say the final realized form. Like, this picture is of the morphology of the animal; it's not of the signaling density. But this could be, like, presented as just a gradient of transcription factor A, or it could be presented as a gradient of vitamin A. 1:47:16 Or, like, it's kind of like Kirlian photography, like looking at the morphology with the energy field. I don't know if that's exactly what it is, but that sort of idea, with the field-based perspective. But then it gets hard to show many overlaying fields, because, like, what do you do, 20 overlaying colors? 1:47:37 We can't actually see that. And so that's just kind of interesting, like a question about visualization of higher-dimensional models with a lot of overlays. And then that's, like, one kind of invisibility, which is something that's modeled but just not graphically shown easily, like the density of 50 pheromones or 50 transcription factors or vitamins.
But then also, it does exist, and it's, like, modeled as an actual chemical component of the biological system, like the generative process. And then there's this generative model with the mathematical derivatives, the generalized coordinates of motion, of cellular position. 1:48:16 And those are also, like, invisible in a different way, because they're a modeling tool, and the derivatives are not anywhere in reality. Like, where's the 7th derivative of the baseball's movement? Just relative to what? Where is it hiding in the current moment? This is just a purely tool-driven way of thinking about the current moment, not just in terms of its composition and, like, anatomy and position and the lowest level of the observed model, but, like, these higher levels, which are real yet non-existing in the moment. Structurally real, in the structural realism sense. 1:49:00 Dean: I'll take your word for it. I've been wanting to say that for a long time. I have to now go back and figure it out. But what a discussion. I hope, for those who stick around to the end here, that they enjoyed it; this wasn't just a total morpho-catastrophe. 1:49:32 Daniel: All right, well, fun times, and awesome to have all the good discussions, and appreciate everybody who was here: Bleu and Stephen and Dean. So see you all next week or any other time. Peace. Thank you. 1:49:48 Bye. Thank you. Bye.
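Daniel's point about generalized coordinates of motion, derivatives that exist only as modeling constructs, not as anything visible in a single snapshot, can be made concrete with a toy numerical sketch. The trajectory and step size here are illustrative assumptions, not anything from the paper:

```python
import numpy as np

# Hypothetical "baseball" trajectory x(t) = sin(t). Its generalized
# coordinates at one moment are the stack of temporal derivatives
# (position, velocity, acceleration, ...) evaluated at that moment.
t0, dt = 1.0, 1e-3
ts = t0 + dt * np.arange(-2, 3)  # five samples bracketing t0
xs = np.sin(ts)

# Central finite differences recover the first two derivatives --
# quantities that live in the model, not in any single observation.
x = xs[2]                                      # position
x_dot = (xs[3] - xs[1]) / (2 * dt)             # velocity  ~ cos(t0)
x_ddot = (xs[3] - 2 * xs[2] + xs[1]) / dt**2   # acceleration ~ -sin(t0)

gen_coords = np.array([x, x_dot, x_ddot])
print(np.allclose(gen_coords,
                  [np.sin(t0), np.cos(t0), -np.sin(t0)],
                  atol=1e-4))  # prints True
```

The point is only the philosophical one from the transcript: the higher-order coordinates are perfectly well-defined and computable, yet "invisible" in the sense that no single instant of data contains them.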


 * 1) Session 039.2, March 9, 2022



Second participatory group discussion on the 2020 paper by Kuchling, Friston, Georgiev, and Levin, “Morphogenesis as Bayesian inference: A variational approach to pattern formation and control in complex biological systems.”


 * 1) SPEAKERS

Daniel Ari Friedman, Dean Tickles, Bleu Knight, Stephen Sillett, Franz Kuchling


 * 1) CONTENTS


 * 00:28  | Intro and welcome.                           |
 * 01:53  | The stem cell question.                      |
 * 03:19  | “Stemness” and differentiation.              |
 * 09:27  | Rearrangement of information.                |
 * 11:21  | The state space dependency.                  |
 * 18:16  | Signals to keep things interesting.          |
 * 20:03  | The general answer.                          |
 * 22:24  | Do you still need to adapt?                  |
 * 25:05  | Wrong dropperie\[?\] and active inference.   |
 * 30:35  | Uncertainty and stress.                      |
 * 31:46  | The probability flow and information.        |
 * 39:25  | Eric Smith on biology and entropy.           |
 * 40:56  | Sensitivity to stress.                       |
 * 47:53  | Learning with stress assumed.                |
 * 53:09  | Coping with stress.                          |
 * 59:09  | Multiple archetype dimensions of stress.     |
 * 1:01:54 | Grad school vs involuntary.                 |
 * 1:03:32 | The computational definition of empowerment. |
 * 1:05:42 | “Can I ask you a question?”                 |
 * 1:06:36 | Reapplication of active inference.          |
 * 1:15:56 | What does cellular empowerment look like?   |
 * 1:24:03 | Average signaling figures.                  |
 * 1:25:02 | The norm simulations are inference.         |
 * 1:29:25 | Perturbing the costa\[?\] disturbance.      |


 * 1) TRANSCRIPT

00:28 DANIEL FRIEDMAN: All right. Hello everyone. Welcome to ActInf Lab livestream number 39.2. It's March 9, 2022. Welcome to the ActInf Lab, a participatory online lab that is communicating, learning, and practicing applied active inference. 00:44 You can find us at the links here on the slide. This is a recorded and an archived livestream, so please provide us with feedback so we can improve our work. All backgrounds and perspectives are welcome, and we'll be following video etiquette for live streams. Check ActiveInference.org to learn more about anything that the lab is up to. Today, in stream number 39.2, we're in our third discussion around this paper, “Morphogenesis as Bayesian Inference: A Variational Approach to Pattern Formation and Control in Complex Biological Systems,” by Franz Kuchling, who's here with us, as well as Karl J. 01:25 Friston, Georgiev, and Levin. So we're going to have a fun discussion in the dot two. The dot zero was some background and concept, hopefully overviewing the paper. In the dot one, we had a discussion that opened up several threads, and today, in the dot two, we will say hello in the introductions and then just jump in to a blank slide, or to a not-blank slide. So I'm Daniel, I'm a researcher in California, and I'll pass it to Dean. 02:01 DEAN TICKLES: My name is Dean, I'm in Calgary, and I am going to say that I have one thing I want to look at down the road, which is: how long is a person a stem cell? Which I think this paper might help us parse a little bit. And I'll pass it on to Bleu. 02:22 BLEU KNIGHT: Hi. I'm Bleu. I'm a researcher in New Mexico, and I also, similar to Dean, have questions about when a cell becomes a cancer cell and how it develops its own generative models. And just in general, when you're looking at complex systems, how these small perturbations can lead to a totally different effect downstream, which I think is kind of related to the stem cell question that Dean has. And I'll pass it over to the first author, Franz.
02:58 FRANZ KUCHLING: Hi, I'm Franz. I am a PhD student in the Levin lab at Tufts, which is near Boston, and I study active inference and theoretical work [?], and I do love experimental work as well, looking at information processing in organisms themselves. 03:18 Daniel: Cool. So where does stemness, or how do stem cells, play into this paper and this line of research, Franz? Definitely something we've touched on, but just to kind of start generally, and then we'll zoom in somewhere. So take it away. 03:36 Franz: Sure. The idea is that the cells, initially in the simulation, when they're being initialized, are randomly initialized and have very low-precision beliefs about what cell type they are. They can be one of four different cell types in this particular simulation, and they all share a model of the morphology they could achieve, that they essentially are programmed to achieve at some point, but none of the cells are any of the given cell types initially, at least not to any significant degree. And as they move along, signaling to each other and sensing each other and secreting certain types in response to each other, they then slowly, slowly infer a cell type, which essentially is differentiation. 04:30 Daniel: So cells start with low precision about what kind of cell they believe themselves to be, and then there's a process of differentiation. Dean, do you have any thoughts on that sort of stem-to-differentiated continuum? And I'll pull up an image from a non-active-inference perspective that also shows that kind of continuum. 04:58 Bleu: So I have just something to add that ties into the paper, where there's aberrant signaling. So a stem cell differentiates, but then during the process of carcinogenesis it retains some kind of, or de-differentiates into, something that's more similar to a stem cell than it is to a different kind of cell. And so this plasticity is important to retain so that we can repair ourselves. But also it can lead to kind of an aberrant signaling pattern.
05:39 Daniel: Like, this action of having a terminally differentiated cell revert to a state where it regains the ambiguity of stemness is one of the characteristics of cancer, because it can then sometimes differentiate into other cell types that are normal and downstream of a precursor, or abnormal in some other way.

Franz: I was going to say, it's also something that happens especially in higher organisms, right? Not every organism keeps absolute stem cells in all populations. One example I used to work on is a fish where you have an always-developing eye, and there are certain parts of that region that can differentiate to some extent. They're not entirely stem cells; they have a stem cell niche as well. 06:39 But especially higher organisms keep those populations, either of stem cells or of already partly differentiated cells that can de-differentiate. So it's an important part of regeneration. And ironically, this is something that hasn't been studied for a while. There's older research on highly regenerative organisms, salamanders, where they injected them with carcinogens and basically saw a tumor progressing, and then they amputated the limb. And these animals are highly regenerative. 07:13 The regeneration actually renormalized the tumors, potentially. So initiation of regeneration in those organisms seems to counteract cancer progression and even kind of remove the tumors, which is very interesting. And this feeds a little bit into the idea of this paper as well: in regeneration, what happens is you suddenly have a much higher flow of information, things to rearrange, to re-differentiate. And that is what I was thinking of with the simulation here of the cancer-rescue phenomenon, where you rescue this formation of aberrant cells and aberrant signaling by having a stronger contact between cells.
One thing I'm not modeling here is regeneration, although I think in the original paper Karl has done something where you cut part off and let it grow again. 08:11 I think that has been done in the original paper, but that's something to keep in mind; it's something that has been experimentally studied to an extent.

08:25 Daniel: So, one general note and then a question. This idea that there are organisms with a highly regenerative capacity: there's a continuum, from regrowing just the tail, to regrowing a whole limb, to the extreme of a highly regenerative organism like the planaria that was studied in this paper. So it's all on a continuum, from the total ability to regenerate, where any adult somatic cell can fully recapitulate the full population, all the way to the total opposite of that. 09:00 And then you pointed out that it might be related to cancer progression and phenotype. And that speaks to what Bleu brought up about the system-level effects. There's the individual cell and there's the system level, because clearly there's something about the system level in the regenerative situation that would lead to this being different, or maybe it could be explained by something internal to the model. Anyway, you mentioned that higher information was associated with rearrangement. Could you explain a little about that, or how it plays out in this paper or in general?

09:42 Franz: Yeah, it's basically the idea that it's not so much the total information content as the flow of it. We talked a little bit about this last time. It's hard to quantify this in a more meaningful biological sense, but the idea is that it's information flow. I think Dean's question was: is more information always good? And my answer to that was: within the context of what the participating organisms can receive and can process.
10:13 So in the sense that if you have cells that are processing certain information at a certain rate, at a certain cost, then getting information with that time frame in mind is what makes it more likely for them to better understand their environment and react to it, as opposed to a cell, and cancer cells in particular, that have that whole sensory part completely perturbed and do not react to the environment anymore; then everything else becomes noise and gets ignored. That's the idea. So there are a lot of differences in how cancer cells, because they have changed expression profiles, have changed machinery of what they're actually transcribing. All of that leads to differences in how signals are being processed, and understanding that, and then reacting to that, is I think a key to understanding and treating the disease.

11:17 Daniel: Awesome, thanks. Dean?

11:21 Dean: I don't know if I have the answer to my question, which is: how long is a person, in scare quotes, a stem cell? But one of the things you mentioned, Franz, in the dot one, when I think Steven asked a question about how much volume was in the flow, was, as I heard you say it, that it's always a state-space dependency. When I'm reading the paper, I think you made a really sound case for that. But one of the things I brought up as you had to sign off, which hopefully we can discuss a little today, is, in figures four and five of the paper, all that black area around where the cells sort of move into place. That's a crude way of describing what's going on there. 12:17 But all that black area around it: is it fair to say that that's kind of the state-space dependency, or is that just assumed? And how would we know? Because Daniel did a good job of saying there are some invisibility aspects to this when we do multiple layering.
And so how do you give people a sense of what's going on in that black area? Right? I think black is a good color representation, because it's not really a color, I don't think, but it kind of speaks to this idea that we've got these low beliefs. But for how long, as a person, or as cells, as a cluster of cells, is that beneficial? Right? 13:09 We're talking, of course, under the big umbrella of morphology here. So that was what I was wondering about today; maybe you could explain it, because the eye tends to focus on the object within the frame, as opposed to the area around what we would consider to be the object. So maybe you can help me understand this state-space dependency piece.

13:36 Franz: Yeah. In figure two (I didn't plot it in figures four and five, but in two I did) you see the background concentrations, right? 13:46 So there are, of course, some levels of signaling molecules in the background as well, which will also be the case in the actual simulations. Down the line, in four and five, the blackness: one interesting part about the simulation, which was done mainly for simplicity and for lack of necessity, is that there isn't actually an external environment per se. There's nothing, right? There's no external force outside of the cells. Everything in the simulations is done by the cells; the cells make the environment. 14:22 Essentially, there's an environment formed by the signaling molecules that they put out, but we don't have any external limits around it. Basically, the signaling molecules don't spread faster than that, and there isn't any external force, which we could add, right? 14:43 In some simulations we do introduce an asymmetry; essentially we get an implicit environment by changing the response to the signal range. You can imagine that you put something in the medium that flips how a signal is being processed, and that's it.
So, back to your question: this blackness, and when is that beneficial, if I'm answering the question correctly. 15:10 There is, of course, a precision component to it. More or less hand-wavy: if your environment is very volatile and constantly changes, if you didn't have a closed system, you would get signals from the environment that would also become a lot messier. And especially initially, in the first frames of figures four and five, the agents in the simulation all start together. As I mentioned last time, there are certain smoothness requirements, where we had to put a dampening on the sensitivity over time: inverse dampening, so it gets more sensitive with time, just because initially they're very unstructured. 15:56 They have very low-precision prior beliefs, and they would jump around crazily if they had high sensitivity to each other, because they're in such close proximity. And that is something we of course do see in biology, right? Especially in higher organisms: they usually start off in closed environments. And of course you can argue this; you don't need to invoke information at all. People argue, well, it's just thermal insulation, they have their own food supplies, it just makes more sense physically, energetically speaking. 16:15 But I do think there's a big component to it. 16:27 It's shutting this highly volatile, self-inferring organism off from influences from the environment at a stage when all these cells are figuring out what to do with each other. So they kind of need to have a barrier against signals from the environment which, to us adults, to a mature organism, are not really dangerous, but which to a developing organism full of stem cells can be problematic and will basically perturb its natural progression through this state space, this cell-fate acquisition.
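Franz's "inverse dampening", sensitivity kept low at the start and allowed to rise as the cells structure themselves, can be sketched as a simple saturating schedule. The exponential form and the constants below are assumptions for illustration, not the schedule used in the paper's simulations:

```python
import math

# Toy sketch (assumed form): sensitivity (precision on neighbor signals)
# starts at zero and rises toward a maximum, so that early, unstructured
# cells in close proximity are insulated from each other's noisy signals.

def sensitivity(t, gamma_max=1.0, tau=10.0):
    # inverse dampening: gamma(t) approaches gamma_max as t grows
    return gamma_max * (1.0 - math.exp(-t / tau))

schedule = [sensitivity(t) for t in range(0, 50, 10)]
print(schedule)  # monotonically increasing, saturating below gamma_max
```

Early on, a low gamma means neighbor signals barely move a cell's beliefs; later, the same signals carry full weight.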
There is a cool paper I can reference from Chris Fields, who, full disclosure, I'm working with and very much admire, who wrote a paper about modularity as exactly the consequence of what I'm talking about: modularity basically emerging as a way to shut off some cells, cells in an evolution towards complexity needing some kind of barrier from the environment, essentially creating a Markov blanket around themselves so that they're not constantly being perturbed by the environment. 17:46 And that's the idea of how modularity would start in the first place: to insulate niches, which will become stem cell niches, from the environment, so that cells that are undergoing very dynamic changes, and that would therefore need to be very sensitive, can be isolated from influences from the environment that would perturb this careful inference of cell type.

18:14 Daniel: Thanks. Dean?

18:17 Dean: Yeah, so in this figure two, you can see that we're talking about ranges, because we've got values on the horizontal and the vertical sides of the box. 18:30 So one of the things I think is really interesting: once, well, we've all spelled it out; if we're dealing with active inference, we want to avoid something surprising, right? The whole variational aspect of this. So again, if we're talking about going from something that has low beliefs to something that gains, I'll say gains in sensitivity, without hopefully killing what I want to say next: if it's about signal adaptation to these top-down conditions which influence the final form taken, is there some way, based on what you researched with this paper, to know as you're approaching that place of energy exhaustion? Like, is there something that the cell tells you when it's finding this too surprising? 19:26 Because one of the things we talked about was that rate of change being so rapid, say, like the UN report on global warming, where we simply don't have the machinery to be able to adapt. Is there something that you found through this paper that said to you, as you were looking at these things: oh, wait a second here, our intent is to kill this thing off, but we're approaching the place where the environment is just changing too fast?

20:03 Franz: Yeah, trying to give you the right balance of detail and hand-waving here. Let's start with the general answer first. So, yes: one way you can quickly see it is in the prediction-error plot, and also in the free energy plot. In a perfect world, you'll get a nice smooth exponential decay of the free energy function. If that keeps bouncing up and down, that would be very problematic. I did some simulations afterwards.
20:31 They're not included in this paper, but they're based on exactly the same setup. On top of this time-sensitivity increase they used (basically increasing the sensitivity over time, having it low in the beginning), I put in an impulse, like a sinusoid or a rectangular function, to simulate what I'm doing in my experiments in the lab, where I give certain signaling inputs to my organisms. I wanted to see: if I randomized that, or made it follow a certain pattern, how well would the simulation still be able to cope? In the case where I had a nice regular pattern, in the free energy decay you hardly saw it; initially, when the sensitivity was very high, you could see it. 21:31 There was a big jumpiness and fuzziness. You could see it also in the belief updates: basically, they diverged initially very strongly in terms of the beliefs they had about themselves. But afterwards, when the sensitivity had acclimated overall and was just varying around this acclimation point, you didn't see that much anymore, in the regular pattern. In the ones that were less regular, it was very much perturbed, and the free energy not only failed to come to this asymptotic behavior but actually started increasing again. 22:19 It was really disruptive, and you didn't see that asymptotic decay. So, to the question of how you see whether you're failing to adapt, even though you don't actually know your external states: I would answer, how much are you switching back and forth between your beliefs? And, I guess more precisely, how certain are you of that? This is something we'd call metacognition, which I've recently been getting into, reading about metacognition in rats and primates and other animals.
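Franz's diagnostic, a smooth, roughly exponential free energy decay when input is regular versus a free energy that stays jumpy or even rises when input is irregular, can be illustrated with a toy gradient descent on a quadratic free energy. The dynamics and constants here are assumptions for illustration, not the paper's simulation:

```python
import math, random

# Toy sketch (assumed dynamics): a belief mu descends a quadratic free
# energy F = 0.5 * (mu - s(t))**2 toward a signal s(t). A regular
# (sinusoidal) input can be tracked, so F keeps shrinking on average; an
# irregular (randomized) input keeps kicking F back up.

def run(signal, steps=200, lr=0.2):
    mu, history = 0.0, []
    for t in range(steps):
        s = signal(t)
        mu -= lr * (mu - s)                  # gradient step on F
        history.append(0.5 * (mu - s) ** 2)  # free energy after the step
    return history

random.seed(1)
regular   = run(lambda t: 1.0 + 0.1 * math.sin(0.1 * t))
irregular = run(lambda t: 1.0 + random.choice([-1.0, 1.0]))

# Average free energy over the last 50 steps of each run:
print(sum(regular[-50:]) / 50, sum(irregular[-50:]) / 50)
```

The contrast in the late-time averages is the point: the regular signal can be predicted and tracked, so surprise keeps shrinking; the randomized one cannot, so the "decay" never settles.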
22:57 And the first step to metacognition, as far as I understand it, is having a certain confidence about your results. There's work with rats where they gave rats the option, in a cognitive task, to just not answer the task: to not push a lever. And they had the option not to push a lever, for which they would get a lower reward, much lower than if they answered correctly, but more than if they answered wrongly. And what they saw is that, in certain cases when they hadn't learned the task correctly, the rats would just not push the levers. And that is used as a first instance of metacognition: 23:45 they must have been unsure about what to do, and then chose not to do it. So I think it's one thing if you kind of lean back and forth, because you always think this and that; being unsure is normal. But constantly going back and forth between certain states of sureness is also problematic. So that's the generalized answer to this.

24:15 Dean: I think the interesting thing there is the emphasis on the back and forth. Metaphorically speaking, we tend to give a lot of attention to the balance piece. But what you're talking about, when you're talking about the environment, is not the plank so much as a moving fulcrum. So how do you adjust? 24:35 Right: the balance, I guess, is the outcome, but the process itself is the back and forth. I wondered about that, because some things you do want. From an epigenetic standpoint, you do want certain cells to die. You want their energy to expire. Absolutely, right? 24:56 That's part of the back and forth, as opposed to the balance. There's no balance metaphor you want there: you want them to die, and they're not dying. So that's really interesting. Thank you.

Franz: I wanted to add one more attempt to answer this more specifically.
25:13 Franz: There's a paper, by an author whose name I'm probably pronouncing wrong, so just Google "valence and active inference." They basically looked at valence in terms of judging whether your actions are, let's say, better or worse: are you assigning positive or negative to outcomes? And they were doing that in active inference schemes, and they were defining it in a particular way; there are other definitions, and not everyone is going to agree about this. 25:42 But they were looking at a scheme based on the rate of change: the first- and second-order derivatives, if I'm saying this correctly, of how the prior beliefs were updated, with respect to the precision, meaning how sure the agents were about what type they were. So basically, the second-order derivative, how much they would change at any given time in their beliefs, is what they plugged in as valence into active inference models. 26:17 It gets more complex than that, but that's the gist of it. So I think in active inference the more specific answer to your question is: look at the rate of change by which your beliefs are updating, and that with respect to your uncertainty. Of course, a lot has been written about this as well, about stress-sensing mechanisms. Stress is a universal driver in biology; it's actually much more important than rewards. I find that we always talk about rewards, but biology is much more focused on stress, because it's more informative and more important to deal with than reward maximization. Of course you can always say stress is just negative reward, but the point being: stress in terms of uncertainty, not minimizing your free energy effectively and stably, will lead to stress. 27:20 And there are experiments, which I think I mentioned last time as well, with humans and with mice, where they applied shocks.
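The valence idea Franz summarizes, reading affect off the first and second derivatives of how confident beliefs are, can be sketched with discrete differences. This is a paraphrase of the general idea, not the cited paper's implementation, and the numbers are made up:

```python
# Toy sketch (paraphrased idea): track a time series of belief confidence
# (precision) and read its first and second discrete derivatives.
# Steadily rising confidence ~ positive valence; a confidence trace that
# keeps flipping sign in its derivative ~ the problematic "back and forth".

def rates(precision):
    # first and second discrete derivatives of a precision time series
    d1 = [b - a for a, b in zip(precision, precision[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    return d1, d2

settling = [0.25, 0.4, 0.6, 0.8, 0.9, 0.95]   # confidence steadily rising
d1, d2 = rates(settling)
print(all(x > 0 for x in d1))  # every first-difference positive: improving
```

A trace whose first differences alternate in sign, by contrast, is exactly the back-and-forth between states of sureness that Franz flags as a warning signal.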
I think they did both mice and humans, and they showed, through cortisol levels, that the subjects were more stressed not just by the electric shocks themselves, but when they couldn't predict the shocks. So it's the uncertainty of what's coming next. If I had to name one thing that most signals you're going to a bad place: you are stressed because you have no idea what's happening. And if you know that something is going to happen, and when it's happening, that's a lot less bad than if you don't know at all. One last, really dark example, to really make it a dark place. 28:08 I was born in East Germany; I was very young when the Wall came down, but my parents very much lived through it. There's a place, now it's a museum, but it was a prison for political detainees in East Germany, in East Berlin. I went there with a class once, and they showed all the techniques they used to interrogate prisoners, including torture. And one of the torture devices was, I believe, a Japanese device where you would be forced to put your head down on this device, and water drops from the top onto your neck. 28:48 So you think, at this point: that doesn't sound too bad, right? The problem was that the drops, because there's a lot of viscosity there, a lot of fluctuation, wouldn't always come at the same time. And it kept repeating, over and over again. The former prisoners, people who used to be imprisoned there, were actually giving the tour. 29:09 So they were telling us that if you're in that device, after a couple of hours or so, these drops will feel like hammers on your neck. And apparently a big component of that was the body not being able to adapt to it, because the drops don't come at a perfect five-second interval. It's chaotic, kind of semi-randomized intervals. So I think there's a huge part in not just looking at stress over time, but at how you can adapt to stress.
29:43 And in order to adapt to stress: adaptation, to me, always involves a component of prediction. If you can't predict the source of the stress, or when it happens, it makes things a lot worse. That has been shown extensively in humans and also in lower organisms. And I believe very strongly that there's something fundamental there to any biological system, a fundamental drive. If you subscribe to the point of view that minimizing uncertainty about your environment is a fundamental drive of life, then with that you also say that minimizing stress, and being able to predict the source of stress over time, is fundamental, and the lack thereof leaves a very endangered system.

30:28 Bleu: Thanks. So all of that is super interesting. Okay, so I'm going to start at the front and work backwards. So, in terms of uncertainty and how that causes stress: I'm really kind of reluctant to use the word stress for a biological system. 30:51 Like, I think about stressing a biological system as pushing it out of equilibrium. But maybe that's just a stimulus, right? Like: I want you to rebalance and go to a new equilibrium, or turn on heat shock proteins; there are a variety of different things. In many ways you can view stress as a stimulus to do something new, or to perturb the system in a new way. But in terms of uncertainty, I think there's a lot of truth in how uncomfortable people are with uncertainty at a cognitive level. 31:25 And you can even see it in the stock market before an election, before the election results are in: the stock market goes crazy, because nobody knows. I mean, it's not that one candidate is preferable over the other; it's just the unknowing. People freak out, and that speaks, I think, to metacognition. Also, I wanted to back up a little bit, if I can, to your discussion of information flow and how that might be difficult to quantify.
32:00 And Daniel, I don't know, can you flip to the section for me, please, on generalized flow? I think we started to get into that a little bit last week, and we did discuss it in the dot zero, but I know this was very mathematically technical, so I just have a few questions about it, and maybe you can help explain it in a way that's perhaps less technical; specifically, information flow. And Chris Fields is coming to discuss the FEP for generic quantum systems next week, so I can similarly interrogate him about some of this stuff, and I'm looking forward to it. But there's a discussion in the paper about probability flow and information; a probability current, I think, is actually the term used, and it gets into information currents. 32:55 Like, if you look up what an information current is, there's the idea of the von Neumann information current, and that is explicitly related to quantum systems. So is there any similar way to measure an information current? Because even when Mike came to the livestream and talked about expanding cognition, biological cognition, the computational boundary of the self: when you expand biological cognition, you're basically expanding your informational awareness. So is there anything like an information current, and how can you best relate that within a biological system? 33:36 Is it like an expansion of your computational boundary, or does it look like just more information going in and coming out?

33:46 Franz: I think it depends on what level you're looking at. I would say it's both. The second part you mentioned, the information flow that comes in and out, that's something we just have more access to, biologically speaking, experimentally, right? 34:03 You can absolutely measure all these sensors: receptors being activated, genetic transcription in response to that.
So that's what we have the best chance of quantifying experimentally. And computationally, it's something you can monitor, and then also couple that with, well, good thing you mentioned Chris, because then I can relate it to him: that's basically what he's also interested in. 34:35 How does physical energy, namely metabolism, factor into all of this? Right now this comes for free. That's something that theoretical biology often sweeps under the table, because the idea is that energy is abundant in life, which is to an extent true: there's plenty of sunlight, and there are plenty of things to feed on. 35:04 But at the same time, locally, there is strong competition. Just because you are living in a reservoir that has plenty of food and sunlight, that doesn't mean that you're locally going to survive, because there are others locally competing with you. The same is true, I mean, this is like when you talk to people that have problems with the idea of entropy. 35:34 I don't want to name names, but they say: if entropy always increases, if chaos in the universe increases, how do we see so much more structure evolving? And the answer to that is: well, just like in a bath of water, if you have an oil droplet, it's going to come together, because overall that makes more degrees of freedom for all the water molecules. So the entire system's entropy did increase. 35:58 But you managed to find a solution that increases it while locally creating more order. That's the drive, I think, of life overall. The quote, probably from the talk that got me into biology, was from a biophysicist in Heidelberg, who was saying life is not about energy, it's about entropy. But the two come together, right? Yes. 36:21 It's about how much information is available.
But then, to extend that: once you've subscribed to that, once you have formed these localized structures, then energy becomes important again. That's the point, I think: when you have these highly spatially organized systems, energy suddenly does become something that is valuable again. 36:46 Yeah, I think I kind of lost my train of thought there. The boundary of self, that's right; the other part was information flow, whether we can measure that, and if you have good ideas there, let me know. 36:57 What's needed is a kind of understanding of how metabolism is really hooked into the information flow, whether it's built into the model or whether it's ad hoc, just down the line: you give it certain resources, and after a while it just exploits them. The boundary of self is interesting because, of course, in the cell we think what we have there is a membrane; the boundary of the cell is fairly obvious. 37:22 It's not as easy as that, I think. And of course, if you look at multicellular organisms, it becomes tissue-scale, and then it becomes a lot more difficult. But even on the cellular level: if you have certain receptors, and certain activations of them, then in a model, in any generative-model scheme, and especially in an active inference scheme, you also have a model of yourself, right? And at some point, what you secrete, your own signals, creates a feedback into what you're sensing, after all. 37:58 That whole system is very much perturbed in cancer cells as well: how much are you actually receiving from the environment? Are you basically just secreting and secreting, and you don't really react anymore? So, the boundary of self: Mike has already talked about this, and he's been much more eloquent about it. 38:20 But the idea is that the boundary of self is encoded specifically by temporal and spatial constraints.
Like, how large your boundary is, and more generally your self-model, depends very much on your sensory range and memory: how far do you sense spatially, how much of a radius do you sense, and how far back do you have encoded as memory, how much of the past persists? 38:53 So even if you have a cell with a certain membrane: if that cell is somehow sensing beyond that boundary, and it creates a boundary together with these closely coupled cells that surround each other, then suddenly the environment beyond becomes moot. The boundary of the self has essentially expanded, I would argue. I think that's the core learning here. I hope that somewhat answered the question.

Daniel: Awesome. Bleu?

39:34 Bleu: So, I was curious: you mentioned that there was a talk that got you hooked into biology. Was it Eric Smith and his discussion of biology and entropy? Because that's also fabulous.

39:42 Franz: It was not. I'm realizing just now, no, that's pretty embarrassing, that I don't remember the name of the person who got me into biology. As I'm speaking English right now: it was all in German back then, and my brain has problems switching between those two contexts, as well as the languages. I'll probably come back to it within this talk.

39:59 Daniel: Eric Smith does a wonderful job, though, of discussing biology in terms of entropy, so if you are not familiar with his work, I would highly recommend it. Yeah, that really reflects on our discussion: what parts of the self-model are important to carry forward, and then what are the affordances for direct and indirect self-modification. 40:27 So that's one piece. Like, you mentioned the autocrine signaling, and that's not even a contentious component; it's just being modeled in a different way with active inference, rather than just as a molecular event that's happening, where cells secrete molecules that they also possess receptors to measure. It's putting that into a framework of reflexive self-modeling and stigmergy and niche modification. But one other point that I thought was maybe useful to highlight was this sensitivity. Just like with many of the terms in active inference, we're seeing that there is a narrow, quantitative sense and then a broader sense, because in any dynamical-systems model you might hear about the sensitivity of a parameter, which is how much changing that parameter changes the outcome of the system. So that's the quantitative sense. 41:25 And then it was also brought up that it's important to model these second-order derivatives: how fast things are changing relative to expectations, or how fast they're changing how they're changing. And that is what the generalized flow is: all those higher derivatives.
So we can use that quantitative sense of sensitivity to look at how parameter changes in the generalized coordinates of motion matter. But we can also, perhaps, with cognitive modeling, take this discussion of sensitivity in the way that people usually mean it, like the sensitivity of someone's emotional state to what they see, and then model that using cognitive parameters like evidence and affect, even though that's quite a disjoint use from the parameter-sensitivity discussion on the generalized coordinates. So it was an awesome discussion: what is stress, and how does that relate to the parametric, sort of neutral, perspective on what stress or sensitivity might mean? 42:39 Just the model-descriptive versus some of these functional, or even phenomenological, ways that these terms come into play. Bleu?

Bleu: So that's super interesting, thinking about sensitivity and stress just in terms of cognition. I know people that have sensory processing disorders get super aggravated by something like a bright light or a windy day. And sensitivity does play into, maybe, the flow of information, because I'm suddenly sensitive to this, and I'm also sensitized to every other thing in my environment. 43:24 Like, someone's screaming over here, and it's windy and it's hot and it's bright, and then I become just so overstimulated, right? People have these processing conditions, and is sensitivity something that has momentum? You have enhanced sensitivity, and then all of a sudden you're so sensitive. We hear people say stuff like that: stop being so sensitive. Just in terms of cognitive processing, 43:48 I thought that was super interesting.

Franz: I think that's an excellent point. I mean, that shows exactly how important it is for humans, and I'd argue for lower levels as well, across evolution, to be able to model our sensitivity and to change our sensitivity, right?
We would never be able to have conversations in our environment if we couldn't change, consciously and sometimes subconsciously as well, our sensitivity parameters. And I think that also feeds into metacognition, where you learn, well, I'm not paying attention enough, so I force attention onto something to try to tune it in. 44:29 Speaker A: On the cell level, I think where that feeds in is by learning over time how many receptors are being activated and what that feeds into the stream. There is this component where the cell can up its sensitivity by expressing more receptors: if it's not getting its input, it can increase the proteins responsible for sensing these things. And, talking about ion channels specifically, which we work with, there are different versions of ion channels that have different rate constants. The same is true for other active proteins as well as pumps. So that becomes a much more explicit modulation of sensitivity over time, in a way, to better predict, cope with, and adapt to the environment. 45:30 And adaptation, keep in mind, involves prediction in most cases, if not explicitly, then implicitly. 45:39 Speaker B: Awesome. Speaker A: And by the way, the person who got me into biophysics was in Heidelberg; the name will come back to me. Speaker B: Cool. Just to connect that change in sensitivity to some of those mechanisms: if it's the sensitivity to a given hormone or a given circulating molecule, then that sometimes gets accomplished biologically by changes in, say, the membrane receptor density or changes in downstream signaling pathways.
So even if the amount of receptor in the membrane stays constant, you can release more of the neurotransmitter into the synapse, or you could have more or fewer receptors, or more or less of all these regulators and phosphorylating proteins, et cetera, in the downstream signaling pathways, because there's a lot of complexity in that space. 46:35 There are a lot of knobs, including rates of change, rates of rates of change, and lag effects with transcription factors; basically anything is possible. And then at the organism level, to give one example: think of a nestmate agent being sensitive to interactions. A nurse, who is a younger ant, is initially more susceptible or more sensitive for multiple reasons, at the antennae, at the brain, and just spatially in where the nurse is. Then there's this development toward being a forager: being sensitive to forager cues, finding oneself in spaces where forager cues are useful. So there are a lot of ways that these changes in the type of sensitivity are already things we talk about in developmental biology. And it's so cool to see how that connects to the molecular mechanisms, the affective angle, and then the underpinning of the generalized flow. 47:53 Dean: Yeah, so you guys can push back on this hard if you want, but one of the terms that I coined was stress programming, or at least programming with stress assumed, and that is basically what learning, formal or informal, is, because there's a certain amount of stress involved, be it positive or negative. I kind of look at it through that viewfinder. 48:24 I think it's interesting, back to the back-and-forth thing, that stress programming can be safer or it can be riskier, as long as it's not killing the learner. It's safer in the sense of interoception; I think this paper showed that you can intervene. But it's riskier as distribution, like sending something out into the unknown. It can be safer as teaching or being taught. 48:54 It can also be riskier as unteaching or being untaught, just throwing somebody into a situation and seeing whether they can paddle or not. Lastly, it can also be seen as modeling, in the safer sense, because we've reduced some of the variation, or riskier in terms of coordinating, meaning the person has to figure it out as they're flying the plane. So I wonder what you guys think. Is there some agreement that learning in general, be it formalized or informal, has an aspect of stress programming that is assumed? Or am I way out? 49:48 Speaker B: Awesome question. Bleu or Franz, what do you think? 49:56 Speaker A: There are definitely aspects; there are of course explicit incorporations of stress into learning. But your question, I think, is: is it kind of always implicitly there, even if not explicitly so?
One interesting thing folks discussed: Bleu mentioned earlier that stress can be interpreted as human-perceived stress hormones, but it can also just be perturbation from equilibrium, from homeostasis. And these have actually been combined in homeostatic reinforcement learning, where the reward in reward-based reinforcement learning is made explicitly about the homeostatic set point. 50:36 Then stress suddenly becomes a lot more explicit, in the sense of perturbation from your homeostatic set point. In the active inference scheme, you could basically define stress as your overall difference from what you predict, and again, with those bounds I was talking about: how much is your uncertainty changing? Are you minimizing uncertainty over time or not? 51:10 I think it's hard both to say stress is not involved and to avoid just calling everything stress; it depends on how you define stress. But if you take something like: stress is that you're not right now in an energy-minimized state, in the active inference scheme, or not at your homeostatic set point, in a more basic scheme, then stress, I would argue, is intrinsic to learning. Because without that difference, if you don't have any drive to learn because you think everything's perfect, then it's hard to have learning. 51:55 So specifically, if learning is about contact with the environment, trying to adapt to it and find a certain equilibrium with it, then I think stress becomes very much explicit, and it's ubiquitous in learning. 52:19 Dean: Okay, I just want to add one thing. That's perfect, because I always talk about the minimum of two, and I think you're absolutely right, Franz. I think you have to then look at stress as being either voluntary or involuntary.
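The homeostatic reinforcement learning idea mentioned here, reward defined by movement relative to a homeostatic set point, can be sketched in a few lines (a toy sketch; the set point value and the `drive`/`reward` names are hypothetical choices, not from any specific paper):

```python
SET_POINT = 37.0  # hypothetical homeostatic set point (think body temperature)

def drive(state):
    """'Stress' as the deviation of the current state from the set point."""
    return abs(state - SET_POINT)

def reward(state, next_state):
    """Homeostatic RL sketch: reward is the reduction in drive over a transition."""
    return drive(state) - drive(next_state)

# Moving toward the set point is rewarding; moving away is punishing.
toward = reward(39.0, 38.0)  # drive drops from 2.0 to 1.0 -> reward +1.0
away = reward(38.0, 40.0)    # drive rises from 1.0 to 3.0 -> reward -2.0
```

On this reading, stress is simply the current drive, and learning is pushed by exactly that nonzero difference from the set point: with no difference, there is no drive to learn.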
I think that's the minimum of two that you have to look at, because if you have voluntary stress, it's measurably different from the involuntary type. Yeah, you're right. 52:45 I don't think we could just throw everything under one umbrella and make it a monolith. I think we have to partition it right from the very get-go and say that all programming, whatever the curriculum is, even the one those stem cells have, some of it's going to be voluntary and some of it's going to be involuntary, based on the situation they're placed in. So again, I wasn't trying to pull it in that direction, and I didn't bring up stress, but I really like that it was brought up.

53:24 Bleu: From a neuroscience perspective, it's interesting: voluntary and involuntary stress are so different. Maybe Daniel will remember, we discussed at some point that when you're exposed to a stress that you impose on yourself on purpose, like strength training or something like that, versus some externally imposed stress that you didn't volunteer for, there's a difference in affect. I feel like we looked at a paper; I can't remember it off the top of my head, though. But in terms of sensory processing, it's really interesting. 53:57 In neuroscience, we actually adapt, right? Here's a great example: you turn on the car radio and you're jamming out in the car for, I don't know, 20 or 30 minutes, and then suddenly it doesn't seem as loud. So you turn it up, and you're jamming out, and still it doesn't seem as loud, so you turn it up again, because the same level of stimulus produces a decreasing sensation. And that's true visually too: when you first turn on the light, you're like, it's so bright, but then you adapt. 54:31 So in terms of sensory processing, we have a mechanism for adaptation. And I wonder about cells, not neurons, because I'm specifically talking about sensory processing here, but I wonder if cells do the same thing. I mean, the receptor shuts, or the receptor is triggered and then can't get activated again for a certain amount of time. So even receptor binding in and of itself is an adaptation mechanism: you can't constantly dump the chemical onto the receptor or open the channel or whatever. 55:03 That's not biochemically possible. But I wonder if cells have some other adaptation mechanism outside of this channel opening or closing. Is there more than the mechanodynamics of the channel that enables cellular adaptation?
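The adaptation effect described here, a constant stimulus producing a decreasing sensation, can be captured by a tiny model in which an internal adaptation level tracks the input (a hypothetical sketch; the gain `k` and all names are illustrative):

```python
def adapt(stimulus, steps, k=0.3):
    """Simple adaptation: the felt response to a constant stimulus decays
    as an internal adaptation level catches up with the input."""
    a = 0.0            # adaptation level
    responses = []
    for _ in range(steps):
        r = stimulus - a         # perceived intensity
        responses.append(r)
        a += k * (stimulus - a)  # adaptation drifts toward the stimulus
    return responses

rs = adapt(stimulus=10.0, steps=20)
# The same physical input produces a shrinking sensation over time.
```

Turning the radio up corresponds to raising `stimulus` once the response has decayed, after which the decay simply starts again from the new level.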
55:29 Speaker A: I can think of a couple, though I'm no expert on this. First, on the epigenetic level: if you basically recruit more histones and have more tightly packed chromatin, you're going to have less gene expression no matter what kind of signaling input gets to the nucleus. That's one I can think of. Another one is mechanical. I hope I'm not overgeneralizing, but most of the signals that receptors are sensing have to be transported to the nucleus somehow to cause transcription, and that usually has to travel across the cytoskeleton. So if you change the mechanical properties of the cytoskeleton itself, and again, I'm out of my league here, 56:21 but that's a very well-imagined one. Lastly, in the intercellular context: in the neuronal context you have synapses, and in the non-neuronal context you still have gap junctions and other complexes that basically allow transport between cells, and those can also be modulated by the cells. So there you can also get habituation, and there are definitely examples where people have shown habituation; there's active research showing dishabituation and even the learning of certain patterns as well. 57:09 I think that's true. So I'm really interested in the cytoskeleton, because a lot of the receptors are embedded in the membrane, but there's not a lot of work that looks at cytoskeletal dynamics in correlation with receptor dynamics, right? That's a field where I haven't seen very much work: how does the underlying cytoskeleton contribute to the properties of the membrane and how a cell might be behaving? And just in terms of things having to get into the nucleus, 57:43 Speaker D: I don't know, I'm iffy on that one.
And it's just because I did very early work transfecting cells in a variety of different ways. To get the cell to express a gene, you have to get the gene into the nucleus, right? How does it get into the nucleus? 58:00 Nobody can answer you. If it doesn't have a nuclear-localization target on it, how is it getting in? Is it just osmosing itself through? How exactly is that happening? And you call the vendor, like, well, I'm trying to use your product, whether you're electroporating or using some kind of lipid-mediated transfection, and they don't know. 58:17 They have no idea, actually. So it's just a super interesting prospect. And in terms of both that and the cytoskeleton, there's a lot of dynamic remodeling that occurs during the cell cycle, right, during the phases when a cell is dividing and growing and becoming two cells; hence the nuclear membrane point. But in terminally differentiated cells, like a neuron, something that's not undergoing constant remodeling, it's curious to see how that can happen. And I'm interested if anybody has any good, hardcore mechanistic papers to point out; if anybody listening on the livestream wants to drop them in the YouTube chat, I would really love to take a look. 59:09 Speaker B: I really like Dean returning to this idea that there's a continuum with multiple archetypal dimensions. Not that it's all going to map onto just two, but we know it's at least locally going to be a minimum of two: there's the one mode of interacting in an educational or training setting, with a teacher and a taught, which could still be in an interactionist or an instructionist way, versus the unteaching and the untaught. That's the difference between the athlete having their form observed in a tight feedback loop versus when they're on a sojourn away from that sort of training context.
And so there's one kind of environment or setting where one kind of stress is fundamental, and another type of environment where a different kind of stress is fundamental. And then, Franz, that was an awesome point, moving from that basic homeostatic framing, where stress is usually considered at the level of the state or the first derivative. 1:00:24 For a thermal organism, thermal stress is going to be when it's hot or cold, and that's going to be a bowl, or a V, or a bathtub, or some other thermal stress curve; it's about temperature. Moving into the generalized coordinates of motion, with temperature and change of temperature and change of change of temperature, all the higher derivatives of temperature, expands the space a lot, because maybe a temperature that you reach slowly, over ten years, is a different amount of stress than jumping there instantly. Time matters, and the rates of change capture that time dependence in an instant, with a snapshot vector. 1:01:11 So that's the formal use: how we can look beyond just the stress of temperature. And then we introduce the whole cognitive stress, because, as you said, for the active inference entity, this is sort of the first resonance, the one whose violation of integrity results in physical death. But then stress moves to higher or different analogous settings, and that could be: how fast am I learning? And as pointed out by Dave in the chat, and by Dean and others, it really matters whether somebody has the agency or affordances to change it. 1:01:54 And so my closing point on the voluntary and involuntary: grad school, or any research or educational environment, but one that many people experience, can be stressful. It's also very often a privileged setting to be in, where one is taken care of, sometimes in a way that even one's neighbors won't be.
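The generalized-coordinates point can be made concrete: a snapshot vector holds the state together with finite-difference estimates of its rates of change, so a slow approach and a sudden jump to the same temperature are distinguished. A minimal sketch (the function name and sample values are hypothetical illustrations):

```python
def generalized_coords(samples, dt, order=2):
    """Approximate a snapshot in generalized coordinates of motion:
    the current value plus finite-difference estimates of its derivatives."""
    coords = [samples[-1]]
    diffs = list(samples)
    for _ in range(order):
        diffs = [(b - a) / dt for a, b in zip(diffs, diffs[1:])]
        coords.append(diffs[-1])
    return coords

# Two ways of arriving at 40 degrees: creeping up slowly vs one sudden jump.
slow = generalized_coords([39.6, 39.7, 39.8, 39.9, 40.0], dt=1.0)
fast = generalized_coords([20.0, 20.0, 20.0, 20.0, 40.0], dt=1.0)
```

Both trajectories end at the same temperature, but the snapshot vectors differ sharply in the first derivative, which is exactly the time dependence the speaker says the higher orders capture.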
And that is something that I saw first- and second-hand, and just thinking about the stress, it's like: why is it stressful to just read a few papers some days? So there are interactions with what is voluntary and involuntary, how we commit ourselves to different kinds of stress and of what kind; it's like, well, I wanted it to be novel, but not that novel, and how we actually make those action selections in uncertain environments. 1:02:53 Bleu: Just to add on to the voluntary versus involuntary: grad school is an entirely voluntary endeavor, obviously, but high school is not. And I dropped out of high school at 16, as soon as I was old enough to drop out. I was like, I'm out, I'm not doing this compulsory regimented program. 1:03:11 Sorry, no regrets, right? I started college, and obviously I have a PhD and did well, but that's the difference between involuntary and voluntary: when it's compulsory, it becomes torture. I mean, that's just my view. 1:03:32 Speaker A: We had a discussion in the computational group recently about this whole idea of how you quantify any kind of agency. And Josh Bongard, who works with us, brought up in that context that instead of talking about agency, which is hard to quantify, you can talk about empowerment in the computational sense. Empowerment is about how much your actions can influence your sensations. Again, this becomes very clear in an active inference scheme, where you have the Markov blanket. 1:04:06 So if, for example, there's no action you can take to change something, we can think of that as innately stressful, but we can also define that fairly clearly in a computational sense using something like empowerment gain.
I like that you brought this up, because it drives home a point that's fundamental to understanding stress in the human context: anyone who's ever dealt with depression, or knows people who suffer from depression, knows that it's never really about the objective amount of stress but about how it's being perceived by that person. And I think this definition of empowerment helps a lot to understand that better: how much control do you have over it? 1:04:52 There are other things that of course enter into it, but it also makes it fairly clear that a lot of the fundamental principles must hold true at lower levels as well. Because with these definitions, as soon as you prescribe action, of course you can argue that there are no real actions in cell biology, and that's okay, then you just don't want to talk about that level. But if you do look at active states in a cellular context, then you can fairly simply, in an active inference scheme but in other ones as well, define and measure empowerment over time by looking at how much actions actually change the sensory states of the cell, and then definitions of stress from that point of view become fairly explicit. 1:05:42 Dean: Can I ask you a question? Because you mentioned, when you were with us on the .1 for a bit there, and I don't know if you re-mentioned it today, you said you went to Mark, I think, and you said, I want to do something cool. And that brings it back to the voluntary and involuntary piece. 1:06:04 I think you said you'd never heard of active inference before; you were pointed in that direction. 1:06:14 In the context of what's voluntary and involuntary, when you said, I want to do something cool, that sounds like you wanted to do something voluntarily.
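The computational notion of empowerment discussed here can be illustrated in a toy deterministic setting, where the action-to-sensation channel capacity reduces to the log of the number of distinct sensory outcomes an agent's actions can produce (the cell fates and signal names below are hypothetical illustrations, not from the paper):

```python
import math

def empowerment(states_for_action):
    """Empowerment sketch: with deterministic dynamics, the capacity of the
    action -> sensation channel is log2 of the distinct reachable outcomes."""
    return math.log2(len(set(states_for_action.values())))

# Hypothetical organizer-like cell: each signal it can emit drives its
# neighbors to a different fate, so its actions strongly shape what it senses.
organizer = {"signal_a": "fate_1", "signal_b": "fate_2", "signal_c": "fate_3"}
# A receiving cell whose actions all leave its milieu unchanged.
receiver = {"signal_a": "same", "signal_b": "same", "signal_c": "same"}

e_org = empowerment(organizer)
e_rec = empowerment(receiver)
```

A cell whose signals produce distinct outcomes has positive empowerment; a cell whose actions change nothing has zero, matching the organizer-versus-receiver contrast drawn later in the discussion.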
Then you got this deep dive into the ActInf Lab, pushed that up against your biology background, which is something you've obviously done because you like it, right? So going forward now, with that voluntary piece that got you into active inference, what do you see active inference doing in terms of this empowerment question? Because Daniel's written it out here with a question mark behind it. How do you feel more agential? 1:06:57 Sorry, I always mangle words. Anyway, I'm just curious what you think now in terms of what sort of empowerment you've 1:07:09 taken on. In the sense that now that you've got this feel for what active inference can do, what some of these math formalisms provide, and what this statistical world bridges, what are you thinking going forward in terms of reapplying what you pulled out of a different bucket and set down in the bucket you're most familiar with? Franz: So you mean on a personal level, for my decisions? Dean: Yeah. Going forward, trying to use that active inference sense in some of those decisions, what kinds of environments do you see yourself projecting yourself into that you may not have before engaging with, encountering, or being exposed to active inference? 1:08:04 Franz: That's a question! I think, in the end, my wife and I both love the word serendipity, the concept of happy accidents, and there are all kinds of connotations there in an active inference scheme. If, in active inference terms, you have super-high precision, very strong prior beliefs, if you are confining yourself and you don't allow evidence to change your mind, then you're almost always going to be upset, because your precision is set too high. Right?
There's a paper coming out soon, with a collaborator as well, that looks at the right precision in this context. 1:08:53 Speaker A: So that's something anyone building an active inference model will know: you have to keep that in mind when you set your precision. For me personally, it really is the idea, and this is not an idea of mine by any stretch, that any human tells stories about the environment, but mostly about themselves, and storytelling in my sense is nothing other than making a generative model, in an active inference scheme, and settling on it. I'm much too young to be giving life advice, but I think to achieve any kind of happiness you have to be really careful what story you tell about yourself and how strictly you define it. And there are definite advantages; personally, I think I've always had better luck 1:09:41 by not defining it too strictly and really looking into what your environment shows you. And I know people who have a very hard time with that, who have very strong models, like: I'm going to get married at that age, and then I do this, and then that. And when things don't happen that way, I think that leads to unhappiness. So in that context, what can active inference do for you? I think it makes explicit things that we all kind of hear about at some level, maybe from folk psychology, just people giving you advice, parents saying things; I don't want to say it's the only source of those insights. 1:10:29 I think you can get them lots of ways. But what I like about the active inference schema, and this is its biggest advantage in biology too, is that you get a very explicit structure about information flow, even for very simple systems. And because it's been primarily applied in the neurosciences, you get intuitive results from it: what happens if you have too-high precision?
1:10:56 What happens if your priors don't change? What happens if energy isn't being minimized? All these things I can fairly easily explain in the context of human behavior, because that's where it's been applied and where we all have an understanding. But from the biology point of view, there's a lot we can learn from it, if you look at the fundamental mathematical definitions, though of course you then deal with the baggage that you get from the neuroscience, which I've had to deal with a lot, which is fair. 1:11:25 But for your life, I think it just helps. Once you see how much study has been put into mis-inference, like bad inference, and of course people doing tests in the active inference scheme on the psychology and behavior of task states, then that's one lesson I can definitely take, if I hadn't already: start being mindful of your priors. Be mindful of the rate of change. Like, what are you really stressed out about right now? Where is the worst uncertainty coming from, and how much does that throw you off? 1:12:08 Daniel: In a sense, to kind of bring all this home a little bit: the biggest limitation of any algorithm, of any model, is that you are defining the generative model beforehand. Not just in the active inference scheme; any papers that come out criticizing the idea of artificial intelligence point out that you always have a predefined model. But life does not do that. Biological systems, evolution, generate a bunch of diversity in the background, and from that they pick the best paths to constantly evolve the generative model. So that's a limitation of any algorithm, which can be overcome by chaining different algorithms together, by evolving systems, by having models make models. 1:12:53 There's some cool work on that too.
But that's something, I think, you need to keep in mind both computationally-scientifically and personally: at what point do you change your own model and allow for flexibility? That's one of the factors that makes humans, right now, still superior, I would say, to any kind of algorithm, because algorithms all come predefined with lots of hard priors. Again, I'm saying this now, but I also don't hold it 100%. 1:13:21 I don't actually believe in fundamental differences between different intelligences. But that's something that at least intuitively makes sense to us, whether or not it's true scientifically; I have my doubts, but that's going off the rails. 1:13:38 Awesome. Wow. 1:13:43 A few things. We often talk about concordances between different systems, and active inference brings the cognitive apparatus that was developed for humans and starts, at least instrumentally, if not from a realism perspective, to project it. And so it allows us, as a transdiscipline, to talk about systems and map their analogies and models in a way that would help us find the resonances between, say, cellular metabolism and economics: taking a vague feeling that those might have some similarities into using the same notation, terms, forms, et cetera. So that's one very interesting piece about active inference, and also the limitations, in terms of at what order, and how much time and attention, is allocated to this structural aspect, and how many different families of structures are provided. And then to your well-communicated uncertainty about if and how biology is different from other kinds of processes: 1:15:06 that's an awesome question, especially connecting back to your point about the reservoir of energy versus how locally there can be a different energy or entropy balance.
So it's almost like there might be little pieces that can be coupled to digital signal processing with super-high fidelity, and that will be taken as a win for the realists. And then there might be other parts where instrumentalism bridges, because we're just making a model of this one smaller thing and this one larger thing. And then in cases where this is fully computationally defined, fully described, there's a case for the realist. 1:15:56 Bleu: I'm wondering what empowerment looks like at maybe a lower level, even at the cellular level. 1:16:08 Is cellular behavior, or lower-organism behavior, voluntary versus compulsory, and is there a way to measure that? Obviously you can trap an animal in a box and frequently it'll try to get out, but if it doesn't want to go in, does it just hang out there? Does that happen? Maybe. 1:16:28 I don't know if anybody knows any examples of what empowerment looks like. It's a super interesting concept to me. 1:16:42 Speaker A: In effect, if you take that definition, the ability of actions to change your sensory inputs, your perceptions, then once you have a definition of what the actions are, it's fairly easy, because you don't have to go the internal route; it's about the boundary, not something internal. States are even simpler: you look at the flow of states across your Markov blanket, which you can show forms naturally, as Karl has shown. From that point, if you want to identify active states, you can measure that fairly easily. As an example, I would look at whether cells actually interact with each other and really change other cells' types. 1:17:33 Morphogenesis, as traditionally established, is really driven by organizers: it's always a subset of cells that drive the other ones.
I would argue that those cells are probably very empowered, because their actions cause differentiation of the cells around them, which then very much changes those cells' gene expression. So their actions very much control that, whereas the cells on the receiving end have a low impact, because their own signaling probably doesn't count for much; they often just don't make a lot of that signaling. 1:18:16 Not being organizers, they are going to have a lower empowerment, I would think. 1:18:25 You're asking me? Yeah, as our guest, and anyone else is welcome too. 1:18:35 Speaker A: Well, I personally would like to get to a place where there's more of a capability of making actual predictions instead of just making these models of active inference, 1:18:51 those two together, right? In a sense, a lot of work went into this whole SPM software that they developed in London. At this point, you can actually take fMRI data and then fit a dynamic causal model. 1:19:04 You can really postulate different causal models and then do Bayesian belief updating; you can do hypothesis testing in the sense of which hypothesis more probably caused what you see. If we could get to that level: get high-fidelity images of the developing embryo, get signals over time that you can measure and feed back into your model, and really ask what causal model, what structure, would cause these sensations, the data that you're receiving, that would be a huge step forward. 1:19:45 And microscopy is really making some great advances on that. One example I can give: a friend of mine, Dimitri, works on that in Heidelberg with a light-sheet microscope. 1:19:57 There are a lot of other groups; I know about this from his master's and PhD work. But there are groups that do high-fidelity light sheet from two sides.
1:20:10 Near-confocal resolution, but with high temporal resolution, of a developing organism, with high-throughput processing of the data. That's data you could collect where you can start to make causal models in an active inference framework and then basically test the ideas we already have, like the ones I mentioned in morphogenesis, and see how much that fits our notions of belief updating and actions. I would love to see that in my lifetime. 1:20:56 Speaker B: Wow, super awesome. So just to restate that: SPM has been developed in the context of behavioral neuroimaging. It allows a combination of observable features of behavior, such as what button somebody pressed or what image somebody was shown, and observable features of the brain, like the measurements coming off of the EEG or the fMRI or the MEG. 1:21:25 And it allows integrated modeling of those different kinds of parameters, along with dynamic causal modeling for the underlying dynamical system identification; there's a lot more in the SPM documentation on that. And so active inference here, along with some advances in, for example, the ability to look at the process of embryogenesis and track cellular location for a whole developing organism through time: that can be fit in SPM as if those were the fMRI measurements of the embryo, and then the underlying mechanics of development, like a known or hypothesized morphogen gradient or mechanical relationship, 1:22:15 that's part of the model, like the underlying neural systems model in SPM. And so it's kind of a great bridge. And if it can be stated, if it's likely to exist, it sounds like you're taking some actions that will reduce your surprise about it. Yeah, pretty cool.
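The hypothesis-testing workflow described here, postulating different causal models and asking which more probably caused the data, boils down to Bayesian model comparison via marginal likelihood. A minimal sketch (this is not SPM itself; the two toy models and the data are hypothetical):

```python
import math

def log_evidence(data, likelihood):
    """Log marginal likelihood of i.i.d. data under a fixed predictive model."""
    return sum(math.log(likelihood(x)) for x in data)

def model_posterior(data, models, priors):
    """Bayesian model comparison: posterior probability of each model."""
    logs = [math.log(p) + log_evidence(data, m) for m, p in zip(models, priors)]
    mx = max(logs)                              # stabilize the exponentials
    ws = [math.exp(l - mx) for l in logs]
    z = sum(ws)
    return [w / z for w in ws]

# Two hypothetical causal models of a binary readout (say, a cell's state):
m_a = lambda x: 0.9 if x == 1 else 0.1  # model A: the state is mostly 'on'
m_b = lambda x: 0.5                     # model B: the state is a coin flip

data = [1, 1, 1, 0, 1, 1]
post = model_posterior(data, [m_a, m_b], [0.5, 0.5])
```

Here the data favor model A, so `post[0]` exceeds `post[1]`; SPM's dynamic causal modelling performs the same comparison, but with far richer generative models of neural (or, as proposed here, developmental) dynamics.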
So on the microscopy note, I'm getting ready to receive and set up an instrument, and I'm really looking forward to the multi-parametric capability: I can look at structure and topography and also do fluorescence imaging, so I'm really excited to have those affordances available. I've used an AFM, and it gives you the biomechanics of the sample, but I've never been able to actually look at the sample; you're just mapping without being able to see it. 1:23:20 Bleu: So I'm really excited to be able to see and measure the mechanics of what's going on in a cell, the cytoskeleton parameters we were discussing earlier, and how that manipulates or enables the cell to do different things. 1:23:41 Speaker A: Cool. 1:23:48 Speaker B: Let's see if there are any other things, or we can also do a closing round, whatever people would like. So one thing I really liked was the averaged signaling figure. I don't think we went into figure five very much last time in the dot one. Could you talk about the averaged signaling, kind of unpack that for us in your own words? We're not seeing the slides anymore, Daniel; I don't know if you know that. Yeah, it's also kind of lagged on YouTube, so I'm recording it and I'll re-upload the high-quality one. 1:24:32 It's kind of weird: it's a normal video chat, but the YouTube stream link is disrupted. But anyway. 1:24:46 Speaker A: You don't see the... I'm showing figure five now. 1:24:54 We can do that. 1:24:58 Speaker B: Great, thank you. Awesome. Yes. So the top row here, the normal simulation, is inference, right? 1:25:06 Speaker A: You start off with the cells at the beginning being kind of vague: they're barely specified, with low prior beliefs about what type they are. As the simulation runs, that changes. The color tone here means that the more saturated they are, the more sure they are of being a certain cell type, A or B. But one cell, the one the arrow points towards, had a reduced sensitivity and basically had a really hard time: it stayed unspecified much longer than its neighbors did. And in the end it actually infers the wrong cell type. 1:26:01 Not only is it in a wrong position.
There's no cell in the original pattern at that place, and you would expect it to be either yellow, red, or somewhere in between, but not blue, as it is down here. So in a sense, this is the initiation of cancer. I frame it that way because I think the first step in cancer initiation is misidentification of your environment and your place in it. 1:26:30 Right. The original paper from Karl was called "Knowing one's place," and this cell definitely does not know its place. So that goes in, and then here you have basically the same setup as above, but the overall absolute flow of signaling to each cell has been up-regulated, so that each cell now receives higher concentrations, basically to compensate for the reduced sensitivity. At the start you don't see much difference yet; you see the cell that initially would have gone wrong being pushed back. In the end, what happened is that the signaling concentration was raised enough to allow the cell to retake a place that conforms to the target morphology you see at the top. What I always emphasize when I talk about this figure, and really about both figures, is that at no point in the simulation do you change the target 1:28:02 morphology; all the actual encoding was exactly the same. What the cells are expecting, and where they're expecting to be, is exactly the same. And that, I think, is where I see the fundamental promise of this. You don't always want to change the code, right? 1:28:20 We know that we have better gene-editing tools than we have ever had, and yet I don't think anyone really believes that this is the way forward in all cases. If you can do it, and you really come to an understanding, that's great. And if you have no other options? In extreme cases, it is absolutely warranted. I'm not trying to dissuade anyone from advances in gene therapy, but the system is so complex; for me, there are so many 1:28:49 things that can go wrong.
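The figure-five story just described, a cell with reduced sensory gain failing to commit to the right type, and globally up-regulated signaling compensating for it, can be caricatured in a few lines. This is a toy of my own, not the paper's variational scheme; the log-odds updating rule, the gain parameter, and all numbers are illustrative assumptions.

```python
import random

# Toy caricature of figure 5: a cell accumulates log-odds for "type A"
# from noisy morphogen samples, scaled by a sensitivity gain on sensory
# evidence. Fixed seed so the three runs see identical noise.

def infer_type(signal_strength, sensitivity, steps=200, seed=0):
    """Final log-odds for type A after sampling a noisy A-type signal."""
    rng = random.Random(seed)
    log_odds = 0.0                        # vague prior: unsure between A and B
    for _ in range(steps):
        sample = signal_strength + rng.gauss(0.0, 1.0)
        log_odds += sensitivity * sample  # evidence weighted by sensory gain
    return log_odds

normal   = infer_type(signal_strength=0.5, sensitivity=1.0)
impaired = infer_type(signal_strength=0.5, sensitivity=0.1)  # perturbed cell
boosted  = infer_type(signal_strength=5.0, sensitivity=0.1)  # up-regulated signaling

assert boosted > impaired  # stronger global signaling compensates for low gain
```

With the seed fixed, raising the signal strength from 0.5 to 5.0 more than offsets the tenfold drop in sensitivity, which is the qualitative point of the rescue simulation: the target is never changed, only the signal flow.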
We don't know all of the details, so if you do not have to start messing with the genome, that ought to be preferred. And also, even just from a capability standpoint, you don't always have the capabilities that we think we do. So in our lab as well, the way I take it, 1:29:09 the core of my experimental approach is that I want to understand how cells interface with their environment and work on manipulating that interface: basically controlling information and sensing, and driving the cells towards different fates. Again, it's that Waddington landscape we talked about last time; you want to redirect the flow. Actually, this time I checked my shirt, and I'm not wearing it right now, but I do have the Waddington landscape 1:29:36 on one of my shirts. 1:29:43 So you're trying to redirect the flow of cell-fate decision making, not trying to manipulate the actual patterns themselves? 1:29:56 Bleu: So what was the cause of the perturbed signal response? Oh, sorry. 1:30:03 Speaker B: Yeah, the perturbed signal response. How did you perturb it? Did you cut the signal off, did the cell get less information, or was it made to infer an incorrect place? How exactly, in that one cell, did you manipulate the parameters so that it went to a different place? 1:30:39 Bleu: It was a change in sensitivity, is that what it was? 1:30:47 Speaker A: I'm trying to be specific right now instead of hand-waving. The way we formally induced it: there is a parameter for how much beliefs are being updated based on sensory inputs, and we can put a gradient on how strongly the cell acts on that. That is the part I manipulated here. So what that means is that by altering the response capability of the cell, that is, to what extent its beliefs are updated based on the external information flow and how much weight that carries, you are basically reducing its sensitivity to the environment.
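The parameter being described, a gain on how much beliefs update per sensory input, can be sketched minimally. This is my own illustration with made-up names and numbers, not the paper's actual update equations.

```python
# Sensitivity as an update gain: how strongly a belief mu moves toward the
# sensed value y on each step.

def settle(y, gain, mu=0.0, steps=50):
    """Gradient-style belief updating: mu chases the sensed value y."""
    for _ in range(steps):
        mu += gain * (y - mu)      # sensitivity-weighted prediction error
    return mu

env = 2.0                          # what the environment is signaling
healthy  = settle(env, gain=0.3)
impaired = settle(env, gain=0.01)  # reduced sensitivity to the environment

assert abs(healthy - env) < 1e-3   # converges: the cell "knows its place"
assert abs(impaired - env) > 0.5   # under-updates: it does not
```

With gain 0.3 the belief settles on the environmental value within 50 steps; with gain 0.01 it stays far away, a toy analogue of the cell that remains unspecified much longer than its neighbors.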
1:31:41 You're misregulating the computation, essentially: not the downstream machinery, but the potential of the cell to access its environment and do anything about it, or even, upstream of that, how much sensing it does at all. So in this case specifically, the cell just wasn't sensing as much of the environment as it was supposed to be encoding. 1:32:11 Speaker B: It's almost equivalent to the argument about the evolution of the nature-and-nurture partitioning by just taking a complexity stance, kind of like Evelyn Fox Keller in her book "The Mirage of a Space between Nature and Nurture." You were talking about a target morphology, but then about how changes in the signal... We're about to do number 40 on the quantum FEP paper, and Bleu and our friend Jason and I have been preparing a lot for it. I think it's about separating the quantum from the electrons and their behavior, and just asking: what is quantum statistics, or what does quantum information projected onto biology mean? Not how it's been approached from the mechanistic question, which is quantum biology: 1:33:09 that's about the synapses and their quantum mechanical properties, or about photon tunneling or proton tunneling and similar quantum effects. But what about just using the statistics of quantum theory instrumentally, and talking about situations with complex patterns of uncertainty and observation bias and memory and modeling and all of that? And this was like a dot two that opened up a whole new area for us: our discussions on morphology, the spatial and the embedded, and then it cracked a window into yet another area we'll go into. And you brought so many awesome insights, so we really appreciate it. 1:34:00 Speaker A: Yeah, that talk should be a real pleasure; I highly recommend doing it. Chris Fields is great to listen to, and it's some exciting work. I'm sure you have seen that in the "particular physics" paper from Karl, there is a section on quantum mechanics.
It's a very cool section, because he actually derives, I think, the Schrödinger equation essentially from just the definition of the log probability density. 1:34:30 Very cool. I don't know how much that will feed into other and future work, but it's always very exciting to me when you see different fields at least formally being able to be related to each other. That usually means there's something going on, something fundamental. 1:34:51 Daniel: We'll have more to say then; join us. Yeah, and you're welcome any time. It will be Chris, Karl, and I think Mike; I think everybody is coming. So if you want to come and join the panel, you're more than welcome 1:35:09 to come to the events and pop in if it works. So thanks again, everybody. I hope the viewers can bear with the lag if it was there. Peace out, everyone. Thanks a lot for this great discussion. 1:35:23 Speaker A: Thanks for having me. Bye.