Posts tagged “self-organized criticality”

Our Universe will freeze like a supercooled beer…

SCIENTIFIC METHOD / SCIENCE & EXPLORATION

Finding the Higgs? Good news. Finding its mass? Not so good.

“Fireballs of doom” from a quantum phase change would wipe out present Universe.

by  – Feb 19 2013, 8:55pm HB

A collision in the LHC’s CMS detector.

Ohio State’s Christopher Hill joked he was showing scenes of an impending i-Product launch, and it was easy to believe him: young people were setting up mats in a hallway, ready to spend the night to secure a space in line for the big reveal. Except the date was July 3 and the location was CERN—where the discovery of the Higgs boson would be announced the next day.

It’s clear the LHC worked as intended and has definitively identified a Higgs-like particle. Hill put the chance of the ATLAS detector having registered a statistical fluke at less than 10^-11, and he noted that wasn’t even considering the data generated by its partner, the CMS detector. But is it really the one-and-only Higgs and, if so, what does that mean? Hill was part of a panel that discussed those questions at the meeting of the American Association for the Advancement of Science.

As theorist Joe Lykken of Fermilab pointed out, the answers matter. If current results hold up, they indicate the Universe is inhabiting what’s called a false quantum vacuum. If it were ever to reach the real one, its existing structures (including us) would go away in what Lykken called “fireballs of doom.”

We’ll look at the less depressing stuff first, shall we?

Zeroing in on the Higgs

Thanks to the Standard Model, we were able to make some very specific predictions about the Higgs. These include the frequency with which it will decay via different pathways: two gamma-rays, two Z bosons (which further decay to four muons), etc. We can also predict the frequency of similar-looking events that would occur if there were no Higgs. We can then scan each of the decay pathways (called channels), looking for energies where there is an excess of events, or bump. Bumps have shown up in several channels in roughly the same place in both CMS and ATLAS, which is why we know there’s a new particle.

But we still don’t know precisely what particle it is. The Standard Model Higgs should have a couple of properties: it should be scalar and should have a spin of zero. According to Hill, the new particle is almost certainly scalar; he showed a graph where the alternative, pseudoscalar, was nearly ruled out. Right now, spin is less clearly defined. It’s likely to be zero, but we haven’t yet ruled out a spin of two. So far, so Higgs-like.

The Higgs is the particle form of a quantum field that pervades our Universe (it’s a single quantum of the field), providing other particles with mass. In order to do that, its interactions with other particles vary—particles are heavier if they have stronger interactions with the Higgs. So, teams at CERN are sifting through the LHC data, checking for the strengths of these interactions. So far, with a few exceptions, the new particle is acting like the Higgs, although the error bars on these measurements are rather large.

As we said above, the Higgs is detected in a number of channels, and each of them produces an independent estimate of its mass (along with an estimated error). As of the data Hill showed, not all of these estimates had converged on the same value, although they were all consistent within the given errors. These can also be combined mathematically for a single estimate, with each of the two detectors producing a value. So far, these overall estimates are quite close: CMS has the particle at 125.8 GeV, ATLAS at 125.2 GeV. Again, the error bars on these values overlap.
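As a back-of-the-envelope illustration of how such estimates get merged, the snippet below computes an inverse-variance weighted average. This is a minimal sketch only: the ±0.6 GeV uncertainties are hypothetical placeholders (the article quotes just the central values), and a real combination must also track correlated systematic errors between the detectors.

# Minimal sketch: inverse-variance weighted average of two mass estimates.
# The +/- 0.6 GeV uncertainties are hypothetical placeholders; the article
# quotes only the central values.
measurements = [(125.8, 0.6), (125.2, 0.6)]  # (mass in GeV, sigma) for CMS, ATLAS

weights = [1.0 / sigma**2 for _, sigma in measurements]
combined = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = (1.0 / sum(weights)) ** 0.5

print(f"combined mass = {combined:.2f} +/- {combined_sigma:.2f} GeV")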

Oops, there goes the Universe

That specific mass may seem fairly trivial—if it were 130 GeV, would you care? Lykken argued that you probably should. But he took some time to build to that.

Lykken pointed out that, as the measurements mentioned above get more precise, we may find the Higgs isn’t decaying at precisely the rates we expect it to. This may be because we have some details of the Standard Model wrong. Or it could be a sign the Higgs is also decaying into some particles we don’t know about—particles that are dark matter candidates would be a prime choice. The behavior of the Higgs might also provide some indication of why there’s such a large excess of matter over antimatter in the Universe.

But much of Lykken’s talk focused on the mass. As we mentioned above, the Higgs field pervades the entire Universe; the vacuum of space is filled with it. And, with a value for the Higgs mass, we can start looking into the properties of the Higgs field and thus the vacuum itself. “When we do this calculation,” Lykken said, “we get a nasty surprise.”

It turns out we’re not living in a stable vacuum. Eventually, the Universe will reach a point where the contents of the vacuum are the lowest energy possible, which means it will reach the most stable state possible. The mass of the Higgs tells us we’re not there yet, but are stuck in a metastable state at a somewhat higher energy. That means the Universe will be looking for an excuse to undergo a phase transition and enter the lower state.

What would that transition look like? In Lykken’s words, again, “fireballs of doom will form spontaneously and destroy the Universe.” Since the change would alter the very fabric of the Universe, anything embedded in that fabric—galaxies, planets, us—would be trashed during the transition. When an audience member asked “Are the fireballs of doom like ice-9?” Lykken replied, “They’re even worse than that.”

Lykken offered a couple of reasons for hope. He noted the outcome of these calculations is extremely sensitive to the values involved. Simply shifting the top quark’s mass by two percent, to a value that’s still within the error bars of most measurements, would make for a far more stable Universe.

And then there’s supersymmetry. The news for supersymmetry out of the LHC has generally been negative, as various models with low-mass particles have been ruled out by the existing data (we’ll have more on that shortly). But supersymmetry actually predicts five Higgs particles. (Lykken noted this by showing a slide with five different photos of Higgs taken at various points in his career, in which he was “differing in mass and other properties, as happens to all of us.”) So, when the LHC starts up at higher energies in a couple of years, we’ll actually be looking for additional, heavier versions of the Higgs.

If those are found, then the destruction of our Universe would be permanently put on hold. “If you don’t like that fate of the Universe,” Lykken said, “root for supersymmetry.”

Extrasolar planets, Kepler 62, and the local Fermi Paradox

As the number of extrasolar planets discovered grows, we also tighten the constraints on the predictions of the galactic percolation model (the local Fermi Paradox).
The prediction is that, if we assume that Memetic Biospheres (cultural biospheres, or technospheres) are a probable outcome of Genetic Biospheres, then we should find ourselves inside a region with few habitable planets. For if there were planets inhabited by intelligent beings nearby, they would very probably be far more advanced than we are and would already have colonized us.
Since this has not happened (unless you believe ufologists’ conspiracy theories, Jesus-as-ET theories, ancient astronauts, and so on), it follows that the more data astronomers gather, the clearer it will become that our solar system is an anomaly within our cosmic neighbourhood (1,000 light-years?). In other words, we cannot assume the Copernican Principle for the solar system: our solar system is not typical of our neighbourhood. Well, at least this conclusion agrees with the data collected so far…
Thus, one can predict that further analysis of the planets Kepler 62-e and Kepler 62-f will reveal that they do not have atmospheres containing oxygen or methane, the signatures of a planet with a biosphere.

Persistence solves Fermi Paradox but challenges SETI projects

Osame Kinouchi (DFM-FFCLRP-USP)
(Submitted on 8 Dec 2001)

Persistence phenomena in colonization processes could explain the negative results of SETI searches while preserving the possibility of a galactic civilization. However, persistence phenomena also indicate that searching for technological civilizations around stars in the neighbourhood of the Sun is a misdirected SETI strategy. This last conclusion is also suggested by a weaker form of the Fermi paradox. A simple model of branching colonization, which includes emergence, decay, and branching of civilizations, is proposed. The model could also be used in the context of the diffusion of ant nests.

03/05/2013 – 03h10

Possibility of life is not limited to Earth-like planets, study says

SALVADOR NOGUEIRA
SPECIAL TO FOLHA

Given the range of possible compositions, masses, and orbits for planets outside the Solar System, life may not be limited to Earth-like worlds in Earth-like orbits.

Editoria de arte/Folhapress

That is one of the conclusions presented by Sara Seager, of MIT (the Massachusetts Institute of Technology) in the US, in a review article published in the journal “Science”, based on a statistical analysis of the roughly 900 worlds already detected around more than 400 stars.

Seager highlights the possible existence of planets whose atmospheres would be dense enough to preserve liquid water at the surface even at temperatures far lower than Earth’s. Read more [+]

Papers in theoretical neuroscience: criticality in dendritic trees


Leonardo Lyra Gollo encouraged me to pick the blog back up. Thanks for the encouragement, Leo!

Single-Neuron Criticality Optimizes Analog Dendritic Computation

Leonardo L. Gollo, Osame Kinouchi, Mauro Copelli
(Submitted on 17 Apr 2013)

Neurons are thought of as the building blocks of excitable brain tissue. However, at the single neuron level, the neuronal membrane, the dendritic arbor and the axonal projections can also be considered an extended active medium. Active dendritic branchlets enable the propagation of dendritic spikes, whose computational functions, despite several proposals, remain an open question. Here we propose a concrete function to the active channels in large dendritic trees. By using a probabilistic cellular automaton approach, we model the input-output response of large active dendritic arbors subjected to complex spatio-temporal inputs, and exhibiting non-stereotyped dendritic spikes. We find that, if dendritic spikes have a non-deterministic duration, the dendritic arbor can undergo a continuous phase transition from a quiescent to an active state, thereby exhibiting spontaneous and self-sustained localized activity as suggested by experiments. Analogously to the critical brain hypothesis, which states that neuronal networks self-organize near a phase transition to take advantage of specific properties of the critical state, here we propose that neurons with large dendritic arbors optimize their capacity to distinguish incoming stimuli at the critical state. We suggest that “computation at the edge of a phase transition” is more compatible with the view that dendritic arbors perform an analog and dynamical rather than a symbolic and digital dendritic computation.

Comments: 11 pages, 6 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1304.4676 [q-bio.NC]
(or arXiv:1304.4676v1 [q-bio.NC] for this version)
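To get a feel for this kind of model, here is a minimal probabilistic cellular automaton on a binary tree of dendritic branchlets, in the spirit of (but far simpler than) the Gollo–Kinouchi–Copelli model; the states, update rules, and parameter values below are illustrative assumptions, not the paper's.

# Minimal sketch: probabilistic cellular automaton on a binary tree.
# States: 0 = quiescent, 1 = active, 2 = refractory. All rules and
# parameters here are illustrative choices, not the paper's exact model.
import random

DEPTH = 8                  # tree depth; n = 2**DEPTH - 1 branchlets
P_TRANSMIT = 0.6           # chance an active branchlet excites a neighbour
P_DRIVE = 0.001            # external synaptic drive per branchlet per step
P_RECOVER = 0.5            # refractory -> quiescent (stochastic spike duration)

n = 2 ** DEPTH - 1         # heap layout: children of node i are 2i+1 and 2i+2
state = [0] * n

def neighbours(i):
    nbrs = [] if i == 0 else [(i - 1) // 2]               # parent branchlet
    nbrs += [c for c in (2 * i + 1, 2 * i + 2) if c < n]  # daughter branchlets
    return nbrs

def step(state):
    new = state[:]
    for i, s in enumerate(state):
        if s == 0:
            excited = random.random() < P_DRIVE or any(
                state[j] == 1 and random.random() < P_TRANSMIT
                for j in neighbours(i))
            new[i] = 1 if excited else 0
        elif s == 1:
            new[i] = 2
        else:   # stochastic refractory duration -> non-stereotyped spikes
            new[i] = 0 if random.random() < P_RECOVER else 2
    return new

activity = []
for _ in range(2000):
    state = step(state)
    activity.append(sum(1 for s in state if s == 1))
print("mean fraction of active branchlets:", sum(activity) / (2000 * n))

Raising P_TRANSMIT sweeps the toy arbor through the quiescent-to-active transition the abstract describes; near the transition, localized self-sustained activity appears.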

Mechanisms of Zero-Lag Synchronization in Cortical Motifs

(Submitted on 18 Apr 2013)

Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomena of “dynamical relaying” – a mechanism that relies on a specific network motif (M9) – has proven to be the most robust with respect to parameter and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif (M3) is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying, M9) from those that do not (such as the common driving triad, M3). Remarkably, minor structural changes to M3 that incorporate a reciprocal pair (hence M6, M9, M3+1) recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.

Comments: 27 pages, 8 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1304.5008 [q-bio.NC]
(or arXiv:1304.5008v1 [q-bio.NC] for this version)
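The flavour of the dynamical-relaying argument can be captured with a toy model. The paper works with spiking neurons, populations, and neural mass models; the sketch below instead uses delay-coupled Kuramoto phase oscillators (all parameters are my illustrative choices) to contrast a directly coupled pair, which locks with a large lag when delays are long, with an M9-style relayed pair, which locks at zero lag.

# Toy demonstration of dynamical relaying with delay-coupled phase oscillators.
import math, random

def mean_lag(adj, pair, steps=60000, dt=1e-3, delay=50, K=2.0, f=10.0):
    """adj[i][j] = 1 if j projects to i; all links delayed by `delay` steps."""
    n = len(adj)
    random.seed(0)
    omega = 2 * math.pi * f
    hist = [[random.uniform(0, 2 * math.pi)] * (delay + 1) for _ in range(n)]
    lags = []
    for t in range(steps):
        now = [h[-1] for h in hist]
        past = [h[0] for h in hist]   # delayed phases seen by neighbours
        for i in range(n):
            drive = sum(math.sin(past[j] - now[i]) for j in range(n) if adj[i][j])
            hist[i].append(now[i] + dt * (omega + K * drive))
            hist[i].pop(0)
        if t > steps // 2:            # discard the transient
            a, b = pair
            d = (hist[a][-1] - hist[b][-1]) % (2 * math.pi)
            lags.append(min(d, 2 * math.pi - d))
    return sum(lags) / len(lags)

pair_direct = [[0, 1], [1, 0]]                 # reciprocal pair, 50 ms delay
relay_M9 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # outer nodes coupled via node 1

print("direct pair lag (rad):", mean_lag(pair_direct, (0, 1)))  # ~pi (anti-phase)
print("relayed pair lag (rad):", mean_lag(relay_M9, (0, 2)))    # ~0 (zero-lag)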

SOC and Cancer

Worth a look…

Self-Organized Criticality: A Prophetic Path to Curing Cancer

J. C. Phillips
(Submitted on 28 Sep 2012)

While the concepts involved in Self-Organized Criticality have stimulated thousands of theoretical models, only recently have these models addressed problems of biological and clinical importance. Here we outline how SOC can be used to engineer hybrid viral proteins whose properties, extrapolated from those of known strains, may be sufficiently effective to cure cancer.

Subjects: Biomolecules (q-bio.BM)
Cite as: arXiv:1210.0048 [q-bio.BM]
(or arXiv:1210.0048v1 [q-bio.BM] for this version)

New paper on cellular automata and the Fermi Paradox

A new paper is out on the percolation hypothesis for the Fermi Paradox, this time using three-dimensional cellular automaton simulations. The authors’ conclusion is that the simulations do not support the hypothesis.

Well, I don’t think that is the end of the story. I already knew that, for the hypothesis to work, the diffusion would have to be critical (that is, forming a critical, or slightly supercritical, cluster of occupied planets).

In other words, the hypothesis needs to be supplemented with an argument for why the diffusion should be critical. Since critical systems are abundant in social and biological processes, I think finding that criticality factor is enough to justify the model. My heuristic would be: Read more [+]
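For concreteness, here is a minimal sketch of the percolation picture (the lattice size, nearest-neighbour rule, and p values are illustrative choices): colonization spreads from a seed, each site is habitable-and-reachable with probability p, and near the 3D site-percolation threshold (p_c ≈ 0.3116 on the simple cubic lattice) the colonized cluster is riddled with large voids where uncolonized systems like ours could sit.

# Minimal sketch of critical colonization as 3D site percolation.
import random
from collections import deque

def colonize(L=40, p=0.35):
    """Spread colonization from the centre of an L^3 lattice; return fill fraction."""
    centre = (L // 2,) * 3
    decided = {centre: True}          # each site's fate is decided once, with prob. p
    frontier = deque([centre])
    while frontier:
        x, y, z = frontier.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nxt = (x + dx, y + dy, z + dz)
            if all(0 <= c < L for c in nxt) and nxt not in decided:
                decided[nxt] = random.random() < p
                if decided[nxt]:
                    frontier.append(nxt)
    return sum(decided.values()) / L**3

for p in (0.20, 0.25, 0.3116, 0.35, 0.45):  # ~0.3116 is the 3D site threshold
    print(f"p = {p:.4f}  colonized fraction ~ {colonize(p=p):.3f}")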

The ideology of power laws



Power Laws, Weblogs, and Inequality

First published February 8, 2003 on the “Networks, Economics, and Culture” mailing list.

Version 1.1: Changed 02/10/03 to point to the updated “Blogging Ecosystem” project, and to Jason Kottke’s work using Technorati.com data. Added addendum pointing to David Sifry’s “Technorati Interesting Newcomers” list, which is in part a response to this article.

A persistent theme among people writing about the social aspects of weblogging is to note (and usually lament) the rise of an A-list, a small set of webloggers who account for a majority of the traffic in the weblog world. This complaint follows a common pattern we’ve seen with MUDs, BBSes, and online communities like Echo and the WELL. A new social system starts, and seems delightfully free of the elitism and cliquishness of the existing systems. Then, as the new system grows, problems of scale set in. Not everyone can participate in every conversation. Not everyone gets to be heard. Some core group seems more connected than the rest of us, and so on.
Prior to recent theoretical work on social networks, the usual explanations invoked individual behaviors: some members of the community had sold out, the spirit of the early days was being diluted by the newcomers, et cetera. We now know that these explanations are wrong, or at least beside the point. What matters is this: Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
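Shirky's mechanism is easy to reproduce numerically. In the sketch below (all parameters are illustrative), readers pick weblogs partly at random and partly in proportion to the links the blogs already have; nobody "sells out", yet a small head ends up with most of the links.

# Preferential attachment toy model of the weblog power law.
import random

N_BLOGS, N_READERS, MIX = 1000, 200000, 0.2
links = [1] * N_BLOGS          # every blog starts with one link
urn = list(range(N_BLOGS))     # one entry per existing link (urn trick)

for _ in range(N_READERS):
    if random.random() < MIX:              # independent discovery
        choice = random.randrange(N_BLOGS)
    else:                                  # follow the crowd
        choice = random.choice(urn)        # prob. proportional to current links
    links[choice] += 1
    urn.append(choice)

links.sort(reverse=True)
total = sum(links)
top_share = 100 * sum(links[:N_BLOGS // 50]) / total
print(f"top 2% of blogs hold {top_share:.1f}% of all links")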

Mass extinctions are more periodic than once thought

The Death of Nemesis: The Sun’s Distant, Dark Companion

Posted: 11 Jul 2010 09:10 PM PDT

The data that once suggested the Sun is orbited by a distant dark companion now raises even more questions

Over the last 500 million years or so, life on Earth has been threatened on many occasions; the fossil record is littered with extinction events. What’s curious about these events is that they seem to occur with alarming regularity.

The periodicity is a matter of some controversy among paleobiologists but there is a growing consensus that something of enormous destructive power happens every 26 or 27 million years. The question is what?

In this blog, we’ve looked at various ideas such as the Sun’s passage through the various spiral arms of the Milky Way galaxy (it turns out that this can’t explain the extinctions because the motion doesn’t have the right periodicity).

But another idea first put forward in the 1980s is that the Sun has a distant dark companion called Nemesis that sweeps through the Oort cloud every 27 million years or so, sending a deadly shower of comets our way. It’s this icy shower of death that causes the extinctions, or so the thinking goes.

Today, Adrian Melott at the University of Kansas and Richard Bambach at the Smithsonian Institution in Washington, DC, re-examine the paleo-record to see if they can get a more accurate estimate of the orbit of Nemesis.

Their work throws up a surprise. They have brought together a massive set of extinction data from the last 500 million years, a period that is twice as long as anybody else has studied. And their analysis shows an excess of extinctions every 27 million years, with a confidence level of 99%.

That’s a clear, sharp signal over a huge length of time. At first glance, you’d think it clearly backs the idea that a distant dark object orbits the Sun every 27 million years.

But ironically, the accuracy and regularity of these events is actually evidence against Nemesis’ existence, say Melott and Bambach.

That’s because Nemesis’ orbit would certainly have been influenced by the many close encounters we know the Sun has had with other stars in the last 500 million years.

These encounters would have caused Nemesis’ orbit to vary in one of two ways. First, the orbit could have changed suddenly, so that instead of showing up as a single peak, the periodicity would have two or more peaks. Or second, it could have changed gradually, by up to 20 per cent, in which case the peak would be smeared out in time.

But the data indicates that the extinctions occur every 27 million years, as regular as clockwork. “Fossil data, which motivated the idea of Nemesis, now militate against it,” say Melott and Bambach.

That means something else must be responsible. It’s not easy to imagine a process in our chaotic interstellar environment that could have such a regular heartbeat; perhaps the answer is closer to home.

There is a smidgeon of good news. The last extinction event in this chain happened 11 million years ago so, in theory at least, we have plenty of time to work out where the next catastrophe is coming from.

Either way, the origin of the 27 million year extinction cycle is hotting up to become one of the great scientific mysteries of our time. Suggestions, if you have any, in the comments section please.

Ref: arxiv.org/abs/1007.0437: Nemesis Reconsidered
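On synthetic data, the kind of spectral test involved is a few lines of code. The sketch below does not reproduce Melott and Bambach's extinction compilation; it builds a fake 500-million-year record with a weak 27-Myr cycle buried in Poisson noise and recovers the period from the power spectrum.

# Toy spectral test for a 27 Myr extinction cycle (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 500, 1.0)                            # time in Myr, 1 Myr bins
intensity = 1.0 + 0.5 * np.cos(2 * np.pi * t / 27.0)  # hidden 27 Myr modulation
series = rng.poisson(5 * intensity)                   # noisy extinction counts

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(series.size, d=1.0)           # cycles per Myr
best = freqs[1:][np.argmax(power[1:])]                # skip the DC bin
print(f"dominant period ~ {1 / best:.1f} Myr")        # expect ~27 Myr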

Self-organized quasi-criticality

Fig. 3.

Nested θ- and β/γ-oscillations organize in the form of neuronal avalanches. (A) Definition of neuronal avalanches formed by the nested θ- and β/γ-oscillations. (Top) Threshold detection (broken line) of nLFPs (filled circles) at a single electrode. (Middle) Corresponding time–amplitude raster plot of nLFPs on the MEA (color: nLFP amplitude). (Bottom) Spatiotemporal nLFP clusters occupy successive bins of width Δt_avg (dotted rectangles). (B) Average cross-correlation function for nLFPs in vivo at P8 (red) and P13 (black; single experiments). (C) nLFP clusters from nested θ- and β/γ-oscillations organize in the form of neuronal avalanches, i.e., distribute in sizes according to a power law with slope close to α = −1.5 (broken line). Average cluster size distribution in vivo plotted in log–log coordinates for P8 (red open circles; n = 5) and P13 (black; n = 7). (D) Example of two simultaneous burst periods before (black) and after (red) phase-shuffling. (E) The power law in cluster sizes is established for cluster area and cluster intensity (G) in single in vivo experiments and in the average (n = 7; F; cp. also C; all P13), but is destroyed on phase-shuffling of the LFP (open red). (H) Average cluster size distribution in vitro follows a power law with slope α ≅ −1.5 (broken line; n = 15; ≥10 DIV). (Inset) Average nLFP cross-correlation function for single experiment.

Published online before print May 22, 2008, doi:10.1073/pnas.0800537105

PNAS May 27, 2008, vol. 105, no. 21, 7576–7581

Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3

Elakkat D. Gireesh and Dietmar Plenz*

Author affiliation: Laboratory of Systems Neuroscience, National Institute of Mental Health, 9000 Rockville Pike, Bethesda, MD 20892

Edited by Nancy J. Kopell, Boston University, Boston, MA, and approved March 27, 2008 (received for review January 18, 2008)

Abstract

Maturation of the cerebral cortex involves the spontaneous emergence of distinct patterns of neuronal synchronization, which regulate neuronal differentiation, synapse formation, and serve as a substrate for information processing. The intrinsic activity patterns that characterize the maturation of cortical layer 2/3 are poorly understood. By using microelectrode array recordings in vivo and in vitro, we show that this development is marked by the emergence of nested θ- and β/γ-oscillations that require NMDA- and GABAA-mediated synaptic transmission. The oscillations organized as neuronal avalanches, i.e., they were synchronized across cortical sites forming diverse and millisecond-precise spatiotemporal patterns that distributed in sizes according to a power law with a slope of −1.5. The correspondence between nested oscillations and neuronal avalanches required activation of the dopamine D1 receptor. We suggest that the repetitive formation of neuronal avalanches provides an intrinsic template for the selective linking of external inputs to developing superficial layers.

Self-organized (quasi-)criticality: the extremal Feder and Feder model

(Submitted on 27 Feb 1998)

A simple random-neighbor SOC model that combines properties of the Bak-Sneppen and the relaxation oscillators (slip-stick) models is introduced. The analysis in terms of branching processes is transparent and gives insight about the development of large but finite mean avalanche sizes in dissipative models. In the thermodynamic limit, the distribution of states has a simple analytical form and the mean avalanche size, as a function of the coupling parameter strength, is exactly calculable.

Comments: 6 pages, 3 figures
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)
Cite as: arXiv:cond-mat/9802311v1 [cond-mat.dis-nn]

Self-organization without conservation: true or just apparent scale-invariance?

(Submitted on 12 May 2009 (v1), last revised 16 Mar 2010 (this version, v3))

The existence of true scale-invariance in slowly driven models of self-organized criticality without a conservation law, such as forest-fire or earthquake automata, is scrutinized in this paper. By using three different levels of description – (i) a simple mean field, (ii) a more detailed mean-field description in terms of a (self-organized) branching process, and (iii) a full stochastic representation in terms of a Langevin equation – it is shown on general grounds that non-conserving dynamics does not lead to bona fide criticality. Contrary to conserving systems, a parameter, which we term the “re-charging” rate (e.g. the tree-growth rate in forest-fire models), needs to be fine-tuned in non-conserving systems to obtain criticality. In the infinite size limit, such a fine-tuning of the loading rate is easy to achieve, as it emerges by imposing a second separation of time-scales but, for any finite size, a precise tuning is required to achieve criticality and a coherent finite-size scaling picture. Using the approaches above, we shed light on the common mechanisms by which “apparent criticality” is observed in non-conserving systems, and explain in detail (both qualitatively and quantitatively) the difference with respect to true criticality obtained in conserving systems. We propose to call this self-organized quasi-criticality (SOqC). Some of the reported results are already known and some of them are new. We hope the unified framework presented here helps to elucidate the confusing and contradictory literature in this field. In a second accompanying paper, we shall discuss the implications of the general results obtained here for models of neural avalanches in Neuroscience for which self-organized scale-invariance in the absence of conservation has been claimed.

Comments: 40 pages, 7 figures.
Subjects: Statistical Mechanics (cond-mat.stat-mech)
Journal reference: J. Stat. Mech. (2009) P09009
DOI: 10.1088/1742-5468/2009/09/P09009
Cite as: arXiv:0905.1799v3 [cond-mat.stat-mech]

Self-organization without conservation: Are neuronal avalanches generically critical?

(Submitted on 19 Jan 2010)

Recent experiments on cortical neural networks have revealed the existence of well-defined avalanches of electrical activity. Such avalanches have been claimed to be generically scale-invariant — i.e. power-law distributed — with many exciting implications in Neuroscience. Recently, a self-organized model has been proposed by Levina, Herrmann and Geisel to justify such an empirical finding. Given that (i) neural dynamics is dissipative and (ii) there is a loading mechanism “charging” progressively the background synaptic strength, this model/dynamics is very similar in spirit to forest-fire and earthquake models, archetypical examples of non-conserving self-organization, which have been recently shown to lack true criticality. Here we show that cortical neural networks obeying (i) and (ii) are not generically critical; unless parameters are fine tuned, their dynamics is either sub- or super-critical, even if the pseudo-critical region is relatively broad. This conclusion seems to be in agreement with the most recent experimental observations. The main implication of our work is that, if future experimental research on cortical networks were to support that truly critical avalanches are the norm and not the exception, then one should look for more elaborate (adaptive/evolutionary) explanations, beyond simple self-organization, to account for this.

Comments: 28 pages, 11 figures, regular paper
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Computational Physics (physics.comp-ph); Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1001.3256v1 [cond-mat.dis-nn]

Review

Generic aspects of complexity in brain imaging data and other biological systems

Ed Bullmore (a; corresponding author), Anna Barnes (a), Danielle S. Bassett (a, b, c), Alex Fornito (a, d), Manfred Kitzbichler (a), David Meunier (a) and John Suckling (a)

(a) Behavioural and Clinical Neurosciences Institute, University of Cambridge, Department of Psychiatry, Addenbrooke’s Hospital, Cambridge, UK

(b) Biological Soft Systems Sector, Department of Physics, University of Cambridge, Cambridge, UK

(c) Genes Cognition and Psychosis Program, Clinical Brain Disorders Branch, National Institute of Mental Health, NIH, Bethesda, USA

(d) Melbourne Neuropsychiatry Centre, Department of Psychiatry, University of Melbourne, Parkville, Australia

Received 13 January 2009;
revised 3 May 2009;
accepted 8 May 2009.
Available online 19 May 2009.

Abstract

A key challenge for systems neuroscience is the question of how to understand the complex network organization of the brain on the basis of neuroimaging data. Similar challenges exist in other specialist areas of systems biology because complex networks emerging from the interactions between multiple non-trivially interacting agents are found quite ubiquitously in nature, from protein interactomes to ecosystems. We suggest that one way forward for analysis of brain networks will be to quantify aspects of their organization which are likely to be generic properties of a broader class of biological systems. In this introductory review article we will highlight four important aspects of complex systems in general: fractality or scale-invariance; criticality; small-world and related topological attributes; and modularity. For each concept we will provide an accessible introduction, an illustrative data-based example of how it can be used to investigate aspects of brain organization in neuroimaging experiments, and a brief review of how this concept has been applied and developed in other fields of biomedical and physical science. The aim is to provide a didactic, focussed and user-friendly introduction to the concepts of complexity science for neuroscientists and neuroimagers.


Google Scholar results:

V. Pasquale, P. Massobrio, L. L. Bologna, M. Chiappalone et al., Neuroscience, 2008 (Elsevier). Cited by 19.

J. D. Halley, D. A. Winkler, Biosystems, 2008 (Elsevier). Cited by 6.

D. E. Juanico, C. Monterola, C. Saloma, New Journal of Physics, 2007 (iop.org; PDF at upd.edu.ph). Cited by 4.


Self-organising mechanism of neuronal avalanche criticality

Juanico, Dr D.E. (2007) Self-organising mechanism of neuronal avalanche criticality. [Preprint]

Full text available as PDF (285 KB)

Abstract

A self-organising model is proposed to explain the criticality in cortical networks deduced from recent observations of neuronal avalanches. Prevailing understanding of self-organised criticality (SOC) dictates that conservation of energy is essential to its emergence. Neuronal networks however are inherently non-conservative as demonstrated by microelectrode recordings. The model presented here shows that SOC can arise in non-conservative systems as well, if driven internally. Evidence suggests that synaptic background activity provides the internal drive for non-conservative cortical networks to achieve and maintain a critical state. SOC is robust to any degree $\eta \in (0,1]$ of background activity when the network size $N$ is large enough such that $\eta N\sim 10^3$. For small networks, a strong background leads to epileptiform activity, consistent with neurophysiological knowledge about epilepsy.


Role of membrane potential fluctuations to the criticality of neuronal avalanche activity

Juanico, Dr Dranreb Earl (2007) Role of membrane potential fluctuations to the criticality of neuronal avalanche activity. [Preprint]

Full text available as PDF (548 KB)

Abstract

Experimental evidence for self-organised criticality (SOC) in non-conservative systems has recently been found in studies of rat cortical slices. The size distribution of observed neuronal avalanches has been attested to obey $3/2$ power-law scaling. A mean-field sandpile model of a noisy neuronal system is proposed to refute the irreconcilability between non-conservation and criticality put forward by longstanding SOC hypotheses. The model predicts that neuronal networks achieve and maintain criticality despite non-conservation due to the presence of background activity originating from membrane potential fluctuations within individual neurons. Furthermore, small networks are demonstrated to tip towards epileptiform activity when background activity is strong. This finding ties in redundancy, an intriguing feature of brain networks, to robustness of SOC behaviour.



Dark energy and Congress

How to Build a Dark Energy Detector

Posted: 25 Jan 2010 09:10 PM PST

All the evidence for dark energy comes from the observation of distant galaxies. Now physicists have worked out how to spot it in the lab

The notion of dark energy is peculiar, even by cosmological standards.

Cosmologists have foisted the idea upon us to explain the apparent accelerating expansion of the Universe. They say that this acceleration is caused by energy that fills space at a density of 10^-10 joules per cubic metre.

What’s strange about this idea is that as space expands, so too does the amount of energy. If you’ve spotted the flaw in this argument, you’re not alone. Forgetting the law of conservation of energy is no small oversight.

What we need is another way of studying dark energy, ideally in a lab on Earth. Today, Martin Perl at Stanford University and Holger Mueller down the road at the University of California, Berkeley, suggest just such an experiment.

The dark energy density might sound small, but Perl and Mueller point out that physicists routinely measure fields with much smaller energy densities. For example, an electric field of 1 Volt per metre has an energy density of 10^-12 joules per cubic metre. That’s easy to measure on Earth.
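That comparison is easy to check: the energy density of an electric field is u = (1/2)ε₀E², so for E = 1 V/m:

# Quick check: electric field energy density vs the quoted dark energy density.
epsilon_0 = 8.854e-12                    # vacuum permittivity, F/m
u_field = 0.5 * epsilon_0 * 1.0 ** 2     # 1 V/m field
u_dark = 1e-10                           # dark energy density quoted above, J/m^3
print(f"1 V/m field : {u_field:.1e} J/m^3")            # ~4.4e-12 J/m^3
print(f"dark energy : {u_dark:.1e} J/m^3, ~{u_dark / u_field:.0f}x larger")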

Of course there are some important differences between an electric field and the dark energy field that make measurements tricky. Not least of these is that you can’t turn off dark energy. Another is that there is no known reference against which to measure it.

That leaves the possibility of a gradient in the dark energy field. If there is such a gradient, then it ought to be possible to measure its effect and the best way to do this is with atom interferometry, say Perl and Mueller.

Atom interferometry measures the phase change caused by the difference in two trajectories of an atom in space. So if a gradient in this field exists, it should be possible to spot it by cancelling out the effects of all other forces. Perl and Mueller suggest screening out electromagnetic forces with conventional shields and using two atom interferometers to cancel out the effect of gravitational forces.

That should allow measurements with unprecedented accuracy. Experiments with single atom interferometers have already measured the Earth’s gravitational pull to an accuracy of 10^-9. The double interferometer technique should increase this to at least 10^-17.

That’s a very exciting experiment which looks to be within reach with today’s technology.

There are two potential flies in Perl and Mueller’s ointment. The first is that the nature of dark energy is entirely unknown. If it exists and if there is a gradient, it is by no means certain that dark energy will exert a force on atoms at all. That will leave them the endless task of trying to place tighter and tighter limits on the size of a non-existent force.

The second is that some other unknown force will rear its head in this regime and swamp the measurements. If that happens, it’s hard to imagine Perl and Mueller being too upset. That’s the kind of discovery that ought to put a smile on any physicist’s face.

Ref: arxiv.org/abs/1001.4061: Exploring The Possibility Of Detecting Dark Energy In A Terrestrial Experiment Using Atom Interferometry

To Understand Congress, Just Watch the Sandpile

Posted: 24 Jan 2010 09:10 PM PST

The behavior of Congress can be modeled by the same process that causes avalanches in sandpiles.

What does it take for a resolution in Congress to achieve sizeable support? It’s easy to imagine that the support of certain influential representatives is crucial because of their skill in the cut and thrust of political bargaining.

Not so, say Mikhail Simkin and Vwani Roychowdhury at the University of California, Los Angeles. It turns out that the way a particular resolution gains support can be accurately simulated by the avalanches that occur when grains of sand are dropped onto each other to form a pile.

Simkin and Roychowdhury begin their analysis with a study of resolution HR1207 and a plot of the number of co-sponsors it received against time early last year. This plot is known in mathematics as a Devil’s staircase: it consists of long periods without the addition of any new co-sponsors followed by jumps when many new co-sponsors join during a single day. “One might have suspected that the biggest steps of the staircase are due to joining of a highly influential congressman bringing with himself many new co-sponsors which he had influenced,” say Simkin and Roychowdhury.

That’s uncannily similar to the way in which avalanches proceed in a model of sandpiles developed by Per Bak, Chao Tang and Kurt Wiesenfeld in 1988. Perhaps Congress can be modelled in a similar way, reason Simkin and Roychowdhury.

Their model assumes that the role of sand grains is played by units of political pressure. They assume that there is a network of influence in Congress through which representatives exert political pressure on each other (just as sand grains exert forces on each other through the network of contacts between them in the pile). When the pressure on representatives reaches a threshold, they co-sponsor the resolution and this, in turn, puts pressure on other members of Congress to sign.

This is like the pressure that builds up in a sandpile as grains are dropped onto it. When a threshold is reached at a certain point on the pile, an avalanche occurs which redistributes the pressure to other places.

In addition, the representatives are pressured by their constituents, which is analogous to dropping grains of sand at random.

There is a difference between sandpiles and Congress, however. Once a representative has signed, he or she cannot do it again and so takes no further part in the process. Any further pressure on them is simply dissipated. So representatives cannot topple more than once, unlike sand grains, which can keep on toppling as the pile gets bigger.

This is a pretty simple model but when Simkin and Roychowdhury ran it, they found that it generates a Devil’s staircase that is uncannily similar to the one generated by representatives for HR1207.
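A minimal version of their model indeed produces the flat-then-jump staircase; the network, threshold, and sizes below are illustrative guesses, not the paper's calibration.

# Minimal sketch of the Congress-as-sandpile model.
import random

N, THRESHOLD, K = 435, 4, 6
random.seed(1)
influence = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
pressure = [0] * N
signed = [False] * N
staircase = []                       # co-sponsor count, day by day

for day in range(3000):
    target = random.randrange(N)     # constituent pressure arrives at random
    if not signed[target]:
        pressure[target] += 1
    stack = [target]
    while stack:                     # avalanche of co-sponsorships
        i = stack.pop()
        if signed[i] or pressure[i] < THRESHOLD:
            continue
        signed[i] = True             # a representative can only topple once
        for j in influence[i]:
            if not signed[j]:        # further pressure on signers dissipates
                pressure[j] += 1
                stack.append(j)
    staircase.append(sum(signed))

# 'staircase' stays flat for long stretches, then jumps: a Devil's staircase.
print("co-sponsors after 3000 days:", staircase[-1])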

Perhaps the most interesting feature is that the model assumes that all representatives have equal influence. “In our model, big steps are a result of evolution of Congress to a sort of critical state, where any congressman can trigger an avalanche of co-sponsors,” say Simkin and Roychowdhury.

The pair suggest some interesting ways to follow up their work. They point out that not all resolutions in Congress get the same level of support. In their model, this is due to the amount of public pressure, i.e., the number of units of political pressure dropped onto the pile at random. If there is no outside pressure, the resolution will not get sizeable support in a reasonable amount of time.

“An obvious extension to the model is to introduce political pressure against the resolution,” they say, pointing out that an interesting case would be when the negative pressure exactly balances the positive. “It could explain the cases when a resolution quickly gains some support, which, however, never becomes overwhelming.”

So representatives are not as important as perhaps they might imagine. Perhaps the next step should be to replace them with actual grains of sand. By Simkin and Roychowdhury’s reckoning, it wouldn’t make much difference.

Ref: arxiv.org/abs/1001.3732: Stochastic modeling of Congress

Life and death in galactic tides

This paper also seems relevant to the theory of periodic extinctions driven by galactic tides.
Friday, December 18, 2009

Galactic Tide May Have Influenced Life on Earth

The galactic tide is strong enough to influence Oort Cloud comets, which means it may also have helped shape our planet.

The Moon’s tides have been an ever-present force in Earth’s history, shaping the landscape and the lives of the creatures that inhabit it. Now there’s a tantalising hint that the galactic tide may have played a significant role in Earth’s past.

The work comes from Jozef Klacka at Comenius University in the Slovak Republic. He has calculated the strength of the galactic tide and its effect on the Solar System. His conclusion is that the tide is strong enough to significantly affect the orbital evolution of Oort Cloud comets.

That’s a fascinating result. We’ve long known that the Moon’s tides must have been crucial for the evolution of life on Earth. The constant ebb and flow of the oceans would have left sea life stranded on beaches, forcing adaptations that allowed these creatures to cope with conditions on land.

Astrobiologists also believe that comets played an important part in the development of life on Earth because the atmosphere and oceans were seeded, at least in part, by comets. By that way of thinking, the forces and processes that have shaped evolution stretch to the edge of the Solar System.

But if the galactic tide plays a role in sending these comets our way, then it looks as if we’re part of a much larger web. Could it be that Earth, and the life that has evolved here, is crucially dependent not just on our planet, our star, and our local interplanetary environment, but on the Milky Way galaxy itself?

Klacka has a lot more work to do to prove that the galactic tide plays such a role. But it might just be that the field of astrobiology has become a whole lot bigger.

Ref: arxiv.org/abs/0912.3112: Galactic Tide

Neuronal avalanches: new papers citing Kinouchi and Copelli (2006)

Spontaneous cortical activity in awake monkeys composed of neuronal avalanches

Thomas Petermann (a), Tara C. Thiagarajan (a), Mikhail A. Lebedev (b), Miguel A. L. Nicolelis (b), Dante R. Chialvo (c) and Dietmar Plenz (a)

(a) Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, MD 20892

(b) Department of Neurobiology, Center for Neuroengineering, Duke University, Durham, NC 27710

(c) Department of Physiology, Northwestern University, Chicago, IL 60611

Edited by Eve Marder, Brandeis University, Waltham, MA, and approved July 16, 2009 (received for review April 16, 2009)

Abstract

Spontaneous neuronal activity is an important property of the cerebral cortex but its spatiotemporal organization and dynamical framework remain poorly understood. Studies in reduced systems—tissue cultures, acute slices, and anesthetized rats—show that spontaneous activity forms characteristic clusters in space and time, called neuronal avalanches. Modeling studies suggest that networks with this property are poised at a critical state that optimizes input processing, information storage, and transfer, but the relevance of avalanches for fully functional cerebral systems has been controversial. Here we show that ongoing cortical synchronization in awake rhesus monkeys carries the signature of neuronal avalanches. Negative LFP deflections (nLFPs) correlate with neuronal spiking and increase in amplitude with increases in local population spike rate and synchrony. These nLFPs form neuronal avalanches that are scale-invariant in space and time and with respect to the threshold of nLFP detection. This dimension, threshold invariance, describes a fractal organization: smaller nLFPs are embedded in clusters of larger ones without destroying the spatial and temporal scale-invariance of the dynamics. These findings suggest an organization of ongoing cortical synchronization that is scale-invariant in its three fundamental dimensions—time, space, and local neuronal group size. Such scale-invariance has ontogenetic and phylogenetic implications because it allows large increases in network capacity without a fundamental reorganization of the system.

Neuronal Avalanches Imply Maximum Dynamic Range in Cortical Networks at Criticality

Woodrow L. Shew (1), Hongdian Yang (1, 2), Thomas Petermann (1), Rajarshi Roy (2), and Dietmar Plenz (1)

(1) Section on Critical Brain Dynamics, Laboratory of Systems Neuroscience, National Institute of Mental Health, Bethesda, Maryland 20892, and (2) Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742

Correspondence should be addressed to Dr. Dietmar Plenz, Section on Critical Brain Dynamics, Laboratory of Systems Neuroscience, Porter Neuroscience Research Center, National Institute of Mental Health, Room 3A-100, 35 Convent Drive, Bethesda, MD 20892. Email: [email protected]

Spontaneous neuronal activity is a ubiquitous feature of cortex. Its spatiotemporal organization reflects past input and modulates future network output. Here we study whether a particular type of spontaneous activity is generated by a network that is optimized for input processing. Neuronal avalanches are a type of spontaneous activity observed in superficial cortical layers in vitro and in vivo with statistical properties expected from a network operating at “criticality.” Theory predicts that criticality and, therefore, neuronal avalanches are optimal for input processing, but until now, this has not been tested in experiments. Here, we use cortex slice cultures grown on planar microelectrode arrays to demonstrate that cortical networks that generate neuronal avalanches benefit from a maximized dynamic range, i.e., the ability to respond to the greatest range of stimuli. By changing the ratio of excitation and inhibition in the cultures, we derive a network tuning curve for stimulus processing as a function of distance from criticality in agreement with predictions from our simulations. Our findings suggest that in the cortex, (1) balanced excitation and inhibition establishes criticality, which maximizes the range of inputs that can be processed, and (2) spontaneous activity and input processing are unified in the context of critical phenomena.


Received Aug. 6, 2009; revised Sept. 28, 2009; accepted Oct. 30, 2009.

Neuronal Avalanches

For Ariadne and Sandro Reia: the Discussion section of the paper:

Neuronal Avalanches in Neocortical Circuits

John M. Beggs and Dietmar Plenz

The Journal of Neuroscience, December 3, 2003, 23(35):11167-11177


Discussion
Three distinct modes of correlated population activity have been experimentally identified in cortex in vivo: oscillations, synchrony, and waves (for review, see Singer and Gray, 1995; Engel et al., 2001; Ermentrout and Kleinfeld, 2001). These network modes have also been described in cortical networks in vitro [e.g., γ-oscillations (Plenz and Kitai, 1996), synchrony (Kamioka et al., 1996), and waves (Nakagami et al., 1996)].

In the present study, we identified a new mode of spontaneous activity in cortical networks from organotypic cultures and acute slices: the neuronal avalanche. Neuronal avalanches were characterized by three distinct findings: (1) Propagation of synchronized LFP activity was described by a power law. (2) The slope of this power law, as well as the branching parameter, indicate that the mechanism underlying these avalanches is a critical branching process. (3) Our network simulations and pharmacological experiments suggest that a critical branching process optimizes information transmission while preserving stability in cortical networks.

The analysis presented here focuses exclusively on the propagation of the sharp (<20 ms) negative LFP peaks commonly observed in slice cultures (Jimbo and Robinson, 2000) or evoked extracellular potentials in acute slices. Current source density analysis in combination with optical recordings has demonstrated that sharp, negative LFP peaks are indicative of synchronized population spikes (Plenz and Aertsen, 1993). Similarly, cortical LFPs in vivo are closely correlated with single spike cross-correlations of local neuronal populations (Arieli, 1992). Thus, negative LFP peaks in the present study might represent synchronized action potentials from local neuronal populations. This is supported by computer simulations of the neuron-electrode junction of planar microelectrode arrays, which demonstrate that sharp, negative LFPs originate from synchronized action potentials from neurons within the vicinity of the electrode (Bove et al., 1996). Therefore, our results might be specifically applicable to the propagation of synchronized action potentials in the form of neuronal avalanches through the network, and the power law of -3/2 provides the statistical framework for transmitting information through the cortical network in the form of locally synchronized action potential volleys.

Other authors who have studied propagation of synchronized action potentials in neural networks have concluded that precise patterns of activity could travel through several synaptic stages without much attenuation (Abeles, 1992; Aertsen et al., 1996; Reyes, 2003). The concept of a critical branching process does not necessarily conflict with this view, but does place constraints on the distance that activity could propagate when it is traveling in avalanche form. Although it is natural to think that a critical branching parameter of 1 will produce a sequence of neural activity in which one neuron activates only one other neuron at every time step, this is not the case. Because the branching parameter reflects a statistical average, it gives only the expected number of descendants after many branching events, not the exact number at every event. Thus, a single neuron might activate more than one other neuron on some occasions, whereas on others it may activate none. In fact, the most common outcome in the critical state will be that no other neurons are activated. The resulting events generated by this system will contain many short avalanches, some medium-sized avalanches, and very few large avalanches.
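Those statistics are easy to see in a toy critical branching process. The sketch below (two potential descendants, each activated with probability 1/2, so the branching parameter is 1) is an illustration of the statistical argument, not the paper's data analysis.

# Critical branching process: avalanche sizes fall off roughly as s**-1.5.
import random
from collections import Counter

def avalanche_size(sigma=1.0, cap=100000):
    """One avalanche: each active unit activates up to 2 descendants,
    each with probability sigma / 2, so the mean offspring number is sigma."""
    active, size = 1, 1
    while active and size < cap:
        born = sum(1 for _ in range(2 * active) if random.random() < sigma / 2)
        size += born
        active = born
    return size

sizes = Counter(avalanche_size() for _ in range(10000))
for s in (1, 2, 4, 8, 16, 32, 64):
    print(s, sizes.get(s, 0))   # many small avalanches, very few large ones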

Neuronal avalanches in the context of self-organized criticality
The spontaneous activity observed in the present study remarkably fulfills several requirements of physical theory developed to describe avalanche propagation. Tremendous attention in physics has been given recently to the concept of self-organized criticality, a phenomenon observed in sandpile models for avalanches (Paczuski et al., 1996), earthquakes (Gutenberg and Richter, 1956), and forest fires (Malamud et al., 1998). In brief, this theory states that many systems of interconnected, nonlinear elements evolve over time into a critical state in which avalanche or event sizes are scale-free and can be characterized by a power law. This process of evolution takes place without any external instructive signal; it is an emergent property of the system. In addition, many of these systems are modeled as branching processes.

The neuronal activity discussed here has numerous points of contact with this body of theory: (1) All cortical networks displayed power law distributions of avalanche sizes. (2) The cortical networks in the cultures arrived at this state without any external instructive signal. (3) The slope of the power law for avalanche sizes and for avalanche life times, as well as the experimentally obtained values of σ, all indicate that the avalanches can be accurately modeled as a critical branching process. For these reasons, the activity observed in the cortical networks should be considered as neuronal avalanches.

Neuronal avalanches as a new mode of network activity
Although some power law statistics have been observed before in the temporal domain of neuronal activity [e.g., time series of ion channel fluctuations (Toib et al., 1998), transmitter secretion (Lowen et al., 1997), interevent times of neuronal bursts (Segev et al., 2002), and EEG time series in humans (Linkenkaer-Hansen et al., 2001; Worrell et al., 2002)], our results go beyond the phenomenological description of a power law only. We provide two independent approaches to understanding neuronal propagation in cortical networks (unique exponent of -3/2 and critical branching parameter) that lead to a statistical description of neuronal propagation that can be viewed in the framework of information processing. To our knowledge, no previous evidence has been presented for the existence of a critical branching process operating in the spatiotemporal dynamics of a living neural network.

The power law in the present study basically says that the number of avalanches observed in the data scales with the size of the avalanche, raised to the -1.5 power. This allows for a prediction of very large avalanches. They are a natural consequence of the local rule for optimized propagation, are expected to occur even in normal (i.e., nonepileptic) networks, and are not particularly rare. For example, in a network with ~10,000 avalanches/hr that engage just one electrode, at least 21 avalanches will occur every hour that will encompass exactly all 60 electrodes. Thus, on average, activity on every electrode will be correlated with every other electrode in the network at least once every 3 min.
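The quoted figure is just the -1.5 power law at work:

# If ~10,000 hourly avalanches span one electrode, the expected hourly number
# spanning all 60 electrodes is 10,000 * 60**-1.5.
print(10_000 * 60 ** -1.5)   # ~21.5, matching the "at least 21" quoted above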

The neuronal avalanches described here are profoundly different from previously observed modes of network operation. As shown by the correlograms, activity in the cortical networks was not periodic or oscillatory within the duration of maximal avalanche lifetimes. In addition, the contiguity index revealed that activity at one electrode most often skipped over the nearest neighbors, indicating that propagation was not wave-like. Finally, although the spontaneous activity did display notable synchrony at relatively long time scales, the avalanches that we describe here actually occurred within such synchronous epochs at a much shorter time scale (<100 ms). The avalanches themselves did not display synchrony, regardless of the threshold level, IED, or number of electrodes used to obtain the data. These are compelling reasons for neuronal avalanches to be considered a new mode of network activity.

Features of the critical state
It should be noted that the branching parameter used to characterize the critical state is a statistical measure and does not say anything about the specific biological processes that could produce a particular value of σ. There are several mechanisms operative in cortical networks that are likely to influence σ: the degree of fan-in or fan-out of excitatory connections, the degree of fan-in or fan-out of inhibitory connections, the ratio of inhibitory synaptic drive to excitatory drive, the timing of inhibitory responses relative to excitatory responses, and the amount of adaptation seen in both excitatory and inhibitory neurons, to name a few. To clearly distinguish the specific role each of these mechanisms would play in the branching process will be the subject of future experiments.

Previous theoretical work has discussed the importance of a balance between excitation and inhibition in network dynamics (Van Vreeswijk and Sompolinsky, 1996; Shadlen and Newsome, 1998). This balance has been implicated in proportional amplification in cortical networks (Douglas et al., 1995) as well as in the maintenance of cortical up states (Shu et al., 2003). Here, we extend the idea of balance by using the branching parameter, a concept that allows us to explore information transmission at the network level. Although a branching parameter well below unity would confer stability on a network, the simulations suggest that this stability would come at the rather severe price of greatly reduced information transmission. In contrast, a branching parameter hovering near unity would optimize information transmission, but at the risk of losing stability every time the network became supercritical. Although these neural network simulations are vastly oversimplified representations of the dynamics that occur in cortical networks in vivo, they may nonetheless offer some insight as to why the cerebral cortex is so often at risk of developing epilepsy. In fact, our experimental results demonstrate that removing inhibition to increase propagation in the neuronal network, yielding a power law with slope α > -1.5, results in epileptic activity. The competing demands of stability and information transmission may both be satisfied in a network whose branching parameter is at or slightly below the critical value of 1. Thus, calculating the power law exponent and/or branching parameter might offer a quantitative means to evaluate the efficacy of cortical networks in transmitting information.
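To make that stability/transmission trade-off concrete, here is a minimal Galton–Watson branching sketch (our toy, not the authors' simulation): each active unit triggers a Poisson-distributed number of units with mean σ, and we compare a subcritical and a critical branching parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma: float, cap: int = 10_000) -> int:
    """Total activations in one avalanche of a Galton-Watson process
    with Poisson(sigma) offspring per active unit (capped for sigma ~ 1)."""
    active = total = 1
    while active and total < cap:
        # sum of `active` independent Poisson(sigma) draws is Poisson(sigma * active)
        active = rng.poisson(sigma * active)
        total += active
    return total

for sigma in (0.8, 1.0):
    sizes = np.array([avalanche_size(sigma) for _ in range(20_000)])
    print(f"sigma={sigma}: mean size {sizes.mean():.1f}, "
          f"P(size >= 100) = {(sizes >= 100).mean():.4f}")
```

The subcritical run (σ = 0.8) produces small, short-lived avalanches, while the critical run (σ = 1.0) produces a heavy tail of large events: stability versus reach, in a few lines.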

What matters is having a critical mind, and a critical brain too

Too bad they didn't cite us… but they should have.

Broadband Criticality of Human Brain Network Synchronization

Manfred G. Kitzbichler1, Marie L. Smith2, Søren R. Christensen3, Ed Bullmore1,3*

1 Behavioural & Clinical Neurosciences Institute, Departments of Experimental Psychology and Psychiatry, University of Cambridge, Cambridge, United Kingdom, 2 MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom, 3 Clinical Unit Cambridge, GlaxoSmithKline, Addenbrooke’s Hospital, Cambridge, United Kingdom

Abstract

Self-organized criticality is an attractive model for human brain dynamics, but there has been little direct evidence for its existence in large-scale systems measured by neuroimaging. In general, critical systems are associated with fractal or power law scaling, long-range correlations in space and time, and rapid reconfiguration in response to external inputs. Here, we consider two measures of phase synchronization: the phase-lock interval, or duration of coupling between a pair of (neurophysiological) processes, and the lability of global synchronization of a (brain functional) network. Using computational simulations of two mechanistically distinct systems displaying complex dynamics, the Ising model and the Kuramoto model, we show that both synchronization metrics have power law probability distributions specifically when these systems are in a critical state. We then demonstrate power law scaling of both pairwise and global synchronization metrics in functional MRI and magnetoencephalographic data recorded from normal volunteers under resting conditions. These results strongly suggest that human brain functional systems exist in an endogenous state of dynamical criticality, characterized by a greater than random probability of both prolonged periods of phase-locking and occurrence of large rapid changes in the state of global synchronization, analogous to the neuronal “avalanches” previously described in cellular systems. Moreover, evidence for critical dynamics was identified consistently in neurophysiological systems operating at frequency intervals ranging from 0.05–0.11 to 62.5–125 Hz, confirming that criticality is a property of human brain functional network organization at all frequency intervals in the brain’s physiological bandwidth.

Author Summary

Systems in a critical state are poised on the cusp of a transition between ordered and random behavior. At this point, they demonstrate complex patterning of fluctuations at all scales of space and time. Criticality is an attractive model for brain dynamics because it optimizes information transfer, storage capacity, and sensitivity to external stimuli in computational models. However, to date there has been little direct experimental evidence for critical dynamics of human brain networks. Here, we considered two measures of functional coupling or phase synchronization between components of a dynamic system: the phase-lock interval, or duration of synchronization between a specific pair of time series or processes in the system, and the lability of global synchronization among all pairs of processes. We confirmed that both synchronization metrics demonstrated scale-invariant behaviors in two computational models of critical dynamics as well as in human brain functional systems oscillating at low frequencies (<0.5 Hz, measured with functional MRI) and at higher frequencies (measured with magnetoencephalography).
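As an illustration of the pairwise metric, here is a minimal sketch of measuring phase-lock intervals on a Kuramoto model; the oscillator count, coupling strength, and the π/4 locking tolerance are our illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Kuramoto sketch: N oscillators, coupling K, Euler integration.
N, K, dt, steps = 64, 1.5, 0.01, 10_000
omega = rng.normal(0.0, 1.0, N)              # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

phase_diff = np.empty(steps)                 # track one oscillator pair
for t in range(steps):
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + coupling)
    phase_diff[t] = np.angle(np.exp(1j * (theta[0] - theta[1])))

# Phase-lock intervals: maximal runs where the pair's phase difference
# stays inside a fixed tolerance (here pi/4).
locked = np.abs(phase_diff) < np.pi / 4
edges = np.flatnonzero(np.diff(locked.astype(int)))
runs = np.diff(np.concatenate(([0], edges + 1, [steps])))
intervals = runs[::2] if locked[0] else runs[1::2]
print("longest phase-lock intervals (steps):", np.sort(intervals)[-10:])
```

At criticality, the paper's claim is that the distribution of such intervals follows a power law; far from it, the intervals are exponentially distributed.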

Neuronal avalanches and dynamic range taking off!

Neuronal avalanches imply maximum dynamic range in cortical networks at criticality

Woodrow L. Shew1, Hongdian Yang1,2, Thomas Petermann1, Rajarshi Roy2, Dietmar Plenz1
1 Section on Critical Brain Dynamics, NIMH, NIH, Bethesda, MD 20892
2 Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742

Model studies predict that aspects of information processing should be optimal at criticality (e.g., [1, 8, 13]), but experimental support for these ideas has been lacking. Our aim here was to provide such support, based on comparisons of spontaneous activity and stimulus-evoked activity measured in the same brain tissue. Specifically, we test a prediction by Kinouchi and Copelli [8] that neuronal networks at criticality are sensitive to the largest range of stimuli, i.e., they have maximum dynamic range.

In summary, we used experiments and a model to demonstrate a strong connection between the spontaneous activity generated by a neuronal network and its ability to process external stimuli. When the spontaneous activity indicated that the network state was closest to criticality, i.e., when κ was closest to one, the dynamic range of the network was maximal. This result supports previous predictions from modeling studies [8].
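For reference, dynamic range is conventionally quantified as Δ = 10·log10(s_90/s_10), where s_10 and s_90 are the stimulus intensities whose responses span 10–90% of the response range (the convention used by Kinouchi and Copelli [8]). A minimal sketch on a toy response curve; the curve itself is hypothetical:

```python
import numpy as np

def dynamic_range(stimuli: np.ndarray, responses: np.ndarray) -> float:
    """Delta = 10*log10(s_90/s_10): s_10 and s_90 are the stimulus intensities
    whose responses reach 10% and 90% of the response range."""
    r_lo = responses.min() + 0.10 * (responses.max() - responses.min())
    r_hi = responses.min() + 0.90 * (responses.max() - responses.min())
    # interpolate the stimulus axis at the two response levels
    s_10 = np.interp(r_lo, responses, stimuli)
    s_90 = np.interp(r_hi, responses, stimuli)
    return 10 * np.log10(s_90 / s_10)

# toy saturating response curve, for illustration only
s = np.logspace(-3, 1, 200)
F = s / (s + 0.1)
print(f"dynamic range: {dynamic_range(s, F):.1f} dB")
```

A steeper, earlier-saturating response curve compresses the usable stimulus interval and lowers Δ; the claim tested in the paper is that Δ peaks when κ is closest to one.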

[1] J.M. Beggs & D. Plenz, J. Neurosci. 23, 11167 (2003).
[2] J.M. Beggs & D. Plenz, J. Neurosci. 24, 5216 (2004).
[3] E.D. Gireesh & D. Plenz, Proc. Natl. Acad. Sci. 105, 7576 (2008).
[4] A. Mazzoni et al., PLoS ONE 2, e439 (2007); V. Pasquale et al., Neurosci. 153, 1354 (2008).
[5] C.V. Stewart & D. Plenz, J. Neurosci. Methods 169, 405 (2008).
[6] I. Breskin et al., Phys. Rev. Lett. 97, 188102 (2006).
[7] A. Corral et al., Phys. Rev. Lett. 74, 118 (1995); C.W. Eurich et al., Phys. Rev. E 66, 066137 (2002); L. de Arcangelis et al., Phys. Rev. Lett. 96, 028107 (2006).
[8] O. Kinouchi & M. Copelli, Nature Phys. 2, 348 (2006).
[9] R. Otter, Ann. Math. Statist. 20, 206 (1949); T.E. Harris, The Theory of Branching Processes (Dover, New York, 1989); S. Zapperi, K.B. Lauritsen & H.E. Stanley, Phys. Rev. Lett. 75, 4071 (1995); S.-S. Poil et al., Hum. Brain Mapp. 29, 770 (2008).
[10] M.A. Buice & J.D. Cowan, Phys. Rev. E 75, 051919 (2007).
[11] P. Bak & D.R. Chialvo, Phys. Rev. E 63, 031912 (2001).
[12] A. Levina et al., Phys. Rev. Lett. 102, 118110 (2009); A. Levina et al., Nature Phys. 3, 857 (2007); D.-M. Chen et al., J. Phys. A Math. Gen. 28, 5177 (1995).
[13] C.G. Langton, Physica D 42, 12 (1990); Usher et al., Phys. Rev. Lett. 74, 326 (1995); A.V.M. Herz & J.J. Hopfield, Phys. Rev. Lett. 75, 1222 (1995).
[14] H. Hinrichsen, Adv. Phys. 49, 815 (2000).
[15] P. Hagmann et al., PLoS Biol. 6, e159 (2008); M.D. Fox & M.E. Raichle, Nature Rev. Neurosci. 8, 700 (2007).
[16] F.W. Ohl et al., Nature 412, 733 (2001).
[17] T. Kenet et al., Nature 425, 954 (2003).

Neuronal avalanches are taking off…

Dietmar Plenz wrote to Mauro that our work has been well received and that “the plane is taking off…”

Neuronal avalanches imply maximum dynamic range in cortical networks at criticality

(Submitted on 2 Jun 2009 (v1), last revised 10 Jun 2009 (this version, v2))

Abstract: Spontaneous neural activity is a ubiquitous feature of the brain even in the absence of input from the senses. What role this activity plays in brain function is a long-standing enigma in neuroscience. Recent experiments demonstrate that spontaneous activity both in the intact brain and in vitro has statistical properties expected near the critical point of a phase transition, a phenomenon called neuronal avalanches. Here we demonstrate in experiments and simulations that cortical networks which display neuronal avalanches benefit from maximized dynamic range, i.e. the ability to respond to the greatest range of stimuli. Our findings (1) show that the spontaneously active brain and its ability to process sensory input are unified in the context of critical phenomena, and (2) support predictions that a brain operating at criticality may benefit from optimal information processing.

Comments: main text: 4 pages, 4 figures; supplementary materials: 3 pages, 4 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:0906.0527v2 [q-bio.NC]

Pigs, Swans, and Pseudoscience

This post is part of Roda de Ciência, April's theme. Please leave your comments there.

BLACK SWAN THEORY

1. Before Australia was discovered, all the swans in the world were white. Australia, home of the black swan (Cygnus atratus), showed the possibility of an exception hidden from us, one we had no idea existed.

2. My black swan is not a bird but an event with three characteristics: first, it is highly unexpected; second, it has great impact; and third, after it happens, we look for an explanation that makes it seem less random and more predictable.

3. The black swan explains almost everything in the world, such as the First World War. It was unpredictable, but after it happened its causes seemed obvious to everyone. The same happened with the Second World War. These facts demonstrate humanity's inability to predict major events.

4. More recently, the internet is a black swan. It emerged as a military communication tool and transformed the world very quickly. Nobody imagined that possibility.

5. The discoveries that had a strong impact on humanity were accidents along the way; that is, the scientists were looking for something else, as with the laser, created to be a kind of radar, not for use in eye surgery.

6. Nobody can know when a black swan will appear, but the essential thing is not to take one's life planning too seriously. Things can change when you least expect it. The “stress test”, one of the risk-management models, evaluates impact that has already occurred, not impact yet to come. Its variables are drawn from the past.

7. The degree of randomness depends on the observer. Randomness is incomplete understanding or incomplete information. Events like September 11, 2001, in New York are not random. In fact, terrorists planned September 11 and knew it was coming.

8. Predicting socio-economic events is very difficult. The track record of such forecasts is garbage. Risk management is garbage. The attempt to determine cause and effect is continually obstructed by unpredictable phenomena. People in finance nurture the illusion that they can predict events, yet they cannot justify their predictions.

9. Recommending stocks to buy is the posture of charlatans. Those who recommend what not to do in the market, rather than what to do, are not charlatans. People can do many things if they know what not to do. If people avoid far-fetched techniques, they will not depend on market forecasts.

10. People should not depend on “measures of risk”, indicators designed to quantify risk. What matters is to hold a portfolio structured so as to carry no downside risk (potential for losses) while keeping upside exposure (potential for gains), because then people can make a lot of money if they run into a black swan.

11. People should not go hunting for the black swan, but once one appears, they should have their exposure to it maximized. People should believe in the possibility of the most unusual thing happening, on the positive side as well as the negative.

12. The black swan is the risk of large events, positive or negative. Some things may be volatile, but they are not necessarily black swans.

Interview with Nassim Nicholas Taleb, American author of “O cisne negro: o impacto do altamente improvável” (“The Black Swan: The Impact of the Highly Improbable”), five weeks on The New York Times best-seller list (Valor, São Paulo, June 4, 2007, p. F14).

One of the good things statistical physicists did, culturally speaking, was to draw people's attention to extreme events, that is, the events in the tail of a statistical distribution. If the tail falls off exponentially (as in the case of the normal distribution), then extreme events are very rare and we can neglect them in a risk analysis. But if the tail falls off as a power law, then extreme events cannot be neglected; they have to be taken into account.
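To put numbers on that contrast, here is a quick sketch comparing the tail probability P(X > x) of a standard normal with that of a power law (the exponent 1.5 is an illustrative choice, echoing the avalanche exponent above):

```python
import math

# Tail probability P(X > x) for a standard normal versus a power law,
# to make the "extreme events" point quantitative.
def normal_tail(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2))

def power_law_tail(x: float, alpha: float = 1.5, xmin: float = 1.0) -> float:
    return (x / xmin) ** -alpha        # Pareto survival function, for x >= xmin

for x in (2, 5, 10, 20):
    print(f"x={x:>2}: normal {normal_tail(x):.2e}   power law {power_law_tail(x):.2e}")
```

At x = 10, the normal tail is around 10^-24 while the power-law tail is still about 3%: the difference between an event that never happens and one you must plan for.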
Of course, mathematical statisticians already knew this, but it was the physicists who popularized the idea (among scientists), because they know how to sell their wares, so to speak. Science popularization for the general public, however, was not as successful. OK, excellent books appeared in English, such as Mark Buchanan's Ubiquity and Philip Ball's Critical Mass, but these books were never translated into Portuguese (I really don't understand why).
When you walk into a bookstore, right after the self-help and new age shelves, one of the most popular sections is pop business and marketing books. Has anyone analyzed this literature in terms of its contribution to science popularization? OK, I know that a reader of O Gerente Quântico will not come away with an accurate picture of quantum physics, but he will be less ignorant about quantum mechanics than a Homer Simpson-type manager who never read the book. Or not?
If you think about it, even pseudoscience books help to popularize science. Talking with my physicist colleagues, I see that a whole generation (myself included) was awakened to a scientific vocation by reading Planeta magazine in the 1970s (back when it was edited by Ignácio de Loyola Brandão, of course!) and the books O Despertar dos Mágicos and Eram os Deuses Astronautas?
As Reinaldo Lopes put it on his new blog Chapéu, Chicote e Carbono 14, if you think about it the Indiana Jones films are all pseudoscientific (lost ark, holy grail, ETs and crystal skulls, etc.) and their portrayal of archaeological research is thoroughly distorted, yet many, many boys and girls became (or dreamed of becoming) archaeologists because of those films. Has anyone noticed that the awakening of scientific vocations is nonlinear, that an entire science museum often accomplishes nothing while a single Isaac Asimov short story can be decisive?

Netwars


That's how it is, there's no way around it: anything you invent (fire, knives, clothing, boots, canned food, penicillin…) can be used as war technology. Even ideas. Especially ideas.

From the NetWars blog, recently discovered:

Welcome

Here’s a brief background: I’ve just completed my masters in International Relations and have been selected for Army OCS. I intend to serve as a career military officer.

So while waiting for BCT and OCS, I’m going to blog.

The focus of this blog is the concept of 4th generation warfare as well as other topics in international relations and political science.

Netwar by John Arquilla and David Ronfeldt provides an introduction to 21st century warfare:

The information revolution is leading to the rise of network forms of organization, with unusual implications for how societies are organized and conflicts are conducted. “Netwar” is an emerging consequence. The term refers to societal conflict and crime, short of war, in which the antagonists are organized more as sprawling “leaderless” networks than as tight-knit hierarchies. Many terrorists, criminals, fundamentalists, and ethno-nationalists are developing netwar capabilities.

Technology amplifies the military capabilities of small social networks and allows them to catastrophically disrupt economies and civilian networks. However, it is possible to target and eliminate hostile networks. We can use many tools such as game theory, network theory, and systems analysis to understand the threat and counter it.

The motto of this blog is nil desperandum. Victory is possible.

Clausewitz's law holds for terrorist attacks

On Clausewitz, see here.

Via the Physics ArXiv Blog (who could KFC be? An unemployed physicist? It seems impossible for a professor or student to have that much time to read the papers, blog, and write this well!):

Plot the number of people killed in terrorist attacks around the world since 1968 against the frequency with which such attacks occur and you'll get a power law distribution; that's a fancy way of saying the data fall on a straight line when both axes have logarithmic scales.

The question, of course, is why? Why not a normal distribution, in which there would be many orders of magnitude fewer extreme events?

Aaron Clauset and Frederik Wiegel have built a model that might explain why. The model makes five simple assumptions about the way terrorist groups grow and fall apart and how often they carry out major attacks. And here's the strange thing: this model almost exactly reproduces the distribution of terrorist attacks we see in the real world.

These assumptions are things like: terrorist groups grow by accretion (absorbing other groups) and fall apart by disintegrating into individuals. They must also be able to recruit from a more or less unlimited supply of willing terrorists within the population.
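A toy rendering of these accretion/disintegration dynamics, in the spirit of the model but with arbitrary probabilities and sizes rather than Clauset and Wiegel's actual parameters:

```python
import random

random.seed(42)

# Toy fission-fusion dynamics: groups grow by merging with other groups
# and occasionally fall apart into individuals. Illustrative parameters only.
groups = [1] * 1000            # start as isolated individuals
attack_severities = []

for step in range(200_000):
    if random.random() < 0.99 and len(groups) > 1:
        # accretion: two randomly chosen groups merge
        i, j = random.sample(range(len(groups)), 2)
        groups[i] += groups[j]
        groups.pop(j)
    else:
        # disintegration: one group shatters into individuals
        k = random.randrange(len(groups))
        groups.extend([1] * groups.pop(k))
    if step % 100 == 0:
        # an "attack" whose severity scales with the acting group's size
        attack_severities.append(random.choice(groups))

print("largest attack severities:", sorted(attack_severities)[-5:])
```

Even this crude version produces a heavy tail of severities: most sampled groups are small, but a few merged giants dominate the extreme events, which is the qualitative shape of the observed distribution.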

Being able to reproduce the observed distribution of attacks with such a simple set of rules is an impressive feat. But it also suggests some strategies that might prevent such attacks or drastically reduce their number. One obvious strategy is to reduce the number of recruits within a population, perhaps by reducing real and perceived inequalities across societies.

Easier said than done, of course. But analyses like these should help to put the thinking behind such ideas on a logical footing.

Ref: arxiv.org/abs/0902.0724: A Generalized Fission-Fusion Model for the Frequency of Severe Terrorist Attacks

Bipartite Networks in Ecology and Economics

Nature 457, 463-466 (22 January 2009)

doi:10.1038/nature07532

Received 5 June 2008; Accepted 10 October 2008; Published online 3 December 2008

A simple model of bipartite cooperation for ecological and organizational networks

Serguei Saavedra1,2,3, Felix Reed-Tsochas2,4 & Brian Uzzi5,6

  1. Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK
  2. CABDyN Complexity Centre,
  3. Corporate Reputation Centre,
  4. James Martin Institute, Saïd Business School, University of Oxford, Oxford OX1 1HP, UK
  5. Kellogg School of Management and Northwestern Institute on Complex Systems, Northwestern University, Evanston, Illinois 60208, USA
  6. Haas School of Business, University of California, Berkeley, California 94720, USA

Correspondence and requests for materials should be addressed to Felix Reed-Tsochas (Email: [email protected]).

In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters [1-4]. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs [5, 6]. Here, building on previous stochastic models of consumer–resource interactions between species [1-3], we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner–partner interactions, as exemplified by plant–animal mutualistic networks [7]. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks [8-10]. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer–contractor interactions exhibits similar structural patterns to plant–animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society [11-14].

The crisis as an avalanche


I really don't know how this guy finds the time to post so often…

the physics arXiv blog


How the credit crisis spread
Posted: 14 Jan 2009 03:17 AM PST

Where did the credit crunch start? According to Reginald Smith at the Bouchet-Franklin Research Institute in Rochester, it began in the property markets of California and Florida in early 2007 and is still going strong.
To help understand how the crisis has evolved, Smith has mapped the way it has spread as reflected in the stock prices of the S&P 500 and NASDAQ-100 companies. The picture above shows how the state of affairs changed between August 2007 and October 2008. Each dot represents a stock price and the colour, its return (green equals bad and red equals catastrophic).
Smith says the problems first emerged in housing stocks, soon followed by finance stocks then mainstream banks before hitting stocks across the board.
The graphic may be dramatic but it shows only how the collapse occurred, not why. That’s much more subtle and is related to the far more complex network of links that exist between the companies involved.
However, the graph does bear a remarkable resemblance to any number of other network-related catastrophes, such as the spread of disease, forest fires and fashion. That's almost certainly because all these events can be described in terms of the physics of self-organised criticality.
Smith says it’ll take years, perhaps decades, to fully understand and analyse the credit crunch. Econophysicists could start by brushing up on their knowledge of self-organised criticality.
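A stock correlation network of the kind Smith analyzes can be sketched in a few lines; here random data stand in for the real S&P 500 return series, and the 0.6 correlation threshold is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal sketch of a stock correlation network: compute pairwise correlations
# of daily returns and keep edges above a threshold.
n_stocks, n_days = 50, 250
returns = rng.normal(0, 0.01, (n_stocks, n_days))
returns += 0.5 * rng.normal(0, 0.01, n_days)   # shared "market mode" couples the stocks

corr = np.corrcoef(returns)                     # each row is one stock's return series
adjacency = (corr > 0.6) & ~np.eye(n_stocks, dtype=bool)
degrees = adjacency.sum(axis=1)
print("edges:", adjacency.sum() // 2, "max degree:", degrees.max())
```

On real data, one would watch how this network's structure (and the coloring by return) changes month by month, which is essentially the animation Smith built.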
Ref: arxiv.org/abs/0901.1392: The Spread of the Credit Crisis: View from a Stock Correlation Network