Posts tagged "Processo de ramificação" (branching process)

The number of neurons in the brain is about six times the number of trees in the Amazon

I did the following calculation: I took the estimate of 86 billion neurons in the brain and compared it with the number of trees suggested by the report below (that is, 85/15 × 2.6 billion). The result is that the brain corresponds to roughly six Amazons (in terms of trees).
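The arithmetic can be checked in a few lines (the 86-billion and 2.6-billion figures are the ones quoted in this post and in the report below; everything else follows from them):

```python
neurons = 86e9   # estimated neurons in a human brain
felled = 2.6e9   # trees felled in the Legal Amazon up to 2002 (= 15% of the original)

# If 2.6 billion trees correspond to the 15% already cleared,
# the 85% still standing corresponds to:
standing = felled * (85 / 15)   # ~14.7 billion trees

ratio = neurons / standing
print(round(ratio, 1))  # -> 5.8, i.e. roughly six Amazons
```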

I think this is an important comparison for anyone who wants to understand, model, or reproduce a brain. Would you accept such a task knowing it is harder than modeling the Amazon?

PS: Yes, I have long cherished the idea that the best metaphor for a brain is a forest, not a computer. I suspect that if we apply ideas of parallel computation by agents, we will end up finding that forests compute (for example, the synchronization of the ipê trees, deciding when to emit the aerosols that nucleate raindrops and make it rain over the forest, etc.). Granted, it is computation in slow motion (which is why we do not see it).

PS2: Norberto Cairasco has also been puzzling over the similarities between the dendrites of neurons and the branches of trees. He thinks there may be some evolutionary convergence toward certain functions, albeit at different scales.

Approximately 2.6 billion trees were felled in the Legal Amazon by 2002

 

01/06/2011 – 11h09

Reporter for Agência Brasil

Rio de Janeiro – About 15% of the original vegetation of the Legal Amazon had been cleared by 2002, equivalent to the removal of approximately 2.6 billion trees and the deforestation of an area of 600,000 square kilometers. This scenario corresponds to the destruction of 4.7 billion cubic meters of wood from an area that originally comprised 4 million square kilometers of forest.

SOC and Cancer

Worth a look…

Self-Organized Criticality: A Prophetic Path to Curing Cancer

J. C. Phillips
(Submitted on 28 Sep 2012)

While the concepts involved in Self-Organized Criticality have stimulated thousands of theoretical models, only recently have these models addressed problems of biological and clinical importance. Here we outline how SOC can be used to engineer hybrid viral proteins whose properties, extrapolated from those of known strains, may be sufficiently effective to cure cancer.

Subjects: Biomolecules (q-bio.BM)
Cite as: arXiv:1210.0048 [q-bio.BM]
(or arXiv:1210.0048v1 [q-bio.BM] for this version)

New paper on cellular automata and the Fermi Paradox

A new paper is out on the percolation hypothesis for the Fermi Paradox, using three-dimensional cellular-automaton simulations. This time, the authors conclude that the simulations do not support the hypothesis.

Well, I don't think this is the end of the story. I already knew that, for the hypothesis to work, the diffusion would have to be critical (that is, forming a critical or slightly supercritical cluster of occupied planets).

In other words, the hypothesis needs to be supplemented with some argument for why the diffusion should be critical. Well, since critical systems are abundant in social and biological processes, I think it is enough to find that criticality factor to justify the model. My heuristic would be…

Scale Invariance in the Motor System and Neuronal Avalanches

Thinking about this paper…

Scale invariance in the dynamics of spontaneous behavior

Author(s): Alex Proekt, Jayanth R. Banavar, Amos Maritan, Donald W. Pfaff
Source: Proceedings of the National Academy of Sciences USA, Volume 109, Issue 26, Pages 10564-10569, DOI: 10.1073/pnas.1206894109, Published: June 26, 2012
Abstract: Typically one expects that the intervals between consecutive occurrences of a particular behavior will have a characteristic time scale around which most observations are centered. Surprisingly, the timing of many diverse behaviors from human communication to animal foraging form complex self-similar temporal patterns reproduced on multiple time scales. We present a general framework for understanding how such scale invariance may arise in nonequilibrium systems, including those that regulate mammalian behaviors. We then demonstrate that the predictions of this framework are in agreement with detailed analysis of spontaneous mouse behavior observed in a simple unchanging environment. Neural systems operate on a broad range of time scales, from milliseconds to hours. We analytically show that such a separation between time scales could lead to scale-invariant dynamics without any fine tuning of parameters or other model-specific constraints. Our analyses reveal that the specifics of the distribution of resources or competition among several tasks are not essential for the expression of scale-free dynamics. Rather, we show that scale invariance observed in the dynamics of behavior can arise from the dynamics intrinsic to the brain.
Accession Number: WOS:000306291400092

Probability of an event bigger than "September 11" exceeds 95%

Statisticians Calculate Probability Of Another 9/11 Attack

According to the statistics, there is a 50 per cent chance of another catastrophic terrorist attack within the next ten years


THE PHYSICS ARXIV BLOG

Wednesday, September 5, 2012

Earthquakes are seemingly random events that are hard to predict with any reasonable accuracy. And yet geologists make very specific long term forecasts that can help to dramatically reduce the number of fatalities.

For example, the death toll from earthquakes in the developed world, in places such as Japan and New Zealand, would have been vastly greater were it not for strict building regulations enforced on the back of well-founded predictions that big earthquakes were likely in future.

The problem with earthquakes is that they follow a power law distribution: small earthquakes are common and large earthquakes very rare, but the difference in their power spans many orders of magnitude.

Humans have a hard time dealing intuitively with these kinds of statistics. But in the last few decades statisticians have learnt how to handle them, provided that they have a reasonable body of statistical evidence to go on.

That’s made it possible to make predictions about all kinds of phenomena governed by power laws, everything from earthquakes, forest fires and avalanches to epidemics, the volume of email and even the spread of rumours.

So it shouldn’t come as much of a surprise that Aaron Clauset at the Santa Fe Institute in New Mexico and Ryan Woodard at ETH, the Swiss Federal Institute of Technology, in Zurich have used this approach to study the likelihood of terrorist attacks.
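To get a feel for the kind of power-law reasoning involved, here is a toy sketch; the exponent, threshold and event count below are invented for illustration and are not Clauset and Woodard's fitted values:

```python
# Pareto tail: P(single event >= x) = (x / xmin) ** (1 - alpha).
# alpha, xmin and n_events are illustrative numbers, not fitted values.

def tail_probability(x, xmin=10.0, alpha=2.4):
    """Probability that one event reaches size >= x."""
    return (x / xmin) ** (1.0 - alpha)

def prob_at_least_one(x, n_events, xmin=10.0, alpha=2.4):
    """Probability that at least one of n_events reaches size >= x."""
    p = tail_probability(x, xmin, alpha)
    return 1.0 - (1.0 - p) ** n_events

print(tail_probability(1000.0))         # a single event this big is very unlikely
print(prob_at_least_one(1000.0, 2000))  # ...but over thousands of events it becomes probable
```

This heavy-tail arithmetic is why a sizeable multi-decade probability for another 9/11-scale attack is less outlandish than intuition suggests.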

Another paper (ours) on Neuronal Avalanches

The interesting thing about this work is that the networks are made of inhibitory neurons… Remember to tell Maurício to emphasize this in the dissertation!

This article is part of the supplement: Twentieth Annual Computational Neuroscience Meeting: CNS*2011

Open Access poster presentation

 

Signal propagation and neuronal avalanches analysis in networks of formal neurons

Mauricio Girardi-Schappo1*, Marcelo HR Tragtenberg1 and Osame Kinouchi2

Author Affiliations

1 Departamento de Física, Universidade Federal de Santa Catarina, Florianópolis, SC, 88040-970, Brazil

2 Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP, Brazil


BMC Neuroscience 2011, 12(Suppl 1):P172 doi:10.1186/1471-2202-12-S1-P172

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/12/S1/P172

Published: 18 July 2011

© 2011 Girardi-Schappo et al; licensee BioMed Central Ltd.

Poster presentation

To study neurons with computational tools, one may adopt at least two different approaches: (i) Hodgkin-Huxley-like neurons [1] (i.e. biologically detailed neurons) and (ii) formal neurons (e.g. the Hindmarsh-Rose (HR) model [2], the Kinouchi-Tragtenberg (KT) model [3], etc.). Formal neurons may be represented by ordinary differential equations (e.g. HR) or by maps, which are dynamical systems with continuous state variables and discrete-time dynamics (e.g. KT). Several maps have been proposed to describe neurons [3-10]. Such maps provide a number of computational advantages [10], since there is no need to control the precision of a numerical integrator, which leads to faster calculations.

An extended KT neuron model, here called the KTz model, has been studied in [4] and [5]; it may be supplied with a Chemical Synapse Map (CSM) in order to study interacting neurons in a lattice, within the framework of coupled map lattices. The KTz model presents most of the standard behaviors of excitable cells, such as fast spiking, regular spiking, bursting, plateau action potentials and adaptation phenomena, and the CSM is in good agreement with standard functions used to model post-synaptic currents, such as the alpha function or the double-exponential function [4]. Preliminary results indicate antiferromagnetic oscillatory behavior or plane-wave behavior in KTz neurons coupled with inhibitory CSM on a square lattice.
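For the curious, the KTz map of Kuva et al. [4] is easy to iterate; the sketch below uses illustrative parameter values, not necessarily the ones used in the poster:

```python
import math

def ktz_step(x, y, z, K=0.6, T=0.35, delta=0.001, lam=0.001, xR=-0.5, I=0.0):
    """One discrete-time step of the KTz map [4]. Parameters are illustrative."""
    x_new = math.tanh((x - K * y + z + I) / T)   # fast membrane variable
    y_new = x                                    # recovery variable
    z_new = (1.0 - delta) * z - lam * (x - xR)   # slow variable (bursting/adaptation)
    return x_new, y_new, z_new

x, y, z = -0.5, -0.5, 0.0
trace = []
for _ in range(2000):
    x, y, z = ktz_step(x, y, z)
    trace.append(x)

# The tanh keeps the membrane variable bounded automatically:
print(all(abs(v) <= 1.0 for v in trace))  # -> True
```

Because the state is advanced by a closed-form map, there is no integration step size or precision to tune, which is exactly the computational advantage mentioned above.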

Besides, many systems in nature are characterized by complex behavior in which large cascades of events, named avalanches, unpredictably alternate with periods of little activity (e.g. snow avalanches, earthquakes, etc.). Avalanches are described by power-law distributions, and when the branching parameter equals unity the system is said to be self-organized critical (SOC) [13]. Such avalanches have been observed in neuronal activity in vitro [11,12]. And since both SOC systems and neuronal activity show large variability, long-term stability and memory capabilities, networks of neurons have been proposed to be SOC systems. This hypothesis was tested in [13], where comparisons were made between in vivo recordings using Local Field Potentials in three macaque monkeys performing a short-term memory task and three different well-established subsampled SOC models (the Sandpile model, the Random Neighbour Sandpile model and the Forest Fire model). A similar comparison was made in [14] with in vivo data from fourteen rats and a cellular automaton developed by the authors.
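The branching-parameter criterion can be illustrated with a toy branching process (our sketch, not part of the poster): each active unit excites k ~ Binomial(2, sigma/2) descendants, so sigma is the mean number of descendants per unit, and sigma = 1 marks the critical point where avalanche sizes become broadly (power-law) distributed.

```python
import random

def avalanche_size(sigma=1.0, max_generations=1000):
    """Total activity of one avalanche: each active unit excites
    k ~ Binomial(2, sigma/2) units at the next time step."""
    active, size = 1, 1
    for _ in range(max_generations):
        if active == 0:
            break
        nxt = sum((random.random() < sigma / 2) + (random.random() < sigma / 2)
                  for _ in range(active))
        active = nxt
        size += nxt
    return size

random.seed(1)
sizes = [avalanche_size(sigma=1.0) for _ in range(1000)]
# At criticality most avalanches are tiny but a few are huge:
print(min(sizes), max(sizes))
```

With sigma < 1 the avalanches die quickly; with sigma > 1 they tend to explode; only at sigma = 1 do all scales coexist, which is the signature looked for in the data.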

We point out that, so far, no simulation has checked whether formal or realistic neuron models can evolve naturally to a SOC state, in a full or subsampled network. Our simulations use the KTz model, a formal neuron that nevertheless retains the usual behaviors of living cells, connected through the CSM on a square lattice. We divided the work into two parts: (i) the analysis of the network itself and how it evolves in time from a given initial state, varying its parameters; and (ii) the analysis of the data generated by a network of silent cells stimulated at random sites, designed to resemble the SOC models above. We compare the results of this second part with the experimental ones presented in [11-13].

References

1. Hodgkin A, Huxley A: A quantitative description of membrane current. J Physiol 1952, 117(4):500-544.
2. Hindmarsh JL, Rose RM: A model of neuronal bursting. Proc R Soc Lond B Biol Sci 1984, 221:87-102.
3. Kinouchi O, Tragtenberg MHR: Modeling neurons by simple maps. Int J Bifurcat Chaos 1996, 6:2343-2360.
4. Kuva SM, Lima GF, Kinouchi O, Tragtenberg MHR, Roque AC: A minimal model for excitable and bursting elements. Neurocomputing 2001, 38-40:255-261.
5. Copelli M, Tragtenberg MHR, Kinouchi O: Stability diagrams for bursting neurons. Physica A 2004, 342:263-269.
6. Chialvo DR: Generic excitable dynamics on a two-dimensional map. Chaos Solit Fract 1995, 5:461-479.
7. Rulkov NF: Modeling of spiking-bursting neuronal behavior using two-dimensional map. Phys Rev E 2002, 65:041922.
8. Cazelles B, Courbage M, Rabinovich M: Anti-phase regularization. Europhys Lett 2001, 56:504-509.
9. Laing CR, Longtin A: A two variable model of somatic-dendritic interactions. Bull Math Biol 2002, 64:829-860.
10. Izhikevich EM, Hoppensteadt F: Classification of bursting mappings. Int J Bifurcat Chaos 2004, 14(11):3847-3854.
11. Beggs JM, Plenz D: Neuronal avalanches in neocortical circuits. J Neurosci 2003, 23(35):11167-11177.
12. Beggs JM, Plenz D: Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J Neurosci 2004, 24(22):5216-5229.
13. Priesemann V, Munk MHJ, Wibral M: Subsampling effects in neuronal avalanche. BMC Neurosci 2009, 10:40.
14. Ribeiro TL, Copelli M, Caixeta F, Belchior H, Chialvo DR, Nicolelis MAL, Ribeiro S: Spike avalanches exhibit universal dynamics across the sleep-wake cycle. PLoS One 2010, 5(11):e14129.

Kindness begets kindness

It seems that being kind really does work!

Dear Osame,
Thank you for your explanation; my understanding of your paper has improved a lot with your help. Your warm heart impresses me!
Be happy and healthy.
Z.

Dear Z.,

The model studied is a general one, that is, an excitable medium with probabilistic couplings. The level at which we can apply such a model depends on the interest of the researcher: the elements could be persons in an epidemiological model (so our model would be a probabilistic SIRS model), a neuronal network model (with excitatory couplings), a model of a glomerulus in the olfactory bulb (the particular application that we made in the paper), a mean-field model of a dendritic arbor (see reference below) or even a model of sensor networks of bacteria (to model bacterial chemotaxis).

The particular level at which you wish to apply the model will constrain and set the acceptable parameter ranges. If you are interested in modeling excitatory networks of neurons, you are right that one should use n=3 or n=4, so that the refractory period is similar to the spike width.

As you can see in Eq. (3), the refractory time governed by n affects the results only quantitatively, not qualitatively. We studied all the cases from n=3 (that is, if the spike lasts 1 ms, then the refractory period is 1 ms) up to n=10, but reported only the n=10 case because we were indeed interested in large refractory periods in the glomerulus (the particular application we made in the final part of the paper).

As stated on page 349 of the paper, we also studied the case with asymmetrical p_ij and found no difference. The reference to synchronization phenomena in the glomerulus is made as evidence of the presence of gap junctions in that system (stronger evidence is by now available from the recent direct observation of such electrical synapses). If we apply external inputs to the system, synchronization appears, as can be seen in Figs. 2c and 2d.

This synchronization under inputs is what is observed in the experimental papers. Only the spontaneous activity is in the form of avalanches, as found in the experiments by Plenz. Our couplings are fast in the sense that there are no delay times at the couplings: when a site is excited, its neighbours can be excited at the following time step, without delay.

I hope these observations are useful for your interests.

Presently we are working on dendritic computation, with a similar model on a tree structure; see here and here. In this model the refractory period is small and the couplings vary from the symmetric case to the fully asymmetric case.

Best regards,

Osame

Dear Osame,
I’m sorry I did not express myself clearly. My question is not about the simulation, but about the physical meaning of your cellular automaton model. It seems not so reasonable.

First, in your model there is a very long refractory period for each cell, but in real neurons the refractory period is usually very short. So I wonder what makes you adopt such an adventurous hypothesis.

Second, in your paper you mention the electrical synapse many times. Electrical synapses have two properties: they are fast and symmetrical. But I cannot figure out what ingredient in your model represents the property of being “fast”. As for symmetry, you assume that p_ij = p_ji; but I don’t know whether the network can still perform so well without that symmetry. Have you run such a simulation? What was the result?

What’s more, still about the electrical synapse: you cite some articles about electrical synapses and network synchronization in your paper. But I’m afraid I still cannot figure out the relationship between the contents of those papers and the content of your own paper. It seems that there is nothing about network synchronization in your paper.

Best wishes,
Z.

Dear Z.,
I am not sure what your question is. The model is simply a generalized Greenberg-Hastings cellular automaton on a random network where the connections p_ij are drawn from a simple uniform distribution on [0, pmax]. Notice, however, that the mean-field calculation assumes p_ij = p (homogeneous network), and this approximation seems to describe the behavior very well.

If you are having any difficulty reproducing the results, I can send you more details about the exact simulation procedure.

Cheers,

Osame
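For readers who want to experiment, here is a minimal sketch of the kind of model discussed in this exchange: a probabilistic Greenberg-Hastings-style automaton with n states and couplings p_ij drawn uniformly from [0, pmax]. This is our reconstruction from the description above (on a ring, for brevity), not the paper's original code, and the stimulus rate is an arbitrary illustrative choice.

```python
import random

def step(states, neighbors, p, n=5, stim=0.001):
    """One update: state 0 = quiescent, 1 = excited, 2..n-1 = refractory."""
    new = []
    for i, s in enumerate(states):
        if s == 0:
            excited = random.random() < stim  # weak external stimulus
            for j in neighbors[i]:
                if states[j] == 1 and random.random() < p[(i, j)]:
                    excited = True
            new.append(1 if excited else 0)
        else:
            new.append((s + 1) % n)  # deterministic excited -> refractory -> quiescent cycle
    return new

random.seed(42)
N, pmax = 200, 0.8
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring lattice
p = {(i, j): random.uniform(0, pmax) for i in range(N) for j in neighbors[i]}

states = [0] * N
activity = []
for _ in range(500):
    states = step(states, neighbors, p)
    activity.append(states.count(1))

print(sum(activity) > 0)  # spontaneous activity does occur -> True
```

Replacing the ring with a random graph and tuning the mean coupling toward the critical branching ratio reproduces the setting of the Nature Physics paper mentioned below.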

-----Original Message-----
From: “Z. B.”
Sent: 08/12/2011 07:20:43

Subject: A question about your paper

Dear Prof. Kinouchi,

I’m a Chinese student, and recently I have been reading your paper “Optimal dynamical range of excitable networks at criticality”, published in Nature Physics. However, I’m really puzzled by the model you proposed: where does it come from, and how did you conceive it? Could you explain it to me?

Thanks!

Best wishes,

Z.

The complex network of culinary flavors

Well, the new version of the paper from Barabási’s group, with a longer discussion of our paper on Gastronophysics (see the comments at New Scientist, the arXiv Blog and Folha de São Paulo), turned out much better! It should be published in Scientific Reports, from the NATURE group.

arXiv blog

Flavour Networks Shatter Food Pairing Hypothesis

Recipe networks give the lie to the idea that ingredients that share flavours taste better together

KFC 11/29/2011


Some years ago, while experimenting with salty foods and chocolate, the English chef Heston Blumenthal discovered that white chocolate and caviar taste rather good together. To find out why, he had the foods analysed and discovered that they had many flavour compounds in common.

He went on to hypothesise that foods sharing flavour ingredients ought to combine well, an idea that has become known as the food pairing hypothesis. There are many examples where the principle holds such as cheese and bacon; asparagus and butter; and in some modern restaurants chocolate and blue cheese, which apparently share 73 flavours.

But whether the rule is generally true has been hotly debated.

Today, we have an answer thanks to the work of Yong-Yeol Ahn at Harvard University and a few friends. These guys have analysed the network of links between the ingredients and flavours in some 56,000 recipes from three online recipe sites: epicurious.com, allrecipes.com and the Korean site menupan.com.

They grouped the recipes into geographical groups and then studied how the foods and their flavours are linked.

Their main conclusion is that North American and Western European cuisines tend towards recipes with ingredients that share flavours, while Southern European and East Asian recipes tend to avoid ingredients that share flavours.

In other words, the food pairing hypothesis holds in Western Europe and North America. But in Southern Europe and East Asia a converse principle of antipairing seems to be at work.

Ahn and co also found that the food pairing results are dominated by just a few ingredients in each region. In North America these are foods such as milk, butter, cocoa, vanilla, cream, and egg. In East Asia they are foods like beef, ginger, pork, cayenne, chicken, and onion. Take these out of the equation and the significance of the group’s main results disappears.

That backs another idea common in food science: the flavour principle. This is the notion that the difference between regional cuisines can be reduced to just a few ingredients. For example, paprika, onion and lard is a pretty good signature of Hungarian cuisine.

Ahn and co’s study suggests that dairy products, wheat and eggs define North American cuisine while East Asian food is dominated by plant derivatives such as soy sauce, sesame oil, rice and ginger.

Ahn and co conclude by discussing what their network approach can say about the way recipes have evolved. They imagine a kind of fitness landscape in which ingredients survive according to their nutritional value, availability, flavour and so on. For example, good antibacterial properties may make some spices ‘fitter’ than others and so more successful in this landscape.

Others have also looked at food in this way but Ahn and co bring a bigger data set and the sharper insight it provides. They say their data contradicts some earlier results and that this suggests that better data is needed all round to get a clearer picture of the landscape in recipe evolution.

Given the number of ingredients we seem to eat, the total number of possible recipes is some 10^15 but the number humans actually prepare and eat is a mere 10^6. So an important question is whether there are any quantifiable principles behind our choice of ingredient combinations.

Another intriguing possibility is that this kind of evolutionary approach will reveal more not just about food, but also about the behaviour of the individuals that created it.

Food pairing seems to be one principle operating in some parts of the world. How far antipairing can take us has yet to be seen, although customers at Blumenthal’s restaurant, The Fat Duck, may be among the first to find out.

It’s still early days in the science of food networks. There are surely exciting discoveries ahead.

Ref: arxiv.org/abs/1111.6074: Flavor Network And The Principles Of Food Pairing


The Fermi Paradox refutes Strong Artificial Intelligence

I have a bunch of friends who think that working on the Fermi Paradox is science fiction, yet they keep submitting projects to FAPESP and CNPq to study Artificial Intelligence, robots, etc.

However, the Fermi Paradox is a strong argument against the possibility of building a thinking self-replicating robot (von Neumann probes). If such probes were possible, like the black monoliths of 2001 and 2010, they would already have had time to spread across the whole Galaxy and would already have reached us. Since that has not happened, it is impossible to build von Neumann probes, that is, Strong Artificial Intelligence.

Fermi paradox

From Wikipedia, the free encyclopedia

A graphical representation of the Arecibo message – Humanity’s first attempt to use radio waves to actively communicate its existence to alien civilizations

The Fermi paradox (Fermi’s paradox or Fermi-paradox) is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence for, or contact with, such civilizations.

The age of the universe and its vast number of stars suggest that if the Earth is typical, extraterrestrial life should be common.[1] In an informal discussion in 1950, the physicist Enrico Fermi questioned why, if a multitude of advanced extraterrestrial civilizations exists in the Milky Way galaxy, evidence such as spacecraft or probes is not seen. A more detailed examination of the implications of the topic began with a paper by Michael H. Hart in 1975, and it is sometimes referred to as the Fermi–Hart paradox.[2] Other common names for the same phenomenon are Fermi’s question (“Where are they?”), the Fermi Problem, the Great Silence,[3][4][5][6][7] and silentium universi[7][8] (Latin for “the silence of the universe”; the misspelling silencium universi is also common).

There have been attempts to resolve the Fermi paradox by locating evidence of extraterrestrial civilizations, along with proposals that such life could exist without human knowledge. Counterarguments suggest that intelligent extraterrestrial life does not exist or occurs so rarely or briefly that humans will never make contact with it.

Starting with Hart, a great deal of effort has gone into developing scientific theories about, and possible models of, extraterrestrial life, and the Fermi paradox has become a theoretical reference point in much of this work. The problem has spawned numerous scholarly works addressing it directly, while questions that relate to it have been addressed in fields as diverse as astronomy, biology, ecology, and philosophy. The emerging field of astrobiology has brought an interdisciplinary approach to the Fermi paradox and the question of extraterrestrial life.


Basis

The Fermi paradox is a conflict between an argument of scale and probability and a lack of evidence. A more complete definition could be stated thus:

The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist.
However, this hypothesis seems inconsistent with the lack of observational evidence to support it.

The first aspect of the paradox, “the argument by scale”, is a function of the raw numbers involved: there are an estimated 200–400 billion[9] (2–4 × 10^11) stars in the Milky Way and 70 sextillion (7 × 10^22) in the visible universe.[10] Even if intelligent life occurs on only a minuscule percentage of planets around these stars, there might still be a great number of civilizations extant in the Milky Way galaxy alone. This argument also assumes the mediocrity principle, which states that Earth is not special, but merely a typical planet, subject to the same laws, effects, and likely outcomes as any other world.

The second cornerstone of the Fermi paradox is a rejoinder to the argument by scale: given intelligent life’s ability to overcome scarcity, and its tendency to colonize new habitats, it seems likely that at least some civilizations would be technologically advanced, seek out new resources in space and then colonize first their own star system and subsequently the surrounding star systems. Since there is no conclusive or certifiable evidence on Earth or elsewhere in the known universe of other intelligent life after 13.7 billion years of the universe’s history, we have a conflict requiring resolution. Possible resolutions include that intelligent life is rarer than we think, or that our assumptions about the general behavior of intelligent species are flawed.

The Fermi paradox can be asked in two ways. The first is, “Why are no aliens or their artifacts physically here?” If interstellar travel is possible, even the “slow” kind nearly within the reach of Earth technology, then it would only take from 5 million to 50 million years to colonize the galaxy.[11] This is a relatively small amount of time on a geological scale, let alone a cosmological one. Since there are many stars older than the Sun, or since intelligent life might have evolved earlier elsewhere, the question then becomes why the galaxy has not been colonized already. Even if colonization is impractical or undesirable to all alien civilizations, large-scale exploration of the galaxy is still possible; the means of exploration and theoretical probes involved are discussed extensively below. However, no signs of either colonization or exploration have been generally acknowledged.

The argument above may not hold for the universe as a whole, since travel times may well explain the lack of physical presence on Earth of alien inhabitants of far away galaxies. However, the question then becomes “Why do we see no signs of intelligent life?” since a sufficiently advanced civilization[Note 1] could potentially be observable over a significant fraction of the size of the observable universe.[12] Even if such civilizations are rare, the scale argument indicates they should exist somewhere at some point during the history of the universe, and since they could be detected from far away over a considerable period of time, many more potential sites for their origin are within range of our observation. However, no incontrovertible signs of such civilizations have been detected.

It is unclear which version of the paradox is stronger.[Note 2]

Name

In 1950, while working at Los Alamos National Laboratory, the physicist Enrico Fermi had a casual conversation while walking to lunch with colleagues Emil Konopinski, Edward Teller, and Herbert York. The men discussed a recent spate of UFO reports and an Alan Dunn cartoon[13] facetiously blaming the disappearance of municipal trashcans on marauding aliens. They then had a more serious discussion regarding the chances of humans observing faster-than-light travel by some material object within the next ten years, which Teller put at one in a million, but Fermi put closer to one in ten. The conversation shifted to other subjects, until during lunch Fermi suddenly exclaimed, “Where are they?” (alternatively, “Where is everybody?”)[14] One participant recollects that Fermi then made a series of rapid calculations using estimated figures (Fermi was known for his ability to make good estimates from first principles and minimal data; see Fermi problem). According to this account, he then concluded that Earth should have been visited long ago and many times over.[14][15]

Creative dreams: galactic civilizations and swine flu

Too lazy to write new posts, I am reproducing here the best posts from the old SEMCIÊNCIA blog on Blogger.

SATURDAY, JULY 11, 2009




A few days ago, I posted about recent research on creativity and dreams. I made a somewhat skeptical comment, saying that if there is a relation between dreams and problem solving, it is probably a biological exaptation, not their primary function.

A few days later, my unconscious played a trick on me, providing an example of a creative dream (in the sense of producing associations between different subjects). In the dream, I realized that the Drake formula for the probability of technological civilizations existing in the Galaxy involves the same kind of reasoning used in the calculation of the (swine) flu mortality rate by the Brazilian Ministry of Health.

Let me explain: in the Brazilian Influenza Pandemic Preparedness Plan (3rd version, April 2006), there is a static model for estimating the number of deaths in a flu season. It can be summarized by the formula (see the diagram in the figure):

Number of Deaths = Po × Pag × Pc × Pat × N, where

  • Po = probability of death
  • Pag = probability of worsening
  • Pc = probability of complication
  • Pat = attack probability
  • N = number of individuals in the population.
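
The static model is just a chain of conditional probabilities multiplied by the population size. A minimal sketch in Python (the parameter values below are hypothetical illustrations, not the plan's actual estimates):

```python
def expected_deaths(n_population, p_attack, p_complication, p_worsening, p_death):
    """Static pipeline: each stage keeps only a fraction of the previous one.

    N -> attacked -> complicated -> worsened -> deaths
    """
    return n_population * p_attack * p_complication * p_worsening * p_death

# Hypothetical illustration: 1 million people, 30% attack rate, 10% of cases
# with complications, 10% of those worsening, 10% of those dying.
deaths = expected_deaths(1_000_000, 0.30, 0.10, 0.10, 0.10)
print(deaths)  # about 300 expected deaths
```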

I had seen this figure a few days earlier, and it reappeared during the dream. Then I realized that the reasoning is identical to that used in the Drake formula, namely:

N = R* × fp × ne × fl × fi × fc × L, where:

  • N is the number of extraterrestrial civilizations in our galaxy with which we might have a chance of establishing communication.
  • R* is the rate of star formation in our galaxy
  • fp is the fraction of those stars that have orbiting planets
  • ne is the average number of planets that could potentially support life, per star that has planets
  • fl is the fraction of planets with the potential for life that actually develop life
  • fi is the fraction of life-bearing planets that develop intelligent life
  • fc is the fraction of planets with intelligent life that have the desire and the means to establish communication
  • L is the expected lifetime of such a civilization
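
The structural identity between the two formulas is easy to see in code: both multiply a rate (or a population) by a chain of conditional fractions. A sketch with made-up illustrative parameter values (the true values are, of course, the whole debate):

```python
from functools import reduce

def drake(R_star, fractions, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L.

    `fractions` bundles the chain [fp, ne, fl, fi, fc]; exactly the same
    product-of-conditionals structure as the Ministry of Health model.
    """
    return R_star * reduce(lambda a, b: a * b, fractions) * L

# Illustrative (hypothetical) values only:
# R* = 7 stars/yr, fp = 0.5, ne = 2, fl = 0.3, fi = 0.01, fc = 0.1, L = 1000 yr
N = drake(7.0, [0.5, 2.0, 0.3, 0.01, 0.1], 1000.0)
print(N)  # a couple of communicating civilizations, with these guesses
```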

In other words, the Drake formula is a static model like the Ministry of Health's. But better, dynamic models can be used, for example the SIR (susceptible-infected-recovered) model.

Then, during the dream, I realized that the theoretical toolkit of epidemiology could be a suitable framework for thinking about the Fermi paradox. After all, the expansion of a civilization through the galaxy can be thought of as a process of infection in a population of “susceptible” planets, i.e., planets capable of harboring life. The lifetime of a planetary civilization capable of producing colonies corresponds to the infection time. The planet enters the “recovered” state when its civilization dies or becomes indifferent to founding new colonies. I think you get the picture (yes, we are the viruses of the Universe!).

Thus, the SIR model could provide a new answer (yet another?) to the question of why we are alone (at least in our neighborhood). In the SIR model, depending on the infection rate and the duration of the infectious stage, the epidemic wave reaches only a fraction of all the sites (green curve above). Moreover, at any given moment only a small fraction of sites are active (living civilizations, red curve).

We would thus have a succession of epidemic waves (galactic colonizations) that by no means reach the majority of planets at any given time. It is even possible that we are living in an inter-epidemic era, between two great civilizational waves: this would explain the “great silence.”
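
The qualitative claim, that an epidemic (or colonization) wave reaches only a fraction of the susceptible sites, can be checked with the classic mean-field SIR final-size relation, r = 1 − exp(−R0·r), solved here by fixed-point iteration. A minimal sketch; the R0 values are purely illustrative:

```python
import math

def final_epidemic_size(R0, iterations=1000):
    """Fraction of susceptible sites ever reached by a mean-field SIR wave.

    Solves the final-size relation r = 1 - exp(-R0 * r) by fixed-point
    iteration. For R0 <= 1 the wave dies out (r -> 0); for R0 > 1 it
    reaches a fraction strictly less than 1 of the sites.
    """
    r = 0.5  # initial guess
    for _ in range(iterations):
        r = 1.0 - math.exp(-R0 * r)
    return r

# Even an aggressive colonization wave (R0 = 3) misses ~6% of the planets;
# a milder one (R0 = 1.5) leaves more than 40% of them untouched.
print(final_epidemic_size(1.5))  # roughly 0.58
print(final_epidemic_size(3.0))  # roughly 0.94
```

The design point is that no finite R0 ever gives r = 1: a colonization wave always leaves uncolonized planets behind, which is the loophole the post exploits.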

I realized all this and discussed it with someone during the dream. So I must concede that dreams do sometimes generate new associations.

The key point of the model is that the lifetime of a technological civilization on a given planet (or rather, of a fertile civilization capable of producing colonies) must be finite. It is known that, due to the warming of the Sun, our biosphere is expected to die in about 500 million years; this provides an upper bound. And after all, nobody lives forever, right?

PS: OK, OK, it would be better to use a SIR model on a lattice, but never mind: the traditional mean-field SIR model, in which distances and spatial correlations do not matter, would correspond to the optimistic case of civilizations with warp drive.

From the Ministry of Health plan:

a) Static model for generating global estimates:

To generate global estimates, a static model was built representing the flow of individuals through five categories, as illustrated in the figure below. This model follows the same general structure as the models proposed for the Netherlands (Genugten, 2002).

Depending on the virulence of the pathogen, a larger or smaller fraction of this population of flu cases will go on to develop more complicated conditions, for example: acute bronchitis, pneumonia, sinusitis, otitis, or exacerbation of chronic conditions. Without adequate treatment, a fraction of these “complicated cases” will progress to a severe condition, and a fraction of those will die.

Posted by Osame Kinouchi at Saturday, July 11, 2009

4 comments:

generic cialis said… Dreams are a mystery that sometimes people do not understand and I find it interesting that you took the dream that made you change your mind because I also think they may be the solution to the problem … thanks for the article

10:08 PM, June 25, 2010

Sebastian said… Osame: Skeptics say that mystics are dreamers who live in an unreal world.

But with this formula I think the skeptical rationalists are now scooting over a little, while the mystics beckon them closer.

Cheers.

1:06 PM, June 26, 2010

Osame Kinouchi said… I don't know why some readers of this blog got it into their heads that I belong to the skeptics movement. I don't; I am just a scientist who believes in many things not yet demonstrated by science, such as self-organized criticality in neural networks, Cosmological Darwinism, or that Mahayana Buddhism is a syncretism of Buddhism and Christianity…

6:00 PM, June 26, 2010

Sidarta Ribeiro said…

Osame,

Very nice post, a beautiful example of dream insight in science. I believe dreams are neither isolated puzzle pieces nor linear chains of memories, but a concatenation of representations according to the dreamer's dominant emotions. New ideas necessarily come from the recombination of old ideas. Dreams are blind oracles that create future scenarios based only on past experience, guiding waking actions so as to maximize adaptation to the environment. This oneiric aspect of predicting the future, or more exactly of speculating about the future, is probably the explanation for the widespread belief in dream premonition in many past societies. Although probabilistic, dreams sometimes predict future events very precisely. This is a rare phenomenon in modern society, but dream diviners played an important historical role in the civilizations of Antiquity. Today, dream interpretation remains quite relevant in many so-called “primitive” cultures.

How can the materialist explanation of dreams be reconciled with the premonitory function attributed to them by so many different traditions? The meeting point is the reactivation and recombination of memories during sleep, which feed the dream narrative. To experience it subjectively, reverberating patterns of neural activity is not enough. They must be concatenated in a dopamine-mediated pursuit of wish satisfaction, so as to simulate a plausible behavioral sequence capable of inserting itself into a potential future that includes both the environment and the dreamer. Governed by emotions and motivations, the dream allows the simulation of possible futures, all the clearer and more probable the more striking and predictable the challenges of waking life. In this conception, the primitive function of dreams is the simulation of behavioral strategies, adaptive or not. By rewarding the neural circuits of good dreams and punishing the circuits underlying nightmares, it is possible to learn during the night without the risks of reality.

Strong selective pressures on crucial behaviors must shape the dream narrative in a Darwinian way, stereotyping mnemonic reverberation in direct relation to survival. Presumably the dream narratives of animals free in nature consist of a few storylines repeated to exhaustion, with countless variations on the same themes: preying and being preyed upon, courting and procreating, navigating to forage, and parental care. Even for our hominid ancestors of 500 thousand years ago, already equipped with weapons and fire, life was dangerous and could end badly at any moment. It was only with the advent of animal husbandry, agriculture and shamanic medicine that we began to free ourselves from the narrow limits of necessity.

As human life became easier and more complex, with the development of culture and its comforts, dreams lost much of their predictive power and acquired a highly diversified symbolic repertoire. Compared with other mammals, contemporary human beings experience far fewer anxieties in their daily lives. Non-human predators are rare, the law inhibits predation among people, food and health care are accessible, and we live in permanent shelters. Our dreams are no longer under the influence of life-or-death events. Instead, they are dominated by a myriad of small frustrations and prosaic expectations. After culture and the symbol, the dream became small change…

In the absence of highly significant everyday experiences, it is not surprising that contemporary dreams tend to mix recent, trivial elements of waking life with strongly encoded old memories, going all the way back to childhood. Even so, under special circumstances it is possible to reveal the adaptive character of dreams. Scientists and artists have always benefited from this. So, are you going to start keeping a dream journal now?

Big hug,

Sidarta

11:32 PM, July 09, 2010

I don't need a dream journal, Sidarta, I have this blog to record all of it!

Why aren't we an ET colony?

The Fermi Paradox can be summarized as follows: given that most present-day scientists accept that habitable planets are common in the galaxy and that the emergence of Life is a natural process that probably occurs quickly on them (on our planet it took only 400 million years, i.e., practically as soon as the Earth had cooled enough to hold liquid water), why haven't we been colonized by ETs yet? (*)

One of the answers, by O. Kinouchi (unpublished on the arXiv, but already with 7 citations!)

A few more new papers on the subject, involving cellular automata (take a look at this, Sandro!)

Where is everybody? — Wait a moment … New approach to the Fermi paradox

I. Bezsudnov, A. Snarskii
(Submitted on 16 Jul 2010)

The Fermi Paradox is the apparent contradiction between the high probability extraterrestrial civilizations’ existence and the lack of contact with such civilizations. In general, solutions to Fermi’s paradox come down to either estimation of Drake equation parameters i.e. our guesses about the potential number of extraterrestrial civilizations or simulation of civilizations development in the universe. We consider a new type of cellular automata, that allows to analyze Fermi paradox. We introduce bonus stimulation model (BS-model) of development in cellular space (Universe) of objects (Civilizations). When civilizations get in touch they stimulate development each other, increasing their life time. We discovered nonlinear threshold behaviour of total volume of civilizations in universe and on the basis of our model we built analogue of Drake equation.

Comments: 14 pages, 5 figures
Subjects: Popular Physics (physics.pop-ph); Instrumentation and Methods for Astrophysics (astro-ph.IM); Cellular Automata and Lattice Gases (nlin.CG)
Cite as: arXiv:1007.2774v1 [physics.pop-ph]

These guys cited me, nice! Citing one's predecessors properly is part of scientific ethics…

Cellular Automation of Galactic Habitable Zone

Branislav Vukotic, Milan M. Cirkovic
(Submitted on 26 Jan 2010)

We present a preliminary results of our Galactic Habitable Zone (GHZ) 2D probabilistic cellular automata models. The relevant time-scales (emergence of life, it’s diversification and evolution influenced with the global risk function) are modeled as the probability matrix elements and are chosen in accordance with the Copernican principle to be well-represented by the data inferred from the Earth’s fossil record. With Fermi’s paradox as a main boundary condition the resulting histories of astrobiological landscape are discussed.

Comments: 4 pages, one figure, to appear in Publication of the Astronomical Observatory of Belgrade (6th SREAC Meeting proceedings)
Subjects: Galaxy Astrophysics (astro-ph.GA); Cellular Automata and Lattice Gases (nlin.CG)
Cite as: arXiv:1001.4624v1 [astro-ph.GA]

(*) If you believe in conspiracy theories about UFOs, Greys and the US government, consider the following: a) After millions of alleged sightings and abductions, nobody has ever brought back a BIC pen manufactured by ETs, or managed to describe what the bathroom inside a flying saucer looks like; b) The reported ETs seem to have no fear of our microbes, and no scruples about opening the UFO door and contaminating our biosphere with their own microbes… Why? c) We share about 97% of our genes with chimpanzees, so unless the chimps were also created by ETs, we are not extraterrestrial genetic experiments, as so many claim… d) OK, curiously, the film 2001: A Space Odyssey is a (sensational) religious, theistic film made by two skeptics (Kubrick and Arthur C. Clarke). But… it's just a movie, OK? e) No, I am not part of the Great Conspiracy that hides the evidence of UFO presence in Brazil and around the world. Actually, as a teenager I was a ufologist, responsible, together with my friend Sinézio Inácio da Silva Jr., for C.E.F.A Sudeste (Centro de Estudos de Fenômenos Aéro-Espaciais), and I edited the UFO Report and the new-age fanzine Novos Horizontes. My first published article (which is not on my Lattes CV!) is a review of Stephen Hawking's “A Brief History of Time” in Gevaerd's Revista UFO, which, unfortunately, seems to have gone spiritist-ufological… Revista UFO used to be more scientific.

the physics arXiv blog


Astronomers Define New Class Of Planet: The Super-Earth

Posted: 02 Aug 2011 09:10 PM PDT

Rocky planets that are almost as big as Uranus seem far more common than anyone suspected

In our Solar System, planets fall into two types. First, there are the rocky planets like Earth, Mars and Venus, which are similar in size and support gaseous atmospheres. Then there are the gas giants, like Jupiter, Saturn and Uranus. These huge puff balls are two or more orders of magnitude bigger than their rocky cousins.

Perhaps strangest of all, there are no planets in between; nothing that sits on the borderline between rocky minnow and gas giant.

This sharp distinction has driven much of astronomers’ thinking about planet formation. One of the main challenges they have faced is to come up with a theory that explains the formation of two entirely different types of planet, but no hybrids that share characteristics of both.

That thinking will have to change. It now looks as if we’ve been fooled by our own Solar System. When astronomers look elsewhere, this two-tiered planetary division disappears.

Astrophysicists have now spotted more than 500 planets orbiting other stars, and all of these systems seem entirely different to our Solar System. They've seen entirely new classes of planets, such as the Super-Jupiters that are many times larger than our biggest planet, with orbits closer than Mercury's.

But the one we’re interested here has a mass that spans the range from Earth to Uranus, exactly the range that is missing from our Solar System.

Astronomers are calling these new types of planet Super-Earths and they have so far found more than 30 of them.

Today, Nader Haghighipour at the University of Hawaii in Honolulu reviews what we know about Super-Earths and shows they are changing the way astronomers think about planet formation. Their mere existence, for example, should allow astrophysicists to reject a large portion of current theories about planet formation.

Of course, the question about Super-Earths that generates the most interest is whether they can support life. To that end, Haghighipour discusses the possibility that these planets may be rocky with relatively thin atmospheres, that they have dynamic cores that generate a magnetic field and that they may support plate tectonics. Above all, there is the question of whether they can support liquid water.

It makes for fascinating reading. But when all this new information has been absorbed by the community, astronomers will still be left with an important puzzle: why our Solar System is so different from all the others we can see, why it has this sharp distinction in planet type, and what relevance this has to the question of habitability.

This is a mystery that astronomers are only just getting their teeth into.

Ref: http://arxiv.org/abs/1108.0031: Super-Earths: A New Class of Planetary Bodies

Friends and their papers

Two-level Fisher-Wright framework with selection and migration: An approach to studying evolution in group structured populations

Roberto H. Schonmann, Renato Vicente, Nestor Caticha
(Submitted on 23 Jun 2011)

A framework for the mathematical modeling of evolution in group structured populations is introduced. The population is divided into a fixed large number of groups of fixed size. From generation to generation, new groups are formed that descend from previous groups, through a two-level Fisher-Wright process, with selection between groups and within groups and with migration between groups at rate $m$. When $m=1$, the framework reduces to the often used trait-group framework, so that our setting can be seen as an extension of that approach. Our framework allows the analysis of previously introduced models in which altruists and non-altruists compete, and provides new insights into these models. We focus on the situation in which initially there is a single altruistic allele in the population, and no further mutations occur. The main questions are conditions for the viability of that altruistic allele to spread, and the fashion in which it spreads when it does. Because our results and methods are rigorous, we see them as shedding light on various controversial issues in this field, including the role of Hamilton’s rule, and of the Price equation, the relevance of linearity in fitness functions and the need to only consider pairwise interactions, or weak selection. In this paper we analyze the early stages of the evolution, during which the number of altruists is small compared to the size of the population. We show that during this stage the evolution is well described by a multitype branching process. The driving matrix for this process can be obtained, reducing the problem of determining when the altruistic gene is viable to a comparison between the leading eigenvalue of that matrix, and the fitness of the non-altruists before the altruistic gene appeared. This leads to a generalization of Hamilton’s condition for the viability of a mutant gene.

Comments: Complete abstract in the paper. 71 pages, 20 figures
Subjects: Populations and Evolution (q-bio.PE)
Cite as: arXiv:1106.4783v1 [q-bio.PE]

Agent-based Social Psychology: from Neurocognitive Processes to Social Data

(Submitted on 31 May 2010 (v1), last revised 11 Feb 2011 (this version, v2))

Moral Foundation Theory states that groups of different observers may rely on partially dissimilar sets of moral foundations, thereby reaching different moral valuations. The use of functional imaging techniques has revealed a spectrum of cognitive styles with respect to the differential handling of novel or corroborating information that is correlated to political affiliation. Here we characterize the collective behavior of an agent-based model whose inter individual interactions due to information exchange in the form of opinions are in qualitative agreement with data. The main conclusion derived connects the existence of diversity in the cognitive strategies and statistics of the sets of moral foundations and suggests that this connection arises from interactions between agents. Thus a simple interacting agent model, whose interactions are in accord with empirical data on conformity and learning processes, presents statistical signatures consistent with those that characterize moral judgment patterns of conservatives and liberals.

Comments: 11 pages, 4 figures, submitted
Subjects: Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI); Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1005.5718v2 [physics.soc-ph]

Spike Avalanches Exhibit Universal Dynamics across the Sleep-Wake Cycle

(Submitted on 10 Jan 2011)

Scale-invariant neuronal avalanches have been observed in cell cultures and slices as well as anesthetized and awake brains, suggesting that the brain operates near criticality, i.e. within a narrow margin between avalanche propagation and extinction. In theory, criticality provides many desirable features for the behaving brain, optimizing computational capabilities, information transmission, sensitivity to sensory stimuli and size of memory repertoires. However, a thorough characterization of neuronal avalanches in freely-behaving (FB) animals is still missing, thus raising doubts about their relevance for brain function. To address this issue, we employed chronically implanted multielectrode arrays (MEA) to record avalanches of spikes from the cerebral cortex (V1 and S1) and hippocampus (HP) of 14 rats, as they spontaneously traversed the wake-sleep cycle, explored novel objects or were subjected to anesthesia (AN). We then modeled spike avalanches to evaluate the impact of sparse MEA sampling on their statistics. We found that the size distribution of spike avalanches are well fit by lognormal distributions in FB animals, and by truncated power laws in the AN group. The FB data are also characterized by multiple key features compatible with criticality in the temporal domain, such as 1/f spectra and long-term correlations as measured by detrended fluctuation analysis. These signatures are very stable across waking, slow-wave sleep and rapid-eye-movement sleep, but collapse during anesthesia. Likewise, waiting time distributions obey a single scaling function during all natural behavioral states, but not during anesthesia. Results are equivalent for neuronal ensembles recorded from V1, S1 and HP. Altogether, the data provide a comprehensive link between behavior and brain criticality, revealing a unique scale-invariant regime of spike avalanches across all major behaviors.

Comments: 14 pages, 9 figures, supporting material included (published in Plos One)
Subjects: Neurons and Cognition (q-bio.NC); Data Analysis, Statistics and Probability (physics.data-an)
Journal reference: PLoS ONE 5(11): e14129, 2010
DOI: 10.1371/journal.pone.0014129
Cite as: arXiv:1101.2434v1 [q-bio.NC]

Collective oscillations of excitable elements: order parameters, bistability and the role of stochasticity

(Submitted on 31 Jan 2011)

We study the effects of a probabilistic refractory period in the collective behavior of coupled discrete-time excitable cells (SIRS-like cellular automata). Using mean-field analysis and simulations, we show that a synchronized phase with stable collective oscillations exists even with non-deterministic refractory periods. Moreover, further increasing the coupling strength leads to a reentrant transition, where the synchronized phase loses stability. In an intermediate regime, we also observe bistability (and consequently hysteresis) between a synchronized phase and an active but incoherent phase without oscillations. The onset of the oscillations appears in the mean-field equations as a Neimark-Sacker bifurcation, the nature of which (i.e. super- or subcritical) is determined by the first Lyapunov coefficient. This allows us to determine the borders of the oscillating and of the bistable regions. The mean-field prediction thus obtained agrees quantitatively with simulations of complete graphs and, for random graphs, qualitatively predicts the overall structure of the phase diagram. The latter can be obtained from simulations by defining an order parameter q suited for detecting collective oscillations of excitable elements. We briefly review other commonly used order parameters and show (via data collapse) that q satisfies the expected finite size scaling relations.

Comments: 19 pages, 7 figures
Subjects: Neurons and Cognition (q-bio.NC); Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Cellular Automata and Lattice Gases (nlin.CG)
Journal reference: J. Stat. Mech. (2011) P01012
DOI: 10.1088/1742-5468/2011/01/P01012
Cite as: arXiv:1101.6054v1 [q-bio.NC]

Infinite-Period Phase Transition versus Nucleation in a Stochastic Model of Collective Oscillations

(Submitted on 16 Jun 2011 (v1), last revised 21 Jun 2011 (this version, v2))

A lattice model of three-state stochastic phase-coupled oscillators has been shown by Wood et al. [Phys. Rev. Lett. 96, 145701 (2006)] to exhibit a phase transition at a critical value of the coupling parameter, leading to stable global oscillations. In the complete graph version of the model, we show that, upon further increase in the coupling, the average frequency of collective oscillations decreases until an infinite-period (IP) phase transition occurs, at which collective oscillations cease. Above this second critical point, a macroscopic fraction of the oscillators spend most of the time in one of the three states, yielding a prototypical nonequilibrium example (without an equilibrium counterpart) in which discrete rotational (C_3) symmetry is spontaneously broken, in the absence of any absorbing state. Simulation results and nucleation arguments strongly suggest that the IP phase transition does not occur on finite-dimensional lattices with short-range interactions.

Comments: 15 pages, 8 figures
Subjects: Biological Physics (physics.bio-ph); Statistical Mechanics (cond-mat.stat-mech); Chaotic Dynamics (nlin.CD); Data Analysis, Statistics and Probability (physics.data-an)
Cite as: arXiv:1106.3323v2 [physics.bio-ph]

The survival of the lucky mongrel species

Suppose the total number of species has been roughly constant for the last 100 million years or so, i.e., that in terms of species count the biosphere is in a stationary state. This means that, on average, each mother species gives rise to just one daughter species. Note that this holds only on average, that is, a great many branches of the “tree of life” were pruned (went extinct) and many present-day species descend from just a few of the species then alive.

In other words, the tree of life would resemble a critical branching process (see figure). I think we still lack a more mathematical study of what it means for the evolutionary process that the tree of life is critical and fractal.

Now, we know that in such a process the role of random fluctuations becomes decisive. That is, in a critical process there will be plenty of bottlenecks where small events can determine the entire future of the tree. A whole branch full of species could have been wiped off the map if a drought, flood, plague or asteroid (the “extremal events”) had decimated a population. The surviving species would be not the fittest but the luckiest. Unless, of course, one defines the fittest species as the one most robust to environmental variation (rather than fittest in the sense of best adapted to a given environment).
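
The decisive role of luck in a critical branching process is easy to see in a small simulation. Below, each species leaves Binomial(2, 1/2) daughter species, so the mean offspring number is exactly 1 (criticality); standard theory says the survival probability of a founding lineage then decays like 1/generations, so almost every lineage eventually dies out by pure chance. A sketch with arbitrary parameters:

```python
import random

def lineage_survives(rng, generations=50):
    """Run one critical Galton-Watson lineage.

    Each species independently leaves 0, 1 or 2 daughter species with
    probabilities 1/4, 1/2, 1/4 (Binomial(2, 0.5), mean offspring = 1).
    Returns True if the lineage is still alive after `generations`.
    """
    population = 1
    for _ in range(generations):
        population = sum(1 for _ in range(2 * population) if rng.random() < 0.5)
        if population == 0:
            return False
    return True

rng = random.Random(42)
trials = 2000
survivors = sum(lineage_survives(rng) for _ in range(trials))
# Only a small minority of founding lineages are still alive: the lucky ones.
print(survivors / trials)  # typically well below 0.15
```

Every lineage here is statistically identical; the few that survive do so by fluctuation alone, which is exactly the "lucky mongrel" point.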

In this sense, a less specialized, more generalist species, although it may lose in competition with more specialized species in a given environment, would be more robust for being spread over a larger geographic area, having a more varied food supply, and so on.

Thus, it seems that strong selective pressure promotes specialization and therefore increases a species' vulnerability to extinction. Natural selection, like artificial selection, produces less robust varieties (in the sense that a poodle is less robust than a mongrel).

Calling the generalist species “mongrels,” in contrast with “pure” (specialist) species and breeds, it seems obvious that the evolutionary process is dominated by the mongrels. Or rather, by the lucky mongrels that are not run over by the extremal events.

Asteroid Impact Early Warning System Unveiled

Astronomer reveals plans for a network of telescopes that could give up to three weeks’ warning of a city-destroying impact.

At about 3am on 8 October last year, an asteroid the size of a small house smashed into the Earth’s atmosphere over an isolated part of Indonesia. The asteroid disintegrated in the atmosphere causing a 50 kiloton explosion, about four times the size of the atomic bomb used to destroy Hiroshima. The blast was picked up by several infrasound stations used by the Comprehensive Nuclear-Test-Ban Treaty Organization to monitor nuclear tests. Read more [+]