
Our Universe is going to freeze like a supercooled beer…


Finding the Higgs? Good news. Finding its mass? Not so good.

“Fireballs of doom” from a quantum phase change would wipe out present Universe.

by  – Feb 19 2013, 8:55pm

A collision in the LHC’s CMS detector.

Ohio State’s Christopher Hill joked he was showing scenes of an impending i-Product launch, and it was easy to believe him: young people were setting up mats in a hallway, ready to spend the night to secure a space in line for the big reveal. Except the date was July 3 and the location was CERN—where the discovery of the Higgs boson would be announced the next day.

It’s clear the LHC worked as intended and has definitively identified a Higgs-like particle. Hill put the chance of the ATLAS detector having registered a statistical fluke at less than 10^-11, and he noted that wasn’t even considering the data generated by its partner, the CMS detector. But is it really the one-and-only Higgs and, if so, what does that mean? Hill was part of a panel that discussed those questions at the meeting of the American Association for the Advancement of Science.
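Hill's fluke probability is easier to read in the "sigma" units physicists usually quote. A minimal sketch of the conversion (one-sided Gaussian tail, standard library only; the 6.8-sigma figure is my own back-conversion, not a number from the talk):

```python
from math import erfc, sqrt

def p_value(sigma: float) -> float:
    """One-sided Gaussian tail probability for a significance of `sigma`."""
    return 0.5 * erfc(sigma / sqrt(2))

# The conventional 5-sigma discovery threshold is p ~ 2.9e-7; a fluke
# probability below 1e-11 corresponds to a bit more than 6.8 sigma.
print(p_value(5.0))
print(p_value(6.8))
```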

As theorist Joe Lykken of Fermilab pointed out, the answers matter. If current results hold up, they indicate the Universe is currently inhabiting what’s called a false quantum vacuum. If it were ever to reach the real one, its existing structures (including us) would go away in what Lykken called “fireballs of doom.”

We’ll look at the less depressing stuff first, shall we?

Zeroing in on the Higgs

Thanks to the Standard Model, we were able to make some very specific predictions about the Higgs. These include the frequency with which it will decay via different pathways: two gamma-rays, two Z bosons (which further decay to four muons), etc. We can also predict the frequency of similar-looking events that would occur if there were no Higgs. We can then scan each of the decay pathways (called channels), looking for energies where there is an excess of events, or bump. Bumps have shown up in several channels in roughly the same place in both CMS and ATLAS, which is why we know there’s a new particle.
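The bump-hunting logic in this paragraph can be sketched in a few lines: compare observed counts to the expected background, bin by bin, and flag large Poisson excesses. Every number below is invented for illustration; real analyses use far more sophisticated statistics:

```python
from math import sqrt

# Expected background and observed event counts per energy bin (GeV).
# All numbers here are made up for illustration.
bins       = [115, 120, 125, 130, 135]
background = [100.0, 98.0, 95.0, 93.0, 90.0]
observed   = [103,   99,  141,  95,   88]

for e, b, n in zip(bins, background, observed):
    z = (n - b) / sqrt(b)        # naive Poisson significance, in sigma
    flag = "  <-- bump?" if z > 3 else ""
    print(f"{e} GeV: {z:+.1f} sigma{flag}")
```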

But we still don’t know precisely what particle it is. The Standard Model Higgs should have a couple of properties: it should be scalar and should have a spin of zero. According to Hill, the new particle is almost certainly scalar; he showed a graph where the alternative, pseudoscalar, was nearly ruled out. Right now, spin is less clearly defined. It’s likely to be zero, but we haven’t yet ruled out a spin of two. So far, so Higgs-like.

The Higgs is the particle form of a quantum field that pervades our Universe (it’s a single quantum of the field), providing other particles with mass. In order to do that, its interactions with other particles vary—particles are heavier if they have stronger interactions with the Higgs. So, teams at CERN are sifting through the LHC data, checking for the strengths of these interactions. So far, with a few exceptions, the new particle is acting like the Higgs, although the error bars on these measurements are rather large.

As we said above, the Higgs is detected in a number of channels and each of them produces an independent estimate of its mass (along with an estimated error). As of the data Hill showed, not all of these estimates had converged on the same value, although they were all consistent within the given errors. These can also be combined mathematically for a single estimate, with each of the two detectors producing a value. So far, these overall estimates are quite close: CMS has the particle at 125.8 GeV, ATLAS at 125.2 GeV. Again, the error bars on these values overlap.

Oops, there goes the Universe

That specific mass may seem fairly trivial: if it were 130 GeV, would you care? Lykken argued that you probably should. But he took some time to build to that.

Lykken pointed out that, as the measurements mentioned above get more precise, we may find the Higgs isn’t decaying at precisely the rates we expect it to. This may be because we have some details of the Standard Model wrong. Or it could be a sign the Higgs is also decaying into some particles we don’t know about; particles that are dark matter candidates would be a prime choice. The behavior of the Higgs might also provide some indication of why there’s such a large excess of matter in the Universe.

But much of Lykken’s talk focused on the mass. As we mentioned above, the Higgs field pervades the entire Universe; the vacuum of space is filled with it. And, with a value for the Higgs mass, we can start looking into the properties of the Higgs field and thus the vacuum itself. “When we do this calculation,” Lykken said, “we get a nasty surprise.”

It turns out we’re not living in a stable vacuum. Eventually, the Universe will reach a point where the contents of the vacuum are the lowest energy possible, which means it will reach the most stable state possible. The mass of the Higgs tells us we’re not there yet, but are stuck in a metastable state at a somewhat higher energy. That means the Universe will be looking for an excuse to undergo a phase transition and enter the lower state.

What would that transition look like? In Lykken’s words, again, “fireballs of doom will form spontaneously and destroy the Universe.” Since the change would alter the very fabric of the Universe, anything embedded in that fabric—galaxies, planets, us—would be trashed during the transition. When an audience member asked “Are the fireballs of doom like ice-9?” Lykken replied, “They’re even worse than that.”

Lykken offered a couple of reasons for hope. He noted the outcome of these calculations is extremely sensitive to the values involved. Simply shifting the top quark’s mass by two percent, to a value that’s still within the error bars of most measurements, would make for a far more stable Universe.

And then there’s supersymmetry. The news for supersymmetry out of the LHC has generally been negative, as various models with low-mass particles have been ruled out by the existing data (we’ll have more on that shortly). But supersymmetry actually predicts five Higgs particles. (Lykken noted this by showing a slide with five different photos of Higgs taken at various points in his career, in which he was “differing in mass and other properties, as happens to all of us.”) So, when the LHC starts up at higher energies in a couple of years, we’ll actually be looking for additional, heavier versions of the Higgs.

If those are found, then the destruction of our Universe would be permanently put on hold. “If you don’t like that fate of the Universe,” Lykken said, “root for supersymmetry.”

Extrasolar planets, Kepler-62 and the local Fermi Paradox

As the number of extrasolar planets discovered grows, so do the constraints on the predictions of the galactic percolation model (the local Fermi Paradox).
The prediction is that, if we assume that Memetic Biospheres (cultural biospheres, or technospheres) are a likely outcome of Genetic Biospheres, then we should find ourselves in a region with few habitable planets. For if there were nearby planets inhabited by intelligent beings, they would very probably be far more advanced than us, and would already have colonized us.
Since that has not happened (unless one believes ufologists' conspiracy theories, the Jesus-the-ET theories, ancient astronauts, and so on), it follows that the more data astronomers gather, the more evident it will become that our Solar System is an anomaly within our cosmic neighbourhood (1,000 light-years?). In other words, we cannot assume the Copernican Principle for the Solar System: our Solar System is not typical of our neighbourhood. Well, at least this conclusion matches the data collected so far…
Thus, one can predict that further analysis of the planets Kepler-62e and Kepler-62f will reveal that they do not have an atmosphere with oxygen or methane, the signs of a planet with a biosphere.

Persistence solves Fermi Paradox but challenges SETI projects

Osame Kinouchi (DFM-FFCLRP-Usp)
(Submitted on 8 Dec 2001)

Persistence phenomena in colonization processes could explain the negative results of the SETI search while preserving the possibility of a galactic civilization. However, persistence phenomena also indicate that searching for technological civilizations around stars in the neighbourhood of the Sun is a misdirected SETI strategy. This last conclusion is also suggested by a weaker form of the Fermi paradox. A simple model of branching colonization, which includes emergence, decay and branching of civilizations, is proposed. The model could also be used in the context of ant-nest diffusion.

03/05/2013 – 03h10

The possibility of life is not restricted to Earth-like planets, study says


Given the different possible compositions, masses and orbits of planets outside the Solar System, life may not be limited to Earth-like worlds in Earth-like orbits.

Editoria de arte/Folhapress

That is one of the conclusions presented by Sara Seager, of MIT (the Massachusetts Institute of Technology) in the US, in a review article published in the journal “Science”, based on a statistical analysis of the roughly 900 worlds already detected around more than 400 stars.

Seager highlights the possible existence of planets whose atmosphere would be dense enough to keep water liquid at the surface even at temperatures well below Earth's. Read more [+]

Self-organized criticality: a worldview built by physicists

A book to load onto the Kindle that Rita gave me…
The ideas of SOC, or SOqC (self-organized quasi-criticality), are percolating through popular culture, creating powerful cognitive metaphors that help us think about complex systems such as the economy, social systems, ecological systems, etc. They resolve, for example, the old question of which factors matter in history: the great economic forces, the class movements, or the actions of individuals. In this conception, History is thought of as a succession of avalanches of historical events (some overlapping). These avalanches can be of any size and can be triggered even by the action of individuals (Jesus, Marx or Steve Jobs, each corresponding to a grain on the sandpile) within a context of accumulating social tension (economic, class and cultural forces, etc.).
These new thinking tools were developed mainly by physicists (and I think I helped with a few papers; see here, here and here). It seems to me this will be an influential worldview in this century…
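The sandpile metaphor above is literal: in Bak, Tang and Wiesenfeld's model, grains are dropped one at a time and the relaxation produces avalanches with no characteristic size. A minimal sketch (grid size, number of grains and random seed are arbitrary choices):

```python
import random

def topple(grid, L):
    """Relax an L x L Bak-Tang-Wiesenfeld sandpile; return the avalanche
    size (total number of topplings triggered by the last grain)."""
    size = 0
    unstable = [(i, j) for i in range(L) for j in range(L) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:   # grains at the edge fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return size

random.seed(0)
L = 20
grid = [[0] * L for _ in range(L)]
sizes = []
for _ in range(10000):                        # drop grains one at a time
    i, j = random.randrange(L), random.randrange(L)
    grid[i][j] += 1
    sizes.append(topple(grid, L))

# Avalanches of very different sizes coexist: no characteristic scale.
print("largest avalanche:", max(sizes))
```

In the steady state, the avalanche-size distribution is the power law that makes "avalanches of any size" more than a figure of speech.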
Bak’s Sand Pile: Strategies for a Catastrophic World [Kindle Edition]

Ted G. Lewis (Author)



Book Description

Publication Date: December 22, 2011
Did the terrorist attacks on the United States in 2001, the massive power blackout of 2003, Hurricane Katrina in 2005, and the Gulf oil spill of 2010 ‘just happen’, or were these shattering events foreseeable? Do such calamities in fact follow a predictable pattern? Can we plan for the unforeseen by thinking about the unthinkable? Ted Lewis explains the pattern of catastrophes and their underlying cause. In a provocative tour of a volatile world, he guides the reader through mega-fires, fragile power grids, mismanaged telecommunication systems, global terrorist movements, migrating viruses, volatile markets and Internet storms. Modern societies want to avert catastrophes, but the drive to make things faster, cheaper, and more efficient leads to self-organized criticality, the condition of systems on the verge of disaster. This is a double-edged sword. Everything from biological evolution to political revolution is driven by some collapse, calamity or crisis. To avoid annihilation but allow for progress, we must change the ways in which we understand the patterns and manage systems. Bak’s Sand Pile explains how.

New paper on cellular automata and the Fermi Paradox

A new paper has come out on the percolation hypothesis for the Fermi Paradox, in which three-dimensional cellular-automata simulations are used. This time, the authors' conclusion is that the simulations do not support the hypothesis.

Well, I do not think this is the end of the story. I already knew that, for the hypothesis to work, the diffusion would have to be critical (that is, forming a critical, or slightly supercritical, cluster of occupied planets).

In other words, the hypothesis needs to be supplemented with some argument for why the diffusion should be critical. Well, since critical systems are abundant in social and biological processes, I think it is enough to find that criticality factor to justify the model. My heuristic would be: Read more [+]
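The idea of a critical, or slightly supercritical, cluster of occupied planets can be illustrated with ordinary site percolation. In this toy sketch the grid size and occupation probabilities are arbitrary, and the 2D square-lattice threshold (p_c ≈ 0.593) stands in for whatever the threshold of the 3D model is; the point is only how sharply the largest cluster changes across the threshold:

```python
import random

def largest_cluster_fraction(L, p, seed):
    """Site percolation on an L x L grid: fraction of all sites belonging
    to the largest cluster of occupied sites (4-neighbour connectivity)."""
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    seen = [[False] * L for _ in range(L)]
    best = 0
    for i in range(L):
        for j in range(L):
            if not occ[i][j] or seen[i][j]:
                continue
            stack, size = [(i, j)], 0          # iterative flood fill
            seen[i][j] = True
            while stack:
                a, b = stack.pop()
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L and occ[na][nb] and not seen[na][nb]:
                        seen[na][nb] = True
                        stack.append((na, nb))
            best = max(best, size)
    return best / (L * L)

# Below the square-lattice threshold (p_c ~ 0.593) clusters stay small;
# above it, a giant cluster of "occupied planets" appears.
print(largest_cluster_fraction(50, 0.45, seed=1))
print(largest_cluster_fraction(50, 0.75, seed=1))
```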

Scale invariance in the motor system and neuronal avalanches

Thinking about this paper…

Scale invariance in the dynamics of spontaneous behavior

Author(s): Proekt, Alex (1,2); Banavar, Jayanth R. (3); Maritan, Amos (4,5); Pfaff, Donald W. (2)
Source: Proceedings of the National Academy of Sciences of the United States of America, Volume 109, Issue 26, Pages 10564–10569, DOI: 10.1073/pnas.1206894109, Published: Jun 26, 2012
Abstract: Typically one expects that the intervals between consecutive occurrences of a particular behavior will have a characteristic time scale around which most observations are centered. Surprisingly, the timing of many diverse behaviors from human communication to animal foraging form complex self-similar temporal patterns reproduced on multiple time scales. We present a general framework for understanding how such scale invariance may arise in nonequilibrium systems, including those that regulate mammalian behaviors. We then demonstrate that the predictions of this framework are in agreement with detailed analysis of spontaneous mouse behavior observed in a simple unchanging environment. Neural systems operate on a broad range of time scales, from milliseconds to hours. We analytically show that such a separation between time scales could lead to scale-invariant dynamics without any fine tuning of parameters or other model-specific constraints. Our analyses reveal that the specifics of the distribution of resources or competition among several tasks are not essential for the expression of scale-free dynamics. Rather, we show that scale invariance observed in the dynamics of behavior can arise from the dynamics intrinsic to the brain.
Accession Number: WOS:000306291400092
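The abstract's central claim, that a separation between time scales can produce scale-invariant interval statistics without fine tuning, can be illustrated with a toy mixture of exponential processes. The rates and weights below are arbitrary choices, not the paper's model:

```python
from math import exp

# Toy illustration: intervals generated by processes whose rates span
# several decades (rates and weights are arbitrary choices).
rates = [1.0, 0.1, 0.01]             # fast, medium and slow time scales
weights = [1 / 3, 1 / 3, 1 / 3]

def tail(x):
    """P(interval > x) for the equal-weight mixture of exponentials."""
    return sum(w * exp(-r * x) for w, r in zip(weights, rates))

mean = sum(w / r for w, r in zip(weights, rates))   # = 37 time units

# The mixture's tail is far heavier than a single exponential with the
# same mean: very long intervals are not exponentially suppressed.
print(tail(10 * mean))
print(exp(-10.0))
```

With rates spread over many more decades, the mixture approximates a power law over the corresponding range, which is the qualitative mechanism the paper works out analytically.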

The probability of an event bigger than “September 11” exceeds 95%

Statisticians Calculate Probability Of Another 9/11 Attack

According to the statistics, there is a 50 per cent chance of another catastrophic terrorist attack within the next ten years



Wednesday, September 5, 2012

Earthquakes are seemingly random events that are hard to predict with any reasonable accuracy. And yet geologists make very specific long term forecasts that can help to dramatically reduce the number of fatalities.

For example, the death toll from earthquakes in the developed world, in places such as Japan and New Zealand, would have been vastly greater were it not for strict building regulations enforced on the back of well-founded predictions that big earthquakes were likely in future.

The problem with earthquakes is that they follow a power law distribution: small earthquakes are common and large earthquakes very rare, but the difference in their power is many orders of magnitude.

Humans have a hard time dealing intuitively with these kinds of statistics. But in the last few decades statisticians have learnt how to handle them, provided that they have a reasonable body of statistical evidence to go on.

That’s made it possible to make predictions about all kinds of phenomena governed by power laws, everything from earthquakes, forest fires and avalanches to epidemics, the volume of email and even the spread of rumours.
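The standard tool for "handling" power-law data is the maximum-likelihood estimator popularized by Clauset and co-workers: for a continuous power law above x_min, the estimate is alpha = 1 + n / sum(ln(x_i / x_min)). A sketch on synthetic data (all parameters are arbitrary choices for illustration):

```python
import random
from math import log

def sample_power_law(alpha, xmin, n, rng):
    """Inverse-transform sampling of a continuous power law p(x) ~ x^-alpha."""
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def mle_alpha(xs, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    return 1 + len(xs) / sum(log(x / xmin) for x in xs)

rng = random.Random(42)
data = sample_power_law(alpha=2.5, xmin=1.0, n=50000, rng=rng)
print(mle_alpha(data, xmin=1.0))   # close to the true exponent, 2.5
```

The full Clauset-style recipe also estimates x_min and runs goodness-of-fit tests; this sketch shows only the exponent step.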

So it shouldn’t come as much of a surprise that Aaron Clauset at the Santa Fe Institute in New Mexico and Ryan Woodard at ETH, the Swiss Federal Institute of Technology, in Zurich have used this approach to study the likelihood of terrorist attacks.  Read more [+]

The (almost) first post of SEMCIÊNCIA

I am importing SEMCIÊNCIA from Blogger to WordPress (I didn't know it was so easy). I came across this first post, from June 19, 2006. Unfortunately there is a gap in the old posts: the posts from May, the month the blog was born, and those from June to December (more than 200!) were deleted because… hmm, at the time I was preparing for a livre-docência examination, and some people said that having a blog was not a serious thing and could hurt me if I voiced politically incorrect opinions (about USP, for instance). So this is not really the first post, but the first one that survived…

But this June post survived somewhere, in a cache I found a year later, and was recovered. I am reproducing it here because I think it is still quite current and, besides, because I will finally teach, for the second time, the course it refers to, for the Chemistry teaching degree (Licenciatura) class at FFCLRP.




A coincidence again. I was here looking for a figure for this post about the special issue “As diferentes faces do infinito” (The different faces of infinity) when I received an e-mail from Ana Cláudia Ferrari telling me that, yes, I had won two free subscriptions, to SciAm and to Viver Mente e Cérebro. See? Who said I get nothing out of this blog?

Of course this will not change my attitude toward the magazine, since I have bought it since the first issue. After all, as long as they keep up the quality and occupy the niche of SUPERINTERESSANTE (which is trying to replace PLANETA, which in turn traded esotericism for ecology), that's great! All my support, SciAm!

Ana asked me whether my students really read the magazines. Well, Statistics I students, answer her in the comments. In any case, here are two ways of using SciAm in the classroom that I have already tried.

Well, first: I am responsible for an elective course in the Chemistry Department here at FFCLRP called Tópicos de Ciência Contemporânea (Topics in Contemporary Science), whose syllabus is rather ambitious, I admit:


To introduce the student to, and encourage contact with, the scientific and popular-science literature, sketching a panorama of contemporary science that allows a contextualized and critical view of different areas of knowledge such as Cosmology, Physics, Chemistry and Biology. Read more [+]

Cliodynamics and Psychohistory

The Foundation Trilogy – Isaac Asimov


Human cycles: History as science

Advocates of ‘cliodynamics’ say that they can use scientific methods to illuminate the past. But historians are not so sure.


Sometimes, history really does seem to repeat itself. After the US Civil War, for example, a wave of urban violence fuelled by ethnic and class resentment swept across the country, peaking in about 1870. Internal strife spiked again in around 1920, when race riots, workers’ strikes and a surge of anti-Communist feeling led many people to think that revolution was imminent. And in around 1970, unrest crested once more, with violent student demonstrations, political assassinations, riots and terrorism (see ‘Cycles of violence’).

To Peter Turchin, who studies population dynamics at the University of Connecticut in Storrs, the appearance of three peaks of political instability at roughly 50-year intervals is not a coincidence. For the past 15 years, Turchin has been taking the mathematical techniques that once allowed him to track predator–prey cycles in forest ecosystems, and applying them to human history. He has analysed historical records on economic activity, demographic trends and outbursts of violence in the United States, and has come to the conclusion that a new wave of internal strife is already on its way [1]. The peak should occur in about 2020, he says, and will probably be at least as high as the one in around 1970. “I hope it won’t be as bad as 1870,” he adds. Read more [+]
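Claims of periodicity like Turchin's are typically checked with standard time-series tools such as the autocorrelation function. A toy sketch: a synthetic "instability index" with a built-in 50-year cycle (fabricated purely for illustration, not Turchin's data) shows positive autocorrelation at the full cycle and negative autocorrelation at the half cycle:

```python
from math import cos, pi

# A synthetic "instability index" with a built-in 50-year cycle, fabricated
# purely to illustrate how such a periodicity shows up in the statistics.
years = list(range(1780, 2013))
series = [cos(2 * pi * (y - 1870) / 50) for y in years]

def autocorr(x, lag):
    """Sample autocorrelation of the series x at a given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

# Positive correlation at the full 50-year cycle, negative at the half cycle.
print(autocorr(series, 50), autocorr(series, 25))
```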

Neutrinos, Higgs and the LHC on BLOGPULSE

The blogosphere as an excitable medium

Hi Osame,

Have you seen this paper?


It looks like someone had the same idea as you…


doi:10.1016/j.physa.2011.05.033


The blogosphere as an excitable social medium: Richter’s and Omori’s Law in media coverage

Peter Klimek (a,b), Werner Bayer (a), Stefan Thurner (b,c)

a IIASA, Schlossplatz 1, A 2361 Laxenburg, Austria
b Section for Science of Complex Systems, Medical University of Vienna, Spitalgasse 23, A 1090 Vienna, Austria
c Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA

Received 10 February 2011; revised 15 April 2011; Available online 15 June 2011.


We study the dynamics of public media attention by monitoring the content of online blogs. Social and media events can be traced by the propagation of word frequencies of related keywords. Media events are classified as exogenous (blogging activity is triggered by an external news item) or endogenous (word frequencies build up within a blogging community without external influences). We show that word occurrences exhibit statistical similarities to earthquakes. Moreover, the size distribution of events scales with a similar exponent as found in the Gutenberg–Richter law. The dynamics of media events before and after the main event can be satisfactorily modeled as a type of process which has been used to understand fore- and aftershock rate distributions in earthquakes: the Omori law. We present empirical evidence that for media events of endogenous origin the overall public reception of the event is correlated with the behavior of word frequencies at the beginning of the event, and is to a certain degree predictable. These results imply that the process of opinion formation in a human society might be related to effects known from excitable media.


► Dynamics of public media attention measured by online blogs. ► Society as an excitable social medium. ► Media events have statistical characteristics of earthquakes. ► Public media reception to a certain degree predictable.

Keywords: Excitable social system; Statistical human dynamics; Empirical power laws
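The Omori law mentioned in the abstract, n(t) = K/(t + c)^p, can be recovered from event-rate data by a straight-line fit in log-log coordinates. A sketch using exact rates with made-up parameters (the values of K, c and p below are illustrative only, not fitted to any media or earthquake data):

```python
from math import log

# Omori-law aftershock rate n(t) = K / (t + c)^p, with made-up parameters.
K, c, p = 1000.0, 1.0, 1.2
times = [float(t) for t in range(1, 101)]
rates = [K / (t + c) ** p for t in times]

# The least-squares slope of log(rate) vs log(t + c) recovers -p.
xs = [log(t + c) for t in times]
ys = [log(r) for r in rates]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(-slope)   # recovers the exponent p = 1.2
```

With real event counts the rates are noisy and c must be estimated too, but the log-log fit is the core of the procedure.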

Gastronophysics in Nature News


Mining molecular gastronomy

East Asian cuisine doesn’t use matching flavour compounds, unlike North American recipes.

Would you eat caviar and white chocolate in the same mouthful? The answer might depend on where in the world you live.

In North American and Western European cuisines, chefs tend to combine foods that share flavour compounds, so the more adventurous would serve up caviar and white chocolate, because they share trimethylamine, among other compounds. But Asian chefs work differently, according to work published today in Scientific Reports [1] by theoretical physicist Sebastian Ahnert from the University of Cambridge, UK, and his colleagues.

Shrimp and tomato are often paired together in North American cuisine, because they share certain flavour compounds. Read more [+]

The complex network of culinary flavours

Well, the new version of the paper from Barabasi's group, with a longer discussion of our paper on Gastronophysics (see the comments in New Scientist, on the ArXiv Blog and in Folha de São Paulo), has turned out much better! It should be published in Scientific Reports, from the NATURE group.

arXiv blog

Flavour Networks Shatter Food Pairing Hypothesis

Recipe networks give the lie to the idea that ingredients that share flavours taste better together

KFC 11/29/2011


Some years ago, while experimenting with salty foods and chocolate, the English chef Heston Blumenthal discovered that white chocolate and caviar taste rather good together. To find out why, he had the foods analysed and discovered that they had many flavour compounds in common.

He went on to hypothesise that foods sharing flavour ingredients ought to combine well, an idea that has become known as the food pairing hypothesis. There are many examples where the principle holds, such as cheese and bacon; asparagus and butter; and, in some modern restaurants, chocolate and blue cheese, which apparently share 73 flavours.

But whether the rule is generally true has been hotly debated.

Today, we have an answer thanks to the work of Yong-Yeol Ahn at Harvard University and a few friends. These guys have analysed the network of links between the ingredients and flavours in some 56,000 recipes from three online recipe sites: epicurious.com, allrecipes.com and the Korean site menupan.com.

They grouped the recipes into geographical groups and then studied how the foods and their flavours are linked.

Their main conclusion is that North American and Western European cuisines tend towards recipes with ingredients that share flavours, while Southern European and East Asian recipes tend to avoid ingredients that share flavours.

In other words, the food pairing hypothesis holds in Western Europe and North America. But in Southern Europe and East Asia a converse principle of antipairing seems to be at work.

Ahn and co also found that the food pairing results are dominated by just a few ingredients in each region. In North America these are foods such as milk, butter, cocoa, vanilla, cream, and egg. In East Asia they are foods like beef, ginger, pork, cayenne, chicken, and onion. Take these out of the equation and the significance of the group’s main results disappears.

That backs another idea common in food science: the flavour principle. This is the notion that the difference between regional cuisines can be reduced to just a few ingredients. For example, paprika, onion and lard is a pretty good signature of Hungarian cuisine.

Ahn and co’s study suggests that dairy products, wheat and eggs define North American cuisine while East Asian food is dominated by plant derivatives such as soy sauce, sesame oil, rice and ginger.

Ahn and co conclude by discussing what their network approach can say about the way recipes have evolved. They imagine a kind of fitness landscape in which ingredients survive according to their nutritional value, availability, flavour and so on. For example, good antibacterial properties may make some spices ‘fitter’ than others and so more successful in this landscape.

Others have also looked at food in this way but Ahn and co bring a bigger data set and the sharper insight it provides. They say their data contradicts some earlier results and that this suggests that better data is needed all round to get a clearer picture of the landscape in recipe evolution.

Given the number of ingredients we seem to eat, the total number of possible recipes is some 10^15 but the number humans actually prepare and eat is a mere 10^6. So an important question is whether there are any quantifiable principles behind our choice of ingredient combinations.
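The 10^15 figure is a simple combinatorial estimate. With plausible numbers (roughly 300 ingredients and recipes of about 8 ingredients; both values are my assumptions for illustration, not figures from the paper), the count already lands at that order of magnitude:

```python
from math import comb

# Back-of-the-envelope reconstruction of the 10^15 estimate. The inputs
# (about 300 ingredients, recipes of about 8 ingredients) are assumptions
# chosen for illustration only.
n_ingredients, recipe_size = 300, 8
possible = comb(n_ingredients, recipe_size)
print(f"{possible:.2e} possible recipes vs ~1e6 actually in use")
```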

Another intriguing possibility is that this kind of evolutionary approach will reveal more not just about food, but also about the behaviour of the individuals that created it.

Food pairing seems to be one principle operating in some parts of the world. How far antipairing can take us has yet to be seen, although customers of Blumenthal’s restaurant, The Fat Duck, may be among the first to find out.

It’s still early days in the science of food networks. There are surely exciting discoveries ahead.

Ref: arxiv.org/abs/1111.6074: Flavor Network And The Principles Of Food Pairing


The minimal state, economic efficiency, and Blade Runner

I bet that if someone built econophysics models with a Minimal State (neoliberalism) competing against a Medium State (state capitalism) and a Maximal State (state communism), and set them loose against one another, the Medium State would win.

Why? Because, after all, the (Chinese?) Middle Way is winning…

All that remains is to democratize China. Unless, of course, Political Democracy is not economically efficient. If the main criterion for determining the socio-economic structure is economic efficiency (a thesis of Vulgar Marxism), instead of human well-being and freedom, or the Biosphere's, then Democracy will be overtaken by the Chinese State (which, as far as I understand, works well in the long run…).

Can the Pollyanna hypothesis be derived from evolutionary psychology?

I did not know the Pollyanna hypothesis, but the question above is one for my blogger friends…

Positive words carry less information than negative words

Authors: David Garcia, Antonios Garas, Frank Schweitzer
(Submitted on 18 Oct 2011)

Abstract: We show that the frequency of word use is not only determined by the word length [1] and the average information content [2], but also by its emotional content. We have analysed three established lexica of affective word usage in English, German, and Spanish, to verify that these lexica have a neutral, unbiased emotional content. Taking into account the frequency of word usage, we find that words with a positive emotional content are more frequently used. This lends support to the Pollyanna hypothesis [3] that there should be a positive bias in human expression. We also find that negative words contain more information than positive words, as the informativeness of a word increases uniformly as its valence decreases. Our findings support earlier conjectures about (i) the relation between word frequency and information content, and (ii) the impact of positive emotions on communication and social links.

Comments: 11 pages, 2 figures, 2 tables
Subjects: Computation and Language (cs.CL); Information Retrieval (cs.IR); Physics and Society (physics.soc-ph)
Cite as: arXiv:1110.4123v1 [cs.CL]
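The "informativeness" in the abstract is essentially self-information: -log2 of the word's usage frequency. A toy illustration (the words and frequencies below are invented, not taken from the analysed lexica):

```python
from math import log2

def self_information(freq: float) -> float:
    """Information content of a word, in bits, from its usage frequency."""
    return -log2(freq)

# Invented frequencies for illustration: if positive words are used more
# often, they carry fewer bits, which is the pattern the paper reports.
freqs = {"good": 1e-3, "happy": 5e-4, "bad": 2e-4, "dreadful": 2e-5}
for word, f in freqs.items():
    print(f"{word}: {self_information(f):.1f} bits")
```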

It From Bit: Matter = Fermions, Spirit = Information?

A post that had been sitting in my Drafts since December, and that I have only now finished…

I think I have finally understood the Bayesian concept of probability. Better late than never! Of course I could have learned it much earlier, from the Jaynes book so highly recommended by Nestor Caticha. Actually, I think I learned it, then forgot it, then read it again, then forgot it again. To truly grasp ("apreender") is different from to learn ("aprender"). I think it involves a Gestalt shift, a kind of moment of "illumination".

This happened thanks to two accidents (actually three): a) I am without internet at home, that is, without that time-wasting machine; b) this computer had a folder with some articles in PDF, among them the excellent Lectures on probability, entropy and statistical mechanics by Ariel Caticha, which Nestor had sent me quite a while ago; c) I had finished the book Artemis Fowl: The Arctic Incident and had nothing to read on Christmas Eve (I will write a post about that some other day).

Besides the Bayesian concept of probability, the discussion of entropy was very enlightening, in particular its emphasis that entropy is not a physical property of a system but depends on the level of detail in the description of that system:

         The fact that entropy depends on the available information implies that there is no such thing as the entropy of a system. The same system may have many different entropies. Notice, for example, that already in the third axiom we find an explicit reference to two entropies S[p] and SG[P] referring to two different descriptions of the same system. Colloquially, however, one does refer to the entropy of a system; in such cases the relevant information available about the system should be obvious from the context. In the case of thermodynamics what one means by the entropy is the particular entropy that one obtains when the only information available is specified by the known values of those few variables that specify the thermodynamic macrostate.
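Caticha's point that "there is no such thing as the entropy of a system" can be illustrated numerically: the same system, described at two levels of detail, has two different entropies. A minimal sketch with an arbitrary four-microstate distribution:

```python
import math

def shannon(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# One physical system, two descriptions.
# Fine-grained: four microstates with their probabilities (arbitrary numbers).
fine = [0.4, 0.1, 0.3, 0.2]
# Coarse-grained: the same states lumped into two macrostates,
# {0, 1} -> A and {2, 3} -> B.
coarse = [fine[0] + fine[1], fine[2] + fine[3]]

H_fine, H_coarse = shannon(fine), shannon(coarse)
print(f"H(fine) = {H_fine:.3f} bits, H(coarse) = {H_coarse:.3f} bits")
```

Same system, two entropies: the coarse description, knowing less, assigns a smaller entropy here simply because it distinguishes fewer alternatives.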

I learned other very interesting things from the paper, whose main virtue, I think, is its clarity and the fact that it acknowledges the obscure points as genuinely obscure. I imagine this text could be the basis of an interesting graduate course here at DFM. I am still studying it, and I recommend it to my frequentist friends. But of course, I could not resist taking a look at the final chapter, where I found this intriguing conclusion:

         Dealing with uncertainty requires that one solve two problems. First, one must represent a state of knowledge as a consistent web of interconnected beliefs. The instrument to do it is probability. Second, when new information becomes available the beliefs must be updated. The instrument for this is relative entropy. It is the only candidate for an updating method that is of universal applicability and obeys the moral injunction that one should not change one's mind frivolously. Prior information is valuable and should not be revised except when demanded by new evidence, in which case the revision is no longer optional but obligatory. The resulting general method (the ME method) can handle arbitrary priors and arbitrary constraints; it includes MaxEnt and Bayes' rule as special cases; and it provides its own criterion to assess the extent to which non-maximum-entropy distributions are ruled out.

         To conclude I cannot help but express my continued sense of wonder and astonishment at the fact that the method for reasoning under uncertainty (which presumably includes the whole of science) turns out to rest upon a foundation provided by ethical principles. Just imagine the implications!

I think this last paragraph deserves a full commentary in an upcoming post…
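The ME update described in the quoted conclusion (minimize relative entropy subject to the new constraints, with MaxEnt as a special case) fits in a few lines. Here is a minimal sketch of Jaynes's die example: the prior is uniform, the new information is that the mean face value is 4.5, and the Lagrange multiplier is found by bisection. The numbers are illustrative, not from Caticha's text.

```python
import math

faces = [1, 2, 3, 4, 5, 6]
prior = [1.0 / 6] * 6          # uniform prior q
target_mean = 4.5              # the new constraint: E[face] = 4.5

def tilt(lam):
    """Exponential family p_i ~ q_i * exp(lam * f_i): the form that
    minimizes relative entropy D(p||q) under a mean constraint."""
    w = [q * math.exp(lam * f) for q, f in zip(prior, faces)]
    z = sum(w)
    return [x / z for x in w]

def mean(p):
    return sum(f * x for f, x in zip(faces, p))

# mean(tilt(lam)) is increasing in lam, so solve for it by bisection.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean(tilt(mid)) < target_mean:
        lo = mid
    else:
        hi = mid
p = tilt(0.5 * (lo + hi))
print("updated distribution:", [round(x, 4) for x in p])
print("mean:", round(mean(p), 6))
```

The update leaves the prior alone in every respect the constraint does not touch, which is precisely the "do not change one's mind frivolously" injunction.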

Doubts about reductionism

I have a little (incomplete) list of terms, in ascending order of abstraction, that make me doubt the claim that physics is materialist (in the classical sense of the word). I think the only term that has analogues of the classical features of matter, such as impenetrability, is fermions, via the Pauli principle. Bosons, with their Bose-Einstein condensates, are rather strange characters (OK, fermions are quantum-strange too). Well, here is my list of the material → spiritual ladder within contemporary physics, from top to bottom on the reductionist scale: Read more [+]

What was the tallest mountain on Earth before Everest was discovered?


A long update to an old post from 2009.

If you are a naïve realist, you will answer: "Oh come on, it's obviously Mount Everest!"
A realist with some knowledge of history, and less ethnocentric, would say: Mount Everest, also called
Sagarmāthā (Nepali: सगरमाथा),
Chomolungma or Qomolangma (Tibetan), or
Zhumulangma (Chinese: 珠穆朗玛峰 Zhūmùlǎngmǎ Fēng).
That is, for a philosophical realist, Mount Everest was the tallest mountain on Earth even before any scientist measured it.
If you are a positivist or neo-positivist (especially if you subscribe to the fundamental axiom of neo-skepticism, "one should not believe in something without sufficient evidence for it"), you should answer with the name of some other mountain known before the 1856 measurement.

Mount Everest – also called Sagarmāthā (Nepali: सगरमाथा), Chomolungma or Qomolangma (Tibetan), or Zhumulangma (Chinese: 珠穆朗玛峰 Zhūmùlǎngmǎ Fēng) – is the highest mountain on Earth, and the highest point on the Earth's crust, as measured by the height above sea level of its summit, 8,848 metres (29,029 ft). The mountain, which is part of the Himalaya range in Asia, is located on the border between Sagarmatha Zone, Nepal, and Tibet, China.

In 1856, the Great Trigonometric Survey of India established the first published height of Everest, then known as Peak XV, at 29,002 ft (8,840 m). In 1865, Everest was given its official English name by the Royal Geographical Society upon the recommendation of Andrew Waugh, the British Surveyor General of India at the time. Chomolungma had been in common use by Tibetans for centuries, but Waugh was unable to propose an established local name because Nepal and Tibet were closed to foreigners.

Well, of course, everything depends on the definition of "height" you are using. That has to be defined before asking the question, but it does not change the issue of positivism versus realism that I am discussing in this post.

Summits farthest from the Earth’s center

Mount Everest is the point with the highest elevation above sea level on Earth, but it is not the summit that is farthest from the Earth's center. Because of the equatorial bulge, the summit of Mount Chimborazo in Ecuador is the point on Earth that is farthest from the Earth's center: 2,168 m (7,113 ft) farther than the summit of Everest.

Note: Chimborazo's summit is about 25 metres farther from the Earth's centre than that of Huascarán.
Summit                      Distance from Earth's center   Elevation above sea level   Latitude      Country
Chimborazo                  6,384.4 km (3,967.1 mi)        6,268.2 m (20,565 ft)       1°28′9″S      Ecuador
Huascarán                   6,384.4 km (3,967.1 mi)        6,748 m (22,139 ft)         9°7′17″S      Peru
(several other peaks in the Andes)
Kilimanjaro (Kibo summit)   ?                              5,895 m (19,341 ft)         3°4′33″S      Tanzania
Everest                     6,382.3 km (3,965.8 mi)        8,848 m (29,035 ft)         27°59′17″N    Nepal / China (Tibet)
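The distances in the table can be reproduced from the WGS84 reference ellipsoid: for a summit at geodetic latitude φ and elevation h, compute its Cartesian position and take the norm. A sketch (latitudes and elevations approximated from the table; the geoid is ignored, so the last digits are not meaningful):

```python
import math

A = 6378137.0                  # WGS84 equatorial radius (m)
F = 1 / 298.257223563          # WGS84 flattening
E2 = F * (2 - F)               # first eccentricity squared

def geocentric_distance(lat_deg, h):
    """Distance (m) from the Earth's centre of a point at geodetic
    latitude lat_deg and height h above the reference ellipsoid."""
    lat = math.radians(lat_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat)                       # equatorial component
    z = (n * (1 - E2) + h) * math.sin(lat)            # polar component
    return math.hypot(x, z)

chimborazo = geocentric_distance(-1.469, 6268)
everest = geocentric_distance(27.988, 8848)
print(f"Chimborazo: {chimborazo / 1000:.1f} km, Everest: {everest / 1000:.1f} km")
print(f"Chimborazo is {chimborazo - everest:.0f} m farther from the centre")
```

Even with 2.6 km less elevation, Chimborazo wins on geocentric distance: the equatorial bulge is worth about 21 km of radius between equator and poles.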

2 comments:

Kentaro Mori said…

I believe there is not necessarily a dichotomy there.

Before Everest was discovered, the answer should have stressed the limited knowledge on the subject. Something like "the tallest known mountain whose height has been measured is XXX", leaving it implicit, or even explicit, that "not all of the Earth's surface has been explored, and mountains of greater elevation may yet be discovered".

Nowadays we can say with more confidence that Everest is in fact the tallest mountain on Earth.

Or not. Perhaps there is some tiny region (the Bermuda Triangle?) where gravity has no effect and there is a piece of rock 1 m wide and 10 km tall which, because of its narrowness, has never been seen or detected by instruments (we are far from having catalogued the entire surface of the Earth at that precision).

The example is absurd, of course, but it illustrates that the dichotomy between the philosophical realist and the skeptic is not so large. Both can agree that there is a reality, and both must concede that the claims we can make about it should be based on, and proportional to, the evidence.

9:45 PM, November 08, 2009

Osame Kinouchi said…

Kentaro, I think the classical dichotomy is between realists and positivists (although it is not so much a dichotomy as a spectrum, which I suggested with the relative terms naïve realist, sophisticated realist, positivist, neo-positivist, etc.). But I agree that skepticism is more aligned with positivism. For example, Mach was skeptical about atoms, since for him (and he was right at the time) there was not enough evidence to believe in atoms in the nineteenth century.

But my (philosophically realist) heroes are others: Boltzmann, Maxwell, Einstein, who believed in atoms even before having sufficient evidence for them.

What I keep discussing on this blog is the compatibility between the skeptical principle "one should not believe in something without sufficient evidence for it" and cutting-edge theoretical physics.

The thing is that people interpret this principle in the restricted sense of "empirical evidence" and forget about "theoretical evidence".

An example: since when have you believed in extrasolar planets? Since the first empirical evidence in the 1990s? Or since the theoretical (probabilistic) evidence available since the time of Giordano Bruno? Did Giordano die in vain? Yes, Cardinal Bellarmine was a positivist skeptic: he would only believe in extrasolar worlds if he could see them through the telescope; he needed empirical evidence, not just theoretical possibilities. Something that Giordano Bruno and Galileo (yes, it is possible that Galileo believed the other stars were suns with their own planets, but I have not found a reference for that; does anyone know one?) could not offer.

But how can evidence be merely theoretical? Doesn't being theoretical rather than empirical invalidate it as evidence? All the more so if it is merely probabilistic? In other words, what is the basis of Carl Sagan's belief that there are other intelligent civilizations in our Galaxy? There is no empirical evidence, do you agree? And not merely insufficient evidence: no empirical evidence whatsoever.

So, given that we cannot believe in the Flying Teapot for lack of empirical evidence, we also cannot believe in the existence of planets with life outside the Solar System, since we have no empirical evidence of that either. And yet… every scientist today believes there are living beings on other planets in the Galaxy, ones we cannot yet detect. That is, they believe in something they cannot see, prove, or verify. Are those scientists less scientific than the ones who do not believe in life beyond Earth? Quite the contrary!

In other words, all scientists believe in the Flying Teapot (in life beyond Earth, in the idea that the origin of Life is a natural phenomenon that generally happens quickly, a risky generalization given that we have only n = 1 example of the emergence of a biosphere, etc.).

I am a fan of skeptical TV shows like The Mentalist. But our hero there identifies the criminal long before he has scientifically acceptable empirical evidence. He uses intuition, body-language reading, logic, circumstantial evidence. He believes in ("has faith in") a hypothesis even before having conclusive empirical evidence.

He acts like a creative scientist, who bases his entire research trajectory on circumstantial evidence, leaving to less creative scientists the task of turning such evidence into (temporarily) conclusive evidence.

Did you know, for example, that when Newton proposed the theory of gravitation, he already knew that it gave results that were wrong by a margin of 4% (at the time…)? And even so, he did not apply Popper's criterion and refute his own theory. Why? Because Newton was a bad scientist? No, on the contrary: because he was a creative scientist! And Millikan's case is exemplary; among creative scientists it is the rule, not the exception.

Anyone who waits for conclusive evidence before believing in something is acting merely like a textbook writer, not a real scientist… It is that simple.

Why aren't we an ET colony?

The Fermi Paradox can be summarized as follows: given that most scientists today accept that habitable planets are common in the galaxy, and that the emergence of Life is a natural process that probably happens quickly on them (on our planet it took only 400 million years, i.e., practically as soon as the Earth had cooled enough to hold liquid water), why haven't we been colonized by ETs yet? (*)

One of the answers, by O. Kinouchi (unpublished, on the arXiv, but already with 7 citations!)

A few more new papers on the subject, involving cellular automata (take a look at this, Sandro!)

Where is everybody? — Wait a moment … New approach to the Fermi paradox

I. Bezsudnov, A. Snarskii
(Submitted on 16 Jul 2010)

The Fermi Paradox is the apparent contradiction between the high estimated probability that extraterrestrial civilizations exist and the lack of contact with such civilizations. In general, solutions to Fermi's paradox come down either to estimating the parameters of the Drake equation (i.e., our guesses about the potential number of extraterrestrial civilizations) or to simulating the development of civilizations in the universe. We consider a new type of cellular automaton that allows us to analyze the Fermi paradox. We introduce a bonus stimulation model (BS-model) of the development, in cellular space (the Universe), of objects (civilizations). When civilizations come into contact they stimulate each other's development, increasing their lifetimes. We discovered nonlinear threshold behaviour of the total volume of civilizations in the universe, and on the basis of our model we built an analogue of the Drake equation.

Comments: 14 pages, 5 figures
Subjects: Popular Physics (physics.pop-ph); Instrumentation and Methods for Astrophysics (astro-ph.IM); Cellular Automata and Lattice Gases (nlin.CG)
Cite as: arXiv:1007.2774v1 [physics.pop-ph]
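The abstract does not spell out the BS-model's rules, so the sketch below is only loosely inspired by it: civilizations appear at random on a ring, live a fixed number of steps, and both parties gain a lifetime bonus at the moment a newborn appears next to a living neighbour. All parameters and rules are assumptions for illustration.

```python
import random

def simulate(n_sites=60, t_birth=200, p_birth=0.05, base_life=5, bonus=3, seed=1):
    """Toy contact-bonus automaton on a ring: a newborn civilization and any
    living neighbour each gain `bonus` extra steps of life at the moment of
    contact. Returns the total lifetime of every civilization."""
    rng = random.Random(seed)
    life = [0] * n_sites            # remaining lifetime per site (0 = empty)
    age = [0] * n_sites             # current age of the occupant
    lifetimes = []
    t = 0
    while t < t_birth or any(life):
        # births happen only during the first t_birth steps, on empty sites
        if t < t_birth:
            for i in range(n_sites):
                if life[i] == 0 and rng.random() < p_birth:
                    life[i], age[i] = base_life, 0
                    for j in ((i - 1) % n_sites, (i + 1) % n_sites):
                        if life[j] > 0:        # contact: both sides benefit
                            life[j] += bonus
                            life[i] += bonus
        # aging and death
        for i in range(n_sites):
            if life[i] > 0:
                life[i] -= 1
                age[i] += 1
                if life[i] == 0:
                    lifetimes.append(age[i])
        t += 1
    return lifetimes

lifetimes = simulate()
mean_life = sum(lifetimes) / len(lifetimes)
print(len(lifetimes), "civilizations; mean lifetime", round(mean_life, 2))
```

Sweeping p_birth in a model of this kind is where threshold behaviour of the total "civilized volume" would be looked for; this sketch only demonstrates the bonus mechanism itself.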

These guys cited me, nice! Properly citing one's predecessors is part of scientific ethics…

Cellular Automation of Galactic Habitable Zone

Branislav Vukotic, Milan M. Cirkovic
(Submitted on 26 Jan 2010)

We present preliminary results from our 2D probabilistic cellular automata models of the Galactic Habitable Zone (GHZ). The relevant time-scales (the emergence of life, its diversification, and evolution influenced by the global risk function) are modeled as probability matrix elements and are chosen, in accordance with the Copernican principle, to be well represented by data inferred from the Earth's fossil record. With Fermi's paradox as the main boundary condition, the resulting histories of the astrobiological landscape are discussed.

Comments: 4 pages, one figure, to appear in Publication of the Astronomical Observatory of Belgrade (6th SREAC Meeting proceedings)
Subjects: Galaxy Astrophysics (astro-ph.GA); Cellular Automata and Lattice Gases (nlin.CG)
Cite as: arXiv:1001.4624v1 [astro-ph.GA]

(*) If you believe in conspiracy theories about UFOs, Greys, and the US government, consider the following: a) After millions of alleged sightings and abductions, no one has ever brought back an ET-manufactured BIC pen or managed to describe what the bathroom inside a flying saucer looks like; b) The reported ETs seem to have no fear of our microbes, and no scruples about opening the UFO door and contaminating our biosphere with theirs… Why? c) We share 97% of our genes with chimpanzees; so, unless the chimps were also created by ETs, we are not extraterrestrial genetic experiments, as so many claim… d) OK, curiously, the film 2001: A Space Odyssey is a (sensational) religious, theistic film made by two skeptics (Kubrick and Arthur C. Clarke). But… it is just a film, OK? e) No, I am not part of the Great Conspiracy that hides the evidence of UFOs in Brazil and around the world. Actually, as a teenager I was a ufologist, responsible, together with my friend Sinézio Inácio da Silva Jr., for C.E.F.A Sudeste (Centro de Estudos de Fenômenos Aéro-Espaciais); I edited UFO Report and the new-age fanzine Novos Horizontes. My first published article (which is not on my Lattes CV!) was a review of Stephen Hawking's A Brief History of Time, in Gevaerd's Revista UFO which, unfortunately, seems to have gone spirit-ufological… Revista UFO used to be more scientific.

the physics arXiv blog

Astronomers Define New Class Of Planet: The Super-Earth

Posted: 02 Aug 2011 09:10 PM PDT

Rocky planets that are almost as big as Uranus seem far more common than anyone suspected

In our Solar System, planets fall into two types. First, there are the rocky planets like Earth, Mars and Venus, which are similar in size and support gaseous atmospheres. Then there are the gas giants, like Jupiter, Saturn and Uranus. These huge puff balls are two or more orders of magnitude bigger than their rocky cousins.

Perhaps strangest of all, there are no planets in between; nothing that sits on the borderline between rocky minnow and gas giant.

This sharp distinction has driven much of astronomers’ thinking about planet formation. One of the main challenges they have faced is to come up with a theory that explains the formation of two entirely different types of planet, but no hybrids that share characteristics of both.

That thinking will have to change. It now looks as if we’ve been fooled by our own Solar System. When astronomers look elsewhere, this two-tiered planetary division disappears.

Astrophysicists have now spotted more than 500 planets orbiting other stars, and all of these systems seem entirely different from our Solar System. They've seen entirely new classes of planets, such as Super-Jupiters many times larger than our biggest planet, with orbits closer than Mercury's.

But the class we're interested in here has masses spanning the range from Earth to Uranus, exactly the range that is missing from our Solar System.

Astronomers are calling these new types of planet Super-Earths and they have so far found more than 30 of them.

Today, Nader Haghighipour at the University of Hawaii in Honolulu reviews what we know about Super-Earths and shows they are changing the way astronomers think about planet formation. Their mere existence, for example, should allow astrophysicists to reject a large portion of current theories about planet formation.

Of course, the question about Super-Earths that generates the most interest is whether they can support life. To that end, Haghighipour discusses the possibility that these planets may be rocky with relatively thin atmospheres, that they have dynamic cores that generate a magnetic field and that they may support plate tectonics. Above all, there is the question of whether they can support liquid water.

It makes for fascinating reading. But when all this new information has been absorbed by the community, astronomers will still be left with an important puzzle: why is our Solar System so different from all the others we can see, why does it have this sharp distinction in planet type, and what relevance does this have to the question of habitability?

This is a mystery that astronomers are only just getting their teeth into.

Ref: http://arxiv.org/abs/1108.0031: Super-Earths: A New Class of Planetary Bodies

Friends and their papers

Two-level Fisher-Wright framework with selection and migration: An approach to studying evolution in group structured populations

Roberto H. Schonmann, Renato Vicente, Nestor Caticha
(Submitted on 23 Jun 2011)

A framework for the mathematical modeling of evolution in group structured populations is introduced. The population is divided into a fixed large number of groups of fixed size. From generation to generation, new groups are formed that descend from previous groups, through a two-level Fisher-Wright process, with selection between groups and within groups and with migration between groups at rate $m$. When $m=1$, the framework reduces to the often used trait-group framework, so that our setting can be seen as an extension of that approach. Our framework allows the analysis of previously introduced models in which altruists and non-altruists compete, and provides new insights into these models. We focus on the situation in which initially there is a single altruistic allele in the population, and no further mutations occur. The main questions are conditions for the viability of that altruistic allele to spread, and the fashion in which it spreads when it does. Because our results and methods are rigorous, we see them as shedding light on various controversial issues in this field, including the role of Hamilton’s rule, and of the Price equation, the relevance of linearity in fitness functions and the need to only consider pairwise interactions, or weak selection. In this paper we analyze the early stages of the evolution, during which the number of altruists is small compared to the size of the population. We show that during this stage the evolution is well described by a multitype branching process. The driving matrix for this process can be obtained, reducing the problem of determining when the altruistic gene is viable to a comparison between the leading eigenvalue of that matrix, and the fitness of the non-altruists before the altruistic gene appeared. This leads to a generalization of Hamilton’s condition for the viability of a mutant gene.

Comments: Complete abstract in the paper. 71 pages, 20 figures
Subjects: Populations and Evolution (q-bio.PE)
Cite as: arXiv:1106.4783v1 [q-bio.PE]
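The viability criterion described (compare the leading eigenvalue of the multitype branching process's mean matrix with the resident fitness) can be illustrated with power iteration on a made-up 2x2 mean matrix; none of these numbers come from the paper.

```python
# Mean matrix M[i][j]: expected number of type-j offspring of a type-i
# individual (think: altruists in favourable vs unfavourable group
# contexts). These numbers are invented for illustration.
M = [[1.2, 0.3],
     [0.4, 1.1]]

def leading_eigenvalue(m, iters=200):
    """Power iteration: the leading eigenvalue is the asymptotic growth
    rate of the multitype branching process with mean matrix m."""
    v = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        w = [m[0][0] * v[0] + m[1][0] * v[1],
             m[0][1] * v[0] + m[1][1] * v[1]]
        lam = max(w)                 # normalize by the largest component
        v = [x / lam for x in w]
    return lam

lam = leading_eigenvalue(M)
resident_fitness = 1.0   # assumed fitness of non-altruists before the mutant
print(f"leading eigenvalue = {lam:.4f}")
print("altruistic allele viable?", lam > resident_fitness)
```

For this matrix the eigenvalues are 1.5 and 0.8, so the rare altruistic lineage grows faster than the resident population and is viable; this comparison is the generalized Hamilton-type condition the abstract mentions.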

Agent-based Social Psychology: from Neurocognitive Processes to Social Data

(Submitted on 31 May 2010 (v1), last revised 11 Feb 2011 (this version, v2))

Moral Foundations Theory states that groups of different observers may rely on partially dissimilar sets of moral foundations, thereby reaching different moral valuations. The use of functional imaging techniques has revealed a spectrum of cognitive styles, correlated with political affiliation, with respect to the differential handling of novel or corroborating information. Here we characterize the collective behavior of an agent-based model whose inter-individual interactions, due to information exchange in the form of opinions, are in qualitative agreement with data. The main conclusion connects the existence of diversity in cognitive strategies with the statistics of the sets of moral foundations, and suggests that this connection arises from interactions between agents. Thus a simple interacting-agent model, whose interactions accord with empirical data on conformity and learning processes, presents statistical signatures consistent with those that characterize the moral judgment patterns of conservatives and liberals.

Comments: 11 pages, 4 figures, submitted
Subjects: Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI); Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1005.5718v2 [physics.soc-ph]

Spike Avalanches Exhibit Universal Dynamics across the Sleep-Wake Cycle

(Submitted on 10 Jan 2011)

Scale-invariant neuronal avalanches have been observed in cell cultures and slices as well as anesthetized and awake brains, suggesting that the brain operates near criticality, i.e. within a narrow margin between avalanche propagation and extinction. In theory, criticality provides many desirable features for the behaving brain, optimizing computational capabilities, information transmission, sensitivity to sensory stimuli and size of memory repertoires. However, a thorough characterization of neuronal avalanches in freely-behaving (FB) animals is still missing, thus raising doubts about their relevance for brain function. To address this issue, we employed chronically implanted multielectrode arrays (MEA) to record avalanches of spikes from the cerebral cortex (V1 and S1) and hippocampus (HP) of 14 rats, as they spontaneously traversed the wake-sleep cycle, explored novel objects or were subjected to anesthesia (AN). We then modeled spike avalanches to evaluate the impact of sparse MEA sampling on their statistics. We found that the size distribution of spike avalanches are well fit by lognormal distributions in FB animals, and by truncated power laws in the AN group. The FB data are also characterized by multiple key features compatible with criticality in the temporal domain, such as 1/f spectra and long-term correlations as measured by detrended fluctuation analysis. These signatures are very stable across waking, slow-wave sleep and rapid-eye-movement sleep, but collapse during anesthesia. Likewise, waiting time distributions obey a single scaling function during all natural behavioral states, but not during anesthesia. Results are equivalent for neuronal ensembles recorded from V1, S1 and HP. Altogether, the data provide a comprehensive link between behavior and brain criticality, revealing a unique scale-invariant regime of spike avalanches across all major behaviors.

Comments: 14 pages, 9 figures, supporting material included (published in Plos One)
Subjects: Neurons and Cognition (q-bio.NC); Data Analysis, Statistics and Probability (physics.data-an)
Journal reference: PLoS ONE 5(11): e14129, 2010
DOI: 10.1371/journal.pone.0014129
Cite as: arXiv:1101.2434v1 [q-bio.NC]
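The standard avalanche definition used in this literature is easy to state in code: bin the merged spike train, and call each maximal run of consecutive non-empty bins an avalanche, with size equal to its total spike count. The paper then fits lognormals (freely-behaving animals) or truncated power laws (anesthesia) to the resulting size distributions. A sketch with a hand-made spike train in integer milliseconds:

```python
from itertools import groupby

def avalanche_sizes(spike_times_ms, bin_ms):
    """Bin a merged spike train and return the size (spike count) of each
    avalanche, i.e. each maximal run of consecutive non-empty bins."""
    if not spike_times_ms:
        return []
    n_bins = max(spike_times_ms) // bin_ms + 1
    counts = [0] * n_bins
    for t in spike_times_ms:
        counts[t // bin_ms] += 1
    sizes = []
    for nonempty, run in groupby(counts, key=lambda c: c > 0):
        if nonempty:
            sizes.append(sum(run))
    return sizes

# Toy train: three spike clusters separated by silent gaps (times in ms).
spikes = [100, 150, 200, 1000, 1050, 2000]
sizes = avalanche_sizes(spikes, bin_ms=100)
print("avalanche sizes:", sizes)  # one avalanche per cluster
```

Integer times sidestep floating-point binning pitfalls; in practice the bin width is usually tied to the mean inter-spike interval, and the sparse-sampling correction the paper models happens before this step.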

Collective oscillations of excitable elements: order parameters, bistability and the role of stochasticity

(Submitted on 31 Jan 2011)

We study the effects of a probabilistic refractory period in the collective behavior of coupled discrete-time excitable cells (SIRS-like cellular automata). Using mean-field analysis and simulations, we show that a synchronized phase with stable collective oscillations exists even with non-deterministic refractory periods. Moreover, further increasing the coupling strength leads to a reentrant transition, where the synchronized phase loses stability. In an intermediate regime, we also observe bistability (and consequently hysteresis) between a synchronized phase and an active but incoherent phase without oscillations. The onset of the oscillations appears in the mean-field equations as a Neimark-Sacker bifurcation, the nature of which (i.e. super- or subcritical) is determined by the first Lyapunov coefficient. This allows us to determine the borders of the oscillating and of the bistable regions. The mean-field prediction thus obtained agrees quantitatively with simulations of complete graphs and, for random graphs, qualitatively predicts the overall structure of the phase diagram. The latter can be obtained from simulations by defining an order parameter q suited for detecting collective oscillations of excitable elements. We briefly review other commonly used order parameters and show (via data collapse) that q satisfies the expected finite size scaling relations.

Comments: 19 pages, 7 figures
Subjects: Neurons and Cognition (q-bio.NC); Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Cellular Automata and Lattice Gases (nlin.CG)
Journal reference: J. Stat. Mech. (2011) P01012
DOI: 10.1088/1742-5468/2011/01/P01012
Cite as: arXiv:1101.6054v1 [q-bio.NC]
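A generic discrete-time mean-field iteration for an SIRS-like excitable population illustrates the kind of map being analyzed. The coupling (k neighbours, transmission probability lam) and the probabilistic refractory exit (probability gamma per step) below are simplified assumptions, not the paper's exact model:

```python
def step(s, a, r, lam=0.9, k=4, gamma=0.5):
    """One mean-field step of a discrete-time SIRS-like excitable system:
    susceptibles are activated by k neighbours (each active with prob a,
    transmitting with prob lam); active units become refractory; refractory
    units recover with probability gamma per step."""
    new_a = s * (1 - (1 - lam * a) ** k)   # activation of susceptibles
    new_r = a + (1 - gamma) * r            # enter / probabilistically leave refractoriness
    new_s = s - new_a + gamma * r          # recovered units return to the pool
    return new_s, new_a, new_r

s, a, r = 0.9, 0.1, 0.0
trace = []
for t in range(2000):
    s, a, r = step(s, a, r)
    trace.append(a)

mean_activity = sum(trace[-500:]) / 500
print(f"s + a + r = {s + a + r:.6f}, mean activity = {mean_activity:.4f}")
```

With k*lam well above 1 the quiescent state is unstable and activity is sustained; scanning lam (and the refractory probability gamma) in a map of this family is how the synchronized, bistable, and incoherent regions of the phase diagram are traced out.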

Infinite-Period Phase Transition versus Nucleation in a Stochastic Model of Collective Oscillations

(Submitted on 16 Jun 2011 (v1), last revised 21 Jun 2011 (this version, v2))

A lattice model of three-state stochastic phase-coupled oscillators has been shown by Wood et al. [Phys. Rev. Lett. 96, 145701 (2006)] to exhibit a phase transition at a critical value of the coupling parameter, leading to stable global oscillations. In the complete graph version of the model, we show that, upon further increase in the coupling, the average frequency of collective oscillations decreases until an infinite-period (IP) phase transition occurs, at which collective oscillations cease. Above this second critical point, a macroscopic fraction of the oscillators spend most of the time in one of the three states, yielding a prototypical nonequilibrium example (without an equilibrium counterpart) in which discrete rotational (C_3) symmetry is spontaneously broken, in the absence of any absorbing state. Simulation results and nucleation arguments strongly suggest that the IP phase transition does not occur on finite-dimensional lattices with short-range interactions.

Comments: 15 pages, 8 figures
Subjects: Biological Physics (physics.bio-ph); Statistical Mechanics (cond-mat.stat-mech); Chaotic Dynamics (nlin.CD); Data Analysis, Statistics and Probability (physics.data-an)
Cite as: arXiv:1106.3323v2 [physics.bio-ph]
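In mean field, a three-state cyclic model of the Wood et al. type is a small ODE system: each unit hops 1 → 2 → 3 → 1 at a rate that depends on the occupancies. The exponential coupling form below is an assumed simplification; the sketch only verifies the uncoupled (a = 0) limit, which must relax to the symmetric state (1/3, 1/3, 1/3):

```python
import math

def simulate(a, g=1.0, dt=0.01, steps=5000):
    """Euler integration of mean-field ODEs for three cyclic states
    1 -> 2 -> 3 -> 1. The per-state hopping rate grows with the occupancy
    gap to the next state: w_k = g * exp(a * (p[k+1] - p[k])), an assumed
    Wood-et-al-style form, not the paper's exact rates."""
    p = [0.6, 0.3, 0.1]                      # arbitrary initial condition
    for _ in range(steps):
        w = [g * math.exp(a * (p[(k + 1) % 3] - p[k])) for k in range(3)]
        flow = [w[k] * p[k] for k in range(3)]           # flux k -> k+1
        p = [p[k] + dt * (flow[(k - 1) % 3] - flow[k]) for k in range(3)]
    return p

p_uncoupled = simulate(a=0.0)
print("a = 0 steady state:", [round(x, 4) for x in p_uncoupled])
```

Past a critical coupling a, this symmetric fixed point loses stability to a rotating limit cycle (the collective oscillation); the paper's infinite-period transition is what happens to that cycle at still stronger coupling.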

The sociophysics of political revolutions in Afghanistan, Libya, and Syria

Worth a look:


Lanchester Theory and the Fate of Armed Revolts

Michael P. Atkinson, Alexander Gutfraind, Moshe Kress
(Submitted on 22 Jun 2011)

Major revolts have recently erupted in parts of the Middle East with substantial international repercussions. Predicting, coping with and winning those revolts have become a grave problem for many regimes and for world powers. We propose a new model of such revolts that describes their evolution by building on the classic Lanchester theory of combat. The model accounts for the split in the population between those loyal to the regime and those favoring the rebels. We show that, contrary to classical Lanchesterian insights regarding traditional force-on-force engagements, the outcome of a revolt is independent of the initial force sizes; it only depends on the fraction of the population supporting each side and their combat effectiveness. We also consider the effects of foreign intervention and of shifting loyalties of the two populations during the conflict. The model’s predictions are consistent with the situations currently observed in Afghanistan, Libya and Syria (Spring 2011) and it offers tentative guidance on policy.

Subjects: Dynamical Systems (math.DS); Adaptation and Self-Organizing Systems (nlin.AO)
MSC classes: 37N99, 91B74,
Cite as: arXiv:1106.4358v1 [math.DS]
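For reference, the classic Lanchester aimed-fire model that the paper generalizes is the pair of ODEs dx/dt = -beta*y, dy/dt = -alpha*x, whose square-law invariant alpha*x^2 - beta*y^2 decides the outcome from the initial force sizes (precisely the dependence the revolt model removes). A quick Euler check with arbitrary numbers:

```python
def lanchester(x0, y0, alpha=1.0, beta=1.0, dt=1e-4):
    """Integrate dx/dt = -beta*y, dy/dt = -alpha*x until one side is
    wiped out; returns the survivors of each side."""
    x, y = x0, y0
    while x > 0 and y > 0:
        # simultaneous update: both right-hand sides use the old values
        x, y = x - dt * beta * y, y - dt * alpha * x
    return max(x, 0.0), max(y, 0.0)

x_left, y_left = lanchester(3.0, 2.0)
# Square law (alpha = beta): the winner's survivors ~ sqrt(x0^2 - y0^2).
print(f"x survivors = {x_left:.3f}, y survivors = {y_left:.3f}")
```

Here the larger force wins with roughly sqrt(3^2 - 2^2) ≈ 2.24 survivors; in the revolt model of the paper this size dependence disappears in favour of population fractions and combat effectiveness.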


The complexity of cooperation

I think I need to read this:

Emergence of cooperation with self-organized criticality

Hyeong-Chai Jeong, Sangmin Park
(Submitted on 9 Nov 2010)

Cooperation and self-organized criticality are two main keywords in current studies of evolution. We propose a generalized Bak-Sneppen model and provide a natural mechanism which accounts for both phenomena simultaneously. We use prisoner's dilemma games to mimic the interactions among the species. Each species is identified by its cooperation probability, and its fitness is given by the payoffs from the neighbors. The species with the least payoff is replaced by a new species with a random cooperation probability. When the neighbors of the least fit one are also replaced with a non-zero probability, strong cooperation emerges. The Bak-Sneppen process builds a self-organized structure so that cooperation can emerge even in the parameter region where a uniform or random population decreases the number of cooperators. The emergence of cooperation is due to the same dynamical correlation which leads to self-organized criticality in replacement activities.

Comments: 4 pages, 4 figures
Subjects: Data Analysis, Statistics and Probability (physics.data-an); Biological Physics (physics.bio-ph); Populations and Evolution (q-bio.PE)
Cite as: arXiv:1011.2013v1 [physics.data-an]
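A minimal sketch of the mechanism described: agents on a ring carry a cooperation probability, fitness is the expected prisoner's-dilemma payoff against the two neighbours, and the least-fit agent (and, with some probability, each of its neighbours) is replaced by a random newcomer, Bak-Sneppen style. The payoff matrix and parameters are assumed, not taken from the paper.

```python
import random

# Prisoner's dilemma payoffs (row player), with T > R > P > S. Assumed values.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def expected_payoff(ci, cj):
    """Expected PD payoff of an agent cooperating with prob ci against cj."""
    return (ci * cj * R + ci * (1 - cj) * S
            + (1 - ci) * cj * T + (1 - ci) * (1 - cj) * P)

def bak_sneppen_pd(n=50, steps=3000, p_neighbor=0.8, seed=7):
    rng = random.Random(seed)
    coop = [rng.random() for _ in range(n)]       # cooperation probabilities
    for _ in range(steps):
        fitness = [
            expected_payoff(coop[i], coop[(i - 1) % n])
            + expected_payoff(coop[i], coop[(i + 1) % n])
            for i in range(n)
        ]
        worst = min(range(n), key=fitness.__getitem__)
        coop[worst] = rng.random()                # replace the least-fit agent
        for j in ((worst - 1) % n, (worst + 1) % n):
            if rng.random() < p_neighbor:         # ...and maybe its neighbours
                coop[j] = rng.random()
    return coop

coop = bak_sneppen_pd()
print("mean cooperation probability:", round(sum(coop) / len(coop), 3))
```

The paper's claim is about where this kind of dynamics drives the mean cooperation probability as a function of the neighbour-replacement probability; the sketch above is the scaffold on which that experiment would run.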