
Extrasolar planets, Kepler-62, and the local Fermi Paradox

As the number of extrasolar planets discovered grows, so do the constraints on the predictions of the galactic percolation model (the local Fermi Paradox).
The prediction is that, if we assume Memetic Biospheres (cultural biospheres, or technospheres) are a likely outcome of Genetic Biospheres, then we should find ourselves in a region with few habitable planets. For if there were inhabited planets (with intelligent beings) nearby, they would very probably be far more advanced than we are and would already have colonized us.
Since this has not happened (unless one believes the ufologists' conspiracy theories, the "Jesus was an ET" and ancient-astronaut ideas, etc.), it follows that the more data astronomers gather, the clearer it will become that our Solar System is an anomaly within its cosmic neighbourhood (1,000 light-years?). In other words, we cannot assume the Copernican Principle for the Solar System: our Solar System is not typical of its neighbourhood. Well, at least that conclusion agrees with the data collected so far…
Thus, one can predict that further analysis of the planets Kepler-62e and Kepler-62f will reveal that they do not have an atmosphere with oxygen or methane, the signatures of a planet with a biosphere.

Persistence solves Fermi Paradox but challenges SETI projects

Osame Kinouchi (DFM-FFCLRP-USP)
(Submitted on 8 Dec 2001)

Persistence phenomena in colonization processes could explain the negative results of SETI searches while preserving the possibility of a galactic civilization. However, persistence phenomena also indicate that searching for technological civilizations around stars in the neighbourhood of the Sun is a misdirected SETI strategy. This last conclusion is also suggested by a weaker form of the Fermi paradox. A simple model of branching colonization, which includes the emergence, decay and branching of civilizations, is proposed. The model could also be used in the context of the diffusion of ant nests.
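
The abstract describes a branching colonization process with emergence, decay and branching of civilizations. The sketch below is not the paper's actual model (the update rules and rates here are illustrative guesses); it is just a minimal branching-colonization toy on a ring of star systems, showing the kind of quantity such a model tracks:

import random

# Toy branching-colonization process on a ring of star systems.
# Illustrative only: rates and rules are guesses, not those of the paper above.
random.seed(1)
N = 500            # star systems
p_emerge = 1e-4    # spontaneous emergence of a civilization at an empty site
p_branch = 0.05    # probability per step of colonizing each empty neighbour
p_decay = 0.01     # probability per step that a colony decays (goes extinct)
steps = 2000

colonized = [False] * N
ever_visited = [False] * N

for t in range(steps):
    new = colonized[:]
    for i in range(N):
        if colonized[i]:
            if random.random() < p_decay:
                new[i] = False
            for j in ((i - 1) % N, (i + 1) % N):
                if not colonized[j] and random.random() < p_branch:
                    new[j] = True
        elif random.random() < p_emerge:
            new[i] = True
    colonized = new
    for i in range(N):
        ever_visited[i] = ever_visited[i] or colonized[i]

print("currently colonized fraction:", sum(colonized) / N)
print("fraction of systems never visited:", 1 - sum(ever_visited) / N)

Depending on the rates, large regions can remain untouched for long times even when colonization is possible, which is the persistence effect invoked against nearby-star SETI searches.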

03/05/2013 – 03h10

Possibility of life is not limited to Earth-like planets, study says

SALVADOR NOGUEIRA
REPORTING FOR FOLHA

Given the range of compositions, masses and orbits possible for planets outside the Solar System, life may not be limited to Earth-like worlds in Earth-like orbits.

That is one of the conclusions presented by Sara Seager, of MIT (the Massachusetts Institute of Technology), in the USA, in a review article published in the journal Science, based on a statistical analysis of the roughly 900 worlds already detected around more than 400 stars.

Seager highlights the possible existence of planets whose atmospheres would be dense enough to keep liquid water on the surface even at temperatures well below Earth's.

Papers in theoretical neuroscience: criticality in dendritic trees

[Figure: journal.pcbi.1000402.g001]

Leonardo Lyra Gollo encouraged me to pick the blog back up. Thanks for the encouragement, Leo!

Single-Neuron Criticality Optimizes Analog Dendritic Computation

Leonardo L. Gollo, Osame Kinouchi, Mauro Copelli
(Submitted on 17 Apr 2013)

Neurons are thought of as the building blocks of excitable brain tissue. However, at the single neuron level, the neuronal membrane, the dendritic arbor and the axonal projections can also be considered an extended active medium. Active dendritic branchlets enable the propagation of dendritic spikes, whose computational functions, despite several proposals, remain an open question. Here we propose a concrete function to the active channels in large dendritic trees. By using a probabilistic cellular automaton approach, we model the input-output response of large active dendritic arbors subjected to complex spatio-temporal inputs, and exhibiting non-stereotyped dendritic spikes. We find that, if dendritic spikes have a non-deterministic duration, the dendritic arbor can undergo a continuous phase transition from a quiescent to an active state, thereby exhibiting spontaneous and self-sustained localized activity as suggested by experiments. Analogously to the critical brain hypothesis, which states that neuronal networks self-organize near a phase transition to take advantage of specific properties of the critical state, here we propose that neurons with large dendritic arbors optimize their capacity to distinguish incoming stimuli at the critical state. We suggest that “computation at the edge of a phase transition” is more compatible with the view that dendritic arbors perform an analog and dynamical rather than a symbolic and digital dendritic computation.

Comments: 11 pages, 6 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1304.4676 [q-bio.NC]
(or arXiv:1304.4676v1 [q-bio.NC] for this version)
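
The model in this paper is a probabilistic cellular automaton running on a dendritic tree. The sketch below is not the published model (states, topology and parameters are simplified guesses); it only illustrates the basic ingredient of excitable branchlets on a binary tree driven by external input at the periphery:

import random

# Toy probabilistic cellular automaton on a binary dendritic tree.
# Simplified illustration, not the model of arXiv:1304.4676.
random.seed(0)
DEPTH = 8                      # tree with 2**DEPTH - 1 branchlets
N = 2**DEPTH - 1               # node 0 plays the role of the proximal (somatic) end
p_spread = 0.7                 # prob. that an active branchlet excites a quiescent neighbour
h = 0.01                       # external input rate at the distal branchlets (leaves)
T = 5000

def neighbours(i):
    nb = []
    if i > 0:
        nb.append((i - 1) // 2)        # parent branchlet
    for c in (2 * i + 1, 2 * i + 2):   # children branchlets
        if c < N:
            nb.append(c)
    return nb

# branchlet states: 0 quiescent, 1 active (dendritic spike), 2 refractory
state = [0] * N
root_active = 0

for t in range(T):
    new = [0] * N
    for i in range(N):
        if state[i] == 1:
            new[i] = 2                              # active -> refractory
        elif state[i] == 2:
            new[i] = 0                              # refractory -> quiescent
        else:
            if 2 * i + 1 >= N and random.random() < h:
                new[i] = 1                          # external drive at the leaves only
            elif any(state[j] == 1 and random.random() < p_spread
                     for j in neighbours(i)):
                new[i] = 1                          # excitation spreading between branchlets
    state = new
    root_active += (state[0] == 1)

print("fraction of time the proximal branchlet is active:", root_active / T)

Sweeping h and p_spread in a sketch like this is how one maps out quiescent versus self-sustained regimes; the paper's claim is that the capacity to distinguish incoming stimuli is optimized near the transition between them.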

Mechanisms of Zero-Lag Synchronization in Cortical Motifs

(Submitted on 18 Apr 2013)

Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomena of “dynamical relaying” – a mechanism that relies on a specific network motif (M9) – has proven to be the most robust with respect to parameter and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif (M3) is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying, M9) from those that do not (such as the common driving triad, M3). Remarkably, minor structural changes to M3 that incorporate a reciprocal pair (hence M6, M9, M3+1) recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.

Comments: 27 pages, 8 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1304.5008 [q-bio.NC]
(or arXiv:1304.5008v1 [q-bio.NC] for this version)
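
As a toy illustration of the kind of zero-lag locking discussed here, consider two outer nodes that are not connected to each other but interact through a reciprocally coupled hub (the V-shaped, M9-like motif). The sketch below uses delay-coupled phase oscillators with arbitrary parameters; it is my own simplification, not the spiking, population or neural-mass models compared in the paper, but for these parameters the outer pair typically ends up locked with near-zero phase lag despite the conduction delay:

import math, random

# Three delay-coupled phase oscillators in the motif A <-> B <-> C (no direct A-C link).
# Toy sketch with arbitrary parameters, not the models of arXiv:1304.5008.
random.seed(4)
dt, tau, K = 0.001, 0.010, 5.0          # time step (s), conduction delay (s), coupling
omega = 2 * math.pi * 1.0               # common 1 Hz natural frequency
D = int(tau / dt)                       # delay measured in steps
steps = 20000
links = {0: [1], 1: [0, 2], 2: [1]}     # A-B and B-C, both directions

theta = [[random.uniform(0, 2 * math.pi)] for _ in range(3)]   # phase histories

for t in range(steps):
    cur = [th[-1] for th in theta]
    delayed = [th[max(0, len(th) - 1 - D)] for th in theta]
    for i in range(3):
        coupling = sum(math.sin(delayed[j] - cur[i]) for j in links[i])
        noise = random.gauss(0, 0.05) * math.sqrt(dt)
        theta[i].append(cur[i] + dt * (omega + K * coupling) + noise)

lag = (theta[0][-1] - theta[2][-1]) % (2 * math.pi)
lag = min(lag, 2 * math.pi - lag)
print("phase difference between the outer nodes A and C (rad):", round(lag, 3))

This toy does not capture the paper's main comparison (the fragility of the common-driving motif M3 for spiking and chaotic units); it is only meant to make the delayed-relay setup concrete.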

The number of neurons in the brain is five times greater than the number of trees in the Amazon

I did the following back-of-the-envelope calculation: I took the estimate of 86 billion neurons in the brain and compared it with the number of trees suggested by the report below (that is, 85/15 × 2.6 billion). The result is that the brain corresponds to roughly six Amazons (in terms of trees).
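
A minimal sketch of that arithmetic (the 86-billion-neuron and 2.6-billion-tree figures are the ones quoted in the sources; the rest is just the stated proportion):

neurons = 86e9                             # estimated neurons in a human brain
trees_felled = 2.6e9                       # trees cut in the Legal Amazon up to 2002 (~15% of the original)
trees_remaining = trees_felled * 85 / 15   # the other ~85% still standing, about 1.47e10 trees
print(neurons / trees_remaining)           # ~5.8, i.e. roughly "six Amazons" per brain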

I think this is an important comparison for anyone who wants to understand, model or reproduce a brain. Would you take on that task knowing it is harder than modelling the Amazon???

PS: Yes, I have long nursed the idea that the best metaphor for a brain is a forest, not a computer. I think that if we apply ideas of parallel computation by agents, we will end up finding that forests compute (for example, the synchronization of ipê trees, the timing of the release of the aerosols that nucleate raindrops and make it rain over the forest, etc.). OK, it is computation in slow motion (which is why we do not see it).

PS2: Norberto Cairasco has also been puzzling over the similarities between the dendrites of neurons and the branches of trees. He thinks there may be some evolutionary convergence toward certain functions, albeit at different scales.

Approximately 2.6 billion trees were felled in the Legal Amazon up to 2002

 

01/06/2011 – 11h09

Agência Brasil reporter

Rio de Janeiro – About 15% of the original vegetation of the Legal Amazon had been cleared by 2002, which is equivalent to the removal of approximately 2.6 billion trees and the deforestation of an area of 600,000 square kilometres. That corresponds to the destruction of 4.7 billion cubic metres of timber from an area that originally held 4 million square kilometres of forest cover.

Languages cool as they expand…

 

 Languages cool as they expand: Allometric scaling and the decreasing need for new words

 

Alexander M. Petersen, Joel N. Tenenbaum, Shlomo Havlin, H. Eugene Stanley, Matjaz Perc
(Submitted on 11 Dec 2012)

We analyze the occurrence frequencies of over 15 million words recorded in millions of books published during the past two centuries in seven different languages. For all languages and chronological subsets of the data we confirm that two scaling regimes characterize the word frequency distributions, with only the more common words obeying the classic Zipf law. Using corpora of unprecedented size, we test the allometric scaling relation between the corpus size and the vocabulary size of growing languages to demonstrate a decreasing marginal need for new words, a feature that is likely related to the underlying correlations between words. We calculate the annual growth fluctuations of word use which has a decreasing trend as the corpus size increases, indicating a slowdown in linguistic evolution following language expansion. This “cooling pattern” forms the basis of a third statistical regularity, which unlike the Zipf and the Heaps law, is dynamical in nature.

Comments: 9 two-column pages, 7 figures; accepted for publication in Scientific Reports
Subjects: Physics and Society (physics.soc-ph); Statistical Mechanics (cond-mat.stat-mech); Computation and Language (cs.CL); Applications (stat.AP)
Journal reference: Sci. Rep. 2 (2012) 943
DOI: 10.1038/srep00943
Cite as: arXiv:1212.2616 [physics.soc-ph]
(or arXiv:1212.2616v1 [physics.soc-ph] for this version)
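
The two regularities the abstract refers to, the Zipf rank-frequency law and the Heaps (vocabulary-growth) law, can be eyeballed on any text with a few lines of code. A minimal sketch (the corpus file name is a placeholder, and this is of course nothing like the million-book dataset analysed in the paper):

from collections import Counter

# Minimal Zipf / Heaps check on a plain-text corpus (the file path is a placeholder).
words = open("corpus.txt", encoding="utf-8").read().lower().split()

# Zipf: frequency vs rank; for the most common words, frequency falls off roughly as 1/rank,
# so the product (frequency x rank) / (frequency of the top word) stays of order one.
counts = Counter(words)
top_freq = counts.most_common(1)[0][1]
for rank, (w, f) in enumerate(counts.most_common(10), start=1):
    print(rank, w, f, round(f * rank / top_freq, 2))

# Heaps: vocabulary size V(n) vs corpus size n grows sublinearly, roughly V ~ n**beta with beta < 1,
# which is the "decreasing marginal need for new words".
seen, vocab_curve = set(), []
for n, w in enumerate(words, start=1):
    seen.add(w)
    if n % max(1, len(words) // 10) == 0:
        vocab_curve.append((n, len(seen)))
print(vocab_curve)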


The probability of an event bigger than 9/11 exceeds 95%

Statisticians Calculate Probability Of Another 9/11 Attack

According to the statistics, there is a 50 per cent chance of another catastrophic terrorist attack within the next ten years


THE PHYSICS ARXIV BLOG

Wednesday, September 5, 2012

Earthquakes are seemingly random events that are hard to predict with any reasonable accuracy. And yet geologists make very specific long term forecasts that can help to dramatically reduce the number of fatalities.

For example, the death toll from earthquakes in the developed world, in places such as Japan and New Zealand, would have been vastly greater were it not for strict building regulations enforced on the back of well-founded predictions that big earthquakes were likely in future.

The problem with earthquakes is that they follow a power law distribution–small earthquakes are common and large earthquakes very rare but the difference in their power is many orders of magnitude.

Humans have a hard time dealing intuitively with these kinds of statistics. But in the last few decades statisticians have learnt how to handle them, provided that they have a reasonable body of statistical evidence to go on.

That’s made it possible to make predictions about all kinds of phenomena governed by power laws, everything from earthquakes, forest fires and avalanches to epidemics, the volume of email and even the spread of rumours.

So it shouldn't come as much of a surprise that Aaron Clauset at the Santa Fe Institute in New Mexico and Ryan Woodard at ETH, the Swiss Federal Institute of Technology, in Zurich have used this approach to study the likelihood of terrorist attacks.
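
The logic behind a forecast like this can be sketched in a few lines: assume (or fit) a power-law tail for event severities, estimate the per-event probability of exceeding a 9/11-scale threshold, and convert an event rate into the chance of at least one such event over a time window. Every number below is a made-up placeholder, not Clauset and Woodard's estimate:

import math

# Back-of-the-envelope "chance of another 9/11-scale event" calculation.
# All parameters are placeholders; see Clauset & Woodard for the real analysis.
alpha = 2.4        # assumed exponent of the severity tail, P(X >= x) ~ (x / xmin)**(1 - alpha)
xmin = 10.0        # assumed lower cutoff of the tail (deaths)
x_big = 2749.0     # severity threshold of interest
rate = 200.0       # assumed number of tail events worldwide per year
years = 10.0

p_single = (x_big / xmin) ** (1 - alpha)     # per-event probability of exceeding the threshold
expected = rate * years * p_single           # expected number of exceedances in the window
p_at_least_one = 1 - math.exp(-expected)     # Poisson approximation for "at least one"
print(round(p_single, 5), round(p_at_least_one, 3))

The heavy tail is what makes the answer non-negligible: with an exponentially bounded severity distribution the same calculation would give an essentially zero probability.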

The ideology of power laws



Power Laws, Weblogs, and Inequality

First published February 8, 2003 on the “Networks, Economics, and Culture” mailing list. Subscribe to the mailing list.

Version 1.1: Changed 02/10/03 to point to the updated “Blogging Ecosystem” project, and to Jason Kottke’s work using Technorati.com data. Added addendum pointing to David Sifry’s “Technorati Interesting Newcomers” list, which is in part a response to this article.

A persistent theme among people writing about the social aspects of weblogging is to note (and usually lament) the rise of an A-list, a small set of webloggers who account for a majority of the traffic in the weblog world. This complaint follows a common pattern we’ve seen with MUDs, BBSes, and online communities like Echo and the WELL. A new social system starts, and seems delightfully free of the elitism and cliquishness of the existing systems. Then, as the new system grows, problems of scale set in. Not everyone can participate in every conversation. Not everyone gets to be heard. Some core group seems more connected than the rest of us, and so on.
Prior to recent theoretical work on social networks, the usual explanations invoked individual behaviors: some members of the community had sold out, the spirit of the early days was being diluted by the newcomers, et cetera. We now know that these explanations are wrong, or at least beside the point. What matters is this: Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
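
Shirky's point that "the very act of choosing, spread widely enough and freely enough, creates a power law distribution" is essentially the preferential-attachment (Yule/Simon) mechanism. A minimal sketch with made-up parameters, showing how popularity becomes heavy-tailed even though every chooser follows the same simple rule:

import random

# Simon / preferential-attachment toy: each new reader either starts following a brand-new blog
# (probability p_new) or picks an existing one with probability proportional to its current links.
random.seed(1)
p_new = 0.05
links = [1]                  # links[i] = readers pointing at blog i (start with one blog, one link)
choices = [0]                # flat list of past choices, used for popularity-proportional sampling

for reader in range(100000):
    if random.random() < p_new:
        links.append(1)
        choices.append(len(links) - 1)
    else:
        target = random.choice(choices)      # proportional to current popularity
        links[target] += 1
        choices.append(target)

links.sort(reverse=True)
total = sum(links)
top_1pct = max(1, len(links) // 100)
print("number of blogs:", len(links))
print("share of all inbound links held by the top 1% of blogs:", round(sum(links[:top_1pct]) / total, 3))
print("link counts of the five most popular blogs:", links[:5])

No blogger in this toy "sells out" or behaves differently from any other; the inequality is produced by the choice process itself, which is Shirky's argument.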

Scaling laws and criticality in the cognitive sciences

doi:10.1016/j.tics.2010.02.005
Review

Scaling laws in cognitive sciences
Christopher T. Kello (1), Gordon D.A. Brown (2), Ramon Ferrer-i-Cancho (3), John G. Holden (4), Klaus Linkenkaer-Hansen (5), Theo Rhodes (1) and Guy C. Van Orden (4)
1 Cognitive and Information Sciences, University of California, Merced, 5200 North Lake Rd., Merced, CA 95343, USA
2 Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
3 Department de Llenguatges i Sistemes Informatics, Universitat Politecnica de Catalunya, Campus Nord, Edifici Omega, Jordi Girona Salgado 1-3, 08034 Barcelona, Catalonia, Spain
4 Center for Perception, Action and Cognition, Department of Psychology, University of Cincinnati, PO Box 210376, Cincinnati, OH 45221-0376, USA
5 Department of Integrative Neurophysiology, VU University Amsterdam, De Boelelaan 1085, 1081 HV Amsterdam, the Netherlands
Available online 1 April 2010. 
Scaling laws are ubiquitous in nature, and they pervade neural, behavioral and linguistic activities. A scaling law suggests the existence of processes or patterns that are repeated across scales of analysis. Although the variables that express a scaling law can vary from one type of activity to the next, the recurrence of scaling laws across so many different systems has prompted a search for unifying principles. In biological systems, scaling laws can reflect adaptive processes of various types and are often linked to complex systems poised near critical points. The same is true for perception, memory, language and other cognitive phenomena. Findings of scaling laws in cognitive science are indicative of scaling invariance in cognitive mechanisms and multiplicative interactions among interdependent components of cognition.
Article Outline
The scaling law debate
Scaling laws in perception, action and memory
Scaling laws in reaction times and word frequencies
Scaling laws and criticality
Concluding remarks
Acknowledgements
References
Damn, that idea was mine…:
Another type of scaling law in memory comes from a classic free recall paradigm, yet was only recently discovered by drawing an analogy to studies of animal foraging behaviors [24]. Birds, monkeys, fish and numerous other species have been reported to search for food in Lévy flight patterns [25], which have been hypothesized as effective search strategies because they cover more territory than, for example, a random walk with normally distributed steps [26]. Searching for items or events in memory is like foraging, particularly in tasks such as free recall of members of a given semantic category (e.g. animals) in a given time period [27]. Rhodes and Turvey [24] analyzed inter-response time intervals (IRIs) from this classic memory task, which are analogous to steps from one recalled item to the next. The authors found IRIs to be power-law distributed with exponents very similar to those found in animal foraging (Figure 2). These comparable results suggest that Lévy flights are generally adaptive across a variety of search ecologies. These results also illustrate how scaling laws can lurk unnoticed in data for decades, in the absence of theories and analytic techniques necessary to recognize them.
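
A quick way to see what makes a Lévy-flight search different from an ordinary random walk is to draw step lengths from a heavy-tailed distribution; a minimal sketch (the exponent is an arbitrary illustrative choice, not the one fitted to the recall data):

import random

# Compare the territory covered by a Gaussian walker and by a Levy-like walker whose
# step lengths follow P(l) ~ l**(-mu) with 1 < mu < 3 (mu = 2 is an arbitrary choice here).
random.seed(0)
mu, steps = 2.0, 10000

def levy_step():
    # inverse-transform sampling of a Pareto step length l >= 1 with density ~ l**(-mu)
    return random.random() ** (-1.0 / (mu - 1.0))

gauss_pos = levy_pos = 0.0
gauss_max = levy_max = 0.0
for _ in range(steps):
    gauss_pos += random.gauss(0, 1)
    levy_pos += random.choice((-1, 1)) * levy_step()
    gauss_max = max(gauss_max, abs(gauss_pos))
    levy_max = max(levy_max, abs(levy_pos))

print("farthest distance reached, Gaussian walk: ", round(gauss_max, 1))
print("farthest distance reached, Levy-like walk:", round(levy_max, 1))

The occasional very long step is what lets a Lévy searcher cover far more ground with the same number of moves, which is the analogy Rhodes and Turvey draw for inter-response times in free recall.
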
Complex Times for Earthquakes, Stocks, and the Brain’s Activity
Christoph Kayser (1) and Bard Ermentrout (2)
1 Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
2 Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA
Available online 12 May 2010. 
Refers to: The Temporal Structures and Functional Significance of Scale-free Brain Activity
Neuron, Volume 66, Issue 3, 13 May 2010, Pages 353-369, 
Biyu J. He, John M. Zempel, Abraham Z. Snyder, Marcus E. Raichle
A new study by He et al. in this issue of Neuron shows that large-scale arrhythmic (1/f) brain activity contains nested temporal structure in the form of crossfrequency coupling. This suggests temporal organization in neural mass activity beyond oscillations and draws attention to ubiquitous but often ignored arrhythmic patterns in neural activity.
What do earthquakes, Dow-Jones, and brain activity have in common? Unpredictability first springs to mind, of course, but as researchers have long noticed, these and many other complex processes might actually share common patterns pertaining to long-range spatio-temporal correlations of the underlying quantities ([Kello et al., 2010] and [Jensen, 1998]). In addition, and as an intriguing study in this issue of Neuron illustrates (He et al., 2010), they might also share another level of temporal organization, whereby the phase of slower timescales predicts the amplitude of faster ones. This nesting of timescales might open a window onto the complex structure of neural activity, but also raises questions with regard to its universality.
In their new study, He et al. recorded electrocorticographic (ECoG) activity across several brain areas in human patients. To investigate the signal's temporal structure, they calculated the frequency spectrum, i.e., the distribution of amplitudes of individual frequency bands as a function of frequency. In concordance with previous studies, they described the frequency spectra using the power law 1/f^a, with the scaling factor a differing between the low (<1 Hz) and high (>1 Hz) frequency bands. When shown on logarithmic axes, such power-law scaling translates into a straight line with slope a, as illustrated in Figure 1A.
It is important to note the distinction between the spectral 1/f^a shape and rhythmic oscillatory activity. Oscillatory activities with well-defined frequencies (e.g., theta, alpha, or gamma oscillations) are prevalent in neural networks and result in distinct peaks above the 1/f^a background (Buzsaki, 2006) (cf. Figure 1A). Typically, such oscillations result from processes with well-defined intrinsic timescales and can be associated with defined networks such as thalamocortical or hippocampal loops. In contrast to this, activity characterized by a (straight) 1/f^a spectrum is considered "arrhythmic," as it does not reflect processes with identifiable timescales. Systems that generate perfect power-law spectra are also known as "scale-free," since the underlying process or network possesses no distinguished scale ([Bak et al., 1987] and [Jensen, 1998]). Importantly, while oscillations have attracted wide interest and are matter of various speculations with regard to their meaning and function, the arrhythmic component of electric brain activity is often considered self-evident or uninteresting and hence ignored.
The stunning finding of He et al. is that even such supposedly arrhythmic brain activity has a complex temporal structure in the form of crossfrequency phase-amplitude coupling. Crossfrequency implies that the coupling involves two distinct frequency bands, and phase-amplitude implies that the amplitude of one band is dependent on the phase of the other. In particular, He et al. extracted band-limited components from their wide-band signals and found that the amplitude of the faster component depends on the phase of the slower one, as illustrated in Figure 1B. For their analysis they considered a range of frequency pairs and used statistical bootstrapping methods to validate the significance of phase dependency. Overall, they found that more than 70% of the electrodes contained frequency pairs with significant frequency coupling. Importantly, and to prove the importance of this phenomenon, they demonstrated the existence of crossfrequency coupling not only in resting state activity, but also during task performance and slow-wave sleep.
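
Both quantities discussed here, the 1/f^a spectral slope and cross-frequency phase-amplitude coupling, are easy to illustrate on a synthetic signal. The sketch below builds a toy signal in which the amplitude of a fast oscillation is tied, by construction, to the phase of a slow one, then recovers the spectral exponent and the phase-amplitude profile (NumPy/SciPy, arbitrary toy parameters; this is not He et al.'s analysis pipeline):

import numpy as np
from scipy.signal import hilbert, welch

# Synthetic signal: Brownian (1/f^2-like) background plus a 40 Hz rhythm whose amplitude
# follows the phase of a 1 Hz rhythm (i.e. phase-amplitude coupling by construction).
fs, T = 500.0, 200.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)

background = np.cumsum(rng.standard_normal(t.size))
background /= background.std()
slow_phase = 2 * np.pi * 1.0 * t
fast = (1 + np.cos(slow_phase)) * np.cos(2 * np.pi * 40.0 * t)
x = background + fast

# Spectral exponent a from a straight-line fit of log power vs log frequency
f, pxx = welch(x, fs=fs, nperseg=4096)
keep = (f > 0.5) & (f < 100)
a = -np.polyfit(np.log(f[keep]), np.log(pxx[keep]), 1)[0]
print("fitted spectral exponent a:", round(a, 2))

# Phase-amplitude coupling: mean amplitude of the fast component in bins of the slow phase
fast_amp = np.abs(hilbert(fast))
bins = np.digitize(slow_phase % (2 * np.pi), np.linspace(0, 2 * np.pi, 9))
profile = [fast_amp[bins == b].mean() for b in range(1, 9)]
print("fast-band amplitude per slow-phase bin:", np.round(profile, 2))

A flat amplitude-versus-phase profile would mean no coupling; the strong modulation printed here is simply the coupling built into the toy signal, whereas real data require the bootstrap-style significance tests the study uses.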

Phantom traffic jams: experiment and theory

Mathematicians [applied mathematicians, physicists and engineers] in the US develop a formula to predict traffic jams


O Globo 


Mathematicians at MIT (the Massachusetts Institute of Technology), in the United States, have developed a formula to predict the appearance of so-called phantom traffic jams: congestion that appears out of nowhere, with no apparent cause, on roads carrying a high concentration of vehicles.

The equation could help in designing roads with less potential for congestion.

The model describes how and why traffic jams form. The analysis includes factors that look unimportant at first sight, such as sudden braking or vehicles travelling very close to one another.

"The equations, similar to those used to describe fluid mechanics, take into account factors such as traffic speed and traffic density to calculate the conditions for jams to form and grow," says Morris Flynn, the study's lead author.

Road planning

The model does not help dissolve jams that have already formed. In that case, drivers still have nothing they can do, says Flynn.

But the formula can help road-planning specialists set appropriate speed limits.

The study can also help identify locations with higher vehicle density and greater accident risk.

To develop the equations, the MIT scientists drew in part on an experiment carried out by Japanese researchers.

In that experiment, drivers were instructed to drive at 30 km/h and keep a constant distance from the car in front on a circular track. Traffic quickly broke down.

The higher the density of vehicles, the faster the jams formed.

The MIT scientists now plan to study how other factors contribute to jam formation, such as the number of lanes.
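
The circular-track experiment and the fluid-like instability can be reproduced qualitatively with a standard optimal-velocity car-following model. The sketch below is a generic textbook-style (Bando-type) version in dimensionless units with arbitrary parameters, not the MIT group's equations; in this density regime a tiny perturbation grows into a stop-and-go wave:

import math

# Optimal-velocity (Bando-type) car-following model on a ring road; generic toy, not the MIT model.
N, L = 100, 200.0            # cars and ring length -> uniform headway of 2, linearly unstable here
a = 1.0                      # driver sensitivity
dt, steps = 0.05, 20000      # simple Euler integration over 1000 time units

def v_opt(gap):
    # preferred speed as a function of the gap to the car ahead
    return math.tanh(gap - 2.0) + math.tanh(2.0)

x = [i * L / N for i in range(N)]
x[50] += 0.1                 # tiny perturbation of a single car's position
v = [v_opt(L / N)] * N

for _ in range(steps):
    gaps = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    acc = [a * (v_opt(gaps[i]) - v[i]) for i in range(N)]
    v = [v[i] + acc[i] * dt for i in range(N)]
    x = [(x[i] + v[i] * dt) % L for i in range(N)]

# In the unstable regime the perturbation grows into a stop-and-go ("phantom") jam,
# visible as a large spread between the slowest and the fastest car on the ring.
print("slowest car:", round(min(v), 2), "  fastest car:", round(max(v), 2))

At lower densities (fewer cars on the same ring) the same perturbation simply dies out, matching the observation that jams formed faster the denser the traffic.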


Group sex among fireflies

Study: fireflies synchronize their flashes to win over females
July 8, 2010, 4:50 p.m.


Scientists say fireflies synchronize their flashing so that females do not confuse the species. Photo: Getty Images

A new study from the University of Connecticut, in the United States, indicates that male fireflies synchronize their flashes to attract females. According to the scientists, synchronized flashing helps a female find males of her own species instead of mating with a firefly of another one. The report comes from Discovery News.

The researchers say that when a female sees males flashing in the same pattern, she can answer with a flash of her own to call a mate. "What the females are doing is looking for the number and the timing of the flashes," says researcher Andrew Moiseff.

To test the theory, the scientists used small LED lights that mimicked the patterns of different species. The researchers do not, however, consider the work finished. "Six to twelve males can be attracted (by the female's light), and we do not know whether she distinguishes among the males when she responds," says Moiseff.

It is not unusual to find a female in the field surrounded by several males, which, according to the scientists, points to another level of selection not yet uncovered. "The fact that females are more likely to respond to synchronized signals than to unsynchronized ones is a big deal. We know that, for a male, getting a positive signal from a female is at least half the battle of winning a mate," the scientist says.

The study will be published in the July 9 issue of the journal Science.
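
Flash synchronization of this kind is the textbook application of coupled-oscillator models. A minimal Kuramoto-style sketch (coupling strength and frequency spread are arbitrary choices, not fitted to real fireflies, which are closer to pulse-coupled units):

import math, random

# Kuramoto-style toy model of firefly flash synchronization.
random.seed(2)
N, K, dt, steps = 50, 2.0, 0.01, 5000
omega = [2 * math.pi * random.gauss(0.5, 0.02) for _ in range(N)]   # about one flash every two seconds
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(phases):
    # |r| = 1: everyone flashes together; |r| ~ 0: incoherent flashing
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

print("initial coherence:", round(coherence(theta), 2))
for _ in range(steps):
    re = sum(math.cos(p) for p in theta) / N
    im = sum(math.sin(p) for p in theta) / N
    r, psi = math.hypot(re, im), math.atan2(im, re)
    # mean-field form of the all-to-all Kuramoto coupling
    theta = [theta[i] + dt * (omega[i] + K * r * math.sin(psi - theta[i])) for i in range(N)]
print("final coherence:  ", round(coherence(theta), 2))

With the coupling well above the synchronization threshold, the coherence climbs from near zero to near one: qualitatively, the transition the males exploit to present a single species-specific rhythm to the female.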

Physicists explain why Kuhn's paradigms and scientific revolutions exist

Why are most science bloggers and skeptics Popperians? Kuhn is much better, and he can be modelled by sociophysics…

Highly connected – a recipe for success

Authors: Krzysztof Suchecki, Andrea Scharnhorst, Janusz A. Holyst
(Submitted on 5 Jul 2010 (v1), last revised 6 Jul 2010 (this version, v2))

Abstract: In this paper, we tackle the problem of innovation spreading from a modeling point of view. We consider a networked system of individuals, with a competition between two groups. We show its relation to the innovation spreading issues. We introduce an abstract model and show how it can be interpreted in this framework, as well as what conclusions we can draw from it. We further explain how model-derived conclusions can help to investigate the original problem, as well as other, similar problems. The model is an agent-based model assuming simple binary attributes of those agents. It uses a majority dynamics (Ising model to be exact), meaning that individuals attempt to be similar to the majority of their peers, barring the occasional purely individual decisions that are modeled as random. We show that this simplistic model can be related to the decision-making during innovation adoption processes. The majority dynamics for the model mean that when a dominant attribute, representing an existing practice or solution, is already established, it will persist in the system. We show however, that in a two group competition, a smaller group that represents innovation users can still convince the larger group, if it has high self-support. We argue that this conclusion, while drawn from a simple model, can be applied to real cases of innovation spreading. We also show that the model could be interpreted in different ways, allowing different problems to profit from our conclusions.
Comments: 36 pages, including 5 figures; for electronic journal revised to fix missing co-author
Subjects: Physics and Society (physics.soc-ph)
Cite as: arXiv:1007.0671v2 [physics.soc-ph]
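
A minimal sketch of the majority ("Ising-like") dynamics the abstract refers to: individuals copy the majority of their neighbours, with occasional purely random individual decisions, and an attribute that is already dominant tends to persist. This is my own stripped-down illustration on a ring, not the paper's two-community model with self-support:

import random

# Majority-rule opinion dynamics with occasional random individual decisions.
# Minimal illustration of the persistence of an established practice; not the exact model of arXiv:1007.0671.
random.seed(3)
N, p_random, sweeps = 500, 0.02, 200
# the established practice (-1) starts with a ~70% majority, the innovation (+1) with ~30%
spins = [(-1 if random.random() < 0.7 else 1) for _ in range(N)]
print("initial adoption of the innovation:", sum(s == 1 for s in spins) / N)

for _ in range(sweeps):
    for _ in range(N):
        i = random.randrange(N)
        if random.random() < p_random:               # purely individual (random) decision
            spins[i] = random.choice((-1, 1))
        else:                                        # copy the majority of the four nearest neighbours
            field = sum(spins[(i + d) % N] for d in (-2, -1, 1, 2))
            if field != 0:
                spins[i] = 1 if field > 0 else -1

print("final adoption of the innovation:  ", sum(s == 1 for s in spins) / N)

The paper's more interesting result, that a small but strongly self-supporting innovator community can eventually flip the majority, needs the two-group network structure described in the abstract rather than this homogeneous ring.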

Renormalization group tutorial

For Sandro and Ariadne to study (and perhaps Paulo and Leonardo too):

American Journal of Physics, Vol. 72, No. 2, pp. 170–184, February 2004


A hint of renormalization

Bertrand Delamotte

Laboratoire de Physique Théorique et Hautes Energies, Universités Paris VI, Pierre et Marie Curie, Paris VII, Denis Diderot, 2 Place Jussieu, 75251 Paris Cedex 05, France

Received: 28 January 2003; accepted: 15 September 2003

An elementary introduction to perturbative renormalization and renormalization group is presented. No prior knowledge of field theory is necessary because we do not refer to a particular physical theory. We are thus able to disentangle what is specific to field theory and what is intrinsic to renormalization. We link the general arguments and results to real phenomena encountered in particle physics and statistical mechanics.
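
The flavour of the renormalization group can also be conveyed with the standard textbook decimation of the 1-D Ising model (this example is not taken from Delamotte's article; it is the usual warm-up exercise). Summing over every other spin in the partition function, with reduced coupling K = J/k_B T, gives an exact recursion for the coupling felt by the remaining spins:

\[
Z=\sum_{\{s\}}\prod_i e^{K s_i s_{i+1}}
\quad\Longrightarrow\quad
\tanh K' = \tanh^2 K, \qquad \text{equivalently}\quad K' = \tfrac{1}{2}\ln\cosh(2K).
\]

Iterating the map K -> K' drives any finite coupling toward the trivial fixed point K* = 0, while K* -> infinity is the unstable fixed point: the 1-D Ising model orders only at zero temperature, and this flow between fixed points is the simplest picture of what the renormalization group tracks.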

© 2004 American Association of Physics Teachers.

Contents

I. INTRODUCTION
II. A TOY MODEL FOR RENORMALIZATION
III. RENORMALIZABLE THEORIES WITH DIMENSIONLESS COUPLINGS
IV. RENORMALIZATION GROUP
V. SUMMARY
ACKNOWLEDGMENTS
APPENDIX A: TOY MODELS FOR RENORMALIZABLE AND NONRENORMALIZABLE PERTURBATION EXPANSIONS
APPENDIX B: DERIVATION OF EQ. (22)
APPENDIX C: LOGARITHMIC DIVERGENCES IN RENORMALIZABLE THEORIES WITH DIMENSIONLESS COUPLINGS
APPENDIX D: RENORMALIZATION GROUP IMPROVED EXPANSION
APPENDIX E: THE RENORMALIZATION GROUP APPLIED TO A DIFFERENTIAL EQUATION
REFERENCES
FIGURES
FOOTNOTES

Americans revolutionize football

When it comes to sports, Americans' fascination with statistics is well known. On this New York Times blog you can find minute-by-minute statistics for the World Cup matches. Very interesting… for a statistical physicist.
Idea for a paper: the statistics of the intervals between ball possessions, shots on goal, corners and fouls clearly do not follow a Poisson distribution; they look more like a power-law distribution (with a 45-minute cut-off, of course!) or perhaps a log-normal. For an example, see the statistics of the Uruguay vs France match here. Some of this is understandable: the probability that one corner follows another is high and, when one team is pressing the other, shots on goal can come in bursts (the origin of the non-uniform distribution of fouls is less clear).
In fact, there are good reasons to believe that a football match is neither Markovian nor stationary. But then, would it be possible to model (generate) these distributions in a simple way?
Idea: collect statistics from that blog (using Match Analysis?) and show that these distributions are not Poisson. Then propose a model that generates them.
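
A minimal sketch of the proposed check: compare the empirical distribution of the intervals between events with the exponential intervals a Poisson process would produce. The interval list below is a placeholder, not real match data:

import math

# Compare inter-event intervals with the exponential distribution expected for a Poisson process.
# "intervals" is placeholder data; real values would come from the minute-by-minute match statistics.
intervals = [0.2, 0.3, 0.4, 0.5, 0.7, 0.8, 1.0, 1.3, 1.8, 2.5, 3.9, 6.0, 9.5, 15.0, 22.0]  # minutes

mean = sum(intervals) / len(intervals)
n = len(intervals)
xs = sorted(intervals)

# Empirical survival function vs the Poisson prediction exp(-x / mean):
# for a Poisson process the two columns track each other; bursts plus long gaps make them diverge.
for k, x in enumerate(xs):
    empirical = 1 - k / n
    poisson = math.exp(-x / mean)
    print("x = %5.1f   P(interval > x): empirical %.2f   Poisson %.2f" % (x, empirical, poisson))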

The critical mass of research groups


Critical mass and the dependency of research quality on group size

Ralph Kenna, Bertrand Berche
(Submitted on 4 Jun 2010)

Academic research groups are treated as complex systems and their cooperative behaviour is analysed from a mathematical and statistical viewpoint. Contrary to the naive expectation that the quality of a research group is simply given by the mean calibre of its individual scientists, we show that intra-group interactions play a dominant role. Our model manifests phenomena akin to phase transitions which are brought about by these interactions, and which facilitate the quantification of the notion of critical mass for research groups. We present these critical masses for many academic areas. A consequence of our analysis is that overall research performance of a given discipline is improved by supporting medium-sized groups over large ones, while small groups must strive to achieve critical mass.

Comments: 14 pages, 6 figures consisting of 16 panels
Subjects: Physics and Society (physics.soc-ph); Statistical Mechanics (cond-mat.stat-mech)
Cite as: arXiv:1006.0928v1 [physics.soc-ph]

Which was the best Brazilian national team?

New Statistical Method Ranks Sports Players From Different Eras

Posted: 02 Mar 2010 09:10 PM PST

A new statistical approach reveals the intrinsic talent of sportsmen and women, regardless of the era in which they played.

It’s a problem that leaves brows furrowed on barstools across the world: how to rate the sportsmen and women of the day against the stars of yesteryear.


There’s no easy way to make meaningful comparisons when sports change so dramatically over the years. Even in endeavours like baseball where player stats have been meticulously kept for almost a hundred years, comparisons across the decades can be odious. Is it really fair to compare players from the 1920s against those of the last 20 years when so many external factors have changed such as the use of new equipment, better training methods and, of course, performance enhancing drugs?


In 1914, the National League Most Valuable Player was Johnny Evers with a batting average of 0.279, 1 Home Run and 40 Runs Batted In. That was impressive then but these stats would embarrass even a second rate player in today’s game.


But what if there were a way to remove the systematic differences to reveal intrinsic talent? Today, Alexander Petersen at Boston University and a few pals explain just such a method that “detrends” the data leaving an objective measure of a player’s raw ability.


The detrending process is a statistical trick that essentially rates all players relative only to their contemporaries. This effectively cancels out the effect of performance-enhancing factors which are equally available to everybody in a given era. The detrended stats then allows them to be objectively compared with players from other eras and the end product is a ranking of pure talent.
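
The detrending idea, rating each season relative to the average of that season's players, can be sketched in a few lines. The numbers below are placeholders for illustration, not the paper's dataset or its exact prescription:

from collections import defaultdict

# Detrend a season statistic by dividing by that season's league average,
# so that players are rated relative to their contemporaries. Placeholder data only.
seasons = [
    ("Player A", 1927, 60), ("Player B", 1927, 47), ("Player C", 1927, 30),
    ("Player D", 2001, 73), ("Player E", 2001, 64), ("Player F", 2001, 49),
]

by_year = defaultdict(list)
for _, year, hr in seasons:
    by_year[year].append(hr)
league_avg = {year: sum(v) / len(v) for year, v in by_year.items()}

detrended = sorted(((hr / league_avg[year], name, year, hr) for name, year, hr in seasons), reverse=True)
for score, name, year, hr in detrended:
    print("%s (%d): %d HR -> relative score %.2f" % (name, year, hr, score))

A raw ranking would put the 73-home-run season first; the relative score can reverse that ordering, which is exactly the kind of reshuffling the detrended baseball tables show.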


Petersen and co compare the detrended rankings against the traditional ones for several standard baseball metrics, such as Career Home Runs, Season Home Runs and so on.


The results will be an eye-opener for some fans and Petersen and co provide an interesting commentary on the new tables. For example, their new list of the top 50 individual home run performances by season does not contain a single entry after 1950. Not even the performance of Barry Bonds in 2001 or of Mark McGwire in 1998 make the list. In fact, Babe Ruth’s achievements from the 1920s fill seven of the top ten slots.


Petersen and co are at pains to point out why this is: “It behooves us to point out that these results do not mean that Babe Ruth was a better slugger than any other before or after him, but rather, relative to the players during his era, he was the best home run hitter, by far, of all time.”

The Boston team say their method can be applied to other sports with professional leagues such as American basketball, Korean baseball and English football. And it also works in ranking research scientists too.


Petersen and co may not actually settle any barstool brow-creasers with this paper but they’ve clearly had some fun in trying.


Ref: arxiv.org/abs/1003.0134: Detrending Career Statistics In Professional Baseball: Accounting For The Steroids Era And Beyond

History of sociophysics III

Sociophysics http://www.eoht.info/page/Sociophysics

In science, sociophysics is the study of social phenomena from a physics perspective. The subject deals in part with thermodynamics, although the majority of theories in this field seem to rest on statistical extrapolations or phase-transition models.

A noted researcher in the field is physicist Serge Galam, who claims to have been working in sociophysics since the late 1970s. [1] Another noted figure is Canadian political scientist Paris Arnopoulos, whose 1993 book Sociophysics speculates on the heat, pressure, temperature, entropy and volume of societies. [3]

At the 2008 international workshop on sociophysics, in Torino, Italy, the focus was on the statistical-physics modelling of large-scale social phenomena, such as opinion formation, cultural dissemination, the origin and evolution of language, crowd behaviour and social contagion. It was noted that previous years had witnessed attempts to study collective phenomena emerging from the interactions of individuals as elementary units in social structures, and the workshop aimed to promote effective cooperation between physicists and social scientists. [2]

See also

Social physics

Social chemistry

Socio-thermodynamics

Sociological thermodynamics

References

1. Galam, Serge. (2004). "Sociophysics: a Personal Testimony." Laboratory of Heterogeneous and Disorderly Environments, Paris. Arxiv.org.

2. Sociophysics – International Workshop, ISI Foundation, Torino, Italy, 26-29 May 2008.

3. Arnopoulos, Paris. (2005). Sociophysics: Cosmos and Chaos in Nature and Culture (thermics, pgs. 26-31). Nova Publishers, 1993 first edition.

Further reading

● Chakrabarti, Bikas K. Chakraborti, Anirban, and Chatterjee, Arnab. (2006). Econophysics and Sociophysics: Trends and Perspectives. Wiley-VCH.

A revolution in the theory of gravitation?

Gravity as an entropic force

From Wikipedia, the free encyclopedia

Verlinde’s statistical description of gravity as an entropic force leads to the correct inverse square distance law of attraction between classical bodies.

The hypothesis of gravity being an entropic force has a history that goes back to research on black hole thermodynamics by Bekenstein and Hawking in the mid-1970s. These studies suggest a deep connection between gravity and thermodynamics. In 1995 Jacobson demonstrated that the Einstein equations describing relativistic gravitation can be derived by combining general thermodynamic considerations with the equivalence principle.[1] Subsequently, other physicists have further explored the link between gravity and entropy.[2]

In 2009, Erik Verlinde disclosed a conceptual theory that describes gravity as an entropic force.[3] This theory combines the thermodynamic approach to gravity with Gerardus 't Hooft's holographic principle. If proven correct, gravity is not a fundamental interaction, but an emergent phenomenon which arises from the statistical behaviour of microscopic degrees of freedom encoded on a holographic screen.[4]

Verlinde’s suggestion of gravity being an entropic phenomenon attracted considerable media[5][6] exposure, and led to immediate follow-up work in cosmology,[7][8] the dark energy hypothesis,[9] cosmological acceleration,[10][11] cosmological inflation,[12] and loop quantum gravity.[13] Also, a specific microscopic model has been proposed that indeed leads to entropic gravity emerging at large scales.[14]
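
The flavour of Verlinde's heuristic argument (paraphrased here from the derivation of Newton's law in reference [4]) fits in a few lines. Take a spherical holographic screen of radius R around a mass M, assume the Bekenstein-type entropy change when a test mass m is displaced by Δx near the screen, the holographic counting of bits on the screen, and equipartition of the energy E = Mc² over those bits:

\[
\Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x, \qquad
N = \frac{A c^3}{G\hbar} = \frac{4\pi R^2 c^3}{G\hbar}, \qquad
Mc^2 = \tfrac{1}{2}\,N k_B T .
\]

Combining these with the entropic-force relation F Δx = T ΔS gives

\[
F \;=\; T\,\frac{\Delta S}{\Delta x}
  \;=\; \frac{2Mc^2}{N k_B}\cdot 2\pi k_B \frac{mc}{\hbar}
  \;=\; \frac{G M m}{R^2},
\]

so Newton's inverse-square law appears as thermodynamic bookkeeping on the screen.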

References

  1. ^ "Thermodynamics of Spacetime: The Einstein Equation of State", Ted Jacobson, 1995
  2. ^ "Thermodynamical Aspects of Gravity: New insights", Thanu Padmanabhan, 2009
  3. ^ http://www.volkskrant.nl/wetenschap/article1326775.ece/Is_Einstein_een_beetje_achterhaald Dutch newspaper 'Volkskrant', 9 December 2009
  4. ^ "On the Origin of Gravity and the Laws of Newton", Erik Verlinde, 2010
  5. ^ "The entropy force: a new direction for gravity", New Scientist, 20 January 2010, issue 2744
  6. ^ "Gravity is an entropic form of holographic information", Wired Magazine, 20 January 2010
  7. ^ "Equipartition of energy and the first law of thermodynamics at the apparent horizon", Fu-Wen Shu, Yungui Gong, 2010
  8. ^ "Friedmann equations from entropic force", Rong-Gen Cai, Li-Ming Cao, Nobuyoshi Ohta, 2010
  9. ^ "It from Bit: How to get rid of dark energy", Johannes Koelman, 2010
  10. ^ "Entropic Accelerating Universe", Damien Easson, Paul Frampton, George Smoot, 2010
  11. ^ "Entropic cosmology: a unified model of inflation and late-time acceleration", Yi-Fu Cai, Jie Liu, Hong Li, 2010
  12. ^ "Towards a holographic description of inflation and generation of fluctuations from thermodynamics", Yi Wang, 2010
  13. ^ "Newtonian gravity in loop quantum gravity", Lee Smolin, 2010
  14. ^ "Notes concerning 'On the origin of gravity and the laws of Newton' by E. Verlinde", Jarmo Makela, 2010

Entropy and Gravity

Gravity Emerges from Quantum Information, Say Physicists

Posted: 25 Mar 2010 09:10 PM PDT

The new role that quantum information plays in gravity sets the scene for a dramatic unification of ideas in physics

One of the hottest new ideas in physics is that gravity is an emergent phenomenon; that it somehow arises from the complex interaction of simpler things.

A few months ago, Erik Verlinde at the University of Amsterdam put forward one such idea which has taken the world of physics by storm. Verlinde suggested that gravity is merely a manifestation of entropy in the Universe. His idea is based on the second law of thermodynamics, that entropy always increases over time. It suggests that differences in entropy between parts of the Universe generate a force that redistributes matter in a way that maximises entropy. This is the force we call gravity.

What’s exciting about the approach is that it dramatically simplifies the theoretical scaffolding that supports modern physics. And while it has its limitations–for example, it generates Newton’s laws of gravity rather than Einstein’s–it has some advantages too, such as the ability to account for the magnitude of dark energy which conventional theories of gravity struggle with.

But perhaps the most powerful idea to emerge from Verlinde’s approach is that gravity is essentially a phenomenon of information.

Today, this idea gets a useful boost from Jae-Weon Lee at Jungwon University in South Korea and a couple of buddies. They use the idea of quantum information to derive a theory of gravity and they do it taking a slightly different tack to Verlinde.

At the heart of their idea is the tricky question of what happens to information when it enters a black hole. Physicists have puzzled over this for decades with little consensus. But one thing they agree on is Landauer’s principle: that erasing a bit of quantum information always increases the entropy of the Universe by a certain small amount and requires a specific amount of energy.

Jae-Weon and co assume that this erasure process must occur at the black hole horizon. And if so, spacetime must organise itself in a way that maximises entropy at these horizons. In other words, it generates a gravity-like force.
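
For reference, the textbook statement of Landauer's principle that this argument leans on is that erasing one bit at temperature T increases entropy by at least k_B ln 2 and therefore dissipates a minimum energy:

\[
\Delta S \ge k_B \ln 2, \qquad E_{\min} = T\,\Delta S = k_B T \ln 2 .
\]

In the entropic-gravity picture sketched above, it is this T ΔS bookkeeping at horizons, fed through the entropic-force relation F Δx = T ΔS, that ends up looking like a gravitational pull.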

That’s intriguing for several reasons. First, Jae-Weon and co assume the existence of spacetime and its geometry and simply ask what form it must take if information is being erased at horizons in this way.

It also relates gravity to quantum information for the first time. Over recent years many results in quantum mechanics have pointed to the increasingly important role that information appears to play in the Universe.

Some physicists are convinced that the properties of information do not come from the behaviour of information carriers such as photons and electrons but the other way round. They think that information itself is the ghostly bedrock on which our universe is built.

Gravity has always been a fly in this ointment. But the growing realisation that information plays a fundamental role here too, could open the way to the kind of unification between the quantum mechanics and relativity that physicists have dreamed of.

Ref: arxiv.org/abs/1001.5445: Gravity from Quantum Information

Econophysics predicts the bursting of an IBOVESPA bubble

Was the recent 19.9% drop in the IBOVESPA from its all-time high predicted by Didier Sornette?

Wednesday, June 02, 2010

Econophysicist Accurately Forecasts Gold Price Collapse


The first results from the Financial Bubble Experiment will have huge implications for econophysics

There are good reasons to think that stock markets are fundamentally unpredictable. Many econophysicists believe for example, that the data from these markets bear a startling resemblance to other data from seemingly unconnected phenomena, such as the size of earthquakes, forest fires and avalanches, which defy all efforts of prediction. 

Some go as far as to say that these phenomena are governed by the same fundamental laws so that if one is unpredictable, then they all are. 
And yet financial markets may be different. Last year, this blog covered an extraordinary forecast made by Didier Sornette at the Swiss Federal Institute of Technology in Zurich, who declared that the Shanghai Composite Index was a bubble market and that it would collapse within a certain specific period of time.
Much to this blog’s surprise, his prediction turned out to be uncannily correct.
Sornette says there are two parts to his forecasting method. First, he says bubbles are markets experiencing greater-than-exponential growth. That makes them straightforward to spot, something that surprisingly hasn't been possible before.
Second, he says these bubble markets display the tell-tale signs of the human behaviour that drives them. In particular, people tend to follow each other and this results in a kind of herding behaviour that causes prices to fluctuate in a periodic fashion.
However, the frequency of these fluctuations increases rapidly as the bubble comes closer to bursting. It’s this signal that Sornette uses in predicting a change from superexponential growth to some other regime (which may not necessarily be a collapse).
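
The log-periodic signature described here has a standard functional form in Sornette's papers, the log-periodic power law (LPPL); quoting the usual parametrization, with t_c the critical time at which the bubble ends:

\[
\ln p(t) \simeq A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\!\big(\omega \ln(t_c - t) - \phi\big),
\]

with 0 < m < 1 and B < 0 for a bubble: the log-price grows faster than exponentially while oscillating at a rate that, in ordinary time, accelerates as t approaches t_c, which is the rising fluctuation frequency mentioned above.
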
While Sornette’s success last year was remarkable it wasn’t entirely convincing as this blog pointed out at the time
“The problem with this kind of forecast is that it is difficult interpret the results. Does it really back Sornette’s hypothesis that crashes are predictable? How do we know that he doesn’t make these predictions on a regular basis and only publicise the ones that come true? Or perhaps he modifies them as the due date gets closer so that they always seem to be right (as weather forecasters do). It’s even possible that his predictions influence the markets: perhaps they trigger crashes Sornette believes he can spot.”
That's when Sornette announced a brave way of testing his forecasting method, which he calls the Financial Bubble Experiment. His idea is to make a forecast but keep it secret. He posts it in encrypted form to the arXiv, which time-stamps it and ensures that no changes can be made.
Then, six months later, he reveals the forecast and analyses how successful it has been. 
Today, we can finally see the analysis of his first set of predictions made 6 months ago.
Back then, Sornette and his team identified four markets that seemed to be experiencing superexponential growth and the tell tale signs of an imminent bubble burst.
These were: the IBOVESPA index of 50 Brazilian stocks, a Merrill Lynch corporate bond index, the spot price of gold, and cotton futures.
These predictions had mixed success. First let's look at the failures. Sornette says that it now turns out that the Merrill Lynch index was already in the process of collapse when he made the original prediction six months ago, so that bubble burst long before Sornette said it would. And cotton futures are still climbing in a bubble market that has yet to collapse. So much for those forecasts.
However, Sornette and his team were spot on with their other predictions. Both the IBOVESPA Index and the spot price of gold changed from superexponential growth to some other kind of regime in the time frame that Sornette predicted. That’s an impressive result by anybody’s standards.
And the team says it can do better. They point out that they learnt a substantial amount during the first six months of the experiment. They have used this experience to develop a tool called a “bubble index” which they can use to determine the probability that a market that looks like a bubble actually is one. 
This should help to make future forecasts even more accurate. Had this tool been available six months ago, for example, it would have clearly shown that the Merrill Lynch index had already burst, they say. If Sornette continues with this kind of success, it's likely that others will want to copy his method. An interesting question is what will happen to the tell-tale herding behaviour once large numbers of analysts start looking for it and betting on it.
It’s tempting to imagine that this extra information would have a calming effect on otherwise volatile markets. But the real worry is that it could have exactly the opposite effect: that predictions of the imminent collapse whether accurate or not would lead to violent corrections. That will have big implications for econophysics and those who practice it. 
Either way, Sornette is continuing with the experiment. He has already sealed his set of predictions for the next six months and will reveal them on 1 November. We’ll be watching. 
The Financial Bubble Experiment: Advanced Diagnostics and Forecasts of Bubble Terminations Volume I

History of sociophysics II

The Next Million Years

The Next Million Years is a 1952 book by C. G. Darwin, notable as the first published work to outline a theory using the terms "human thermodynamics" and "human molecule" together. [1] Original copies sell for as much as $395. The book was reprinted in 1953 and 1973, and it is one of the founding books in the history of human thermodynamics.

Overview

In short, Darwin argued that humans were molecules, that assemblies of humans constituted "conservative dynamical systems", and that one could use statistical thermodynamics, particularly the American mathematical physicist Willard Gibbs's version of it, to predict the course of the next million years of human evolution. In his book, according to a 1953 review by Time magazine, Darwin, a theoretical physicist, invades sociological territory where many sociologists fear to tread. [2] He bases his reasoning about man's future on what is sometimes called "social physics": the idea that the behavior of humans in very large numbers can be predicted by the statistical methods that physicists use with large numbers of molecules. In the gas phase, the motions of single molecules are unpredictable: they may move fast or slow and zigzag in any direction, but the impacts of billions of gas molecules against a restraining surface produce a steady push that obeys definite and rather simple laws. In the same manner, Darwin believes, the actions of individual humans are erratic and sometimes remarkable, but the behavior of large numbers of them over long periods of time is as predictable as the pressure of a gas. All that is needed is to determine the basic, averaged-out properties of human molecules.

In Darwin’s view, according to the review, “human molecules have one fundamental property that dominates all others: they tend to increase their numbers up to the absolute limit of their food supply”. [2]

References

1. Darwin, Charles G. (1952). The Next Million Years (chapter one). London: Rupert Hart-Davis.

2. Staff Writer. (1953). “Million-Year Prophecy”. Time, Monday, Jan. 19.

Further reading

● Jessop, Brent. (2008). “A Darwin’s Look into The Next Million Years” (four part review), March 3. Knowledge Driven Revolution.com

● Bates, Marson. (1954). “Reviewed work: The Next Million Years by Charles G. Darwin.” American Anthropologist, New Series, Vol. 56, No. 2, Part. 1. Apr. pg. 337.

History of sociophysics I

Social physics

In science, social physics is the study of social systems and social phenomena from the perspective of physics.

History

The term social physics was first introduced by the Belgian statistician Adolphe Quetelet in his 1835 book Essay on Social Physics: Man and the Development of his Faculties. In it he outlines the project of a social physics, collects data on many measurable human variables, and describes his concept of the "average man" (l'homme moyen), characterized by the mean values of measured variables that follow a normal distribution. [5]

When the French social philosopher Auguste Comte, who had also used the term social physics in his 1842 work, discovered that Quetelet had appropriated the term before him, he found it necessary to coin the term 'sociologie' (sociology), in part because he disagreed with Quetelet's statistical approach.

Comte defined social physics as the study of the laws of society or the science of civilization. [1] Specifically, in part six of a series of books written between 1830 and 1842 on the subject of Positive Philosophy, Comte argued that social physics would complete the scientific description of the world that Galileo, Newton, and others had begun: [4] "Now that the human mind has grasped celestial and terrestrial physics, mechanical and chemical, organic physics, both vegetable and animal, there remains one science, to fill up the series of sciences or observation—social physics. This is what men have now most need of; and this it is the principal aim of the present work to establish."

In the opening page to his Social Physics, Comte gives the following situation: "The theories of social science are still, even in the minds of the best thinkers, completely implicated with the theologico-metaphysical philosophy (which he says is 'in a state of imbecility'); and are even supposed to be, by a fatal separation from all other science, condemned to remain so involved forever." It is interesting how, in the year 2010, the same dogmatic view remains.

In recent years, building on the development of the kinetic theory of gases (1859) and statistical mechanics (1872), founded during the last half of the 19th century, some authors have begun to incorporate a statistical thermodynamic perspective in models of social physics in which people are viewed as atoms or molecules (human molecules) such that the law of large numbers yields social behaviors such as, for instance, the 80-20 rule, wherein, typically, 80 percent of a country’s wealth is distributed among 20 percent of the population. [2]
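
The 80-20 example can be made concrete with a Pareto distribution: for a classical Pareto law with shape parameter α, the richest fraction p of the population holds a share p^((α−1)/α) of the total, and α ≈ 1.16 reproduces the 80/20 split. A quick numerical check (toy sample; the estimate is noisy because the mean of such a heavy-tailed distribution converges slowly):

import numpy as np

# Sample wealth from a classical Pareto distribution and measure the share held by the richest 20%.
# alpha ~ 1.16 is the shape parameter that yields the 80/20 rule.
rng = np.random.default_rng(0)
alpha, n = 1.16, 1_000_000
wealth = 1 + rng.pareto(alpha, size=n)      # classical Pareto with minimum wealth 1

wealth.sort()
top20_share = wealth[int(0.8 * n):].sum() / wealth.sum()
print("share of total wealth held by the richest 20%:", round(top20_share, 2))
print("analytic expectation 0.2**((alpha - 1) / alpha):", round(0.2 ** ((alpha - 1) / alpha), 2))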

Implications

When one applies statistical thought or the “logic of large numbers” to society, according to English chemist and physicist Philip Ball, the concept of human free will is the first question in the minds of those encountering the new “physics of society” for the first time. The debate on this topic, according to Ball, began to rage in the 19th century and still preoccupies sociologists today. [3]

See also

Social thermodynamics

Sociological thermodynamics

Socio-thermodynamics

Social atom

Social bond

Sociophysics

Social pressure

Social temperature

References

1. (a) Ball, Philip. (2004). Critical Mass – How One Thing Leads to Another (pg. 58). New York: Farrar, Straus and Giroux. (b) Nisbet, Robert A. (1970). The Social Bond – an Introduction to the Study of Society (pg. 29). New York: Alfred A. Knopf.

2. Buchanan, Mark. (2007). The Social Atom – why the Rich get Richer, Cheaters get Caught, and Your Neighbor Usually Looks Like You, (pgs. x-xi). New York: Bloomsbury.

3. Ball, Philip. (2004). Critical Mass – How One Thing Leads to Another (pgs. 71-72). New York: Farrar, Straus and Giroux.

4. Comte, Auguste. (1856). Social Physics: from the Positive Philosophy. New York: Calvin Blanchard.

5. (a) Quetelet, Adolphe. (1835). Essay on Social Physics: Man and the Development of his Faculties (Sur l’homme et le Développement de ses Facultés, ou Essai de Physique Sociale, Vol. 1, Vol. 2). Paris: Imprimeur-Libraire.

(b) Quetelet, Adolphe. (1842). Treatise on Man: and the Development of His Faculties. Ayer Publishing.

Further reading

● Foley, Vernard. (1976). The Social Physics of Adam Smith (thermodynamics, pgs. 191-94; entropy, pg. 199). Purdue University Press.

● Iberall, A.S. (1985). “Outlining Social Physics for Modern Societies: Locating Cultures, Economics, and Politics: the Enlightenment Reconsidered.” (abstract), Proc Natl Acad Sci USA, 82(17): 5582-84.

● Mirowski, Philip. (1989). More Heat than Light: Economics as Social Physics, Physics as Nature’s Economics. Cambridge University Press.

● Urry, John. (2004). "Small Worlds and the New 'Social Physics'," (html) (abstract) Global Networks, 4(2): 109-30.

External links

Physics of Society – A collection of articles by English chemist and physicist Philip Ball.