
Musical statistics for the Applied Statistics II class for Psychology

12/08/2013 – 03h07

Study shows that most people always listen to the same songs

IARA BIDERMAN, FROM SÃO PAULO


The option of listening to each and every new song is one tap away on the screen. And you will still always choose those same old tunes.

The ones willing to bet on what your selection will be are the authors of a study carried out at the University of Washington on the power of familiarity in musical choice.

The study was conducted with more than 900 university students, self-declared lovers of new sounds. At least that is what they said in preliminary questionnaires. Curiously, the participants' B-side showed up when they were faced with real choices between pairs of songs. Most picked the ones they had heard more often.

Always listening to the same music is not a lack of options or of imagination. According to Carlos Augustos Costa, coordinator of the neuromarketing laboratory at Fundação Getulio Vargas in São Paulo, it is all in your head.

"The brain doesn't like anything complicated. If you hear a new sound, you have to stop to make sense of it, but if the song has familiar patterns it's a piece of cake: you decide immediately to listen to it."

Familiar here means a musical pattern the person knows how to recognize, or a style associated with positive memories.

"Music you already know has enormous emotional value. Every time you hear it, you associate it with a sensation of pleasure, and the more you listen, the more you reinforce that association," says neuroscientist and Folha columnist Suzana Herculano-Houzel.

Infographic (Editoria de Arte/Folhapress): the ten most profitable songs, national and international


Our universe will freeze like a supercooled beer…


Finding the Higgs? Good news. Finding its mass? Not so good.

“Fireballs of doom” from a quantum phase change would wipe out present Universe.

by  – Feb 19 2013, 8:55pm HB

A collision in the LHC’s CMS detector.

Ohio State’s Christopher Hill joked he was showing scenes of an impending i-Product launch, and it was easy to believe him: young people were setting up mats in a hallway, ready to spend the night to secure a space in line for the big reveal. Except the date was July 3 and the location was CERN—where the discovery of the Higgs boson would be announced the next day.

It’s clear the LHC worked as intended and has definitively identified a Higgs-like particle. Hill put the chance of the ATLAS detector having registered a statistical fluke at less than 10⁻¹¹, and he noted that wasn’t even considering the data generated by its partner, the CMS detector. But is it really the one-and-only Higgs and, if so, what does that mean? Hill was part of a panel that discussed those questions at the meeting of the American Association for the Advancement of Science.

As theorist Joe Lykken of Fermilab pointed out, the answers matter. If current results hold up, they indicate the Universe is currently inhabiting what’s called a false quantum vacuum. If it were ever to reach the real one, its existing structures (including us) would go away in what Lykken called “fireballs of doom.”

We’ll look at the less depressing stuff first, shall we?

Zeroing in on the Higgs

Thanks to the Standard Model, we were able to make some very specific predictions about the Higgs. These include the frequency with which it will decay via different pathways: two gamma-rays, two Z bosons (which further decay to four muons), etc. We can also predict the frequency of similar looking events that would occur if there were no Higgs. We can then scan each of the decay pathways (called channels), looking for energies where there is an excess of events, or bump. Bumps have shown up in several channels in roughly the same place in both CMS and ATLAS, which is why we know there’s a new particle.
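As a toy illustration of this bump-hunting logic (not the actual CMS/ATLAS statistical machinery, and with invented counts), one can ask how unlikely an observed excess over the expected background would be if it were only a background fluctuation:

# Toy illustration (not the real analysis): a "bump" is an excess of events
# over the background expectation in some energy window. Counts are invented.
from scipy.stats import norm, poisson

background_expected = 120.0   # events predicted with no Higgs in this window
observed = 175                # events actually counted in the same window

# p-value: probability of seeing at least this many events from background alone
p_value = poisson.sf(observed - 1, background_expected)

# convert to the "number of sigmas" particle physicists quote
significance = norm.isf(p_value)

print(f"p-value = {p_value:.2e}, significance = {significance:.1f} sigma")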

But we still don’t know precisely what particle it is. The Standard Model Higgs should have a couple of properties: it should be scalar and should have a spin of zero. According to Hill, the new particle is almost certainly scalar; he showed a graph where the alternative, pseudoscalar, was nearly ruled out. Right now, spin is less clearly defined. It’s likely to be zero, but we haven’t yet ruled out a spin of two. So far, so Higgs-like.

The Higgs is the particle form of a quantum field that pervades our Universe (it’s a single quantum of the field), providing other particles with mass. In order to do that, its interactions with other particles vary—particles are heavier if they have stronger interactions with the Higgs. So, teams at CERN are sifting through the LHC data, checking for the strengths of these interactions. So far, with a few exceptions, the new particle is acting like the Higgs, although the error bars on these measurements are rather large.

As we said above, the Higgs is detected in a number of channels and each of them produces an independent estimate of its mass (along with an estimated error). As of the data Hill showed, not all of these estimates had converged on the same value, although they were all consistent within the given errors. These can also be combined mathematically for a single estimate, with each of the two detectors producing a value. So far, these overall estimates are quite close: CMS has the particle at 125.8 GeV, ATLAS at 125.2 GeV. Again, the error bars on these values overlap.

Oops, there goes the Universe

That specific mass may seem fairly trivial—if it were 130 GeV, would you care? Lykken made the argument that you probably should. But he took some time to build to that.

Lykken pointed out that, as the measurements mentioned above get more precise, we may find the Higgs isn’t decaying at precisely the rates we expect it to. This may be because we have some details of the Standard Model wrong. Or, it could be a sign the Higgs is also decaying into some particles we don’t know about—particles that are dark matter candidates would be a prime choice. The behavior of the Higgs might also provide some indication of why there’s such a large excess of matter in the Universe.

But much of Lykken’s talk focused on the mass. As we mentioned above, the Higgs field pervades the entire Universe; the vacuum of space is filled with it. And, with a value for the Higgs mass, we can start looking into the properties of the Higgs field and thus the vacuum itself. “When we do this calculation,” Lykken said, “we get a nasty surprise.”

It turns out we’re not living in a stable vacuum. Eventually, the Universe will reach a point where the contents of the vacuum are the lowest energy possible, which means it will reach the most stable state possible. The mass of the Higgs tells us we’re not there yet, but are stuck in a metastable state at a somewhat higher energy. That means the Universe will be looking for an excuse to undergo a phase transition and enter the lower state.

What would that transition look like? In Lykken’s words, again, “fireballs of doom will form spontaneously and destroy the Universe.” Since the change would alter the very fabric of the Universe, anything embedded in that fabric—galaxies, planets, us—would be trashed during the transition. When an audience member asked “Are the fireballs of doom like ice-9?” Lykken replied, “They’re even worse than that.”

Lykken offered a couple of reasons for hope. He noted the outcome of these calculations is extremely sensitive to the values involved. Simply shifting the top quark’s mass by two percent, to a value that’s still within the error bars of most measurements, would make for a far more stable Universe.

And then there’s supersymmetry. The news for supersymmetry out of the LHC has generally been negative, as various models with low-mass particles have been ruled out by the existing data (we’ll have more on that shortly). But supersymmetry actually predicts five Higgs particles. (Lykken noted this by showing a slide with five different photos of Higgs taken at various points in his career, in which he was “differing in mass and other properties, as happens to all of us.”) So, when the LHC starts up at higher energies in a couple of years, we’ll actually be looking for additional, heavier versions of the Higgs.

If those are found, then the destruction of our Universe would be permanently put on hold. “If you don’t like that fate of the Universe,” Lykken said, “root for supersymmetry.”

Extrasolar planets, Kepler-62 and the local Fermi Paradox

As the number of discovered extrasolar planets grows, we also tighten the constraints on the predictions of the galactic percolation model (the Local Fermi Paradox).
The prediction is that, if we assume that Memetic Biospheres (cultural biospheres, or technospheres) are a likely outcome of Genetic Biospheres, then we should be inside a region with few habitable planets. For if there were planets nearby inhabited by intelligent beings, they would very probably be far more advanced than we are, and would already have colonized us.
Since this has not happened yet (unless you believe the conspiracy theories of ufologists and the theories about ET Jesus, astronaut gods and so on), it follows that the more data astronomers gather, the more evident it will become that our solar system is an anomaly within our cosmic neighborhood (1000 light-years?). In other words, we cannot assume the Copernican Principle for the solar system: our solar system is not typical of our neighborhood. Well, at least this conclusion is consistent with the data collected so far…
Thus, one can predict that further analysis of the planets Kepler-62e and Kepler-62f will reveal that they do not have an atmosphere containing oxygen or methane, the signatures of a planet with a biosphere.

Persistence solves Fermi Paradox but challenges SETI projects

Osame Kinouchi (DFM-FFCLRP-Usp)
(Submitted on 8 Dec 2001)

Persistence phenomena in colonization processes could explain the negative results of SETI searches while preserving the possibility of a galactic civilization. However, persistence phenomena also indicate that searching for technological civilizations around stars in the neighbourhood of the Sun is a misdirected SETI strategy. This last conclusion is also suggested by a weaker form of the Fermi paradox. A simple model of branching colonization, which includes emergence, decay and branching of civilizations, is proposed. The model could also be used in the context of ant nest diffusion.

03/05/2013 – 03h10

Possibility of life is not restricted to Earth-like planets, study says

SALVADOR NOGUEIRA
CONTRIBUTING TO FOLHA

Given the different compositions, masses and orbits possible for planets outside the Solar System, life may not be limited to worlds similar to Earth in orbits equivalent to Earth's.


This is one of the conclusions presented by Sara Seager, of MIT (the Massachusetts Institute of Technology) in the US, in a review article published in the journal "Science", based on a statistical analysis of the roughly 900 worlds already detected around more than 400 stars.

Seager highlights the possible existence of planets whose atmospheres would be dense enough to preserve liquid water on the surface even at temperatures far lower than Earth's.

SOC and Cancer

Something to take a look at…

Self-Organized Criticality: A Prophetic Path to Curing Cancer

J. C. Phillips
(Submitted on 28 Sep 2012)

While the concepts involved in Self-Organized Criticality have stimulated thousands of theoretical models, only recently have these models addressed problems of biological and clinical importance. Here we outline how SOC can be used to engineer hybrid viral proteins whose properties, extrapolated from those of known strains, may be sufficiently effective to cure cancer.

Subjects: Biomolecules (q-bio.BM)
Cite as: arXiv:1210.0048 [q-bio.BM]
(or arXiv:1210.0048v1 [q-bio.BM] for this version)

New paper on cellular automata and the Fermi Paradox

A new paper has come out on the percolation hypothesis for the Fermi Paradox, in which three-dimensional cellular automaton simulations are used. This time, the authors' conclusion is that the simulations do not support the hypothesis.

Well, I don't think this is the end of the story. I already knew that, for the hypothesis to work, the diffusion would have to be critical (that is, forming a critical, or slightly supercritical, cluster of occupied planets).

In other words, the hypothesis needs to be complemented by some argument for why the diffusion should be critical. Well, since critical systems are abundant in social and biological processes, I think it is enough to find that criticality factor to justify the model. My heuristic would be: …

The probability of an event bigger than "September 11" exceeds 95%

Statisticians Calculate Probability Of Another 9/11 Attack

According to the statistics, there is a 50 per cent chance of another catastrophic terrorist attack within the next ten years


THE PHYSICS ARXIV BLOG

Wednesday, September 5, 2012

Earthquakes are seemingly random events that are hard to predict with any reasonable accuracy. And yet geologists make very specific long term forecasts that can help to dramatically reduce the number of fatalities.

For example, the death toll from earthquakes in the developed world, in places such as Japan and New Zealand, would have been vastly greater were it not for strict building regulations enforced on the back of well-founded predictions that big earthquakes were likely in future.

The problem with earthquakes is that they follow a power law distribution–small earthquakes are common and large earthquakes very rare but the difference in their power is many orders of magnitude.

Humans have a hard time dealing intuitively with these kinds of statistics. But in the last few decades statisticians have learnt how to handle them, provided that they have a reasonable body of statistical evidence to go on.

That’s made it possible to make predictions about all kinds of phenomena governed by power laws, everything from earthquakes, forest fires and avalanches to epidemics, the volume of email and even the spread of rumours.

So it shouldn’t come as much of a surprise that Aaron Clauset at the Santa Fe Institute in New Mexico and Ryan Woodard at ETH, the Swiss Federal Institute of Technology, in Zurich have used this approach to study the likelihood of terrorist attacks.
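As a minimal sketch of the kind of heavy-tail estimate behind such studies (with synthetic data, not Clauset and Woodard's), one can draw event sizes from a power law and recover the exponent with the standard maximum-likelihood estimator:

# Synthetic power-law sample plus the continuous maximum-likelihood exponent fit.
import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min = 2.5, 1.0

# inverse-transform sampling of a Pareto (power-law) distribution
u = rng.uniform(size=10_000)
sizes = x_min * u ** (-1.0 / (alpha_true - 1.0))

# maximum-likelihood estimate of the exponent for a continuous power law
tail = sizes[sizes >= x_min]
alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / x_min))

print(f"estimated exponent: {alpha_hat:.2f} (true value {alpha_true})")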

The ideology of power laws



Power Laws, Weblogs, and Inequality

First published February 8, 2003 on the “Networks, Economics, and Culture” mailing list.

Version 1.1: Changed 02/10/03 to point to the updated “Blogging Ecosystem” project, and to Jason Kottke’s work using Technorati.com data. Added addendum pointing to David Sifry’s “Technorati Interesting Newcomers” list, which is in part a response to this article.

A persistent theme among people writing about the social aspects of weblogging is to note (and usually lament) the rise of an A-list, a small set of webloggers who account for a majority of the traffic in the weblog world. This complaint follows a common pattern we’ve seen with MUDs, BBSes, and online communities like Echo and the WELL. A new social system starts, and seems delightfully free of the elitism and cliquishness of the existing systems. Then, as the new system grows, problems of scale set in. Not everyone can participate in every conversation. Not everyone gets to be heard. Some core group seems more connected than the rest of us, and so on.
Prior to recent theoretical work on social networks, the usual explanations invoked individual behaviors: some members of the community had sold out, the spirit of the early days was being diluted by the newcomers, et cetera. We now know that these explanations are wrong, or at least beside the point. What matters is this: Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
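A rough simulation of that claim, under assumptions of my own choosing (each new link goes to a brand-new blog 10% of the time, otherwise to an existing blog with probability proportional to its current links), shows how free choice alone concentrates attention:

# Preferential-attachment sketch: popularity feeds on itself, no villains needed.
import random
from collections import Counter

random.seed(42)
owners = [0]          # one entry per inbound link, holding the id of the blog that owns it
n_blogs = 1
for _ in range(50_000):
    if random.random() < 0.1:     # sometimes the link goes to a brand-new blog
        owners.append(n_blogs)
        n_blogs += 1
    else:                         # otherwise, pick proportionally to current link counts
        owners.append(random.choice(owners))

counts = Counter(owners)
top5 = sum(c for _, c in counts.most_common(5))
print(f"{n_blogs} blogs in total; the top 5 hold {top5} of {len(owners)} links")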

Scaling laws and criticality in the cognitive sciences

doi:10.1016/j.tics.2010.02.005
Copyright © 2010 Elsevier Ltd. All rights reserved.
Review

Scaling laws in cognitive sciences
Christopher T. Kello¹, Gordon D.A. Brown², Ramon Ferrer-i-Cancho³, John G. Holden⁴, Klaus Linkenkaer-Hansen⁵, Theo Rhodes¹ and Guy C. Van Orden⁴
¹ Cognitive and Information Sciences, University of California, Merced, 5200 North Lake Rd., Merced, CA 95343, USA
² Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
³ Department de Llenguatges i Sistemes Informatics, Universitat Politecnica de Catalunya, Campus Nord, Edifici Omega, Jordi Girona Salgado 1-3, 08034 Barcelona, Catalonia, Spain
⁴ Center for Perception, Action and Cognition, Department of Psychology, University of Cincinnati, PO Box 210376, Cincinnati, OH 45221-0376, USA
⁵ Department of Integrative Neurophysiology, VU University Amsterdam, De Boelelaan 1085, 1081 HV Amsterdam, the Netherlands
Available online 1 April 2010. 
Scaling laws are ubiquitous in nature, and they pervade neural, behavioral and linguistic activities. A scaling law suggests the existence of processes or patterns that are repeated across scales of analysis. Although the variables that express a scaling law can vary from one type of activity to the next, the recurrence of scaling laws across so many different systems has prompted a search for unifying principles. In biological systems, scaling laws can reflect adaptive processes of various types and are often linked to complex systems poised near critical points. The same is true for perception, memory, language and other cognitive phenomena. Findings of scaling laws in cognitive science are indicative of scaling invariance in cognitive mechanisms and multiplicative interactions among interdependent components of cognition.
Article Outline
The scaling law debate
Scaling laws in perception, action and memory
Scaling laws in reaction times and word frequencies
Scaling laws and criticality
Concluding remarks
Acknowledgements
References
Damn, that idea was mine…:
Another type of scaling law in memory comes from a classic free recall paradigm, yet was only recently discovered by drawing an analogy to studies of animal foraging behaviors [24]. Birds, monkeys, fish and numerous other species have been reported to search for food in Lévy flight patterns [25], which have been hypothesized as effective search strategies because they cover more territory than, for example, a random walk with normally distributed steps [26]. Searching for items or events in memory is like foraging, particularly in tasks such as free recall of members of a given semantic category (e.g. animals) in a given time period [27]. Rhodes and Turvey [24] analyzed inter-response time intervals (IRIs) from this classic memory task, which are analogous to steps from one recalled item to the next. The authors found IRIs to be power-law distributed with exponents very similar to those found in animal foraging (Figure 2). These comparable results suggest that Lévy flights are generally adaptive across a variety of search ecologies. These results also illustrate how scaling laws can lurk unnoticed in data for decades, in the absence of theories and analytic techniques necessary to recognize them.
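A small synthetic sketch of what such heavy-tailed inter-response intervals look like (the exponent and time scale below are arbitrary, not Rhodes and Turvey's estimates), contrasted with the thin-tailed intervals a memoryless process would produce:

# Power-law (Levy-flight-like) inter-response intervals versus exponential ones.
import numpy as np

rng = np.random.default_rng(1)
mu, t_min, n = 2.5, 0.5, 5_000      # power-law exponent, minimum IRI in seconds, sample size

levy_iri = t_min * rng.uniform(size=n) ** (-1.0 / (mu - 1.0))   # heavy-tailed IRIs
exp_iri = rng.exponential(scale=levy_iri.mean(), size=n)        # same mean, thin tail

for name, x in [("Levy-like", levy_iri), ("exponential", exp_iri)]:
    print(f"{name}: mean {x.mean():6.2f} s, max {x.max():9.1f} s")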
Complex Times for Earthquakes, Stocks, and the Brain’s Activity
Christoph Kayser¹ and Bard Ermentrout²
¹ Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
² Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA
Available online 12 May 2010. 
Refers to: The Temporal Structures and Functional Significance of Scale-free Brain Activity
Neuron, Volume 66, Issue 3, 13 May 2010, Pages 353-369, 
Biyu J. He, John M. Zempel, Abraham Z. Snyder, Marcus E. Raichle
A new study by He et al. in this issue of Neuron shows that large-scale arrhythmic (1/f) brain activity contains nested temporal structure in the form of crossfrequency coupling. This suggests temporal organization in neural mass activity beyond oscillations and draws attention to ubiquitous but often ignored arrhythmic patterns in neural activity.
What do earthquakes, Dow-Jones, and brain activity have in common? Unpredictability first springs to mind, of course, but as researchers have long noticed, these and many other complex processes might actually share common patterns pertaining to long-range spatio-temporal correlations of the underlying quantities ([Kello et al., 2010] and [Jensen, 1998]). In addition, and as an intriguing study in this issue of Neuron illustrates (He et al., 2010), they might also share another level of temporal organization, whereby the phase of slower timescales predicts the amplitude of faster ones. This nesting of timescales might open a window onto the complex structure of neural activity, but also raises questions with regard to its universality.
In their new study, He et al. recorded electrocorticographic (ECoG) activity across several brain areas in human patients. To investigate the signal’s temporal structure, they calculated the frequency spectrum, i.e., the distribution of amplitudes of individual frequency bands as a function of frequency. In concordance with previous studies, they described the frequency spectra using the power law 1/f^a, with the scaling factor a differing between low (<1 Hz) and high (>1 Hz) frequency bands. When shown on logarithmic axes, such power-law scaling translates into a straight line with slope a, as illustrated in Figure 1A.
It is important to note the distinction between the spectral 1/f^a shape and rhythmic oscillatory activity. Oscillatory activities with well-defined frequencies (e.g., theta, alpha, or gamma oscillations) are prevalent in neural networks and result in distinct peaks above the 1/f^a background (Buzsaki, 2006) (cf. Figure 1A). Typically, such oscillations result from processes with well-defined intrinsic timescales and can be associated with defined networks such as thalamocortical or hippocampal loops. In contrast to this, activity characterized by a (straight) 1/f^a spectrum is considered “arrhythmic,” as it does not reflect processes with identifiable timescales. Systems that generate perfect power-law spectra are also known as “scale-free,” since the underlying process or network possesses no distinguished scale ([Bak et al., 1987] and [Jensen, 1998]). Importantly, while oscillations have attracted wide interest and are a matter of various speculations with regard to their meaning and function, the arrhythmic component of electric brain activity is often considered self-evident or uninteresting and hence ignored.
The stunning finding of He et al. is that even such supposedly arrhythmic brain activity has a complex temporal structure in the form of crossfrequency phase-amplitude coupling. Crossfrequency implies that the coupling involves two distinct frequency bands, and phase-amplitude implies that the amplitude of one band is dependent on the phase of the other. In particular, He et al. extracted band-limited components from their wide-band signals and found that the amplitude of the faster component depends on the phase of the slower one, as illustrated in Figure 1B. For their analysis they considered a range of frequency pairs and used statistical bootstrapping methods to validate the significance of phase dependency. Overall, they found that more than 70% of the electrodes contained frequency pairs with significant frequency coupling. Importantly, and to prove the importance of this phenomenon, they demonstrated the existence of crossfrequency coupling not only in resting state activity, but also during task performance and slow-wave sleep.
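For readers who want to see what the 1/f^a description means in practice, here is a purely illustrative sketch (not He et al.'s ECoG pipeline): synthesize a signal with a chosen spectral exponent and recover the slope from a straight-line fit on log-log axes.

# Generate 1/f^a noise by spectral shaping, then estimate a from the power spectrum.
import numpy as np

rng = np.random.default_rng(2)
n, fs, a_true = 2 ** 16, 1000.0, 1.3      # samples, sampling rate (Hz), target exponent

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] *= freqs[1:] ** (-a_true / 2.0)   # amplitude ~ f^(-a/2)  =>  power ~ f^(-a)
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

# estimate the exponent: linear fit of log power versus log frequency
power = np.abs(np.fft.rfft(signal)) ** 2
mask = (freqs > 1.0) & (freqs < 100.0)
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)

print(f"fitted exponent a = {-slope:.2f} (target {a_true})")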

Benford's Law and Statistical Physics

Benford’s Law And A Theory of Everything

Posted: 06 May 2010 09:10 PM PDT

A new relationship between Benford’s Law and the statistics of fundamental physics may hint at a deeper theory of everything

In 1938, the physicist Frank Benford made an extraordinary discovery about numbers. He found that in many lists of numbers drawn from real data, the leading digit is far more likely to be a 1 than a 9. In fact, the distribution of first digits follows a logarithmic law. So the first digit is likely to be 1 about 30 per cent of the time while the number 9 appears only five per cent of the time.

That’s an unsettling and counterintuitive discovery. Why aren’t numbers evenly distributed in such lists? One answer is that if numbers have this type of distribution then it must be scale invariant. So switching a data set measured in inches to one measured in centimetres should not change the distribution. If that’s the case, then the only form such a distribution can take is logarithmic.

But while this is a powerful argument, it does nothing to explain the existence of the distribution in the first place.

Then there is the fact that Benford’s law seems to apply only to certain types of data. Physicists have found that it crops up in an amazing variety of data sets. Here are just a few: the areas of lakes, the lengths of rivers, the physical constants, stock market indices, file sizes in a personal computer and so on.

However, there are many data sets that do not follow Benford’s law, such as lottery and telephone numbers.

What’s the difference between these data sets that makes Benford’s law apply or not? It’s hard to escape the feeling that something deeper must be going on.
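As a quick numerical check of the logarithmic first-digit law described above, one can take any synthetic data set spanning several orders of magnitude (a lognormal sample is used here purely as an illustration) and compare its leading-digit frequencies with Benford's prediction log10(1 + 1/d):

# First-digit frequencies of a broad, scale-spanning sample versus Benford's law.
import numpy as np

rng = np.random.default_rng(3)
data = rng.lognormal(mean=0.0, sigma=3.0, size=100_000)

first_digits = np.array([int(f"{x:e}"[0]) for x in data])   # leading significant digit
observed = np.bincount(first_digits, minlength=10)[1:] / len(data)
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

for d in range(1, 10):
    print(f"digit {d}: observed {observed[d - 1]:.3f}, Benford {benford[d - 1]:.3f}")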

Today, Lijing Shao and Bo-Qiang Ma at Peking University in China provide a new insight into the nature of Benford’s law. They examine how Benford’s law applies to three kinds of statistical distributions widely used in physics.

These are: the Boltzmann-Gibbs distribution which is a probability measure used to describe the distribution of the states of a system; the Fermi-Dirac distribution which is a measure of the energies of single particles that obey the Pauli exclusion principle (ie fermions); and finally the Bose-Einstein distribution, a measure of the energies of single particles that do not obey the Pauli exclusion principle (ie bosons).

Lijing and Bo-Qiang say that the Boltzmann-Gibbs and Fermi-Dirac distributions both fluctuate in a periodic manner around the Benford distribution with respect to the temperature of the system. The Bose-Einstein distribution, on the other hand, conforms to Benford’s law exactly whatever the temperature is.

What to make of this discovery? Lijing and Bo-Qiang say that logarithmic distributions are a general feature of statistical physics and so “might be a more fundamental principle behind the complexity of the nature”.

That’s an intriguing idea. Could it be that Benford’s law hints at some kind of underlying theory that governs the nature of many physical systems? Perhaps.

But what then of data sets that do not conform to Benford’s law? Any decent explanation will need to explain why some data sets follow the law and others don’t and it seems that Lijing and Bo-Qiang are as far as ever from this.

Ref: arxiv.org/abs/1005.0660: The Significant Digit Law In Statistical Physics

Neuronal Avalanches are taking off…

Dietmar Plenz wrote to Mauro that our papers have been well received and that "the plane is taking off…"

Neuronal avalanches imply maximum dynamic range in cortical networks at criticality

(Submitted on 2 Jun 2009 (v1), last revised 10 Jun 2009 (this version, v2))

Abstract: Spontaneous neural activity is a ubiquitous feature of the brain even in the absence of input from the senses. What role this activity plays in brain function is a long-standing enigma in neuroscience. Recent experiments demonstrate that spontaneous activity both in the intact brain and in vitro has statistical properties expected near the critical point of a phase transition, a phenomenon called neuronal avalanches. Here we demonstrate in experiments and simulations that cortical networks which display neuronal avalanches benefit from maximized dynamic range, i.e. the ability to respond to the greatest range of stimuli. Our findings (1) show that the spontaneously active brain and its ability to process sensory input are unified in the context of critical phenomena, and (2) support predictions that a brain operating at criticality may benefit from optimal information processing.

Comments: main text – 4 pages, 4 figures supplementary materials – 3 pages, 4 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:0906.0527v2 [q-bio.NC]
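As an illustrative aside (this is not the cortical network model of the paper above), even the simplest critical branching process, with branching ratio exactly one, already produces the heavy-tailed avalanche-size statistics that the term "neuronal avalanches" refers to:

# Critical branching process: avalanche sizes acquire a power-law-like tail at sigma = 1.
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0                     # branching ratio; < 1 is subcritical, > 1 supercritical

def avalanche_size(max_size=10_000):
    active, size = 1, 1
    while active and size < max_size:
        active = rng.poisson(sigma * active)   # each active unit spawns ~sigma descendants
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])
for s in (1, 10, 100, 1000):
    print(f"P(size >= {s:4d}) = {np.mean(sizes >= s):.4f}")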

Studying the blogosphere

Macroscopic and microscopic statistical properties observed in blog entries

Authors: Yukie Sano, Misako Takayasu
(Submitted on 9 Jun 2009)

Abstract: We observe the statistical properties of blogs that are expected to reflect social human interaction. Firstly, we introduce a basic normalization preprocess that enables us to evaluate the genuine word frequency in blogs that are independent of external factors such as spam blogs, server-breakdowns, increase in the population of bloggers, and periodic weekly behaviors. After this process, we can confirm that small frequency words clearly follow an independent Poisson process as theoretically expected. Secondly, we focus on each blogger’s basic behaviors. It is found that there are two kinds of behaviors of bloggers. Further, Zipf’s law on word frequency is confirmed to be universally independent of individual activity types.

Comments: 10 pages, 14 figures
Subjects: Physics and Society (physics.soc-ph)
Cite as: arXiv:0906.1744v1 [physics.soc-ph]

Correction: Lightning does not bring down airplanes

The good thing about using blogs and comment boxes is that information can be corrected quickly. In another post I reproduced Air France's announcement that a lightning strike might have brought down the plane. Roberto Takata and João Carlos quickly pointed out to me that this is improbable (and I really slipped up there, because airplanes are Faraday cages, and I should have known that). A text about this can be found here.
The curious question remains of why Air France would have released such a bizarre piece of information. Unless they have inside information (one of the aircraft's "black boxes" sends technical information online (hundreds of parameters) via satellite to the Air France control center).
One speculation of mine would be about the magnitude of lightning strikes. Has anyone ever plotted a distribution of the magnitude (energy released?) of lightning strikes? I bet it comes out as a power law, because lightning seems to resemble avalanches with fractal characteristics.
In terms of spatial extent, I know that lightning flashes (flashes, not ground strikes, OK!) can extend for hundreds of kilometers (see in the video above how the events are not random; some flashes seem to trigger others kilometers away, that is, the events are correlated and do not follow a Poisson distribution). I would like to write a paper about this someday… Does any reader have access to the Bauru radar data?
So, who knows, maybe airplane safety systems only withstand lightning up to a certain magnitude M, and this aircraft was hit by a strike of very high magnitude. Like a building that withstands a magnitude 6 earthquake but not a magnitude 7, you see?
If Air France's statement is not based on data transmitted by the black box, then it was just a technically poor and frankly absurd speculation. Do Air France's spokespeople understand nothing about airplanes?
So, as for my post, forget the story of lightning bringing down airplanes. But the reasoning about how being struck by lightning (on a soccer field?) is connected to black holes millions of light-years away is still valid…

Pigs, Swans and pseudoscience

This post belongs to Roda de Ciência, April's theme. Please leave your comments there.

BLACK SWAN THEORY

1. Before Australia was discovered, all the swans in the world were white. Australia, home of the black swan (Cygnus atratus), showed the possibility of an exception hidden from us, one we had no idea existed.

2. My black swan is not a bird but an event with three characteristics: 1) it is highly unexpected; 2) it has a great impact; and 3) after it happens, we look for an explanation that makes it seem less random and more predictable.

3. The black swan explains almost everything in the world, such as the First World War. It was unpredictable, but after it happened its causes seemed obvious to people. The same happened with the Second World War. These facts prove humanity's inability to predict big events.

4. More recently, the internet is a black swan. Having emerged as a military communication tool, it transformed the world very quickly. Nobody imagined that possibility.

5. The discoveries that had a strong impact on humanity were accidents along the way, that is, the scientists were looking for something else, as in the case of the laser, created to be a kind of radar and not to be used in eye surgery.

6. No one can know when a black swan will appear, but the essential thing is not to take one's life planning too seriously. Things can change when you least expect it. The 'stress test', one of the risk management models, assesses impact that has already occurred, not impact yet to come. The variables used are taken from the past.

7. The degree of randomness depends on the observer. Randomness is incomplete understanding or incomplete information. Events like September 11, 2001, in New York are not random. In fact, terrorists planned and had knowledge of September 11.

8. Forecasting socio-economic events is very difficult. The track record of forecasts is garbage. 'Risk management' is garbage. The attempt to determine the cause and effect of events is continually obstructed by unpredictable phenomena. People in the world of finance have the illusion that they can predict events, yet they cannot justify their predictions.

9. Recommending stocks to buy is the posture of charlatans. Those who recommend what not to do in the market, instead of saying what to do, are not charlatans. People can do many things if they know what not to do. If people avoid far-fetched techniques, they will not depend on market forecasts.

10. People should not depend on 'measures of risk', indicators meant to measure risk. The important thing is to ensure a portfolio structured so as to have no 'downside risk' (potential for losses) or 'upside exposure' (potential for gains), because that way people can make a lot of money if they come across a black swan.

11. People should not go hunting for the black swan, but once it appears, they should have their exposure to it maximized. People should believe in the possibility of the most unusual things happening, on the positive side as well as on the negative side.

12. The black swan is the risk of big events, positive or negative. Some things may be volatile, but they are not necessarily a black swan.

Interview with Nassim Nicholas Taleb, American, author of 'The Black Swan: The Impact of the Highly Improbable', for five weeks on The New York Times bestseller list (Valor, São Paulo, June 4, 2007, p. F14).

One of the good things statistical physicists have done, in cultural terms, was to draw people's attention to extremal events, that is, the events in the tail of a statistical distribution. If the tail falls off exponentially (as is the case for the normal distribution), then extremal events are very rare and we can neglect them in a risk analysis. But if the tail falls off as a power law, then extremal events cannot be neglected; they have to be taken into account.
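To make the contrast concrete, here is a small numerical comparison of how fast the two kinds of tails decay; the Pareto exponent and scale are arbitrary choices made only for the illustration.

# Exponential-type (Gaussian) tail versus power-law (Pareto) tail.
from scipy.stats import norm

alpha, x_min = 2.5, 1.0                 # Pareto tail: P(X > x) = (x_min / x) ** alpha

for x in (3, 5, 10):                    # "x standard deviations" vs "x times x_min"
    gaussian_tail = norm.sf(x)          # falls off like exp(-x^2 / 2)
    pareto_tail = (x_min / x) ** alpha  # falls off only polynomially
    print(f"x = {x:2d}: Gaussian tail {gaussian_tail:.2e}, power-law tail {pareto_tail:.2e}")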
Of course, mathematical statisticians already knew this, but the ones who popularized the idea (among scientists) were the physicists, because they know how to sell their fish, so to speak. Science popularization for the general public, however, was not as successful. OK, excellent books appeared in English, such as Mark Buchanan's Ubiquity and Philip Ball's Critical Mass, but these books were never translated into Portuguese (I really don't understand why).
When you walk into a bookstore, right after the self-help and New Age shelves, one of the most popular sections is pop management and marketing books. Has anyone analyzed that literature in terms of its contribution to science popularization? OK, I know that the reader of The Quantum Manager will not come away with an adequate picture of quantum physics, but he will be less ignorant about quantum mechanics than a Homer Simpson-type manager who has not read the book. Or not?
If you think about it, even pseudoscience books help popularize science. Talking with my physicist colleagues, I see that a whole generation (myself included) was awakened to a scientific vocation by reading Planeta magazine in the 1970s (back when it was edited by Ignácio de Loyola Brandão, of course!) and the books The Morning of the Magicians and Chariots of the Gods.
As Reinaldo Lopes put it on his new blog Chapéu, Chicote e Carbono 14, if you think about it the Indiana Jones movies are all pseudoscientific (lost ark, holy grail, ETs and crystal skulls, etc.) and their portrayal of archaeological research is completely distorted, yet many, many boys and girls became (or dreamed of becoming) archaeologists because of those movies. Has anyone noticed that the awakening of scientific vocations is nonlinear, that an entire science museum often does no good while a single Isaac Asimov short story can be decisive?

When giving equal opportunities is unfair

OK, OK, I have already commented that Politics, like Art, cannot be reduced to Science. But that does not mean there are no relations and influences. For example, Politics, and public policies, are strongly influenced by social scientists and economists (even if some skeptics consider them pseudoscientists).

On the question of the origin of income distribution and the mechanisms by which it changes, physicists have made an interesting contribution. Just search the arXiv for wealth distribution.

Here is an interesting example, from the UFRGS folks:

Final version and reference here.

And here is the article under the arXiv copyleft:

The unfair consequences of equal opportunities: comparing exchange models of wealth distribution

Authors: G. M. Caon, S. Goncalves, J. R. Iglesias
(Submitted on 1 Nov 2006)

Abstract: Simple agent based exchange models are a commonplace in the study of wealth distribution of artificial societies. Generally, each agent is characterized by its wealth and by a risk-aversion factor, and random exchanges between agents allow for a redistribution of the wealth. However, the detailed influence of the amount of capital exchanged has not been fully analyzed yet. Here we present a comparison of two exchange rules and also a systematic study of the time evolution of the wealth distribution, its functional dependence, the Gini coefficient and time correlation functions. In many cases a stable state is attained, but, interestingly, some particular cases are found in which a very slow dynamics develops. Finally, we observe that the time evolution and the final wealth distribution are strongly dependent on the exchange rules in a nontrivial way.

Conclusion: Probably the most relevant result is the fact that the loser rule appears to produce a less unequal wealth distribution than the one we call fair rule. That is valid for values of the fraction f < … It seems to us that this result is an indication that the best way to diminish inequality does not pass only through equal opportunity (fair rule) but through some kind of positive action increasing the odds of poorer strata of the society.
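For readers who want to play with this kind of model, here is a stripped-down exchange model in the spirit of the paper (not its exact rules or parameters): random pairwise exchanges of a fraction of the poorer agent's wealth, with the Gini coefficient computed at the end.

# Minimal agent-based wealth-exchange sketch with a Gini coefficient readout.
import numpy as np

rng = np.random.default_rng(5)
n_agents, n_steps, f = 1000, 200_000, 0.25     # f = fraction of wealth at stake per exchange

wealth = np.ones(n_agents)

def gini(w):
    w = np.sort(w)
    n = len(w)
    return 2.0 * np.sum(np.arange(1, n + 1) * w) / (n * np.sum(w)) - (n + 1) / n

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    stake = f * min(wealth[i], wealth[j])      # the poorer agent limits the bet
    winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
    wealth[winner] += stake
    wealth[loser] -= stake

print(f"Gini coefficient after {n_steps} exchanges: {gini(wealth):.2f}")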

Clausewitz's law holds for terrorist attacks

On Clausewitz, see here.

Via the Physics ArXiv Blog (who could KFC be? An unemployed physicist? There is no way a professor or a student could have that much time to read the papers, blog, and write this well!):

Plot the number of people killed in terrorist attacks around the world since 1968 against the frequency with which such attacks occur and you’ll get a power law distribution; that’s a fancy way of saying a straight line when both axes have logarithmic scales.

The question, of course, is why? Why not a normal distribution, in which there would be many orders of magnitude fewer extreme events?

Aaron Clauset and Frederik Wiegel have built a model that might explain why. The model makes five simple assumptions about the way terrorist groups grow and fall apart and how often they carry out major attacks. And here’s the strange thing: this model almost exactly reproduces the distribution of terrorist attacks we see in the real world.

These assumptions are things like: terrorist groups grow by accretion (absorbing other groups) and fall apart by disintegrating into individuals. They must also be able to recruit from a more or less unlimited supply of willing terrorists within the population.

Being able to reproduce the observed distribution of attacks with such a simple set of rules is an impressive feat. But it also suggests some strategies that might prevent such attacks or drastically reduce them in number. One obvious strategy is to reduce the number of recruits within a population, perhaps by reducing real and perceived inequalities across societies.

Easier said than done, of course. But analyses like these should help to put the thinking behind such ideas on a logical footing.

Ref: arxiv.org/abs/0902.0724: A Generalized Fission-Fusion Model for the Frequency of Severe Terrorist Attacks

Do we want to educate our children to build thermobaric weapons?

(keyboard with accent problems here…)
OK, OK, everyone knows that Science, especially the kind sponsored by the military, is done by white, sexist, Western men (Hmmm… I always thought it was Eastern men who were the sexist ones!). So I found curious a report I saw this week on the Turbo channel (one of those American channels obsessed with machines and hard technology).
The report was about thermobaric weapons, specially developed to dig Osame Bin Laden (my namesake; his name does end with an "e", look it up on Google!) out of the caves of Afghanistan.
The head of the laboratory, who proudly described the entire development process of the new weapon, is a Vietnamese woman, that is, an Asian, feminist, Western woman! Curious, curious, not New Age at all…
What many scientists who find the association between science and military technology natural are missing is an analysis of the long-term cultural impact of these practices: yes, you may win the battle, against the terrorists or whoever, but you will lose the war for the hearts and minds of the young. More and more, science will be seen as tied to weapons, to the forces of death, to the powerful countries, to the big corporations… Just look at cartoons and Hollywood movies, where the scientist is always the villain or, worse, the naive or sold-out little employee of the villain… That is no way to awaken true scientific vocations!
Long-term analysis: the technological gap (in weaponry) between the US and other countries keeps widening, so that no conventional army can stand up to it. In other words, the US can impose its will even when that will is unjust, even when reasonable countries like Canada or Norway vehemently disagree, for example!
If at some point the interest changes (say, a new Neocon militarist policy under a crypto-theocratic Jeb Bush administration, after Obama is shredded by the crisis), the countries of the world will be able to cry and beg (at the UN), but what can really be done against the aircraft carrier Nimitz and the super-hyper-mega smart weapons? The Democratic Republic can turn into an Empire, remember? Or do we need to watch Star Wars episodes 1, 2 and 3 again?
It is clear what will happen: the only viable tactic in the face of this technological gap is terrorism, with its civilian victims. Just watch the second episode of Battlestar Galactica, season three… where the atheist suicide bombers are the good guys!

Music of the Earthquakes

Neurophysiologists usually listen to neuronal spikes through amplifiers instead of just viewing them on oscilloscopes. They say this is done because the human ear is more sensitive at detecting patterns than the visual system.

Idea: imitating this video, build a scale associating earthquake magnitudes with tones in a psychophysically meaningful way. In principle, if the events are uncorrelated, we should get just a "fractal music", that is, musical notes drawn from a power-law-like distribution.

But foreshocks, aftershocks and earthquakes that are correlated or triggered in global avalanches would produce signatures that could easily be detected acoustically. Or not?
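A minimal sketch of the sonification idea, with invented magnitudes (drawn from a Gutenberg-Richter-like exponential distribution) and an assumed one-octave-per-magnitude-unit mapping:

# Map each earthquake magnitude to a pitch on a logarithmic (musical) frequency scale.
import numpy as np

rng = np.random.default_rng(6)
magnitudes = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=20)   # b-value ~ 1

base_freq, octaves_per_unit = 110.0, 1.0      # magnitude 2 -> A2; one octave per unit
freqs = base_freq * 2.0 ** ((magnitudes - 2.0) * octaves_per_unit)

for m, f in zip(magnitudes, freqs):
    print(f"M{m:4.1f} -> {f:7.1f} Hz")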

Talented blogs versus experienced blogs

I think it would be possible to adapt the ideas of this paper to rank blogs. Let me propose here the Kinouchi rank :´) :

K = A/N

A = Technorati authority.
N = number of visits over the last six months.

We could use both A and K to distinguish the experienced blogs (those with a track record) from the talented blogs (the rising stars).

For that, each blog would have to register with Technorati and Google Analytics and display that visit count in its sidebar. I will ask Zedy to send an email to all the blogs in the Anel de Blogs Científicos with these recommendations. But if you, the reader, are not yet registered on those sites, it would be a good idea to do so, since we will need to collect data for six months…
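A toy illustration of the K = A/N ranking, with invented blog names and numbers:

# K = A / N: Technorati authority per visit, favoring "rising star" blogs.
blogs = {
    "blog_alpha": {"A": 500, "N": 200_000},   # established, heavily visited
    "blog_beta": {"A": 60, "N": 4_000},       # young blog punching above its traffic
}

for name, d in blogs.items():
    k = d["A"] / d["N"]
    print(f"{name}: authority A = {d['A']}, visits N = {d['N']}, K = A/N = {k:.5f}")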

From the Physics ArXiv Blog:

How will the next generation of search engines outperform Google’s all-conquering Pagerank algorithm?

One route might be to hire Vwani Roychowdhury at the University of California, Los Angeles and his buddies who have found a fascinating new way to tackle the problem of website rankings.

Their breakthrough is to have found that the structure of the web is determined by three factors: the number of inbound links to a page, the rate at which pages are created and deleted and the likelihood that somebody visiting a page will link to it.

This last factor is the forehead smacker. Google’s PageRank cannot easily identify new sites with huge potential because their very newness means they don’t have a large number of inbound links and so feature poorly in the rankings.

But by looking at the ratio of visitors to incoming links, Roychowdhury and co can get a good handle on a site’s potential, even when it is new.

In fact, these sites stick out like sore thumbs. It turns out that in the year’s worth of data the team examined, only 6.5 per cent of the 10 million or so sites they monitored received more than two new incoming links.

It is these up-and-coming sites that go on to displace more established sites in the popularity stakes, leading to the constant slow churn of content we see on conventional Pagerank-based search results. The UCLA team simply see them earlier.

That’s a fascinating and valuable insight.

Roychowdhury and co liken this process to the age-old battle between “experience” (well established sites with many incoming links) and “talent” (up-and-coming sites with potential).

Their algorithm won’t replace Pagerank but it could help to significantly fine tune it, and that could pique the interest of a well known company based in Mountain View, not to mention numerous other pretenders to the search engine crown.

Ref: arxiv.org/abs/0901.0296: Experience Versus Talent Shapes the Structure of the Web

A tribe lost in the Galaxy

A somewhat old but interesting paper, by Ken D. Olum:

Conflict between anthropic reasoning and observation

Authors: Ken D. Olum
(Submitted on 19 Mar 2003 (v1), last revised 4 Feb 2004 (this version, v2))
Abstract: Anthropic reasoning often begins with the premise that we should expect to find ourselves typical among all intelligent observers. However, in the infinite universe predicted by inflation, there are some civilizations which have spread across their galaxies and contain huge numbers of individuals. Unless the proportion of such large civilizations is unreasonably tiny, most observers belong to them. Thus anthropic reasoning predicts that we should find ourselves in such a large civilization, while in fact we do not. There must be an important flaw in our understanding of the structure of the universe and the range of development of civilizations, or in the process of anthropic reasoning.
Comments: 7 pages, RevTeX. v2: New “lost colony” section. Corresponds to published version
Subjects: General Relativity and Quantum Cosmology (gr-qc); Physics and Society (physics.soc-ph)
Journal reference: Analysis 64 (2004), 1
Cite as: arXiv:gr-qc/0303070v2

Ubiquity

As you know, this is not a journalistic science-outreach blog, but a "weblog" of travels through scientific life. That means I keep recording here news and references I come across while browsing, ideas for papers, "crazy" ideas that might one day prove fruitful, publicity for my work and my friends' work, assorted "science-free" comments and, above all, everything I think I will forget if I don't write it down. Oh, yes, there is an exaggerated dose of "self-centeredness" (euphemistically speaking), but let the blogger who has no ego cast the first stone…

Well, today, once again, I am going to publicize a book that I recommend to my students. Below is my Amazon review, written under the literary pseudonym B. B. Jenitez:

This is the book that I would like to have written. Although it is a popular account, it is scientifically accurate and careful in its suggestions, always informing the reader what is consolidated science and what is scientific speculation. In contrast to a previous review, I have read all the pages of this book. Since I am a physicist working on this very subject (self-organized criticality), I can probably say that if someone uses the example of a Gaussian (bell-shaped curve) to argue that the power laws discussed in the book are trivial, well, this person has not understood anything. Gaussians have exponential decays, so they predict that very large events (catastrophes) will occur with vanishing probability. For example, the height of people is distributed as a Gaussian. What is the probability of finding a 3-meter person? Zero.

Distributions wich have power law tails, depending on the power exponent, may have no well defined variance or even average value. This means that there is no “average” earthquake, and that very big earthquakes (or other cathastrophes) are not “acts of God” but have a no desprezible chance of occur due to simple chain reactions of events. I have introduced my students to ideas like critical states and modern physical thinking by using this book. So, I can recommend it to any reader without reserve. The emphasis by the author that critical chain reactions of events must be accounted by any view of History and Society is an important mind tool in our increasing interconnected (and, because it, prone to global chain reactions) world.