
Extrasolar planets, Kepler-62, and the local Fermi Paradox

As the number of extrasolar planets discovered grows, so do the constraints on the predictions of the galactic percolation model (the local Fermi Paradox).
The prediction is that, if we assume that Memetic Biospheres (cultural biospheres, or Technospheres) are a probable outcome of Genetic Biospheres, then we should find ourselves in a region with few habitable planets. For if there were nearby planets inhabited by intelligent beings, they would very probably be far more advanced than we are, and would already have colonized us.
Since this has not happened (unless you believe the ufologists' conspiracy theories, or in Jesus-the-ET, ancient-astronaut gods and the like), it follows that the more data astronomers gather, the more evident it will become that our solar system is an anomaly within our cosmic neighbourhood (1,000 light-years?). In other words, we cannot assume the Copernican Principle for the solar system: our solar system is not typical of its neighbourhood. Well, at least this conclusion agrees with the data collected so far…
Hence a prediction: further analysis of the planets Kepler-62e and Kepler-62f will reveal that they do not have an atmosphere with oxygen or methane, the signatures of a planet with a biosphere.

Persistence solves Fermi Paradox but challenges SETI projects

Osame Kinouchi (DFM-FFCLRP-USP)
(Submitted on 8 Dec 2001)

Persistence phenomena in colonization processes could explain the negative results of SETI searches while preserving the possibility of a galactic civilization. However, persistence phenomena also indicate that searching for technological civilizations among stars in the neighbourhood of the Sun is a misdirected SETI strategy. This last conclusion is also suggested by a weaker form of the Fermi paradox. A simple model of branching colonization, which includes emergence, decay and branching of civilizations, is proposed. The model could also be used in the context of ant-nest diffusion.

03/05/2013 – 03h10

The possibility of life is not limited to Earth-like planets, study says

SALVADOR NOGUEIRA
CONTRIBUTING TO FOLHA

Given the different possible compositions, masses and orbits of planets outside the Solar System, life may not be limited to Earth-like worlds in Earth-like orbits.


That is one of the conclusions presented by Sara Seager, of MIT (the Massachusetts Institute of Technology) in the US, in a review article published in the journal “Science”, based on a statistical analysis of the roughly 900 worlds already detected around more than 400 stars.

Seager highlights the possible existence of planets whose atmospheres would be dense enough to keep water liquid at the surface even at temperatures well below Earth's.

Interview with Osame Kinouchi



SCIENCE

What do you investigate? What is the core of your research?
Interdisciplinary computational physics: complex networks in linguistics and psychiatry, the glass transition, optimization of animals' exploratory strategies, learning methods in artificial neural networks, theoretical and computational neuroscience (neuron models, excitable dendrites, modelling of the olfactory bulb, psychophysics, a theory of dreams and REM sleep), self-organized criticality (earthquake models, neuronal avalanches), models of cultural evolution (the evolution of cuisine), astrobiology (models of galactic colonization), scientometrics, and science outreach (the Portal of Science Blogs in Portuguese).
Do you have a link where we can see something about you, or about the centre where you work?
My papers in the arXiv open physics repository: http://arxiv.org/a/kinouchi_o_1
My Lattes CV: http://tinyurl.com/3h28kr8
What is your background? What work experience did you have before this?
A bachelor's degree in physics, a master's in basic physics, a PhD in condensed matter physics, and a postdoc in statistical and computational physics. My first job, at USP, came at age 40!
Were you very studious at school?
No. I just read encyclopedias compulsively… Getting good grades was always easy.
What kind of technology do you use for your research?
A good notebook computer is enough for my research.

Papers in theoretical neuroscience: criticality in dendritic trees


Leonardo Lyra Gollo encouraged me to take up the blog again. Thanks for the push, Leo!

Single-Neuron Criticality Optimizes Analog Dendritic Computation

Leonardo L. Gollo, Osame Kinouchi, Mauro Copelli
(Submitted on 17 Apr 2013)

Neurons are thought of as the building blocks of excitable brain tissue. However, at the single neuron level, the neuronal membrane, the dendritic arbor and the axonal projections can also be considered an extended active medium. Active dendritic branchlets enable the propagation of dendritic spikes, whose computational functions, despite several proposals, remain an open question. Here we propose a concrete function to the active channels in large dendritic trees. By using a probabilistic cellular automaton approach, we model the input-output response of large active dendritic arbors subjected to complex spatio-temporal inputs, and exhibiting non-stereotyped dendritic spikes. We find that, if dendritic spikes have a non-deterministic duration, the dendritic arbor can undergo a continuous phase transition from a quiescent to an active state, thereby exhibiting spontaneous and self-sustained localized activity as suggested by experiments. Analogously to the critical brain hypothesis, which states that neuronal networks self-organize near a phase transition to take advantage of specific properties of the critical state, here we propose that neurons with large dendritic arbors optimize their capacity to distinguish incoming stimuli at the critical state. We suggest that “computation at the edge of a phase transition” is more compatible with the view that dendritic arbors perform an analog and dynamical rather than a symbolic and digital dendritic computation.

Comments: 11 pages, 6 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1304.4676 [q-bio.NC]
(or arXiv:1304.4676v1 [q-bio.NC] for this version)
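To make the phase-transition idea concrete, here is a toy sketch (my own construction, not the authors' model): excitable units on a full binary tree, where a dendritic "spike" propagates from a child branchlet to its parent with some probability. Raising that probability past a critical value produces the self-sustained activity the abstract mentions.

import random

# Toy sketch (not the paper's model): excitable units on a full binary tree.
# A spike propagates from a child branchlet to its parent with probability
# P_SPREAD; active sites are refractory for one step, then recover.
DEPTH, P_SPREAD, P_INPUT, STEPS = 10, 0.6, 0.02, 100
n = 2 ** DEPTH - 1                       # node i has children 2i+1 and 2i+2
state = [0] * n                          # 0 quiescent, 1 active, 2 refractory
for t in range(STEPS):
    new = [0] * n
    for i in range(n):
        if state[i] == 1:
            new[i] = 2                   # active -> refractory
        elif state[i] == 0:
            spiking_child = any(
                k < n and state[k] == 1 and random.random() < P_SPREAD
                for k in (2 * i + 1, 2 * i + 2))
            leaf_input = i >= n // 2 and random.random() < P_INPUT
            if spiking_child or leaf_input:
                new[i] = 1               # quiescent -> active
    state = new
    print(t, sum(s == 1 for s in state), state[0])  # activity, root ("soma")

Sweeping P_SPREAD while measuring the root's activity as a function of the input rate would trace out the response curves whose discrimination capacity, according to the paper, is optimized at criticality.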

Mechanisms of Zero-Lag Synchronization in Cortical Motifs

(Submitted on 18 Apr 2013)

Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomena of “dynamical relaying” – a mechanism that relies on a specific network motif (M9) – has proven to be the most robust with respect to parameter and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif (M3) is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying, M9) from those that do not (such as the common driving triad, M3). Remarkably, minor structural changes to M3 that incorporate a reciprocal pair (hence M6, M9, M3+1) recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.

Comments: 27 pages, 8 figures
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:1304.5008 [q-bio.NC]
(or arXiv:1304.5008v1 [q-bio.NC] for this version)

Determining whether we live inside the Matrix

The Measurement That Would Reveal The Universe As A Computer Simulation

If the cosmos is a numerical simulation, there ought to be clues in the spectrum of high energy cosmic rays, say theorists


THE PHYSICS ARXIV BLOG

Wednesday, October 10, 2012

One of modern physics’ most cherished ideas is quantum chromodynamics, the theory that describes the strong nuclear force, how it binds quarks and gluons into protons and neutrons, how these form nuclei that themselves interact. This is the universe at its most fundamental.

So an interesting pursuit is to simulate quantum chromodynamics on a computer to see what kind of complexity arises. The promise is that simulating physics on such a fundamental level is more or less equivalent to simulating the universe itself.

There are one or two challenges of course. The physics is mind-bogglingly complex and operates on a vanishingly small scale. So even using the world’s most powerful supercomputers, physicists have only managed to simulate tiny corners of the cosmos just a few femtometers across. (A femtometer is 10^-15 metres.)

That may not sound like much but the significant point is that the simulation is essentially indistinguishable from the real thing (at least as far as we understand it).

It’s not hard to imagine that Moore’s Law-type progress will allow physicists to simulate significantly larger regions of space. A region just a few micrometres across could encapsulate the entire workings of a human cell.

Again, the behaviour of this human cell would be indistinguishable from the real thing.

It’s this kind of thinking that forces physicists to consider the possibility that our entire cosmos could be running on a vastly powerful computer. If so, is there any way we could ever know?

Today, we get an answer of sorts from Silas Beane, at the University of Bonn in Germany, and a few pals.  They say there is a way to see evidence that we are being simulated, at least in certain scenarios.

First, some background. The problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time.

The question that Beane and co ask is whether the lattice spacing imposes any kind of limitation on the physical processes we see in the universe. They examine, in particular, high energy processes, which probe smaller regions of space as they get more energetic.

What they find is interesting. They say that the lattice spacing imposes a fundamental limit on the energy that particles can have. That’s because nothing can exist that is smaller than the lattice itself.

So if our cosmos is merely a simulation, there ought to be a cut-off in the spectrum of high energy particles.

It turns out there is exactly this kind of cut-off in the energy of cosmic ray particles, a limit known as the Greisen–Zatsepin–Kuzmin or GZK cut-off.

This cut-off has been well studied and comes about because high energy particles interact with the cosmic microwave background and so lose energy as they travel long distances.

But Beane and co calculate that the lattice spacing imposes some additional features on the spectrum. “The most striking feature…is that the angular distribution of the highest energy components would exhibit cubic symmetry in the rest frame of the lattice, deviating significantly from isotropy,” they say.

In other words, the cosmic rays would travel preferentially along the axes of the lattice, so we wouldn’t see them equally in all directions.

That’s a measurement we could do now with current technology. Finding the effect would be equivalent to being able to ‘see’ the orientation of the lattice on which our universe is simulated.

That’s cool, mind-blowing even. But the calculations by Beane and co are not without some important caveats. One problem is that the computer lattice may be constructed in an entirely different way to the one envisaged by these guys.

Another is that this effect is only measurable if the lattice cut off is the same as the GZK cut off. This occurs when the lattice spacing is about 10^-12 femtometers. If the spacing is significantly smaller than that, we’ll see nothing.
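A back-of-the-envelope check (my arithmetic, not the paper's): with ħc ≈ 197 MeV·fm, a lattice spacing of b ≈ 10^-12 fm corresponds to a momentum cut-off of

\[
E_{\max} \sim \frac{\hbar c}{b} \approx \frac{197\ \mathrm{MeV \cdot fm}}{10^{-12}\ \mathrm{fm}} \approx 2 \times 10^{14}\ \mathrm{MeV} \approx 2 \times 10^{20}\ \mathrm{eV},
\]

which is indeed the order of magnitude of the observed GZK cut-off (a few times 10^19 eV).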

Nevertheless, it’s surely worth looking for, if only to rule out the possibility that we’re part of a simulation of this particular kind but secretly in the hope that we’ll find good evidence of our robotic overlords once and for all.

Ref: arxiv.org/abs/1210.1847: Constraints on the Universe as a Numerical Simulation

Landis and the percolation approach to the Fermi Paradox

Published in Journal of the British Interplanetary Society, London, Volume 51, page 163-166 (1998).
Originally presented at the NASA Symposium “Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace” (NASA CP-10129), Mar. 30-31, 1993, Westlake, OH U.S.A.


The Fermi Paradox: An Approach Based on Percolation Theory

Geoffrey A. Landis

NASA Lewis Research Center, 302-1
Cleveland, OH 44135 U.S.A.


Abstract

If even a very small fraction of the hundred billion stars in the galaxy are home to technological civilizations which colonize over interstellar distances, the entire galaxy could be completely colonized in a few million years. The absence of such extraterrestrial civilizations visiting Earth is the Fermi paradox.

A model for interstellar colonization is proposed using the assumption that there is a maximum distance over which direct interstellar colonization is feasible. Due to the time lag involved in interstellar communications, it is assumed that an interstellar colony will rapidly develop a culture independent of the civilization that originally settled it.

Any given colony will have a probability P of developing a colonizing civilization, and a probability (1-P) that it will develop a non-colonizing civilization. These assumptions lead to the colonization of the galaxy occurring as a percolation problem. In a percolation problem, there will be a critical value of the percolation probability, Pc. For P<Pc, colonization will always terminate after a finite number of colonies. Growth will occur in “clusters,” with the outside of each cluster consisting of non-colonizing civilizations. For P>Pc, small uncolonized voids will exist, bounded by non-colonizing civilizations. When P is on the order of Pc, arbitrarily large filled regions exist, and also arbitrarily large empty regions.
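A minimal sketch of this percolation picture (my simplification, on a 2-D grid rather than interstellar space): each newly settled site becomes colonizing with probability P, and only colonizing sites keep expanding.

import random
from collections import deque

# Minimal sketch of Landis-style colonization as site percolation on a 2-D
# grid (my simplification; the argument itself is geometry-independent).
L, P = 101, 0.55                       # grid size, colonization probability
home = (L // 2, L // 2)
grid = {home: True}                    # (x, y) -> True if colonizing
frontier = deque([home])               # the home civilization colonizes
while frontier:
    x, y = frontier.popleft()
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nxt[0] < L and 0 <= nxt[1] < L and nxt not in grid:
            colonizer = random.random() < P
            grid[nxt] = colonizer      # settle it, colonizing or not
            if colonizer:
                frontier.append(nxt)   # only colonizing colonies expand
print("settled:", len(grid), "uncolonized voids:", L * L - len(grid))
# Below the 2-D site-percolation threshold (Pc ~ 0.593) growth dies out
# after finitely many colonies; above it, colonized clusters span the grid
# while empty voids remain, just as the abstract describes.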

An error-bars tutorial for biology dummies

Error bars in experimental biology.

Geoff Cumming, Fiona Fidler, David L Vaux

The Journal of Cell Biology (2007)
Volume: 177, Issue: 1, Publisher: Rockefeller Univ Press, Pages: 7-11
PubMed ID: 17420288
Available from http://www.ncbi.nlm.nih.gov/


Abstract

Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
Careful! The authors say: “About two thirds of the data points will lie within the region of mean ± 1 SD, and ∼95% of the data points will be within 2 SD of the mean.” But that only holds for data with an approximately Gaussian distribution. If the data follow a long-tailed distribution, power laws (Pareto laws) or the like, the standard deviation may not even exist (let alone the mean!). In that case, it may be worth using the entropy $S = -\sum_i p_i \ln p_i$, where $p_i$ is the probability that a value falls in the i-th bin (column) of a discretized table of values. A suggestion for which I still need to find a reference…
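A quick sketch of that last suggestion (the entropy recipe is the post's own; the binning choices here are mine):

import numpy as np

# Sketch of the suggestion above: a binned Shannon entropy as a dispersion
# measure that stays finite even for heavy-tailed samples whose standard
# deviation is ill-defined.
def binned_entropy(x, bins=30):
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()            # p_i: fraction of values in bin i
    p = p[p > 0]                         # 0 * log 0 = 0 by convention
    return -np.sum(p * np.log(p))        # S = -sum_i p_i ln(p_i)

gauss = np.random.normal(size=10_000)
pareto = np.random.pareto(a=1.1, size=10_000)   # infinite-variance tail
print(binned_entropy(gauss), binned_entropy(pareto))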

Mendeley: a good idea!

Through the Mendeley research network you can connect with other researchers in your field. This opens up a whole new avenue for knowledge discovery. You can view the most read authors, journals and papers in your field. You can explore by using tags associated with your research area. By navigating the web of knowledge available to you, you make some useful contacts along the way too. In addition to that, you can also view interesting statistics about your own digital library.

Discover more papers linked to your research interests

Collaborate with fellow researchers and share information, resources and experiences with shared and public collections. Your research team will have easy access to each other's papers. Just create a group, invite your colleagues and drag and drop documents in there. This way you can keep on top of what they’re reading and discover more about what interests you.

Start collaborating with other researchers

Mendeley Desktop is academic software that indexes and organizes all of your PDF documents and research papers into your own personal digital bibliography. It gathers document details from your PDFs allowing you to effortlessly search, organize and cite. It also looks up PubMed, CrossRef, DOIs and other related document details automatically. Drag and drop functionality makes populating the library quick and easy. The bookmarklet allows you to quickly and easily import papers from resources such as Google Scholar, ACM, IEEE and many more at the click of a button.

Create your digital library now

A paper with open peer review that I refereed

Scale-invariant transition probabilities in free word association trajectories

1 Integrative Neuroscience Laboratory, Physics Department, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina
2 Computer Science Department, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina


Free-word association has been used as a vehicle to understand the organization of human thoughts. The original studies relied mainly on qualitative assertions, yielding the widely intuitive notion that trajectories of word associations are structured, yet considerably more random than organized linguistic text. Here we set to determine a precise characterization of this space, generating a large number of word association trajectories in a web implemented game. We embedded the trajectories in the graph of word co-occurrences from a linguistic corpus. To constrain possible transport models we measured the memory loss and the cycling probability. These two measures could not be reconciled by a bounded diffusive model since the cycling probability was very high (16 % of order-2 cycles) implying a majority of short-range associations whereas the memory loss was very rapid (converging to the asymptotic value in ∼ 7 steps) which, in turn, forced a high fraction of long-range associations. We show that memory loss and cycling probabilities of free word association trajectories can be simultaneously accounted by a model in which transitions are determined by a scale invariant probability distribution.

Keywords: word association, graph theory, semantics, Markov, networks, simulations

Citation: Elias Costa M, Bonomo F and Sigman M (2009) Scale-invariant transition probabilities in free word association trajectories. Front. Integr. Neurosci. 3:19. doi:10.3389/neuro.07.019.2009

Received: 26 May 2009; paper pending published: 12 June 2009; accepted: 06 August 2009; published online: 11 September 2009.

Edited by:
Sidarta Ribeiro, Edmond and Lily Safra International Institute of Neuroscience of Natal, Brazil; Federal University of Rio Grande do Norte, Brazil

Reviewed by:
Guillermo A. Cecchi, IBM Watson Research Center, USA
Mauro Copelli, Federal University of Pernambuco, Brazil
Osame Kinouchi, Universidade de São Paulo, Brazil

Copyright: © 2009 Elias Costa, Bonomo and Sigman. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.
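A toy sketch of the kind of transition rule the abstract above argues for (my construction, not the authors' fitted model): a walk over a ring of "words" whose jump distance r is drawn from P(r) ∝ r^(−α).

import random

# Toy walk with scale-invariant transitions: the jump distance r follows
# P(r) ~ r**(-ALPHA) on a ring of N "words" (a sketch, not the paper's model).
N, ALPHA, STEPS = 10_000, 1.8, 100_000
ranks = list(range(1, N // 2))
weights = [r ** -ALPHA for r in ranks]
jumps = random.choices(ranks, weights=weights, k=STEPS)
pos, prev, order2 = 0, None, 0
for r in jumps:
    nxt = (pos + random.choice((-1, 1)) * r) % N
    if nxt == prev:                  # straight back to the previous word
        order2 += 1
    prev, pos = pos, nxt
print("order-2 cycling probability:", order2 / STEPS)
# Short jumps dominate (high cycling) while the heavy tail still produces
# occasional long-range jumps (rapid memory loss): the combination that a
# bounded diffusive model cannot reconcile, per the abstract.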

Lévy flights in word association

It's nice, this business of the referees not staying anonymous at the end…
Scale-invariant transition probabilities in free word association trajectories



When giving equal opportunities is unfair

OK, OK, I have already said that Politics, like Art, cannot be reduced to Science. But that does not mean there are no relations and influences. For example, politics, and public policy, are heavily influenced by social scientists and economists (even if some skeptics regard them as pseudoscientists).

On the question of the origin of the income distribution and the mechanisms by which it changes, physicists have made an interesting contribution. Just search the arXiv for wealth distribution.

Here is an interesting example, from the UFRGS group:

Final version and reference here.

And here is the article under the arXiv copyleft:

The unfair consequences of equal opportunities: comparing exchange models of wealth distribution

Authors: G. M. Caon, S. Goncalves, J. R. Iglesias
(Submitted on 1 Nov 2006)

Abstract: Simple agent based exchange models are a commonplace in the study of wealth distribution of artificial societies. Generally, each agent is characterized by its wealth and by a risk-aversion factor, and random exchanges between agents allow for a redistribution of the wealth. However, the detailed influence of the amount of capital exchanged has not been fully analyzed yet. Here we present a comparison of two exchange rules and also a systematic study of the time evolution of the wealth distribution, its functional dependence, the Gini coefficient and time correlation functions. In many cases a stable state is attained, but, interestingly, some particular cases are found in which a very slow dynamics develops. Finally, we observe that the time evolution and the final wealth distribution are strongly dependent on the exchange rules in a nontrivial way.

Conclusion: Probably the most relevant result is the fact that the loser rule appears to produce a less unequal wealth distribution than the one we call the fair rule. That is valid for values of f < […]. It seems to us that this result is an indication that the best way to diminish inequality does not pass only through equal opportunity (fair rule) but through some kind of positive action increasing the odds of the poorer strata of the society.
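A minimal sketch of this family of models (the stake rule and parameters here are my simplifications, not the paper's): pairwise exchanges of a fraction of the poorer agent's wealth, with a tunable bias toward the poorer agent.

import random

# Minimal agent-based exchange sketch (my simplification of such models).
def gini(w):
    w = sorted(w)
    n, cum = len(w), 0.0
    for i, wi in enumerate(w, 1):
        cum += i * wi
    return 2 * cum / (n * sum(w)) - (n + 1) / n

def simulate(n=1000, steps=200_000, f=0.1, p_poor_wins=0.5):
    wealth = [1.0] * n
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        stake = f * min(wealth[i], wealth[j])   # both can afford the stake
        poor, rich = (i, j) if wealth[i] <= wealth[j] else (j, i)
        winner, loser = (poor, rich) if random.random() < p_poor_wins \
            else (rich, poor)
        wealth[winner] += stake
        wealth[loser] -= stake
    return wealth

print("fair rule Gini:  ", round(gini(simulate(p_poor_wins=0.50)), 3))
print("biased rule Gini:", round(gini(simulate(p_poor_wins=0.55)), 3))
# Giving the poorer agent slightly better odds ("positive action") lowers
# the Gini coefficient: the qualitative point of the conclusion above.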

Blogospheres and complex networks


Two papers about the blogosphere found today:

Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, China

Department of Industrial Engineering and Management, College of Engineering, Peking University, Beijing 100871, China

Received 17 July 2007; revised 30 September 2007. Available online 9 October 2007.


Abstract

Today the World Wide Web is undergoing a subtle but profound shift to Web 2.0, to become more of a social web. The use of collaborative technologies such as blogs and social networking site (SNS) leads to instant online community in which people communicate rapidly and conveniently with each other. Moreover, there are growing interest and concern regarding the topological structure of these new online social networks. In this paper, we present empirical analysis of statistical properties of two important Chinese online social networks—a blogging network and an SNS open to college students. They are both emerging in the age of Web 2.0. We demonstrate that both networks possess small-world and scale-free features already observed in real-world and artificial networks. In addition, we investigate the distribution of topological distance. Furthermore, we study the correlations between degree (in/out) and degree (in/out), clustering coefficient and degree, popularity (in terms of number of page views) and in-degree (for the blogging network), respectively. We find that the blogging network shows disassortative mixing pattern, whereas the SNS network is an assortative one. Our research may help us to elucidate the self-organizing structural characteristics of these online social networks embedded in technical forms.
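For readers who want to reproduce this kind of measurement, a sketch on a synthetic scale-free graph (the blog/SNS datasets themselves are not public here), using networkx:

import networkx as nx

# Sketch of two of the abstract's measurements, run on a synthetic
# scale-free graph; the real blog/SNS data are not available here.
G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)
print("mean clustering coefficient:", nx.average_clustering(G))
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
# A negative coefficient indicates disassortative mixing (as reported for
# the blogging network); a positive one, assortative SNS-like mixing.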

Who blogs? Personality predictors of blogging

Computers in Human Behavior
Volume 24, Issue 5, September 2008, Pages 1993-2004

Rosanna E. Guadagno, Bradley M. Okdie and Cassie A. Eno
Department of Psychology, University of Alabama, P.O. Box 870348, Tuscaloosa, AL 35487-0348, United States

Available online 19 November 2007.

Abstract
The Big Five personality inventory measures personality based on five key traits: neuroticism, extraversion, agreeableness, openness to experience, and conscientiousness [Costa, P. T., Jr., & McCrae, R. R. (1992). Normal personality assessment in clinical practice: The NEO Personality Inventory. Psychological Assessment 4, 5–13]. There is a growing body of evidence indicating that individual differences on the Big Five factors are associated with different types of Internet usage [Amichai-Hamburger, Y., & Ben-Artzi, E. (2003). Loneliness and Internet use. Computers in Human Behavior 19, 71–80; Hamburger, Y. A., & Ben-Artzi, E. (2000). Relationship between extraversion and neuroticism and the different uses of the Internet. Computers in Human Behavior 16, 441–449]. Two studies sought to extend this research to a relatively new online format for expression: blogging. Specifically, we examined whether the different Big Five traits predicted blogging. The results of two studies indicate that people who are high in openness to new experience and high in neuroticism are likely to be bloggers. Additionally, the neuroticism relationship was moderated by gender indicating that women who are high in neuroticism are more likely to be bloggers as compared to those low in neuroticism whereas there was no difference for men. These results indicate that personality factors impact the likelihood of being a blogger and have implications for understanding who blogs.

Ethical foundations of probability theory

I think I have finally understood the Bayesian concept of probability. Better late than never! Of course, I could have learned it much earlier, from Jaynes's book, so often recommended by Nestor Caticha. Actually, I think I learned it, then forgot it, then read it again, then forgot it again. "Grasping" is different from learning. I think it involves a change of Gestalt, a kind of moment of "illumination".

This happened thanks to two accidents (actually three): (a) I am without internet at home, that is, without that time-wasting machine; (b) this computer had a folder with some PDF articles, among them the excellent Lectures on probability, entropy and statistical mechanics by Ariel Caticha, which Nestor had sent me quite a while ago; (c) I had finished the book Artemis Fowl: The Arctic Incident and had nothing left to read on Christmas Eve (I will write a post about that some other day).

Besides the Bayesian concept of probability, the discussion of entropy was very enlightening, in particular its emphasis that entropy is not a physical property of a system but depends on the level of detail of the description of that system:

The fact that entropy depends on the available information implies that there is no such thing as the entropy of a system. The same system may have many different entropies. Notice, for example, that already in the third axiom we find an explicit reference to two entropies S[p] and S_G[P] referring to two different descriptions of the same system. Colloquially, however, one does refer to the entropy of a system; in such cases the relevant information available about the system should be obvious from the context. In the case of thermodynamics what one means by the entropy is the particular entropy that one obtains when the only information available is specified by the known values of those few variables that specify the thermodynamic macrostate.

I learned other very interesting things from the paper, whose chief virtue, I think, is its clarity, and the fact that it acknowledges the obscure points as genuinely obscure. I imagine this text could be the basis of an interesting graduate course here at the DFM. I am still studying it, and I recommend it to my frequentist friends. But of course, I could not resist taking a look at the final chapter, where I found this intriguing conclusion:

Dealing with uncertainty requires that one solve two problems. First, one must represent a state of knowledge as a consistent web of interconnected beliefs. The instrument to do it is probability. Second, when new information becomes available the beliefs must be updated. The instrument for this is relative entropy. It is the only candidate for an updating method that is of universal applicability and obeys the moral injunction that one should not change one's mind frivolously. Prior information is valuable and should not be revised except when demanded by new evidence, in which case the revision is no longer optional but obligatory. The resulting general method – the ME method – can handle arbitrary priors and arbitrary constraints; it includes MaxEnt and Bayes-rule as special cases; and it provides its own criterion to assess the extent that non maximum-entropy distributions are ruled out.

To conclude I cannot help but to express my continued sense of wonder and astonishment at the fact that the method for reasoning under uncertainty – which presumably includes the whole of science – turns out to rest upon a foundation provided by ethical principles. Just imagine the implications!

I think this last paragraph deserves a comment in a future post…
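Before that post arrives, for reference: the updating rule the conclusion alludes to can be written compactly (standard form; my transcription of Caticha's notation). Given a prior q and new constraints, one selects the posterior p that maximizes the relative entropy

\[
S[p, q] = -\int dx\, p(x) \ln \frac{p(x)}{q(x)}
\]

subject to those constraints. With no new information the maximum is p = q, which is precisely the injunction not to change one's mind frivolously; imposing observed data as constraints recovers Bayes' rule as a special case.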
To be read:
From Inference to Physics

Authors: Ariel Caticha

(Submitted on 8 Aug 2008)
Abstract: Entropic dynamics, a program that aims at deriving the laws of physics from standard probabilistic and entropic rules for processing information, is developed further. We calculate the probability for an arbitrary path followed by a system as it moves from given initial to final states. For an appropriately chosen configuration space the path of maximum probability reproduces Newtonian dynamics.
Photo: Ariel Caticha.

A very curious result

This looks interesting:

Mathematical undecidability and quantum randomness

Authors: Tomasz Paterek, Johannes Kofler, Robert Prevedel, Peter Klimek, Markus Aspelmeyer, Anton Zeilinger, Caslav Brukner

(Submitted on 27 Nov 2008)

Abstract: We propose a new link between mathematical undecidability and quantum physics. We demonstrate that the states of elementary quantum systems are capable of encoding mathematical axioms and show that quantum measurements are capable of revealing whether a given proposition is decidable or not within the axiomatic system. Whenever a mathematical proposition is undecidable within the axioms encoded in the state, the measurement associated with the proposition gives random outcomes. Our results support the view that quantum randomness is irreducible and a manifestation of mathematical undecidability.
Comments: 9 pages, 4 figures
Subjects: Quantum Physics (quant-ph)
Cite as: arXiv:0811.4542v1 [quant-ph]

Game of Life

A reference for my (possible) future master's student…

(Submitted on 11 May 2006 (v1), last revised 18 Sep 2006 (this version, v2))
Abstract: The cellular automata with local permutation invariance are considered. We show that in the two-state case the set of such automata coincides with the generalized Game of Life family. We count the number of equivalence classes of the rules under consideration with respect to permutations of states. This reduced number of rules can be efficiently generated in many practical cases by our C program. Since a cellular automaton is a combination of a local rule and a lattice, we consider also maximally symmetric two-dimensional lattices. In addition, we present the results of compatibility analysis of several rules from the Life family.
Comments: 11 pages, corrected to match published version
Subjects: Mathematical Physics (math-ph); Cellular Automata and Lattice Gases (nlin.CG)
Journal reference: CASC 2006, LNCS 4194, pp. 240–250, 2006. Springer-Verlag Berlin Heidelberg 2006
Cite as: arXiv:math-ph/0605040v2
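For the (possible) student: the standard two-state rule B3/S23 is the base case of the generalized Life family studied in the paper. A minimal reference implementation (my sketch, not the paper's C program):

import numpy as np
from scipy.signal import convolve2d

# The classic Game of Life step (rule B3/S23), the base case of the
# "generalized Life family" the paper studies.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def step(board):
    nbrs = convolve2d(board, KERNEL, mode="same", boundary="wrap")
    born = (board == 0) & (nbrs == 3)               # birth with exactly 3
    survive = (board == 1) & ((nbrs == 2) | (nbrs == 3))
    return (born | survive).astype(int)

board = (np.random.random((64, 64)) < 0.3).astype(int)
for _ in range(100):
    board = step(board)
print("live cells after 100 steps:", board.sum())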

Energy – an interval quantity or an absolute one?

In this very interesting (and reasonably readable) paper, I managed to clear up a question I had raised on the UFF physics forum:

Gravity and its Mysteries: Some Thoughts and Speculations
Authors: A. Zee
(Submitted on 14 May 2008 (v1), last revised 28 Jul 2008 (this version, v2))
Abstract: I gave a rambling talk about gravity and its many mysteries at Chen-Ning Yang’s 85th Birthday Celebration held in November 2007. I don’t have any answers.

Gravity, knowing about everything, is the only interaction sensitive to a shift of the Lagrangian by an additive constant. In classical physics, additive constants do not affect the equation of motion. In quantum mechanics, experiments typically measure only energy differences ΔE and not the energies themselves. The Casimir effect measures the change in vacuum energy ΔE before and after the mirrors are introduced, not the vacuum energy itself (as is sometimes erroneously stated). But gravity knows about the vacuum energy ½ħω!
Is the zero point energy ½ħω real? I should think so, since it comes directly from the uncertainty principle. The textbook demonstration of reality is of course the liquidity of helium at zero temperature, but in fact, during the early days of quantum mechanics, many of the greats were skeptical. At the 1913 Solvay Congress Einstein declared that he did not believe in zero point energy, writing to Ehrenfest that the concept was “dead as a door nail.”
Pauli also had his doubts, but the experiment γ + H₂ → H + H convinced him. He was apparently the first to worry about the gravitational effect of the zero point energy filling space. He used for M the classical radius of the electron and concluded that the resulting universe “could not even reach to the moon!” With the passage of time people found “better” things to worry about and the issue was forgotten until Zel’dovich raised it again in the late sixties.

To read and send to Alexandre, Pablo and Mônica

EDITORIAL

To share the fame in a fair way, h_m modifies h for multi-authored manuscripts

Michael Schreiber
Institut für Physik, Technische Universität Chemnitz, 09107 Chemnitz, Germany
E-mail: [email protected]

2008 New J. Phys. 10 040201 (9pp) doi: 10.1088/1367-2630/10/4/040201

Abstract. The h-index has been introduced by Hirsch as a useful measure to characterize the scientific output of a researcher. I suggest a simple modification in order to take multiple co-authorship appropriately into account, by counting each paper only fractionally according to (the inverse of) the number of authors. The resulting hm-indices for eight famous physicists lead to a different ranking from the original h-indices.

Received 16 November 2007
Published 10 April 2008

OK, OK, nobody said the h_I was perfect, just that it is a step in the search for better indices. There are now three kinds of individual h-index (plus one, unpublished but used by the Publish or Perish site, which computes several indices from Google Scholar). Popularizing them is important, because they are the only indices that fight the corruption of adding too many people to papers.

We would need to study whether there is a universal curve for P(h_m), like the one found for P(h_I).
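To make the comparison concrete, a small sketch of both indices as I read their definitions (hypothetical data; each paper is given as a (citations, number of authors) pair, and Schreiber's h_m accumulates an effective rank of 1/authors per paper):

# Sketch of Hirsch's h and Schreiber's fractional h_m, from my reading of
# the definitions; each paper is a (citations, n_authors) pair.
def h_index(papers):
    cits = sorted((c for c, _ in papers), reverse=True)
    return max((k for k, c in enumerate(cits, 1) if c >= k), default=0)

def hm_index(papers):
    hm, r_eff = 0.0, 0.0
    for c, n_authors in sorted(papers, reverse=True):  # citations, descending
        r_eff += 1.0 / n_authors         # fractional (effective) paper count
        if c >= r_eff:
            hm = r_eff
        else:
            break
    return hm

papers = [(50, 4), (30, 2), (20, 5), (10, 1), (8, 3), (2, 2)]
print(h_index(papers), hm_index(papers))   # h = 5, h_m ~ 2.28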

Overall, I’m more interested in physics than citations
By Jorge Hirsch, Physics Professor, University of California San Diego, USA

Many people ask me why I came up with the “highly cited index” or h-index, a method for quantifying a scientist’s publication productivity and impact. Basically, the truth is that I dislike impact factors because, due to the controversial nature of my articles and research, I’m unable to get my work published in journals with high impact factors. Despite this, many of my articles have received large numbers of citations.

Background of the h-index

At many institutions, including my own, citation counts are considered during decisions relating to hiring, promotion and tenure. Despite the fact that citation counts can contain misinformation, for example, when many co-authors or self-citations are involved, they form a basic quantitative measure of a researcher’s output and impact. Hence citation counts should play an important role in evaluations, even if (or maybe especially when) the papers are not published in “high-impact journals.”

The h-index is about providing a simple objective measure for research evaluation.

In the summer of 2003, I first discussed the concept and mathematical calculation of the x-coefficient, as I initially called the h-index, with some colleagues at UCSD, and started to use it informally in evaluations. I wrote up a draft paper but wasn’t sure it would be of sufficient interest for publication. In the spring of 2005, I sent the paper to some colleagues and asked for comments. Some time later, a colleague from Germany emailed me inquiring about the index and expressing great interest. Then I decided to upload my h-index paper onto the Los Alamos server, which I did on August 3, 2005. I was still not sure whether to publish it in a refereed journal. To my surprise, the preprint received a very high level of interest. Before long, I found my email box filled with comments related to the article.

In essence, the h-index is about providing a simple objective measure for research evaluation. Since it is not related to the popularity of a journal, this index is a way to put more democracy into research performance measurements. In fact, papers that receive high numbers of citations in “low-impact” journals should be especially noteworthy.

Possible improvements to the h-index

Naturally no single quantitative measurement is sufficient on its own. One can add other features of the citation distribution besides the h-index to reflect additional citation information. For example, one may also consider the slope (first derivative) and curvature (second derivative) of the distribution, as well as the integral (total number of citations), as additional criteria. In the relation N_total = a h^2, a is normally between 3 and 5, but deviations do occur.

The h-index does not normalize for the number of years that a researcher has been active. This can be done by dividing by the time since graduation or receipt of a PhD: h(t) = mt (where m is expected to be approximately time independent). It is also interesting to normalize the h-index taking into account the number of co-authors. Furthermore there are variations in the h-index between different disciplines and subdisciplines.

Will I continue my investigation into indicators of research evaluation? To some extent, yes; however overall, I am more interested in physics than citations.

It made New Scientist too…

From New Scientist (I think it was Mark Buchanan who wrote it):

Recipes came about by evolution
22 March 2008
From New Scientist Print Edition

COQ AU VIN and steak-and-kidney pudding may not bring to mind the principles of evolution. Yet evolutionary mechanisms may well be reflected in recipes for these tried-and-tested dishes.
Physicist Antonio Roque of the University of São Paulo in Brazil and colleagues analysed thousands of recipes (arxiv.org/abs/0802.4393v1) drawn from the French Larousse Gastronomique, the British New Penguin Cookery Book, three editions of the Brazilian Dona Benta spanning nearly 60 years, and a medieval cookbook. When they looked at how often ingredients appeared in recipes and ranked them accordingly, they saw a precise mathematical relationship across the board between an ingredient’s position on the list and how commonly it was used. “There’s a remarkable similarity,” says Roque, “independent of culture and author.”

This similarity, they suggest, may point to an evolutionary mechanism at work in the way recipes are passed down. When they tested their idea using a mathematical model in which additions, deletions or errors could be introduced into recipes, they obtained the same relationship as long as they assigned each ingredient an inherent “fitness” which made it more likely to be retained. This fitness might in practice reflect, say, its nutritional value or flavour.
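A toy version of that copy-with-error dynamics (my own minimal construction, not the authors' model), in which each ingredient's fixed "fitness" biases whether a copying error keeps or replaces it:

import random
from collections import Counter

# Toy copy-mutation model of recipe evolution (a sketch, not the paper's):
# fitter ingredients are more likely to survive copying errors.
N_ING, N_REC, REC_LEN, GENERATIONS = 300, 200, 10, 20_000
fitness = [random.random() for _ in range(N_ING)]
book = [random.sample(range(N_ING), REC_LEN) for _ in range(N_REC)]
for _ in range(GENERATIONS):
    recipe = list(random.choice(book))            # copy an existing recipe
    slot = random.randrange(REC_LEN)
    old, new = recipe[slot], random.randrange(N_ING)
    if random.random() < fitness[new] / (fitness[new] + fitness[old]):
        recipe[slot] = new                        # error: swap an ingredient
    book[random.randrange(N_REC)] = recipe        # it replaces an old recipe
freq = Counter(i for r in book for i in set(r))
top = sorted(freq.values(), reverse=True)
for rank, n_recipes in enumerate(top[:10], 1):
    print(rank, n_recipes / N_REC)   # f(r): fraction of recipes using rank r

Plotting f(r) against rank on log-log axes is the frequency-rank analysis the article describes; the fitness bias is what keeps the curve from being pure copying noise.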

There was also a curious tendency for some low-fitness ingredients, which add little to a recipe, to become locked in, says Roque. These can persist by default if there are no obvious substitutes.

From issue 2648 of New Scientist magazine, 22 March 2008, page 18

Perceptron – 51 years of a good idea!

Although Frank Rosenblatt had published a report in 1957, the first paper on the Perceptron to appear in a scientific journal dates from 1958, that is, exactly fifty years ago. To mark the date, I will tell something about the Perceptron in the next few posts.

This rare-books site, where I found the 1958 issue of Psychological Review, is interesting.

From Wikipedia, to check later:

More recently, interest in the perceptron learning algorithm has increased again after Freund and Schapire (1998) presented a voted formulation of the original algorithm (attaining large margin) and suggested that one can apply the kernel trick to it. The kernel-perceptron not only can handle nonlinearly separable data but can also go beyond vectors and classify instances having a relational representation (e.g. trees, graphs or sequences).

Freund, Y. and Schapire, R. E. 1998. Large margin classification using the perceptron algorithm. In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT’ 98). ACM Press.
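While those posts don't arrive, a minimal sketch of Rosenblatt's original learning rule (my toy example, not the 1958 implementation):

import random

# Rosenblatt's perceptron rule: w <- w + eta * (target - prediction) * x.
def train(data, dim, eta=0.1, epochs=50):
    w = [0.0] * (dim + 1)                  # last component is the bias
    for _ in range(epochs):
        for x, target in data:
            xb = x + [1.0]
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            for i, xi in enumerate(xb):
                w[i] += eta * (target - pred) * xi
    return w

# Linearly separable toy task: label 1 iff x0 + x1 > 1.
pts = [[random.random(), random.random()] for _ in range(200)]
data = [(x, 1 if x[0] + x[1] > 1 else 0) for x in pts]
w = train(data, dim=2)
errors = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else 0) != t
    for x, t in data)
print("weights:", w, "| training errors:", errors)

On linearly separable data like this, the rule is guaranteed to converge; the kernel trick mentioned above extends it to nonlinearly separable cases.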

Food for Thought

More on the paper covered in Nature News. For anyone too lazy to go over to the ArXiv Blog, I reproduce the post here, together with the comment answering the criticism (or doubt) it raises, which, incidentally, Mark Buchanan also put to us this week; he is the author of Ubiquity, a book I recommend to all my students.

Food for thought

March 4th, 2008 by KFC


Evolution seems to crop up all over the place. In life, business, ideas. And now in recipes through the ages.

Yup, that’s recipes. For food. Osame Kinouchi from the Universidade de São Paulo in Brazil, and buddies, have studied the way in which ingredients used in recipes vary around the world and through the ages. And they’ve found, they say, evidence of evolution.

The team studied the relationship between recipes and ingredients in four cookbooks: three editions of the Brazilian Dona Benta (1946, 1969 and 2004), the French Larousse Gastronomique, the British New Penguin Cookery Book, and the medieval Pleyn Delit.

They took the recipes from each book, counted the number of times each ingredient appeared in these recipes and ranked them according to frequency.

What’s remarkable is that the frequency-rank distribution they found is more or less the same for each cookbook. Kinouchi and co say this can be explained if recipes evolve in much the way that living organisms do – in a landscape in which some ingredients can be thought of as fitter than others, in which random mutations take place, and some ingredients die out while others prosper.

Very clever…unless they’ve missed something.

Perhaps it’s not ingredients that produce this distribution but words themselves. I’d be interested to see whether the results they get would be significantly different were they to examine the frequency of adjectives or colours or numbers in these books rather than ingredients. If not, then recipes have nothing to do with the results they are presenting.

Of course, it’s possible that recipes have evolved in the way the group suggests. But the evidence they present here doesn’t look convincing to me.

Ref: arxiv.org/abs/0802.4393: The Nonequilibrium Nature of Culinary Evolution

One Response to “Food for thought”

  1. By Osame Kinouchi on Mar 14, 2008

    Hi, it is interesting that Mark Buchanan asked us the same question this week. Here is our reply to him (and thanks for the comment about our work, anyway…):

    Dear Mark,

    We thank you for your interest in our paper and are available to give you any needed information for you to write your article.

    Answering your question, we are aware of the “ubiquity” of power laws (we have enjoyed very much reading your book). However, we think that our finding is not directly related with Zipf’s law as you seem to be suggesting.

    This is because there is a clear “physical” interpretation of f(r): it is the percentage of recipes where ingredient with rank r occurs, for example, thyme has rank 13 in the New Penguin Cookery Book not because it is a common word in English or in cookery books but because it is well used in the recipes of this particular book.

    We also would like to emphasize that we did not count how many times an ingredient name appears in a book but how many recipes use that ingredient.

    Of course, perhaps, there is a lexical Zipf’s law for ingredient names — as well as for adjectives, nouns, etc –, probably with a different exponent (the classic lexical exponent is about 1 and our exponent is about 1.4).

    We think that our law is not lexical but reflects the relative popularity of ingredients in each culture. As another example, the frequency of given names in the population probably follows a power law, which is not a Zipfian lexical law but reflects the copy-mutation mechanisms underlying name transmission within society.

    Best regards,
    Osame and Antonio

It made Nature News

Our paper on culinary evolution was covered in Nature News, in a story by Daniel Cressey. If anyone wants to invite someone from the team to give seminars, we are available, as long as you pay for dinner…

In Nature News, Hervé This comments on our paper, making the following criticism: “He notes that the paper also doesn’t clearly define what an ‘ingredient’ is — is a carrot an ingredient, or is a chopped carrot one ingredient and grated carrot another, for instance?”

I can say that during the research we were quite aware of this problem. The fact is that any classification or grouping of the ingredients forms a highly branched dendrogram. That is, should we put “lemon juice” and “lemon” under the same label? Are red, rosé and white wine separate ingredients?

Our choice was to follow the book authors' own options as faithfully as possible, since if we had introduced our own classification it might have been as arbitrary as any other. However, we chose a level of ingredient description (ignoring the kind of processing undergone) detailed enough to give good variability, yet not so fine-grained as to leave the statistics poor (that is, an ingredient represented by only one or two recipes). This trade-off between specificity and representativeness is standard in statistics.