
A secular defense of the intentional creation of the universe

What is the purpose of the Universe? Here is one possible answer.

A Secular Case for Intentional Creation

By Clay Farris Naff | November 18, 2011

Scientific American Blog


“Does aught befall you? It is good. It is part of the destiny of the Universe ordained for you from the beginning.”

– Marcus Aurelius, Stoic Philosopher and Emperor of Rome, in Meditations, circa 170 CE

“He said that, did he? … Well, you can tell him from me, he’s an ass!”

– Bertie Wooster, fictional P.G. Wodehouse character, in The Mating Season, 1949

People have been arguing about the fundamental nature of existence since, well, since people existed. Having lost exclusive claim to tools, culture, and self, one of the few remaining distinctions of our species is that we can argue about the fundamental nature of existence.

There are, however, two sets of people who want to shut the argument down. One is the drearily familiar set of religious fundamentalists. The other is the shiny new set of atheists who claim that science demonstrates beyond reasonable doubt that our existence is accidental, purposeless, and doomed. My intent is to show that both are wrong.

I do not mean to imply a false equivalence here. Concerning the fundamentalist position, my work is done. Claims of a six-day Creation, a 6,000-year-old Earth, a global flood, and so forth have been demolished by science. It has not only amassed evidence against particular claims but has discovered laws of nature that exclude whole classes of claims. To the extent we can be certain about anything, we can rest assured that all supernatural claims are false.

The “New Atheist” position, by contrast, demands serious consideration. It has every advantage that science can provide, yet it overreaches for its conclusion. The trouble with the “New Atheist” position, as defined above, is this: it commits the fallacy of the excluded middle. I will explain.

But first, if you’ll pardon a brief diversion, I feel the need to hoist my flag. You may have inferred that I am a liberal religionist, attempting to unite the scientific narrative with some metaphorical interpretation of my creed. That is not so.

I am a secular humanist who is agnostic about many things — string theory, Many Worlds, the theological chances of a World Series win for the Cubs — but the existence of a supernatural deity is not among them. What’s more, I am one of the lucky ones: I never struggled to let go of God. My parents put religion behind them before I was born.

I tell you this not to boast but in hopes that you’ll take in my argument through fresh eyes. The science-religion debate has bogged down in trench warfare, and anyone foolhardy enough to leap into the middle risks getting cut down with no questions asked. But here goes.

Our universe will freeze like a supercooled beer…


Finding the Higgs? Good news. Finding its mass? Not so good.

“Fireballs of doom” from a quantum phase change would wipe out present Universe.

by  – Feb 19 2013, 8:55pm

A collision in the LHC’s CMS detector.

Ohio State’s Christopher Hill joked he was showing scenes of an impending i-Product launch, and it was easy to believe him: young people were setting up mats in a hallway, ready to spend the night to secure a space in line for the big reveal. Except the date was July 3 and the location was CERN—where the discovery of the Higgs boson would be announced the next day.

It’s clear the LHC worked as intended and has definitively identified a Higgs-like particle. Hill put the chance of the ATLAS detector having registered a statistical fluke at less than 10^-11, and he noted that wasn’t even considering the data generated by its partner, the CMS detector. But is it really the one-and-only Higgs and, if so, what does that mean? Hill was part of a panel that discussed those questions at the meeting of the American Association for the Advancement of Science.
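For readers who prefer sigmas to raw probabilities, a one-sided tail probability like the 10^-11 Hill quoted can be converted to a Gaussian significance. This is a generic statistics sketch, not part of any ATLAS code; the bisection is just a convenient standard-library way to invert the tail function:

```python
import math

def p_to_sigma(p):
    """Convert a one-sided Gaussian tail probability to a significance in sigma.

    Inverts p = 0.5 * erfc(z / sqrt(2)) by bisection; the survival
    function is monotonically decreasing in z, so this always converges.
    """
    lo, hi = 0.0, 40.0  # bracket: p(0) = 0.5, p(40) underflows to 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The conventional 5-sigma discovery threshold corresponds to p ~ 2.9e-7;
# a fluke probability of 1e-11 lands comfortably beyond it, near 6.7 sigma.
sigma = p_to_sigma(1e-11)
```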

As theorist Joe Lykken of Fermilab pointed out, the answers matter. If current results hold up, they indicate the Universe is currently inhabiting what’s called a false quantum vacuum. If it were ever to reach the real one, its existing structures (including us) would go away in what Lykken called “fireballs of doom.”

We’ll look at the less depressing stuff first, shall we?

Zeroing in on the Higgs

Thanks to the Standard Model, we were able to make some very specific predictions about the Higgs. These include the frequency with which it will decay via different pathways: two gamma-rays, two Z bosons (which further decay to four muons), etc. We can also predict the frequency of similar looking events that would occur if there were no Higgs. We can then scan each of the decay pathways (called channels), looking for energies where there is an excess of events, or bump. Bumps have shown up in several channels in roughly the same place in both CMS and ATLAS, which is why we know there’s a new particle.
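As a cartoon of that procedure, here is a bump hunt on invented numbers (these are not LHC counts): expected background events per energy bin versus observed counts, flagging any bin whose excess exceeds five standard deviations under the usual Gaussian approximation to Poisson statistics.

```python
import math

# Toy data, invented for illustration: a flat background of ~100 events
# per bin, with an injected "bump" in bin 4 of the observed spectrum.
expected = [100, 102, 99, 101, 100, 98, 100, 103, 97, 100]
observed = [104, 99, 101, 98, 160, 102, 96, 100, 99, 103]

def find_bumps(expected, observed, threshold=5.0):
    """Return indices of bins where observed exceeds expected by > threshold sigma."""
    bumps = []
    for i, (b, n) in enumerate(zip(expected, observed)):
        significance = (n - b) / math.sqrt(b)  # sqrt(N) Poisson fluctuation
        if significance > threshold:
            bumps.append(i)
    return bumps
```

A real analysis also accounts for systematic uncertainties and the look-elsewhere effect, but the core idea, an excess over background at some energy, is just this.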

But we still don’t know precisely what particle it is. The Standard Model Higgs should have a couple of properties: it should be scalar and should have a spin of zero. According to Hill, the new particle is almost certainly scalar; he showed a graph where the alternative, pseudoscalar, was nearly ruled out. Right now, spin is less clearly defined. It’s likely to be zero, but we haven’t yet ruled out a spin of two. So far, so Higgs-like.

The Higgs is the particle form of a quantum field that pervades our Universe (it’s a single quantum of the field), providing other particles with mass. In order to do that, its interactions with other particles vary—particles are heavier if they have stronger interactions with the Higgs. So, teams at CERN are sifting through the LHC data, checking for the strengths of these interactions. So far, with a few exceptions, the new particle is acting like the Higgs, although the error bars on these measurements are rather large.

As we said above, the Higgs is detected in a number of channels and each of them produces an independent estimate of its mass (along with an estimated error). As of the data Hill showed, not all of these estimates had converged on the same value, although they were all consistent within the given errors. These can also be combined mathematically for a single estimate, with each of the two detectors producing a value. So far, these overall estimates are quite close: CMS has the particle at 125.8 GeV, ATLAS at 125.2 GeV. Again, the error bars on these values overlap.
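The standard way to combine such independent estimates is an inverse-variance weighted average. The article gives only the central values, so the ±0.6 GeV uncertainties below are hypothetical, chosen purely to make the arithmetic concrete:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of independent (value, error) pairs."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    error = 1.0 / math.sqrt(sum(weights))  # combined error shrinks with more data
    return mean, error

# Central values from the article; the errors are invented for illustration.
mass, err = combine([(125.8, 0.6), (125.2, 0.6)])
```

With equal errors the weighted mean is just the midpoint, 125.5 GeV, and the combined error drops by a factor of √2.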

Oops, there goes the Universe

That specific mass may seem fairly trivial—if it were 130 GeV, would you care? Lykken argued that you probably should. But he took some time to build to that.

Lykken pointed out that, as the measurements mentioned above get more precise, we may find the Higgs isn’t decaying at precisely the rates we expect it to. This may be because we have some details of the Standard Model wrong. Or, it could be a sign the Higgs is also decaying into some particles we don’t know about—particles that are dark matter candidates would be a prime choice. The behavior of the Higgs might also provide some indication of why there’s such a large excess of matter in the Universe.

But much of Lykken’s talk focused on the mass. As we mentioned above, the Higgs field pervades the entire Universe; the vacuum of space is filled with it. And, with a value for the Higgs mass, we can start looking into the properties of the Higgs field and thus the vacuum itself. “When we do this calculation,” Lykken said, “we get a nasty surprise.”

It turns out we’re not living in a stable vacuum. Eventually, the Universe will reach a point where the contents of the vacuum are the lowest energy possible, which means it will reach the most stable state possible. The mass of the Higgs tells us we’re not there yet, but are stuck in a metastable state at a somewhat higher energy. That means the Universe will be looking for an excuse to undergo a phase transition and enter the lower state.
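A toy potential makes the picture concrete. The function below is not the actual Higgs potential, just a one-dimensional stand-in: a double well tilted so that one minimum (the metastable “false vacuum”) sits at higher energy than the other (the true vacuum). A universe stuck in the shallower well is waiting for an excuse to tunnel into the deeper one.

```python
# Toy tilted double-well potential (illustrative only, not the Higgs potential).
# The linear term tilts the wells so the right-hand minimum is metastable.
def V(x):
    return (x**2 - 1)**2 + 0.3 * x

def local_minima(f, lo=-2.0, hi=2.0, n=4000):
    """Find local minima of f on [lo, hi] by a simple grid scan."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return [xs[i] for i in range(1, n) if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]

minima = local_minima(V)        # one well near x = -1, one near x = +1
true_vac = min(minima, key=V)   # global minimum: the stable vacuum
false_vac = max(minima, key=V)  # higher-energy minimum: the metastable vacuum
```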

What would that transition look like? In Lykken’s words, again, “fireballs of doom will form spontaneously and destroy the Universe.” Since the change would alter the very fabric of the Universe, anything embedded in that fabric—galaxies, planets, us—would be trashed during the transition. When an audience member asked “Are the fireballs of doom like ice-9?” Lykken replied, “They’re even worse than that.”

Lykken offered a couple of reasons for hope. He noted the outcome of these calculations is extremely sensitive to the values involved. Simply shifting the top quark’s mass by two percent, to a value that’s still within the error bars of most measurements, would make for a far more stable Universe.

And then there’s supersymmetry. The news for supersymmetry out of the LHC has generally been negative, as various models with low-mass particles have been ruled out by the existing data (we’ll have more on that shortly). But supersymmetry actually predicts five Higgs particles. (Lykken noted this by showing a slide with five different photos of Higgs taken at various points in his career, in which he was “differing in mass and other properties, as happens to all of us.”) So, when the LHC starts up at higher energies in a couple of years, we’ll actually be looking for additional, heavier versions of the Higgs.

If those are found, then the destruction of our Universe would be permanently put on hold. “If you don’t like that fate of the Universe,” Lykken said, “root for supersymmetry.”

Eli Vieira and Nihilism

Why I am not a nihilist – a reply to André Díspore Cancian

Posted by Eli Vieira on Friday, August 20, 2010

Recently I commented on an interview with André Díspore Cancian, creator of the site Ateus.net, in which he expressed nihilism. I will develop the point a bit further here.

If nihilism (from the Latin nihil, nothing) is merely noting the fact that there is no meaning to life, at least not one that is a fundamental property of our existence or of the universe, then I too am a nihilist. In that sense nihilism is quite useful, as skepticism applied to ethics.

For some, however, there is more to nihilism: they not only note this fact about the absence of meaning in the nature that produced us, they also dismiss in advance, dogmatically, any attempt to construct meaning as mere naivety. In this second sense I am not, under any circumstances, a nihilist.

There are two reasons why I am not a nihilist:

1) I see an internal, technical inconsistency in nihilism:

It is a circular position, since it starts from a question of fact, the lack of “meaning” in life, only to return to another question of fact, our need for “meaning” in life even though no such meaning exists.

The inconsistency here is to ignore an enormous field, ethics, which is the field of questions of right.

“Meaning” is something that can be constructed by the individual and by culture, as it always has been, but without the self-deception of attributing meaning to the natural world that produced us; instead, we should regard such meaning the way we regard a work of art.

No one expects the beauty of Rodin’s works to be a fundamental property of nature. Likewise, we should not expect meaning to be a fundamental property of life.

Philosophers such as Paul Kurtz and A. C. Grayling are among those who make explicit, and value, the construction of life’s meaning in the same way one values the construction of aesthetic value in works of art.

As a nihilist, André Cancian holds that the daily ethical decision we make to go on living is merely the product of instincts shaped by natural selection, and that reason had better not linger in conversation with those instincts: if it did, that is, if we focused our consciousness on the fact of nature’s absurdity (as denounced by Camus and Nietzsche), we would voluntarily cease our will to live.

It is a mistake to think this way, because an individual can, as Bertrand Russell and Stephen Hawking report of themselves, construct a meaning for his own life while aware that this life is finite and insignificant in the cosmic context. It is commonly claimed that this was the position Nietzsche defended: that values would be constructed after the collapse of traditional values. But I am no great fan of Nietzsche’s work as philosophy; I share Russell’s view that Nietzsche is more literature than philosophy.

The nihilist position also ignores the treatises of thinkers such as David Hume on the fragility of reason in the face of the passions. Reason is the slave of the passions: it is a precise instrument, like a diamond blade, yet fragile against the force of the passions.

Reason and the empirical anchor are the masters of knowledge and of metaphysics. The passions, that is, the emotions, including the empathic emotion, are the masters of questions of right, as scientific research such as that of the neuroscientist Jorge Moll indicates.

The nihilist position is inconsistent in limiting the legitimacy of thought to rational and/or scientific scrutiny. In fact, reason and science serve epistemology and metaphysics (not respectively, but interchangeably). Ethics exists not merely as an object of study for those other faculties, but as an entire foundation supporting our minds: the foundation of questions of right –
– “should I do this?”
– “is this good?”
– “is this bad?”

These are questions we face every day, and about them epistemological and metaphysical answers (which concern questions of fact) are neutral.

In the interview on the blog Amálgama, André Cancian makes a point of noting that the emotions (or passions), which he calls “instincts”, originated through natural selection.
This odd priority in Cancian’s answer exemplifies the circularity of nihilism: nothing would have meaning because the emotions came from a natural process of differential survival among randomly varying replicators.

Given that reason came from the very same process, as Daniel Dennett argues brilliantly in his works, does that make reason unimportant, or turn it into an instrument that produces only false answers?

This rhetorical question serves to illustrate that no answer to it makes sense unless one separates, as Kant did, questions of fact from questions of right.
That is something nihilists like André Cancian insist on not doing.

At UnB I took classes with an admirable philosopher named Paulo Abrantes. When he entered into the metaphilosophical question of explaining what is or is not philosophy, he, like most philosophers, gave provisional and uncertain answers. One of those answers stayed with me: philosophy is the art of making things explicit. Any work of taking an idea full of tacit concepts, dissecting it, and spelling it out more clearly would be philosophical work.

When certain forms of nihilism ignore Kant’s crucial explicitation of the separation between questions of fact and questions of right, they revert to an unphilosophical state of clinging to nebulous, tacit positions.

To conclude the first reason why I am not a nihilist: nihilism strikes me as antiphilosophical.

2) I am also not a nihilist because I am a humanist. In other words, the vacuum the nihilist likes to point to is, in me, filled by humanism.

Here I will be brief: however much I try to explain, rationally, the reasons why I am a humanist, all such attempts are mere shadows next to the real feelings that overcome me.

The fact that a theocratic regime exists in Iran, and that dozens or hundreds of people are in line to be stoned, among them a woman named Sakineh Mohammadi Ashtiani, is something that stirs “my most primitive instincts”, to paraphrase a famous line from the Brazilian Congress some years ago.

I genuinely feel that it is simply wrong to bury a woman up to her shoulders and throw stones at her head until she dies.

I also feel it is wrong to think that “respecting the culture” of Iran is more important than preserving the life of this woman and of the others in her position.

Because cultures are not individuals like Sakineh, cultures get not a single drop of my empathy. But here, again, these are my rational faculties trying to explain my emotions.

With this in mind, I invite any nihilist to consider, now, what it would be like to tell this woman, when she is buried up to her shoulders, that nothing she is feeling matters, because her emotions are instincts that came from evolution by natural selection.

Am I appealing to emotion in an argument against nihilism? Of course. I have built the meaning of my life (indeed, I am always building it) on the notion that my emotions, and those of others, do matter, even though they were born from the absurdity of nature. If anything, this baptism of blood makes them more valuable.

But discussing the emotions, which are the basis of ethics, in light of their baptism of blood is a subject for another text. Perhaps using the character Dexter Morgan as its starting point? I have not yet decided whether that would be good or bad.

Amit Goswami really exists!

In my talk Ciência e Religião: Quatro Perspectivas (Science and Religion: Four Perspectives), given at IEA-RP, I called pseudoscientific any belief that claims to have scientific evidence in its favor when that is not exactly the case. The most a philosophical, ideological, or religious opinion should claim is that it is “compatible with”, not “derived from”, scientific knowledge. That is also Freeman Dyson’s position.

During the talk I made a criticism of Amit Goswami that later proved to be quite wrong, and I must record an erratum, or mea culpa, here. Because Goswami has no page on the English Wikipedia (only on the Portuguese one), and because a Web of Science search turned up no physics papers by this author, I made the hasty inference that Amit Goswami might be the pseudonym of some minor figure (just as Acharya S. is the pseudonym of Dorothy M. Murdock, the propagator of the Mythical Christ conspiracy theory).

I think the Wikipedia editors were too strict with Goswami. After all, although he is a non-notable physicist, with a Hirsch index of seven, he at least holds a PhD and is the author of a serious quantum physics textbook. His migration to the New Age, following in the footsteps of Fritjof Capra, far from being a demerit, may reflect great social and financial intelligence (irony intended!). So if they deleted Goswami from Wikipedia, they should delete Acharya S. too, for consistency!
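For the record, the Hirsch index mentioned above is simple to compute: it is the largest h such that h of an author's papers each have at least h citations. A minimal sketch, with citation counts invented purely for illustration:

```python
def h_index(citations):
    """Hirsch index: the largest h such that h papers have >= h citations each."""
    h = 0
    # Rank papers from most to least cited; h is the last rank whose
    # citation count still matches or exceeds the rank itself.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented counts: exactly seven papers with at least 7 citations, so h = 7.
example_h = h_index([30, 18, 12, 9, 8, 7, 7, 4, 2, 1])
```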

Wikipedia:Articles for deletion/Amit Goswami

From Wikipedia, the free encyclopedia
The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article’s talk page or in a deletion review). No further edits should be made to this page.

The result was delete. Guillaume2303’s research indicates that the early “keep” opinions likely apply to another, more notable person of the same name, which means that they are not taken into consideration here. The “keep” opinions by Jleibowitz101 and are also not taken into account as they are not based on our inclusion rules and practices.  Sandstein  06:25, 11 April 2012 (UTC)

Amit Goswami


I’m just not convinced this article really demonstrates notability. He played a small role in a couple films, he wrote books outside his field for very minor publishers, and… er, that’s about it. I’m just not buying it, and the lack of good WP:RS – this has major primary sourcing issues – is another mark against it. Perhaps something can be salvaged, but I’m not convinced the case has been made. ETA: Guillaume2303’s point (below) that there are multiple people of this name, and this article appears to be on the much less notable one is rather significant. 86.** IP (talk) 21:07, 3 April 2012 (UTC)

Historians of Science reject the thesis of conflict between Science and Religion

More material for my book on Atheism 3.0

Conflict thesis

From Wikipedia, the free encyclopedia
For a socio-historical theory with a similar name, see Conflict theory.

Conflict: Galileo before the Holy Office, by Joseph-Nicolas Robert-Fleury, a 19th-century depiction of the Galileo Affair: religion suppressing heliocentric science.

The conflict thesis is the proposition that there is an intrinsic intellectual conflict between religion and science and that the relationship between religion and science inevitably leads to public hostility. The thesis, refined beyond its most simplistic original forms, remains generally popular. However, historians of science no longer support it.[1][2][3][4]



Talk at the Institute for Advanced Studies (RP) on Science and Religion


Friday, November 9, 2012

Ciência e Religião: quatro perspectivas (Science and Religion: four perspectives)

Written by 

Date and time: 11/26 at 2:30 pm
Venue: Events Hall of the Ribeirão Preto Informatics Center – CIRP/USP (location)

The event, presented by Osame Kinouchi, will discuss four different views of the interaction between Science and Religion: conflict, separation, dialogue, and integration. Examining the sources of recent conflict (the Culture Wars), the professor suggests that they originate in Anti-scientific Romanticism, whether religious or secular.

According to Osame, the idea of a separation between the religious and scientific domains no longer seems viable, given the advances of Science on topics once considered metaphysical, such as the origins of the Universe (Cosmology), of Life (Astrobiology), of Mind (Neuroscience), and even of Religions themselves (Neurotheology, Evolutionary Psychology, and Religious Studies).
The talk will also show that attempts at forced or premature integration of Religion and Science risk sliding into Pseudoscience. Thus, in the professor’s view, a more academic stance of high-level dialogue can be an antidote to a naive cultural polarization between Atheism and Religiosity.

Video of the event

Cosmological Artificial Selection: first references

I had the same idea in 1995 but never published it. Last Friday I found, in an abandoned folder, the notes that are digitized here. By a lapse of memory, I had confused Lee Smolin (in English, and more complete here) with Sidney Coleman.

Meduso-anthropic principle

The meduso-anthropic principle is a quasi-organic universe theory originally proposed by mathematician and quantum gravity scholar Louis Crane in 1994.



Universes and black holes as potential life cycle partners

Crane’s MAP is a variant of the hypothesis of cosmological natural selection (fecund universes), originally proposed by cosmologist Lee Smolin (1992). It is perhaps the first published hypothesis of cosmological natural selection with intelligence (CNS-I), in which intelligence plays some proposed functional role in universe reproduction. It is also an interpretation of the anthropic principle (fine-tuning problem). The MAP suggests that the development and life cycle of the universe are similar to those of corals and jellyfish, in which the mobile medusa stage is the analog of universal intelligence, co-evolving and co-developing with sessile polyp generations, which are analogs of both black holes and universes. In the proposed life cycle, the universe develops intelligent life, and intelligent life produces new baby universes. Crane further speculates that our universe may also exist as a black hole in a parallel universe, and that extraterrestrial life there may have created that black hole.

Crane’s work was published in 1994 as a preprint on arXiv.org. In 1995, in an article in QJRAS, emeritus cosmologist Edward Harrison (1919–2007) independently proposed that the purpose of intelligent life is to produce successor universes, in a process driven by natural selection at the universal scale. Harrison’s work was apparently the first CNS-I hypothesis to be published in a peer-reviewed journal.

Why future civilizations might create black holes

Crane speculates that successful industrial civilizations will eventually create black holes, perhaps for scientific research, for energy production, or for waste disposal. After the hydrogen of the universe is exhausted, civilizations may need to create black holes in order to survive and give their descendants the chance to survive. He proposes that Hawking radiation from very small, carefully engineered black holes would provide the energy enabling civilizations to continue living when other sources are exhausted.

Philosophical implications

According to Crane, Harrison, and other proponents of CNS-I, mind and matter are linked in an organic-like paradigm applied at the universe scale. Natural selection in living systems has given organisms the imperative to survive and reproduce, and directed their intelligence to that purpose. Crane’s MAP proposes a functional purpose for intelligence with respect to universe maintenance and reproduction. Universes of matter produce intelligence, and intelligent entities are ultimately driven to produce new universes.

The gods of Richard Dawkins

My personal theology is described in the Gifford lectures that I gave at Aberdeen in Scotland in 1985, published under the title, Infinite In All Directions. Here is a brief summary of my thinking. The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole.

Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind.

We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God’s mental apparatus. Our minds may receive inputs equally from atoms and from God. This view of our place in the cosmos may not be true, but it is compatible with the active nature of atoms as revealed in the experiments of modern physics. I don’t say that this personal theology is supported or proved by scientific evidence. I only say that it is consistent with scientific evidence.

– Freeman Dyson

It seems Dawkins is moving toward a position similar to that of Gardner, Clément Vidal, and others in the Evo-Devo Universe community.

Human Gods

After two hours of conversation, Professor Dawkins walks far afield. He talks of the possibility that we might co-evolve with computers, a silicon destiny. And he’s intrigued by the playful, even soul-stirring writings of Freeman Dyson, the theoretical physicist.

In one essay, Professor Dyson casts millions of speculative years into the future. Our galaxy is dying and humans have evolved into something like bolts of superpowerful intelligent and moral energy.

Doesn’t that description sound an awful lot like God?

“Certainly,” Professor Dawkins replies. “It’s highly plausible that in the universe there are God-like creatures.”

He raises his hand, just in case a reader thinks he’s gone around a religious bend. “It’s very important to understand that these Gods came into being by an explicable scientific progression of incremental evolution.”

Could they be immortal? The professor shrugs.

“Probably not.” He smiles and adds, “But I wouldn’t want to be too dogmatic about that.”

The best popular-science book I have come across in forty years of reading

My review will come later…

A REALIDADE OCULTA – Universos paralelos e as leis profundas do cosmo (The Hidden Reality)
Brian Greene

Half a century ago, scientists treated the possibility of universes beyond the one we inhabit with irony. Such a hypothesis was no more than a delirium worthy of Alice in Wonderland, and one that, in any case, could never be tested experimentally. The challenges posed by the theory of relativity and by quantum physics to the understanding of our own universe were already complex enough to occupy generations of researchers. Yet several mutually independent lines of research, conducted by scientists respected in their fields (string theory, quantum electrodynamics, information theory), began to converge on the same point: the existence of parallel universes, the multiverse, is not only probable but has become the most plausible explanation for a number of cosmological puzzles.
In A realidade oculta (The Hidden Reality), Brian Greene, one of the world’s leading specialists in cosmology and particle physics, lays out the remarkable development of multiverse physics over the last few decades. The author of The Elegant Universe reviews the different theories of parallel universes that arise from the foundations of relativity and quantum mechanics. In accessible language, and with the help of numerous explanatory figures, Greene guides the reader through the labyrinths of the deepest reality of matter and thought.

“If extraterrestrials landed tomorrow and asked to know the capacities of the human mind, we could do no better than to hand them a copy of this book.” – Timothy Ferris, New York Times Book Review

Determining whether we live inside the Matrix

The Measurement That Would Reveal The Universe As A Computer Simulation

If the cosmos is a numerical simulation, there ought to be clues in the spectrum of high energy cosmic rays, say theorists



Wednesday, October 10, 2012

One of modern physics’ most cherished ideas is quantum chromodynamics, the theory that describes the strong nuclear force, how it binds quarks and gluons into protons and neutrons, how these form nuclei that themselves interact. This is the universe at its most fundamental.

So an interesting pursuit is to simulate quantum chromodynamics on a computer to see what kind of complexity arises. The promise is that simulating physics on such a fundamental level is more or less equivalent to simulating the universe itself.

There are one or two challenges, of course. The physics is mind-bogglingly complex and operates on a vanishingly small scale. So even using the world’s most powerful supercomputers, physicists have only managed to simulate tiny corners of the cosmos just a few femtometers across. (A femtometer is 10^-15 metres.)

That may not sound like much but the significant point is that the simulation is essentially indistinguishable from the real thing (at least as far as we understand it).

It’s not hard to imagine that Moore’s Law-type progress will allow physicists to simulate significantly larger regions of space. A region just a few micrometres across could encapsulate the entire workings of a human cell.

Again, the behaviour of this human cell would be indistinguishable from the real thing.

It’s this kind of thinking that forces physicists to consider the possibility that our entire cosmos could be running on a vastly powerful computer. If so, is there any way we could ever know?

Today, we get an answer of sorts from Silas Beane, at the University of Bonn in Germany, and a few pals.  They say there is a way to see evidence that we are being simulated, at least in certain scenarios.

First, some background. The problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time.

The question that Beane and co ask is whether the lattice spacing imposes any kind of limitation on the physical processes we see in the universe. They examine, in particular, high energy processes, which probe smaller regions of space as they get more energetic.

What they find is interesting. They say that the lattice spacing imposes a fundamental limit on the energy that particles can have. That’s because nothing can exist that is smaller than the lattice itself.

So if our cosmos is merely a simulation, there ought to be a cut off in the spectrum of high energy particles.

It turns out there is exactly this kind of cut off in the energy of cosmic ray particles,  a limit known as the Greisen–Zatsepin–Kuzmin or GZK cut off.

This cut-off has been well studied and comes about because high energy particles interact with the cosmic microwave background and so lose energy as they travel  long distances.

But Beane and co calculate that the lattice spacing imposes some additional features on the spectrum. “The most striking feature…is that the angular distribution of the highest energy components would exhibit cubic symmetry in the rest frame of the lattice, deviating significantly from isotropy,” they say.

In other words, the cosmic rays would travel preferentially along the axes of the lattice, so we wouldn’t see them equally in all directions.

That’s a measurement we could do now with current technology. Finding the effect would be equivalent to being able to ‘see’ the orientation of the lattice on which our universe is simulated.

That’s cool, mind-blowing even. But the calculations by Beane and co are not without some important caveats. One problem is that the computer lattice may be constructed in an entirely different way to the one envisaged by these guys.

Another is that this effect is only measurable if the lattice cut off is the same as the GZK cut off. This occurs when the lattice spacing is about 10^-12 femtometers. If the spacing is significantly smaller than that, we’ll see nothing.
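
As an order-of-magnitude check on the figures above, the snippet below evaluates the rough lattice dispersion bound E_max ≈ π·ħc/b for the spacing quoted in the article. The relation and its prefactor are an assumption of this post (the exact factor depends on the lattice action used), so treat it as a sanity check only:

```python
# Illustrative order-of-magnitude check, not a calculation from the paper:
# a lattice of spacing b cannot represent momenta above ~pi/b, so the
# maximum particle energy is roughly E_max ~ pi * hbar*c / b.
import math

HBAR_C_GEV_FM = 0.1973  # hbar*c in GeV*fm

def lattice_cutoff_ev(spacing_fm):
    """Rough maximum particle energy (eV) on a lattice of given spacing (fm)."""
    return math.pi * HBAR_C_GEV_FM / spacing_fm * 1e9  # GeV -> eV

GZK_EV = 5e19  # observed GZK cut off, roughly 5 x 10^19 eV

e_max = lattice_cutoff_ev(1e-12)  # the spacing quoted above, 10^-12 fm
print(f"lattice cutoff ~ {e_max:.1e} eV; GZK ~ {GZK_EV:.0e} eV")
```

For a spacing of 10^-12 femtometers this gives about 6×10^20 eV, within an order of magnitude of the GZK cut off, which is why the two scales can plausibly be identified.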

Nevertheless, it’s surely worth looking for, if only to rule out the possibility that we’re part of a simulation of this particular kind but secretly in the hope that we’ll find good evidence of our robotic overlords once and for all.

Ref: arxiv.org/abs/1210.1847: Constraints on the Universe as a Numerical Simulation

Para que servem os ateus?


Rabbits = religious, foxes = atheists?

I am starting to think I need to hurry up and write my book, to be titled “Deus e Acaso” (“God and Chance”), based on posts from this blog. Some of the book’s themes are already being discussed in recent papers; there seems to be growing interest in the subject. See, for example, the article below, which was a target article in an entire issue of the journal Religion, Brain & Behavior devoted to discussions of this kind.

What are atheists for? Hypotheses on the functions of non-belief in the evolution of religion

DOI: 10.1080/2153599X.2012.667948

Dominic Johnson
pages 48-70

Version of record first published: 27 Apr 2012


An explosion of recent research suggests that religious beliefs and behaviors are universal, arise from deep-seated cognitive mechanisms, and were favored by natural selection over human evolutionary history. However, if a propensity towards religious beliefs is a fundamental characteristic of human brains (as both by-product theorists and adaptationists agree), and/or an important ingredient of Darwinian fitness (as adaptationists argue), then how do we explain the existence and prevalence of atheists – even among ancient and traditional societies? The null hypothesis is that – like other psychological traits – due to natural variation among individuals in genetics, physiology, and cognition, there will always be a range of strengths of religious beliefs. Atheists may therefore simply represent one end of a natural distribution of belief. However, an evolutionary approach to religion raises some more interesting adaptive hypotheses for atheism, which I explore here. Key among them are: (1) frequency dependence may mean that atheism as a “strategy” is selected for (along with selection for the “strategy” of belief), as long as atheists do not become too numerous; (2) ecological variation may mean that atheism outperforms belief in certain settings or at certain times, maintaining a mix in the overall population; (3) the presence of atheists may reinforce or temper religious beliefs and behaviors in the face of skepticism, boosting religious commitment, credibility, or practicality in the group as a whole; and (4) the presence of atheists may catalyze the functional advantages of religion, analogous to the way that loners or non-participants can enhance the evolution of cooperation. Just as evolutionary theorists ask what religious beliefs are “for” in terms of functional benefits for Darwinian fitness, an evolutionary approach suggests we should also at least consider what atheists might be for.

Reference for the experiment on the free will of the tatu-bola (three-banded armadillo)


Insects as Model Animals

Steve Forrest for The New York Times

Dr. Jeremy Niven in his lab at the zoology department at Cambridge University.


Published: July 12, 2010

Jeremy Niven spends his days at Cambridge University running locusts across ladders and through mazes, trying to figure out how bugs think. Dr. Niven, 34, studies the evolution of brains and neurons in insects and other animals, like humans. We spoke during a break in last month’s World Science Festival in New York, where he was a guest presenter, and then again later via telephone. An edited version of the two conversations follows:


A. I think locusts are sweet. When you get used to them, they are actually quite nice.

Actually, I find that working with invertebrates opens your mind. Insects don’t perceive the world the way we do. Trying to understand them makes you think more about why we see the world as we do. Many animals have different sensors and receive different energies. Birds have ultraviolet vision. So do bees. They can see things we don’t. One learns respect for their capacities.

But the other thing is that insects in general and locusts in particular are admirable because they permit us to gain new information about nervous systems. With insects, we can actually study neural circuits and see how what happens in the neurons relates to behavior.


A. Yes. You see, with mammals, their nervous systems are very complex and everything you look at is more difficult to connect to behaviors. There are so many neurons in their brains — where do you even begin? How do you associate what’s going on in the neuron with what’s going on in the animal? Now, in insects, there are fewer neurons and so they can be identified more easily.

Of course, insect brains don’t work in exactly the same way a human brain does. But there is more overlap than many realize. It’s a consequence of evolution that animals have used the same biological tricks to get what they need from the environment. They mix and match different molecular components to build the system they need. So you can find the same components in a locust’s nervous system as in a human’s. We just have more of them.


A. I wanted to know how locusts used their vision to coordinate their limbs in a changing situation. This research involved learning how insects combine visual information with decision making and motor patterns.

With locusts, we’ve long thought they used their limbs to feel things gradually or that they used their antennae to sense the environment, much like a blind person with a cane. But with our experiment, we showed they use their vision to make a kind of guesstimate of distance. Then they jump.

Many insects use an approximate approach. So they teach us that many behaviors that a psychologist might describe as very complicated, an insect can do with very few neurons, and by making a few rough guesses.


A. We have not yet discovered the neural circuits in humans that are involved in reaching for objects. However, we might be able to work that out in locusts. We already know that it doesn’t take a huge brain to accurately control the limbs. These insects do it. There are all kinds of possibilities for robotics and for rehabilitative medicine in these studies.


A. In 2007, we were able to study how much energy neurons used and we quantified it. We studied different types of insect eyes — from tiny fruit fly eyes to huge blowfly eyes. In each creature, we worked out how much energy it takes for neurons in the brain to process information. What we learned was that the more information a fly’s eye needed to process, the more energy each unit of information consumed. That means that it’s bad, in the evolutionary sense, for an animal to have a bigger brain than it needs for survival. It’s like having a gas-eating Ferrari, when what you really need is a Honda Civic.


A. Bigger is better if you want to produce enormously complicated behavior. But in evolution, brains evolve by selection. There always is pressure on animals to produce behaviors for as little energy as possible. And that means for many animals, smaller brains are better because they won’t waste energy.

You know, there’s this pervasive idea in biology that I think is wrong. It goes: we humans are at the pinnacle of the evolutionary tree, and as you get up that tree, brain size must get bigger. But a fly is just as evolved as a human. It’s just evolved to a different niche.

In fact, in evolution there’s no drive towards bigger brains. It’s perfectly possible that under the right circumstances, you could get animals evolving small brains. Indeed, on some islands, where there’s reduced flora and fauna, you’ll see smaller versions of mainland species. I would argue that their brain size has been reduced because it saves energy, which permits them to survive in situations of scarcity. They also might not need big brains because they don’t have natural predators on the islands—and don’t have to be as smart because there’s nothing to avoid.


A. Because I thought it was a hominid. This thing about its being a human ancestor with a diseased brain never made much sense. The people who insisted it was a deformed early human couldn’t believe that it was possible to have such a huge reduction in brain size in any hominid. Yet, it’s possible to get a reduction in brain size of island animals as long as the selection pressure is there. There’s nothing to stop this from happening, even among hominids.


A. Because there’s this idea that nature moves inexorably towards bigger brains and some people find it very difficult to imagine why if you evolved a big brain — as ancient hominids had — why you would ever go back to a smaller one. But evolution doesn’t really care. This smaller brain could have helped this species survive better than an energy-consuming bigger one. The insects have shown us this.


A. I think probably because in the near past, we associated insects with disease. That’s a big part of it. On the other hand, Darwin loved insects. There’s a wonderful quote from him about the marvelous brains of ants, where he says that they may be more marvelous than the brains of humans or monkeys, precisely because they produce so much behavior with so little. I can’t help but agree.

For whom do the quantum bells toll?

John Bell And The Nature Of Reality

Posted: 13 Jul 2010 09:10 PM PDT

Why have so few heard of one of the great heroes of modern physics?

In 1935, Einstein and his colleagues Boris Podolsky and Nathan Rosen outlined an extraordinary paradox associated with the then emerging science of quantum mechanics.

They pointed out that quantum mechanics allows two objects to be described by the same single wave function. In effect, these separate objects somehow share the same existence so that a measurement on one immediately influences the other, regardless of the distance between them.

To Einstein, Podolsky and Rosen this clearly violated special relativity which prevents the transmission of signals at superluminal speed. Something had to give.

Despite the seriousness of this situation, the EPR paradox, as it became known, was more or less ignored by physicists until relatively recently.

Today, we call the relationship between objects that share the same existence entanglement. And it is the focus of intense interest from physicists studying everything from computing and lithography to black holes and photography.

It’s fair to say that while the nature of entanglement still eludes us, few physicists doubt that a better understanding will lead to hugely important insights into the nature of reality.

Many researchers have helped to turn the study of entanglement from a forgotten backwater into one of the driving forces of modern physics. But most of them would agree that one man can be credited with kickstarting this revolution.

This man was John Bell, a physicist at CERN for much of his career, who was incensed by the apparent contradictions and problems at the heart of quantum mechanics. In the early 60s, Bell laid the theoretical foundations for the experimental study of entanglement by deriving a set of inequalities that now bear his name.
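
The violation Bell predicted is easy to illustrate numerically. The sketch below (a minimal illustration of this post, not anything from Bernstein’s paper) evaluates the CHSH combination of correlations for a spin singlet, using the standard quantum prediction E(a, b) = −cos(a − b) and the textbook optimal angle settings; any local hidden-variable theory is bounded by |S| ≤ 2, while quantum mechanics reaches 2√2:

```python
# Minimal numerical illustration of the CHSH form of Bell's inequality.
import math

def E(a, b):
    """Quantum correlation of spin measurements at angles a and b (singlet state)."""
    return -math.cos(a - b)

# Standard angle settings that maximise the quantum violation
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(S)  # ~2.828 = 2*sqrt(2), exceeding the classical bound of 2
```

It is this gap between 2 and 2√2 that experimenters have since measured, vindicating quantum mechanics.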

While Bell’s inequalities are now mainstream, Bell was more or less ignored at the time. Now Jeremy Bernstein, a physicist and writer who knew Bell, publishes a short account of the background to Bell’s work along with some interesting anecdotes about the man himself, some of which are entirely new (at least, to me). He recounts screaming arguments between Bell and his university lecturers about the nature of quantum mechanics. And says that at the time of his death in 1991, Bell had been nominated for a Nobel Prize, which he was expected to win.

That would have entirely changed Bell’s legacy. He is well remembered by many working on the foundations of quantum mechanics, but little known by people in other areas. For such a good example of a scientist who took on the establishment and won, that is a shame.

Ref: arxiv.org/abs/1007.0769: A Chorus Of Bells

Can science explain everything?

Science explains, not describes

The experience of consciousness seems incommunicable and ineffable. Yet science can hope to explain how it arises

The question: Can science explain everything?

When Andrew Brown first posed this week’s question to me he asked “Can science describe everything?”. My instant, unreflective reply was “No”. He implied that this might be a less restrictive question than “Can science explain everything” and yet my instant reaction to this one was “Yes”. I’d like to explore this curious difference.

Science can (potentially at least) explain everything because its ways of trying to understand the universe by asking questions of it should not leave any areas off-limits. The methods of openness, inquiry, curiosity, theory building, hypothesis testing and so on can be adapted and developed to explore and try to explain anything.

But what is “everything”? I look out of my window and see green trees and grass and grazing cows, a river, a pond, birds, sky, clouds …. but everything? This is where description becomes so hard. There is just so much stuff in the universe and it’s all so complicated. Let me give two examples, a simpler one and a really tough one.

Let’s take those cows, or my black and white cat lying here on a comfy chair. There’s no way we can even aspire to precisely describing every black and white pattern on every cow and cat in the world. There are billions of them and each is unique. Even if everybody in the world devoted themselves to the task, they could never capture them all. Yet we can explain how genetic information codes for the construction of pigments, and developmental variations lead to the individual patterns.

To take a second example, closer to my heart and my research, there’s the “hard problem” of consciousness, of subjectivity, of private experiences, of “what it’s like” to be me.

Here I am, sitting at my desk, experiencing all sorts of sounds, sights, touches and smells, but I cannot adequately describe them to anyone else. This is the very essence of subjective experience – that it seems to be private to me. To raise old philosophical conundrums, I cannot know whether my experience of the greenness of the grass is anything like yours. What if my green experience were like your beige, and your black and white like my mauve and purple? I cannot describe my sensations (or qualia) of greenness in any other way than to say “it looks green”, implicitly comparing it with other colours in the world and using agreed names to do so. In this sense colour experiences (and smells, and noises, and tastes) are ineffable.

Ineffability is even more acute when we come to special states or transcendent experiences. What can I say, for example, about my spontaneous mystical experiences? That I became one with the universe, that I glimpsed another realm, that I seemed to be guided by something I can neither describe nor name? What can I say about states I have reached through meditation? That I could see the nature of arising experiences and stare into the indescribable ground of being? What can I say about deep states reached through taking LSD? That the world was alive and flowing through a me that was no longer me? I can say all these things, and some people will say “Oh yeah, I know what you mean”. But we will probably agree that nothing we say really does justice to those experiences.

Science cannot describe these experiences, but will it ever? Those who think the hard problem is real claim that the nature of experience will always remain beyond the grasp of both description and explanation. But those who think it’s a “hornswoggle problem”, a “non-problem” or an illusion, argue that when we really understand the workings of the brain the hard problem will have gone the way of caloric fluid or the élan vital, which was once sought so assiduously to explain the essence of life.

A subtler possibility is that we explain the ineffability itself. One example of this is a framework for thinking about natural and artificial information processing systems developed by Aaron Sloman and Ron Chrisley. They want to explain “the private, ineffable way things seem to us” by explaining how and why the ineffability problem arises at all. Their virtual machine (the CogAff architecture) includes processes that classify its own internal states. Unlike words that describe common experiences (such as seeing red in the world), these refer to internal states or concepts that are strictly not comparable from one virtual machine to another – just like qualia. If people protest that there is “something missing”: the indefinable quality, the what-it’s-like-to-be, or what zombies lack, their reply is that the fact that people think this way is what needs explaining, and can be explained in their model.

This and other competing theories suggest a new possibility – that conscious experiences may remain ineffable even when science thoroughly understands how and why. In this case I would be right in my intuition that science cannot describe everything but may well be able to explain that which it cannot describe.

How to read research papers without being fooled by them

How to read research papers

Posted By Daniel W. Drezner, Friday, July 9, 2010 – 5:37 PM

Ezra Klein made an interesting observation a few days ago about how opinion journalists read papers by experts:

[T]his is one of the difficulties with analysis. Fairly few political commentators know enough to decide which research papers are methodologically convincing and which aren’t. So we often end up touting the papers that sound right, and the papers that sound right are, unsurprisingly, the ones that accord most closely with our view of the world.

To which Will Wilkinson said “Amen”:

This is one of the reasons I tend not to blog as much I’d like about a lot of debates in economic policy. I just don’t know who to trust, and I don’t trust myself enough to not just tout work that confirms my biases. This is also why I tend to worry a lot about methodology in my policy papers. How much can we trust happiness surveys? How exactly is inequality measured? How exactly is inflation measured? Does standard practice bias standard measurements in a particular direction? Of course, the motive to dig deeper is often suspicion of research you feel can’t really be right. But this is, I believe, an honorable motive, as long as one digs honestly. Indeed, I’m pretty sure motivated cognition, when constrained by sound epistemic norms, is one of the mainsprings of intellectual progress.

One way to weigh competing research papers is to consider the publishing outlet.  Presumably, peer-reviewed articles will carry greater weight.  Except that Megan McArdle doesn’t presume:

Especially for papers that rely on empirical work with painstakingly assembled datasets, the only way for peer reviewers to do the kind of thorough vetting that many commentators seem to imagine is implied by the words “peer review” would be to . . . well, go back and re-do the whole thing.  Obviously, this is not what happens.  Peer reviewers check for obvious anomalies, originality, and broad methodological weakness.  They don’t replicate the work themselves.  Which means that there is immense space for things to go wrong–intentionally or not….

This is not to say that the peer review system is worthless.  But it’s limited.  Peer review doesn’t prove that a paper is right; it doesn’t even prove that the paper is any good (and it may serve as a gatekeeper that shuts out good, correct papers that don’t sit well with the field’s current establishment for one reason or another).  All it proves is that the paper has passed the most basic hurdles required to get published–that it be potentially interesting, and not obviously false.  This may commend it to our attention–but not to our instant belief.

This jibes with a recent Chronicle of Higher Education essay that bemoaned the explosion of research articles:

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

None of this provides much comfort for the layman interested in navigating through the miasma of contradictory research papers.  How can the amateur policy wonk separate the wheat from the chaff? 

Below are seven useful rules of thumb. These are not foolproof — in fact, that’s one of the rules — but they can provide some useful filtering while trying to discern good research from not-so-good research:

1)  If you can’t read the abstract, don’t bother with the paper.  Most smart people, including academics, don’t like to admit when they don’t understand something that they read.  This provides an opening for those who purposefully write obscurant or jargon-filled papers.  If you’re befuddled after reading the paper abstract, don’t bother with the paper — a poorly-worded abstract is the first sign of bad writing.  And bad academic writing is commonly linked to bad analytic reasoning. 

2)  It’s not the publication, it’s the citation count.  If you’re trying to determine the relative importance of a paper, enter it into Google Scholar and check out the citation count.  The more a paper is cited, the greater its weight among those in the know.  Now, this doesn’t always hold — sometimes a paper is cited along the lines of, “My findings clearly demonstrate that Drezner’s (2007) argument was, like, total horses**t.”  Still, for papers that are more than a few years old, the citation count is a useful metric.

3)  Yes, peer review is better.  Nothing Megan McArdle wrote is incorrect. That said, peer review does perform some useful vetting, so the reader doesn’t have to. If nothing else, it’s a useful signal that the author thought the paper could pass muster with critical colleagues. Now, there are times when a researcher will bypass peer review to get something published sooner. That said, in international relations, scholars who publish in non-refereed journals usually have a version of the paper intended for peer review.

4)  Do you see a strawman?  It’s a causally complex world out there.  Any researcher who doesn’t test an argument against viable alternatives isn’t really interested in whether he’s right or not — he just wants to back up his gut instincts.  A “strawman” is when an author takes the most extreme caricature of the opposing argument as the viable alternative.  If the rival arguments sound absurd when you read about them in the paper, it’s probably because the author has no interest in presenting the sane version of them.  Which means you can ignore the paper. 

5)  Are the author’s conclusions the only possible conclusions to draw?  Sometimes a paper can rest on solid theory and evidence, but then jump to policy conclusions that seem a bit of a stretch (click here for one example).  If you can reason out different policy conclusions from the theory and data, then don’t take the author’s conclusions at face value.  To use some jargon, sometimes a paper’s positivist conclusions are sound, even if the normative conclusions derived from the positive ones are a bit wobbly.  

6)  Can you falsify the author’s argument?  Conduct this exercise when you’re done reading a research paper — can you picture the findings that would force the author to say, “you know what, I can’t explain this away — it turns out my hypothesis was wrong”? If you can’t picture that, then you can discard what you’re reading as a piece of agitprop rather than a piece of research.

7)  Fraudulent papers will still get through the cracks.  Trust is a public good that permeates all scholarship and reportage. Peer reviewers assume that the author is not making up the data or plagiarizing someone else’s idea. We assume this because if we didn’t, peer review would be virtually impossible. Every once in a while, an unethical author or reporter will exploit that trust and publish something that’s a load of crap, so the previous rules of thumb don’t always work. The good news on this front is that the people who do this can’t stop themselves from doing it on a regular basis, and eventually they make a mistake. The publishing system is imperfect — but “imperfect” does not mean the same thing as “fatally flawed.”

With those rules of thumb, go forth and read your research papers. 

Other useful rules of thumb are encouraged in the comments.

Physicists explain why Kuhn’s scientific paradigms and revolutions exist

Why are most science bloggers and skeptics Popperians? Kuhn is much better, and can be modeled by sociophysics…

Highly connected – a recipe for success

Authors: Krzysztof Suchecki, Andrea Scharnhorst, Janusz A. Holyst
(Submitted on 5 Jul 2010 (v1), last revised 6 Jul 2010 (this version, v2))

Abstract: In this paper, we tackle the problem of innovation spreading from a modeling point of view. We consider a networked system of individuals, with a competition between two groups. We show its relation to the innovation spreading issues. We introduce an abstract model and show how it can be interpreted in this framework, as well as what conclusions we can draw from it. We further explain how model-derived conclusions can help to investigate the original problem, as well as other, similar problems. The model is an agent-based model assuming simple binary attributes of those agents. It uses a majority dynamics (Ising model to be exact), meaning that individuals attempt to be similar to the majority of their peers, barring the occasional purely individual decisions that are modeled as random. We show that this simplistic model can be related to the decision-making during innovation adoption processes. The majority dynamics for the model mean that when a dominant attribute, representing an existing practice or solution, is already established, it will persist in the system. We show, however, that in a two-group competition, a smaller group that represents innovation users can still convince the larger group, if it has high self-support. We argue that this conclusion, while drawn from a simple model, can be applied to real cases of innovation spreading. We also show that the model could be interpreted in different ways, allowing different problems to profit from our conclusions.
Comments: 36 pages, including 5 figures; for electronic journal revised to fix missing co-author
Subjects: Physics and Society (physics.soc-ph)
Cite as: arXiv:1007.0671v2 [physics.soc-ph]
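
The majority (Ising-like) dynamics the abstract describes can be sketched in a few lines. Everything below — the ring topology, the 70/30 split, the step count and the noise rate — is an illustrative assumption of this post, not a parameter from the paper:

```python
# Sketch of majority (Ising-like) opinion dynamics: each step, one agent
# adopts the majority state of its neighbours, except for occasional
# purely individual decisions, which are modeled as random flips.
import random

def majority_dynamics(states, neighbours, steps, p_random=0.02, rng=None):
    """Run single-agent majority updates; ties leave the state unchanged."""
    rng = rng or random.Random(0)
    states = list(states)
    for _ in range(steps):
        i = rng.randrange(len(states))
        if rng.random() < p_random:
            states[i] = rng.choice([-1, +1])  # individual (random) decision
        else:
            s = sum(states[j] for j in neighbours[i])
            if s != 0:
                states[i] = 1 if s > 0 else -1
    return states

# Ring of 100 agents, two neighbours on each side; the incumbent practice
# (-1) starts with a 70/30 majority over the innovation (+1).
n = 100
neighbours = [[(i + d) % n for d in (-2, -1, 1, 2)] for i in range(n)]
start = [-1] * 70 + [+1] * 30
random.Random(1).shuffle(start)
final = majority_dynamics(start, neighbours, steps=20_000)
print(sum(1 for s in final if s == -1), "agents still hold the incumbent state")
```

Runs like this show the established practice persisting, which is the baseline against which the paper’s “high self-support” mechanism for minority innovators is interesting.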

Does Roberto Takata exist?

I think this deserves a post on skepticism and scientific methodology. The goal is to try to determine, using scientific evidence, whether Roberto Takata exists or is a pseudonym for a collective of bloggers. I reproduce below the dialogue with the alleged Takata:

Osame Kinouchi said…
Sorry to cite you, Takata, but I think that if Shermer did not escape the conspiracism of the climate skeptics, then in principle no skeptic is immune to falling into conspiracism when it aligns with his ideology:

“But the central point is that, with these data alone, we would only know that Alexander existed” – But the question here is precisely to show that a supposedly historical figure probably existed.

So how about Robert in the Hood?


Roberto Takata

PS: I think that in the future Robert Takata will be a mythical figure. Colleagues have already remarked that it is impossible for a single human being to be omnipresent on every blog and newspaper, to tweet 24 hours a day, to keep 30 blogs, etc., and that Takata is the pseudonym of a group of writers, Bourbaki-style. I do not dispute the historical existence of a certain Roberto Takata whom I met at Ewclipo, but the in-person Takata has nothing to do with the internet Takata. Takata, prove that you exist!

none said…

If you do not doubt the existence of the Roberto Takata from the euclipo, then you have good evidence of the existence of a historical Roberto Takata. (Whose existence, by the way, is documented in official records: identity card, birth certificate, federal taxpayer registry, etc.)

Roberto Takata
Osame Kinouchi said…

What I said is that some historical Roberto Takata exists, but everything supposedly credited to Takata (several blogs, tweeting 24 hours a day, omnipresence in the comments of blogs and newspapers) is humanly incompatible with the free time available to a doctoral student who is writing his thesis, so Takata must be a pseudonym for a collective of bloggers. That is, nothing written by Takata is evidence, beyond reasonable doubt, that it was written by the historical Takata on the ID card. Takata must be an internet myth…

Other pieces of evidence that Takata does not exist:

1. The internet Takata is extremely talkative and articulate, but the in-person Takata (an actor hired to attend EWCLiPo?) is very shy and hardly speaks.
2. Roberto Takata’s Lattes CV has not been updated since 2007, which would be impossible if Takata were doing graduate work at USP. There are rumors that the historical Takata emigrated to Japan in 2008.
3. The pseudonym NONE used by Takata may mean that he is “no one”, that is, he does not exist and is a group of bloggers in the style of Bourbaki.
4. Takata avoids signing his comments, uses pseudonyms, and almost never signs as Roberto (much less Mitsuo).
5. If Takata really existed, he would deserve a place on Science Blogs Brasil, but that has not happened. There are rumors that Carlos Hotta and Kentaro Mori are two of the bloggers in the Takata collective, and that all of its members are of Japanese descent.
6. In Japanese, Takata means “tall rice field”, a coded allusion to the fact that it is a collective (a “field”).
7. Takata is the name of a Japanese seat-belt brand, shown in the photo; that is, a trademark, not a surname. If it were a surname, that would constitute copyright infringement.
8. The supposedly historical Takata can detect spelling errors in texts that no real human being could possibly catch. It is possible that Takata is also the acronym of a spell-checker developed by USP students: Tradutor Alfabético de Kanjis e Analizador Tipográfico Automático (Alphabetical Kanji Translator and Automatic Typographical Analyzer).

UPDATE: 9. Crucial evidence! Takata does not appear in USP’s Janus system!
UPDATE: 10. Carlos Hotta’s Japanese name is Takeshi, further evidence that Takata is a pseudonym of Carlos Hotta.

More evidence may be added here, as suggested by bloggers who actually exist.

Creativity in Science

I gave the talk “Criatividade na Ciência e Ciência da Criatividade” (Creativity in Science and the Science of Creativity) in the course Metodologia e Escrita Científicas (Scientific Methodology and Writing). The PDF of the talk is available on STOA; click here.

For discounted books on creativity, see here.

Quantum randomness

First Evidence That Quantum Processes Generate Truly Random Numbers
Posted: 12 Apr 2010 09:10 PM PDT

Quantum number generators produce random numbers that are measurably different from those that computer programs generate.

There is a growing sense among physicists that all physical processes can be thought of in terms of the information they store and process; by some accounts information is the basic unit of existence in our cosmos. That kind of thinking has extraordinary implications: it means that reality is a kind of computation in which the basic processes at work simply chomp their way through a vast bedrock of information.

And yet this is at odds with another of the great challenges facing modern science: understanding the nature of randomness. While information can be defined as an ordered sequence of symbols, randomness is the opposite of order, the absence of pattern. One of the basic features of true randomness is that it cannot be produced by a computer; otherwise it wouldn’t be random. And that sets up a mouthwatering problem.

If all physical processes in the universe are ongoing computations, how does randomness arise? What kind of process can be responsible for its creation?

Until recently, mathematicians could only study randomness generated by classical physical processes, such as coin tosses, or by computer programs, which generate so-called pseudorandomness. Since physical processes like coin tosses are hard to prove unbiased and difficult to manage, the workhorse random number generators are programs such as Mathematica, which uses the interesting properties of cellular automata to generate pseudorandom sequences of numbers. Another method is simply to choose a sequence of numbers from the digits of an irrational number such as pi.
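As a rough illustration (not taken from the paper), a cellular-automaton generator of the kind described above can be sketched in a few lines of Python using Rule 30, the rule Mathematica’s generator is reportedly based on. The function name, lattice width and seeding below are illustrative choices:

```python
def rule30_bits(n_bits, width=257):
    """Return n_bits pseudorandom bits read off the centre column of a
    Rule 30 cellular automaton started from a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1          # single seed cell in the middle
    bits = []
    for _ in range(n_bits):
        bits.append(cells[width // 2])
        # Rule 30: new cell = left XOR (centre OR right), wrapped at the edges
        cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bits
```

Sequences like this pass many statistical tests, yet, being the output of a short program, they are by definition computable and hence only pseudorandom.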

This stuff looks and feels random but because it can be computed, mathematicians treat it with suspicion.

But in the last few years, scientists have found a new source of randomness that cannot be produced by a computer program. This is called algorithmic randomness and it is the gold standard when it comes to the absence of order. The new source of this randomness is the quantum world and comes from exploiting quantum processes such as whether a photon is transmitted or reflected by a semi-silvered mirror.

This ought to produce sequences that can never be created by a computer. But are these sequences measurably different from those produced by computers?

This question is settled today by Cristian Calude at the University of Auckland in New Zealand and a few mates. These guys have carried out the first experimental comparison of randomness generated in these different ways, and they’ve done it on a huge scale, using sequences 2^32 bits long.

Calude and co compare several flavours of random sequence generated in different ways. One sequence comes from a quantum random number generator called Quantis, and another from physicists in Vienna who also exploit quantum processes. They also use conventional sequences generated by computer programs such as Mathematica and Maple, as well as a sequence of 2^32 bits from the binary expansion of pi.

The team uses four kinds of tests in their comparison: tests based on algorithmic information theory, statistical tests involving frequency counts, a test based on Shannon’s information theory and, finally, a test based on random walks.
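Simplified versions of three of these test categories (a frequency count, a block Shannon-entropy estimate, and a random-walk excursion) can be sketched as follows. The function names and the block size are illustrative; the actual test batteries used in the paper are far more elaborate:

```python
import math
from collections import Counter

def frequency_test(bits):
    """Fraction of ones; should be close to 0.5 for a balanced sequence."""
    return sum(bits) / len(bits)

def block_entropy(bits, k=4):
    """Empirical Shannon entropy, in bits per symbol, estimated over
    non-overlapping k-bit blocks; close to 1.0 for patternless binary data."""
    blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) / k

def max_excursion(bits):
    """Largest distance from the origin of the +/-1 random walk
    built from the bit sequence."""
    pos, peak = 0, 0
    for b in bits:
        pos += 1 if b else -1
        peak = max(peak, abs(pos))
    return peak
```

An alternating sequence like 0101… passes the frequency test perfectly (exactly half ones) yet fails the entropy test, since all of its 4-bit blocks are identical; this is why batteries of complementary tests are needed.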

The results show that the sequence generated by Quantis is easily distinguishable from the other data sets. This, say Calude and co, is evidence that quantum randomness is indeed incomputable, meaning that it could not have been generated by a computer.

Significantly, they leave unanswered the question of how convincing the evidence they have gathered actually is, and instead go to some lengths to point out that it is impossible to prove absolute randomness.

Nevertheless, if this evidence is taken at face value, it leaves us with a significant conceptual dilemma. On the one hand, it shows that Quantis produces sequences of random numbers that cannot be generated by a computer. And yet Quantis itself is a machine that must work by manipulating information in the way the laws of physics allow–it must be a computer of sorts.

This contradiction can only mean there is something wrong with the way we think about randomness or information or both (or at least with the way I’ve set it up here).

Of course, the answer must lie in the nature of information in the quantum world. It’s easy enough to define information classically as an ordered sequence of symbols. But that definition falls apart as soon as these symbols become quantum in nature.

If each bit can be both a 1 and a 0 at the same time, what does it mean for such a sequence to be in order? Equally, what would the absence of order look like in such a quantum sequence?

It is in tackling these questions that the nature of our universe is being teased apart.

Ref: arxiv.org/abs/1004.1521: Experimental Evidence of Quantum Randomness Incomputability

The last nail in Empiricism’s coffin?

Determining dynamical equations is hard

Authors: Toby S. Cubitt, Jens Eisert, Michael M. Wolf

(Submitted on 30 Apr 2010)
Abstract: The behaviour of any physical system is governed by its underlying dynamical equations–the differential equations describing how the system evolves with time–and much of physics is ultimately concerned with discovering these dynamical equations and understanding their consequences. At the end of the day, any such dynamical law is identified by making measurements at different times, and computing the dynamical equation consistent with the acquired data. In this work, we show that, remarkably, this process is a provably computationally intractable problem (technically, it is NP-hard). That is, even for a moderately complex system, no matter how accurately we have specified the data, discovering its dynamical equations can take an infeasibly long time (unless P=NP). As such, we find a complexity-theoretic solution to both the quantum and the classical embedding problems; the classical version is a long-standing open problem, dating from 1937, which we finally lay to rest.

Comments: For mathematical details, see arXiv:0908.2128 [math-ph].

Subjects: Quantum Physics (quant-ph)
Cite as: arXiv:1005.0005v1 [quant-ph]