
Our Universe will freeze like a super-cooled beer…


Finding the Higgs? Good news. Finding its mass? Not so good.

“Fireballs of doom” from a quantum phase change would wipe out present Universe.

Feb 19, 2013, 8:55pm

A collision in the LHC’s CMS detector.

Ohio State’s Christopher Hill joked he was showing scenes of an impending i-Product launch, and it was easy to believe him: young people were setting up mats in a hallway, ready to spend the night to secure a space in line for the big reveal. Except the date was July 3 and the location was CERN—where the discovery of the Higgs boson would be announced the next day.

It’s clear the LHC worked as intended and has definitively identified a Higgs-like particle. Hill put the chance of the ATLAS detector having registered a statistical fluke at less than 10^-11, and he noted that wasn’t even considering the data generated by its partner, the CMS detector. But is it really the one-and-only Higgs and, if so, what does that mean? Hill was part of a panel that discussed those questions at the meeting of the American Association for the Advancement of Science.
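To put that number in the language physicists usually use, here is a minimal sketch (assuming the standard one-sided Gaussian convention; this is not the collaborations’ actual statistical machinery) of how a fluke probability maps onto a “sigma” significance:

```python
# A minimal sketch: converting a fluke probability to a Gaussian significance.
# Assumes the one-sided convention particle physicists typically use.
from scipy.stats import norm

p_fluke = 1e-11                  # Hill's quoted chance of a statistical fluke
sigma = norm.isf(p_fluke)        # inverse survival function: p-value -> z-score
print(f"p = {p_fluke:g}  ->  {sigma:.1f} sigma")        # about 6.7 sigma

# For comparison, the conventional 5-sigma "discovery" threshold:
print(f"5 sigma  ->  p = {norm.sf(5):.2g}")             # about 2.9e-7
```

So 10^-11 sits comfortably past the usual 5-sigma discovery threshold.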

As theorist Joe Lykken of Fermilab pointed out, the answers matter. If current results hold up, they indicate the Universe is currently inhabiting what’s called a false quantum vacuum. If it were ever to reach the real one, its existing structures (including us) would go away in what Lykken called “fireballs of doom.”

We’ll look at the less depressing stuff first, shall we?

Zeroing in on the Higgs

Thanks to the Standard Model, we were able to make some very specific predictions about the Higgs. These include the frequency with which it will decay via different pathways: two gamma rays, two Z bosons (which further decay to four muons), etc. We can also predict the frequency of similar-looking events that would occur if there were no Higgs. We can then scan each of the decay pathways (called channels), looking for energies where there is an excess of events, or bump. Bumps have shown up in several channels in roughly the same place in both CMS and ATLAS, which is why we know there’s a new particle.
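As a toy illustration of such a bump hunt (the numbers below are invented, and the real CMS/ATLAS analyses are far more sophisticated), one can compare the observed counts in each energy bin against the background expectation and flag bins with a significant Poisson excess:

```python
# A toy bump hunt, not the real analysis: flag energy bins where the observed
# count is an improbably large fluctuation of the expected background alone.
from scipy.stats import poisson

bins_gev   = [115, 120, 125, 130, 135]   # hypothetical bin centers (GeV)
background = [100, 95, 90, 85, 80]       # expected events with no Higgs (invented)
observed   = [103, 97, 135, 88, 79]      # "observed" events, fake excess at 125

for e, b, n in zip(bins_gev, background, observed):
    p_local = poisson.sf(n - 1, b)       # P(background fluctuates up to >= n)
    flag = "  <-- bump?" if p_local < 1e-3 else ""
    print(f"{e} GeV: {n} observed vs {b} expected, p = {p_local:.2e}{flag}")
```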

But we still don’t know precisely what particle it is. The Standard Model Higgs should have a couple of properties: it should have a spin of zero and even parity, making it a scalar rather than a pseudoscalar. According to Hill, the new particle is almost certainly a scalar; he showed a graph where the pseudoscalar alternative was nearly ruled out. Spin is less clearly pinned down: it’s most likely zero, but a spin of two hasn’t yet been ruled out. So far, so Higgs-like.

The Higgs is the particle form of a quantum field that pervades our Universe (it’s a single quantum of the field), providing other particles with mass. In order to do that, its interactions with other particles vary—particles are heavier if they have stronger interactions with the Higgs. So, teams at CERN are sifting through the LHC data, checking for the strengths of these interactions. So far, with a few exceptions, the new particle is acting like the Higgs, although the error bars on these measurements are rather large.

As we said above, the Higgs is detected in a number of channels, and each of them produces an independent estimate of its mass (along with an estimated error). As of the data Hill showed, not all of these estimates had converged on the same value, although they were all consistent within the given errors. They can also be combined mathematically into a single estimate, with each of the two detectors producing a value. So far, these overall estimates are quite close: CMS has the particle at 125.8 GeV, ATLAS at 125.2 GeV. Again, the error bars on these values overlap.
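For readers curious how two such numbers get combined, here is a sketch of the standard inverse-variance weighted average; the central values are the ones above, but the ~0.6 GeV uncertainties are assumptions for illustration, not the experiments’ quoted errors.

```python
# Inverse-variance weighted combination of independent measurements.
# Central values from the article; the 0.6 GeV errors are illustrative only.
measurements = [(125.8, 0.6),            # CMS   (assumed error)
                (125.2, 0.6)]            # ATLAS (assumed error)

weights  = [1.0 / err**2 for _, err in measurements]
combined = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
comb_err = (1.0 / sum(weights)) ** 0.5

print(f"combined mass = {combined:.2f} +/- {comb_err:.2f} GeV")
# -> 125.50 +/- 0.42 GeV with these assumed errors
```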

Oops, there goes the Universe

That specific mass may seem fairly trivial—if it were 130 GeV, would you care? Lykken argued that you probably should. But he took some time to build to that.

Lykken pointed out that, as the measurements mentioned above get more precise, we may find the Higgs isn’t decaying at precisely the rates we expect it to. This may be because we have some details of the Standard Model wrong. Or it could be a sign the Higgs is also decaying into some particles we don’t know about—particles that are dark matter candidates would be a prime choice. The behavior of the Higgs might also provide some indication of why there’s such a large excess of matter in the Universe.

But much of Lykken’s talk focused on the mass. As we mentioned above, the Higgs field pervades the entire Universe; the vacuum of space is filled with it. And, with a value for the Higgs mass, we can start looking into the properties of the Higgs field and thus the vacuum itself. “When we do this calculation,” Lykken said, “we get a nasty surprise.”

It turns out we’re not living in a stable vacuum. Eventually, the Universe will reach a point where the contents of the vacuum are the lowest energy possible, which means it will reach the most stable state possible. The mass of the Higgs tells us we’re not there yet, but are stuck in a metastable state at a somewhat higher energy. That means the Universe will be looking for an excuse to undergo a phase transition and enter the lower state.
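A toy picture may help here. The sketch below (an illustrative tilted double-well, not the actual Standard Model effective potential) shows a field with two minima: a higher, metastable “false vacuum” and a lower, truly stable one.

```python
# Illustrative only: a tilted double-well potential with a metastable (false)
# minimum at higher energy than the global (true) minimum.
from scipy.optimize import minimize_scalar

def V(phi):
    # two wells near phi = -1 and phi = +1; the tilt makes the left one higher
    return (phi**2 - 1.0)**2 - 0.3 * phi

false_vac = minimize_scalar(V, bounds=(-1.5, 0.0), method="bounded")
true_vac  = minimize_scalar(V, bounds=(0.0, 1.5), method="bounded")

print(f"false vacuum: phi = {false_vac.x:+.2f}, V = {false_vac.fun:+.3f}")
print(f"true  vacuum: phi = {true_vac.x:+.2f}, V = {true_vac.fun:+.3f}")
# A universe sitting in the higher well is metastable: a barrier separates it
# from the lower state, and a phase transition would carry it over.
```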

What would that transition look like? In Lykken’s words, again, “fireballs of doom will form spontaneously and destroy the Universe.” Since the change would alter the very fabric of the Universe, anything embedded in that fabric—galaxies, planets, us—would be trashed during the transition. When an audience member asked “Are the fireballs of doom like ice-9?” Lykken replied, “They’re even worse than that.”

Lykken offered a couple of reasons for hope. He noted the outcome of these calculations is extremely sensitive to the values involved. Simply shifting the top quark’s mass by two percent, to a value that’s still within the error bars of most measurements, would make for a far more stable Universe.

And then there’s supersymmetry. The news for supersymmetry out of the LHC has generally been negative, as various models with low-mass particles have been ruled out by the existing data (we’ll have more on that shortly). But supersymmetry actually predicts five Higgs particles. (Lykken noted this by showing a slide with five different photos of Peter Higgs taken at various points in his career, in which he was “differing in mass and other properties, as happens to all of us.”) So, when the LHC starts up at higher energies in a couple of years, we’ll actually be looking for additional, heavier versions of the Higgs.

If those are found, then the destruction of our Universe would be permanently put on hold. “If you don’t like that fate of the Universe,” Lykken said, “root for supersymmetry.”

Extrasolar planets, Kepler 62, and the local Fermi Paradox

As the number of extrasolar planets discovered grows, we also tighten the constraints on the predictions of the galactic percolation model (the local Fermi Paradox).
The prediction is that, if we assume Memetic Biospheres (cultural biospheres, or technospheres) are a likely outcome of Genetic Biospheres, then we should find ourselves inside a region with few habitable planets. For if there were planets inhabited by intelligent beings nearby, they would very probably be far more advanced than we are, and they would already have colonized us.
Since that has not happened (unless you believe the ufologists’ conspiracy theories, the Jesus-the-ET theories, ancient-astronaut gods, and so on), it follows that the more data astronomers collect, the more evident it will become that our solar system is an anomaly within our cosmic neighborhood (1,000 light-years?). In other words, we cannot assume the Copernican Principle for the solar system: our solar system is not typical of our neighborhood. Well, at least this conclusion agrees with the data collected so far…
Thus, one can predict that further analysis of the planets Kepler 62-e and Kepler 62-f will reveal that they lack atmospheres containing oxygen or methane, the signatures of a planet with a biosphere.

Persistence solves Fermi Paradox but challenges SETI projects

Osame Kinouchi (DFM-FFCLRP-USP)
(Submitted on 8 Dec 2001)

Persistence phenomena in colonization processes could explain the negative results of the SETI search while preserving the possibility of a galactic civilization. However, persistence phenomena also indicate that searching for technological civilizations around stars in the neighbourhood of the Sun is a misdirected SETI strategy. This last conclusion is also suggested by a weaker form of the Fermi paradox. A simple model of branching colonization, which includes the emergence, decay, and branching of civilizations, is proposed. The model could also be used in the context of the diffusion of ant nests.
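A minimal sketch of what such a model might look like, going only by the abstract’s description (emergence, decay, and branching of civilizations on a grid of star systems); the paper’s actual rules and parameters may differ:

```python
# A sketch in the spirit of the abstract, not the paper's actual model:
# civilizations branch to neighboring systems, may decay, and dead systems
# persist (are never recolonized), leaving uncolonized voids behind.
import random

random.seed(1)
L = 60                                # L x L grid of star systems
P_BRANCH, P_DECAY = 0.5, 0.1          # assumed per-step probabilities

EMPTY, ACTIVE, DEAD = 0, 1, 2
grid = [[EMPTY] * L for _ in range(L)]
grid[L // 2][L // 2] = ACTIVE         # a single civilization emerges

for step in range(200):
    for i in range(L):
        for j in range(L):
            if grid[i][j] != ACTIVE:
                continue
            if random.random() < P_DECAY:
                grid[i][j] = DEAD     # extinction; persistence: never recolonized
            elif random.random() < P_BRANCH:
                ni, nj = random.choice([(i+1, j), (i-1, j), (i, j+1), (i, j-1)])
                if 0 <= ni < L and 0 <= nj < L and grid[ni][nj] == EMPTY:
                    grid[ni][nj] = ACTIVE

flat = [s for row in grid for s in row]
print(f"active: {flat.count(ACTIVE)}, dead: {flat.count(DEAD)}, "
      f"never colonized: {flat.count(EMPTY)}")
```

For suitable parameters the front spreads across the grid yet leaves behind a patchwork of dead and never-visited systems, the “empty bubbles” in which an uncolonized system like ours could sit; that is why nearby SETI searches could come up empty even in a colonized galaxy.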

03/05/2013 – 03h10

Possibility of life is not limited to Earth-like planets, study says

SALVADOR NOGUEIRA
CONTRIBUTING TO FOLHA

Given the varied compositions, masses, and orbits possible for planets outside the Solar System, life may not be limited to worlds similar to Earth in Earth-like orbits.


That is one of the conclusions presented by Sara Seager, of MIT (the Massachusetts Institute of Technology) in the US, in a review article published in the journal “Science,” based on a statistical analysis of the roughly 900 worlds already detected around more than 400 stars.

Seager highlights the possible existence of planets whose atmospheres would be dense enough to preserve liquid water at the surface even at temperatures far lower than Earth’s.

The gods of Richard Dawkins

My personal theology is described in the Gifford lectures that I gave at Aberdeen in Scotland in 1985, published under the title, Infinite In All Directions. Here is a brief summary of my thinking. The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole. Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind. We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God’s mental apparatus. Our minds may receive inputs equally from atoms and from God. This view of our place in the cosmos may not be true, but it is compatible with the active nature of atoms as revealed in the experiments of modern physics. I don’t say that this personal theology is supported or proved by scientific evidence. I only say that it is consistent with scientific evidence. – Freeman Dyson

It seems Dawkins is moving toward a position similar to that of Gardner, Clément Vidal, and others in the Evo-Devo Universe community.

Human Gods

After two hours of conversation, Professor Dawkins walks far afield. He talks of the possibility that we might co-evolve with computers, a silicon destiny. And he’s intrigued by the playful, even soul-stirring writings of Freeman Dyson, the theoretical physicist.

In one essay, Professor Dyson casts millions of speculative years into the future. Our galaxy is dying and humans have evolved into something like bolts of superpowerful, intelligent, and moral energy.

Doesn’t that description sound an awful lot like God?

“Certainly,” Professor Dawkins replies. “It’s highly plausible that in the universe there are God-like creatures.”

He raises his hand, just in case a reader thinks he’s gone around a religious bend. “It’s very important to understand that these Gods came into being by an explicable scientific progression of incremental evolution.”

Could they be immortal? The professor shrugs.

“Probably not.” He smiles and adds, “But I wouldn’t want to be too dogmatic about that.”

The best popular science book I have found in forty years of reading

I’ll write my review later…

A REALIDADE OCULTA – Universos paralelos e as leis profundas do cosmo (The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos)
Brian Greene

Half a century ago, scientists greeted with irony the possibility that other universes might exist beyond the one we inhabit. Such a hypothesis was no more than a delusion worthy of Alice in Wonderland, and one that, in any case, could never be tested experimentally. The challenges that the Theory of Relativity and quantum physics posed for understanding our own universe were already complex enough to occupy generations of researchers. Yet several mutually independent lines of research, conducted by scientists respected in their fields (string theory, quantum electrodynamics, information theory), began to converge on the same point: the existence of parallel universes, the multiverse, is not only probable but has become the most plausible explanation for a number of cosmological puzzles.
In A realidade oculta, Brian Greene, one of the world’s leading specialists in cosmology and particle physics, lays out the remarkable development of multiverse physics over the past few decades. The author of The Elegant Universe reviews the different theories of parallel universes, starting from the foundations of relativity and quantum mechanics. In accessible language, and with the help of numerous explanatory figures, Greene guides the reader through the labyrinths of the deepest reality of matter and thought.

“If extraterrestrials showed up tomorrow and asked to know the capabilities of the human mind, we could do no better than to offer them a copy of this book.” – Timothy Ferris, New York Times Book Review

Around Alpha Centauri B, a planet with the same mass as Earth

I believe the Fermi Paradox has an as-yet unexplored heuristic power. That is, the Paradox can be used as evidence (to be explained) against scientific possibilities or speculations such as Artificial Intelligence, wormhole travel, or time machines. It establishes impossibility statements similar to phrasing the second law of Thermodynamics as the impossibility of building a perpetual motion machine.

For example, let R(t) be the detection radius for extraterrestrial civilizations, that is, a (time-dependent) radius within which our technology is capable of detecting such civilizations. From this concept we can state that no civilization more advanced than ours exists within a radius smaller than R(t), since it would have had time to detect us and possibly colonize us.

On the other hand, let R_c be the colonization radius of the galactic civilization closest to the Sun, and let D be the distance between that civilization’s center and the Sun. From the Fermi Paradox (“Where is everybody?”) we can conclude that D > R_c, unless the colonization process is described not by simple diffusion but by anomalous, perhaps fractal, diffusion, such that the Earth sits inside an empty, uncolonized bubble. Either way, we can conclude that there are no advanced civilizations near us.

We can also predict that we are not in a typical region of the Galaxy (in terms of the density of habitable planets). Most likely we are in an atypical region (similar to the Sahara Desert here on Earth) where habitable and inhabited planets are rare. That is, I can predict with some confidence that the Kepler telescope will detect an atypical distribution of planets (in terms of mass, distance from the central star, presence in the star’s habitable zone where liquid water is possible, etc.). In other words, it will be very hard to find, near the Sun, an Earth-like planet in the habitable zone of a star older than the Sun, because such a planet would probably be inhabited, and its civilization would have had plenty of time to colonize us.

On the other hand, we can use the Fermi Paradox to rule out the possibility of strong, self-replicating Artificial Intelligence (von Neumann probes, or the black monoliths of the film 2010). If such probes were feasible to build, they would already be here.

Well, the alternative to all these Fermi Paradox arguments is that they really are already here: we can construct every kind of X-Files-style conspiracy reasoning to answer the basic question of why the ETs, if they really exist, do not make contact with us. A less conspiratorial hypothesis would be that they are well-meaning anthropologists who have already learned that every inferior civilization is destroyed, or at least culturally absorbed, by the superior civilization after contact (the Zoo Hypothesis).

Finally, the Fermi Paradox increases skepticism about superluminal travel, warp drives, and the like. And a temporal version of the Paradox asks: if time machines can be built, where are the visitors from the future?

17/10/2012 – 05h05

Researchers find a neighboring planet that is a twin of Earth

SALVADOR NOGUEIRA
CONTRIBUTING TO FOLHA

It is probably the most eagerly awaited piece of news since the first planet outside the Solar System was discovered, in the mid-1990s. A planet with practically the same mass as Earth has finally been found.

And the big surprise: it orbits Alpha Centauri, the stellar system closest to the Sun.

Determining whether we live inside the Matrix

The Measurement That Would Reveal The Universe As A Computer Simulation

If the cosmos is a numerical simulation, there ought to be clues in the spectrum of high energy cosmic rays, say theorists


THE PHYSICS ARXIV BLOG

Wednesday, October 10, 2012

One of modern physics’ most cherished ideas is quantum chromodynamics, the theory that describes the strong nuclear force: how it binds quarks and gluons into protons and neutrons, and how these form nuclei that themselves interact. This is the universe at its most fundamental.

So an interesting pursuit is to simulate quantum chromodynamics on a computer to see what kind of complexity arises. The promise is that simulating physics on such a fundamental level is more or less equivalent to simulating the universe itself.

There are one or two challenges of course. The physics is mind-bogglingly complex and operates on a vanishingly small scale. So even using the world’s most powerful supercomputers, physicists have only managed to simulate tiny corners of the cosmos just a few femtometers across. (A femtometer is 10^-15 metres.)

That may not sound like much but the significant point is that the simulation is essentially indistinguishable from the real thing (at least as far as we understand it).

It’s not hard to imagine that Moore’s Law-type progress will allow physicists to simulate significantly larger regions of space. A region just a few micrometres across could encapsulate the entire workings of a human cell.

Again, the behaviour of this human cell would be indistinguishable from the real thing.

It’s this kind of thinking that forces physicists to consider the possibility that our entire cosmos could be running on a vastly powerful computer. If so, is there any way we could ever know?

Today, we get an answer of sorts from Silas Beane, at the University of Bonn in Germany, and a few pals.  They say there is a way to see evidence that we are being simulated, at least in certain scenarios.

First, some background. The problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time.

The question that Beane and co ask is whether the lattice spacing imposes any kind of limitation on the physical processes we see in the universe. They examine, in particular, high energy processes, which probe smaller regions of space as they get more energetic.

What they find is interesting. They say that the lattice spacing imposes a fundamental limit on the energy that particles can have. That’s because nothing can exist that is smaller than the lattice itself.

So if our cosmos is merely a simulation, there ought to be a cut off in the spectrum of high energy particles.

It turns out there is exactly this kind of cut-off in the energy of cosmic ray particles, a limit known as the Greisen–Zatsepin–Kuzmin, or GZK, cut-off.

This cut-off has been well studied and comes about because high energy particles interact with the cosmic microwave background and so lose energy as they travel  long distances.

But Beane and co calculate that the lattice spacing imposes some additional features on the spectrum. “The most striking feature…is that the angular distribution of the highest energy components would exhibit cubic symmetry in the rest frame of the lattice, deviating significantly from isotropy,” they say.

In other words, the cosmic rays would travel preferentially along the axes of the lattice, so we wouldn’t see them equally in all directions.

That’s a measurement we could do now with current technology. Finding the effect would be equivalent to being able to ‘see’ the orientation of the lattice on which our universe is simulated.

That’s cool, mind-blowing even. But the calculations by Beane and co are not without some important caveats. One problem is that the computer lattice may be constructed in an entirely different way to the one envisaged by these guys.

Another is that this effect is only measurable if the lattice cut off is the same as the GZK cut off. This occurs when the lattice spacing is about 10^-12 femtometers. If the spacing is significantly smaller than that, we’ll see nothing.
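As an order-of-magnitude sanity check of that figure (a back-of-the-envelope sketch using the standard lattice momentum cutoff of roughly pi/a, not the paper’s full calculation):

```python
# Rough check: the maximum momentum on a lattice of spacing a is ~ pi/a,
# i.e. E_max ~ pi * hbar * c / a. This is a ballpark estimate only.
import math

HBAR_C = 0.1973          # hbar * c in GeV * fm
a_fm = 1e-12             # lattice spacing of ~10^-12 fm, as quoted above

e_cutoff_gev = math.pi * HBAR_C / a_fm
print(f"lattice cutoff ~ {e_cutoff_gev:.1e} GeV = {e_cutoff_gev * 1e9:.1e} eV")
# ~6e11 GeV, i.e. ~6e20 eV -- within an order of magnitude of the GZK
# scale (~5e19 eV), which is the coincidence the argument relies on.
```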

Nevertheless, it’s surely worth looking for, if only to rule out the possibility that we’re part of a simulation of this particular kind, but secretly in the hope that we’ll find good evidence of our robotic overlords once and for all.

Ref: arxiv.org/abs/1210.1847: Constraints on the Universe as a Numerical Simulation

Testing the untestable

Via The Physics ArXiv blog:

How to test the many worlds interpretation of quantum mechanics

Posted: 07 Oct 2008 12:59 AM CDT

The many worlds interpretation of quantum mechanics holds that before a measurement is made, identical copies of the observer exist in parallel universes and that all possible results of a measurement actually take place in these universes.

Until now there has been no way to distinguish between this and the Born interpretation. The latter holds that each outcome of a measurement has a specific probability and that, while an ensemble of measurements will match that distribution, there is no way to determine the outcome of a specific measurement.

Now Frank Tipler, a physicist at Tulane University in New Orleans, says he has hit upon a way in which these interpretations must produce different experimental results.

His idea is to measure how quickly individual photons hitting a screen build into a pattern. According to the many worlds interpretation, this pattern should build more quickly, says Tipler.

And he points out that an experiment to test this idea would be easy to perform. Simply send photons through a double slit onto a screen and measure where each one hits. Once the experiment is over, a simple mathematical test of the data tells you how quickly the pattern formed.
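A toy version of the idea (a sketch of the statistics involved, not Tipler’s actual protocol): draw photon hits from an invented two-slit distribution and watch how fast the empirical pattern converges to the limiting one.

```python
# Toy convergence test: under the Born rule, the distance between the
# empirical hit pattern and the limiting distribution shrinks ~ 1/sqrt(n).
# Tipler's claim is that many-worlds predicts measurably faster convergence.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 61)
pattern = np.cos(3 * x) ** 2 * np.exp(-x**2)   # invented two-slit intensity
pattern /= pattern.sum()                       # normalize to probabilities

for n in [100, 1_000, 10_000, 100_000]:
    hits = rng.choice(len(pattern), size=n, p=pattern)      # simulated photons
    empirical = np.bincount(hits, minlength=len(pattern)) / n
    dist = np.abs(empirical - pattern).sum()   # L1 distance to the limit
    print(f"n = {n:>7}: distance to limiting pattern = {dist:.4f}")
```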

This experiment is almost trivial so we should find out pretty quickly which interpretation of quantum mechanics Tipler’s test tells us is right.

Then it boils down to whether you believe his reasoning.

(And not everybody does. When Tipler published his book The Physics of Immortality, one reviewer described it as “a masterpiece of pseudoscience.”)

Let’s hope this paper is received a little more positively than his books.

Ref: arxiv.org/abs/0809.4422: Testing Many-Worlds Quantum Theory By Measuring Pattern Convergence Rates

Why d = 3?

A few posts back I conjectured that the non-triviality of a large fraction of Statistical Physics models in d = 3 might have something to do with the fact that we live in a complex universe that also has d = 3. The article below points in the same direction.

Statistical mechanics and thermodynamic limit of self-gravitating fermions in D dimensions


Authors: Pierre-Henri Chavanis
(Submitted on 14 Aug 2007)
Abstract: We discuss the statistical mechanics of a system of self-gravitating fermions in a space of dimension $D$. We plot the caloric curves of the self-gravitating Fermi gas, giving the temperature as a function of energy, and investigate the nature of phase transitions as a function of the dimension of space. We consider stable states (global entropy maxima) as well as metastable states (local entropy maxima). We show that for $D\ge 4$ there exists a critical temperature (for sufficiently large systems) and a critical energy below which the system cannot be found in statistical equilibrium. Therefore, for $D\ge 4$, quantum mechanics cannot stabilize matter against gravitational collapse. This is similar to a result found by Ehrenfest (1917) at the atomic level for Coulombian forces. This makes the dimension $D=3$ of our universe very particular, with possible implications regarding the anthropic principle. Our study joins a long tradition of scientific and philosophical papers that studied how the dimension of space affects the laws of physics.
Subjects: Statistical Mechanics (cond-mat.stat-mech)
Journal reference: Phys. Rev. E, 69, 066126 (2004)
Cite as: arXiv:0708.1888v1 [cond-mat.stat-mech]