Archive for November 13th, 2011

To Educate Myself Every Once in a While: Thinking About Thinking Too Much

Sunday, November 13th, 2011

Thinking About Thinking Too Much

The theorist learns, late in life, that in a restaurant, for example, when the menu arrives, the best thing to do is to hand the menu to his wife and ask: "My dear, what do I want?"

The problem of thinking too much has a prominent
place in the age-old debate between theory
and practice. Galen’s second-century attempts to
balance between rationalist and empiricist physicians
ring true today. In his Three Treatises on the
Nature of Science (trans. R. Walzer and M. Frede),
Galen noted that an opponent of the new theories
claimed “there was a simple way in which mankind
actually had made enormous progress in medicine.
Over the ages men had learned from dire experi-
ence, by trial and error, what was conducive and
what was detrimental to health. Not only did he
claim that one should not abandon this simple
method in favor of fanciful philosophical theories,
which do not lead anywhere; he also argued that
good doctors in practice relied on this experience
anyway, since their theories were too vague and too
general to guide their practice.” In my own field of
statistics, the rationalists are called decision theorists
and the empiricists are called exploratory data
analysts. The modern debaters make many of the
same rhetorical moves that Galen chronicled.
Economists use Herbert Simon’s ideas of “satisficing”
and “bounded rationality,” along with more
theoretical tools associated with John Harsanyi’s
“value of information.” Psychologists such as
Daniel Kahneman and Amos Tversky accept the
value of the heuristics that we use when we abandon
calculation and go with our gut. They have
created theories of framing and support that allow
adjustment for the inevitable biases. These give a
framework for balancing the decision to keep thinking
versus getting on with deciding.
Computer science explicitly recognizes the limits
of thinking through ideas like complexity theory.
For some tasks, computationally feasible algorithms
can be proved to do reasonably well. Here is
a simple example. Suppose you want to pack two
suitcases with objects of weight a, b, . . . , z. You
want to pack them as close to evenly as you can. It
can be shown that this is a virtually impossible
problem. Despite fifty years of effort, we don’t
know how to find the best method of packing, save
for trying all of the exponentially many possibilities.
Any progress would give a solution to thousands
of other intractable problems. Most of us conclude
that the optimal solution is impossible to find.
Undeterred, my friend Ron Graham proposed the
following: sort the objects from heaviest to lightest
(this is quick to do). Then fill the two suitcases by
beginning with the heaviest item, and each time
placing the next thing into the lighter suitcase.
Here is an example with five things of weight 3, 3,
2, 2, 2. The algorithm builds up two groups as follows:

3 | 3
3, 2 | 3
3, 2 | 3, 2
3, 2, 2 | 3, 2

so the final piles weigh 7 and 5.
This misses the perfect solution, which puts 3, 3 in
one pile and 2, 2, 2 in the other. One measure of
the goodness of a proposed solution is the ratio of
the size of the larger pile to the size of the larger
pile in the optimal solution. This is 7/6 in the
example. Graham proved that in any problem, no
matter what the size of the numbers, this “greedy”
heuristic always does at worst 7/6 compared to the
optimal. We would be lucky to do as well in more
realistic problems.
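
To make the heuristic concrete, here is a minimal Python sketch of the largest-first rule described above; the function name greedy_pack and the tie-breaking choice are my own, not from the text.

def greedy_pack(weights):
    """Graham's largest-first heuristic for splitting weights into two
    suitcases of nearly equal total weight: sort the items from heaviest
    to lightest, then place each item into the currently lighter suitcase."""
    suitcase_a, suitcase_b = [], []
    for w in sorted(weights, reverse=True):
        # Ties go to the first suitcase.
        if sum(suitcase_a) <= sum(suitcase_b):
            suitcase_a.append(w)
        else:
            suitcase_b.append(w)
    return suitcase_a, suitcase_b

# The example from the text: weights 3, 3, 2, 2, 2.
a, b = greedy_pack([3, 3, 2, 2, 2])
print(a, sum(a))  # [3, 2, 2] 7
print(b, sum(b))  # [3, 2] 5
# The optimal split (3, 3 versus 2, 2, 2) has larger pile 6, so this
# example hits Graham's worst-case ratio of 7/6.
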
An agglomeration of economics, psychology, decision
theory, and a bit of complexity theory is the
current dominant paradigm. It advises roughly
quantifying our uncertainty, costs, and benefits
(utility) and then choosing the course that maximizes
expected utility per unit of time. A lively
account can be found in I. J. Good’s book Good
Thinking (don’t miss his essay on “How Rational
Should a Manager Be?”).
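
As a rough, hedged illustration of the "maximize expected utility per unit of time" advice, here is a small Python sketch; the two courses of action and every number in it are invented for illustration only and are not from the essay or from Good's book.

def expected_utility_per_hour(probabilities, utilities, hours):
    """Expected utility (sum of probability times utility over the outcomes),
    divided by the time the course of action is expected to take."""
    expected_utility = sum(p * u for p, u in zip(probabilities, utilities))
    return expected_utility / hours

# Hypothetical numbers: two courses of action, each with two possible outcomes.
options = {
    "keep deliberating": {"probabilities": [0.3, 0.7],
                          "utilities": [10.0, 2.0],
                          "hours": 5.0},
    "decide now":        {"probabilities": [0.5, 0.5],
                          "utilities": [8.0, 4.0],
                          "hours": 1.0},
}

for name, spec in options.items():
    print(name, expected_utility_per_hour(**spec))
# The advice is simply to choose whichever course scores highest.
print("choose:", max(options, key=lambda n: expected_utility_per_hour(**options[n])))
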
To be honest, the academic discussion doesn’t shed
much light on the practical problem. Here’s an
illustration: Some years ago I was trying to decide
whether or not to move to Harvard from Stanford.
I had bored my friends silly with endless discussion.
Finally, one of them said, “You’re one of our
leading decision theorists. Maybe you should make
a list of the costs and benefits and try to roughly
calculate your expected utility.” Without thinking,
I blurted out, “Come on, Sandy, this is serious.”

Some Rules of Thumb

One of the most useful things to come out of my
study is a collection of the rules of thumb my
friends use in their decision making. For example,
one of my Ph.D. advisers, Fred Mosteller, told me,
“Other things being equal, finish the job that is
nearest done.” A famous physicist offered this
advice: “Don’t waste time on obscure fine points
that rarely occur.” I’ve been told that Albert
Einstein displayed the following aphorism in his
office: “Things that are difficult to do are being
done from the wrong centers and are not worth
doing.” Decision theorist I. J. Good writes, “The
older we become, the more important it is to use
what we know rather than learn more.” Galen
offered this: “If a lot of smart people have thought
about a problem [e.g., God’s existence, life on other
planets] and disagree, then it can’t be decided.”
There are many ways we avoid thinking. I’ve often
been offered the algorithm “Ask your wife to
decide” (but never “Ask your husband”). One of
my most endearing memories of the great psychologist
of decision making under uncertainty, Amos
Tversky, recalls his way of ordering in restaurants:
“Barbara? What do I want?”
Clearly, we have a wealth of experience, gathered
over millennia, coded into our gut responses. Surely,
we all hope to call on this. A rule of thumb in this
direction is “Trust your gut reaction when dealing
with natural tasks such as raising children.”
It’s a fascinating insight into the problem of thinking
too much that these rules of thumb seem more
useful than the conclusions drawn from more theoretical
attacks.
In retrospect, I think I should have followed my
friend’s advice and made a list of costs and benefits—
if only so that I could tap into what I was
really after, along the lines of the following "grook"
by Piet Hein:
A Psychological Tip
Whenever you’re called on to make up your mind,
and you’re hampered by not having any,
the best way to solve the dilemma, you’ll find,
is simply by spinning a penny.
No—not so that chance shall decide the affair
while you’re passively standing there moping;
but the moment the penny is up in the air,
you suddenly know what you’re hoping.

Remarks © 2002 by Barry C. Mazur and Persi Diaconis,
respectively.
Photos © 2002 by Martha Stewart.
"A Psychological Tip" is from an English-language edition
of Grooks by Piet Hein, published in Copenhagen
by Borgens Forlag (1982, p. 38); © Piet Hein.

Light: Scientists creating a Universe out of pure light?!

Sunday, November 13th, 2011

Out Of Pure Light, Physicists Create Particles Of Matter

ScienceDaily (Sep. 18, 1997)

http://www.sciencedaily.com/releases/1997/09/970918045841.htm

Light: "You are light!" – Nassim Haramein on two atoms joining together

Sunday, November 13th, 2011

This excerpt was taken from my article "The Interesting View of the Theory of Sacred Geometry" (Feb. 23, 2011), about Nassim Haramein:

Nassim goes on revealing the mistakes of Physics. Noting that protons cluster together inside a nucleus, even though this goes against the general knowledge that two like charges repel each other, physicists invented a new force that does not exist in order to explain it: the strong force. Nassim presents graphs and models showing that the attraction between two protons can be explained if they are, in fact, mini black holes. But carrying the calculations further indicates that two protons in that situation must be orbiting each other at the speed of light! Which means that if an isolated black hole is dark, two of them together are pure light. From there Nassim moves on to the human body, carrying this notion that our atoms are pure light, to conclude: "You are... light!"

xxxx
I should ask physicists about this. Or see whether Nassim has a website and ask him.