gus_massa

> We therefore conclude that theoretically motivated experiment choice is potentially damaging for science, but in a way that will not be apparent to the scientists themselves.

They are analyzing a toy model of science. The details are in figure 1. They have a search space that is a sum of a few Gaussians, like

f(x,y,z) = A0 * exp(-(x-x0)^2-(y-y0)^2-(z-z0)^2) + A1 * exp(-(x-x1)^2-(y-y1)^2-(z-z1)^2)

but maybe in more than 3 dimensions and maybe with more than 2 Gaussians.
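As a rough sketch (not the paper's actual setup), a landscape like the formula above could be written like this, with made-up centers and amplitudes:

```python
import numpy as np

def f(points, centers, amplitudes):
    """Toy search landscape: a mixture of isotropic Gaussians.

    points:     (n, d) array of query points
    centers:    (k, d) array of Gaussian centers (x0, x1, ...)
    amplitudes: (k,) array of peak heights (A0, A1, ...)
    """
    # Squared distance from every point to every center: shape (n, k)
    sq_dist = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return (amplitudes * np.exp(-sq_dist)).sum(axis=-1)

# Two Gaussians in 3 dimensions, as in the formula above
centers = np.array([[0.0, 0.0, 0.0], [3.0, 3.0, 3.0]])
amplitudes = np.array([1.0, 0.5])
print(f(centers, centers, amplitudes))  # values at the two peaks, roughly A0 and A1
```

The same function works unchanged in any number of dimensions and with any number of Gaussians, which is the point of their toy model.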

They want the agents to find all of the Gaussians.

It's somewhat similar to a maximization problem, which is easier. There are many strategies for that, from gradient ascent to random sampling to a million more variants. I like simulated annealing.
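For illustration, a minimal simulated annealing sketch on a toy 1D objective with two Gaussian bumps (all parameters here are made up, just to show the technique):

```python
import math
import random

random.seed(0)

def objective(x):
    """Two Gaussian bumps on a line; the taller one is at x = 4."""
    return math.exp(-(x - 4) ** 2) + 0.5 * math.exp(-(x + 2) ** 2)

def simulated_annealing(x=0.0, steps=20_000, t0=1.0):
    best_x, best_val = x, objective(x)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9              # linear cooling schedule
        cand = min(10.0, max(-10.0, x + random.gauss(0.0, 0.5)))  # bounded local move
        delta = objective(cand) - objective(x)
        # Always accept improvements; accept downhill moves with
        # probability exp(delta / t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
            if objective(x) > best_val:
                best_x, best_val = x, objective(x)
    return best_x, best_val

print(simulated_annealing())  # the best point found should be near the taller peak at x = 4
```

The high early temperature lets the walker escape the smaller bump; as the temperature drops, it settles onto whichever peak it is near. Note this finds *a* maximum, not all of them, which is exactly the gap between maximization and the paper's "map everything" task.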

They claim that the best method is random sampling, but that only works when the search space is small. It breaks quite fast for high-dimensional problems, unless the Gaussians are so big that they cover most of the space, and perhaps I'm being too optimistic. Add noise and overlapping Gaussians and the problem gets super hard.
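A quick way to see how fast uniform random sampling breaks down with dimension (toy numbers, assuming a single peak of fixed radius in a fixed box):

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(dim, n_samples=100_000, radius=1.0, box=10.0):
    """Fraction of uniform random samples that land within `radius`
    of a single peak placed at the center of a [0, box]^dim box."""
    samples = rng.uniform(0.0, box, size=(n_samples, dim))
    center = np.full(dim, box / 2)
    dist = np.linalg.norm(samples - center, axis=1)
    return (dist < radius).mean()

for d in (1, 3, 6, 12):
    print(d, hit_rate(d))
```

The hit rate scales roughly like (radius/box)^dim, so in 54 dimensions (the chemistry example below) a blind sampler essentially never lands near a peak.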

Let's get to a realistic example: all the molecules with 6 Carbons and 12 Hydrogens. Let's try to find all of them and their stable 3D configurations. This is chemistry from the first year of university, perhaps earlier, not cutting-edge science.

You have 18 atoms, so 18 * 3 = 54 dimensions, and the surface of -energy has a lot of mountain ranges and nasty stuff, most of it very sharp. Let's try to find just the local maxima of -energy, which is much easier than the full map. These are the stable molecules, which (usually) have names.

* There is a cyclic one with 6 Carbons, where each Carbon has 2 Hydrogens: https://en.wikipedia.org/wiki/Cyclohexane Note that it actually has two different 3D variants.

* There is one with a cycle of 5 Carbons and 1 Carbon attached to the cycle: https://en.wikipedia.org/wiki/Methylcyclopentane

* There are variants with shorter cycles, but I'm not sure how stable they are and Wikipedia has no page for them.

* There are also 3 linear versions, where the 6 Carbons form a wavy line and there is a double bond in one of the steps: https://en.wikipedia.org/wiki/1-Hexene I'm not sure why the other two versions have no page in Wikipedia. I think they should be stable, but sometimes it's not a local maximum, or the local maximum is too shallow and the double bond jumps and the Hydrogens reorganize.

* And there may be other nasty stuff, take a look at the complete list https://en.wikipedia.org/wiki/C6H12.

And don't try to make the complete list of molecules that include a few Nitrogens, because the number of molecules explodes exponentially.

So the random sampling method they propose does not even work for an elementary Chemistry problem.

MarkusQ

This is really interesting, but it appears to hinge on an unstated (and unjustified) assumption: that scientists learn by back propagation, or something sufficiently similar that back propagation is a reasonable model.

It also:

* Bakes in the assumption that there are no internal mechanisms to be discovered ("Each environment is a mixture of multivariate Gaussian distributions")

* Ignores the possibility that their model of falsification is inadequate (they just test more points near those with high error).

* Does a lot of "hopeful naming" which makes the results easy to misinterpret as saying more about like-named things in the real world than it actually does.

briandw

This reminds me of [Why Greatness Cannot Be Planned](https://mythoftheobjective.com). Looking at scientific discovery, there are many examples of happy accidents. The researchers were not intending to find the breakthrough that they did; it was the willingness to change course and explore a new and interesting thing they had just stumbled onto. Examples: penicillin, superglue, radioactivity, the cosmic background radiation, etc. I loved the example of Robert Williams, who pointed the HST at an empty patch of sky for 10 days. He had his time allocated and no one could stop him, but the other astronomers thought it a poor use of resources. It resulted in the famous Hubble Deep Field image.

A counterexample is the decades during which the amyloid cascade hypothesis was the only allowed / funded line of research on Alzheimer's disease.

armchairhacker

In real life, can you choose an experiment perfectly randomly?

You can ask many people to propose hypotheses and choose one at random, and perhaps with a good sample you get better experiments. You can query a Markov chain until it produces an interpretable hypothesis. But the people (or the Markov chain, because of English itself) have significant bias.

Also, some experiments have wider-reaching implications than others (this is probably more relevant for the Markov chain, because I expect the hypotheses it forms to be like "frogs can learn to skate").

gavinray

> We find that agents who choose new experiments at random develop the most informative and predictive theories of the world.

There's a neat book about this: "Why Greatness Cannot Be Planned (The Myth of the Objective)".

https://www.goodreads.com/book/show/25670869-why-greatness-c...

Incidentally, the author works at OpenAI these days.

https://en.wikipedia.org/wiki/Kenneth_Stanley

Zobat

I fully admit that I only skimmed the abstract, but I was reminded of an article in Wired about Sergey Brin and his search for a Parkinson's cure.

https://www.wired.com/2010/06/ff-sergeys-search/

He went backwards and started by just collecting an absurd amount of data. Later, while talking to a researcher, he could confirm years of research with a "simple" search in his database.

youknownothing

This is a thought-provoking idea but, even if true, I don't think it will gain much traction. We humans like to be right and earn awards for our predictions. A Nobel wouldn't feel quite the same if given to someone who just happened to randomly stumble upon something.

selridge

Weird that this doesn't mention grounded theory, a social-theory toolkit which people pooh-pooh for Popperian purposes.

lutusp

This idea suffers from a number of practical obstacles:

One, in a sufficiently advanced field of study, an idea's originator may be the only person able to imagine an experimental test. I doubt that many physicists would have immediately thought that Mercury's unexplained orbital precession would serve to either support or falsify Einstein's General Relativity -- but Einstein certainly could. Same with deflected starlight paths during a solar eclipse (both these effects were instrumental in validating GR).

Two, scientists are supposed to be the harshest critics of their own ideas, on the lookout for a contradicting observation. This was once part of a scientist's training -- I assume this is still the case.

Three, the falsifiability criterion. If an experimental proposal doesn't include the possibility of a conclusive falsification, it's not, strictly speaking, a scientific idea. So an idea's originator either has (and publishes) a falsifying criterion, or he doesn't have a legitimate basis for a scientific experiment.

Here's an example. Imagine if the development of the transistor relied on random experimentation with no preferred outcome. In the event, the inventors at Bell Labs knew exactly what they wanted to achieve -- the project was very focused from the outset.

Another example. Jonas Salk (polio vaccine) knew exactly what he wanted to achieve, his wasn't a random journey in a forest of Pyrex glassware. It's hard to imagine Salk's result arising from an aimless stochastic exploration.

So it seems science relies on people's integrity, not avoidance of any particular focus. If integrity can't be relied on, perhaps we should abandon the people, not the methods.
