I once did a research project on how much ideas versus implementation effort matter for things getting done. If I recall correctly, at least part of this involved looking up when ideas first appeared and when they were implemented, and inferring that if the gap was short, the idea was the bigger bottleneck.
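To make that heuristic concrete, here is a minimal sketch. The dates and the ten-year threshold are made up purely for illustration; neither comes from the original project.

```python
# Hypothetical (idea_year, implementation_year) pairs.
# Data and the 10-year threshold are invented for illustration only.
EXAMPLES = {
    "invention_a": (1900, 1903),  # short gap: the idea was likely the bottleneck
    "invention_b": (1820, 1890),  # long gap: implementation was likely the bottleneck
}

SHORT_GAP_YEARS = 10  # assumed cutoff for a "short" gap

def likely_bottleneck(idea_year: int, implementation_year: int) -> str:
    """If implementation followed soon after the idea appeared, the scarce
    ingredient was probably the idea itself; a long lag suggests that
    implementation effort (or enabling technology) was the constraint."""
    gap = implementation_year - idea_year
    return "idea" if gap <= SHORT_GAP_YEARS else "implementation"

for name, (idea, impl) in EXAMPLES.items():
    print(f"{name}: gap = {impl - idea} years, bottleneck: {likely_bottleneck(idea, impl)}")
```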
I mentioned this to my boyfriend, and he was amused at my style of research question: it sounded like it was from hundreds of years ago. These days, people usually research things like ‘for this particular species of microbe, is x or y more important to its behavior in this phase?’ whereas my question was incredibly high level.
I agreed with that, but then wondered why.
One theory is that it’s actually silly to ask such high-level questions: they are just too general to have good answers. We didn’t know that two hundred years ago; we do now.
Another theory is that all the high-level questions have been answered, so at this point in history we are filling in the gaps: the specifics of particular microbes, and everything else.
A more interesting theory he suggested is that it relates to professionalism. Science has become much more professional, and that is somehow contrary to asking questions like this. Being oriented toward ‘having a job’ is different from trying to understand the world. I liked this theory, because it fits with my broader hypothesis that professionalism is terrible.
Got any more theories?
Image by Piotr Zakrzewski from Pixabay

I think this relates to two primary factors:
1) We now have a better understanding of how high-dimensional and complex both the world and causation are. Most of the edifice of science, particularly post-Replication Crisis, is concerned with deliberately narrowing the Garden of Forking Paths: defining the exact area you're going to focus on, then specifying clear intents, terms, and tests beforehand. If you DON'T do this, reality is high-dimensional enough that you can always come up with something, and that something won't be true or replicable, and all your effort will have been wasted.
2) Related to this is the tyranny of legibility. Precisely because we're concerned with truth, replicability, and cashing out in better predictions, you have to quantify everything, and we can only do that at small scales in legible domains.
Big questions aren't really quantifiable that way. Say you found a general ratio that was broadly true for your big problem: ideas are the bottleneck 5-30% of the time, and execution the other 70-95%. Now what? What do we DO with that information?
Is it interesting to know? I guess? Might it be interesting or useful on a personal level, because you know that as a general prior, once you come up with an idea you still have 70-95% of the work to do? I guess? But wouldn't you have figured that out anyway, by *doing*? Aren't the specific circumstances of your field, or the types of problems you usually grapple with, actually the more important prior to know? Is it useful societally? Probably not? Because it seems really variable, and like a "devil in the details" sort of domain? And most people in science or business or wherever will ALREADY freely tell you "ideas are 5%, execution is 95%"?
I think these two factors broadly generalize. We are too limited individually and collectively to actually address larger questions in rigorous and useful ways.
But what did you find out about ideas and implementation!?