A counterargument that isn't
Let’s say that you’re in a discussion, and it goes something like this:
- We should institute regulatory measures, like taxes, to inhibit the growth of the economy so that we can slow down or stop climate change.
- That makes sense if you think that climate change is the biggest of the possible threats, but it probably isn’t. Growing our economy is the fastest way to get us over a certain technological capability threshold, which means that we’re that much more equipped to work on even more pressing issues than climate change (like aging, or S-risks).
- That makes sense if you think that economic growth is what’s necessary, but just because there’s a strong historical correlation doesn’t guarantee that more economic growth will lead to more revolutionary discoveries. You could very well imagine a world of diminishing incremental improvements on the new iPhone, or some such. All the while, you’re ignoring other possible avenues of governance that would get us over that same technological threshold (e.g. certain forms of technocracy).
- That makes sense if you think that there’s any way something that is not modern-day capitalism can produce the same level of output as modern-day capitalism. And even ignoring the practicalities of such an endeavor (you’d essentially have to make the entire world switch over to a new paradigm, which is extremely difficult in practical terms), a) the game theory doesn’t seem to be there, and b) the tail risks do seem to be there. Meaning, it’s uncertain that there actually is some arrangement in which parties, as they exist today, would transact with each other and raise the overall net value/profit/whatever for society in general, and that such an arrangement isn’t the regulated market capitalism we’re running on today. Moreover, even if you do find some game-theoretical arrangement that would work better than what we have today, there’s still the question of unknown unknowns, of tail risks. This is the problem of things sounding good “on paper”: you miss something important and/or tacit, something you didn’t think about, and that fundamentally changes everything. Returning to the original claim: if we try to inhibit the economy, sure, we might improve our environment in the short term, but look at what you’re doing in the long term. You’re preventing the unlocking of potential, going directly against incentives (smart people seeing the chance that they’ll become insanely rich if they provide value at scale). You’re putting the entire planet in neutral and hoping that somehow you’ll get out of the other threats that aren’t climate change.
- That makes sense if you think that…
This back-and-forth is good and welcome, and the people in this discussion are pretty respectful and are trying to learn. But one thing that jumps out at me, here and in discussions in general, is that the counterarguments aren’t actually counterarguments.
What you actually have is a counterargument to a certain part of an argument, and counterarguments aren’t something you can tally up so that whoever has more wins. So it’s not like I say “A”, you say “B”, and we arrive at a state of “B over A; B is right, A is wrong”. It’s more like I say “A”, you say “actually B, because of certain problems with A”, I say “actually C, because of certain problems with B”, and so on. The counterarguments get more and more informative that way, because in a discussion you’re imparting large amounts of information about edge cases. You can see it in the physical size of the text.
I first had the idea that counterarguments aren’t actually counterarguments when I read Bostrom’s crucial considerations talk. In it, he talks about whether it makes sense to vote, and goes back and forth: yes because A, but no because actually B, but actually yes because C, but actually no because D, and so on.
Each of the counterarguments isn’t actually a full-blown takedown of the argument it responds to. Setting aside the decision itself (what we should do with our conclusion), counterarguments are really just angles, other ways of seeing, filters, positions that allow new information to come to light. That’s why their name is so wrong: it implies that they negate the original argument, and they mostly don’t. At the very best, a counterargument negates a portion of the original argument, but it hardly ever negates the appropriateness of a decision based on that argument.
A mathematical analogy: decision-making doesn’t actually look like a series of arguments and counterarguments that cross each other out, with whatever remains winning. The argument-counterargument dynamic produces a set of parameters. You then assign weights to the parameters; depending on how important they are, they affect the final decision more or less. This math can be made explicit, but with good-faith decision-makers it’s already there, implicitly. Meaning, if someone actually considers your argument, even if they provide a counterargument that seems to “negate” your original claim, they will have integrated your claim and will assign some weight to it.
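To make the analogy explicit, here’s a minimal sketch in Python. Every consideration, weight, and score in it is invented purely for illustration, not an actual model of the discussion above:

```python
# A toy model of the "weights, not tallies" picture of decision-making.
# All considerations, weights, and scores below are invented for illustration.

considerations = {
    # name: (weight, score), where score > 0 favors the proposal and score < 0 opposes it
    "less growth reduces climate risk": (0.3, +0.8),
    "slower growth delays work on other pressing risks": (0.4, -0.6),
    "growth -> breakthroughs link is only a correlation": (0.2, +0.3),
    "tail risks of switching economic paradigms": (0.1, -0.9),
}

def decide(considerations):
    """Weighted sum: every argument contributes; none is simply crossed out."""
    return sum(weight * score for weight, score in considerations.values())

net = decide(considerations)
print(f"Net score: {net:+.2f}")
print("Lean towards the proposal" if net > 0 else "Lean against the proposal")
```

The point of the toy model isn’t the numbers; it’s that a counterargument changes a weight or adds a parameter rather than deleting the argument it responds to.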
“Counterargument” is soldier language instead of scout language, and it obscures the non-negating reality of counterarguments.