It is the perfect and correct antidote to any slippery slope argument. If the consequences of the law turn out to be as bad as you say they will be, then we adjust the law.
A bizarrely bad approach. By then a lot of the damage would already be done, and, more importantly, changing the status quo is inherently much harder than doing nothing, so going back won’t necessarily be straightforward.
Claiming that “slippery slope” is always a fallacy is a gross misconception and misinterpretation. It varies case by case; very often it is a perfectly rational argument.
“Let’s restrict democracy and individual freedoms just a bit, maybe an authoritarian strongman is just what we need to get us out of this mess, we can always go back later..”
“Let’s try scanning all personal communication in a non-intrusive way, and if it doesn’t solve CSAM problems we can always adjust the law”, right.. as if that was ever going to happen.
Some lines need to be drawn that can never be crossed regardless of any good and well reasoned intentions.
I very heavily disagree here, we aren't doing as much of this as we should be.
Society is too complex of a system to predict what consequences a law will have. Badly written laws slip through. Loopholes are discovered after the fact. Incentives do what incentives do, and people eventually figure out how to game them to their own benefit. First order effects cause second order effects, which cause third order effects. Technology changes. We can't predict all of that in advance.
Trying to write a perfect law is like trying to write a perfect program on your first try, with no testing or verification, just reasoning about it in a notebook. If the code or law is of any complexity, it just can't be done. Programmers have figured this out and come up with ways to mitigate the problem, from unit testing and formal verification to canaries, feature flags, blue-green deployments and slow rollouts. Lawmakers could learn those same lessons (and use very similar strategies), but that is very rarely done.
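To make the analogy concrete, here is a minimal sketch of the staged-rollout technique mentioned above (the flag name, user id, and percentages are invented for illustration): a change is enabled for a stable, deterministic fraction of users, the fraction is ramped up while you watch for bad outcomes, and it can be ramped back down if things go wrong.

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically bucket a user into a staged rollout.

    Hashing (flag, user_id) yields a stable bucket in [0, 100),
    so the same user always gets the same behavior, and raising
    `percent` only ever adds users -- it never flips existing ones.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Ramp schedule: start small, observe, widen -- or roll back.
for percent in (1, 10, 50, 100):
    enabled = in_rollout("user-42", "new-policy", percent)
```

The point of the analogy: a law rolled out this way (pilot regions, sunset clauses, mandatory review) would get the same cheap, reversible feedback that software deployments get.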
In the same post you are arguing for and against "slippery slope".
Either it is easy to change a law to make it worse (so "slippery slope" is a valid objection), or changing a law is "much harder than doing nothing" (so "slippery slope" is a fallacy).
>Some lines need to be drawn that can never be crossed regardless of any good and well reasoned intentions.
Too late. We already let the government cross the lines during Covid with freedom of movement and freedom of speech restrictions, and they got away with it because it was "for your protection". Now a lot of EU countries are crossing them even further, also "for your protection", due to "Russian misinformation" and "far right/hate speech" scaremongering, which at this point is a label applied loosely to anyone speaking against unpopular government policies or exposing their corruption.
And the snowball effect continues. Governments are only increasing their grip on power (looking enviously at what China has achieved), not loosening it back. And worse, not only are they more authoritarian, but they're also practicing selective enforcement of said strict rules with the justification that it's OK because we're doing it to the "bad guys". I'm afraid we aren't gonna go back to the levels of freedom we had in 2014-2019, that ship has long sailed.
Nothing is more permanent in politics than a temporary solution. As a Norwegian, for example, I am still paying a 25% tax on all spending that was enacted as a "temporary" measure over 100 years ago.
Control theory does not work (in general) for politics for the simple reason that incentives are misaligned. That is to say, control theory itself obviously works, but for it to be a good solution in some political context you must additionally prove the existence of some Nash equilibrium in which it is being correctly applied.
The thesis argues that dictators regularly both harm groups clearly inside the winning coalition and please groups clearly outside of it. A common, but not the only, reason is ideology.
One has to be careful when using game-theory models on messy human entities. Sometimes it works, sometimes it doesn't, and it's hard to determine just at what point the model breaks down. At least without empirical research.
(Another example is that actual negotiation outcomes rarely end up at the minimax or Nash-product equilibria that game-theoretic sequential-negotiation models would suggest.)
> If the consequences of the law turn out to be as bad
This is the usual "the market will regulate itself" argument. It works when the imbalance arises organically, not so much when it's intentional on the side with more power and part of their larger roadmap.
The conflict of interest needs to be accounted for. Consequences for whom? Think of initiatives like generic backdooring of encrypted communication from which legislators are exempt. If legislators aren't truly dogfooding the results of that law, then there's no real "market pressure" to fix anything. There's only "deployment strategy": roll out the changes slowly enough that the people have time to acclimate.
Control theory doesn't apply all that well to dynamical systems made entirely of human beings. You need psychohistory for that.