Do other devs just open up multiple vs code windows for each project or something?
I can't stand not being able to run everything in the same window, with Ctrl+P picking up files from across projects as a reference.
I feel like I'm the odd one out, because I've noticed a lot of languages and language servers making these assumptions about how devs work and organise code.
Or they're just being perfectionist opinionated twats.
> The compiler literally takes all the mental overhead away.
Increasing the strictness of the language has one effect: decreasing the number of potential solutions for a given problem.
If the restrictions are carefully chosen (like they are in Rust), this leads to generally safer code. But don't fool yourself: the restrictions don't generate new solutions - they only reject the ones that don't pass the checks.
A more extreme example is formal theorem provers. Carefully constructed proofs take a lot of effort, but they also make you confident that the code does what it needs to do.
The Rust borrow checker is essentially a more restricted theorem prover that doesn't touch the logic; it only deals with the memory side of things. It's indeed very helpful in explaining what's wrong (and even suggesting fixes), but it doesn't take the programmer's overhead away.
In a more complex system, Rust inevitably makes it harder to come up with solutions. It will reject valid code just because it can't prove it's right, not because it's actually wrong. As a programmer, you're going to come up with such solutions. In time you get more used to writing code that Rust likes, and Rust gets better at accepting correct solutions, but you're still going to have to fight the borrow checker sometimes.
I don't have a ton of experience with Rust, but I've encountered cases where equivalent C++ code would've worked just fine, yet I had to change it because Rust didn't like it.
Rust is an amazing language, but it definitely doesn't 'take all the mental overhead away'.
It's much like the difference between "how" and "what". SQL select statements are my favored example of declarative: you simply "declare" what you want ("all rows from this table with these values in those columns") and it's up to an engine to actually figure out how to do that for you. In an imperative language like C you could produce the same output, but you'd be specifying how, not just what: a for loop to iterate over each row in the table, if statements to compare values, etc.
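To make that contrast concrete, here's a toy sketch in Python (hypothetical rows standing in for a table; the SQL in the comment is only a shape reference):

```python
# A tiny stand-in for a table. The declarative "what" is roughly:
#   SELECT * FROM people WHERE age >= 18
rows = [{"name": "ann", "age": 30}, {"name": "bob", "age": 17}]

# Imperative: spell out HOW, i.e. loop over each row and test it yourself
adults = []
for row in rows:
    if row["age"] >= 18:
        adults.append(row)

# More declarative: state WHAT you want; the comprehension machinery
# handles the iteration and accumulation for you
adults_decl = [row for row in rows if row["age"] >= 18]
```

Both produce the same result; the difference is only in how much of the "how" you had to spell out.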
But each layer’s what is the layer above its how. My C code is just declaring the program I want to run. How it is run is determined by a compiler, assembler, linker, operating system, and CPU.
There’s a sort of useless sense in which everything is declarative if you are allowed to shift around what the relevant “thing to be done” is, and it seems like many people are complaining that the article does this exact thing.
Sure, it's a spectrum and the exact edges can be blurry. But no one's going to argue that there isn't a qualitative difference between a SQL query and the same logic written out in C.
And I’m not arguing that. The article is dealing with those blurry edges, though, and what layer “matters” in evaluating the distinction is very much at issue. So if we’re going to compare to SQL/C, we need to bring in the relevant analogous nuances.
I'm not even sure I agree with the author with that specifically:
> Flow control? Imperative.
> No flow control? Declarative.
I think I'd rather say:
Imperative: Tell the system what to do.
Declarative: Tell the system what you would like to be. The system figures out what to do to get there.
You could well have flow control to say things like "if things are like that, I want things to be such", or "here is a loop building up what I would like the state of the system to look like", still without telling the system what to do to get to that state.
> I wonder if a language can be Turing-complete without flow control. My guess is no?
Depends on your definition of "flow control". Untyped lambda calculus is effectively just the declaration and application of anonymous single-argument functions, yet it's Turing complete.
You can even try it in any language that has lambdas as first-class citizens; e.g., here's a factorial function I wrote in lambda calculus, but implemented in Python:
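(The original snippet didn't survive here; below is a reconstruction of what such a definition typically looks like: Church numerals for the numbers, Church booleans for the branch, and a strict fixed-point combinator for the recursion.)

```python
# Factorial from nothing but one-argument lambdas.
TRUE = lambda a: lambda b: a            # Church boolean: selects its first argument
FALSE = lambda a: lambda b: b           # Church boolean: selects its second argument

ZERO = lambda f: lambda x: x            # Church numeral: apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))
PRED = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
ISZERO = lambda n: n(lambda _: FALSE)(TRUE)

# Z combinator: the fixed-point combinator that works under Python's
# strict evaluation (an eta-expanded Y combinator)
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

ONE = SUCC(ZERO)

# The branches are wrapped in thunks (lambda _: ...) so the recursive
# case isn't evaluated eagerly; ISZERO picks a thunk, then we force it.
FACT = Z(lambda fact: lambda n:
         ISZERO(n)(lambda _: ONE)(lambda _: MULT(n)(fact(PRED(n))))(ZERO))

to_int = lambda n: n(lambda i: i + 1)(0)  # decode a Church numeral for display
```

Note that `TRUE`/`FALSE` are exactly the "lambdas that implement flow control" mentioned below: they select one of two thunks, which is a conditional built out of pure application.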
Does it have flow control? Depends. No Python flow control is used, but some of those lambdas implement flow control. They're not part of the language (lambda calculus) itself, though; we just build them.
Would I call lambda calculus declarative? Not at all!
Yep. That’s why this is a potentially useful rule of thumb for comparing programming languages, especially ones that skew very far in one direction or the other, but it’s not a rigorous definition of the concepts.
Some incredibly simple algorithms are difficult to explain naturally without it sounding like control flow. How would you describe the absolute value function without saying “if” or “when” or specifying one particular element from a set? Obviously any computer system needs to be able to determine what steps to take depending on specified conditions.
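For instance, here are two Python attempts to define absolute value "without if"; each just moves the decision somewhere else:

```python
def abs_via_max(x):
    # "the larger of x and -x": no if statement, but max is still
    # specifying one particular element from a set
    return max(x, -x)

def abs_via_sqrt(x):
    # purely arithmetic, no selection at all, but it only works for
    # numbers and goes through floating point
    return (x * x) ** 0.5
```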
The author seems as confused as the people he complains about.
Take his example of `x = a ? b : c`: I've never seen anybody claim that code like this makes a language imperative. Instead, I've seen many people call this construct a "functional if".
If you go with his definition, only simple variable declaration is declarative.
(By the way, a very popular definition is that imperative languages have flow, not flow control. That is, it makes a difference if you execute the instructions in a different order. That one is reasonable, but then, why single out non-associativity of operations when there are many other things that no language makes non-associative?)
> I wonder if a language can be Turing-complete without flow control. My guess is no?
I agree, unless something like pattern-matching does not count as control flow. But then isn't a conditional just a basic pattern? Why does it get a pass?
> I wonder if a language can be Turing-complete without flow control. My guess is no?
I can’t imagine why not. I think you have to make the distinction between explicit control flow in the syntax, which declarative languages don’t have (according to this rule of thumb), and the notion of a computer choosing to do one thing among several options depending on some condition. Of course any Turing-complete system can do the latter, but that’s totally unrelated to the syntax of the programming language.
Function calls don't make something declarative, otherwise C is declarative. And once you accept that, then words have no meaning anymore and we can't meaningfully discuss the topic.
Expressions instead of statements is a key tool to make things declarative; the functional and logic paradigms are both declarative programming paradigms, as opposed to the structured/procedural and OO paradigms, which are imperative.
I don't know what you're trying to get at, you seem to be agreeing with me. Function calls (present in C and other structured, procedural, and OO paradigm languages which are also imperative) do not make a language declarative.
That they're also present in the functional and logic paradigms doesn't change that. It's a feature common to both, so not a feature that can be used to distinguish the categories from each other.
And regarding expressions: Rust is more expression-oriented than most mainstream procedural languages today, and it's not declarative, though it may have a more declarative style (particularly its heavy use of iterators) compared to many other imperative languages.
> Function calls (present in C and other structured, procedural, and OO paradigm languages which are also imperative) do not make a language declarative.
“Declarative” and “imperative” are not really features of languages in the first place. They are features of code (and coding paradigms), and you can write code of either style (and usually most paradigms) in almost any real-world, Turing-complete, higher-level-than-assembly language.
The specific change you made the non-sequitur response about function calls to upthread, though (replacing an imperative loop with a map call), did make the code it was in more declarative, and it's typical of the ways that the functional paradigm (where map is typically the idiomatic way to do that) is more declarative than the structured/procedural paradigm (where building up a collection in an imperative loop would be).
Exactly. You could even write declarative-style C, or make your code _more_ declarative. It’s not some binary decision. It generally just means writing higher-level language describing the result you want, rather than all the instructions to produce it. Functional languages are very good at letting you write declarative code, especially when it comes to dealing with sequences and lists. Ruby’s Enumerator in particular allows you to write code in that style, rather than complicated nested for loops full of conditional logic.
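A sketch of that loop-vs-pipeline contrast (in Python rather than Ruby, purely for illustration):

```python
data = [1, 2, 3, 4, 5]

# Imperative: manage the accumulator and the iteration yourself
odd_squares = []
for n in data:
    if n % 2 == 1:
        odd_squares.append(n * n)

# More declarative: describe the result as a filter-and-map over the
# sequence; no accumulator, no explicit loop bookkeeping
odd_squares_decl = [n * n for n in data if n % 2 == 1]
```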
Unless they're talking about layers, with the function prototypes and function calls being declarative, and the next layer down being where the code is actually written out. Even that doesn't really work, though; main() usually has code in it... <shrug>
That’s a great rule of thumb to quickly identify whether some new syntax is more likely to be imperative or declarative, but it’s not a great explanation of what the two terms mean for someone new to the terms. A banana doesn’t have control flow but it’s definitely not a declarative programming language.
The "has control flow" vs "doesn't have control flow" description is a bad one, an attempt to reduce "declarative vs imperative" down to syntax instead of semantics. The parent to my comment is still in imperative style, despite having no control flow: It's specifying what actions to take to reach the desired outcome ("eat"). Declarative is describing the desired outcome, then letting the system figure out how to get there.
When you write an SQL query, you're not describing steps to take, you're describing the outcome you want (what rows and fields from which tables and how those tables relate to each other). The query engine then takes that description and figures out how to actually do the query (what indices to use, in what order, how to actually loop/hash/map/scan the tables and indices, etc). This is what EXPLAIN shows you, the actual imperative plan the query engine came up with given the declarative query you gave it.
Another common example would be CSS: It's all about describing the desired result and spatial relationships between elements, then the CSS engine takes those rules and figures out how to actually get those elements to do what you want. Like something as seemingly simple as "float: left" - for the text to wrap around it, the engine has to deal with the image size, position, font and font size, figuring out when to wrap the text, height of the font, which line of text is greater than the image height, etc. The engine does a lot of work figuring out the steps to take to get the result you've described.
I'd print off a piece of legacy code that was particularly ugly and ask them to review it.
I'd tell them I wasn't looking for silly stuff like parser errors, and instead was just looking for their opinion on the code: how they thought it worked, and how they would improve it or do it differently.
It was mostly just an interview prompt with a consistent starting point that led to interesting insights about the interviewee, but generally had repeated talking points across candidates.
It's a completely free guide teaching content creators how to make courses. For some context, the company I made this for is a course creation platform like Teachable or Podia, so we want to make the best resources for our customers and potential customers :)