Defer might be better than nothing, but it's still a poor solution. An obvious example of a better, structural solution is C#'s `using` blocks.
using (var resource = acquire()) {
} // implicit resource.Dispose();
While we don't have the same simplicity in C because we don't use this "disposable" pattern, we could still perhaps learn something from its syntax and use a secondary block to have scoped defers. Something like:
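A rough sketch of how that could look with a macro (SCOPED is a made-up name, not an existing facility, and note that break/goto/return out of the block would skip the cleanup):

#include <stdio.h>

/* Run a declaration, a body, then a cleanup expression when the body's
   scope ends - built from two single-iteration for loops. */
#define SCOPED(decl, cleanup) \
    for (int scoped_done_ = 0; !scoped_done_; ) \
        for (decl; !scoped_done_; scoped_done_ = 1, (cleanup))

int main(void) {
    SCOPED(FILE *f = fopen("out.txt", "w"), f ? fclose(f) : 0) {
        if (f)
            fputs("hello\n", f);
    }
    return 0;
}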
That is a different approach, but I don't think you've demonstrated why it's better. Seems like that approach forces you to introduce a new scope for every resource, which might otherwise be unnecessary:
using (var resource1 = acquire()) {
    using (var resource2 = acquire()) {
        using (var resource3 = acquire()) {
            // use resources here..
        }
    }
}
Compared to:
var resource1 = acquire();
defer { release(resource1); }
var resource2 = acquire();
defer { release(resource2); }
var resource3 = acquire();
defer { release(resource3); }
// use resources here
Of course if you want the extra scopes (for whatever reason), you can still do that with defer, you're just not forced to.
While the macro version doesn't permit this, if it were built-in syntax (as in C#) we could write something like:
using (auto res1 = acquire1(); free(res1))
using (auto res2 = acquire2(); free(res2))
using (auto res3 = acquire3(); free(res3))
{
    // use resources here
}
// free(res3); free(res2); free(res1); called in that order.
The argument for this approach is that it is structural. `defer` statements are not structural control flow: they're `goto` or `comefrom` in disguise.
---
Even if we didn't want to introduce a new scope, we could have something like F#'s `use`[1], which makes the resource available until the end of the scope in which it was introduced.
use auto res1 = acquire1() defer { free(res1); };
use auto res2 = acquire2() defer { free(res2); };
use auto res3 = acquire3() defer { free(res3); };
// use resources here
In either case (using or use-defer), the acquisition and release are coupled together in the code. With `defer` statements they're scattered as separate statements. The main argument for `defer` is to keep the acquisition and release of resources together in code, but defer statements fail at doing that.
Defer is a restricted form of COMEFROM with automatic labels. You COMEFROM the end of the next `defer` block in the same scope, or from the end of the function (before `return`) if there is no more `defer`. The order of execution of defer-blocks is backwards (bottom-to-top) rather than the typical top-to-bottom.
`defer` is obviously not implemented in this way; the compiler will re-order the code to flow top-to-bottom and use fewer branches, but the control flow is effectively the same thing.
In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.
Doesn't help. The point is to avoid having to invoke a C compiler when working in X language. But it would certainly be nice if the distros enabled that.
`inline` is a hint, but he declares `static_inline` in the preprocessor to include `__attribute__((__always_inline__))`, which is more than just a hint. However, even `always_inline` may be troublesome across translation units. We can still inline across translation units with `-flto`, though I believe there are occasional bugs. For libraries we'd also want to use `-ffat-lto-objects`.
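For reference, the kind of definition being described looks something like this (the article's exact macro may differ):

/* Use the attribute where it's available, otherwise fall back to a plain hint. */
#if defined(__GNUC__) || defined(__clang__)
#  define static_inline static inline __attribute__((__always_inline__))
#else
#  define static_inline static inline
#endif

static_inline int clamp_u8(int x) {
    return x < 0 ? 0 : (x > 255 ? 255 : x);
}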
Say I am writing a transpiler to C, and I have to choose whether I will target C89, or some arbitrary subset of C23. When would I ever choose the latter?
The only benefit I could think of is where you're also planning to write a new C compiler, and this is simplified by the C being restricted in some way. But if you're doing this, you're just writing a frontend and backend, with an awkward and unnecessary middle-end coupling to some arbitrary subset of C. What's the benefit of C being involved at all in this scenario?
And say you realise this, and opt to replace C with some kind of more abstract, C-like IR. Aren't you now just writing an LLVM clone, with all the work that entails? When the original point of targeting C was to get its portability and backends for free?
`uintptr_t` and `intptr_t` are integer types large enough to hold a pointer. They're not pointer types (and they're also optional in the standard).
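To illustrate: a pointer can round-trip through `uintptr_t`, but the integer value is not itself a pointer:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    /* The pointer converted to uintptr_t and back compares equal to the
       original; what the integer value looks like is implementation-defined. */
    uintptr_t bits = (uintptr_t)(void *)&x;
    int *p = (void *)bits;
    printf("%d\n", *p); /* 42 */
    return 0;
}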
In the first `my_func`, there is the possibility that `a` and `b` are equal if their struct layouts are equivalent (or one has a proper subset of the other's fields in the same order). To tell the compiler they don't overlap we would use `(strong_type1 *restrict a, strong_type2 *restrict b)`.
There's also the possibility that the pointers could point to the same address but compare non-equal - e.g. if LAM/UAI/TBI are enabled, a simple pointer equality comparison is not sufficient because the high bits may not be equal. Or on platforms where memory access is always aligned, the low bits may not be equal. These bits are sometimes used to tag pointers with additional information.
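For example, on a hypothetical 64-bit platform using top-byte tags, an address comparison would need to mask the tag bits first (the mask below assumes an 8-bit tag in the top byte and is purely illustrative):

#include <stdint.h>

/* Compare the addressable bits only, ignoring a hypothetical top-byte tag. */
#define ADDR_BITS_MASK ((uintptr_t)0x00FFFFFFFFFFFFFFull)

static int same_address(const void *a, const void *b) {
    return ((uintptr_t)a & ADDR_BITS_MASK) == ((uintptr_t)b & ADDR_BITS_MASK);
}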
I was disappointed when MS discontinued Axum, which I found pleasant to use; I thought the language-based approach was nicer than a library-based solution like Orleans.
The Axum language had `domain` types, which could contain one or more `agent` and some state. Agents could have multiple functions and could share domain state, but not access state in other domains directly. The programming model was passing messages between agents over a typed `channel` using directional infix operators, which could also be used to build process pipelines. The channels could contain `schema` types and a state-machine like protocol spec for message ordering.
It didn't have "classes", but Axum files could live in the same projects as regular C# files and call into them. The C# compiler that came with it was modified to introduce an `isolated` keyword for classes, which prevented them from accessing `static` fields, which was key to ensuring state didn't escape the domain.
The software and most of the information was scrubbed from MS's own website, but you can find an archived copy of the manual[1]. I still have a copy of the software installer somewhere, but I doubt it would work on any recent Windows.
Sadly this project was axed before MS had embraced open source. It would've been nice if they had released the source when they decided to discontinue working on it.
> I would use std::uint64_t which guarantees a type of that size, provided it is supported.
The comment on the typedef points out that the signature of the intrinsics uses `unsigned long long`, though he incorrectly states that `uint64_t` is `unsigned long` - which isn't true in general, as `long` is only guaranteed to be at least 32-bits and at least as large as `int`. In ILP32 and LLP64, for example, `long` is only 32-bits.
I don't think this really matters anyway. `long long` is 64-bits on pretty much everything that matters, and he is using architecture-specific intrinsics in the code so it is not going to be portable anyway.
If some future arch had 128-bit hardware integers and a data model where `long long` is 128-bits, we wouldn't need this code at all, as we would just use the hardware support for 128-bits.
But I agree that `uint64_t` is the correct type to use for the definition of `u128`, if we wanted to guarantee it occupies the same storage. The width-specific intrinsics should also use this type.
> I would be interested to see how all these operations fair against compiler-specific implementations
There's a godbolt link at the top of the article which has the comparison. The resulting assembly is basically equivalent to the built-in support.
(that's the best linkable reference I could find, unfortunately).
I've run into a similar problem where an overload resolution for uint64_t was not being used when calling with a size_t because one was unsigned long and the other was unsigned long long, which are both 64 bit uints, but according to the compiler, they're different types.
This was a while ago so the details may be off, but the silly shape of the issue is correct.
This was my point. It may be `unsigned long` on his machine (or any that uses LP64), but that isn't what `uint64_t` means. `uint64_t` means a type that is exactly 64-bits, whereas `unsigned long` is simply a type that is at least as large as `unsigned int` and at least 32-bits, and `unsigned long long` is a type that is at least as large as `unsigned long` and at least 64-bits.
I was not aware of compilers rejecting the equivalence of `long` and `long long` on LP64. GCC on Linux certainly doesn't. On Windows it would be the case because it uses LLP64, where `long` is 32-bits and `long long` is 64-bits.
An intrinsic like `_addcarry_u64` should be using the `uint64_t` type, since its behavior depends on it being precisely 64-bits, which neither `long` nor `long long` guarantee. Intel's intrinsics spec defines it as using the type `unsigned __int64`, but since `__int64` is not a standard type, it is probably implemented as a typedef or `#define __int64 long long` by the compiler or `<immintrin.h>` he is using.
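For example (x86-64 only; note that the intrinsic's declared operand type is `unsigned long long`, not `uint64_t`):

#include <immintrin.h>
#include <stdint.h>

/* 128-bit add from two addcarry steps; the carry out of the low limb
   feeds the high limb. */
static void add128(uint64_t r[2], const uint64_t a[2], const uint64_t b[2]) {
    unsigned long long lo, hi;
    unsigned char carry = _addcarry_u64(0, a[0], b[0], &lo);
    (void)_addcarry_u64(carry, a[1], b[1], &hi);
    r[0] = lo;
    r[1] = hi;
}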
long and long long are convertible; that's not the issue.
They are distinct types though, so long* and long long* are NOT implicitly convertible.
And uint64_t is not consistently the correct type.
Cryptography would be one application. Many crypto libraries use an arbitrary size `bigint` type, but the algorithms typically use modular arithmetic on some fixed width types (128-bit, 256-bit, 512-bit, or some in-between like 384-bits).
They're typically implemented with arrays of 64-bit or 32-bit unsigned integers, but if 128-bits were available in hardware, we could get a performance boost. Any arbitrary precision integer library would benefit from 128-bit hardware integers.
SIMD is for performing parallel operations on many smaller types. It can help with some cryptography, but it doesn't necessarily help when performing single arithmetic operations on larger types. Though it does help when performing logic and shift operations on larger types.
If we were performing 128-bit arithmetic in parallel over many values, then a SIMD implementation may help, but without a SIMD equivalent of `addcarry`, there's a limit to how much it can help.
Something like this could potentially be added to AVX-512 for example by utilizing the `k` mask registers for the carries.
The best we have currently is `adcx` and `adox` which let us use two interleaved addcarry chains, where one utilizes the carry flag and the other utilizes the overflow flag, which improves ILP. These instructions are quite niche but are used in bigint libraries to improve performance.
> but It doesn't necessarily help when performing single arithmetic operations on larger types.
For the curious, AFAIU the problem is the dependency chains. For example, for simple bignum addition you can't just naively perform all the adds on each limb in parallel and then apply the carries in parallel; the addition of each limb depends on the carries from the previous limbs. Working around these issues with masking and other tricks typically ends up adding too many additional operations, resulting in lower throughput than non-SIMD approaches.
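A minimal scalar sketch of that carry chain (not from any particular library) - each limb's sum needs the carry out of the limb before it:

#include <stdint.h>
#include <stddef.h>

/* r = a + b over n 64-bit limbs (little-endian limb order); returns the
   final carry out. The loop-carried `carry` is the serial dependency. */
static uint64_t bignum_add(uint64_t *r, const uint64_t *a,
                           const uint64_t *b, size_t n) {
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s  = a[i] + carry;
        uint64_t c1 = (s < carry);   /* wrapped while adding the carry in */
        r[i] = s + b[i];
        uint64_t c2 = (r[i] < s);    /* wrapped while adding b[i] */
        carry = c1 | c2;             /* at most one of the two can be set */
    }
    return carry;
}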
There are quite a few papers on using SIMD to accelerate bignum arithmetic for single operations, but they all seem quite complicated and heavily qualified. The threshold for eking out any gain is quite high, e.g. minimum 512-bit numbers or much greater, depending. And they tend to target complex or specialized operations (not straight addition, multiplication, etc) where clever algebraic rearrangements can profitably reorder dependency chains for SIMD specifically.
Sometimes you want the struct to be defined in a header so it can be passed and returned by value rather than pointer.
A technique I use is to leverage GCC's `poison` pragma to cause an error if attempting to access the struct's fields directly. I give the fields names that won't collide with anything, use macros to access them within the header and then `#undef` the macros at the end of the header.
Example - an immutable, pass-by-value string which couples the `char*` with the length of the string:
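(A cut-down sketch of the idea; identifiers here are illustrative, not the actual library's.)

/* str.h */
#ifndef STR_H
#define STR_H

#include <stddef.h>
#include <string.h>

typedef struct {
    size_t      priv_str_len_;   /* "private" fields with unlikely names */
    const char *priv_str_chars_;
} str_t;

/* Field-access macros, used only within this header and removed below. */
#define STR_LEN(s)   ((s).priv_str_len_)
#define STR_CHARS(s) ((s).priv_str_chars_)

static inline str_t str_from_literal(const char *cs) {
    str_t s;
    STR_LEN(s)   = strlen(cs);
    STR_CHARS(s) = cs;
    return s;
}

static inline size_t      str_length(str_t s) { return STR_LEN(s); }
static inline const char *str_chars(str_t s)  { return STR_CHARS(s); }

#undef STR_LEN
#undef STR_CHARS

/* From here on, any direct use of the field names is a hard error on GCC;
   compilers without the pragma just ignore it. */
#pragma GCC poison priv_str_len_ priv_str_chars_

#endif /* STR_H */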
It just wraps `<string.h>` functions in a way that is slightly less error prone to use, and adds zero cost. We can pass the string everywhere by value rather than needing an opaque pointer. It's equivalent on SYSV (64-bit) to passing them as two separate arguments:
These DO NOT have the same calling convention. The latter is less efficient because it needs to dereference a pointer to return the out parameter. The former just returns length in `rax` and chars in `rdx` (`r0:r1`).
So returning a fat pointer is actually more efficient than returning a size and passing an out parameter on SYSV! (Though only marginally because in the latter case the pointer will be in cache).
Perhaps it's unfair to say "zero-cost" - it's slightly less than zero - cheaper than the conventional idiom of using an out parameter.
But it only works if the struct is <= 16-bytes and contains only INTEGER types. Any larger and the whole struct gets put on the stack for both arguments and returns. In that case it's probably better to use an opaque pointer.
That aside, when we define the struct in the header we can also `inline` most functions, so that avoids unnecessary branching overhead that we might have when using opaque pointers.
`#pragma GCC poison` is not portable, but it will be ignored wherever it isn't supported, so this won't prevent the code being compiled for other platforms - it just won't get the benefits we get from GCC & SYSV.
The biggest downside to this approach is we can't prevent the library user from using a struct initializer and creating an invalid structure (e.g., the length field and actual string length not matching). It would be nice if there were some similar trick to prevent using compound initializers with the type; then we could have full encapsulation without resorting to opaque pointers.
> The biggest downside to this approach is we can't prevent the library user from using a struct initializer and creating an invalid structure (e.g., the length field and actual string length not matching). It would be nice if there were some similar trick to prevent using compound initializers with the type; then we could have full encapsulation without resorting to opaque pointers.
Hmm, I found a solution and it was easier than expected. GCC has `__attribute__((designated_init))`, which we can stick on the struct to prevent positional initializers and require the field names to be used (assuming -Werror). Since those names are poisoned, we won't be able to initialize except through functions defined in our library. We can similarly use a macro and #undef it.
Full encapsulation of a struct defined in a header:
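(The actual code isn't shown here; the sketch below is illustrative - only the function names match the usage example that follows.)

/* string_t.h */
#ifndef STRING_T_H
#define STRING_T_H

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* designated_init: positional initializers produce a warning (an error
   under -Werror), so the poisoned field names must be spelled out to
   initialize the struct directly. */
typedef struct __attribute__((designated_init)) {
    size_t      priv_string_len_;
    const char *priv_string_chars_;
} string_t;

#define STRING_LEN(s)   ((s).priv_string_len_)
#define STRING_CHARS(s) ((s).priv_string_chars_)

static inline string_t string_alloc_from_chars(const char *cs) {
    string_t s;
    size_t len = strlen(cs);
    char *copy = malloc(len + 1);
    STRING_LEN(s)   = copy ? len : 0;
    STRING_CHARS(s) = copy ? memcpy(copy, cs, len + 1) : NULL;
    return s;
}

static inline int string_is_valid(string_t s)         { return STRING_CHARS(s) != NULL; }
static inline const char *string_to_chars(string_t s) { return STRING_CHARS(s); }
static inline void string_free(string_t s)            { free((void *)STRING_CHARS(s)); }

#undef STRING_LEN
#undef STRING_CHARS

/* Direct access to the fields is now a compile error on GCC. */
#pragma GCC poison priv_string_len_ priv_string_chars_

#endif /* STRING_T_H */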
Aside from horrible pointer aliasing tricks, the only way to create a `string_t` is via `string_alloc_from_chars` or other functions defined in the library which return `string_t`.
#include <stdio.h>
int main() {
    string_t s = string_alloc_from_chars("Hello World!");
    if (string_is_valid(s))
        puts(string_to_chars(s));
    string_free(s);
    return 0;
}