Price and scarcity go hand in hand, not value and scarcity.
Diamonds are pretty worthless (putting aside industrial applications) but expensive because they're scarce; water is extremely valuable but cheap.
No doubt there are some goods where the value is related to price, but these are probably mostly status-related goods, e.g. to many buyers, the whole point of a Rolex is that it's expensive.
This conflates use-value and exchange-value. Water to someone dying of thirst has extremely high use-value, while a diamond would in that same moment have nearly no use-value, except for the fact that, as a commodity, the diamond has an exchange-value, and so can be sold to help acquire a source of water.
In a sane world we would just give the poor guy some water and let him keep his precious diamond. And in that sane world, the guy would donate the precious diamond to a museum so that everyone could enjoy its beauty.
What you are describing happens if you follow the mathematical rules of your models too much and ignore the real world.
I prefer the `price = value = relative wealth != wealth = resources` paradigm. Thus, wars destroy wealth and tech advances create wealth. But that's just me.
Price is just a proxy for value. Diamonds do not have inherent utility (to the layman) but they are expensive because we societally ascribe value to them.
I think a lot of this comes down to the question: Why aren't tables first class citizens in programming languages?
If you step back, it's kind of weird that there's no mainstream programming language that has tables as first class citizens. Instead, we're stuck learning multiple APIs (polars, pandas) which are effectively programming languages for tables.
R is perhaps the closest, because it has data.frame as a 'first class citizen', but most people don't seem to use it, and use e.g. tibbles from dplyr instead.
The root cause seems to be that we still haven't figured out the best language for manipulating tabular data (i.e. the way of expressing this). It feels like there's been some convergence on some common ideas. Polars is kind of similar to dplyr. But there's no standard, except perhaps SQL.
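To make that concrete, here's the same question asked in two of those table languages (a rough sketch, assuming both libraries are installed; Polars method names have shifted between versions):

    import pandas as pd
    import polars as pl

    data = {"city": ["a", "b", "a"], "sales": [3, 5, 4]}

    # "mean sales per city", phrased two different ways
    pd_out = pd.DataFrame(data).groupby("city", as_index=False)["sales"].mean()
    pl_out = pl.DataFrame(data).group_by("city").agg(pl.col("sales").mean())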
FWIW, I agree that Python is not great, but I think it's also true R is not great. I don't agree with the specific comparisons in the piece.
There's a number of structures that I think are missing in our major programming languages. Tables are one. Matrices are another. Graphs, and relatedly, state machines are tools that are grossly underused because of bad language-level support. Finally, not a structure per se, but I think most languages that are batteries-included enough to include a regex engine should have a full-fledged PEG parsing engine. Most, if not all, regex horror stories derive from a simple "regex is built in".
What tools are easily available in a language, by default, shape the pretty path, and by extension, the entire feel of the language. An example that we've largely come around on is key-value stores. Today, they're table stakes for a standard library. Go back to the '90s, and the most popular languages at best treated them as second-class citizens, more like imported objects than something fundamental like arrays. Sure, you can implement a hash map in any language, or import someone else's implementation, but oftentimes you'll instead end up with nightmarish, hopefully-synchronized arrays, because those are built in and ready at hand.
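A toy contrast of the two paths (the names are illustrative):

    # the built-in-arrays path: two lists that must stay in lockstep forever
    names = ["alice", "bob"]
    ages = [34, 27]
    bobs_age = ages[names.index("bob")]

    # the built-in-map path: one structure, ready at hand
    ages_by_name = {"alice": 34, "bob": 27}
    bobs_age = ages_by_name["bob"]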
When there is no clear canonical way of implementing something, adding it to a programming language (or a standard library) is risky. All too often, you realize too late that you made a wrong choice, and then you add a second version. And a third. And so on. And then you end up with a confusing language full of newbie traps.
Graphs are a good example, as they are a large family of related structures. For example, are the edges undirected, directed, or something more exotic? Do the nodes/edges have identifiers and/or labels? Are all nodes/edges of the same type, or are there multiple types? Can you have duplicate edges between the same nodes? Does that depend on the types of the nodes/edges, or on the labels?
Even the raw storage for graphs doesn't have just one answer: you could store edge lists or you could store adjacency matrices. Some algorithms work better with one, some with the other. You probably don't want to store both, because that means extra memory overhead as well as a locking problem if you need to update both atomically. You probably don't want to automatically flip back and forth between representations, because that could cause garbage-collector churn (not to mention long breadth- or depth-first searches), and you may not want to encourage manual conversions between data structures either, to avoid handing your users a performance footgun. So you probably want the edge-list Graph type and the adjacency-matrix Graph type to look very different, even though they are trivially convertible (if, as mentioned, expensively), and that's just the under-the-hood storage mechanism. From there you get a possible exponential explosion as you start on the higher-level distinctions between types of graphs: DAGs versus trees versus cyclic structures, all the variations on what a node can be, whether edges can be weighted or labeled, and so on.
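For what it's worth, a minimal sketch of the two storage choices for the same tiny graph (nodes 0..2, directed edges 0->1 and 1->2), showing why each favors different operations:

    edge_list = [(0, 1), (1, 2)]

    adjacency = [[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]]

    # iterating a node's neighbors is natural on the edge list...
    neighbors_of_1 = [b for (a, b) in edge_list if a == 1]

    # ...while an edge-existence test is O(1) on the matrix
    has_edge_0_1 = adjacency[0][1] == 1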
> I think most languages that are batteries-included enough to include a regex engine should have a full-fledged PEG parsing engine
Then there would be more PEG horror stories. In addition, strings and indices in regex processing are universal, while a parser is necessarily more framework-like, far more complex, and doomed to be mismatched for many applications.
Would love to see a language in which hierarchical state machines, math/linear algebra, I/O to sensors and actuators, and time/timing were first class citizens.
Mainly for programming control systems for robotics and aerospace applications
> There's a number of structures that I think are missing in our major programming languages. Tables are one. Matrices are another.
I disagree. Most programmers will go their entire career and never need a matrix data structure. Sure, they will use libraries that use matrices, but never use them directly themselves. It seems fine that matrices are not a separate data type in most modern programming languages.
Unless you think "most programmers" === "shitty webapp developers", I strongly disagree. Matrices are first class, important components in statistics, data analysis, graphics, video games, scientific computing, simulation, artificial intelligence and so, so much more.
And all of those programmers are either using specialized languages (suffering problems when they want to turn their program into a shitty web app, for example), or committing crimes against syntax like the sketch below.
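A toy illustration of what I mean (made-up values; no matrix type in sight, just nested lists and index gymnastics):

    # 2x2 matrix multiply with bare nested lists
    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    c = [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    print(c)  # [[19, 22], [43, 50]]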
To be fair, I do use matrices a reasonable amount in gamedev. And if you're writing your engine from scratch, rather than using something like Unity, you will almost certainly need matrices.
Even through UE blueprints (assuming the most high level abstraction here) you will come across the need to perform calculations with matrices. While a lot is abstracted away, you still need to know about coordinate spaces, quirks around order of operations, etc.
I don't see why the majority of engineers need to cater to your niche use cases. It's a programming language, you can just make the library if it doesn't exist. Nobody's stopping you.
Plus, plenty of third party projects have been incorporated into the Python standard library.
There are a number of dynamic languages to choose from where tables/dataframes are truly first-class datatypes: perhaps most notably Q[0]. There are also emerging languages like Rye[1] or my own Lil[2].
I suspect that in the fullness of time, mainstream languages will eventually fully incorporate tabular programming in much the same way they have slowly absorbed a variety of idioms traditionally seen as part of functional programming, like map/filter/reduce on collections.
It's interesting how often there are similarities between Numshell, Rye and Lil, although I think they come from different influences. I guess it's sort of the current zeitgeist if you want something light, high-level and interactive.
Interesting links - thanks. Apropos the optimism of "eventually": I think of language support for, say, key-value pair collections and namespaces as still quite impoverished, with each language supporting only a small subset of the concision, APIs, and data structures found useful in some other - and this some three decades after they became mainstream and core to multiple mainstream languages. Diminishing returns, silos, segregation of application domains, divergence of paradigm/orientation/idioms, assorted dysfunctions as a field, etc. "Eventually" can be decades. Maybe LLMs can quicken that... or perhaps call an end to this era, permitting a "no, we collectively just never got around to creating any one language which supported all of {X}".
> R is perhaps the closest, because it has data.frame as a 'first class citizen', but most people don't seem to use it, and use e.g. tibbles from dplyr instead.
Everyone in R uses data.frame because tibble (and data.table) inherits from data.frame. This means that "first class" (base R) functions work directly on tibble/data.table. It also makes it trivial to convert between tibble, data.table, and data.frames.
It makes sense from a historical perspective. Tables are a thing in many languages, just not the ones that mainstream devs use. In fact, if you rank programming languages by usage outside of devs, the top languages all have a table-ish metaphor (SQL, Excel, R, Matlab).
The languages devs use are largely Algol derived. Algol is a language that was used to express algorithms, which were largely abstractions over Turing machines, which are based around an infinite 1D tape of memory. This model of 1D memory was built into early computers, and early operating systems and early languages. We call it "mechanical sympathy".
Meanwhile, other languages at the same time were invented that weren't tied so closely to the machine, but were more for the purpose of doing science and math. They didn't care as much about this 1D view of the world. Early languages like Fortran and Matlab had notions of 2D data matrices because math and science had notions of 2D data matrices. Languages like C were happy to support these things by using an array of pointers because that mapped nicely to their data model.
The same thing can be said for 1-based and 0-based indexing -- languages like Matlab, R, and Excel are 1-based because that's how people index tables; whereas languages like C and Java are 0-based because that's how people index memory.
As a slight refinement of your point, C does have storage-map-based N-D arrays/tensors like Fortran, just with the old column-major/row-major difference and a clunky "multiple [][]" syntax. There was just an early restriction requiring compile-time-known dimensions (up to the final dimension, anyway), because it was a somewhat half-done/half-supported thing - and because that also fit the linear data model well. So it is also common to see char *argv[]-style arrays of pointers, or, in numerics, libraries which do their own storage-map equations from passed dimensions.
Also, the linear memory model itself is not really only because of Algol/Turing machines/theoretical CS/"early" hardware and mechanical sympathy. DRAM has rows & columns internally, but byte addressability hides that from HW client systems (unless someone is doing a rowhammer attack or something). Random access rather than tape rewind/fast-forward is indeed a huge deal, but I think the actual popularity of linearity comes from its simplicity as an interface more than anything else. For example, segmented x86 memory with near/far pointers was considered ugly relative to a big flat 32-bit address space, and disk files and other allocation arenas internally have large linear address/seek spaces. People just want to defer using more than one number until they really need to. People learn univariate X before they learn multivariate X, where X could be calculus, statistics, etc.
> R is perhaps the closest, because it has data.frame as a 'first class citizen', but most people don't seem to use it, and use e.g. tibbles from dplyr instead.
Yeah data.table is just about the best-in-class tool/package for true high-throughput "live" data analysis. Dplyr is great if you are learning the ropes, or want to write something that your colleagues with less experience can easily spot check. But in my experience if you chat with people working in the trenches of banks, lenders, insurance companies, who are running hundreds of hand-spun crosstabs/correlational analyses daily, you will find a lot of data.table users.
Relevant to the author's point, Python is pretty poor for this kind of thing. Pandas is a perf mess. Polars, duckdb, dask etc, are fine perhaps for production data pipelines but quite verbose and persnickety for rapid iteration. If you put a gun to my head and told me to find some nuggets of insight in some massive flat files, I would ask for an RStudio cloud instance + data.table hosted on a VM with 256GB+ of RAM.
Every copy of Microsoft Excel includes Power Query, which is written in the M language and has tables as a type. Programs are essentially transformations of table columns and rows. Not sure if it's mainstream, but it is widely available. The M language is also included in other tools like PowerBI and Power Automate.
This is an interesting observation. One possible explanation for a lack of robust first class table manipulation support in mainstream languages could be due to the large variance in real-world table sizes and the mutually exclusive subproblems that come with each respective jump in order-of-magnitude row size.
The problems that one might encounter in dealing with a 1m row table are quite different to a 1b row table, and a 1b row table is a rounding error compared to the problems that a 1t row table presents. A standard library needs to support these massive variations at least somewhat gracefully and that's not a trivial API surface to design.
I don't think this is the real problem. In R and Julia tables are great, and they are libraries. The key is that these languages are very expressive and malleable.
Simplifying a lot, R is heavily inspired by Scheme, with some lazy evaluation added on top. Julia is another take at the design space first explored by Dylan.
People use data.table in R too (my favorite among those but it’s been a few years). data.table compared to dplyr is quite a contrast in terms of language to manipulate tabular data.
SQL is not just about a table but multiple tables and their relationships. If it was just about running queries against a single table then basic ordering, filtering, aggregation, and annotation would be easy to achieve in almost any language.
As soon as you start doing things like joins, it gets complicated, but in theory you could do most things with an ORM-like API. If you try to use plain operators, you quickly run into the fact that you have to overload (abuse) them or write a new language with different operator semantics:
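Something like this toy sketch (a hypothetical API, not any real ORM) shows where it breaks down:

    class Col:
        # a column reference whose comparison operators emit SQL text
        def __init__(self, name):
            self.name = name
        def __gt__(self, other):
            return f"{self.name} > {other!r}"
        def __eq__(self, other):
            # abuse: == now returns a string, not a bool
            return f"{self.name} = {other!r}"

    age = Col("age")
    print(age > 18)        # age > 18
    print(age == "smith")  # age = 'smith'  (and any code expecting == to return a bool is now broken)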
Every time I see stuff like this (Google’s new SQL-ish language with pipes comes to mind), I am baffled. SQL to me is eminently readable, and flows beautifully.
For reference, I think the same is true of Python, so it’s not like I’m a Perl wizard or something.
Oh I agree. The problem is that they are two different languages. Inside a Python file, SQL is just a string: no syntax highlighting, no compile-time checking, etc. A Kwisatz Haderach of languages that incorporates both its own constructs and SQL as first-class concepts would be very nice, but the problem is that SQL is just too different.
For one thing, SQL is not really meant to be dynamically constructed in SQL. But we often need to dynamically construct a query (for example, a customer applied several filters to the product listing). The SQL way to handle that would be a general-purpose query with a thousand if/elses, or stored procedures, which I think takes it from "flows beautifully" to "oh god, who wrote this?" Or you could just do string concatenation in a language that handles that well, like Python. Then wrap the whole thing in functions and objects and you get an ORM.
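A minimal sketch of the string-building version (the table and filter names are made up; values go through placeholders rather than being interpolated):

    def product_query(filters):
        # assemble WHERE clauses only for the filters that were applied
        clauses, params = [], []
        if "category" in filters:
            clauses.append("category = ?")
            params.append(filters["category"])
        if "max_price" in filters:
            clauses.append("price <= ?")
            params.append(filters["max_price"])
        where = " AND ".join(clauses) if clauses else "1=1"
        return f"SELECT * FROM products WHERE {where}", params

    sql, params = product_query({"max_price": 20})
    # SELECT * FROM products WHERE price <= ?   with params [20]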
I still have not seen a language that incorporates anything like SQL into it that would allow for even basic ORM-like functionality.
I am not familiar with them, but when I think of query generators I think of the lower-level API for SQLAlchemy, which is fine but still kludgy, as it tries to translate SQL into a new "language" that is less known and less intuitive, and it still requires you to think about whether the data you are working with is local or remote.
This is why key-value stores are so popular, I think. They make you do more, but with all-local data (that is, data in your memory, not in the database server). SQL can do a lot, but because we almost never represent a user object as just a tuple, there is a fundamental impedance mismatch between an environment that only deals with tuples in tables and an environment that deals with objects of some kind. Something that can do both at once would be the ultimate. Maybe the way to look at it isn't to bring the database into your application but to run the entirety of the application inside a database. Imagine if all your business logic could easily be encoded into stored procedures and all you had to do was expose endpoints to draw a UI for it. That might actually work (and I know there are some systems that try this, but none are mainstream enough).
PyTorch was first just Torch, in Lua. I didn't follow it too closely at the time, but apparently due to popular demand it got redone in Python, and voilà: PyTorch.
R's the best, because it's been a statistical analysis language from the beginning in 1974 (and was built and developed for the purpose of analysis/modeling). Also, the tidyverse is marvelous. It provides major productivity in organizing and augmenting the data. Then there's ggplot, the undisputed best graphical visualization system, plus built-ins like barplot() or plot().
But ultimately data analysis is going beyond Python and R into the realm of Stan and PyMC3, probabilistic programming languages. It’s because we want to do nested integrals and those software ecosystems provide the best way to do it (among other probabilistic programming languages). They allow us to understand complex situations and make good / valuable decisions.
I know the primary data structure in Lua is called a table, but I’m not very familiar with them and if they map to what’s expected from tables in data science.
Lua's tables are associative arrays, at least fundamentally. There's more to it than that, but it's not the same as the tables/data frames people are using with pandas and similar systems. You could build that kind of framework on top of Lua's tables, though.
this is my biggest complaint about SAS--everything is either a table or text.
most procs use tables as both input and output, and you better hope the tables have the correct columns.
you want a loop? you either get an implicit loop over rows in a table, write something using syscalls on each row in a table, or you're writing macros (all text).
Because there's no obvious universally optimal data structure for heterogeneous N-dimensional data with varying distributions? You can definitely do that, but it requires an order of magnitude more resource use as a baseline.
It’s not that you can’t model data that way (or indeed with structs of arrays), it’s just that the user experience starts to suck. You might want a dataset bigger than RAM, or that you can transparently back by the filesystem, RAM or VRAM. You might want to efficiently index and query the data. You might want to dynamically join and project the data with other arrays of structs. You might want to know when you’re multiplying data of the wrong shapes together. You might want really excellent reflection support. All of this is obviously possible in current languages because that’s where it happens, but it could definitely be easier and feel more of a first class citizen.
What is a paragraph but an array of sentences? What is a sentence but an array of words? What's a word but an array of letters? You can do this all the way down. Eventually you need to assign meaning to things, and when you do, it helps to know what the thing actually is, specifically, because an array of structs can be many things that aren't a table.
I would argue that's about how the data is stored. What I'm trying to express is the idea of the programming language itself supporting high level tabular abstractions/transformations such as grouping, aggregation, joins and so on.
Implementing all of those things is an order of magnitude more complex than any other first-class primitive datatype in most languages, and there's no obvious "one right way" to do it that would fit everyone's use cases - it seems like libraries and standalone databases are the way to do it, and that's what we do now.
Perfect solution for doing analysis on tables. Wes McKinney (inventor of pandas) is rumored to have been inspired by it too.
My problem with APL is 1.) the syntax is less amazing at other more mundane stuff, and 2.) the only production worthy versions are all commercial. I'm not creating something that requires me to pay for a development license as well as distribution royalties.
Agreed. I once used it for data preparation for a data science project (GNU APL). After a steep learning curve, it felt very much like writing math formulas — it was fun and concise, and I liked it very much. However, it has zero adoption in today's data science landscape. Sharing your work is basically impossible. If you're doing something just for yourself, though, I would probably give it a chance again.
> Why aren't tables first class citizens in programming languages?
Because they were created before the need for them, and maybe before their invention.
Manipulating numeric arrays and matrices in Python is a bit clunky because it was not designed as a scientific computing language, so they were added as a library. It's much more integrated and natural in scientific computing languages such as Matlab. However, the reverse is also true: because Matlab wasn't designed to do what Python does, it's a bit clunkier to use outside scientific computing.
I'd say there are converging standards: Parquet for long-term storage on disk, Arrow for in-memory cross-language use, and increasingly DuckDB for plain SQL over either representation. If I had to guess, most of the data-table things vanish long term, because everyone can just use SQL now for all the stuff they did with quirky hacked-up APIs and patchy performance caused by those hacked-up APIs.
Yes, Wolfram Language (WL) -- aka Mathematica -- introduced `Tabular` in 2025. It is a new data structure with a constellation of related functions (like `ToTabular`, `PivotToColumns`, etc.). Using it is 10-100 times faster than using WL's older `Dataset` structure (in my experience, with both didactic and real-life data of 1,000-100,000 rows and 10-100 columns).
This. I really, really want some kind of data frame with actual compile-time typing that my LSP/IDE can understand. Kusto Query Language (Azure Data Explorer) has it, and the auto-completion and error checking are extremely useful. But Kusto Query Language is really limited to one cloud product.
Well you nailed it, the language you're looking for is SQL.
There's a reason why duckdb got such traction over the last years.
I think data scientists overlook SQL and Excel like tooling.
But on the other hand, that doesn't mean SQL is ideal - far from it. When using DuckDB with Python, to make things more succinct, reusable and maintainable, I often fall into the pattern of writing Python functions that generate SQL strings.
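Roughly this kind of thing (a minimal sketch; the table, column, and helper names are made up):

    import duckdb

    def grouped_mean(table: str, key: str, value: str) -> str:
        # generate the SQL for "mean of `value` per `key`"
        return f"SELECT {key}, avg({value}) AS mean_{value} FROM {table} GROUP BY {key}"

    duckdb.sql("CREATE TABLE sales AS "
               "SELECT * FROM (VALUES ('a', 3), ('b', 5), ('a', 4)) t(region, amount)")
    print(duckdb.sql(grouped_mean("sales", "region", "amount")))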
But that hints at the drawbacks of SQL: it's mostly not composable as a language (compared to general purpose languages with first-class abstractions). DuckDB syntax does improve on this a little, but I think it's mostly fundamental to SQL. All I'm saying is that it feels like something better is possible.
There are a number of data-focussed no-code/visual/drag-and-drop tools where data tables/frames are very much a first class citizen (e.g. Easy Data Transform, Alteryx, Knime).
Saying that SQL is the standard for manipulating tabular data is like saying that COBOL is the standard for financial transactions. It may be true based on current usage, but nobody thinks it's a good idea long term. They're both based on the outdated idea that a programming language should look like pidgin English rather than math.
Thanks - this worked for me (some errors, some success).
Last week I was making a birthday card for my son with the old model. The new model is dramatically better - I'm asking for an image in comic book style, prompted with some images of him.
With the previous model, the boy was descriptively similar (e.g. hair colour and style) but looked nothing like him. With this model it's recognisably him.
- Anyone have any idea why it says 'confidential'?
- Anyone actually able to use it? I get 'You've reached your rate limit. Please try again later'. (That said, I don't have a paid plan, but I've always had pretty much unlimited access to 2.5 pro)
The key idea is that only the original ticket buyer is eligible for a large rebate when attending the event. It prevents touting, but does not mean everyone who wants a ticket gets one.
Though in practice it is perhaps too techie, and in the end not dramatically different to what Glastonbury Festival does, which is that the ticket is only valid for entrance by the original purchaser, using photo ID.
The fact they're notorious makes them a biased sample.
My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere
I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist", given the vibe it gives off. All truly altruistic acts would aim to be effective; otherwise it wouldn't be altruism, it would just be waste. Not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else, who just gives money to charity A when they could have been 13% more effective by sending the money directly to this particular school in country B with the condition they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.
A lot of altruism is quite literally wasteful and ineffective, in which case it's pretty hard to call it altruism.
> they could have been 13% more effective
If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors or EA to have an informed opinion.
The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).
The sheer scale of the spread is the impetus behind the entire train of thought.
It's absolutely worth looking at how effective the charities you donate to really are. Some charities spend a lot of money on fundraising to raise more funds, and then reward their management for raising so much, with only a small amount being spent on actual help. Others are primarily known for their help.
Especially rich people's vanity foundations are mostly a channel for dodging taxes and channeling corruption.
I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.
And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?
But ultimately, any charity that's actually done is going to be more effective than holding off because you're not sure how optimal it is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.
For sure this is the case. But just knowing what you are donating to doesn't need some sort of special designation. Like, yes, A is in fact much better than B, so I'll donate to A instead of B; that's no different from any other decision where you'd weigh options. It's like inventing 'effective shopping'. How is it different from regular shopping? Well, with ES, you evaluate the value and quality of the thing you are buying against its price; you might also read reviews or talk to people who have used the different products before. It's a new philosophy of shopping that no one has ever thought of before, and it's called 'effective shopping'. Only smart people are doing it.
The principal idea behind EA is that people often want their money to go as far as possible, but their intuitions for how to do that are way, way off.
Nobody said or suggested that only smart people can or should or are "doing EA." What people observe is these knee-jerk reactions against what is, as you say, a fairly obvious idea once stated.
However, its being an obvious idea once stated does not mean people intuitively enact it, especially before hearing it. Thus the need to label the approach.
> However, its being an obvious idea once stated does not mean people intuitively enact it, especially before hearing it. Thus the need to label the approach.
This has some truth to it, and if EA were primarily about reminding people that not all donations to charitable causes pack the same punch, and that some might even be deleterious, then I wouldn't have any issues with it at all. But that's not what it is anymore, at least not the most notable version of it. My knee-jerk reaction comes from that version: the one where narcissistic tech bros posture moral and intellectual superiority not only because they give, but because they give better than you.
Out of interest, do you identify any of the comments in this discussion as that kind of posturing? The "pro-EA" comments I see here seem (to me) to be fairly defensive in character. Whereas comments attacking EA seem pretty strident. Are you perceiving something different?
My impression of EA is not based on the comments here but the more public figures in this space. It is likely that others attacking EA are reacting to this also, while those defending it are doing so about the general concept of EA rather than a specific realization of EA that commenters like myself are against.
> Subtract billionaire activity from your perception of EA attitude
But that's the problem, that is my entire perception of EA. I see regular altruism where, like in the shopping example I gave above, wanting to be effective is already intrinsic. Doing things like giving people information that some forms of giving are better than others is just great. No issues there at all, but again I see that as a part of plain old regular altruism.
Then there is Effective Altruism (tm), which is the billionaire version that I see as performative and corrupt. Even when it helps people, this seems to be incidental rather than the main goal, which appears to be marketing the EA premise for self-promotion and back-patting.
Obviously EA has a perception problem, but I have to admit it’s a little odd hearing someone just say that they know their perception is probably inaccurate and yet they choose to believe and propagate it regardless.
If it helps, instead of thinking of it as a perception problem, maybe think of it as a language problem. There are (at least) two versions of EA. One of them has good intentions and the other doesn't. But they are both called EA, so it's not that people are perceiving incorrectly; it's that they hear the term and associate it with one of those two versions. I tried to disambiguate by calling one just regular altruism and the other by the co-opted name. EA has been negatively branded, and it's very hard to come back from that association.
"A lot of people think that EA is some hifalutin, condescending endeavor and billionaire utilitarians hijack its ideology to justify extreme greed (and sometimes fraud!), but in reality, EA is simply the imperative (accessible to anyone) to direct their altruistic efforts toward what will actually do the most good for the causes they care about. This is in contrast to the most people's default mode of relying on marketing, locality, vibes, or personal emotional satisfaction to guide their generosity."
See? Fair and accurate, and without propagating things I know or suspect to be untrue!
This is a perfectly fine definition, if you change the "but in reality" to "and". Like it or not, EA means both of these things simultaneously. So it's not that someone using one definition is wrong, only that they are using that definition. Language is like that. There is no official definition; it's whatever people en masse decide to use, and sometimes there is a split vote.
I see your point, but if the only red-headed people anyone ever saw were Kathy Griffin and Carrot Top, and found them unfunny, while Kathy and Carrot Top were loudly and sincerely proclaiming that they were funny, funnier than any other comedians, and that it was because they were red-headed - how irrational is that perception?
I agree. I think the criticism of EA's most notorious supporters is warranted, but it's criticism of those notorious supporters and the people around them, not the core concept of EA itself.
The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you for the bad things you do with the rest, or the bad things you did to get rich in the first place.
Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike-breaking, and deaths.
Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.
Salaries don't tend to be strongly correlated with bad working conditions or stress. In most industries (like software development) it's just supply and demand, and I imagine there are more people willing and able to work for £65k as a train driver than as a software developer. It's a bit different for train drivers because of the strong unions; my guess is that explains their high salaries more than lack of supply.
Yeah, that's kind of what I was thinking. Saying "it's low stress therefore shouldn't attract a high salary" doesn't add up to me.
I don't necessarily know what the right salary is, but it's shift work (and you don't get to choose your hours), you're in charge of a lot of people's safety, and there's a non-zero chance you'll watch someone die in front of you (if they jump on the tracks). It's... not nothing. And if we're looking at how much economic benefit a given job provides a country, a train driver is surely a large multiplier.
Is that true? My understanding is they command very high wages because their unions are strong and they have a lot of leverage: by striking they can impose extremely high costs on the wider economy (not to mention bad press for politicians).
Yes. This was literally a case study for my Sociology degree in the '90s.
(I would also mindfully say that there is a lot of subtle political propaganda in the UK around this issue - the powers that be want the public to blame train drivers for the failures of privatisation)
That doesn't convince me. Companies commonly handle situations like this just fine. I know because I have seen it. I think you are the one who fell for the propaganda here.
Companies may or may not handle strikes properly. If it were that easy, industrial interests (and their emissaries, like Reagan and Thatcher) would not have spent more than a century trying to break unions.
But you haven't made an argument, so you're in no position to criticize. You asserted that "train drivers became rarer due to shareholder reluctance to train and recruit them" and then didn't back this strange claim up, just said "I'm right" in different words. A sociology degree isn't convincing btw.
Our ports in the US, which affect the cost of endless goods, are still running with 1950s-level tech because the unions are so strong (and have heavy mob ties).
> You try it, you get a visit. Same way it's always been.
In that case, there's no obstacle. This is exactly what happened with containerization, and guess what? The ports that did containerize, including some ports constructed from scratch specifically because existing unionized ports blocked containerization, replaced the ports that didn't.
My understanding is that the mob extracted large payments in order for containerization to be permitted. Some half of the time they just live on container royalties, which are exactly the mechanism the mob used for extraction.
So sure, there's no big deal, just pay the mob. People do argue for that.
Any evidence mob lives off container royalties? And what are container royalties to begin with???? (I had no idea when I borrowed a container for moving I was paying royalties.)
My problem with people asking for evidence is that they often are expecting me to do a lot of work to give them a better world model, when they nonetheless have no intention of accepting any evidence.
So I have a guideline of good faith: you can hire me and I don’t mind doing it for you; or you can go read up on the subject, reach some understanding of what these things are, and we can discuss; or you can go do some research of equivalent value to me and bring that to me as a barter[0].
Otherwise, it’s really not a big deal to me if you don’t believe something that’s true.
0: for the moment, I’m willing to trade for the DNA height model that some people claim exists. If you can find it for me, I’ll find you some sources for container royalties.
I'm arguing that if you are someone who tries to bring change to the ports, like the developer contracted by the government to do an assessment, you will get a visit, most likely at your home.
They won't kill you or rough you up, but they will tell you what your assessment will find. And reiterate that they are at your house, where your family lives.
I want someone who cares to be driving the train I'm on. They also require in-person participation, which makes outsourcing hard. I'm fine with self-driving when the line is designed for it. It will be a very long time before existing lines are self-driving, and it's not because of unions.
Not really sure why people like moaning about train drivers. Are they jealous a train driver is making more than them? Meanwhile, tech workers sit quietly and watch their £65k jobs go to India.