We were worried about that as well, but we've found that most people are not doing well on our take-home. If we get to the point where most people are crushing it, then we may need to think more about AI and take-homes (maybe tweak it with the explicit expectation that they may use AI, etc.)
They also need to be able to reason well about why they made the choices they did. Something useful when talking to them can be asking questions like "If X changed, how would that impact your design?". If they were reliant on AI for vibing (rather than just using it as a tool), then those can be more difficult questions to answer well.
It’s a rough heuristic, but it doesn’t always hold. I’ve worked at micromanaged startups where the CEO wanted to review every change, and at giant companies where it was just me shipping a massive feature.
I don't, sadly. The only coverage of this that was first-principles accessible was the course taught by Prof Stergios Romelioutis where I went to grad school.
You could do fine by reading some of the old books by Bar-Shalom. Any practical textbook like his will include all the "other stuff" about the EKF that helps you understand how poorly it often performs.
But the actual derivation of the EKF probably takes only one or two pages in such a textbook, and it's a damn shame that nobody includes it.
The background required is simply:
* Know the form of the exponential family of PDFs (like the Normal/Gaussian)
* Bayes' rule
* Recognize that to maximize f ∝ exp(-a), you have to minimize 'a'
* Know how to take the derivative of a matrix equation (the 'a' above)
* Solve it
* Use the matrix inversion lemma to transform the solution into the form the KF/EKF provides
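As a sanity check of where those steps land, here is a minimal 1-D Kalman filter sketch showing the closed-form update that the MAP derivation (minimizing 'a' and applying the matrix inversion lemma) arrives at. Scalar case only, with illustrative values; the matrix version just swaps multiplications for matrix products and the division for an inverse.

```python
def kf_predict(x, P, F, Q):
    """Time update through linear dynamics x' = F*x + noise with variance Q."""
    return F * x, F * P * F + Q

def kf_update(x, P, z, H, R):
    """Measurement update. x: prior mean, P: prior variance,
    z: measurement, H: observation coefficient, R: measurement noise variance."""
    # Kalman gain -- the matrix-inversion-lemma form of the MAP solution
    K = P * H / (H * P * H + R)
    x_new = x + K * (z - H * x)   # posterior mean
    P_new = (1 - K * H) * P       # posterior variance (always shrinks)
    return x_new, P_new

# Example: estimate a constant scalar (~1.0) from noisy measurements
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05]:
    x, P = kf_predict(x, P, F=1.0, Q=0.01)
    x, P = kf_update(x, P, z, H=1.0, R=0.25)
```

The EKF is the same recursion with F and H replaced by Jacobians of the nonlinear dynamics and measurement functions, evaluated at the current estimate.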
Probabilistic Robotics covers the Kalman filter from a first-principles probabilistic viewpoint, along with its extension, the EKF. It's quite readable for someone with a basic understanding of linear algebra, probability, and calculus. I believe it also has a refresher on those basics in the introduction.
> Everything just assumes that without Sam they’re worse off.
>
> But what if, my gosh, they aren’t? What if innovation accelerates?
It reads like they ousted him because they wanted to slow the pace down, so by design and intent it seems unlikely that innovation would accelerate. Which seems doubly bad if they've effectively spawned a competitor made up of all the people who wanted to move faster.
Very good point. And with very long lifespans (thousands of years), all of those low-probability events that may cause accidental death (airplane crash, getting hit by a car crossing the street, violence, etc.) may really start to add up to a not-so-low probability of at least one of them happening within your extended lifespan.
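To put a rough number on that intuition: if some accidental cause of death has a small annual probability p, the chance of it striking at least once over n years is 1 - (1 - p)^n, which grows fast with n. The annual rates below are purely illustrative, not actuarial data.

```python
def at_least_once(p_annual, years):
    """Probability of at least one occurrence of an independent annual
    risk p_annual over the given number of years."""
    return 1 - (1 - p_annual) ** years

# A hypothetical 1-in-20,000 annual risk is negligible over 80 years...
p80 = at_least_once(1 / 20_000, 80)       # ~0.004
# ...but over a 5,000-year lifespan it's better than a 1-in-5 chance.
p5000 = at_least_once(1 / 20_000, 5_000)  # ~0.22
```

And that's one risk in isolation; stacking several independent low-probability risks compounds the same way.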