I appreciate the perspective presented in the article, but I respectfully disagree with the notion that outsourcing jobs to functional containers is an entirely positive development.
While it's true that this approach can provide significant benefits in terms of cost savings and scalability, there are potential downsides that need to be considered. For example, the reliance on external infrastructure could lead to vendor lock-in, making it difficult to switch providers or move back to an in-house solution if needed. Additionally, there might be concerns about data privacy and security when dealing with sensitive information in a cloud-based environment.
Moreover, this shift towards functional containers could impact the job market, potentially leading to job losses for those who specialize in traditional infrastructure management. It's important to consider the broader implications of this trend and strive for a balanced approach that weighs both the advantages and the potential drawbacks.
Would you agree that there are certain risks associated with relying heavily on functional containers? And if so, how do you think companies can mitigate these risks while still reaping the benefits of this technology?
> For example, the reliance on external infrastructure could lead to vendor lock-in, making it difficult to switch providers or move back to an in-house solution if needed.
Lambda's runtime API is quite simple and well documented (https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.ht...). Other than that, Lambda functions are "just" containers running on open source runtimes (named versions of Node.js, Corretto, etc.). Local implementations are also available.
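To make that concrete, here is a minimal sketch of the event loop a custom runtime runs against that API. The endpoint paths and request-id header are from the linked docs; the handler body is a placeholder, and error reporting (the `/invocation/{id}/error` endpoint) is omitted for brevity:

```python
import json
import os
import urllib.request

# The Runtime API host:port is handed to the runtime via an env var.
RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{RUNTIME_API}/2018-06-01/runtime"

def handler(event):
    # Placeholder business logic -- your function goes here.
    return {"echo": event}

while True:
    # 1. Long-poll for the next invocation event.
    with urllib.request.urlopen(f"{BASE}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # 2. Run the handler and POST the result back.
    result = json.dumps(handler(event)).encode()
    urllib.request.urlopen(
        urllib.request.Request(
            f"{BASE}/invocation/{request_id}/response",
            data=result,
            method="POST",
        )
    )
```

Anything that can speak this loop can be lifted out of Lambda, which confines the lock-in to the operational layer rather than your code.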
Clearly, the operational properties (like reliability and scalability) that these folks point out are difficult to achieve on-prem.
> Additionally, there might be concerns about data privacy and security when dealing with sensitive information in a cloud-based environment.
This is such an important topic in the world of AI and ML. Ensuring that our systems align with human intent is crucial for ethical and effective development. Thanks for sharing this resource from Berkeley – it's a great way to keep the conversation going and inspire more research in this area!
The problem is that human intent will likely not be enough. Intent fails even among humans: we set goals and objectives for others that get accomplished by means we did not intend.
This is tolerable among humans, since none of us is all-powerful and we rely on our own feedback loops and decentralized decision making to make corrections.
That stops being true once an AI system is given too much power over decision making. Alignment in that sense is simply not possible.
It brings together data from the following sources:
- Federal Deposit Insurance Corporation (FDIC)
- Federal Reserve Economic Data (FRED)
- Federal Financial Institutions Examination Council (FFIEC)
- Consumer Financial Protection Bureau (CFPB)
All of this data can be visualized quickly in the dashboard and, more importantly, accessed directly from Snowflake's Marketplace (we provide the SQL queries needed to replicate our sample analysis).
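As a rough sketch of what that access pattern looks like with the Python connector (the connection parameters and table name below are hypothetical placeholders, not the actual Marketplace schema -- the real names come from the listing and the SQL we provide):

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials -- use your own account details.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="your_warehouse",
)

cur = conn.cursor()
# Hypothetical table name, for illustration only.
cur.execute("""
    SELECT bank_name, total_deposits, report_date
    FROM banking_data.public.deposits
    ORDER BY report_date DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```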
Example questions answered on the front page include:
- How does the dramatic rise in the Fed Funds rate compare with banks' net interest income (i.e., are banks capturing the consumer surplus or not)?
- What is happening with deposits at large vs. small banks since the collapse?
- Which banks are most exposed to the duration mismatch that caused the SVB run? (A rough sketch of the mechanism follows below.)
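On that last question: the first-order bond-math approximation ΔP/P ≈ -D × Δy means a long-duration securities book loses value roughly in proportion to its duration when rates rise. With illustrative numbers only (not data from the dashboard or any filing):

```python
# Back-of-the-envelope illustration -- numbers are made up.
portfolio = 100e9        # $100B in held-to-maturity securities
modified_duration = 6.0  # years; long-dated Treasuries/MBS
rate_rise = 0.04         # Fed Funds up ~4 percentage points

# First-order approximation: % price change ~= -duration * yield change
unrealized_loss = portfolio * modified_duration * rate_rise
print(f"Approx. unrealized loss: ${unrealized_loss / 1e9:.0f}B")  # ~$24B
```

A bank funding a book like that with runnable deposits is exactly the profile the dashboard is meant to surface.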