When I first started PyXL, this kind of vision was exactly on my mind.
Maybe not AWS Lambda specifically, but definitely server-side acceleration — especially for machine learning feature generation, backend control logic, and anywhere pure Python becomes a bottleneck.
It could definitely get there — but it would require building a full-scale deployment model and much broader library and dynamic feature support.
That said, the underlying potential is absolutely there.
PyXL today is aimed more at embedded and real-time systems.
For server-class use, I'd need to mature heap management, add basic concurrency, a simple network stack, and gather real-world benchmarks (like requests/sec).
Still, I wouldn’t try to fully replicate CPython for servers — that's a very competitive space with a huge surface area.
I'd rather focus on specific use cases where deterministic, low-latency Python execution could offer a real advantage — like real-time data preprocessing or lightweight event-driven backends.
When I originally started this project, I was actually thinking about machine learning feature generation workloads — pure Python code (branches, loops, dynamic types) without heavy SIMD needs. PyXL is very well suited for that kind of structured, control-flow-heavy workload.
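To make that concrete, here's a hypothetical sketch (the function and field names are my own invention, not from any real pipeline) of the kind of feature-generation code I have in mind — branches, loops, and dynamic types, with no heavy SIMD math:

```python
# Hypothetical control-flow-heavy feature generation: the kind of pure
# Python workload described above — branchy, loopy, dynamically typed.
def session_features(events):
    """Derive simple features from a list of (action, value) event tuples."""
    clicks = 0
    total_spend = 0.0
    longest_streak = 0
    streak = 0
    for action, value in events:
        if action == "click":
            clicks += 1
            streak += 1
            longest_streak = max(longest_streak, streak)
        else:
            streak = 0  # any non-click action breaks the click streak
        if action == "purchase":
            total_spend += float(value)
    return {
        "clicks": clicks,
        "total_spend": total_spend,
        "longest_click_streak": longest_streak,
        "avg_spend": total_spend / clicks if clicks else 0.0,
    }

events = [("click", None), ("click", None), ("purchase", 9.99), ("click", None)]
print(session_features(events))
```

Nothing here vectorizes naturally — the value of acceleration comes from executing the interpreter-style dispatch, branching, and dynamic typing faster, not from wide arithmetic units.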
If I wanted to pitch PyXL to VCs, I wouldn’t aim for general-purpose servers right away.
I'd first find a specific, focused use case where PyXL's strengths matter, and iterate on that to prove value before expanding more broadly.