
Appreciate it's just an MVP, but I think there's a good niche you could go down. Big Data on AWS is such a pain to set up (Glue, EMR, Redshift, Lake Formation); once you include the IAM policies and roles, a simple data pipeline runs to around 500 lines of YAML. It would be good if you could add native support for that: say you have a CSV in S3 that you want to convert to Parquet, drop null fields from, and then make shareable with another AWS account. Would solve a massive problem for me.
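
For reference, this is roughly what I hand-roll today. A minimal sketch using awswrangler and boto3; the bucket names and account ID are placeholders, and I'm reading "drop null fields" as dropping rows with nulls:

    import json

    import boto3
    import awswrangler as wr

    # Read the source CSV from S3 into a pandas DataFrame.
    df = wr.s3.read_csv("s3://my-source-bucket/input/data.csv")

    # "Drop null fields": here, drop rows containing any null value
    # (use df.dropna(axis=1) instead to drop columns that contain nulls).
    df = df.dropna()

    # Write the result back to S3 as Parquet.
    wr.s3.to_parquet(df=df, path="s3://my-dest-bucket/output/", dataset=True)

    # Make the output readable from another AWS account via a bucket policy.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-dest-bucket",
                "arn:aws:s3:::my-dest-bucket/*",
            ],
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="my-dest-bucket", Policy=json.dumps(policy)
    )

Getting that, plus all the IAM roles around it, down to a few lines of config is exactly the appeal.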


Interesting idea. We'll think about it and see what we can do.

It's not the most requested feature, but I do agree that it solves a huge pain point.

Also, maybe some of your big data use cases could be handled by a custom batch job? https://docs.stacktape.com/resources/batch-jobs/


That would be extremely useful. Apart from a CSV on S3, it would be amazing if it also handled DynamoDB to Parquet in S3 for Athena querying.
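
A rough sketch of how I'd script that today with boto3 and awswrangler; the table, bucket, and Glue database names are made up, and a large table would want a DynamoDB export rather than a full scan:

    import boto3
    import pandas as pd
    import awswrangler as wr
    from boto3.dynamodb.types import TypeDeserializer

    # Scan the full table (fine for small tables; use an export or
    # parallel scan for anything large).
    paginator = boto3.client("dynamodb").get_paginator("scan")
    items = []
    for page in paginator.paginate(TableName="my-ddb-table"):
        items.extend(page["Items"])

    # DynamoDB returns typed attribute values ({"S": "..."}, {"N": "..."});
    # deserialize them into plain Python values.
    deser = TypeDeserializer()
    rows = [{k: deser.deserialize(v) for k, v in item.items()} for item in items]

    # Write Parquet to S3 and register a Glue table so Athena can query it.
    wr.s3.to_parquet(
        df=pd.DataFrame(rows),
        path="s3://my-dest-bucket/ddb-export/",
        dataset=True,
        database="my_glue_db",  # Glue database must already exist
        table="my_ddb_export",
    )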



