Hacker News

The site looks nice, but I think you'd benefit from having some beta testers who are not familiar with HW design already. I mention a few issues below, which I hope you take as constructive suggestions of how to improve the site, and not just as criticism.

It feels a bit like you are expecting completely inexperienced users, but the site doesn't quite anticipate their needs. The site was not legible on mobile (which is fine; it's an early version). Switching to the desktop version, the instruction for the initial design to "set" a bit is unclear. What is the function you want the user to implement? It's underspecified.

Further, there needs to be an explanation of what a testbench is before you present a "Run" button, as well as an explanation of what stimuli the TB will present, and even of the entire idea of Verilog simulation. It would be good to have an opportunity to see the testbench code and what the site expects the correct output to be; I didn't see an option for that. New users may not understand waveforms at all without an explanation.
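For context, a testbench is just another Verilog module: it instantiates the design under test, drives stimuli onto its inputs, and checks the outputs. A made-up sketch (module and port names are purely illustrative, not the site's actual task):

```verilog
// Hypothetical design: q goes high once 'set' is asserted on a clock edge.
module set_bit(input clk, input set, output reg q);
  initial q = 1'b0;
  always @(posedge clk)
    if (set) q <= 1'b1;
endmodule

// Testbench: generate a clock, apply stimulus, check the result.
module tb;
  reg clk = 0, set = 0;
  wire q;
  set_bit dut(.clk(clk), .set(set), .q(q));

  always #5 clk = ~clk;      // free-running clock, period 10

  initial begin
    #12 set = 1;             // stimulus: assert 'set' mid-cycle
    #10                      // wait past the next rising edge
    if (q !== 1'b1)          // check the expected output
      $display("FAIL: q=%b", q);
    else
      $display("PASS");
    $finish;
  end
endmodule
```

Showing something like this next to the "Run" button would make it obvious what the simulator is about to do and what "passing" means.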

The AI component is fine, but it feels a bit like all of the educational aspect has been delegated to the user asking questions of the AI. An inexperienced user would not even know to use the word "testbench" to frame their questions. I would suggest some careful thought as to who exactly your target audience is, and specifying early on what prior knowledge you expect them to have.



Thank you for taking the time to give detailed feedback. We have a mix of beginners and experienced users, so your point about clear instructions makes sense, and we will definitely address it.

We are working on beginner-friendly starting points with zero-to-hero roadmaps for each tool and niche. We have basic roadmaps for RISC-V, x86, and embedded C already, we are adding them for more topics, and we are also developing visual aids to ease the learning process.

Showing the testbench code and expected outputs in task briefs is a great idea. We are working on detailed summaries that explain test inputs and expected outputs in every task brief.

When simulation runs into errors, our AI mentor automatically gives context-aware responses and mini-lessons. We will also add suggested prompts to help new users get started.

If you have any more suggestions, or want to share what would help you as a new user, we would love to hear them. We are learning and improving continuously.



