Hacker News

Cameras are nowhere near the capability of our eyes, especially in low light.

Computers are nowhere near the capability of our brains in terms of image processing, object identification, path anticipation, etc.

Until these things improve, self-driving cars need to compensate for this gap with more data inputs to be reliable.



Cameras are better in low light than our eyes. Computers are faster than our brains. Software (neural nets) is not at the level of our brain; in particular, it does not constantly fine-tune itself to the current conditions and problems. But it is getting there. Just looking at the visualization in FSD, you can see how accurate the system is at recognizing all the cars, their positions, their speeds, etc. Humans only track a few objects, and only when they catch our attention. Furthermore, their system has 8 cameras and no distraction, sleepiness, etc.


I saw this visualization recently in a taxi. It was constantly changing its mind about things. One second a scooter would appear as a garbage bin, the next it would disappear and then show up as a scooter again. Pedestrians would only be displayed while they moved. I was surprised how inaccurate it was.

And cameras have really bad noise levels in low light and nowhere near the dynamic range needed for good night vision.


Depends on the version of FSD. The video used to go through a normalization process that tried to make it look similar across different conditions. That has been ripped out, and now the signal goes straight to the NN. I've heard of (and seen video of) the cameras detecting animals in the dark that were impossible to see with the naked eye.
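For context, the kind of normalization described here usually means per-frame photometric rescaling so footage from different lighting conditions looks statistically similar before it reaches the network. A minimal sketch (illustrative only; the actual FSD pipeline is not public, and all names here are made up):

```python
import numpy as np

def normalize_frame(frame, eps=1e-6):
    """Per-channel zero-mean / unit-variance normalization of one frame.

    This makes a dim night frame and a bright day frame land on the same
    numeric scale, at the cost of discarding absolute brightness cues.
    """
    mean = frame.mean(axis=(0, 1), keepdims=True)  # per-channel mean
    std = frame.std(axis=(0, 1), keepdims=True)    # per-channel spread
    return (frame - mean) / (std + eps)

# A dim "night" frame and a bright "day" frame end up on the same scale.
night = np.random.rand(4, 4, 3) * 0.05
day = np.random.rand(4, 4, 3) * 0.9
night_n, day_n = normalize_frame(night), normalize_frame(day)
```

Feeding the raw signal to the NN instead, as described above, keeps those absolute brightness cues and lets the network exploit faint signals that normalization would have flattened.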

It’s possible their system is not that good, but eventually cameras will outperform the human eye in every way, if that’s not already the case.

Our perception is not that good, but our brain filters it and makes us believe that we see the full picture, when in fact we only see what catches our attention.


Just wait for better cameras? Big pixels solve the noise problem, and multiple cameras solve the dynamic range problem. People also cannot judge distance to static objects if vision is limited to one eye (riding a bicycle with one eye is possible, but driving a car is not).
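The multiple-cameras-for-dynamic-range idea works by capturing the same scene at different exposures and merging them. A minimal sketch of the merge step, assuming a linear sensor response and known exposure times (a simplified Debevec-style weighted average; all names are illustrative):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed linear images into one radiance estimate.

    images: list of float arrays in [0, 1], same shape, linear response.
    exposure_times: relative exposure time for each image.
    Pixels near saturation or the noise floor get low weight, so the
    short exposure supplies highlights and the long exposure shadows.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Triangle weight: trust mid-tones, distrust clipped/noisy extremes.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t  # each frame's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-8)

# Simulate one scene captured at three exposures; long exposures clip
# the highlights, short exposures bury the shadows near the noise floor.
radiance = np.array([0.001, 0.05, 0.5])  # "true" scene radiance
times = [1.0, 8.0, 64.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(shots, times)  # recovers radiance across all levels
```

In this noise-free toy example, the merged result matches the true radiance even though no single exposure captures all three levels unclipped.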



