> [It] runs a heavily stripped-down version of Linux that lacks systemd and apt. And these are just a few of the issues.

You mean it's not Debian-based? How is this an issue?


Loved this as a kid. But as a parent, even putting it in the garage is just too much noise for my household, haha!


Pretty sure Immich is on GitHub, so I assume they have a workflow for it, but in case you're interested in this concept in general, GitLab has first-class support for it, which I've been using for years: https://docs.gitlab.com/ci/review_apps/ . Very cool and handy stuff.
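
Roughly, the setup is a deploy job tied to a dynamic environment plus a manual stop job. Something like this sketch (the deploy/teardown scripts are placeholders for your own logic):

  deploy_review:
    stage: deploy
    script:
      - ./deploy-review.sh   # your deployment logic goes here
    environment:
      name: review/$CI_COMMIT_REF_SLUG
      url: https://$CI_COMMIT_REF_SLUG.example.com
      on_stop: stop_review
    rules:
      - if: $CI_PIPELINE_SOURCE == "merge_request_event"

  stop_review:
    stage: deploy
    script:
      - ./teardown-review.sh
    environment:
      name: review/$CI_COMMIT_REF_SLUG
      action: stop
    rules:
      - if: $CI_PIPELINE_SOURCE == "merge_request_event"
        when: manual

Each merge request then gets its own live environment at its own URL, and GitLab links to it right on the MR page.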


Wow, my worlds are colliding right now-- although seeing Benn on HN in retrospect shouldn't be that surprising. Go check out his music, The Flashbulb, he's one of my favorite artists.


> When a drive fails, one of the key factors in data security is how fast an array can be rebuilt into a healthy status. Of course, Amazon is just one vendor, but they have the distribution to do same-day and early morning overnight parts to a large portion of the US. Even overnighting a drive that arrives by noon from another vendor would be slower to arrive than two of the four other options at Amazon.

In a way this is a valid point, but it also feels a bit silly. Do people really make use of devices like this and then try to overnight a drive when something fails? You're building an array-- you're designing for failure-- but then you don't plan on it? You should have spare drives on hand. Replenishing those spares is rarely an emergency situation.


> You should have spare drives on hand.

I've never heard of anyone doing that for a home NAS. I have one, and I don't keep spare drives, purely because it's hard to justify the expense.


The only drives I've obtained ahead of a failure were bought once the existing drives had been used past the 8-year mark, and those were for a rebuild. I would hardly call them spares.

I did end up with a spare once, at the 3-year mark, but the bathtub curve of failure has held true: that so-called spare is now 6 years old, unused, too small a drive, and will never be used in any way.

The conventional wisdom is that drives shouldn't sit in storage without being spun up, so what does it mean to keep spares unless you're spinning them up once a month and expecting them to last just as long once they're actually used?


I do. Also, I have an unopened 990 EVO Plus ready to drop into whatever machine needs it.

I'm not made of money. I just don't want to make excuses over some $90 bit of junk. So I have spare wifi, headset, ATX PSU, input devices, and a low-cost "lab" PSU to replace any dead wallwart. That last one was a lifesaver: the SMPS for my ISP's "business class" router died one day, so I cut and stripped the wires, set the volts+amps, and powered it that way for a few days while they shipped a replacement.


I had a hot spare in the form of a backup drive. It was a 12 TB external WD that I'd already burned in and had as a backup target for the NAS. When one of the drives in the NAS failed, I broke the HDD out of the enclosure and used it to replace the broken drive. It hadn't been in use for many months, and I'd rather sacrifice some backups than the array. I also technically had offsite backups I could restore in an emergency.


Always run the previous drive generation's capacity.

I budget $300 USD each for 2 or 3 drives. That has always been the sweet spot. Get the largest enterprise model at exactly that price.

That was 2TB 10 years ago, and 10TB 5 years ago.

So 5 years ago I rebuilt storage on those 10TB drives but only using 2TB volumes (could have been 5TB, but I was still keeping the last-gen size as the data hadn't grown). Now my old drives are spares/monthly off-machine copies; I used one while waiting on a warranty replacement for a failed new 10TB drive, by the way.

Now I can get 20TB drives for that price, but I'll probably still only increase the volumes to 10TB at most and keep two spares.


Heh, I suppose you've heard of one now. Fair enough, I could be in the minority here.


Yeah. If you don't have a couple of spare 100TB SSD NASes you can turn on in the event of failure, you are doing it wrong.


A lot of these are home power users.

They build the array to survive a drive failure, but as home power users without unlimited funds they don't have a hot spare or a storeroom they can run to. It's completely reasonable to order a spare on failure unless it's mission-critical data needing 24/7 uptime.

They completely planned for it. They've planned that, if there is a failure, they can get a new drive within 24 hours, which for home power users is generally enough, especially since they'll likely get a warning before complete failure.


I agree; I don't buy spares. But when I have a drive failure, the first thing I do is an incremental backup, so I know my data is safe regardless while I'm waiting for a drive.

Also worth noting that I don't think I've experienced hard failures; it's usually the unrecoverable error count shooting up in more than one event, which tells me it's time to replace. So I don't wait for the array to be degraded.

But I guess that's the important point: monitor your drives. Synology will do that for you, but you should monitor all your other drives too. I have a script that uploads the SMART data from all my drives across all my machines to a central location, to keep an eye on SSD wear levels, SSD bytes written (sometimes you get surprises), free disk space, and SMART errors.


Do you have a link to your script? Mostly I'd love to have a good dashboard for that data.


Not the full script, but I can share some pointers.

I use smartctl to extract the SMART data, as it works so well.

Generally "smartctl -j --all -l devstat -l ssd /dev/sdXXX". You might need to add "-d sat" to capture certain devices on Linux (like a drive on a Synology expansion unit). By the way, Synology ships with an ancient version of smartctl; you can just copy a newer build onto the box and use that instead. "-j" exports to JSON format.

Then you need to do a bit of magic to normalise the data. For example, some wear levels are expressed as health (starts at 100) and others as percent used (starts at 0). There are also different versions of SMART data: "-l devstat" outputs a much more useful set of stats, but older SSDs won't support it.

Host writes are probably the messiest part, because sometimes they're expressed in blocks, sometimes in units of 32MB, sometimes something else. My logic is:

  // NVMe: the spec defines a "data unit" as 1000 units of 512 bytes,
  // regardless of the formatted LBA size.
  if (nvme_smart_health_information_log != null)
  {
    return nvme_smart_health_information_log.data_units_written * 512 * 1000;
  }
  // SAS/SCSI: smartctl reports "gigabytes processed" as a string here.
  if (scsi_error_counter_log?.write != null)
  {
    // arguably this should be 1000*1000*1000 (decimal gigabytes)
    return (long)(double.Parse(scsi_error_counter_log.write.gigabytes_processed) * 1024 * 1024 * 1024);
  }
  // SATA drives that support device statistics ("-l devstat").
  var devstat = GetAtaDeviceStat("General Statistics", "Logical Sectors Written");
  if (devstat != null)
  {
    return devstat.value * logical_block_size;
  }
  // Last resort: vendor-specific SMART attributes, each with its own unit.
  if (ata_smart_attributes?.table != null)
  {
    foreach (var att in ata_smart_attributes.table)
    {
      var name = att.name;
      if (name == "Host_Writes_32MiB")
      {
        return att.raw.value * 32 * 1024 * 1024;
      }
      if (name == "Host_Writes_GiB" || name == "Total_Writes_GB" || name == "Total_Writes_GiB")
      {
        return att.raw.value * 1024 * 1024 * 1024;
      }
      if (name == "Host_Writes_MiB")
      {
        return att.raw.value * 1024 * 1024;
      }
      if (name == "Total Host Writes")
      {
        return att.raw.value;
      }
      if (name == "Total LBAs Written" || name == "Total_LBAs_Written" || name == "Cumulative Host Sectors Written")
      {
        return att.raw.value * logical_block_size;
      }
    }
  }

Even that fails in some cases where the logical block size is 4096.

I think you need to test it against your own drive estate. My advice: just store the raw JSON output from smartctl centrally, and re-parse it as you improve your logic for all these edge cases based on your own drives.
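
For the collection side, something like this rough Python sketch is enough (the output directory is a placeholder, and it assumes a smartmontools recent enough that --scan honours -j):

  #!/usr/bin/env python3
  # Dump raw smartctl JSON for every detected disk so it can be
  # re-parsed later as the normalisation logic improves.
  import datetime, json, pathlib, subprocess

  OUT = pathlib.Path("/var/lib/smart-dumps")  # hypothetical central drop point
  OUT.mkdir(parents=True, exist_ok=True)
  stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

  # Ask smartctl which devices exist, then dump each one verbatim.
  # Note: smartctl often exits non-zero once SMART attributes have
  # tripped, so we deliberately don't check the return code.
  scan = json.loads(subprocess.run(
      ["smartctl", "-j", "--scan"],
      capture_output=True, text=True).stdout)
  for dev in scan.get("devices", []):
      name = dev["name"]  # e.g. /dev/sda
      raw = subprocess.run(
          ["smartctl", "-j", "--all", "-l", "devstat", "-l", "ssd", name],
          capture_output=True, text=True).stdout
      (OUT / f"{name.replace('/', '_')}-{stamp}.json").write_text(raw)

Then ship that directory wherever you centralise things (rsync, object storage, whatever) and do all the parsing there.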


My Synology NAS is for my own use. I don't keep spare drives on hand; I'd go to the nearby shop 20 minutes away to get a new drive. They wouldn't have Synology-branded drives, but they have the Toshiba MG series, Western Digital, and Seagate.

Within my NAS I have 2 different pools. One is for important data: 2 hard disks in SHR1, replicated to an offsite NAS. The other pool is for less important data (movies, etc.): SHR1 with 5 hard disks, 75TB total capacity, none of the disks from the same batch or production date. Not having that data immediately is not a problem. Losing it would suck, but I'd rebuild, so I'm fine not having a spare drive on hand.


Failures should be rare, which means a spare HD might be sitting in a drawer without spinning for years, which HDs don’t like to do.

When you need to replace a drive, it's better to purchase a new one: it was manufactured recently and hasn't been sitting for long.


> a spare HD might be sitting in a drawer without spinning for years, which HDs don’t like to do.

How so? Does this imply drives "age out" while sitting at distribution warehouses too?


The Linux kernel supports rebooting using a number of different strategies[1]. Some PCs need a different one than the default in order to make sure everything is properly reset.

[1]: https://github.com/torvalds/linux/blob/9b2ffa6148b1e4468d08f...
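
If a machine doesn't reset cleanly, you can pick a strategy explicitly with the reboot= kernel parameter rather than rebuilding anything, e.g. on a GRUB system (which mode works is trial and error per machine):

  # /etc/default/grub -- try the keyboard-controller reset method
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash reboot=kbd"
  # other modes include reboot=bios, reboot=acpi, reboot=efi,
  # reboot=pci and reboot=triple

followed by update-grub (or your distro's equivalent) and a reboot.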


Linux now uses exactly the same reboot strategy as Windows does, so no PC should "need" a different one - it may be the case that driver code leaves the hardware in a state the system vendor didn't test, and using a different reboot approach may work around that, but it's not fundamentally the reboot method that's causing the problem there (https://mjg59.dreamwidth.org/3561.html goes into some more detail on how all this actually works)


Yes, I didn't mean to imply that Linux was doing anything wrong, just that some hardware seems to work better with other approaches, for the reasons you state.


> First of all, it wants to do everything and does none well (or better than specialized apps)

Yep. And any extra apps beyond the default just make upgrades go sideways. I've given up on it. Using syncthing instead (just for file syncing) and haven't looked back. It's not my favorite either, but just because it's a pain to configure. Once configured, it's been rock solid.


> Using syncthing instead (just for file syncing) and haven't looked back

After giving up on Nextcloud I tried syncthing too and hated it, and most of that time was during the pandemic, so it's not like I was even syncing outside of my own home network.

Went back to dropbox instead (just for file syncing) and haven't looked back.



v1.0 before 100 commits, wow.


It appears that it does not use 0ver: https://0ver.org/


Haha, you just made my day. How have I not seen this?! Thank you.


I developed the app as a side project for my own needs. I made the decision to move it to another public repository to open source it, and not to keep the git history. The app code is simple to read if you want to check for security and privacy concerns.


Only if you assume that it is their first repository and that they performed the first commit after the first line of code.


> the likelihood that they'll even receive the complaint is next to nothing.

They don't need you to complain: they got an automated notification as soon as you saw the error page.

