It appears to work fine (it holds the home partition on the machine I daily-drive) and I haven’t noticed any signs of failure. It’s not noticeably slow, either. Once upon a time I booted Windows off it, which was incredibly slow to start up, but I haven’t noticed slowness since repurposing it as the home partition for my personal files.

Articles online seem to suggest the life expectancy for an HDD is 5–7 years. Should I be worried? How do I know when to get a new drive?

  • FiskFisk33@startrek.website · ↑36 · 6 days ago

    An HDD can fail at any time. It could fail within a week of buying it, or it could last over a decade.

    What I’m saying is, if you have data you don’t want to lose, yes you should be worried. Keeping backups is the only safe option.

  • Raddnaar@sh.itjust.works · ↑13 ↓1 · 6 days ago

    There are only 2 kinds of people:

    1. Those who have lost data
    2. Those who will lose data.

    Plan accordingly

  • thepreciousboar@lemm.ee · ↑13 · edited · 6 days ago

    HDDs can live a long and happy life, but absolutely don’t trust a single drive, ever, regardless of how rugged, old, or expensive it is.

    My main hard drive lasted 5 years (with 1 year of power-on hours), working fine until it suddenly failed. It was a “good” failure in that I was able to recover all the data from it, but that took almost a month because of how slow the drive had become.

    Always assume your data storage is going to die tomorrow and be ready to replace it.

    • lazynooblet@lazysoci.al · ↑8 ↓5 · edited · 5 days ago

      don’t trust a sibgle drive

      sibgle?

      Edit: oh I see the edit now. “single” is what it meant. I couldn’t figure that out at the time. Shitty to be downvoted for asking a question.

  • metaStatic@kbin.earth · ↑13 · 6 days ago

    Two of my main system drives have accumulated 5 and 7 years of power-on hours respectively, so in calendar time they are much older than that.

    Just don’t wait for them to start clicking before thinking about backups.

  • MangoPenguin@lemmy.blahaj.zone · ↑7 · edited · 5 days ago

    Have backups, follow the 3-2-1 rule.

    All drives fail, at any time, and you will eventually lose data if you don’t have good backups in place.

  • nicerdicer@feddit.org · ↑8 · edited · 6 days ago

    As long as you have multiple backups of your data, you shouldn’t be concerned. HDDs, as well as SSDs, can fail at any time.

    The key is to have more than one backup; you shouldn’t rely on a single backup alone.

  • somenonewho@feddit.org · ↑7 · 6 days ago

    I had an external HDD that I used for years. For some of that time it was attached to a server running basically 24/7, and I definitely dropped the thing a couple of times. That HDD has been out of use for years now, but I’m sure I could plug it in tomorrow and it would spin up fine. HDDs can last forever, until they don’t.

    So: backups! And don’t worry about the rest.

    Also, as others said, if you’re interested in how long and hard it’s actually been working, check out the SMART data. If there are any failure indicators you might want to get a new drive just to avoid having to restore from backup, but if everything’s green, let it keep chugging until it doesn’t. And remember: backups!

  • adarza@lemmy.ca · ↑10 · 6 days ago

    backup. backup. backup.

    then also check the SMART stats on it and run the internal tests. if you don’t know how, gsmartcontrol is a good place to start.

    i’ve had a couple disks fail right away, and others that just go forever; one of those is even a deathstar (an IBM Deskstar), even.
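For the SMART attributes several commenters mention, a rough sketch of what checking them looks like (the attribute table below is hypothetical sample output; on a real system you would feed in the actual output of `smartctl -A /dev/sdX`):

```python
# Sketch: scan `smartctl -A` output for the classic early-warning attributes.
# SAMPLE is hypothetical; on a real system you would use the stdout of
# subprocess.run(["smartctl", "-A", "/dev/sda"], capture_output=True, text=True).

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
"""

# Non-zero raw values on these usually mean the drive is developing bad sectors.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def worrying_attributes(smart_output):
    """Return {attribute_name: raw_value} for watched attributes with raw > 0."""
    found = {}
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCH and fields[9].isdigit():
            if int(fields[9]) > 0:
                found[fields[1]] = int(fields[9])
    return found

print(worrying_attributes(SAMPLE))  # {'Current_Pending_Sector': 2}
```

gsmartcontrol shows the same attribute table in a GUI; the raw values of these three attributes are the ones most worth watching.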

  • 🎨 Elaine Cortez 🇨🇦 @lemm.ee · ↑8 · 6 days ago

    Always make sure that important files and folders are backed up at least twice! Even when drives are new, they can and do fail at random, without warning. My HDDs are the better part of a decade old, and I had no issues with them at all until last year. They’re now starting to experience random corruptions that will sometimes compromise entire folders.

    • communism@lemmy.ml (OP) · ↑4 · 6 days ago

      I’ve not responded to the majority of comments in this thread because I’d have nothing to add except “thanks”, but here:

      They’re now starting to experience random corruptions that will sometimes compromise entire folders.

      Er why haven’t you bought new drives at that point??

      • ReadMoreBooks@lemmy.zip · ↑1 · 6 days ago

        Er why haven’t you bought new drives at that point??

        There are different ways to arrange data across multiple physical drives. One group of ways is called RAID, and one specific type is RAID5, which uses 3 or more drives in an array.

        I have 3 drives, each 2 TB. In RAID5 I only get 4 TB of effective storage (not 6 TB). If any one of the 3 physical drives fails, the array preserves all data and continues to operate at a reduced speed. The failed drive can be replaced, a rebuild performed, and performance restored. If a second drive fails, data is lost and the array stops working; but even then, new drives can be purchased and the data restored from backup.
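The capacity arithmetic above (one drive’s worth of parity, spread across the members) can be sketched as:

```python
def raid5_usable_tb(num_drives, drive_tb):
    """RAID5 keeps one drive's worth of parity (distributed across all members),
    so usable capacity is (n - 1) * drive size; at least 3 drives are required."""
    if num_drives < 3:
        raise ValueError("RAID5 needs at least 3 drives")
    return (num_drives - 1) * drive_tb

# The 3 x 2 TB example above:
print(raid5_usable_tb(3, 2))  # 4
```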

        In a business we never want unplanned downtime because it’s costly, so we’d replace hard drives before they fail, on a schedule we choose: planned downtime when no one is working. At home, though, particularly with backups, unplanned downtime often isn’t very costly, so we can keep using our old hardware, maximizing its value, until it fails entirely.

      • 🎨 Elaine Cortez 🇨🇦 @lemm.ee · ↑1 · 6 days ago

        I’m gonna buy a new computer when this one inevitably refuses to boot up 🤷‍♀️ There are more age-related issues besides just the HDDs at this point, so it’ll be less hassle to start over.

  • brokenlcd@feddit.it · ↑9 · 6 days ago

    I have old 500 GB drives from 2009, ripped out of beaten-up laptops, still working 24/7, and I’ve had new drives grenade themselves two weeks into use. There are too many factors to properly gauge how long a drive will live; the best option is to have backups. Even something as simple as a copy on a flash drive is better than nothing.

    I get people saying to follow the 3-2-1 rule, but there are places like mine where storage is prohibitively expensive, so just do what you can; anything is better than nothing in these cases.

  • Otherbarry@lemmy.zip · ↑9 · edited · 6 days ago

    If you’re not seeing anything of concern in the SMART info, there’s little to worry about. You could install and run smartctl from the command line, or for something with a GUI try gsmartcontrol or any other app that can read your drive’s diagnostics.

    Hard drives can last a long time. As a general rule, if your hard drive made it through its first 1–2 years without issue, there’s a good chance it’ll keep chugging along for years. I personally haven’t found that hard drives consistently die in 5–7 years; I’m not sure where that figure comes from.

    In any case, backups are your friend: not just in case the hard drive dies, but because there’s always the possibility that your entire OS blows up somehow, or that you get a bad case of malware.

  • golden_zealot@lemmy.ml · ↑6 · 6 days ago

    As others have said, you don’t have to be concerned about anything if you keep good backups. Disk storage is very cheap now compared to what it used to be; you could probably find a 5400 RPM 5 TB disk for around 100 USD, or even better, two 2 TB disks which you could configure with software RAID.

  • ragebutt@lemmy.dbzer0.com · ↑5 · 6 days ago

    Hard drives can last a long, long time; I have test equipment with hard drives from the 90s that still runs fine. That said, when hard drives fail, they tend to fail quickly.

    I run a 15-drive NAS. You’ll often see a few SMART errors one day, then total drive failure the next. Sometimes a drive fails completely without any SMART warning, especially if it’s that old. For that reason I try to retire drives from my NAS before they fail, when they hit a 7-year service life (which is pretty long, but my NAS is just a home-server thing).

    • Elise@beehaw.org · ↑1 · 6 days ago

      Do you only count active years as service life? I have a drive I hadn’t used for years, and luckily it still worked fine with no data loss after a couple of years on the shelf, but I’m not sure if I should count those years toward the max of 7. Also, it’s a NAS drive, not the standard stuff.

      • ragebutt@lemmy.dbzer0.com · ↑2 · 6 days ago

        That’s a pretty good question. I’ve never had it come up, though; every drive in the NAS is purchased and thrown straight in there. Although, now that I think about it, I don’t think I’ve ever bought a brand-new drive for my NAS. I only buy refurbs from places that decommission server drives, so I guess my “years” are inflated by at least 2–3. Maybe I should adjust that number down! Although it’s been fine for years, to be fair.

        • Elise@beehaw.org · ↑1 · 6 days ago

          Do you use RAID? I just have one drive right now and am wondering if I could get a second one and put them in RAID without accidentally wiping the current one. I guess that would mitigate any failures.

          • ragebutt@lemmy.dbzer0.com · ↑2 · 6 days ago

            Yeah, I have a 15-drive array.

            You can use RAID 1, which is basically keeping a constant copy of the drive. A lot of people don’t do this because they want to maximize storage space, but if you only have a 2-drive array it’s probably your safest option.

            It’s only at 3 drives (a 2-drive array + parity) that you start to gain storage efficiency. You’re still sacrificing the space of an entire drive, but now you get double the usable capacity, and it’s more resilient overall because the data is spread over multiple drives. It costs more, though, because obviously you need more drives.
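A toy illustration of why parity lets an array survive one failure (real RAID5 stripes data and rotates parity across all the drives; this only shows the core XOR idea):

```python
# Each parity byte is the XOR of the corresponding bytes on the data drives,
# so any single missing drive can be recomputed from the survivors.
drive_a = bytes([1, 2, 3, 4])
drive_b = bytes([9, 8, 7, 6])
parity = bytes(a ^ b for a, b in zip(drive_a, drive_b))

# Simulate losing drive_a and rebuilding it from drive_b plus parity:
rebuilt_a = bytes(b ^ p for b, p in zip(drive_b, parity))
assert rebuilt_a == drive_a
print("rebuild ok")
```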

            Keep in mind that none of these are backup solutions, though. It’s true that when a drive dies in a RAID array you can rebuild the data from the other drives, but it’s also true that this operation is extremely stressful and can lead to the death of the array. E.g. in RAID 1, a single drive dies, and while rebuilding onto a new drive the second drive, which held the only copy of your data, starts having sector corruption; or in RAID 5, one of the 3+ drives dies, and while you rebuild from parity another drive dies for similar reasons. These drives are normally only accessed occasionally, but a rebuild basically seeks to every sector on the drive if you have a lot of data, putting it under heavy read load for a very long time (like days), especially with very large modern drives (18, 20, 24 TB).

            So either be okay with your data going “poof”, or back it up as well. When I got started I was okay with certain things going “poof”, like pirated media, and would back up essential documents to cloud providers. That was really the only feasible option, because my array is huge (about 200 TB, with about 100 TB used). But now I have tape backup, so I back everything up locally, although I still back up critical documents to Backblaze. It depends on your needs. I’m very strict about not being tied into Google, Apple, Dropbox, etc., and my media collection isn’t simply stuff I can re-torrent; it’s a lot of custom media I’ve put together into the “best” version for my taste. But setting something like this up either takes a hefty investment or, if you’re like me, years of trawling e-waste/recycling centers and decommission auctions (and it’s still pricey then, but at least my data is on my server and not Google’s).

            • Elise@beehaw.org · ↑1 · 6 days ago

              Hmm. Yeah, I’m thinking of keeping my operation lean and simple, with an online copy. One issue I’ve noticed is that sometimes files just get corrupted; perhaps due to a radiation event? A parity drive could solve that, but I want something simpler. I’m thinking just a tar with a hash, and then storing multiple copies. What do you think?
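The “tar with hash” idea could look something like this (a sketch; the per-year layout and file names are just examples):

```python
# Sketch of "tar plus hash": archive a directory, record its SHA-256 next to
# it, and later verify any copy against that hash. Paths are examples only.
import hashlib, os, tarfile, tempfile

def sha256_of(path):
    """Hash a file in chunks so large archives don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in for a year's worth of files:
    data_dir = os.path.join(tmp, "2024")
    os.makedirs(data_dir)
    with open(os.path.join(data_dir, "notes.txt"), "w") as f:
        f.write("important\n")

    # Archive the directory and store the hash alongside it:
    archive = os.path.join(tmp, "2024.tar")
    with tarfile.open(archive, "w") as tar:
        tar.add(data_dir, arcname="2024")
    digest = sha256_of(archive)
    with open(archive + ".sha256", "w") as f:
        f.write(digest + "\n")

    # Later, on any copy of the archive: recompute and compare.
    assert sha256_of(archive) == digest
    print("archive verified")
```

Each copy you store (local, off-site, online) can then be verified independently against the recorded hash.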

              • ragebutt@lemmy.dbzer0.com · ↑2 · 6 days ago

                Bitrot sucks

                ZFS protects against this. It has historically been a pain for home users, but the recent raidz expansion feature has made things a lot easier, as you can now expand vdevs and grow an array without doubling the number of disks.

                That makes it a potentially great option for someone like you who is just starting out, though it still requires a minimum of 3 disks and the associated hardware. Sucks for people like me, though, who built arrays long before ZFS had this feature! It was upstreamed less than a year ago, so good timing on your part (or maybe bad; maybe it doesn’t work well? I haven’t read much about it, to be fair, but from the little I have read it seems to work fine, and they worked on it for years).

                Btrfs is also an option for similar reasons, as it has built-in protection against bitrot. If you read up on this there’s a lot of debate about whether it’s actually useful or dangerous; FWIW, the consensus seems to be that for single drives it’s fine. My setup has a separate RAID 1 array of 2 TB NVMe drives, used as much higher-speed cache/working storage for the services that run. E.g. when a torrent downloads it goes to the NVMe first, since that storage is much easier to work with than the slow rotational drives (made even slower by being in a massive array); later, in the middle of the night, the file is moved to the large array for storage. Reading from the array is generally not an intensive operation, but writing to it can be, and a torrent that saturates my gigabit connection sometimes can’t keep up (nor can other operations that aren’t internet-dependent, like muxing or transcoding a video file). Anyway, that NVMe array runs Btrfs and has had 0 issues. That said, I personally wouldn’t recommend Btrfs for RAID 5/6, and given the nature of this cache array I don’t care at all about the data on it.

                My main array uses XFS, which doesn’t protect against bitrot. What you can do in that scenario is what I do: once a week I run a plugin that checksums all new files and verifies the checksums of old ones. If a checksum doesn’t match, it warns me, and I can restore the invalid file from backup and investigate the cause (SMART errors, a bad SATA cable, an ECC problem with RAM, etc.). The upside of the XFS array is that I can expand it very easily and storage is maximized: I have 2 parity drives, and at any point I can simply pop in another drive and extend the array. That wasn’t an option with ZFS until about 9 months ago. It’s a relatively “dangerous” setup, but the array isn’t storing amazingly critical data, it’s fully backed up despite that, and it’s been going for 6+ years and has survived at least 3 drive failures.
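A minimal sketch of that kind of weekly checksum pass (the manifest file and layout here are made up, not the actual plugin): checksums for new files are recorded in one manifest, and known files are re-verified against it on each run.

```python
# Sketch of a checksum-manifest pass: one JSON manifest maps each relative
# path to its SHA-256. New files get added; existing files get re-verified.
import hashlib, json, os

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_tree(root, manifest_path):
    """Checksum new files, re-check known ones; return paths whose hash changed.
    (A changed hash is bitrot, or a legitimate edit; the tool just warns.)"""
    manifest = {}
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            manifest = json.load(f)
    changed = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            digest = file_sha256(os.path.join(root, rel))
            if rel in manifest and manifest[rel] != digest:
                changed.append(rel)
            manifest[rel] = digest
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return changed
```

Keeping the manifest (and a copy of it) outside the tree it describes avoids checksumming the manifest itself, and also answers where the checksums live: in one file next to the data, not per-file.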

                That said, my approach is inferior to Btrfs and ZFS, because with those I could revert to a snapshot rather than manually restoring from backup. One day I’ll likely rebuild my array with ZFS, especially now that raidz expansion is complete; I was basically waiting for that.

                As always, double-check everything I say. It’s very possible someone will reply and tell me I’m stupid and wrong for several reasons; people can be very passionate about filesystems.

                • Elise@beehaw.org · ↑1 · 5 days ago

                  Where do you store the checksums? Is it one for every file? I thought of just making a tar for each year, storing the hash next to it, and keeping a copy off-site.

  • xmunk@sh.itjust.works · ↑5 · 6 days ago

    You should get a new drive when yours breaks; it’s usually pretty obvious when that’s happening.

    You absolutely should ensure your important files are backed up though, even on a brand new drive.

  • Soapbox1858@lemm.ee · ↑3 · 6 days ago

    I’ve got a 300 GB WD VelociRaptor 10k RPM model that has been running almost non-stop in every computer I’ve built for the last 20 years. I only use it as an extension of my Steam library, though, so when it does die I won’t lose anything.