• I eat words@group.lt · 10 months ago

    Well, this is probably PR, since no system exists (nor can one be built) that has 100% uptime. Not to mention that network engineers rarely work with servers :)

    • PoTayToes@sh.itjust.works · 10 months ago

      Not 100%, but 99.9%… IIRC Guild Wars 2’s servers had something like one actual outage in 11 years. They have pretty amazing infrastructure.

      • drislands@lemmy.world · 10 months ago

        Fun fact: uptime goals are measured in nines. For example, 99.9% is three nines of uptime. If that one outage lasted an entire day and they were never down at any other time, one day out of roughly 4,018 days works out to about 99.975% uptime, which would indeed be three nines (though not four).
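
        A quick back-of-the-envelope sketch of that arithmetic (plain Python; the one-day / 11-year figures come from the comments above):

        ```python
        from math import floor, log10

        # Figures from the thread: one full day of downtime across 11 years.
        downtime_days = 1
        total_days = 11 * 365.25

        uptime = 1 - downtime_days / total_days
        print(f"uptime: {uptime:.5%}")      # ~99.97511%

        # "Nines" counts the leading 9s of the uptime percentage,
        # i.e. floor(-log10(downtime fraction)).
        nines = floor(-log10(1 - uptime))
        print(f"nines of uptime: {nines}")  # 3
        ```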

        • fibojoly@sh.itjust.works · 10 months ago

          Yeah, my net admin colleagues explained that one to me a while back, because the bosses were making similarly uninformed demands (“this needs to never go down!” “Sure, here’s how much that costs”). It was very enlightening :)

          • KᑌᔕᕼIᗩ@lemmy.ml · 10 months ago

            I once got a serious response to that from a manager, who said he could go on eBay, buy his own servers, and do it himself. My response was to quit.

    • P03 Locke@lemmy.dbzer0.com · 10 months ago

      > Well, this is probably PR, since no system exists (nor can one be built) that has 100% uptime.

      Five-nines is entirely possible with enough resources and competent outage-minded engineers.
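
      For scale, here is a small sketch of the downtime budget each level of nines leaves (the numbers are straightforward arithmetic, not from the thread):

      ```python
      # Allowed downtime per year for N nines of uptime:
      # N nines means availability of 1 - 10**-N.
      MINUTES_PER_YEAR = 365.25 * 24 * 60

      for nines in range(1, 6):
          budget = MINUTES_PER_YEAR * 10 ** -nines
          print(f"{nines} nines -> {budget:9.2f} min/year allowed downtime")
      ```

      Five nines works out to roughly 5.26 minutes of downtime per year, which is why it takes “enough resources”.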

      • send_me_your_ink@lemmynsfw.com · 10 months ago

        Hell, five nines is doable with EKS, a single engineer, and thinking through your changes before pushing them to prod. Ask me how I know…

        • P03 Locke@lemmy.dbzer0.com · 10 months ago

          Operations like this don’t have a single engineer. The more complex the project, the higher the risk of complications and outages. It’s not a matter of “oh, just think harder about your changes”.

          Ask me how I know…

        • masterspace@lemmy.ca · 10 months ago

          Distinguishing between five nines and 100% is just semantics in any discussion outside of a contractual one.

    • Zeusbottom@sh.itjust.works · 10 months ago

      This is a software development business, which is a positively bananas trade no matter what’s getting written. And the smaller the business, the more hats the network guys wear. We work with everything from the server app down to the coffee machine fueling the devs. And 100% uptime isn’t the craziest demand I’ve heard. I’m sure Chujo is busier than a one-armed paper hanger with jock itch.

      At least he’s got money to throw at his hosting company. Scaling up would have been much slower in the old days.

      • Meloku@feddit.cl · 10 months ago

        I’m not versed in videogame network infrastructure, but wouldn’t it be enough to just have a load balancer and a couple of instances to ensure “100% uptime”? At least until all the instances and the load balancer itself decide to join a suicide pact, but more instances mean less chance of a critical event happening, no?
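
        The redundancy intuition can be made concrete; a minimal sketch, assuming independent failures (which real deployments only approximate):

        ```python
        # With n instances, each independently up with probability p,
        # at least one is up with probability 1 - (1 - p)**n.
        p = 0.99  # assumed per-instance availability, for illustration

        for n in (1, 2, 3):
            availability = 1 - (1 - p) ** n
            print(f"{n} instance(s): {availability:.6%}")
        ```

        The catch is exactly the “suicide pact” case: correlated failures (a shared load balancer, a shared region, a bad deploy pushed everywhere) break the independence assumption.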

        • Zeusbottom@sh.itjust.works · 10 months ago

          Depends on the cloud provider. AWS, as an example, offers multiple “availability zones” within each region, each backed by its own data centers rather than sharing one. If a customer needs HA, they’re encouraged to run their applications across separate availability zones. That means different subnets within the VPC, redundant LBs spread across those zones, and more.
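
          A hypothetical boto3 sketch of that “one subnet per AZ” layout (the region, VPC ID, and CIDR blocks are made-up placeholders):

          ```python
          import boto3

          ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

          # List the region's availability zones.
          zones = [z["ZoneName"]
                   for z in ec2.describe_availability_zones()["AvailabilityZones"]]

          # One subnet per AZ, so instances and LB nodes can be spread across zones.
          for i, zone in enumerate(zones[:3]):
              ec2.create_subnet(
                  VpcId="vpc-0abc12345def67890",  # placeholder VPC ID
                  CidrBlock=f"10.0.{i}.0/24",
                  AvailabilityZone=zone,
              )
          ```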

          There is also probably DNS-based global load balancing across different data centers.

          That’s just the hosting infrastructure. I’m sure Chujo works on the office LAN as well. He might wear the infosec hat also, which means he’s up to his eyeballs in firewall policy.

          I don’t envy my brethren in software development orgs. Been there, done that, got that t-shirt long ago.