• cute_noker@feddit.dk · ↑4 · 2 days ago

    That shit will overheat and burn down. You will have to settle for a picture of my dick… Which is very big… I have a big penis… Not even close to being small… Definitely not the size of a french fry

  • Lyra_Lycan@lemmy.blahaj.zone · ↑42 · 3 days ago

    Plot twist: they’re 256MB drives from 2002 and total… 61.44GB. Still impressive, nvm. If they were the largest available currently (36TB) they’d total 8.64PB
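    A minimal sketch of that arithmetic, assuming the 240-drive count (12 drives per layer × 20 layers) mentioned further down the thread:

    ```python
    # Hypothetical sketch: 12 drives per layer x 20 layers = 240 drives (assumed count).
    DRIVES = 12 * 20

    def total_gb(per_drive_gb: float) -> float:
        """Raw capacity in GB for the assumed drive count."""
        return DRIVES * per_drive_gb

    print(f"{total_gb(0.256):.2f} GB")         # 256 MB drives -> 61.44 GB
    print(f"{total_gb(36_000) / 1e6:.2f} PB")  # 36 TB drives  -> 8.64 PB
    ```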

    • thisbenzingring@lemmy.sdf.org · ↑20 · 3 days ago

      that array is a POS. Changing failed drives in that would be a major pain in the ass… and the way it doesn’t dissipate heat, those drives probably failed pretty regularly.

      • EtherWhack@lemmy.world · ↑10 · 3 days ago

        JBODs like those are actually pretty common in data centers, though, and are popular with cold-storage configs that don’t keep drives spun up unless needed.

        For the cooling, they usually use the pressure gradient between what are called cold and hot aisles to force air through the server racks. The pressure also tends to be strong enough that passive cooling can be used, and any fans on the hardware are there more to direct the airflow than to create it.

      • Fuck u/spez@sh.itjust.works · ↑13 · 3 days ago

        If you’re paying per U of rack space for colocation, then maximizing storage density is going to be a bigger priority than ease of maintenance, especially since there should be multiple layers of redundancy involved here.

        • thisbenzingring@lemmy.sdf.org · ↑10 ↓1 · 3 days ago

          You still have to replace failed drives; this design is poor.

          I work in a datacenter that has many drive arrays; my main direct-attached storage array has 900 TB with redundancy. I have been pulling old arrays out, and even some of the older ones are better than this if they have front-loading drive cages.

          There are no airflow gaps in that thing… I bet the heat it generates is massive

          • Agent641@lemmy.world · ↑3 · 2 days ago

            They probably wait for like 20% of the drives in an array to fail before taking it offline and swapping them all out.

            Also, this doesn’t sound like the architect’s problem, sounds like the tech’s problem 🤷

    • Bad Jojo@lemmy.blahaj.zone [M] · ↑9 · 3 days ago

      The interface is SATA, not EIDE or SCSI, so I’m going to guess 2 TB minimum, but I’d bet they are more than likely 8 TB drives.

    • Ging@anarchist.nexus · ↑5 · 3 days ago

      This is a helluva range; do any wizards have a best guess at how much total disk space we’re looking at here?

    • The average woman’s height is 1.588 m and the average woman’s shoulder width is 0.367 m.

      Assuming that this average woman fits exactly in this photo, the photo’s “area” would be 1.588 m × 0.367 m = 0.583 m².

      Assuming the pixel format is RGB with 8 bits per colour channel, each pixel in the photo would take 3 bytes. 2 PB is equal to 2 × 10¹⁵ B, which divided by 3 B per pixel means there could be at least 6.67 × 10¹⁴ pixels in this photo. In reality, images are usually compressed, so in practice you could fit even more pixels; how many more depends on the image and the desired image quality.

      To calculate the area of each pixel, divide the photo’s area by the number of pixels. This gives 0.583 m² / 6.67 × 10¹⁴ = 8.74 × 10⁻¹⁶ m² for each pixel. To get the side length of each pixel, take its square root to get 2.96 × 10⁻⁸ m = 29.6 nanometres!

      Dividing the width and height in metres by the side length of each pixel gives (0.367, 1.588) m / 2.96 × 10⁻⁸ m = an image resolution of roughly 12,412,583 × 53,708,941 pixels!

      When it comes to feature size, the bottleneck isn’t actually the pixel size. Assuming the image is taken in visible light, the shortest wavelength visible to the human eye is about 380 nm, so increasing the resolution beyond that point is useless.

      In such a photo, features as small as 380 nm can be identified. To quantify the resolution at which these features can be seen, define an “effective pixel” to be a pixel with a side length of 380 nm. The actual pixels in the image aren’t relevant at this point.

      Individual skin cells can be identified, being 30 μm / 380 nm = 79 effective pixels wide. With similar calculations, blood cells can be identified at a width of about 18 effective pixels, and you might even be able to identify individual bacteria: an E. coli bacterium has a length of 2 μm, which is about 5 effective pixels.
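      A small Python sketch of the same back-of-envelope numbers, under the same assumptions (1.588 m × 0.367 m frame, 2 PB of storage, 3 bytes per RGB pixel; the 7 μm blood-cell size is my assumption to match the 18-effective-pixel figure):

      ```python
      import math

      # Back-of-envelope reproduction of the numbers above (all values assumed).
      height_m, width_m = 1.588, 0.367            # average height and shoulder width
      area_m2 = height_m * width_m                # ~0.583 m^2

      pixels = 2e15 / 3                           # 2 PB at 3 bytes per RGB pixel ~ 6.67e14
      pixel_side_m = math.sqrt(area_m2 / pixels)  # ~2.96e-8 m = 29.6 nm

      print(f"pixel side: {pixel_side_m * 1e9:.1f} nm")
      print(f"resolution: {width_m / pixel_side_m:,.0f} x {height_m / pixel_side_m:,.0f} px")

      # "Effective pixels" at the 380 nm visible-light limit
      for name, size_m in [("skin cell", 30e-6), ("red blood cell", 7e-6), ("E. coli", 2e-6)]:
          print(f"{name}: ~{size_m / 380e-9:.0f} effective pixels across")
      ```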

      • PastafARRian@lemmy.dbzer0.com · ↑6 · edited · 2 days ago

        A few comments, as yours is close and I’m too lazy to do the write-up myself, so I’m bumming off your work (quick numbers sketched below):

        • The Nyquist frequency is half the sampling rate, so your effective pixel for visible light would actually be 380 nm / 2.
        • An electron microscope can capture down to 2 nm, so 2 PB actually is a limiting factor there! 29.6 nm would definitely be possible.
        • Image compression ratio (assuming PNG) would probably be 2-4, so the actual pixel side would be somewhere between 29.6 / √2 and 29.6 / 2 nm.
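        A tiny sketch of how those two corrections shift the numbers, assuming the compression point means the pixel side shrinks by √(compression ratio):

        ```python
        import math

        # Assumed interpretation: 2-4x compression fits 2-4x more pixels in the same
        # 2 PB, so the pixel side shrinks by sqrt(ratio); Nyquist sampling of 380 nm
        # features needs pixels of 380 / 2 = 190 nm or smaller.
        uncompressed_side_nm = 29.6

        print(f"Nyquist-limited pixel: {380 / 2:.0f} nm")
        for ratio in (2, 4):
            print(f"{ratio}x compression -> pixel side ~{uncompressed_side_nm / math.sqrt(ratio):.1f} nm")
        ```
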
      • TonyTonyChopper@mander.xyz · ↑5 · 2 days ago

        If you took the image with an electron microscope you could easily get better than 30 nm resolution. It would be in black and white, though. And you would need to cover your mom in carbon or gold. And expose her to a vacuum. For biological samples they typically freeze them so they don’t boil in there.

    • da_cow (she/her)@feddit.org · ↑2 · edited · 2 days ago

      OK, so assuming that each hard drive has a size of 16 TB, we have 12 hard drives per layer and 20 layers, so in total we have

      12 × 20 × 16 TB = 3840 TB of storage.

      This is the same as 3840 × 10¹² bytes.

      In RGB a pixel has 3 values (red, green and blue), each ranging from 0 to 255, so 256 possible values in total. A single byte can store 256 distinct values. This means that storing a single pixel takes 3 bytes.

      3840 × 10¹² / 3 = 1280 × 10¹² pixels that we can store.

      To get the maximum length of one side of the image we have to take the square root of this so

      √(1280 × 10¹²) = 35,777,087

      So if I didn’t miscalculate, this server could store a single image of approximately 35,777,087 × 35,777,087 pixels in RGB encoding.

      This also assumes that no other space on the server gets used and that we can utilize the full 16 TB of each hard drive. It is probably impossible to view an image of this size, but you could store it.
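      The same estimate in a few lines of Python, assuming 16 TB drives and ignoring filesystem overhead:

      ```python
      import math

      # Assumed: 12 drives per layer x 20 layers of 16 TB drives, 3 bytes per RGB pixel.
      total_bytes = 12 * 20 * 16 * 10**12   # 3840 * 10^12 bytes
      pixels = total_bytes // 3             # 1280 * 10^12 pixels
      side = math.isqrt(pixels)             # largest square image that fits

      print(f"{total_bytes / 10**12:.0f} TB -> about {side:,} x {side:,} pixels")
      # -> 3840 TB -> about 35,777,087 x 35,777,087 pixels
      ```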