I would like to build this for the office. We are a robotics company, and we use advanced algorithms, including AI, to make autonomous robots. The robots generate a lot of data that we want to store, and once the data is stored, we have powerful in-house servers with high computing power that can use it for AI training and simulations. We use SSDs as cache on the computing servers, but we still need fast access to the data on the storage pod, so our requirements are decent fault tolerance and maximum speed (ideally 10 Gbps). I built the computing servers with high CPU and GPU resources, and they work very well, but I don't have much experience building data servers. I was planning to use FreeNAS and ZFS for the RAID configuration. If you have any other comments or suggestions, I would really appreciate them.

Yeah, I definitely think you should research SAS as a general technology - particularly HBAs and SAS expanders - since you'll want more raw I/O performance than a regular Backblaze storage pod can provide with SATA Port Multipliers alone. Backblaze gets a lot of its performance by splitting workloads and parity across 20 separate servers, but a single storage server (or a few of them) needs more per-server performance, which a SAS architecture makes much easier to achieve. I'm not sure what kind of budget you have in mind, where you're located, or whether you want new or used equipment, but 24-to-36-drive SuperMicro servers are pretty regularly available on eBay as used enterprise equipment (or brand new from SM). They usually come with SAS backplanes, drive caddies, and so on, and you can customize them to suit your needs, such as replacing the noisy power supplies with quiet ones. There's a ton of models out there, so do your research, but they're a good starting place that saves you from inventing a chassis from scratch or investing too much money. I do have to warn you, though: building a single storage pod is generally considered a bad idea.

I am curious why?
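Since the build above plans on FreeNAS and ZFS, the main design decision is the raidz vdev layout. Here is a minimal back-of-envelope sketch of the capacity trade-offs, assuming a hypothetical 24-bay chassis with 4 TB drives (all figures are illustrative, not from this thread):

```python
# Rough capacity math for candidate ZFS layouts on an assumed 24-bay chassis.
# Drive size (4 TB) is a placeholder; plug in your own numbers.

def raidz_usable_tb(total_drives, drives_per_vdev, parity, drive_tb):
    """Approximate usable capacity of a pool built from equal raidz vdevs.

    parity: 1 for raidz1, 2 for raidz2, 3 for raidz3.
    Ignores ZFS metadata and slop-space overhead, so treat the result
    as an upper bound, not a quote.
    """
    vdevs = total_drives // drives_per_vdev
    data_drives = vdevs * (drives_per_vdev - parity)
    return data_drives * drive_tb

# Two 12-drive raidz2 vdevs: survives any 2 drive failures per vdev.
print(raidz_usable_tb(24, 12, 2, 4))   # 80 TB usable out of 96 TB raw

# Four 6-drive raidz2 vdevs: more vdevs means more IOPS, less capacity.
print(raidz_usable_tb(24, 6, 2, 4))    # 64 TB usable
```

Wider vdevs give more usable space; more, narrower vdevs give more IOPS and faster resilvers, which matters if you're chasing the 10 Gbps target.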
Our storage pods use SATA Port Multiplier Backplanes (the Sunrich ones) for a couple of reasons: they're cheap (in bulk, for us at least), they're something we're familiar with, and at the time we designed the very early pods, they were really the only thing available at a reasonable price. Performance for our needs is fine - they share bandwidth between all 5 drives on the multiplier, so there can be some saturation issues for some workflows, but ours is distributed enough for that not to be a problem. We did try SAS expanders in the "direct wire" 4.0 Pod. Performance-wise they were very good, but they were expensive (at the time, the card was $700?) and we had some reliability issues.

(Now for my personal opinion and advice.) In 2020, if I were doing a DIY storage-pod-style setup and wanted 24 drives, I'd go with a PCIe LSI HBA connected by an SFF-8087 to SFF-8087 cable to a PCIe SAS expander with at least 6 SFF-8087 connectors, and then an SFF-8087 to 4x SATA breakout cable for each group of 4 drives. Alternately, an HBA with at least 6 SFF-8087 connectors will get you to 24 drives. In terms of RAID levels and arrangement, it really depends what you want out of the system: reliability, fault tolerance, low downtime, maximum speed, maximum value? I use unRAID at home because I want some redundancy but mostly just ease of use and value, though I'd also consider something running ZFS, as I tend to like software solutions rather than hardware. For good home server build information and research, check out, and /r/datahoarder.

This is very informative to me. I'll dig deeper into the links and proposed solution.
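The bandwidth-sharing point about port multipliers is easy to check with back-of-envelope arithmetic. A quick sketch, assuming generic figures (roughly 600 MB/s of usable SATA III link bandwidth and a ~180 MB/s sequential hard drive; neither number comes from this thread):

```python
# Back-of-envelope check on the port-multiplier saturation point mentioned
# above. All constants are generic assumptions, not Backblaze measurements.

SATA3_LINK_MBPS = 600   # ~6 Gb/s SATA III link minus 8b/10b encoding overhead
HDD_SEQ_MBPS = 180      # typical large 7200 rpm drive, sequential transfer

def per_drive_share(link_mbps, drives):
    """MB/s available per drive when all of them stream through one link."""
    return link_mbps / drives

# 5 drives behind one port multiplier, as in the pod backplanes.
share = per_drive_share(SATA3_LINK_MBPS, 5)
print(share)                     # 120.0 MB/s per drive
print(share < HDD_SEQ_MBPS)      # True: the shared link saturates before
                                 # the drives do under sequential load
```

That is why a distributed, many-server workload tolerates port multipliers fine, while a single server pushed hard (the 10 Gbps goal above) benefits from SAS HBAs or expanders with more uplink bandwidth.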