Wednesday, March 25, 2020

COVID-19: Finding a cure with Folding@Home

Got spare compute power and want to help research scientists find a cure for the virus?  Folding@Home is a distributed computing network where volunteers donate idle compute power by running a client that processes work units, which are like puzzle pieces of a much larger simulation.  The network has already grown larger than the world's seven fastest supercomputers combined.

How do you know if you are helping research the cure?  Check this link and look at your project ID.  I've stood up an 8-computer F@H farm and plan on adding another 2 computers once parts are delivered.  4 of these nodes are actually unraid VMs, each with its own GPU assigned, even though they are physically one server.  You can see a picture of the unraid server build for the GPUs here.  The 5 CPU-only nodes are below, minus my laptop; these are all systems that the single unraid server replaced.



I've used the Advanced Control tool (FAHControl) to add all of the nodes.  You need to enable remote access on each one first, since the client only allows local connections by default.  Once they are all added, the bottom of the window shows your total Points Per Day across all nodes.
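If you'd rather pull those numbers from a script instead of the GUI, here's a rough sketch of the idea: it asks each node's FAHClient command port (36330 by default, and only reachable once remote access is enabled) for queue-info and adds up the ppd fields.  The node IPs are placeholders and the reply parsing is a loose assumption about the client's output format, so treat it as a starting point, not a finished tool.

```python
#!/usr/bin/env python3
# Hedged sketch: estimate total Points Per Day across several F@H nodes by
# querying each client's remote command port and scraping the "ppd" fields
# out of the queue-info reply. Hosts, port, and reply format are assumptions.
import re
import socket

NODES = ["192.168.1.10", "192.168.1.11"]  # placeholder node IPs
PORT = 36330                               # default FAHClient command port

def node_ppd(host: str) -> float:
    with socket.create_connection((host, PORT), timeout=5) as sock:
        sock.sendall(b"queue-info\n")
        sock.settimeout(2)
        data = b""
        try:
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
        except socket.timeout:
            pass  # stop reading once the client goes quiet
    text = data.decode(errors="replace")
    # Each work unit entry includes a field like "ppd": "123456.7"
    return sum(float(v) for v in re.findall(r'"ppd":\s*"([\d.]+)"', text))

if __name__ == "__main__":
    total = 0.0
    for host in NODES:
        ppd = node_ppd(host)
        total += ppd
        print(f"{host}: {ppd:,.0f} PPD")
    print(f"Total: {total:,.0f} PPD")
```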


I found that running 4 GPUs plus all of the VM vcores draws more than 1,100 watts and overloads my UPS.  GPUs fold much faster than CPU cores, so I've prioritized GPUs on the unraid server, which is why the 4 VM nodes don't have CPU slots (saving just enough juice to avoid overloading the UPS).  The 6 other machines are CPU-only, running on onboard video since I pulled their GPUs for the unraid server build.  In total, the unraid server pulls about 900 watts while 4 of the 6 CPU-only systems currently pull about 500 watts.

Update: You'll want to disable the Spectre and Meltdown microcode mitigations, as they slow down Intel-based systems by up to 30%.
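On the Linux-based nodes (including the unraid host), disabling the mitigations generally means adding mitigations=off to the kernel boot line and rebooting.  Here's a quick sketch for double-checking what the kernel actually applied afterwards; the /sys path is standard on recent kernels, though the exact status strings vary by CPU and kernel version.

```python
#!/usr/bin/env python3
# Hedged sketch: print the kernel's view of Spectre/Meltdown mitigation status
# on a Linux folding node (e.g. after adding mitigations=off to the boot line).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    # Each file holds a short status like "Mitigation: PTI" or "Vulnerable"
    print(f"{entry.name:20s} {entry.read_text().strip()}")
```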

What I've found is that the back-end servers are having bandwidth/connectivity issues, which causes the client to spend a long time waiting for work.  I created a few different scripts to force the client to retry downloads more frequently.  Here's the script I run locally on my main system when it gets stuck waiting for work.
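If you want a starting point for something similar, here's a rough sketch (not the exact script I use): it assumes the v7 client's local command port (36330 by default) is enabled, and simply toggles pause/unpause to push the slots into asking the work servers again.

```python
#!/usr/bin/env python3
# Hedged sketch: nudge a stuck local FAHClient into re-requesting work by
# toggling pause/unpause over its command port. The port (36330) and the
# pause/unpause commands come from the v7 client's remote interface; enable
# that interface first and adjust if your setup differs.
import socket
import time

def send_command(command: str, host: str = "127.0.0.1", port: int = 36330) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode() + b"\n")
        time.sleep(1)  # give the client a moment to act on the command

if __name__ == "__main__":
    send_command("pause")
    time.sleep(10)          # brief pause before asking for work again
    send_command("unpause")
```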



I also created a script that runs at startup on the dedicated folding machines; it loops, watching CPU and GPU usage, and restarts the client whenever both stay low.
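Here's a sketch of that loop logic.  The thresholds, the check interval, and the restart command (a Windows service named FAHClient here) are assumptions to adapt for your own nodes, and it needs psutil installed plus nvidia-smi on the GPU machines.

```python
#!/usr/bin/env python3
# Hedged sketch of the startup watchdog idea: if CPU and GPU usage stay low
# for a while, assume the client is stuck waiting for work and restart it.
# Requires: pip install psutil, plus nvidia-smi on GPU nodes.
import subprocess
import time

import psutil

CPU_IDLE, GPU_IDLE = 10.0, 5.0   # percent utilization treated as "idle"
IDLE_CHECKS = 10                 # consecutive idle samples before restarting
CHECK_INTERVAL = 60              # seconds between samples

def gpu_percent() -> float:
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True).stdout
        return max(float(line) for line in out.splitlines() if line.strip())
    except (OSError, subprocess.CalledProcessError, ValueError):
        return 0.0  # no NVIDIA GPU on this node; fall back to CPU-only check

def restart_client() -> None:
    # Assumed Windows service name; on Linux nodes this might be
    # "systemctl restart FAHClient" instead.
    subprocess.run(["net", "stop", "FAHClient"], check=False)
    subprocess.run(["net", "start", "FAHClient"], check=False)

idle_count = 0
while True:
    cpu = psutil.cpu_percent(interval=5)
    gpu = gpu_percent()
    idle_count = (idle_count + 1) if cpu < CPU_IDLE and gpu < GPU_IDLE else 0
    if idle_count >= IDLE_CHECKS:
        restart_client()
        idle_count = 0
    time.sleep(CHECK_INTERVAL)
```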

Tuesday, February 25, 2020

UNRAID: My venture into virtualized desktops

Unraid is a paid Linux distribution from Lime Technology, Inc. that uses the KVM hypervisor, Docker, and its own proprietary parity storage system to provide an all-in-one server platform that can do just about anything.  Inspired by Linus Tech Tips' videos, I decided to do a proof of concept (PoC) by virtualizing my own system.

After a bit of a learning curve, I worked out the issues and got a VM running with SSD, GPU, and USB pass-through working flawlessly.  After a few weeks' worth of research, I decided to pull the trigger and combine 5 systems into a new Unraid server build: 4 desktop gaming VMs, each with a dedicated USB port and GPU.  Their drives are virtual disks stored on my fastest M.2 NVMe drive.

In all its glory (with the tinted front glass removed for a better view):


The server also runs a dozen docker containers and a couple of other server VMs.  It literally replaces all of the computers in the house with a single system.  Here are the specs:
  • AMD Ryzen 9 3950X 3.5GHz 16-core CPU
  • Enermax Liqmax III 360mm AIO liquid cooler
  • Asus ROG Strix X570-E Gaming motherboard
  • 64GB Timetec (Hynix) DDR4-3600 RAM (4 x 16GB DIMMs)
  • Rosewill TOKAMAK 1200W Titanium PSU
  • Thermaltake Core P5 wall-mounted case
  • FebSmart 4-channel, 8-port USB card (four separate USB controllers for pass-through)
  • Storage:
    • 2 x M.2 NVMe drives (1.5TB)
    • 4 x SSDs (2.75TB)
    • 4 x 8TB Seagate green HDDs (32TB)
  • GPUs:
    • EVGA SC2 NVIDIA GTX 1080 Ti 11GB
    • EVGA SC NVIDIA GTX 1060 6GB
    • XFX AMD RX 570 8GB
    • Gigabyte AMD RX 550 2GB
  • 2 x PCIe x1-to-x16 USB riser extensions (one for the RX 550 GPU and one for the USB card)
  • 3 x 50ft HDMI 2.0 cables
  • 3 x 50ft USB 2.0 cables with powered repeaters in the middle
  • 4 x USB hubs
  • 200mm RGB cooling fan
  • RAM cooler

It's a beast and by far my most expensive build, coming in at around $5k.  I didn't buy all of the parts at once; the GPUs and disks were pulled from the systems they replaced.  I didn't need all of the SSDs, but since the motherboard has two M.2 slots and 8 SATA ports, I figured I might as well hook them all up.  The main components I had to buy new for the build cost just under $2k.




Here's what the main page looks like and how I've decided to allocate my drives.  The M.2 drives get the VMs and Docker containers, one SSD is for caching, and the others are for game-library shares.  As you can see, the main array is 24TB in size, with one of the drives used entirely for parity.  This is the main feature of unraid; you can have up to two parity drives per array if needed.  Everything is formatted BTRFS except the 240GB SSD, which is XFS for a swapfile (which hasn't been needed yet).  The NTFS drive will be reformatted to BTRFS and become a secondary game library (yes, we have over 1TB of games on Steam alone).

One of the cool things you can do when you have 4 gaming systems sharing a single piece of hardware is use network-shared drives for game storage.  That way you only have to install a game once, every user can access it, and you don't have duplicate copies of the installed files taking up extra space.  The game libraries are shared over virtual 100Gb Ethernet adapters on each of the VMs, so load times are lightning fast.  Steam natively supports installing to network shares, Epic Games has to be "tricked", and Blizzard games don't work at all, but I have another technique for saving disk space there: ref-linked virtual disks.
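A ref-link is a copy-on-write clone: on BTRFS, cp --reflink=always creates a new vdisk file that shares all of its unchanged blocks with the base image, so a pre-installed game library only costs extra space where each VM diverges from it.  Here's a minimal sketch of that idea, with hypothetical paths and file names.

```python
#!/usr/bin/env python3
# Hedged sketch: make copy-on-write ("reflink") clones of a base game-library
# vdisk on a BTRFS share, so each VM gets its own disk image while unchanged
# blocks are stored only once. Paths and file names are hypothetical.
import subprocess

BASE = "/mnt/user/domains/games-base.img"
CLONES = [f"/mnt/user/domains/games-vm{i}.img" for i in range(1, 5)]

for clone in CLONES:
    # cp --reflink=always fails loudly if the filesystem can't share extents,
    # which is safer than silently falling back to a full copy.
    subprocess.run(["cp", "--reflink=always", BASE, clone], check=True)
    print(f"cloned {BASE} -> {clone}")
```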

Looking at the VMs, you can see the breakdown of resources.  All of the CPUs are actually hyper-threaded vcores, which is why there are 32 in total, and the allocated vcores are isolated and dedicated to each VM.  The kids each get 4 vcores, 8GB RAM, and an AMD-based GPU.  The missus gets 6 vcores, 12GB RAM, and the GTX 1060.  I've found that 8 vcores and 20GB RAM with the GTX 1080 Ti is about on par with my old Intel i7-8700K 6-core CPU.  A good general rule of thumb when sizing VMs is to pair 1 vcore with every 2GB of RAM.  I might bump my VM to 10 vcores, but for now the remaining vcores are reserved for the unraid system and the Docker containers, which I've found really need the extra CPU.



Thanks to the water cooling, I was able to get a nice all-core overclock from 3.5GHz to 4.3GHz.  The main dashboard page looks like this:




I had to do some custom modding to get all 4 GPUs to fit.  The RX 550 only gets a single PCIe x1 lane, which is surprisingly enough bandwidth for it to run at full speed; 1080p gaming is fine on it.  I had to modify the GTX 1060 by removing some of the plastic from its housing so the PCIe extension card would fit next to it.






I also had to remove the PCIe bracket to mount the USB controller card internally, using the case's 4 front ports to connect to the card's USB headers.  I covered the back and sides of the card with tape to prevent it from shorting out against the case.  You can also see the 4 x 8TB drives and 4 SSDs just jammed in with all the cable management (mostly PCIe power connectors and SATA cables).



I had to mount the RX 550 GPU upside down using the horizontal GPU kit installed backwards with the PCIe extension hidden just out of view.


Finally, I installed VESA TV mounting hardware (rated for 150 pounds) and anchored the mounting rails into two studs using 4 x 3" drywall screws for extra security.  The whole unit with glass weighs around 60 pounds.




So far, I'm extremely happy with how this build turned out.  Overall, I think there is still some room for optimization.  I know I can push the overclock to at least 4.5GHz at the cost of some power efficiency, but I'd rather save on power and heat and have a solid system.  I would have preferred a TRX40 Threadripper base, but that would have increased costs by at least $1,500, and I don't really need more cores, just more PCIe lanes.  That said, I don't feel like the 4 GPUs are starved for bandwidth; the more powerful NVIDIA cards get x8 each, the RX 570 gets x2, and the RX 550 still runs fine with x1.

A larger E-ATX server board would have saved me from using PCIe extensions, but again, cost.  I also would have preferred to go with 128GB of RAM, but I'm only using 90% of the RAM right now.  I could also lean on the swapfile disk, but I'm not over-allocating RAM.  Audio over HDMI did require some tweaking (MSI interrupts) to keep it from getting choppy, but I figured that out.  I also had to adjust the Windows VM's real-time virtual clock settings in its config to stop it from using more CPU than the VM was actually utilizing:



The PCIe USB card is a huge pain in that sometimes only 2 or 3 of its ports show up, and a reboot or two is required to bring back the missing ports.  It only happens on reboot, so it's at least tolerable.  Surprisingly, with everything powered on and idling, the system draws only around 200 watts, and about 850+ watts when all four of us are gaming.

Another weird issue I'm having involves a Docker container that backs up the array to an online backup provider.  The container spawns many threads in the underlying host OS, which can cause latency issues on my main VM.  I've found a workaround for now (a cron job restarting the container every few hours), but I'd like to figure out how to prevent it.
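The workaround is nothing fancy: the host's cron just calls something like this every few hours (docker restart is the standard CLI command; the container name here is a stand-in for whatever docker ps shows on your system).

```python
#!/usr/bin/env python3
# Hedged sketch of the workaround: restart the backup container so its thread
# count resets. Meant to be called from the host's cron every few hours.
# The container name is hypothetical -- use the name shown in `docker ps`.
import subprocess

CONTAINER = "cloud-backup"  # hypothetical container name

subprocess.run(["docker", "restart", CONTAINER], check=True)
```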

I don't think I'll ever go back to separate gaming computers, especially since I've cabled up the house for the 4 workstations.  I absolutely love all the RGB and the wall-mounted case; it's a wonderful showpiece.  It also saves power (almost $600/year by my estimate) and leaves room for upgrades (I will likely swap out GPUs and eventually go to 128GB of RAM).  I would love to add water cooling to my GPU, but it's a low priority; I'll probably do it when I swap the card out in my next upgrade.

More importantly, this was the first time I moved my storage OS to a purely Linux solution.  The old VM/storage server was running Windows Server 2016 with Hyper-V VMs and Storage Spaces for the array.  Unraid is definitely faster in terms of disk I/O, and Docker containers are more flexible and easier to configure than VMs.  I also really like how recoverable the data is if the array breaks, which happens sometimes.

If you are a hard-core enthusiast like me, I recommend this project.  It's not for the faint of heart, but it's really not as bad as trying to build systems in the early days of Bitcoin/altcoin mining.  Unraid is a wonderful OS, the forum support is pretty great, and there are lots of helpful videos from SpaceInvaderOne.