The Experiment Begins
In mid-December I posted about my then new build and my plans to use it to compare projects based on their payout. After quite a lot of trouble along the way (see here), I have finally started the experiment!
Parts in my build:
- AMD Threadripper 1950X: 16 cores / 32 threads at 3.4GHz
- NVIDIA GeForce GTX 1080
- NVIDIA GeForce GTX 1060 3GB
- 24GB DDR4 RAM
I have not overclocked the CPU or the GPUs, as doing so would dramatically increase the power consumption as well as the heat output and noise level.
So, what is the experiment? Initially, I planned on running 8 projects on the CPU with 4 threads each. Since not all projects offer an option to limit the number of cores they are allowed to use, I had to modify the experiment and simply attach all projects with the same resource share. On the one hand, this means the comparison data only becomes reliable over longer timespans, as BOINC now runs a lot of WUs from a few projects and only switches to different projects after those are completed. On the other hand, it allows me to attach more than 8 projects, as I am no longer limited to an integer divisor of 32.
I will not include GPU projects in this experiment, since GPU usage depends heavily on how many CPU threads I leave available for them. Amicable Numbers and GPUGrid are the only GPU projects I am running at this point, and they are not part of the comparison.
The experiment now includes pretty much all CPU-only projects available at the moment. I had to leave out yafu, SRBase, and TN-Grid, as those projects did not accept new users, and LHC@home and Cosmology@home, as those only produced errors.
Attached now are the following CPU projects: Citizen Science Grid, DrugDiscovery@Home, Sourcefinder, ODLK1 (soon to be whitelisted), NFS@Home, NumberFields@home, theSkyNet POGS, Rosetta@home, Universe@Home, VGTU project@home, World Community Grid, yoyo@home.
As some projects do not provide enough WUs to actually fill their share of 1/12 of the total CPU time, I am looking for an alternative way to compare their runtime to the GRC they reward. I have thought about somehow collecting the data from each project's task page, but I don't know how to do that as of now. Any ideas are welcome!
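One direction I am considering is a small script that reads the results table each BOINC project publishes on its website. Below is a rough Python sketch of what I have in mind, not something I have tested against the projects: it assumes the project runs the standard BOINC server code with a results.php page, and the base URL, user ID, and the position of the CPU-time column are all placeholders I would have to check for every project.

```python
# Rough sketch: sum the CPU time listed on a BOINC project's results page.
# Assumes the standard BOINC server layout (results.php?userid=...) and that
# CPU time sits in a fixed table column -- both must be verified per project.
import requests
from bs4 import BeautifulSoup

def total_cpu_time(base_url, user_id, cpu_time_column=7):
    """Sum the CPU time (seconds) of the tasks shown on one results page."""
    url = f"{base_url}/results.php?userid={user_id}"
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    total = 0.0
    for row in soup.find_all("tr"):
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) <= cpu_time_column:
            continue  # header row or some unrelated table
        try:
            total += float(cells[cpu_time_column].replace(",", ""))
        except ValueError:
            pass  # cell is not a number (e.g. "In progress")
    return total

# Hypothetical usage -- project URL and user ID are placeholders:
# print(total_cpu_time("https://www.exampleproject.org", 12345))
```

Pagination and projects with non-standard task pages would still need extra handling, so treat this only as a starting point.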
If you have any suggestions to improve the data collection, please leave a comment. I will now let the projects run for about one or two weeks. This should guarantee roughly equal CPU time spent on each of the 12 projects, and it will be interesting to see the results, for me and hopefully for you too!
Edit: yafu, SRBase, and TN-Grid were added thanks to @parejan.
Sounds cool! One thing though -- Are you sure the BOINC client really does share resources evenly among projects (measured by total CPU time per project)? Maybe it would be more accurate to compare projects one by one?
At any rate, answering questions surrounding resource allocation for multiple projects is certainly worthwhile in and of itself! I'm interested to hear others' thoughts on this.
I would have preferred to allocate a fixed number of cores to each project (some projects allow that), but that was not possible for enough of them unfortunately. The best approach now would probably be to compare CPU time somehow. I would really like to write a program that extracts it from the projects' task pages, but I don't even know where to start (chemistry programming lectures don't teach you how to extract information from the internet...). One local alternative I could try is sketched at the end of this comment.
Edit: Concerning the point of doing one project at a time: I don't want to do that, as I would have to run each project for at least a few days, let's say 4. Doing that for 12 projects would take 48 days. In that time the competition might already have changed dramatically, and the comparison would suffer I guess.
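Here is that sketch. If I remember correctly, the BOINC client itself writes a job_log_<project>.txt file per project into its data directory, with one line per completed task containing a `ct` field that holds the final CPU time. The field layout and the data directory path below are from memory, so they are assumptions I would still have to verify on my machine.

```python
# Rough sketch: sum CPU time per project from the BOINC client's local job logs.
# Assumed line format (from memory, needs checking):
#   <unix_time> ue <estimate> ct <cpu_time> fe <flops_estimate> nm <task_name> et <elapsed>
from pathlib import Path

BOINC_DATA_DIR = Path("/var/lib/boinc-client")  # assumed Linux default, adjust as needed

def cpu_time_per_project(data_dir=BOINC_DATA_DIR):
    """Return a dict mapping each project's log name to its total CPU seconds."""
    totals = {}
    for log_file in data_dir.glob("job_log_*.txt"):
        total = 0.0
        for line in log_file.read_text().splitlines():
            tokens = line.split()
            if "ct" in tokens:
                try:
                    total += float(tokens[tokens.index("ct") + 1])
                except (IndexError, ValueError):
                    pass  # skip malformed lines
        totals[log_file.stem.replace("job_log_", "")] = total
    return totals

if __name__ == "__main__":
    for project, seconds in sorted(cpu_time_per_project().items()):
        print(f"{project}: {seconds / 3600:.1f} CPU hours")
```

If that works, it would give me the per-project CPU time without touching any website at all.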
That's fair. I don't know an easy way of tracking CPU time either. But yeah, anyway, the BOINC manager should at least do a roughly decent job of balancing the workload.
Use the following invitation codes to connect to the projects:
All projects accept new users.
Oh great, thank you! Almost all of them are added already.