How do you decide how much credit a work unit is worth? Points are determined by the performance of a given machine relative to a benchmark machine. Before putting out any new work unit, we benchmark it on a dedicated 2.8 GHz Pentium 4 machine with SSE2 disabled (more specifically, as reported by /proc/cpuinfo on Linux: vendor_id : GenuineIntel, cpu family : 15, model : 2, model name : Intel(R) Pentium(R) 4 CPU 2.80GHz, stepping : 9, cpu MHz : 2806.438, cache size : 512 KB). This machine runs Linux, so all WUs are benchmarked with the Linux core.
We plug the results of this into the following formula:
points = 110 * (daysPerWU)
where daysPerWU is the number of days the benchmark machine took to complete the unit. This equation was chosen so that points for previous Gromacs WUs match the previous point system. The upshot is that Tinker WUs are worth more than they were before the new points were set up (i.e. before April 2004).
Why are some projects given significantly more points than others? Certain projects require substantially more donor computer resources than others, whether in terms of disk space, network transfer, or RAM used. By default, these work units are given out only to clients that opt in to request them. To reward those donors for contributing resources beyond the typical client, we currently give bonus points for these larger work units.
How big are bonus points? Currently the bonus points are a 50% increase over the standard benchmark point determination (described above). Please note that this value is subject to change.
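The formula and bonus rule above can be sketched in a few lines of Python. This is a minimal illustration of the stated rules, not Stanford's actual accounting code; the function and constant names are my own.

```python
# Sketch of the benchmark-based points formula described above.
# days_per_wu is the wall-clock days the benchmark machine
# (2.8 GHz Pentium 4, SSE2 disabled) took to finish the work unit.

BASE_RATE = 110.0        # points per benchmark-day, from the formula above
BONUS_MULTIPLIER = 1.5   # the 50% bonus for large WUs (subject to change)

def wu_points(days_per_wu: float, large_wu: bool = False) -> float:
    """Points credited for one work unit under the rules described above."""
    points = BASE_RATE * days_per_wu
    if large_wu:
        # Resource-heavy WUs get the 50% bonus described above.
        points *= BONUS_MULTIPLIER
    return points

# A WU the benchmark machine finishes in 2 days:
print(wu_points(2.0))        # 220.0
print(wu_points(2.0, True))  # 330.0
```

So a unit that takes the benchmark box two days is worth 220 points, or 330 with the large-WU bonus applied.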
"You want to tempt the wrath of the whatever from high atop the thing?" --Toby, West Wing
Guru didn't mention it, so I will. Our Folding@Home distributed computing group, helping with studies on various illnesses and ailments, has now moved into the top 1000 of all groups (over 40 thousand teams).
You can join, if you want to help out. It just takes a little processor power and you might be part of curing, say Alzheimer's disease. How cool would that be?
Just click on the F@H link in the menu at the top of the page.
We'll be back right after order has been restored here in the Omni Center.
Congrats to all team members on making the top 1k, and Guru will break into the top 10k individuals in the next week or so.
A quick question to Guru that may be of interest here: for the data you're using to drive the F@H page, are you picking up the text files off vspx27 every three hours and processing those, or getting specific team data another way?
I'm interested to perform some statistical analysis and forecasting, but I don't want to download that amount of data every time I want a refresh.
edit: spell checker doesn't pick up an 'an' when I meant to say 'and'.
What I was thinking of doing was messing around with the user data from our team to see whether I could come up with a more accurate forecasting model. Many of the stats sites' forecasts seem to be based on the seven-day totals, which seems a bit choppy for our style of users.
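One smoother alternative to a rolling seven-day total is an exponential moving average of daily point production. This is just a hedged sketch of the idea; the alpha value is an arbitrary assumption, not anything the stats sites actually use.

```python
# Sketch: EMA-smoothed estimate of a team's daily point production,
# as a less choppy alternative to seven-day totals. alpha controls how
# fast the estimate reacts to recent days (0.2 is an arbitrary choice).

def ema_forecast(daily_points, alpha=0.2):
    """Return an exponentially weighted estimate of daily production."""
    est = float(daily_points[0])
    for p in daily_points[1:]:
        est = alpha * p + (1 - alpha) * est
    return est

# A burst of 300 after steady 100s only nudges the estimate upward:
print(ema_forecast([100, 100, 100, 300]))
```

Each day's estimate is a blend of the new observation and the old estimate, so one unusually big (or empty) day doesn't swing the forecast the way it swings a seven-day window.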
I have bandwidth considerations, so I didn't want to have to pull down the complete text files (which I believe can be quite large) just to strip out our team's stuff. Seeing as you had already done team-specific stats, I was hoping that some sort of web service was in use from Stanford, and that I was just too stupid to find it.
If you had a data set that you wanted (each hour for each person for the team or something) I could probably export it to a text file after I'm done. I don't think that would be very difficult.
Thanks Guru. What I think I'll do is mess around with the concept some more. I have a copy of daily_user_summary.txt, so I know what to expect data-wise. I think it's better for me to be certain that I can do what I'm trying to do before I start imposing effort on others.
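For anyone else wanting to strip one team's rows out of a local copy of daily_user_summary.txt, something like the sketch below would do it. The whitespace-delimited column layout (name, score, WUs, team) is an assumption on my part; verify it against your own copy of the file before relying on this, and the team number here is a placeholder.

```python
# Hypothetical sketch: filter one team's rows out of a local copy of
# daily_user_summary.txt so analysis doesn't need the full file each time.
# ASSUMPTION: whitespace-delimited columns in the order name, score,
# WUs, team -- check this against your actual file.

def team_rows(path, team_id):
    """Yield (name, score, wus) tuples for the rows matching team_id."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 4 and parts[3] == team_id:
                yield parts[0], int(parts[1]), int(parts[2])
```

Run once per stats drop and append the results to a small per-team file; that keeps the repeated downloads (and the disk churn) to a minimum.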
My spare time and headspace can be somewhat sporadic, so it may be a while before I have something tangible and am in a position to regularly pick up a text data file. I'll speak up if/when I get to that point and we can resume the dialog, piggybacking data off you guys.
On an unrelated note, was the user "thecubfan" actually "thecubsfan" who had a typo when setting up a new machine?
FYI... over on Something Awful there was a poster who had a cool F@H sig with his stats and all that; we'd have to break the 2000 plateau to get stats from that site, but the other site, dynasig.com, will let you create a .sig with stats.
Working for an ISP in Atlantic Canada, I can say the Accelerator we use is a proxy that degrades images to make them smaller to download, giving faster download speeds. Not bad if it's free, only ours is an extra $5.95 a month.