Dynatrace x Folding@Home - Protein Folding for Good!

bsnurka
Dynatrace Advisor

During the pandemic, many techies found out about the organization called Folding@Home, which empowers everyday people to use their spare compute power to perform protein folding in the name of scientific research! I became involved once I realized this was a perfect use case for my spare homelab parts, and I quickly started racking up research points. The best part? You can perform all of this research as part of the Dynatrace Folding@Home Team!

Now, running the Folding client is easy enough, but I became curious whether there was some way I could integrate it with Dynatrace itself. Thanks to the power of Workflows, the Automation Engine, and Dashboards, it was all possible!

 

It starts off with a simple scheduled Workflow set to run every hour. Using our GitHub for Workflows integration, I was easily able to set up the ingest of a basic .txt file from a remote private GitHub repository that is updated with the Dynatrace team's stats every hour.
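Here's a rough sketch of what that hourly fetch task can look like, assuming a custom "Run JavaScript" code task in the Workflow; the repository path, file name, and token handling below are illustrative placeholders, not the exact setup:

// Workflow code task: pull the latest F@H stats file from the private repo.
// <OWNER>/<REPO>, fah_stats.txt, and the token source are placeholders.
export default async function () {
  const token = "<GITHUB_TOKEN>"; // ideally resolved from a credential vault
  const url = "https://raw.githubusercontent.com/<OWNER>/<REPO>/main/fah_stats.txt";

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`GitHub fetch failed: ${response.status}`);
  }

  // Hand the raw stats text to the next task in the Workflow.
  return { stats: await response.text() };
}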

With this .txt file, we can then use the Metrics v2 ingest API endpoint to ingest the raw F@H statistics as metric data within Dynatrace, using the following format.

fah.user_score.points,team={TEAM_NAME},team_id={TEAM_ID},user={USER} {SCORE}
fah.user_score.wu,team={TEAM_NAME},team_id={TEAM_ID},user={USER} {WORK_UNITS}
fah.team_score.points,team={TEAM_NAME},team_id={TEAM_ID} {SCORE}
fah.team_score.wu,team={TEAM_NAME},team_id={TEAM_ID} {WORK_UNITS}
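The ingest itself is a plain-text POST to the Metrics v2 endpoint (/api/v2/metrics/ingest) with a token that has the metrics.ingest scope. A minimal sketch of that step, with the tenant URL and token as placeholders:

// Send the formatted metric lines to Dynatrace. TENANT and API_TOKEN are
// placeholders; the endpoint and text/plain payload follow the public API.
const TENANT = "https://<YOUR_ENVIRONMENT>.live.dynatrace.com";
const API_TOKEN = "<TOKEN_WITH_metrics.ingest_SCOPE>";

async function ingestFahMetrics(lines: string): Promise<void> {
  const response = await fetch(`${TENANT}/api/v2/metrics/ingest`, {
    method: "POST",
    headers: {
      Authorization: `Api-Token ${API_TOKEN}`,
      "Content-Type": "text/plain; charset=utf-8",
    },
    body: lines, // one metric line per row, in the format shown above
  });
  if (!response.ok) {
    throw new Error(`Metric ingest failed: ${response.status}`);
  }
}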

This leads to reliable metric data being ingested into the Dynatrace tenant on an hourly basis, pulled from the constantly updating GitHub repository.


Now, this is all well and good, but why stop here? Let's break the data down further using DQL!

I've built out a (for now) private dashboard showing all of the user & team data over the last 24-hour period. I hope to expand the available details & breakdowns in the near future, but I need to learn a bit more about complex DQL queries first.
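As a starting point, a query along these lines charts each user's points over the last 24 hours (a sketch, assuming the metric keys and dimensions from the ingest format above):

timeseries points = max(fah.user_score.points),
  by: { user },
  from: now() - 24h

Since the F@H score is cumulative, taking max() per time slice gives a clean running total per user.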


 

1 REPLY

AntonioSousa
DynaMight Guru

@bsnurka,

This is very nice!

Please don't get me wrong about what I'm going to say next, as I was deeply involved at the very beginning of the first volunteer computing project, almost 30 years ago, one that is still running today: GIMPS.

Later, I was also quite involved in SETI@home, but it's now in hibernation... Folding@home launched just a year later, in 2000!

But I'm also very interested in efficiency & electricity consumption. Some 20 years ago, I noticed that my "collective power" was generating too much heat, and since virtually all electricity consumed by a computer ends up as heat, I quickly figured out that the electricity consumption was far higher than it should have been.

Add to that the fact that I was running it in a country where electricity prices were very high compared to other locations. In retrospect, that may have been a great mistake of mine, as it was the main factor in my decision not to mine Bitcoin in early 2011 (yes, the reason was this podcast)...

So, for a long time I have advocated that these types of calculations should be done with efficient computing, where every watt is put to real use. If we don't do it that way, we're only generating more heat, which, as we all know, is a sustainability problem.

Now, this is where Dynatrace can also excel. If you complement your analysis with the Carbon Impact app, you might get a sense of who is contributing in the most sustainable way 🙂

Antonio Sousa
