
Happy Holidays! Looking back at 2013 and forward to 2014 …


'Cuda340

December 23, 2013 by Vijay Pande


As the year ends, it’s a natural time to look back at some key highlights of what we’ve done in 2013 and look ahead to what’s on deck for 2014. One of the challenges of science is that results can easily take a year for us to run on FAH, analyze, and publish: the work of 2012 comes out in 2013, and the projects running in 2013 reach the public in 2014. So it makes sense to talk about both the work that’s been going on this year and how all of it will shape up in 2014. We’ve been doing a lot behind the scenes in 2013, and I’m excited about all of it getting out to the public in 2014. Here are some key developments.

FAH development in 2013. We’ve had several new developments in how FAH works, both from the donor perspective and behind the scenes. With the addition of new programmer Yutong “Proteneer” Zhao, Core 17 was a big advance for FAH: it brought the most advanced features from the OpenMM GPU code, especially major speed increases for AMD/ATI cards, along with a natural path to true Linux GPU support and a lot of useful scientific updates. The v7 client continues to advance, with a new web interface; these simplifications have been important, especially for certain partners who wanted to see an easier-to-use FAH client before they could help us push for greater deployment. Adaptive sampling approaches have long been part of how FAH works, but we’ve also been moving to automate the process, leading to more powerful ways to use the combined power of the FAH clients.
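To make the adaptive sampling idea concrete, here is a minimal counts-based sketch: rather than restarting simulations at random, the server seeds new work units from the conformational states it knows least about. This is illustrative only; the function name and the toy data are assumptions, and the real FAH server logic is considerably more involved.

import numpy as np

def pick_seed_states(transition_counts, n_new_trajectories):
    # Counts-based adaptive sampling: seed new simulations from the
    # states with the fewest observed transitions, i.e. the states
    # we currently know the least about.
    visits = transition_counts.sum(axis=1)   # times each state was sampled
    least_known = np.argsort(visits)         # least-visited states first
    return least_known[:n_new_trajectories]

# Toy example: 5 conformational states, 3 new work units to assign.
counts = np.array([[40,  2, 0, 0, 0],
                   [ 3, 55, 1, 0, 0],
                   [ 0,  1, 9, 1, 0],
                   [ 0,  0, 2, 4, 1],
                   [ 0,  0, 0, 1, 1]])
print(pick_seed_states(counts, 3))           # -> [4 3 2]

The payoff is statistical: donor GPU-hours go toward the poorly explored corners of the conformational landscape instead of re-sampling states the model already understands.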

New FAH results. Several of our long-running projects came to fruition in 2013 and received various honors. One major result, which also gives a good idea of where we’re going in the future, was our work on GPCRs, key proteins at the heart of much of disease and drug design. Our GPCR work, run on a FAH-like infrastructure on Google’s internal processors, sets the stage for similar calculations to run on FAH (as I’ll describe below). In a nutshell, we’ve been able to show that the tools used to understand protein folding (developed in FAH) can shed light on how GPCRs behave, especially by revealing intermediate structures that are interesting and potentially important new targets for drug design.
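The folding tools referred to here are, at their core, Markov state models: discretize the trajectories into conformational states, count transitions at a fixed lag time, and look for metastable intermediates between the endpoints. A minimal sketch of the counting step, on assumed toy data (real analyses use dedicated packages and far more states):

import numpy as np

def estimate_transition_matrix(state_sequences, n_states, lag=1):
    # Count transitions at a fixed lag time across many short
    # trajectories, then row-normalize into probabilities.
    counts = np.zeros((n_states, n_states))
    for seq in state_sequences:
        for i, j in zip(seq[:-lag], seq[lag:]):
            counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Two toy trajectories over 3 states (0 = inactive, 1 = intermediate,
# 2 = active); state 1 is the kind of stop-over structure that can
# become a new drug target.
trajs = [[0, 0, 1, 1, 2, 2, 2],
         [2, 2, 1, 0, 0, 0, 1]]
print(np.round(estimate_transition_matrix(trajs, n_states=3), 2))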

New projects. Kinases are key drug targets in cancer, and 2013 brought new projects that will lead to new papers in 2014. Many donors have asked us to push into more areas of cancer research, and our pilot projects have worked out well, leading to full-fledged FAH projects in cancer. Our first paper in this area has been submitted, with several more in the pipeline. I’ll post more when the paper is out, but briefly, we’ve been able to simulate kinase dynamics, identifying new intermediate states that could be useful drug targets. Our goal is to help develop kinase drugs with greater specificity, leading to a dramatic improvement in cancer therapies without the devastating issues of current chemotherapies, which arise because current drugs are not very specific to a given kinase and affect the whole body, not just the tumor cells, in a very negative manner. We also now have some very high-powered collaborators on board, including researchers at the Memorial Sloan Kettering Cancer Center in New York. In addition, we have new results in Alzheimer’s Disease and infectious disease which we are in the process of submitting for publication. These results are in the area of drug repurposing, which lets the FAH team go straight to therapies for these diseases using combinations of existing drugs; we’re very excited about this direction, as it allows us to move quickly to phase 2 clinical trials, getting new drugs into the hands of patients decades sooner than other methods would.

Looking to the future of FAH: on deck for 2014. We’ve been working behind the scenes with various partners to get Folding@home into more people’s hands, increasing what we can all do together. We expect to release a whole new type of client and backend server in 2014, which should make it easier for many more donors to contribute to FAH. We’ve also gotten a lot of useful responses to the donor survey, and we’re working to implement the most important suggestions. Specifically, we’re hiring a new position dedicated solely to donor relations, so that the science team no longer has to choose between donor relations and getting their science done. Finally, we have several partnerships with companies that have been working to promote Folding@home, which we hope to announce in 2014.

For us, it’s a very exciting time. 2013 was very much a year of planting seeds for 2014. The key elements are an improved client and a more powerful set of cores (especially GPU Core 17), combined with new backend methods and applied to key new scientific projects running in 2013 in the areas of cancer, Alzheimer’s Disease, and infectious disease. All of this, pushed forward by a lot of raw power from partnering, could easily make 2014 a landmark year for FAH. We’d like to thank all of the donors who make this possible and look forward to working together to make 2014 the best year FAH has ever seen.
Under the hood at Berkeley
November 13, 2013 by Greg Bowman ·

The Bowman lab at the University of California, Berkeley recently purchased its first Folding@home server, and I thought I would share a little bit about what’s under the hood. The main purpose of this machine is to serve and store Folding@home work units, so it is equipped with a dozen 4 TB drives. Two drives’ worth of capacity are dedicated to redundant storage (RAID-6, for experts who are curious), leaving us with 40 TB of storage space for collecting data. Many of the projects I run take 10-100 GB of disk space, so conservatively we’re now prepared to run 400 projects! The machine also has four Gigabit Ethernet connections for sending/receiving data. Another highlight is a dozen 2.0 GHz cores and 64 GB of memory, allowing us to run some of our analysis locally. The bulk of our analysis will still be conducted on clusters at Berkeley, though, where we can parallelize many of the steps across hundreds of processors.
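For anyone checking those numbers, the arithmetic works out as a quick sketch; the only assumption is the standard RAID-6 overhead of two drives’ worth of parity:

# Usable capacity of the array described above.
n_drives = 12            # a dozen 4 TB drives
drive_tb = 4
parity_drives = 2        # RAID-6 stores two drives' worth of parity

usable_tb = (n_drives - parity_drives) * drive_tb
print(usable_tb)              # 40 TB of usable space

# Conservative project count at 100 GB (0.1 TB) per project.
print(int(usable_tb / 0.1))   # 400 projects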