HUPO 2011: lessons for Proteomics from the Genomics Tsunami

Our whistlestop summer conference tour circumnavigating the globe has come to a jetlagged end, with the final stop being last week’s HUPO (Human Proteomics Organisation) congress in Geneva. As the 10th anniversary meeting, it was a good opportunity to look back on how Proteomics has progressed over the past decade, from its early gel-based origins to its current, more mass-spectrometry-based incarnation as a key high-throughput “Omics” technology. Whilst there have been huge challenges and some criticism relating to issues with reproducibility (even prompting a “fix-proteomics” campaign), the several sessions on standards, data and repositories were a good opportunity to observe how these issues are currently being addressed.

The many talks from members of HUPO-PSI (Proteomics Standards Initiative), including four from our editorial board member Henning Hermjakob, demonstrated how organized the community has been in systematically dividing up the work of producing standards, formats, tools and repositories for a diverse range of data types. The HUPO-PSI Initiative Program session covered the full spectrum, from 2D-gels (Juan Pablo Albar presenting on his recent BMC Research Notes paper on best practice for data sharing in Proteomics) to Molecular Interaction data (Sandra Orchard presenting on the IMEx consortium).

Many of the biggest challenges seemed to be economic and cultural rather than technical, with much discussion of the closing of Peptidome by NCBI, and recent stability issues at the main ProteomeXchange raw data portal – Tranche. Whilst this is unfortunate, there seemed to be much work in progress to rectify issues with raw data hosting, and processed and annotated data seemed to be in safe hands with the PRIDE and PeptideAtlas repositories. Whilst adoption and journal compliance are still building up (for an example see our last GigaBlog posting), PRIDE in particular offers authors and reviewers great visualization and quality assessment tools (PRIDE Inspector), and in light of this our editorial policies strongly recommend deposition of suitable data in this database.

With a 9-year history and over 50 publications and white papers produced to date, HUPO-PSI has tried to follow many of the lessons learned by Proteomics’ slightly older “big brother”, the Genomics community. With this subject in mind, GigaScience presented a talk at the “Proteomics Repositories and Journals – a partnership made in heaven/hell?” session specifically focusing on lessons the Proteomics community can learn from the Genomics “Tsunami” (slides here). Whilst Proteomics data volumes are still smaller than the petabytes the genomics community is currently struggling with, it’s reassuring that the growing Proteomics community is trying to preempt these issues. There were interesting talks demonstrating very “genomics-esque” cloud-based workflow systems, such as the amztpp command line tool for ISB’s TPP (Trans-Proteomic Pipeline). It was also interesting to see areas where the two fields are coalescing, with Mike Snyder presenting a fantastic personalized-medicine-oriented multi-“Omics” talk on what he terms Whole “Omics” Profiling (and BGI calls “Trans-Omics“).

Whilst there are obviously huge challenges that lie ahead, it is clear Proteomics has come a long way in the last decade, and as a key part of the scope of GigaScience we hope to be there to cover much of its progress as the field matures in the decades to come. Please contact us at editorial@gigasciencejournal.com if you have Proteomics data-related research, reviews or comment you would like us to consider for the journal. Looking forward to meeting many of you at HUPO 2012 in Boston!
