In the last decade, special attention was paid to determining and analyzing genomes and proteomes. Recently, increasing evidence has shown that non-genetically encoded biomolecules such as metabolites and lipids play a key role in biomolecular regulation. Today it is clear that lipids are not only important for energy homeostasis and as a barrier between the cell and its environment, but are also a central part of our signal transduction machinery. Disruptions of the finely balanced lipid metabolism are strongly associated with a range of diseases, including thrombocytopenia, the metabolic syndrome, diabetes, cancer and hyperlipidemia. The latter in particular are reaching pandemic levels and now cause a larger annual health burden than infectious diseases. Lipid metabolism has therefore re-emerged as a central field of scientific and pharmacological research (statins, cyclooxygenase inhibitors, Endurobol in doping) today.
Lipidomics as an “omics” technology is relatively young and was introduced in 2003 as an independent research field. As in proteomics-based research, liquid chromatography coupled to mass spectrometry forms the methodological basis and driving engine of the field. Nevertheless, high-throughput data sets are not commonly obtained. The reasons for this are manifold, including the lack of efficient data analysis toolsets, standard operating procedures, reference standards and miniaturization. As a result, throughput and sensitivity remain bottlenecks in lipidomics, and advanced workflows will be needed to analyze complex biological samples in the near future.
An especially challenging aspect of lipidomics is the chemical diversity of lipid structures: unlike nucleic acids and proteins, which are assembled from repetitive building blocks, lipids exhibit high structural diversity and variability, resulting in over 10,000 lipid species in a complex system such as stem and blood cells.
Diffuse gliomas are the most frequent primary human brain tumors, with glioblastoma being the most aggressive among them. The mean survival of patients suffering from glioblastoma is limited to 12 months, despite multimodal therapy regimens. Tumor cells commonly exhibit high levels of endoplasmic reticulum stress, which triggers the unfolded protein response (UPR), a mechanism that has recently gained considerable attention in the treatment of malignancies. Despite its broad clinical importance, quantitative models that systematically describe the UPR in cancer cells are still missing. To leverage the UPR for therapeutic intervention in glioma, an integrated understanding of how this molecular pathway contributes to tumor growth and infiltration is urgently needed.
The aim of the SUPR-G systems biology project is to combine interdisciplinary, state-of-the-art methodology – including translatome and proteome analyses, computational modeling, and target validation in human glioma specimens and in vivo animal models – to gain novel, system-wide insights into the UPR.
These data will serve to establish the first highly integrated quantitative network model of the UPR in glioma, revealing potential therapeutic candidates for subsequent validation in the individual model systems of the consortium. The constructed model will be made publicly available via a web-based interface and will be integrated into existing online tools, enabling the scientific community to develop novel targeted therapies that interfere with UPR-mediated cell fate decisions in the context of glioma and beyond.
Proteins are important research entities as they are the executive biomolecules in cells, tissues and organs. ‘Proteomics’ comprises the identification, quantification and spatial resolution of many proteins in parallel. Proteomics experiments are indispensable for a comprehensive understanding of biological functions and diseases. Thus, the integration of a ‘Bioinformatics for Proteomics’ unit (BioInfra.Prot) into de.NBI will accelerate progress in life sciences such as medicine and biology. The objectives of BioInfra.Prot include the implementation, establishment and provision of bioinformatics services for proteomics:
1) We will provide our existing proteomics software tools together with consulting and support.
2) We will offer a conversion service to convert data into XML-based standard formats, as well as an upload service into proteomics data repositories and the preparation of journal-compliant data structures.
3) We will provide a statistical consulting service concerning experiment/study design and statistical analysis. Furthermore, we will offer a bioinformatics consulting service regarding data handling, managing large amounts of data and choosing suitable workflow-specific software tools. We will also provide a hardware and toolbox sharing service containing our own and other tools via hardware virtualization. Additionally, KNIME and Galaxy workflows will be supported.
4) We will develop a quality standard database (QSDB) as a validated reference peptide database for targeted proteomics.
5) We will organize annual courses on ‘bioinformatics for proteomics’ topics and will participate in organizing de.NBI-wide education activities (such as summer schools).
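The format-conversion service in point 2 can be pictured with a minimal sketch. This is not the BioInfra.Prot tooling and does not produce a real community standard such as mzIdentML; the element names and the `peptides_to_xml` helper are hypothetical, used only to illustrate the idea of mapping tabular search results into an XML representation:

```python
# Illustrative sketch only: converts a simple tab-separated peptide
# identification table into a minimal, generic XML document. A real
# conversion service would target community standards such as mzIdentML;
# the element and attribute names below are hypothetical placeholders.
import csv
import io
import xml.etree.ElementTree as ET


def peptides_to_xml(tsv_text: str) -> str:
    """Convert TSV rows (sequence, charge, score) into an XML string."""
    root = ET.Element("PeptideIdentifications")  # hypothetical root element
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        pep = ET.SubElement(root, "Peptide")
        pep.set("sequence", row["sequence"])
        pep.set("charge", row["charge"])
        pep.set("score", row["score"])
    return ET.tostring(root, encoding="unicode")


if __name__ == "__main__":
    tsv = "sequence\tcharge\tscore\nPEPTIDER\t2\t0.98\nSAMPLEK\t3\t0.87\n"
    print(peptides_to_xml(tsv))
```

The point of such conversions is that a self-describing XML document, unlike an ad-hoc table, can be validated against a schema and deposited in public repositories in a journal-compliant form.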
The target audience of these services comprises all kinds of users, including application users, expert data analysts and developers. We will evaluate our services by measuring user feedback and the frequency of use via the metrics proposed within de.NBI (especially number of users and user satisfaction).