Researchers from NTU are utilising NSCC’s supercomputing resources to improve the performance of learning models in molecular data analysis.
AI is expected to play an increasingly important role in the years to come, especially in biological studies. As a major component of AI, neural networks could not achieve such success without the support of enormous amounts of data.
Huge advancements in biological sciences and technologies have led to the accumulation of unprecedented amounts of biomolecular data. For example, the Protein Data Bank contains about 150,000 three-dimensional biomolecular structures (Berman, Westbrook et al. 2000). There is an abundance of available biological structures, and of data analysis methods and models, including data mining, manifold learning and graph or network models. Leveraging such data, topological data analysis (TDA), for example, holds great promise in the big data era and has become increasingly popular in bioinformatics and computational biology over the past two decades.
Data-driven learning models are among the most important and rapidly evolving areas in chemoinformatics and bioinformatics. Featurization, or feature engineering, is key to the performance of machine learning models on material, chemical, and biological systems. As such, a group of researchers at the School of Physical and Mathematical Sciences at Nanyang Technological University Singapore are using high performance computing to develop a new molecular representation framework, known as persistent spectral (PerSpect), and PerSpect-based machine learning (PerSpect ML) for protein-ligand binding affinity prediction. The proposed PerSpect theory provides a powerful feature engineering framework, and PerSpect ML models demonstrate great potential to significantly improve the performance of learning models in molecular data analysis.
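To illustrate the general featurize-then-learn pattern described above, a minimal sketch follows. The toy distance-statistics features and the ordinary least-squares fit here are hypothetical stand-ins chosen for brevity; the actual PerSpect descriptors come from persistent spectral graph theory and the team's learning models are far more sophisticated.

```python
import numpy as np

def toy_features(coordinates):
    # Hypothetical stand-in for molecular featurization: summary
    # statistics of pairwise atom-atom distances. NOT the actual
    # PerSpect descriptors, which are spectral/topological.
    diffs = coordinates[:, None, :] - coordinates[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    upper = dists[np.triu_indices(len(coordinates), k=1)]
    return np.array([upper.mean(), upper.std(), upper.min(), upper.max()])

def fit_linear_model(feature_matrix, affinities):
    # Ordinary least squares with a bias column, as a minimal stand-in
    # for the learning stage of a feature-based prediction pipeline.
    X = np.hstack([feature_matrix, np.ones((len(feature_matrix), 1))])
    weights, *_ = np.linalg.lstsq(X, affinities, rcond=None)
    return weights

if __name__ == "__main__":
    # Three atoms of a toy "molecule"; features would feed the model.
    coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    print(toy_features(coords))
```

The point of the sketch is only the division of labour: a featurization step maps each molecular structure to a fixed-length vector, and a separate learning step maps those vectors to binding affinities.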
High performance computing plays a pivotal role in the team's daily work. The project needs to process large databases containing thousands of entries; the databases must be divided into several pieces, with parallel computing employed to process each part. Additionally, some algorithms are time- and memory-consuming, so computing resources with many cores and large memory are needed to run them.
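The split-and-parallelize workflow described above can be sketched in a few lines. This is a generic illustration, not the team's actual pipeline: the entries, the per-entry computation, and the worker count are all placeholders.

```python
from multiprocessing import Pool

def process_entry(entry):
    # Placeholder for per-entry work (e.g. computing descriptors for
    # one molecular structure); here it simply squares a number.
    return entry * entry

def split_into_chunks(entries, n_chunks):
    # Divide the database into roughly equal pieces, one per worker.
    size, rem = divmod(len(entries), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + size + (1 if i < rem else 0)
        chunks.append(entries[start:end])
        start = end
    return chunks

def process_chunk(chunk):
    # Each worker processes its piece of the database independently.
    return [process_entry(e) for e in chunk]

def process_database(entries, n_workers=4):
    # Fan the chunks out to worker processes, then concatenate the
    # results back in the original order.
    chunks = split_into_chunks(list(entries), n_workers)
    with Pool(n_workers) as pool:
        per_chunk = pool.map(process_chunk, chunks)
    return [r for chunk in per_chunk for r in chunk]

if __name__ == "__main__":
    print(process_database(range(10), n_workers=3))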
To find out more about the NSCC’s HPC resources and how you can tap on them, please contact [email protected].
NSCC NewsBytes May 2021