Publicity
Accessible news articles about our research
Our 2022 ACM Gordon Bell Prize finalist work on exascale protein homology search is described here.
Our HUNTRESS method for tumor phylogeny reconstruction from single-cell data in cancer genomics is described here.
Here is an article on GraphBLAST, our GPU sparse matrix library for graph computations (accompanied by one pleasant picture).
Here is an accessible article about the GraphBLAS effort (accompanied by three unpleasant pictures). GraphBLAS defines standard building blocks for graph algorithms in the language of matrices; more information, including the C language API and a reference implementation by Tim Davis, can be found on the GraphBLAS Forum website. A minimal code sketch of this idea appears below, after this list.
HipMCL clusters protein similarity networks with 70 billion edges in a couple of hours. This highly scalable implementation of the Markov Cluster (MCL) algorithm is available as open source; the core MCL iteration is sketched below, after this list. Here is an accessible press release.
Communication-avoiding algorithms help accelerate graphical model estimation (a.k.a. inverse covariance matrix estimation; the underlying estimation problem is sketched below, after this list): http://cs.lbl.gov/news-media/news/2018/scalable-machine-learning-with-hp-concord/
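To give a flavor of "graph algorithms in the language of matrices," here is a minimal sketch of a single breadth-first-search expansion step written against the GraphBLAS C API. The toy graph, the variable names, and the predefined GrB_LOR_LAND_SEMIRING_BOOL semiring assume a v1.3-or-later implementation such as SuiteSparse:GraphBLAS; this is only an illustration, not code from any of the articles above.

    #include <stdio.h>
    #include <GraphBLAS.h>

    int main(void)
    {
        GrB_init(GrB_NONBLOCKING);

        // Toy directed graph on 4 vertices: edges 0->1, 0->2, 1->3, 2->3
        GrB_Index n = 4, nedges = 4;
        GrB_Index src[] = {0, 0, 1, 2};
        GrB_Index dst[] = {1, 2, 3, 3};
        bool      val[] = {true, true, true, true};

        GrB_Matrix A;                              // Boolean adjacency matrix
        GrB_Matrix_new(&A, GrB_BOOL, n, n);
        GrB_Matrix_build_BOOL(A, src, dst, val, nedges, GrB_LOR);

        GrB_Vector q, next;                        // current and next frontiers
        GrB_Vector_new(&q, GrB_BOOL, n);
        GrB_Vector_new(&next, GrB_BOOL, n);
        GrB_Vector_setElement_BOOL(q, true, 0);    // start the traversal at vertex 0

        // One expansion step: next = q' * A over the Boolean (OR, AND) semiring
        GrB_vxm(next, GrB_NULL, GrB_NULL, GrB_LOR_LAND_SEMIRING_BOOL, q, A, GrB_NULL);

        GrB_Index nreached;
        GrB_Vector_nvals(&nreached, next);
        printf("vertices reached after one step: %llu\n", (unsigned long long) nreached);

        GrB_Vector_free(&q);
        GrB_Vector_free(&next);
        GrB_Matrix_free(&A);
        GrB_finalize();
        return 0;
    }

A full BFS would repeat this step while masking out already-visited vertices; the point is simply that the traversal is expressed as a sparse vector-matrix product over a Boolean semiring.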
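For the HipMCL item above, the Markov Cluster (MCL) algorithm of van Dongen alternates two operations on a column-stochastic matrix M derived from the network: expansion, which squares the matrix, and inflation, which raises entries to a power r > 1 and renormalizes each column. The statement below is the standard textbook form of the iteration, not anything HipMCL-specific:

\[
M \leftarrow M^{2} \quad\text{(expansion)}, \qquad
(\Gamma_r M)_{ij} = \frac{M_{ij}^{\,r}}{\sum_{k} M_{kj}^{\,r}} \quad\text{(inflation)} .
\]

Iterating these two steps until (near) convergence yields a sparse limit matrix from which the clusters are read off; HipMCL carries this iteration out at the scale of tens of billions of edges on distributed-memory machines.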
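For the HP-CONCORD item, "inverse covariance matrix estimation" means recovering a sparse precision matrix Ω ≈ Σ⁻¹ whose zero pattern encodes conditional independences in a Gaussian graphical model. A generic ℓ1-penalized formulation, shown only to fix ideas (CONCORD itself optimizes a related pseudo-likelihood objective), is

\[
\hat{\Omega} = \operatorname*{arg\,min}_{\Omega \succ 0}\; \operatorname{tr}(S\Omega) - \log\det\Omega + \lambda \lVert \Omega \rVert_{1},
\]

where S is the sample covariance matrix and λ controls the sparsity of the estimate.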
Several presentations with accompanying videos
Sparse matrices powering three pillars of science: simulation, data, and learning, ACM ISSAC, Invited Tutorial, July 2022 [Video part 1] [Video part 2]
Parallel Sparse Matrix Algorithms for Data Analysis and Machine Learning, ETH Zurich, March 2022 [Video][Slides]
Large-scale graph representation learning and computational biology through sparse matrices, NJIT Institute for Data Science, April 2021. [Video][Slides]
The COVID-19 pandemic encouraged us to create a YouTube channel for our research presentations.
Communication-Avoiding Sparse Matrix Algorithms for Large Graph and Machine Learning Problems, New Architectures and Algorithms Workshop at IPAM (UCLA), November 2018. [Video] [Slides]
Communication-Avoiding Sparse-Matrix Primitives for Parallel Machine Learning, Sparse Days, Toulouse, September 2018. [Video] [Slides]
Genomics, Graphs and the GraphBLAS, GraphXD: Graphs Across Domains, Berkeley, April 2018. [Video] [Slides]
Reducing Communication in Parallel Graph Computations, MMDS, Berkeley, June 2014. [Video] [Slides]
Three Goals in Parallel Graph Computations: High Performance, High Productivity, and Reduced Communication, Simons Institute for the Theory of Computing, October 2013. [Video] [Slides]