The Challenge on Ultrasound Beamforming with Deep Learning (CUBDL) was held as part of the 2020 IEEE International Ultrasonics Symposium, with Prof. Bell serving as the primary organizer. Additional details about the resulting CUBDL resources are available in the following three publications, which must be cited when using these resources:
- M. A. L. Bell, J. Huang, D. Hyun, Y. C. Eldar, R. van Sloun, M. Mischi, "Challenge on Ultrasound Beamforming with Deep Learning (CUBDL)," Proceedings of the 2020 IEEE International Ultrasonics Symposium, 2020 [pdf]
- M. A. L. Bell, J. Huang, A. Wiacek, P. Gong, S. Chen, A. Ramalli, P. Tortoli, B. Luijten, M. Mischi, O. M. H. Rindal, V. Perrot, H. Liebgott, X. Zhang, J. Luo, E. Oluyemi, E. Ambinder, "Challenge on Ultrasound Beamforming with Deep Learning (CUBDL) Datasets," IEEE DataPort, 2019 [Online]. Available: http://dx.doi.org/10.21227/f0hn-8f92
- D. Hyun, A. Wiacek, S. Goudarzi, S. Rothlübbers, A. Asif, K. Eickel, Y. C. Eldar, J. Huang, M. Mischi, H. Rivaz, D. Sinden, R. J. G. van Sloun, H. Strohm, M. A. L. Bell, "Deep Learning for Ultrasound Image Formation: CUBDL Evaluation Framework & Open Datasets," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 68(12):3466-3483, 2021 [pdf]
The UltraSound Toolbox (USTB) is a free MATLAB toolbox for processing ultrasonic signals. Its primary purpose is to facilitate comparisons among imaging techniques and the dissemination of research results. The PULSE Lab is proud to collaborate on this effort by contributing short-lag spatial coherence (SLSC) beamforming to the broader ultrasound community (http://www.ustb.no/examples/advanced-beamforming/short-lag-spatial-coherence-slsc/), along with heart and phantom datasets; the SLSC beamforming code and these datasets are all freely available to use. Additional datasets and beamforming code can be found on the USTB website.
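To illustrate the idea behind SLSC, the following is a minimal NumPy sketch of the standard computation: the normalized cross-correlation between focused channel signals is averaged over all element pairs at each lag, then summed over the first few (short) lags. This is a simplified illustration, not the USTB implementation; the function name, kernel handling, and lag cutoff `M` are assumptions for demonstration only.

```python
import numpy as np

def slsc_pixel(channel_data, M=10):
    """Short-lag spatial coherence value for one pixel.

    channel_data: (n_channels, n_samples) array of focused (time-delayed)
    RF data within a small axial kernel around the pixel.
    M: maximum short lag, in element spacings, to sum over.
    (Illustrative sketch -- not the USTB implementation.)
    """
    N, _ = channel_data.shape
    slsc = 0.0
    for m in range(1, M + 1):
        # All element pairs separated by lag m
        a = channel_data[:N - m]
        b = channel_data[m:]
        # Normalized cross-correlation for each pair
        num = np.sum(a * b, axis=1)
        den = np.sqrt(np.sum(a**2, axis=1) * np.sum(b**2, axis=1))
        # Average coherence over pairs at this lag, accumulated over short lags
        slsc += np.mean(num / den)
    return slsc
```

For perfectly coherent channel data (identical signals on all elements), each pairwise correlation is 1, so the SLSC value equals `M`; incoherent noise yields a value near zero, which is why SLSC images suppress clutter.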
Photoacoustic Deep Learning Datasets and Code
Our lab is pioneering the application of deep learning to bypass traditional beamforming steps and use raw channel data to directly display specific features of interest in ultrasound and photoacoustic images. We train with simulated data and transfer the trained networks to experimental data. Our trained deep neural networks, experimental datasets, and instructions for use are freely available to foster reproducibility and enable future comparisons with our associated publications on this topic.
If you use these datasets or code, please cite:
- D. Allman, A. Reiter, M. A. L. Bell, "Photoacoustic source detection and reflection artifact removal enabled by deep learning," IEEE Transactions on Medical Imaging, 37(6):1464-1477, 2018 [pdf]
- D. Allman, A. Reiter, M. A. L. Bell, "Photoacoustic Source Detection and Reflection Artifact Deep Learning Dataset," IEEE Dataport, 2018 [Online]. Available: http://dx.doi.org/10.21227/H2ZD39