Sparse Convolutional Neural Networks Making Waves in Particle Experiments
Category: Computer Science | Thursday, June 15, 2023, 22:55 UTC

Sparse convolutional neural networks (SCNNs) are machine learning tools that can focus on the relevant parts of data while screening out the rest. Researchers have used these networks to speed up data analysis, and they are being developed for particle experiments on at least three continents. The switch marks a historic change for the physics community: in data analysis, machine learning is now leading the way.
Suppose you have a thousand-page book, but each page has only a single line of text. You’re supposed to extract the information contained in the book using a scanner, only this particular scanner systematically goes through each and every page, scanning one square inch at a time. It would take you a long time to get through the whole book with that scanner, and most of that time would be wasted scanning empty space.

Such is the life of many an experimental physicist. In particle experiments, detectors capture and analyze vast amounts of data, even though only a tiny fraction of it contains useful information. "In a photograph of, say, a bird flying in the sky, every pixel can be meaningful," explained Kazuhiro Terao, a physicist at the SLAC National Accelerator Laboratory. But in the images a physicist looks at, often only a small portion actually matters. In circumstances like that, poring over every detail needlessly consumes time and computational resources.
But that’s starting to change. With a machine learning tool known as a sparse convolutional neural network (SCNN), researchers can focus on the relevant parts of their data and screen out the rest. Researchers have used these networks to vastly accelerate their ability to do real-time data analysis. And they plan to employ SCNNs in upcoming or existing experiments on at least three continents. The switch marks a historic change for the physics community.

"In physics, we are used to developing our own algorithms and computational approaches," said Carlos Argüelles-Delgado, a physicist at Harvard University. "We have always been on the forefront of development, but now, on the computational end of things, computer science is often leading the way."
Sparse Characters
The work that would lead to SCNNs began in 2012, when Benjamin Graham, then at the University of Warwick, wanted to make a neural network that could recognize Chinese handwriting.

The premier tools at the time for image-related tasks like this were convolutional neural networks (CNNs). For the Chinese handwriting task, a writer would trace a character on a digital tablet, producing an image of, say, 10,000 pixels. The CNN would then move a 3-by-3 grid called a kernel across the entire image, centering the kernel on each pixel individually. For every placement of the kernel, the network would perform a complicated mathematical calculation called a convolution that looked for distinguishing features.
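To make the kernel-sliding idea concrete, here is a minimal sketch of a dense 2D convolution in NumPy. The loop structure mirrors the description above: the 3-by-3 kernel is centered on every pixel in turn, and one dot product is computed per placement. The specific image and kernel values are illustrative, not from the article.

```python
import numpy as np

def conv2d(image, kernel):
    """Center a square kernel on every pixel of a 2D image (zero padding)
    and compute one dot product (a 'convolution') per placement."""
    k = kernel.shape[0]            # assume a square kernel, e.g. 3x3
    pad = k // 2
    padded = np.pad(image, pad)    # zero-pad so the kernel fits at the edges
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernel)
    return out

# A mostly blank 100x100 image with one small "stroke" of nonzero pixels.
image = np.zeros((100, 100))
image[40:60, 45:55] = 1.0
# A simple edge-detection kernel (hypothetical choice for illustration).
edge_kernel = np.array([[-1.0, -1.0, -1.0],
                        [-1.0,  8.0, -1.0],
                        [-1.0, -1.0, -1.0]])
features = conv2d(image, edge_kernel)
```

Note that the double loop runs 10,000 times here regardless of content: the kernel visits every pixel, including the vast blank regions. That is exactly the waste the sparse approach below targets.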
CNNs were designed to be used with information-dense images such as photographs. But an image containing a Chinese character is mostly empty; researchers refer to data with this property as sparse. It’s a common feature of anything in the natural world. "To give an example of how sparse the world can be," Graham said, if the Eiffel Tower were encased in the smallest possible rectangle, that rectangle would consist of "99.98% air and just 0.02% iron."
Graham tried tweaking the CNN approach so that the kernel would only be placed on 3-by-3 sections of the image that contained at least one nonzero pixel, rather than on every blank patch. In this way, he succeeded in producing a system that
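The tweak can be sketched as a variant of the dense convolution: instead of visiting every pixel, the network first finds the "active" sites, the placements whose 3-by-3 window overlaps at least one nonzero pixel, and evaluates the kernel only there. This is a simplified illustration of the idea, not Graham's actual implementation; his libraries (and later submanifold sparse convolutions) use more sophisticated indexing.

```python
import numpy as np

def sparse_conv2d(image, kernel):
    """Evaluate a 3x3 kernel only at placements whose window contains
    at least one nonzero pixel; all-blank placements are skipped."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad)
    out = np.zeros_like(image, dtype=float)
    # Active sites: every pixel within one step of a nonzero pixel,
    # i.e. every placement where the 3x3 window is not entirely blank.
    ys, xs = np.nonzero(image)
    active = set()
    for y, x in zip(ys, xs):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                i, j = y + dy, x + dx
                if 0 <= i < image.shape[0] and 0 <= j < image.shape[1]:
                    active.add((i, j))
    for i, j in active:
        patch = padded[i:i + k, j:j + k]
        out[i, j] = np.sum(patch * kernel)
    return out

# Same mostly blank image as before: a 20x10 stroke in a 100x100 field.
image = np.zeros((100, 100))
image[40:60, 45:55] = 1.0
blur_kernel = np.full((3, 3), 1.0 / 9.0)   # illustrative averaging kernel
out = sparse_conv2d(image, blur_kernel)
n_active = 22 * 12                         # sites within one step of the stroke
```

On this example the kernel is evaluated at only 264 sites instead of 10,000, and the result at those sites is identical to what the dense convolution would produce; everywhere else the output is simply zero.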