Physicists Can Give AI a Boost With Physics-Inspired Models
Category Science Tuesday - September 26 2023, 22:01 UTC

Physicists can help advance AI technology by replacing the 'black box' algorithms of neural networks with better-understood equations of physical processes. Poisson flow generative models (PFGM) rely on a physical setup, an electric field, that is learned by the neural network during training. PFGMs create images of the same quality as diffusion-based models while being 10 to 20 times faster.
The tools of artificial intelligence — neural networks in particular — have been good to physicists. For years, this technology has helped researchers reconstruct particle trajectories in accelerator experiments, search for evidence of new particles, and detect gravitational waves and exoplanets. While AI tools can clearly do a lot for physicists, the question now, according to Max Tegmark, a physicist at the Massachusetts Institute of Technology, is: "Can we give anything back?"
Tegmark believes that his physicist peers can make significant contributions to the science of AI, and he has made this his top research priority. One way physicists could help advance AI technology, he said, would be to replace the "black box" algorithms of neural networks, whose workings are largely inscrutable, with well-understood equations of physical processes.

The idea is not brand-new. Generative AI models based on diffusion — the process that, for instance, causes milk poured into a cup of coffee to spread uniformly — first emerged in 2015, and the quality of the images they generate has improved significantly since then. That technology powers popular image-producing software such as DALL·E 2 and Midjourney. Now, Tegmark and his colleagues are investigating whether other physics-inspired generative models might work as well as diffusion-based models, or even better.
Late last year, Tegmark’s team introduced a promising new method of producing images called the Poisson flow generative model (PFGM). In it, data is represented by charged particles, which combine to create an electric field whose properties depend on the distribution of the charges at any given moment. It’s called a Poisson flow model because the movement of charges is governed by the Poisson equation, which derives from the principle stating that the electrostatic force between two charges varies inversely with the square of the distance between them (similar to the formulation of Newtonian gravity).
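In standard electrostatics notation (these are textbook equations, not copied from the PFGM paper), the Poisson equation ties the electric potential to the charge distribution, and the field of a single point charge falls off as the inverse square of the distance:

```latex
\nabla^{2}\varphi(\mathbf{x}) = -\frac{\rho(\mathbf{x})}{\varepsilon_{0}},
\qquad
\mathbf{E}(\mathbf{x}) = -\nabla\varphi(\mathbf{x}),
\qquad
|\mathbf{E}| = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q}{r^{2}} \quad \text{(point charge)}.
```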
That physical process is at the heart of PFGM. "Our model can be characterized almost completely by the strength and direction of the electric field at every point in space," said Yilun Xu, a graduate student at MIT and co-author of the paper. "What the neural network learns during the training process is how to estimate that electric field." And in so doing, it can learn to create images because an image in this model can be succinctly described by an electric field.
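To make that description concrete, here is a toy sketch, not the authors' code: training points act as unit charges, a small network (`net`, a hypothetical stand-in) is trained to regress the direction of their Coulomb-like field at query points, and new samples are generated by starting far away and stepping against the learned field until they settle near the data. The 2-D dataset, network size, and step counts are illustrative assumptions; the real PFGM works in an augmented space and differs in important details.

```python
# Toy sketch: data points as charges, a network that learns the resulting field,
# and generation by following the learned field back toward the data.
import torch
import torch.nn as nn

torch.manual_seed(0)
data = torch.randn(512, 2) * 0.3 + torch.tensor([1.0, -1.0])  # toy 2-D dataset, the "charges"

def coulomb_field(query, charges, eps=1e-3):
    """Unit direction of the inverse-square field produced at each query point."""
    diff = query[:, None, :] - charges[None, :, :]               # (Q, N, 2)
    dist = diff.norm(dim=-1, keepdim=True).clamp_min(eps)        # (Q, N, 1)
    field = (diff / dist**3).mean(dim=1)                         # sum of r_hat / r^2 terms
    return field / field.norm(dim=-1, keepdim=True).clamp_min(eps)

net = nn.Sequential(nn.Linear(2, 128), nn.SiLU(),
                    nn.Linear(128, 128), nn.SiLU(),
                    nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Query points scattered around and away from the data so the net sees the whole field.
    query = data[torch.randint(0, len(data), (256,))] + torch.randn(256, 2) * 2.0
    target = coulomb_field(query, data)
    loss = ((net(query) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: start far from the data and take small steps against the learned field.
with torch.no_grad():
    x = torch.randn(16, 2) * 6.0
    for _ in range(500):
        x = x - 0.02 * net(x)    # move along field lines toward the charges
    print(x.mean(dim=0))         # should land roughly near the data mean, (1, -1)
```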
PFGMs can create images of the same quality as those produced by diffusion-based approaches, and they do so 10 to 20 times faster. "It utilizes a physical construct, the electric field, in a way we’ve never seen before," said Hananel Hazan, a computer scientist at Tufts University. "That opens the door to the possibility of other physical phenomena being harnessed to improve our neural networks."
Diffusion and Poisson flow models have a lot in common, besides being based on equations imported from physics. During training, a diffusion model designed for image generation typically starts with a picture — a dog, let’s say — and then adds visual noise, altering each pixel in a random way until its features blur and the resulting image is a vignette of the original. Such images contain random artifacts, Hazan said, likening them to an unfocused Monet painting.
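As a rough illustration of that forward noising process, the sketch below jumps a picture straight to an arbitrary noise level using the standard closed-form mix of signal and Gaussian noise; the schedule values and names are illustrative, not taken from any particular diffusion model's implementation.

```python
# Minimal sketch of the forward "noising" step described above (numpy only).
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))             # stand-in for the training picture (the dog, say)
betas = np.linspace(1e-4, 0.02, 1000)    # noise schedule: how much noise each step adds
alphas_bar = np.cumprod(1.0 - betas)     # cumulative fraction of signal kept after t steps

def noisy_version(x0, t):
    """Jump to step t: keep sqrt(alpha_bar) of the image, fill the rest with Gaussian noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

blurred = noisy_version(image, 999)      # by the last step the original features are essentially gone
```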