Generative models cover various application areas, including image and video synthesis, natural language processing and molecular design, among many others. As digital generative models become larger, scalable inference in a fast and energy-efficient manner becomes a challenge. Here we present optical generative models inspired by diffusion models, where a shallow and fast digital encoder first maps random noise into phase patterns that serve as optical generative seeds for a desired data distribution; a jointly trained free-space-based reconfigurable decoder all-optically processes these generative seeds to create images never seen before, following the target data distribution. Except for the illumination power and the random seed generation through a shallow encoder, these optical generative models do not consume computing power during the synthesis of the images. We report the optical generation of monochrome and multicolour images of handwritten digits, fashion products, butterflies, human faces and artworks, following the data distributions of the MNIST, Fashion-MNIST, Butterflies-100 and Celeb-A datasets, and Van Gogh's paintings and drawings, respectively, achieving an overall performance comparable to digital neural-network-based generative models. To experimentally demonstrate optical generative models, we used visible light to generate images of handwritten digits and fashion products. In addition, we generated Van Gogh-style artworks using both monochrome and multiwavelength illumination. These optical generative models might pave the way for energy-efficient and scalable inference tasks, further exploiting the potential of optics and photonics for artificial-intelligence-generated content.
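The pipeline described in the abstract can be sketched numerically: a shallow digital encoder maps noise to a phase-only seed, and a free-space decoder is modelled as a trainable phase mask sandwiched between two free-space propagations, with the output intensity read out as the generated image. The sketch below is a minimal PyTorch simulation under stated assumptions; the class names, network sizes and optical parameters (wavelength, pixel pitch, propagation distance) are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only: a digitally *simulated* stand-in for the optical
# pipeline described in the abstract. The shallow encoder, the single trainable
# phase mask and all optical parameters (wavelength, pixel pitch, distance)
# are assumptions for demonstration, not the paper's implementation.
import torch
import torch.nn as nn


def angular_spectrum(field, wavelength, dx, distance):
    """Free-space propagation of a complex field via the angular spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * kz * distance)  # evanescent components dropped
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)


class ShallowEncoder(nn.Module):
    """Shallow digital encoder: random noise -> phase-only 'generative seed'."""

    def __init__(self, noise_dim=64, n=128):
        super().__init__()
        self.n = n
        self.net = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n * n))

    def forward(self, z):
        # Phase values constrained to [0, 2*pi), as on a phase-only modulator.
        return 2 * torch.pi * torch.sigmoid(self.net(z)).view(-1, self.n, self.n)


class FreeSpaceDecoder(nn.Module):
    """Reconfigurable all-optical decoder: one trainable phase mask between
    two free-space propagations; the camera records output intensity."""

    def __init__(self, n=128, wavelength=520e-9, dx=8e-6, distance=0.1):
        super().__init__()
        self.phase_mask = nn.Parameter(torch.zeros(n, n))  # jointly trained with the encoder
        self.wavelength, self.dx, self.distance = wavelength, dx, distance

    def forward(self, seed_phase):
        field = torch.exp(1j * seed_phase)                   # phase-encode the seed
        field = angular_spectrum(field, self.wavelength, self.dx, self.distance)
        field = field * torch.exp(1j * self.phase_mask)      # passive optical modulation
        field = angular_spectrum(field, self.wavelength, self.dx, self.distance)
        return field.abs() ** 2                              # intensity image at the output plane


encoder, decoder = ShallowEncoder(), FreeSpaceDecoder()
images = decoder(encoder(torch.randn(4, 64)))  # four 128x128 synthesized images
print(images.shape)  # torch.Size([4, 128, 128])
```

In the physical system this simulation stands in for, the encoder output would drive a spatial light modulator and the decoder's phase mask would be a reconfigurable diffractive surface, so that, as the abstract notes, only the illumination and the shallow encoder consume power during image synthesis.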
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12390839 | PMC
http://dx.doi.org/10.1038/s41586-025-09446-5 | DOI Listing