
The effort will enable US customers to use the DIRSIG™ model with Rendered.ai’s cloud capabilities to generate Earth Observation datasets for AI training
BELLEVUE, Wash. and ROCHESTER, N.Y. (Rendered.ai PR) — Rendered.ai, the leading platform for physics-based synthetic data, and the Rochester Institute of Technology's Digital Imaging and Remote Sensing (DIRS) Laboratory today announced a collaboration to combine the physics-driven accuracy of the DIRSIG synthetic imagery model with Rendered.ai's cloud-based platform for high-volume synthetic data generation.
Machine Learning (ML) algorithms that use Computer Vision (CV) data are a key tool for exploiting the rapidly expanding collection and analytics capabilities of Earth Observation (EO) companies around the world. Rendered.ai provides a platform as a service (PaaS) that lets data scientists and CV engineers scalably produce large, configurable synthetic CV datasets in the cloud for training Artificial Intelligence (AI) and ML systems.
The DIRSIG model produces a range of simulated output representing passive single-band, multi-spectral, or hyper-spectral imagery from the visible through the thermal infrared region of the electromagnetic spectrum. DIRSIG is widely used to test algorithms and to train analysts on simulated standard imagery products. The Rendered.ai team has built simulators for visible light and synthetic aperture radar (SAR); DIRSIG's breadth of capability and ongoing investment by granting agencies, however, will give qualified Rendered.ai customers access to a much wider range of field-tested, production-quality sensor modeling technology.
“DIRSIG has been providing synthetic imagery to expert customers for decades,” said Scott Brown, Ph.D., principal scientist and project lead. “Our collaboration with Rendered.ai enables us to bring our proven capability to a wider audience at a time when satellite and other forms of remote sensing data collection are rapidly expanding.”