
Danti

12.11.2024

Conquering the Data Deluge: How Multimodal AI Brings Order to Overload

National Geospatial-Intelligence Agency (NGA) Director Vice Adm. Frank Whitworth recently spoke at a conference about the importance of multimodal AI and what it means for the NGA analyst workforce and the future of the agency.

This emerging capability is the key to scaling the NGA’s efforts, as the deluge of data overwhelms analysts’ ability to take in, understand, and make sense of all the new and existing sources coming their way. The drain of deep expert knowledge out of these agencies as analyst forces retire also makes it clear we won’t hire our way out of this problem. With data distributed across systems, data types multiplying, and a growing number of end users with important, time-sensitive questions, the challenge is clear. Vice Adm. Whitworth said: “We want to ask questions about geospatial environments and get answers that are multimodal in context. … That will be the beginning. The world will be our oyster.” That beginning is now.

This complex challenge is exactly why we started Danti. Over the last decade I’ve watched amazing new sensing technologies come online: drones entering the market, commercial satellites becoming more available and accessible, new forms of signals information, open-source news and social media, public data sets from NASA and NOAA, computer vision systems, and more. Despite these powerful tools for collecting and analyzing data, few experts know how to use them, tie them to questions, and bring them together in a multimodal way. We need a way to connect all of this naturally and in context.

It’s not just about the NGA; everyone from agencies and service branches to commercial enterprises and even individual consumers leverages this information every day, whether they know it or not. But it is still held in the hands of experts who desperately want to take the restraints off. Leaders like Mark Munsell, the NGA’s new Chief Artificial Intelligence Officer, understand this better than anyone. Maven and other programs now generate orders of magnitude more information from imagery and other incoming sources, but their users still face an overload problem. Can AI take us from running computer vision across a thousand images on massive compute to knowing which two images hold the answer, and taking the user right to it?
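One way to picture that narrowing step is multimodal retrieval: embed the analyst's question and every candidate image in a shared text-image space, then surface only the closest matches. The sketch below is purely illustrative and is not Danti's pipeline; it assumes the open-source sentence-transformers CLIP checkpoint, and the file layout and scoring are hypothetical.

```python
# Illustrative sketch: rank a large image set against a plain-language question
# and return only the top few candidates, instead of running heavy computer
# vision over everything. Not Danti's implementation.
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint text/image embedding space


def top_matches(question: str, image_dir: str, k: int = 2):
    """Return the k images whose embeddings are closest to the question."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    image_embs = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)
    query_emb = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, image_embs)[0]  # one similarity score per image
    best = scores.topk(min(k, len(paths)))
    return [(str(paths[int(i)]), float(scores[int(i)])) for i in best.indices]


# Example: narrow a thousand scenes to the two most likely to hold the answer.
# print(top_matches("cargo vessels anchored outside the harbor", "scenes/"))
```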

Danti is a knowledge engine that uses multimodal AI to allow anyone to make sense of all the data that exists across the internet, government and commercial data systems, and sensors around the world, by simply asking a question.

Whether you are researching global events, assessing property risk or environmental conditions, tracking maritime activity, or monitoring humanitarian operations, Danti connects and understands data from multiple sources and systems, then contextualizes it into clear answers or the core sources you need to make that determination.
Danti allows the user, whether an expert NGA analyst or a deployed service member overseas with a simple question, to pose that question directly. Properly trained models, data ontologies, and information drivetrains build the context, fuse the right sources, and give the user the answer, or the context they need to answer the question with the Mark One eyeball.
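To make that flow concrete, here is a minimal sketch of the pattern described above: a question fans out to several source connectors, and the retrieved evidence is fused into a single answer with its sources attached. The names (SourceConnector, Evidence, answer) are hypothetical; Danti's actual models, ontologies, and connectors are not shown here.

```python
# Purely illustrative: question in, evidence gathered from many sources,
# fused context and citations out. Not Danti's actual architecture.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Evidence:
    source: str   # e.g. "commercial imagery", "open-source news"
    content: str  # retrieved snippet, caption, or detection summary
    score: float  # relevance to the question


class SourceConnector(Protocol):
    def search(self, question: str) -> list[Evidence]: ...


def answer(question: str, connectors: list[SourceConnector], top_k: int = 5):
    """Fan the question out to every source, then fuse the best evidence."""
    evidence: list[Evidence] = []
    for connector in connectors:
        evidence.extend(connector.search(question))
    evidence.sort(key=lambda e: e.score, reverse=True)
    best = evidence[:top_k]
    # In a real system a language model would synthesize this into prose;
    # here we simply return the fused context and the sources behind it.
    return {
        "question": question,
        "context": [e.content for e in best],
        "sources": sorted({e.source for e in best}),
    }
```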

Here’s a look at Danti and how our team is working with our government partners to enable anyone to, as the Vice Admiral stated, “ask questions about geospatial environments and get answers that are multimodal in context.”

We strongly believe this technology is how we can empower the analyst force of the future to do more without necessarily adding more people, deliver information advantage at the edge regardless of background or skill, and finally free this data from its current bottlenecks. The future is multimodal, and AI tools will make it accessible to all.
