The Intelligence Advanced Research Projects Activity is where the intelligence community turns to solve some of its toughest problems; it's billed as the IC's high-risk, high-payoff science lab. At IARPA, researchers are developing and testing futuristic technologies that can predict the events that sway geopolitics.

IARPA Director Jason Matheny recently spoke with Senior Staff Writer Amber Corrin about his agency's mission and some of the unique ways it tackles seriously complex problems.

One thing you do at IARPA is try to predict the unpredictable, which seems like a tall order. How do you go about taking on problems of such scale and complexity?

A lot of our programs are organized as tournaments in which research teams compete against each other on a problem, such as speech recognition, or finding and matching footage within a large collection of, say, open-source video. They might be locating where a particular image was taken. Other programs have forecasting challenges, in which teams compete to forecast the outcomes of political elections, disease outbreaks or whether a treaty gets signed.

We've run a few different forecasting tournaments in the past and we're about to start more. In science, one way of testing a theory is to see whether it explains what has happened historically. But the strongest test of a theory is to not only explain history, but to predict events that haven't happened yet. So scientists might design a new experiment, predict the outcome, and see whether the results match their expectations.

In our programs we take the same approach: we have research teams from academia and industry who develop theories and methods, and then we test how well those methods perform against what actually happens in the world.

How does using these tournaments to solve challenges help you innovate? What are some of the other ways you try to innovate?

IARPA invests in science, but we want to make sure our research doesn't just yield an interesting science project, but also delivers something at the end that solves a real-world problem. To prove that something solves a real-world problem, we spend about a quarter of our budget on test and evaluation in every program. That means testing our research methods against real problems in real time. To do that, we organize tournaments in which multiple research teams test their methods against one another on real-world problems, which might range from predicting elections to determining whether a particular computer can achieve a certain performance milestone with a limited amount of power.

Another important point is that the best forecasts were not generated by a single individual, but by combining judgments. That's the idea of crowd wisdom: you can do better by combining multiple independent judgments from people with different sources of information and different beliefs about the world.
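To make the crowd-wisdom idea concrete, here is a minimal sketch of pooling independent probability forecasts with a simple unweighted average (a linear opinion pool). The forecasters and numbers are invented for illustration; this is not IARPA's actual aggregation method.

```python
# Minimal sketch of "crowd wisdom": pool independent probability forecasts
# for the same event with an unweighted mean (a linear opinion pool).
# Illustrative only; not IARPA's actual aggregation algorithm.

def pool_forecasts(forecasts):
    """Combine independent probability estimates for the same event."""
    if not forecasts:
        raise ValueError("need at least one forecast")
    return sum(forecasts) / len(forecasts)

# Five forecasters' probabilities that a hypothetical treaty is signed this year.
crowd = [0.60, 0.72, 0.55, 0.80, 0.65]
print(f"Pooled probability: {pool_forecasts(crowd):.2f}")  # 0.66
```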

You've mentioned open-source, which is a booming area in intelligence right now. How are you using open-source intelligence in your programs?

One of our programs focuses on open-source indicators, where forecasts are automatically generated by machines that look at large volumes of open-source data. One area we looked at was disease events, such as predicting flu outbreaks.

We look at a variety of indicators of people falling ill: one is web searches for symptoms; another is cancellations of dinner reservations or flights. Even though the data is anonymized, we can see whether there were a large number of cancellations at restaurants or of flights. That allows you to take the pulse of a population and get an idea of events happening in society.
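As a rough illustration of how such a signal might be used, the toy sketch below flags a day when anonymized, aggregated cancellation counts spike well above a historical baseline. The data and the three-standard-deviation threshold are invented; the program's real pipeline is far more sophisticated.

```python
# Toy "open-source indicator": flag an unusual spike in anonymized, aggregated
# cancellation counts relative to a historical baseline. Numbers are invented.
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """True if today's count exceeds the historical mean by more than
    `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

# Daily restaurant-reservation cancellations in one city (hypothetical counts).
baseline = [120, 135, 128, 110, 140, 132, 125]
print(is_anomalous(baseline, 310))  # True: a possible early sign of illness
```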

How can national security officials prepare for a crisis before it happens?

One of the problems IARPA works on is anticipatory intelligence. Oftentimes national security decision-makers are looking for judgments about what will happen in the next day, the next week or the next year in geopolitics, in public health, in weapons treaties. In order to deliver forecasts to decision-makers, we have to continuously monitor the environment for indicators of change. That might be indicators of political unrest in a region, indicators that a disease outbreak is occurring or indicators that a cyberattack is under way. To develop methods that are good at forecasting, we run forecasting tournaments in which teams made up of universities and industry labs forecast real-world events before they occur. And then we keep score: who got it right, who got it wrong, and what distinguished the good forecasts from the bad ones.
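One standard way to keep score on probability forecasts, widely used in forecasting research, is the Brier score: the mean squared difference between the forecast probability and what actually happened, where lower is better. The sketch below handles only binary yes/no events and uses made-up numbers.

```python
# Brier score for binary events: mean squared error between forecast
# probability and outcome (0 = perfect, lower is better).

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event happened, else 0."""
    pairs = list(zip(forecasts, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Two forecasters on the same three events (say, an election, an outbreak and
# a treaty signing); only the first two events occurred.
print(brier_score([0.9, 0.8, 0.2], [1, 1, 0]))  # 0.03  (sharp and accurate)
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25  (hedging on everything)
```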

What's better for forecasting, man or machine?

We've organized the world's largest forecasting tournaments, some that have included thousands of participants from all over the world, others that have included machine-learning models generating thousands of forecasts based on lots of open-source data, everything from news feeds to trends in social media. What we've learned from that research is that it takes some amount of crowdsourcing to get the best forecasts. And in fact there's a lot of relevant information out in the world; we have to find better ways of combining that information in order to generate good forecasts.
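One simple way to combine human and machine forecasts, sketched below, is a weighted average in which each source's weight reflects its past accuracy. The weights and numbers are hypothetical and do not describe any specific IARPA system.

```python
# Sketch: combine crowd and model forecasts for the same event, weighting each
# source by (hypothetical) historical accuracy. Illustrative numbers only.

def weighted_combine(forecasts, weights):
    """Weighted average of probability forecasts."""
    total = sum(weights)
    return sum(p * w for p, w in zip(forecasts, weights)) / total

sources = [0.70, 0.58]          # human crowd estimate, machine-learning model
accuracy_weights = [0.6, 0.4]   # assumed relative track records
print(f"{weighted_combine(sources, accuracy_weights):.2f}")  # 0.65
```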

What are the next big areas for problem solving?

Problems we currently focus on include high-performance computing, machine learning, the development of new sensors and improvements in human judgment about very complex geopolitical issues ranging from elections to multilateral treaties to interstate conflict. Some of the new areas where we're investing include biosecurity, cybersecurity and improvements in privacy protections, including a form of encryption called homomorphic encryption, which allows us to protect people's data while still being able to compute on that data in useful ways.
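To show what computing on encrypted data can look like, here is a toy version of the Paillier cryptosystem, one well-known additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny hardcoded primes make it completely insecure; it only illustrates the property and is not a description of any system IARPA funds.

```python
# Toy Paillier cryptosystem: add two numbers while they stay encrypted.
# Tiny primes => completely insecure; for illustration only.
from math import gcd
import random

p, q = 293, 433                  # toy primes (real keys use very large primes)
n, n_sq, g = p * q, (p * q) ** 2, p * q + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)             # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (((pow(c, lam, n_sq) - 1) // n) * mu) % n

a, b = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds the plaintexts underneath -- no decryption needed.
print(decrypt((a * b) % n_sq))   # 42
```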
