4 Keys to Unlocking the Power of AI for Mission Acceleration

Getting the most out of artificial intelligence capabilities requires an understanding of what it is and what it is not

09-05-2023 | Jason Meil | Data Analytics

Key Takeaways:


  • Leverage AI as an assistive tool for human decision-making, as AI currently still solves narrow, data-focused problems.
  • Concentrate on the partnership aspect between humans and machines when introducing AI to missions and facilitate trust-building in the tools.
  • Organize and secure your data first, before implementing AI tools.


Artificial intelligence is a hot-button topic. While debates around AI rage on, we have become comfortable over time with machines ubiquitously helping us accomplish our tasks and making our everyday lives easier. One example is smart navigation on our phones and in our cars that monitors traffic and gets us from point A to point B in the shortest time. In government, AI is similarly about understanding technologies, their capabilities, and how to work with them to accelerate and improve mission outcomes.

My top priority when interacting with SAIC's government partners is to clarify confusion around AI and facilitate approaches to maximize its utility. Here are four ways to reach AI success.

1. Form human-machine partnerships where AI assists us

There are many misconceptions and fears surrounding AI and its abilities, including that it will replace humans and our decision-making power. It's important to understand that AI currently excels at solving specific problems, and we have not achieved generalized AI or machine consciousness.

By utilizing AI to handle the deluge of data, we can unleash human capacity for strategic thinking, reasoning and decision-making. Focus on the present benefits of AI and machine learning, such as reducing human cognitive load, speeding through tedious tasks and enhancing our daily work. In human-machine teaming, computers do what they do best, which is sorting massive amounts of data and extracting signals and patterns, while we are freed up to make better inferences and decisions.

2. Organize your data and create an authoritative source of truth

To fully utilize AI's potential, we must first tackle the data challenge. Data arrives from disparate sources, includes decades of legacy records, and is mostly unstructured, and much of it sits in silos that need to be brought together before machine learning models can use it. Vast amounts of data lack structure, quality and integration, yet that data must be ingested, cleaned and indexed before we can start doing AI.

Co-locating your data in a single place, and then being able to conditionally and logically separate it for mission utility, sets the foundation for effective AI implementation. Solutions on the market allow organizations to process and store structured data, such as Excel spreadsheets, alongside unstructured data, such as documents and PowerPoint files. Seek out a platform, such as Koverse, that provides multi-level security access based on user attributes, so that only the right data is disseminated to the right people at the right time. That secure, reliable data backbone is what powers AI-based analytics.
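The attribute-based access idea above can be sketched in a few lines. This is a minimal illustration of the pattern, not the Koverse API; the record structure, label names and helper function are all hypothetical:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Record:
    """One co-located data item, tagged with the security labels required to read it."""
    payload: str
    required_labels: frozenset = field(default_factory=frozenset)


def visible_records(records, user_attributes):
    """Return only the records whose required labels the user's attributes satisfy."""
    attrs = frozenset(user_attributes)
    return [r for r in records if r.required_labels <= attrs]


# Structured and unstructured items live together in one store,
# each carrying its own access requirements.
store = [
    Record("budget.xlsx row 7", frozenset({"FINANCE"})),
    Record("mission-brief.pptx slide 3", frozenset({"SECRET", "MISSION-A"})),
    Record("public fact sheet", frozenset()),
]

# A finance analyst sees only what their attributes unlock.
print([r.payload for r in visible_records(store, {"FINANCE"})])
```

The point of the sketch is that access control lives on the data itself rather than on the silo that used to hold it, so one co-located store can still serve many differently cleared users.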

Superficial intelligence refers to situations where AI doesn't work optimally due to poor data quality or unstructured data residing in silos.

3. Build trust and “explainability” in your AI

Humans still decide how to use and act on the analytical information that AI provides, so its adoption hinges on understanding how it works. As government missions increasingly look to use AI, users must be comfortable with the outputs and decisions made by learning models. Having humans in the loop, with users continuously validating and training models, is critical to establishing “responsible AI” and trusted human-machine partnerships.

Establish transparency in your AI processes and solutions so that machine-learning products can be traced and audited. "Explainability" means being able to go back, starting from data ingest, and understand which datasets were used, how the data was prepared, and how and by whom the learning models were developed, so you can confirm they make proper decisions and perform accurately. Once a model has been operationalized, examine its deep metrics for biases and drift. Seek out transparent tooling solutions to promote trustworthy and ethical AI.
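Drift monitoring, one of the operational checks mentioned above, can start very simply: watch how far a feature seen in production has moved from its training distribution. The data, threshold and scoring rule below are illustrative assumptions, not a prescribed method:

```python
import statistics


def drift_score(training_values, production_values):
    """Shift of the production mean from the training mean, in training standard deviations."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    return abs(statistics.mean(production_values) - mu) / sigma


# Feature values seen at training time vs. two later production windows.
training = [0.90, 1.10, 1.00, 0.95, 1.05, 1.00]
window_stable = [1.00, 0.98, 1.02]
window_shifted = [1.60, 1.70, 1.65]

ALERT_THRESHOLD = 3.0  # illustrative: flag when the mean moves more than 3 sigma
print(drift_score(training, window_stable) > ALERT_THRESHOLD)   # stable window: no alert
print(drift_score(training, window_shifted) > ALERT_THRESHOLD)  # shifted window: alert
```

A flagged window does not by itself mean the model is wrong; it is a cue for the human in the loop to revalidate the model against current data, which is exactly the kind of continuous oversight that builds trust.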

4. Make AI accessible and unintimidating

It's very hard to find data scientists, and even harder to find those with clearances for defense and national security missions. I believe in creating "citizen data scientists" by giving non-specialists the ability to interact with data and machine-learning models intuitively. For the non-technical members of mission teams, it's about usability and comfort: empowering them to work with analytics tools and extract the insights they need self-sufficiently.

There are powerful AI solutions on the market that do great things, but focus on those that provide step-by-step repeatability for users and open-architecture integration with current and future mission systems. Perhaps surprisingly, across government, from the Department of Defense to the intelligence community to civilian agencies, different missions are looking to AI to solve very similar problems, such as detecting objects within images, videos and text. Listening to our government partners and their users, SAIC has developed prebuilt, easy-to-use and tailorable AI models for tasks like computer vision and data fusion, which we call Mission Accelerators in our Tenjin AI development and orchestration tool.

AI is not about replacing human capital but about augmenting it

Despite the challenges that accompany any new technology, the federal government is moving forward with AI initiatives. It's not enough to solve the data problem and build powerful AI tools; we must also ensure that users trust and embrace them, addressing the core obstacles of data, policy and adoption.

Responsible AI with explainability and intuitive user interactions is crucial to uptake across government missions. It's also important to keep in mind that AI is meant to enhance our abilities and speed up our progress towards our goals. Augmenting workflows and allowing us to do higher-value work, AI empowers us to accomplish missions faster and better.

As part of SAIC’s LinkedIn Live series, I recently had the opportunity to discuss these topics and more with Manmeet Singh from Koverse, Inc., an SAIC Company, to help our audience members make informed decisions when implementing AI. Watch the LinkedIn Live recording for the full discussion.

 

Posted by: Jason Meil

Director of Data Science

Jason “Jay” Meil is the director of data science and chief data scientist for SAIC’s Artificial Intelligence (AI) Innovation Factory, where he leads AI technical strategy and solutions that enable rapid decision-making at scale in support of multiple intelligence disciplines.

With two decades of experience in applied mathematics, data science and deep learning, Meil is a recognized expert in analytical tradecraft, all source intelligence and open source intelligence. He serves as a technical advisor to numerous intelligence organizations within the Intelligence Community (IC) and Department of Defense (DOD), applying his tradecraft to the domains of intelligence, surveillance and reconnaissance; targeting; and algorithmic warfare.

Meil has led cross-functional teams in designing, building and deploying deep learning models to support federal government customers in complex missions of national importance, with the ultimate objective of keeping the nation safe against peer and near-peer threats. In addition to IC and DOD customers, he has supported civilian agencies including the Department of Homeland Security.

As an SAIC research fellow, Meil has focused on two areas:

  • integrating multi-modal intelligent decision support systems into command and control operations
  • offensive and defensive AI algorithms for identity intelligence, information warfare, information operations and unconventional warfare (I2/IW/IO/UW)

Meil is a frequent participant in research panels and industry discussions on the impact of AI on national security, including with the Johns Hopkins University Applied Physics Laboratory, the Center for Security in Politics at UC-Berkeley on behalf of DARPA, CERN's OpenLab and Quantum Technology initiative, the European Geosciences Union, the Atlantic Council Scowcroft Center for Strategy and Security, and AFCEA.

Meil, who is committed to lifelong learning, has numerous academic credentials in computer science, data science and AI. He has completed a MicroMasters from MIT in data, economics and development policy. He has certifications in quantitative analytics from the Wharton School (Aresty Institute of Executive Education) and applied mathematics for deep learning from Imperial College London. He holds certifications as a senior data scientist from the Data Science Council of America and in data analytics from Six Sigma Global Institute. He has completed a nine-month intensive fellowship in data science and machine learning with Lambda Institute.
