Delivering the Promise of Responsible Artificial Intelligence

By Carl Hahn, Dr. Amanda Muller, Dr. Jordan Gosselin, and Jonathan Dyer

Artificial intelligence is a transformative technology that is widely recognized but poorly understood. As such, AI suffers from an image and reputation problem, especially among the public and regulators who are understandably concerned by a powerful technology they do not understand. At American aerospace, defense and security company Northrop Grumman, efforts are underway to better explain AI to the world, and more importantly, to outline how AI might be used responsibly.

 

AI is a pivotal technology. It is already ubiquitous in our everyday lives, from streaming services and navigation apps to robotic vacuums and secure banking.

But AI is also playing a larger and larger role in national defense, such as way-finding for unmanned vehicles, enhanced target recognition, and many other applications that can benefit from speed, scale, and efficiency. Some functions are simply not possible using traditional computation or manual processes, but AI provides the necessary cognitive and computing power to make them a reality.

The real genius of AI is its ability to learn and adapt to changing situations. The battlefield is a dynamic environment, and the side that adapts fastest typically gains the advantage. But, as with any system, AI is vulnerable to attack and failure.

WHAT IS RESPONSIBLE AI?

AI has an image and reputation problem. The media frequently produce stories of AI gone rogue, bias in algorithms, or the dystopian specter of unaccountable killer robots. The lack of understanding about what AI can do, cannot do, or should not do has simply increased the confusion among the public and policymakers. This confusion will impede the innovation and progress needed to fully capture the potentially transformative benefits of AI unless urgent action is taken to build confidence and trust in AI solutions.

Fortunately, such action is well underway. Governments, think tanks, industry associations, and many leading technology and other companies have publicly announced their commitment to the development and implementation of responsible, trustworthy artificial intelligence. The US government, particularly the Department of Defense (DoD), has been at the forefront of these efforts, and in February 2020 the DoD formally adopted five principles which require AI to be (1) Responsible; (2) Equitable; (3) Traceable; (4) Reliable; and (5) Governable [A]. The US Intelligence Community released similar principles in July 2020, which further emphasized the importance of respecting rights and freedoms as well as protecting privacy, civil rights, and civil liberties [B].

In June 2022, the DoD issued its Responsible Artificial Intelligence Strategy and Implementation Pathway, which is required reading for companies in the defense sector because it points the way for embedding Responsible AI into the all-important acquisition and delivery of technology and solutions [C]. As stated in the Pathway document, “[i]t is imperative that the DoD adopts responsible behavior, processes, and objectives and implements them in a manner that reflects the Department’s commitment to its AI Ethical Principles. Failure to adopt AI responsibly puts our warfighters, the public, and our partnerships at risk.”

Leading industry organizations such as the Business Roundtable have announced their own AI ethics principles along with their priorities for AI policy, and the United States is not alone. The European Union (EU) AI Act is moving through the legislative process [D], and other governments have staked out positions of their own, such as the UK’s plan to deliver “ambitious, safe, responsible” AI in support of defense. The Atlantic Council has done important work in this area as well, recently publishing its Principles to Practice: Using Ethical Spectrums to Guide Decision-Making [E].

TURNING PRINCIPLES AND ADJECTIVES INTO ACTION

The common theme of these (and many other) sets of principles and frameworks is that developers of AI need to exercise discipline in their coding process so they can document and explain what they’ve done, how they’ve done it, and the intent behind the solution design. This includes how data is used, the sources of that data, the limitations or any error rate associated with the data, and how data evolution and drift will be monitored and tested. From greater transparency will flow increased understanding and acceptance of the solution, and with that, heightened trust among users, policymakers, and ultimately the public.
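
As an illustrative sketch only (this is not a description of Northrop Grumman’s actual tooling, and the function and feature names below are hypothetical), monitoring for data evolution and drift can be as simple as comparing each live feature’s distribution against the training-time reference distribution with a two-sample Kolmogorov-Smirnov test and flagging significant shifts:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray,
                 feature_names: list[str], alpha: float = 0.01) -> dict:
    """Compare each live feature column against its training-time reference
    distribution and flag statistically significant drift."""
    report = {}
    for i, name in enumerate(feature_names):
        result = ks_2samp(reference[:, i], live[:, i])
        report[name] = {"ks_statistic": result.statistic,
                        "p_value": result.pvalue,
                        "drifted": result.pvalue < alpha}
    return report

# Hypothetical usage: rows are samples, columns are features.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5000, 2))
live = np.column_stack([rng.normal(0.0, 1.0, 5000),   # unchanged feature
                        rng.normal(0.5, 1.0, 5000)])  # shifted feature
print(detect_drift(reference, live, ["sensor_a", "sensor_b"]))
```

In practice, a flagged feature would prompt a review of the data source or retraining, and the drift report itself becomes part of the documentation trail described above.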

Northrop Grumman is taking a systems engineering approach to AI development and is a conduit for pulling in cutting-edge university research, commercial best practices, and government expertise and oversight. We have partnered with Credo AI, a leading Responsible AI governance platform, to help Northrop Grumman create AI in accordance with the highest ethical standards. With Credo AI’s governance tools, we are using comprehensive and contextual AI policies to guide Responsible AI development, deployment, and use. We are also working with top universities to develop new, secure, and ethical AI and data governance best practices, and with technology companies to leverage commercial best practices.

The company is also extending its DevSecOps process to automate and document best practices in the development, testing, deployment, and monitoring of AI software systems. These practices enable effective and agile governance as well as real-time management of AI-related risks and opportunities. Critical to success is Northrop Grumman’s AI workforce, because knowing how to develop AI technology is just one piece of the complex mosaic. Our AI engineers must also understand the mission implications of the technology they develop to ensure the operational effectiveness of AI systems in their intended mission spaces. That is why we are investing in a mission-focused AI workforce through formal training, mentoring, and apprenticeship programs.

Our use of Responsible AI principles and processes is not limited to our customer-facing endeavors. Northrop Grumman is also leveraging the power of AI for internal operations. Applications include AI chatbots for employee IT services, predictive modeling for software code review, natural language understanding for compliance risk, and numerous others. By embedding Responsible AI into our internal information infrastructure, we support timely and effective business operations and develop capabilities that can be further leveraged for our customers’ benefit.

TACKLING THE DATA SET CHALLENGE

A key component of any AI-enabled system is the data used to train and operate it. Critical to the success of responsible AI-enabled systems is limiting data bias. Datasets are a representation of the real world, and like any representation, an individual dataset can’t represent the world exactly as it is. So every dataset is susceptible to bias. High-profile cases of bias have been demonstrated in commercial settings, ranging from a chatbot making inflammatory and offensive tweets to more serious cases such as prejudice in models built for criminal sentencing. If ignored, data bias can have serious implications in the national security space. Understanding the nature of the bias and the risk associated with that bias is key to providing equitable technology solutions. By working to recognize potential sources of bias, and testing for bias, we are actively working to mitigate bias in our data sets and AI systems.
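
To make “testing for bias” concrete, the sketch below is a generic example rather than Northrop Grumman’s actual test suite, and the column names are hypothetical. It compares a model’s positive-outcome rate across subgroups and flags any group whose rate deviates from the overall rate beyond a chosen tolerance, a simple demographic parity style check:

```python
import pandas as pd

def demographic_parity_report(df: pd.DataFrame, group_col: str,
                              prediction_col: str,
                              tolerance: float = 0.1) -> pd.DataFrame:
    """Compare each subgroup's positive-prediction rate to the overall rate
    and flag groups that fall outside the allowed tolerance."""
    overall_rate = df[prediction_col].mean()
    rates = df.groupby(group_col)[prediction_col].mean().rename("positive_rate")
    report = rates.to_frame()
    report["overall_rate"] = overall_rate
    report["disparity"] = (report["positive_rate"] - overall_rate).abs()
    report["flagged"] = report["disparity"] > tolerance
    return report

# Hypothetical usage with a toy dataset.
data = pd.DataFrame({
    "region": ["north", "north", "south", "south", "south", "north"],
    "model_prediction": [1, 0, 1, 1, 1, 0],
})
print(demographic_parity_report(data, "region", "model_prediction"))
```

A check like this does not eliminate bias, but running it routinely makes disparities visible so they can be investigated and documented.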

As an additional complication, the events of interest in a dynamic battlefield environment are likely to be rare events as the adversary purposefully works to obscure their actions and surprise the United States and its allies. So, it may be necessary to complement data collections with augmented, simulated, and synthetic data to provide sufficient coverage of a domain. Adversaries may also seek to fill datasets with misinformation to spoof or subvert AI capabilities. To develop AI responsibly in the face of these challenges, it is critical to maintain records of data provenance, data lineage, and the impact of changing data sets on AI model performance.
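
One simple way to keep the kind of provenance and lineage records described above (shown here only as an illustrative sketch; the record fields and names are hypothetical, not a Northrop Grumman standard) is to attach a small metadata record (source, content hash, transformation history, and whether the data is real or synthetic) to every dataset version used in training:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    """Minimal provenance/lineage record attached to each dataset version."""
    name: str
    source: str                      # e.g. sensor feed, simulation, vendor
    synthetic: bool                  # real collection vs. augmented/synthetic data
    content_sha256: str
    transformations: list[str] = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(path: str) -> str:
    """Hash the raw file so later changes to the dataset are detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: record a synthetic dataset version before training.
record = DatasetRecord(
    name="radar_tracks_v3",
    source="simulation",
    synthetic=True,
    content_sha256=fingerprint("radar_tracks_v3.parquet"),  # hypothetical file
    transformations=["deduplicate", "normalize_ranges", "augment_rare_events"],
)
print(json.dumps(asdict(record), indent=2))
```

Storing such records alongside each model’s training run makes it possible to trace which data, real or synthetic, contributed to a given model’s behavior and to assess the impact when a dataset changes.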

Northrop Grumman established a Chief Data Office (CDO) to unify its customer-facing data management efforts and to address these challenges for its internal operations. The CDO sets and executes an enterprise data strategy and maintains a corporate data architecture to enable data-driven decision-making. Key tenets of the data strategy include securing and protecting the data, ensuring the usefulness and quality of the data, and providing accountable data access for information systems and stakeholders. This deliberate and comprehensive focus on data quality and access is a key enabler of the responsible development of AI systems, both for internal operations and for customer-focused development.

CONCLUSION

AI enables revolutionary changes in the way national security operations are conducted. With the incredible power this technology provides, it is incumbent upon its developers and operators to be responsible and transparent in its design and use. Northrop Grumman and its industry partners are committed to the responsible development and use of AI, and to continuing to contribute to research and to the development of public policy and ethical-use guidelines for AI in national security applications. Transparency, equitability, reliability, and governance are, and should continue to be, requirements for the responsible use of AI-enabled systems.

 

REFERENCES

[A] DOD Adopts Ethical Principles for Artificial Intelligence, U.S. Department of Defense, 24 February 2020, www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/

[B] Intelligence Community Releases Artificial Intelligence Principles and Framework, Office of the Director of National Intelligence, July 2020, www.dni.gov/index.php/newsroom/press-releases/item/2134-intelligence-community-releases-artificial-intelligence-principles-and-framework

[C] Responsible Artificial Intelligence Strategy and Implementation Pathway, U.S. Department of Defense, 22 June 2022, https://media.defense.gov

[D] The Artificial Intelligence Act, The AI Act, 21 April 2021, www.artificialintelligenceact.eu

[E] Principles to Practice: Using Ethical Spectrums to Guide Decision-Making, The Atlantic Council, 28 July 2022, www.atlanticcouncil.org

 

ABOUT THE AUTHORS

  • Carl Hahn is VP & Chief Compliance Officer for Northrop Grumman, a global aerospace, defense, and security company headquartered in Falls Church, VA. The majority of Northrop Grumman’s business is with the U.S. government, principally the Department of Defense and the intelligence community. In addition, the company delivers solutions to global and commercial customers.
  • Dr. Amanda Muller is Chief, Responsible Technology for Northrop Grumman.
  • Dr. Jordan Gosselin is AI Campaign Chief Engineer & Technical Fellow for Northrop Grumman.
  • Jonathan Dyer is Principal AI Engineer Systems Architect for Northrop Grumman.
This article is from the Fall 2022 issue of Ethisphere Magazine.

Futureproofing Your Ethics and Compliance Program in 2023

Bringing the BELA Perspective to 2023

Futureproofing Your Ethics and Compliance Program, Part 3

 

Interview by Bill Coffin

Sarah Neumann is the Senior Director of Engagement for the Business Ethics Leadership Alliance (BELA), the premier membership organization for ethics and compliance professionals. Sarah works closely with BELA members to help them get the most out of their memberships, whether it’s networking with other members, participating in high-level working groups, or sharing content from their own organizations as an example of practices in action. In this interview, Sarah lends her unique perspective on the trends shaping ethics and compliance and what organizations are doing now to stay ahead.

 

A lot of companies are trying to solve for what happens after an employee reports a concern. Can you talk about what ethics and compliance teams are doing to create more transparency in the reporting process and promote speak up culture?

 

This continues to be a real area of focus for ethics and compliance teams in the BELA community. This topic has been a big favorite at some of our roundtable events, and some really innovative ideas have come up during those conversations.

One recent idea that our team liked was shared on the BELA Member Hub by the ethics and compliance team of a health insurance provider. They worked closely with their HR partners to produce an employee-facing speak-up report that provides an annual overview of their organization’s compliance and ethics processes. The report highlights trends they’re seeing, reporting statistics related to investigations and issues and how they were remediated, and it does a really good job of demystifying what happens after a report is made while emphasizing the organization’s standards for ethical behavior.

A lot of companies are finding ways to tell stories of real misconduct reports and the outcomes. They’re telling these stories to their employees to address the misconception that every report results in termination. Some of the storytelling formats we’re seeing include blog posts and videos that retell what happened. If ethics and compliance teams don’t have buy-in from legal to use real anonymized stories, they are opting for communication tools like short videos, purchased or produced, that are more generic but still explain what happens when someone reports, how to report, and why it’s important to do so.

Companies are coming up with really creative ways to brand their storytelling, and not only do they get a lot of clicks, but they also humanize the compliance and ethics team and make them relatable as real people. That goes a long way toward advancing the program’s culture and getting people comfortable with going to them with an issue.

More ethics and compliance teams are creating or refreshing toolkits for their managers and people leaders. Those toolkits are taking them through the whole process step by step on what to do when someone reports a concern to you. We have some good examples of these types of toolkits and trainings on the BELA Member Hub.

Training is under constant pressure to become more targeted, interactive, and concise in the drive to engage people. How are you seeing companies evolve their training programs in this direction?

 

We held a training evolution roundtable earlier this year, where two global multinational BELA member companies shared overviews of their training programs. Both programs really focused on relevance and impact, and they were doing this by making changes based on employee feedback about what was impactful and relevant to them. Employees consistently were telling them that they liked videos, for example, so they invested in more short videos that featured role- and scenario-based content. That all came from employee feedback.

Also, companies are focusing on relevant and role-based training. But to make it engaging and useful, they are letting employees have some say in what they’re completing. Companies might have a two- or three-year rotating library of trainings that they offer. Some of these certainly will be required every year. But others might be optional, and employees might be asked to select three from a list of topics and have until a certain date to complete them. This gives employees some control over what they are being trained on.

How are programs responding to the challenge of measuring the effectiveness and/or testing the impact of training? In a lot of cases, training programs are mandated to run for three hours, and given short attention spans, that might not be a very impactful way to reach people.

 

Measuring training effectiveness is so important now, because of the DOJ’s updated Evaluation of Corporate Compliance Programs guidance. Now we know that just tracking training completion isn’t sufficient, so companies are trying to figure out how to measure whether the information that’s covered in the training is retained and applied.

Some good ideas around that have come out of a recent BELA working group focused on measuring training effectiveness. Two ideas that group came up with were measuring the value that learners perceive and measuring how behavior was impacted. The value piece could be as simple as asking whether they found the training informative, or getting feedback through post-training surveys or interviews, and then using that feedback to make the training better as time goes on.

For behavioral impact, you could look at things like reporting rates. One idea that also came out of that working group for tracking that metric was to add a question or a field in the reporting form where employees can confirm whether or not training factored into their reporting. That’s very simple and straightforward, but what an easy way to gather that metric.

When it comes to mandated training, those sessions can be very long. That’s a challenge, particularly since companies are so focused on training fatigue and trying to move to short or targeted trainings. One idea that we’ve heard of, where it’s allowed, is that in areas where many hours of training on a specific topic are legally required, companies use whatever flexibility they have. Maybe they can spread out longer trainings incrementally over time. In one roundtable, a BELA member shared that they try very hard to never have any one training period exceed 45 minutes, as a way to address training fatigue.

With the recent shift to remote working, side employment has become a major conflict-of-interest issue. What are you seeing on this front?

 

We’ve had a couple of roundtables on this, and I think we’ll be doing another one. It’s certainly not something that we were hearing about very often three years ago. Some companies have policies that prohibit outside or secondary employment, and some require disclosure of potential conflicts of interest. But the line gets pretty fuzzy for employees and employers when the outside activity is something like a hobby blog that has evolved into social media influencing. There, we’re seeing more and more companies update their social media policies and their conflicts of interest policies to reference each other, or at least to define what a social media influencer is. They can explain that this could be considered secondary employment or a side business, that it could be a conflict, and if so, that it should be disclosed. But from there it gets even more interesting, because once someone does disclose some kind of activity like that, then the compliance and ethics team has to evaluate whether it really is a conflict, and I think that is where there is a real challenge.

In the roundtable we had on this, there was a lot of discussion around what they should be looking at. Is it the thing that is being promoted? Is it a competing product, directly or indirectly? Is that employee being compensated, and how? And then another interesting question is what if they’re promoting your organization’s own products and using commission-based links to earn money off of that?

We have a lot of members who are thinking about these issues and what they’ve been trying to do in our roundtables is to come together, lay out all these challenges and the ideas that they’ve been using to evaluate them, and try to create some consistencies around what is permissible and what isn’t when evaluating these conflicts. But every situation is different, and people need to feel like they’re free to pursue what fulfills them. It’s nice to know that when you’re disclosing something that you think could be a potential conflict, usually all the teams are trying to do is to figure out how to make sure that it’s not. It might be as simple as “if you make these changes, then we have no concerns.”

What changes are you seeing in how companies are conducting risk assessments?

 

This is coming up as a priority for a lot of BELA members for the next fiscal year. Companies are reviewing and updating how they do risk assessments. They’re taking a more holistic approach to their risk assessments and using data to make sure that their program is truly doing what they want it to do. We’re seeing more data analysts sitting with ethics and compliance teams to help with that.

A couple of challenges that we’re hearing relate to the strategic timing of risk assessments so resources are allocated appropriately, and investing in tools that help eliminate manual processes. Those tools can help improve visibility, data analytics, and other activities so that people are all on the same page.

As we look to 2023, what kind of year should ethics and compliance teams expect?

 

Based on conversations that our team is having with the BELA community, I think that in addition to everything that we just discussed, there are a few other things that they should expect to address.

I think they’ll be spending more time on trade sanctions challenges and making sure that their internal processes are keeping up with all of the changing activity there. I think they are going to be looking at their anti-money laundering programs, since countering money laundering is such a focus in many countries.

I know a lot of our BELA members are looking at policies and controls around data regulation and things like AI fairness and bias. Those issues continue to evolve, as AI does. I think the momentum with ESG will just continue to build and teams will be making sure to stay on top of all of the changing global standards and disclosure requirements. BELA will be kicking off another ESG working group this year, so we’ll continue that work.

Whatever the coming year brings, I know that the BELA team is really looking forward to continuing to support our members. A lot of these issues that we’re talking about lend themselves to our member hub library and closed roundtable discussions, where you can come and talk about the challenges that you’re having, and share ideas.

If any of our members are interested in hosting a roundtable event, either virtually or in person, or if they have a program example or examples that address any of the kinds of topics we talked about here that they want to share with the community, we would love to hear from them. We want to champion their programs and the good work that the teams are doing within the BELA community, and one of the best ways we can do that is by highlighting a program initiative at an event or doing a piece about it on the member hub. It could be a super-short interview accompanied by a program example, whether it’s a communication, a training, or a presentation. We are always open to the community sharing with us what they are particularly proud of, so that we can then share that more widely and inspire others. We get excited about that.

 

ABOUT THE EXPERT

Sarah Neumann is the Senior Director of BELA Engagement. Prior to joining Ethisphere, Sarah spent 10 years at Thomson Reuters, most recently in the areas of continuing legal education, learning management and professional development. For more information on BELA, email Sarah at [email protected].

This article is from the Fall 2022 issue of Ethisphere Magazine.