Common concerns

I don’t have the budget for an AI risk assessment. How should I pitch this internally?

Ask your engineering, product, and finance colleagues the following questions:

Shouldn’t we wait until we have finished our AI project(s) before engaging you?

If you would prefer that engineering and product teams deploy AI-powered products rapidly, without worrying about cybersecurity or privacy, then waiting makes sense. But what are you going to do after they have already:

If you think you will be able to apply guardrails after all of this is already done, think again. Even the forward-leaning companies we have helped had to battle some inertia created by the pace of their earlier AI-related development. Security teams often get left out of discussions about new product and feature development, so the time is now (frankly, yesterday) to start talking about how you are going to:

Ok, so what level of sophistication with AI does my company need to have for you to conduct a risk assessment?

We can do an AI risk assessment at any stage of development or maturity. The process is designed to be flexible and can even incorporate future roadmap plans, so even if development or rollout is in its early stages, a risk assessment can still provide valuable insight for strategic planning. In fact, this is the best time to start!

Shouldn’t we hire someone to build this in-house?

Having a full-time employee run your AI GRC program might make sense under certain conditions. For example:

You have or can acquire sufficient in-house leadership expertise.

This can certainly be true if you are a Big Four U.S. bank or equivalent. Bank of America, for example, made Ann Chang the global head of “security of AI.”

For organizations with the right resources, doing it yourself might be the best way to go.

If you can’t afford to dedicate at least 25% of a senior leader’s time to this task (and find someone capable of the role), though, you might consider getting external help immediately.

You can monitor the constantly changing landscape yourself.

With the correct mix of:

this may be possible to do on your own.

You'll need to stay on top of:

But otherwise, it probably makes sense to leverage a specialist solely focused on these issues.

You want to focus on AI governance in addition to business use cases.

You will know the business goals of your AI projects better than anyone else, and in certain extremely niche situations, achieving those goals requires strict in-house oversight of the GRC program as well.

While exerting complete control over your program by having employees build it may sound appealing, also consider the opportunity costs:

You can also see this video response here.

Will I be reliant on you to keep my AI governance, risk, and compliance program running indefinitely?

No. StackAware's goal is to get your program to the point where you can run it yourself.

As we (and the entire industry) build out best practices for AI security and risk management, white-glove service is definitely appropriate for certain organizations.

When these standards solidify, it will be time to move into self-service mode.

Check out this video response as well.

Determining the right fit

Are you focused on mitigating AI-powered threats (malware, phishing, etc.)?

No. StackAware is focused on enabling the secure use of AI technologies and systems, not defending against AI-powered threats.

We think we have been hacked. Do you do incident response or forensics?

No. StackAware is focused entirely on preventing AI-related cybersecurity incidents; we do not handle incident response or post-breach investigations. We do, however, have referral partners whom we can recommend.

Do you have a data sheet I can review and share internally?

Yes. We have ones available for both our assessment and governance offerings.

We are a software product company that provides software partially or exclusively for customers to run in their own environments (i.e., not as-a-Service). Do you have experience working in these types of situations?

Yes. We have experience at multiple companies that have offered both customer-managed (e.g., on-premises) and vendor-managed (e.g., SaaS) products and know how to handle both models.

What is required for an AI penetration test?

You’ll need a functioning application the tester can interact with, through either a user interface (UI) or an application programming interface (API). It can be in the early stages of development, but the tester will need to be able to authenticate and supply inputs to the application.
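As a concrete illustration of that minimum bar, here is the kind of authenticated request a tester must be able to make. This is a sketch only: the endpoint, token, and payload are placeholders, not a real test harness.

```python
import requests

# Placeholder values: substitute your application's real endpoint and a
# dedicated test-account credential (never production secrets).
BASE_URL = "https://app.example.com/api/v1"
API_TOKEN = "test-account-token"

# The tester must be able to authenticate...
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# ...and supply arbitrary inputs to the feature under test.
response = requests.post(
    f"{BASE_URL}/chat",
    headers=headers,
    json={"prompt": "Summarize this document."},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```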

Logistics

Do you just email a PDF at the end of the engagement?

No. StackAware provides a machine-readable risk register (in .csv format) that you can upload to your governance, risk, and compliance (GRC) tool. We also provide a Google Slides presentation that is executive-friendly.
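For illustration only, a machine-readable risk register could be produced like this. The column names and values below are hypothetical, not StackAware's exact schema; check your GRC tool's import format.

```python
import csv

# Hypothetical schema for illustration purposes.
FIELDS = ["risk_id", "description", "likelihood", "impact", "treatment", "owner"]

rows = [
    {
        "risk_id": "AI-001",
        "description": "Sensitive data sent to third-party LLM API without sanitization",
        "likelihood": "medium",
        "impact": "high",
        "treatment": "mitigate",
        "owner": "CISO",
    },
]

# Write a CSV that a GRC tool can ingest directly.
with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```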

How do you use AI? How do you protect my data?

StackAware is an AI-powered company and uses generative AI technologies like GPT for a variety of use cases. Specifically, we use retrieval-augmented generation (RAG) techniques with GPT-4 to analyze customer architectures and technological approaches for compliance implications.

To protect security-sensitive customer data when doing so, StackAware uses only the OpenAI API for RAG analysis; it has more favorable data retention policies than the ChatGPT user interface, and API data is not used for model training by default. Additionally, out of an abundance of caution, StackAware uses an automated approach to sanitize customer names and identifiers from data before passing it to the OpenAI API. Thus, even if OpenAI suffered a data breach or leak, the exposed information would not be attributable to your organization.
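As a rough sketch of this sanitize-before-submission pattern (not StackAware's actual pipeline; the identifier map, replacement logic, and prompt below are purely illustrative), the flow might look like:

```python
from openai import OpenAI

# Illustrative identifier map; a production pipeline would build and apply
# this automatically rather than hard-coding it.
PSEUDONYMS = {
    "Acme Corp": "CUSTOMER_A",
    "acme.example.com": "CUSTOMER_A_DOMAIN",
}

def sanitize(text: str) -> str:
    """Replace customer names and identifiers with neutral placeholders."""
    for real, placeholder in PSEUDONYMS.items():
        text = text.replace(real, placeholder)
    return text

client = OpenAI()  # API data is not used for model training by default

architecture_notes = "Acme Corp routes inference traffic through acme.example.com."

# Only the sanitized text ever reaches the third-party API.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Analyze this architecture for compliance implications."},
        {"role": "user", "content": sanitize(architecture_notes)},
    ],
)
print(completion.choices[0].message.content)
```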

Separately, StackAware has executed a Data Processing Addendum (DPA) with OpenAI to govern the handling of any personal data processed by the service.

We pride ourselves on our transparency and list all known AI processing in our supply chain in our software bill of materials (SBOM).

You can also learn more about our data security practices at our trust center.

Why do I need to designate a business leader who is accountable for the organization’s security? Shouldn’t that be a security person?

While security might be everyone’s responsibility, at the end of the day only one person should be accountable for it. Ensuring that the person making risk management decisions is a leader with holistic responsibility for your business, and all the risks it faces, is key to having an effective and coherent security program.

Why do I need to designate a single security advisor in our contract? Isn’t that you?

StackAware enables your security program to succeed by applying not only technical but also organizational best practices. Ensuring you have a single, full-time individual responsible for giving security advice and implementing business decisions is the best way to set you up for success. If we do our job right and help your business scale, eventually you won’t need our consulting services anymore. Grooming a leader to take over when that happens is the best way to prepare.