Clark County CIO Shares Insights on Securing AI Innovation and Broadband Initiatives

Clark County, Nevada, is the most populous county in the state and the 11th most populous in the country, with 2.5 million residents.

At the helm of the county’s 200 IT employees is Bob Leek, Clark County’s Chief Information Officer.

Leek’s team is tasked with securing the county’s enterprise technology infrastructure, emergency services and critical infrastructure such as Harry Reid International Airport in Las Vegas.

Alongside the task of cybersecurity, the team also innovates to provide better ways for employees to do their jobs and for residents to access services.

The county is currently working on a new broadband initiative which will connect the ‘last mile’ of fiber to homes and businesses in underserved areas. Leek’s team is also working on 15 active AI pilots, which he discussed in an exclusive interview with Infosecurity during Black Hat USA 2024.

During our conversation, Leek also described some of his biggest cybersecurity concerns, Clark County’s cybersecurity successes and how he’d like to see a more proactive approach to cybersecurity develop.

Infosecurity Magazine: First, what are your biggest concerns in cybersecurity today?

Bob Leek: Cybersecurity challenges are multifaceted. The continuity of our operations is extremely important. With digitizing local government, which we are well down the path of having done, our operations’ capability to withstand an outage of any type is compromised. This is to the point where some teams have said it would be very difficult for them to go to any type of manual procedure.

With this wholesale move to digital solutions, the ability to move to something that isn’t digital has gone by the wayside.

Second, I think the impact of a cybersecurity incident has changed. It’s almost as if cybersecurity issues are background noise now. There is such a generally low level of trust in government that having a cybersecurity incident is almost expected, not a surprise.

Yes, Facebook, for example, has a lot of your data, but on the government side we know a lot about you, and the people we tend to provide services to come from vulnerable populations.

If you think about housing assistance programs, food assistance programs, the people that get in touch with us for services relating to domestic violence, there’s a lot of very sensitive data involved in that.

Having a cybersecurity incident is not just about systems being down, it’s about the impact on the trust level, which is already low.

IM: On the flip side, what are the biggest successes the cybersecurity industry is experiencing today?

BL: A success is the investment we have made in cybersecurity; we probably thwart a million events or attempts to compromise our systems a week.

It is now important to measure the success of all those investments.

One measure is how we respond to cyber-events, such as whether we isolated the threat. Another is quantifying how many attacks we’ve been able to fend off before they disrupt our systems. That’s a positive outcome because it means our systems continue to be up and available.

IM: Clark County is working on a broadband project to extend the reach of fiber connectivity. When Clark County embarks on a project of this nature, what are the main cybersecurity considerations?

BL: It starts with a security-first mindset. For far too long security was something that you would ask for at the end of a project.

Now we have pulled all the thoughts around security into the design phase. We start with asking ourselves what the approach is relating to security for every solution we are implementing.

That way we’re building security into the solution, rather than assuming it’ll be taken care of later.

This comes in different forms; one example would be the access that people have to the data. There's a concept called least-privilege access, or in other words, you are only allowed to see what you're supposed to see.
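To make the idea concrete, here is a minimal sketch of least privilege as an explicit role-to-permission lookup with a default of deny. The roles and permissions are hypothetical examples and are not drawn from Clark County's systems.

```python
# A minimal sketch of least-privilege access control. The roles and permissions
# below are hypothetical examples, not Clark County's actual configuration.

ROLE_PERMISSIONS = {
    "housing_caseworker": {"read:housing_applications"},
    "benefits_clerk": {"read:food_assistance_cases"},
    "it_admin": {"read:system_logs"},
}

def can_access(role: str, permission: str) -> bool:
    """Allow access only if the role explicitly includes the permission (default deny)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A housing caseworker can see housing applications...
assert can_access("housing_caseworker", "read:housing_applications")
# ...but is denied anything not explicitly granted.
assert not can_access("housing_caseworker", "read:food_assistance_cases")
```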

If we haven't thought about what data is in the solution, including personal data, and who should have access to that data, we run the risk of compromise.

The other piece to that is that government is a highly regulated industry. The compliance requirements people are most familiar with are the Health Insurance Portability and Accountability Act (HIPAA), which relates to health data, and the Payment Card Industry (PCI) standards.

If we're building a solution and we don’t consider if it is going to be compliant with those regulatory guidelines, then we could run the risk of developing a solution that ends up failing an audit. This would then have a tremendous cost.

Pulling security earlier in the process is the methodology that an organization like ours has embraced.

IM: The county is working on several AI projects. What are the main focuses and how are you ensuring those projects are secure?

BL: One example relates to the 30,000 Uber and Lyft drivers in the Las Vegas market. Every year they must renew their license, and to do this they have to produce certain documents.

We're going to develop an AI-driven engine where people can upload their documents and the AI will undertake identity validation and identity verification. It will also do document validation to ensure your driver's license and utility bill match in terms of address.
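As a rough illustration of the kind of cross-check such an engine might run, the sketch below normalizes two addresses and only passes the application if they agree. The normalization rules and example data are hypothetical and do not represent the county's actual pipeline.

```python
import re

# Hypothetical sketch of an address cross-check between two uploaded documents;
# not the county's actual validation pipeline.

ABBREVIATIONS = {" st.": " street", " st ": " street ", " ave.": " avenue", " blvd.": " boulevard"}

def normalize_address(addr: str) -> str:
    """Crude normalization: lowercase, collapse whitespace, expand common abbreviations."""
    addr = re.sub(r"\s+", " ", addr.lower().strip())
    for short, full in ABBREVIATIONS.items():
        addr = addr.replace(short, full)
    return addr

def addresses_match(license_addr: str, utility_bill_addr: str) -> bool:
    """Route the application to a human reviewer unless both documents agree on the address."""
    return normalize_address(license_addr) == normalize_address(utility_bill_addr)

print(addresses_match("123 Fremont St. Las Vegas, NV", "123 fremont street las vegas, nv"))  # True
```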

We currently do that in person, so up until last year we had 30,000 people come into our office to be processed.

The failure rate on those documents was around 30%, meaning that around 9,000 of those people had to leave and then come back.

If we shift all of that work to an online engine, then the only people we need to come into the office are those for whom we need to physically inspect their documents.

That language model and the work that we're doing around identity mean all you have to do is share your identity with us. And we don't keep all of that information; we only use it to validate who you are.

"Using publicly available data sets is where the risk comes to light."

Applying an AI workflow to that process alongside identity verification will reduce the impact on Uber and Lyft drivers, who otherwise have to take time out of their day and miss out on fares.

Using publicly available data sets is where the risk comes to light. For example, we don’t want to use ChatGPT as an engine because we don’t know where, when and how all its data got created.

For me, this comes back to public trust.

If we use data that we are the steward for, that you've already shared with us and we build solutions that make it easier for you to do business with us, we're increasing the level of trust.

Conversely, if we use language models and reference libraries where we don't know who built them and where they got their data, then we run the risk of eroding trust.

In our work we take care of vulnerable populations, so we have a responsibility for stewarding that data in an appropriate way and we take that responsibility very seriously. So, we're being very thoughtful about how to apply AI.

For me, AI is a ‘how’ and not a ‘what’. We already know what we do and why we do it. AI is bringing a new set of ‘hows’. I don't call it artificial intelligence; we call it augmented intelligence. AI is a ‘how’ that lets us do what we do in a better way.

IM: Clark County is pivoting to a more proactive approach to cybersecurity. What was the turning point and how are you achieving this?

BL: We get all the alerts that come out of the Cybersecurity and Infrastructure Security Agency (CISA). We get an alert that says: this happened, here's what you should do about it.

What I would like to see is intelligence telling us: a cyber-event is going to happen next week, and you should do something about it.

That intelligence exists within organizations like the FBI and CIA, and within threat intelligence organizations that are scanning the landscape of these bad actors.

A proactive approach that tells us, ‘we think in three weeks this could happen’ is something I would like to see more of.

"We’re very good at reacting, we need to be better at proacting."

Then we can look at that information and apply a risk tolerance. For example, if something is going to happen in the Google environment, but I have no Google environment-based solutions, I can ignore that alert. But if something is going to happen to Azure, and we're heavily invested in Azure, we need to take action.

When I talk about the proactive side, it's the threat intelligence that is happening coupled with knowledge of our environment.

So, if I tell Armis [a cyber exposure management platform provider] here’s what I have, and you can separate out the threats so that I only need to pay attention to the ones that are going to matter to me, then I think that’s going to contribute to the proactive stance that many of us would like to take. We’re very good at reacting; we need to be better at proacting.
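As a hypothetical sketch of what that environment-aware filtering could look like, the example below matches a threat-advisory feed against a simple asset inventory so that only relevant alerts surface. The data structures and feed are illustrative, not Armis’s actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch: match incoming threat advisories against an inventory of the
# platforms actually in use, so only the relevant ones demand attention.
# The data structures and feed are illustrative, not Armis's actual API.

@dataclass
class Advisory:
    title: str
    affected_platform: str      # e.g. "azure", "google_workspace"
    expected_window_days: int   # how soon the activity is anticipated

OUR_PLATFORMS = {"azure", "m365", "on_prem_vmware"}  # hypothetical asset inventory

def relevant_advisories(advisories: list[Advisory]) -> list[Advisory]:
    """Keep only advisories touching platforms we run, soonest expected first."""
    hits = [a for a in advisories if a.affected_platform in OUR_PLATFORMS]
    return sorted(hits, key=lambda a: a.expected_window_days)

feed = [
    Advisory("Credential-stuffing campaign", "google_workspace", 7),
    Advisory("Ransomware activity targeting cloud tenants", "azure", 21),
]
for a in relevant_advisories(feed):
    print(f"Act on: {a.title} (expected within ~{a.expected_window_days} days)")
```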
