Ep. 49: Cyberwarfare tomorrow
This episode, we turn to the future world of cyberwarfare — from life after encryption to the 5G debate, from the next election to the next generation of cyber professionals, and a lot more.
Our guests include:
- Dawn Thomas, Associate Director and Research Analyst on the Safety and Security team of CNA;
- Paul Gagliardi, a former U.S. intelligence contractor and current threat intelligence analyst at SecurityScorecard;
- Dmitri Alperovitch, Co-Founder and CTO at CrowdStrike;
- Adam Segal, who directs the Digital and Cyberspace Policy Program at the Council on Foreign Relations;
- Matt Wyckhouse, CEO at Finite State;
- and B. Edwin Wilson, Deputy Assistant Secretary of Defense for Cyber Policy.
Find CNA's report "Cybersecurity Futures 2025" here.
Find Finite State's Huawei report (PDF) here.
A transcript of this week's episode is below.
Find last week's episode here.
Subscribe on Google Play, iTunes, Overcast, or wherever you listen to podcasts. Thanks for listening!
Last week we reviewed how cyberwarfare has shifted from worry about other organizations and big companies to folks like you and me — to the stuff in our cell phones and our wallets, and how well we protect those things with a healthy dose of skepticism and discipline and patience. Don’t click on phishing links, or put random thumb drives into your computer, for example. We learned how nefarious hackers are pivoting from massive data breaches to more targeted leaks, down to specific individuals. And how some marketing companies are adopting some of those same tactics as well, strangely enough.
This week, we’re gonna inform and possibly scare you just a little bit more. Because our discussion this week turns more toward nation state hackers — programming teams from America’s so-called great power competitors like China and Russia.
It also concerns the technology race for 5G services that’s playing out across the globe today. We’ll get into a bit about autonomy and quantum technology — and how some experts see those things shaking up our lives as well as our already-shaken up definitions of things like “cyber” and privacy and what to expect in future presidential elections here in the states.
Dawn Thomas has given many of these forecasts about the future a great deal of thought. You may recall from last week, she’s an Associate Director and Research Analyst on the Safety and Security team of CNA — and that’s a nonprofit research organization based in Arlington, Virginia. This spring, her team at CNA released a report entitled “Cybersecurity Futures 2025,” and she sat down with me recently to elaborate on a few of those findings.
Thomas: “This project started a long time ago, so this is the next iteration of the first round. And the first round consisted of some folks at UC-Berkeley’s Center for Long Term Cybersecurity, developing some scenarios talking about the world in 2020. And that was the future date. And it was writing scenarios to think about the what-ifs — what if cyber took on a different meaning in our world, in our societies, in our countries, and what would be the kind of non-obvious results of cyber invading kind of our worlds in that way? So they wrote these scenarios and then they kind of wrote a report and they sent it out to the world. The second time around, they thought well part of this is global, it’s a global question. So how is America gonna respond to these things versus how people in Asia respond, and how people in Europe respond? And obviously when you’re talking about now kind of a much wider audience and the differences between them, you start to see where traditional alliances may not hold up anymore. And traditional enemies may become — you may have a lot more in common with them in this new world. So the concept was that they would join with CNA and with a partner from Steptoe & Johnson and with the World Economic Forum and say well let’s start doing these — these new scenarios for 2025 around the world and start answering those questions: How do people see things differently based on where you sit.”
Watson: “So this first scenario that I’m looking at here, it’s called Quantum Leap. And as I was reading it, scenario one of four, as I was reading over it I was reminded of the changing nature of cyber, the changing applications of cyber. We don’t even have a quantum computer yet. Right now, Quantum Leap, of course, is the Scott Bakula show that I grew up with. And I was thinking that maybe I could use this as an excuse to get the theme song in. Maybe we will, I don’t know.”
Thomas: “That’s a great show, so yes, do that.”
Watson: “I still love it. But this is a whole other, of course, a whole other framing. And quantum technology, as far as I understand, the Defense One headlines that we have and the reporting here and there on quantum computing tends to be an almost neck-and-neck race between the U.S. and China. What can you tell us out of scenario one, Quantum Leap, which is maybe most useful for our listeners who are trying to make sense of the risks of cyber in the coming weeks, days, months and years?”
Thomas: “I think the Quantum Leap scenario is one of two that has the most implications for geopolitics. That the division on what you do with quantum once you have it — if you give it to everybody, if you just give it to your military, if it’s government-controlled, if you decide to put it out into the world, if everyone has it then kind of no one has it — that kind of decision-making process that every country is gonna have to go through is a game-changer in the geopolitical world. And also in the criminal world. The idea that this kind of technology can fall into the hands of nonstate actors, but also just cybercriminals that are just looking to watch the world as it burns or just make a lot of money when it does. It has extremely both frightening and amazing consequences, right? Because the flip side of all of this stuff is what opportunities are we opening up? What can we do in the medical world, for example, once we have quantum? And the advances that we could make are mind-blowing.”
Watson: “Could you kind of illuminate some of the applications? Like I hadn’t even thought about drug cartels utilizing this technology, and it’s really probably because I just don’t have a fluency in its applications — basically the worries about proliferation have been stuck to kind of armed drones over the last couple years with the rise of ISIS, these like very very clearly bad guys. What about some of the quiet guys like a drug cartel or what are some of these applications of quantum? I guess the only other application that I’m aware of is messaging, maybe highly secure messaging.”
Thomas: “Right. So if you think of encryption — and I’m not an expert on this topic, and I think if you ask most experts on this topic you’re gonna get different answers as well. So that kinda covers me. So I think, though, when you think about encryption no longer being the way that you keep anything secure, and the implications of that, that’s pretty much it in a nutshell what kind of these groups could do when they can kind of obliterate any files from Interpol. Or they can get into—”
Watson: “That’s frightening.”
Thomas: “Yes. Or they can get into any information that border security uses so that maybe you have great facial recognition that you use at the border — or even other biometrics like fingerprints — but then the data’s not there anymore. Or, even scarier, it’s manipulated. When you think of kind of criminal organizations being able to have this kind of identity fluidity because nothing’s secret anymore, that’s the kind of worst-case scenario that people hide under their beds for.”
Watson: “Interesting. Off the top of your head, we have three scenarios left. Is there another that kind of stands out above the other two? I’ve got Wiggle Room. I’ve got Barlow’s Revenge. And Trust Us.”
Thomas: “I actually love Trust Us, and it’s probably because I’m a child of the 80s and I watched those ‘Minority Report’ and the ‘I, Robot’ and kind of all those things.”
Watson: “Philip K. Dick.”
Thomas: “Right. So anything where the machines take over is something I watched growing up — ’Terminator,’ all that stuff, I kind of lived and breathed it.”
Watson: “Legendary Skynet.”
Thomas: “Exactly. So anyone who grew up in that time, definitely this strikes a chord because we already saw it played out. So when we read it we say, ‘Yes, I know what this is gonna look like, and it’s dark.’ So I think that’s the one that — it gets to the very fiber of what it is to be a human. And I think anytime you get to that question, you’re talking about something of consequence.”
Watson: “Those are the best questions.”
Thomas: “Yes, those are the best questions. And the capabilities to almost build a human is mind-blowing. It’s fantastic in where you could use it to help society, to help our environment, to help kind of the big problems of our time. It’s terrifying existentially — like, what is a person? — but it’s also very scary in application about kind of how would we as humans maintain control of these systems that we built to function without us. I’m all for something other than humans driving cars because I just think the data probably bear out that an autonomous vehicle could do it better than a human. So in some ways I’m like bring it on. That’s what I would like to see is kind of little old ladies and the very aggressive drivers taken out of the loop.”
Thomas: “On the other hand, nothing is 100 percent secure. So what happens when things are manipulated? It’s an easy way to have a high-impact event happen. It’s not easy; the kid in the basement, hacker type is not gonna be able to do it. But a nation state? Yeah. Those are the kinds of things I worry about once you hand over certain aspects of the way we live to autonomy.”
We’ll return to the risks of autonomy a bit later. Because there are some concerns about the risks of cyberwarfare that we’re likely to encounter before robots completely reshape the global workforce.
We covered influence operations and propaganda work in last week’s episode. Here’s Adam Segal — who directs the Digital and Cyberspace Policy Program at the Council on Foreign Relations — with some of the things he’s a bit more concerned with as we look to the future.
Segal: “I think there’s been a lot of talk about deep fakes and since we’re talking about influence operations and the manipulation through machine learning and other tools of video and audio to either, again, to create disinformation, try to stir up social disruption, or to basically allow also people to say, ‘No, I didn’t say that. That’s a deep fake. It’s all fake.’ So I think people are worried about that as we move forward into the elections. I think other applications of AI to cyber attacks, either in just allowing for greater scale because you can automate, or perhaps providing some insight into targeting. Those I think are areas that people are most kind of focused on. I think looking probably farther out past the next couple of years, the impact of quantum information systems as the Chinese seem to be moving ahead at least on quantum communications, if there are breakthroughs on quantum computing, then that’s gonna have large impacts on encryption if we’re not prepared for them.”
Other experts aren’t quite so worried about the quantum realm. I called up Dmitri Alperovitch of the cybersecurity firm CrowdStrike. And here’s what he advised about the whole quantum technology debate.
Alperovitch: “I’m not as concerned about quantum in the cyber realm. Probably the biggest impact of quantum computing will be the ability to break some of the existing cryptographic algorithms like the RSA algorithm and the Diffie–Hellman algorithm, which are used in public key cryptography. But we now have algorithms that can replace that; they’re quantum-resistant; we are now working hard to standardize those algorithms and get them deployed. By the time quantum is gonna be here in significant fashion, I’m pretty confident we will have quantum resistant cryptography, so I’m less concerned about that. And in other areas, quantum is gonna be incredibly powerful, but it’s not gonna solve every problem; it’s not a panacea. Quantum computers are not general computing devices; you’re never gonna run Excel or Outlook on a quantum computer. It’s gonna be focused on solving very, very specific computationally hard problems like breaking cryptography and many optimization problems. But I don’t think it’s actually gonna have that huge of an effect on security. The one area that people have thought about using quantum computers to actually make things more secure is with quantum key distribution where you’re basically leveraging the properties of quantum physics to basically have unbreakable and untappable cryptography; but unfortunately that doesn’t really scale. It requires a direct-to-direct link where you can have photons going over that line — doesn’t really lend itself well to our internet-based systems where we go through different packet switching devices numerous times before we reach our destination. So I think even application of those systems is gonna be quite limited.”
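The RSA weakness Alperovitch describes comes down to one fact: the public key includes a modulus n = p × q, and anyone who can factor n can rebuild the private key. The toy Python sketch below (tiny primes, purely illustrative, not real cryptography) shows that relationship; on a sufficiently large quantum computer, Shor's algorithm would make the factoring step fast even for real key sizes.

```python
# Toy illustration (NOT real crypto): RSA's security rests on the
# difficulty of factoring n = p * q. Shor's algorithm on a quantum
# computer would make factoring easy, recovering the private key.

def toy_rsa_keypair(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent (Python 3.8+)
    return (n, e), d

def factor(n):
    # Classical trial division: feasible only at toy sizes. Shor's
    # algorithm does this in polynomial time on a quantum machine.
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f

(public, d) = toy_rsa_keypair(61, 53)
n, e = public
msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key

# An attacker who can factor n rebuilds the private key from scratch:
p, q = factor(n)
d_recovered = pow(e, -1, (p - 1) * (q - 1))
assert pow(cipher, d_recovered, n) == msg   # decryption succeeds
```

The quantum-resistant algorithms Alperovitch mentions avoid this by building on problems (such as lattice problems) that Shor's algorithm does not solve.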
5G and Huawei
The more immediate concern, the 50-meter target, as the U.S. military would say, is giving us all faster service with a new network infrastructure with 5G speeds. And that brings tensions between the U.S. and China into very sharp focus.
Segal: “The real issue is because of the nature of 5G, and so much data going back and forth between the core and the periphery, some of the base stations, and the constant need to push out new software and update the systems, even if you inspected the source code and the software, once version 2.0 or 3.0 came out, you would still not be assured that there wasn’t information being gathered…”
Information gathered by, for example, hackers and spies working for the ruling Chinese Communist Party, which experts like Segal refer to in shorthand as the CCP.
Segal, continued: “...So you have to trust the company, and the U.S. government has argued you can’t really trust Huawei because it’s connected to the CCP [Chinese Communist Party] and is under obligation to share that information. Huawei has said that is not true, and that they would never turn data over to the government, which is not a particularly reassuring kind of assurance given the way that we know the government and the CCP work in China. The other thread is disruption, that somehow Huawei would turn off the system in times of national crisis. And again, that is also possible; you have to trust the operator. So I think the U.S. has done the right thing about warning about the risk; I think the issue has been that we’ve tried to convince our friends and allies that they shouldn’t use Huawei, but we haven’t provided an alternative, right? The market is dominated by Huawei and ZTE, another Chinese company, and Nokia and Ericsson — European companies. So there’s no place for the U.S. to easily step in. So you have to either think about are we going to, how is the U.S. gonna help with that issue? Are there kind of security practices we want to develop in concert with our allies and partners? Do we want to think about investing in the next generation? Can you remediate the risk in other ways? Which, that part of the strategy, I think, has been less helpful.”
As for the Trump administration, there is a new or updated policy on the use of cyber tools and warfare and posture. But the White House isn’t talking about any of that publicly. House lawmakers even passed a bill just last week to finally see that update, though so far there’s been no resolution.
And we would, of course, love to learn more about how America is defending its citizens, infrastructure and allies from cyberwarfare today. But as you might imagine, they’re not saying a heckuva lot about it all. One official, however, did make an effort of a sort just last month in a conversation with my colleague Patrick Tucker.
The official was B. Edwin Wilson, and he’s Deputy Assistant Secretary of Defense for Cyber Policy. Here’s a bit of that conversation from the 2019 Defense One Tech Summit in Washington.
Wilson: “The way I think about it, when I look at different technologies, especially in our arena, is really we’re in a state of digital transformation. And so I think you can point to a whole series of technologies that independently are really surging. There’s, you know, there was a little bit of a discussion earlier — AI, quantum computing, we’ve got the whole world of autonomy, autonomous behavior in terms of cars and all of the above, large data analytics in terms of processing, et cetera, et cetera, I won’t go through the whole string — but when you bring those together, I think it’s really a two-edged sword in a lot of ways. It’s two-edged in terms of being able to defend yourself from a military perspective and provide more robustness, but it also presents challenges because others are using it for offensive [cyberwarfare] and we would do the same obviously for high-end warfare, is this digital transformation is at a pace that I don’t know that in history we’ve seen anything that would match the changes that are coming at us. And it’s both challenges and opportunities, no matter what walk of life you’re in — if you’re in business I think it’s presenting tremendous opportunity for productivity, efficiencies, et cetera. In the world that I live in day in and day out, that speaks to threats, and then what are we going to do about those threats. And so the digital transformation I would describe it as the challenge is one of the things that dominates our thought day in and day out. 
I wouldn’t sit that thought on any single one of those technologies; I think it’s the maturation of the ability to weave those together in solutions that at times — I would add 5G to that, you know 6G someday — it’s just so pervasive, the pace of those threats, the scale and scope of those threats and quickly the sophistication of those threats (and opportunities) but threats in our world present a unique challenge that I’m just not — I can’t come up with a historical analogy and if anybody in the audience has it, I’m all ears because there just has not been a time in history from a national security perspective that we’ve seen this kind of a threat and challenge, but also opportunity.”
Tucker: “In terms of adversaries or, if you will, potential competitors that are able to use and leverage those current trends in the exponential rate of information technology, where do you spend most of your time — China, right?”
Wilson: “China is the bellwether in this case. Russia as well. Both have the technology wherewithal as well as the capacity to put these technologies to use in a significant way.”
And in case you’ve been living in a cave lately, you’ve heard about the debate over Chinese tech firm Huawei. They’re on the forefront of the 5G revolution. But their products have also attracted enormous suspicion from U.S. officials as part of a larger tech war with China that will unfold with more drama in the years to come.
More drama because Huawei devices are rotten with security risks and so-called backdoors that allow cyber specialists access we ordinary consumers would probably be made quite uncomfortable discovering — and backdoors you can bet U.S. government personnel are deeply disturbed by.
Matt Wyckhouse is co-founder and CEO of Finite State. His firm just completed a fairly damning analysis of Huawei equipment in June. And here he is unpacking his firm’s very own quite dramatic findings.
Wyckhouse: “What we did is we wanted to get a broader picture of the security risks of Huawei devices than what’s been done to date...”
And that’s one of the points Adam Segal was making. The U.S. government has alleged risk in Huawei equipment, but it hadn’t put out very robust proof to back that claim up. That’s where Finite State’s report comes in, as Matt explains.
Wyckhouse, continued: “...There’s been effectively a gap in the analysis where you know policymakers on one side are saying we just assume Huawei has backdoor access to the devices that they deploy and Huawei is saying absolutely not, we don’t operate the infrastructure so we don’t have access to these devices. And so what we did at Finite State was try to understand the overall risk of each device, and we did that very comprehensively. We looked at over 500 different products, almost 10,000 different firmware images for those products and looked for trends across that entire dataset. So we looked at nine different dimensions of risk, a few of those that stand out that are important are: Are there backdoor credentials baked into the devices? Are there accounts that might be undocumented that exist in the device that would allow someone to log into the device with some sort of an administrative privilege, or just some sort of access into that product which could facilitate additional access and exploitation of that device. We looked at what are the known vulnerabilities in the software that’s being used inside of the device — are they using particularly vulnerable versions of third-party libraries? That’s actually where a lot of vulnerabilities come from in embedded devices like network equipment and IoT devices: they might be using an old or vulnerable third-party library. We look at that. We also assess different types of risks associated with the software engineering practices and security engineering practices of the company. So we look for evidence to see are the engineers there making good security decisions? And one way we looked at that was analyzing whether they were using safe functions versus unsafe functions. And oftentimes in libraries that are used, you can choose to use a particular function to, for example, copy memory from one location to another, or copy a string from one location to another. 
And it's a fairly straightforward substitution to use a safe version of that function, which would prevent a buffer overflow. What we saw with Huawei devices was that most of the time they were using the unsafe versions. And when we looked at the credentials, and at possible backdoors baked into cryptographic material, more than half of the devices had some sort of a possible backdoor. And when we looked at the known vulnerabilities we saw that there were more than 100 different known vulnerabilities or known CVEs in every device, and those are all very high numbers.”
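To make two of those audit dimensions concrete, here is a hypothetical Python sketch of what checks like these could look like: counting call sites that use unsafe C string functions instead of their bounded equivalents, and flagging strings that resemble hardcoded credentials. The symbol names, credential hints, and scoring are illustrative assumptions of mine, not Finite State's actual methodology.

```python
# Hypothetical sketch of two firmware-audit checks of the kind described
# above. Names and heuristics are illustrative, not Finite State's tooling.

UNSAFE_TO_SAFE = {
    "strcpy": "strncpy",    # unbounded string copy -> bounded copy
    "sprintf": "snprintf",  # unbounded formatting -> bounded
    "gets": "fgets",        # no length check at all
}

DEFAULT_CRED_HINTS = ("admin:admin", "root:root", "password=")

def unsafe_call_ratio(imported_symbols):
    """Fraction of copy/format call sites using the unsafe variant."""
    unsafe = sum(1 for s in imported_symbols if s in UNSAFE_TO_SAFE)
    safe = sum(1 for s in imported_symbols if s in UNSAFE_TO_SAFE.values())
    total = unsafe + safe
    return unsafe / total if total else 0.0

def find_credential_hints(firmware_strings):
    """Flag strings that look like hardcoded login credentials."""
    return [s for s in firmware_strings
            if any(h in s.lower() for h in DEFAULT_CRED_HINTS)]

# Example run over made-up extracted data:
symbols = ["strcpy", "strcpy", "memcpy", "strncpy", "sprintf"]
strings = ["/etc/passwd", "user=admin:admin", "version 2.1"]
print(unsafe_call_ratio(symbols))       # 3 unsafe vs. 1 safe -> 0.75
print(find_credential_hints(strings))   # ['user=admin:admin']
```

In a real audit the symbols and strings would come from unpacking a firmware image and parsing its binaries; aggregating scores like these across thousands of images is what lets trends emerge across a whole product line.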
You might be wondering what an enormous and influential company like Huawei thought of Matt’s report.
Wyckhouse: “Huawei has responded. They initially — they have responded in a very fragmented fashion, let’s say. So their U.S. chief security officer Andy Purdy originally came out right after the report and said this is exactly the type of transparency that the telecommunications industry needs, and if the U.S. were to implement this type of analysis across the board, everyone would be more secure. Effectively, I’m paraphrasing here. Then the Huawei product security incident response team had a fairly barbed response that they shot back at us looking for holes in the analysis that we did and they believed that they found some. They accused us of using outdated firmware, which was untrue. More than 95 percent of all the firmware we used was the latest version as of April 2019. And they accused us of using flawed analysis methods, which is also untrue. They wrote off some of the backdoors that we found as not remotely exploitable, but the point is that they’re there and they can facilitate privilege escalation and all sorts of other parts of the attack chain. And there’s no legitimate reason that they need to be there in the first place. So we’ve had some pushback and we’ve actually responded to that as well, and you can see on our website we go through the pushback that we’ve received and explain why as part of a secure software development or hardware development practice, you shouldn’t have these things in there. So there has been some response. But we stand by it. The bottom line is the findings that we came up with are quite consistent with the other public analysis that’s been done, which is the UK’s HCSEC [Huawei Cyber Security Evaluation Centre] analysis, and with our report we can see that two completely independent sources have found that the software development practices at Huawei are far below industry standards from a security standpoint and a quality standpoint. We both are independently saying there are high numbers of vulnerabilities in these devices. 
We are going out there and saying there are trivially-exploitable vulnerabilities in these devices that can facilitate access. Some of those are as simple as knowing what the password is to a backdoor account named Huawei on that device. And that account is oftentimes undocumented. So there are serious security issues here and it’s very hard to push back on this analysis, and that’s why we did it the way we did it. We looked at this across the board — this isn’t just one or two Huawei devices that have these problems; this is consistent across their entire product line.”
Alperovitch: “I think people are rightly calling the internet the fifth domain of warfare…”
Here’s Dmitri Alperovitch again.
Alperovitch, continued: “...and in any domain of warfare, you do not want to rely on your potential adversary for key weapon systems and key capabilities. So the Huawei debate, for me, is actually a little bit misguided because I think the focus on a singular company is not helpful, because it really is not about the company. It is about the countries that are adversaries. And we do not buy tanks and aircraft carriers from Russia, China, Iran or North Korea for very good reasons. So I do not know why we would consider buying key infrastructure in the digital domain from those countries regardless of who the company is because obviously the intelligence services and the military services of those countries would do everything in their power to plant backdoors and try to degrade those devices in a time of conflict. And we can just not trust it just like we would not trust buying military equipment from them.”
So what are the alternatives to using Huawei equipment? Are there any? I put the question to Matt of Finite State.
Wyckhouse: “Yeah, that’s a really good question, and there’s a very complex set of policy issues at hand here for lawmakers around the world. The problem is there’s been a bit of a market failure in the 5G space, in particular. And that’s where there’s the most concern right now because 5G is really gonna be critical infrastructure with a lot of services of national and international strategic value riding on top of it as we become more dependent on these faster networks. The challenge here is that there are only three or four companies that make 5G equipment. And Huawei has effectively taken the lead in terms of features and time to market. And so because of that market failure that’s happened over the last several years, companies that want to implement 5G are in a bit of a bind: you have Huawei that’s offering a low price, a high degree of feature options, and China often incentivizes that beyond just the product at least appearing to be the best. So in short: very, very complicated. We need to figure out how to incentivize other providers in the space and increase competition. Part of that I think is making sure that everyone understands the total cost of going with something that might appear on the surface to have more features and might be first to market but really is possibly lower quality. It could be lower quality in terms of maintainability, but also in terms of security. If you have something that has a weaker security posture and you go and implement that, over time you’re gonna have to spend a lot more on security controls for that network. And if you have these devices getting hacked, you have to spend more on incident response and recovery and dealing with downtime. And so security needs to be a requirement. At Finite State, we really believe that the answer here — and I agree with Dmitri that it’s not necessarily about one company — it’s that we should screen everything that’s going into critical infrastructure. 
And if we increase that transparency and we set a minimum bar for the security of these devices no matter where they come from, it’s gonna start to equalize the market because those who are skimping on security won’t necessarily be able to win anymore. They’re gonna have to invest there, and that levels the playing field.”
The next election
As we all wait for that 5G market to diversify and give us alternatives that don’t come with the baggage of a centralized state like China, we here in the U.S. have a presidential election to prepare for.
And while the news from this beat is predictably full of noise and divisiveness and increasingly overt racism from the president of the United States and those attending his campaign rallies in, for example, Greenville, N.C., this week, cybersecurity analysts like Paul Gagliardi are among the few voices sounding alarm bells over the voting infrastructure ahead of November 2020. And not just the systems our ballots go into, but also the cyber hygiene of the major parties themselves. It’s all still quite bad, according to a recent analysis from Gagliardi’s employer, SecurityScorecard. Here’s Paul on what they discovered.
Gagliardi: “So we came out with a report recently where we focused on the security of the parties themselves. Not necessarily the voting systems or the actual voting implementation. What we found domestically was that we do think that the Republicans and Democrats have certainly improved since the 2016 election; I would say that their hygiene is not matching what I witness in the DOD or financial sectors. So our system finds cyber hygiene issues or factors that normally we wouldn’t want to see in a completely buttoned-up company.”
You might think it shouldn’t exactly be like this — poor cyber hygiene now three years since the Russian influence operation of the last U.S. presidential election. But perhaps we’re just overestimating what we Americans have in fact learned from the past.
These political parties, Paul told me, don’t have the resources for this kind of defensive posture that you might expect them to have.
Gagliardi: “The Bank of America CISO came out and said we have a blank check for our cybersecurity defenses. You know, Northrop Grumman and Boeing have significant resourcing to defend their IP and network. The Democratic party probably does not have that significant of a budget. The smaller parties certainly do not. We observed — we’re not gonna name the party, but it’s not the Democrats or Republicans — they had exposed a web server where you could type in someone’s name and out would pop their address, date of birth, full name, and it was seemingly some voter validation form. This was — we found that about a month ago. We called them, disclosed it to them, they fixed it within 12 hours. So we were pleased with the turnaround of the fix. That is a rather glaring problem. And if we could find that within a few hours of work, I think we could extrapolate that a motivated attacker would have some success.”
Watson: “Are you noticing a little bit smarter behavior across the board, since the 2016 election?”
Gagliardi: “I think we are. I think SecurityScorecard’s perspective on this is that we’ve seen a large attack vector through vendor risk or vendor ecosystem. So if you want to get through to a large corporation or company, sometimes it’s easier to go through one of their vendors or suppliers or contractors. That attack has repeated itself and in terms of defending yourself against that type of attack, you do have to stand up entire third-party-risk or vendor-risk management teams. You really have to no longer look at just defending your own little castle; you have to assume that in a world of interconnected information that your information is actually being handled or controlled by other vendors.”
Alperovitch: “I think election interference is here to stay.”
Dmitri Alperovitch of CrowdStrike again.
Alperovitch, continued: “And a variety of threat actors, both domestic and foreign, will likely play in this space. Obviously we have seen what can be done in this domain in the past, both from an influence operation perspective — leveraging social media and trolls and bots and the like — to try to impact the public opinion. But also probably the most impactfully the hack and dump schemes that we have seen in the past targeting a number of different countries over the years. Those things will likely continue. The big concern of course is around the election infrastructure itself — the voting rolls, the vote-tallying systems, and the reporting systems, everything that’s actually involved in doing an election — we have not yet seen significant attempts to interfere with that process in the past; but that, I’m sure, is coming.”
I asked whether we should just go to an all-paper ballot system for our elections. Here’s one answer.
Gagliardi: “I think it’s very telling when cybersecurity experts, myself included, are suggesting that virtual voting booths should not be encouraged.”
That’s Paul Gagliardi again.
Gagliardi, continued: “... When you have people whose job it is to automate things and bring them into the IT world suggesting that we need to revert to hand-counted paper ballots, that’s obviously very telling.”
Watson: “Is that where you are?”
Gagliardi: “I would certainly suggest that as well, yeah.”
Also ahead in the future: all those devices sharing your router’s wifi signal at home are probably going to be silently hijacked in all kinds of ways we can hardly imagine at this point, Matt Wyckhouse told me. That’s one of the things that keeps him up at night.
Wyckhouse: “At Finite State, and me personally, I’m very interested in how the Internet of Things impacts cybersecurity, and we are acutely focused on that. The big trend we see is that we’ve moved from an era where cybersecurity was all about information loss: intellectual property was being stolen, medical records and credit card numbers were being stolen, and that’s how attackers were either making money or, if it was a nation state, achieving their strategic objectives. With the growth of the internet of things there’s also been a simultaneous growth in ransomware, and those two things converge: attackers are finding they can win either economically or strategically by causing damage, or threatening to cause damage, all the way up to the possibility of loss of life. So cybersecurity is not just information security anymore; it also includes the safety, resilience, and reliability of the systems that are running more and more of our daily lives. The internet of things is where the digital world and the physical world overlap; and where they overlap right now, and there are vulnerabilities, attackers are starting to exploit those things and hold them for ransom, or use them to achieve mission objectives. That is probably the most concerning trend: attackers are moving from stealing to harming. And harming has real-world effects when we’re talking about systems connected to physical devices. Medical devices are a huge area of vulnerability right now. You have devices built over long periods of time, with long supply chains, in a highly regulated environment, that have a lot of vulnerabilities in them; at some points they’re keeping patients alive, and they’re sitting on networks that might have other types of devices on them. We actually spend quite a bit of time looking at the healthcare industry and helping hospitals.”
The next generation
Thomas: “So the great thing about where all these things are going, in a scary way, is that they’re also going in ways that can protect us…”
That’s Dawn Thomas again.
Thomas, continued: “...and we just need to make sure that we (a) know we need to be protected, (b) know what’s worth protecting, and then (c), have a way to do it.”
How can we do those things better in the future? For Dawn and others I spoke with, awareness of the risks of our very cyber lives is becoming more commonplace among America’s youth. And that’s a promising development. But to make future generations a bit smarter and a bit more skeptical, I suggested to Dawn that perhaps we ought to teach subjects like symbolic logic at younger ages in more high schools across America. Which is a start, she said.
Thomas: “Logic is needed no matter what. And I would expand it even further; we don’t even need to put cyber in there. We just need a savviness about information. We need to better understand where we’re getting our information from, and where it might be coming from. And to always question, to always validate. Not to take things at face value, not to click on that link in your email. That should be built in; and the only way I know to build it in is to do it from when you’re young. You know, one of the folks we work with at Berkeley is always trying to emphasize the upside, right? So I want to end on that note: with all this advancement comes amazing opportunity. There’s amazing opportunity to answer the questions of life right now. How do we protect our planet? How do we feed our people? How do we keep peace between nations? How do we make sure that even the most vulnerable among us are protected? We could use the data and the tools and all these things we’re talking about in a very negative way, because it can get scary; but we can also use them to better society. We just need to be doing a lot more protecting, thinking, learning, and teaching to make sure that we’re opening opportunities and closing the most major of the vulnerabilities.”
Dmitri of CrowdStrike is also not all gloom and doom about the risks that lie ahead; he sees awareness and defensive measures getting better. Here he is with a bit of optimism about some of the trends he’s seen lately.
Alperovitch: “I coined this phrase that has been repeated many times, and I’m sure you’ve heard it: that there are two types of companies, those that have been hacked and know it, and those that have been hacked but don’t know it — with the implication being, of course, that everyone’s been hacked. But I’ve recently been thinking a lot about it, and I’ve been talking to Rob Knake and Dick Clarke, who just wrote a book on the fifth domain where they quote me on this, and I’ve since amended that phrase. I now believe there’s a third type of company, which is being targeted continuously just like the other two types, but is actually able to resist those attacks. I’ve seen in my day-to-day job here at CrowdStrike how you can actually defend an organization against persistent adversaries, nation states, criminal groups, and it all comes down to speed. You have to be faster than the adversary, you have to assume that they’re inside, you have to find them quickly, and you have to eject them before they accomplish their objective. And there are companies doing this every single day out there. It is possible. So I do want to end on an optimistic note that not everything is bad out there. We’re actually learning how to defend ourselves, even against very dedicated threats.”
Do you know teenagers fretting about what to do with their future? They’re probably better positioned for a career in cybersecurity than you or I were when we walked across the stage to get our diplomas. If the young folks in your life are asking you what to pursue, consider this input from Matt Wyckhouse of Finite State, who describes a team that, to my ears, begins to sound a bit like a real-life version of “the Avengers.”
Wyckhouse: “I mean, there’s a massive talent shortage in cybersecurity. So anyone that’s interested, I would highly encourage them to join us and help protect the world and keep our future safe. There are opportunities for people with backgrounds in cybersecurity specifically, or computer science, but cybersecurity is a field so complex that we need people of all backgrounds — from public policy, to law, to the social sciences and psychology — it all comes together in this very complex dynamic. There’s room for everyone, and we really need to embrace that as an industry. The point is, if you’re young and you’re interested in cybersecurity, I would highly encourage you to go after it, because every single company is hiring for it right now.”
As we finished production on this episode, the Trump administration’s Director of National Intelligence Dan Coats announced he’d just created a new job called “election threats executive.”
Shelby Pierson is taking the new post announced Friday. She’s been in the intelligence community for more than two decades, and most recently acted as crisis manager for election security during the 2018 midterm elections — so maybe moving offices this weekend won’t be much of an issue.
Coats also said he’s ordered America’s other spy agencies — including the FBI and the CIA — to name similar chiefs of election security.
That’s it for us this week.
We’d love to hear what you think as we pivot to the past with our final episode in this series next week. Email us at firstname.lastname@example.org. Or leave us a voice mail at 731-617-9124.
Thanks for listening, everybody. And we’ll see you again next week.