Governing Cyberspace

Adam Segal, the Ira A. Lipman chair in emerging technologies and national security and director of the Digital and Cyberspace Policy Program at CFR, discusses the increasingly contentious geopolitics of cyberspace and cybersecurity policies, as part of CFR’s Academic Conference Call series.

Learn more about CFR’s resources for the classroom at CFR Campus.

Speakers

Adam Segal

Ira A. Lipman Chair in Emerging Technologies and National Security and Director of the Digital and Cyberspace Policy Program, Council on Foreign Relations

Presiders

Maria Casa

Director, National Program and Outreach Administration

CASA: Good afternoon from New York and welcome to the CFR Academic Conference Call Series. I am Maria Casa, director for the national program and outreach here at CFR. Thank you all for joining us today.

This call is on the record, and the audio file and transcript will be available on our website, CFR.org, within the next few days if you would like to share it with your colleagues or classmates.

We are delighted to have Adam Segal with us today to discuss the geopolitics of cyberspace and cybersecurity policies. Dr. Segal is CFR's Ira A. Lipman Chair in Emerging Technologies and National Security and director of the Digital and Cyberspace Policy Program at the Council. Dr. Segal was also the project director for the CFR-sponsored Independent Task Force Report "Defending an Open, Global, Secure, and Resilient Internet." Prior to joining CFR, he was an arms control analyst for the China Project at the Union of Concerned Scientists. Dr. Segal has been a visiting scholar at the Massachusetts Institute of Technology's Center for International Studies, the Shanghai Academy of Social Sciences, and Tsinghua University in Beijing. Dr. Segal's latest book is "The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age." He writes for the CFR blog Net Politics and tweets at @ADSChina.

Welcome, Adam. Thank you very much for being with us today.

SEGAL: My pleasure to be here.

CASA: Your book, The Hacked World Order, came out just about a year ago.

Can you tell us a little bit about the premise of the book and what, if anything, has changed in terms of geopolitics and cybersecurity since its publication?

SEGAL: Sure. I will kind of lay out some of the larger arguments and then where I might have gotten things slightly wrong.

And so what the book tries to do is describe some of the patterns of how states are behaving in cyberspace. One of the basic premises of the book is that we expected cyberspace to be an area that radically empowered the individual or the non-state actor. We had this idea of super hackers or terrorist groups that could take down the grid, but what we've seen so far, in fact, is that nation-states have turned out to be very flexible and agile players. The most powerful nation-states are the most powerful players in cyberspace, and what they've done is reasserted, or tried to reassert, their control, their power, their sovereignty in a space where we thought sovereignty and territoriality were not going to play a large role.

In thinking about what states have done, I think it is useful to consider five distinctions along which you can sort states onto one side or the other. The first distinction is, do they see the cyber threat as primarily domestic, that is internal, or external? Here you can see a real difference between countries that are worried about the flow of information and content as a threat to regime stability or domestic legitimacy, and those that are more concerned about attacks from hackers or nation-states on their power grids or computer networks. This distinction is probably most clearly illustrated in the difference between what the Chinese and Russians call information security, which covers both threats to networks and the flow of information, and what the U.S. and its allies tend to call cybersecurity. So how does a country think about the threat? Is it a mix of internal and external, or is it primarily internal or primarily external?

The second distinction is, how do you think about using cyber weapons? And here we can see that some countries tend to think of cyber weapons primarily as another instrument of military conflict: they are going to be tightly controlled, and they are going to be developed by the military. So, for example, the U.S. has come up with two defense strategies that address cyberspace, has expanded Cyber Command from about a thousand people to about 6,000 people, and has so far only admitted to using cyber weapons once, against the so-called Islamic State. So cyber weapons seem to be fairly tightly controlled.

In contrast, we see countries, again like Russia and China, that have developed a much more expansive view of cyber conflict. They tend to use non-state actors, criminal groups, or other proxies, and their view of cyber conflict extends to information operations and information warfare.

The third distinction is, how do you create influence and how do you think about diplomacy in cyberspace? Here we see the United States and many of its allies thinking in terms of very precise narratives or counter-narratives: empowering the right actors, coming up with the right kind of Twitter or Facebook storylines. On the other side, what we have seen, particularly coming out of Russia, is an argument or a belief that you can simply drown out most of the information with disinformation. So you use botnets, which are large networks of controlled computers, to push out as much disinformation as possible and drown out other stories.
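To make the mechanics of that drowning-out concrete, here is a minimal, illustrative sketch in Python. The account counts and posting rates are invented for illustration only; the point is simply that a small number of automated accounts posting at high volume can swamp organic discussion.

```python
# Illustrative sketch with invented numbers: a small botnet posting at high
# volume can outnumber organic posts on a topic many times over.

ORGANIC_ACCOUNTS = 10_000      # real users discussing a story
ORGANIC_POSTS_PER_DAY = 0.5    # average posts per real user per day

BOT_ACCOUNTS = 500             # automated accounts controlled by one operator
BOT_POSTS_PER_DAY = 200        # each bot reposts disinformation constantly

organic_volume = ORGANIC_ACCOUNTS * ORGANIC_POSTS_PER_DAY    # 5,000 posts/day
bot_volume = BOT_ACCOUNTS * BOT_POSTS_PER_DAY                # 100,000 posts/day

bot_share = bot_volume / (bot_volume + organic_volume)
print(f"Organic posts per day: {organic_volume:,.0f}")
print(f"Bot posts per day:     {bot_volume:,.0f}")
print(f"Share of the feed that is bot content: {bot_share:.0%}")  # about 95%
```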

Another distinction is how you view innovation, especially in information technology, and here we can see roughly three visions: the U.S. vision, based in Silicon Valley, which is bottom-up and driven by private companies and universities; a Chinese version, which is very techno-nationalist, driven from the top, and very focused on reducing technology dependence on the West and other countries; and a European vision that is focused on creating a common market, a single digital market, and is very attentive to privacy and other concerns of individuals.

And the fifth distinction is, how do you think about data and its relationship to citizens and the state? Here there is a kind of Lockean view, in which the debate over data happens between citizens and states and often involves transparency and accountability in trying to balance privacy and national security, and a more Hobbesian version that looks at how data can strengthen the state. And here you see in Russia, China, and other places a discussion about data as the new oil, and how you control it and create it.

So those are the five categories that I think really help us think about how states act in this space. I'll just wrap up by pointing to three things I talk about in the book where I perhaps got the nuance or the balance wrong. The first one was that, as I was finishing the book, President Xi of China and President Obama had just signed an agreement in which the two sides said that neither would knowingly support or tolerate the commercial theft of intellectual property for company gain; so no cyber-enabled industrial espionage, which had been a long-standing concern of the United States, and there had been a long campaign of putting increasing pressure on China. I didn't expect the Chinese to actually follow through with that agreement, but it turns out that, about a year later, most of the public evidence is that they are following it and that attacks on private companies have gone down. There is still espionage conducted against political or military targets, but that is the type of cyber espionage the U.S. government has said is a legitimate form of action.

The second area is the large chapter in the book about Russian uses of information operations against its neighbors. In the book I make it very clear that I am most concerned about cyberattacks that undermine legitimacy and trust in complex institutions, not the kind of digital or cyber Pearl Harbor a lot of people have talked about, one that creates widespread destruction and mass dislocation. I described how the Russians had used those operations against Estonia, in Georgia, and in Crimea and Ukraine, but I did not think Russia was likely to use them against the United States. In fact, that is what we saw happen from June of 2016 throughout the election, and now we see the same operations being used against France, Germany, and the Netherlands as they go through their elections.

And then, finally, one of the big concerns I had in the book for U.S. cyber power moving forward was the growing gap between Washington and Silicon Valley in how they see their interests. A lot of this emerged out of the Snowden disclosures, where it became clear that U.S. companies were often compelled by U.S. law to cooperate with U.S. intelligence gathering, and so the companies increasingly tried to distance themselves, primarily through the use of end-to-end encryption, as a way of saying, we cannot cooperate with the U.S. government even if we wanted to. I was probably more optimistic than I should have been about closing that gap; it seems to have grown larger in the first hundred days of the Trump administration, driven by concerns over the executive order on immigration, the budget for science and technology, and other areas.

So those are the three areas that perhaps I got wrong in the book, and I’m happy to take any questions now.

CASA: Yes. Thank you for the overview. (Off mic)—students on the call for questions.

OPERATOR: At this time we will open the floor for questions.

(Gives queuing instructions.)

CASA: Adam, while people are getting their questions in order, can I just follow up on something that you mentioned? You touched on the term legitimate action, and I was wondering if you could draw a line for us between what would be considered legitimate and illegitimate in terms of cyber?

SEGAL: Well, I think the United States has been trying very hard to define what it considers legitimate behavior in cyberspace. So the distinction we were trying to make with the Chinese was between the theft of secrets for political and military reasons, which we have basically said is legitimate, because all countries spy on each other and the United States is not going to stop doing that, versus the theft of secrets, in particular industrial secrets, to help companies grow, right, for competitive reasons. That was the argument the United States made, and that's why the response, for example, to the hacking of companies was much different from the response to the hacking of the Office of Personnel Management. In that hack, it seems as if China-based hackers stole about 22 million records of federal employees, including something called the SF-86, which is a form you fill out that details, you know, whether you've had an adulterous affair, a drug problem, whatever other personal information is in there, and the U.S. government's response to that hacking was very muted. In fact, Director of National Intelligence Jim Clapper basically said, well, that's the kind of thing we would go after ourselves, and you kind of have to tip your hat to the Chinese for stealing it. So that's one of the distinctions the U.S. has tried to make.

Then, through a process at the U.N. called the Group of Governmental Experts on information technology, there has been a discussion about whether we can start defining some of the rules of the road for how you would use cyber capabilities in conflict. Do, for example, the laws of armed conflict that apply in the real world, the kinetic world, also apply in cyberspace?

CASA: Thank you. All right, we’re ready for our first question.

OPERATOR: And as a reminder, if you’d like to ask a question, please press the star key, followed by the one key.

Our first question comes from Northeastern University.

Q: Hi. Michael Trudeau from Northeastern University.

Recently, in the news, I've noticed that there was an attack by The Shadow Brokers. They have some ties to the NSA and the keys that the NSA essentially holds. You spoke about undermining institutions as opposed to a Pearl Harbor-style attack. Would this qualify, or is this something different? Or are you even aware of it?

SEGAL: Yeah, so The Shadow Brokers have been releasing NSA (National Security Agency) and CIA hacking tools on the web. They started in the summer. Many people suspect they are probably Russian hackers, and what their goal is in doing it is uncertain. When they first started, it was right after the United States had attributed the hack on the DNC to Russia, and some people interpreted it as a kind of message from Russia to the U.S. government, saying, you know, if you want to go down this road, we can also play; you have vulnerabilities and we're willing to expose them. The most recent release of information seems to have come after President Trump ordered the missile strike on Syria. Again, some people have said they were trying to undermine some of President Trump's domestic support, since his supporters tend to be more non-interventionist.

So what their ultimate goal is is hard to say. It does seem, certainly, to undermine the NSA, the CIA, and the intelligence agencies. I mean, it is hard to create these types of weapons. Once they're revealed, they're no good, right; you just patch them, and there's not an unlimited supply of them. And the releases also further undermine the trust between the tech companies and the government, because the government has said that any time we find a vulnerability, our default position is to tell the companies so they can patch it and make us more secure, and there is supposed to be an internal process, called the VEP, the Vulnerabilities Equities Process, where the government discusses when it is going to reveal those vulnerabilities. But these types of releases make it seem as if the U.S. government is sitting on vulnerabilities for quite a while before it decides to let the companies know, making the companies, and all of us as users, perhaps less secure.

Q: Thank you.

CASA: Thank you.

Next question, please.

OPERATOR: Our next question comes from Franklin & Marshall College.

Q: Hi. Thank you, Adam.

I have a question, just briefly, about the House discussion draft that Tom Graves put out, which is being referred to as the hack-back discussion draft. I'm wondering what your sense is of the development of more aggressive active defense, that is, formalizing active defense for private industry, both in the United States and globally?

SEGAL: Yeah, so for those of you who are not familiar, hacking back is basically the idea that companies themselves should, in some cases, have the ability to go after their attackers: the U.S. government seems unable to defend the private sector from most types of cyberattacks, there is a lot of capacity in the private sector, and so the question is whether companies should be able to defend themselves. Right now that would be illegal under the Computer Fraud and Abuse Act, so people have suggested that we should begin to think about how you would change the Computer Fraud and Abuse Act to allow those companies to take more active defense measures, or to hack back. The historical analogy people like is letters of marque: during the age of piracy, you would empower certain private actors to go after the pirates.

You know, this debate comes back, from what I can tell, every three to five years. I don't think it's a particularly good idea. It's not clear to me how you do this without creating a huge amount of escalation potential, and I'm particularly worried about signaling and escalation control during a crisis. You can imagine a situation where, for example, the U.S. and China are having some standoff in the South China Sea or the Taiwan Strait, and coincidentally some U.S. company is hacking back because Chinese hackers had gone after it, and in doing so it accidentally takes down a power network or some other network that is very important to the Chinese. Then we have confusion about whether the company is acting under the auspices of the U.S. government or acting independently, and about how we could convince the Chinese it was not state-directed. So my concern is mainly from a national security and foreign policy perspective: this is a space that is already too noisy, it is already difficult to signal your intentions to potential adversaries, and hacking back is unlikely to be all that effective while making those problems worse.

Q: Now, would you have any suggestions for how we might resolve these issues? What would be the fora—(off mic)?

SEGAL: Yeah, I mean, internationally the rumors are that you can find companies to do it for you; Israeli companies seem to be fairly active in this space, at least according to those rumors. There have been discussions about allowing only a certain number of companies to do this: there would be a fairly high bar, they would have to operate under some type of authority, and they would bear liability if they caused a problem. But I think this should remain a national security concern, and it should stay with national security actors. So the responsibility of the private sector is going to be to increase defense and resilience, not to work on hacking-back measures.

Q: Thank you.

OPERATOR: Now, as a reminder, if you would like to ask a question, please press the star key, followed by the one key on your touchtone phone now.

Our next question comes from the University of Utah.

Q: Hello, this is Tobias Hofmann from the University of Utah.

One of my students has the following question: How can we effectively counteract Russia's information warfare campaign in the U.S. and the West in general? Thank you.

SEGAL: Not an easy answer, because part of the reason the information operations are effective is that we have widespread and growing questioning of mainstream institutions in the United States that has nothing to do with cyber operations. There has been a general decline in trust in the mainstream media, other institutions, and expertise, and what the Russians did was take advantage of that. So there aren't a lot of cyber policy responses.

Of course, there is the traditional response, which is that you just have to better defend the networks. Political organizations have to be ready and cognizant that they are going to be targets. They have to be aware of what type of information they have on their networks and figure out what they need to defend, but that is the type of cyber defense advice you would give to anybody.

On the fake news, rumor, and information operations side, there has been some discussion about technological solutions; Facebook and others are experimenting with systems that would tag fake news before you could share it. But I think those are only going to go so far, and it's going to be a very hard thing for us to address, because it is really a wider issue about domestic politics and how we maintain a shared narrative about facts and truth, and that sits outside of a cyber policy framework.

CASA: Next question, please.

OPERATOR: And our next question comes from the University of Southern Mississippi.

Q: Hi. Thank you for talking with us today. Some developing countries, such as India, are using biometrics because they do not have Social Security numbers or safe banking systems. When, if ever, do you think the United States will be using biometrics for our computers and other technological safety needs?

SEGAL: Yeah, so there's already some movement toward that, right; you can open your iPhone now with your fingerprint, and many people have talked about replacing passwords, which are fundamentally unsafe, with other types of biometric solutions. I don't think we're ever going to move fully toward that. As you brought up with the India case and the UID, the unique ID, they are discovering that there are security problems even when relying on biometrics. There is a German hacking collective that has already gotten into a member of parliament's iPhone by taking a picture of her finger, blowing it up, and cloning her fingerprint from it. So biometrics is not going to be foolproof, just like anything else. I suspect we will continue to have a mixture of passwords, biometrics, and other ways of trying to develop those trusted IDs.
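As a side note on the password half of that mix, the following is a minimal sketch, using only Python's standard library, of how a password can be stored and verified as a salted hash rather than in plaintext. The example password is made up, and the second factor mentioned in the comment (a fingerprint or hardware token) is only a placeholder, not an implementation.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

# Hypothetical usage: in practice the password would come from a login form,
# and a fingerprint or hardware token would be checked as a separate factor.
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```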

Q: Thank you.

CASA: Adam, how could the U.S. government collaborate with Silicon Valley to protect internet and network vulnerabilities from foreign governments?

SEGAL: Well, the primary way so far has been what's called information sharing, right: if I see an attack, it makes everyone more secure if that information gets out as quickly as possible, so the attacker can't use the same vulnerability against everybody. We had a long debate about that in Congress because there were some privacy concerns and some liability concerns, but there was fairly widespread support for it, and finally a bill passed that allowed for it. So that is now happening at a faster pace and a greater scale. It's still not perfect, but it's one of the things that has to happen.
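What gets shared in practice is usually a structured indicator of compromise that other defenders can feed into their own firewalls and mail filters. Below is a simplified, hypothetical sketch of such an indicator as JSON; real exchanges typically use richer standards such as STIX/TAXII, and every value here is an invented placeholder.

```python
import json
from datetime import datetime, timezone

# Hypothetical indicator of compromise: the domain, hash, and IP address are
# placeholders (documentation-reserved values), not real malicious infrastructure.
indicator = {
    "type": "indicator",
    "shared_by": "example-company-security-team",
    "created": datetime.now(timezone.utc).isoformat(),
    "description": "Phishing campaign delivering credential-stealing malware",
    "observables": {
        "sender_domain": "invoice-update.example",
        "payload_sha256": "0" * 64,   # placeholder file hash
        "c2_ip": "192.0.2.10",        # TEST-NET documentation address
    },
    "recommended_action": "Block the domain and IP; alert on the file hash",
}

# A recipient ingests the JSON and pushes the observables into its own
# defenses, so the same attack cannot be reused against it.
print(json.dumps(indicator, indent=2))
```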

The other way, of course, is this debate over encryption. Encryption is used in many parts of the technology chain, from the padlock we see in the web browser to the end-to-end encryption we have in WhatsApp and on our phones. The issue has been, as more and more devices and services become encrypted, and terrorists or lone wolves and others use them to plan attacks, how do you strike the balance between privacy and security? Here there has been very little common ground. The companies and the technologists have basically argued that you can't weaken encryption without weakening it for everyone, and the companies say it is impossible to build what's called a backdoor, a way to provide access without weakening security for everybody. The U.S. government has basically said, no, we think the technologists can come up with it, and there's no reason my cellphone should be some separate kind of space the U.S. government has no access to, because the U.S. government gets access to lots of things for legitimate law enforcement reasons. So that's the debate we've been having, and we haven't made a lot of progress on it.
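A minimal sketch of the technologists' side of that argument: with end-to-end encryption, only the holders of the private keys can read a message, so any exceptional-access mechanism means escrowing keys or weakening the scheme for everyone. This assumes the third-party PyNaCl library is installed; Alice and Bob are the usual placeholder parties.

```python
# End-to-end encryption sketch using the third-party PyNaCl library
# (pip install pynacl). The service carrying the message sees only ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6 pm")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at 6 pm'

# Anyone without one of the private keys, including the provider, cannot
# decrypt the ciphertext; providing "exceptional access" would require
# escrowing keys or otherwise weakening the scheme for all users.
```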

OPERATOR: And as a reminder, if you would like to ask a question, please press the star key, followed by the one key.

And we have a follow-up question from the University of Utah.

Q: Hi. This is Max Becker from the University of Utah.

There has been some talk in the U.K. of censoring ISIS propaganda online, which has been met with mixed responses. Obviously, this is a new frontier and a new world in which we need laws and boundaries, but because of the fluidity of the internet it's impossible for one nation or entity to regulate this space at the moment. What you've been discussing today sounds like the Wild West, for good or for bad, but cyber law is becoming an increasingly complex issue.

Do you think that one group will come forward as a leading player in creating or enforcing cyber law? Do you think that a multinational group will be assembled to write these laws? And basically, do you see the internet being tamed in the next few decades?

Thank you.

SEGAL: So I think we are seeing groups trying to come together and write laws, or if not laws, then agree on norms or some shared basis of agreement. I think it's very much going to depend on the issue and the context. Cyberspace and cybersecurity are very broad topics that cover everything from crime to cyber war to espionage, and each of those areas is going to have a different set of actors that come together. The internet itself is governed by what's called the multi-stakeholder process, a collection of private actors, companies, and governments who help define some of the technology standards and procedures that make the internet work. On crime, there is an international convention called the Budapest Convention, which came out of Europe and which more than 40 countries have signed, so that tends to be state-led. The issue of cyber war also tends to be state-led. But, as you said, on fighting violent extremism online, for example, those norms or rules are going to be created both by governments and companies, and by groups like civil liberties and privacy organizations.

So I think in each of those issue areas it is going to be a different set of actors, but we are, as you said, slowly seeing rules being extended into what was once considered the Wild West.

CASA: Thank you.

Next question, please.

OPERATOR: And our next question comes from Franklin & Marshall College.

Q: Thank you, Adam.

So, of the number of successful attacks that you're looking at, what's your sense of their sophistication? How much of this would you understand as basic cyber hygiene problems that companies and governments have, and how much of it is really cutting-edge, shockingly sophisticated stuff?

SEGAL: Yeah, the vast majority of it is pretty basic: failure to patch, or falling for a phishing email or other social engineering. Several years ago the head of GCHQ, the British intelligence agency, gave a figure that about 80 percent of attacks could be easily dealt with through basic cyber hygiene, and 20 percent are more sophisticated. But when we think about the range of attacks that seem to have had a political impact, the number of sophisticated ones I could probably count on one hand, if not two: Stuxnet, the U.S.-Israeli attack on the centrifuges at Natanz to slow down the Iranian nuclear program; the attacks on the Ukrainian power grid in 2015 and 2016; perhaps the attack on TV5 the year before last. Those were all fairly sophisticated attacks, but the vast majority seem to be pretty basic.

CASA: So, Adam, what advice would you give to students who are interested in working in cybersecurity issues?

SEGAL: You know, it's hard, because it's unclear how you become a cybersecurity expert. Everyone I know in the field has taken a very different path. I came to the field primarily as a China expert. There are people who are computer science majors who then got interested in international relations. There are people who were operators, involved in cyber operations in the U.S. government, and then came out; people who are interested in the economic side of it or the user interface. So right now there are lots of interesting ways you can approach the problem and the field. There are a number of schools now developing master's programs in cyber policy that are trying to bring all of these things together. The way I often describe a career in any kind of foreign policy space is like an hourglass: you want to start out fairly broad and then slowly narrow over time to the specific issue you're interested in and develop an expertise, and once you have an expertise you can start broadening out again from there and writing about other things as well.

So I don't think there's any one path. You can come at it from economics, anthropology, computer science, or international relations. You just want to narrow within that broad scope: are you interested in the criminal aspect, in how nations behave, or something like that.

CASA: Great.

OK, Adam. Well, thank you very much for this informative discussion, and thanks to all of you who have participated.

This concludes our winter/spring Academic Conference Call Series. Our next call will take place in September at the start of the fall semester, and we’ll be sharing the fall line-up early in the summer. In the meantime, for information on new CFR resources and upcoming events, I encourage you to follow CFR’s Academic Outreach Initiative on Twitter at @CFR_Academic.

So thank you, again, for your participation. Good luck on your exams, and have a wonderful summer.

(END)
