Security – a history and multiple challenges: A conversation with Sherif El-Kassas




Cyber Security


2021-05-05

Sherif El-Kassas teaches computer science at the American University in Cairo. He has been researching issues of security management and open source technologies. In this conversation with Lina Attalah, he talks about the history of security, the incentives around it, the large spectrum between being low-hanging fruit for security threats and being secure, and the tensions surrounding state breaches of users' privacy.

Lina Attalah: Let's start with a history of when security became an issue, since the introduction and expansion of the IT sector in this country. Who started thinking about it? Was it the state? Businesses? Users?

Sherif El-Kassas: Let me take a step backward and think of the history of security and computing. This is old history in general. Security in the sense of securing messages and communications is very old and goes back to Ancient Egypt, Julius Caesar and so on, and it matured as a science in the 1970s. In modern times, when communication moved to phone networks, the interest in securing communications came primarily from the military. The business interest wasn't there. Obviously in times of war, you don't want the enemy to know what you are saying to each other. So people started to invent different ways to encipher messages. Originally it started with enciphering texts on pieces of paper, and then it moved online: radio communications, telegraphs and so on. At that time, encryption and its tools were treated as top secret, while stories of betrayal and backdoors are as old as the industry itself. One of the famous stories is that during World War Two, the Germans had the so-called Enigma cipher, which the Allies broke with the help of some Polish and British scientists in Cambridge. They never announced that they broke it. But after World War Two, some British and Italian companies started selling the Enigma cipher as if it were unbreakable.
Thirty years later, the documents were made public, and it was a bit of a scandal when that happened. This is a case of what the legal people call caveat emptor, which means you should know what you are buying. Ever since then, as computers became widespread and also used in military and police intelligence, security became an issue. It was no longer just a matter of enciphering messages while they are transferred; computers held important data, and there was a question of who has access to it. So as early as 1975, solid design principles for secure systems came out. Cryptography advanced in huge leaps. The fundamentals of that science were out there. To some extent, this tradition of security carried into the other applications of computing. When people started using general-purpose computers, particularly multi-user computers, this idea of securing and protecting people's data started to rise, not as a military application but as a business application, because now you have bank accounts, and the need for privacy and protecting information, authenticity and so forth. It's been coined in the three letters C.I.A.; not the intel agency, but as in confidentiality, integrity and authenticity. Another "a" has been added, which is availability, and which has become a big issue. Ever since that time, security has been evolving very rapidly and a lot of people have been studying it. Up until the turn of the century, 2000 or so, most of the focus was on technical work and trying to solve security's technical problems: developing better algorithms, better encryption, better implementations, figuring out why we can't build proper systems. But there was always this realization that security is not only a technical issue. There is a people factor and a lot of other factors. In 2000, a group of scientists started looking into security economics. It is not a matter of studying the expenses of security.
It is a matter of applying ideas from micro-economics and information economics to security, to study things like incentives and motivation, because the technical platform doesn't help you answer these questions. A key observation has been that security often fails because the people responsible for it don't lose much when it does. So the question of incentives and human behavior is becoming more important now. It's been 14 or 15 years since the first conference that discussed these issues. This idea has expanded not just to economics, but also to applications of sociology, psychology and so forth. In today's world, most people think that you can't fix security by just fixing the technical problems. You can't fix security by getting a better encryption algorithm, or a better operating system and so forth. Those things are necessary for sure, but they are not sufficient. You have to look at the bigger picture: the socio-technical system that includes people, machines and everything in between. If one looks at modern attacks against systems, like hacking attempts, you will often observe that they typically follow three dimensions. There are physical attacks, where someone breaks into your office, steals your hard disk and leaves. There are technical attacks, where someone hacks into your computer because of a weakness in it. And there are social attacks, where someone convinces you to do something you are not supposed to do, either by coercion, bribery or deception. If you think about it, you cannot really solve security without looking at these three dimensions. So this brings us to the really important question, which is: how do you fix privacy in this world? The other challenge facing security, I think, is our general computing needs. It has always been the case that usability and utility trump security.
For example, there was a group of crazy Dutch people on the web who created something called Please Rob Me and decided to exploit weaknesses in how people tweet. You know how people tweet: they share their locations and say that they are at work, left work, went for coffee, got home, did this and that. So those guys used public tweets to track people, and for those who shared geographic information, they actually mapped how far they were from their home or office, and as soon as it matched certain criteria, they would post something like: X's home is free, so you can rob it, because he is two hours away. Thankfully, those guys have since shut down their service. They only keep their archive, and they still offer one service: you can give them a Twitter account and they can tell you how well or badly configured it is. But this brings the message home: if you are blinded by the utility of a technology, you don't understand the implications of the other side. There are very profound societal implications to our use of technology, and perhaps people like me, technologists, are guilty, because we are just driven by the excitement of being better, bigger and more advanced. The implications of how these technologies impact society are not very clear most of the time, and are often just an afterthought, when something goes terribly wrong. Of course the Please Rob Me example is just a joke. But you can imagine what a determined attacker can do, and you can imagine the amount of information that is mined about each and every one of us because of how we use the cloud, and because of the incentives associated with this use on both sides. This study of incentives is very interesting, because if you look at companies' perspectives versus users' perspectives, a lot of people have made the observation that to Facebook, Google and all the others, we are not the customers; we are the products. The customers are the advertisers.
It can be a win-win situation, because they do offer good services that we like using, but we have to understand where to draw the line where our privacy begins or ends.

LA: And with us becoming the product and the advertisers being the customers, is there a tension by default between this economic model and the viability of security in the sense of privacy, since data and information are core to the product?

SK: The only way to make companies do what's right for all parties is to have a legal framework. That creates another incentive: to keep growing their user base, they have to comply with it. You need societal rules: society decides that privacy is important, so laws are passed to reflect that, and companies have to comply. If you are a member of many of the clubs in Cairo, you'll notice how they use members' data, which is just incredible. The amount of spam you get on your phone is incredible. There is no rule to protect information. It's the same thing unless somebody sets the rules. It also creates an alternative, because competitors can create a better Facebook that complies more with privacy. So security can create a dimension for competition. But this requires awareness on the client side.

LA: That's probably why the alternatives are not picking up, no?

SK: People don't care most of the time. Convenience trumps security. Actually, oftentimes mediocrity trumps excellence in IT, just because it is more convenient. There is a lot of debate about open source versus closed source for security and which works better. It turns out to be mostly a question of incentives. If you competed in the product market at the beginning of the 1990s, a market which is dying out anyway today, it would be in your interest to get a product out there as quickly as possible, because you want to capture more clients, and when you capture them they become your network, so you build on them. There are so many benefits from doing that. The question of quality can be an afterthought.
The prototype of this story is Microsoft, because of how they managed their strategy around product quality and security. I don't mean this as a bash against them, because this is perfectly rational behavior. They want to capture the maximum of the market, and they do, so they are doing their job. It's up to us to do our job and create a counter-incentive.

LA: Let's take this conversation to the local context and the heavy political framing around it, with the state's ongoing quest to practice control over people's lives and information. How has this quest by the state been practiced? I am not talking just about surveillance software, which is more recent, but also about infrastructural control of the businesses providing online access, the fiber optics, the submarine cables, etc.

SK: This is a very old debate. I need to take a step back to put some context to it. There is something very common across all governments of the world, which is the need for so-called lawful interception. The idea is: if criminals can do their business online, then the state has to be able to catch them and to collect evidence against them. So every telephony device built was designed with lawful interception enabled in it. Nobody likes to talk about that, because it is essentially a backdoor. People don't like to advertise backdoors, because they think others will find out and exploit them. In fact, people have consistently found and exploited backdoors, even those meant for lawful interception, which is a big technical problem. Oftentimes, these backdoors are hidden or protected by obscurity. Obscurity as a security philosophy doesn't work very well, because any failure is a complete failure. This debate over the need for surveillance is an old one and is rooted in other parts of the world. In France, for some time, it was illegal to use cryptography. In the US, it was illegal to export cryptographic software. Those times were called the crypto wars, because it was human rights versus the state.
Some observers feel that we live in the crypto wars again, because there is this renewed fear that civilian cryptography has become so good that you cannot really break it in an easy way, and that manufacturers won't create backdoors in their technology to enable lawful interception. In the old days, lawful interception was easy because there was no encryption: you went to the phone switch, a couple of wires would be installed on the line, and you could listen to the calls. Now it is more complicated. Everything is in software and can easily be well encrypted, so you either get into the end points (instead of going to the switch, you go to someone's phone or computer), or you have some backdoor in the infrastructure that enables you to capture the message. Of course, there are good reasons to want the police and other legal bodies to be able to listen to criminals' calls. On the other hand, you don't want it to be abused, because too much power will eventually corrupt any person. There is no clear answer to this problem. You often address these problems by having oversight and scrutiny: whenever the police do lawful interception, you require a court order for it. But, ultimately, if someone is at the top of the power ladder, there is little scrutiny over them. The idea is that when you have lawful interception or the equivalent of it, you want to place it at the right level of classification, so that there is enough scrutiny. You don't take everything to the national-security, top-secret level; you need to try to balance the power. How exactly, that's of course very difficult, because every country has its own approach, but every country has also broken its own rules. I am sure you are very mindful of the Snowden revelations and this systematic effort to backdoor everything on the planet. There is a lot of redundancy in what Snowden has revealed, and that redundancy is very telling of the scale of the power of this machine.
Before that, of course, there was the famous so-called Echelon project, the network of English-speaking nations around the world, probably a relic of World War Two, but one that continued and evolved, eavesdropping on every piece of communication all over the world. That, I guess, is the big picture. You might have seen the movie Starman. In it, there is a conversation between the starman and an earthling, and at the end of it the starman concludes: "I think you are a very primitive species." We still build silos around countries. We still fight about pathetic things, and as a result there is always this need for an advantage. If you are the major technology superpower, like the US, you want your advantage. It is very rational behavior. You then build all the tools that can give you whatever advantage you need. And this ripples down to everything: businesses, individuals at work, state police and intelligence agencies. We are still competing at that level. That's a fact of life; we have that type of competition and those types of incentives. Being too naïve about it doesn't help. Some of those competitions and competitive attitudes create a balance that is necessary for progress in general. At every level, people or entities will look for advantages in trying to get to information. Those might be legal, as in a typical legal framework, or might be state-driven, to compete with other states or to curb internal unrest. Basically, the gloves are off; there are not many controls. At the technology level, I think it is almost impossible to stop a determined adversary with enough resources from getting into your data. You can make their job very difficult, but it is almost impossible. You know, I encrypt my hard drive. I do my homework. But I leave my computer here because I have to go to my lectures; somebody comes in, installs a hardware device on it, and I am not going to know. The next day they collect it, and they have all my data.
There is very little I can do about it. Somebody really determined can infect the software update sources I update my system from, and the next update will actually contain a virus. Somebody really determined can write a virus that escapes all the known anti-virus tricks, because it is just brand new, it is written for me only, and the person who wrote it is an expert who can bypass controls. So it becomes really difficult. I am not trying to promote excessive paranoia. We often think of security in extremes: I am either secure or not secure. If you think in those extremes, it's not going to work out very well. It becomes like having to stay at home and not do much with your life. In fact, as we do in life in general, we have to assess risks intelligently and decide what's appropriate and what's not. Every time we get into the car and press the brakes, we are not sure they are going to work, but we accept it as a reasonable risk because they work 99 percent of the time. That seems like a reasonable compromise for us to move around. We have to be able to apply this risk assessment, which we do in life, to our choices of technology, to decide what's appropriate. If something is incredibly important, you have to go to the bank yourself and do it, unless you can accept the risk of losing a few thousand pounds in an online transaction. You shouldn't buy into the illusion that any of them is a perfect choice. There is also an added difficulty with risk assessment in technology. In general, we are trained to do risk assessment in the natural world, which is made of physics; we grew up in it and understand how gravity works, and so on. So it's fairly intuitive. When we come to take decisions about technology, we often use our knowledge, our physics mindset, but in a world that is not governed by the same rules. If you are standing outside my door, the only way to come in is either you have the key or I open the door for you.
Those physical constraints hold. If you are standing in front of a network gateway and you think in the same way, you will probably be wrong, because to get in you may only have to change your address. Changing a number changes your state completely, from being inside a room to being outside it or vice versa, and this is hard to grasp because it requires an understanding of the underlying technology. This is actually a criticism of technology, not of users, because not everyone should need a degree in computer science to be able to figure out the risks associated with technology. How we build technology for users has missed this dimension of helping them make intuitive decisions about security. One of the common examples: when you go to a website over HTTPS, which is authenticated, you may get a warning message saying there is something wrong with this site's certificate, do you want to continue, yes or no? Most people just continue, because the implications of continuing are totally beyond them. They don't understand that by continuing, they are opening the door to attacks. In 2003, a group in the US called the Computing Research Association came together to coin so-called 10-year challenges for computer security. One of the challenges was building systems that users can understand and make proper decisions about. Another was to get rid of epidemics: viruses, denial-of-service attacks and so on. Another was building design principles into software engineering so we can build societal applications in whose security we are confident. A final one was applying security economics in a meaningful way. Now the 10 years have expired, and I think we need 10 more. We are still very far away.
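The certificate-warning scenario described above can be made concrete with Python's standard `ssl` module. This is an illustrative sketch, not something from the interview: a default client context enforces exactly the checks that clicking "continue" on a certificate warning throws away.

```python
import ssl

# A default client context verifies the server's certificate chain against
# trusted authorities and checks that the certificate matches the hostname.
strict = ssl.create_default_context()
assert strict.check_hostname is True
assert strict.verify_mode == ssl.CERT_REQUIRED

# Clicking past a certificate warning is roughly equivalent to disabling
# both checks: the connection is still encrypted, but there is no longer
# any guarantee about who is on the other end.
unsafe = ssl.create_default_context()
unsafe.check_hostname = False   # must be disabled before changing verify_mode
unsafe.verify_mode = ssl.CERT_NONE
```

Any TLS client built on the `unsafe` context would silently accept an impostor's certificate, which is precisely the attack the warning exists to prevent.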
LA: What you are saying is very interesting in the local context, where some of the discussion on security is reduced to the binary of being secure or not secure, particularly among online activists, some of whom promote security as a lifestyle and an expression of being geeky, while others recognize the impossibility of totally protecting yourself, which translates into resisting the security discourse altogether and articulating a counter-discourse of assertiveness around the information provided online.

SK: Ultimate security is probably impossible. But reasonable security is possible. Judging it is your own personal assessment exercise. Oftentimes you can draw it as a triangle. At its top is the high security level; no civilian can hope to reach that level of security. But you don't want to be at the bottom either, because down there you get the random attacks. At the bottom is every casual computer user: the person who doesn't have an updated anti-virus, or who doesn't update their passwords. You don't want to be at the bottom of that pyramid. You want to be somewhere in the middle, and to be there you will need to follow some best practices, such as making sure you don't use somebody else's computer to log in to your email. Following best practices is a good idea because they put you at this medium level. Even within that level, there is a huge range of decisions about what to do. For example, I don't use Facebook at all. That's a measure I opted for. Am I really better off? AUC email is on Google, and Google knows my birthday, even though I didn't input it myself. So you have to figure out what level you want to be at, and that's an important exercise, especially if you have privacy concerns. The two sides of this triangle are who your adversaries are and who you are.
It seems that the state of the art of cyber interception is now focused on two things: passive interception, which means listening to network traffic and trying to decrypt it, and end-point interception, which means someone deploys a device or an agent on your machine or your phone and listens to what you have. I don't know this as a piece of information, but I am pretty sure that all governments around the world have invested in these technologies, and so has Egypt. Companies act as agents for these pieces of software; they sell them, and by law you can only sell them to governments. It is a no-brainer that everyone has these tools and is actually using them. Now, again, everyone targets the low-hanging fruit. So if you are the low-hanging fruit, then tough luck. It seems everyone now depends on Secure Sockets Layer (SSL), or Transport Layer Security (TLS) depending on who you talk to, for encrypting connections, so a lot of effort goes into intercepting SSL and TLS. I would assume these efforts can be made successful, because there are so many ways to attack trust. Trust is another huge issue. SSL is based on an assumption of trust between systems, and this trust stems from an authority that gives you a certificate. What if I am a powerful person? I can go to the certificate authority and tell them: give me one that says I am Google, or Lina, or whatever. If they are obligated by law to do so, I am going to have a legal certificate, and you can't tell the difference between it and any other one. The deception would be perfect: when you think you are talking to your favorite site, you are actually talking to what cryptographers call the man in the middle. There is no way to detect that. If you want to make it slightly harder, you have to add a second layer of encryption on top of those messages. But this by necessity is an arms race.
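The idea of a second layer of encryption on top of SSL can be sketched with a toy one-time pad in Python. This is an illustration under stated assumptions, not a deployable scheme; real end-to-end layers use vetted constructions such as PGP or the Signal protocol. The point is that the message is already ciphertext before it enters the TLS tunnel, so a man in the middle who defeats the certificate still learns nothing without the key.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    if len(key) < len(data):
        raise ValueError("one-time pad key must be at least as long as the data")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at the usual place"
key = secrets.token_bytes(len(message))  # shared out of band with the recipient

ciphertext = xor_bytes(message, key)     # this is what travels inside TLS
recovered = xor_bytes(ciphertext, key)   # only a key holder can invert it
assert recovered == message
```

The same property cuts both ways: lose the key and the ciphertext is permanently unrecoverable.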
As soon as too many people start encrypting their messages, it will be noticed that too many people are using this tool on top of SSL, so the other side has to figure out how to intercept it, or deploy smart end-point agents that collect the data before it is encrypted.

LA: Is there a tension between encryption and the archive? Do you worry about the disappearance of the record with the drive toward more encryption, or more generally its absence because of more security consciousness?

SK: For sure, there are so many tensions in that area. If you lose the key to decrypt the data, then you have lost all the data. There is also an interesting crime called taking data hostage. Let's say I am unhappy with AUC, so I encrypt the grades and leave, and say: unless you pay me that much money, I will keep the key to the grades. There are some viruses that do that. Even people not speaking out because of their security consciousness is a huge problem. As soon as people get rattled about their security, a lot of other useful ideas go out the door, unfortunately.

LA: To finalize, what's your main observation about the evolution of the security mindset in Egypt on the two fronts: the state and the user?

SK: Let me start with the user front. People are much more aware of security issues than they were five years ago. They are worried about security, hacking, keeping their phones private. There is also a lot of misinformation. There is a lot of what cryptographers call snake oil: something that supposedly cures everything but doesn't really do anything. There is a lot of awareness, but there is not much security practice. You can see that best practices are not practiced at all. Best practices only get you 60 to 70 percent of what you need to do, but if you don't follow them, you are at the bottom of that pyramid.

LA: And it is mostly connected to incentives, as you said. So you will find more best practices in the banking sector, for example.

SK: It is interesting.
The banking sector really pulled its act together when Visa and MasterCard put their foot down and said: here is a standard called PCI DSS, and if you don't follow it, you pay more for credit card service. In fairness, the Central Bank of Egypt is doing a great job now, because they built national standards, which I think will have a really good impact in years to come. For the state, I think it is at multiple levels. If you look at government offices, a lot of them need to do more work. The security is terrible. The perception is that it doesn't matter much because not everything is automated yet. But a lot needs to be done. There aren't very clear standards for government security. To fix that, you just need to create the incentive. Some laws have been passed, like the telecom act and the e-signature law. I think we need much more than that, like general IT security standards for government. There have also been some interesting bodies established. For example, at the Ministry of Communications and Information Technology, there is the Egyptian Computer Emergency Response Team, which receives calls and information about cyber security. It is hosted in the National Telecommunications Regulatory Authority because it is supposed to support the telecom industry at large. I think there should be similar entities for banking and other sectors as well. It is a start, and they have some very good people doing good work. But it is not really enough. There must be a top-down mandate that says: you have to be compliant with this standard set if you want to run a data center inside your government office, and here is how you do it. But this is very expensive, and there needs to be political will to actually do it. Law enforcement has made a lot of progress in areas like forensics, recovering data and evidence. I know for a fact that everybody has an interest in cyber security, both on the defensive and the offensive level.
Judging by the companies that sell software like FinFisher, we know for a fact that somebody is buying this stuff and using it for some reason or another. It is expected and normal to find that these actors have these tools and are using them. That example concerns end-user tools; there are other things for intercepting traffic, basically playing tricks on SSL.

LA: Are there tensions between the legal measures taken up by the state with regard to security and users' quest to protect their own security in our context? For example, you referred to the Telecom Act, which prevents users from deploying encryption, meaning that by being security conscious, we engage in multiple unlawful acts every day.

SK: It is unfortunate, and I am hoping this can be revised at some point. I heard some talk that there is a plan to do that. What we really need in this country is more transparency on these issues, oversight and proper scrutiny of all these practices. I think if we have enough of those, we can find an acceptable compromise. We need decision makers who understand that we have to put those records of surveillance somewhere. We have to know exactly how many people have been eavesdropped on and why; even if we don't know it beforehand, it should be published later on. There should be somebody whose responsibility is to make sure this stuff is published, so that by scrutinizing your work we make sure you are doing the right thing and not just twisting people's arms. I think this argument can be made and can be listened to. I think it will take a lot of time to make it happen, here and elsewhere, but here we have more ground to cover. There is a level beyond which the state will be doing it without telling you they are doing it. The idea is that you don't want everything to be at that level. You want only things that are truly critical to national security to be at that level. Everything else should be subject to scrutiny.
There are so many models for scrutiny and governance.