Should government employees be able to work from their home office with sensitive data? Why is software often full of bugs that can be exploited? Has there been progress in the way software is tested? In this interview, Prof. Matthew Smith answers all these questions and explains how the startup Code Intelligence makes the software world safer.
Scientific advisor, Code Intelligence
Prof. Matthew Smith teaches Computer Science at the Rheinische Friedrich-Wilhelms-Universität in Bonn.
Matthew is a renowned expert for Usable Security and Privacy and also a member of the Fraunhofer FKIE in Bonn. He is a scientific advisor to the startup Code Intelligence.
There is often a discrepancy between the goals of IT security and what users actually do. Just recently I heard the story of a firm that blocked access from work laptops to any printers other than the ones in the office. To circumvent that, people working from home download their documents onto a USB stick and print them from their private laptops. What do you make of such anecdotes?
From a purely technical standpoint, this measure makes sense. Wireless printers are a security risk and an attractive entry point for hackers. But as the story illustrates, if you put up barriers, people will get creative to find a way around them. This points to a bigger problem. IT security should enable users to do things in a secure way, be that printing or something else, and not inhibit it. The human aspect is often ignored. Passwords are another good example of this. If you define that passwords need 20 characters and must be changed every week, the result you’ll get is that people write them down on a post-it note and stick it on their monitor.
Does this mean that security measures must consider the weakest link, the human?
I disagree vehemently with this way of putting it. It is the technology that has to adapt to the human, not the other way around. People who work in IT love to figure things out, they breathe technology, so to speak. But typical users just want to get work done. Just telling them to “RTFM”, read the ****** manual, doesn’t work. This must be taken into consideration, and systems must be designed so that they can be used intuitively.
A local politician recently ranted that it is unacceptable to let government employees who handle sensitive data work from home. Is working from home inherently less secure than working in an office environment?
Yes and no. You can create the same secure environment at home, but it takes effort. If you work on a laptop provided by the employer, without admin rights, it’s already much more secure than using a family laptop on which your kids install games. Then there are ways to secure access to your network, such as a VPN. I also think security should be considered in a broader sense and take health risks for employees into account. But there is an element of physical security as well. Imagine a targeted attack in which someone enters a building and threatens an employee into revealing their password: this is certainly easier to do in a private home than in a government building with a security service.
How big of a problem is IT security anyway?
It is gigantic. It is so big that it is overwhelming. The current state of affairs is that it is next to impossible to program secure software. Even organizations with quasi-limitless resources like the NSA or Google can’t write software that is completely free of bugs. And if they can’t, imagine how far from this goal other companies are.
“Securing software is as hard as securing a whole city without knowing where an attack will come from.”
Why is it so hard to do?
Writing software means that you have a list of specifications that you want to accomplish with it. But for the things that could go wrong, there is no such list. Securing software is as hard as securing a whole city without knowing where an attack will come from. Another element is that the problems that might appear aren’t visible and that many problems stem from combinations of parts of code which, by themselves, pose no problem at all. Detecting them is really hard. Imagine the typical case. The software developers of a company have built something and now security consultants, so-called penetration testers, try to figure out the weaknesses. These people are very costly experts, and their job is difficult. They didn’t write the code themselves and need to get acquainted with it. It’s a bit like finding errors in a book written in a foreign language.
Now, these people use tools as well, and there have been quite a few advances, as I understand it. Can you give us a brief history of how tools to find software bugs have evolved?
The first hacking attempts date back to the 1970s, and some of the tools and principles developed at that time are still in use. There are two major approaches that need to be distinguished: static and dynamic testing. Static testing means that you look at the source code and try to identify patterns that could be critical. Take, for example, a line of code that opens an unsecured internet connection. This line would get highlighted by a static testing tool because it could potentially be dangerous.
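To make the idea concrete, here is a toy sketch of such a pattern-based check. It is an illustration only, not a real analyzer: the patterns, messages, and function names are invented for this example.

```python
import re

# Toy "static analyzer": scan source text line by line for patterns
# that look dangerous, such as a plaintext HTTP connection.
INSECURE_PATTERNS = [
    (re.compile(r"""urlopen\(\s*["']http://"""), "plaintext HTTP connection"),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate check disabled"),
]

def scan(source: str):
    """Return (line number, message) for every line matching a pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in INSECURE_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

code = '''resp = urlopen("http://example.com/api")
data = requests.get(url, verify=False)
'''
print(scan(code))
```

Note that the checker only looks at the text of the code: it has no idea whether the flagged lines ever run, which is exactly the limitation discussed next.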
Why do you say “it could be dangerous” and not “it is”?
Because it depends on whether the line of code is ever actually executed and whether confidential information is shared through this connection. And that is exactly the problem that limits the usefulness of static analysis. Context is everything, and if the context isn’t clear, you will end up with many alerts that highlight potential problems which aren’t problems at all. There are only very few lines of code that are literally always a bad idea.
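A hypothetical snippet (names invented for illustration) shows the context problem: a static tool would flag the insecure call, yet the branch guarding it is never taken at runtime, so the alert is noise.

```python
import urllib.request

DEBUG_LEGACY_SYNC = False  # legacy flag, disabled in every deployment

def sync_settings():
    if DEBUG_LEGACY_SYNC:
        # A static analyzer flags this plaintext HTTP call as dangerous,
        # but since the flag is always False, the line never executes.
        return urllib.request.urlopen("http://settings.example.com/")
    return None
```

Dynamic testing, which runs the program, would never report this line, because no execution path reaches it.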
Does that mean that static testing isn’t very useful because it creates too many false alerts?
Yes, static testing is an evolutionary dead-end. In medium-sized software projects, it spews out thousands if not tens of thousands of alerts. We did an experiment at the university to assess how many alerts people can cope with. 100 are manageable. With 600, most people give up.
What about the other approach, dynamic software testing?
The big difference is that with this approach, you run the software. This means that if there is a line of code opening a network connection, you can see whether it is actually executed and analyze the data sent. The oldest way to use this dynamic method is to throw random data at the program. This brute-force approach has been around for a while, and it works for identifying some easy-to-find problems. Very recently, starting about five years ago, there has been a quantum leap in dynamic analysis. People started to use instrumentation, markers in the code that give the fuzzing engine feedback on what is happening in the program. So now it is possible to see how inputs move through the code and to tune the data that gets thrown at the software accordingly. This feedback loop refines the attacks, so to speak. The method is called feedback-based fuzzing, and it performs spectacularly well at identifying bugs.
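The feedback loop can be sketched in a few lines. In this toy version (all names are invented, and the coverage feedback that real instrumentation would provide is simulated by counting how many leading bytes of a secret the input matches), a blind random search over four bytes would need billions of tries on average, while the feedback-guided loop typically finds the crashing input within a few thousand iterations.

```python
import random

def target(data: bytes) -> int:
    """Toy program under test: crashes only on the exact input b"BUG!".
    The return value (matched prefix length) stands in for the coverage
    feedback that instrumentation provides in a real fuzzer."""
    secret = b"BUG!"
    matched = 0
    for got, want in zip(data, secret):
        if got != want:
            break
        matched += 1
    if matched == len(secret):
        raise RuntimeError("crash: bug reached")
    return matched

def fuzz(max_iters=200_000, seed=0):
    rng = random.Random(seed)
    corpus = [b"\x00\x00\x00\x00"]  # a single seed input
    best_coverage = 0
    for _ in range(max_iters):
        parent = rng.choice(corpus)
        # Mutation: flip one random byte of a known-interesting input.
        i = rng.randrange(len(parent))
        child = parent[:i] + bytes([rng.randrange(256)]) + parent[i + 1:]
        try:
            coverage = target(child)
        except RuntimeError:
            return child  # crashing input found
        if coverage > best_coverage:
            # Feedback loop: keep inputs that reach deeper into the program.
            best_coverage = coverage
            corpus.append(child)
    return None

print(fuzz())
```

Without the `coverage > best_coverage` step this degenerates into the old brute-force fuzzing; the feedback is what lets the search climb toward the bug one byte at a time.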
You said that dynamic testing throws data at the program to find weaknesses. How does that even work? I mean, if a program prompts me to enter my name in a field, throwing “Peter” “Paul” and “Mary” at it instead of my real name will probably not reveal a bug.
Let’s look at this example of user registration. In this case, the software will run a so-called unit test to make sure you entered a name and a password and add them to the database. If you use a good fuzzer, this unit test can be recognized and turned into a security test. It will try to use a very common form of attack, an SQL injection, that aims to manipulate the database by adding a command to the name, such as the code equivalent of “delete database”. The beauty of the approach is that using instrumentation and evolutionary algorithms means that the fuzzing software learns these attacks without having to be told about them beforehand. With each iteration, the fuzzer gets better and better at identifying bugs. And the big advantage of dynamic testing compared to static testing is that it doesn’t create thousands of meaningless alerts, it only identifies real problems.
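A hypothetical version of that registration code (table, column, and function names are invented for this sketch) shows what such an attack exploits, and what the parameterized fix looks like.

```python
import sqlite3

def register_user_unsafe(conn, name, password):
    # VULNERABLE: user input is concatenated straight into the SQL string.
    conn.execute(f"INSERT INTO users (name, pw) VALUES ('{name}', '{password}')")

def register_user_safe(conn, name, password):
    # FIX: parameterized query; the database driver handles escaping.
    conn.execute("INSERT INTO users (name, pw) VALUES (?, ?)", (name, password))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")

register_user_unsafe(conn, "Peter", "s3cret")  # benign input: works fine

try:
    # A fuzzer-style input containing a single quote breaks the query,
    # revealing that attacker-controlled text reaches the SQL parser.
    register_user_unsafe(conn, "O'Brien", "pw")
except sqlite3.Error as exc:
    print("bug found:", exc)

register_user_safe(conn, "O'Brien", "pw")      # same input, handled safely
```

The fuzzer needs no prior knowledge of SQL: once a quote character changes the program's behavior, the evolutionary algorithm keeps refining inputs in that direction.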
If dynamic testing is so much better, why isn’t every company on earth using it?
As I said, academics have been very enthusiastic about this method, and big companies like Google, Cisco, and Microsoft have hired every Ph.D. student versed in fuzzing they could lay their hands on. The problem is that fuzzing is an extremely complicated topic and not user-friendly at all. We did tests at our university: even master’s students in computer science who are really into IT security fail to apply fuzzing effectively; only a few of the most talented get it right. This, in turn, means that even quite large companies can’t apply this method in their software testing setup if they can’t hire such specialists.
So it comes back to an economic argument. As you already pointed out, security is costly. What are the incentives for companies to spend real money on hardening their software?
I can give you an example. There was a large software company that hired some consultants to do static analysis of its code. The analysis turned up about a million alerts. Do you know how many working days the company budgeted for the consultants to get rid of the security issues?
“There is a common misbelief that software testing is an awful and expensive process.”
How many?
Two days. When the consultants said that they couldn’t even read through a million alerts in such a short time, let alone analyze them, the company replied that it didn’t care; the whole exercise was just to tick a box on a compliance sheet anyway. Now, don’t get me wrong, there are many exemplary companies that do all they can to reduce security risks. But the underlying problem here is the common misbelief that software testing is an awful and expensive process. With dynamic testing, it can be much easier and more effective. It makes economic sense as well because it can find bugs early in the development process, when they’re less costly to fix. And no serious company can rely on the wishful thinking that nobody will attack its software.
Isn’t the problem that companies are stuck with static testing because the better approach of fuzzing is mastered just by a few experts?
Yes, but I’m convinced that dynamic testing will replace static testing for serious bug hunting in a few years’ time. The challenge is to make fuzzing effortless and inexpensive so that companies can spend much less time and effort on testing and still find many more bugs. And the startup Code Intelligence is doing exactly that: making fuzzing user-friendly.
Where do you know the founders of Code Intelligence from?
They are all former Ph.D. students of mine. Henning Perl is an expert in source code analysis and machine learning. Sergej Dechand knows what mental models software developers have and how to make complex security software usable; he brings the human aspect to the table. And Khaled Yakdan knows assembly code inside out and has worked as a malware analyst. He is an expert at creating novel algorithms that automate software testing. Together, they’re an incredibly strong technical team; I don’t know any other that compares to them.
So what they did with the startup is take an established approach, fuzzing, and turn it into a product?
Fuzzing as it is right now will never replace static testing at scale because of its complexity. The USP of Code Intelligence is that they make fuzzing effortless. But they did much more than just put a nice user interface on it. They have profoundly changed the algorithms, something that can’t just be copied quickly. The potential is enormous: this could become the standard solution for software testing, from large companies down to a single developer, because it makes fuzzing usable for everybody and saves costs. The world is getting more and more digital, and it is just a question of time until something goes horribly wrong if we continue to build software the way we do now. We need to give developers a powerful tool to improve their code, and that is what Code Intelligence does.
What will Code Intelligence’s biggest challenge be?
There are big software companies that hire more marketing and salespeople than developers and sell tools that are essentially not very good. Code Intelligence’s challenge will be to compete against the marketing hype with little real content that these companies churn out. But they use a valid sales approach, which is connecting with the technical experts in companies who can see the difference. For now, the company is growing nicely because its product helps companies save money and speed up the development process. My personal satisfaction will be to see them make the software world safer and show the world that software testing doesn’t have to be painful.