DeviceLine

Security News for Mobile, Apps & IoT

DeviceLine Radio: Dan Stickel On Self-Defending Software

When you have devices out in the field, how do you know the software hasn’t been compromised?

The concept of self-defending software is not new. In this conversation, Dan Stickel, CEO of Metaforic, and host Robert Vamosi, CISSP and senior analyst at Mocana, discuss the need for it in the Internet of Things and the means to accomplish it.

You can hear my full conversation, along with a recap of this week’s news here.

A transcript follows:

Robert Vamosi: We'll start out with…tell me a little bit about your company, Metaforic, and then tell me more about the concept of self-defending software.

Dan Stickel: Metaforic has a software immune system that we make available to companies so that they can ensure that their programs will operate correctly even if those programs have to run in hostile or imperfectly protected operating environments. We look at the state of security, whether the threat is malware or trojans or worms or hackers or disgruntled employees or blackmailed employees or system administrators who've made mistakes or whatever, and we realize that the days of expecting that you can write a piece of software and have it run only under perfect operating conditions are long gone.

We’re taking our cue from the world of biology and enabling software and device creators to have their devices be functional in the real world.

Robert: How do they perform integrity checks when they’re out in the field with very limited resources and so forth? Just a high level example of how that might work.

Dan: This is something that I could go on about for hours. I'll try to keep it fairly high level. Basically, we inject thousands or tens of thousands of security primitives, or, in this biological analogy, antibodies, into the software program as it's being created, so that then, as it runs in the field, as it displays a menu, completes a transaction, checks a temperature reading, does whatever the device or the software program is supposed to be doing…
As a natural, built-in part of doing that, it's also running these various security primitives, which are checking the program's own health. The simplest example might be that one of these security primitives is checking a region of the code to make sure that it's still the way it was when it left the factory.

If it ever finds anything that's gone awry, it can trigger a response: try to self-repair, send out a cry for help, gracefully shut down, or initiate whatever kind of custom response makes sense for that particular device and that particular deployment environment.
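[Editor's note: to make the idea concrete, here is a minimal sketch in C of what one such integrity primitive might look like. The linker symbols, the FNV-1a checksum, and the abort-on-tamper response are illustrative assumptions for this sketch, not Metaforic's actual implementation.]

```c
#include <stdint.h>
#include <stdlib.h>

/* Boundaries of the protected code region, typically exported by the
 * linker script; these symbol names are illustrative. */
extern const uint8_t __text_start[];
extern const uint8_t __text_end[];

/* Reference value computed over the region at build time and patched into
 * the shipped image (placeholder value here). */
static const uint32_t expected_checksum = 0xDEADBEEFu;

/* One "antibody": hash the region and compare it against the value recorded
 * when the software left the factory. FNV-1a is used purely for brevity. */
static uint32_t region_checksum(const uint8_t *start, const uint8_t *end)
{
    uint32_t h = 2166136261u;          /* FNV-1a offset basis */
    for (const uint8_t *p = start; p < end; ++p) {
        h ^= *p;
        h *= 16777619u;                /* FNV-1a prime */
    }
    return h;
}

/* Called as a side effect of ordinary work: drawing a menu, completing a
 * transaction, reading a sensor, and so on. */
void integrity_check(void)
{
    if (region_checksum(__text_start, __text_end) != expected_checksum) {
        /* The response is deployment-specific: attempt self-repair, send a
         * cry for help, or shut down gracefully. Here we simply abort. */
        abort();
    }
}
```

In this sketch, a build step would record the expected checksum after the final binary is produced, and the check would be invoked from many ordinary code paths rather than from one obvious place.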

One of the things that's really important about this is that it's all self-contained. This isn't something that needs constant updating to take care of the latest exploit. It's not something that needs to fight tooth and nail with the latest hackers. It's just constantly looking at itself to make sure that it's intact, and if it finds anything out of alignment, or cancerous in this biological analogy, then it does something about it.

I could keep going on this as long as you please, Bob. Cut me off whenever you want, but the idea of a program checking itself for health is very old. Think about something like code signing. That's one big check for health, right?

The problem with code signing…well, there are many problems with code signing, one of which is that the operating system has to enforce it. If you're deploying on a Windows box, for example, it's virtually useless. If you're deploying on a platform that actually tries to enforce code signing, like Windows RT or the iPhone, it's a little bit stronger there.

But then again, you're still only checking once, when you install the app or when you first run it. There are so many exploits that attack the program after that initial check. The idea is to have thousands or tens of thousands of these checks constantly firing, giving no window of opportunity for an outside source to do anything malevolent.

If a hacker tries to defeat one of these software antibodies, one of these checks, what they'll discover is that that check is itself checked by multiple checks, and each one of those is checked by multiple checks, so that by the time you're done removing one check to make one change to the program, you've had to go through the vast majority of the protections, which can be quite time consuming.
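[Editor's note: one way to picture the interlocking-checks idea is as a set of guards whose protected regions cover one another, so no single check can be patched out in isolation. The sketch below is a rough illustration only; in practice the layout and reference values would be generated by tooling, not written by hand.]

```c
#include <stdint.h>

/* Each guard's region is arranged to include the code of other guards, so
 * removing or altering one check breaks the checksums verified by its peers.
 * Boundaries and expected values are assumed to be filled in at build time. */
typedef struct {
    const uint8_t *start;    /* first byte of the covered region           */
    const uint8_t *end;      /* one past the last byte of the region       */
    uint32_t       expected; /* checksum recorded when the image shipped   */
} guard_t;

uint32_t region_checksum(const uint8_t *start, const uint8_t *end); /* as in the sketch above */
void respond_to_tamper(void);               /* repair, alert, or shut down */

/* Invoked from many unrelated code paths; guards[i] is laid out so that its
 * region includes the code of guards[(i + 1) % n], forming a cycle of
 * mutual protection. */
void fire_guard(const guard_t *g)
{
    if (region_checksum(g->start, g->end) != g->expected)
        respond_to_tamper();
}
```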

Like any security company, we're not going to claim perfection and complete hacker resistance. But certainly, we're going to make things extremely difficult, and make it take a very long time to do anything.

Robert: You mentioned code signing. Another common method is code obfuscation. Can you talk a little bit about the value of that?

Dan: Yeah, I have to say that we're not huge fans of code obfuscation. Now, the product that we offer has a variety of different elements, and we've been focusing on the core code integrity checking part of it. When we send our products to white hat hackers for testing, we often tell them where the checks are. We don't rely on hiding things. We make it difficult to remove the protections that we've put in. Having said that, there are use cases for obfuscation. For example, if you have some sort of key that you're trying to hide and you don't want people to find it easily, or you've got a secret algorithm that runs very quickly and you don't want your competitors to understand what it is, or you're doing some of your own device health checks and you don't want hackers to understand exactly the factors that you're looking for.

There are, indeed, use cases that make sense for obfuscation. But we tend to think that obfuscation is a bit overused and over-relied upon. Anything that relies on security through obscurity generally doesn't last very long. Again, as I say, we offer this in our product; it's called Metaforic Concealer. But we don't think that's the core of any kind of security strength.
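[Editor's note: a toy illustration of the key-hiding use case Dan mentions. Rather than embedding a key as one literal, the key is split into two shares that are only combined at the moment of use. The values and key length are placeholders, and this is a generic sketch, not how Metaforic Concealer works; as Dan notes, this kind of obscurity only slows an attacker down.]

```c
#include <stdint.h>
#include <stddef.h>

#define KEY_LEN 16

/* Two random-looking shares stored in place of the real key, so a casual
 * pass with `strings` over the binary does not reveal it. Placeholder data;
 * remaining bytes are zero-initialized for brevity. */
static const uint8_t share_a[KEY_LEN] = { 0x3a, 0x91, 0xc4, 0x07 };
static const uint8_t share_b[KEY_LEN] = { 0x5f, 0x02, 0x88, 0xe1 };

/* Reconstruct the key only when it is actually needed. */
static void reconstruct_key(uint8_t out[KEY_LEN])
{
    for (size_t i = 0; i < KEY_LEN; ++i)
        out[i] = (uint8_t)(share_a[i] ^ share_b[i]);
}
```
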
Robert: What are your opinions around hardware enforcement?

Dan: I think hardware enforcement has a lot of promise. In theory, in an ideal world, that is definitely the way to go. If you look at something like ARM TrustZone, if you can establish that trust from the very base level and continue it throughout everything that executes, in theory it looks like you could completely protect your program and your system and your device. But all we can say is that in the real world, that helps, but it has proven to be insufficient. We have yet, as a species, to create a device that has not been cracked. I think this comes back to the philosophy I started with at the beginning, which is that we think the days of writing software programs that rely on a perfectly protected operating environment should be over. And that if you are creating software, deploying software, that needs to run correctly, you should take advantage of every defense mechanism that you have at your disposal.

If you’re going to deploy in a trusted hardware environment, that’s good. But you still can’t, necessarily, trust that. If you look at the recent exploits on the Samsung phone where you can get access to all of their memory, programs that were designed to run on those phones assumed that the operating system would protect them. But indeed, there was a mistake made and malware could get access to any memory for any program and make whatever changes they want.

Now, if people had protected the software programs that they run, or that they're creating, with a software immune system…by the way, I don't want to say it has to be Metaforic, I'm just saying that this is the philosophy…then even though there was a breakdown in an operating system's protection, your programs would still have been protected.

Another example is Windows RT where they’re trying to enforce code signing and fairly quickly exploits were discovered that could defeat the requirement for that code signing.

Coming back to the original question about hardware protection, I think that's a good thing. Just because we're espousing the need to put a software immune system into people's software doesn't mean that we're suggesting you should throw out all other forms of protection, or that you should get rid of your firewall. But we do think that those have been pretty convincingly shown to be insufficient.

Robert: So the Internet of Things is vast, as we all know. Where is the particular sweet spot where you feel that your products fit and make sense?

Dan: [laughs] We're a bit like religious zealots about where our product fits and makes sense. Ultimately, the way that we think about this is that last century, when humans first created software, all that programmers had to do was write code that would operate under perfect conditions. But pretty quickly, they discovered that things that interacted with human beings had to deal with human imperfection. People would not answer the questions or would not provide the input quite correctly. From there they discovered that some humans provided the input incorrectly on purpose, to make the program misbehave. If you think about it, it's a pretty rapid decline from there into trojans and malware and [inaudible 00:10:12], which is basically a variant on these themes that I talked about in the beginning.

We look at that and we say that the entire world is running on software, whether it's the software that runs your car, the software guiding the tools in the hospital operating room, the software that runs the airplane or the nuclear power plant, or just the more obvious software that runs the financial transaction systems or that's on your mobile phone. Basically anything, even designing a bridge.

The entire world is being leveraged and improved by the use of software. But because of that pervasiveness, the entire world has become more vulnerable to potential bad actors.

You know what, it's not even just bad actors, it's actually mistakes that creep in. Or we have some people that we work with who are concerned about sunspots and power surges actually corrupting the software. So it's not even a case of this being all about hacking; sometimes software gets corrupted via other mechanisms as well.

And so, you know, it’s our belief that long term, all software in every category, as I mentioned, and more, should be protected. But of course, the real world doesn’t work that way, and it doesn’t adopt everything instantaneously and uniformly. So there are really two main areas where we see that there’s a lot of attention being placed.

One is on the mobile phone. Now, that’s, you know, arguably an embedded device, maybe, maybe not, but everybody’s carrying them around these days, and they’re putting more and more of their life onto those phones. And you have things like BYOD trends that put the enterprise at risk, and you just have consumers that are running mobile banking apps on their phone.

And if you think about a financial institution, they are quite comfortable with their old school bank vault full of gold bullion, with armed guards and video surveillance cameras and background checks. And then they were pulled into the, you know, 20th century with the data center, and they kept the armed guards and the video surveillance and the background checks and added in intrusion prevention systems.

But now they’re being asked to run banking software on an undefended consumer phone, and that is, frankly, dangerous and terrifying for many different reasons, one of which…well, we can talk about more of that if you like, but I’ll put aside mobile phones for right now, mobile phones and financial institutions.

And the other, of course, is just infrastructure. And anybody who really follows software security is quite familiar, probably overly familiar, with Stuxnet, which is sort of a canonical example. But it just so happens to be a perfect example for what I’m going to talk about, and for multiple reasons.
One is that the Iranians knew when they built their enrichment facilities that they would be under cyber attack. So they built those facilities with the perfect firewall, an air gap. It literally had no connection to the outside world. So they felt that there was no way that they could be infected with anything. And of course, ultimately, they were. And so that’s just a perfect illustration of my theory here, my thesis, that you can’t rely on the perfectly protected operating environment.

And then what happened with Stuxnet is that it actually modified the Siemens controller software so that the centrifuges spun too quickly and basically self-destructed. And again, had that Siemens controller software been protected by its own software immune system, it would have detected the attempt to modify parameters and prevented that kind of destruction.

Robert: So I just want to clarify something before we go too far. If I'm hearing you correctly, you're talking about the entropy of the software, anything that changes it once it's out in the field, and that your agents plug in at the programming stage. Is there anything to protect against, you know, bad programming? I understand security's not often taught in programming courses; how do you defend at that level?

Dan: You know, it’s interesting, that’s a question that we get asked quite often. And I guess the answer to this is twofold. One, we are indeed protecting against change, whether it’s, you know, nature random entropy or directed hacking kinds of changes. But we’re trying to preserve what it is that you wanted to send out. We are not, in our company, trying to enforce best programming practices. And so if you think about it, when people build programs today, what they do, in a [inaudible 00:15:20] fashion, is they write the requirements, they design the software, they code it up, they test it for performance, for features, for usability, and they also now are starting to test it for security vulnerabilities, and that’s what companies like Fortify or Coverity will help you with that.

So you can run their tools and scan your code for known weaknesses and exploits, and then have your programmers go back and patch those up so they’re not there any longer, and then, with our company, you know, we say now, inject the immune system, activate the immune system, and ship the product.
So just as many years ago, people didn’t realize that they needed to provide an installer with their program, and people just copied files into different directories and tried to make it run, and then there was this breakthrough realization that, you know, we should make that easy for people, we should have installers.

Or, just as we realized we should have a QA department that runs our software under all the different conditions and operating environments under which it's supposed to work, I think there's going to be a realization that, you know, we need to provide some sort of defense mechanism as it ships into the field.
So the short answer is that tools that look for exploitable weaknesses are a great complement to what we do: look for the exploits, take them out, activate the immune system, and ship it.

So much of security these days assumes that the system is perfect and that the only danger is in running some kind of unknown software. So they build all sorts of protections and virtual machines. And if you look at a company like [inaudible 00:17:04] that tries to encapsulate the different programs that are running in case there's something bad in a program, those are all good endeavors. But what if the system already has something lurking in it? What if there's an advanced persistent threat that's already in the system? What if the system administrator is actually a turncoat? What if there's a zero-day exploit that comes along later?

If you have a piece of software that’s running on that system that you still need to operate correctly even though the system is compromised, whether you’re operating a nuclear power plant or a water treatment facility or just a banking application, that’s where we come in. So the vast majority of the security industry is trying to contain that threat, to protect their perfect systems, and we are trying to protect the elements of the system in case it does get compromised, and by the way, every system known to man has been compromised.

Robert: Right. And I want to thank you for articulating what I've been trying to say for many years about why I don't do banking on my mobile phone. You're putting an app in a very hostile environment. The PC, I think, you know, we're comfortable with, but the mobile phone is all new, and I hadn't articulated it the way you just did a moment ago. Is there any hope, then, for the financial services industries that want to put their apps on these devices, since, you know, we can't really trust the mobile environment?

Dan: Well, I have two pieces of information that might be interesting for you. One, we did a survey on mobile phone users and banking, and we found that you are not alone. We found that 68 percent of non-adopters said that they would adopt, but that the main reason they haven't is security. If something could be done to either prove that the security was strong or to reimburse them for any losses, they would adopt the banking app. Now, of course, what those consumers don't realize is that in much of the world you are protected, and if you lose money the bank has to reimburse you. Those people should feel comfortable adopting it if they're not worried about things like identity theft and dealing with all the hassle of proving that it wasn't them, which is not minimal.

However, on the commercial side those protections don't exist. If the bank can show that it did take reasonable security precautions and you still lose money, that's your problem, and you're just out hundreds of thousands or millions or tens of millions of dollars. That's a very different kettle of fish.

The point being that you are not alone. Other people are concerned about the security of the mobile app. And, in fact, in the UK there's a bank called NatWest which this fall, the fall of 2012, had to pull its app from the shelves because its customers were experiencing fraudulent activity. So it's not an unfounded fear that you have, and that I have myself, honestly, and that others share. It's a very real fear.

In that same survey that we did, we found, and this was the most unbelievable statistic, we found that 19 percent of the people that we surveyed either had been the victim of hacking fraud on their mobile phone or knew someone who had. That seems like such a high number. I started to question it myself.
And so, I asked our director of marketing if she knew anybody that had been hacked like that and indeed she had. There was anecdotal confirmation of the statistic we had gotten from this larger survey.

To answer the second part of your question, is there hope? I think the answer is absolutely yes, there's hope. There's hope on multiple fronts. By the way, there are an estimated 300 million users of mobile banking around the world today. In parts of the world, people only have access to mobile devices; it's difficult for them to get anywhere physically, so they use mobile banking.

People in the US use it. People all over the world are using this, with some degree of danger today. But I'd say that the hope is really twofold. One is that the banks have very sophisticated back-end behavioral algorithms that are trying to monitor things to make sure that they believe it's really you.

If you are someone that always lives and always acts in Omaha, Nebraska and you do a transaction and then two hours later you’re doing a transaction in China, they’re probably not going to allow that. That’s not particularly sophisticated, but it’s an easily accessible thing. They’re monitoring that.
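[Editor's note: a toy version of the back-end rule Dan describes, flagging a pair of transactions that would require implausibly fast travel. The speed threshold, data layout, and function names are assumptions for illustration, not any bank's actual algorithm.]

```c
#include <math.h>
#include <stdbool.h>

#define EARTH_RADIUS_KM   6371.0
#define MAX_PLAUSIBLE_KMH 900.0   /* roughly the speed of a commercial jet */
#define RAD_PER_DEG       (3.14159265358979323846 / 180.0)

typedef struct {
    double lat_deg, lon_deg;  /* where the transaction originated         */
    double time_hours;        /* timestamp, in hours since some epoch     */
} txn_t;

/* Great-circle distance between two points, via the haversine formula. */
static double haversine_km(double lat1, double lon1, double lat2, double lon2)
{
    double dlat = (lat2 - lat1) * RAD_PER_DEG;
    double dlon = (lon2 - lon1) * RAD_PER_DEG;
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(lat1 * RAD_PER_DEG) * cos(lat2 * RAD_PER_DEG) *
               sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * EARTH_RADIUS_KM * asin(sqrt(a));
}

/* Flag the newer transaction if the account holder would have had to travel
 * implausibly fast to be present for both. */
static bool looks_fraudulent(const txn_t *prev, const txn_t *curr)
{
    double km    = haversine_km(prev->lat_deg, prev->lon_deg,
                                curr->lat_deg, curr->lon_deg);
    double hours = curr->time_hours - prev->time_hours;
    if (hours <= 0.0)
        return km > 1.0;          /* effectively simultaneous, yet far apart */
    return (km / hours) > MAX_PLAUSIBLE_KMH;
}
```
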
The second thing is that banks are quite aware of the problems that they face on the mobile phone. They're aware of the explosion in mobile malware. And so, they're taking significant steps to provide strong security in those apps, whether it's through the use of a proper immune system, like I've been describing here today, or other techniques. For example, looking to see if the phone is jailbroken or rooted, which would influence their ability to trust any transactions that come from that device.

So many banks just, you know, disallow any transaction that comes from a jailbroken phone, but you know, sometimes they’ll allow it anyway, because there are countries in the world where up to 50 percent of devices are jailbroken, and you’re not going to do banking if you can’t support that. But it just goes into their whole trustworthiness equation.
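[Editor's note: jailbreak detection in practice combines many signals, and the exact checks any given bank uses are not public. A common, easily defeated heuristic is simply to look for files that normally exist only on jailbroken iOS devices; the paths below are illustrative.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

/* A deliberately simple heuristic: probe for filesystem markers that tend
 * to appear only on jailbroken devices. A hit lowers the trust placed in
 * transactions from this device rather than proving anything outright. */
static bool device_looks_jailbroken(void)
{
    static const char *const markers[] = {
        "/Applications/Cydia.app",
        "/bin/bash",
        "/usr/sbin/sshd",
        "/private/var/lib/apt",
    };
    for (size_t i = 0; i < sizeof markers / sizeof markers[0]; ++i) {
        if (access(markers[i], F_OK) == 0)
            return true;   /* marker present: treat the device with less trust */
    }
    return false;
}
```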


Robert: Dan, thank you very much for your time and your insights. Thank you.

Dan: All right, you’re welcome.
