Cybersecurity, DLP and the Human Element: Raj Koo, Dtex
Lee Razo interviews Raj Koo, Senior Vice President of Engineering and Cyber Intelligence at Dtex, about the latest developments in cybersecurity and how cybersecurity and Data Loss Prevention (DLP) have to change fundamentally in order to stay ahead of new and quickly evolving threats, while also protecting employee and customer privacy.
…this notion of a data lake, that you can just pull in data from everywhere, put it in your data lake and apply these techniques on top of it, really is a broken approach.
It's about collecting the right data. And on the privacy side, collecting the least amount of data that you need to solve the problem. And both of those are really going towards actually solving the problem with less resource.
Welcome, Raj Koo from Dtex Systems.
It's a pleasure to have a chance to speak with you.
Raj is the Senior Vice President of Engineering and Cyber Intelligence at Dtex, which is focused specifically on the topic of cyber security - which, as a non-security-focused person - you know, I'm focused on all things IT and security's always in the background.
I just see it as a really huge topic.
I have a colleague who's fond of saying that security is never finished.
And, you know, we see it in the news, SolarWinds.
It's always a part of everything that we do.
But Dtex is particularly focused on a certain area, on workforce cyber security or cyber intelligence and data loss prevention.
Can you tell us a little bit about that and what that focus is on?
Yeah, fantastic, Lee.
Great to see you again.
So, you know, Dtex is really focused around the human element of cyber security.
And I think it's an element that often gets missed in all the buzz words around APTs and, kind of, advanced malware, and things like that.
There's always a human element behind every cyber security breach.
Whether it's insider threat-related or it's somebody masquerading as an insider to get internal access to your organisation, there is a human element behind that.
So, you know, when we look at workforce cyber intelligence, in particular for Dtex, we're trying to understand intent.
Those behaviours that lead up to an event, whether they're accidental or benign, or whether they are intentionally bad, you know, those pieces of intent often determine how you can act or recover from a cyber security event.
So, you know, it often gets overlooked in the world that we're in, and there's so much, kind of, focus on malware that, kind of, the modern cyber security industry has been built on that advanced persistent threat.
But I think it's time that we really look at a human behavioural-centric approach to solving some of these problems.
Yeah, no, it's really interesting because, indeed, you know - I mean, apart from working in IT, I also travel, or at least I used to, and it reminds me a lot of airport security. What you experience there is a very reactive mindset: something happens, and now you can't wear shoes through security, or you can't bring liquids.
It's never really proactive.
So, you know, this sounds like you're taking, kind of, this under consideration, so let's talk about that a little bit.
Why is intent so important to focus on and how can we actually achieve that?
Yeah. And if you kind of look at the industry around DLP, data loss prevention, if you think of the kind of keywords like "insider threat", what most organisations are looking at is, "I might have sensitive IP.
I might have customer data.
How is that going to leave my organisation?" And typically the industry has focused investment, cyber security budget, but also their technologies are looking at exfiltration.
So how does that data get out? What is sensitive? And what is the medium by which it leaves my organisation? The problem with that focus on exfiltration is that, while you may detect certain events that are really important to act on, you also pick up lots of things about people just trying to get their job done - using, you know, maybe a tool like Dropbox where they should be using OneDrive, or leveraging other tools that they shouldn't otherwise be using, but with good intentions.
And it's often those false positives that have led to, you know, a big problem in the industry: there's really no action that can be taken against those good-intentioned users.
And often a knee-jerk reaction ends up reducing productivity, operational efficiency.
But when you understand that intent for those truly bad actors - what you actually see is that an individual with really bad intent is often attempting to cover their tracks, trying not to get caught.
So those behaviours that relate to covering their tracks, it could be obfuscating data or hiding their internet activity.
Even doing research around how to bypass internal security controls, those kind of what we call "indicators of intent", really stick out like sore thumbs.
And then that's often used to try and determine what action you should take from there.
Often what you'll see with a data loss event is no attribution back to the human that might have contributed to that event, and that's because the why and the how of that occurrence is often missing.
So that's why intent is so important in our industry.
And we kind of look earlier on that kill chain. We often talk about the kill chain in cyber security: exfiltration is right at the end of the kill chain, whereas things like reconnaissance, circumvention and obfuscation happen really early on, and they allow you to predict some of these things long before the exfiltration occurs.
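As a rough aside, the idea of acting on early-kill-chain behaviours can be sketched as a simple score over observed stages. The stage names and weights below are illustrative assumptions for demonstration only, not Dtex's actual model:

```python
# Illustrative sketch only: stages and weights are made-up assumptions,
# not Dtex's scoring model.
KILL_CHAIN_WEIGHTS = {
    "reconnaissance": 1,  # e.g. researching how to bypass internal controls
    "circumvention": 2,   # e.g. using unsanctioned tools or channels
    "obfuscation": 3,     # e.g. renaming, compressing or encrypting data
    "exfiltration": 5,    # data actually leaving the organisation
}

def intent_score(observed_stages):
    """Sum the weights of the kill-chain stages observed for one user."""
    return sum(KILL_CHAIN_WEIGHTS.get(stage, 0) for stage in observed_stages)

# A user showing only early-stage behaviours is flagged before any data leaves:
print(intent_score(["reconnaissance", "circumvention", "obfuscation"]))  # 6
```

The point of the sketch is that a user can cross an alerting threshold on reconnaissance and obfuscation alone, well before any exfiltration event occurs.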
Yeah, that seems really important now because we're just getting more and more decentralised in all the technologies.
Like what we used to call "shadow IT", you know, what people would sort of do on the side, is kind of becoming the de facto now.
Like you mentioned Dropbox, OneDrive, I remember when we used to focus on, like, thumb drives, like, "Oh my god, you know, somebody could come in here with a thumb drive." And it seems so quaint now, you know? (overlapping conversation)
Oh, hundred percent.
And it's a case of whack-a-mole.
You know, the cyber security team, the SOC team are really busy.
But are you going to stop that data leaving the organisation? No, you know, history has shown that by just following that kind of exfiltration event, you're really just trying to whack things down, and eventually the data does leave.
When you have somebody that has really, truly bad intentions, they'll find a way to get that out.
You know, interestingly, we've just completed a study with the MITRE Corporation.
Many of your, kind of, viewers would be familiar with MITRE because of MITRE ATT&CK.
It's become synonymous in the cyber security industry, all of those adversary tactics that people can use to get inside your organisation, steal data, those kinds of things.
We've been working with MITRE on looking at the insider problem.
It's one of the largest studies that we - you know, from a Dtex and MITRE perspective, we believe it's the largest study of its kind. It took roughly 10,000 users, and out of those 10,000 users, 200 people were specifically commissioned to go and perform insider events - actually take sensitive data and distribute it to a third party - within an actual production environment.
So really interesting study.
But what it showed - they had 50 malicious users and 50 super-malicious users, and even though there were security controls in place, everybody with malicious intent that was given that task ended up completing it. They ended up getting data out.
They used many weird and wonderful means, some that we've never seen before, and some that we never knew were possible.
But everybody was able to conduct that event.
And what that really showed is, if you've got truly bad intent, regardless of the security controls, you'll typically find a way.
Yeah. People are creative, that's for sure.
And entrepreneurial, also in the bad ways, you know? (overlapping conversation)
Yeah, so that amounts to, I guess, the world's biggest white hat exfiltration exercise - one that you were able to get your data from.
And what this industry is missing - you know, if you think of insider threat as an industry or data loss, many solutions and technologies have built their product based on case studies.
You know, take that Edward Snowden.
You know, insider threat comes up, people think of Edward Snowden.
And they build the capabilities around those specific case studies, but if you just try and protect against the Edward Snowden, the reality is the next insider within your organisation is typically going to do something completely different. So you may put in controls that protect from an Edward Snowden, but you haven't employed an Edward Snowden - you've employed your own employees.
Understanding what is normal for your current employees and contractors and administrators is actually far more important than trying to think of case studies that have occurred in the news and trying to map controls against them.
Yeah, and actually that's the airport problem again.
It's very reactive, you know? Edward Snowden already happened.
Yeah. No, that's really fascinating.
So the other side of it then is how do you balance the requirement and the value of privacy and, you know, protecting the privacy of employees? Because, obviously, this is a very data-intensive sort of approach, especially, like, I'm in Europe, where GDPR, of course, is a big issue.
Not really an issue, it's actually very important, but it's a big challenge for a lot of businesses.
How do you manage that? How do you get the information you need and also protect and preserve employee security - or privacy, rather?
It's extremely difficult to get that balance right.
Any technology - and, really, our journey with Dtex - even though we were founded here in Australia, really early on around the year 2000, our journey to where we are today really started around 2008, when we moved the headquarters to London.
And, really, our customer base had always been, kind of, federal, government-based organisations, where, from a data loss prevention perspective, or from a monitoring perspective, we were monitoring absolutely everything. Content of files and web pages and all these sorts of things that you can do within a federal government kind of entity.
Then the GFC hit and, you know, there were fewer contracts for us - the government space dried up and we had to start to get creative.
What that meant was we started to look into, you know, the telco community, some advanced engineering organisations.
And we started to do work, especially in those really privacy-stringent countries, like Germany, Spain, France.
And what we were finding is - this was long before GDPR days, but the common theme within those privacy regulations was, one, employees have a fundamental right to privacy.
Very different philosophy in the U.S., which is very employer-centric.
But that fundamental right to privacy means that the monitoring that you can do on an individual has to be proportional to the risk that you face by that individual.
And that proportionality test has been, kind of, encompassed in GDPR.
But what that meant for our technology is we started to remove all of the content capture.
We took away screen capture, we took away keystroke logging, all of those more intrusive, kind of, surveillance monitoring techniques.
We actually pulled those out of our product, but we still wanted to get to the same conclusion that, you know, Joe Bloggs was intentionally trying to steal this piece of data.
We wanted to get that level of information, but we were doing it with a metadata audit trail, so we were having to infer what we knew beforehand with more, kind of, invasive techniques.
What that did as a by-product, it kind of allowed us to create what we believe now is one of the richest data sets in the enterprise today.
And that was really...
When 2014, 2015 rolled around, a lot of the big data technology started to become open-source - Hadoop and Apache Spark and Elasticsearch, all of those wonderful technologies which I wish we had invented, but we didn't. We've leveraged them to great effect because of the data set that we created.
Having the best data set, applying data science on top of that - and, really, people think of machine learning and artificial intelligence as the catchphrase, and this notion of a data lake, that you can just pull in data from everywhere, put it in your data lake and apply these techniques on top of it, really is a broken approach.
It's about collecting the right data. And on the privacy side, collecting the least amount of data that you need to solve the problem. And both of those are really going towards actually solving the problem with less resource.
You know, most organisations kind of throw countless data scientists at a specific problem.
And that's really kind of allowed us to, you know - while I work for a cyber intelligence company, it's really the data set that has underpinned everything that we do today.
Wow, it's a great example of how constraints can really drive innovation, you know? Having to take things out of your product and still come to the same conclusion is a real motivator.
You know, we kind of say, this was all by design.
But the reality is, we were kind of forced down this approach and it just, like, happened and, kind of, the stars aligned with a lot of these technologies becoming open-source, and it kind of was the perfect time to have the right data set.
Yeah, well, and so, you know, the approaches that you've taken, you removed the more invasive approaches to collecting data, which was a great step.
But in the end, you still have a lot of data.
How do you go about issues around protecting, like, personally identifiable information, things like that that you may end up having even if you collected them in a non-intrusive way?
Yeah, so along those same lines of proportionality - and you'll see Dtex very early on, this is still kind of pre-GDPR days, back in 2011, we actually filed for a patent around how - it's classified now as pseudo-anonymisation - how we pseudo-anonymise that PII.
When you think of PII - you know, people think of an email address, an IP address, or, you know, Social Security Numbers, or things like that.
But there's other information within metadata that could also be used to attribute a user.
If you think of your, kind of, Twitter handle - it could be just a random string of characters, but because we are already statistically baselining what is normal for an individual, we could start to tell that this random string of characters occurs uniquely with Lee Razo and your user account and doesn't really occur for anybody else in the organisation.
So that random string of characters - I've got no idea what it is - I may choose to tokenize and encrypt as well because, potentially, that could be used to identify Lee.
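The baselining idea Raj describes - a string that occurs uniquely with one user is treated as potentially identifying - can be sketched roughly like this. The function and data are made up for illustration:

```python
from collections import defaultdict

# Illustrative sketch: a token observed for exactly one user across the
# organisation may identify that user, so it becomes a candidate for
# tokenization. The activity data below is invented.
def tokens_unique_to_one_user(activity):
    """activity maps user -> observed tokens; return tokens seen for only one user."""
    users_per_token = defaultdict(set)
    for user, tokens in activity.items():
        for token in tokens:
            users_per_token[token].add(user)
    return {t for t, users in users_per_token.items() if len(users) == 1}

activity = {
    "lee": ["@some_handle", "sharepoint.example.com"],
    "raj": ["sharepoint.example.com"],
}
print(tokens_unique_to_one_user(activity))  # {'@some_handle'}
```

The shared SharePoint hostname stays in the clear, while the handle that only ever appears alongside one user account gets flagged for pseudo-anonymisation.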
So we have a broad patent around that and how we pseudo-anonymise that metadata, and then we allow a two-person approach.
Before you can de-mask any kind of tokenized PII - you can give one key to HR and one to Legal so that, you know, two people have to approve a request to de-mask that data.
What that really does in GDPR is it removes analyst bias.
So when you have a truly malicious, kind of, insider event and you're potentially going to have to prosecute an individual, it gives you that audit trail to show that your analyst had no bias against Lee - you know, there was no falsely creating a case against Lee because I have a bias against you.
There's no ability for your analyst to do that.
So, really, that pseudo-anonymisation is as much about removing analyst bias as it is around proportionality.
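A minimal sketch of that tokenize-then-two-person-de-mask flow might look like the following. The HMAC-based tokenization and the role names are assumptions for illustration, not Dtex's actual implementation:

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of pseudo-anonymisation with two-person de-masking.
# Construction and role names are made-up assumptions.
class PiiVault:
    def __init__(self):
        self._key = secrets.token_bytes(32)
        self._store = {}          # token -> original value
        self._approvals = set()   # roles that approved the current request

    def tokenize(self, value):
        """Replace a PII value with a keyed, deterministic token."""
        token = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()[:16]
        self._store[token] = value
        return token

    def approve(self, role):
        self._approvals.add(role)

    def demask(self, token):
        # Both key holders must approve before PII is revealed.
        if not {"HR", "Legal"} <= self._approvals:
            raise PermissionError("two-person approval required")
        return self._store[token]

vault = PiiVault()
token = vault.tokenize("lee@example.com")
vault.approve("HR")
vault.approve("Legal")
print(vault.demask(token))  # lee@example.com
```

Analysts only ever see tokens; the original value comes back only after both roles sign off, which is what produces the bias-free audit trail Raj mentions.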
Yeah, and I guess, what would've been my next question is that - and there's those cases where you do need to provide that data, so you have a way of doing that.
You know, if a court orders that the data actually be found, you then get the two key holders and you can actually produce it while protecting all the private information.
Okay, that's great.
That's an important approach.
And, again, that really comes back down to the constraints that you were working with prior to GDPR and other things.
And another thing you've mentioned is that - and I think probably goes to constraints as well, is that Dtex itself has moved from different countries, right? You were founded in Adelaide in Australia, just mentioned that you'd moved to London for a time, and now you're based in Silicon Valley.
Each one of these areas are completely different, or let's say, very different.
There's some in common, and different rules and different considerations.
And, I guess, each one of these is a set of constraints that's helped shaped your product.
What's that experience like? What was it like moving the business around - you know, yourself, living in all these different places?
Yeah. I think what the biggest thing it's showed me - I had lots of, kind of, preconceived ideas about what Silicon Valley was like.
You know, you kind of watch the TV show - I'm sure anybody that watches the TV show knows - like, I wish I had watched it before coming here.
It's kind of like a guide book.
It's unbelievable how many of those things even occurred within our own business.
But what you realise - what's different about Silicon Valley in comparison to London, and different about Adelaide - initially I would've thought, kind of, the talent, the individuals, the brains behind these things, you know, the data scientists coming out of the likes of Google and Facebook - you know, I had this misconception that people were smarter.
And that's really not the case.
You know, I look here and Adelaide, I look in London.
Even, you know, we had a number of engineers in Poland, and things like that.
Actually, the talent is really distributed around the globe.
And in many different countries - there's no one country where I can say, "Oh, that group of people is smarter than another group of people." I think in Silicon Valley, the culture around taking risks and the investment - you know, investors in the VC community and their ability to take risks is far greater than the VC investment community in London, and then far greater than our investment community in Sydney as well.
So I think what makes Silicon Valley unique is that kind of appetite for risk taking and adopting new ideas.
But often we found - you know, I'm back here during COVID times.
I'm back here in Adelaide as well.
Like a lot of the data scientists that we have here in Adelaide - and we've got one of our leading universities across the road, Adelaide University.
We're next door to the Australian Institute of Machine Learning.
The talent that we have here rivals, and often betters, that coming out of Silicon Valley as well.
So, you know, one of the misconceptions was, one, you need to be based in Silicon Valley and then you need to, kind of, hire people from the Bay Area.
That really wasn't true.
Now, especially with COVID, we hire people wherever we can find them and the right people for the job.
People - you know, time zones are taxing, definitely.
If we could flatten all the time zones, this would really be, kind of, a global community.
I think that's literally the only limiting factor in sourcing people from around the world.
Yeah. I guess it's proof the world isn't flat.
Yeah. We'll work on that.
No, it's true what you're saying about Silicon Valley.
I came the other way around.
I kind of started my career there, and then came out to Europe.
I feel like there were a couple of differences, like you mentioned.
You know, and, one, I think, in the Silicon Valley, you're very much on the - let's call it the "sell side".
You know, everybody's focused on the product, the technology, the coolest thing.
You go out socialising, people are talking about Linux and whatever.
And then once I left the Silicon Valley, I was very much on the buy side, hanging out with customers who are really only interested in what problem they're going to solve with it.
You know, they're not interested in the nuances of hammers and screwdrivers.
Yeah. And I really like it because, to me, it really gets me closer to the why, rather than the what and how.
And so, yeah.
So apart from, you know, having moved around with the business, there's also been quite a history and different inflection points, I think, in that time.
Things, you know, totally out of control, like the financial crisis in 2008, which, you know, you guys have been working long before that on this.
How did that event affect your business and your technology?
It just meant we had to kind of redirect.
You know, for our company, for really the first, kind of, 10 years of its existence, was kind of a niche company serving very few, kind of, government-based customers.
And that allowed us to kind of get quite comfortable, but, you know, we were really an unknown entity.
We had developed things based on, you know, customer requirements, and we just met those customer requirements and were quite successful in doing that for many years.
But it kind of took us out of that comfort zone, forced us out of that comfort zone, and actually started to, kind of, streamline the product.
You know we had, back in those days, quite a bloated product.
If you think of traditional DLP products, you know, people think of Symantec, people think of McAfee DLP.
These are really kind of old-school, traditional products.
But if you look at the products that came before those - there was a product called Vontu that was being developed very much like, you know, Dtex in the early days, and that was acquired by Symantec.
There was another product called Oakley Networks that was acquired by Raytheon that became the Forcepoint product.
And that was being developed very much like Dtex in those early days.
You know, 2008, that global financial crisis really threw us and made us break the mould.
And because we weren't based in the U.S. at that time, it meant that we had these privacy regulations kind of forced down on us at the same time and it made us kind of go down that metadata path.
I think if that hadn't happened, we probably would've never gone down that metadata path and we would've ended up like a traditional DLP solution, much like those other technologies - if we had made it at all.
So I think it kind of forced us down a route - in the early days we probably wouldn't have been able to see that, you know, data science, machine learning, AI would become such a key piece in our industry.
And it's really the data set that, I think, kind of - it really breaks the mould.
You know, for us, insider threat, data loss, cyber security are really just the tip of the spear.
When you have, you know, an extremely rich data set and apply a lot of statistical analysis and machine learning on top of that, there are really very few problems you can't solve.
So, you know, I think that's really started - it kind of forced the company down the trajectory that we're still on today.
Yeah. And that's been another trend, I think, in the last several years, this whole emergence of machine learning and all these new techniques of analysing data.
I mean, data science has been there probably since the '50s in some form, but what's happened in the last five, six, seven years has really been incredible.
How have you guys taken that on and how have you, you know, embraced that into the product?
Yeah. So one of the interesting things - when we landed, you know, Series A investment, we moved the headquarters to Silicon Valley.
You know, one of the first hires in Silicon Valley was a renowned data scientist coming straight out of Google.
You know, we wanted to get those credentials straight away out of the gate.
And what we realised early on is a data scientist is such a - you know, you put that in front of your title in Silicon Valley, you can add another zero to the end of your pay packet.
But it doesn't necessarily mean you're going to deliver results.
There are so many, kind of, data scientists in the worlds of Google and Apple and others that have all the tools and the capabilities under the sun, and they can do these kind of, you know, never-ending research projects, but they're not necessarily forced to deliver an end output, you know, within a given time frame.
And what we found out, you know, really early on is that, you know, if you have the right data set and if you have the right tools in place, those individuals that have an understanding about what other business problems you're trying to solve, you can often solve lots of problems just through basic statistical analysis.
You know, the mathematicians coming out of advanced Mathematics degrees are often far more effective than some of the world's most renowned data scientists.
So, you know, I kind of take the data science, kind of, credential with a pinch of salt these days.
Anybody with a Statistics background, we often give a lot more weight.
Those that are used to solving real-world problems.
You don't have to have, you know, the best algorithm. The best algorithm doesn't win. It's who solves that problem by the simplest means possible.
Yeah, I think there's also still a very broad misconception about what data science actually is.
You know, there's so many aspects.
It's such a broad term.
And it has been a bit of a buzz word.
You know, I think we're coming towards the end of a hype cycle on that.
There is something very real underneath it, but, you know, you have to dig deeper.
So it's great.
You guys really have to stay focused on the actual problem you're trying to solve.
And, you know, that topic of buzz words, another one I hear a lot is "zero trust".
What does that actually mean? I've heard that in a lot of different contexts.
Yeah. That's a common term that kind of gets construed to mean different things.
Obviously, when an employee hears "zero trust", especially coming from the security team, their kind of knee-jerk reaction is that "The company doesn't trust me." But zero trust started as a philosophy that your organisation is already breached.
If you're a CISO or a CSO in an organisation, you need to start with the philosophy that your organisation is already breached.
And how you instrument your security controls, your policies, your technologies based on that posture is fundamentally different than trying to protect your organisation from being breached, if that makes sense.
For a lot of the technologies that have started down that approach, and those, kind of, CISOs and CSOs that adopt it, what we see is that it's really about getting an understanding of what risk is in that business, and only granting rights based on that continuous risk profile.
It's not about not trusting your employees.
It still is about trusting your employees, giving them the tools and the things that they need to do their job, but if there's anything that becomes outside the normal or that, kind of, zero trust score comes down, reducing the impact that any one individual can have on your operations, on your business.
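That continuous-trust idea can be sketched as a toy access decision, where each request is gated on a live score rather than a one-time login. The thresholds and actions here are illustrative assumptions:

```python
# Illustrative sketch: gate each access on a continuous trust score.
# Thresholds and action names are made-up assumptions for demonstration.
THRESHOLDS = {"low": 0.2, "medium": 0.5, "high": 0.8}

def decide(trust_score, sensitivity):
    """Return an access decision for a resource of the given sensitivity."""
    required = THRESHOLDS[sensitivity]
    if trust_score >= required:
        return "allow"
    if trust_score >= required - 0.2:
        return "step-up-auth"  # re-verify rather than hard-block productivity
    return "deny"

print(decide(0.9, "high"))   # allow
print(decide(0.65, "high"))  # step-up-auth
print(decide(0.3, "high"))   # deny
```

A dip in the score doesn't immediately block the employee; it narrows what any one individual can do, which matches the "reduce the impact" framing above.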
So, really, it's that philosophy.
It's misused by a lot of vendors, ourselves included, to mean lots of different things based on what products the cyber security community is trying to sell you - and, you know, cyber security's extremely noisy at the moment.
It's a really hard job for CISOs and CSOs to try and cut through the noise and work out "what it is that I actually need to solve the problem".
But, actually, at its basis, you know, that philosophy of zero trust really does work, and that's why it has resonated through the community.
But now it's starting to mean different things to different people, and that misconception is what we're trying to, kind of, break through.
I mean, I guess, as an optimist, it sounds, really, like what you're doing is you plan for the worst-case scenario, not with any specific person in mind, so that, in fact, you're free to trust your employees, you know, without any worry, right? It's like an employment contract.
You know, all those clauses in there are not because they think you're going to do all those things, but it's because everybody gets it, now we can just relax and trust each other, right?
And, you know, the likes of SolarWinds and this Microsoft Exchange attack, they just show that even the most sophisticated organisations, even, you know, security organisations themselves, are susceptible to breach these kind of sophisticated supply chain attacks.
When they happen, there's very little that you can do to stop them actually getting a foothold in your organisation.
But if you do follow that zero trust mentality, you will have a chance to recover, you will have chance to reduce the damage that actually gets done, you will be able to respond quicker.
So that thought, that kind of preparation that you're already breached or you're very likely to be breached, allows you to kind of adopt your processes, your procedures much more effectively than trying to say, you know, "I'm going to deploy all of these defensive security solutions and I'm not going to get attacked." That's just not the case in today's world.
As a company, you are going to be breached.
How are you going to deal with that? What are you going to do when you are?
Yeah. SolarWinds is a great example.
That's another example of how, when somebody has a bad intent, they will find a way.
And, in fact, they also benefit from constraints.
The more constraints you put, the more creative they get.
You know, and so.
Yeah, that was a really devastating attack, so it's very important to...
What you're saying about taking this approach.
And in the more current times, I guess the trend now, the thing we'll probably be talking about years from now, is what's happened in the last year with COVID and working from home.
How have you seen that from a privacy, from a security, from DLP perspective? How does that look to you guys?
I mean, it's really pushed - especially the human angle, if you think - we kind of use a term called "the human firewall".
For many organisations, it's kind of gone from using and relying on corporate IT and their corporate network, to now everybody being on their home IT - they're their own IT administrator as well. It's put a lot of security controls and, you know, trust in the end employee to manage their own network.
One that we've seen - you know, people have been very effective in adapting to remote work, and everybody has been quite creative in how they've been able to get their work done.
But from a security perspective, it's really increased the attack surface.
It means that there is a chance for phishing emails and ransomware attacks and all of those bad things happening that the prevalence has really only increased.
For a cyber security company, it means business is good, business is booming.
But what it means from a practical standpoint for our organisations is it just got extremely difficult to protect every person's network.
It really means that perimeter has gone, that old-school firewall and network perimeter that we used to invest in and protect, like we would the walls of our, kind of, office buildings, it's just gone.
And it's been a positive change, I feel, from, you know, companies adopting ways and means to kind of allow their end users to still get their job done.
But it's kind of opened the eyes of many organisations that were really traditional, bricks-and-mortar organisations, and the perimeter has essentially dissolved.
It means, from a cyber security perspective, the endpoint is now more important than ever.
The closest point where users are connecting and interfacing with corporate applications - that's more important than ever.
But you can kind of see that the cyber security industry's booming at the moment.
So vendors have benefited from it, but companies, I think, can benefit from it, too, if they adopt the right kind of approaches to enabling their employees and not really trying to restrict them as much as well.
So what advice would you give to, like, a veteran, seasoned security professional who's been at this for, you know, 20 years, seen it all, done it all? What should they be thinking about differently? I mean, I guess what my colleague said, "Security is never finished" is just confirmed in this conversation, you know? What is the sort of a mindset shift that somebody who's really experienced should think about and the new things that they should really look into now given those (overlapping conversation)?
I mean, security has to be an enabler.
It's engaging your workforce.
It's educating your workforce.
It's empowering your workforce to do the right thing.
At the end of the day, that well-educated, engaged workforce will do more to protect your company than you can do on your own as an individual, as a CISO or head of security.
And if you're still of the mindset that you have to lock things down and protect your walls a hundred percent - from a regulatory perspective, we have many banks and the like that have to tick the boxes around the requirements from that perspective.
But if you're not doing enough to engage your workforce and bring them into the tent and feel that everybody is a part of the security team, then you're really not doing enough in today's day and age, and there's, you know, issues that are going to arise.
You see, the CISO, the CSO, it's a job where many people are only in there for two years, three years, and they move around and that's because it's a very high-risk area of the business.
And you've got to get in as part of your business and enable your employees and start to see them as solutions to the problem, rather than the problem itself.
Yeah, I love that.
That's basically viewing the workforce as your ally in this, rather than as a part of the problem.
You know, I think it seems obvious when you say it, but sometimes we don't think about those things, that, in fact, there's power in numbers, and you have a whole workforce on your side as well that wants the same thing.
Easier said than done.
It takes time - but for many of the individuals that we work with, you can see these last, kind of, twelve months have been kind of painful.
But once you see them start adopting some of those approaches, it kind of - one, it can give end users more flexibility.
But, two, you're kind of loosening those controls while still being able to detect certain things as well - it is really empowering.
Yeah, no, that's great.
So, speaking of buzz words, what we were talking about earlier, so, you know...
I work for CloudNativeX and, you know, despite that, in my opinion, "cloud native" is both true and also a buzz word.
You know, what is cloud native? Now, I have my opinions about it, but I'm curious.
This is like my standard closing question with all my interviews now, is what does that word mean to you, "cloud native"?
Look, the reality is, cloud native, for me, is more important than ever.
You know, where the network perimeter ceases to exist, I think developing technologies that were born to live, expand and contract within the cloud, is really important.
Now, cloud native to Dtex means the ability to, kind of, have a platform that can grow and shrink and be in different regions of the world all at the same time.
And it's really moving away from - especially a data platform like ourselves, that traditional data warehouse, you know, that kind of odd, kind of, you know, Oracle or Microsoft SQL world where you stand up different databases and you connect them through, and you have, you know, your disaster recovery plans, and, you know, data centre here, and a data centre over there, and you have your failover plans - that traditional, kind of, data-centric view and that data warehouse of the world has changed.
Being cloud native means you can expand and contract, you can live in - it's yours, you can live in AWS, you can live in GCP, and still be able to expand and grow and minimise your cost.
But really live anywhere that you can in the cloud.
It is actually...
It's a difficult thing to achieve.
Often, for many organisations that have security platforms, there's a lot of, you know, kind of, engineering debt in having to transform your whole platform to allow that, and people would try and kind of put on, you know, point fixes.
I think for Dtex early on, we realised that, you know, for this to succeed, we have to be cloud native.
We have to live in the cloud.
And that meant a massive transformation.
It took the better part of two years, in fact, to just transform our stack and, kind of, let go of legacy technologies that weren't going to allow us to expand.
So, you know, I think the architecture for cloud native solutions is so important.
And then, you know, it's one of the buzz words that can mean different things, and people claim to have it when, really, they don't.
But I think it's been a really important part of our business.
Yeah, it's great.
And I think it just goes to show that what you're doing is going to be even more and more necessary.
Because, you know, as the world gets more decentralised and more out there, these things are just going to become more and more of a challenge.
I think that's (overlapping conversation).
Yeah. So I'm really interested in this report you mentioned, though, with the MITRE Corporation.
Will that be available? And if so, when and how can people follow up on that?
So really good question, you know.
It's very much dissimilar to how MITRE ATT&CK is a really, kind of, public distribution of adversary tactics - the full report will only be shared with, kind of, very specific organisations, just because of some of the behaviours that have been identified and the things that act as precursors.
You know, if you start to publicly advertise those kinds of behaviours as traits, you kind of give away your secret sauce, and insiders will change their behaviours inherently.
So there will be distribution in more of a closed-door, closed-community way - kind of, sharing between many of our critical infrastructure organisations, and especially organisations within the Five Eyes community as well.
But some of the things that we're seeing that are coming out of that are really eye-opening.
I think, you know - the study isn't finalised and we'll let, kind of, MITRE, kind of, publish those results.
Dtex was a participant in that.
We are also - you know, I'm sitting here in the Australian Cyber Security Collaboration Centre.
They were a partner in the study as well.
But what it really showed is that where there's a will, there's a way.
And, really, it's kind of proven that intent piece. Rather than trying to work out and detect all the different ways that people could steal this data - data they were given approval to steal, but were told not to get caught taking - it was actually the act of not getting caught that gave them away. The ways in which people were trying to, kind of, evade detection were far easier to detect than the theft itself.
So it really kind of puts some importance around there.
And there were some algorithms that we looked at, and there were behaviours that, when you combine them, allowed us to identify those individuals - combinations that were really, kind of, you know, not intuitive at all.
You know, when you looked at the data, you wouldn't think that, you know, certain behaviours when combined would kind of positively detect, you know, a whole range of these different individuals.
But there were some behaviours that we found that detected, kind of, 98% of these individuals doing bad things.
So it kind of showed that if you know what to look for, you can really effectively identify this type of thing in the world.
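[Editor's note: the actual indicators from the Dtex/MITRE study are closed-door, but the general idea Raj describes - individually weak behavioural signals that become a strong detector only in combination, with detection-evasion behaviours carrying the most weight - can be sketched roughly as follows. The signal names, weights and threshold below are purely hypothetical illustrations, not the study's real indicators.]

```python
# Hypothetical sketch: combining weak behavioural signals into a composite
# insider-risk score. Signal names and weights are illustrative only.

# Each signal is a boolean observed in a user's activity record, with a
# weight; evasion-related signals are weighted most heavily.
SIGNALS = {
    "off_hours_access": 1.0,       # activity well outside normal hours
    "file_renamed_pre_copy": 2.0,  # renaming files just before copying out
    "uncommon_exfil_channel": 2.5, # e.g. printing or SD card instead of USB
    "security_tool_probing": 3.0,  # checking what monitoring is installed
}

ALERT_THRESHOLD = 4.0  # would be tuned on labelled data in practice


def risk_score(activity: dict) -> float:
    """Sum the weights of all signals present in one user's activity."""
    return sum(w for name, w in SIGNALS.items() if activity.get(name))


def is_suspicious(activity: dict) -> bool:
    # No single signal crosses the threshold on its own; it is the
    # combination - especially detection-evasion behaviours - that fires.
    return risk_score(activity) >= ALERT_THRESHOLD


# Renaming files alone stays under the threshold; paired with probing the
# monitoring stack, the combined score crosses it.
benign = {"file_renamed_pre_copy": True}
evasive = {"file_renamed_pre_copy": True, "security_tool_probing": True}
print(is_suspicious(benign), is_suspicious(evasive))  # False True
```

A real system would learn such combinations from data rather than hand-pick weights, but the structure - scoring evasion behaviours above the exfiltration act itself - mirrors the point made above.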
Yeah. Well, I guess the important bit is that your product and your customers will benefit from that experience and that knowledge that you gained from participating in it.
And it's really fascinating that, indeed, intent, psychology, I mean, that becomes more and more central as the technology part gets more and more complex and diverse and...
You know, one story I remember back in the - I don't know, when USB sticks were the big thing, that one of the breaches was done via a recordable CD, a CD-RW, that nobody thought to look at because everybody was focused on USBs.
They actually went further back in time in technology, so.
And printing - like, during COVID, we're getting a lot of incidents of really sensitive information being printed on home printers.
It's kind of very old-school, but, you know, some of the methods that people use, especially in lockdown environments - those kind of old-school capabilities, you know, SD cards and PCI cards and things like that on legacy devices that people don't really think about - are some of the weird and wonderful ways that people find to get data out.
Yeah. That's great.
Well, it was a very valuable topic.
And I really appreciate you taking the time to talk about it.
I've learned a lot, even.
You know, even doing all my research ahead of time, there's still more to learn.
So I look forward to working with you some more and maybe having a follow-up at some point.
We'd love to get into it some more.
So thanks very much, Raj.
And speak with you very soon.
Thank you very much, Lee.