Distributed Cloud Computing and the Next Big Wave of Innovation: Jonathan Seelig, Ridge

Lee Razo interviews Jonathan Seelig, co-founder & CEO at Ridge and co-founder of Akamai, about distributed cloud computing and the next big wave of cloud innovation.

…we see this tremendous opportunity for applications that are in fact going to want much more distributed infrastructure than is available to them.

We see the ability of application owners to really just, sort of, quite dynamically say, "Here are all the places around the world where I need stuff to function. Well, make that happen for me." And that, to us, is the promise of cloud computing.

Jonathan Seelig, Ridge

Video Excerpts

Keeping up with rapidly growing cloud computing demand.

What does 'cloud native' mean?

The Evolution of the Data Center

Delivering on the promise of cloud computing

What do car sharing and cloud computing have in common?

How Akamai managed to pivot after the dot-com bust

Full Transcript


It is my pleasure to welcome Jonathan Seelig, co-founder and CEO of Ridge, which is a distributed cloud platform enabling cloud workloads to be distributed quite literally anywhere in the world.

We're going to talk about that as well.

But in addition, Jonathan was one of the original co-founders, way back in the late 1990s, of a little company that probably a lot of us have heard of called Akamai Technologies.

Now, the story of Akamai in and of itself is pretty amazing, but I actually see a really strong connection between the problem that Akamai was founded to address and the one that Ridge is solving today.

So, I wonder if we could start with maybe just talking a little about that.

Can you tell us a little bit about that connection and what Ridge is doing?


I'm very, very happy to do so, Lee.

Thank you for inviting me to be on this platform and on this podcast.

I'm happy to chat about all the stuff that we're doing at Ridge and things that we did at Akamai before that.

I guess, maybe, chronologically: in the late '90s, as you said. And you had a chuckle when you said "the late '90s", as though that's so, so long ago that nobody was alive at the time.

So, when the dinosaurs roamed the earth in the late '90s and the internet was sort of in its early days, content providers who wanted to build a large-scale website that was popular and had lots and lots of visitors did so in very centralised ways.

They would often buy their own servers, stick them in a data centre somewhere, and you would have one, single physical point on the earth where every single user needed to traipse back to make their web request for your content. At Akamai, what we did in the late '90s is we introduced the idea of the content delivery network or CDN.

And what CDNs did is effectively allowed content owners to automatically really put their content in lots and lots and lots of different places around the world, thereby improving the scalability, the performance, and the reliability of their websites for users all over the world.

What it meant is that a user in Tokyo all of a sudden was getting the CNN website from servers in Tokyo, as opposed to servers in Atlanta, Georgia.
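The proximity argument can be sketched with a rough back-of-the-envelope calculation. This is an illustrative sketch, not from the interview: it assumes signals travel at roughly two-thirds of the speed of light in optical fibre and ignores routing detours, queueing, and server processing time.

```python
# Lower-bound round-trip-time estimate for a single request.
# Illustrative only: assumes ~2/3 of c in fibre, straight-line paths.

SPEED_IN_FIBRE_KM_S = 200_000  # roughly two-thirds the speed of light

def min_rtt_ms(distance_km: float) -> float:
    """Physics-imposed lower bound on round-trip time, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1000

# Great-circle distance Tokyo <-> Atlanta is roughly 11,000 km;
# a server in the same metro area might be ~50 km away.
print(f"Tokyo -> Atlanta: {min_rtt_ms(11_000):.0f} ms minimum per round trip")
print(f"Tokyo -> Tokyo:   {min_rtt_ms(50):.1f} ms minimum per round trip")
```

Since a single page load can involve dozens of round trips, that per-request gap compounds quickly, which is why serving content locally matters.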

And, you know, it's pretty obvious why that's a better solution for the content owner.

One of the really cool things that happened as a result of the CDN coming into being is that all kinds of new content-based capabilities came into being.

There were just certain things that you couldn't do from a centralised infrastructure and there was no way that a centralised infrastructure was going to scale to enable the YouTubes and the Netflixes and the Twitches of the world.

It just wouldn't be possible.

And so what the CDN effectively did is give - you know, is create an infrastructure that allowed for some of these content innovations to happen subsequently.

Well, fast-forward at this point, you know, almost 25 years, 20-plus years, to where we are today. Today we're in a world where application owners who have compute-based capabilities have moved from what they used to do, which was buy servers and stick them in a data centre. Many of the new applications, if not all of the new applications being developed, are really thinking about being cloud-native or cloud-first, and starting with an architecture that is going to be deployable across some of the public cloud services that are out there today.

A brand new application getting started isn't thinking about buying HP servers, sticking them in a rack in a data centre, and running its own code on them.

They are thinking about whether they want to be on AWS or GCP or Azure or DigitalOcean, or Oracle, or any of the existing public cloud platforms. And just like when we started Akamai, the fundamental deployment model most application owners use is a pretty centralised deployment model.

They're picking a facility that they're going to run in in that public cloud.

And that's all good and well for certain applications.

But there are a whole new set of cloud-native applications that are looking for a much more distributed infrastructure.

They know that they need to get some level of proximity to their end user for the application to actually do what it's designed to do.

And so what that means is that you've got these application owners trying to build Edge AI and Edge ML, AR and VR, and autonomous control for drones or low-latency games.

Or you have a set of application owners who have particular data sovereignty and data residency concerns.

They work in an industry where they must comply with a particular regulation in order to be allowed to run in a particu - you know, to run an application in a particular place.

And so with those two, sort of, constraints that exist in the modern application, you know, kind of architectural world, those application owners need to figure out how to get themselves deployed in more than just a single location.

And that's what we enable at Ridge.

What we enable at Ridge is we give application owners a public cloud capability, systems that look just like the managed container and managed Kubernetes and managed object storage capabilities that the large public clouds provide.

But we give the application owner the ability to check a box in any of a hundred different locations right now around the world where they would be able to run that particular workload.


That is - especially now as, you know, all these IoT and other applications are starting to emerge.

But, you know, the - I think the perception of the cloud is that it's sort of this magic.

You know, in fact, I think the word "cloud" came about because we used to draw, in those network diagrams back in the prehistoric '90s, a picture of a cloud, which just meant "the network".




So this whole problem of, like, data locality and having workloads in all these different places, why is that such a difficult problem to solve and why aren't the public cloud providers able to solve that themselves?


Yeah. Well, so, you know, the abstraction that we create to understand things...

Just because we all drew a cloud on a piece of paper and said, "Okay, this is what the network looks like," doesn't mean that it actually looks that way.

At the end of the day, most people don't actually understand the underlying technology that makes something that they are using work and most of us don't understand what's going on inside of that little guy, you know, to make phone calls.

Most of us don't understand what is going on in the network to make these communications happen.

If you ask the vast majority of people who use the internet today, "Does it work better than it did in 1997?", it's, "Yeah, it works much, much better." But if they had to explain why, they couldn't tell you that it's because of, you know, increased throughput at the edge in terms of broadband connectivity, plus new network protocols, plus the CDN, plus additional fibre capacity. They couldn't explain why it is.

They just know that it is.

So the fact that this idea of the cloud has been abstracted to look like it's just everywhere...

Well, it's accessible from everywhere, but that doesn't mean that it's equally accessible or accessible with the same level of performance from everywhere.

And so, you know, the people who are really thinking about this problem are very often the DevOps teams or the, kind of, infrastructure engineering teams at the application companies who are tasked with making their application work well.

The second part of your question is why can't the hyperscale clouds, the Amazons, the Microsofts, the Googles of the world, just sort of say, "Great, we're going to go everywhere"? Well, you know, what happens is that they do in fact expand, and they do in fact acknowledge that they need to be in a lot more geographies than they started in in order for this thing to work well.

So, they never said, "Hey, we're going to be able to make the entire application infrastructure of the world out of Seattle." And when you look at the expansion that they've had over the last number of years, the hyperscale clouds tend to expand by two or three geographies every year.

That's their plan, and that's how they move. And when they do move into a region, they build very large-scale infrastructure in that region to support all of the hundreds of services that they offer and to support the large customer base that they, you know, expect to bring in there.

They are building, you know, tens and tens and tens, if not hundreds, of megawatts of data centre capacity in new places that they would like to go.

Our view is that application owners should absolutely be leveraging those facilities and the underlying technology stack that the hyperscalers have deployed in those facilities.

And, at the same time, the world's a really big place.

And if you want to get close to users in lots and lots of other places around the world, the way that you want to do that is by having access to the best networks in terms of their space, power, connectivity, and capacity in particular markets.

I was on the phone just yesterday with a data centre group in Morocco.

There's a big internet user population in Morocco and I don't think AWS is getting there any time soon.

And we have a partnership with a data centre operator in India.

We're currently in seven different locations around the country.

It turns out that being in seven geographies in India, going to probably 50, is a lot better than being in the one that the, kind of, large cloud companies might end up giving you.

And so that reality that these markets do in fact have outstanding service providers who want to be in that game of delivering applications and delivering to end users is what we are, kind of, riding on at Ridge.

Our business is based on partnering with those service providers in lots of different markets around the world to utilise the infrastructure that they have in those markets to provide these types of services.

That's what our business is based on.

We are not building a cloud in that we are building data centres, buying servers, and deploying them.

We are building a cloud through partnership with service providers in lots of places around the world.


I see a couple of interesting parallels.

This may sound like a tangent, but in my mind, it's really related. Like, if anybody has ever spent time in a developing country, in Africa or someplace like that...

I've done this in the past.

I learned two things.

One is we take infrastructure for granted here in the West, you know.

The streets have always been paved; it's as if they just came from nature.

And then you live somewhere where you actually see what happens when it's not taken care of and I think that goes to your answer about the cloud picture.

You know, drawing a picture of a cloud versus knowing how it actually works.

You know, can you tell me where your water actually comes from, right? And that's that part.

And the other part, which I think is what I'm starting to pick up from this story is, if you've been in a developing country, the hardest problem is last mile distribution of just about anything.

I can always get goods into a country, but to actually get them out to the farther reaches of the country is a different problem, and it sounds like Ridge is kind of doing something like that for cloud services and resources.


So, you know, the last mile problem in connectivity and in the network world is one that has been talked about and that has played into so many different technologies.

So, in the early days of Akamai, one of the things that some of the, sort of, doubters about the CDN technologies would say is, "Well, won't ubiquitous broadband and lots and lots of additional last mile capacity just solve this problem?" And our answer was, "No, no, no.

That's going to make this problem much worse." If you give people a much bigger on-and-off ramp to a freeway, but you don't expand the freeway, you're not going to fix the problem.

And if you give people a lot more connectivity to the network, what you're going to generate is a lot more demand.

And if you still need to traipse all the way from, as in that first analogy, Tokyo, all the way back to Atlanta to grab CNN.com content and bring it back to Tokyo, and now you have people wanting to do it more and more and more, wanting to do it at higher speeds, at higher resolutions, for more hours per day, all you're doing is making that problem worse.

And so that solution of putting the supply of - in the case of Akamai, the content, in the case of Ridge, the application infrastructure - in proximity to that extended last mile capacity is the fix.


It does come back to an infrastructure mindset.

You know, you need to build the infrastructure for that demand which ultimately should be good for all the providers of the applications and content and data, you know, just like with Akamai.

As you're building out that capacity, you're actually building demand for whole new segments of industries.





So that makes perfect sense.

I think that's a great approach.

I could see, like, the most obvious use cases that come to mind are anything that would involve streaming or gaming and things like these. But what about more traditional things like, you know, manufacturing or retail?

How can they benefit from this approach?


Yeah. So one of the things that the, kind of, proliferation of compute, and more and more devices in more and more places out towards the edge of the network, causes is a lot of data production and a lot of, kind of, source data starting to get collected and created further out towards the edges of the network.

One of the things that people are certainly becoming aware of in the 5G world is that as soon as you put, you know, 200 megabits per second of upload capacity in every single person's pocket, or you expect every sensor on a factory floor, every IoT device, to have massive levels of connectivity to bring data back into the network, you start to deal with the problem of how to bring so much data back into the core of the network without saturating its ingress capacity.

And so we've talked a little bit about - in Akamai, in the early days, was solving the "how do you get content from the network to the edge device" problem.

And we've been talking a little bit in the conversation about the question at Ridge of "how do you get application data to an end user with low-latency or in a particular geography".

But there is, in fact, also, as you just suggested, this other problem that's starting to show up more and more, which is, how do we deal with preprocessing of data so that we're not saturating the core of the network on the way in? How do we, at the edge of the network, deal with data reduction from these devices that are capturing so much out in the field now? And that's absolutely, we think, going to be one of the additional drivers of this kind of edge computing or distributed computing use case.
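As a rough illustration of this data-reduction idea (a hypothetical sketch, not Ridge's actual implementation), an edge node might collapse each batch of raw sensor readings into a small summary record before anything crosses into the core network:

```python
# Minimal sketch of edge-side data reduction (hypothetical scenario):
# rather than forwarding every raw sensor reading to the core network,
# an edge node aggregates each batch into a compact summary record.

from statistics import mean

def summarise(readings: list[float]) -> dict:
    """Collapse a batch of raw readings into one summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": mean(readings),
    }

# A factory sensor sampling at 1 kHz produces 1,000 readings per second;
# the edge node forwards one four-field summary instead of all of them.
raw = [20.0 + (i % 10) * 0.1 for i in range(1000)]
summary = summarise(raw)
print(summary["count"], round(summary["mean"], 2))
```

A real pipeline would be far more sophisticated (filtering, compression, even ML inference at the edge), but the principle is the same: ship summaries toward the core, not the raw firehose.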

I'm sometimes a little bit loath to use terms like "edge computing" because it's a...

You know, sometimes I feel like we should start these conversations with a little glossary of terms at the beginning because a lot of these terms are used very often, but are used quite differently by different people in the industry.

So sometimes I feel like the jargon hurts us rather than helps us.


Yeah, they do.

It's a double-edged sword with, I guess, jargon, with buzzwords. On the one hand, they help focus; I think at the beginning, they're very useful.

You know, like "big data".

That actually meant something once.

And then slowly it becomes a little bit of everything and nothing at once, you know.

And, in fact, actually, you know, that's a good prompt, because one of the questions that I ask everybody is: what does cloud-native actually mean? What do you think that term (overlapping conversation)?


Oh, that's a very good question.

So I think that cloud-native - when we talk about cloud-native applications, we're talking about applications that were engineered with the idea that they would always run exclusively in the cloud.

So they are applications that are engineered to run on the stack that one sees from public clouds today. A cloud-native application will not have any dedicated hardware sitting in a dedicated rack in a data centre driving, you know, the database or the storage or the compute for that application.

So, to me, a cloud-native application is an application designed at inception to run exclusively on public cloud infrastructure.

The extension of that is what people are talking about a little bit now as multi-cloud-native applications.

Those are cloud-native applications that are designed at inception to stay away from some of the technologies that some of the hyperscale clouds offer that will explicitly, kind of, lock you into their platform.

So it's an application that's designed with the intent of being able to run across multiple different cloud environments concurrently.

And, you know, what we are seeing in the market, and I think this has become quite conventional wisdom, and for us, a few years ago when we were starting Ridge, we made a bet on this, really, is that one of the fundamental technologies that is enabling this multi-cloud world to happen is Kubernetes.

And that has really become sort of...

You know, almost a de facto standard for how people are going to architect an application that will then be able to easily move between cloud environments.


Yeah. So it's really about abstracting away from the dependencies to the lower levels of the infrastructure as much as possible, I think. They can bene (overlapping) -


Yeah. Yeah. Yeah. It's, you know - I guess, I wouldn't even say to the lower levels of the infrastructure so much as simply...

You know, so much as simply choosing a set of technologies in the way you architect your application that are going to maintain that level of flexibility.

And, you know, the technologies that are, sort of, standards-compliant or open source, you know, are always going to give you more flexibility in terms of the underlying infrastructure that you're going to be able to use, you know, than the proprietary ones.


Yeah. Well, it makes a lot of sense.

It's a great answer.

So in terms of Ridge's platform, you know, different kinds of technologies and use cases are emerging, things like IoT, you know, in manufacturing. Or you were mentioning being able to process data more at, let's call it, the edge.

By the way, I also believe that most of these terms will no longer be in use in a few years, but the technologies will.


Well, you know, the term "edge" has been in use for over 20 years at this point.

It just keeps meaning something different.


Yeah, that's true.

Well, I'm also from the '90s, actually.

And I remember terms -


Oh, look at you with that admission.


I know.

Don't tell anybody.

I'm going to edit it out.

But one thing I remember are terms like "information superhighway".

You know, that was the big thing at the time.

And, you know, "e-this" and "e-that".

Well, you know, nobody says "information superhighway" anymore. They take it for granted, although all those things still exist.

"E-commerce" still exists, though.

That term lasted and means something a little different now, you know.

And Webvan back in the '90s (overlapping background noise) delivered groceries, which was a ridiculous idea.

And now we have Amazon owning Whole Foods, you know.

So these things, they eventually come true even if we stop using the terms.

And probably, I'm betting, even though I'm, you know, representing an organisation called CloudNativeX, I think in five years nobody will say "cloud-native".

It'll just be part of the infrastructure.


Yeah. Right.

Well, that's...


And the...

You know, I think that we'll maybe get more granular in the distinctions that we make about what type of cloud-native are you developing for and working towards.

But, yeah, I think that, you know, what you're saying is certainly true, that as something moves towards ubiquity, it becomes something less, kind of, talked about in an explicit kind of way.

You know, I don't think that if you...

I think 15 years ago, if somebody was developing a brand new application, there was some debate internally and there were, sort of, religious, you know, wars about whether you were going to own the infrastructure yourself or whether you were going to put something in the cloud.

And, by the way, 10 years prior, there were religious-war kind of arguments inside of content provider organisations about whether they should own their own data centre and it should be in their building, or whether it should be outsourced to a third party.

The first time that we met CNN as a prospective client for Akamai, they had a data centre at One CNN Center in Atlanta.

And, you know, they trusted their data centre operations more than they would have a service provider down the street.

At the time, part of the reason that people built regional data centres in different markets was because the infrastructure teams at these companies wanted to be able to go and physically touch and deal with their servers that were in a data centre somewhere.

And so, you know, that became something that, at the time, was...

There were these religious camps around how this was going to happen.

You know, I don't think that anybody today thinks that they're going to build a data centre for their application infrastructure.

And I don't think that any brand new application starting up is thinking about whether they should own their own infrastructure or utilise the public cloud.

You know, the question of when enterprise applications are going to fully migrate to the cloud, the question of how legacy applications and legacy IT organisations are going to manage cloud migration, you know, is interesting.

But there are, you know, more applications yet to be born than ones that need to migrate to the cloud.

So, you know, it's sort of a longitudinal - right? It's a cohort analysis that you kind of need to be doing.

So there isn't any company that was started in the 2020s, like last year and this year, that thought to go and buy a server and stick it somewhere.


Right. Yeah. I remember looking up the server prices, you know, like, "Aw, man, where am I going to get 20k for this thing?" Now it's just a credit card, you know.

It's amazing.


No, that's right.

Do you want to go to Sun Microsystems and buy the SPARC, you know, whatever, with all the redundancy in it? Or are you going to some...

Or are you going to go to IBM for theirs or Dell for theirs? Or are you going to some generic manufacturer that'll stick your logo on the box? You know? Those were the debates we had back in the day.


Yeah. And do you have a powerful enough air conditioner to run it? All these sorts of (overlapping conversation)...


Right. Well, you know, listen, this is still true.

You know? Power density in...

The power density of server infrastructure and of what we're doing today in the data centre is massive.

You know, in the early days at Akamai, I would go spend a lot of time with data centres.

And in the early days of Ridge, I'd been spending a lot of time walking around data centres.

This is a 20-plus-year gap between, you know, what I was doing there and what I'm doing here.

They sound different today.

Data centres sound different than they did 20 years ago.

There is a much louder hum and there is a lot more cooling, and the power density of data centres is huge compared to what it used to be.

Because we're sticking so many more processors into a much smaller space, and they run just as hot as they ever did.

And now you're needing to cool that density.

It's quite a, you know, different world.


Yeah, and along comes NVIDIA to make that worse.


Yeah, sure, right.



The GPUs that are like - I don't know, they're like thousands of cores in one, single thing, and, yeah, I guess (audio breaking) - we think in software terms, but you can't change the laws of physics.


Right. And so, you know, you move to liquid cooling, and you move to this - there's lots of things that people are, you know...

There's so much innovation at every layer of the stack, but it is simply true that, you know, at the end of the day, where we run our application is not actually a cloud-shaped box that somebody drew on a PowerPoint presentation.

Where we run our application is actually on a physical processor made of silicon, in a box, in a building, with a big air conditioner on it, connected to the network.

That's where we run it.

People may not know that, people can ignore that, people can pretend that you just swipe a credit card and, you know, it's all good. But that's actually where the thing happens.



So it comes back to the infrastructure.

Where does your water and electricity actually come from, you know? Actually, most of us don't know, you know.


Most of us have no idea.


Yeah. That's right.

So, no that's great.

So something a little bit different, but also, I think, quite interesting, as I mentioned earlier.

I'm also from that era, the '90s, and one of the things I remember about the dot-com boom and bust was, you know, a lot of companies like Akamai. I was at a company called NetApp, which was from that same era in the same place.

We were selling pickaxes to gold miners, you know.


You had a lot of caches, and you (overlapping conversation).


Yeah. And you rode that.

You build a business on this whole industry that just disappeared overnight.

You know, 70% of NetApp's customers were gone in one day, or may as well have been; effectively, it was overnight.

And I think companies like NetApp and Microsoft, and a lot of others, Akamai included, managed to get through that and even thrive after it, you know.

What was that like? I think you were still there at that time.

What was it like to go through that cycle and then to pivot and (overlapping background noise) take it from there?


Well, so we were already, you know, a public company as was NetApp at the time, obviously.

And it sucked.

It was terrible.

We had a year at Akamai where we transitioned close to 100% of the business. I think it was in the 80s or low 90s, percentage-wise, of our customer base that literally went away.

And, you know, you weren't going to be able to collect from SurfMonkey.com after they went out of business.

And they were a customer you were super excited to get when you got them, because they had the prospect of being this high-flying... you know, they could be the next Yahoo.

You know, what happened at Akamai at the time was, as was the case for many technology companies, NetApp among them, you know, your stock gets destroyed and you do layoffs because you're worried that you're not going to be able to stay in business.

And at a management and, kind of, you know, pure human level, it's pretty brutal.

It's a tough, tough, tough time.

You know, at a business level, Akamai was able to, over the course of that year, stay basically flat revenue-wise, from one year to the next.

But swapping out, as I've just mentioned, you know, the vast majority of the logos that represented that revenue. And, you know, if you exit a year flat revenue-wise, Wall Street thinks that you're a disaster and you're about to die.

But if you look at a set of logos that starts with, you know, SurfMonkey.com and whatever other, you know, companies you had there, and ends up with FedEx and, you know, the large-scale retailers, and, you know, the established, kind of, "not going anywhere and they're going to be around for a while" companies...

That's a much better, much more stable set, a company on whose longevity you can probably bank a lot more than the one that's servicing this earlier customer base.

So that year for us, as a company, was a year that looked flat on paper.

But, in fact, caused us to understand how to sell into and support much more reliable, old-school, old-line, you know, real businesses.

And so I think that that year really, in the final analysis, positioned us to be able to address the enterprise market much, much more.

And so, you know, those eras are...

If you make it through, you make it through stronger.


Well, in a way, it's kind of like those dot-coms, those web monkeys, basically funded your R&D, so that when the time came, you had a product that was 100% ready to actually go to FedEx and have something that works for them.


Yeah, that's right.

And, you know, to their credit, those early customers who maybe didn't survive that downturn were much more inclined to take a risk on an early-stage infrastructure provider than the FedExes of the world were at the time.

So, yeah, for sure.

I think that's actually - I'd never really phrased it that way, as those companies sort of funding our R&D, but I think it's actually kind of a nice phrase.


Yeah, yeah, I mean, you know, I think with companies like NetApp and Microsoft and Sun - well, Sun didn't survive all the way.

Well, now, maybe it's not the right example.

But I think that we experience that a lot.

Had it not been for those maybe, you know, three to five years or so of serving them, we wouldn't have been ready to address the new market when that opportunity came.

So, yeah.

That was a really great experience.

Sorry, it's just that I lost my train of thought there for a second on that one.


Oh, yes. So do you see any kind of parallels now to that time? So either what's happening already or what could happen in terms of trends changing like that?


Well, it's a good question.

You know, I think what we are...

I guess what I would say is, I think that a lot of the early users of or use cases for edge computing or distributed computing, or, you know, these newer, kind of, technologies could very well be start-up companies that are doing something new and innovative.

You know, those technologies, I think, will then expand over time and proliferate out into the mainstream, sort of, enterprise, in the same way as we're now seeing enterprise applications migrate to the public cloud in a pretty aggressive way. But, you know, 10 years ago, I don't think that the folks at AWS or Azure were really thinking that that was where they were going to get immediate traction, right? Sure, they had enterprise teams.

Sure, they were going to go in and talk to, you know, the Pentagon and General Electric and the government of, you know, whatever country.

But, you know, I don't think that there was an expectation that there was any immediate revenue opportunity necessarily in those accounts.

I might be wrong.

Maybe they did. But I don't think that that was where they imagined the near-term revenue coming from.

The near-term revenue was going to come from the Surf Monkeys, you know, of the era.

And, by the way, like, Lyft 10 years ago, or Pinterest, maybe not 10, maybe 12 or whatever years ago, was probably as safe a bet as a customer as Akamai thinking that SurfMonkey.com was going to be a huge customer for a really long time.

Like, who knows?


Yeah, there's a lot of luck involved.

I mean, I think you as founder certainly know this, there's a lot of talent and hard work in being a successful founder.

But there's a lot of luck, too, in being (overlapping conversation) -


Oh, there's a huge amount of luck in building successful businesses and, you know, you got to work hard and you got to show up kind of dressed to play every single day.

And then, you know, hopefully also get lucky.


Yeah. So, of course, prior to this I did a little research.

I saw that you're on the boards of a lot of different companies and have been on the boards of a lot of interesting companies.

A couple of them, in particular, jumped out at me because - so I live in Amsterdam, which, you know, we get around on bicycles here.

It's actually a terrible city to own a car.

So I'm a heavy user of something called Greenwheels, which is basically car sharing, you know.

When I want a car, when I want to go to Home Depot or wherever, I book a car 20 metres from my door for an hour.

I bring it back. Somebody else has to take care of it and maintain it and pay for the upkeep and the parking.

I love it.

And I realise this is basically just cloud computing for cars, you know.

I saw that currently you're on the board of a company called Zagster, which does that for bicycles, e-bikes and scooters.

And you were previously on the board of a company called Zipcar in the U.S., which does a very similar thing.

Do you see some parallels? Or what do you see in that space?


So I was the Chairman of the Board for over 10 years of Zipcar.

I was on the Board of Zagster.

I no longer am, but I was for a few years. And, you know, I think that the whole idea of, sort of, shared resources is one that is clearly here to stay.

Shared resources in the physical world, whether it's an Airbnb, right? Or a Zipcar, or Greenwheels, or, you know, any of these sorts of, you know, opportunities.

Or in the cloud world.

And I think that this idea of shared resource and not actually owning the physical asset, but rather paying for use of the physical asset, is one that is very much a trend that's here to stay.

And then I think that there's also within that, sort of, the subcategories of, are you using a resource that is owned by a company that is monetising that resource as a business? That's what Zipcar did.

We had the cars.

We didn't own them, we leased them, but they were our, you know, asset, right? That we were then putting out there for people to utilise.

Or are you having people use an asset that is actually owned by somebody else, not by the company operating it, right? That's Airbnb, that's Uber, that's the, you know...

I'm not sure if Greenwheels is like that, or if they actually own the bikes, but those are, I think, two very distinct models in that world.

In the cloud computing world, we have yet to see any real success with compute assets that are owned by, you know, random third parties that are all over the place.

The old, sort of, SETI@home model of "Let's harness unused compute on your desktop or in an enterprise data centre".

That model hasn't yet really, kind of, taken hold in a meaningful way.

The cloud model today really still is about the cloud operator having the ownership of the asset that they then monetise.

Ridge is maybe one step further towards this more decentralised model because the servers aren't in fact ours, but they do belong to a data centre partner we work with, as opposed to, sort of, an application owner who's then monetising unused capacity.

That feels to me like we're not quite there yet in terms of the technologies that are going to be needed for that.


Yeah, but it is one step closer.

I can agree with that.

Like if I need to spin up a service in, I don't know, Uganda (overlapping background noise) and there's a data centre there, there are a lot of reasons why I'd want to use something nearby.


That's right.


I don't need to wait for Amazon to decide to start a data centre in Uganda or the Philippines or - (overlapping background conversation) wherever it may be.


And this is where I think there will be a drive of some of the larger companies needing these kinds of things, you know.

The car manufacturers are a great example of this, right? They're all trying to figure out cloud connectivity for their vehicles.

And when you talk to the Volvos or the Toyotas of the world, it's pretty clear to them that there are going to be Volvo cars in more countries than there will be Amazon data centres for the foreseeable future.


Yeah. No, absolutely.

No, that's really...

I'm really thrilled to be in this business because it just keeps - in some ways, it stays exactly the same.

You know, there's so many things that don't change.

You know, the cloud is just a big mainframe in a way.

And I think Amazon Glacier is actually tape somewhere, you know? On the other hand, the use cases, the ways in which we use it and take it for granted, just keep changing, and I find that really fascinating.

So where do you see Ridge going in the next year or two? You know, what are sort of the plans right now, where you want to take the company?


What we see at Ridge is this tremendous opportunity for applications that are in fact going to want much more distributed infrastructure than is available to them.

We see the ability of application owners to really just, sort of, quite dynamically say, "Here are all the places around the world where I need stuff to function. Well, make that happen for me." And that, to us, is the promise of cloud computing.

And cloud computing was in fact supposed to be just this big, you know, block, kind of, logo on our PowerPoint, you know, presentations that we used to talk about infrastructure.

And people should be able to assume that, as you were saying at the beginning, Lee, you know, when you plug something into the wall, there's electricity.

When you open a faucet, there is water.

And when you want an application to run with some proximity to you, and with a level of performance that supports what it needs to do, you can do that with ease.

And, to us, that is the vision of modern infrastructure.

And, you know, there's never been an infrastructure technology where a single company, or two, or three dominate the entire, you know, infrastructure ecosystem.

It wasn't true for the phone companies, it wasn't true for the ISPs, it wasn't true for the data centre companies.

And I don't think that it's going to be true for the clouds either.

I think that the hyperscale clouds are going to be powerful, powerful players in this market.

And they'll be big drivers of innovations, and they'll be big drivers of standards.

In the case of Google, for example, Kubernetes, which has really become a standard, originated at Google, who are a cloud competitor, right? And that kind of reality of how infrastructure will evolve, with the big players being strong, but there being lots and lots and lots of players that get you the global coverage these applications are going to want and need, is how we see things evolving.

That's why we are so excited about building distributed infrastructure capabilities at Ridge.


Yep, no, I think there's a lot of potential.

And I'm also very excited to see where this whole thing goes, so I think that's a great place to leave it.

Thanks very much for your time today, Jonathan.

It's been a great conversation.

Hopefully we get to have some more in the near future.



Thanks, Lee, I appreciate the time.


All right.

Thanks a lot.

Lee Razo

Consultant, CloudNativeX

Jonathan Seelig

Co-founder & CEO, Ridge