GOTO - Today, Tomorrow and the Future

Expert Talk: gRPC, Kubernetes & .NET • Mark Rendle & Matt Turner

July 22, 2022 Mark Rendle, Matt Turner & GOTO Season 2 Episode 28

This interview was recorded for GOTO Unscripted at CodeNode in London.
gotopia.tech

Read the full transcription of this interview here

Mark Rendle - Incurable Programmer & Lover of C#, .NET Core, Containers, Clouds & DevOps
Matt Turner - DevOps Leader, Software Engineer at Tetrate

DESCRIPTION
Join Mark Rendle, MS Dev Tech MVP, and Matt Turner, DevOps leader, architect, and engineer at Marshall Wace, in a passionate discussion about gRPC's past and future and how it fits in with technologies such as .NET and service meshes. They get deep in the weeds on technology cycles while debating the future of infrastructure as code and Kubernetes. And Mark has a brilliant idea on how to build an alternative to Facebook.

RECOMMENDED BOOKS
Burns, Beda & Hightower • Kubernetes: Up & Running
Burns, Villalba, Strebel & Evenson • Kubernetes Best Practices
Kasun Indrasiri & Danesh Kuruppu • gRPC: Up and Running
Liz Rice • Container Security
Liz Rice • Kubernetes Security
John Arundel & Justin Domingus • Cloud Native DevOps with Kubernetes
Hausenblas & Schimanski • Programming Kubernetes

Twitter
LinkedIn
Facebook

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket at gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily.


Intro

Mark Rendle: So, hi, I'm Mark Rendle.

Matt Turner: Nice to meet you, Mark.

Mark Rendle: And I am an incurable programmer. I started programming 40 years ago, and I'm still mainly programming as a job; that's kind of what I like doing, working for lots of different companies, generally doing .NET things, and also, yeah, charging around speaking at conferences, being pathologically incapable of saying no to things. So here I am.

Matt Turner: Sounds familiar. So I'm Matt Turner. I think I started programming when I was eight. This was before the internet, right. There was a computer in the house and I got a book, and the book taught me BASIC. I got another book on assembler after that, so I could get a bit better.

Mark Rendle: Which assembler?

Matt Turner: Like x86. Like we just...

Mark Rendle: Oh, okay.

Matt Turner: Just had like a 386. But I did computer science, and my first job was embedded systems, embedded C. I did some .NET desktop stuff for a couple of years, but I've been doing infrastructure, like the dreaded DevOps word I guess, recently.

Mark Rendle: Infrastructure as code and stuff. And you were at QCon yesterday.

Matt Turner: I was at QCon yesterday.

Writing APIs upfront

Mark Rendle: What were you talking about at QCon?

Matt Turner: So, I was on the Kubernetes expert panel. There were genuine experts on the panel with me: Liz Rice, the maintainer of containerd, people like that. We got a bunch of questions about all kinds of aspects of Kubernetes, a lot less about how it works and a lot more about how to use it, a lot more day-two questions. Interesting. Then the talk: I was on the API track, and the talk the track host thought would fit, would plug the gaps in the narrative, was API gateways and why we use them, and how we might move to a sort of service mesh, sidecar, you know, distributed approach, that kind of stuff.

So I covered that, but also talked about some shift-left tooling for API development. So I advocate for people writing APIs upfront, doing schema-driven development: essentially writing your API interface upfront, and then using a bunch of tooling to lint those definitions and to do breaking-change detection and stuff, early ("left") in the dev cycle, and then to generate client and server stub libraries from them, so you don't have to write a load of code. You then don't need something like an API gateway checking request or response body schemas, because, you know, you can't send an invalid one.
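A schema-first workflow like the one Matt describes starts from the interface definition itself. A minimal sketch of such a contract in protobuf; the service, method, and field names here are invented for illustration, not taken from any real API:

```protobuf
syntax = "proto3";

package orders.v1;

// Hypothetical service, written before any server code exists.
// Lint rules and breaking-change detection run against this file
// in CI; client and server stubs are generated from it.
service OrderService {
  rpc GetOrder (GetOrderRequest) returns (Order);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string order_id = 1;
  int64 total_pence = 2;
}
```

Because the stubs are generated from this file, a request or response that violates the schema cannot be constructed in the first place, which is the point Matt makes about not needing a gateway to validate bodies.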

Mark Rendle: There's a weird thing, in the .NET world in particular: there's always been a tendency to say, you write your service in C# or VB.NET, and then we'll generate the schema, those definitions, from your C# code.

When the world was doing SOAP and you had WSDL files, you just used WCF: you defined an interface in C#, decorated it with some attributes, and it would generate the WSDL file. These days you write your API in ASP.NET Core, and again there are two plugins, Swashbuckle and NSwag, and both of those will generate your OpenAPI documentation. But I know there are people, including in the .NET world, who are like, no, no, no, that is the wrong way round: your OpenAPI documentation should exist in its own repo.

That is your living document. Then you write your services to fulfill that contract. And that's the way you would say it should be done?

Matt Turner: That's what I was advocating for yesterday. I think just being able to generate those files is better than nothing. There's a big movement where I work. People are moving (it's Python, I don't really mind) from some Python framework to this thing called FastAPI, and one of its features is being able to generate those OpenAPI files from all the annotations. That's better than nothing, because you can then take them, make them into pretty documentation, you know, discover the APIs.

So they are around the place, and you can generate client libraries to be able to talk to them. So it is technically better than nothing, but yes, I advocate for doing it upfront, because there's nothing new here, right? It's contract-driven development or interface-driven development, whatever you want to call it.

It's going to the green-light meeting and saying: we want to spend four person-months of engineering on building this thing, we would like the business to invest in this service. You know, this is its contract, this is what it's gonna do, these are the SLAs for its performance and reliability. And I think being able to take that interface definition into the meeting with the people who hold the budget, even before you have the meeting with the architect, the principal engineer, where you show the block diagram and say, well, can we build this? Is this even gonna work? I think that mindset shift of: what is this service for? What services does it provide, to whom? What's its contract? I think that's really, really powerful.

gRPC in .NET

Mark Rendle: But no. So the other thing that I got into in the last four years, because it suddenly became a viable option in .NET, is gRPC. And of course, with gRPC, you write your proto files, and then it generates stubs for either the server or the client in whatever language. Someone has come up with a way of doing that the wrong way round in .NET, but...

Matt Turner: And Hibernate and stuff was the same. I think this code-first approach (because I did C# like 10 years ago), I think all that code-first approach comes from the ORM people. Because it was always either the database first, the DDL, or the code. And I understand why, especially as Visual Studio was starting to sort of not suck around 2010. It was a powerful way of getting things done, and they were fighting back against Ruby and all of that stuff. I understand it. Sorry to interrupt.

Mark Rendle: But no, so Microsoft put some developers full time on a project to make a fully managed implementation of gRPC on top of their ASP.NET Core Kestrel engine. It's the fastest gRPC implementation except for Rust: faster than Go, faster than Java. It's very, very fast, and you have to write that proto file first and then generate from it. The thing I like about it is that the C# it generates gives you a base class to inherit from to implement the service, and the methods on it are declared virtual rather than abstract. So you can override them, but you don't have to. And if you don't override a method, it throws a gRPC not-implemented exception. Which is nice. That's the way it should be, because it means you can write your tests against it, and you can write automated tests.
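The virtual-not-abstract pattern Mark praises can be mimicked outside C#. A Python sketch of the same idea; all names here are invented, and the real grpc-dotnet tooling generates the base class from the proto file rather than by hand:

```python
class RpcError(Exception):
    """Stand-in for gRPC's UNIMPLEMENTED status error."""


class GreeterBase:
    # Plays the role of the generated base class: every method has a
    # default body that signals UNIMPLEMENTED instead of being abstract,
    # so a partial implementation still runs (and tests go red, not boom).
    def say_hello(self, name: str) -> str:
        raise RpcError("UNIMPLEMENTED: SayHello")

    def say_goodbye(self, name: str) -> str:
        raise RpcError("UNIMPLEMENTED: SayGoodbye")


class Greeter(GreeterBase):
    # Hand-written service overrides only what it implements so far.
    def say_hello(self, name: str) -> str:
        return f"Hello, {name}"


svc = Greeter()
print(svc.say_hello("Mark"))   # implemented: works
try:
    svc.say_goodbye("Mark")    # not yet implemented
except RpcError as e:
    print(e)                   # UNIMPLEMENTED: SayGoodbye
```

This is what makes the red-green cycle Matt mentions possible: a test against an unimplemented method fails cleanly with "unimplemented" rather than failing to compile.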

Matt Turner: You really can do that red-green cycle. A test starts red because there's no implementation.

Mark Rendle: I do a workshop at conferences, and am also available for children's parties. It's a two-day workshop, and day one is: this is gRPC, this is protobuf, this is how we define our messages and services. We do that upfront, and then we write the code to implement them and go through all that.

Then the second day is, right, this is how you run it in production. This is how you do authentication. This is how you build a Docker image. This is how you get it running on Kubernetes. This is how you add Linkerd in as a service mesh.

Matt Turner: I was going to say.

Mark Rendle: Because otherwise, .NET has this tendency: when you're using HttpClient, it does a DNS lookup, takes the first IP address it gets, and then that's it, it won't do another DNS lookup.

Matt Turner: Oh, do even the gRPC libraries not override that, because that's kind of baked in?

Mark Rendle: They built something into the gRPC library specifically to override that behavior because they have to do this load balancing thing. But it's not built into regular HTTP yet. It might be in .NET 7.
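The difference between caching the first resolved address and doing client-side load balancing can be sketched in plain Python. The addresses are made up; a real client would get them from DNS and re-resolve periodically:

```python
from itertools import cycle

ADDRESSES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # pretend DNS answer


def sticky_client(n_requests):
    # Resolve once, keep the first address forever: the HttpClient
    # behaviour Mark describes. One instance takes all the traffic.
    addr = ADDRESSES[0]
    return [addr for _ in range(n_requests)]


def balanced_client(n_requests):
    # Pick round-robin per request, as client-side load balancing would.
    rr = cycle(ADDRESSES)
    return [next(rr) for _ in range(n_requests)]


print(set(sticky_client(6)))    # one busy instance
print(set(balanced_client(6)))  # requests spread across all three
```

This is exactly the "five instances, four of them idle" symptom Mark describes later: the sticky client never sees the other addresses.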

Matt Turner: Oh, interesting. But that's what a service mesh does for you?

Mark Rendle: It does so much more as well. It does heartbeat checking, so it doesn't wait until you're trying to send a request to go, oh, that service isn't there anymore. And it's sort of building on the mutual TLS authentication and everything, so.

gRPC and service mesh

Matt Turner: I got a question yesterday about service mesh plus gRPC. A lot of the client libraries are very dumb unless you write a whole load of code, which you don't wanna do, so you get a service mesh. I said gRPC was designed to be this fat client library. It does client-side load balancing; it can even talk to a lookaside load balancer. It does it well, because it's re-querying DNS all the time, basically. The health checking is essentially in three stages. It checks with the service discovery system, which is usually DNS, but it's pluggable; other service discovery systems are available. It does active health checking by pinging and making sure that the channel's still up.

It does passive health checking by seeing if 500s start flooding back, and it'll just drop that one out of the pool. So it's trying to do all of that stuff for you. But this person had deployed Istio with it, and I'm like, right, all of that goes out the window when you've got Istio, because Istio is gonna do it for you. And that's great; that's a great addition to a naive HTTP client. You just need to configure both of them in such a way that they don't fight. But yeah, gRPC had all those ideas.
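The passive health checking Matt describes, ejecting an endpoint when error responses start flooding back, might look like this toy Python pool. The threshold and endpoint names are invented for illustration:

```python
from collections import defaultdict


class Pool:
    """Toy passive health checking: drop an endpoint from the pool
    after consecutive 5xx responses, the way a fat gRPC client might."""

    def __init__(self, endpoints, max_failures=3):
        self.endpoints = list(endpoints)
        self.max_failures = max_failures
        self.failures = defaultdict(int)

    def record(self, endpoint, status):
        if status >= 500:
            self.failures[endpoint] += 1
            if (self.failures[endpoint] >= self.max_failures
                    and endpoint in self.endpoints):
                self.endpoints.remove(endpoint)  # eject from the pool
        else:
            self.failures[endpoint] = 0  # any success resets the count


pool = Pool(["10.0.0.1:50051", "10.0.0.2:50051"])
for _ in range(3):
    pool.record("10.0.0.2:50051", 500)  # 500s flooding back
print(pool.endpoints)  # only the healthy endpoint remains
```

A mesh sidecar (Envoy calls this outlier detection) does the same job outside the process, which is why the two need configuring so they don't fight.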

Mark Rendle:  Istio terrifies me.

Matt Turner:  It's complicated.

Mark Rendle: Because, like, I knew for this workshop that I wanted to demonstrate a service mesh, and the problem is that the .NET Core 3.1 and .NET 5 versions of gRPC didn't do that client-side load balancing. And DNS always returns the addresses in the same order.

So you've got five instances of a service running, and all the other things are talking to that one instance, and these four are just sitting there going, well, this is quiet. I was looking through how to set up Istio, in the context of: I want to spend two hours on this maximum in a workshop, between a coffee and a lunch. And then it was, how do you set up Linkerd? Well, you download the Linkerd binary, then you do linkerd check, then you do linkerd install, and then you do linkerd inject into the YAML file you've already got, which adds a single attribute to your service.

Matt Turner: As an annotation, and then they use a mutating webhook.
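In Linkerd's case that opt-in is a pod annotation that the proxy-injector (a mutating admission webhook) acts on, adding the sidecar container to the pod spec. A sketch of where it lives; the deployment name is illustrative and the container spec is omitted, but `linkerd.io/inject: enabled` is the documented annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service        # illustrative name
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled   # webhook sees this, injects the proxy
    # ... container spec unchanged ...
```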

Mark Rendle: That is it.

Matt Turner: Yes.

Mark Rendle: And then suddenly you've got a service mesh running and it's got Grafana built-in and it's pumping things out to Prometheus and all this. I'm just like, this is magic.

Matt Turner: This is sort of the glib rendering of it, but Linkerd built a product and Istio built a platform. It's just all Kubernetes.

Google have gone and built Anthos and whatever on top of it. It's built into OpenShift. I think there was a difference in approach there. I mean, I do Istio workshops, and first you've gotta get people up and running with a Kubernetes cluster. They may not have seen one, or they may just have one provided by their IT department, so: minikube.

But that's not part of the learning; I just want that to be done. So they bring a laptop they've got root on, and I give them this long minikube command that gets all the little options right, and I just run it and trust it's fine. But then with Istio, the first exercise, the first couple of hours of the workshop, is: we're gonna install this thing. Istio has got better, and you can now do istioctl install, but if you want it to have Grafana and do a bunch of useful stuff out of the box, you've gotta give it a profile, a YAML config. It took me a while to work out exactly what all those options should be for the demo.

Also, I could just give them that, but actually, no: you need to understand this. A big part of this is understanding what these options mean and what they do. So we spend an hour or two in the morning getting Kubernetes up plus a proper copy of Istio installed. And I think that's valuable learning. So maybe it shouldn't be like that, but.

Mark Rendle: No, if you've got the sort of infrastructure engineering resources and you've got very specific needs, then I think Istio gives you a lot more points where you can hook into it and configure what to do. Linkerd is just super opinionated: this is the way it works out of the box. That's probably okay for 80% of use cases.

Matt Turner: I would agree with that. I think both have their place.

Mark Rendle: Yes.

Matt Turner: I should maybe syndicate your workshop then, do the Go version.

Mark Rendle: Yes.

Matt Turner: It sounds like it, yeah. I think gRPC and that approach is just a great way to do it.

Mark Rendle: Yes.

Matt Turner: All those things, and explaining to people that the name is quite bad. To me, it's transport, right. Protobuf is a good encoding; HTTP/2 with the multiplexing and, you know, no head-of-line blocking. The generated stubs are a great way of punting messages around. What you build, the API you build on top of that, can be RPC, like OOP: send a message to an object that's got hidden state and ask it to perform services. Encapsulation, you know, proper OO stuff. Or it can be CRUD. It doesn't have to be an RPC-style interface; it can be a REST-y CRUD interface where you move whole objects around.

Mark Rendle: Go, here is a thing, here's an update method, here's an insert method. You can do that.

Matt Turner: Yes. And I've had that conversation with people who say, oh, I don't wanna do gRPC because we believe in REST, and everybody's used to CRUD, and I've had bad experiences trying to write RPC because you get consistency problems. And like, yeah, I get that. It's a bad name. You can...

Mark Rendle: It is.

Matt Turner: You can do CRUD if you want.

gRPC and performance

Mark Rendle: It's kind of HTTP on steroids with bidirectional streaming.

Matt Turner:  Yes. And the streaming's so useful.

Mark Rendle: Oh, it's great. I love the streaming.

Matt Turner: Because what's your alternative? Deploy Kafka, or use WebSockets? Can you give me some streaming that works?

Mark Rendle: I remember my first encounter with gRPC. I was working in Canary Wharf, and they had a WPF application that took so long to start up that the users would get to their desk, turn the computer on, double-click the icon to start the application, and then go and get a cup of coffee. And this is at 9:00 in the morning, so there's a queue for the coffee machine, and they get the cup of coffee and they come back and it still hasn't started.

Matt Turner: Wow.

Mark Rendle: And so, I'd done some stuff, and I love performance. I love trying to squeeze extra performance out, or finding out why things are slow. So they said, oh, please take a look at this; the entire team is currently focused on trying to make the thing faster. And so I joined the sprint kickoff meeting for the next sprint, and they were going, well, I think it might be this, I'm gonna try that; and I think it might be this, I'm gonna try that. And I was like, "Well, have you got any measurement in there to see which bit it is?" And they said, "No." So I said, can I instrument your code? And I got them to let me set up an InfluxDB database. Yes, it needed to be push-based. We couldn't use Prometheus, because you can't have Prometheus scraping WPF applications.

Matt Turner: Well, this is more recent than I thought. Given the problems you talked about, I was like, is this spinning rust? Given Prometheus's dates, this is more recent than it should have been.

Mark Rendle: It was one of those things where they'd started on the application literally when WPF came out, so like 2005, 2006. So it was more than 10 years old by the time I got to it, and it had just grown and grown and grown. But anyway, I put diagnostic bits all over the place, stopwatches: start a timer, stop a timer, write that to the InfluxDB, and so forth. And then I presented my results, and they said, "Oh no, your measuring's wrong. This is showing us making 20,000 individual calls to our gRPC service, and it doesn't do that. It makes one call to the gRPC service and then it streams back 20,000 results." And I went, "No, it doesn't. No, it doesn't."

Matt Turner: Sorry.

Mark Rendle: It creates the client, it makes a single call, and then it disposes the client, and then it loops round and does it again, 20,000 times. And they went, "Oh. Oh, we should change that." Which immediately knocked two minutes off. To me, that just highlighted the importance of measuring and monitoring, this observability thing. So my next workshop, I think, is going to be OpenTelemetry, which is just huge. And everything supports it out of the box, all your platform-as-a-service things.
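The lesson here, measure before guessing, needs only very light instrumentation. A Python sketch of the stopwatch-style timing that exposed the 20,000 calls; the RPC is faked, and Mark's real setup wrote the measurements to InfluxDB rather than a dict:

```python
import time


def timed(metrics, name, fn, *args):
    # Minimal stopwatch instrumentation: record one duration per call,
    # the way a start-timer/stop-timer-and-write setup would.
    start = time.perf_counter()
    result = fn(*args)
    metrics.setdefault(name, []).append(time.perf_counter() - start)
    return result


def fake_rpc(i):
    # Stand-in for create-client / single-call / dispose-client.
    return i * 2


metrics = {}
results = [timed(metrics, "grpc.call", fake_rpc, i) for i in range(20_000)]

# The call count, not the timings, is what gave the game away:
print(len(metrics["grpc.call"]))  # 20000 individual calls, not one stream
```

Counting 20,000 entries under one metric name is unambiguous evidence against the "one call, streamed results" belief, which no amount of guessing in a sprint meeting would have produced.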

Matt Turner: Yeah. I was going to say, trace spans are a really good way of getting observability into things, especially with a service mesh: you deploy that and you get these traces on the network, the service-to-service hops, for free, and then you instrument within your application. The request comes in; I've seen it hop the network, because the service mesh told me; now I'm gonna use the OpenTelemetry client library to instrument my code.

I can see the hops through the major parts of the service. And then when your user request comes in, you see inside service A, across the network, inside service B; you just propagate the context and you get this one trace. It's interesting, because people are like, "Oh, it's the network." "Oh, that service is slow." "Oh no, it's not our fault, we called the database, we have to block." You just get that whole visibility, and in the same tool as well. It's not: oh, the infrastructure is slow, now the app's slow. It's, well, here's, you know, this...
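Propagating the trace context across hops is what stitches those per-service views into one trace. A toy Python tracer showing the parent-child structure; real OpenTelemetry propagates context between processes via headers such as `traceparent`, which this sketch only imitates in-process:

```python
import contextlib

TRACE = []  # each entry: (span name, parent span name)


@contextlib.contextmanager
def span(name, parent=None):
    # Toy tracer: record the span and hand its name back so children
    # can point at it; this is the "propagate the context" step.
    TRACE.append((name, parent))
    yield name


with span("service-a") as a:
    with span("service-b", parent=a) as b:
        with span("database-query", parent=b):
            pass  # slow work shows up attributed to the database

print(TRACE)  # one request, one tree of three nested spans
```

Because every span carries its parent, the backend can render the whole request as a single flame-style view: "this bit's long, this bit's short", exactly as Mark says next.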

Mark Rendle: Here's everything that it could be. And look, this bit's long, and this bit's short, and yes.

Matt Turner: Being able to see inside and out is useful.

Mark Rendle: You can, because you've got the network call, and then, say you're using gRPC, you can see the milliseconds it spent decoding, and the live streams...

Matt Turner: Yes, and the better gRPC libraries do emit them. They don't all, do they, but the better ones emit spans for you.

Mark Rendle: Yes. And in .NET, Microsoft had to, because I've been doing .NET since 2002, and you used to get your performance information out of .NET via Event Tracing for Windows, and you'd have to go into it, because it was all Windows all the time.

Matt Turner: Like the Windows Event Viewer. It was just like the Windows Event Viewer.

Mark Rendle: Yes. So you'd go into Event Viewer, and you'd go into PerfView if you wanted to find out about garbage collection cycles and so forth.

Matt Turner: Wow. Right.

Mark Rendle: Then when they did .NET Core, they went, "Oh, we need to find a way of doing this on Linux." So they built in this whole diagnostics pipeline. Then, the weirdest thing, I think, in my entire experience of the computer industry: you had OpenTracing and OpenCensus, and they defied that XKCD comic and went, "No, let's combine them into one." So now OpenTracing and OpenCensus both redirect you to OpenTelemetry. And Microsoft went, "Well, we'll make what we built in .NET compatible with OpenTelemetry." It just has different names: what OpenTelemetry calls a span, .NET calls an activity. They've got baggage, they've got [inaudible].

Matt Turner: And the wire format's compatible.

Mark Rendle: Yes, exactly. And you drop in an OpenTelemetry library, and it's got automatic instrumentation of the HTTP stack, both sides, client and server, and for various databases: Redis; gRPC's got its own. And you just go, here's a Jaeger host, dump it all out.

Matt Turner: Send it over there.

Mark Rendle: It all just magically appears, and I can configure that Jaeger host in Linkerd as well. It's a kind of magic.

Matt Turner: Yes. No, it is. It is good, that stuff's got a lot better.

Technology cycles

Mark Rendle: So what year did you start professionally?

Matt Turner: When I graduated, 2008.

Mark Rendle: 2008. So that's like the same year as AWS first started renting out.

Matt Turner: Yes. I guess it is.

Mark Rendle: EC2 instances.

Matt Turner: I guess it is. I didn't touch AWS for a long time after that, but yeah, I guess it is.

Mark Rendle: Whereas I'd been going 18 years by that point. Like, I'm so old. But yeah, it is.

Matt Turner: It all comes in cycles. How long do you think the cycle is, then? Because all of these things get reinvented. We've got Kubernetes, isn't that great? All these kids running around at conferences, like, have you heard of Erlang? How long is that cycle?

Mark Rendle: So when I started, right at the beginning of 1990, there were eight of us working on a Tandon 286 running SCO Xenix. We had a table tennis table downstairs because the build took three to four hours.

Matt Turner: Well, that XKCD of "my code's compiling." It used to be that. It really used to be.

Mark Rendle: That was absolutely a thing. We had thousands of lines of code, so obviously it took...

Matt Turner: Well, in the embedded C job, the first job, we had maybe a million lines with all the vendors' libraries, and there was no cloud. There was no real way to do that.

Mark Rendle: And it does take a long time to do that.

Matt Turner: You could have had some kind of build box in the server room, maybe. We did try it, but it was difficult. So I had a super-overclocked Pentium 4 under my desk. IT bought the most expensive Pentium 4 they could get hold of. I was... I'm a gamer.

I went in one evening and overclocked the thing, because they'd over-specified these machines. I think they'd just got them from some supplier. I took the side off it once: it had this big copper heat sink and this sort of gaming-orientated motherboard, because you just had to get something that could deliver 280 watts to this stupid Pentium 4. So it's like, this is clockable. I got about 5 gigahertz out of it; 5.2 was like, stupid. It did used to make a lot of noise when it ramped up. But yeah, it got my build time down a little bit. But you would still go and get a coffee.

Mark Rendle: Yes.

Matt Turner: Like you were saying about the WPF thing, I remember we had a kind of similar problem. At the second job, the C# place, you'd walk in in the morning and press the button on the computer, because we turned our computers off every night, right, for energy efficiency. And it was still desktops, if anybody remembers what they are. You'd walk in, press the button, and then have to go and get a coffee while the thing started up, because it was spinning [inaudible 00:24:52] or whatever. So one week we were like, right, this is silly, and we built a little thing that plugged into the DHCP server. There was WiFi, and there were smartphones, just about, so you'd get close to the building and your phone would join the WiFi network. Everybody's MAC address was in a table, so it would recognize: oh, Matt Turner's phone has joined the WiFi network, there's the MAC address he's registered, he must be getting close. And it would send a wake-on-LAN packet to your desktop. You'd walk into the building, say good morning to everybody, get a cup of coffee, and by the time you got to your desk you could start your day: whatever it was, Windows 2000, popped up on the screen.
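The wake-on-LAN part of that hack is a well-defined packet format: six 0xFF bytes followed by the target MAC address repeated sixteen times, broadcast over UDP. A Python sketch; the MAC address is made up:

```python
def magic_packet(mac: str) -> bytes:
    """Build a wake-on-LAN 'magic packet': six 0xFF bytes followed by
    the target MAC address repeated sixteen times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16


pkt = magic_packet("00:11:22:33:44:55")  # made-up MAC
print(len(pkt))  # 102

# Sending it is a UDP broadcast, conventionally to port 9:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# s.sendto(pkt, ("255.255.255.255", 9))
```

The DHCP-watching side of their tool just mapped "this MAC joined the WiFi" to "send this packet to that person's desktop's MAC".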

Mark Rendle: That's genius. That's fantastic.

Matt Turner: It wasn't my idea. I can't claim that idea, but yeah, we did the...

Mark Rendle: Can you make this application start faster? No, but I can make it start automatically from 200 yards away.

Matt Turner: I can make it magic. How about that? Would you like some magic instead?

Mark Rendle: I love that.

Matt Turner: It was a fun time. It was a fully Windows estate, so it took us a long time, with the Windows server, the Windows domain controller, DHCP, to do the practical parts of it. But it was great. It was a good fun week.

Mark Rendle: I remember when I first got onto Windows. So, talking about the cycle: the application I was writing for like the first five years ran on Wyse terminals on Unix boxes, so you had what ANSI graphics could do, draw lines around things and stuff.

Matt Turner: Curses.

Mark Rendle: Yes, curses, and writing C code with SQL embedded in it between dollar signs. And then we went to Windows, and then everything was suddenly client-server. So all the logic was in your desktop client, and it was just a shared database on a server somewhere. And then...

Matt Turner: The actual logic.

Mark Rendle: Sorry.

Matt Turner: I was gonna say, you're lucky that the logic was on the client, because I've seen a lot of those where there's the client and then all the logic is in a stored procedure in the database.

Mark Rendle: Those too, those too. And then the web came along, and suddenly we were going, oh no, all the logic should be on the server as well, using SOAP and XML and all that sort of stuff. Then browsers got a bit smarter and you could do better things. So again, probably in the mid-2000s, you got to this thing where all the work was being done on the server and it was just spewing HTML out and sending it down to this dumb client. And I think that's a cycle, because when you were on one of those VT100 or Wyse terminals, right at the end of the '80s and start of the '90s, you had a central process that was just sending the characters to display to a TTY on the system.

Then the internet was essentially the same thing. It was sending HTML to say to the browser, here's the text and images and stuff to display.

Infrastructure as code

Matt Turner: What's a Chromebook, if not a mainframe terminal?

Mark Rendle: Exactly.

Matt Turner: But it's gone back the other way. So we don't server-side render anymore; Jamstack, it's cool.

Mark Rendle: So, yes.

Matt Turner: And I mean, I guess a lot of the compute is still done server-side, but all the rendering heavy lifting is done...

Mark Rendle: But it's done another cycle, because then everyone was like, oh, we've got to write single-page applications with Angular and all that sort of thing. And then, as you say, now there's Jamstack, and you've got Next.js and SvelteKit and Angular going server-side rendering. And it's like, weren't you the ones who told us to stop doing that?

Matt Turner: Thin clients that bind to your API and...

Mark Rendle: Yes. And the cloud: back in the first 10, 20 years of my career, if you wanted a new server, you filled out a form to justify it, and went to like three meetings to explain to increasingly senior people why you needed a new server. Then it would take six months for it to get built, shipped to the data center, and installed. And now it's kind of like...

Matt Turner: There's an API for that. Yeah.

Mark Rendle: And, or AWS CLI or Azure CLI or whatever

Matt Turner: Terraform.

Mark Rendle: And Terraform, Pulumi. Have you played with Pulumi?

Matt Turner: A little bit. I don't like the model. It just doesn't work for me.

Mark Rendle: I think Terraform.

Matt Turner: I like Terraform.

Mark Rendle: Terraform is the thing that you've got to displace now, isn't it? Terraform's the obvious answer, and if you wanna come up with a better way of doing it...

Matt Turner: I saw a really interesting talk from Crossplane yesterday, which might maybe be the answer. But yeah, I like Terraform because it's not Turing-complete. It's declarative. It's just a config file, basically.

Mark Rendle: Yes.

Matt Turner: "I would like there to be these things." And they're deliberately staying away from more than that; they've added a few more features, but there's no loops, there's no logic. Pulumi, like the AWS CDK and all these proprietary ones, is basically a library, like libcloud. So you write logic that's just gonna make API calls to Pulumi, and then Pulumi will make cloud resources. I find that inevitably people just shoot themselves in the foot when you give them that sort of Turing-complete logic to drive the thing.

Mark Rendle: And do you really want to be creating high-performance compute clusters in a loop?

Matt Turner: Right? Exactly. And then people were using it for a nice thing: people were trying to reconcile with it. So, I remember Terraform, and there was this tool called terradiff back in the day. I think maybe it should still be maintained, because Terraform is one-shot: you run it once and then maybe there's some drift. I'm sure you remember database drift problems back in the day, right, SQL Compare and all that kind of stuff.

Mark Rendle: I've written a couple tools for that.

Matt Turner: No, I used to work at Redgate, right: SQL Compare, the SQL Source Control stuff. It was a real problem. So you'd apply Terraform, and, again, I've seen this all before: you deploy a database and it drifts. So there was this terradiff tool, and you'd have it wired up to an alarm which would say, oh, the cloud's drifted from the Terraform. So people were, quite rightly, trying to use Pulumi like that; I saw a couple of examples of trying to just keep running the loop, right? Keep reconciling. But I think people would pollute that with a lot of decisions. So Crossplane (they'll shoot me if they watch this, I've probably got it completely wrong) is kind of like Terraform in that you give it Kubernetes-style YAML, a declarative declaration of what you want. It's a static thing, but you feed it into a system that then runs a reconciliation loop and just keeps it on course. It seems to be a nice model.
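The reconciliation-loop model Matt attributes to Crossplane reduces to a tiny idea: repeatedly diff desired state against actual state and converge. A Python sketch with invented resource names; real controllers obviously do far more than set arithmetic:

```python
def reconcile(desired: set, actual: set) -> set:
    """One pass of a Kubernetes/Crossplane-style reconciliation loop:
    create whatever is missing, remove whatever shouldn't exist."""
    to_create = desired - actual
    to_delete = actual - desired
    return (actual | to_create) - to_delete


desired = {"vpc", "subnet", "database"}   # the declarative YAML
actual = {"vpc", "orphaned-vm"}           # the drifted cloud state

actual = reconcile(desired, actual)
print(sorted(actual))  # drift corrected: matches the declaration

# A second pass is a no-op, which is the point: run continuously,
# the loop keeps the system on course instead of letting drift pile up.
```

This is the contrast with one-shot `terraform apply`: the same diff that terradiff would alarm on is instead corrected automatically on the next pass.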

Mark Rendle: So because I've just started working with this startup PolyScale. So they are CloudFlare for databases.

Matt Turner: Okay.

Mark Rendle: So if you've got a database in a United States data center somewhere, and you're deploying code that talks to it in a UK data center — so you've got sort of edge API locations but one central database — PolyScale sits as a sort of proxy-slash-cache in front of that database and caches the results of queries.

Matt Turner: It sounds like a read replica.

Mark Rendle: Basically, yes. But much simpler than a read replica.

Matt Turner: Oh, okay.

Mark Rendle: So you hit it with a query and it goes, oh, I've got that. I'll just ping that straight back to you. And, or I haven't got that. I'll get it and I'll cache it. And you can say time to live is like...

Matt Turner: I was gonna say, how do you do cache expiry?

Mark Rendle: So two ways you, either specify time to live...

Matt Turner: For the client.

Mark Rendle: ...manually at the — what they call the PoP — or it's got smart invalidation. So if it sees an insert, update or delete — because you're proxying every call to your database through this proxy —

Matt Turner: But only from that region.

Mark Rendle: But only for that region. Then you might not use it in the other region — but you might use it in a single region, where you've just got a read-heavy database and you want to take some load off it so you can run it on a smaller instance size or something. But yes, any time it sees data manipulation queries going through, it just invalidates the cache. It goes: that table's been updated, I'm invalidating all my cache that uses that table. Obviously the key thing when you're doing something like that — they're an early-stage startup, they've just got a good round of funding to expand and do some marketing and stuff, so they've been pretty much stealth up to now — but one of the key things is, at the moment they've got public PoPs in seven or eight Amazon data centers.
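The behaviour Mark describes — serve repeated queries from cache with a TTL, but drop everything touching a table the moment an INSERT/UPDATE/DELETE passes through the proxy — could be sketched like this. A toy model only, not PolyScale's actual implementation; the caller passes the table name in explicitly rather than parsing SQL:

```python
import time

class QueryCache:
    """Toy read-through query cache with TTL and table-based invalidation."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.entries = {}  # sql -> (result, table, stored_at)

    def execute(self, sql, table, run_query):
        verb = sql.strip().split()[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE"):
            # Smart invalidation: a write went through the proxy, so every
            # cached result that used this table is now suspect.
            self.entries = {q: e for q, e in self.entries.items()
                            if e[1] != table}
            return run_query(sql)
        hit = self.entries.get(sql)
        if hit and time.time() - hit[2] < self.ttl:
            return hit[0]                        # hit: ping it straight back
        result = run_query(sql)                  # miss: fetch and cache it
        self.entries[sql] = (result, table, time.time())
        return result

calls = []
def db(sql):
    calls.append(sql)
    return "rows"

cache = QueryCache(ttl_seconds=60)
cache.execute("SELECT * FROM users", "users", db)        # miss: hits the database
cache.execute("SELECT * FROM users", "users", db)        # hit: served from cache
cache.execute("UPDATE users SET active=1", "users", db)  # write: invalidates "users"
cache.execute("SELECT * FROM users", "users", db)        # miss again after the write
```

Only three of the four calls reach the database — the second SELECT is served from cache, and the write forces the last one back to the origin.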

Matt Turner: So they're like a VPC — they're like a partner, I guess, and you can VPC to them. Good stuff.

Mark Rendle: Yes. But if you say, oh, you don't have it in Australia West, or you don't have it in this one, they can spin up a whole PoP in an AWS data center in 20 minutes. Because it's a combination of Terraform to create a Kubernetes cluster, and then just Kubernetes manifests to actually run the code. And it's self-updating, because they've got something running inside the cluster that checks whether new images have been released.

It's absolutely insane. You just sort of think: that means I can spin up a PoP anywhere in the world to do this, and it's...

Matt Turner: Yes, it's crazy. And ironically, you've kind of come full cycle again, because if you tried to buy a server now, you can't. Because of the chip shortage — all the chips are going to GCP and AWS, basically, because the chip manufacturers have to keep them happy. So you just can't; you have to use somebody else's server.

Mark Rendle: Yes.

Matt Turner: That was kind of my way into DevOps, it was interesting. I was at a big telco provider as a software engineer, but ended up — the classic case, I guess — there was some infrastructure that needed doing and nobody else was around to do it. So I picked it up and, not to blow my own trumpet — it's not as impressive as it sounds, maybe — but I sort of wrote a Terraform before there was a Terraform, because this thing at least had an API. It was OpenStack.

Mark Rendle: Right.

Matt Turner: Which was — yeah, let's not go there, PTSD — but it was OpenStack, which was like a very early EC2, because OpenStack follows a lot of the EC2 APIs; OpenStack actually copied the EC2 API back in the day. But anyway, it was the first kind of — you run it on your own data center, you bought the hardware, but it was VMs and VXLANs and storage volumes with an API on the front. A very early sort of on-prem cloud. So you could click through the UI, but I was like, oh, I'm a programmer, there's a CLI. So I used the CLI, and before you know it, you've got a script, like a Bash file.

Then you get some loops in it. And then I was like, what I really want to do is describe things — I want to be able to reuse this stuff, because it's all hardcoded. I wanna write declarative descriptions. And it was in Bash. I had this thing where I wrote Bash scripts, and then you would write files — basically, if you wanted this kind of VM and that VM and that VM, you would have a bunch of files that would literally declare Bash variables.

The script would go through, source one and apply it, then source the next. So I kind of wrote Terraform before Terraform existed — but it was really shit, and in Bash. Then I did almost the same thing with — I don't wanna say Kubernetes, but we had more Bash that would take tarballs and extract them. All of this was my route into the DevOps stuff: seeing these kinds of problems and then making tools for them.
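That Bash-era workflow — a directory of files declaring variables, a driver script that sources each one and "applies" it — is essentially a poor man's declarative config. A Python analogue of the same idea, with the file contents and field names invented for illustration:

```python
def parse_spec(text):
    """Parse KEY=value lines -- like sourcing a file of Bash variables."""
    spec = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            spec[key.strip()] = value.strip()
    return spec

def apply_all(spec_texts, create_vm):
    """Source each spec and apply it -- the loop the Bash script ran."""
    return [create_vm(parse_spec(text)) for text in spec_texts]

specs = ["VM_NAME=web1\nVM_SIZE=small",
         "# comment line, ignored\nVM_NAME=db1\nVM_SIZE=large"]
# create_vm would call the cloud API; here it just returns the name.
created = apply_all(specs, lambda spec: spec["VM_NAME"])
```

The specs are static data, the loop is dumb — which is exactly the separation of declaration from execution that Terraform later formalized.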

Kubernetes: the modern evolution of old mainframes

Mark Rendle: There's a serverless... an open-source serverless thing that I think is largely written in Bash, that schedules things.

Matt Turner: What's the Kubernetes one called — Metacontroller or something? Where you wanna write a controller, because there's client-go, right, but they've brought that out to a nice interface so you can hook it with any language you want. I think it just makes REST calls or something. No — I think it executes a directory of hooks. Maybe it just executes anything with an x bit set: you give it a directory, so you can write Ruby or whatever. You can totally write Bash. You can totally write controllers and admission webhooks in Bash. You maybe shouldn't, but you can.
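The "directory of executable hooks" pattern Matt is half-remembering (whether or not Metacontroller actually works this way) is easy to sketch: scan a directory, run anything with the execute bit set, in any language. A minimal Python version, assuming a POSIX system with /bin/sh:

```python
import os
import subprocess
import tempfile

def run_hooks(hook_dir):
    """Execute every file in hook_dir that has an execute bit set, in
    sorted order, collecting each hook's stdout -- any language works."""
    outputs = []
    for name in sorted(os.listdir(hook_dir)):
        path = os.path.join(hook_dir, name)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            result = subprocess.run([path], capture_output=True, text=True)
            outputs.append(result.stdout.strip())
    return outputs

# Demo: one executable shell hook, one plain file that gets skipped.
with tempfile.TemporaryDirectory() as d:
    hook = os.path.join(d, "10-hello")
    with open(hook, "w") as f:
        f.write("#!/bin/sh\necho reconciled\n")
    os.chmod(hook, 0o755)               # the x bit is what makes it a hook
    with open(os.path.join(d, "notes.txt"), "w") as f:
        f.write("not a hook")           # no execute bit, so never run
    hook_output = run_hooks(d)
```

The dispatcher doesn't care what language a hook is in — which is the whole appeal.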

Mark Rendle: But Kubernetes — talking about the cycles, and you've got sort of cycles within cycles — in the mainframe days you would write your code, and then you would ask the mainframe to schedule it to run. And it would sit there running it in a background process, or run it once and give you the results, and all this sort of stuff. So Kubernetes is basically the modern evolution of the old mainframes.

Matt Turner: Yeah. Sort of.

Mark Rendle: Discuss.

Matt Turner: I'd say more like a Plan 9, because people are saying, well, Kubernetes is the new POSIX, and it is this interface that you can sort of write systems against. I'd say it's more like a distributed, multi-processing operating system. Maybe more like Plan 9, I guess — very networked — but I totally take your point. I think if you see a lot of the high-performance compute stuff that's going on, a lot of the big batch processing — so CERN did a KubeCon keynote, right, because they did their number crunching for the Higgs boson stuff on a massive Kubernetes cluster.

They did a keynote where they explained it, and it was really good — they actually had the cluster live and they ran it again; they re-discovered the Higgs boson on stage. They started it off as the talk began, and by the end it had crunched enough numbers to do it.

Mark Rendle: That's quite impressive.

Matt Turner: It was very cool. But that was Kubernetes with a lot of the moving parts replaced — the scheduler, for example. So you see people like CERN and a lot of the financial institutions who are doing batch runs: they're either using heavily modified Kubernetes, or they're on Slurm or OpenMPI or one of these other batch-processing systems. You can make Kube that extensible — you can make it do that, but...

Mark Rendle: But the processing power now... So I did some work for McLaren Racing way back in the very early Azure days. The rules — Formula 1 keep changing the rules to try and make it possible... well, to make it so that the teams with huge amounts of money, like Ferrari and Mercedes, aren't just buying championships. So they put limits on how much you can spend.

Matt Turner: There's been a big reset of the rules as well.

Mark Rendle: Hasn't it?

Matt Turner: Everybody's started from a clean sheet.

Mark Rendle: But back whenever this was, they put a limit on the amount you could spend on computer hardware.

Matt Turner: Oh, interesting. Those were the days of the crazy aero era though, weren't they?

Mark Rendle: Yes.

Matt Turner: I guess that was why — because if you can't afford a wind tunnel or you can't afford a supercomputer, you were just out of luck.

Mark Rendle: But when I turned up, they had MATLAB running in what they called their MATLAB data center, which was actually eight laptops in a cupboard. This is McLaren, right?

Matt Turner: Yes

Mark Rendle: This is McLaren Racing. Every day I was there, you'd walk past Lewis Hamilton's race-winning McLaren and Bruce McLaren's orange monstrosity with a rear wing six feet up in the air.

Matt Turner: Oh, that was one of the F1 like GTM...

Mark Rendle: Yeah — back when it was just the wild west of racing.

Matt Turner: It's unreal, because they 3D-print titanium and they do all kinds of stuff.

Mark Rendle: Yes, they've got this bonkers stuff. People would submit their MATLAB programs to this internal batch processor, they would run overnight, and they'd come in in the morning and their results would be done.

I understand very little of maths, and I certainly don't know any MATLAB, but I knew about Azure. So I managed to create a compute image template that included MATLAB, ran the installer, and managed to apply a license key.

Matt Turner: Yes.

Mark Rendle: So it went from eight laptops sitting in a cupboard to: how many machines do I need to spin up to parallelize this, so that I can get the results in under an hour? Because at the time, if you had a machine on for a second, Azure charged you for the whole hour. How close can I get to 59 minutes and 59 seconds without going over?

Matt Turner: Not one second over, because then you've blown your spend with the FIA.

Mark Rendle: I think the estimate was that 55 minutes was your target, so if it ran over a bit, you still had time for the startup and the shutdown and so on. And it was just spinning up a hundred machines to run one person's job — but it was a hundred machine-hours, and Microsoft obviously sponsored McLaren at the time, so I'm not sure they were paying for their Azure.
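The sizing question Mark describes — hourly-rounded billing makes "as many machines as it takes to finish just under an hour" the sweet spot — is simple arithmetic. A sketch with invented numbers (not McLaren's), assuming the job parallelizes perfectly, which real jobs don't:

```python
import math

def machines_needed(total_compute_hours, target_minutes=55):
    """How many machines to parallelize a batch so the wall-clock run
    fits inside one billed hour, with headroom for startup/shutdown."""
    return math.ceil(total_compute_hours * 60 / target_minutes)

def billed_hours(machines, wall_minutes):
    """Early Azure rounded each machine's usage up to whole hours."""
    return machines * math.ceil(wall_minutes / 60)

m = machines_needed(90)        # a hypothetical 90 compute-hour MATLAB batch
assert m == 99                 # ~99 machines running for ~55 minutes
assert billed_hours(m, 55) == m        # billed exactly m machine-hours
assert billed_hours(m, 61) == 2 * m    # one minute over: the bill doubles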

Matt Turner: Interesting. But yeah, it's how much compute do you want?

Mark Rendle: And now you've got these Kubernetes clusters, and you've got nodes with NVIDIA GPUs installed that are just crunching through machine learning.

Matt Turner: I wonder whether any of the crypto people have ever used Kubernetes, because you get this more and more specialist hardware, right? So originally you mined crypto on a CPU, and then it was like, oh, GPU is the way to go, because it's more specialized, and then it moved on to ASICs quite quickly — application-specific circuits, literally just hardware, FPGAs or fabbed chips that just did the right kind of hash algorithm. I wonder whether anybody ever used Kubernetes as a control plane for that, but...

Mark Rendle: Probably.

Matt Turner: I'm just thinking that's maybe where the financial stuff or whatever is gonna go, because they've gone from CPUs to GPUs. There's a reasonably new Kubernetes feature about managing arbitrary resources, because those things are finite, right? You've got a certain number of GPUs or whatever plugged into a box.

You need to allocate them, and the scheduler needs to know that this node is full — not in terms of CPU and RAM, but in terms of: it's got five ultra-fast NICs or five graphics cards or something. A pod has to have precisely one, and has exclusive use of it, because they can't be virtualized. That's recently been extended to model any other kind of finite hardware you can think of. So I do wonder.
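The constraint Matt describes — a node is "full" once its finite devices are claimed, and each pod gets exclusive use of exactly one — can be sketched as a toy placement step. This is the idea behind Kubernetes device plugins, not their actual code; the node names and the single-GPU demand are invented:

```python
def schedule(pod, nodes):
    """Place a pod that demands one exclusive device.
    nodes: {name: {"gpus_free": int}}. Returns the chosen node or None."""
    for name, node in nodes.items():
        if node["gpus_free"] >= 1:
            node["gpus_free"] -= 1   # claim the device exclusively
            return name
    return None                      # every node is "full" in device terms

nodes = {"node-a": {"gpus_free": 2}, "node-b": {"gpus_free": 0}}
placements = [schedule(f"pod-{i}", nodes) for i in range(3)]
# Two pods land on node-a; the third is unschedulable even though CPU
# and RAM may be plentiful -- the finite devices are exhausted.
```

Counting whole devices rather than slicing them is exactly what distinguishes this from CPU/RAM scheduling: the resource can't be virtualized, so it can't be oversubscribed.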

Mark Rendle: See, this is the stuff that makes me just... there are levels of knowing Kubernetes. My level of knowing Kubernetes is: I can write a deployment and a service; the deployment creates one or more pods, and the service exposes them to the rest of the cluster. I can work with secrets and certificates — but the very, very basic stuff.

Matt Turner: You should watch my LinkedIn Learning courses on Kubernetes.

Mark Rendle: I will.

Matt Turner: I've been deep. I've been deep. It's...

Mark Rendle: Yes.

Matt Turner: Having done OpenStack, and having written a web browser — that was my first job — it's not bad. It's not bad. There's some complexity, but the complexity is usually necessary. Like, Go is great in some ways and really doesn't help in others. There are goods and bads, but it's not bad.

I've been deep, and you see some things — and then you come back and you're like, that's why it works like it does. That explains the slightly odd behavior I'm seeing: it's because of this weird thing buried down there.

Mark Rendle: But the thing I find really strange with Kubernetes now is that it's not just one thing — this is Kubernetes — because you've got MicroK8s, you've got k3s, k0s. Is that a thing?

Matt Turner: That is a thing. And then minikube, kind, and then the distributions — OpenShift and whatever — as well.

Mark Rendle: So you've got different implementations. Kubernetes is basically now just a standard.

Matt Turner: They're all based on the same code. The Kubernetes API, I guess, is a standard, and it could be — you do get people reimplementing schedulers and stuff to that standard. But all of the things you mentioned are actually the same codebase. They're more like distributions.

Mark Rendle: So like Linux distributions.

Matt Turner: Yes. K3s and k0s are designed to be small. They statically link it all into one binary, and they skip things — it's like a small Linux distribution. You mostly just drop drivers, right? The first step to shrinking your Linux, once you know what hardware you're targeting, is to strip out all the drivers you don't need. Only after that do you start taking more important things out of the kernel — like maybe for a microcontroller the memory manager goes, because you haven't got virtual memory.

At some point you go from hot-plug to a device tree and all that. So yes — k0s and k3s are just small: they strip components, they optimize the build. The distributions, things like OpenShift, actually add stuff on top — different controllers and so on. As far as I'm aware, they're all based on the same codebase, but there's nothing to stop people reimplementing it. But at the moment...

Mark Rendle: You could, if you wanted to, reimplement the same APIs using Rust and...

Matt Turner: You absolutely could. There's very good Rust support now for writing controllers. You could rewrite the kubelet in Rust, or the scheduler. The bottleneck with Kubernetes scaling certainly used to be etcd — it is a database, after all. That was the one thing that was hardcoded into Kubernetes in the early days: the API server code especially used the etcd library, without any kind of facade in front of it, in a whole load of places in the depths of the code. A lot of work was done to rip that out and make it pluggable. So now, I think I'm right in saying, k3s uses SQLite.

Mark Rendle: I think it does.

Matt Turner: Something like that. That's now replaceable, and it's actually happening in both directions. People are using SQLite, and maybe even in-memory stores, for smaller clusters; and for bigger clusters — because etcd was usually the scaling bottleneck — they're using some replacement. I think there's maybe a facade that lets you use Postgres so you can scale out, or something — don't quote me on it. That seems like the first place you'd go: you'd use TiKV or something, right, and just go Rust there. But there's no reason not to reimplement the rest of it.

Building an alternative to Facebook

Mark Rendle: Because one thing I was looking at — I have this notion to take down Facebook. Someone has to. The idea is to build a social network where you eventually buy something like an Amazon Echo device, a thing that goes in your house. You connect it to the WiFi, and that's your node of the social network. Your updates and your photos and everything go onto there, and then you can connect to other people's nodes in their houses, and it constructs the feed and everything. The front end would just be like a SPA that's using OIDC, open auth.

Matt Turner: And adding a friend is essentially an allow list.

Mark Rendle: Yes, exactly. So your friend goes, "I want to connect with you," and you, "Oh. I will allow you to connect with me and..."

Matt Turner: We'll do a key sign and we'll do a key exchange.

Mark Rendle: Here's my public key. And you go, "Here's my public key." Then when you want to exchange information, you use public-key encryption, and they can decrypt it on their side using the private key they've still got, and all that sort of thing. And then they expose all of those things to the internet using Cloudflare Tunnels.
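The friend-request flow they're sketching — allow-list first, then swap public keys, then only accept traffic from allowed peers — could be modelled like this. A toy state machine only: the "keys" here are opaque strings and there is no real cryptography involved:

```python
class Node:
    """Toy social-network node: allow list plus public-key exchange.
    No real crypto -- public keys are stand-in tokens for illustration."""
    def __init__(self, name):
        self.name = name
        self.public_key = f"pubkey-of-{name}"   # placeholder, not a real key
        self.friends = {}                        # name -> their public key
        self.inbox = []

    def befriend(self, other):
        """Mutual allow-listing with key exchange, as in the dialogue."""
        self.friends[other.name] = other.public_key
        other.friends[self.name] = self.public_key

    def receive(self, sender_name, message):
        if sender_name not in self.friends:      # not on the allow list
            return False
        self.inbox.append((sender_name, message))
        return True

alice, bob, mallory = Node("alice"), Node("bob"), Node("mallory")
alice.befriend(bob)
accepted = bob.receive("alice", "hello")   # on the allow list: delivered
rejected = bob.receive("mallory", "spam")  # never befriended: dropped
```

In a real build, the stored public key is what you'd encrypt outbound messages with, so only the allow-listed peer's private key can read them.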

Matt Turner: All right.

Mark Rendle: Until Cloudflare ask me please to stop because I've got a billion users and they can't handle it.

Matt Turner: That'd be a good problem to have.

Mark Rendle: A good problem.

Matt Turner: I'm sure there have been attempts to do things like that — decentralized, maybe not going as far as you're suggesting, but using torrents essentially, or something, just to keep the information in the individual's hands, like you're saying. I mean, maybe you do encrypted n-plus-three replication for backups or something — encrypted, privately.

Mark Rendle: As you add a friend who's in another location, you would basically go: that friend I trust, so can you please — and they've said that I can use their node for backing up my data, or for making my data accessible in case my internet goes offline. But anyway, yes. The idea is to write the software as open source and distribute it as an image you can install on a Raspberry Pi, get geeks using it first, and then go, "Hey, look, this is great," and then get funding. And the business model is selling the devices, selling the hardware — so it's a hardware business rather than a software business. But obviously coping with updates and things is challenging when you've got a piece of software that's running, and then you need another piece of software running to see if an update is available, that can download it, somehow install it while the first one's still running, and then seamlessly switch over. It's actually easier to put k0s or k3s on the Raspberry Pi and distribute the software as a Docker image. Then I found out that you can actually use the .NET Kubernetes client to talk to Kubernetes from inside the cluster.

Matt Turner: Oh yeah.

Mark Rendle: It's essentially the default behavior, if you don't give it any other information.

Matt Turner: Yes. It'll discover the cluster.

Mark Rendle: You can literally just go, "Hello, I'm running. Do a rolling update with this new version of this image, and then I'll die when..." And Kubernetes just takes care of all that complexity for you.
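The complexity Kubernetes absorbs there is the rolling update itself: replace pods one at a time so that at every moment most replicas are still serving. A toy simulation of that loop — not the real controller logic, and ignoring readiness probes and surge settings:

```python
def rolling_update(pods, new_image):
    """Replace pods one at a time; at every step, at least len(pods)-1
    replicas are running, which is what keeps the rollout seamless."""
    states = []
    for i in range(len(pods)):
        pods[i] = new_image          # terminate old pod, start replacement
        states.append(list(pods))    # snapshot the fleet after each step
    return states

pods = ["app:v1", "app:v1", "app:v1"]
history = rolling_update(pods, "app:v2")
# Mid-rollout the fleet is mixed; by the last step it's all on v2.
```

The real controller also waits for each new pod to pass its readiness probe before touching the next one — that's the part you'd least want to hand-roll on a fleet of Raspberry Pis.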

Matt Turner: At the cost of using some resources. Like, you could use CoreOS or Flatcar or something, take it back a level, and do your own rolling-update orchestration — because then you've got the Docker host but you don't need the rest of Kubernetes. But how do you update Kubernetes, though? Who watches the watcher?

Mark Rendle: I guess if you used Ubuntu, you could use MicroK8s, which is a snap, right? And then snaps auto-update.

Matt Turner: How do you update Ubuntu?

Mark Rendle: How do you update Ubuntu? Well, you know, eventually, you say to people, you need to buy a new piece of hardware.

Matt Turner: Business model sounds flawless.

Mark Rendle: Yes.

Matt Turner: No, sorry, I'm being facetious. But do you wanna know how switches and this kind of embedded system usually do it? Normally it's not seamless — that's the thing — and it doesn't need to be. You normally have two partitions on the disk. The software is built into an operating system image — basically like a snap, right, but a whole operating-system image that you just copy onto disk. So if you're talking about a switch or a router or something, you take the packages, whatever you'd install into Linux, you snapshot the disk, you have two partitions, and you boot off partition A. Then the update is: partition A is doing its thing, routing packets, but it also downloads the new image and dumps it onto partition B.

Then you reboot into B. If for some reason there's a problem and it doesn't start, the firmware is clever enough to go back to A. Otherwise you run off B until there's another update, which B streams down onto A, and then you flip over again.
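The A/B scheme Matt describes — run from one slot, stage the update into the other, reboot into it, and fall back on failure — is a small state machine. A sketch (a toy model, not any particular vendor's firmware):

```python
class Device:
    """Toy dual-partition updater: boot from one slot, stage updates
    into the spare, and fall back if the new image fails to start."""
    def __init__(self):
        self.slots = {"A": "v1", "B": None}
        self.active = "A"

    def stage_update(self, image):
        spare = "B" if self.active == "A" else "A"
        self.slots[spare] = image        # download while the active slot works

    def reboot(self, boots_ok):
        spare = "B" if self.active == "A" else "A"
        if self.slots[spare] and boots_ok(self.slots[spare]):
            self.active = spare          # flip over to the new slot
        # else: firmware falls back to the old, known-good slot

d = Device()
d.stage_update("v2")
d.reboot(boots_ok=lambda img: False)   # new image "fails to boot"...
# ...so we're still safely on A; try again when the image is good:
d.reboot(boots_ok=lambda img: True)
```

The key property is that the old image is never overwritten until the new one has proven it can boot — which is why the scheme doesn't need to be seamless to be safe.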

Mark Rendle: Cool.

Matt Turner: Then if you need seamless upgrades, well, if you need zero downtime, you just have two of those running in a pair. Right. While one's doing the flip over, the other one takes over.

Mark Rendle: Well, obviously the zero downtime hardware will be bigger and more expensive.

Matt Turner: Yes. You know, so premium.

Mark Rendle: Yes.

Matt Turner: So Macbook premium.

Mark Rendle: My idea is that we start off with literally just the Amazon Echo puck-style thing, which you can only access through a browser interface. We sell them in packs of six, so that people like us buy a pack of six and then: here's your Christmas present, we're all getting this, you're all signed up. But then Robert Scoble and Elon Musk can buy the Ferrari version of that, with a terabyte of solid-state storage and multiple redundancy and all that sort of thing. But anyway, the software bit of it's gonna be open source, so when the time comes, I'll just ping you and you can go: yep, so this is that dual-partition thing. No, not Rust — .NET, because yeah, you can now.

Matt Turner: Yes, no, you can.

The programming language ecosystem today

Mark Rendle: .NET is cool now. Well, it's really... it's got a real image problem. And Microsoft...

Matt Turner: I feel like Java is always gonna have that image problem.

Mark Rendle: That's the thing, it's kind of...

Matt Turner: C# is a good language. I mean, Java, the JVM — I think people are always going to remember the performance problems and stuff. It's a tricky thing. But I think Java's always going to have the image problem, and it's fine, because Kotlin is just better. We tried Scala — if you like functional programming, fine. If you've got a team of people who know Scala really well, you can get some great stuff done, but it's basically unreadable unless your head's really in it. I mean, I write Scala, and I can't read my own Scala six months later.

Mark Rendle: It's what they call write-only, isn't it?

Matt Turner: Yeah. Clojure — I mean, brackets. Yeah, you did Lisp at uni and you wanna keep doing that, right? But Kotlin is a very solid language that works on the JVM. C# — we don't need another one. C# is a good language. Anders Hejlsberg, or whatever the guy's called.

Mark Rendle: Anders Hejlsberg, and now Mads Torgersen — because Anders went off to do TypeScript, which has also been very successful.

Matt Turner: It's a good language, yeah. I hope it can shake that image, because it's getting gRPC, and as you say, they're doing it properly. It's not an afterthought like it is in some languages.

Mark Rendle: It's properly built in. The really nice thing is what you can actually run alongside it. So you can run an HTTP API as well. You know gRPC-Web?

Matt Turner: Yes.

Mark Rendle: Which is the thing so browsers — which don't implement HTTP/2 properly — can talk to gRPC services. Normally you run that as a proxy, a Docker image; in .NET, they've implemented it as middleware. It literally hooks into your HTTP pipeline and then talks to the gRPC service, which is running in the same process, just by calling it, and returns the results back out. And in .NET 7, they're implementing gRPC JSON transcoding, which is this new thing where...

Matt Turner: It's just a JSON encoding of the protobuf, basically.

Mark Rendle: It's basically replacing the protobuf with JSON — and I think doing something with it anyway — but yeah, so browsers will be able to talk to that as well.
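The transcoding idea — accept plain HTTP/JSON from a browser and route it to the same strongly-typed service method a protobuf caller would hit — can be sketched like this. Illustrative only: the route, method and field names are invented, and real gRPC JSON transcoding derives its routes from annotations in the .proto file rather than a hand-built table:

```python
import json

# The "service implementation" a native gRPC caller would also reach.
def say_hello(request):
    return {"message": f"Hello {request['name']}"}

ROUTES = {("POST", "/v1/greeter/SayHello"): say_hello}

def transcode(method, path, body_json):
    """Map an HTTP+JSON request onto the typed service method, then
    serialize the typed response back to JSON for the browser."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, "{}"
    response = handler(json.loads(body_json))
    return 200, json.dumps(response)

status, body = transcode("POST", "/v1/greeter/SayHello", '{"name": "Matt"}')
```

The service method never knows whether its request arrived as protobuf over HTTP/2 or JSON over plain HTTP — which is the whole point.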

Matt Turner: I would argue middleware is actually not the right place for that, but... it's kind of Microsoft thinking, maybe.

Mark Rendle: It is just, it's I may be slightly responsible. So have you ever done Ruby?

Matt Turner: A little bit.

Mark Rendle: So, you know Rack?

Matt Turner: Yes.

Mark Rendle: And Python has WSGI?

Matt Turner: Whatever it, yeah.

Mark Rendle: It was this idea of building up middleware pipelines. And so a bunch of .NET people, including myself, went: we should have something like that in .NET. We defined this standard where essentially the HTTP request was a map, a dictionary, and the response was a dictionary, each with potentially a stream if there was a body, and then you could build up this middleware pipeline. Which, at one point — because we were trying to make it work with .NET 3.5, I think — which didn't have tasks.

Matt Turner: Oh, no, it didn't.

Mark Rendle: No, it didn't — tasks were .NET 4. So rather than having tasks, we had a lot of callbacks. And we called it the delegate of doom: if you had 80 columns in your code, just the definition of this delegate was like five lines long. And then we went: you know what, that's stupid. Let's just use tasks and make it .NET 4.
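The dictionary-based middleware design Mark describes — request and response as plain dictionaries, middleware composed into a pipeline, each layer calling the next or short-circuiting — looks like this in miniature. Python standing in for C# here; in the real thing the app delegate is asynchronous, which is exactly where the "delegate of doom" callbacks came from:

```python
def logging_middleware(next_app):
    def app(env):                       # env: the request dictionary
        env.setdefault("log", []).append(env["path"])
        return next_app(env)
    return app

def auth_middleware(next_app):
    def app(env):
        if env.get("user") is None:     # short-circuit the pipeline
            return {"status": 401, "body": "unauthorized"}
        return next_app(env)
    return app

def hello_app(env):                     # the terminal application
    return {"status": 200, "body": f"hello {env['user']}"}

# Build the pipeline outside-in, like chained Use(...) registrations.
pipeline = logging_middleware(auth_middleware(hello_app))

resp = pipeline({"path": "/hi", "user": "mark"})
denied = pipeline({"path": "/hi"})      # no user: auth layer refuses
```

Because every layer sees the same dictionary shape, authentication, static files, logging and the application itself all compose the same way.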

Matt Turner: gRPC on .NET 3.5.

Mark Rendle: Exactly. But then Microsoft implemented the official version of that, which was OWIN, the Open Web Interface for .NET. And when they did .NET Core, it was essentially an extension of that — they used the same idea. So we have middleware pipelines for your authentication and...

Matt Turner: Yes.

Mark Rendle: Static files and...

Matt Turner: No, I didn't realize .NET didn't have one. Yeah — definitely not arguing with that for adding logging and crash handling and stuff. I'm just not sure gRPC is...

Mark Rendle: They stuck gRPC in there as, essentially... I don't think it's middleware as such. You've also got endpoints — you've got multiple endpoints, and routing deals with all of that. But yeah, like I say, it's the second-fastest gRPC implementation, behind Rust, and it's not that far off. And you don't have to write Rust.

Matt Turner: I love writing Rust.

Mark Rendle: Everybody loves Rust, but it's terrifying. It's the lifetime thing — lifetimes. I've got ownership; I struggle with lifetimes. I will get there; I just need to write something real in Rust.

Matt Turner: I think I get the concept, but actually, practically applying it — actually doing it... Like, yeah: in an HTTP handler, you make some data and you save it on the heap — obviously, because you've got a pointer to some tree or linked list or whatever data structure — and then you wanna get the thing out again, from a different HTTP handler. It'll only let you do that if you can prove to the compiler that it's not gonna get dropped, because there's no GC. You've gotta prove that it's not gonna get dropped. So you clone it, but then... And yeah, you promise that this lives as long as that — if you give it some hints, like, this should live as long as this and that's safe, then it can go and tell whether your assertion is correct. It's terrifying. And the terrifying thing is: I like writing it as a hobby, but if I could persuade my employer to take it up, I'd just be terrified that I'd be coding away, all good, and then I'd get to a Thursday afternoon and get utterly stuck on some lifetime issue, and I'd be there for three weeks. Like, how do I make this work? All productivity stops while you're fighting some borrow-checker edge case for days, and you just can't get past it.

Mark Rendle: So, I love the error messages. I always say Rust error messages — it's like the compiler's just giving you a hard time... Do you watch Taskmaster?

Matt Turner: Very stupid.

Mark Rendle: Sometimes Greg Davies takes one of the comedians off and has a little chat with them, and the Rust compiler's like that. It's like: no, see, that owns that, so this thing can't change it. Can it? No. No. So what are we gonna do about this? Yeah. Okay. You figure it out.

Matt Turner: I don't find it quite so intimidating, but yeah — it is like the 6-foot-8... because Greg Davies is huge.

Mark Rendle: It's very helpful. Let's put it that way.

Matt Turner: It is like the sort of granddad giving you a hug and saying: let me teach you some things about the world. Like: you tried, we all respect that, but if you're gonna make this work... maybe I'll give you some hints.

Mark Rendle: And I still remember, sort of 40 years ago, having C compilers that would just go, "Nope. Nope. That doesn't work. It doesn't compile." Not telling you why not.

Matt Turner: Nope. Error one.

Mark Rendle: There is an error. It's in this file, it's at this line, and it's at this character — and actually it's not, it's eight lines earlier. That was just the point at which it stopped being able to make sense of whatever garbage you'd thrown at it.

Matt Turner: The semantic ones, I mean. Very few — I remember the early C compilers would basically compile just about anything as long as the syntax was okay. They would let you do just about anything, even if it had completely undefined effects, either in the C standard or on your CPU. Nowadays they'll be like, oh, you might wanna put a memory barrier in there — they never used to do that. But if you missed a bracket, they would just blow up, because one of the things the C standard is very clear about — and it's not clear about much — one of the things it does specify is that you've gotta be able to parse and compile it in one pass.

The whole language is like that — that's why declarations come first. You've gotta declare variables before you use them; you've gotta declare functions before you use them, either in a header file or static; you've gotta be able to go through in one pass, top to bottom. I think it was to save memory, so the compiler doesn't have to backtrack — the grammar is so simple that you never have to backtrack. Basically you don't have to store much state in your compiler, in your lexer — and I'm forgetting the computer-science degree I've got, the type 1, 2, 3, 4 languages, it's one of those. Anyway, so yeah: you would miss a bracket and then, as you say, it would blow up ten lines later. And it was always like, "What have I done?"
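C's declare-before-use rule is what makes a single top-to-bottom pass possible: the compiler only ever needs the symbols it has already seen. A toy one-pass checker shows the idea — not a real C parser, just a minimal instruction format invented for illustration:

```python
def one_pass_check(lines):
    """Single top-to-bottom pass: 'decl x' adds a name to the symbol
    table, and 'use x' must find it there already -- no backtracking,
    no second pass, no state beyond the table itself."""
    declared = set()
    errors = []
    for lineno, line in enumerate(lines, start=1):
        op, _, name = line.partition(" ")
        if op == "decl":
            declared.add(name)
        elif op == "use" and name not in declared:
            errors.append((lineno, name))   # used before declaration
    return errors

ok = one_pass_check(["decl x", "use x"])
bad = one_pass_check(["use y", "decl y"])   # fine in C# or Java, not in C
```

A language that allows use-before-declaration forces either a second pass or buffering the whole file — exactly the memory cost the one-pass design was avoiding.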

Mark Rendle: It's kind of like missing a bracket — boom. These days I use Visual Studio or Rider or VS Code — one of those three, or the various JetBrains IDEs. So I wrote Kotlin for the first time a couple of weeks ago, which was a nice experience actually. But yeah, I've got Rainbow Brackets installed on all of them, so all my brackets are different colors and...

Matt Turner: Well, they just don't indent if you...

Mark Rendle: And I've got indentation lines drawn on the UI and stuff.

Matt Turner: You can tell if you miss a bracket because the indentation would be wrong.

Mark Rendle: Yes. Whereas back in the day it was VI.

Matt Turner: It was VIM, but it couldn't do all that much.

Mark Rendle: Yeah, VI on a monitor, which couldn't do syntax highlighting because it was literally black and white.

Matt Turner: Right? Oh, you didn't even have an orange and black one. They were so...

Mark Rendle: So when I was junior, I had green and black. Then I went off freelancing for a bit for this company and then came back and became the person in charge of the training program, taking on the new people. And at that point I got promoted up to a Wyse 120 Amber and Black.

Matt Turner: I was gonna say, there was definitely a pecking order, right?

Mark Rendle: A hundred and twenty columns. Oh, it was.

Matt Turner: What are you gonna do with all that space? Well, the fact is you just take the screen and split it in half. What would I do with the other 80 columns? My coding standard says 80 columns. There were no Git indicators in the left margin, there was nothing.

Mark Rendle: I can't remember, but yes.

Matt Turner: You were this white and black?

Mark Rendle: We started having to write an RC file for VI to tell it you've got 120 columns. Otherwise, it assumes you haven't, so.

Matt Turner: It's all better than, what was it? Abort, ignore, cancel. Is that it?

Mark Rendle: Abort, retry, ignore.

Matt Turner: Something like that.

Mark Rendle: Abort, retry, ignore were the three choices, A, R and I.

Matt Turner: It was DOS, wasn't it?

Mark Rendle: But there was also abort, retry, ignore and cancel. You used to get that sometimes.

Matt Turner: Yes, you did, didn't you? No help. Right? No documentation.

Mark Rendle: Yes.

Matt Turner: You couldn't even Google back then. You couldn't even Google, like what on earth this meant?

Mark Rendle: To this day, when you try and close a Microsoft application without saving your changes, it goes: you have unsaved changes, do you want to save them now? Yes, No, Cancel. And you're kind of like, right. It's only because I know that what Cancel cancels is closing Word, closing Excel, closing Visual Studio.

Matt Turner: Yes means save it and quit.

Mark Rendle: Yes save it and quit. No, don't save it.

Matt Turner: Or throw it away and quit. Yes.

Mark Rendle: Because I've made a terrible mess of that paragraph. And cancel is, no don't quit.

Matt Turner: Yes. And I think the DOS ones were similar, they just used even worse words. I think abort was: stop trying to do whatever you were doing. You're trying to run a program and you can't read from the disk, right? Abort was: stop trying to run that program. Ignore was the spicy one actually, it was like the No equivalent. Ignore was: it doesn't matter that you can't read it, carry on, leave that piece of memory.

Mark Rendle: Do what you can.

Matt Turner: You're trying to read, and there's a bad sector on the disk. Leave that section of RAM uninitialized, set the program counter there and just execute whatever garbage was in RAM, because you couldn't read it off the disk. Ignore was the one you actually didn't wanna press. And then retry was obviously trying to read the disk again, and I've forgotten what cancel did. Maybe cancel was just: turn the system off.

Mark Rendle: Shut the whole thing down. Yeah. To be fair, all I ever actually used DOS for was playing Doom and Wolfenstein, so...

Matt Turner: Typing Win.

Mark Rendle: Yes. And typing Win.

Matt Turner: Win. Oh, and closing Windows when you needed to get back to DOS to play Doom and Wolfenstein.

Mark Rendle: Yes. I'll tell you something when we've stopped recording about Windows and the old win.com file, but it is not suitable for broadcast.

Matt Turner: Interesting.

Conclusion

Mark Rendle: So yes, we should... I think we've established that things move in cycles. And that's good.

Matt Turner: Nothing is new. And some of them get better. Kubernetes is better than OpenStack. .NET is better than it used to be.

Mark Rendle: .NET is better than .NET. Kotlin is better than Java. Go is good.

Matt Turner: Go is better than C.

Mark Rendle: Go is definitely better than C.

Matt Turner: Like, it has problems, don't get me wrong. Every time I get frustrated with it, I'm like, no, this is C with the really worst bits fixed. And that's all it was ever actually meant to be: C with null pointer exceptions and header file inclusion and stuff fixed, and that's fine.

Mark Rendle: Better than C is a low bar, though. There are tropical diseases that are better than C.

Matt Turner: Yes.

Outro

Mark Rendle: So yes, we should do another one of these though.

Matt Turner: We should, this has been really fun.

Mark Rendle: This could be sort of an annual thing or something.

Matt Turner: I'd love to. I'd love to talk about the comedy. I've tried to do a funny talk about computers twice. I was thinking about it before we came up, and they were, yeah.

Mark Rendle: Well, I was actually a professional for...

Matt Turner: Well, you're right. For exactly.

Mark Rendle: ...some years. And, it's great. Now I get to do both. I get to kind of be funny about computers.

Matt Turner: It's just so much shooting fish in a barrel, isn't it? It's just fish in a barrel a lot of the time.

Mark Rendle: So I'm working on a new one at the moment: Programming's Greatest Mistakes, which is a kind of in-depth analysis of things like the Y2K bug and just the very existence of null.

Matt Turner: Null, the billion-dollar mistake. Yes.

Mark Rendle: So that'll be showing up on YouTube sometime later this year.

Matt Turner: I will check it out.

Mark Rendle: Right. Cool. Thank you so much.

Matt Turner: Thanks very much.

Intro
Writing APIs upfront
gRPC in .NET
gRPC & service mesh
gRPC & performance
Technology cycles
Infrastructure as code
Kubernetes: The modern evolution of old mainframes
Building an alternative to Facebook
The programming language ecosystem today
Conclusion
Outro