GOTO - Today, Tomorrow and the Future

The Ideal Programming Language • Richard Feldman & Erik Doernenburg

April 22, 2022 • Richard Feldman, Erik Doernenburg, Lars Jensen & GOTO • Season 2, Episode 15

This interview was recorded at GOTO Copenhagen 2021 for GOTO Unscripted.
gotopia.tech

Read the full transcription of this interview here

Richard Feldman - Author of "Elm in Action" & Head of Technology at NoRedInk
Erik Doernenburg - Head of Technology at Thoughtworks & Passionate Technologist
Lars Jensen - Lead Developer at GOTO

DESCRIPTION
What would your ideal programming language look like?
Erik Doernenburg, head of technology at Thoughtworks, and Richard Feldman, author of “Elm in Action,” sat together at GOTO Copenhagen 2021 to chat about what theirs would look like. They also had a look into the future of up-and-coming languages.

RECOMMENDED BOOKS
Richard Feldman • Elm in Action
Jeremy Fairbank • Programming Elm
Wolfgang Loder • Web Applications with Elm
Cristian Salcescu • Functional Programming in JavaScript
Tim McNamara • Rust in Action
Saša Jurić • Elixir in Action
Dijkstra, Gøtze & Van Der Ploeg • Right Sourcing
Richard Monson-Haefel • 97 Things Every Software Architect Should Know
Thoughtworks Inc. • The Thoughtworks Anthology
Jimmy Nilsson • Applying Domain-Driven Design And Patterns

Twitter
Instagram
LinkedIn
Facebook

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

Intro

Lars Jensen: Hi, I'm Lars from the GOTO team, and I'm here in Copenhagen at GOTO Copenhagen. This is a GOTO Unscripted episode where I'm joined by Richard Feldman and Erik Doernenburg. We're gonna nerd out a little bit about programming languages, but before we do that, could you say a few words about yourselves? Richard, would you like to go first?

Richard Feldman: Sure. I'm Richard. I work at a company called NoRedInk. We're hiring. I've done a lot of Elm in my career and I'm the author of "Elm in Action" from Manning Publications. I'm also really into Rust and doing a lot of that on the side, because I'm working on a programming language that's written in Rust. I've also done courses on Frontend Masters for Elm and for Rust.

Lars Jensen: Thanks, and you, Erik?

Erik Doernenburg: I work at ThoughtWorks, a consulting company. I work mostly as a consultant, helping clients make more out of software, write software in new ways, using different programming languages as the client requires, as the circumstances require and as new programming languages appear.

Frontend or backend?

Lars Jensen: Interesting. I wanted to do a fun experiment with you two. At the party keynote that we had last night — I think you were both there — we had Mark Rendle design the worst programming language in the world by looking at previous languages that do horrible, horrible things and taking the worst parts of all of those and building the worst language he could think of. I was hoping we could do a little bit of the opposite exercise here, where if you were to design your ideal language, what kind of features would you take from where and what languages would you draw inspiration from? I know, Richard, you are working on your programming language, so you might have been through some of this exercise already. Maybe you want to start.

Richard Feldman: Sure. Well, it's a dangerous place to start because I'm liable to talk about it forever. I'll try to keep it constrained to what characteristics I want and what languages I wanna take inspiration from. So, first off, the prompt was the ideal language. To me, I don't know that there's such a thing as one ideal language for all problems. It's more domain-specific. If you're building an operating system, I think you want a pretty different language than if you're building a web server versus if you're building a video game. Embedded systems would be another example.

In my career, I mainly did web development, so, let's just maybe keep it constrained to that, at least I can start there. I think if you're doing web development, there's the frontend and the backend. Frontend, I'm really happy with Elm, so I'm actually not trying to design a language for the web frontend. I guess the language I'm working on, if it's used in web development, would be used on the server-side.

I'm a big fan of really ergonomic type checking. I've definitely used languages that have varying degrees of ergonomics around type checking. In some, it seems like a net negative: it's so painful that I would rather not have it and just have dynamic types. Elm has an extremely well-designed type system with really helpful, friendly error messages. So, I think that is my ideal. That's definitely something I would take from Elm.

As far as memory management goes, I think automatic memory management would definitely be something I would want; if you're building an operating system, probably not. You probably want direct control over that. Maybe I can just start with those two, just so I don't talk forever. What do you think?

Erik Doernenburg: I'm in complete agreement, and it's the classic consultant answer: "It depends," right? I don't think you should try to design one language for everything. Part of Mark's keynote was the fun of saying he wants to design a language for everything, and that is, of course, the first mistake if you do. So, I completely agree, it depends on what your target is. For memory management, we could, even at the lowest level, do something like automatic reference counting, or what Rust does with its borrow checker. Humans have proven over 20, I don't know, 30 years, that we are incapable of doing memory management correctly, with all the tools and best intentions. So yeah, give that to the machine.
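As an aside for readers: here's a minimal Rust sketch (not from the conversation) of the kind of mistake Erik is alluding to, where the borrow checker turns a would-be memory bug into a compile error instead of leaving it to the programmer.

```rust
// Illustrative only: the borrow checker moves responsibility for memory
// correctness to the machine, as Erik suggests.
fn total_len(names: Vec<String>) -> usize {
    names.iter().map(|n| n.len()).sum()
}

fn main() {
    let names = vec![String::from("Richard"), String::from("Erik")];

    // Ownership of the vector moves into `total_len` here...
    let total = total_len(names);

    // ...so using `names` again would be a use-after-move, and the compiler
    // rejects it rather than letting it become a runtime memory bug:
    // println!("{}", names.len()); // error[E0382]: borrow of moved value

    println!("total characters: {total}");
}
```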

On types, I'm similarly conflicted. I've written large systems in what is, I learned last night, called gradual typing. I know it from Objective-C, where you can declare types or leave them out if you want. In a large team, with a large codebase, it is often quite good to have the type system, because it really helps you understand the code better, and most of the time it's important that you can read the code. Writing is only done once most of the time; reading happens often, so types really help.

I guess one aspect that you didn't touch upon is the often-portrayed divide between object-oriented programming, which was considered the winner until about five years ago, and functional programming. I think the languages that allow doing both are probably something I would draw inspiration from. I would at least design a language where you can use classes with attached behavior for very state-heavy things, but that also allows you to do something more functional, without trying to say it's either/or.

Richard Feldman: Gotcha. So, I'm definitely on the functional side of things. I spent the first half of my career doing object-oriented programming and the second half doing functional programming, and I definitely appreciate where the "let's do both" is coming from. But, from my perspective, the thing that I like about functional programming is the subtractive aspect: "let's take things away and make it smaller and simpler." And so, since that's part of the appeal, constraining it down and just saying, "Let's just do functional and let's have this be one small set of simple primitives," for me that's the way to go. That's my ideal.

Programming languages in the field

Erik Doernenburg: I'm completely with you. I have a computer science background. I completely agree. I love the elegance. I mentioned in the talk I did about Rust that I built the precursor of the implementation I was showing in Clojure, and I actually enjoyed that more. I'm a great fan of Lisp-like languages, but I recognize that there is something in object-oriented programming that appeals to human beings: categorizing, classifying, putting things somewhere, and I think that is hard to get over. There's one system we wrote for, or with, a client. It's a large system with many microservices, and the backend is not really the backend; it's just an adapter over some real backend that can sometimes be a mainframe. We thought, what better place to use functional programming, because you're basically transforming what's coming from the real backend into JSON that goes out of an HTTP endpoint to the JavaScript frontend.

We chose Clojure for it. And people were not so used to it. They were struggling, and we gave it about two years, to be honest with you, on those services. Still, when we then asked the teams about the next microservice, "What do you want to write it in?" the answer was Kotlin. They were like, "Yeah, there's still a lot of state and we want to see these as objects. It's a sales system; there are customer objects and products, and we can model this in our heads and we understand this. We know the patterns." It was heartbreaking for me to see. That was a place where the client didn't object to us using Clojure, and it was really a great area of application. On the whole, the team said the experience was good. But when asked, "Would you do it again," which is the critical question, they said, "No, we'd choose Kotlin."

Richard Feldman: That's interesting. I definitely know people who have stories on both sides of that. People who have a similar story where they tried, maybe in the case of Elm, they were used to React and TypeScript, they tried Elm and then they didn't like it. They're like, "You know what? I wanna go back." Or maybe they went with ReScript or one of the slightly more object-oriented directions. I guess some people would argue with me about that, but I think that's accurate.

But then, I also know quite a lot of people who have the opposite story, where they tried it, and they're like, "I can never go back." Actually, a lot of our hiring comes from people who are like, "Now that I've had a taste, I can't go back to…" not Kotlin specifically but like, "the object-oriented world. I need more of this functional stuff in my life."

I wanna go back to something you said though because I think it is a good observation that humans like classification. That's kind of a fun activity for us. Something that I was reflecting on somewhat recently was looking back at my history with object-oriented programming, and I did spend a lot of time on that and I did enjoy it, but I don't think it actually helped me out that much in terms of my code. I spent a lot of time classifying what is this thing? What should it be? What should the taxonomy be? What should the hierarchy be? But with regards to what I actually get out of it in terms of productivity, I don't think it really paid off. So, I agree with the point that as humans, we like that, but I'm not sure if that means that it should be in the language. Maybe it's a temptation that's better removed.

Erik Doernenburg: Inheritance is probably a temptation that is best removed. I mean, you can do object-oriented programming and completely overdo it with inheritance. I have seen that.

Richard Feldman: Absolutely.

Erik Doernenburg: It isn't everything, though. On an abstract level it's just one mechanism, and that's true. But when you write code, you get these almost empty base classes. There's hardly anything in there and you're like, "Why did we even make a common supertype?" I get that.

Richard Feldman: I also remember one of the enterprise Java jobs I worked at earlier in my career. This was more than 10 years ago, but I think this is probably still done in quite a lot of places. We had a rule on the team that we always program against the interface and only use concrete types when you need to. So, we always said we're gonna use the List interface everywhere, except when you need to actually make one, and then you're gonna use ArrayList.

The thing is, we always did exactly that. We always had exactly List for the interface, and then we always had exactly ArrayList for the implementation. And then, whenever we were making our own classes, there was a corresponding rule that you had to make an interface first. So we would always make an interface, then make exactly one class that implemented all the methods of that interface, and that's what we would use everywhere. I look back and I know the principle was: theoretically, if we want to, we can swap out the implementation for something else. But based on how often we actually did that in the two years I worked at that company, which was zero times, I think we probably would have been better off if we'd just been like, "You know what? Let's just assume we're gonna pay the price if we ever do need to change this and we won't have all the interfaces in place. We'll just go through and do the change to switch it from one class to another." I think that would have been much cheaper.

This was also not really about classification, just about premature abstraction perhaps. Thinking about how we can spend so much time following what seems like a nice thing or a best practice, and then asking, "But did it really pay for itself? I don't know," really stuck with me.
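For illustration, here's a rough Rust analogue of the pattern Richard describes: a trait standing in for the Java interface, with exactly one implementation that is never swapped out. The trait and names are invented for the example.

```rust
// An abstraction with a single implementation that nothing ever replaces.
trait UserRepository {
    fn find_name(&self, id: u32) -> Option<String>;
}

struct InMemoryUserRepository {
    users: Vec<(u32, String)>,
}

impl UserRepository for InMemoryUserRepository {
    fn find_name(&self, id: u32) -> Option<String> {
        self.users
            .iter()
            .find(|(user_id, _)| *user_id == id)
            .map(|(_, name)| name.clone())
    }
}

// Every caller goes through the trait even though only one implementation
// exists, so the indirection is paid for up front and never cashed in.
fn greet(repo: &dyn UserRepository, id: u32) -> String {
    match repo.find_name(id) {
        Some(name) => format!("Hello, {name}!"),
        None => String::from("Hello, whoever you are!"),
    }
}

fn main() {
    let repo = InMemoryUserRepository {
        users: vec![(1, String::from("Ada"))],
    };
    println!("{}", greet(&repo, 1));
}
```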

Testability

Erik Doernenburg: Good point. That brings me to testability. That really should be in the language. Why am I thinking that? Because historically, one of the drivers for this separation of interface and implementation was testability. Remember, about 10, 15 years ago, dependency injection was something new. People sometimes did this and said, "Okay, let's separate the interface from the implementation," because then in the test you only have the interface. You can't instantiate it. Or rather, in your code you can't instantiate an interface, which means you must declare the dependency, and then you force the teams to use dependency injection. So, there were some other motivations behind it. I'm noticing again that when you do unit testing, you often want to mock the surrounding things. I've actually written a mock object framework for Objective-C. And when Swift came out, people asked me, "Can you make the Swift version?" I said, "No, I can't, because I don't even have the runtime to do this." There's just no way that I can do all the trickery in the runtime that I could do with Objective-C. But that means the interface becomes more important, because then you can swap things out and do what you need in your tests, unit tests especially.
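A minimal sketch of that motivation in Rust terms, assuming a made-up `Mailer` abstraction: the code depends on a trait, so a test can inject a hand-rolled double instead of the real implementation.

```rust
trait Mailer {
    fn send(&self, to: &str, body: &str);
}

struct SmtpMailer;

impl Mailer for SmtpMailer {
    fn send(&self, to: &str, body: &str) {
        // In real code this would talk to a mail server.
        println!("sending to {to}: {body}");
    }
}

// The dependency is injected, so the caller decides which Mailer is used.
fn notify_signup(mailer: &dyn Mailer, user: &str) {
    mailer.send(user, "Welcome aboard!");
}

// A hand-rolled test double that just records what was sent.
struct RecordingMailer {
    sent: std::cell::RefCell<Vec<String>>,
}

impl Mailer for RecordingMailer {
    fn send(&self, to: &str, _body: &str) {
        self.sent.borrow_mut().push(to.to_string());
    }
}

fn main() {
    notify_signup(&SmtpMailer, "someone@example.com");

    // In a unit test you would pass the fake instead of the real thing.
    let fake = RecordingMailer { sent: Default::default() };
    notify_signup(&fake, "someone-else@example.com");
    assert_eq!(fake.sent.borrow().len(), 1);
}
```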

Richard Feldman: I think that's a great point about testability in general, and I would take it a step further and say that, to me, you might as well plan to design any new language with tests in mind. Historically, it seems like most languages were designed as a language and a compiler, and then there are these ancillary things that everyone's going to build sooner or later. But for some reason they're always considered outside the scope of the language, even though you know they're gonna happen. Someone's gonna write a test framework. There's gonna be a package manager, and now, in more recent years, it seems very safe to assume there's gonna be a formatter that's gonna format your code in a specific way. I think Go started that, but plenty of languages have picked it up now.

And so, at that point, if you know this is going to be done by someone at some point, it seems like you should just design for it. You could do a better job or make it more well supported if you're planning ahead for it. And yeah, testing is definitely a big one.

Erik Doernenburg: And this is something we see with Rust, for example. They put the package manager into the language. I mean, not into the programming language itself, but it comes with Rust. There's no way to do Rust without the package manager. They put the unit tests actually into the same file as the code, so they really thought that through. That's an inspiration to take from it. It should be testable. I think in this day and age, we really have to have a language where you can write unit tests, because we know that we are writing increasing amounts of code that stays around for a while. We talk about shifting from projects to products; these services are expected to live for a while, and if you don't write tests, you might as well give up, I guess.
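For reference, this is roughly what the in-file unit tests Erik mentions look like in Rust; `cargo test` picks up the `#[cfg(test)]` module that sits right next to the code it covers. The function itself is just a placeholder for the example.

```rust
pub fn celsius_to_fahrenheit(c: f64) -> f64 {
    c * 9.0 / 5.0 + 32.0
}

// Compiled only when testing; lives in the same file as the code above.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn freezing_point() {
        assert_eq!(celsius_to_fahrenheit(0.0), 32.0);
    }

    #[test]
    fn boiling_point() {
        assert_eq!(celsius_to_fahrenheit(100.0), 212.0);
    }
}
```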

Memory management

Richard Feldman: They're definitely indispensable, can't do without them. Okay. I wanna go back to memory management for a second because you mentioned Rust. I think we agree that, at least for the types of problems we're talking about solving, we probably wanna go with automatic memory management. But there are a couple of different ways of doing that. You mentioned automatic reference counting, and Rust has the ownership system, so it's just allocating and freeing things on the fly. And then you have the most popular, which is tracing garbage collection.

Of those three, to me, tracing garbage collection is the least appealing, even though it's the most popular, just because you have GC pauses. And I know that the JVM and Go have spent a lot of time decreasing pause times, decreasing latency and all that stuff. But it still seems like, at the end of the day, you just don't have to worry about that problem if you can do automatic reference counting or allocate and free on the fly like Rust does.

But then there's always the question of latency versus throughput, and supposedly tracing GCs have the highest throughput over a given period of time compared to automatic reference counting and so forth. But I'm also aware of some research, and, spoiler alert, in the language I'm working on we're going down the automatic reference counting route. You can do stuff like compile-time reference counting, where you can detect that, oh, there's going to be an increment here and a decrement here, and those will just cancel each other out, so we're not going to do either of them. It's hard to say how it works out in practice. The language is called Roc, R-O-C, roc-lang.org. We haven't gotten any big enough projects yet because it's very much a work in progress, very early stages kind of thing. But so far, applying the techniques from these papers on compile-time reference counting, it seems like quite a lot of the reference counts can be elided, so we'll see.
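To make the mechanics concrete, here's a small Rust sketch of plain runtime reference counting with `Rc`: every clone is an increment and every drop a decrement, which are exactly the operations a compile-time scheme like the one Richard describes would try to cancel out or elide. This is not Roc's implementation, just an illustration.

```rust
use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3]);
    println!("count after creation: {}", Rc::strong_count(&data)); // 1

    let shared = Rc::clone(&data); // runtime increment
    println!("count after clone: {}", Rc::strong_count(&data)); // 2

    drop(shared); // runtime decrement
    println!("count after drop: {}", Rc::strong_count(&data)); // 1

    // When `data` goes out of scope the count reaches zero and the vector
    // is freed immediately: no tracing collector, no pause.
}
```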

Erik Doernenburg: And my impression was also that tracing garbage collectors just work. I mean, there's a lot of theory behind them, and you can get reasonable implementations working. You have long pauses, but in some cases it doesn't matter so much. And I think it is exactly like you said. The research has gone further now, so we understand this better. Rust didn't invent borrowing or this concept, but they were the first ones to actually get it implemented properly. And I think what we're seeing now is a lot of concepts that are somewhere in the middle between no memory management and the tracing garbage collectors, which are a blunt tool, if you will, right? You know they work and you can't make mistakes. We're exploring the middle ground now, and I think we will see more and more successful languages in that middle ground.

On the performance, I'm curious. I know all the theory in Java about the different generations of objects and how most objects are thrown away. Again, if you're writing websites, probably most of the objects are there just for a brief moment in time to create the response, and then you can throw them away. You don't get heap fragmentation like you would in other languages, because you can compact the heap. I think there's room for different implementations, and there are enough systems where programmers will be happy if they don't even know. I mean, if you can't tell whether it's using a tracing garbage collector or automatic reference counting or something similar, as a programmer you don't care. You probably choose the language based on other features, but if the language doesn't have to use a tracing garbage collector, it's probably better for you.

Richard Feldman: Yeah, yeah. Two interesting areas of research around that. First, compacting. There was a really cool talk at Strange Loop a couple of years ago called "Compacting the Uncompactable." It was basically about how they designed an implementation of malloc that can actually do compacting. Really impressive stuff, and they were using some very fancy tricks to make that happen. Of course, an automatic reference counting system could use that.

Second, the throughput of automatic reference counting versus tracing GC. Apparently Apple, because Swift does automatic reference counting, is actually working on it at the hardware level, making that faster and adding some... I don't remember exactly what it was. It was either a new CPU instruction or augmenting existing CPU instructions for atomic reference counts to make them faster.

And that's definitely an interesting sign of potential things to come because if you think about it, you have this big company that's really heavily invested not only in Swift as a language, but also in making their own hardware. That's the type of thing that can influence other processor makers to try and keep pace with what Apple's doing. And so, if Apple's making hardware-level optimizations for automatic reference counting but not for tracing GC, or I don't really know what that would look like, but that's an interesting potential thing to keep in mind that even at the hardware level, there might be potential improvements, even if the software algorithms stay the same.

Erik Doernenburg: And Apple is motivated. I mean, they have tried garbage collectors; they tried the whole Java thing, and they tried to have a garbage collector for Objective-C, and they really concluded that for their use case it didn't work. So they are very, very motivated to make automatic reference counting work, and, by all means, it generally works. There are probably slightly more cases where you can have errors from cyclical dependencies, objects holding on to each other. You can still create the same problem with a tracing garbage collector, though, if you just have a static variable somewhere where you store something. But on the whole, I think they will make it work. And like you said, if they can reduce the performance impact at that level, then there's very little that stands in the way of doing it more widely.

Richard Feldman: Yeah, interesting note about cyclical dependencies. This was one of the reasons that we decided to go with automatic reference counting. Roc is a pure functional language, and because there's no semantic way to express mutation in the language, there's also no way to create cyclic data. So we don't have to worry about that at all. But that's unusual. Only if you've subtracted that much from the language can you get away with that.
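A small Rust sketch of the cycle problem they're referring to: under plain reference counting, two values that strongly point at each other never reach a count of zero and therefore leak, so the usual workaround is a weak reference in one direction. The `Node` type here is invented for the example.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: String,
    // Child-to-parent links are weak so the parent/child pair does not form
    // a strong cycle that reference counting could never free.
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let parent = Rc::new(Node {
        name: String::from("parent"),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        name: String::from("child"),
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(Vec::new()),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // The child is kept alive by two strong references (the local variable
    // and the parent's children list); the parent only by one, because the
    // child holds a weak reference back.
    println!("parent strong count: {}", Rc::strong_count(&parent)); // 1
    println!("child strong count: {}", Rc::strong_count(&child)); // 2

    if let Some(p) = child.parent.borrow().upgrade() {
        println!("{}'s parent is {}", child.name, p.name);
    }
    // If the child held an Rc<Node> back to the parent instead, dropping the
    // local variables would leave a cycle with nonzero counts, which is
    // exactly the leak hazard being discussed.
}
```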

Concurrency model

Lars Jensen: What kind of concurrency model do you think would be a good fit for the ideal language that doesn't exist?

Richard Feldman: Hmm, I don't think I can explain that in the context of Roc without going on a really long tangent.

Lars Jensen: I come from Elixir, and I think that has a really great concurrency model. It's sort of building upon the past in the sense that it uses Erlang as its foundation, and Erlang was built for telecom systems. When you're on a phone call, it should be invisible to the user that now you're actually going through this cell tower, not that cell tower. In that sense, it's very good for distributed systems, and as a programmer in the language, it's incredibly easy to write concurrent code. It's all built in. It's sort of the default. You don't really have to think too much about it. Running a piece of code on this computer versus running it on another computer across the world over the internet is extremely similar for the programmer. You don't have to jump through a lot of hoops to do that. The BEAM takes care of it all; as long as you have those nodes clustered together in the same cluster, it's a matter of just telling the other computer, "Hey, run this code, please." And then it does that. I think that's a really powerful model.

Richard Feldman: Yeah, so I think Erlang's an interesting example of a domain-specific language because of the concurrency model that it has. Personally, I have not actually written a line of Erlang or Elixir, but my understanding from reading up about it and talking to people is that it's basically a message-queue-based system. You can pass messages between what they call processes. I don't want to say threads, because those are operating system threads, but they're thread-like things, perhaps. And then, basically, the message queues are automatically handled by the runtime, and since it's all immutable, you don't have to worry about things like data races and stuff like that.
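Here's a rough analogue of that model in Rust (a sketch only, not how the BEAM actually works): a worker owns its state exclusively and the outside world talks to it only through a message queue, so there's no shared mutable data to race on.

```rust
use std::sync::mpsc;
use std::thread;

enum Msg {
    Add(i64),
    Stop,
}

fn main() {
    let (sender, receiver) = mpsc::channel::<Msg>();

    // The "process": it owns `total` exclusively and only reacts to messages.
    let worker = thread::spawn(move || {
        let mut total = 0;
        for msg in receiver {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Stop => break,
            }
        }
        total
    });

    // Other code communicates with the worker by sending messages, never by
    // touching its state directly.
    for n in 1..=10 {
        sender.send(Msg::Add(n)).unwrap();
    }
    sender.send(Msg::Stop).unwrap();

    println!("worker total: {}", worker.join().unwrap());
}
```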

That makes a lot of sense for a lot of use cases, but I suspect that, for example, somebody who's thinking about concurrency in the context of making a really high-performance game is a lot more concerned with, A, single-threaded performance and, B, the overhead of communication. They probably actually want locks and direct mutation of things. Even though that's a more error-prone concurrency model, it also runs faster, and they're probably a lot more concerned with that. So, I think again, it kind of comes back down to use cases. If you're focusing on distributed systems, like servers specifically, I think that's a great model for concurrency. I guess it depends on what your ideal language wants to be ideal for.

Lars Jensen: Yeah, I guess it's all a tradeoff, right?

Richard Feldman: Yeah.

Erik Doernenburg: I think I agree with the idea of having what Go calls channels, I think, which make it easy to shift data from one thread to another. Rust has something similar; the languages are doing that. I think, though, that sometimes this parallel programming is overestimated. I mean, we don't see these massively parallel computers that people talked about, like, 20 years ago. We're seeing more cores now, 20, 30, maybe even 60 cores, but that's going to 100. Like you said with video games and so on, in most cases you don't have a single application that does one task and needs to utilize all the cores for it. Maybe on the graphics card, but not for the processing itself.

So, what we often find is that you need parallel computing in server applications, where you are servicing multiple parallel requests, and then it becomes very easy. If you have a web server and you have a thousand concurrent users and you have a thousand threads, the question is whether you should do that from a throughput perspective. But if you have a thousand threads, then it's very easy: they just shouldn't get in each other's way. You should have something to keep the threads isolated. But to have this thing where all the threads communicate all the time... I mentioned this in the Rust talk: in the simulation, it didn't even make sense to break it up into multiple threads, because the communication overhead was so big that the calculation became complete background noise amid all the communication between the threads, and it didn't get a speedup.

Richard Feldman: Interestingly, one form of parallelism we haven't talked about is really low-level data parallelism on the CPU, like SIMD. It seems like there actually have been a lot of advances there. For example, simdjson is several hundred percent faster than other really highly optimized JSON parsers that don't use SIMD. But the algorithms are completely different. I started reading the paper on that, and it doesn't even look like parsing anymore, because what they're doing is so completely different. But it runs way faster because they can do 8 to 16 times the amount of work at once.

At least currently, there aren't really good language-level abstractions for that. I don't even know how you would design an abstraction for it, other than saying, "Well, maybe the optimizer can recognize that we can SIMD-ify this chunk of instructions." I would love to steal something from a language here. Erlang's style of concurrency is really great for servers; is there some similar thing for expressing SIMD algorithms, like simdjson, in an abstract way so you don't have to get as low-level about it? I'm not aware of one. Maybe someone will come up with it, but I just don't know.
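For a sense of how low-level that work is today, here's a minimal Rust sketch using the SSE intrinsics in `std::arch` to add four floats per instruction. It only illustrates the general idea of SIMD, not simdjson's actual algorithms.

```rust
#[cfg(target_arch = "x86_64")]
fn sum_f32(values: &[f32]) -> f32 {
    use std::arch::x86_64::*;

    let chunks = values.chunks_exact(4);
    let remainder = chunks.remainder();

    // SAFETY: SSE is part of the x86_64 baseline, so these intrinsics are
    // always available on this target.
    let mut total = unsafe {
        let mut acc = _mm_setzero_ps();
        for chunk in chunks {
            // Load four floats and add them to the accumulator in one go.
            acc = _mm_add_ps(acc, _mm_loadu_ps(chunk.as_ptr()));
        }
        // Collapse the four lanes of the accumulator into one scalar.
        let mut lanes = [0.0f32; 4];
        _mm_storeu_ps(lanes.as_mut_ptr(), acc);
        lanes.iter().sum::<f32>()
    };

    // Handle the leftover elements that didn't fill a full 4-wide chunk.
    total += remainder.iter().sum::<f32>();
    total
}

fn main() {
    let values: Vec<f32> = (1..=10).map(|n| n as f32).collect();
    #[cfg(target_arch = "x86_64")]
    println!("sum = {}", sum_f32(&values)); // 55
    #[cfg(not(target_arch = "x86_64"))]
    println!("sum = {}", values.iter().sum::<f32>());
}
```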

Upcoming new languages

Lars Jensen: You both seem excited about Rust right now, but are there any other languages you see on the horizon that are up and coming, very early stages that you think look interesting?

Richard Feldman: Absolutely.

Erik Doernenburg: I've just heard about one, Roc.

Richard Feldman: I wasn't gonna say Roc. I was gonna say Zig.

Lars Jensen: Okay. I haven't heard about Zig. What's that about?

Richard Feldman: Zig is, I would say, approaching the same problem domains as Rust, but coming at it from a very different angle. We actually use Zig in Roc for the standard library; the standard library in Roc is implemented in Zig. If I were to pitch Zig, the way that I would pitch it is basically: let's take C and keep the simplicity and the sort of bare-bones nature, and let's add ergonomics on top of it, but without adding a lot of complexity. Whereas Rust is like, "Let's try to make a completely different language from C. Any amount of complexity is acceptable as long as we have these really strong guarantees about memory safety. If it compiles, it's probably gonna work." Really, really strong guarantees. So Zig is definitely less on the side of guarantees, but very much on the side of ergonomics, and especially in terms of speed for the developer. The Zig compiler runs super, super fast. The Rust compiler is extremely not. I mean, that's my number one, number two, number three, and number four and five and six complaint about Rust: compile times. How long I spend waiting for it.

Not the case with Zig. It's really fast. They're working on hot code loading for C-like languages, which is ridiculous. And Zig also cross-compiles to anything. So on my Mac, I can compile a Linux binary and a Mac binary and a Windows binary. I don't even have to spin up a VM. There are just all these little things that I can't do in Rust. So if I were to make a Venn diagram, both of them can get really low-level about memory management and things like that. Zig does not have the borrow checker, so it does not have the guarantees about memory safety, which I definitely very much value in Rust. But whenever I'm sitting there waiting for Rust to compile, or wondering how we're gonna build Roc for Windows and Linux and all these things, I'm like, "Well, wouldn't it be nice if I could just actually cross-compile to it?"

So, you know, as an up-and-coming language, Zig is much younger than Rust, but I can definitely see a really strong appeal. Personally, my prediction is that Zig will probably outcompete Rust in the specific niche of people making games, because I think if you're making games, you're probably gonna have to do a lot of memory-unsafe stuff anyway. This is my impression as someone who does not make games, but it seems like, just to squeeze every last inch of performance out of a game engine or something like that, if you were writing Rust, you'd be using the unsafe keyword a lot, at which point it's like, "Well, why don't we get all these ergonomic improvements too?" Games especially have a reputation for crunch time, and spending a lot of time waiting for a compiler really kind of adds up. But I don't know. Time will tell.

Lars Jensen: That sounds interesting. I'll keep an eye out for that. What about you, Erik? Do you have any up-and-coming languages you're excited about?

Erik Doernenburg: Curiously, not really, to be honest with you. At Thoughtworks, a consulting company, we work with a lot of companies, usually in the enterprise space, on consumer-facing websites, internal systems and so on. It's part of our culture to really look at new things; we're always trying to find new things. I mentioned this system that we were beginning to build in Clojure, and we've tracked a lot of different programming languages over time. About 5, 10 years ago, there was this wave of new programming languages, a lot of exploration, a lot of excitement, but from our perspective it has really settled down. In the web browser, at the moment, there's, from our perspective, a clear winner in TypeScript; that's what you use. And on the server side, as I mentioned, Kotlin is the one that most teams really find to be a good compromise. It probably doesn't win in any category, but it seems to be a really good all-around compromise.

That said, I am curious to see what will happen in the web browser with WebAssembly, because, and I'm telling no secret here, I'm not a great fan of JavaScript or TypeScript. I think we even had talks here at the GOTO conference about how even the original designers of JavaScript said they probably would have liked to do a better job of it if they hadn't been pushed to do it in a very short amount of time. And we're still saddled with it.

Even in JavaScript, you do see this move towards more functional programming, by the way, and even big frameworks like React are moving that way, but under the hood it's still JavaScript. And there are reasons for all the jokes about JavaScript, about the inconsistencies; it featured quite heavily in the party keynote too. So, I've been waiting for a proper replacement, and the transpilation approaches weren't that successful, I think. Dart is sometimes mentioned, but I don't think you could get it into all the different web browsers. I'm curious to see what will happen with WebAssembly, and we'll see what new programming language will emerge for writing web applications; not video games running in a web browser, but something to replace what we currently do with TypeScript. I'm looking forward to that.

Lars Jensen: Yeah. I think WebAssembly is really interesting because we've had a long period of time where if you wanted to run something in the browser, JavaScript was your only choice. I guess we're getting close to the point where you can use whatever language you prefer if the community supports it and builds the tooling for it. So, that's going to be interesting to see in the next few years.

Erik Doernenburg: Yeah, we've had first engagements with clients where people are using Blazor, which brings C# and the corresponding tooling and frameworks to the browser. It still has first-load times that mean you wouldn't use it on a B2C website. But that is a first promising sign, I think, of a more modern and better-designed programming language, and I'm talking about C# here, that can run in the web browser.

Lars Jensen: And Elm cross compiles to JavaScript, right?

Richard Feldman: Correct, yeah.

Lars Jensen: Or compiles to JavaScript.

Richard Feldman: Yeah. It's theoretically possible that Elm could compile to WebAssembly and Roc actually already does compile to WebAssembly. If I were to make a bet, I would bet that I don't think WebAssembly's going to change much when it comes to web applications, at least not in the next decade. Maybe it's hard to predict further than that. I think it's mainly just gonna be games, to be honest. A lot of thoughts about why, but one of the big ones is just that I don't think that people care that much about performance in web applications. I think it's sort of close enough.

And one of the reasons I think this is that the first time Elm released benchmarks, it was like, "Hey, look, we're faster than all these JavaScript frameworks, faster at rendering, smaller asset sizes." There was this RealWorld app which was 4,000 lines of Elm, and it's an entire Medium clone, well, not an entire one, but you know, it's a substantial application that does a lot of stuff. Compiled, that entire application is smaller than React by itself. Evan spent a lot of time making really small assets because everybody was like, "Oh, we've got to decrease our bundle sizes." Nobody cared. And then Evan did a bunch of work optimizing the rendering. It's like, "Look, we're faster than React and Angular and Vue and everything." And again, people are like, "Okay, that's nice."

The idea that, "Oh, well, now that we have WebAssembly, we can finally do even better than that on performance," I don't think that's the real pitch. I do think there is definitely a potential pitch for, "Now you can use whatever language you want." You can use C#. But the thing is, we've already had stuff like Scala.js. I know one team that's ever used Scala.js, but there are a ton of people using Scala on the backend. So, is that really the issue? Is it the lack of speed in the frontend? There was GHCJS for Haskell, which I guess had a lot of performance problems. But it seems to me that this has been done before in the compile-to-JavaScript world, and I'm a little bit skeptical that the only missing piece was "if only you could compile to something closer to machine code, then it would be fine." I think it's really that JavaScript, and now TypeScript, has this huge cultural momentum. And if we look at what's been successful in terms of mass popularity, it's really been JavaScript, CoffeeScript, whose tagline was "It's just JavaScript," and TypeScript, whose pitch is basically "It's just JavaScript" as well. And that's it. Those are the three big success stories, and everything else, like Elm, which is actually the most widely used compile-to-JavaScript language that's not TypeScript, at least according to the State of JS survey, is very, very distant, below TypeScript.

So, I think the real issue here is just that there's this huge ecosystem, this huge cultural momentum and all this drive to do that, that even though everybody complains about JavaScript, it still takes something more than WebAssembly to change that cultural momentum.

Erik Doernenburg: I get what you mean. By the way, I don't think CoffeeScript is really a strong contender anymore these days even.

Richard Feldman: Not anymore.

Erik Doernenburg: I mean, I agree with you on websites, what we would consider a consumer website, a shopping website or a banking website and so on. But what we also see is a lot of internal IT systems. The one I mentioned earlier is a sales system used by sales representatives, thousands of them, and it's downloaded and cached, of course, in their web browsers on iPads or Windows machines. I think there, and this is probably no coincidence, it is Blazor and C#, and it's the first one that we are seeing making use of WebAssembly. I think there are still a ton of developers out there who are used to writing applications that are used in-house.

And they have so far tried React, and React has a steep learning curve. Then people were told, "Don't use this, use Angular," and then they go, "Oh, which one are you using?" and they'll say, "No, no, forget about this. Let's use Vue." They're a bit confused by all of this. Also, JavaScript is not a productive programming language, and I think what you will see, that's at least my prediction, is that big companies like Microsoft have something to gain here. C# is already spreading more on the server side. You can run it on operating systems other than Windows, and that whole idea that you can stay within one ecosystem will put some extra weight behind it. I think performance is a hygiene factor. It can't be slow in the web browser, but I don't think you'll win this by saying, "We are faster than the most optimized JavaScript." If it's fast enough, then you have a different story. You can say, "Look, there are these component libraries. We can write C#, which is a better language. You can use it on the server. You can use it in the browser." Because, frankly, we've seen the other trend: people say, "Oh, we all know JavaScript. We run it in the web browser, and therefore we write the server applications in Node.js," and that has some really terrible consequences, I would say, from a performance perspective, from security and so on. So, I think maybe you get different dimensions in this world of in-house applications and large organizations that will jump on that more. But I agree. If I were a startup that builds a B2C website, I wouldn't bet on WebAssembly either.

Richard Feldman: You make a great point. Yeah. I was surprised that there's this sort of hidden market of really big in-house teams. I talked to a guy who's an Angular consultant. I was like, "Where are all the Angular apps? Everybody I talk to does React. That's it. It's all React." And whenever people are coming to Elm, they're coming from React. And because I'm in, like you said, the B2C startup world primarily, he was like, "Oh no, all of my consulting engagements are with 400-person teams that only build software that's used inside the company." I had no idea that there were so many of those, but it's apparently a huge thing. You would know it much better than I would, but yeah. That's a really interesting perspective. I hadn't thought of that.

Outro

Lars Jensen: Well, thank you so much for spending some time chatting with me. It's been a lot of fun. I feel like we could have gone on for much, much longer, time permitting. But, it's been a pleasure hosting you. I hope you enjoyed it.

Richard Feldman: Yeah. Thanks.

Erik Doernenburg: Thank you.
