More Languages Won't Fix The Computing World
On major problems in computing, and why new general-purpose systems programming languages within the existing ecosystem will not solve them.
Programming computers always sounded awesome when I was a young kid. In my head, the possibilities were endless—someone could sit down at their computer, have an idea for a game, simulation, or animation, and gracefully go from having nothing to producing an interesting experience. Computers provided the ultimate form of expression that was magically interactive, both for the consumer of the expression, and for the producer. In some strange way—in my child head—making art on a computer felt like experiencing art. Not every step would be a finished product, obviously, but being on a computer meant you could rapidly create concretized realizations of ideas and then tweak them, weaving between designer and experiencer.
You can more-or-less directly see the science fiction scene that depicts this—a sharply-dressed and well-groomed man (with a hairstyle and fashion sense that in no way match the reality of programmers), interacting with holograms with his stylus, gracefully sculpting and shaping something new, making real what was once merely an abstract thought. With every stroke of his stylus, the computer aids him with simple but informative visualizations. The computer acts as an extension of his mind, a tool for bridging the gap between the theoretical and the practical.
Furthermore, the computer—truly an extension of his mind—is his, and everything he makes with it is his (just like his thoughts). Whatever he is creating isn’t sent over the network to a corporation’s server, and his access, usage, commercialization, and so on all aren’t contingent on a corporation’s decisions. The creation—and whatever software is used to manipulate it—is locally stored, and it’s truly his. He can share it, sell it, and make tweaks to it forever. It is the ultimate upgrade to a piece of paper and a pen—like drawing, only better.
The reality, as I learned, is not so fantastical. Programming is, instead, very much a rough approximation of what my child brain had dreamed up to be the case. Day-to-day life programming on a computer, it turns out, looks more like this:
Sorringence, indeed.
It’s all so rough of an approximation, in fact, that I’ve been driven over the years to deem substantially improving the computing world one of my important life goals, because virtually everything on modern computers is unsatisfying. It’s so far from what I wanted it to be as a kid, and so far from what it feels like it could be.
Let’s just take a simple problem: putting an image on the screen. This is a task that you’d hope would be trivial. What does a kid who wants to use his computer to artistically express himself need to do to put an image on his screen?
Of course, it largely depends. By far the easiest programmatic way to do this with common software development technology is with web technology (unfortunately). To put a hand-drawn image on the screen with web technology, the kid might need to do something like the following on a Windows machine:
1. Open up MS Paint, draw something, and save the image file.
2. Move the image to the desktop; let’s say it is called `cake.png`.
3. Right click a blank area on the desktop.
4. Click the “New Text Document” button.
5. Double click the text document, opening it in Notepad.
6. Type `<img src="cake.png">`.
7. Save the file with the special `.html` extension.
8. Double click the HTML file on the desktop.
That is often regarded as a dramatically simpler sequence of steps than what you might have to do in, say, C, even with a few helpful libraries—which would mean understanding what a library is, how to install it, and how to start calling it from a C program—and, of course, understanding what a compiler is, how to install a compiler, how to set up a basic C program, and so on... And, surely, it is much simpler than all of that.
But, nevertheless, it requires tens of millions of lines of code to work at all—it is not even close to the minimum necessary computational work on a modern machine to put an image on the screen. It relies on corporate giants and bureaucratic committees churning out massive browsers that are far removed from first-principles computation. It has achieved an experience that’s simpler than the one systems programming offers, to be sure, but in doing so it has sacrificed principles of computational simplicity and ownership, and thus it fails to reasonably approximate a solution. Unfortunately, the most ownership-minded—and computationally simple—way of putting an image on the screen is much more complicated.
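For contrast, here is a minimal sketch of that more ownership-minded route on the same Windows machine: a single C file that opens a window and displays `cake.png` using Win32, GDI, and the public-domain `stb_image.h` loader. This is my own illustration, not code from any particular project, and note how much machinery it takes before a single pixel appears:

```c
// Build (assuming stb_image.h sits next to this file):
//   cl show_cake.c /link user32.lib gdi32.lib
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

static unsigned char *g_pixels; // 32-bit BGRA, top-down rows
static int g_width, g_height;

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch(msg)
    {
        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            HDC dc = BeginPaint(hwnd, &ps);
            BITMAPINFO bmi = {0};
            bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
            bmi.bmiHeader.biWidth       = g_width;
            bmi.bmiHeader.biHeight      = -g_height; // negative => top-down
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 32;
            bmi.bmiHeader.biCompression = BI_RGB;
            StretchDIBits(dc, 0, 0, g_width, g_height, 0, 0, g_width, g_height,
                          g_pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
            EndPaint(hwnd, &ps);
        } return 0;
        case WM_DESTROY:
        {
            PostQuitMessage(0);
        } return 0;
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int main(void)
{
    // load the image; stb_image returns RGBA, but GDI wants BGRA, so swap
    int channels = 0;
    g_pixels = stbi_load("cake.png", &g_width, &g_height, &channels, 4);
    if(g_pixels == 0) { return 1; }
    for(int i = 0; i < g_width*g_height*4; i += 4)
    {
        unsigned char temp = g_pixels[i+0];
        g_pixels[i+0] = g_pixels[i+2];
        g_pixels[i+2] = temp;
    }

    // register a window class, open a window, and pump messages
    WNDCLASSA wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = GetModuleHandleA(0);
    wc.hCursor       = LoadCursorA(0, IDC_ARROW);
    wc.lpszClassName = "cake_window";
    RegisterClassA(&wc);
    CreateWindowA("cake_window", "cake.png", WS_OVERLAPPEDWINDOW|WS_VISIBLE,
                  CW_USEDEFAULT, CW_USEDEFAULT, g_width, g_height,
                  0, 0, wc.hInstance, 0);
    for(MSG msg; GetMessageA(&msg, 0, 0, 0) > 0;)
    {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```

And even this leans on a third-party image decoder, the Win32 API, and the compiler toolchain; the kid is nowhere in sight.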
As some have said, it wasn’t always this way. I, for one, don’t believe that our computing environments must necessarily give up simplicity, ease-of-use and good design principles, nor respect for ownership. I think the binary of “tens of millions of lines of committee and corporation-controlled code” and “obscure, impenetrable, complex systems programming” is an entirely artificial one.
Recall the original science fiction vision I wrote about in the beginning of this article. How would visualizing an image in that world work? I don’t fully know, but if I wrote a science fiction scene set in that world, you might imagine that the kid would be holding a stylus, and would have a surface to draw on (perhaps not even a flat surface—but for the sake of making it even remotely tractable, let’s imagine it’s just a surface). There would not be any “drawing program” to open, nor “editor program” to open, nor files to save or reference by file path. He would press the stylus to the surface, and draw whatever he wanted. His drawing would immediately become an “entity” in its own right. By virtue of creating that entity, it’d already be displayed on the screen. But, if he did want to write some “code” that recreated this drawing somewhere else, you could similarly imagine him expressing this by placing an “execution point” somewhere that loops in on itself—this might require a different stylus, which would be differently designed to support different expressions (than, for example, drawing). He might begin executing this cycle by pressing yet another type of stylus to it, which causes “activation”. Once executing, every time this execution point cycles back onto itself, it’d “recreate” some referenced “entity”—which he’d express to be his drawing.
That all gets very vague, and it’s obvious that there are innumerable design decisions and problems to solve in creating something even remotely approximating that world. To even get close, it’d require rethinking virtually everything about the existing computing stack—low-level hardware details, peripheral devices, operating systems, file systems, memory and cold storage, security, operating system “window managers”, and so on.
Needless to say, something like it is a computing world that’s preferable to the one we have today. Now I ask: is it so difficult to imagine something that gets closer to that world than the existing world of text files, file paths, libraries, special and invisible rules about file paths and syntax, along with any other concepts the kid might first need to understand to recreate an image on the screen? Is it so difficult to imagine something that gets closer even just by changing, simplifying, and rewriting the software we use, and keeping the rest of the stack the same?
I’ve been in many discussions on this subject, and unfortunately, it does seem difficult to imagine. In fact, it is difficult for me. The process of growing older and better understanding the programming world has a way of divorcing people from the spirit of imagination, creativity, and magic. This is a shame, but I don’t think hope is lost forever—I think, by first understanding the problem, we can imagine a better interface for expressing computation and data on a computer from first principles, and thus get a bit closer to what the experience should perhaps be like. I propose that many problems required to bring about this vision are, in fact, not unsolvable in the same way that other science-fiction-technology problems are—and instead that we already know how to solve many of them.
Many might scoff at this vision, and call it ridiculous. Like I said, the existing computing world is unsatisfying, and unsatisfying might not be enough of a problem for some. But, importantly, it isn’t just unsatisfying. I find the computing world important. In particular, I find the ability of computers to be easily used for the purposes of artistic expression, communication, and self-reliance crucial, for a vast number of reasons—some of which are philosophical, some political, some cultural, some technical.
So the failures of the computing world are not merely small problems, with solutions useful only to beautify the world a bit more—they are instead massive problems that, left unsolved, would have serious downstream effects on technology, mathematics, art, life, and civilization.
Programmers To The “Rescue”
There are many fields in which people may begin addressing these failures, but I am a programmer, and this is a programming article, so I’d like to focus on programmers, particularly systems programmers—who, I claim, are one group of people uniquely positioned to drastically improve the experience of expressing both computation and data on a computer. I suggest this primarily because solving this problem requires those who are able to write things from scratch, and who understand a broad portion of the technology stack, from a very low level to a very high level. Through this understanding, they are the ones with the technical ability to simplify the problem, to rewrite cornerstones of the ecosystem from scratch, and to build a better computing environment.
What are these programmers, broadly, doing to improve the situation? What new inventions are they working on? How are they simplifying the ecosystem so that it isn’t such a mess, and so that it can perhaps be more rapidly iterated on by more people? How are they improving the experience of people looking to use their computer to express computation and data?
A large number of programmers—in fact, perhaps a hefty majority—disagree with any claim that there is a problem. They are perfectly happy with the state of the computing world.
Don’t worry, though. Some programmers aren’t so happy—or, at least, they claim not to be. They’re perfectly willing to admit that there is, in fact, a problem. The problem, they claim, lies with the program that takes in text files and either produces executable machine code, or spits out a big text log with errors about the text files they submitted. The syntax for the text files makes things too annoying to express, the error messages could be better, the program might be able to catch more unintended consequences in the expressed text files, the expressions supported by their programming language are not quite what they’d prefer, and so on. You may even hear them say things like “error messages are the user interface to the compiler”, or “the compiler doesn’t do enough, and it needs more features”, or “the compiler should make it impossible to specify a text file that encodes a program that has bugs in it”.
These people—who are so confident that the problem with language inventors and compiler writers of history is purely that they were not as enlightened as their modern counterparts—will then go on to invent new programming languages, and write compilers for those languages. These compilers will take at least a half-decade to a decade to become usable for a real software development project. Even after those several years, they’ll be littered with bugs, complexity, and odd design choices, definitely rivaling those of—for instance—C. These compilers often over-prioritize the problem of producing executable code, and under-prioritize a number of other problems, like debug information generation, and offering tools that provide insight into how a program is working. Debuggers, some of them will claim, will simply not be as useful with their magical language, which “eliminates entire classes of bugs”. Or, they’ll perhaps suggest that new kinds of debuggers are required for their new language, adding at least another decade of research and development to the project of making their new language viable.
Their new language will carve out a market share of the systems programmer crowd, and it’ll become more difficult for those systems programmers to communicate, share ideas and code, and work on projects together. It will contribute to embarrassing Internet flame wars about programming languages, further muddying waters regarding useful software engineering practices, and further occupying the time of otherwise-useful developers.
I’m not sure how to carefully introduce my response to this point, so instead I’ll just do it bluntly: this is an entirely fruitless exercise. It does nothing to better approximate the world I’ve described thus far, and it only further entrenches us in the world we’re in. It serves to complexify the software world more, instead of simplify it. It does not remedy any of the computing world’s failures, and is disconnected from even noticing such failures, let alone solving them.
I am not the only person who has noticed the waste caused by many systems programming language projects, and by the bizarre assumption that systems programming languages and compilers are the primary problem to solve within the computing ecosystem. Just recently, Abner Coimbre—host of the Handmade Seattle conference—announced that he’d be rejecting any conference talk proposals that directly advertise programming language projects. My response to this, along with that of many others, was relief and gratitude for this decision—Handmade Seattle has been a wonderful conference for those concerned with reshaping the software world to attend, and it (like many other programming conferences and communities) faced the threat of bikeshedding about programming languages until the end of time, which—left unchecked—may have eliminated any possibility of change brought about by the conference whatsoever. Other responses—particularly from some of those who are personally invested in the development of more programming languages, for a variety of reasons—were more fiery. But Abner made the right call, which was to refocus the conference on what matters.
I’d like to be concrete about why I feel this way, though, so I’d like to write on some of the computing world’s major failures to which I refer, particularly those that are widening the gap between what I feel programming could be like, and what programming is today. It will—I hope—become clear that all of these failures will remain failures, even with a new systems programming language.
The Impossibility of Self-Reliance
In my last post, I wrote about the risk associated with abdicating self-reliance, particularly as it relates to the failure of The Machinery game engine. My post generated a number of responses. One such response was the point that true self-reliance was simply not viable, due to the exploding complexity of the modern software world, and that of its supporting industries. This is a common objection—after all, to be truly “self-reliant” you might need to start reading up on modern mining techniques, so you can begin mining silicon to produce your own chips from scratch.
I am not suggesting that programmers go and do such a thing—that is a misunderstanding of my point. My understanding of the problem is, instead, that a broad culture of abdicating self-reliance will, in fact, lead to centralization, chokepoints, and thus the loss of ownership, rights, and expression. Even though it’d be ridiculous to suggest that each programmer needs to become a silicon miner, it’d also be ridiculous to suggest that there should only be a single company responsible for silicon mining. Someone must adopt that responsibility.
In other words, there is a range of problems over which people within a field should be willing to adopt responsibility. For programmers, that range doesn’t include silicon mining. But, it certainly does include other problems. In some cases, it is simply not viable to adopt that responsibility, in which case I suggested that the programmer—or for that matter, the digital creator—can still improve their ownership over what they create by diversifying their investments into various technologies. So, even if they are unwilling to write a specific piece of software from scratch, they should avoid building their project around the success or failure of whatever single software project they choose to depend on in place of a custom solution.
Furthermore, the state of programming technology is such that artists and game designers actually cannot do any of this easily. They cannot “diversify their technology investments”—which would require some environment that allows for the composition of various technologies (in other words, a programming environment)—nor can they gradually introduce self-reliance for certain problems (which would require programming). This is because their only options—at the moment—are mostly large, monolithic, proprietary game engines with non-ownership-respecting licenses. Even if those game engines included ownership-respecting licenses, they’d still be in difficult waters if the engine’s owner abandoned the project. This point is closely related to my earlier-described vision—when simple programming with technology that respects ownership becomes trivial, then a greater degree of self-reliance (and indeed of artistic expression) becomes viable for the artist and game designer.
A new programming language—while it may eliminate some bizarre elements of older languages, like C—will not radically reshape the interface for programming. Programming in these new languages will remain as impenetrable as programming in C is to artists or game designers. This is due to a fundamental property of how the ecosystem connects together, and not due to syntax or semantics decisions—a better C does not even slightly approach the computing world in which the artist is suddenly capable of doing custom programming.
What’s worse, perhaps, is that the ability of the hardcore systems programmer to either diversify their “technology investments”, or to do fully self-reliant programming, is also compromised by an increasingly-complex computing ecosystem. Almost no new language addresses this problem. Instead, the designers of these new languages have a tendency to make their languages more complex than something like C (and thus require more complex compilers, which require more code and more skilled engineers to produce). This makes the problem worse—but even if a new language were dramatically simpler than C, it would nevertheless fail to address the heart of the issue, which is how compiler, linker, debugger, version control, editor, and design tools come together to form an ecosystem.
Thus, the only conclusion I can draw with respect to this issue is that a new programming language does not meaningfully address the problem. It does not make the systems programmer more meaningfully self-reliant—in fact, it may increase their dependence on a centrally-designed language with a design that is still in-flight, with far fewer compiler implementations than C, for instance—and it does not radically alter the programming interface to make it more viable for (and thus improve the self-reliance of) artists or designers.
Complex, Slow, Underpowered Development Tools
There are a number of important roles in a programming ecosystem. Someone must implement an editor, a compiler, a debugger, a version control system, many static analyzers, and so on. These tools—in order to work at all—must share a medium of exchange with the other tools with which they communicate. That medium of exchange, in the current computing world, is text files. An editor can load, display, modify, then save a text file. A compiler can load, parse, and generate code from a text file. A version control system can store, manage history for, and aid merge conflict resolution in text files. A debugger can also load and display a text file, along with other information in another format produced by a compiler (the debug information). The text file format is what allows all of these tools to display text files in (nearly) the same way, and to report coordinates within some text data in the same way.
Some information, however, is not directly embedded within the text file, and thus requires extraction, meaning someone must write a data transformation that takes text and produces some data that provides the information they’re looking for. The compiler is one such example of a tool that requires this—it must tokenize and then parse the text file in order to produce data encoding the abstract-syntax-tree of the program. This might not be a big issue, if the compiler were the only tool that needed to do it—but that is not the case. Instead, many editors necessarily introduce tokenizers and parsers that extract abstract-syntax-tree or abstract-syntax-tree-like information. Similarly, debuggers also require such tokenizers and parsers. To do this without sacrificing, for example, real-time interaction guarantees, these other tools must also parse text in a much different way than, say, a compiler does.
Importantly, the tokenization of a text file, and the abstract-syntax-tree of a text file, both differ on a per-programming-language basis. Ultimately, this means that each tool that requires tokenization and parsing does not just need one tokenizer and one parser, but—in effect—N tokenizers and N parsers. The entire computing ecosystem (which wants tools that understand more than just the text) suffers because of this—the amount of code and complexity it takes to implement an editor, a debugger, a compiler, or many other tools dramatically increases unless they deliberately make concessions. As a result, the programmer’s ability to become self-reliant goes down—it is now more difficult to implement an editor, for instance, and so the market for editors (particularly editors with smart features, like IDEs) becomes more barren, and competition suffers.
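To make the duplicated work concrete, here is a sketch of the kind of small, language-specific tokenizer each of these tools ends up re-implementing just to recover structure from flat text. This is my own toy illustration; a real tool’s version must also handle comments, strings, incremental re-lexing, error recovery, and so on:

```c
#include <ctype.h>
#include <stdio.h>

typedef enum TokenKind { Token_Identifier, Token_Number, Token_Symbol } TokenKind;
typedef struct Token { TokenKind kind; const char *string; int size; } Token;

// fills *token and returns the new read position; token->size == 0 => end
static const char *NextToken(const char *at, Token *token)
{
    while(*at == ' ' || *at == '\t' || *at == '\r' || *at == '\n') { at += 1; }
    const char *start = at;
    if(isalpha((unsigned char)*at) || *at == '_')
    {
        while(isalnum((unsigned char)*at) || *at == '_') { at += 1; }
        token->kind = Token_Identifier;
    }
    else if(isdigit((unsigned char)*at))
    {
        while(isdigit((unsigned char)*at)) { at += 1; }
        token->kind = Token_Number;
    }
    else if(*at != 0)
    {
        at += 1;
        token->kind = Token_Symbol;
    }
    token->string = start;
    token->size = (int)(at - start);
    return at;
}

int main(void)
{
    const char *source = "draw(cake, 128, 64)";
    Token token;
    for(const char *at = NextToken(source, &token);
        token.size > 0;
        at = NextToken(at, &token))
    {
        printf("kind %d: '%.*s'\n", token.kind, token.size, token.string);
    }
    return 0;
}
```

Multiply something like this by every language each tool must understand, and by every tool in the ecosystem, and the N tokenizers and N parsers stop being hypothetical.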
A new systems programming language simply increments N, and it also fails to address the heart of the problem, which is that the text file format is fairly divorced from what it is being used to encode—abstract-syntax-tree-like information. If, instead, the format being used were something other than text—something that more directly encoded such information—it would ease the time and complexity requirements for many useful features in many tools, and especially many useful features that would increase the ability of non-programmers to meaningfully program (without requiring them to embed themselves in the world of obscure programming tools), and thus would better approximate the vision of computing I initially introduced.
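As a sketch of that alternative, imagine the stored format carrying tree structure directly, so that every tool reads the same records instead of re-deriving them from characters. The names here are my own illustration, not from any real project:

```c
// A structure-first storage format: each record stores the tree shape
// that tools currently re-derive by tokenizing and parsing text.
typedef enum NodeKind
{
    NodeKind_Identifier,
    NodeKind_NumericLiteral,
    NodeKind_Call,
    NodeKind_Block,
    // ... one case per structural construct, shared by all tools
} NodeKind;

typedef struct NodeRecord NodeRecord;
struct NodeRecord
{
    NodeKind     kind;
    unsigned int first_child;   // record index of first child (0 if none)
    unsigned int next_sibling;  // record index of next sibling (0 if none)
    unsigned int string_offset; // this node's text, in a shared string blob
    unsigned int string_size;
};

// An editor, debugger, or compiler walks the records directly; the
// per-language tokenizer and parser from the previous sketch disappear.
```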
The Separation Between Computation And Design
This is also related to my last post, in which I describe the fact that programmers and designers currently live in two different worlds, and also that it’s remarkably more difficult than perhaps it should be for one person to occupy both roles, or to seamlessly flow between them.
This is at least partly because of a separation of tooling, which is largely due to the lack of a common data format—this is, of course, relevant to the previous section. Right now, the only common layer at which a programmer’s data is considered the same as an artist’s data is, more-or-less, the filesystem.

This does not need to be the case, however; code, art, and design information share a number of fundamental properties: hierarchical structuring, ordering, references (of one piece of data from another), labels, tags, or arbitrary blobs of data being attached to various nodes of information, and so on. These patterns arise in a significant amount of data authored by game designers, artists, tech artists, and graphic designers—game levels (of many varieties), 3D model formats, vector and bitmap art (not necessarily the final baked versions—which have various performance and size constraints—but the “loose” file formats used for development, which track data regarding layers, groups, and so on), and so on.

I’d suggest that—due to such commonalities at some layer other than a filesystem (which also comes coupled with the concept of cold—and thus much slower—storage)—there is a strong argument for the creation of a common format that can encode the aforementioned common structures. With a common format, there exists the ability to build common tooling that may house both data for computation and data for art or design. A simple version of such tooling could first include the ability to organize, explore, search, tag, and manage versions of a project’s data. When more specific tooling is required—for example, a more specialized code editor, or a game’s level editor, or a 3D model editor, or a vector art editor—then this common tooling can call into more specialized tools when necessary. From common tooling, tool-, idea-, and resource-propagation from one space into the other will follow.
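Here is a hedged sketch of what one node in such a common format might look like, directly encoding the shared properties listed above (hierarchy, ordering, references, labels, tags, attached blobs). All names are my own illustration:

```c
#include <stddef.h>

// One node of a hypothetical common format, equally able to hold a code
// expression, a level entity, or an art layer.
typedef struct DataNode DataNode;
struct DataNode
{
    char       *label;          // e.g. "player_speed", "background_layer"
    DataNode   *parent;         // hierarchical structuring...
    DataNode   *first_child;    // ...with ordered children
    DataNode   *next_sibling;
    DataNode  **references;     // links to other nodes (a call, an asset use)
    int         reference_count;
    char      **tags;           // e.g. "function", "layer", "level"
    int         tag_count;
    void       *blob;           // arbitrary attached payload (pixels, text, ...)
    size_t      blob_size;
};
```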
Because of a separation of tooling, and because of the rigid boundaries within which various tools must interact, there arises a separation in authored content, in data formats, conventions, communities, communication, and so on. Thus, people are arbitrarily forced to choose between becoming skilled in the world of tooling to encode computation, or becoming skilled in the world of tooling to author designs (and, of course, skilled in design itself). This leads to the strange property I mentioned in my last post—those who are most able to design are least able to execute their designs; those who are most able to execute designs are least able to design. And, to restate what I’ve said, the most compelling work appears to arise when someone who is capable of design is also uniquely able to execute their design—in other words, embodying both roles.
It’s repetitive for me to say it now, but a new programming language does not meaningfully address this problem. The format into which code is encoded—and thus, a format fundamentally incapable of encoding a large portion of authored design data—remains the same with a new programming language. Therefore, the boundary between programming and design tooling remains, and therefore the artificial boundary between programming and design remains.
Closing Thoughts
A new general-purpose systems programming language would require, at least, nearly a decade of work before it became realistically usable as a full replacement for, say, C. It may work in isolated scenarios before that point, but nevertheless will not be entirely viable until nearly a decade of work occurs. It will have several decades of inertia in existing languages to overcome, and even after many problems already solved in C are solved in the new language, it will still need to exist in a world ruled by decisions built around C—it will require interop with C in order to interact with operating systems, it will need to cater to a programming culture ruled by C-like mindsets, and it may even require—for instance—a C compiler to be built into itself.
After that near-decade, the probably-few compiler implementations will still harbor undiscovered bugs. The language will have designs that, despite initial appeal, are untested by history, and thus have undiscovered rough edges. It will have problems interacting with tools that actually offer insight into the way programs operate, like static analyzers and debuggers. It will not necessarily work out-of-the-box with a large number of editors, which rely on mountains of code to offer even the simplest insight and features for parsed text files. It will just be getting started in overcoming the tsunami behind the existing ecosystem.
I look at that reality—fortified by a number of systems language projects that have matched precisely the pattern I describe—and my only response can be “it’s not worth it”. The reality offered by a perhaps-cleaned-up general-purpose systems language is simply not better enough than that offered by our existing ecosystem. I will still be using the same debuggers, which may work even less well for this new language. I will still be using the same editors, which also may work even less well for this new language. I will have a number of tools simply not available to me. I will cut myself off from sharing code with a huge segment of the programming world. And for what?
Nevertheless, there are non-general-purpose scenarios in which a language may be a worthwhile endeavor. For example, I see the work that went into Lua as a net win—it made flexible scripting, embedded within a C project, easy. Many people have used it to better approximate a flexible and expressive programming environment. I still don’t see it as a dramatic departure from the current ecosystem (which it wasn’t trying to be), but it seems—to me—to be a much more reasonable, conservative attempt at improving the current ecosystem. And, surely, it has been usable for many artists and designers on many teams. I don’t see a problem with this. In other, also-special-case scenarios, there are similar reasons to introduce a language. Just recently, I wrote on a simple code generation strategy I’ve been using frequently, which in effect changes the language in which I express some data, and allows me to generate more code and data from that language, where I control both the language itself and the generator. This has been a useful tool within the current ecosystem for me, and it is—in fact—a “new language”.
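The essence of that strategy can be sketched in a few lines. In this hedged illustration (hypothetical names, not the generator from that post), a hard-coded table stands in for data expressed in a tiny custom language, and the program emits both C code and C data from that single source of truth:

```c
#include <stdio.h>

// stand-in for data parsed from a small custom language the programmer owns
static const char *color_names[] = { "Red", "Green", "Blue" };
enum { COLOR_COUNT = sizeof(color_names)/sizeof(color_names[0]) };

int main(void)
{
    // generate a C enum...
    printf("typedef enum Color\n{\n");
    for(int i = 0; i < COLOR_COUNT; i += 1)
    {
        printf("    Color_%s,\n", color_names[i]);
    }
    printf("    Color_COUNT,\n} Color;\n\n");

    // ...and a matching string table, from the same single source of truth
    printf("static const char *color_string_table[] =\n{\n");
    for(int i = 0; i < COLOR_COUNT; i += 1)
    {
        printf("    \"%s\",\n", color_names[i]);
    }
    printf("};\n");
    return 0;
}
```

Run at build time, its standard output can be redirected into a generated header that the rest of the project includes.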
So, more-constrained languages may really improve the day-to-day experience of programming in the short- to medium-term future. That seems perfectly worthwhile to me. Furthermore, a language project can be a useful educational project. Writing tokenizers, parsers, and code generators builds legitimately useful skills that translate to other problems, and those skills can be applied in the more-constrained scenarios I’ve mentioned.
But what I do have a problem with is the culture that is willing to waste decades of effort in producing marginal (at best) improvements to the current computing world. I suspect that this culture is not driven by a legitimate interest in improving the status quo, and is instead driven by ego-building and pissing contests.
I am not willing to spend a decade or two playing catchup just to, in the end, obtain an experience that matches or slightly beats that of C. I would be willing to spend a decade or two playing catchup for something much better than what these languages are offering—something much closer to the vision I introduced at the start of this post. I would also be willing to entertain short-term tools that require far less investment and that improve the experience of programming today.
At the end of the day, I don’t want to control what people do, and even if I did want to, I can’t. If you’d like to go work on a systems programming language, or use a new systems programming language, you can do so, and nobody will stop you. But I wanted to concretize my reasons for being so skeptical of such language projects, and argue that the systems programming world should be far more pessimistic about new languages than they currently seem to be.
To truly realize a computing world that respects artistic expression, computational simplicity, ownership, ease-of-use, and universal accessibility to computation, we must clearly express our vision of that world, and understand the factors that make it different from the current world. One of those factors is not that our textual programming languages are not good enough. I want to be a part of a systems programming movement that is serious about really improving the ecosystem, and that is simply not a movement that is investing many decades into new textual programming languages, which will go largely unnoticed in the history of the computing world.
If you enjoyed this post, please consider subscribing. Thanks for reading.
-Ryan
Comments

Mostly I've found myself agreeing with your thoughts in previous posts. But there are two major ideas in this post that I'd like to offer a different perspective on.
First, I've heard a number of people recently suggesting that text files are not the way that programs should be represented. I can understand some of this, but if you've ever used a programming system that *doesn't* store the programs as a simple text file, you'll understand why that is a Really Bad Idea. You suggest, for example, Dion as a better programming model. Pray that this doesn't happen, but what if Dion gets Our Machinery'd, and you didn't save a text copy of your programs?
You suggest "there is a strong argument for the creation of a common format that can encode the aforementioned common structures," yet about the languages themselves say "I am not willing to spend a decade or two playing catchup, just to, in the end, obtain an experience that matches or slightly beats that of C." Any new format to store programs in is going to need at least *several* decades before I would trust my programs to be stored in that format rather than a text file. I learned this the hard way.
The most important thing about plain text is that it is one of the few file formats that will be readable basically forever (as long as a machine-readable copy of the file exists). I've been using computers since the late 70s. There are so many programs I've had that created files from which I can no longer access MY content. The programs no longer exist; even the OSs the programs ran on no longer exist. And had I been diligent enough to copy the content from one medium to the replacement medium every time a new one came along, I *still* wouldn't be able to access much of that content, because it used proprietary data formats. What I *can* still access are files that used the simplest of formats out there: plain text. Or simple .BMP files, for example. There are video formats/codecs from as little as 20 years ago for which it is difficult to find current products that recognize them.
So, in this case, sorry, you are wrong. Text is by far the best format for now, at least if you want to be able to still look at and use your code any distance into the future.
Oddly, what could change this is the second thing I have a different perspective on. I say, let programming languages be fruitful and multiply. The VAST majority of languages will wither and die. But occasionally one will have a new concept, or simplify something, or be vastly more powerful in some way, and that idea might catch on a little. Maybe it's that language, or maybe another even newer one that just borrows that concept, that might gain some popularity. Only the very strongest will survive, though.
As with any new tool, if you aren't willing to risk the likely situation that the programming language becomes unsupported or, worse, vanishes completely, you should not be using it. Most new programming languages seem to me to be toys to experiment with some idea or another. Sadly, most of those ideas are simply a different syntax, but I encourage even that exploration, because if you don't, you eventually get stuck with the atrocities of C++ syntax.
If you don't support the testing out of new languages, would you apply the same logic to any other tool you use? Imagine if you had to use something like MSPaint to edit your photos because someone said we shouldn't waste decades in producing marginal improvements to the graphical editor world. (Or imagine if you still had to use the original implementations of FORTRAN, COBOL, PL/1, and Lisp, because someone made your argument in the 60's instead of the 2020s.)
People having religious wars over programming languages (or editors, etc.) is just something that happens when they are heavily invested in them. You can ignore such fanaticism for what it is. Any arguments over new languages are like that too. They help create the environments that eventually survive (someone has to champion for a thing for it to become popular), but are otherwise meaningless.
Maybe at some point, some new language will take off that uses a new, non-text format to store its programs in, and an entire ecosystem will pop up to support that new format, not just for that language, but for others that decide to imitate it. After a few decades, and hundreds or even thousands of programs written that support this new file format (editors, debuggers, etc.), maybe I would be willing to risk writing something important in this language, having finally become convinced that my own work won't disappear because the format vanished, became outdated, or was replaced by something ostensibly newer or better with only nominal support for backwards compatibility.
But this will only happen if many new languages are tried, and the good ideas from them, like genes that provide a survival benefit, are allowed to survive and become fitter.
Even though I had some disagreements, thank you for writing your thoughts on the topic. I always enjoy reading them.
I agree with a lot of this article, and unfortunately I just have to accept that there will never be a perfect language, platform, or set of libraries. Making new languages all the time just seems naive; after all, if they are Turing complete they will always have the same capabilities, but with the downside of not supporting prior libraries natively. And the new languages either "solve" usage issues of earlier languages by automating tasks and preventing the programmer from making those specific mistakes, or they add more features, which developers get frustrated with because more features give more ways to shoot themselves in the foot.
It seems like at this point languages have been expanded upon to the point that there is no need to make new ones, because you already have all the functionality in the existing ones. There is usually a simpler default language (like C or JS), and a backwards-compatible expansion of it (C++, TypeScript). According to developers, the simple languages are too verbose and hard to manage in large projects (e.g. C), but the expanded languages give too many features that allow them to shoot themselves in the foot (e.g. C++). It's always a grass-is-greener situation; there will never be a perfect language that matches everyone's way of thinking, reading, and use cases. So then they make a "safe" language that takes years to develop (you know which one I'm talking about) that isn't actually safe (you can never really protect a bad programmer from themselves, in the same way that you can never make a UI simple enough for Grandma to understand). So why not just focus on improving IDEs or programming styles, instead of making non-backwards-compatible languages with poor support?