@annasofialesiv

telling stories about our tools

🗂️ blog archives
  • The Blog


    The first blog I read religiously was Raw Thought.

    It first opened my eyes to what a blog could be: diary, confessional, scratch pad of ideas, place to rant, advice column.

    On Raw Thought, Aaron provided in-depth, game-theoretical analyses of the films he watched. He discussed the efficacy of various diets he was experimenting with, including something called the Shangri-La diet, which involved eating a tablespoon of olive oil when hungry. Apparently, it was so unappetizing that it just eliminated the desire to eat entirely. He wrote about how lonely he found it at Stanford, and admitted that one of the most interesting people he met there was his sociology professor. He was fascinated by institutions, loved Noam Chomsky, and despised ‘the news’.

    I was amazed by the simplicity of the site. I found the left-aligned text column a rebellious and stubborn design choice, as I did the tiny Georgia-style font. I can just imagine designers decrying these moves, but on Raw Thought they just worked. The ideas spoke for themselves. They didn’t need web design to make them more palatable.

    The blog was a kaleidoscopic experience of Aaron’s frustrations, obsessions, and projects. There wasn’t anything he wasn’t interested in. He clearly had a truly insatiable appetite for knowledge, reading a baffling 140 books per year, and instructing the rest of us on how we could do so too.

    Raw Thought was what a great novel should add up to, broken up over hundreds of individual posts. It was a near-complete description of Aaron’s perception of the world. But unlike a novel, which once written is static, the blog is a living thing. The story continues, and readers begin to form a kinship and sense of comfort when they log back into a familiar site, to read an entertaining update from a familiar voice.

    It was extremely tragic when Aaron’s story ended all too soon, but the writings he left behind impacted me greatly and inspired a vision of what intellectual life on the Internet could be. That vision was something like a community of thinkers, learning and discussing in the open, committed to making the journey to discovery public. It was also a vision that offered incredible autonomy to the writer. The blog was not merely about its posts, but about the reader’s holistic experience on the site. There was no limit to how you could configure a web page, so there was no limit to how you could communicate ideas with readers. (Take worrydream for instance.)

    Personal blogs were akin to personal gardens. There was so much to be surprised by, so much delight and joy to happen upon as you skipped through these private gardens, tended in public.

    For the most part, that kind of peer-to-peer ideal is gone now. It continues to exist only in small enclaves. Most online writing now lives on decidedly unsurprising platforms like Substack. There’s a part of me that hopes, however, that people will re-discover the excitement that comes with having a unique spot of your own on the web. That they will find deep rewards not only in broadcasting their ideas, but in the expansive sense of self-expression afforded by the freedom of the medium. After a brief hiatus, I’m unearthing my blogging habit. I have missed tending the garden, and missed engaging with ideas that require more than a few characters to convey.

    If you start, or already have a blog — let me know. I’d love to explore your corner of the web!

  • Criticizing Computers


    Do you ever get really mad at your computer? Are you frustrated by how everything works these days? Do you get angry not knowing who to complain to when programs or applications break down? Well, you’re certainly not the only one.

    Computers and applications are our gods now. They operate in an invisible ether and live out there on some distant, mysterious Mount Olympus — better known to us as a data center. Their mercurial whims dictate our moods and our fortunes. We plead with them, but have no idea if they are listening.

    Of course, it wasn’t always this way. Computers used to be small and straightforward. They used to be cute! Now they are big and scary and mind-boggling. Now they are so complex that barely anyone alive understands how they work in their entirety.

    We were warned about this. We were warned about the dangers of too much complexity, but we didn’t care. Now, rather than understanding how the thing works and what to do to fix it, we basically cross our fingers anytime we boot up our computer or open an app. These criticisms aren’t original to me, I am just a messenger.

    The history of cursing modern computation is storied and exciting, and below is a summary of some of the most biting critiques issued by the most brilliant minds in the field.

    1. Good hardware is masking really bad software.

    The processors in our laptops are over 1,000 times faster than they were thirty years ago. So it’s natural to ask, as Joe Armstrong does, why our programs don’t run 1,000 times faster.

    Jonathan Blow’s answer is that software is in decline, even though it looks like it’s flourishing. Software has been free-riding on hardware for decades, and software technology has not actually improved in years.

    He says that “today, we simply don’t expect software to work anymore. … ‘The Five 9s’ used to be a very common phrase in the 90s when people wanted to sell you software or a hardware system. What it means is “this system is up and working and available 99.999% of the time.” We don’t use this anymore. In part because the number of 9s would be going down, and we can’t make them go up again. We’ve even lost the rhetoric of quality that we used to have.”

    2. All the bad code is piling up, and we don’t know how to get rid of it.

    Joe Armstrong says that in the last 40 years, we have written billions of lines of code that will keep programmers employed for trillions of man hours in the next few thousand years to maintain and debug the code we’ve written.

    On the one hand, software engineers should be thanked for this incredible jobs program they’ve created. On the other hand, to the extent that implementing software was intended as a solution to reduce labor costs, it seems that implementing software may have inadvertently increased them?

    If, as the brilliant mathematician Edsger Dijkstra said, “computing is about controlling complexity,” then the body of work produced by the computer industry’s programmers has entirely failed.

    In the past, when computers only had so much memory, programmers had to be selective about which programs would live on in memory. These constraints imposed a strict discipline on programmers, and over time, a process of natural selection retained the good programs while weeding out the bad.

    Today, memory is not an issue! We have more memory than we know what to do with. Rather than natural selection, we have a regime of hoarding. All the bad code can live on to torment and confuse the programmers of the future.

    3. Programs used to be pipeable. Now, GUIs are killing pipes.

    There already exists a programming philosophy with ideas on how to tame entropy and ensure that computers actually control complexity, instead of increasing it. Those ideas are summarized in the Unix philosophy, which I’ve added below for convenience:

    1. Write programs that do one thing and do it well.
    2. Write programs to work together.
    3. Write programs to handle text streams, because that is a universal interface.

    If you think about programs as modular units of computation, then you can think about combining or piping these programs together to do all sorts of new things. That way, you never have to re-program anything when you’re just trying to create a new combination of functions, and if something goes wrong in your program — it’s easy to debug since every component of your program only ever does one thing. Furthermore, because the input and output of every program is just text, we can even pipe together programs written in different languages!
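    To make that concrete, here is a minimal sketch of such a filter in Python. The file name and the pipeline below are purely illustrative, not taken from any particular project. The program does exactly one thing (count words), reads a text stream from stdin, and writes a text stream to stdout, so it can sit anywhere in a pipeline next to programs written in any other language.

        # wordcount.py -- an illustrative "do one thing well" filter.
        # Reads plain text on stdin, writes "count<TAB>word" lines to stdout.
        import sys
        from collections import Counter

        counts = Counter(sys.stdin.read().split())
        for word, n in counts.most_common():
            print(f"{n}\t{word}")

    Because every stage speaks plain text, you can compose it with standard tools, e.g. cat notes.txt | python wordcount.py | head -5 to see the five most common words, and swap any stage for a program written in a different language.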

    Pipeable programs promised a golden world of future possibilities. Combine programs together in any way you like to have the computer do whatever you ask of it! You are the master of your destiny and the computer is the ship that takes you there!

    The only problem is that most of the programs we use aren’t pipeable. Usually, when I’m using an app, I am interacting with it through a graphical user interface (GUI). I click on a button, and then the app does some function. Maybe I click on the weather app to ask what temperature it will be tomorrow. The weather app will not give me my answer in a convenient output format that I can pipe into another program. The weather app might just display a number on my screen that I then have to remember and re-type elsewhere.

    4. Computers are telling me what to do, but I can’t tell them what to do.

    It’s not an exaggeration to say that today, computers have more sway over human behavior than humans do over computer behavior. This, actually, was the whole idea behind the design of modern personal computers. The big idea was that users shouldn’t have the ability to tinker with how their file system is organized or change the appearance or function of their operating systems. It’s an idea that then spread to the entire world of computer applications. Now it is entirely the norm to give users non-programmable interfaces where they cannot change anything about the programs that they interact with!

    5. Software is not technology. It is ideology.

    “With ideas come the politics of ideas. There are thousands of computer ideas and so there are thousands of computer religions. Every faction wants to pull you in. Every faction wants you to think that they are the wave of the future and because there are no objective criteria, as in religion, there are thousands of sects and splinter groups. Everyone has a favorite language and they’re fanatical about it! C++, Perl, Python, Ruby, PHP, C#, Lisp — but which Lisp? — East Coast Unix versus West Coast Unix. But — who gets to decide what methods will be used? What ideas will be followed in creating software? Ah — that’s the politics of the computer world.”

    “In Hollywood, they have it down to a system. They determine who directs. In the computer world, everyone wants to design software, but they don’t call it creative control. In software, it’s the same issue as in Hollywood, but nobody recognizes it. And that’s partly because interactive software is really a kind of movie. What is a movie? A movie is events on a screen that affect the heart and mind of the viewer. What is interactive software? Events on a screen that affect the heart and mind of the user.”

    “As long as the industry thinks that software is technology, the process will not improve.” — Ted Nelson

    6. Computers are fast, but in the dumbest way possible.

    “The semiconductor guys don’t know much about computers so they’ve copied a bunch of ancient architectures. So what we have in the personal computers is creaky old architectures with this wonderful (semiconductor) technology. So the people driving the technology don’t know anything about systems and the people who have built traditional large computer systems haven’t participated in the (semiconductor) technology revolution. Supercomputers are an extremely inefficient use of power and space. They just brute-forced it. They said we’re not going to use any cleverness at all, we’re just going to pour the coal to it and see how fast we can make it run. So they just turn up the knob and go for it. And you end up with these giant steam engines that blow off all this heat. It doesn’t make any sense at all from any point of view. But, you see, the situation is actually much better than that. And that’s the part nobody counts. I mean if you take today’s technology and use it to do a really novel architecture you can get a factor of 10,000 right now. Today.” — Carver Mead

    7. No one knows what they’re doing, and it’s a civilizational problem.

    Jonathan Blow says it takes a lot of energy to communicate information from generation to generation and that “without generational transfer of knowledge, civilizations can die because the technology that those civilizations depend on degrades and fails.”

    Well, most people can’t program without help from Google and Stack Overflow, and as mentioned, basically no one understands how the computer works in its entirety, so we’ve really screwed up our chances for educating the next generation in how computers really work.

    One of the greatest programmers in history, who did indeed understand every component of the computer from the software down to the chip, Niklaus Wirth, wrote “the belief that complex systems require armies of designers and programmers is wrong. A system that is not understood in its entirety, or at least to a significant degree of detail by a single individual, should probably not be built.” Not only have we built such systems, but all our collective livelihoods depend on them!

    I guess the best we can do is pray to the computing gods that everything works out!

  • A Conversation on Technological Literacy


    I spent most of this summer in New York City, as a member of Interact’s residency cohort. I wanted time to develop my ideas on what I call “technological literacy” — or our growing need to understand the technological environment we live in.

    There’s a growing discrepancy between our reliance on technological systems and our understanding of them. Given the complexity of the industrial and digital processes surrounding us, this is entirely understandable. However, in my view, delegating understanding is never a good thing, and it’s best not to be blind to the inner workings of processes we interact with on a daily basis. Furthermore, resolving to be only a passive consumer of technological goods means, in some ways, forgoing the ability to alter or manipulate the technological tools we use. Good writing about technology, it seems, is one of the missing pieces in this puzzle, without which we’re destined to continue walking blind in the modern maze of systems and gadgets.

    On August 13th, I hosted a conversation with some of my favorite writers: Nadia Asparouhova and Danny Crichton, joined by a few additional special guests, to discuss these and many other topics. I’ve transcribed the conversation below, because it’s too good not to be shared.

    Anna-Sofia: It’s pretty obvious that the world is very much powered by technology and technological designs. The thing that’s different and interesting about this today is that most of these designs, whether they are industrial processes, whether they are devices, or the software we use — the inner workings of them are hidden, abstracted or literally invisible to the human eye.

    Like if we think about the circuit logic in an iPhone, this is electron level stuff. So we can’t even see it with our own eyes if we wanted to. All of computer engineering, really anything digital, is built on abstractions. So it’s much more difficult to see, and to build intuitions about, the technologies that we use in our daily lives today than it was in the past.

    This is the first passage from the book The Elements of Computing Systems, which is a book about how computers work. So it begins,

    “Once upon a time, every computer specialist had a gestalt understanding of how computers worked. The overall interactions among hardware, software, compilers, and the operating system were simple and transparent enough to produce a coherent picture of the computer’s operations. As modern computer technologies have become increasingly more complex, this clarity is all but lost: the most fundamental ideas and techniques in computer science—the very essence of the field—are now hidden under many layers of obscure interfaces and proprietary implementations. An inevitable consequence of this complexity has been specialization, leading to computer science curricula of many courses, each covering a single aspect of the field. We wrote this book because we felt that many computer science students are missing the forest for the trees.”

    So there’s a sense in which maybe we’re all kind of missing the forest for the trees in some way. I wrote, as part of this residency, three theses based on these ideas, and those were:

    1. I think technological literacy, meaning an understanding of the scientific, physical, and mathematical concepts behind technological designs, will be increasingly important to retaining agency in the world. So you can imagine, for example, that economic or financial literacy is really important to how every responsible adult navigates the modern world. And so there’s a claim to be made, or my claim is, that going forward, having technological literacy is going to be very important to being able to orient yourself in the world.

    2. Okay, second thing is all technological advancements create new problems. 

    3. And the third is that the role of public writers should be to articulate those problems, and promote greater technological literacy through their writing.

    So basically, what Nadia and Danny are doing really well. The other problem is that, increasingly, it seems like there are fewer and fewer voices, institutions and publications that are telling stories about technology in these new ways, which is something I hope to dive into in our conversation today.

    What I think is most lacking from the discourse is highlighted in the following quote from the social critic Paul Goodman, who says,

    Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science.

    I think that, in general, going forward, there should be more of an identity created between technology and culture. 

    Technology is crucial in determining resource allocation, our habits, how we do things. I also think that technology is just a matter of design. It’s a design with a lot of flexibility in the same way that an artwork is a design. And it’s a reflection of a particular way of doing things. But there might be multiple expressions of that way of doing things. And so that’s something that I also think is missing. 

    The last thing is something that Maran Nelson, the founder of Interact, pointed out to me, which is that if we accept that technology is art, we know that within the ecosystem of art, there’s also an ecosystem of art critics that comment on the art. That’s not really the case for technology; there’s not really a culture or an ecosystem of critics like that. And is that maybe something that will change? Or how do we think about that? Okay, that’s enough from me.

    Now, I’ve spoken with Danny and Nadia about these issues individually, but I’m really excited for the opportunity to speak about these themes for the first time as a group. To introduce them:

    Danny is like a polyglot: he studied math and computational science, and began a PhD in public policy. Danny was the managing editor of TechCrunch and is now the head of editorial at Lux, where he writes about the complexities of our economic and technological world.

    Nadia is someone that I really respect because she is really interested in the world of the unseen and the hidden. She’s an independent researcher and writer who published a really amazing report about the unseen world of our digital infrastructure and who’s building it. She later wrote a book about open source software and how it’s built and maintained. Now, Nadia is focused on researching the emergent philanthropic institutions that tech wealth is creating and how they will influence the world. So really important questions which all coincide with tech, narrative storytelling, and institutions, which is basically the focus of our conversation.

    Danny Crichton: Thank you so much. Thank you, everyone, for coming.

    Anna-Sofia Lesiv: Yeah, thank you guys for being here. To open it up, I’m curious if you guys have any immediate reactions to the intro, otherwise, we can jump into a linear sequence of questions!

    Nadia Asparouhova: I was thinking about what technological literacy means today, and how that’s sort of evolving because, in some senses, people need to know less about their computers, so I feel like some of these conversations have kind of fallen off.

    But then, they’re kind of missing this other side of like, yeah, maybe I don’t need to know how to debug my computer anymore. But, one of the things that I always think about is, lack of technological literacy leads to fear of like, “the algorithm.” And the way people write about “the algorithm” in the media suggests this underlying fear or lack of understanding of how products work. That’s just something that came to mind as you were talking.

    Danny Crichton: I was thinking this morning about how similar writing and software code are. I mean, they’re really parallel, right? They’re quite interactive, and we seem to be entering a world in which we’re increasingly doing hybrid technology. Software and hardware. Bio and tech, where it’s not just enough to know the software, you actually have to know other fields of engineering and science in order to put it all together. And so, when we talk about literacy, even if you have software literacy, you don’t necessarily have complete literacy, going back to your first quote from The Elements of Computing Systems, and to me that’s a huge gap as we’re entering the metaverse, VR, as more of our technologies are entering the physical world again. And I feel like we have to expand the definition of literacy quite a lot.

    Anna-Sofia Lesiv: Okay, so maybe one good place to start is with this quote that ‘technology is a branch of moral philosophy.’ To me, basically, any and every question about “should” resolves to some fundamental technology, in the sense that a technology is a mechanism for how something “should” be done. So I wanted to open that up to you guys and ask, what is the most valuable or operative framework that you use when looking at technology or when trying to critique it or ask questions about it? And another way of putting that is basically like, do you agree with this quote, that technology is a branch of moral philosophy? Or is it something else?

    Danny Crichton: I think it is a branch of moral philosophy. It’s maybe more constrained than an open discussion of philosophy. Every technology offers capabilities — there’s a range we oftentimes talk about in the government sector as dual use, right? There are technologies that can be used for good or for evil, and the user fundamentally gets to choose. So for instance, CRISPR can be used to create vaccines to save lives, as with COVID-19. At the same time, it can be used to optimize biological weapons and, you know, create pathogens that kill all of us. That said, I do think, and this maybe is related to your tech critic piece, that there hasn’t been as much focus on the creators. I mean, the people who are building these technologies are shaping, in some ways, the destinies of how these things get used.

    So I would love to see both a better form of moral philosophy around technology, a better framework and comprehensive view of it. I think that requires more tech critics, but it also requires you to have the full swath of knowledge, everything from the limits of computation to the design constraints that an engineer, product manager or company faces, to the psychologies of users. And unfortunately, that’s a lot of fields all in one head. I don’t know if anyone could ever pull it all off. But I agree with the premise that you’re giving. I would love to see more of technology being considered a moral philosophy.

    Anna-Sofia Lesiv: So Nadia, when you were writing about open source, who did you see as the audience? Because obviously, it’s quite niche. And there are limits to what you can do once you’ve formed an opinion about something, as you mentioned. Are you guys both writing for, like, this future generation of tech critics that will hopefully, through your writing, become more enlightened — and then talk to the engineers? Or do you envision a broader audience for yourselves?

    Nadia Asparouhova: I made a really specific decision to focus on developers and people at technology companies who are using open source software. Because yeah, I mean, this was a question early on: do we want this to be a more mainstream conversation? Do we want to try to get, like, placements in mainstream publications?

    I think I asked myself, like, how do I need to change the argument in order for this to get mainstream coverage? Open source is usually covered as, like, all about security vulnerabilities. It’s a very narrow way of understanding what open source is, because that’s the only thing that’s relevant to, you know, a random person reading the Washington Post or whatever.

    And then also, I got to ask myself, what is the impact if I achieve mainstream interest in this topic? Like, is that really going to change anything? I really care about impact. I don’t want to just write stuff where random people go, “Oh, that’s a nice story,” and then they don’t think about it anymore. 

    If I really care about impacting open source, then I need to speak to the people that are actually consuming it. And when I first started, I just kind of assumed that if you’re a software developer that uses open source, you understand a lot of the stuff that’s going on. However, I quickly realized there was actually this huge gap. Just because you use open source does not mean you understand what is actually going on within open source projects. And it could actually be really useful to serve that translational gap.

    Danny Crichton: You’d be amazed how few topics people care about. Like, I’m always amazed — if you were to pull out all TechCrunch traffic, there are like three subjects. It’s like Apple, Tesla and like Elon Musk’s latest shenanigans are like 70% of all attention that stories get. 

    It’s just the reality that a Tesla story will do 100 times better than almost any startup. Right? And if you think about it, that makes sense. Like, no one’s ever heard of the startup. Why would I care about something I have never heard of before, which is why you will oftentimes see in the headlines like Sequoia-backed or YC-backed because these are signals, you know, signifiers that we should care about it. 

    If you really want to do pathbreaking original work in technology, whether it’s criticism, whether it’s a new field, whether it’s you know, science that hasn’t been covered before, I mean, there’s not an audience. There’s no one who cares. There’s no one who’s searching for it. 

    So that was one of the challenges I faced. I covered a lot of the US-China trade politics, which, thanks to events in the world has become much more interesting than when I started and there were like six writers covering that and, you know, we were focused on open source technologies and 5G back in like 2017. I used to get no views. Then, thanks to politics and everything, we got more and more attention.

    I really think that as a writer, you have to constantly be thinking, not just who is your audience, because oftentimes, there is no audience, you have to be thinking who can I convince and get them to say things like, “I really have to get into this”, “I want to read multiple things”, “I want to read a book”, “I want to maybe subscribe to the newsletter”, because you are filling in a gap that they have. Much in the way that a technology product fills, you know, needs or gaps that folks have as consumers. 

    As a writer, I think of it as, what are the intellectual gaps? In my own newsletter, Securities, that gap is interdisciplinary focus on technology, science, finance, and geopolitical complexity, connecting dots between a lot of different fields, all simultaneously, things that we’ve seen in the news that are usually written separately, and just saying, like, “Look, all these are intertwined and connected for various reasons”. Because to me, so much of modern press is very siloed and verticalized.

    Anna-Sofia Lesiv: Yeah, well, this really, I think this explodes the conversation into the direction I’d like to take it. 

    I wanted to ask you guys — what exactly are these gaps that are not being talked about or written about? What are the ways that technology is not being written about as a form or style question that you would like to see more of?

    Nadia Asparouhova: I mean, I’m biased based on the stuff that I’m looking at now. 

    But I think there’s been a shift from the era of startup building and wealth accumulation to, how do we now grapple with our newfound power as tech and think about our place in society?

    I think that transition is slowly happening, like you see things happening on the ground that suggests that is happening within tech, but then I don’t feel like that has quite entered the mainstream narrative yet. 

    So that’s like the topic that I’m personally interested in. But yeah, I still think that the broader conversations around, what is the role of technology in my life? and how do I grapple with it? 

    I think the technology critics that are writing about those kinds of things, right now, tend to write all in the same way. I feel like it’s all the same sort of critique. The critics exist, but they’re all kind of saying the same thing. If you can always predict, like I said, you could just automatically generate …

    Danny Crichton: GPT-3

    Nadia Asparouhova: It’s so focused on social media, it’s always focusing on these very specific areas of technology. To me, good critiques don’t start out with this end goal in mind. It’s more about starting by observing what is going on. Start by just trying to explain what is happening and then slowly form a thesis through that. I just don’t feel like I see that kind of nuanced critique of tech right now.

    Danny Crichton: I’ll give an example. Carolyn Chen has a new book that just came out, I want to say three or four months ago, called Work Pray Code. Good title. She focuses on the translation of East Asian Buddhism, Zen Buddhism, from East Asia into California in the 1960s with the countercultural movement, and then how it morphed into the modern day — like mindfulness, wellness, particularly corporate-directed mindfulness programs at Google and other large tech companies. And she talks about how this very nonmaterial religion, because Buddhism is not about accumulating wealth, has transformed places like large tech companies into this culture where, at the beginning of every meeting at somewhere like Google, you take five minutes for a breathing exercise. It’s like, “Here’s this Buddhism moment … and now we’re going to talk about how to optimize the ads.” She was the first critic I saw that was offering a sociological angle, a different angle. I take that as an example, since I’m always really interested in all these intersections.

    I’m also interested in the built environment, and the built environment that is starting to reach end of life. If we look around us, you know, our sewers, our transportation systems, airports, our power grid, a lot of it was built in the post-war era. A lot of it was built for 50-70 year horizons, a lot of it’s coming to end of life and needs to be replaced. 

    And as a generation, my generation of millennials, or Gen Z, or even Gen X, none of us have ever been part of building or replacing the systems in our built environment. And so as an example, one of the more successful essays I did as a writer at TechCrunch was a focus on the New York City subways and how they brought wireless and WiFi into stations. I thought that this was going to be a really niche subject, that no one would care about. And then I interviewed Transit Wireless, the company that did it. They spent seven years putting WiFi and wireless into the stations, and what the story ended up becoming was just the impossibility of what it takes to get a signal into these stations. Namely, they have to deal with rats, they have to deal with water, terrorism, people peeing on them. I was talking to the CEO and he was like, “You know, milspec, or military spec, like doesn’t even come close”. Because, like, no one pees on tanks. It’s not something that the Defense Department normally worries about. But if you pee on one of these antennas, it shorts out. And so it ended up being a really successful article. I can see the numbers. And I think it’s a good example that it is possible to go beneath the surface and to show how things work.

    The challenge has been getting away from the glare of a couple of core technologies. Social media is obviously one. There are like three books on making subways, and there are like 1000 books on how social media is eating your mind. And I don’t know how to redirect. 

    This is the challenge with audiences. Ultimately, there are buyers for these books. The reason you keep seeing “social media is terrible” books is because there’s a huge market of readers who want to read, “oh my God, my brain is being fried”. But then that’s where I go back to audience development. I really want to create more audiences. We’re all in a market-driven publishing environment. We need more audiences that are curious about how we built the built environment, and how do we replace it, and how do we not lose what we already have and make it better going forward for the next generation?

    Nadia Asparouhova: I do wonder, is it realistic to create a market for that? Going back to my own standard on directly impacting people’s lives — one reason why the social media narratives have been so successful is partly because it’s fun scare tactics or whatever, and they’ve worked really well. But it’s also because like, everyone deals with social media every single day, and it is infecting their minds every day. So it is directly impactful to think about it and have a conversation about it. 

    And, there is a little bit of the like, “isn’t it so cool how this thing works?” and everyone goes, “yeah, it’s really interesting how this works,” and then they go back to their lives. So I’m trying to think of how to draw that line from these more nuanced conversations. Like where do you see the impacts of that on someone’s life?

    Danny Crichton: This is an insight I’ve been thinking about and I haven’t pushed the limit too far — but this is the first generation that widely codes. If you step back and look over the last 30-40 years, computer science used to be in really tight ivory tower circles. At one point, you could actually meet every person who could code a computer, like in a room this size.

    I learned to code in elementary school and middle school, and it was really hard. I bought books; you went to Barnes and Noble and got, like, How to Code C++. This was prior to Stack Overflow, all the tutorials, all the modern ways to learn. And now I think with Roblox, Minecraft, a bunch of other platforms, there’s this openness, where people by default, maybe even if not at a very proficient level, are learning basic if statements, code structures, control structures, functions, at ridiculously early ages. So part of me is asking, maybe there’s an audience for a lot of the stuff around technology today.

    If you look at people who are 20, going into their 30s, there are now millions and millions of people who can at least understand the rudimentary basics of coding to understand how digital technologies work — which didn’t exist before. I actually think that this is a huge gap for publishers. I just don’t think people are realizing that actually, all these people that speak the same language could have a similar base of knowledge to build upon. So I call it generation code.

    Anna-Sofia Lesiv: It’s really interesting, because, on the one hand, when computers first came about, they were massive, so expensive to build, they literally were only used for like large scale scientific projects to replicate a sense of scale — and then Moore’s Law kicked in and now everyone has like two or more computers, literally on their person all the time. And no one really knew what to do with that. 

    There was this incredible programmer who wrote memoirs of her experience programming, Ellen Ullman. And she basically wrote, ‘The computer’s gonna enter our body, our experience. It’ll be in our veins, but we don’t have the literacy with which to use it.’ The first computers were designed with the idea of end-user programming. So if you wanted to use one, you had to program it yourself to do stuff for you.

    However, now, similarly to the programs that Ellen Ullman built for scale, programs require design thinking which assumes that the user is an idiot. You want the end user to not be able to mess up the system in any way. They have screens where all they can do is click “okay”. The system tells you how to do it; you can’t tell the system how to do it. So to the extent that there’s an identity between technology and culture, culture is a way of doing things or the habits that you form.

    What we lose when we don’t have this kind of literacy is, like: let’s say we have a daily habit or a way that we like to do things; now, it’s really difficult for people to make the computer do it that way. Instead, they have to do the task the way the computer does it. So, like, where I think this literacy could be more useful is for people to have more of a sense of control over their tools.

    Omar Rizwan: I think one of the hopes is that there’s some kind of latent demand. Like, it’s like one of those things where it’s like, why should we build a bike lane here — because nobody bikes here. Well, once you build a big bike lane, people will bike there. Or like, why should we build a bridge here? Nobody crosses this river because they don’t swim. Sometimes there’s latent demand that appears when you build the infrastructure.

    Nadia Asparouhova: I think I’ve been going in the opposite direction, where people are becoming both more and less technologically literate and that like, I don’t see us reverting back to a world where people are deeply engaged with programming their tools.

    With Roblox and Minecraft, they’re not even really like programming — technically speaking — they’re not coding, but it’s the same behavior, and I feel like the thing that we’re seeing more is everyone just instantly being able to see their whole world as programmable in some shape or form which is teaching people agency or something.

    Omar Rizwan:  I think the question is like, if you have these different systems that people are using, like Roblox, or Instagram, or like the web or whatever, like, what are the externalities of people learning to use that system? Like what other stuff do they end up being able to do — because they learn how to make webpages or learn how to use Roblox?

    I think there is like a difference of like, if you know how to use Instagram, you spend all your time on Instagram. There’s kind of like a cap on like, what does that lead you to really think? If you learn to program an Apple II, I think that like actually kind of is an introduction into some kind of broader world of programming in a way that’s useful for society.

    Danny Crichton: I mean, in this modern world, we also have all the no-code tools and all the no-code platforms. And so at some point, we have to ask, like, what is programming, right? If programming is if-then statements, control statements, functions, inputs, outputs, like, you know, if you’ve designed a dashboard with Retool that takes some data, processes it, and displays it in different ways — like that is visual programming, but it is programming.

    Like — this also counts!

    If you’re using your technology in a way that’s allowing you to do what you want to do, it’s no different than, like, if you’ve coded your own smartphone systems, right? When I walk into a room and the lights turn on, that is a form of programming. It wasn’t designed to do that. You chose to do it; you just, you know, developed a flowchart of decision making that goes into it.

    To go back to the original question, I think the ideal is a format where design is sort of in layers. It can do everything autonomously, I can do it visually, at a very simple level, I can also get into the code and get an open source layer and actually change everything and how it works. Because most of our technologies are fairly locked down somewhere in that stack, right? I can’t go into the hardware of my iPhone and change, like how it communicates to a cell tower. I also can’t change much of the software either, like it’s actually hidden.

    Omar Rizwan:  One of the things that’s interesting is like, even for stuff that’s nominally open source — this is one of the points we made at Dynamicland — has anybody read the Google Chrome source code? Has anybody changed the Google Chrome source code? I would bet that the answer is no. Like in this room, like it’s open source … you can read it. But you know, I think there’s like a practical side to that.

    Anna-Sofia Lesiv: I think what’s interesting, like, Danny, to your point, you can’t change how your device communicates with the cell tower. 

    That’s something that Omar and I were talking about, this notion of like, how easy is it to creatively destroy stuff like in the digital medium? Like, is it easier or harder than in the physical world? Some say in the digital world, you have so much more leverage, you know, like, I do one change, and it reaches a million users. On the flip side, you can just tear a building down and renew it. And if I wanted to change, let’s say, the network architecture of the Internet, or one of these, like fundamental standards that everyone now is using, like, it’s really difficult to do. And I think realizing how difficult that is, is something that most people like don’t understand, because there’s this prevailing idea that like, “Oh, you get so much leverage in the digital medium.”

    Danny Crichton: Well, one positive direction, as an example: if you look at semiconductors right now, there’s a massive push to actually open source core technologies around chip design. So for instance, as of last week, Google will now, in the next couple of years, allow anyone on the internet to basically create chips at an actual fab: they’ll print them, they’ll etch them, and they’ll mail them to you.

    Omar Rizwan: I’m waiting for mine in the mail.

    Danny Crichton: And like, chips are one of those pieces where you don’t have any control over the chip in your phone or your computer, but for the first time, like, in decades, we’re actually gonna be able to run full fab runs of custom chips, which means the x86 model could be changed. The RISC processing models could change. We might have 30 different designs and instruction sets in the future, whereas today we only have like three or four — one on mobile, two on desktop.

    Omar Rizwan: Skywater is one of the fabs that’s doing this, and then I think another one is Global Foundries.

    Danny Crichton: And they’re using a technology company called efabless. They announced it last year. But as our software is getting more and more locked down, there’s extreme concern around the hardware side of things. From the government, from national security, from a lot of people in electrical engineering. And that’s what we’re seeing with RAN and Open RAN technology: 5G is being opened up, semiconductor design is being opened up, there’s a real movement underway on the hardware side to, like, re-allow tinkering. And I think this connects to the broader right to repair movement that’s going on.

    Nadia Asparouhova: I just wonder how widespread it’s all going to become. Because the reason why I think things are getting harder to change in the digital world is because we have these complex webs of interdependencies.

    A friend recently jailbroke his Nintendo Switch or whatever, but he said, the problem is — ‘I can’t do any software updates anymore’. So even if you have your bespoke version of hardware or bespoke version of software then like, you’re missing out.

    It feels like there will be those options for people that want them, but the common user experience is still going to be like more and more these harder-to-change systems, because you’re depending on something.

    Danny Crichton: One of the other challenges is that these fabs are working with 130 nanometer designs, and our chips today are five nanometers, going towards three nanometers. So we’re talking, like, microscopic size. We’re constrained not just by software and those interfaces, but also by a much more physical constraint, which is the speed that these things have to run at. I was talking about playing with your 5G antenna. The reality is, your 5G antenna is this amazingly optimized piece of equipment that is designed to, with almost no battery usage whatsoever, communicate gigabytes to the cloud instantaneously. So you can tinker, but almost all your tinkers are gonna be worse than what’s already out there, which is why consumers struggle with this so much.

    We see the same thing with buildings and building design. It’s more expensive today to build a skyscraper than it was 100 years ago. It’s not because we’re somehow worse at building skyscrapers. It’s actually the opposite. We’re far, far better. Skyscrapers are amazingly better today than they were 100 years ago. They’re better for heating and cooling. They’re better for the movement of people, and they require fewer pillars. We’re actually better, with better materials. But that means that the costs are higher.

    And so we have these constraints where, you want to tinker, but if you actually want people to use this stuff in the real world, you’re competing, ultimately, against what else is out there, and the optimal stuff is always going to win.

    Danny Crichton: I’ve given up the idea that all of our technology will be legible. It’s just not feasible anymore.

    Every single one of our technological systems has gotten more complicated because it’s under more stress. It’s under more constraints. You know, the power grid, to go back to energy, is not that complicated, right? Like if you actually go back to the fundamentals of electricity, it doesn’t actually take much to learn. The problem is that the modern grid, which requires 100% uptime with no variance in the amount of power that’s coming out, has backups and redundancies. All that is what adds complication to the legibility of these systems. Look at your phone — just the lightning port adapter! Go into a lightning port and I think there are 32 channels: 26 are data, 6 are power. Those were actually built on other standards from USB and elsewhere. Those have compression standards that are designed so that you can fit as much data through that tube as fast as possible.

    To give a more tangible example, look at the car today versus what you could do 100 years ago. 100 years ago, in like the early 1900s, you could get a car in the mail that was built with 500 parts and a manual. Today, there are 600 microchips in a car. This is why our cars are all delayed because of supply chains. I just think if you want heated and cooled seats, with satellite radio, GPS built in, you know, an entertainment system in the dashboard, at a certain point all this adds up and you say that it’s just not legible.

    Like, there’s just no way to understand how all this connects. Now, you could zoom in, and I think in most cases there are principles. You know, there are fundamentals in most of these fields; if you were to go into, say, 5G — I’ve been to, like, the NYU 5G Center — there are like four core technologies, and if you kind of understand what’s going on with each of them, you’ll mostly understand what’s happening. But like, even that: is that legibility? Is that literacy? I don’t know.

    Anna-Sofia Lesiv:  But how does that view connect with the right to repair movement? Because if you’re repairing something, you kind of need to know how to repair it.

    Danny Crichton: I think the right to repair movement is going in the complete wrong direction. I don’t think anything is repairable. I think there’s some stuff, like a John Deere tractor, that should be, you know, ideally, more repairable. But we’re talking very small chips, like battery replacement in a phone. I don’t know if you’ve seen how a battery gets fit into an iPhone. You need specialists. It’s just not something where you’re gonna be able to do it yourself.

    Kevin Kwok: So, as a follow up, if you believe that it’s not full legibility that we should have, is there some degree that you think we should have? And if not, is there something else that you think is important, instead of legibility? 

    Anna-Sofia Lesiv: I will wager an answer to this. You know, you could say nothing is legible because it’s so complex, and we can’t see it … you could also say that, like when humans began travelling great distances, and discovered that the world was round, you know, from a first person perspective, you can’t visualize that. But eventually, we developed maps of the world. And then we learned that we’re actually in a solar system, and actually, in a galaxy, actually a universe. And we have maps of those things, even though we can’t see them with our eyes, we can kind of represent them where we have a mental model, where we can visualize the system. And I think what’s lacking is a mental model for a lot of the interfaces and interactions between different technologies, these types of things. Like I think that kind of work somehow hasn’t been done. We don’t have maps of our technological ecosystem.

    Danny Crichton: I think a huge part of it is trust. I will never understand how huge parts of my technology work. And I have a computer science background.

    I wish I could figure out all the security and, like, the crazy levels of detail that it requires for all these systems to work, but I can’t. I have to trust other specialists’ expertise going into this. You know, you look at every part of the stack. There are PhDs who do that particular field, someone who has spent 10-12 years to, like, learn Ethernet. And I’m really not exaggerating. There are parts where it’s like, at this point in the USB standard, you need a PhD in order to even get up to speed on how this technology works. And I just have to trust that! I think that’s actually one of the big tensions in technology today. You have crypto framed as, like, trustless, moving towards a model where we don’t have to trust other people, versus the fact that, in a technology sense, we actually have to trust people all the time. None of us can actually observe all of our code; nothing is verifiable. Our phones today, I think, run something like 200 million lines of code. You can’t even physically read the code that is on this phone anymore. Like, it’s just not possible. And so once we’re in a world in which you can’t read everything, it’s fundamentally going to be illegible, right? Like at some, at some scale. So maybe we need maps? Maybe we need principles?

    But is just knowing that there are four key principles to 5G enough to understand enough? Is that competent enough? I couldn’t build it. If all the technology in the world disappeared tomorrow, I’m back in a cave.

    Nadia Asparouhova: To me, what’s more important is teaching this skill of believing that you have agency over technology. That’s the big dividing line that I see between people that are afraid of technology versus excited. Because I don’t know anything about 5G or whatever, but I’m not afraid of it.

    I would wager that the vast majority of the general population just think that technology happens to them. Like when something doesn’t work, they’re just like, “Oh no, it’s like attacking me!”, right? Instead of being like, “Okay, maybe I don’t know what’s going on, but I can try to understand it.” It’s a two way conversation — I can engage with this thing, I can figure out how to get around it. That sense that you can program your world or that the world is inherently programmable — makes you feel that you have agency over the world. I feel like that is like an under-discussed character trait. Like, when I think about teaching technological literacy, that’s what I want to teach.

    Shrey Jain: Yeah, before we were saying how hard it is to program the digital world, like have autonomy over standards — but I was curious to know, what do you think about standards that people have, like, let’s say, Internet standards, or like podcast standards, or RSS, or healthcare standards like FHIR? I wonder why it still feels so constrained when in practice, that’s not the case with many early stage companies.

    Omar Rizwan: Well, for one thing — I think there’s a difference between, if you’re a big tech company, and you have the audience already, you control the demand, then you can change the standards.

    The web standard is not controlled by like the Web Consortium, it’s controlled by the browser vendors. And then like, the standard is just whatever the browser vendors want to do. I think that is true. I think that if you’re an upstart, and you’re trying to do something new, I think that the barrier to entry imposed by like, what if you want to make a new web browser, you have to implement this, like, gigantic web standard that nobody has ever succeeded in doing. Nobody’s ever written a new web browser in like, 20 years? 

    And I think there are a lot of things like that. Suppose you want to make a new computer, okay? Like, how are you gonna support network cards, how you’re gonna support graphics cards? How are you gonna do all this stuff? And you need to do that stuff to meet the expected level of functionality that the users expect. And I think that imposes extremely onerous barriers to entry for new technology.

    Danny Crichton: I think, like, Facebook moving to GraphQL from, like, JSON or something like that: you know, if you control both the app and, like, the server, and there’s an API that’s well organized, you can control both sides. It’s actually really easy to change.

    But then you look at like, COBOL installations at large banks — the mainframes that run our entire financial system, and everyone always says, like, “why is it still running off of 1960s computers” and like, this is why your balances are always pending for 24 hours.

    I used to live in Korea. You couldn’t get an ATM transaction from midnight to 3 AM every night because all the banks shut down for those three hours to actually process all the transactions, so that they would turn back on at like 3 or 4 AM.

    So at bars, you either had to pay prior to 12 or you had to wait until 3 AM to pay off your bill. Like, it’s so hard to remove some systems because they do get to such levels of complexity. 

    Like actually, there’s no one alive today who understands them anymore. I actually think that these systems are widespread, if you look at nuclear power plants, large parts of the grid.

    Have you ever seen like the signalling systems in the subway? It’s all literally like vacuum tubes, like it’s a mechanical signalling system. There’s no digital technology whatsoever. No one knows how this operates. Because it was installed by people two generations ago. And they didn’t teach their kids, and certainly not their grandkids. 

    So to me, we’re actually surrounded by systems that no one understands. In which case, you get into a Chesterton’s fence kind of principle of like, what do you do with the technology that you don’t understand? One answer is you should just rip it out because you can’t replace it anyway, or you can’t fix it … but it’s working.

    Anna-Sofia Lesiv: So I really want to get back to Nadia’s point about, basically, what is the role of the individual, and how do we make people feel that they have agency in the world again?

    I also really want to add something on to what you just said, Danny. We are surrounded by systems that we don’t understand. And in many cases, probably there’s, like, no one that understands them. And I’ve been thinking about this a lot. In particular, I was prompted to think about this by Jonathan Blow, whom Keegan actually introduced me to. So he has this thesis, essentially, that software is getting worse. Software engineers in general are getting worse, they’re getting less knowledgeable, and everything runs on software. And everything is really old software. And at some point, people won’t know how to fix old software, and old software, which controls a lot of stuff, will just, like, break or stop working, and we won’t know how to fix it.

He has this great analogy. Our civilization is, indeed, very technologically advanced — but so was the Roman civilization. When Roman civilization perished, that knowledge perished with it, and Europe entered into a period of Dark Ages.

    So, it’s happened before, you know, it could potentially happen again. And from that perspective, there really might be some cause for concern.

    Nadia Asparouhova: That’s a really good plot for a sci fi.

Anna-Sofia Lesiv: It’s interesting — the literary world is obsessed with apocalyptic scenarios. After all, if modern technology were wiped out — could we rebuild? In most cases, you can’t. You need large scales of social organization to even be able to, like, do a lot of these things. You already need the computer to …

    Omar Rizwan: make the computer. Yeah.

    Anna-Sofia Lesiv: All right, I really wanted to bring it back to this question of personal agency.

    Every process is just so interconnected or depends on something else. We just said that we can’t survive in a world without technology. We depend on technology for literally everything. 

    It does feel like we’re babies that are nannied by this technological state. So what is the role of the human in this kind of world? I think it’s a very unanswered and unclear question. So how do we encourage people to feel like they’re in control?

Danny Crichton: Has anybody read the short story by E. M. Forster, The Machine Stops?

    Nadia Asparouhova: Yes!

    Danny Crichton: It’s a really short story. If you haven’t read it, you can read it in about 15 minutes before the end of this.

    But basically, Forster is writing about a society where folks are, I believe in caves, if I recall correctly? You’re sort of enveloped in like an entirely machine interface with the world. You communicate through the machine. This was written 100 years ago. And all of a sudden, one day, the machine stops. People are forced to figure out how to survive, how to get food, how to, you know, like, what happens when the internet goes down? There’s no DoorDash. And there’s no Whole Foods delivery, like — “Oh my God, I don’t even know where to go!”

    This question never animates me as much as it does some people. I feel like I have a lot of agency, which either means I’m deluded, or I have agency.

    I don’t feel like I’m locked into my devices. As a writer, and as someone who really focuses on deep work, I can go a day without my devices. Yesterday I spent 12 hours editing a doc — I didn’t look at email. I mean, I use a computer — I could use a typewriter I suppose?

    But I don’t feel like I was ever locked into my devices. I guess I was using electricity, so there’s some level of civilizational technology I needed in order to survive, but I feel like I have a high degree of agency. 

    Now, I do know folks who are very addicted to their phones. I feel like they certainly are lacking some level of agency, because they literally just can’t, like, let go. I do look at some of these Pew studies that show like 85 or 90% of people are scrolling through social media as they fall asleep. I don’t even have my phone in my bedroom. I’ve never had my phone in my bedroom! I just find that strange. At least for me, I feel like I control the technology. I can choose when to use it. I can choose what to install on it. I am measured in how I use my attention on all these devices.

    Nadia Asparouhova: Same boat, yeah. 

To take this in a slightly different direction — I was thinking about this in relation to anti-natalism as a byproduct of social movements, where people say, “I don’t think I want to have kids because the world is just clearly going to hell in a handbasket, and climate change is gonna ruin the world — why should we bring children into this world?” And I have a really hard time relating to that sentiment, which, you know, is widespread, it’s not unusual.

    And I really can’t relate to that at all. I think it does come down to this feeling where if you believe that the world is happening to you and you have kids, then they’re just sort of the victims of whatever is happening to the environment.

    Omar Rizwan: It’s like I’m creating more subjects versus creating more agents.

    Nadia Asparouhova:  Right. Whereas, I’m excited to have kids for the exact opposite reason! I’m like, yeah — go fix climate change, go save the world! So yeah, it’s a totally different relationship.

Lisa Wehden: I just wanted to push you on that. Our technology now is going to influence the physical infrastructure at this juncture, in which we have to rebuild our physical environment, and therefore we’re going to use technology to redesign housing, redesign electricity, address climate change — there’s a lot of different opportunities here.

    As a result, I think this new segment of people are going to have to become much more technologically sophisticated. Do you feel like that is a trend in the right direction in terms of helping people regain agency over technology? As I hear you both speaking now, maybe you have technological literacy when you have agency over your devices, and that’s what this means?

    Danny Crichton: I’ll connect a couple of dots. I think the West in particular focuses on individual agency as an ability to control the system. And I think the answer is we don’t, right?

Anna-Sofia Lesiv: On that note, it is impossible to predict where things are going. If you look at chaos theory, or the butterfly effect, which gives the illustrative example of a butterfly flapping its wings and eventually causing a tornado — we have no idea, like, what action will impact the system in what way, just because it’s now such a complicated system; there are so many factors.

    Danny Crichton: So, I think we have individual agency over our devices, all your personal technology.

    But at the societal level, I have no control over the winds and the fires burning down whole parts of California, or that Europe is under drought and there’s no water.

To me, there’s an opportunity for collective agency and societal agency or team-based agency. Which is to say, I don’t know how to grow food most effectively on a piece of land, but, as a group, if we’re actually gonna continue to produce the food supply and meet the needs of another 2 billion people who are joining the planet in the next 30-40 years, we have to continue to be more productive on all land than we are even today, right? After all the optimizations over thousands of years, we still need to extract another 25% out of that while also using less land.

And I think the path is basically what (Lisa) sort of indicated, which is to really raise the bar. Going from monoculture with a single combine that’s going over all corn and picking it up, to saying you’re gonna need 8 or 10 plants read in properly, using machine learning, using vision to quickly adapt based on soil conditions and hydrology, and that’s gonna require going from a brute force, physical labor model to one where you’re using technology as an appendage to a group of people trying to sustainably manage a piece of land. And to me, there is collective action, there is collective agency, which is to say, you can raise the bar and create more complexity.

Complexity is good, by the way. That’s another piece, I don’t think we’ve actually brought it up — and I’ve been complaining about it, but, like, complexity is fundamentally the definition of civilization. It comes from creating niches for different people, different specializations. You know something I don’t, therefore we both benefit, because we’re offering both of our knowledge to the economy.

    So the only way I think we actually survive the future in this transition to new infrastructures is precisely what you’ve indicated which is, we have to join in teams, we’re going to specialize even further, and we’re gonna have to communicate, we’re gonna actually have to spend more time on it.

    Today, one of the most magical things about our civilization is that less than 1% of people are in farming, and less than 1% of people work in energy production. Right? If you think about it — the two things we need are food and fuel — less than 1% for both!

    The other 98% of us get to do everything else in this society — culture and computer science, and anything we want to do, but that initial number will go up. And in some ways, it’ll be okay, because a lot of those jobs are actually going to be much more interesting. They’re going to be a lot more engaging. They’re gonna require a lot more skill and thinking. Some of us may actually be farmers someday, and I would not be surprised.

Omar Rizwan: I mean, I think something interesting in what Lisa said is this implied relationship between literacy, or understanding, and agency.

    One of the issues that I have with (Anna-Sofia’s) initial framing of technological literacy is, is it actually our goal just to have people understand things? Like, is it our goal to just educate people in some set of knowledge or facts? Does that constitute literacy? Or do you have to be able to make something or replicate some kind of thing? Literacy in the domain of text means you can both read and write. You’re able to produce some sort of artifact, and that’s kind of the test of literacy.

    I think that’s kind of like what (Lisa) is getting at — that producing some kind of new technology is what constitutes technological literacy there.

    Anna-Sofia Lesiv: All right, I have one final question. And then I think we’ll wrap it up. 

    Okay, we are writers, we write words. Is this a dying technology? Is this the right way to communicate our ideas? When we think about encoding information for posterity, how should we think about the media?

    Nadia Asparouhova: I mean, yeah, I’m a traditionalist here. 

    I feel like people have been saying blogs are dying for a long time, they say long form is dying. There was that whole time people thought video was gonna be the next thing. I think this probably reflects my own biases, but I’m extremely text heavy and text dependent. I’m really bad at audio, I’m really bad at video.

    I write everything in like a text editor, so it’s just the way that I think. In all these years that everyone keeps proclaiming the death of long form, the death of writing, it continues to persist, it grows. I mean, with Substack, like, you know, it became a thing again. It just finds new ways to continue to live on. 

    There’s just no substitute for it. Visuals just induce a different way of thinking about a medium than the written word does. Like when you look at words, it stimulates your own imagination and stimulates you to think about things in different ways. I feel like when I watch a movie or something, it’s this passive relationship or it doesn’t stimulate my own creativity. So yeah, I’m just sort of a words maximalist. I think writing is gonna live on forever. And I reject all attempts to innovate there.

    Danny Crichton: I agree 100% with everything just said. 

I would add that I think writing is somewhat unique among the media that we see in our world today, in that it’s still very individual. You as a person can write an article — by yourself. Whereas with video, I mean, you have to have camera crews and editors and sound engineers and mixers. And suddenly that ethereal vision goes away.

I think the magic of writing even today is that the range of styles is so broad, the topics are so unique, and that comes from the last point — it’s very decentralized. I can do what I want to do, you can do what you want to do, and I don’t have to get 20 other people to agree. I don’t have to raise a million dollars to go produce a 30-40 minute documentary in order to produce at the scale that readers would be interested in.

    It’s like what my math professor used to say, “the best part about math is all you need is a pencil and a piece of paper.” That’s true as much in writing as it is in math.

    Anna-Sofia Lesiv: Great. Well, thank you all so much. And thank you everybody for coming and asking fantastic questions and participating. This was wonderful.

    talk

  • Seeing Technologically

    seeing

    Recently, I decided to learn to draw. The idea came out of the realization that I wanted to start using my hands more. Most of my work involved staring at a computer screen. It was where I read everything, wrote everything, talked to my friends, talked to my co-workers, learnt new things, where I went to “explore”, and so on. I wanted something to give me a reason to start looking closely at the real world again. Drawing seemed like something that would better engage my eyes, test my skills of perception and develop my hand-eye coordination.

In fact, one of the first things you’re supposed to practice when beginning to draw is simply observation. Most people don’t really pay attention to the details of the objects around them. Rather than identifying the tree across the street by its particular leaf shapes, the hues of its colors, the light patterns that strike it as the day progresses, most might just encode the tree in their minds as ‘tree,’ without attaching too many distinguishing characteristics.

    Those that start drawing often find they make most of their mistakes when they draw what they think they know of an object, without drawing what they actually see. Learning to observe objects anew is humbling, because it’s a process that often reveals elements we neglected to notice.

    I found becoming a better observer to be extremely rewarding. After all, I’m a big proponent of the view that “the unexamined life is not worth living.”

    However, it was curious that something about our daily activities was leading us to become less observant in general. The more I thought about it, the more it became obvious that most of our current world, especially given its digital orientation, is built on top of, and depends on, abstractions.

    From the industrial processes to the digital devices that run our social lives and economies, the inner workings of nearly all our technologies are increasingly hidden from view or invisible to the human eye. As much as living a well-examined life is a virtue, it is increasingly difficult to do. Instead, we are left with a superficial perspective on the workings of the modern world, while a gestalt view of our technological and virtual society becomes ever more onerous to achieve.

Even specialists within well-defined categories are finding it difficult to achieve a holistic view of the totality of their fields. Take, for instance, the opening passage of the book “The Elements of Computing Systems,”

    Once upon a time, every computer specialist had a gestalt understanding of how computers worked. The overall interactions among hardware, software, compilers, and the operating system were simple and transparent enough to produce a coherent picture of the computer’s operations. As modern computer technologies have become increasingly more complex, this clarity is all but lost: the most fundamental ideas and techniques in computer science—the very essence of the field—are now hidden under many layers of obscure interfaces and proprietary implementations. An inevitable consequence of this complexity has been specialization, leading to computer science curricula of many courses, each covering a single aspect of the field. We wrote this book because we felt that many computer science students are missing the forest for the trees.

    It’s not just ‘computer specialists’ that are developing this kind of myopia. It seems all of us are missing the forest for the trees in some larger sense.

    As our technologies evolve, new mental models are required to better comprehend and analyze them.

I don’t think it’s a coincidence that ‘vision’ is used in so many metaphors to convey a sense of enlightenment. From the term ‘insight’ to ‘visionary,’ it seems that the ability to picture a concept in your mind’s eye is the corollary to understanding it.

    Whenever humans have encountered new frontiers in the past, new types of perspectives and mappings were required to properly understand and assess them. As humans began traveling larger geographic distances, though a holistic picture of the Earth was not possible through the traditional first-person perspective, we eventually learned how to reflect the shapes and configurations of Earth’s land masses on maps. Later on, as we attempted to orient ourselves within the larger cosmos, we created maps of the solar system, the galaxy, and even our universe.

    As our industrialized societies and their economies became more complex, we created an entire new discipline to study and map their mechanics — the field of economics. In fact, today, an economic education and perspective is a crucial skill every responsible and agentic adult is required to have in modern society.

    In the same way that financial literacy is crucial for us to engage in today’s world, I strongly believe that technological literacy will be crucial for us to engage in the world of tomorrow.

    Developing a better mapping of the technological system in which we exist will require us to learn to see technologically — and by this I mean become better acquainted with the scientific, physical, mathematical concepts that underlie the design of our technological processes and tools.

    Irrespective of the role of our existing or emergent institutions in promoting this new form of technological literacy, each person can work individually on teaching themselves to see technologically — attempt to understand a process so well they can picture it in their mind’s eye. In the first place, developing this muscle leads to a richer perspective on life, illuminating the ingenuity of the inventions and science all around us. In the second place, this kind of perspective allows us to ask better questions and imagine new kinds of technological combinations that might not have occurred to us before.

    On a broader level, achieving this kind of technological vision is useful in the same way that all great maps or visualizations are useful — they help us achieve new insights, they help us orient ourselves in a larger system, and they arm us with the skills to better maintain or enhance the systems we have.

    A world where we are not actively developing our ability to generate better systemic models and observations is a world where our ability to arrive at insights and to orient ourselves declines.

    Training our muscles of observation and inquiry in a world that emphasizes intellectual delegation and distraction will be one of the prime challenges of our age.

    Confucius said, “What I hear, I forget. What I see, I remember. What I do, I understand.” We should heed this sentiment, keep our natural tools of perception sharpened and seek out ways to refine them further.

    PS. For those interested in developing observational and drawing skills, I highly recommend reading John Torreano’s Drawing by Seeing.

    PPS. Molly Mielke sent me this fascinating essay by Mary Gaitskill which discusses how our declining observational acuity is now evident in our literature.

  • The Role of the Writer

    monks

    A colleague of mine recently expressed his view to me that “there’s nothing new under the sun.”

It’s funny — because I totally agree. Though, what’s peculiar about both of us sharing this view is that both of us are writers, and writing is a thing you typically do when you have something new to say. I mean, what’s the point of saying anything at all if it’s already been said before? What’s the point of being a ‘writer’ when nothing you could possibly write would be original?

    These are things I believed for a long time. Increasingly, though, the emphasis on originality stopped being — original? As time went on, it became clear that the only kind of originality we were achieving was a superficial kind — some marginally interesting recombination of aesthetic influences.

    So, it’s not possible to say anything new anymore — does that mean we no longer need writers? Absolutely not.

The notion of a writer being forced to produce original ideas is a very modern concept that soon, I think, will peter out. After all, the initial function and purpose of writing was not to be generative and produce net-new ideas, but to encode and transmit the best ideas through time.

    To store wisdom, societies need scribes. Monks didn’t learn to read and write to become original writers, they did it so they could translate and copy sacred or essential texts. The more copies of a text were produced, the more likely those ideas would survive and be accessible to posterity.

    As I was thinking about the contemporary — digital — ways our knowledge is being stored for posterity — I came across this fascinating thread:

    Maintaining our information digitally through time requires lots of things to continue going right. The above thread examines a few potential ways that we might lose access to all the information we are currently storing digitally — and then what?

    How much knowledge and how many important ideas would we stand to lose? Should we turn away from storing our data on servers and find a more durable solution to warehouse our knowledge? The books that the monks hand-copied did a good job of standing the test of time, as did writings etched into stone à la the code of Hammurabi — should we turn back to these mediums as our civilizational stores of knowledge?

    I raised this question with a friend who offered a fascinating answer — the durability of the storage device isn’t what determined which information carried over into the present day. Rather, the encoding and repeated copying of information in the first place was the determinant of the knowledge that survived versus the knowledge that was forgotten or destroyed.

    In other words, the stories that will survive are those that are shared. In order to have a more durable store of knowledge and wisdom, society just needs more scribes — working to copy over and transcribe history’s best ideas.

That’s it — the very simple task and role of the writer. It’s nothing fancy. To be a good and socially useful writer, you don’t need to be — and probably shouldn’t try to be — original.

    Being a valuable writer is just a matter of taking the ideas you think are most salient and preserving them with the hopes that someone else might come across them and pick up where you left off.

    Thanks goes to Omar Rizwan and Santi Ruiz for discussions that spurred the writing of this post.

  • Lindy Structures

    beanstalk

    I’m in the midst of reading one of my favorite books of the year called Structures: Or Why Things Don’t Fall Down and I am astounded by the information I’m discovering therein.

Upon hearing the title “Structures,” one might imagine that this book is about how to properly build homes, skyscrapers, bridges — in other words, complex feats of man-made engineering. And in fact, the book does examine these things in some detail, but what truly motivates the author’s intellectual pursuit through the world of structures are the structures that quietly, modestly surround us, especially the structures that existed long before humans ever roamed the Earth.

The book was written by James Gordon, a professor and one of the founders of the field of materials science. Gordon’s philosophy and perspective on engineering are pretty nicely summarized in the following quote,

    The lilies of the field toil not, neither do they calculate, but they are probably excellent structures, and indeed Nature is generally a better engineer than man. For one thing she has more patience and, for another, her way of going about the design process is quite different.

    Learning the structure of Nature’s designs, argues Gordon, is the best education an engineer could hope for. The structures of trees, muscles, tendons are ingenious because they are as lindy as it gets. They have outlasted the oldest man-made structures by eons.

    It’s not just nature’s lindy techniques that ought to be taken more seriously. The constructions and techniques of history’s craftsmen need to be studied more intensely too. Gordon argued that modern engineers have a lot to learn from the traditional and mysterious techniques that were employed for centuries by artisans and masons.

He describes how the Greeks and Romans figured out how to build incredibly powerful bows and catapults from efficient natural materials like tendon — not because they could mathematically determine that tendon was the most effective material due to its elasticity and stiffness, but because they had developed an intuition for this kind of thing, an intuition which they passed on for generations until the classical culture vanished.

    The great structures of the medieval period, too, were not built with the help of formulas or reason, but rather by the intuition carried through tradition.

    “On the face of it it would seem obvious that the medieval masons knew a great deal about how to build churches and cathedrals, and of course they were often highly successful and superbly good at it. However, if you had had the chance to ask the Master Mason how it was really done and why the thing stood up at all, I think he might have said something like ‘The building is kept up by the hand of God — always provided that, when we built it, we duly followed the traditional rules and mysteries of our craft.’”

    “Although some of the achievements of the medieval craftsmen are impressive, the intellectual basis of their ‘rules’ and ‘mysteries’ was not very different from that of a cookery book.”

The most impressive structures, Gordon seems to say, are the lindy ones. You needn’t travel to a dense urban center populated with tall skyscrapers to find structures worthy of our attention and awe. In fact, these kinds of structures are all around us, produced with painstaking care and experimentation by Nature.

    It’s a growing disinterest in the wonders of nature’s mysteries that Gordon is pushing back against in his book. He is bored of the traditional materials used in modern construction. Limiting modern construction to only a handful of materials has constrained the scope of many contemporary engineers’ curiosity and narrowed their skill sets.

    “The use of metals, which are so conveniently tough and uniform, has taken some of the intuition and also some of the thinking out of engineering.”

    “On the whole, the introduction of steam and machinery resulted in a dilution of skills, and it also limited the range of materials in general use in ‘advanced technology’ to a few standardized rigid substances such as steel and concrete.”

    The following quote, in part, explains Gordon’s fascination with materials, which are crucial to understanding the composition of structures.

    There is no clear-cut dividing line between a material and a structure. Steel is undoubtedly a material and the Forth bridge is undoubtedly a structure, but reinforced concrete and wood and human flesh — all of which have a rather complicated constitution — may be considered as either materials or structures.

Materials are, after all, the structures that Nature built.

  • The Annals of Progress

    classics and iPhones

    The Classical Soul

    Humans did not always wish to progress. Unlike the present day, where our lives are consumed by thoughts and decisions made for a coming future, life in antiquity was entirely free from the conception of a future as such. 

    The notion of humanity evolving or mutating into an improved, elevated form was nonsense and unthinkable. For the Greeks, the following day was always the same as the present day, which itself was no better than the day preceding it.

    “The Classical culture possessed no memory, no organ of history in this special sense. The memory of the Classical man … is something different, since past and future, as arraying perspectives in the working consciousness are absent and the “pure present,” fills that life with an intensity that to us is perfectly unknown,” writes Oswald Spengler in The Decline of the West.

“To the Greeks, for example, historical events and destinies were certainly not simply meaningless — they were full of import and sense, but they were not meaningful in the sense of being directed toward an ultimate end in a transcendent purpose that comprehends the whole course of events,” writes Karl Löwith in Meaning and History.

The classical “pure present” of the ancients was predicated on a negation of time and direction. The Greeks had no use for functions or dynamics. Instead, their experience of the world relied on definite magnitudes and statics. Their system of mathematics, encapsulated by Euclidean geometry, arose out of a completely alien world-picture relative to our contemporary system of mathematics, expressed in calculus, or the infinite and imperceptible distances conveyed by the Cartesian plane.

    The Western Soul

The Romans, with their expansionary, “unspiritual, unphilosophical” attitude, “aiming relentlessly at tangible successes,” marked a stark transition away from the ancient Greeks’ spiritual concern with the pure present and, according to Spengler, brought the classical period to its close. The Roman emphasis on intellect and pragmatism over culture marked the death of the classical soul.

    The conception of money, “as an inorganic and abstract magnitude, entirely disconnected from the notion of the fruitful earth and the primitive values,” is an idea attributable to the Romans. Money as a salient metric on which one could measure the quality of life took root here. “It is possible to understand the Greeks without mentioning their economic relations; the Romans, on the other hand, can only be understood through these,” Spengler writes. 

It was this legacy of economic thinking, fused with the increasingly popular influence of Christianity, which laid the groundwork for a new kind of soul, to be born among the ashes of the fallen Roman Empire. The disintegration of Rome proved fertile ground for the spread of Christianity. “Men learn through suffering, and whom the Lord loveth he chasteneth. Thus Christianity was born in the death throes of a collapsing Hellenic society, which served as a good handmaid to the Christian religion,” writes Löwith.

    “To the Jews and Christians … history was primarily a history of salvation and, as such, the proper concern of prophets, preachers and teachers. The very existence of a philosophy of history and its quest for a meaning is due to the history of salvation; it emerged from the faith in an ultimate purpose,” he goes on. 

    The influence of this Christian metaphysic injected history with the power of a divine will, guiding the course of mankind in a particular, purposeful direction. History, therefore, became the lens through which the will of God, if studied carefully, might become legible. 

    With the onset of the Enlightenment, however, and newly unearthed frontiers of knowledge, Western man was overtaken with aspirations for the potential of the human intellect. The Christian scheme of a historical direction channeled toward a particular purpose was secularized. It was no longer the will of God steering the path of history, but human reason. 

    The essence of this secularized Christian metaphysic was best encapsulated in Goethe’s notion of the Faustian man. The story of Faust, of course, is a morality tale of a doctor of philosophy who bargains away his chance for salvation in exchange for earthly power, making the first and most famous ‘deal with the Devil.’

    The hubris and ambition of Dr. Faust was analogous to the hubris and ambition of the man of Enlightenment. “[Man] meant, not merely to plunder [Nature] of her materials, but to enslave and harness her very forces so as to multiply his own strength. This monstrous and unparalleled idea is as old as the Faustian Culture itself. Already the steam engine, the steamship, and the air machine are in the thoughts of Roger Bacon and Albertus Magnus. … Many a monk busied himself in his cell with the idea of Perpetual Motion. This last idea never thereafter let go its hold on us, for success would mean the final victory over ‘God or Nature.’ …. To build a world oneself, to be oneself God — that is the Faustian inventor’s dream, and from it has sprung all our designing and redesigning of machines to approximate as nearly as possible to the unattainable limit of perpetual motion,” wrote Spengler.

    The End of Progress

    What has mankind received in exchange for its risky, Faustian bargain? We live in an era marked by profound discontent at the failures of the Enlightenment, irreversible environmental damage caused by industrial society, and a rampant spiritual emptiness in an existence mediated by cold, unfeeling, utilitarian reasoning. The world increasingly wants an escape out of its deal with the Devil. Though what kind of a bargain or ‘deal’ should take its place is still very unclear. 

Time is still moving forward, and yet our salvation has seemingly been delayed, given that the continued economic and scientific progress we were promised has stalled.

    The meaning of this new reality, and what it portends of the world to come, is a mystery to us. Spengler did not know what would follow a world where the Faustian spirit exhausted itself. Today, there are only competing views about the direction humanity ought to take. 

The question of history’s course, along with its ultimate meaning, has once again become the crucial axis of debate. The role of the individual in the unfolding of such a grand narrative is similarly open for re-interpretation. Given that staying in the same place is so universally undesirable, it seems there are only two contending solutions to this contemporary dilemma — walk back the Faustian philosophy and resurrect a more ancient way of being, or push forward into the future, re-invest in the human intellect, and mine the frontiers of knowledge to find paths out of our present material and spiritual stagnation.

Proponents of the first approach are intellectual descendants of the critiques raised by Heidegger, who lamented the rise of an instrumentalist ontology brought about by secular, industrial society. A world where everything could be decomposed into inputs whose purpose was determined by the function they fulfilled within a larger economic or historical process was a cruel and brutal one, incompatible with the continued flourishing of humanity-as-such.

    However, Heidegger was never clear on what such a reversal in direction could actually look like — and besides, this kind of romantic nostalgia has rarely ever borne fruit. There are virtually no examples of humanity ever successfully re-adapting to bygone modes of living. Time machines remain paradoxical and confined to the realm of imagination. Furthermore, it’s entirely unclear whether the material comforts of our world could be sustained in a system other than the one we have now. Our uncomfortable realization is that we are chained to our Faustian bargain. We just have to figure out how to live with it.

    Among those urging a re-invigorated path onward, two positions are becoming clear. Secular thinkers like Tyler Cowen believe that a renewed attention to progress should be re-affirmed as a moral imperative for our times. A retrospective on the course of history shows that economic growth can compound infinitely into the future. On a purely utilitarian basis, restricting any amount of growth today taxes the opportunities and comforts available to future generations. However romantic moving away from the linear narrative of progress might seem, doing so is ultimately selfish and a waste.

Peter Thiel, who likewise wishes to accelerate scientific and real economic progress, believes we must re-inject the notion of a providential will into the course of this type of progress. It will be important to read the workings of a divine will into further advances in science and technology. And if this might not be convincing enough for some, then the argument that continued material progress is our only way out of a zero-sum world, which will inevitably plunge us into a Hobbesian or Malthusian trap, ought to seal the deal.

    Progress via Technology, not History

    For centuries, the assessment of humanity’s direction has been studied through the lens of history. Historical events have been conceived of as the agents through which the will of reason or the divine make their mark. However, as it becomes increasingly clear that outside of some cataclysm, our world is inextricably chained to the path of industrialization and technicization we have ventured upon, the power of historical events to sway our course is waning. Elections and political events are no longer vessels through which ideas take hold. Immensely more influential are the designs of the technologies that govern the constraints and comforts of our lives.

    It is time to reframe our analysis of progress and view it not through the lens of history and historical events, but through a philosophy of the technologies which we develop, adopt and integrate into our daily lives.

To this day, scientific progress and scientific discoveries are treated as though they are inevitable. It is only a matter of time before we discover the cure for cancer, just as it is only a matter of time before the next version of the iPhone is released. This approach, perpetuated by scientific writing and the institutionalization and professionalization of science, obscures the element of design, choice and ethics that is ever-present in all scientific investigations and technological models.

    As humanity deliberates on the direction it ought to steer itself, it should look increasingly on the canvas of our technologies as the vehicle through which these changes should be implemented.

    Elon writes that a new philosophy of the future is needed. To this, I would add, a more informed and enlightened philosophy of technology must take root too. 

    I would like to sincerely thank Santi Ruiz, Omar Rizwan, Cristobal Sciutto and Keegan McNamara for discussing these subjects with me at length and inspiring the writing of this post.

  • Ferrante's Philosophy

    elena

Elena Ferrante’s novels are worlds unto themselves. The greatest of her works is certainly The Neapolitan Quartet, a series of four novels which span the entire lives of two girls born in post-war Naples. Throughout the first novel, My Brilliant Friend, you feel as though you’ve peered into the narrator’s mind. All the “secret thoughts, memories and confessions” of the narrator become as natural to you as your own, and by the end of the series, in The Story of The Lost Child, you feel as though you’ve lived another life.

    Such a totalizing work of literature has been done successfully before, but it’s worth mentioning that though Marcel Proust did this in In Search of Lost Time or, more recently, Karl Ove Knausgard did it in My Struggle, their works were entirely autobiographical. Elena Ferrante succeeded at this with fiction.

    After writing that series, Ferrante said she was done — and thankfully she was lying. Within a few years, The Lying Life of Adults came out and last month, she coyly published a collection of essays called In The Margins.

A lot of people who read In The Margins have said that this was a wonderful series of essays that gives us a behind-the-scenes look at Ferrante’s inspirations, her writing process, and so on. But In The Margins is a lot more than that. It is Ferrante’s philosophical treatise.

    The Real

Many great artists wrote works articulating the philosophy behind their art. Kazimir Malevich wrote a treatise on Suprematism in his book “Black Square” — which, unfortunately, I can’t find any English translation of. Mark Rothko did the same in The Artist’s Reality: Philosophies of Art. Reading these, you get the sense that the greatest ambition and struggle of artists is the same — the challenge of conveying the real.

    Malevich, for instance, hated realism. For him, a realistic painting was merely a copy of some object or scene that exists. There is nothing of the artist added to the finished work other than their fine skills and brush strokes. You have to move beyond reality and into abstraction in order to unearth something new to the viewer and connect them with the mind of the artist in an illuminating way.

Ferrante has the same challenge. No matter how hard you try to describe an object with words, it still won’t fully capture its presence and meaning. While capturing the totality of an object is hard, it is even harder to capture the totality of experience itself.

    “The “genuine ‘real life’,” as Dostoyevsky called it, is an obsession, a torment for the writer. With greater or less ability we fabricate fictions not so that the false will seem true but to tell the most unspeakable truth with absolute faithfulness through the fiction,” she writes. This is fiction’s greatest magic trick when done well — and it is Ferrante’s speciality.

    The Perspective

    Of course what matters isn’t just the object that you’re depicting, but the position from which you’re attempting to portray it. If you’re painting a person, is it from the perspective of their profile, from behind, from a birds-eye view? The author’s relationship to the object creates the narrative with which they describe it.

    “From an aspirant to absolute realism I had become a disheartened realist, who now said to herself: I can recount “out there” only if I also recount the me who is out there along with all the rest,” Ferrante writes.

When she initially started writing, Ferrante struggled with this “I”. Writing in the detached and objective third person felt unnatural. “Even when I was around thirteen … and had the impression that my writing was good, I felt that someone was telling me what should be written and how. At times he was male but invisible. I didn’t even know if he was my age or grown up, perhaps old. … I imagined becoming male yet at the same time remaining female.”

    I imagine this is the case for many of us when we write. A voice — from somewhere — dictates to us the correct words in the correct order. The voice is distinguished and wise, so we trust it and follow its advice.

    This voice, according to Ferrante, the way it speaks, the things it finds funny, its moral compass, and so on, is composed of the swirl of stories, newspapers, films, television, songs we have consumed “almost without noticing.”

Writing down what this voice tells you is fine, but it’s imitating a style and telling us what we already know. As Malevich would have said, it’s copying. But this leads to an uncomfortable paradox. If the very way we think and speak is not our own, how are we to ever create anything original and insightful?

Ferrante uses Ingeborg Bachmann’s words to explain. “We have to work hard with the bad language that we have inherited to arrive at that language which has never yet ruled, but which rules our intuition, and which we imitate.”

    Bachmann goes on, “I believe that old images, like Mörike’s and Goethe’s, can no longer be used, that they shouldn’t be used anymore, because in our mouths they sound false. We have to find true sentences, which correspond to the condition of our conscience and to this changed world.”

This ability to take existing “bad language” and contort it effectively around our “changed world” is the mark of a great writer. The greatest writers can take a language of cliches and transform it into new cliches, new images which will in turn settle into the minds of future generations and dictate sentences to them from the backseat.

    The Point

    The potential of creating a voice that can speak through the minds of many is, while not in itself the goal of writing, its power.

    Our sense of humanity and the people that populated its past are, absent some technology like the Animus, understood almost exclusively through literary representation.

    Literature, as a reflection of this rich past, is an imperfect mirror and grows more imperfect with distance — especially if the literature is sparse, poorly written, or written by only a select few. This feeling is particularly salient when, like Ferrante, you grow up reading works written only by men and suddenly as a thirteen-year-old have an elderly man dictating essays to you from your head.

    This is why the act of writing, applying old words to the novelty of your station, is a crucial contribution to the catalogue of humanity. It is an act that is not only additive, but also generative in that it influences the perception of the present and the perspective of the future for all those who read it.

    It’s how — and why — Malevich used standard paints on a flat canvas to invent new worlds and new forms. Malevich’s style of suprematism eventually became a cliche itself, but a cliche that has stayed with us and influenced the path of art forever more.

    Ferrante opens In The Margins with a poem by Gaspara Stampa, a great Italian Renaissance poet writing in the 16th century.

    If I, a low and abject woman, may
    Bear, deep within myself, so bright a flame,
    Why should I not show the world today
    A mark of the style and vein of that same?
    If, to a place I could not seek to claim,
    Love’s new, unfamiliar spark has raised me,
    Why should it not, with fresh skill, equally,
    Give ‘pain’ and ‘pen’ in me an equal name?
    And if not by the simple force of nature
    Perchance by some miracle, that may
    Overcome, exceed, break every measure.
    How this might be I cannot truly say,
    I only know that, with my great venture,
    I feel my heart sets a new style in play.

    “If … I who feel that I am a woman to throw out, a woman without any value, am still capable of containing in myself a flame of love so sublime, why shouldn’t I have at least a little inspiration and some beautiful words to give shape to that fire and show it to the world?”

    This question is the driving question of Ferrante’s life, and ought to be the driving question of anyone capable of containing in themselves this flame.

  • Douglass North on Institutions

    douglass

    The late economist Douglass North is one of the most underrated in the field. He won the Nobel Prize, but you’ll never hear his name or work discussed in a traditional economics class. It’s a shame, because the work he produced was foundational to creating a coherent framework of how societies, power structures, and economies evolve.

    North was precocious and curious enough to triple major in political science, philosophy, and economics, yet not terribly concerned over something as cosmetic as a GPA — he graduated with a healthy C average from Berkeley. A clear bon vivant, he sailed the seas for three years as a deckhand and nearly became a photographer before ultimately opting to complete an economics PhD, a necessary step on the path to his eventual Nobel Prize. I like to imagine this wide aperture on life made North particularly able to pinpoint the paradoxes and puzzles that long went unexamined in economics.

    The greatest frustration North had with the study of economics is best summarized in his own words. “The formal economic constraints or property rights are specified and enforced by political institutions, and the literature simply takes those as a given. But economic history is overwhelmingly a story of economies that failed to produce a set of economic rules of the game (with enforcement) that induce sustained economic growth.” North’s major contribution to economics was to uncover how the principles we took for granted came to be in the first place, how societies evolved from those where kings controlled all lands and subjects to ones where all individuals could inherit and protect what was theirs.

    His work is powerful because it teaches the reader to recognize the world and its underlying structures as dynamic, untethering one from the naive belief that “things always return to normal.” North knew that normalcy could be lost, for better or worse.

    The era we inhabit is one of institutional upheaval and in this era, it’s best to be armed with North’s wisdom to understand how we got here and where we are likely to go. Below is a summary and commentary on North’s essay, “Institutions.”

    The Game Theory in Economic Interactions

A game theoretic framework is foundational in helping us understand how economies work and what behaviors can be expected from their participants. Okay — but what’s a game?

    I pass the baton to Professor Gass at the University of Maryland to explain, “A game is the set of rules that describe it. An instance of the game from beginning to end is known as a play of the game.”

    In tic-tac-toe, the rules are:

    • You play on a grid of 3 x 3 squares
    • Player 1 is X, Player 2 is O.
    • Players take turns putting their marks in empty squares.
    • The first player to get 3 marks in a row wins.
    • If all nine squares are full and no player has managed a row-of-3, the game ends in a tie.

    Professor Gass adds, “a pure strategy is an overall plan specifying moves to be taken in all eventualities that can arise in a play of the game. A game is said to have perfect information if, throughout its play, all the rules, possible choices, and past history of play by any player are known to all participants. Games like tic-tac-toe, backgammon and chess are games with perfect information and such games are solved by pure strategies. But whereas you may be able to describe all such pure strategies for tic-tac-toe, it is not possible to do so for chess, hence the latter’s age-old intrigue.”
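To make this concrete, here is a minimal sketch (my own illustration, not anything from North’s essay or Gass’s notes) that brute-forces the entire tic-tac-toe game tree, which is exactly the sense in which a perfect-information game can be “solved” by pure strategies. It confirms the familiar result that optimal play by both sides ends in a draw.

```python
# A minimal sketch: "solving" tic-tac-toe by exhaustively searching its game tree.
# The board is a tuple of 9 cells: 'X', 'O', or ' '. X moves first and tries to
# maximize the game value; O tries to minimize it.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Game value under optimal play: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    children = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (to_move,) + board[i + 1:]
            children.append(value(child, 'O' if to_move == 'X' else 'X'))
    return max(children) if to_move == 'X' else min(children)

empty = (' ',) * 9
print(value(empty, 'X'))  # 0 -> with optimal pure strategies, tic-tac-toe is a draw
```

The same exhaustive search is hopeless for chess, which is precisely the point of Gass’s remark about its age-old intrigue.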

Sequential games can be visually mapped out in trees like the one below, and we can use them to assess which strategies are optimal for players. The rules of the game below are:

    • Players take turns picking whether to go left or right.
    • Player 1 goes first, followed by Player 2, and then Player 3.
    • After Player 3’s turn, the game ends and each player receives their payoff.

    By analyzing the payoff tree below, we can deduce what strategies players should take to end up with the highest payoff.

    So for example, if we use backwards induction, we can determine the set of Player 3’s optimal strategies, which helps us determine the set of Player 2’s optimal strategies, which helps us determine Player 1’s optimal strategy — which is playing “Left” to end up at a final payout of (2,5,5). At this point, we have “solved” this game because we know that Player 1 will always go left.

    Payoff Tree
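For readers who want to see the mechanics, here is a minimal sketch of backwards induction on a three-player sequential game of this shape. Only the rules (three players choosing Left or Right in turn) and the (2, 5, 5) outcome on Player 1’s “Left” branch come from the text above; the remaining leaf payoffs are made-up placeholders, since the original payoff tree isn’t reproduced here.

```python
# Backwards induction on a hypothetical three-player sequential game.
# A node is either a leaf payoff tuple (p1, p2, p3) or a dict of {move: subtree}.
TREE = {
    "Left": {
        "Left":  {"Left": (2, 5, 5), "Right": (4, 5, 3)},
        "Right": {"Left": (3, 1, 4), "Right": (1, 2, 2)},
    },
    "Right": {
        "Left":  {"Left": (5, 3, 1), "Right": (0, 4, 6)},
        "Right": {"Left": (3, 2, 3), "Right": (4, 4, 0)},
    },
}

def solve(node, player=0):
    """Return (payoffs, moves) under optimal play, working back from the leaves.
    Each player maximizes their own component of the payoff tuple."""
    if isinstance(node, tuple):      # leaf: the play of the game has ended
        return node, []
    best = None
    for move, subtree in node.items():
        payoffs, path = solve(subtree, player + 1)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [move] + path)
    return best

payoffs, path = solve(TREE)
print(path, payoffs)  # ['Left', 'Left', 'Left'] (2, 5, 5): Player 1 always goes Left
```

Changing any leaf payoff changes the optimal strategies that fall out of the induction, which is the point the next paragraphs build on: the rules and payout structure of the game determine how its players behave.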

    Finding pure strategies in chess through backwards induction was what José Raul Capablanca was talking about when he said, “In order to improve your game, you must study the endgame before everything else. For whereas the endings can be studied and mastered by themselves, the middle game and opening must be studied in relation to the end game.”

    By looking at this game tree, we can also see that changes to either the rules or payout structure of the game will influence the optimal strategies for the players in the game.

    Institutions, North says, are just like the rules and incentive structures exhibited in these games. They are the rules that provide the incentive structure for our economy.

    Institutions are the humanly devised constraints that structure political, economic and social interaction. They consist of both informal constraints (sanctions, taboos, customs, traditions, and codes of conduct), and formal rules (constitutions, laws, property rights). Together with the standard constraints of economics they define the choice set and therefore determine transaction and production costs and hence the profitability and feasibility of engaging in economic activity.

    Why do we need rules for the economic game? Perhaps the best way to visualize this is to imagine a game of chess with no rules. If there’s no definition of what “winning” or “losing” looks like, it’s not clear why we would engage in the activity in the first place. Institutions don’t just help set the outcomes of the game, by doing so, they also help inform us about what we can expect from others playing the game alongside us. Without some set of rules we all agree on, trust and cooperation become impossible.

    Games can also be broken down into either finite or infinite games. Finite games end in a finite number of moves whereas infinite games consist of a series of repeated interactions that go on ad infinitum. Game theory teaches us that under certain types of games, cooperation is more difficult to attain.

To illustrate this, let’s suppose we have a flour merchant, A, at a local market playing a finite game, and a flour merchant, B, playing an infinite game. In both cases, the merchants’ objective is the same — to maximize profit. Since merchant A is playing a finite game, one way he can maximize his profits is by not selling flour at all, but rather bagging sand and selling it to unwitting buyers. Since he will never return to the market again, he can pocket the profits after one day of sales and end the game. In other words, the fact that merchant A is engaged in a finite game means it is easier for him to renege on his agreements with his buyers. In this finite game, lying is an optimal strategy.

    This strategy, however, wouldn’t work for merchant B who must return to the market and sell flour every day ad infinitum. If he pulled such a stunt, he would soon lose all customers, because by the second day, no one would trust him. The element of repeated play makes it much more difficult for merchant B to renege on agreements with buyers, and the costs of reneging induce him to cooperate and make good on his promises with buyers. In this infinite game, lying is a sub-optimal strategy.
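A toy calculation makes the contrast explicit. The numbers below are hypothetical (one unit of honest profit per customer per day versus a one-time windfall from selling sand), but they capture the structure of the two games: defection pays off in the one-shot game, while the loss of all future customers makes it ruinous in the repeated one.

```python
# A toy sketch (mine, not North's) of the two merchants' incentives.
HONEST_PROFIT = 1   # hypothetical profit per customer, per day, selling real flour
CHEAT_PROFIT = 3    # hypothetical profit per customer on the day sand is sold as flour
CUSTOMERS = 10

def total_profit(days, cheat_on_day=None):
    """Cumulative profit over `days` days; cheating ends the merchant's business."""
    profit = 0
    for day in range(1, days + 1):
        if cheat_on_day is not None and day > cheat_on_day:
            break                                  # reputation gone, no more buyers
        if day == cheat_on_day:
            profit += CHEAT_PROFIT * CUSTOMERS     # one big payday from selling sand
        else:
            profit += HONEST_PROFIT * CUSTOMERS
    return profit

# Merchant A plays a one-day (finite) game: cheating strictly dominates.
print(total_profit(days=1, cheat_on_day=1), ">", total_profit(days=1))      # 30 > 10

# Merchant B returns every day: over 100 days, honesty strictly dominates.
print(total_profit(days=100), ">", total_profit(days=100, cheat_on_day=1))  # 1000 > 30
```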

    This is what North means when he says, wealth-maximizing individuals will usually find it worthwhile to cooperate with other players when the play is repeated, when they possess complete information about the other player’s past performance, and when there are small numbers of players. But turn the game upside down. Cooperation is difficult to sustain when the game is not repeated (or there is an endgame), when information on the other players is lacking, and when there are large numbers of players.

    Whenever the ability to renege is high, cooperation becomes more difficult and the costs of transacting go up. When the costs of transacting exceed the profit from the interaction, economic activity stops. Institutions, therefore, are put in place to allow economic exchange to continue taking place. As North puts it, institutions and the effectiveness of enforcement (together with the technology employed) determine the cost of transacting. Effective institutions raise the benefits of cooperative solutions or the costs of defection, to use game theoretic terms.

    Evolving Economies, Evolving Institutions

The economy is a game and we’re all players in it — but who gets to set the rules? North answers this excellent question in a book he wrote alongside John Wallis and Barry Weingast called Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History. It’s an answer I’ll definitely treat in a future blog post, but to sum it up — those with the most power are the ones that set the rules. However, power shifts from group to group, from an individual to a collective and vice versa, as changes in circumstance, material wealth and technology benefit certain players over others.

Changes in power and resources create new circumstances. These circumstances lead to new incentive structures for players that change the economic game they’re playing.

North doesn’t go into all that in this essay, but he does outline how the economic “game” has changed over time, and what institutions needed to be built as the game got more complex. I’ve interspersed quotes from the essay below; they’re italicized. I’ve bucketed the main stages of these changes as follows:

    1. Autarky and small scale trade. Trade takes place in local markets. The main institutions governing exchange come from strict social customs that severely punish those who break agreements. Dense social networks ensure that trust is high. The threat of violence is a continuous force for preserving order. Costs of transacting are low.

    2. Large-scale trade. Trade graduates to a regional market connecting multiple localities. Trade happens for the first time with strangers. The size of the market grows and transaction costs increase sharply because the dense social network is replaced; hence, more resources must be devoted to measurement and enforcement. Greater information asymmetry means parties need to ensure they’re really getting what they paid for. These quality checks make transacting more expensive. North writes, In the absence of a state that enforced contracts, religious precepts usually imposed standards of conduct on the players.

    3. Long-distance trade. The growth of long distance trade poses two distinct transaction cost problems. One is a classical problem of agency, which historically was met by use of kin in long-distance trade. You wouldn’t trust a stranger to transport a shipment of goods to a distant land with limited communication. Otherwise, each day you’d be wondering if the ship would ever return, and if it did, would the profits ever be returned to you? Nepotism is an optimal strategy that arises to decrease the risks from the principal-agent problem. Here, meritocracy is a sub-optimal strategy. A second problem consisted of contract negotiation and enforcement in alien parts of the world, where there is no easily available way to achieve agreement and enforce contracts. Enforcement means not only such enforcement of agreements but also protection of the goods and services en route from pirates, brigands, and so on. More resources must be devoted to security, whether by hiring men to protect cargo, or by creating long-standing agreements with agents abroad who earn rents from agreeing to protect shipments in the future. The problems of enforcement en route were met by armed forces protecting the ship or caravan or by the payment of tolls or protection money to local coercive groups. Given that continuous trade overseas is a repeated game, defection from such agreements by “local coercive groups” would be a sub-optimal strategy, which increases the likelihood of their cooperation.

    4. The demands of production outgrow the capacity of trustworthy labor. You don’t have enough kin to handle the production, packaging and shipment of goods. To grow market share, labor must be found elsewhere and must specialize. Micromanagement cannot scale. Economies of scale result in the beginnings of hierarchical producing organizations, with full-time workers working either in a central place or in a sequential production process. The principal-agent problem rears its head again. Such societies need effective, impersonal contract enforcement, because personal ties, voluntaristic constraints, and ostracism are no longer effective as more complex and impersonal forms of exchange emerge. To prevent hired labor from stealing, for instance, impersonal institutions like courts must exist that increase the costs of acting unfaithfully.

    5. The demands of production outgrow the capacity of available capital. You’ve hired all the people you possibly can, and would hire more — but don’t have enough cash on hand to do it. Without credit, growth is impossible. To expand production, capital markets must emerge. Abstraction, in the form of non-tangible assets, enters the economic picture. Lenders may issue debt, but need strong assurances that it will be repaid. Here, lenders need assurances not only that institutions with the power to penalize delinquent borrowers exist, but also that those institutions will not rule against the lender arbitrarily, seize their assets, or otherwise act unjustly. In other words, without secure property rights, capital markets cannot emerge. Establishing a credible commitment to secure property rights over time requires either a ruler who exercises forbearance and restraint in using coercive force, or the shackling of the ruler’s power to prevent arbitrary seizure of assets. He goes on to say, the first alternative was seldom successful for very long in the face of the ubiquitous fiscal crises of rulers (largely as a consequence of repeated warfare). The latter entailed a fundamental restructuring of the polity such as occurred in England as a result of the Glorious Revolution of 1688, which resulted in parliamentary supremacy over the crown. Absolute centralized rule may be effective and decisive, but isn’t very good for capital markets long-term. Governments, which are economic agents too and must borrow money, are also playing a long-term game. Making it extremely costly for governments to seize property is one strategy to curtail the arbitrary use of power. Douglass North and Barry Weingast chronicled the emergence of property rights in England and documented how elites curtailed the power of the crown in “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.”

    6. Growing economic complexity requires specialized management. As impersonal institutions like courts grow in authority, economies of scale and ever-greater specialization become possible. In this final stage, specialization requires increasing percentages of the resources of the society to be engaged in transacting, so that the transaction sector rises to be a large percentage of gross national product. If laboring households spend all their time involved in producing only one piece of the economic pie, they will need to transact to secure every additional good or service they require to live. As transactions themselves become impersonal and ubiquitous, institutions like the financial system and banks must arise to guarantee the transactions.

    Technology vs. Institutions

    As economies grow, institutions can help set the rules of exchange and lower the barriers to transacting. That doesn’t necessarily mean that institutions are durable over time, or even all that effective! It’s very possible for institutions to solve one problem while creating another. Often, institutions fail and new ones are needed to replace them. Other times, institutions become obsolete or so rigid that they begin to stand in the way of willing parties who want to transact.

    The story of technology, which is not treated in depth in North’s analysis here but which North gives us the tools to assess, is a crucial narrative that runs alongside the evolution of institutions. On the one hand, it is effective institutions that enable innovative technologies to be produced in the first place. On the other hand, technologies themselves inevitably become institution-killers.

    Imagine, for instance, if cellphones had existed when long-distance trade emerged. If traders were able to keep better track of their shipments, they could opt to hire the best negotiators in town to shepherd their cargo, rather than their nephews. If some of us fear that governments have too much power today to seize our assets or arbitrarily change their value, we might opt to store value in decentralized digital currencies. However, in this case, we might expose ourselves to digital pirates or brigands who may succeed in robbing us of our balances. Whether new institutions or new technologies will be created to confront these challenges, we have yet to see.

    One thing is certain — the infinite game continues!