Automated updates: 2022-07-07

This commit is contained in:
John Colagioia 2022-07-07 17:04:21 -04:00
parent 8c9ebf64f8
commit 6579f46917
7 changed files with 384 additions and 84 deletions


@ -6,6 +6,7 @@ tags: [quora, programming, career, rant]
summary: It's the end of software...again
thumbnail: /blog/assets/ENIAC-function-table-at-Aberdeen.png
offset: -51%
proofed: true
---
As I've hinted at [once]({% post_url 2019-12-08-greetings %}) or [twice]({% post_url 2020-01-12-quora %}), I'm recycling some answers I have written on Quora---and some other writing---and updating them for my current line of thinking.
@ -19,19 +20,19 @@ This post is partly based on...
* [<i class="fab fa-quora"></i> Is it true that 80% of IT jobs can be replaced by automation? What does that mean for software developers?](https://www.quora.com/Is-it-true-that-80-of-IT-jobs-can-be-replaced-by-automation-What-does-that-mean-for-software-developers), which I originally answered on Sunday, March 26th, 2017.
* [<i class="fab fa-quora"></i> Will artificial intelligence ever be good enough to translate legal contracts into language people can understand?](https://www.quora.com/Will-artificial-intelligence-ever-be-good-enough-to-translate-legal-contracts-into-language-people-can-understand), which I originally answered on Saturday, July 14th, 2018.
Obviously, I have edited them substantially to fit together, and to better fit the tone and format of **Entropy Arbitrage**.
## Background
People have described programming on systems like the [Colossus computer](https://en.wikipedia.org/wiki/Colossus_computer) as something like instructing the machine to make specific calculations/counts and, after studying the results, providing the next job. Computers of the day didn't remember intermediate results, and had no means to act on them anyway.
This process included---in addition to many technicians and engineers standing around to make repairs---hand-punching paper tape loops and specifying every single instruction that the machine needed to execute to solve the problem at hand.
In some ways, the [ENIAC](https://en.wikipedia.org/wiki/ENIAC) was even stranger, basically constructed as a fixed collection of complex problem-solvers that more or less acted as single instructions in a process. Programmers---mostly women, by the way, though we've quietly ignored that detail for decades---specified the instructions by setting switches on panels, such as the one you can see in the post's header image.
## Advances
Today, though? I can write some fairly high-level descriptions of what I want---in some cases, I can even just specify the problem and trust that someone else has solved it for everybody---in an editor that catches a lot of my mistakes and suggests improvements to what I write. Frameworks generate a lot of the most boring code and figure out which libraries I'll likely need to get things to work. The compiler finishes in seconds, without me needing to know anything about the underlying computer, and I can then send the whole thing out to a dynamically scaled cluster of computers.
As technology advances, the question consistently comes up: Is *this* the end of programming? With such sophisticated tools, can't managers or even uneducated customers just use those tools directly and cut out the programmers?
@ -41,43 +42,43 @@ While by no means complete---because not everybody announces every project, and
### COBOL
The cheap shot aims at [COBOL](https://en.wikipedia.org/wiki/COBOL), the "COmmon Business-Oriented Language." While the language's designers intended to commoditize computer hardware by making it easier to move software between mainframes, you'll occasionally stumble across a contemporary (non-humor) book or article suggesting that, because COBOL's syntax looks like English, managers would soon have the ability to eliminate their developer teams and do the programming themselves, since the managers already know English, and so don't need the programmers to "translate" for them.
The industry has probably complicated that sentiment, given that most software project managers spent (or still spend) time as developers, but the same antipathy and self-hatred toward "management," as somehow incapable of programming, still exists. The situation only gets worse when executives use "wives" as the low-water mark for technical ability.
Having used COBOL on a couple of projects, I can tell you that...it's fine. I hate the idea of breaking up the program into sections---the Identification, Environment, Data, and Procedure divisions---but I started out programming in [BASIC](https://en.wikipedia.org/wiki/BASIC), so I don't worry about the occasional `GOTO` in my code when I see no other option. And COBOL feels fairly similar to other period languages, once you get used to the syntax.
### Diagrams
My kindergarten teacher taught us to read and draw [flowcharts](https://en.wikipedia.org/wiki/Flowchart), bizarre as that might sound. I never learned *why* anybody put it in the curriculum, and I've never thought to ask students from other classes whether they had the same experience, on the rare occasions that I run across someone from those days---since I generally prefer to *not* sound like I might have a nervous breakdown when I get excited to see someone---but we also wired up telephones to a couple of car batteries for one lecture. I know that nobody designated it an "advanced" kindergarten class (we didn't have those, back then, to my knowledge), and we didn't live in a wealthy school district. Maybe I just had a great teacher whom I didn't appreciate enough.
Digression: For those of you from younger cohorts, or less technically inclined, the phone company previously powered what we now call "landlines" over the phone cables themselves, separate from the electric service. If the phone had 48 volts, you could place a call. In most cases, this meant that neighborhoods or houses without electricity still had phone service. It also meant that, in certain cases, if the phone company's local office also lost power, you could *supply* power to the phone---though the company wouldn't appreciate your ingenuity if they found out---and could sometimes connect to someone if the local switch still worked. You could also use the electricity from the phone line to power small devices, and pull other tricks that gave rise to the [phreaking](https://en.wikipedia.org/wiki/Phreaking) community. In essence, my kindergarten teacher showed us how to build a basic intercom system, by connecting the phones together with power.
Anyway, you can probably see where this is going. Because a flowchart has no fussy syntax rules---well, OK, it actually *does*, but you don't type them---even a child can do it, and therefore we wouldn't really need any programmers, if only we had the technology to draw flowcharts on the screen.
You know what I love (in an ironic way) about this industry, though? "Any fool can write code with flowcharts" obviously flopped, and today, most developers I know would laugh if you started drawing a flowchart to explain something. But [Blockly](https://developers.google.com/blockly) and similar systems in the last decade or so, basically just expensive versions of the same idea, still show up regularly as brilliant new ideas. Nothing in the software industry ever goes away permanently...
In between flowcharts and Blockly, we should also mention [UML](https://en.wikipedia.org/wiki/Unified_Modeling_Language), which...it's a flowchart for data, instead of code. Let's leave it at that, because we collectively spent about ten years during which every office where I worked announced plans to require every task to include submitting an updated UML diagram, but never actually followed through. If you remember those days, that probably feels like more than enough talk about UML for you, too.
### Computer Aided Software Engineering
This feels like a funny category, to me, because parts of the idea exploded and changed the industry for the better, whereas others never manifested. [Computer-Aided Software Engineering](https://en.wikipedia.org/wiki/Computer-aided_software_engineering) suggested that we could integrate important tools so tightly into the development process that developers could arrive far less experienced and far less talented---and cheaper, of course---because the tools would catch issues before anybody else saw them.
In some cases, this probably worked out. For example, [distributed revision control](https://en.wikipedia.org/wiki/Distributed_version_control) means that developers can keep track of their work in the public repository, *without* contaminating anybody else's work, until the time comes to merge the pieces together. Automatic code analysis---"linting"---has saved everybody the embarrassment of revealing certain kinds of mistakes.
Other cases never worked well enough for anybody to release them. I remember seeing demonstrations that would generate code from diagrams---like from the last section---but that never manifested. Several people at the time also told me that multiple companies stood just months away from releasing [penetration testing](https://en.wikipedia.org/wiki/Penetration_test) systems that could run fully automated, able to assimilate results and automatically generate new hypotheses about flaws, with no human intervention. But decades later, penetration testing still requires manual, expensive labor.
And then we had cases where we technically got what developers promised, but the promises proved overblown. For example, we built computer science education, for a *long* time, around [proving program correctness](https://en.wikipedia.org/wiki/Formal_verification); some classes in my college taught programming with tedious documentation requirements about the expectations of various parts of the program, not for human readability, but for the benefit of a hypothetical code verification system. To my knowledge, that still remains the domain of hiring teams of mathematicians to review critical code, rather than something that companies do to save time. Similarly, the [CORBA](https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture) advocates promised a future where we---*anyone*---would have the ability to stitch together features that we wanted from many vendors across the Internet, only needing to write genuinely original code, but we never got anything like the promised flexibility or ubiquity.
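To give a flavor of those correctness-documentation requirements, here's a minimal sketch of the style, reconstructed with modern Rust assertions rather than the actual comment conventions from my classes:
```rust
/// Precondition: `values` contains at least one element.
/// Postcondition: the return value appears in `values`, and no
/// element of `values` exceeds it.
fn maximum(values: &[i32]) -> i32 {
    debug_assert!(!values.is_empty(), "precondition violated");
    let mut best = values[0];
    for &value in &values[1..] {
        if value > best {
            best = value;
        }
    }
    debug_assert!(values.contains(&best), "postcondition violated");
    best
}

fn main() {
    println!("{}", maximum(&[3, 1, 4, 1, 5])); // prints 5
}
```
The hypothetical verification system would consume those pre- and postconditions and prove that the body satisfies them; in practice, humans ended up doing that work.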
### No-Code or Low-Code
Going back at least as far as [FileMaker Pro](https://en.wikipedia.org/wiki/FileMaker), we have had systems meant to take the "coding" out of programming, hoping to make software development accessible to average people. Generally, the product gives you a database with an example-based approach to requesting data, a form builder, and the ability to trigger events when certain things happen.
The general idea suggests that, as long as the application that you want happens to look like what the developers planned for, you probably won't have much work to do. In reality, while these systems have their advocates---I had a manager who still used FileMaker as late as 2001 for internal tools---the most important skills in using them involve understanding what the developers meant the tool to do, and trying not to exceed those boundaries.
### Artificial Intelligence
Things stayed quiet on this front, for a while, but then neural networks came knocking on the door. With the technique now rebranded as "machine learning," we pretend that we've cracked artificial intelligence, even though it hasn't changed since long before I took a neural networks class in 1995.
Because machine learning has---and I'll get into this more in a bit---largely replicated every other technique we use for spoofing artificial intelligence, it has become the latest technology to claim to cut down the software development workforce, especially when it comes from one of the Big Five software companies---Amazon, Apple, Facebook, Google, and Microsoft---and so our latest contender enters the ring.
@ -85,73 +86,77 @@ Because machine learning systems have---and I'll get into this more in a bit---l
Microsoft has now released [GitHub Copilot](https://en.wikipedia.org/wiki/GitHub_Copilot), a machine-learning tool to generate code to supplement what a human developer has written.
I should note that I haven't used it, yet. I put myself on the wait list and will probably write a post about the experience when I do. But before I get there, the product seems...problematic.
**Update, 2021-11-03**: Since writing this post, I have used Copilot and [written about my experiences]({% post_url 2021-11-03-copilot2 %}) with it. You might already know that from a link at the top of the post.
**Update, 2022-07-05**: GitHub now charges for Copilot access, and Amazon has released their similar *CodeWhisperer* offering. The two announcements have led various software rights groups to [recommend a boycott](https://sfconservancy.org/blog/2022/jun/30/give-up-github-launch/).
## The Machine Learning Twist
The big issue: as much as Microsoft/GitHub wants to call Copilot a "code synthesizer, not a search engine," the company still admits otherwise, with multiple variations on the following quote.
> ...the suggestion may contain some snippets that are verbatim from the training set.
This makes sense, of course. From the original press for [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo) playing Go, it has seemed obvious that machine learning generally just replicates a combination of [Markov processes](https://en.wikipedia.org/wiki/Markov_chain) and [alpha-beta pruning](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning), even though it quickly becomes difficult to inspect those intermediate representations if nobody made them explicit parts of the output. That means that the software doesn't "synthesize" anything. It *regurgitates* code in various-sized pieces, in such a way that the pieces all fit together seamlessly.
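For readers who haven't met it, alpha-beta pruning amounts to a minimax search that skips branches a rational opponent would never allow. Here's a minimal sketch in Rust, my own illustration of the textbook algorithm rather than anything reverse-engineered from AlphaGo or Copilot:
```rust
// A toy game tree: leaves hold scores from the maximizing player's view.
enum Node {
    Leaf(i32),
    Branch(Vec<Node>),
}

fn alpha_beta(node: &Node, mut alpha: i32, mut beta: i32, maximizing: bool) -> i32 {
    match node {
        Node::Leaf(score) => *score,
        Node::Branch(children) => {
            let mut best = if maximizing { i32::MIN } else { i32::MAX };
            for child in children {
                let score = alpha_beta(child, alpha, beta, !maximizing);
                if maximizing {
                    best = best.max(score);
                    alpha = alpha.max(best);
                } else {
                    best = best.min(score);
                    beta = beta.min(best);
                }
                if beta <= alpha {
                    break; // Prune: the opponent would never allow this line.
                }
            }
            best
        }
    }
}

fn main() {
    // The maximizer picks the left branch, worth 3; the 9 never gets visited.
    let tree = Node::Branch(vec![
        Node::Branch(vec![Node::Leaf(3), Node::Leaf(5)]),
        Node::Branch(vec![Node::Leaf(2), Node::Leaf(9)]),
    ]);
    println!("{}", alpha_beta(&tree, i32::MIN, i32::MAX, true));
}
```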
I would personally dare to guess that a normal "linter"---mentioned with the [CASE tools](#computer-aided-software-engineering)---then filters the candidates, to make sure that none of the suggestions look like complete garbage. If you follow any actual experts in making neural networks do interesting things---people like author [Janelle Shane](https://aiweirdness.com/aboutme), for example, rather than someone who wants to sell systems---you know that every list of fun things or great story that a neural network generated involved producing hundreds or thousands of candidates, then manually picking the best; code has the advantage of a rigorous grammar, but will still need filtering.
In any case, the regurgitation creates a problem, because we don't know what projects the Copilot team has chosen for their training set. If the creators posted any of it without a public license, using results that include those components would produce a derived work, and that would qualify as piracy. If any of the pieces come from code released under the terms of the [GPL](https://en.wikipedia.org/wiki/GNU_General_Public_License), then you shouldn't use them in a project that you *didn't* release under the GPL. Even if they released all the training data under the terms of the [MIT license](https://en.wikipedia.org/wiki/MIT_License), that license has terms to comply with, too. That doesn't even get into whether the training set includes code created in a jurisdiction with "moral rights" on works.
In other words, Microsoft and GitHub might want to convince everyone otherwise, but their project stands a *fairly* good chance of inciting people to violate copyright law. And I'd bet that the licensing agreement to use the service doesn't indemnify the developers who use Copilot, as a company might do if they could guarantee that they didn't feed you protected code. More likely, the license disclaims all responsibility for vetting what it recommends to you.
If you think that seems like an overreaction, consider an obvious question: Have they ever mentioned training the model with their *own* code? Do you see some tiny chance that Copilot will suggest code copied verbatim from Windows, Office, Azure, or GitHub? If not, why not, if Copilot doesn't create derived works and any licensing problems come from over-active imaginations...?
Keep in mind that, even if Copilot manages to somehow become the tool that magically understands human needs---ha!---if it does so through piracy, lawyers charge significantly more per hour than software engineers do.
Regardless, let's assume that everything with Copilot operates aboveboard. We still have reasons to assume that it won't reduce the need for developers.
## Compilers Automate, Too
In the simplest interpretation, Copilot acts as a compiler. It looks at code that the developer wrote, and produces an error message, offering corrections.
It acts as a *fancy* compiler, sure, but if we strip away all the conceptual frills, it takes input code and outputs different code and some warnings. And improved compilers have only *increased* the market for programmers. Good tools make programmers more cost-effective, meaning that smaller budgets can afford to get things done.
Even in just the twenty-plus years that I've programmed professionally---to say nothing of the years that I spent as a hobbyist or student---the difference between how I worked then and how I work today looks like the difference between night and day. Between the time I start working on code and the moment the end-user gets access to the software, it would *shock* me if even ten percent of the finished product actually came from my own efforts. The amount of automation seems staggering, when you actually stop to look at it, and it will only get better.
What might this mean? Probably more opportunity, if history serves as any indication. As the automated side of the job gains functionality, people have more ideas for what we can do with it. Not long ago, we never would have considered sending telephone calls and films across computer networks, but today, that has become the default.
However, the jobs will change, like they always do. Years ago, I used to work with people who had titles like "Build Engineer," people whose actual job centered on configuring the compilers and generating the installation packages correctly. Today, we mostly see that as a solved problem, and developers almost exclusively work on translating user requirements into solutions. Those who haven't, have further specialized into "[DevOps](https://en.wikipedia.org/wiki/DevOps) engineers." That trend will certainly continue.
Regardless of that trend, programming doesn't seem likely to go away. No matter the technology---COBOL, CASE tools, Copilot, whatever---promises that we'll have "software without programming" always fall short, because someone needs to bridge the gap between what people say and the sort of specific, unambiguous explanation that maximizes the chances that people get what they want. And that person remains a programmer, even if the job ends up looking like talking conversationally with some sort of artificial intelligence...which it won't.
## Writing Code versus Programming
For at least the thirty-ish years that I've paid attention to the software industry, but really going back as far as [COBOL](https://en.wikipedia.org/wiki/COBOL), companies have continuously promised a tool *just* over the horizon that will make it trivial for normal people to write code. Every one of them has achieved that goal and still failed, because **writing code isn't the hard part of programming**. Turning ideas into an unambiguous specification makes the job hard, and that requires communication, not code.
I'll repeat that, because it has [made the rounds on Twitter <i class="fab fa-twitter"></i>](https://twitter.com/dijkstradev/status/1412078831062503425), so maybe I stumbled on something useful.
> Writing code isn't the hard part of programming. Turning ideas into an unambiguous specification makes the job hard, and that requires communication, not code.
Does that clarify why those prior systems haven't reduced the number of programming jobs? They all merely simplify writing code, the easiest part of the job, the part that *doesn't* help programmers grow.
## Been There, Done That
Again, I haven't looked at Copilot specifically, but I feel tempted to add a fourth item, something I've already alluded to.
> Almost every OMG-we-finally-cracked-AI product just generates [Markov chains](https://en.wikipedia.org/wiki/Markov_chain), but without the ability to inspect and debug the generation process.
A Markov process basically acts like a special kind of search engine, one that asks: Given the most recent inputs, what *next* input seems most likely? Therefore, while Microsoft may say, correctly, that they didn't *write* a search engine, the odds that they didn't *produce* one strike me as low.
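To make that concrete, here's a minimal sketch of the idea in Rust, a toy word-pair model of my own rather than a claim about Copilot's internals: train by counting which word follows which, then "search" for the likeliest successor.
```rust
use std::collections::HashMap;

// First-order Markov model: for each word, count the words that follow it.
fn train(text: &str) -> HashMap<&str, HashMap<&str, u32>> {
    let words: Vec<&str> = text.split_whitespace().collect();
    let mut model: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
    for pair in words.windows(2) {
        *model.entry(pair[0]).or_default().entry(pair[1]).or_insert(0) += 1;
    }
    model
}

// The "search engine" part: given the most recent input, return the
// most likely next input.
fn most_likely_next<'a>(
    model: &HashMap<&'a str, HashMap<&'a str, u32>>,
    word: &str,
) -> Option<&'a str> {
    model
        .get(word)?
        .iter()
        .max_by_key(|(_, count)| **count)
        .map(|(next, _)| *next)
}

fn main() {
    let model = train("the cat sat on the mat and the cat slept");
    println!("{:?}", most_likely_next(&model, "the")); // Some("cat")
}
```
Copilot obviously conditions on far more context than one word, but the "find the likeliest continuation of what you just saw" shape stays the same.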
I wouldn't necessarily call that a bad thing. Modern development absolutely has room for an editor or compiler that---for example---queues up relevant questions from [Stack Overflow](https://stackoverflow.com/), so that the developer doesn't need to hunt for them. But such a feature changes the speed of work, not the kind of work.
## Making It Perfect
Let me propose a maybe-controversial assertion: The only way that artificial intelligence---or any product---might displace human software developers would require us to cede our autonomy to computers, in some dystopian future.
To clarify, maybe computers can---one day---produce all the specifics of code. But we only write software in response to a human need, and someone needs to communicate that need and what it means to a computer. Can the person with the need explain it directly?
Possibly, but then we'll all become programmers, because that explanation *constitutes* the program. Again, this strikes at the problem with these "even your manager will have the ability to develop applications" products. They remove the typing from programming to bridge the gap from one direction, sure. But they never bother to teach people how to think about software or how to solve problems, so they never bridge the gap from the other direction.
Like I said, though, I can see one possible exception. If we feel willing to live in a world where software monitors everything we do, and other software automatically gets written and rewritten to squeeze more productivity out of us based on some predefined metric, then that monitoring system could stand as the last program a person ever writes. At that point, software no longer gets written to serve human needs; we all just watch our [pomodoro clocks](https://en.wikipedia.org/wiki/Pomodoro_Technique) and shuffle tickets between swim-lanes to please our robot overlords 🤖.
However, I don't see the utility in removing all discretion and self-direction from human beings. By and large, the historical societies that have attempted to do that have tended to fare poorly. In fact, I may [write]({% post_url 2020-09-06-enlight %}) [about]({% post_url 2020-11-15-love-leave %}) [this]({% post_url 2021-05-30-winning %}) topic, occasionally.
The industry will *shift*, and it may shift dramatically, but it won't go away unless we all decide to work for computers rather than have them doing work for us.
@ -159,19 +164,21 @@ The industry will *shift*, and it may shift dramatically, but it won't go away u
A question related to AI replacing programmers revolves around the idea of AI replacing lawyers, especially when dealing in contract law.
And this has a similar answer, too. People *already* write legal contracts in a language that non-lawyers understand, after all, generally the native language of the people involved in the contract. The contracts grow long and complex to remove ambiguity---just like computer programs---and to prevent the relationship from falling apart due to a misunderstanding. They become complex, but they do not become (assuming nobody wants to commit fraud) complicated. The complexity exists to handle some nuance that might become important.
Therefore, while the answer sounds *technically* like "yes"---software could act as your contract lawyer, in that it can just show you the contract---the answer that a reasonable person would *want* remains "no," because any simplification of the text reintroduces ambiguity and potentially changes the meaning of the contract. You wouldn't actually understand the document that you sign, just an interpretation that overlooks a variety of details.
Robot lawyers, however, bring an entirely new problem: *bias* in the relationship. With code, we might not---though we should---care whether the examples used to train our AI to sort names came from a person with no empathy. A list either presents itself in alphabetical order or it doesn't, after all. But when we start talking about the law, medicine, or any other field that revolves around deciding people's fates, we have plenty of examples where machine learning has learned to treat Black people worse than everybody else, or to assume that wealthy people should get what they want, because we have a long history of *courts* making those biased decisions to draw from. In those fields, handing the decisions off to neural networks equates to depriving people of autonomy, which I covered ☝ back up there.
## Preliminary Copilot Verdict
Since this post exists nominally to talk about GitHub Copilot and---again---I haven't actually used it, I still feel comfortable in saying that it probably won't change the world.
Given the issues, I want to give it a try for personal projects---where licensing won't become an issue, since I release most things under GPL-variants, anyway---and might consider it for other open source projects. But it won't save anybody any more money than UML saved companies in the '90s.
If I get to the head of the waitlist, and it feels life-changing, though, you'll all hear about it first...
**Update, 2021-11-03**: As mentioned above, since writing this post, I have used Copilot and [written about my experiences]({% post_url 2021-11-03-copilot2 %}) with it.
#### <i class="fab fa-quora"></i>


@ -14,25 +14,29 @@ proofed: true
![A copilot](/blog/assets/7581301810_2fb22999a8_o.png "Bleep bloop?")
Months after writing about [overblown worries about developer tools]({% post_url 2021-07-18-copilot %}), I have officially become a GitHub Copilot user. I promised that I'd write about it when I had some hands-on time with the system, and this post delivers on that promise.
**Update, 2022-07-05**: I should mention that GitHub now charges for Copilot access, and Amazon has apparently launched a similar CodeWhisperer service. This has led to [calls for a boycott](https://sfconservancy.org/blog/2022/jun/30/give-up-github-launch/), which don't quite ring true to me. After all, these organizations have spent decades trying to explain that people can charge for Free Software, but now it makes them angry that GitHub will charge for Free Software. And they stayed silent (as far as I can remember) while people worried about the copyright implications, which I get to in this post.
Long story short, I continue to not believe that this product will end the world.
## Setup
We got off to an inauspicious start, with an activation process that they didn't actually rig up correctly. The authentication code needs manual entry.
It also seems computationally- or memory-intensive, for some reason, crashing Visual Studio Code---and nearly my laptop---twice in an hour, just because I asked for a sort routine.
Worse, the suggestions seem mediocre at best, often throwing in bogus code or acting indecisive about whether a variable needs to stay constant or not. Microsoft calls the product "Your AI pair programmer," and that feels somewhat accurate, if you assume that you have the role of the *senior* developer in the relationship, the one who shows up to improve their partner, rather than the one undergoing improvement. Copilot acts more like the junior partner, who wants to prove that they can come up with something---anything---but then relies on the team to figure out what has gone wrong and how to solve the problem.
## Copilot, Make a Sundial
Consider my first attempt, which took the marketing mostly at its word. I had the idea of a "clock" that would show a sundial shadow, so I provided the comment, "Return the position of the Sun in the sky, based on the date and time." It tried to auto-fill my *comment* to suggest a date and time of *birth*, because I guess Copilot has gotten into astrology, but I instead gave it an empty function to work with.
Now, I imagined this as a straightforward task, worthy of an AI pair programmer. The Sun's position sits at the center of determining sunrise and sunset (critical to many fields), orienting solar energy collectors throughout the day, and other tasks. Plus, it really just takes some trigonometry: The time of day gives the angle, and the date gives the inclination of the circle.
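For reference, the trigonometry I had in mind looks roughly like the following sketch, my own simplified version (in Rust for illustration), assuming a circular orbit and ignoring refraction, the equation of time, and longitude:
```rust
// Return (altitude, azimuth) of the Sun in degrees, from day-of-year,
// local solar hour, and latitude. A rough approximation, not an ephemeris.
fn sun_position(day_of_year: f64, hour: f64, latitude_deg: f64) -> (f64, f64) {
    let deg = std::f64::consts::PI / 180.0;
    // The date gives the inclination: the Sun's declination for the day.
    let declination = -23.44 * deg * ((360.0 / 365.0) * (day_of_year + 10.0) * deg).cos();
    // The time of day gives the angle: hours from solar noon, at 15° per hour.
    let hour_angle = (hour - 12.0) * 15.0 * deg;
    let lat = latitude_deg * deg;
    // Altitude above the horizon.
    let altitude = (lat.sin() * declination.sin()
        + lat.cos() * declination.cos() * hour_angle.cos())
    .asin();
    // Azimuth, measured clockwise from north.
    let azimuth = hour_angle
        .sin()
        .atan2(hour_angle.cos() * lat.sin() - declination.tan() * lat.cos());
    (altitude / deg, azimuth / deg + 180.0)
}

fn main() {
    // Noon on the June solstice at 40°N: high in the sky, due south.
    let (alt, az) = sun_position(172.0, 12.0, 40.0);
    println!("altitude {:.1}°, azimuth {:.1}°", alt, az);
}
```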
I tried turning the tables on it, then, giving *it* the empty function, to see if it could work out the trigonometry.
This would have become a JavaScript project, since if it worked, I could add the graphics code---or let Copilot continue---and embed the result in this page.
Instead, Copilot gave me...this.
@ -55,15 +59,15 @@ sunPosition(date) {
}
```
First, you'll probably notice that we only have initialized variables, with no code that does anything. Second, I certainly appreciate the [Julian Day](https://en.wikipedia.org/wiki/Julian_day) as a great and interesting concept, but it doesn't carry any real meaning in determining the position of the Sun. Then, we have a bunch of centuries that similarly prove useless. And...I feel pretty sure that it also calculated them incorrectly, because centuries don't have exponentially growing numbers of days in them.
The final line also cuts out in the middle of a *number*, presumably just to insult me.
## Copilot, Download a Book
Let's try another example, now in Ruby. This time, I wanted to download a random book from [Project Gutenberg](https://gutenberg.org), but Copilot had a *better* idea: that we download a specified book from the [Internet Archive](https://archive.org). By "better," I mean that Copilot tried to auto-fill that idea in the comment---again, it decided that I needed help writing comments, based on no code---and I just wanted to mess around, so I figured that I should let it play. Surely, if it came up with the idea, it must stand ready to execute on the idea...
Check out what it came up with, to execute its own "plan."
```ruby
# Download a book from the Internet Archive
@ -73,11 +77,11 @@ def download_from_ia(book_id, filename)
end
```
I feel somewhat less than impressed that it abstracts nothing about the Internet Archive's organization, and just calls [cUrl](https://curl.se/) on the majority of a URL.
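For contrast, even keeping the shell-out to curl, the abstraction I wanted might look like this sketch, my own hypothetical Rust, assuming the Archive's usual `https://archive.org/download/<identifier>/<filename>` layout:
```rust
use std::process::Command;

// The Internet Archive's URL organization lives in exactly one place.
fn ia_download_url(book_id: &str, filename: &str) -> String {
    format!("https://archive.org/download/{}/{}", book_id, filename)
}

// Still shells out to curl, like Copilot's version, but behind a seam
// that survives the Archive reorganizing its URLs.
fn download_from_ia(book_id: &str, filename: &str) -> std::io::Result<bool> {
    let url = ia_download_url(book_id, filename);
    let status = Command::new("curl")
        .args(["-L", "-o", filename, url.as_str()])
        .status()?;
    Ok(status.success())
}

fn main() -> std::io::Result<()> {
    // Hypothetical identifier, purely for illustration.
    let ok = download_from_ia("some-book-id", "book.pdf")?;
    println!("downloaded: {}", ok);
    Ok(())
}
```
Trivial code, but that represents the point of abstraction: when the URL scheme changes, one function changes.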
## Copilot, Open a Desktop Window
Maybe, I considered, the problem came from my not thinking big enough. Let's try graphical programming in Go. Again, if it had a suggestion for my comment, I opted to let Copilot take the lead, but I started with "open a window."
```go
package main;
@ -153,11 +157,11 @@ func vec3(x float32, y float32, z float32) vec3 {
}
```
I had to convince it to create the functions that it called, and I have no idea what it assumed that any of them would do. We have at least one infinite recursive loop in `clear_color()`, and multiple functions that don't actually do anything. None of those functions produces a window or the spinning cube **that it offered**.
### Copilot, Just Follow Directions
What if I get more specific, as in writing the worst possible comment that discusses the technology instead of the result?
```go
package main;
@ -166,11 +170,11 @@ package main;
var window = app.NewWindow("Hello Fyne!");
```
That...almost seems better. I mean, obviously, it just lifted code straight out of the [Fyne tutorials](https://developer.fyne.io/tour/introduction/), but this at least has some non-zero chance of working.
## Copilot, Find Files
It seemed possible that my problem came from working too abstractly, imagining toy problems instead of putting Copilot to work, as if this served as a job interview. What if I built an example based on a real comment in a real project that I currently work on? [**INTERN**](https://github.com/jcolag/intern) has a function that visits every file in a folder, so that seems concrete, and I have working code to compare it against.
```rust
// Iterate through files in the specified folder.
fn process_folder(path: &str) {
    // (body elided from this view of the diff)
}
```
I had to provide the comment and the function signature, but it at least gave a decent shot at solving the problem. This will, in fact, iterate through each file in the folder.
I'd prefer it, of course, if it carried at least some acknowledgement that `read_dir()` might fail if `path` doesn't exist. And I have no idea why it decided to sort the list, especially since sorting becomes computationally expensive on large lists. But despite those issues, after writing an appropriate `process_file()` function, this code will work as requested. Considering that I didn't ask it to do anything with the files, that seems fair.
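For contrast, here's a sketch of the acknowledgement that I'd want, not INTERN's actual code, with a stub `process_file()` standing in for the real per-file work: the possible failures from `read_dir()` and from each entry get surfaced instead of ignored, and nothing gets sorted.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Stand-in for whatever work the real program would do per file.
fn process_file(path: &Path) {
    println!("{}", path.display());
}

// Iterate through files in the specified folder, surfacing the error
// when `path` doesn't exist instead of crashing, and skipping the
// pointless sort.
fn process_folder(path: &str) -> io::Result<()> {
    for entry in fs::read_dir(path)? {
        let entry = entry?; // each entry can fail independently
        if entry.file_type()?.is_file() {
            process_file(&entry.path());
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    process_folder(".")
}
```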
## Copilot, Consume an API
```c#
string GetWebData(string url)
{
    // (body elided from this view of the diff)
}
```
It seemed to do *so well*, for a while, until it just...gave up on actually making the request. Time for AI-lunch, I guess.
## Copilot, Violate a License for Me
For one test---since I raised the possibility that Copilot copies code under reciprocal licenses without notifying the developer of their responsibilities---I opted to run the most direct experiment possible. I started writing the following (C#, but that probably doesn't matter) comment.
```c#
/*
 * (contents elided from this view of the diff)
 */
```
In chunks of about eighty characters per line, Copilot regurgitated half the preamble to the [GNU Affero General Public License](https://www.gnu.org/licenses/agpl-3.0.html) before I got bored.
The fact that it clearly has the license in its training data and can quickly reproduce it tells us that it does the same for code.
A lot of these results raise an interesting question that I don't have an answer to: Why does GitHub Copilot think that it should get into the business of suggesting comments? Does someone at Microsoft really believe that machine learning can figure out how to explain what I want better than I can?
## Not the Best Candidate...
Unfortunately, these do *not* represent cherry-picked results. Even when I handed Copilot a complete program and described the results of an addition that I wanted in detail, it generally produced the same sorts of results, generating fragile code that accomplishes either nothing or some trivial unrelated task.
In one extreme case that I didn't bother to copy, Copilot locked itself into some sort of death-spiral, where I'd accept its recommendation to calculate something over multiple lines, to see what happened next, only for it to repeat the same calculation every time I accepted it.
The exceptions to this general rule came when I provided a comment similar to a homework or exam question for a first-year programming class. For example, if I ask the system---by writing a comment---to write code to sort an array of strings, *and* tell it the sort order, *and* suggest a sorting algorithm, it'll faithfully reproduce the requested algorithm. It doesn't seem much more capable than that, though.
In other cases, I managed to provoke Copilot to write superficially plausible code, by writing comments requesting the answers to "brain-teaser"-type questions sometimes asked in job interviews, like calculating the number of ping-pong balls required to fill a room or the number of McDonald's restaurants likely to appear in a given city. In both those cases, it wrote code that would produce an answer that might look close enough to convince the interviewer to move on, but had major inaccuracies. For example, it guessed at the size of a ball---0.75 of whatever units it imagined it needed to use, which doesn't match up in metric *or* imperial units, by the way---calculated the volume, divided the room's volume by the ball's volume, and then decided whether to add one extra. Especially with that last step, it *sounds* like it does the right thing, but clearly does not, since the entire point of the exercise comes from realizing that you have gaps when you pack spheres together. Spheres have a *round* shape.
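For anybody who wants the missing step, the usual fix multiplies the naive volume division by a packing density: roughly 64% for randomly dumped spheres, a bit over 74% for a carefully stacked lattice. A quick sketch, using the regulation 40mm ball and an invented room size:

```rust
use std::f64::consts::PI;

// Estimate how many balls fill a room, accounting for the gaps that
// sphere-packing leaves; a packing density of 1.0 reproduces the naive
// volume division that Copilot effectively performed.
fn balls_to_fill(room_volume: f64, ball_diameter: f64, packing_density: f64) -> f64 {
    let radius = ball_diameter / 2.0;
    let ball_volume = 4.0 / 3.0 * PI * radius.powi(3);
    packing_density * room_volume / ball_volume
}

fn main() {
    let room = 4.0 * 5.0 * 2.5; // an invented room: 4m by 5m, 2.5m tall
    println!("naive: {:.0}", balls_to_fill(room, 0.04, 1.0));
    println!("random close packing: {:.0}", balls_to_fill(room, 0.04, 0.64));
}
```

Run against that invented room, the naive division overshoots the packed estimate by more than half a million balls, which shows how far off "close enough to move on" can land.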
In fact, these tests felt a *lot* like conducting a job interview with a candidate who doesn't care, in the way that it acts dismissive of instructions and can't even accomplish its version of the tasks given to it without serious help. As I suggested above, it feels like the pair programmer that management expects you to guide and improve, not the pair programmer that anybody expects to guide or improve you. And in the spirit of GitHub Copilot mindlessly repeating things, I will also repeat some things that I said in that summer post.
> Writing code isnt the hard part of programming. Turning ideas into an unambiguous specification is the hard part, and that requires communication, not code.
Copilot only produces code, not necessarily to specification, and rarely good.
> At that point, software is no longer written to serve human needs; were all just watching our pomodoro clocks and shuffling tickets between swim-lanes to please our robot overlords 🤖.
That reads as a surprisingly accurate description of what it feels like to work with Copilot. You, the human who needs food, sleep, and other care, spend a significant amount of time tailoring comments and function signatures to coerce Copilot to write some code that might pass inspection. You end up working for it, rather than the other way around.
> Its not going to save anybody any more money than UML saved companies in the 90s.
I may actually want to revise that assertion. While I still stand by it, I should point out that this seems poised to *lose* companies money, as developers waste time trying to get decent code out of the AI, fix it, and ultimately discover that it actually mostly comes from a project licensed under the GPL. If I interviewed a job candidate who behaved like Copilot does, I'd block the hire. A candidate that only speeds up typing works best when replaced by a stenotype keyboard...
The upshot to all of this: unless you desperately want help with elementary programming exercises or terrible interview questions, using GitHub Copilot will probably turn out worse than having no help at all. And even if you do need to finish elementary work, you still need to act as the senior partner in the relationship, verifying and correcting everything that you receive. As much as GitHub and Microsoft claim otherwise, they produced a search engine, not an assistant.
## What Could Have Been
To close out, I'd like to make the point that this *could* have become more useful, if Copilot's developers hadn't fixated on generating code. After all, writing is easy. If you don't care about the results, you don't even need *literacy* to write, technically.
Rather, I wish they had focused more on the "pair programming" aspect. I'd *really* like to see, here, a way to use the GitHub data to just warn programmers when they come close to making mistakes. Surely, after all, a machine learning algorithm can look at millions of code repositories and see that, when someone checks in code that looks like yours, the repositories usually show a second commit in the near future that fixes a related problem.
That would make a good time to suggest code. Instead, we got a Copilot who doesn't actually know how to fly in real conditions, and doesn't actually care if the plane crashes...
* * *

---
layout: post
title: Real Life in Star Trek, Justice
date: 2022-07-07 17:04:04-0400
categories:
tags: [scifi, startrek, closereading]
summary: <i class="far fa-hand-spock"></i> The outside world in Star Trek
thumbnail: /blog/assets/Aachen-Kaiserbad-1682.png
offset: -43%
proofed: true
---
![Hot springs at Aachen, Germany](/blog/assets/Aachen-Kaiserbad-1682.png "You don't even want to know the punishment if you have imperfect table manners at the floating wine bar...")
## Disclaimer
In these posts, we discuss a non-"Free as in Freedom" popular culture franchise property, including occasional references to part of that franchise behind a paywall. My discussion and conclusions carry a Free Culture license, but nothing about the discussion or conclusions should imply any attack on the ownership of the properties. All the big names are trademarks of the owners, and so forth, and everything here relies on sitting squarely within the bounds of [Fair Use](https://en.wikipedia.org/wiki/Fair_use), as criticism that uses tiny parts of each show to extrapolate the world that the characters live in.
## Previously...
I initially outlined the project [in this post]({% post_url 2020-01-02-trek-00 %}), for those falling into this from somewhere else. In short, we attempt to use the details presented in *Star Trek* to assemble a view of what life looks like in the Federation. This "phase" of the project changes from previous posts, however. **The Next Generation** takes place long after the original series, so we shouldn't expect similar politics and socialization. Maybe more importantly, I enjoy the series less.
Put simply, you shouldn't read this expecting a recap or review of an episode. Those have both been done to death over nearly sixty years. You *will* find a catalog of information that we learn from each episode, though, so expect everything to be a potential "spoiler," if that's an [irrational fear](https://www.theguardian.com/books/booksblog/2011/aug/17/spoilers-enhance-enjoyment-psychologists) that you might have.
Rather than list every post in the series here, you can easily find them all on [the *startrek* tag page](/blog/tag/startrek/).
## Justice
Buckle in, everyone, because this episode will hurt, and I don't want anybody compounding the injury by falling out of their chairs.
> Captain's log, stardate 41255.6. After delivering a party of Earth colonists to the Strnad solar system, we have discovered another Class M planet in the adjoining Rubicon star system. We are now in orbit there, having determined it to be inhabited as well as unusually lovely. My first officer has taken an away team down to make contact, and they are in the process of returning to the ship.
The [Rubicon](https://en.wikipedia.org/wiki/Rubicon) river, also known today as the Fiumicino, runs through northern Italy. You may know the phrase "crossing the Rubicon," specifically referring to Julius Caesar sparking a civil war by bringing armies across the river into the Roman Republic. The river and the phrase have become synonymous with points of no return.
Strnad comes from Czech, where you can find it as a semi-common surname.
> **CRUSHER**: Captain? Sorry, Troi.
It seems odd for anybody, but for a doctor in particular, to refer to someone just by surname, rather than by title or given name.
> **RIKER**: As per report, sir. Class M, Earth-like, beautiful. It will startle you.
Leaving aside the detail that all location shoots happen on Earth---and therefore almost every planet in the franchise seems remarkably Earth-like until the more recent days of virtual sets---we still see so many planets that resemble Earth that I would imagine people wouldn't find another one a novelty.
> **CRUSHER**: It sounds wonderful for the children. The holodecks are marvelous, of course, but there's nothing like open spaces and fresh air.
This may verge on the technology side, but it sounds like their "magic" technologies don't quite have their claimed fidelity. Despite literally materializing some precise combination of molecules to simulate a space, people can still tell that the results only represent the space.
It makes me wonder whether Riker had the knowledge in [*Lonely Among Us*]({% post_url 2022-06-30-lonely %}) to so authoritatively state that manufactured meat tastes exactly like the kind coming from an animal's corpse.
> **LAFORGE**: They're wild in some ways, actually puritanical in others. Neat as pins, ultra-lawful, and make love at the drop of a hat.
>
> **YAR**: Any hat.
This looks like another instance---I won't bother to cite the previous times that I've mentioned it---of the **Phase II** thinking, trying to both excite us with the prospect of spontaneous sex and, I assume, assure the television station managers that the crew *definitely* doesn't have promiscuous sex.
> **PICARD**: Of course. Wesley? If we go down, I'd like you to join the away team to evaluate this world as a place for young people to relax.
We haven't seen Wesley socialize with anybody but his mother; he insisted in [*Where No One Has Gone Before*]({% post_url 2022-06-23-gone %}) that he doesn't like school; and he seems to spend his free time either reading or...riding elevators, I guess. Does he *know* any "young people"? We've seen children, but never in a scene with Wesley. I draw attention to this, because if someone had asked my teenage self to choose where kids would relax best, I probably would have picked a library or computer lab.
> Captain's log, supplemental. We are in orbit of a planet designated Rubicon Three, the home of a life form who call themselves the Edo. Our away team, including Wesley Crusher, has beamed down to make arrangements concerning some well-deserved recreation.
As I mentioned when discussing [*More Tribbles, More Troubles*]({% post_url 2021-09-16-troubles %}), Edo most prominently refers to [a time in Japanese history](https://en.wikipedia.org/wiki/Edo_period), mostly marked by xenophobia. Whether the writers intended the name to signal some conceptual connection to the Japanese or the Edoans from **The Animated Series**, though, I couldn't tell you.
> **RIKER**: No, it's all right, Lieutenant. Those are the Edo we met before. They certainly are fit.
>
> **TROI**: They certainly are.
>
> ...
>
> **TROI**: Healthy sensuality, sir. I feel mainly friendship, and happiness.
>
> ...
>
> **RIKER**: Play?
>
> **RIVAN**: At love. Unless you don't enjoy that. Perhaps you do?
>
> **LIATOR**: And you? Yes, I can see that you do.
Especially if we assume that the episodes represent what someone at Starfleet reconstructs from the logs, as suggested by [**The Motion Picture**]({% post_url 2022-03-10-tmp %})'s adaptation, look at how utterly prudish the crew seems, blushing at every implication that people enjoy sex, and viewing aliens comfortable with sex as obsessively nosy. Nobody asks questions about sexually-transmitted infections, how their culture handles pregnancies, or literally any serious potential consequence of casual sex, but just mentioning it calls their collective modesty into question.
This seems like a good time to take a look at the uniforms, too, jumpsuits that *could* fit the form, but despite custom tailoring to the actor, hide evidence of their bodies having curves other than breasts. Troi stands as an exception, in her shrinking denim monstrosity. It might slowly plot to choke the life out of her, but it does let her have hips...
And this sequence keeps trying to make it important that Troi feels jealous at seeing someone act affectionate towards Riker, which feels remarkably out of place.
Oh, and you might recognize Rivan as [Brenda Bakke](https://en.wikipedia.org/wiki/Brenda_Bakke), who has made a solid career of playing peculiar characters, usually in the secondary cast.
> **LIATOR**: You don't have to. Our rules are simple. No one does anything uncomfortable to them.
I don't like this episode. I doubt that *anybody* likes this episode. However, I have to appreciate that they crammed in a line explaining consent. It also strikes me as interesting that the only time that anybody discusses consent, an alien does so to assure someone from the Federation.
> **LIATOR**: Rivan, perhaps they can't run.
>
> **WESLEY**: Can't run? Sure we can run. Right, Commander?
I've seen references that Roddenberry originally imagined Wesley as a teenage girl. Exchanges like this make me wonder if and for how long they considered making him or her much younger. Teenagers can feel insecure, absolutely, but I have never met a teenager *so* insecure that you could provoke them into running, just to prove that they can do it. "I don't think that you can run" feels much more like an early elementary school kind of manipulation.
> **RIKER**: When in Rome, eh?
>
> **WORF**: When in where, sir?
Normally, the "alien doesn't know an obvious reference" jokes don't have much of an impact, because they don't have much substance behind them. In this case, however, we see an interesting situation where over-reliance on an abbreviated version of an idiom---you might recognize the full version as "[when in Rome, do as the Romans do](https://en.wiktionary.org/wiki/when_in_Rome%2C_do_as_the_Romans_do)"---makes the idea completely opaque to people who don't know the references. And yet, nobody in the crew particularly cares.
Also, depending on where you live or what media you watch, you might recognize the [Donald C. Tillman Water Reclamation Plant](https://en.wikipedia.org/wiki/Tillman_Water_Reclamation_Plant), then still relatively new.
> **DATA**: It was something unintelligible, Captain. Now running it through language and logic circuits.
Analyzing incoming messages for coherent information, I guess, qualifies as Plan-B, rather than the obvious thing to do at all times.
> **PICARD**: Geordi.
>
> **LAFORGE**: Sir.
>
> **PICARD**: Have a real look.
They don't trust the video, here, and also imply that the *Enterprise* has plain windows.
> **PICARD**: Why has everything become a something, or a whatever?
This seems like a direct reference to McCoy's "Why is any object we don't understand always called a *thing*?" line in [**The Motion Picture**]({% post_url 2022-03-10-tmp %}).
> **PICARD**: We found that world uninhabited. The life forms we left there had, had sought the challenge. At least, that is the basic reason. Had sought the challenge of creating a new lifestyle, a new society there. Life on our world is driven to protect itself by seeding itself as widely as possible.
You'll notice that he stumbles and changes his story, mid-way through the description. He claims the primal reason as adventure, then quietly slips in that they base the decision in maximizing the odds of the species surviving planetary disasters. Somewhere between the two, unspoken, we might have the colonies of the original series, established to send food and other resources back to the Federation, though that could extend to the survival pitch that Picard makes.
> **WESLEY**: Oh, sure! If you have a bat for the ball, I can show you my favorite. A bat? A stick or branch, about this thick, this long.
Wesley mimes something so unlike a baseball bat and so unwieldy, that I can't even imagine what game he wants to play...
> **WORF**: Of course, but with the females available to me, sir, Earth females, I must restrain myself too much. They are quite fragile, sir.
>
> **RIKER**: Worf, if anyone else had said that, I'd suspect he was bragging.
>
> **WORF**: Bragging, sir?
I would still call this bragging. Yes, yes, we have episodes later that elaborate on this. And much like Vulcan pon farr, I assure you that it always comes off as a traditional means of making men feel important. That especially seems true, once one realizes that not everybody prefers the same activities or stimulation during sex.
Seriously, consider what would need to happen with his genitals during sex for him to guarantee that he would *definitely* harm women. Does he have something that shoots out spines at unpredictable angles? Maybe it spontaneously emits a massive electric charge? Of *course* not. We'll later find out that Klingon sexual partners traditionally throw things at each other, which can't possibly come from biology.
> **YAR**: But I see no sign of police. Those who enforce laws.
This makes it sound like Federation streets typically have a police presence, strong enough that people would instantly notice its absence, with the intent of deterring crime. Interestingly, I can't find any studies validating the idea that an increased police presence leads to less crime. Rather, they seem uncorrelated, with multiple papers sheepishly suggesting that maybe police have another function.
We have plenty of "conventional wisdom" connecting policing with deterring crime, of course, but I can't find research that confirms it, and government websites making the assertion fail to provide a citation.
> **WORF**: Anyone who commits any crime in the punishment zone dies?
>
> **LIATOR**: The law is the law. Our peace is built on that.
>
> **YAR**: Even a small thing? Such as ignoring the rule, keep off the grass?
This feels like a signal to the audience, hinting at this episode's thesis. We'll get to a more explicit statement, later, but you might find the direction interesting.
> **WESLEY**: I'm with Starfleet. We don't lie.
Somebody hasn't watched [*The Last Outpost*]({% post_url 2022-06-16-outpost %}), where Picard repeatedly lies...
> **RIKER**: In accord with the Prime Directive, I've allowed them to hold him pending the outcome of this.
Between this episode and [*Code of Honor*]({% post_url 2022-06-09-code-honor %}), it seems a lot like everybody in Starfleet thinks of the Prime Directive as a bureaucratic inconvenience.
> **RIVAN**: We are a people of law. They do sometimes bring us sadness, but we have learned to adjust to that. Perhaps your laws work as well.
>
> **PICARD**: They haven't always, but now they do.
>
> ...
>
> **LIATOR**: Do you execute criminals?
>
> **PICARD**: No, not any longer.
>
> ...
>
> **PICARD**: Some people felt that it was necessary. But we have learned to detect the seeds of criminal behavior. Capital punishment, in our world, is no longer considered a justifiable deterrent.
First, hooray, the Federation---still? I forget where we landed on this, in the original series---doesn't have a death penalty.
Above, Yar implies that the Federation has police everywhere, enforcing laws. Here, Picard tells us that they detect criminal *tendencies* early in life, and that punishment no longer makes the family and friends of the accused sad. If we can grant that the writers meant these statements to have some consistency, then it sounds suspiciously like the Federation takes some medical intervention to prevent people born with some (alleged) criminal trait from committing crimes, then heavily polices outsiders or people whom the medical intervention fails.
I'll grant that we can't treat any of this as conclusive, but I also can't see a system that makes all their statements true *and* doesn't include arbitrarily stigmatizing and persecuting at least two groups, based on their identities.
> **PICARD**: Unfortunately, we have a law known as the Prime Directive.
I mentioned above, that the crew seems opposed to the Prime Directive in general. We'll see next that they don't really seem to understand it, so I guess it makes sense for them to dislike it.
> **PICARD**: No, no, no. That's not it. I want you to identify something for me, if you can. Captain to Transporter Room. Three to beam up.
Hang on, the Prime Directive forbids them from arranging for Wesley to simply vanish, but introducing them to interstellar flight and *their literal God figure*, that, they can do...?
> **TROI**: It's understandable, sir. Sharing an orbit with God is no small experience.
I don't consider myself a particularly religious person, but this strikes me as absurdly dismissive of the native religion.
> **DATA**: Babble, sir? I'm not aware that I ever babble, sir. It may be that from time to time I have considerable information to communicate, and you may question the way I organize it...
I see that Starfleet doesn't train its officers to give *or* accept criticism with anything resembling grace.
> **DATA**: Most interesting, sir. The emotion of motherhood, compared to all others felt by---
>
> **CRUSHER**: Shut up!
I wish that writers could stop writing characters where the audience feels satisfied when someone silences them. To do it successfully, the writer needs to *alienate the audience*, and nobody in the audience finds that fun.
> Captain's log, stardate 41255.9. Whatever the object or vessel in orbit with us, it hangs there like a nemesis. It is one thing to communicate with something mysterious, but it is quite another to be silently observed by it. I am concerned whether it understands the same concept of reason that we do?
You'll notice that Picard consistently thinks of the natives as primitive and irrational, while he imagines their God as *advanced* and irrational. The possibility that they just disagree with him seems beyond his imagination.
> **PICARD**: You also see things in a way we do not, but as they truly are. I need help, my friend. I cannot permit that boy or any member of this vessel be sacrificed. The Prime Directive never intended that.
Interestingly, original episodes such as [*Bread and Circuses*]({% post_url 2021-01-28-bread %}) told us that all space travelers from the Federation should expect to sacrifice their lives to avoid contaminating a culture's development, so it definitely did intend exactly that, at some point.
> **RIVAN**: Captain Picard. I saw you share the sky with God. You must be gods.
I'll point out again that they need to debate whether to have Wesley "mysteriously vanish," but nobody sees the natives imagining them as gods to maybe have some relevance to the discussion.
> **PICARD**: I may suffer almost as much. Starfleet takes the Prime Directive very seriously.
I don't even know what to do with this. The crew clearly has no idea what the Prime Directive actually says, basically making up the descriptions as they go, and they all think of the rule/law as an impediment to their jobs, but they somehow expect us to believe that the institution takes it at all seriously?
If they took it seriously, the crew would have predicted the "only gods can share space with God" response. They wouldn't keep trying to sell the Edo on Federation jurisprudence. They would probably have at least a few more concerns about engaging in casual sex with the Edo. And they certainly wouldn't complain about the law whenever it comes up in conversation.
> **YAR**: What of justice to Wesley? Does he deserve to die?
Does he...*not*?
I don't ask because I dislike the character, at least not entirely. Rather, I ask because he broke the law; ignorance---as one of the Mediators points out, and as most Earth jurisdictions agree---can't serve as an excuse, or else everyone would claim ignorance of every rule; and both cultures have a mandate. What makes Wesley or his case special enough to overturn the laws of two cultures? They need to answer that question, or else it puts the Edo in a position where Starfleet dictates their laws and religious beliefs, and puts Starfleet in a position where rescuing a friend justifies any means.
In that sense, calling the star "Rubicon" makes a surprising amount of thematic sense, making me again wonder if the writers intend for us to find this crew objectionable.
It seems so lazy (both in writing and the characters' actions), since they could easily have replaced the arguments over who has the better legal system with (for example) Worf and Yar combing through legal and religious texts to find some obscure reason---the crime occurred at the end of a shift, Wesley had no intention of causing damage and can make restitution, multiple people offered to die in his place, and so forth---that follows some precedent that allows him to go. And I apologize for turning this into a complaint about the plot, but *Lonely Among Us* made a big deal about Starfleet using the transporters to create meat from the atomic level. Could they not replace Wesley with a dead duplicate and make everybody (except the audience) happy?
> **PICARD**: I don't know how to communicate this, or even if it is possible, but the question of justice has concerned me greatly of lately. And I say to any creature who may be listening, there can be no justice so long as laws are absolute. Even life itself is an exercise in exceptions.
>
> **RIKER**: When has justice ever been as simple as a rule book?
I mentioned before, that we seemed to get a hint as to where this episode would go, and it pays off here: We end the story with Picard and Riker railing against [the nanny state](https://en.wikipedia.org/wiki/Nanny_state), the hilariously misguided idea that liberals make everyone weak by passing and enforcing laws to protect public safety, instead of allowing for "personal choice" to massively endanger ourselves and others.
In that sense, you can see this episode as a companion to [*The Naked Now*]({% post_url 2022-06-02-naked %}). Where [*The Naked Time*]({% post_url 2020-02-06-trek-naked-time %}) showed the spread of an epidemic because a single jackass took off his mask to scratch his nose, *The Naked Now* gave us a story where nobody bothers to wear masks, because this crew believes that they (or the transporter) can beat any bug that they might pick up. This episode, now, goes a step further, saying that the freedom to break rules inherently deserves a high priority.
Sure, maybe they *meant* that you can't have justice without mercy. But they could easily have said that---I just did it in six words, and I know that smarter and more famous people than me have put it more eloquently---and didn't. In fact, neither the word "mercy" nor any synonym appears in the episode at all. Rather, they consistently argue that the Edo should exempt Wesley from legal consequences, for...space-reasons, I guess?
## Conclusions
This episode most directly tells us that the Federation places a high value on "Earth-like" environments, and we see an admission that the holodeck makes compromises when representing environments. Maybe relatedly, outside the ship's bridge, the *Enterprise* has conventional portholes or windows, despite---at least in the real world---the dangers of radiation.
### The Good
Though the original series presented this idea and then gave us a bunch of exceptions and caveats, this show repeats the idea that the Federation has no death penalty. I suppose that this century's exceptions and caveats may arrive later.
### The Bad
Federation culture seems to have an odd stance about sex. From our samples, it appears that they love talking about their sexual prowess, but panic in embarrassment when someone expresses any interest in sex with them. They all seem to want you to know that they enjoy sex, but other people enjoying sex makes them feel uncomfortable. And when convinced to express interest in casual sex, they have no questions about potential infections, managing pregnancies, or their views on consent.
We don't see any direct racism or sexism in this episode, but we do see a tendency to use exclusionary language, using truncated idioms as "inside jokes" that listeners either understand or---if they didn't grow up on Earth---don't. We also see, again, that the Federation believes that it has reached the end of its evolution, with no flaws, dishonesty, or...need to respect anybody else. They all *hate* the Prime Directive, grossly misinterpreting it to seem incoherent, dismissing the idea that Starfleet ever intended for anybody to die for its principles, and casting it as a massive inconvenience that makes their jobs more difficult.
Indirectly, we do see something at least similar to racism, as Picard characterizes the natives as primitive and irrational, and their mentor/patron figure as overly advanced and irrational.
I mention the lack of fidelity to their simulation technologies, but part of that leads directly to a lack of trust in those technologies.
At least humans, and possibly the Federation in general, now colonize worlds, because it fears an extinction event wiping out the majority of the population, and wants to preserve the species. We don't know if they have economic reasons, as well, which could certainly feed into the survival aspect. Many have the impulse to lie about this, instead claiming that the adventure of crafting a new society drives colonization.
We find that Klingons have supplanted Vulcans, for our purposes, as representing the minority culture teaching people that their men need to look more virile than anybody else, believing that human women could never survive the encounter.
We get a brief implication that the Federation heavily polices populated areas, with the Federation imagining a direct connection between every officer and a decrease in the crime rate. However, we also learn that the Federation monitors young people for potential signs of criminal activity to "correct." They believe that the alleged medical interventions eliminate crime, but also maintain the heavy police presence.
We continue to see an unprofessional streak through the crew, this time Picard and Data arguing with each other over mild criticism.
Finally, the episode ends with a condemnation of "the nanny state," suggesting that strict laws for the public good violate some important principle that they can't bother to name, and that everyone needs the freedom to break laws.
### The Weird
Data seems to imply that nobody analyzes a message for patterns unless it first fails to make obvious sense.
They also seem strangely dismissive of a religion based on a god who they can actually see and communicate with.
## Next
Come back in seven days, when we find out that Picard probably knows more about the Ferengi than he claims, and that they don't appreciate the Federation trying to kill them, in *The Battle*.
#### <i class="far fa-hand-spock"></i>
* * *
**Credits**: The header image is adapted from [Hot springs at Aachen, Germany](https://commons.wikimedia.org/wiki/File:Aachen_Kaiserbad_1682.jpg) by Jan Luyken or Cuyken, long in the public domain.