Knowledge and Symbolic Reasoning
When people gain usable insights into the workings of the universe, such as Einstein's theory of special relativity, they encode those insights as "knowledge" which can be efficiently (if not completely) transmitted to other people, usually by explanation.
Generally this means that they have reduced the insight to a compact symbolic form, like E = mc^2.
When neural networks learn information, it is encoded into vast numbers of coefficients. Knowledge in this form cannot be efficiently transmitted to people, and it probably does not match the symbolic concepts people already have. So LLMs implicitly learn how different words are used, without necessarily learning or using our general categories such as noun and verb (categories that may be the most intuitive to us, even though they present troubling exceptions).
Proponents of LLMs believe this is fine. They want AI to deliver solutions. They don't need AI to explain itself, and they feel it is unnecessary for AI to help us learn how to solve problems for ourselves. We stand on the shoulders of AI to do more.
But if AI is letting us solve problems without fully understanding them, then it is making us dumber. The "solutions" that we are thereby creating might best be understood as slop (the word popularized by Cory Doctorow, who is an excellent critic of AI).
When you are a homeless person passing through a soup kitchen, they dish out slop. That's fine because they have limited resources and limited staff and that is the only way they can feed so many people cheaply. But when you are an aristocrat dining in a fine restaurant, or just a person who has enough time to do so, you want a carefully prepared meal, not slop.
Slop is perhaps unavoidable, but it is something we should avoid whenever we can. Ideally everyone would have carefully prepared meals. That's part of a quality life.
Resisting AI means we will not achieve the (alleged) productivity benefits.
But preparing slop makes us dumber. Preparing meals carefully makes us smarter. In the long run, this is more important than being "more productive." It is much better to do less, and to understand what we are doing and learn how to do it better, than just to "do more."
Consuming more slop makes us poorer, not richer. (Don't trust GDP and similar metrics here. What is really most important is not how much we consume, or how much money circulates, but deepening our quality of life.)
We should seek to invent the technologies which make us smarter, not dumber. Only by being smarter can we know and appreciate quality and how to get there.
Therefore, we should seek to build the society that makes us learn more, think, and create, not just dish out more and more slop.
Making people dumber and dumber is the quickest road to collapse of everything.
That also happens to be what you get by mindlessly raising "productivity."
"Higher Level" Thinking
Proponents of AI think the sloppiness is fine, and that it enables us to think at a "higher" (more abstract) level while the AI does the lower-level thinking for us.
But this higher level often becomes little more than BS and hand waving.
It is my firm belief that the strongest learning comes from working things all the way through. This is not a new idea. Euclid famously told King Ptolemy I: "There is no royal road to geometry."
So when I build my programs, I do it this way. I think problems through with paper summaries or diagrams first. I consider the different ways they could be solved and choose what appears to be the best one. If it proves to have been a wrong choice, I flip to another one, before I have written much code if possible. I build everything from the raw ingredients of my operating system and programming language as much as possible. Only if things appear to be particularly tricky do I look for previous solutions (aka libraries) that I can use, and if it's fairly easy, I just reimplement the parts of those libraries that I need. I rely heavily on built-in language features and libraries, including associative arrays (aka hashtables), which are capable of dealing with many if not most hard problems.
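As a small illustration of that last claim, here is a sketch of my own (not from any of the author's programs), using Python's dict to stand in for any language's built-in associative array: a classically fiddly task, grouping words that are anagrams of one another, falls out in a few lines.

```python
# Group words that are anagrams of one another, using a hashtable
# keyed on each word's letters in sorted order.
def group_anagrams(words):
    groups = {}  # associative array: sorted letters -> words sharing them
    for w in words:
        key = "".join(sorted(w))
        groups.setdefault(key, []).append(w)
    return list(groups.values())

print(group_anagrams(["eat", "tea", "tan", "ate", "nat", "bat"]))
# → [['eat', 'tea', 'ate'], ['tan', 'nat'], ['bat']]
```

The hashtable does all the real work: it turns an O(n^2) all-pairs comparison into one pass with constant-time lookups.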
I know this goes against the grain. From the very beginning of my 39-year career in computer programming, I was taught the mantra "Reuse." But I reject that as a general rule, for several reasons:
1) Learn (everything) by doing (everything).
2) Programs built upon combinations of even fairly simple libraries can become ever more impossible to fully understand. Often different libraries do not connect intuitively with one another, and then all your code becomes translating information from one library to another--very dull.
3) Copyright, patent, and similar issues.
During the whole process, even before starting to code, I start writing the user documentation as well. This is invaluable in determining the fine details of the interfaces. If something is hard to describe, it's probably not designed well either.
I don't create a 'detailed design', one specifying all variables and data structures, before coding. That's basically impossible for a human. When I was required to use a formal design process, most people could not actually perform a useful Design Review until well into the coding process, if not nearly at its completion. As one of my colorful (and PhD-holding) colleagues remarked, "We're supposed to do Design after Coding. I prefer design while coding."
For over a quarter century, I've either written the documentation into the program itself or straight into fairly simple HTML. I like being that close to the metal. I hate word processing programs. I do all my editing in GNU Emacs.
I've had some experience doing things other ways. Java programming, for example, is traditionally done by importing dozens or even hundreds of libraries, with interactions so complex that fancy tools are needed to work out the ramifications, keep each library installed at a compatible version, and keep all the interfaces correct for that version. The general code does little more than call one library after another. This is the pinnacle of the "Reuse" concept. I hated it. In my opinion it wasn't programming; it was dishing out slop.
AI is a vastly greater extension of this.
Now I am very happy to be able to search the web to find code to solve each unfamiliar issue as it comes up. I don't just cut and paste the bits of found (or generated!) code. I read them and figure out how they work, then write them into my program. (My post-retirement program MakePlaylist was created exactly as described above, except that I haven't written HTML documentation for it--only in-line documentation that the program itself emits as help messages and full documents via built-in options. But now I am writing HTML for a far more challenging project: a multivolume book about my life.)
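I can't speak to how MakePlaylist actually implements this, but the pattern it describes, one copy of in-line documentation feeding both the help message and a fuller document, can be sketched roughly as follows (all command names and options here are invented for illustration):

```python
# Hypothetical sketch: documentation lives in the source as docstrings,
# and the program emits it as help text or a longer document on request.
import sys

COMMANDS = {}

def command(fn):
    """Register a function as a user-visible command."""
    COMMANDS[fn.__name__] = fn
    return fn

@command
def add(args):
    """add FILE...  -- append the named files to the playlist."""

@command
def shuffle(args):
    """shuffle      -- randomize the playlist order."""

def help_message():
    """One line per command, taken straight from the in-line docstrings."""
    return "\n".join(fn.__doc__ for fn in COMMANDS.values())

def full_document():
    """A longer plain-text manual built from the same in-line text."""
    lines = ["Illustrative manual", ""]
    lines += [fn.__doc__ for fn in COMMANDS.values()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(full_document() if "--doc" in sys.argv else help_message())
```

The appeal of the approach is that the documentation can never drift away from the code it describes, because it lives in the same file.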