
Friday, December 20, 2013

Programming Language of the Week: C

C is a compiled language. Rather than having a separate program interpret your code line by line, a compiler reads your entire program at once and translates your human-friendly source code into binary code that is directly executed by your processor (e.g. an *.exe file on a Windows system). If you know how to read the binary instructions for your processor, you can trace out exactly which instructions will be loaded into your processor, and in what order.

C of course isn't alone in being a compiled language, but it is perhaps the most widely used. C has been used extensively in the development of Windows, OS X, and the Linux kernel. It's the go-to for systems whose requirements preclude the overhead of interpreting code line by line or pauses for garbage collection. Indeed, the interpreters for many interpreted languages are themselves written in C.

Learning C gets you "close to the metal." Understanding pointers and being responsible for your own memory management illuminates the barrier between the logical systems we aspire to create and the physical machines which realize those designs.

You can see C's linguistic influence by noting the features later languages borrowed: statement blocks grouped by opening and closing {}s, the primitive type names int/long/char, for-loop syntax, the pre-/post- increment/decrement operators, and the boolean operators. These are shared, in whole or in part, by Java, JavaScript, C#, and more.

To learn C, you'll need a C compiler, a motivating project, and a good tutorial. Check out http://www.cprogramming.com/tutorial/c-tutorial.html for more.


#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}

Wednesday, December 18, 2013

Computing for Everyone: Words to avoid in software, aspirational edition

The two hard things in software are cache invalidation and naming things. We haven't talked here yet about caches, let alone their invalidation, but recent events prompted me to address a subtopic of naming.

As your future adjunct professor, I forbid you from using any of the following words in your names:
  • generic
  • simple
  • standard
  • reusable
  • extensible
  • pluginable
These words represent ideals your software should embody, but these words should never be part of your class names, variable names, namespace names, or overall system branding. Here's why.

Human psychic energy is limited.

Well-written code respects the limited psychic energy the maintainers can devote to understanding and making changes to it. Well-written code will minimize the effort required for someone new to a system to take any stanza of code from the system, understand the purpose of the stanza, and convince himself that the stanza correctly implements the intended purpose.

Names are among mankind's most significant allies in understanding software. Computers are happy to distinguish between variables named llllIlll and lllIllll, but humans will go insane dealing with such code in the long-term, have a huge ramp-up time understanding existing code bases written this way in the medium-term, and be much more likely to introduce errors while making changes vital to the mission of a system in the short-term.

Good names in software, then, refer to the problem domain they are addressing, rather than architectural goals. "simpleExtensibleField1" and "simpleExtensibleField2" as variable names are vastly inferior to "subtotal" and "tax".

The problem with the words on the list above is that they make names look different to a computer while adding only noise for a human. There's a temptation to brand large refactoring efforts with one of these words, but such efforts can stall halfway through. Then another refactoring effort comes along on the half-refactored code base, finds its favorite word taken, and picks a different word from the list. Refactoring is supposed to make a code base easier to understand without changing its functionality; these sorts of names do the opposite.

It would, in fact, be clearer to a human if these efforts had been branded with colors instead (not that you should do that, either). By way of example, it's much easier to remember whether you're working with all GreenWhatchaMadoogies or all YellowWhatchaMadoogies than all GenericWhatchaMadoogies or all PluginableWhatchaMadoogies. And really, if the code is actually simple or generic, rather than just insisting via names that it is, the code would refer to instances of plain WhatchaMadoogies. The "generic" and "simple" labels, having failed their mission, are reduced to brands.

Monday, December 16, 2013

Brain90X: Thoughts from "Characteristics of Games" and "Thinking, Fast and Slow"

There's a disturbing correlation between getting older and losing the ability to learn new things. Since learning new things is one of my favorite things in the whole wide world, I'd like to keep this ability as long as possible and increase this ability to its full potential. My plan? Games.

Here is my 3am rationale from last month, cleaned up slightly on a Saturday afternoon:

Learning new things takes a lot more energy than applying what's been learned. In a modern life, we're able to learn enough after a few decades to subsist mostly on applying what we have learned with a minimal amount of new learning required. Once we can pay our bills simply by applying learned knowledge, we aren't so motivated to go out and engage in the relatively mentally taxing enterprise of learning new things. My theory, then, is that the declining ability to learn new things with age is not a biological inevitability, but a function of atrophy once we've learned enough to pay our bills. What we need is a way to exercise the learning part of our brain so that it stays with us in perpetuity.

Climbing the heuristics ladder of a good game is the best kind of exercise you can give your brain. A "heuristics ladder" for a game is the increasingly sophisticated set of rules of thumb you build up for yourself to let you know how well you're doing ("state heuristics") and which choices maximize your chances of victory ("directional heuristics"). This type of mental activity is very different from memory recall and applying "street smarts."

The first type of cheap, fast brain activity is memory recall: Erudition. The process of rote learning, while oft-scorned, is critical to competently functioning in a field. If your brain is a computer system, this is similar to "warming up your cache." (What is a cache?) I like to think of memory recall as "book knowledge." Literacy is the practice of transferring this type of information between humans (books, speech, this blog, plugs from The Matrix). (An interesting tangent is to note that body language and flirtatious subcommunication don't fit here at all. In fact, this information is background noise for the reality of most social interactions.)

The second kind of cheap, fast brain activity is "street smarts": Worldliness. Street smarts assess both nouns and game strategies based on what has come before. Street smarts come from a statistical learning style where you observe classified and regressed examples in order to assess a new instance. In the computer world, this is how most kinds of machine learning work. Street smarts can be partially transferred through literacy as well--if it's done right and the source is trustworthy.

This leaves us with the most risky and expensive operations our brains perform: experimentation and creativity while climbing heuristics ladders: Cleverness. This is where the brain comes up with new things to try, where it sets up new literacy caches for information and aggregate data stores for examples. The more novel the situation, the more this part of the brain is engaged. It creates, tests, imagines, evaluates, and combines strategies. It works at its hardest when the answer is not available from recall or street smarts. Recall and street smarts are automatically and invisibly applied first in understanding a situation (thus introducing personal biases into even new approaches). Heuristic ladder climbing is difficult, and our brains automatically take shortcuts to get to a passable approach.

This difficulty is a good thing for the brain. This is how the brain hits the gym to stay in shape. What an over-trained brain looks like is an exercise in imagination left to the reader.

Climbing heuristics ladders can itself be a function of literacy (chess books) and street smarts (what has worked for you and others in the past). There's also the punting strategy (try something. What happens?) and the combination strategy (adjust a strategy based on the assessed strengths and weaknesses of previous approaches). This engages Kahneman's "System 2" from Thinking, Fast and Slow. What's great about games is that this process happens when the stakes are low: you can face a variety of novel situations without having to bet your fortune, your business, or your life.

Now let's talk about logic's role here. Logic can transform novel situations into something we are better equipped to deal with: a rote answer or a way to proceed. It is also a System 2 function to multiply the abilities of System 1. Something about logical training is critical to our ability to climb heuristics ladders presented by games. It's a force-multiplier for erudition and honed instinct because it multiplies the situations where we can apply hard-won lessons.

Climbing heuristics ladders also sharpens creativity. Try something out. Give it your best shot. Compare it against what you know and how it works out. Build a logical system for thinking about the new situation. This is creativity. The germ of creativity can be something simple, but giving it an honest, competent try requires discipline. And as you gain experience, you realize that most ideas fail. The ones which succeed are precious. You develop a filter, based on book smarts and street smarts, for which strategies you will even attempt! The more you know and can apply, the more narrowly you can (and will) filter which creative experiments you judge worth trying.

So as you get more worldly and erudite, you won't need to be as clever. Unfortunately, the clever part of the brain is a muscle. This is the learning center. This workout is what keeps your mind young and nimble. This is why we need games. Games of all sorts. Low cost of experimentation/success/failure. Creative heuristic ladder climbing/mental model building. Logical transformations to existing data stores. Creation of new data stores. Logically transforming and making analogies from games to other situations and novel situations to each other.

Playing games could keep this muscle in shape. It could be that the reason for declining mental strength is only partly chemical and very behavioral. Once you have enough book and street smarts to live and live comfortably, why continue engaging in this expensive mental exercise? Why sell past the close? Why work hard when you already have enough? To stay in shape.

Another exercise for the reader: How does this sort of narrative apply to the idea of upward mobility in a free economy?

I'd like to finally emphasize that this natural progression from the clever/creative to the worldly to the erudite is inevitable. Even in games with deep, rich heuristics ladders such as chess, a player with many years of experience can fall into winning on erudition and street smarts alone without realizing it. The key is to find competition at around the same level of erudition and worldliness so you force each other into the realm of the clever. Failing that, the game, as beautiful as it is, no longer keeps your brain in shape. It may still be enjoyable to play and very enjoyable to win, but in order to progress you'll need tougher opponents or new games.

Finally, an alternative to games, if none are available, is the deep study of various branches of mathematics. Try it out. There's much to learn, much to experiment with, and much to stir creativity.

Footnotes:
  • A cache in computing is local storage of data that is expensive to retrieve; for example, it's faster to read from and write to RAM than to a traditional hard drive, so the contents of often-referenced files are loaded into RAM for fast manipulation. Similarly, microprocessors have a cache on the chip for data that are stored in RAM to speed up computations. When a cache is initialized, it is empty, but as the system is used the cache starts getting populated and the hoped-for speedup is realized. Using a system to populate the cache is sometimes called "warming it up."
  • Kahneman's "System 1" is the fast, cheap, automatic side of the brain. "System 2" is the slow, expensive, methodical part of the brain, sometimes an apologist for System 1, sometimes the only thing helping us see past the biases such automatic processing induces.