Week 1
Course Introduction and Thinking Like a Programmer
Many students will find themselves needing to solve problems by programming at some point - be it programming an experimental apparatus using an archaic instruction set, writing a suite of fancy software, or analyzing and predicting trends from some data set.
In all these cases it is important to remember that programming, like most technical skills, is something which gets easier with time and practice - even if you never use Python again, you'll find it much easier to learn a new programming language once you've spent hours getting angry at another one, as they all ultimately rely on the same kinds of thinking.
The ability to code is perhaps the most sought-after technical skill, alongside mathematics, in a world where attempts to automate and digitize permeate an increasing number of jobs.
But easily the most important reason to learn to code is this - coding is fun!
We live in a digital world, and having the skills not just to appreciate and understand how software and technology work, but also to manipulate them, or to create your own, unlocks more doors than you might imagine.
Maybe you'll create an app which automatically sends messages to your housemates telling them to do the dishes, or you'll code an artificial intelligence which takes over the world - what you do with the skills we aim to provide is up to you.
Alright, we'll admit that we're not grizzled veterans of pedagogy, and (hopefully) lack the tweed coats and wrinkled skin which wizened lecturers don. However, it wasn't that long ago that we were first getting to grips with programming ourselves (though sometimes it sure feels like it), and we think we've got a pretty good idea of not just how to code, but also how to learn to.
Computers store information as binary values, consisting only of 1s and 0s. The reasons for this ultimately come down to the fact that it is more practical to build electronics which represent only two distinct states, and which are thus able to encode binary values.
Binary numbers define a "base 2" counting system, whereas the one we are used to is "base 10" (known as decimal). For example, in base 10, the number 126 is represented by:

$$126 = 1 \times 10^2 + 2 \times 10^1 + 6 \times 10^0$$

where each column represents a power of 10; i.e. $10^2 = 100$, $10^1 = 10$, $10^0 = 1$.
In binary, numbers are represented similarly, except that the base is now 2 rather than 10. So the number 13 would be written as

$$13 = 1 \times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0$$

which means that the binary representation of 13 is simply 1101, where each column now gives the coefficient of a power of 2, just as in base 10.
This may seem abstract or unnecessary, but the point of showing you binary is simply to prove to you that it is possible to represent any number using only 1s and 0s. Combined with the fact that we can represent anything in terms of numbers, this means computers can store all kinds of information using only 1s and 0s at the electronic level.
Luckily for us, most programmers rarely work at such a low "machine-level", so don't worry about the details!
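If you'd like to check conversions like these yourself, Python has built-in functions for moving between decimal and binary - a quick sketch:

```python
# bin() gives the binary representation of a number as a string,
# while int(..., 2) interprets a string of 1s and 0s as a base-2 number.
print(bin(13))         # '0b1101' - the '0b' prefix just marks it as binary
print(bin(126))        # '0b1111110'
print(int("1101", 2))  # 13
```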
A key example of how we can represent everyday information in terms of numbers is provided by the ASCII character encoding standard. The ASCII standard defines a mapping from numbers to characters, including not just the alphabet, but also many common symbols like &, #, !, etc.
The ASCII standard allows us to convert English words to numbers, which can in turn be stored and manipulated by computers using their binary representation. For example, the word "Hello":

| Alphabetical | ASCII | Binary |
| --- | --- | --- |
| Hello | 72 101 108 108 111 | 1001000 1100101 1101100 1101100 1101111 |
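We can reproduce this table ourselves - a minimal sketch using Python's built-in ord, chr and format functions:

```python
word = "Hello"
codes = [ord(c) for c in word]              # ASCII code of each character
binary = [format(n, "07b") for n in codes]  # the same codes as 7-bit binary
print(codes)                                # [72, 101, 108, 108, 111]
print(binary)  # ['1001000', '1100101', '1101100', '1101100', '1101111']
print("".join(chr(n) for n in codes))       # chr() maps codes back: 'Hello'
```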
There are many kinds of information we might want to represent on our computers besides strings of characters (i.e. words and sentences). One obvious example is images, which ultimately consist of many thousands of individual "pixels", each of which has a specific color.
One simple way to encode colors is to decompose them into three primary colors - typically red, green and blue - and to figure out how much of each color you'd have to "mix" together to get the desired color (known as additive mixing).
The RGB standard encodes this information using three numbers between 0 and 255, which represent the "amount" of each color present in the decomposition. For example, pure red is (255, 0, 0), white is (255, 255, 255), and black is (0, 0, 0).
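As a small illustration (the particular colors here are just examples), the familiar "#rrggbb" hex notation used on the web is exactly these three numbers written in base 16:

```python
# A few colors as (red, green, blue) triples, each channel between 0 and 255
colors = {
    "red":    (255, 0, 0),
    "white":  (255, 255, 255),
    "purple": (128, 0, 128),
}
for name, (r, g, b) in colors.items():
    print(f"{name}: #{r:02x}{g:02x}{b:02x}")  # each channel as two hex digits
```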
Some of you may be old enough to remember back in the good old days before HDMI, when SCART cables were still commonly used - these actually had red, green and blue signal channels. Furthermore, pixels on most modern displays actually consist of distinct red, green and blue sub-pixels, each of which changes brightness to give the appearance of millions of different colors.
Abstraction is a concept in computer science where some low-level implementation (such as how data is ultimately stored in binary) is simplified or taken for granted, so we can use that implementation at a higher level (such as representing letters, which we can then use in our programs).
Emojis provide a great example of abstraction - let's take a favorite of mine, the "pile of poo" emoji:💩.
Similar to the way in which common characters are encoded in the ASCII scheme, a more complex standard known as Unicode is used to represent other objects typically sent in messages - including emojis. For example, 💩 is assigned the code point U+1F4A9, which the UTF-8 encoding stores as the four bytes F0 9F 92 A9.
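You can inspect all of this from Python directly - a quick sketch:

```python
emoji = "\U0001F4A9"          # Python's escape for the code point U+1F4A9
print(emoji)                  # 💩
print(hex(ord(emoji)))        # 0x1f4a9 - the code point as a number
print(emoji.encode("utf-8"))  # b'\xf0\x9f\x92\xa9' - the four UTF-8 bytes
```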
Computers compatible with this encoding standard will know that it corresponds to a given set of RGB values - describing, say, a 256x256 pixel image of the emoji.
This is an excellent example of abstraction, because rather than sending your friends a set of 65536 RGB vectors, you just send them ":poop-emoji:" and rely on the fact that someone else has taught your computer to convert that to an image of a smiling piece of feces.
Without going into any detail here, it is important to note that the essence of what many programmers do is problem solving - more specifically, designing algorithms which take the inputs defining a problem (in a specified representation) and output the solution (in a specified representation).
For example, I might develop an algorithm which tells me what day it will be tomorrow, based on what day it currently is. This would work something like the sketch below.
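A minimal Python sketch of the idea (the function name tomorrow and the list DAYS are just our own illustrative choices):

```python
# The days of the week, in our chosen representation: English names
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def tomorrow(today):
    index = DAYS.index(today)     # find the position of the current day
    return DAYS[(index + 1) % 7]  # step forward one day, wrapping after Sunday

print(tomorrow("Sunday"))  # Monday
```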
Here we have chosen a representation in which the days of the week are given by their names in English. Instead, we might be producing an algorithm for a group of efficient Germans, who tell us the day in German and want the output as a number between 1 and 7 (to save valuable syllables from being spoken).
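The algorithm works the same way, but both the input and output representations have changed - a sketch under the same assumptions as before:

```python
# Input representation: German day names; output representation: numbers 1-7
GERMAN_DAYS = ["Montag", "Dienstag", "Mittwoch", "Donnerstag",
               "Freitag", "Samstag", "Sonntag"]

def morgen(heute):
    index = GERMAN_DAYS.index(heute)
    return (index + 1) % 7 + 1    # 1 = Montag, ..., 7 = Sonntag

print(morgen("Sonntag"))  # 1 (i.e. Montag)
```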
Note: This highlights an important detail about creating algorithms - although in both these cases the algorithm works in essentially the same way, the data we feed to it and the data we want it to give us are different, and so the algorithms are distinct. In essence: the representations of the inputs and outputs are part of the algorithm itself.
Suppose you're trying to tell a robot how to go about making a jam sandwich. You might start by giving it instructions along the lines of:

1. Get two slices of bread
2. Spread jam on one slice
3. Put the other slice on top
And that would probably be more than sufficient, if not slightly patronizing, if you were talking to a competent human. However, what we'll be learning is how to tell computers what to do, and computers are truly incompetent - no common sense whatsoever!
So let's try again, and be really explicit this time:

1. Walk to the cupboard and open it
2. Take out the jar of jam and the loaf of bread
3. Take two slices from the loaf
4. Open the jar, and use a knife to spread jam on one slice
5. Place the other slice on top of the jam-covered slice
This is better, but still not even close to specific enough - computers need to be told exactly what to do, preferably at the highest level which they can understand (which is very low, by human standards). So let's suppose our robot has some sensors which can measure distance, and that it knows about angles and movement.
Then we can start by opening the cupboard which contains the jam:

<move forward 1.5 m>
<rotate 90 degrees clockwise>
<move forward 0.5 m>
<raise arm 1.2 m>
<grip cupboard handle>
<move backward 0.4 m>
<release grip>
Ok, we did it, we managed to tell the robot how to open the cupboard - yes... I can hear you saying "Maybe I'll just get a Subway instead", but don't worry, we can use abstraction to make our lives easier.
Notice that in the example above the robot understands an instruction called <grip>, presumably because the manufacturer thought it would come in handy. We can also create our own complicated instructions by abstracting away whole sets of consecutive moves into so-called functions, which we've been denoting using <>.
For example, suppose we went through the horrible effort of teaching the robot how to google an object's name, recognize it, and then move it to another object, and we referred to this instruction as <pick up x and place at y>. Then we could avoid bothering about coordinates and angles entirely, except for the one time where we define the function.
If we repeated this process of building layer upon layer of abstraction, then we might eventually end up with a single function called <make jam sandwich>, which we could now tell the robot to carry out as many times as we liked (though it's not clear what happens when we run out of jam or bread!)
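In Python, this layering looks much the same - a hypothetical sketch, where the primitives grip and move_to stand in for whatever low-level commands the manufacturer provides:

```python
def grip(obj):                # pretend primitive supplied by the manufacturer
    print(f"<grip {obj}>")

def move_to(place):           # pretend primitive: navigate to a named location
    print(f"<move to {place}>")

def pick_up_and_place(obj, place):  # our first layer of abstraction
    move_to(obj)
    grip(obj)
    move_to(place)
    print(f"<release {obj}>")

def make_jam_sandwich():      # the top layer: one instruction does it all
    for item in ["bread", "jam", "knife"]:
        pick_up_and_place(item, "counter")
    print("<spread jam and assemble sandwich>")

make_jam_sandwich()           # repeat as many times as the jam supply allows
```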
In much the same way, most programmers rarely have to dig down to the "nitty gritty" to solve real-world problems; instead they can rely on all the functions which other people, solving related problems before them, have implemented in libraries or packages - we'll come back to this later.