Reflections

As I pack my belongings to move to a new apartment and begin a new job, entering upon a new phase of my life, I’m quite relishing the opportunity to reflect upon tidbits of my past lives as I sort them and pack them away.

I found a tiny notebook that I had titled “Progress”, within which I’ve captured glimpses of former selves, saved in quickly doodled sketches and poetic captions, each carefully dated for posterity. In stumbling upon this, I found it necessary to contribute the newest update, which I captioned: “Her wildest, happiest dreams come true after years of pursuit… she grins gratefully and begins packing her silly life treasures to begin the NEXT BIG ADVENTURE!”

In sorting my treasured scraps of paper, cards, letters and miscellaneous office supplies, I’ve stumbled upon notes from an old unhealthy work environment, countless to-do lists and goals written to myself, memories from past loves (both good and bad), and overwhelming evidence of the support and love in my life. I am grateful not only for where I’ve just landed but for all the crazy times I’ve had striving to attain this. You’ve come a long way, babe! Dreams come true, and I couldn’t be happier.

Amazon EC2, Node.js, tmux quick reference

Just a quick post so I remember what I did and how to do it again:

Set up an AWS EC2 instance:

Sign up for / log in to Amazon Web Services. When you get to the main screen full of icons, select EC2 (it stands for Elastic Compute Cloud — elastic because it’s easily expandable). Then click through many screens. This time we selected an Ubuntu 12.something 64-bit instance, and we chose the smallest (free-est) stuff possible; I believe it was called a micro instance. When you have to set up ports and such, the two important ones (if you’re planning on setting up some testing web host, for example) are ssh and http. Both are of type TCP, and it’s okay to let them accept all IPs for now. Next up, you have to set up your ssh key. Amazon will create one especially for you. Download it and put it in a safe place — don’t lose that key or you’ll have to redo everything from scratch. Okay, now that you’ve got your ssh key, keep clicking okay/launch until you get to a screen that shows a list of running instances (at this point you’ll only have one going). If you scroll to the right, you’ll see the Public DNS and Public IP for this running instance. You should also see a green circle under instance status that says it’s running. Excellent! Copy that Public IP address, for we’ll need it to ssh into our server.

Log into your shiny new instance:

Great! Now let’s ssh in. To do so, first you’ll need to change the permissions on your ssh key file to make it more exclusive.


Aside on File Permissions:

File permissions are a cool relic of cleverly storing large(ish) amounts of data in small ways. Permissions are three-digit numbers: the left-most digit holds the User’s permissions, the middle digit the Group’s, and the right-most digit everyone else’s (“Others”, sometimes called “world”). Each digit is built by adding up the permissions you’re granting (0 – nuthin’, 1 – execute, 2 – write, 4 – read), so read-write-execute (rwx) is 7 (4 + 2 + 1), while read-write (rw) is 6 (4 + 2), and so on. For our purposes, we (the main user) would like to be able to read, write and execute our ssh key (honestly, all we need is to read it, so 400 would do, but 700 is fine too), and we DON’T want anyone else to have any permissions on it — ssh will actually refuse to use a key that others can read. So we’ll set our permissions to 700 (7 for us, 0 for group, 0 for everyone else). To do so, go to the directory where you stored your ssh key and type “chmod 700 filename”, where filename is the name of your ssh key file (it likely ends in .pem). Done.
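
In other words (a quick sketch — the key’s filename here is made up):

```shell
# make a placeholder key file and lock it down so only we can touch it
touch my-aws-key.pem
chmod 700 my-aws-key.pem
ls -l my-aws-key.pem
# the permissions column should now read -rwx------
```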


… back to logging in to our EC2 instance

Now that our permissions are more exclusive, let’s ssh in: “ssh -i ./path/to/sshkeyfile ubuntu@ipaddress” where ubuntu happens to be our default username since we have an ubuntu instance and ipaddress is the public ip address listed in that table in the EC2 Management Console that we were in two paragraphs ago. If all goes according to plan, you should see your new shell prompt: ubuntu@ip-###-##-##-###:~$. Now you can install whatever you need to feel happy using apt-get:

apt-get-GOING:

Type “apt-get update” to update your default package manager (apt-get). [NOTE: you will likely need to sudo the apt-get commands.] First up, I installed git: “sudo apt-get install git”. Next up, I installed nodejs: “sudo apt-get install nodejs” and npm: “sudo apt-get install npm”. You get the idea. Now you should be able to “git clone https://github.com/yourUsername/yourRepo” and fetch yourself your project files. In order to run a server (such as node), you’ll likely have to set up environment variables (perhaps I’ll write a post on this someday soon, as I have recent mildly-hellish-but-ultimately-successful experience setting up env vars on heroku). To set environment variables in bash you could either type them all out (“a=1”; “echo $a” => 1 to confirm) then run your server, OR you could write a bash script that you call. Here’s how I did that:
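
Strung together, the whole install dance looks like this (the GitHub URL is a placeholder for your own repo):

```shell
# update the package lists, then install the goods
sudo apt-get update
sudo apt-get install git
sudo apt-get install nodejs
sudo apt-get install npm
# fetch your project files
git clone https://github.com/yourUsername/yourRepo
```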

lil’ bash script to set environment vars:

Your ubuntu instance comes with vim, so you can type “touch .env” to make the file, then “vim .env” to open it for editing.


Very quick Vim intro aside:

Vim in short: it has modes. Press “i” to begin editing (now you’re in INSERT mode, as you’ll see at the bottom), or the escape key to go back to COMMAND mode. Typing “:wq” in command mode will save your file and quit out of vim. That’s all that’s necessary here.


… back to a bash env script:

Now that you have your .env file open in vim for editing, type commands as you would in bash. You’ll likely want to set things like “PORT=80” and on a newline “CLIENT_SECRET=yourcrazyapistring”, etc. Once you’ve typed in those important things that your app will need to know to run, save and get out of vim (esc, “:wq”). Now, run your .env file by typing “source .env”. [NOTE: you can also run it with “bash .env” but that will run it inside a tiny bash instance that will run the given script then exit, without keeping what you’ve set as fact in the current instance of your bash shell, which is why we’re instead using the source command.] Grrreat! Now you can run your server (in my case, cd to the folder my app is in and type “node app.js”).
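
Putting that together, here’s a minimal sketch of the whole .env dance (the values are placeholders — use whatever your app actually needs):

```shell
# create a tiny .env script (you could also do this in vim, as above)
cat > .env <<'EOF'
PORT=80
CLIENT_SECRET=yourcrazyapistring
EOF

# run it in the CURRENT shell so the variables stick around ("source" is the bash spelling)
source .env
echo $PORT            # prints 80
echo $CLIENT_SECRET   # prints yourcrazyapistring
```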

On tmux

Okay, so far, so good, but what happens when we control-C and log out of our ubuntu server? It powers down! Or rather, once we log out of our bash session, whatever web server we’ve started in there will stop. So. We need a way to trick it into staying open even if we are not logged in: enter tmux. Tmux will let us begin a session and detach from it, letting it run by itself. [NOTE: at this point, if we’re really going to run a real website, we would do a lot of other things instead, such as install and use nginx, but that’s for another day.] So, how do we use tmux? First up, install it: “sudo apt-get install tmux”. Then run it: “tmux”, which gives us a cute little bash-within-a-bash from which we can do normal things. (To get out: type “exit”). Within tmux, use the command initiator key combo (control-b) to do things. For example, type control-b then “?” for the help. The cool part of tmux is that you can now detach from a session you have going by typing control-b then “d”. Now you have a detached session within which you could have your app server running. Try logging out of your ubuntu instance (control-d, or type “exit”) and back in (the ssh command from above) and then type “tmux attach” and you should be back into your detached session! How cool is that?!? (very cool). I think you can also name sessions and do a bunch of fancy stuff with tmux that I don’t know about. I did, however, find out how to list your active sessions (“tmux ls”) and kill them one-by-one (“tmux kill-session -t sessionnumber”) where sessionnumber is the number of the session listed when you type “tmux ls”. BAM!
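
For quick reference, here are the tmux moves from that paragraph in one place:

```shell
sudo apt-get install tmux    # install it
tmux                         # start a session: a cute bash-within-a-bash
# inside tmux, control-b is the command initiator:
#   control-b ?    show the help
#   control-b d    detach (your session keeps running without you!)
tmux attach                  # re-attach to your detached session
tmux ls                      # list your active sessions
tmux kill-session -t 0       # kill session 0 (numbers come from "tmux ls")
```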

SCP – Secure CoPy

Alright, one last tidbit: how to get files onto your new ec2 instance. Tricky. It seems that within terminal you are either looking at your own local machine or you are ssh’ed into another machine. Thankfully, there’s a secret bridge between the two: scp, which stands for secure copy. Here’s how it works: within terminal, in bash on your local machine, type “scp -i ./path/to/your/sshfile.pem ./path/to/file/youwanttocopy/onlocalmachine ubuntu@##.###.###.##:/home/ubuntu”. Let’s explain: “scp -i” means tell secure copy to use your identity file (your sshkey); “./path/to/your/sshfile.pem” the path to your ssh file on your local machine so that scp can connect securely to the remote computer; “./path/to/file/youwanttocopy/onlocalmachine” the path to the file (on your local machine) that you’d like to copy to the remote machine; “ubuntu@##.###.###.##:/home/ubuntu” remote username (default in my case is ubuntu) AT remote ip address, colon, existing file path on remote machine.
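
As a bonus tidbit I’m fairly sure of (though it’s not covered above): scp works in the other direction too — just flip the last two arguments to pull a file down from the remote machine:

```shell
# push a local file UP to the instance (paths and IP are placeholders):
scp -i ./path/to/your/sshfile.pem ./localfile.txt ubuntu@remote.ip.address:/home/ubuntu
# pull a remote file DOWN into your current local directory:
scp -i ./path/to/your/sshfile.pem ubuntu@remote.ip.address:/home/ubuntu/remotefile.txt .
```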

Okay, I think that’s all I learned from 9am – 10am today. Amazing!

An Itsy-Bitsy, Teeny-Weeny, Yellow Polka Dot … EventEmitter!

I recently attended M’s workshop on functional programming, in which I learned that I needed HELP. (M is a fantastic facilitator at Hacker School. Here is the link to the blog post she wrote about said functional programming workshop.) I knew my client-side code was a mess, but I was clueless as to why exactly it was so stinky and at a total loss about how to fix it. Her workshop simplified the concepts behind functional programming and made them approachable and (more importantly) applicable. I’m writing this post as a summary of what I learned and how I’ve recently started applying it.

First off, here are some of the tips and tricks I took away from this afternoon of functional programming examples:

Declare It!

Write DECLARATIVE code, not IMPERATIVE code. Her example was writing a function called “make-me-a-sandwich” versus writing some code that is a tedious list of the steps necessary to make a sandwich. In examining my recent code in this light, I found it full of tiny instructions that give the reader no overall sense of where it’s going and why. Plenty of room to chunk that shit up into declarative functions!
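
Here’s a tiny sketch of that idea (my own made-up sandwich code, not M’s):

```javascript
// imperative: a tedious list of steps; the reader has to infer the goal
var bread = ['slice', 'slice'];
var sandwich = [];
sandwich.push(bread[0]);
sandwich.push('cheese');
sandwich.push(bread[1]);

// declarative: the name says WHAT we're doing; the steps hide inside
var makeMeASandwich = function (filling) {
    return ['slice', filling, 'slice'];
};

console.log(makeMeASandwich('cheese')); // [ 'slice', 'cheese', 'slice' ]
```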

Avoid Mutating

Write functions that RETURN stuff without mutating what they are given. Functional functions take some parameters (say an array or string or object), use what’s within them (no referencing outside [especially not GLOBAL!] variables), and neatly return a modified COPY of what they were given. This is great, as you never end up writing over something you needed or modifying an outside variable unintentionally. Also, if you constantly return a clean copy of your starting data, it’s far easier to chain series of small tasks together, which makes it clear to the reader what you are doing with your code.
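
A little sketch of the difference (the function names are my own inventions):

```javascript
// mutating: sneaky! the caller's array gets changed out from under them
var addExclaimMutating = function (words) {
    for (var i = 0; i < words.length; i++) {
        words[i] = words[i] + '!';
    }
    return words;
};

// functional: return a modified COPY and leave the original alone
var addExclaim = function (words) {
    return words.map(function (word) {
        return word + '!';
    });
};

var greetings = ['hi', 'hello'];
var excited = addExclaim(greetings);
console.log(excited);   // [ 'hi!', 'hello!' ]
console.log(greetings); // [ 'hi', 'hello' ] -- untouched!
```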

Quit Iterating with For Loops

That is: use MAP, REDUCE, and FILTER. These functions are great! I had little experience using them, but they save you a bunch of hassle and make it very clear to the reader what you are doing and why. M’s great example of all the unrelated crap people throw into a for loop made me laugh at myself out loud (new acronym: LAMOL?). I do that all the time! I used to think I was being efficient, getting more done in one loop. But it’s far cleaner to map my change onto my array of items and get back a new, safe copy. I have questions about the potential Big O sacrifice in mapping over my data many times instead of doing a bunch of crap in a for loop all at once, but for my general purposes (handling a very small amount of data — generally loops under 20 items), the benefit in clarity certainly outweighs the loss in performance.
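
For example (a quick sketch of all three on one little array):

```javascript
var nums = [1, 2, 3, 4, 5];

// instead of one for loop doing three unrelated jobs...
var doubled = nums.map(function (n) { return n * 2; });          // [ 2, 4, 6, 8, 10 ]
var evens = nums.filter(function (n) { return n % 2 === 0; });   // [ 2, 4 ]
var sum = nums.reduce(function (total, n) { return total + n; }, 0); // 15
```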

Applying My New Knowledge to My Own Code

As always, much harder than expected. After M’s workshop I knew vaguely what had to be done, but in my client-side code, I have functions that handle JSON requests and responses, and functions that handle resultant DOM manipulation, and they must work together so intricately that I found myself with a mess of delicately interwoven spaghetti code. Appetizing, sure, but not pleasant to pick apart, and definitely not easy to make changes to.

After puzzling at the mess for an afternoon and evening, I sought M’s advice. She recommended implementing my own tiny client-side Event Emitting system so that my server-communication code could alert my DOM-manipulation code to changes without being dangerously interlaced. “Brilliant!” I thought. “Terrifying,” I thought. Implement my own Event Listener and Emitter?!? Are you crazy? Luckily, M showed me some example code, and demystified such a seemingly complex idea significantly. So, drumroll please, here it is in the flesh, my Itsy-Bitsy, Teeny-Weeny, Yellow Polka Dot Event Emitter:

var EventEmitter = function () {
    this.handlers = {}; 
    // we need to keep track of what actions (functions) are mapped to which named events
};

EventEmitter.prototype = {
    bind : function (event, fxn) { 
    // this is how we add "listeners" to certain events
	this.handlers[event] = this.handlers[event] || []; 
        // make sure we have an array in our handler, mapped to our event, to push our function into
	this.handlers[event].push(fxn); 
        // add our function to our event's array of functions
    },
    emit : function (event, data) {
        if (this.handlers[event]) {
        // if this event has any listeners bound to it
            this.handlers[event].forEach(function (fxn) {
            // for each of our functions mapped to our event
                fxn(data);
                // call the function with the data we emitted
            }, data);
            // forEach's optional second argument sets "this" inside the callback
        }
    }
};

var sayMessage = function (message) { 
// let's define a test function that takes a piece of data
    alert(message);
};

var emitter = new EventEmitter(); 
// instantiate a new EventEmitter

emitter.bind('open', sayMessage); 
// bind the sayMessage function to the event called "open". 
// this gets stored in our emitter's "handler" map.

emitter.emit('open','banana'); 
// let's emit an event called "open" with the data, "banana", and see what happens:

// => if you put all this in a window.onload() function and load it in your browser, it will alert "banana"!

Okay, so what’s cool about this, and how does it relate to functional programming? First off, it’s cool because now my server-message sending and delivering code can be completely separated from my DOM-manipulation code. This is important. These two chunks of code deal with completely different things, and we want to keep them in their own zones. Then we can make changes within them without worrying about fudging up the other. (For the non-technical, DOM stands for Document Object Model, which is some garbage way of saying my HTML or the text-y output and physical elements on my webpage that the user sees, in opposition to the pure data that is transferred to and from the server.)

Anyway, what was exciting about writing this bit of code was the function forEach(). I had not used that before, and my (NEW) instinct was to try to map my array of functions to my bit of data (though usually map works the other way — mapping a function to an array of data…). Thankfully, M recommended forEach(). Upon researching forEach(), I found that it takes this second argument, which the documentation defines as “the object to use as this when executing callback.”

What’s exciting is that several weeks ago I doubt I would have understood that. Now, and especially after talking about closures in the functional programming workshop, I understand much better: that parameter defines the object that “this” refers to inside the function(s) you are calling within your forEach loop. The data itself, though, isn’t passed magically at all: when we “emit” an event, emit() hands the data directly to each function mapped to that event in our handler map, as a plain old argument. In my example, it means my event emission code can attach whatever it wants, and through the map inside my event emitter and the forEach execution, my bound function (in this case called sayMessage) receives the data I want to give it (in this case “banana”). Amazing!
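
If you want to see that second argument doing its thing on its own, here’s a two-line experiment (my own toy example): it sets what “this” refers to inside the callback.

```javascript
// forEach's optional second argument becomes `this` inside the callback
['a', 'b'].forEach(function (letter) {
    console.log(this.prefix + letter);
}, { prefix: '>' });
// logs ">a" then ">b"
```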

An Ode to OAuth

I lied! Not an ode:
just a mere haiku or two
detailing OAuth.

I spent a whole day
paired with a clever friend on
trial and error.

Such a simple task:
download shots from Instagram.
Not easy at all!

Step One: REDIRECT
to Instagram’s own login.
On success, return.

From login response,
“GET” client’s code, build request:
H-T-T-P-S!

Add secret dev codes,
POST to port 4-4-3 with
concat-ed path name.

If you’re deemed worthy,
you’ll receive JSON response;
else days of headaches.

end poetry;

Using OAuth & Node to Access Instagram’s API

… quite a simple task, as it turns out, but nowhere could we find the exact sequence of delicate moves to make the magic happen. So here, for all future generations, I put it forth:

Assumed: you’ve got a node.js server up and running, likely using some handy library like Express (if not, no biggie, just see one of a thousand tutorials or perhaps a future blog post of mine); you’ve registered as a developer with Instagram (it’s super-simple: go to instagram.com/developer/, register your app & such). NOTE: Instagram WORKS with your redirect uri set to “http://localhost:3000” or whatever port you’re running on your local computer’s test server. That shocked the heck outta me, but it’s fine. Go figure and rejoice! From registration, you’ll need to take note of your super-secret client_id (a misnomer, really, since here you are the ‘client’ only insofar as you are requesting data from their servers — confusing because you think of your users as clients, bah!), and your super-super-secret client_secret (very secret!).

Once you’ve got that jazz, build a little link or page or something that a user can arrive on and click “go” or something to be redirected to Instagram to log in to their account (it is actually very handy that Instagram handles that stuff for us!). The url you’ll want to use for that link is:

"https://api.instagram.com/oauth/authorize/?client_id=" + client_id + "&redirect_uri=" + whateverYouSetAsYourRedirectURIwithInstagram + "&response_type=code"

Note: If you’re testing on your computer, your redirect_uri (whateverYouSetAsYourRedirectURIwithInstagram) might look like “http://localhost:3000”, assuming you’re running your node server on port 3000.

Next, the user will click that link, agree to give you their soul under Instagram’s oversight, then be redirected back to… your redirect link! Hopefully you gave Instagram a link to something on your server and you’ve set up your node server to handle that response. I did, so I’m cruisin’ happily at this point.

Great! User agreed and is back in your hands, probably at a url on your server that looks like “redirectURI?code=12345678901234567”, where redirectURI in my case was “http://localhost:3000/home” and code was some frequently-changing garble of many numbers. This is a “GET” request to your server, and the “?code=” is a parameter name, complete with a value (the bunch of numbers). You’ll want to grab that pile o’ numbers. You need it for your next request. (I know, right? ANOTHER request?!? yes. sorry.)

[For the cheaters: Node.js makes it super-simple to grab that code from the url. You can include a built-in node module called “url” and use it to parse that huge url for you and give you the bits you care about. Here’s how:]

var urlParser = require('url');
// "require" node module called url
...

app.get('/home', function (req, res) {
// somewhere inside your function that handles the "GET" request to your redirectURI page:
    ...
    var userCode = urlParser.parse(req.url).query.toString().replace(/code=/, ''); 
    // parse the requested url, pull out the query piece, make sure it's a string, strip off the "code=" part
    ...
});

Now you’re ready for the big league: REQUEST FOR ACCESS_TOKEN. In the Instagram API documentation, they say “POST a request to their server” and show an example using curl. If you’re coding in PHP or accessing data directly from your terminal or some shit, that’s great for you. If you want to fuss with a node-curl library just to do that through node, also have fun. I was not in either of those situations, and I KNEW there was a way to send an HTTPS POST to another server from my node.js server. I just KNEW it. (Okay, so I had read a lot of the Node documentation, and I did actually know it — not just some gastrointestinal hunch, unfortunately.) Anyway, it’s true, but the Node examples do not line up well with what you need to do in this situation. Here’s what has to happen:

var https = require('https');
// require 'https' module from node

var querystring = require('querystring');
// require 'querystring' module from node [NOTE: this is DIFFERENT than JSON.stringify!]

var sendData = querystring.stringify({ // build data object to send and turn it into a querystring
    'client_id' : YourClient_id, // given to you by Instagram as a developer
    'client_secret' : YourClient_secret, // given to you by Instagram as a developer, very secret!
    'grant_type' : 'authorization_code', // just type both of these strings literally
    'redirect_uri' : YourRedirectURI, // the one you set with Instagram
    'code' : userCode // the one we fetched from Instagram's first GET response above
});

var postOptions = {
    hostname: 'api.instagram.com',
    port: 443,
    method: 'POST',
    path: '/oauth/access_token',
    headers: {
        'Content-Length': sendData.length // get length of the data string you are sending
    }
};

var request = https.request(postOptions, function (response) { // prepare our request
    var receivedData = ''; // create a new, empty place to catch our response data
    response.setEncoding('utf8'); // if you don't set this, you get back a buffer of junk
    response.on('data', function (chunk) { // respond to the 'data' event by catching each chunk of data
        receivedData += chunk; // and adding it to our empty response basket
    });
    response.on('end', function () { // respond to the 'end' event
        receivedData = JSON.parse(receivedData); // by parsing the JSON we (hopefully) received
        console.log(receivedData); // for now, let's just log this to make sure it's working
    });
}).on('error', function (e) { // catch request problems
    console.log('https POST request error: ' + e); // log the error with a reminder to yourself
});
request.write(sendData); // ACTUALLY write the freakin' request object
request.end(); // send that sucker!

With any luck, this WILL work, and you should receive a JSON object that looks like this:

{ access_token: '1234567.8a90bcd.12ef3g4567890hijkl123m45678n90o1', // I made this up
    user: { 
        username: 'username',
        bio: 'the bio your user wrote',
        website: 'www.youruserswebsite.com',
        profile_picture: 'http://images.ak.instagram.com/profiles/profile_1234567_89ab_0123456789.jpg',
        full_name: 'User Name',
        id: '1234567' 
    } 
}

Now you can use that GOLDEN ACCESS_TOKEN to get ANY DATA YOU WANT! It’s amazing! And beyond that, Instagram’s API is quite lovely to work with. They use Apigee, and requests to the API are fairly simple and self-explanatory (or rather, their documentation about that is great).

That’s all for tonight, folks! Feel free to post your tales of joy or woe in working with OAuth. I’d love to hear that I’m not the only one who’s lost a day to those nitty-gritty implementation details…

Also, much credit owed to my fellow student and friend, N, who worked through this mess alongside me. We’ve learned so much from this yet-unfinished “simple” project. Stay tuned!

OSX TERMINAL FOR THE TERRIFIED

If you’re no coding pro, opening and using Terminal is a terrifying prospect. I was in your shoes less than a year ago. Fear not! It’s actually great, and now I use it more than Finder. I kid you not! Here’s a super-simple intro of commands:

the “prompt”

First up, the “prompt”. This is not a command, but I thought you ought to know what it is. It will likely look like “ComputerName:CurrentDirectory UserName$” or something. I’ve configured mine to be simpler, since I’m the only user and I already know the name of my computer. Mine looks like “~$”, which I’ll be using here, as it’s shorter and (ahem) prettier. The prompt just means “okay, user, I’m ready for you.” I imagine it saying “Bring it!” with a nod of its head every time I open my terminal or a command completes. Think what you will, it means “TYPE SOMETHING”.

cd

cd is the best. It means “Change Directory” or “go somewhere”. It’s harmless. You use it by typing “cd Directory/That/You/Want/”, though you can even leave off the trailing slash. cd is smart. You can also tab-complete: if you can’t remember the full name of whatever directory (in Finder we call directories “folders” — same thing) you’re looking for, start to type it and hit the tab key. If you’ve typed enough letters for the terminal to uniquely identify it, it’ll auto-fill. If not, hit tab twice and it’ll show you the options. Amazing! Anyway, first up, cd yourself around a bit. cd into your Downloads folder or your Applications folder. To go back up a folder, type “cd ../”. Cool, huh? Love it!

ls

ls “LiSts” whatever’s in the folder (directory) that you’re in. If you want it to show invisible stuff, type “ls -a”. Very handy.

pwd

pwd DOES NOT STAND FOR PASSWORD. Go figure. But know that. It stands for “Print Working Directory”. It’s not incredibly useful to me, as my prompt shows me where I’m at, but you could pipe its output into other commands, I suppose (I don’t yet know how to do that, I just know it can be done). I’m listing it here so you don’t sound like an idiot by calling it “password”. Geeze!

mv

mv means “Move”, dammit! Not you, just your files. To mv something, type “mv Where/It/Currently.is Where/You/Want.it”. Note, you can use this to rename a file, just go to the directory that the file is in (cd Into/DirectoryName) and type “mv oldname.txt newname.txt”. Really cool!

cp

cp means “CoPy”, and it works just like mv except the first parameter is where the shit you want to copy is now and the second parameter is where you want it to be copied to. *It will not move, delete or alter the original stuff!* So fear not.

rm

rm means “ReMove”, which means DELETE, so BE CAREFUL HERE. rm deletes files for good — there’s no Trash to fish them back out of. Use it like so: “rm toDelete.txt”. Bam! GONE. No gettin’ ‘er back.

mkdir

mkdir means what you’d expect, “MaKe DIRectory”. (Kidding, I would NOT have expected that, but it seems logical enough.) It makes a folder. Use: “mkdir NewFolderName”. Done. Created.

open

open “opens” a file. Cool.

sudo

sudo (pronounced soo-doo) means “SuperUser Do”. It means do this thing as the Root User of the computer. It gives Terminal permission to do necessary things at times, and scary things at other times. I would recommend against using it unless you know (at least vaguely) what you are trying to do and you don’t have the computer’s permission to do it without saying you’re the SuperUser (heh). It requires you to type in your computer’s password.
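
To tie it all together, here’s a little practice session you can type along with (the file names are just examples; “touch”, a bonus command, makes an empty file):

```shell
mkdir demo              # MaKe DIRectory: a new folder called demo
cd demo                 # Change Directory into it
pwd                     # Print Working Directory: ends in /demo
touch notes.txt         # make an empty file to play with
mv notes.txt ideas.txt  # rename by mv-ing within the same folder
cp ideas.txt backup.txt # CoPy: the original stays put
ls                      # LiSt: backup.txt  ideas.txt
rm backup.txt           # ReMove the copy -- careful, no gettin' 'er back!
cd ..                   # back up a folder
```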

That’s enough for now. Go forth and type!

Installing Emacs 24.3 on Mac OSX Mountain Lion (10.8) — ERRORS!

Good morning! I just installed Emacs 24.3 on a fairly clean install of Mac OS 10.8.5, and I came across (and remedied) several errors that someone else might benefit from knowing about. I may be one of those benefiters next time I have to install emacs, so this is not an entirely selfless post. ANNND, I enjoy writing about little tech triumphs that the non-technical could appreciate, so here goes:

Download emacs-24.3.tar.gz from http://mirrors.syringanetworks.net/gnu/emacs/

Sound terrifying? It is. Please note, this is likely the mirror closest to me (Brooklyn, NY), yours may differ. To find a mirror or get to this page, begin here: http://www.gnu.org/software/emacs/#Obtaining.
Once you’ve downloaded the gzipped file, double-click it in your downloads folder. (HA! I love telling people to double-click.) Great job! Next up, open the file called INSTALL (this is a text file that contains instructions — peruse it so you know I’m not lying as I tell you what to do!).

Two-Minute Interlude: OSX TERMINAL FOR THE TERRIFIED

(I moved this to its own post, as it’s a handy reference and should stand alone.) Back to our regularly scheduled program:

Begin Configurin’

Open a terminal window and cd to where you downloaded emacs. (NOTE: ermm… I maybe shoulda moved that emacs junk from my Downloads folder before installing, but I didn’t. Feel free to comment and correct me! Thanks!) For me this was my “Downloads” folder, so I typed “cd Downloads/emacs-24.3”. From there we’re going to run the configure script that comes with emacs. To do so, merely type “./configure”. A whole bunch of text will run down the screen like a waterfall of characters. It’s great. At the end, if you’re me, you’ll get some warnings that are inconvenient. [NOTE: do look at the INSTALL notes, especially if you get ERRORS or weird warnings! They say to peruse the warnings and fix the following: “wrong CPU and operating system names, wrong places for headers or libraries, missing libraries that you know are installed on your system, etc.” Mine were merely missing image support libraries, as I’ll explain below.] If you have minor errors like mine, I think you might be able to ignore these and just run “./configure” again, but that’s not what I did. Instead, I dealt with them:

Dealing with the Inconvenient Warnings about non-existent Image Libraries

As you’ll see in the INSTALL text that I told you to open first, the clever builders of emacs knew I might run into these problems and included a section on “Image Support Libraries”. There they link to the three libraries that I was missing (as listed in the warnings after running “./configure”). I was missing libjpeg, libgif and libtiff, listed in that order. I therefore went to http://www.ijg.org/, http://sourceforge.net/projects/giflib/, and http://www.libtiff.org/ in that order to fetch and install the necessary libraries. These all work the same way as emacs, though some are more explicitly instructioned than others. What you do:

  • Download stuff and double-click to unzip (heh!).
  • Navigate to that folder in your downloads folder (“cd Downloads/jpeg-9”, etc.)
  • Run “./configure”
  • Check for errors and warnings (TBC below)
  • If no errors, run “make”
  • If no errors, run “make install”
  • You should be done

IF YOU RUN INTO ERRORS, AS I DID: start googling. HA. No, it’s true. But if you installed jpeg-9 and giflib-4.2.3, and then hit a snag on tiff-4.0.3 that references an error in the file “/usr/local/include/jmorecfg.h” on line 263, expecting a “}” to end the “{“, like I did, then you’re in luck. See below:

Remedying the error in “/usr/local/include/jmorecfg.h”

It seems the problem here is that jpeg-9 defines TRUE and FALSE in that enum without checking whether some other header has already #defined them (in which case the preprocessor mangles the enum), so you need to add some guarding if-clauses. Change the problem line (#263 in my case) from:

typedef enum { FALSE = 0, TRUE = 1 } boolean;

to:

#ifndef TRUE
#ifndef FALSE
typedef enum { FALSE = 0, TRUE = 1 } boolean;
#else
typedef int boolean;
#endif
#else
typedef int boolean;
#endif

Mega thanks to pingemi, whose post I followed here: https://trac.macports.org/ticket/37982. This fixed the problem for me. Then try the configure; make; make install again for tiff. If this works, then:

Image Libraries Installed, time to “MAKE” emacs!

Once you’ve got that stuff ready, run “./configure” again and it should show no warnings. Then run “make” and, if no errors again, try “src/emacs -Q” to launch the freshly built emacs without loading any configuration. If something opens with a starting window, you win! Now go back to terminal and run “make install”. Once that’s done you can run “make clean” to tighten up the emacs install folder (which you may want to keep around for debugging). You’re done! Great job! Now good luck figuring out how to use it… (I have been directed to Emacs Prelude to get other configuration set up, but I’ve not yet done it, nor have I the faintest idea how to use emacs — heh.)

That’s all for today, folks! Hope you learned SOMETHING. I learned: a little more about running commands in bash and that those little black icons in finder represent bash executable files such as configure; not to be afraid to dig into some source code when there’s an error (I think that’s generally inadvisable, but in this case there were no really terrifying consequences); that I’m WAAAAYYY more confident dicking around in Terminal than I used to be — I now understood what the instructions were saying and wasn’t afraid to type them; a tiny eentsy bit of C code.

Hacker School Day 10: About TCP/IP & a Node.js Hello World

.Where. .to. .begin. .?.

I began today slightly bored of Nefario Teaches Typing. There’s a WHOLE lot more I could do with it, but I won’t learn any more by doing those things. I mean, sure, I’d get a little faster at some simple stuff, but I’m ready to be CHALLENGED, PUZZLED, PERPLEXED. So I put that on the shelf for now (I think I’ll still work on it in the evenings, as a little side-project) and sought some facilitator advice. M suggested that I either delve into a meaty client-side project such as a drum music simulator thing or that I try to write my own super-simple, crappy web server. Since I know little (NOTHING) of sockets and such, I figured that’d be a worthy pursuit. I proceeded to spend the rest of today reading up on Node.js, packets, sockets, TCP/IP, and other bizarre terms. From this, I think I’ve extracted a bit of knowledge. Please see below:

TCP/IP

The what? TCP stands for Transmission Control Protocol. That means nothing. What they should call it is: Reliable Communication Command Center, the RCCC. Because that’s what it does. TCP is a protocol (read: set of rules or instructions) for sending and receiving lil’ packets of info between and within computers. [NOTE: feel free to well-actually me in the comments: I won’t be offended, I’ll merely learn better.] Okay, so, from what I understand, TCP works with the IP (Internet Protocol, another set of instructions) to coordinate communication between computers in packet-switched networks.
“What’s a packet-switched network?” I’m glad you asked! It’s when things (computers in this case) transfer data in little packets instead of via a constant bit stream. This means: you can send some data then stop, then send more; the packets can travel via different routes to get to the same place; you can check for errors in delivery, and you can grab data from multiple hosts (computers serving up data). Packets are great! Let’s talk more about what a packet actually is:

Packets

Packets are like datagrams, but not quite. Datagrams are little chunks of data, packed light for travel. Datagrams contain some header info about where they’ve come from and where they’re headed and what type of data they contain, then they have some sort of body of data, called the data payload. I picture this as a Chinese Dragon head that barfs up addresses and types, stuck to a dump-truck of solid gold data bars. (It’s been a long day.) The thing about datagrams is they don’t pay attention to their own survival. They contain no way to tell whether they’ve reached their final destination, and no data about error-checking for their missing friends or limbs. No guarantees! Sent then forgotten. Packets, on the other hand, are souped-up dragon-trucks: in addition to the basic datagram info, packets also contain info about which of them belong together and in what order, and they (somehow) have ways to instruct their receiver about how to test them for errors (I think this has to do with some magical size computation that would detect missing data).
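That “magical size computation” is, as best I can tell, a checksum: a small number computed from the packet’s bytes that the receiver recomputes and compares — a mismatch means something got mangled in transit. Here’s my own little sketch (not from any spec text) of the 16-bit ones’-complement checksum idea that IP and TCP headers use:

```javascript
// Sketch (mine, not from any spec text) of a 16-bit ones'-complement
// checksum like the one carried in IP and TCP headers. The sender computes
// it over the bytes; the receiver recomputes it to detect corruption.
function checksum16(bytes) {
  let sum = 0;
  for (let i = 0; i < bytes.length; i += 2) {
    const hi = bytes[i];
    const lo = i + 1 < bytes.length ? bytes[i + 1] : 0; // pad odd lengths with 0
    sum += (hi << 8) | lo;                // add up 16-bit words
    sum = (sum & 0xffff) + (sum >> 16);   // fold any carry back into the low 16 bits
  }
  return ~sum & 0xffff;                   // ones' complement of the running total
}

// A few header-ish bytes:
console.log(checksum16([0x45, 0x00, 0x00, 0x1c]).toString(16)); // "bae3"
```

The neat part: if you append the checksum to the data and sum again, the result is zero, which is (roughly) how the receiver’s check works.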

IP Addresses

This is just cool. You know how when you get your IP address it just seems like a bunch of numbers? Well, it is, but there is a little logic to it. Everything in the header of these packets has a specified (by the protocol) amount of data it can take up, which I imagine makes it easier to parse and such. The amount allotted for an IP address is 4 bytes (each 8 bits long), which means the “address” is 4 “numbers” long, each separated by a period, each between 0 and 255 (which are all the numbers you can make with 8 bits: 00000000 – 11111111), hence something like 127.0.0.1. Cool, huh? I thought so. Also, IP addresses are the personal address of your computer. Something like that.
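Since each of the four dot-separated numbers is just one byte, the whole address is really a single 32-bit number written out byte by byte. A tiny JavaScript sketch to convince yourself (helper names are mine, not from any library):

```javascript
// Sketch: an IPv4 address is four bytes, so "127.0.0.1" packs into one
// 32-bit number and back again. (ipToInt / intToIp are my own names.)
function ipToInt(ip) {
  return ip.split('.')
    .map(Number)
    .reduce((acc, octet) => acc * 256 + octet, 0); // shift left a byte, add the next
}

function intToIp(n) {
  return [24, 16, 8, 0]
    .map((shift) => (n >>> shift) & 0xff) // peel off one byte at a time
    .join('.');
}

console.log(ipToInt('127.0.0.1')); // 2130706433
console.log(intToIp(2130706433));  // "127.0.0.1"
```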

Node.js

Today I read a bunch about node and learned how to write the teeniest web server. Here it is:

// require the http module in node:
var http = require('http');

// write a listener function that will do something (send a response, ideally) when the server receives a request:
var webListener = function (request, response) {
    if (request.method == 'GET') { // what to do if this is a GET request (most are!)
        console.log("Request was a GET"); // just tell the terminal that it was a get request for now
        console.log("URL: " + request.url); // tell me that url, please!
    }
    response.writeHead(
        200, // that code means the request was successful, tell the client "200, yay!"
        { 'Content-Type' : 'text/plain' } // We're going to send back some plain ol' text
    );
    response.end('Hello World!'); // end our response with the data (text) that we want to send back
}

// make a new instance of the server object and pass it our fancy listener function from above
var server = http.createServer(webListener);

// tell the server to listen up! (and give it a port number to listen on and a host to use -- localhost)
server.listen(8124, "127.0.0.1");

// tell me you're working, server!
console.log("Server running at http://127.0.0.1:8124");

Oh right, first install node (use homebrew!). Then put the above code in a file, and call it example.js. Save it somewhere, navigate to that place in Terminal (someday I’ll write a super-teeny Terminal intro), then type “node example.js”. BAM! Web server, up n’ runnin’.
Phew! That’s it. I’ll go into more detail tomorrow about what the particulars of each of these are, but in general, the client (your computer) connects to a host (now my computer) and they say “coo’, let’s talk.” So your client computer requests something from mine (you want to see my website, of course) at the address that is my computer, connected to the port that my server is listening to. My server gets that request and responds however I tell it to (in this case by sending you back “Hello World!”). Tomorrow’s goal: send back an html file. I guessed that I would have to read a file’s content into some sort of buffer and send it appended to a packet, and I think I’m right. I haven’t yet looked too far into the details of that, but I’ve got a general idea of how it’ll work.

Stay tuned, folks! And thanks for reading.

FORGOT! Here’s my illustrated understanding of TCP and IP: [photo]

Hacker School Day 9: SICP review, Sinatra solutions, JS productivity

I dabbled in too much today! But I think it’s okay. Just for today.

First up, I want to write about SICP*, and what I reviewed today. Hopefully that will help cement these complex thoughts a little more firmly into my malleable medium of brain. I started working through SICP again with another (SLOWER) group. This is more to my liking, as I merely want this to be an enlightening side-project.

SICP Exercise 1.5

(define (p) (p))

(define (test x y)
    (if (= x 0)
    0
    y))

Evaluate (test 0 (p))
This exercise tries to demonstrate the difference between Applicative-Order Evaluation and Normal-Order Evaluation. Let me try to explain: in Applicative-Order Evaluation (which is similar to how the Scheme interpreter actually evaluates code), you evaluate the parameters of the function or expression FIRST, then plug your results into the outer function(s). In Normal-Order Evaluation, you leave your parameters as junk-words and expand out the outer functions as much as possible. In fact, you expand everything down to simple operations as much as possible, THEN you go through and evaluate it all back up and simplify. So, in the case of the code above, approaching it Applicatively, we would FIRST evaluate 0 and (p) (since these are our parameters to the test function). When we try to evaluate (p), we are returned the call (p), which then returns the call (p), which then returns the call (p), etc. The program runs forever, just trying to get some real stuff to put into the parameters of test. Yikes! When we approach this Normally (ha! except not really — please note my use of capital “N”), we first expand out our function: (if (= 0 0) 0 (p)), and STOP. With if, since it’s a special form and not a normal procedure, we don’t plug everything in everywhere up front; we instead plug in our first test case and evaluate it, then either evaluate the success expression or evaluate the else expression. In this case, we get to (= 0 0) (does 0 equal 0? why, yes, it does!) and we return 0. Done. Easy. No infinite loop.
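The same trap can be sketched in JavaScript (my sketch, not from SICP). JS, like Scheme, is applicative-order, so calling test(0, p()) would evaluate p() first and never return; wrapping the dangerous argument in a thunk (a zero-argument function) delays evaluation, which roughly mimics normal order:

```javascript
// JS analogue of Exercise 1.5 (my sketch, not from the book).
function p() { return p(); } // loops forever IF it's ever called

// The thunk yThunk stands in for a normal-order "unevaluated" argument:
function testLazy(x, yThunk) {
  return x === 0 ? 0 : yThunk(); // only open the thunk if we actually need y
}

console.log(testLazy(0, () => p())); // 0 -- p is never called
```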

SICP Exercise 1.6

In this exercise, Alyssa P. Hacker and her friend Eva Lu Ator (the author’s names, not mine) decide that if is just a sub-set of cond (conditional) with only one test case, and write their own if function based on cond:

(define (new-if predicate then-clause else-clause)
    (cond (predicate then-clause)
    (else else-clause)))

This works fine if you’re just evaluating a few simple numbers, but if you use it for a process involving iteration, it may cause an endless loop. Here’s why: (SICP example slightly simplified)
Let’s write a function called improve-guess where we say “if (guess is good enough) (return guess) else (improve guess),” and improve-guess calls itself. If we use the normal special form if as defined by Scheme (or any language, really), the interpreter knows to only look at the else clause IF the first clause is false. Fine. Great. What’s the big deal? If we instead use our homemade function new-if, we have to plug (new-if (good enough?) (guess) (improve guess)) into our function. As I talked about above, if we were to evaluate this Applicatively, similar to the way the Scheme interpreter does, we would FIRST evaluate our parameters before doing the outer function stuff, which means we would FIRST evaluate the parameter “improve guess”, which means we would call our function all over again, plug in the same parameters, evaluate them (which would mean evaluating “improve guess”, which would mean calling our function yet again, plugging in the parameters, evaluating them (which would mean …)). And if you run this code in the DrRacket interpreter (in which I declare the language to be Scheme), it only quits when it’s run out of its allotted memory. Ha! SO. THAT is why some functions are built into languages with SPECIAL instructions, such as IF, AND, OR. Take that, Alyssa P. Hacker and Eva Lu Ator! I guess that’s whatcha get for tryin’ tah be clever.
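Here’s a rough JavaScript analogue of new-if (my sketch, not the book’s code): because newIf is an ordinary function, BOTH branches get evaluated before it ever runs, while the built-in conditional only evaluates the branch it takes:

```javascript
// A home-made "if" as an ordinary function -- both branches are evaluated
// before the call, just like applicative-order evaluation of new-if.
function newIf(predicate, thenValue, elseValue) {
  return predicate ? thenValue : elseValue;
}

let log = [];
function trace(label, value) { log.push(label); return value; } // records what ran

// The built-in conditional only evaluates the branch it takes:
const a = true ? trace('then', 1) : trace('else', 2);
console.log(log); // ['then']

// newIf evaluates both arguments up front, taken branch or not:
log = [];
const b = newIf(true, trace('then', 1), trace('else', 2));
console.log(log); // ['then', 'else']
```

If those arguments were recursive calls, as in the SICP example, that eager “else” evaluation is exactly what recurses forever.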

Sinatra!

Not Frank. The other one. Less dancy, unfortunately. This one is a light-weight Ruby web framework. Anyway, I used it for my tiny gmail filter maker about a month ago, so when a classmate was struggling with it I offered to lend an eye. While we didn’t make leaps and bounds of progress, it challenged us both to think harder about GETs and POSTs and how to see which of our parameters were getting properly delivered. Ultimately, I think she’ll need to switch to session variables, which I would love to learn how to do with Ruby and Sinatra, but I also had to get back to work on:

Nefario Teaches Typing

… my typing game. Progress so far: added the beginnings of a keyboard, changed the frame rate of the character’s animation, added scoring, added pause and resume. Next up: do more with the keyboard, and pull some of the html-building out of the JavaScript so it can just live in the html the way I want (will try to do this as a new “branch” so that I can go back to it… that will be an experiment/learning opportunity with git). I also need to refine the “rules” so that the level can be won or lost and is somewhat tailored to the player’s typing speed and accuracy. So much to do!

*SICP = Structure and Interpretation of Computer Programs, a fundamental work about what it means to write programs, used at MIT, “changed my life,” blah, blah, blah. It’s actually pretty cool, though quite dense.

Hacker School Day 8.Friday: On which I “implemented” a “stack” in JavaScript!

Yes!!!

This afternoon, in a matter of just a few hours, I learned:

  • What is meant by a “stack”: a data structure in which you pile things on top, much like books. These are not meant for accessing data from the earliest elements — they are intended for pushing (adding items to the top) and popping (taking items off the top).
  • What it means to “implement” your own “__________” (in this case a stack): build your own version of the thing, ideally using fewer built-in functions or structures than would be handy. (See below for an example.)
  • What a “linked list” is: (sorta) a data structure in which you store each element and its “pointer” to the element before it. This means in order to get from element “a” at the beginning of a stack, to element “f” that you’ve stacked up to now, you’ve got to go from “f” to “e”, then from “e” to “d”, etc., until you get back to “a”. Not sure what situations this would be super-helpful for (same way I feel about a stack), but I guess it’s good if you only need the most recent few elements? (Comments, enlightenment, and such are most welcome!)
  • Node is easy to install with homebrew: literally (first remember to update and doctor your brew):
    ~$ brew update
    ~$ brew doctor
    ~$ brew install node

    DONE! Now I can run js files in the terminal!

On “Implementing” a “Stack”:

Okay, so in JavaScript (as in most higher-level programming languages), there are built-in tools that do the stack-y things you need to do already, so a Super Cheaty Implementation of a stack in JS might be something like this:

function CheatingStack () {
	this.contents = [];
}
CheatingStack.prototype.push = function (item) {
	this.contents.push(item);
};
CheatingStack.prototype.pop = function () {
	return this.contents.pop();
};
CheatingStack.prototype.isEmpty = function () {
	return (this.contents.length === 0);
};

That makes use of the built-in array methods push() and pop(), which put an item on and pop an item off the stack, respectively.

That sort of implementation is neither challenging nor interesting, so naturally I had to try harder. The facilitator, AO, who was helping me with this hinted that soon in SICP (the fundamental book of computer programming that I’m reading), the author will break all data down into pairs, and he showed me how pairs could just be “implemented” in JavaScript as an object with two parameters and two properties. From there, with some on-paper reasoning help from my lovely fellow student RS, I figured it out! I then tested it using console.log(), but I would like to implement more unit-testing methods that AO demonstrated. The trick (for the non-technical readers that I someday hope to have) is to create a tiny “Item” object that merely stores a value and a “pointer” (in this case, the item that comes before it). Then in your stack when you push a new element in, you make a new “item” object for it containing itself as the value and the last head object of your list as what it points to. All you have to track in your “stack” is what object you are currently pointing to. Then when you want to pop an item off the top of your list, you grab that head object, and fetch from inside it the object it’s set to point to and make that your new head. In my initial round of code, I was also storing an extra property called “prev” in my stack, but another facilitator, AK, kindly showed me that was entirely unnecessary! (Another great reason to show someone your code immediately.) For the technically inclined and curious, here’s a link to the code: Implementation of a Stack in JavaScript (gist).
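Here’s a sketch of the approach described above (the real thing lives in the linked gist; the names Item and pointsTo here are mine): each pushed value gets wrapped in a tiny Item that points at the previous top, so the stack itself only needs to remember its current head.

```javascript
// A tiny wrapper that stores a value plus a pointer to the item below it:
function Item(value, pointsTo) {
	this.value = value;
	this.pointsTo = pointsTo; // the item that was on top before this one
}

function Stack() {
	this.head = null; // an empty stack points at nothing
}
Stack.prototype.push = function (value) {
	this.head = new Item(value, this.head); // the new item points at the old head
};
Stack.prototype.pop = function () {
	if (this.head === null) return undefined; // nothing to pop
	var value = this.head.value;
	this.head = this.head.pointsTo; // the item below becomes the new head
	return value;
};
Stack.prototype.isEmpty = function () {
	return this.head === null;
};
```

No arrays, no built-in push/pop — just a chain of little objects, which is exactly the linked-list idea from the bullet list above.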