Monthly Archives: November 2013

Amazon EC2, Nodejs, Tmux quick reference

Just a quick post so I remember what I did and how to do it again:

Set up an AWS EC2 instance:

Sign up for / log in to Amazon Web Services. When you get to the main screen full of icons, select EC2 (stands for Elastic Compute Cloud — elastic because it’s easily expandable). Then click through many screens. This time we selected an Ubuntu 12.something 64-bit instance, and we chose the smallest (free-est) stuff possible; I believe it was called a micro instance or something. When you have to set up ports and such, the two important ones (if you’re planning on setting up some testing web host, for example) are ssh and http. Both are of type TCP, and it’s okay to let them accept all IPs for now. Next up, I think you have to set up your ssh key. Amazon will create one especially for you. Download it and put it in a safe place. Don’t lose that key or you’ll have to redo everything from scratch. Okay, now that you’ve got your ssh key, keep clicking okay/launch until you get to a screen that shows a list of running instances (at this point you only have one going). If you scroll to the right, you’ll see the Public DNS and Public IP for this running instance. You should also see a green circle under instance status that says it’s running. Excellent! Perhaps copy that Public IP address, for we’ll need it to ssh into our server.

Log into your shiny new instance:

Great! Now let’s ssh in. To do so, first you’ll need to change the permissions on your ssh key file to make it more exclusive.

Aside on File Permissions:

File permissions are a cool relic of cleverly storing large(ish) amounts of data in small ways. Permissions are written as three-digit numbers: the left-most digit holds the User’s permissions, the middle digit the Group’s, and the right-most digit everyone else’s (“Others”, sometimes called “world”). Each digit is made by adding up the permissions you are granting (0 – nuthin’, 1 – execute, 2 – write, 4 – read), so read-write-execute (rwx) is 7 (4 + 2 + 1), while read-write (rw) is 6 (4 + 2), and so on. For our purposes, we (the main user) would like to be able to read, write and execute our ssh key (honestly, all we need to be able to do is read it, but this is fine too), and we DON’T want others to have any permissions on it — in fact, ssh will refuse to use a private key that other users can read — so we’ll set our permissions to 700 (7 for us, 0 for group, 0 for everyone else). To do so, we go to the directory that we stored our ssh key in and type “chmod 700 filename”, where filename likely ends in .pem and is the name of our ssh key file. Done.

… back to logging in to our EC2 instance

Now that our permissions are more exclusive, let’s ssh in: “ssh -i ./path/to/sshkeyfile ubuntu@ipaddress” where ubuntu happens to be our default username since we have an ubuntu instance and ipaddress is the public ip address listed in that table in the EC2 Management Console that we were in two paragraphs ago. If all goes according to plan, you should see your new shell prompt: ubuntu@ip-###-##-##-###:~$. Now you can install whatever you need to feel happy using apt-get:


Type “apt-get update” to refresh the package lists for apt-get (your default package manager). [NOTE: you will likely need to sudo the apt-get commands.] First up, I installed git: “sudo apt-get install git”. Next up, I installed nodejs: “sudo apt-get install nodejs” and npm: “sudo apt-get install npm”. You get the idea. Now you should be able to “git clone” and fetch yourself your project files. In order to run a server (such as node), you’ll likely have to set up environment variables (perhaps I’ll write a post on this someday soon, as I have recent mildly-hellish-but-ultimately-successful experience setting up env vars on heroku). To set environment variables in bash you could either type them all out (“a=1”; “echo $a” => 1 to confirm) then run your server, OR you could write a bash script that you call. Here’s how I did that:

lil’ bash script to set environment vars:

Your ubuntu instance comes with vim, so you can type “touch .env” to make the file, then “vim .env” to open it for editing.

Very quick Vim intro aside:

Vim in short: it has modes. Press “i” to begin editing (now you’re in INSERT mode, as you’ll see at the bottom), or the escape key to go back to COMMAND mode. Typing “:wq” in command mode will save your file and quit out of vim. That’s all that’s necessary here.

… back to a bash env script:

Now that you have your .env file open in vim for editing, type commands as you would in bash. You’ll likely want to set things like “PORT=80” and, on a new line, “CLIENT_SECRET=yourcrazyapistring”, etc. Once you’ve typed in those important things that your app will need to know to run, save and get out of vim (esc, “:wq”). Now, run your .env file by typing “source .env”. [NOTE: you could also run it with “bash .env”, but that would run it inside a tiny child bash instance that runs the given script and then exits, throwing away everything you’ve set instead of keeping it in the current instance of your bash shell — which is why we’re using the source command.] Grrreat! Now you can run your server (in my case, cd to the folder my app is in and type “node app.js”).
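For completeness, here’s roughly how a node app picks up what we just set — a sketch using the variable names from the .env example above (app.js is whatever your app file happens to be):

```javascript
// app.js (sketch) — node reads environment variables off process.env
var port = Number(process.env.PORT) || 3000;  // falls back to 3000 if PORT isn't set
var secret = process.env.CLIENT_SECRET || ''; // empty string if you forgot to source .env

console.log('would listen on port ' + port);
```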

On tmux

Okay, so far, so good, but what happens when we control-C our server and log out of our ubuntu instance? It powers down! Or rather, once we log out of our bash session, whatever web server we’ve started in there will stop. So. We need a way to trick it into staying open even if we are not logged in: enter tmux. Tmux will let us begin a session and detach from it, letting it run by itself. [NOTE: at this point, if we’re really going to run a real website, we would do a lot of other things instead, such as install and use nginx, but that’s for another day.] So, how do we use tmux? First up, install it: “sudo apt-get install tmux”. Then run it: “tmux”, which gives us a cute little bash-within-a-bash from which we can do normal things. (To get out: type “exit”.) Within tmux, use the command initiator key combo (control-b) to do things. For example, type control-b then “?” for the help. The cool part of tmux is that you can now detach from a session you have going by typing control-b then “d”. Now you have a detached session within which you could have your app server running. Try logging out of your ubuntu instance (type “exit” or hit control-d) and back in (the ssh command from above), then type “tmux attach” and you should be back in your detached session! How cool is that?!? (very cool). I think you can also name sessions and do a bunch of fancy stuff with tmux that I don’t know about. I did, however, find out how to list your active sessions (“tmux ls”) and kill them one-by-one (“tmux kill-session -t sessionnumber”), where sessionnumber is the number of the session listed when you type “tmux ls”. BAM!

SCP – Secure CoPy

Alright, one last tidbit: how to get files onto your new ec2 instance. Tricky. It seems that within terminal you are either looking at your own local machine or you are ssh’ed into another machine. Thankfully, there’s a secret bridge between the two: scp, which stands for secure copy. Here’s how it works: within terminal, in bash on your local machine, type “scp -i ./path/to/your/sshfile.pem ./path/to/file/youwanttocopy/onlocalmachine ubuntu@##.###.###.##:/home/ubuntu”. Let’s explain: “scp -i” means tell secure copy to use your identity file (your sshkey); “./path/to/your/sshfile.pem” the path to your ssh file on your local machine so that scp can connect securely to the remote computer; “./path/to/file/youwanttocopy/onlocalmachine” the path to the file (on your local machine) that you’d like to copy to the remote machine; “ubuntu@##.###.###.##:/home/ubuntu” remote username (default in my case is ubuntu) AT remote ip address, colon, existing file path on remote machine.

Okay, I think that’s all I learned from 9am – 10am today. Amazing!

An Itsy-Bitsy, Teeny-Weeny, Yellow Polka Dot … EventEmitter!

I recently attended M’s workshop on functional programming, in which I learned that I needed HELP. (M is a fantastic facilitator at Hacker School. Here is the link to the blog post she wrote about said functional programming workshop.) I knew my client-side code was a mess, but I was clueless as to why exactly it was so stinky and at a total loss about how to fix it. Her workshop simplified the concepts behind functional programming, made them approachable, and (more importantly) applicable. I’m writing this post as a summary of what I learned and how I’ve recently started applying it.

First off, here are some of the tips and tricks I took away from this afternoon of functional programming examples:

Declare It!

Write DECLARATIVE code, not IMPERATIVE code. Her example was writing a function called “make-me-a-sandwich” versus writing some code that is a tedious list of the steps necessary to make a sandwich. In examining my recent code in this light, I found it full of tiny instructions that give the reader no overall sense of where it’s going and why. Plenty of room to chunk that shit up into declarative functions!

Avoid Mutating

Write functions that RETURN stuff without mutating what they are given. Functional functions take some parameters (say an array or string or object), use what’s within them (no referencing outside [especially not GLOBAL!] variables), and neatly return a modified COPY of what they were given. This is great, as you never end up writing over something you needed or modifying an outside variable unintentionally. Also, if you constantly return a clean copy of your starting data, it’s far easier to chain a series of small tasks together, which makes it clear to the reader what you are doing with your code.

Quit Iterating with For Loops

I.e., use MAP, REDUCE, and FILTER. These functions are great! I had little experience using them, but they save you a bunch of hassle and make it very clear to the reader what you are doing and why. M’s great example of all the unrelated crap people throw into a for loop made me laugh at myself out loud (new acronym: LAMOL?). I do that all the time! I used to think I was being efficient, getting more done in one loop. But it’s far cleaner to map my change onto my array of items, and it returns me a new, safe copy. I have questions about the potential Big O sacrifice in mapping over my data several times instead of doing a bunch of crap in one for loop (though strictly it’s still O(n) — a few passes over n items just means a bigger constant factor), but for my general purposes (handling a very small amount of data — generally loops under 20 items), the benefit in clarity certainly outweighs any loss in performance.
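To make that concrete, here’s a little before-and-after sketch (the price data is invented):

```javascript
// invented data: some prices in cents
var prices = [199, 250, 1099, 75];

// imperative version: one for loop doing two unrelated jobs at once
var discountedOverADollar = [];
for (var i = 0; i < prices.length; i++) {
    var discounted = prices[i] * 0.9;
    if (discounted > 100) {
        discountedOverADollar.push(discounted);
    }
}

// declarative version: each step says what it does, and prices is never mutated
var declarative = prices
    .map(function (cents) { return cents * 0.9; })     // apply the discount
    .filter(function (cents) { return cents > 100; }); // keep only items over a dollar

// reduce folds the list down to a single value, e.g. a total
var total = declarative.reduce(function (sum, cents) { return sum + cents; }, 0);

console.log(declarative); // same contents as discountedOverADollar
console.log(total);
```

Same result either way, but each line of the declarative version can be read (and changed) on its own.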

Applying My New Knowledge to My Own Code

As always, much harder than expected. After M’s workshop I knew vaguely what had to be done, but in my client-side code, I have functions that handle JSON requests and responses, and functions that handle resultant DOM manipulation, and they must work together so intricately that I found myself with a mess of delicately interwoven spaghetti code. Appetizing, sure, but not pleasant to pick apart, and definitely not easy to make changes to.

After puzzling at the mess for an afternoon and evening, I sought M’s advice. She recommended implementing my own tiny client-side Event Emitting system so that my server-communication code could alert my DOM-manipulation code to changes without being dangerously interlaced. “Brilliant!” I thought. “Terrifying,” I thought. Implement my own Event Listener and Emitter?!? Are you crazy? Luckily, M showed me some example code, and demystified such a seemingly complex idea significantly. So, drumroll please, here it is in the flesh, my Itsy-Bitsy, Teeny-Weeny, Yellow Polka Dot Event Emitter:

var EventEmitter = function () {
    this.handlers = {};
    // we need to keep track of what actions (functions) are mapped to which named events
};

EventEmitter.prototype = {
    bind : function (event, fxn) {
        // this is how we add "listeners" to certain events
        this.handlers[event] = this.handlers[event] || [];
        // make sure we have an array in our handler, mapped to our event, to push our function into
        this.handlers[event].push(fxn);
        // add our function to our event's array of functions
    },

    emit : function (event, data) {
        if (this.handlers[event]) {
        // if this event has listeners bound to it
        // (we check existence — comparing against a fresh [] with !== would always be true)
            this.handlers[event].forEach(function (fxn) {
            // for each of our functions mapped to our event
                fxn(data);
                // do the function on the data we emitted
            });
            // (forEach can also take a second parameter to use as "this" inside the
            // callback, but here the inner function reaches data through the closure)
        }
    }
};

var sayMessage = function (message) {
    // let's define a test function that takes a piece of data
    alert(message);
};

var emitter = new EventEmitter();
// instantiate a new EventEmitter

emitter.bind('open', sayMessage);
// bind the sayMessage function to the event called "open".
// this gets stored in our emitter's "handler" map.

emitter.emit('open', 'banana');
// let's emit an event called "open" with the data, "banana", and see what happens:

// => if you put all this in a window.onload() function and load it in your browser, it will alert "banana"!

Okay, so what’s cool about this, and how does it relate to functional programming? First off, it’s cool because now my server-message sending and delivering code can be completely separated from my DOM-manipulation code. This is important. These two chunks of code deal with completely different things, and we want to keep them in their own zones. Then we can make changes within them without worrying about fudging up the other. (For the non-technical, DOM stands for Document Object Model, which is some garbage way of saying my HTML or the text-y output and physical elements on my webpage that the user sees, as opposed to the pure data that is transferred to and from the server.)

Anyway, what was exciting about writing this bit of code was the function forEach(). I had not used that before, and my (NEW) instinct was to try to map my array of functions to my bit of data (though usually map works the other way — mapping a function to an array of data…). Thankfully, M recommended forEach(). Upon researching forEach(), I found that it takes this second argument, which the documentation defines as “the object to use as this when executing callback.”

What’s exciting is that several weeks ago I doubt I would have understood that. Now, and especially after talking about closures in the functional programming workshop, I understand much better: that second parameter defines what “this” refers to inside the callback you hand to forEach. And closures are what let us “emit” an event along with some data that is (not magically!) passed to the function(s) mapped to that event in our handler map: the inner function given to forEach can still see emit’s data argument, so it can hand that data to each bound function. In my example, it means my event emission code can attach whatever it wants, and through the map inside my event emitter and the forEach execution, my bound function (in this case called sayMessage) can access the data I want to give it (in this case “banana”). Amazing!
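If you’d like to see that second forEach argument doing its thing in isolation, here’s a tiny experiment (the names are invented):

```javascript
// forEach's optional second argument sets `this` inside the callback
var basket = { contents: [] };

['apple', 'banana'].forEach(function (fruit) {
    this.contents.push(fruit); // `this` is basket here, because we passed it below
}, basket);

console.log(basket.contents); // [ 'apple', 'banana' ]
```

(In the emitter above, the emitted data also reaches the handlers through the closure over emit’s arguments — either route works.)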

An Ode to OAuth

I lied! Not an ode:
just a mere haiku or two
detailing OAuth.

I spent a whole day
paired with a clever friend on
trial and error.

Such a simple task:
download shots from Instagram.
Not easy at all!

to Instagram’s own login.
On success, return.

From login response,
“GET” client’s code, build request:

Add secret dev codes,
POST to port 4-4-3 with
concat-ed path name.

If you’re deemed worthy,
you’ll receive JSON response;
else days of headaches.

end poetry;

Using OAuth & Node to Access Instagram’s API

… quite a simple task, as it turns out, but nowhere could we find the exact sequence of delicate moves to make the magic happen. So here, for all future generations, I put it forth:

Assumed: you’ve got a node.js server up and running, likely using some handy library like Express (if not, no biggie, just see one of a thousand tutorials or perhaps a future blog post of mine); you’ve registered as a developer with Instagram (it’s super-simple: go to, register your app & such). NOTE: Instagram WORKS with your redirect uri set to “http://localhost:3000” or whatever port you’re running on your local computer’s test server. That shocked the heck outta me, but it’s fine. Go figure and rejoice! From registration, you’ll need to take note of your super-secret client_id (a misnomer, really, as in only this case are you the ‘client’ insofar as you are requesting data from their servers — confusing because you think of your users as clients, bah!), and your super-super-secret client_secret (very secret!).

Once you’ve got that jazz, build a little link or page that a user can arrive on and click “go” to be redirected to Instagram to log in to their account (it is actually very handy that Instagram handles that stuff for us!). The url you’ll want to use for that link is:

"" + client_id + "&redirect_uri=" + whateverYouSetAsYourRedirectURIwithInstagram + "&response_type=code"

Note: If you’re testing on your computer, your redirect_uri (whateverYouSetAsYourRedirectURIwithInstagram) might look like “http://localhost:3000”, assuming you’re running your node server on port 3000.

Next, the user will click that link, agree to give you their soul under Instagram’s oversight, then be redirected back to… your redirect link! Hopefully you gave Instagram a link to something on your server and you’ve set up your node server to handle that response. I did, so I’m cruisin’ happily at this point.

Great! User agreed and is back in your hands, probably at a url on your server that looks like “redirectURI?code=12345678901234567”, where redirectURI in my case was “http://localhost:3000/home” and code was some frequently-changing garble of many numbers. This is a “GET” request to your server, and the “?code=” is a parameter name, complete with a value (the bunch of numbers). You’ll want to grab that pile o’ numbers. You need it for your next request. (I know, right? ANOTHER request?!? yes. sorry.)

[For the cheaters: Node.js makes it super-simple to grab that code from the url. You can include a built-in node module called “url” and use it to parse that huge url for you and give you the bits you care about. Here’s how:]

var urlParser = require('url');
// "require" node module called url

app.get('/home', function (req, res) {
// somewhere inside your function that handles the "GET" request to your redirectURI page:
    var userCode = urlParser.parse(req.url).query.toString().replace(/code=/, '');
    // parse the requested url, pull out the query piece, make sure it's a string, strip off the "code=" part
});

Now you’re ready for the big league: REQUEST FOR ACCESS_TOKEN. In the Instagram API documentation, they say “POST a request to their server” and show an example using CURL. If you’re coding in php or accessing data directly from your terminal or some shit, that’s great for you. If you want to fuss with a node-curl library just to do that through node, also have fun. I was not in either of those situations, and I KNEW there was a way to send an HTTPS POST to another server from my node.js server. I just KNEW it. (Okay, so I had read a lot of the Node documentation, and I did actually know it — not just some gastrointestinal hunch, unfortunately.) Anyway, it’s true, but the Node examples do not line up well with what you need to do in this situation. Here’s what has to happen:

var https = require('https');
// require 'https' module from node

var querystring = require('querystring');
// require 'querystring' module from node [NOTE: this is DIFFERENT than JSON.stringify!]

var sendData = querystring.stringify({ // build data object to send and turn it into a querystring
    'client_id' : YourClient_id, // given to you by Instagram as a developer
    'client_secret' : YourClient_secret, // given to you by Instagram as a developer, very secret!
    'grant_type' : 'authorization_code', // just type both of these strings literally
    'redirect_uri' : YourRedirectURI, // the one you set with Instagram
    'code' : userCode // the one we fetched from Instagram's first GET response above
});

var postOptions = {
    hostname: '',
    port: 443,
    method: 'POST',
    path: '/oauth/access_token',
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded', // we're sending a form-style querystring
        'Content-Length': sendData.length // get length of the data string you are sending
    }
};

var request = https.request(postOptions, function (response) { // prepare our request
    var receivedData = ''; // create a new, empty place to catch our response data
    response.setEncoding('utf8'); // if you don't set this, you get back a buffer of junk
    response.on('data', function (chunk) { // respond to the 'data' event by catching each chunk of data
        receivedData += chunk; // and adding it to our empty response basket
    });
    response.on('end', function () { // respond to the 'end' event
        receivedData = JSON.parse(receivedData); // by parsing the JSON we (hopefully) received
        console.log(receivedData); // for now, let's just log this to make sure it's working
    });
}).on('error', function (e) { // catch request problems
    console.log('https POST request error: ' + e); // log the error with a reminder to yourself
});

request.write(sendData); // ACTUALLY write the freakin' request object
request.end(); // send that sucker!

With any luck, this WILL work, and you should receive a JSON object that looks like this:

{ access_token: '1234567.8a90bcd.12ef3g4567890hijkl123m45678n90o1', // I made this up
    user: { 
        username: 'username',
        bio: 'the bio your user wrote',
        website: '',
        profile_picture: '',
        full_name: 'User Name',
        id: '1234567' 
    }
}

Now you can use that GOLDEN ACCESS_TOKEN to get ANY DATA YOU WANT! It’s amazing! And beyond that, Instagram’s API is quite lovely to work with. They use Apigee, and requests to the API are fairly simple and self-explanatory (or rather, their documentation about that is great).

That’s all for tonight, folks! Feel free to post your tales of joy or woe in working with OAuth. I’d love to hear that I’m not the only one who’s lost a day to those nitty-gritty implementation details…

Also, much credit owed to my fellow student and friend, N, who worked through this mess alongside me. We’ve learned so much from this yet-unfinished “simple” project. Stay tuned!


A Super-Simple Intro to Terminal Commands
If you’re no coding pro, opening and using Terminal is a terrifying prospect. I was in your shoes less than a year ago. Fear not! It’s actually great, and now I use it more than Finder. I kid you not! Here’s a super-simple intro of commands:

the “prompt”

First up, the “prompt”. This is not a command, but I thought you ought to know what it is. It will likely look like “ComputerName:CurrentDirectory UserName$” or something. I’ve configured mine to be simpler, since I’m the only user and I already know the name of my computer. Mine looks like “~$”, which I’ll be using here, as it’s shorter and (ahem) prettier. The prompt just means “okay, user, I’m ready for you.” I imagine it saying “Bring it!” with a nod of its head every time I open my terminal or a command completes. Think what you will, it means “TYPE SOMETHING”.

cd

cd is the best. It means “Change Directory” or “go somewhere”. It’s harmless. You use it by typing “cd Directory/That/You/Want/” (you can even leave off the trailing slash; cd is smart). You can also tab-complete: if you can’t remember the full name of whatever directory you’re looking for (in Finder we call directories “folders” — same thing), start to type it and hit the tab key. If you’ve typed enough letters for the terminal to uniquely identify it, it’ll auto-fill. If not, hit tab twice and it’ll show you the options. Amazing! Anyway, first up, cd yourself around a bit. cd into your Downloads folder or your Applications folder. To go back up a folder, type “cd ../”. Cool, huh? Love it!

ls

ls “LiSts” whatever’s in the folder (directory) that you’re in. If you want it to show invisible (hidden) stuff too, type “ls -a”. Very handy.

pwd

pwd DOES NOT STAND FOR PASSWORD. Go figure. But know that. It stands for “Print Working Directory”. It’s not incredibly useful to me, as my prompt shows me where I’m at, but you could pipe its output into other commands, I suppose (I don’t yet know how to do that, I just know it can be done). I’m listing it here so you don’t sound like an idiot by calling it “password”. Geeze!

mv

mv means “Move”, dammit! Not you, just your files. To mv something, type “mv Where/It/Is Where/You/Want/It”. Note, you can use this to rename a file: just go to the directory that the file is in (“cd Into/DirectoryName”) and type “mv oldname.txt newname.txt”. Really cool!

cp

cp means “CoPy”, and it works just like mv except the first parameter is where the shit you want to copy is now and the second parameter is where you want it to be copied to. *It will not move, delete or alter the original stuff!* So fear not.

rm

rm means “ReMove”, which means DELETE, so BE CAREFUL HERE. Use it like so: “rm toDelete.txt”. Bam! GONE. No gettin’ ‘er back. (It’ll happily take several files at once, and with “-r” it will even delete whole directories, so be extra careful with that one.)

mkdir

mkdir means what you’d expect, “MaKe DIRectory”. (Kidding, I would NOT have expected that, but it seems logical enough.) It makes a folder. Use: “mkdir NewFolderName”. Done. Created.

open

open “opens” a file, just as if you’d double-clicked it in Finder. Use: “open filename”. Cool.

sudo

sudo (pronounced su-du) means “SuperUser Do”. It means: do this thing as the Root User of the computer. It gives Terminal permission to do necessary things at times, and scary things at other times. I would recommend against using it unless you know (at least vaguely) what you are trying to do and you don’t have the computer’s permission to do it without saying you’re the SuperUser (heh). It will ask you to type in your password.

That’s enough for now. Go forth and type!

Installing Emacs 24.3 on Mac OS X Mountain Lion (10.8) — ERRORS!

Good morning! I just installed Emacs 24.3 on a fairly clean install of Mac OS 10.8.5, and I came across (and remedied) several errors that someone else might benefit from knowing about. I may be one of those benefiters next time I have to install emacs, so this is not an entirely self-less post. ANNND, I enjoy writing about little tech triumphs that the non-technical could appreciate, so here goes:

Download emacs-24.3.tar.gz from

Sound terrifying? It is. Please note, this is likely the mirror closest to me (Brooklyn, NY), yours may differ. To find a mirror or get to this page, begin here:
Once you’ve downloaded the gzipped file, double-click it in your downloads folder. (HA! I love telling people to double-click.) Great job! Next up, open the file called INSTALL (this is a text file that contains instructions — peruse it so you know I’m not lying as I tell you what to do!).


(I moved this to its own post, as it’s a handy reference and should stand alone.) Back to our regularly scheduled program:

Begin Configurin’

Open a terminal window and cd to where you downloaded emacs. (NOTE: ermm… I maybe shoulda moved that emacs junk out of my Downloads folder before installing, but I didn’t. Feel free to comment and correct me! Thanks!) For me this was my “Downloads” folder, so I typed “cd Downloads/emacs-24.3”. From there we’re going to run the configure script that comes with emacs. To do so, merely type “./configure”. A whole bunch of text will run down the screen like a waterfall of characters. It’s great. At the end, if you’re me, you’ll get some warnings that are inconvenient. [NOTE: do look at the INSTALL notes, especially if you get ERRORS or weird warnings! They say to peruse the warnings and fix the following: “wrong CPU and operating system names, wrong places for headers or libraries, missing libraries that you know are installed on your system, etc.” Mine were merely missing image support libraries, as I’ll explain below.] If you have minor warnings like mine, you might be able to ignore them and carry on with the build anyway, but that’s not what I did. Instead, I dealt with them:

Dealing with the Inconvenient Warnings about non-existent Image Libraries

As you’ll see in the INSTALL text that I told you to open first, the clever builders of emacs knew I might run into these problems and included a section on “Image Support Libraries”. There they link to the three libraries that I was missing (as listed in the warnings after running “./configure”). I was missing libjpeg, libgif and libtiff, listed in that order. I therefore went to,, and in that order to fetch and install the necessary libraries. These all work the same way as emacs, though some are more explicitly instructioned than others. What you do:

  • Download stuff and double-click to unzip (heh!).
  • Navigate to that folder in your downloads folder (“cd Downloads/jpeg-9”, etc.)
  • Run “./configure”
  • Check for errors and warnings (TBC below)
  • If no errors, run “make”
  • If no errors, run “make install”
  • You should be done

IF YOU RUN INTO ERRORS, AS I DID: start googling. HA. No, it’s true. But if you installed jpeg-9 and giflib-4.2.3, and then hit a snag on tiff-4.0.3 that references an error in the file “/usr/local/include/jmorecfg.h” on line 263, expecting a “}” to end the “{“, like I did, then you’re in luck. See below:

Remedying the error in “/usr/local/include/jmorecfg.h”

It seems the problem here is that jpeg-9 assumes that TRUE & FALSE aren’t defined whenever there isn’t a boolean type defined, so you need to add more #if clauses. Replace the problem line (#263 in my case), which reads:

typedef enum { FALSE = 0, TRUE = 1 } boolean;

with:

#ifndef TRUE
#ifndef FALSE
typedef enum { FALSE = 0, TRUE = 1 } boolean;
#else
typedef int boolean;
#endif
#else
typedef int boolean;
#endif

Mega thanks to pingemi, whose post I followed here: This fixed the problem for me. Then try the configure; make; make install again for tiff. If this works, then:

Image Libraries Installed, time to “MAKE” emacs!

Once you’ve got that stuff ready, run “./configure” again and it should show no warnings. Then run “make” and, if there are no errors again, try “src/emacs -Q”, which runs your freshly built emacs without any init files. If something opens with a starting window, you win! Now go back to terminal and run “make install”. Once that’s done you can run “make clean” to tighten up the emacs install folder (which you may want to keep around for debugging). You’re done! Great job! Now good luck figuring out how to use it… (I have been directed to Emacs Prelude to get other configuration set up, but I’ve not yet done it, nor have I the faintest idea how to use emacs — heh.)

That’s all for today, folks! Hope you learned SOMETHING. I learned: a little more about running commands in bash, and that those little black icons in Finder represent bash executable files such as configure; not to be afraid to dig into some source code when there’s an error (generally inadvisable, but in this case there were no really terrifying consequences); that I’m WAAAAYYY more confident dicking around in Terminal than I used to be — I understood what the instructions were saying and wasn’t afraid to type them; and a tiny eentsy bit of C code.