I’ve been a big podcast listener for several years. Here’s roughly the current list of podcasts I subscribe to, organized by how vehemently I recommend them.

Everyone Must Listen To

These are so good that it’s not worth explaining why; just listen.

I Recommend

  • Planet Money 🔗
  • Tim Harford 🔗
    • 50 Things That Made the Modern Economy 🔗
    • Pop-Up Ideas 🔗
  • Flash Forward 🔗
  • BBC Analysis 🔗
  • TED Radio Hour 🔗
  • EconTalk 🔗
  • Embedded 🔗
  • BBC World Service Documentaries 🔗
    • It’s downright humbling to realize how diverse the world is.
  • BBC Seriously… 🔗
    • This one gets extra credit for being so sonically interesting.
  • Seminars about Long Term Thinking – The Long Now Foundation 🔗

I also listen to

Which is a recommendation in itself, just less strongly than the above.

  • ProPublica 🔗
  • C-Span After Words 🔗
  • NPR Story of the Day 🔗
  • Codebreaker 🔗
  • Intelligence Squared 🔗
  • The Infinite Monkey Cage 🔗
  • Reply All 🔗

Honorable Mention

I don’t really listen to these, but that’s no fault of theirs. They are worth checking out.

  • Hardcore History with Dan Carlin 🔗
  • The Joe Rogan Experience 🔗
  • Song Exploder 🔗
  • Democracy Now! 🔗
    • These guys do great journalism. I’ve contributed to them. I just can’t spare an hour a day on the daily news cycle.
  • Death, Sex & Money 🔗

I love the CockroachDB logo

I know nothing about design, but this is a great logo. The two circular arcs that make up the body and antennae create a partial Venn diagram, referencing the set theory and relational algebra that form the theoretical foundation of this and any relational database. The shape on the back of the cockroach evokes a funnel, the universal symbol for filtering: a fundamental database operation.

Cartoon characters would be so good at computers

Because they have 8 fingers, which suggests base-8: octal. That’s a power of two, so it would translate very easily into the binary that computers use.
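The correspondence is easy to check with printf, since each octal digit maps to exactly three bits:

```shell
# 42 in binary is 101010; grouping the bits in threes (101 010)
# gives the octal digits 5 and 2.
printf '%o\n' 42    # prints 52
```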

On a related note, 8 bits is a byte and 4 bits is a nibble. Which is all it takes for an animal to bite off one of your fingers and leave you with a nibble on that hand.

Git freebase

I’ve considered both rebase- and merge-based workflows for my projects, and I’ve come up with an alternative I’d like to propose as an enhancement to git.

I propose a command with the following behavior: attempt a rebase, and if any commit hits a conflict, abort the rebase and fall back to a merge.

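A minimal sketch of that behavior as a shell function (hypothetical: freebase is not a real git command, and the function name and error handling here are my own):

```shell
# git-freebase <upstream>: rebase if it goes cleanly; otherwise
# abort and fall back to a merge so the conflict is recorded.
git-freebase() {
  upstream=$1
  if git rebase "$upstream"; then
    return 0                  # clean rebase: history stays linear
  fi
  git rebase --abort          # undo the half-finished rebase
  git merge "$upstream"       # the conflict surfaces as a merge commit
}
```

Run it from the branch being brought up to date, e.g. `git-freebase main`.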
This has the following benefits:

  • It results in a clean history whenever possible
  • It highlights conflicts better than merging or rebasing

Traditional techniques in git are terrible at documenting conflicts. Conflicts are not easy to deal with. By their nature, they are encountered by only half of the people responsible for them. A prudent team should always review conflicts. In the best case, the conflict was preventable and the instigator needs to learn how to avoid creating conflicts going forward, e.g. by pulling more frequently, formatting frequently edited constants across multiple lines, or picking a random position when inserting new cases into frequently edited switch statements. In the typical case, both parties to a conflict should at least review the resolution.

A typical rebase completely hides conflicts, except when a user is diligent enough to document them in the commit message, although even in that case they will hardly pop out. It’s not even totally obvious where a rebase, successful or not, has happened. You have to notice that a commit has two different timestamps for when it was committed versus authored, and even then it might have been because it was cherry-picked.

A merge is almost as bad at documenting conflicts. gitk doesn’t show the changes introduced by a merge commit. This is bad news, because it allows totally new changes to be hidden in merge commits.

This technique serves to highlight conflicts in history. Any divergence followed by a merge was a conflict, and it sticks out like a sore thumb. Relative to a merge-only workflow, you still have an easy-to-follow, mostly linear history.

This strategy is also optimal in the rare but possible case in which a rebase encounters a conflict that a merge would not have. This happens when a conflicting change exists in an intermediate commit on one branch, but a subsequent commit leaves the tip of the branch in a state that doesn’t conflict. It should be clarified, then, that a merge happens any time there is a rebasing conflict; it does not mean the conflict had to be resolved manually. Such merges will show up as false positives for truly bad conflicts, but I believe this is still the best that can be hoped for.
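That edge case can be reproduced in a toy repository (file and branch names here are made up for illustration; assumes a git recent enough to support `init -b`):

```shell
# Construct the edge case: the tip of `topic` merges cleanly into
# `main`, but replaying topic's commits one at a time (a rebase)
# conflicts on the intermediate commit.
demo=$(mktemp -d) && cd "$demo"
git init -q -b main repo && cd repo
git config user.email you@example.com && git config user.name demo

echo hello > f.txt
git add f.txt && git commit -qm 'base'

git checkout -qb topic
echo world > f.txt && git commit -qam 'edit f'     # conflicts with main's edit
echo hello > f.txt && git commit -qam 'restore f'  # tip matches base again

git checkout -q main
echo HELLO > f.txt && git commit -qam 'upstream edit'

git merge -q --no-edit topic && echo 'merge: clean'
git reset -q --hard HEAD^    # undo the merge and try a rebase instead

git checkout -q topic
git rebase main || { echo 'rebase: conflict'; git rebase --abort; }
```

The merge succeeds because three-way merging compares only the branch tips against their common ancestor, while the rebase replays the intermediate commit that does conflict.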

Freebase could be implemented as an option to rebase. If it were implemented as a separate git command, or for those who would prefer to alias it, I propose the name git freebase: it is similar to rebase, but it frees the user from the fear of poorly resolved conflicts hidden in history.

Note: this author does not condone (nor condemn) the use of drugs.

Cleaning up after a twitter hack, with Unix

My twitter account got hacked. I needed a way to bulk-unfollow the 700 accounts I was now following. I installed the command-line twitter client twidge and used a little shell-fu to unfollow 20 accounts at a time.
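Something along these lines, using twidge’s lsfollowing and unfollow subcommands (a reconstruction; the exact invocation and twidge’s output format may differ by version):

```shell
# List everyone I follow, take the first 20, and unfollow each one.
unfollow_batch() {
  twidge lsfollowing | head -n 20 | xargs -n 1 twidge unfollow
}
```

Re-running `unfollow_batch` (or wrapping the pipeline in `watch -n 60 '…'`) repeats the batch until the list is empty.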

I then looped it automatically with the watch command.

This unfollowed the newest accounts first, so I was able to catch it before it unfollowed the accounts I really care about.

Here’s how we created PurpleMarker and how you can too

  1. Learn from our mistakes by following the instructions below
  2. Install Drupal 7 via the easiest method available, i.e. Softaculous
    1. Mind Drupal’s helpful warnings to avoid leaving open security holes
  3. Install the CiviCRM module in Drupal
    1. Create a separate database for CiviCRM-specific data.
    2. Install dependencies as necessary: Chaos Tools, Views
  4. Install Sunlight integration (Drupal module)
    1. https://drupal.org/project/cd_sunlight
    2. Apply for a Sunlight Foundation API key
  5. Enable lookup of CiviCRM contacts by Congressional District
  6. Enable CiviCRM Mapping and Geocoding
  7. Import your Contacts
  8. Familiarize yourself with the immense capabilities of CiviCRM

Easy Budgeting Trick

The easiest way to save money is to trick yourself into thinking you have less than you do and to save the difference. Any time you think you have money, you’re liable to think you can spend it.

This trick is based on a quirk of the calendar. If you get paid every other week, it’s easy to assume that you get two paychecks per month and to base your budget on that. The truth is that there are 52 weeks in a year, and that assumption only accounts for 48. So twice a year you’ll have a month with 3 paychecks. If you can save those two entire checks, that puts you at a 7.7% savings rate (2 of your 26 checks), or even higher if you have retirement contributions or other savings automatically deducted.
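The arithmetic, for the skeptical (26 biweekly checks per year, budgeting as if there were only 24):

```shell
# Two of the 26 paychecks go unbudgeted; expressed as a savings rate:
awk 'BEGIN { printf "%.1f%%\n", 2 / 26 * 100 }'    # prints 7.7%
```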

Ideas for the Kitchen of the Future

Most of these ideas could be combined into a unit that would basically resemble ceiling-mounted track lighting. Each “light” would be able to aim itself and would contain some of the following modules.

  • Kinect-like cameras analyze volume of vessels and their contents
    • This camera could track your actions to a degree
  • Projectors project lines on inside of vessels to show you how much of an ingredient to add
  • Voice-controlled “assistant” (read: Siri) walks you through recipes, responds to:
    • “How much flour do I add?”
    • “What do I do next?”
    • “The chicken is in the oven. Set the timer”
    • Assists in timing multiple recipes to be completed at the same time.
  • UV light reveals unclean surfaces
  • UV light used to sanitize surfaces (maybe just when no one’s in the kitchen)
  • IR thermometer measures temperatures, which are then projected onto the cooking surface
  • Sensors smell for burning (I now realize this is basically a smoke detector but I think we could do better if we designed a new gadget around this use case.)
  • Camera to upload photos of food to social media for bragging rights.
  • Cameras to record video of cooking process for tutorials or to capture entertaining mistakes.

The nice thing about this track-lighting unit is that it’s relatively easy to retrofit into an existing kitchen and doesn’t require you to toss existing gadgets. A major flaw I’ve noticed in futurology is the temptation to assume that you can start from scratch. But the way things really work is that technology creeps forward by maintaining a degree of backwards compatibility; products that require large investments don’t succeed in the market. The modularity of this design is also nice because some technologies will reach marketability before others.

Besides this device, every surface would be a scale, especially stove-top burners and including oven racks.

Cooking appliances could be programmed with temperature-vs.-time graphs, downloadable from the internet, that allow different temperatures over the course of cooking and adapt to both the temperature of the food (as microwaves already do) and its weight.

I think the sort of assistance this provides lets the cook feel like they still have ownership of the cooking process, as opposed to more automation-based technologies that can rob one of that satisfaction. I’m reminded of the story of the boxed cake mix that failed when it only required the baker to add water, but succeeded when it required the addition of an egg (although for that I think you must also factor in distrust of the artificiality of a product that used to require perishable ingredients and is now made to be shelf-stable).

While image analysis to figure out which ingredients you are working with is a notoriously hard problem, I think it would be fairly easy to tell the system “I’m measuring the flour” and have it simply track the location of the measuring cup. If it can do this, it can track your progress through a recipe and warn you if you do things out of order or forget a step. This would be one of the hardest things to get right, because people hate being condescended to by technology.

Subscribe to feed in Digg Reader URL and bookmarklet

An easy way to subscribe to RSS feeds in a feed reader is to use a browser extension, such as the RSS Subscription Extension (by Google), to grab the embedded RSS feed from the page. Configuring this or other extensions requires a URL that normally looks like


Where %s gets replaced with the feed URL to be added.

Digg Reader doesn’t explicitly advertise its URL for this, but when you are adding a feed, it takes you to


So you can just use that when configuring the above extension at its settings page.

Here’s a bookmarklet if you prefer. Just drag it to your bookmark bar. You can also use it by navigating to the bookmarklet link from your address bar while on the page you want to subscribe to.

HT: Matt Cutts

Social News Ideas

Lately I’ve been using reddit, Hacker News, Digg, Slashdot and the app Zite to guide my web browsing. One thing I’ve noticed is how these sources use different algorithms to decide how to rank stories. Specifically reddit allows users to both upvote and downvote stories while most other places only allow upvoting.

The justification for not having downvotes, I believe, is to protect unpopular opinions. It’s not that the idea of downvotes came later; indeed, some subreddits have chosen to disable downvoting. It seems to me that there is good reason to protect the minority from the tyranny of the majority. At the same time, if I want to deny the existence of celebrities, I should be able to block them from my feed. Considering both of these concerns, I believe I’ve come up with a compromise.

Upvotes are global, downvotes are local

What does this mean? The function of upvotes would remain the same, but the function of downvotes would change. How can downvotes do anything if they don’t affect other users? I believe reddit could go at least two routes with this. One is to implement a Bayesian spam filter. When I downvote something, it would consider the words used in the content (probably, out of necessity, the comments of a post), so it would learn that I want to pretend I’ve never even heard of Miley Cyrus and that she’s a collective hallucination of people I hate.

The other route would be to use a personalization algorithm somewhat like Last.fm. Last.fm takes in a record of all the music you listen to, then matches you up with people who have similar taste. It then recommends to each of you the music that the other person listens to that you don’t. Reddit could do the same for content based on your upvotes and downvotes. This could even be used to show you content from subs you don’t subscribe to.

Killing reposts

Another idea I have to improve social news is to borrow one of the key features of RSS readers: de-duplication of content. Good jokes can cycle through reddit multiple times since, as the saying goes, “If I haven’t seen it, it’s news to me.” The solution would be for each user to have a record of what they click through, and to have all new content filtered against that record and Karma Decay, which detects reposts. This wouldn’t discourage reposts, since it wouldn’t automatically downvote them. In fact it would encourage them, since it would hide them from the people who would want to downvote them. This is actually great, since it would allow good content to make its way to new generations of users. You could even disable filtering of content you yourself upvoted, so that you can be reminded of it when it comes around again. I would call this feature, or the software that implemented it, Riposte.

Shared responsibility

This idea was suggested to me by a friend of mine. It addresses the issue that the “Knights of r/new” have (perhaps undue) influence over what makes it to the top of reddit, by filtering content when it is most vulnerable to being hidden via downvotes. The solution is to make browsing r/new a responsibility of all users by peppering new content randomly among the front page. I believe this issue may also have been addressed by hiding the vote count for young content, giving it at least a second chance before being hidden. This solution, though, addresses the selection bias created by the “Knights of r/new”.

Trial by jury

After reading Clay Shirky’s blog post A Group Is Its Own Worst Enemy, I was thinking about the issue of moderation among online communities.

Geoff Cohen has a great observation about this. He said “The likelihood that any unmoderated group will eventually get into a flame-war about whether or not to have a moderator approaches one as time increases.”

I think, though, that this creates a false dichotomy. I believe you can have moderation without moderators. I draw on my anarchist leanings here: you can have society without leaders if you have real democracy. By democracy in this case, I’m not suggesting that all users vote on every decision of moderation; I think that is impractical. I’m suggesting demarchy. It could be a modified demarchy where user selection isn’t truly random. I think it would be a good idea to ensure that each trial by jury includes jurors with a substantial amount of seniority. We also might want to throw in some “professional” jurors, by which I mean some that are self-selected. Whether consensus or a certain majority is necessary is up for debate, and may depend on the severity of the infraction in question.